The Rotation and Other Properties of Comet 49P/Arend-Rigaux, 1984 - 2012

We analyzed images of comet 49P/Arend-Rigaux on 33 nights between 2012 January and May and obtained R-band lightcurves of the nucleus. Through phasing of the data we found a double-peaked lightcurve having a synodic rotation period of 13.450 +/- 0.005 hr. Similarly, phase dispersion minimization and the Lomb-Scargle method both revealed rotation periods of 13.452 hr. Throughout the 2011/12 apparition, the rotation period was found to increase by a small amount, consistent with a retrograde rotation of the nucleus. We also reanalyzed the publicly available data from the 1984/85 apparition by applying the same techniques, finding a rotation period of 13.45 +/- 0.01 hr. Based on these findings we show that the change in rotation period is less than 14 seconds per apparition. Furthermore, the amplitudes of the lightcurves from the two apparitions are comparable, to within reasonable errors, even though the viewing geometries differ, implying that we are seeing the comet at a similar sub-Earth latitude. We detected the presence of a short-term jet-like feature in 2012 March which appears to have been created by a short-duration burst of activity on March 15. Production rates obtained in 2004/05, along with reanalysis of previous results from 1984/85, imply a strong seasonal effect and a very steep fall-off after perihelion. This, in turn, implies that a single source region dominates activity, rather than leakage from the entire nucleus.

INTRODUCTION

Comets spend most of their lifetimes in the cold outer Solar System and are therefore believed to be largely unchanged since the era of planetary formation (e.g., Mumma et al. 1993; Dones et al. 2004). This makes them ideal tools for studying the early conditions of the Solar System as well as properties of the protoplanetary disk. Furthermore, their physical properties must be explained by any unified theory of the evolution of the Solar System, and thus they are valuable for testing such theories. For a complete understanding of comets, one requires knowledge of the orbital path, rotational period, and activity, as these properties are all closely linked. In order to correctly interpret observations of the coma and determine nuclear activity across the surface, for example, it is essential to know the rotation period of the cometary nucleus (e.g., Samarasinha et al. 2004). Furthermore, the rotation period can help infer information about the comet's internal structure, as has been confirmed by occasional spacecraft visits (e.g., Barucci et al. 2011). The rotation period of a comet can be determined by analyzing its lightcurve. The necessary photometric measurements, however, are often difficult to obtain due to contributions from the cometary coma, which scatters light and thus overwhelms the light reflected off the nucleus. As a result, ground-based observations of cometary nuclei are limited to comets at large heliocentric distances or to comets which are relatively 'anemic' in their production of a coma. The latter means that observations can take place when the comet is close to the Earth, and small photometric apertures can be used to reduce the coma contribution. The first nucleus lightcurves of an anemic comet were obtained in 1984 of comet 28P/Neujmin 1 (A'Hearn et al. 1984a; Campins et al. 1987).
Observations were carried out in both the optical and the thermal IR; however, without complete lightcurves in either of these wavebands they were unable to unambiguously state the cause of the variations in brightness. Soon thereafter, the anemic comet 49P/Arend-Rigaux was observed. As was done for comet 28P/Neujmin 1, Millis et al. (1988) observed comet 49P/Arend-Rigaux in the optical as well as in the thermal IR. In the case of 49P/Arend-Rigaux, the observations in the thermal IR were sufficient to allow them to conclude that the variations in optical brightness were due to the shape of the nucleus as opposed to changes in albedo across the surface. The thermal IR data also confirmed the shape of the nucleus to be that of a near triaxial ellipsoid with dimensions of 13 × 8 × 8 km, resulting in a double-peaked lightcurve. The thermal IR measurements of both of these objects revealed that comet nuclei have extremely low albedos before this was 'discovered' by the 1986 spacecraft visit to 1P/Halley (Keller et al. 1986). Comet 49P/Arend-Rigaux was observed and analyzed independently by three groups during its 1984/85 apparition: Jewitt & Meech (1985), Millis et al. (1988), and Wisniewski et al. (1986). All three groups concluded different values for the nucleus rotation period and, although all values were simple ratios of one another, no individual set of data had sufficient observations to allow for high levels of precision or the complete removal of aliases. By combining the optical data from these three independent groups, we increased the total number of nights of observations and thus were able to obtain a more precise value of the rotation period during the 1984/85 apparition. The calculated value was compared to the rotation period obtained for the most recent 2011/12 apparition, which reached perihelion on 2011 October 19.1. During this apparition we observed comet 49P/Arend-Rigaux from 2012 January until May, obtaining images in the broadband R filter to measure the nucleus lightcurve. Further observational data were collected during the 2004 apparition; however, the data were of poor quality and proved to be unusable for constraining the rotation period. The direct comparison between the 1985 and 2012 data enabled us to determine whether there was a significant change in rotation period between the two apparitions. Computational models suggest that changes in rotation periods should be common; nonetheless, this has only been conclusively demonstrated in a small number of comets, e.g., Comet Levy (Feldman et al. 1992; Schleicher et al. 1991), 10P/Tempel 2 (Mueller & Ferrin 1996; Knight et al. 2012; Schleicher et al. 2013), 2P/Encke (Fernández et al. 2005), 9P/Tempel 1 (Chesley et al. 2013), 103P/Hartley 2 (e.g., Samarasinha et al. 2011), and 41P/Tuttle-Giacobini-Kresak (Bodewits et al. 2017). Furthermore, recent observations from the Rosetta spacecraft showed that the rotation period of 67P/Churyumov-Gerasimenko changed throughout the orbit, with an increase in rotation period of 0.2% as it approached perihelion followed by a rapid decrease of 1% as it moved further away (e.g., Keller et al. 2015; Jorda et al. 2016). The lack of more observational evidence for changes in rotation period is assumed to be largely due to the lack of high quality data for multiple apparitions of the same comet. The most common cause for changes in the rotation period of a comet is believed to be asymmetric outgassing resulting in torquing (e.g., Samarasinha et al. 2004).
This suggests that comets with a smaller nucleus are more prone to changes in rotation period and thus that the large nucleus of comet 49P/Arend-Rigaux is unlikely to undergo rapid changes. Furthermore, the 2012 observations showed very low levels of outgassing and, despite efforts to enhance the images (e.g., Schleicher & Farnham 2004), we were unable to detect any morphological evidence of dust jets which could affect the rotation period. Nonetheless, the stacking of nightly images revealed the presence of a short-term jet-like feature in 2012 March, which will be discussed later. The comet was too faint for our standard narrowband imaging and therefore it is unknown if any gas jets exist. Likewise, the comet was too faint for our standard photoelectric photometer observations in 2011/12; however, we were able to obtain data during the 2004/05 apparition using a photoelectric photometer with narrowband comet filters. Similar observations were carried out during the 1984/85 apparition (Millis et al. 1988), thus allowing us to compute and intercompare production rates and abundance ratios of a number of gas species from these two apparitions. The layout of the paper is as follows. A summary of the observations and reductions of the 2012 imaging data is found in Section 2, followed by an in-depth analysis of the lightcurve in Section 3. Section 4 explains and analyzes a number of properties of the coma, and the final section provides an overall summary and discussion of all the results.

Observing Overview

Useful images of comet 49P/Arend-Rigaux were obtained on a total of 33 nights between 2012 January and May with sampling at monthly intervals (Table 1). Observations were obtained at the Lowell Observatory Hall 1.1 m telescope with the e2v CCD231-84. On-chip 2 × 2 binning produced images with a pixel scale of 0.740 arcseconds pixel^-1. On-chip 3 × 3 binning was used for observations in May, producing images with a pixel scale of 1.11 arcseconds pixel^-1. The images obtained with the 1.1 m telescope were guided at the comet's rate of motion, with the exception of the data collected in May, which were trailed at half the comet's rate, resulting in equal trailing of the stars and the comet. Additional observations were obtained with the 0.8 m telescope, also at Lowell Observatory, with the e2v CCD42-40. On-chip 2 × 2 binning produced images with a pixel scale of 0.456 arcseconds pixel^-1. The images obtained with the 0.8 m telescope were guided at the sidereal rate, with the exception of the first three nights in January, which were tracked at the comet's ephemeris rate. Broadband R filters were used for all observations except those carried out in May, which used the VR filter (about twice as wide as a standard R filter) in order to improve the signal-to-noise. Exposure times prior to 2012 March 21 were typically 120 seconds; exposure times thereafter were always 300 seconds. The variety of techniques used for these observations is due to individual observing runs being carried out by different observers.

Absolute Calibrations

The data were reduced using standard techniques in IDL to remove bias and apply flat fields. Landolt standard stars (Landolt 2009) were observed to determine the instrumental magnitude and extinction coefficients on 2012 January 25 and 26 for the 0.8 m and 1.1 m telescopes respectively (although Table 1 shows these nights as "cirrus", only parts of these nights had cirrus and it was photometric when the standard stars were observed).
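The calibration just described reduces to a nightly zero-point plus a first-order extinction term applied to each instrumental magnitude. Below is a minimal sketch of that correction; the sign convention, function name, and numerical coefficients are illustrative assumptions, not the values derived from the Landolt standards.

```python
import numpy as np

def calibrate(m_inst, airmass, zero_point, k_ext):
    """Reduce instrumental magnitudes to the standard system with a
    nightly zero-point and a first-order extinction term:
        m_cal = m_inst - zero_point - k_ext * airmass
    On non-photometric nights, typical coefficients stand in for
    measured ones; cloud offsets are handled separately."""
    return np.asarray(m_inst) - zero_point - k_ext * np.asarray(airmass)

# Illustrative placeholder values only:
print(calibrate([18.92, 18.95, 19.01], [1.15, 1.32, 1.58],
                zero_point=2.45, k_ext=0.10))
```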
Standard stars were not observed on other nights, so typical zero-point and extinction coefficients were used. The application of absolute calibrations on non-photometric nights provided first-order corrections for airmass, which allowed us to determine the additional offsets necessary due to clouds. We confirmed that these produced reasonable calibrations by spot-checking selected dates against UCAC5 R-filter catalog values (Zacharias et al. 2017) for a number of field stars, finding typical photometric accuracies of <0.1 mag. As will be discussed in Section 2.6, additional small nightly offsets were applied in order to align the lightcurves. We are not aware of any calibration fields for the VR filter (used in May) so have treated these images like R-band images. This likely resulted in a small (<0.05 mag) tilt to fainter magnitudes at high airmass due to a different extinction correction and larger (0.1-0.3 mag) absolute calibration offsets to brighter magnitudes, as reflected in the Δm2 values given in Table 1. As a result, the May data were used only for period determination, since this was unaffected by the calibration issues.

Comet Measurements

The flux of the comet was determined by centroiding on the nucleus and integrating inside circular apertures. Similarly, the median sky flux was calculated in an annulus centered on the nucleus, with a radius large enough to avoid coma contamination. Apertures from 3 to 30 pixels in radius were used for the comet, allowing us to monitor for passing stars, which showed up in the larger apertures first. The aperture with the most coherent lightcurve was independently determined for each night (given in column 13 of Table 1), on the basis that it had to be large enough to include as much light from the nucleus as possible but small enough to avoid contamination from passing stars. This depended on a variety of factors, including trailing and seeing, as well as how crowded the field was, but was generally ~3 arcseconds, i.e., around twice the FWHM.

Comparison Star Correction

Following the methodology of earlier papers, we conducted photometry on seven field stars per night, allowing us to correct for transparency variations or changes in the sensitivity of the equipment. The magnitude of each field star was tracked throughout the night using the same range of apertures as used for the comet. As the field stars are expected to maintain a constant brightness in good weather conditions, any deviation from their least obscured brightness suggests a change in observing conditions. The fourth brightest measurement for each star was used as the least obscured brightness because it is statistically unlikely for more than four frames to be affected by random fluctuations such as cosmic rays and bad pixels, which might bias occasional measurements too high. The corrections necessary to bring fainter measurements into agreement with this value were determined for each frame, and the median offset of the seven field stars was calculated to give a correction value for each image. These magnitude corrections were then applied to the comet's magnitude in that image, yielding a corrected lightcurve, as shown in Figure 1. This method is based on the assumption that the conditions were photometric at least once during the night. If this was not the case, then the night in question will be systematically fainter and will have a correspondingly smaller (more negative) Δm2, as tabulated in column 11 of Table 1.
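The comparison-star correction lends itself to a compact implementation. A minimal sketch follows, assuming the field-star magnitudes are arranged as an (n_stars, n_frames) array; the array layout and function name are our own, not from the reduction pipeline itself.

```python
import numpy as np

def comparison_star_corrections(star_mags):
    """Per-frame transparency corrections from field stars.

    star_mags : (n_stars, n_frames) array of magnitudes.
    The 4th brightest (numerically 4th smallest) measurement of each
    star is taken as its least obscured brightness; each frame's
    correction is the median, over the stars, of the offset from
    that reference.
    """
    star_mags = np.asarray(star_mags, dtype=float)
    reference = np.sort(star_mags, axis=1)[:, 3]   # 4th brightest per star
    offsets = star_mags - reference[:, None]       # >0 when frame is fainter
    return np.median(offsets, axis=0)              # one value per frame

# Corrected comet lightcurve (brightens cloud-dimmed frames):
# m_comet_corrected = m_comet - comparison_star_corrections(star_mags)
```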
As discussed in Section 2.6, all Δm2 values were within 0.12 mag of 0.0 except for nights with known reasons (outburst, different filter), confirming that the comparison star correction technique worked as expected.

Notes to Table 1 (continued): (f) Magnitude necessary to correct for changes in the geometry (Δm1 = -5 log10(r_h Δ) - αβ). (g) Offset (in magnitudes) necessary to make data on all nights peak at the same magnitude, after correcting for geometry. (h) Average uncertainty calculated for the night. (i) Radius of the aperture used to extract the lightcurves (see Section 2.3).

Coma Contamination

Comet 49P/Arend-Rigaux was already known to be one of the least active periodic comets (e.g., Jewitt & Meech 1985); nonetheless, there was some evidence of coma contamination throughout this apparition. Nightly observations were stacked and median combined into a single image in order to enhance the coma (see Figure 2). This revealed a persistent tail oriented due west from January through February, even though the position angle of the Sun changed from 102° to 60° over the same time span (column 7 of Table 1). The most likely explanation for this is that the dust was released near perihelion, when the dust activity was at its greatest (see Section 4.2). Even though the rotation period can be obtained without removal of the coma, as will be discussed, in order to accurately determine the amplitude (peak-to-trough variation) of the lightcurve and to compare results to those of other authors, the extent of the coma contribution was considered. The coma contribution was calculated following the commonly used coma removal method of, e.g., Millis et al. (1988) for images tracked at the comet's rate. This method is based on the assumption that the dust grains move out from the nucleus isotropically at a constant velocity and thus that the coma flux per pixel decreases as 1/ρ, where ρ is the radial distance from the nucleus. Conversely, the area of equally spaced annuli increases as ρ, and therefore the total coma flux in each annulus should be constant. Even though many factors can influence the validity of this assumption, such as contamination from field stars or cosmic rays, it has been shown that a linear fit to the total annular flux as a function of radial distance gives a good first-order approximation for the coma contribution. The total flux was calculated in 3 pixel wide annuli centered on the nucleus ranging from 3 to 30 pixels (i.e., 3-6 pixels, 6-9 pixels, ..., 27-30 pixels). These values were used to create a radial profile (total flux in annulus as a function of ρ) for each image over the course of a night (solid grey lines in Figure 3). A straight line was fit to the total annular flux as a function of radial distance for radii chosen to begin beyond significant nucleus signal. This threshold value varied with pixel scale and seeing. When determining the coma contribution, frames with obvious star contamination were omitted on the basis of having a considerably higher than average flux at large ρ (dashed blue lines in Figure 3). Stars at large radial distances would have resulted in an underestimate of the coma whilst stars at small radial distances would have resulted in an overestimate of the coma. Even though the stars with large contamination were removed, fainter stars will still be present. Nonetheless, the median combination of a large number of images minimizes their effect and produces an excellent fit to the coma, as shown in Figure 3.
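A minimal sketch of the radial-profile method just described: flux is summed in 3 pixel wide annuli from 3 to 30 pixels and a straight line is fit to the total annular flux beyond the nucleus-dominated region (for a pure 1/ρ coma the annular flux would be constant with ρ). The centroid arguments, the 13.5 pixel threshold (taken from the Figure 3 example night), and the function signature are illustrative.

```python
import numpy as np

def coma_profile_fit(image, xc, yc, width=3, r_max=30, fit_min=13.5):
    """Total flux in 3 pixel wide annuli centered on (xc, yc) and a
    linear fit to the annular flux beyond significant nucleus signal;
    the fit approximates the coma contribution per annulus."""
    y, x = np.indices(image.shape)
    rho = np.hypot(x - xc, y - yc)
    edges = np.arange(width, r_max + width, width)   # 3, 6, ..., 30
    mids = 0.5 * (edges[:-1] + edges[1:])
    flux = np.array([image[(rho >= lo) & (rho < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
    use = mids > fit_min              # beyond the nucleus-dominated region
    slope, intercept = np.polyfit(mids[use], flux[use], 1)
    return mids, flux, slope, intercept
```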
The nucleus flux was estimated for all nights that were guided at the comet's rate by subtracting the modeled flux of the coma from the integrated flux within the photometric aperture for each frame. The nucleus and coma fluxes were then compared for all frames of a night to estimate the coma contamination. This revealed that the coma contribution on nights with good seeing steadily decreased from 20% to 13% between January and May, as would be expected as the comet moved further away from the Sun. This result confirms that our photometry is dominated by nucleus signal. Nonetheless, nights with worse seeing conditions yield a lower percentage of nucleus flux because the broader PSF resulted in more nucleus signal falling outside of our photometric aperture, whilst the coma signal was calculated across larger apertures, making it relatively impervious to seeing. In order to determine whether the coma contribution varied over the course of a night, we implemented the same methodology as described above for 30 minute time intervals. Although there were changes in the coma contribution during the course of the nights, there was not an obvious pattern in the variations. Due to the lack of evidence that the coma flux changed as a function of rotational phase, we decided not to remove the coma contribution. Furthermore, coma removal would introduce additional errors and would not likely improve our period determination.

Geometric Correction

As observations took place over several months, geometric effects had to be taken into account. The absolute magnitude (magnitude reduced to unit heliocentric and geocentric distances at zero solar phase angle), M, was found using standard asteroidal normalization (e.g., Jewitt 1991):

M = m_R - 5 log10(r_h Δ) - αβ,    (1)

where m_R is the apparent magnitude (corrected for extinction and comparison stars as described in Sections 2.2 and 2.4), r_h and Δ are the heliocentric and geocentric distances in AU respectively, α is the solar phase angle (Sun-comet-observer), and β is the linear phase coefficient. The linear phase coefficient for comet nuclei has values of 0.025 to 0.083 mag deg^-1 (Snodgrass et al. 2011); a value of β = 0.04 mag deg^-1 was adopted throughout. Equation 1 corrects for the geometric variation in brightness and brings all the lightcurves to a similar scale. The geometric corrections (Δm1 in column 10 of Table 1) were calculated at the midpoint of each night's observations, as the differences between the mid-points and the extremes on each night were usually comparable to the statistical uncertainty. The geometric correction alone, however, was not sufficient to bring all the lightcurves from different nights to the same peak brightness. This could be due to a number of reasons, such as our field star calibration being based on the assumption that each night was photometric at some point, or changing levels of the comet's activity. Furthermore, the absolute calibrations assumed typical values for all but two nights. An additional adjustment was introduced in order to bring all the lightcurves to the same peak and is given as Δm2 in column 11 of Table 1. This is simply a scaling factor to aid comparison of lightcurves and does not have physical significance. These corrections were always within 0.12 magnitudes of 0.0 except for March 16-22, when the brightness was enhanced following an outburst (discussed in Section 4.1), and May 12-14, when the VR filter was used and we could not perform absolute calibrations. This additional correction factor also accounts for the difference in magnitudes due to nightly variations in the aperture radius. Whilst this variation could also have been minimized by using the same physical aperture size at the comet, we decided against doing this due to the generally worse seeing conditions at the 0.8 m telescope; the worse seeing would have required us to use a larger than optimum aperture on the 1.1 m telescope images, thus degrading those data. The extent of this effect is described in Section 2.5. Whilst the additional correction factors and geometric correction helped to align the lightcurves in order to bring them to the same peak magnitude, they did not affect the times of the peaks and thus did not affect the rotational phasing. The data were, however, corrected for the time it took light to travel between the comet and us (column 9 in Table 1); due to the change in Sun-comet-Earth geometry, this time differed by 0.13 hr over the course of our observations. The reduced magnitudes (m_R*), as given in Table 2, have had absolute calibration, geometry, field star, and light travel time corrections, as well as Δm2 offsets, applied.
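Equation 1 translates directly into code. A minimal sketch, with illustrative (not tabulated) geometry values:

```python
import numpy as np

BETA = 0.04  # adopted linear phase coefficient, mag/deg

def reduced_magnitude(m_r, r_h, delta, alpha, beta=BETA):
    """Equation 1: M = m_R - 5 log10(r_h * delta) - alpha * beta,
    with r_h and delta in AU and alpha in degrees."""
    return m_r - 5.0 * np.log10(r_h * delta) - alpha * beta

# Illustrative values only, not an entry from Table 1:
print(reduced_magnitude(16.5, r_h=1.9, delta=1.1, alpha=15.0))
```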
Further Corrections and Uncertainties

By close examination of the lightcurves, in conjunction with iterating through the nightly images, we identified frames which were contaminated by field stars, cosmic rays, or tracking problems; these were discarded from the data set. A plot of the nightly field star correction values for each frame also helped to identify images with significant extinction due to clouds. Frames where the correction was larger than 0.5 magnitudes were individually reassessed in order to determine their utility, with the result that only a small number of frames were discarded. Photometric uncertainties were calculated from photon statistics. These uncertainties are not shown on the lightcurve plots as they were typically smaller than the data points; however, the average uncertainty for each night is given in column 12 of Table 1. Uncertainties due to coma, absolute calibrations, etc. were not formally estimated but are likely at least as large as the statistical uncertainties. The coherent shapes of our lightcurves suggest that such effects are minimal and can be safely ignored.

Figure 3. An example of a radial profile on 2012 January 30. Each curve represents the total annular flux in a 3 pixel wide annulus from images throughout the night. The solid grey curves are the frames which were used to calculate the nightly median coma profile (red solid line) whilst the dotted blue curves represent the frames which were disregarded due to star contamination. In this particular case, the coma was fit for annuli greater than 13.5 pixels in radius (the vertical line) and the nucleus lightcurve was extracted using an aperture radius of 8 pixels.

Rotation Period of the 2011/12 Apparition

The combined thermal IR and optical data from Millis et al. (1988) showed the shape of the comet to be approximately that of a triaxial ellipsoid, and therefore we expected a double-peaked lightcurve. In order to derive the period of the brightness variation, and thus the rotation period of the nucleus, we superimposed all the lightcurves from different nights with the data phased to a 'trial' period and zero phase set at perihelion (2011 October 19.1). This was possible as the geometry of the system did not change considerably throughout the apparition (see Section 3.2).
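Phasing the data to a trial period is a simple fold; a minimal sketch, assuming light-travel-time-corrected Julian Dates and taking JD 2455853.6 for the 2011 October 19.1 perihelion:

```python
import numpy as np

T0_JD = 2455853.6  # perihelion, 2011 October 19.1 UT

def phase_fold(jd, period_hr, t0_jd=T0_JD):
    """Rotational phase of each observation for a trial period,
    with zero phase set at perihelion."""
    return ((np.asarray(jd) - t0_jd) / (period_hr / 24.0)) % 1.0

# Overplotting nightly lightcurves against phase_fold(jd, 13.45)
# gives the 'better-or-worse' comparison described below.
```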
By adjusting the trial period with a slider in Python, we could easily scan through potential rotation periods in real time and make rapid 'better-or-worse' comparisons.

Notes to Table 2: (a) UT date of observations in 2012; data acquired with the 1.1 m telescope are denoted with an *, while all other data were acquired with the 0.8 m telescope. (b) UT at midpoint of the exposure (uncorrected for light travel time). (c) Observed R-band magnitude (after applying absolute calibrations, extinction corrections, and comparison star corrections). (d) m_R(1,1,0) corrected by Δm2 (given in Table 1) so that all nights have the same peak magnitude.

Whilst iterating through the different periods, we looked for alignment of the peaks and troughs of the lightcurves in order to determine the optimal period as well as the period for which the data were first clearly out of phase. An example of this is shown in Figure 4, where the data are phased to 13.44 hr, 13.45 hr, and 13.46 hr. From this plot one can clearly see that 13.45 hr is in phase while 13.44 hr is too short and 13.46 hr is too long. Based on phasing the data at smaller steps of 0.001 hr, the uncertainty is estimated to be 0.005 hr. We therefore conclude a rotation period of 13.450 ± 0.005 hr for the combined data. The same process was carried out for the individual months as well as for combined adjacent months (as shown in Figure 5 and tabulated in Table 3). Due to the shorter time intervals, the individual months yielded larger uncertainties than the combined months, and uncertainties increased during the apparition due to deteriorating signal-to-noise as well as shorter nightly observing windows in April and May. Phasing of data from single months revealed the shortest rotation period to be in January, with 13.45 ± 0.03 hr, and the longest in April, with 13.47 ± 0.04 hr. Combined adjacent months showed values ranging from 13.450 ± 0.005 hr in January-February to 13.458 ± 0.010 hr in April-May (see Table 3). These numbers hint at a small increase with time (which we will revisit below), but are consistent within the uncertainties. A further search for periodicity was carried out using phase dispersion minimization (PDM; Stellingwerf 1978) and Lomb-Scargle (L-S; Lomb 1976; Scargle 1982) algorithms in Python. The former is a popular method often used to analyze non-sinusoidal lightcurves that have poor time coverage, as it does not require uniformly sampled data. The method phases the data according to an assumed period before dividing the data into a series of bins. The individual variances of each bin are combined and compared to the overall variance of the dataset. This process is carried out for a range of trial periods. For a true period, this ratio will yield a small value θ and the phase dispersion minimization plot will reach a local minimum. The blue line in Figure 6 illustrates the PDM for all of our 2012 data.
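A sketch of the PDM statistic just described, with an astropy Lomb-Scargle cross-check; the bin count, array names, and trial-period grid are illustrative choices rather than the parameters actually used.

```python
import numpy as np
from astropy.timeseries import LombScargle

def pdm_theta(t_days, mags, period_days, nbins=10):
    """Stellingwerf (1978) statistic: pooled within-bin variance of
    the phased data divided by the overall variance. A true period
    yields a local minimum of theta."""
    phase = (np.asarray(t_days) / period_days) % 1.0
    mags = np.asarray(mags, dtype=float)
    s2, dof = 0.0, 0
    for j in range(nbins):
        m = mags[(phase >= j / nbins) & (phase < (j + 1) / nbins)]
        if len(m) > 1:                       # bins need >1 point for a variance
            s2 += np.var(m, ddof=1) * (len(m) - 1)
            dof += len(m) - 1
    return (s2 / dof) / np.var(mags, ddof=1)

# trial = np.arange(13.40, 13.50, 0.001) / 24.0      # hours -> days
# theta = [pdm_theta(jd, mags, p) for p in trial]
# freq, power = LombScargle(jd, mags).autopower()    # L-S cross-check;
# p_double_hr = 2 * 24.0 / freq[np.argmax(power)]    # double the 1-peak period
```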
The L-S technique, on the other hand, is similar to a Discrete Fourier Transform (DFT) in that it transforms the data from the time domain to the frequency domain; however, whilst a DFT usually requires evenly sampled data points, L-S does not. Both PDM and L-S agreed on an optimal double-peaked period of 13.452 hr for the full data set. The L-S algorithm is optimized to only find the single-peaked solution, while PDM returned an equally likely single- or double-peaked solution. The L-S single-peaked result was doubled to determine the double-peaked solution, since Millis et al. (1988) showed that the double-peaked solution yields the true rotation period. The doubling of the single-peaked answer will have introduced some error due to the differences in the shapes of the two peaks. Values for various subsets of the data are summarized in Table 3. The uncertainties associated with both PDM and L-S are indeterminate. Figure 6 shows that, even though the PDM algorithm presents a distinct lowest θ, it is uncertain how far from the absolute minimum a solution can be and still be considered viable. This is also the case with L-S. Conversely, with the manual phasing of the data to a number of trial periods, we were able to identify where the phasing broke down; this was greatly aided by the use of different colors for different days, as shown in Figure 4. As a whole, the rotation periods obtained by inspection agree well with the values obtained through PDM and L-S, to within reasonable uncertainties. The exceptions are the PDM and L-S values for 2012 March and the L-S value for 2012 May, which are unreasonably high at 13.496 hr, 13.486 hr, and 18.844 hr respectively. As illustrated in Figure 4, even deviations of 0.01 hr from a period of 13.45 hr cause the lightcurves to be significantly out of phase. Based on this we conclude that these solutions are incorrect. In summary, by iterating through the different periods, we found a best period of 13.450 ± 0.005 hr, with zero phase set at perihelion (2011 October 19.1). Lightcurves were aligned to the peak brightness, as the peaks were more consistent throughout the apparition than the troughs; the nightly points are as given in Figure 1.

Implications of Viewing Geometry

The rotation period obtained through phasing of the data is the time it takes for the brightness to appear the same as viewed from Earth, known as the synodic period. Conversely, the sidereal period is the period relative to a fixed point in space, and is the 'true' rotation period. As both the Earth and the comet are moving in their orbits, the geometry of the system changes, resulting in different parts of the comet being illuminated and hence in subtle changes in the synodic period. In order to confirm that this did not affect our determined rotation periods, and in particular our comparison of results between different apparitions (Section 3.4), the extent of this effect was assessed. This assessment was based, in part, on our assumption that we were viewing the comet near equator-on, i.e., with the comet's rotational pole near the plane of the sky. If the comet was not being viewed near equator-on, the large amplitude observed in our lightcurves would only occur if the comet was highly elongated, and we have no reason to believe that this is the case. Furthermore, the amplitudes of the lightcurves from the 1984/85 apparition (see Section 3.4) were found to agree with the amplitudes of the lightcurves from the 2011/12 apparition. Due to the differences in viewing geometry between apparitions, the similar amplitudes suggest that we are viewing the comet at similar sub-Earth latitudes and hence that the rotational pole is near the plane of the sky.
As discussed above, the 2012 data showed a hint of an increase in the rotation period from January to May, although they are consistent with a constant value within the uncertainties (see Table 3). Without consideration of the Earth's geometric position, this is suggestive of retrograde rotation (obliquity near 180°), which would result in the sidereal period being longer than our measured synodic period. The prograde case (obliquity near 0°) would have a sidereal period shorter than our measured synodic period by a comparable amount. For an obliquity of 180°, the offset between the synodic and sidereal periods ranges from 0.010 hr to 0.005 hr between 2012 January and May. As our uncertainties are smallest in January, we use 0.010 hr as the most likely synodic-sidereal offset, resulting in a sidereal rotation period of 13.460 hr when only the solar component is considered. During this same interval, the viewing angle from Earth changed much less, and thus the phase angle bisector (cf. Harris et al. 1984) varied by only about half of the solar component alone, implying a somewhat smaller sidereal value. Even though we have reason to believe that the pole is near the plane of the sky, there is evidence for strong seasonal effects (see Section 4.2). Our conclusions, however, change minimally even if the axis is intermediate; the synodic-sidereal offset is only significantly different if the pole is nearly perpendicular to the plane of the sky.

Lightcurve Shape

The lightcurves of the 2011/12 apparition show a clear asymmetry, with one sharp, deeper trough (near phase 0.9 in the middle panel of Figure 4) and one flatter, shallower trough (near phase 0.4 in the middle panel of Figure 4). In addition, the peak-to-peak times of all the monthly lightcurves are larger than the trough-to-trough times, with the latter being approximately 10% shorter. These asymmetries, which are likely due to the shape of the nucleus deviating from that of a simple triaxial ellipsoid due to, e.g., large boulders or flat areas, reduce the uncertainty in the period, as they highlight a clear correct phase and eliminate solutions that are a half phase off. The sudden change in lightcurve shape from February to March, with the sharp trough disappearing, further suggests deviations from the triaxial ellipsoid (e.g., Durech et al. 2011). These distinct features in the lightcurve can also be seen in the 1985 data (see Section 3.4). As seen in Figure 5 and tabulated in Table 3, the amplitudes of the lightcurves vary by approximately 0.15 magnitudes between 2012 January and May. This could be due to a change in orientation of the comet relative to us, resulting in a change in the apparent cross section. Furthermore, the coma suppresses the nucleus contribution, resulting in a decrease in the amplitude of the lightcurve. Removal of the coma would increase the amplitude by around 10-20%; however, this would also greatly increase the uncertainties, as previously discussed. In addition to the effects of the coma, the position angles of the Sun and the solar phase angles (columns 7 and 8 of Table 1) imply that the tail is highly projected, resulting in a large amount of tail remaining in the photometric aperture. The effect of this was not formally assessed. The uncertainty in the amplitude steadily increased between January and May as the comet became fainter and thus the signal-to-noise got worse.
The minimum axial ratio of the nucleus can be calculated from the observed amplitude of the lightcurve using the equation (e.g., Mueller & Ferrin 1996):

Δm = 2.5 log10(a/b),    (2)

where Δm is the peak-to-trough amplitude in magnitudes and a and b are the semi-major and semi-minor axes respectively. The peak-to-trough variation of the lightcurves ranged from 0.35 to 0.50 mag (Table 3), corresponding to minimum axial ratios of 1.38 and 1.63. This is in agreement with the axial ratio of 1.6 that was obtained by Millis et al. (1988) by averaging optical and infrared amplitudes, and confirms that coma contamination was minimal. Although we have elected not to remove the coma contamination for our determination of the rotation period, a first-order removal yields a plausible estimate of the nucleus size. For example, on February 26 the middle of the lightcurve was at an apparent magnitude of m_R = 16.38 and we estimated that 15% of the aperture flux came from coma contamination. Removal of the coma yields a nucleus magnitude of 16.56, which can be converted to a nuclear radius by the standard methodology (e.g., Jewitt 1991). Assuming a geometric albedo of 0.028 (Millis et al. 1988) and a nucleus solar phase angle correction of 0.04 magnitudes per degree, we estimate an effective radius of 4.6 km for this night. Similar calculations throughout the apparition yield effective radii in the range 4.4-4.8 km, in excellent agreement with Kelley et al. (2017), who found an effective radius of 4.57 km using thermal modeling of mid-IR data.
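Both size-related conversions above are short enough to state in code. The sketch below implements the amplitude-to-axial-ratio relation (Equation 2) and the standard absolute-magnitude-to-radius conversion (e.g., Jewitt 1991); the solar R magnitude and the 2.24e22 constant are the commonly used values, and no attempt is made here to reproduce the February 26 viewing geometry.

```python
import numpy as np

M_SUN_R = -27.1  # apparent R magnitude of the Sun (assumed value)

def min_axial_ratio(delta_m):
    """Equation 2 inverted: a/b = 10**(0.4 * delta_m)."""
    return 10 ** (0.4 * delta_m)

def effective_radius_km(m_110, albedo=0.028):
    """Standard conversion of an absolute nucleus magnitude m(1,1,0)
    to an effective radius: p * r^2 = 2.24e22 * 10**(0.4*(m_sun - m)),
    with r in metres (e.g., Jewitt 1991)."""
    return np.sqrt(2.24e22 * 10 ** (0.4 * (M_SUN_R - m_110)) / albedo) / 1e3

print(min_axial_ratio(0.35), min_axial_ratio(0.50))  # ~1.4 and ~1.6
```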
Reanalysis of the 1985 Data

We reanalyzed the publicly available data from three independent groups collected during the favorable 1984/85 apparition. Millis et al. (1988) derived a rotation period of 13.47 ± 0.02 hr, based on their optical observations spanning six nights in late January 1985. Further optical observations of comet 49P/Arend-Rigaux were made by Jewitt & Meech (1985) on four consecutive nights between 1985 January 18 and 21; their observations showed lightcurves with a single-peaked period of either 9.58 ± 0.08 or 6.78 ± 0.08 hr, or a multiple of one of these. Similarly, Wisniewski et al. (1986), who observed the comet on a total of eight nights in 1985 January and February, derived a quadruple-peaked rotation period of 27.312 hr. Based on the thermal IR data from Millis et al. (1988), as well as the asymmetry observed in the 2012 data, we eliminate the single- and quadruple-peaked solutions. None of these data sets was ideal in terms of both removing aliases and obtaining precision. This was largely due to the limited amount of temporal coverage acquired by any one group, as well as the lack of knowledge of the shape of the nucleus (Jewitt & Meech 1985; Wisniewski et al. 1986). In order to improve upon their individual results, we combined the data from the three independent groups, thus significantly increasing the baseline, allowing us to eliminate potential aliases, and increasing the overall precision. Whilst the first two papers tabulated their data, Wisniewski et al. (1986) only presented figures of their results, which were phased using their preferred rotation period; see the Appendix for details of the procedure used to extract the data that we required. Prior to phasing the data, we arbitrarily adjusted the lightcurves in order to bring them all to the same peak magnitude. The large differences in magnitudes between the different data sets were due to the lack of instrumental magnitude correction in the Jewitt & Meech (1985) data, as well as methodological differences and small changes in the geometry or the comet's activity; amplitudes also differ, consistent with each group's use of a different aperture size resulting in differing amounts of coma contamination. The rotation period of the 1985 data was determined in the same way as described in Section 2 for the 2012 data. We obtained a value of 13.45 ± 0.01 hr by inspection and values of 13.450 hr and 13.448 hr using PDM and L-S, respectively, with the phased results shown in Figure 7. Within reasonable uncertainty, these values are consistent with the rotation period of 13.47 hr reported by Millis et al. (1988). As shown in Section 3.2, the offset between the synodic and sidereal periods is between 0.010 hr and 0.005 hr during the 2011/12 apparition for an obliquity of 180°. Similarly, for the same obliquity, there was an average offset of 0.012 hr during the published 1985 observations. Since the offsets are in the same direction during each apparition, the relative effect differs by a maximum of 0.007 hr, a value that lies within the uncertainties of the synodic period of either of these apparitions. This shows that it is safe to ignore the synodic-sidereal effects when intercomparing the rotation periods of the two apparitions, and we can compare the rotation periods directly. Our determined rotation period for the 1984/85 apparition agrees with the values obtained for the 2011/12 apparition within the calculated uncertainties, constraining the maximum change in rotation period to 54 s (0.015 hr). As there were four intervening perihelion passages, this corresponds to a maximum change of less than 14 s per apparition.

Unexpected Feature

As noted previously, the Δm2 values for 2012 March 16-22 stand out as unusually large, suggesting the comet was brighter than expected by 0.2-0.3 mag. Nightly stacks of images during this time revealed a jet-like feature in a direction very different from the expected tail direction of older material (Figure 8). The jet-like feature first appears in the stacked images of March 16, whereas the last observed night prior to this date, February 26, showed no sign of activity (see Figure 2). The feature continued to grow in projected length until it separated from the nucleus around March 25. In order to better determine the point of separation, we removed individual frames with significantly worse seeing as well as multiple images on March 27 that were contaminated by a bright star passing through the feature. Unlike a normal, e.g., sublimation-driven, jet, we believe this to have been an impulse-type outburst, which has an elongated appearance due to the range of particle sizes and masses travelling radially outwards at different velocities. Based on the relatively narrow angular width of the outburst, we hypothesize that the duration of the event was less than ~2 hours. If the event had gone on for a longer period of time, we would expect to see an increased amount of angular spreading of the feature due to the rotation of the comet (unless the jet was near-polar). Detailed modeling of the evolution of the jet's shape and extent would likely constrain aspects of the outburst such as source orientation, grain sizes, and duration, but is beyond the scope of this paper; however, some properties can be derived.
In order to extrapolate back to the time of the onset of activity, the distances from the nucleus to the trailing and leading extents of the jet-like feature were measured for each night. Based on the assumption that the grains travel at a constant projected velocity, a trend line enabled us to extrapolate backwards to the point where the grains originated. This was found to be on March 15 around 18 hours UT and is presented as time zero hours in Figure 9. The trailing and leading particles traveled at projected velocities of 17 m s^-1 and 56 m s^-1, respectively, and the near constant velocity implies that the acceleration due to radiation pressure was primarily in our line of sight (consistent with the solar phase angle being near 15°) and thus had minimal effect on the projected velocity. There is no evidence of a similar event in the previously analyzed apparitions; however, it cannot be ruled out that this is a seasonal effect rather than an isolated outburst. Regardless of its origin, an order of magnitude calculation of the quantity of material involved shows that it is trivial compared to the large size of the nucleus and would not have had a discernible effect on the rotation period.

Figure 7. The combined 1985 data phased to the best rotation period, 13.45 ± 0.01 hr, with distinct symbols for the data from Millis et al. (1988), crosses for the data from Jewitt & Meech (1985), and filled circles for the data from Wisniewski et al. (1986); different colors are used for different nights by the same authors.

Based on the Δm2 values in Table 1, we can crudely estimate that the cross section, C, of material released by the outburst was ~30% of the total nucleus cross section. This can be converted to a mass, M, by M = (4/3) × ρ × a_avg × C (e.g., Jewitt 2013), where a_avg is the average particle radius (assumed to be 1 micron) and ρ is the material density (assumed to be 1900 kg m^-3; Rotundi et al. 2015), yielding ~5×10^4 kg. For reasonable assumptions about the bulk density of Arend-Rigaux (~500 kg m^-3) and dust-to-gas ratio (~1), this mass of material can be easily explained by the excavation of a hemispherical pit <10 m in radius. This is comparable to or smaller than many pits observed on the surface of 67P/Churyumov-Gerasimenko by Rosetta (e.g., Sierks et al. 2015), and significantly smaller than the crater produced by the Deep Impact experiment (200 ± 20 m diameter; Schultz et al. 2013). Thus, an outburst such as this is likely unexceptional, and it should come as no surprise that it did not produce a detectable change in the rotation period.
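The order-of-magnitude outburst estimate can be reproduced directly from the stated assumptions (30% of the nucleus cross section, 1 micron grains at 1900 kg m^-3, a 500 kg m^-3 bulk density, and a dust-to-gas ratio of 1); a minimal sketch:

```python
import numpy as np

R_NUC = 4.57e3      # effective nucleus radius, m (Kelley et al. 2017)
A_AVG = 1.0e-6      # assumed average grain radius, m
RHO_GRAIN = 1900.0  # grain density, kg/m^3 (Rotundi et al. 2015)
RHO_BULK = 500.0    # assumed nucleus bulk density, kg/m^3
DUST_TO_GAS = 1.0   # assumed dust-to-gas mass ratio

C = 0.30 * np.pi * R_NUC**2                   # outburst cross section, m^2
M_dust = (4.0 / 3.0) * RHO_GRAIN * A_AVG * C  # Jewitt (2013); ~5e4 kg
M_total = M_dust * (1.0 + 1.0 / DUST_TO_GAS)  # dust plus gas
# hemispherical pit of volume M_total / RHO_BULK:
r_pit = (3.0 * M_total / (RHO_BULK * 2.0 * np.pi)) ** (1.0 / 3.0)

print(f"dust mass ~{M_dust:.1e} kg, hemispherical pit radius ~{r_pit:.1f} m")
```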
Gas and Dust Production Rates

Overall, comet 49P/Arend-Rigaux was simply too faint for us to obtain our standard narrowband photometric measurements of the coma during the 2011/12 apparition. However, we were able to obtain data during its 2004/05 apparition, when it was somewhat brighter but only available for one or two sets per night due to the short observing window from our northern hemisphere location and competing targets. Here we present these data, along with a reanalysis of similar observations obtained in 1984/85 by Millis et al. (1988), so that both data sets utilize the same reduction parameters; note in particular that the Haser scalelengths and daughter lifetimes used by us to derive gas production rates changed in the decade following the Millis et al. (1988) paper. Using our now standard observing and reduction procedures (cf. A'Hearn et al. 1995; Schleicher & Bair 2011), observations at both apparitions were obtained with photoelectric photometers using narrowband comet filters (the IHW set in 1984/85 and the HB set in 2004/05; cf. Osborn et al. 1990; Farnham et al. 2000). Reduced fluxes, aperture abundances, and production rates were computed for each gas species: OH, NH, CN, C3, and C2. We also compute abundance ratios, water production rates, the effective active area on the surface of the nucleus required to produce the water based on a standard vaporization model, and the active fraction based on the surface area of the nucleus. For the dust, the fluxes and the now standard proxy for dust production, Afρ (A'Hearn et al. 1984b), are determined from the continuum measurements (see Tables 4, 5 and 6). Because of the wide range of solar phase angles, particularly in 1984/85, phase adjustments were made to yield A(0°)fρ. Furthermore, due to evidence for trends in Afρ with aperture size (with a very wide range of aperture sizes), we apply an aperture adjustment. In Figure 10, we plot the log of the production rates for each species with respect to time from perihelion. In spite of the fact that the temporal coverage at each apparition is sparse, it is evident that there is a significant pre-/post-perihelion asymmetry, with production rates as much as 50% to 100% greater during comet 49P/Arend-Rigaux's approach to the Sun. Less certain is the time of peak production because of differences between the two apparitions; we estimate that peak production occurred near ΔT ~ -20 days. Both properties imply a seasonal effect due to a changing sub-solar latitude and one or more active source regions, rather than uniform leakage of gas from the entire surface. However, as indicated in Section 3.3, the obliquity of the pole cannot be too large or we should have seen a significant change in the lightcurve amplitudes as a function of viewing geometry. Inter-comparison of the 1984/85 and 2004/05 data reveals a surprise: CN and C3 clearly imply higher values at the later apparition; C2 and NH are less certain but also consistent with this; but OH and the dust exhibit the opposite long-term secular trend. As discussed later, the apparent dust behavior is primarily an artifact due to phase effects and aperture trends, but the OH is a puzzle. In our photometric database (Schleicher & Bair 2016) we have many examples of the OH having differing amounts of asymmetry or differing r_H-dependencies from the minor species. Comet 49P/Arend-Rigaux is the first case where OH and the minor species exhibit secular changes in opposite directions, usually indicative of at least two source regions having different compositions and a possible precession of the pole. However, significant precession seems highly unlikely due to the lack of change in the rotation period, and thus little or no evidence of torquing, coupled with the small outgassing rates and the large nucleus size. Also, the OH secular change is large; we estimate it was about 2 times greater in 1984/85 than in 2004/05. We therefore tentatively conclude that a change in the relative outgassing rates between two source regions (having different relative abundances of minor species versus water) is not due to solar insolation but rather to changes in the source regions themselves. In spite of the secular variations seen in the relative abundances, comet 49P/Arend-Rigaux remains in the 'typical' compositional group throughout (A'Hearn et al. 1995; Schleicher & Bair 2016).
Water production rates, based on OH and listed in Table 6, imply an effective active area that ranges from 0.53 km^2 to 2.27 km^2 over the entire dataset, using a vaporization model by A'Hearn (http://www.astro.umd.edu/~ma/evap/) based on the work of Cowan & A'Hearn (1979) for a pole-on, rapidly rotating nucleus. The overall mean value is 1.00 km^2 while the median value is smaller at 0.88 km^2. When combined with the effective radius given earlier of 4.57 km (Kelley et al. 2017), this yields an active fraction of 0.38% (mean) or 0.34% (median). In the context of our entire photometric database, comet 49P/Arend-Rigaux thus has the fifth lowest active fraction. The most extreme is recently investigated 209P/LINEAR at ~0.024% (Schleicher & Knight 2016), followed by 28P/Neujmin 1 at ~0.05%, P/LONEOS (2001 OG10) at ~0.06%, and P/Siding Spring 3 (2006 HR30) at ~0.13% (Schleicher & Bair 2016). Interestingly, while the first two are Jupiter-family objects, the latter two are both in the Halley-type dynamical class and presumed to originate from the Oort Cloud rather than the Kuiper Belt, implying Oort Cloud comets can also evolve to a nearly inert state.
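The active fraction quoted above follows directly from the active area and the effective radius; a minimal sketch:

```python
import numpy as np

R_NUC_KM = 4.57  # effective radius, km (Kelley et al. 2017)

def active_fraction(active_area_km2, r_km=R_NUC_KM):
    """Fraction of the total nucleus surface (4*pi*r^2) required
    to supply the modeled water production."""
    return active_area_km2 / (4.0 * np.pi * r_km**2)

print(f"mean:   {active_fraction(1.00):.2%}")   # ~0.38%
print(f"median: {active_fraction(0.88):.2%}")   # ~0.34%
```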
Figure 10. Log of the production rates for each observed molecular species and A(θ)fρ for the green continuum plotted as a function of time from perihelion. Data points from the 1984/85 apparition are shown as triangles while those from 2004/05 are shown as circles. Even with so few pre-perihelion points, it is evident that each of the gas species exhibits a significant seasonal effect, with production rates substantially lower following perihelion, indicative of a source region moving from summer towards winter. The opposite behavior exhibited by the dust is entirely an artifact, primarily due to phase effects and secondarily due to a trend with aperture size (see text and Figure 11). There is also possible evidence for a long-term secular change, but with the minor species increasing between 1984/85 and 2004/05 while OH decreases.

As noted earlier, the dust production, as given by A(θ)fρ, differs greatly from those of the minor gas species, most closely resembling the behavior of OH (see Figure 10). However, this perception is an artifact due to a combination of viewing circumstances, specifically solar phase angle effects, and the plate scales of the telescopes used, associated with aperture trends. In particular, while all of the 2004/05 observations were taken at a narrow range of solar phase angles (37°-43°), only the first night in 1984 had a comparable value (40°), while on later nights the solar phase angle ranged between 5° and 28°. We therefore normalized the results to 0° solar phase angle by applying our composite phase curve (cf. Schleicher & Bair 2011 and references therein); the specific adjustment factors are listed in Table 4. As is evident from the table, Afρ for the largest solar phase angles is adjusted by 2.3 times more than for the smallest angles, negating the apparent increase in Afρ seen near the end of the apparition for the 1985 observations. Comet 49P/Arend-Rigaux also exhibited a trend with aperture size in Afρ, with larger apertures yielding smaller values, implying a steeper radial profile for the dust than the canonical 1/ρ expected for coasting and unchanging grains. This is not a surprise, as few comets actually follow the 1/ρ curve, but the small number of cases where two apertures were measured on a given night made determining an appropriate adjustment difficult. While we might normally just note the issue and not make any adjustments, the nearly order of magnitude range in projected aperture sizes, with ρ varying from 4300 km to 38,900 km, requires a nominal adjustment. Based on the trends observed, including that from the imaging in early 2012, we have normalized all log A(0°)fρ values to log ρ = 4.0, using an adjustment of 0.02 in the log for each 0.10 change in log ρ. Thus Afρ for the largest projected radius (log ρ = 4.59) increases by 31% when normalized to 10,000 km, while the smallest value (log ρ = 3.63) decreases by 19%.
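The aperture normalization just described is a one-line scaling of 0.02 dex per 0.10 dex in log ρ about the log ρ = 4.0 reference; a minimal sketch, with the function name our own:

```python
import numpy as np

def normalize_afrho(afrho, rho_km, log_rho_ref=4.0, slope=0.2):
    """Normalize A(0 deg)f*rho to a projected radius of 10,000 km,
    applying 0.02 in the log for each 0.10 change in log rho
    (i.e., a slope of 0.2 dex per dex)."""
    correction = slope * (np.log10(np.asarray(rho_km, float)) - log_rho_ref)
    return np.asarray(afrho) * 10 ** correction

print(normalize_afrho(100.0, 10**4.59) / 100.0)  # ~1.31 at log rho = 4.59
print(normalize_afrho(100.0, 10**3.63) / 100.0)  # ~0.84 at log rho = 3.63
```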
SUMMARY AND DISCUSSION

We imaged comet 49P/Arend-Rigaux on 33 nights between 2012 January and May and obtained lightcurves of the nucleus. By phasing all of the lightcurves, a synodic rotation period of 13.450 ± 0.005 hr was determined. Similarly, PDM and L-S both yielded a rotation period of 13.452 hr. Rotation periods of monthly and bi-monthly subsets, as determined by inspection, are suggestive of a slight increase in the rotation period during the 2011/12 apparition, consistent with a retrograde rotation of the nucleus. Even though the change of 0.008 hr between January and May is small, and within the calculated uncertainties, it is in agreement with the expected synodic-sidereal offsets. In order to determine whether the rotation period of 49P/Arend-Rigaux has undergone significant change, we reanalyzed data from the 1984/85 apparition. By combining the observational data from three independent groups, we significantly increased the number of nights of data and thus were able to determine the rotation period to a higher degree of precision. Inspection revealed a period of 13.45 ± 0.01 hr, implying that any change in rotation period was less than 14 s per apparition between 1984/85 and 2011/12. This small change in the rotation period comes as no surprise considering the large size of the nucleus combined with the lack of a detectable jet which could result in a torque. This result further highlights that comet 49P/Arend-Rigaux is largely inactive. Samarasinha and Mueller (2013) introduced a parameter, X, in order to predict changes in rotational periods, which should be approximately constant for comets with similar bulk densities, nucleus shapes, and activity patterns. Their data were limited due to the very low number of comets with both detectable changes in rotation period and reasonable estimates of nucleus size. Using their Equation 12, our upper limit for the change of the rotation period of 14 s per orbit, and ζ_A-R/Encke = 0.31 (Samarasinha, personal comm.), we find X/X_Encke < 0.45, where X has been normalized to comet 2P/Encke following the methodology of Samarasinha and Mueller (2013). Although X/X_Encke for comet 49P/Arend-Rigaux is lower than for the four comets in their sample, the upper limit differs from the most extreme case by only a factor of 4. This further suggests what is intuitively obvious: that such a low change in rotation period may not be unusual given comet 49P/Arend-Rigaux's low activity rate and large nucleus size. Alternatively, the formalism for X is not valid for 49P/Arend-Rigaux, as the rotation period might not have changed. Despite the negligible change in comet 49P/Arend-Rigaux's rotation period over the past three decades, we encourage additional measurements of the rotation period on future apparitions.

Comet 49P/Arend-Rigaux was one of the first comets to have its nucleus rotation period determined to high precision, so it offers a nearly unique opportunity to monitor the long-term effects of cometary activity on rotation. The only other comparable object is 10P/Tempel 2, another large, weakly active comet. Comet 10P/Tempel 2's rotation period has been measured at multiple epochs since 1988 (e.g., Jewitt & Meech 1988; Mueller & Ferrin 1996; Knight et al. 2012), yielding the smallest measured change in rotation period of any comet, ~16 s per orbit (Schleicher et al. 2013). Given that extinct or nearly dormant comets make up a non-negligible fraction of the near-Earth object population (e.g., Mommert et al. 2015), gaining a better understanding of the long term behavior of comets as they become inactive may prove helpful in efforts to assess the risk they pose. We found an unexpected increase in brightness in 2012 March which was accompanied by a jet-like structure whose appearance evolved over ~2 weeks. By measuring the projected distance of the particles relative to the nucleus, we were able to constrain the grain velocities to a minimum of 17 m s^-1 and 56 m s^-1 for the inner and outer ends of the jet-like feature respectively. This allowed us to estimate that the event took place on 2012 March 15 around 18 UT and lasted for no more than 2 hours. Even though this was a short impulse event, we see a jet-like feature, presumably due to the particles travelling at a large range of different velocities, resulting in the grains spreading out radially from the nucleus. Whilst we do not believe this event to be a seasonal effect, we encourage observations at the same orbital position. This outburst is similar to an outburst of 67P/Churyumov-Gerasimenko detected from the ground by Boehnhardt et al. (2016). Such outbursts are orders of magnitude smaller than the large outbursts of 17P/Holmes (e.g., Montalto et al. 2008; Schleicher 2009) or 29P/Schwassmann-Wachmann 1 (e.g., Roemer 1958; Whipple 1980). Small outbursts are likely common (e.g., A'Hearn et al. 2005), but require frequent, high quality observations to be detected. Appropriate observations are likely to be obtained for large numbers of comets in the near future via the Zwicky Transient Facility (ZTF) and, later, the Large Synoptic Survey Telescope (LSST). We encourage the study of outbursts with ZTF and LSST data as they are likely to yield new insights into the internal composition of comets as well as the processes acting at or near a comet's surface.

Figure 11. Phase-adjusted A(0°)fρ as a function of time from perihelion (adjustment factors as given in Table 4). On nights when measurements were made with more than one aperture, Afρ values always exhibited a decreasing trend with increasing aperture size; given the nearly order of magnitude range in aperture sizes, we also applied a nominal adjustment to normalize all results to a projected radius of 10,000 km. The result is quite similar in appearance to that of CN shown in Figure 10. Finally, we also include measurements extracted from the R-band imaging in early 2012 (squares), after first removing the relatively large nucleus contribution. These points confirm the steep drop-off after perihelion.

The amplitudes of the light curves from the 1984/85 and 2011/12 apparitions were very similar despite the different viewing geometry, implying that we saw the comet at similar sub-Earth latitudes in both apparitions. Furthermore, the large amplitudes of the light curves suggest that we saw the comet near equator-on, at an obliquity near 0° or near 180°.
The apparent small lengthening of the rotation period evident in subsets of the 2012 data implies that the retrograde (180°) solution is correct.

Narrowband photometry of the coma during the 1984/85 and 2004/05 apparitions yielded production rates for a number of species, and showed a strong pre-/post-perihelion asymmetry. Furthermore, the very steep r_H-dependence post-perihelion suggests a strong seasonal effect due to a changing sub-solar latitude. This implies that the axis is tilted in an intermediate position, such that the change in amplitude is minimal and yet the source region is able to change from 'summer' to 'winter' in a short time interval. A similar effect was observed on 9P/Tempel 1, which had both a small tilt as well as strong seasonal effects, made possible by the source region being located very close to the pole (e.g., Schleicher 2007). The location of the presumed source region on the surface of 49P/Arend-Rigaux is unknown; however, the photometric measurements imply that there are distinct active regions as opposed to uniform leakage across the surface. Furthermore, photometry revealed that comet 49P/Arend-Rigaux is the first comet for which OH and the minor species exhibit opposite trends. This is also indicative of multiple distinct active regions on the surface. Finally, water production rates, based on OH measurements, showed that comet 49P/Arend-Rigaux has the fifth lowest active fraction in our entire photometric database. This is consistent with the lack of an observable change in nucleus rotation period. Additional gas production rate measurements during future apparitions are highly desirable to investigate the surprising opposite trend of OH and minor species and/or look for evolution of activity as the comet ages.

ACKNOWLEDGMENTS

We thank Brian Skiff and Larry Wasserman for helping us obtain data and Jessica Sunshine for useful discussions regarding interpretation of our results. Many thanks to Emily Kramer for attempting to obtain light curves from the 2004 data, to Allison Bair for assistance in creating several tables and to Tony Farnham for helping with calculations of the synodic-sidereal effects. We also thank Beatrice Mueller for a thorough and helpful review. Additional thanks go to the University of Sheffield for presenting N.E. with the opportunity of spending an academic year at the University of Maryland. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013), as well as PyAstronomy. It also used SAOImage DS9, developed by the Smithsonian Astrophysical Observatory, and the "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France (Bonnarel et al. 2000). M.M.K. and D.G.S. were supported by NASA's Planetary Astronomy grant NNX14AG81G. N.E. was partially supported by the Marcus Comet Fund at Lowell Observatory.

APPENDIX

As noted in Section 3.3, Wisniewski et al. (1986) presented and then published a very short paper in the proceedings of Asteroids, Comets, and Meteors II. In their paper they presented preliminary results from photometric measurements they had obtained of two comets, 28P/Neujmin 1 and 49P/Arend-Rigaux, in 1984 and 1985, respectively.
Specifically, for comet 49P/Arend-Rigaux they gave the derived period and presented two figures, the first a sample lightcurve from their 4th night of observations (1985 January 20 UT), and the second a phased lightcurve using their preferred period for all eight nights of data (January 17, 18, 19, 20, 21, and February 15, 16, 17), with each night having a different symbol. A separate CCD image of comet 49P/Arend-Rigaux is also presented, showing that the comet had a non-negligible coma, but that the nucleus was readily detected. While an aperture of 12 arcsec was used with the photoelectric photometer to minimize the coma contribution (Wisniewski & Fay 1985), they specifically note that the true amplitude of variability of the nucleus itself is therefore much larger than the measured amplitude, due to coma contamination (Wisniewski et al. 1986).

Because no data were tabulated, we had to extract data from the phase plot and compute the UT times associated with each data point based on the period and the zero point used for the phasing, and the knowledge of which night each point was associated with (based on the symbols). Unfortunately, there were several problems with what might have been a straightforward process of deriving the UT times. The first difficulty was that the authors gave a different value for the period in the text (1.138 day) than in the phase plot's key (1.134 day). We therefore performed all of our derivations twice, once with each value, until we could determine which value for the period had been used in creating the phase plot (see below). The second problem is that no indication was given as to the date and time to which zero rotational phase corresponded. Finally, as was immediately evident by simply comparing the January 20 lightcurve plot to the same night's data on the phase plot, while the overall shape of the lightcurve was the same, the detailed pattern of points exhibited numerous discrepancies.

After enlarging and scanning both figures, we used a digitization utility to measure each point for a given night, repeating for each of the eight nights on the phase plot; with the magnified view, the identification was ambiguous for only a few overlapping points. For each night we determined the relative number of rotational cycles based on the period, and this was added to the extracted phase value and then multiplied by the period to get a relative time in days. We then compared the derived lightcurve in units of decimal hours for January 20 to the original UT lightcurve for this night, and determined that the data points on the phase plot had non-negligible scatter in both dimensions. While most points were plotted within ±0.01 day of their values on the UT plot, a few differed by more than 0.02 day. However, the magnitudes exhibited a systematic shift on average, with the majority shifted lower on the phase plot by 0.01 mag, while several others differed by ±0.02 mag or more. Since the points on the original UT plot exhibit a much cleaner pattern and more regular spacing, we conclude that the authors were less careful when plotting the points (presumably by hand) on the phase plot, possibly because this was a preliminary result presented in conference proceedings and not intended for a final, refereed publication. Because of this 'jitter' introduced in the phase plot, determining the zero point and the period used in its creation was made more difficult, but it eventually was sorted out using a variety of constraints.
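The cycle-counting step just described is a one-line computation; a minimal sketch follows, with the two candidate periods from Wisniewski et al. (1986) and the zero point anticipating the value derived in the next paragraph (the phase value and cycle count in the example are illustrative, not digitized points):

```python
# Convert an extracted rotational phase plus that night's cycle count into
# decimal days. Candidate periods are from Wisniewski et al. (1986).
P_SHORT, P_LONG = 1.134, 1.138          # days
ZERO_POINT = -0.50                      # days relative to 1985 January 17.0 UT

def relative_time(phase, n_cycles, period):
    """Relative time (days) of a digitized point at the given rotational phase."""
    return (n_cycles + phase) * period

# e.g., an illustrative point at phase 0.37 during the 3rd cycle:
for period in (P_SHORT, P_LONG):
    t_ut = ZERO_POINT + relative_time(0.37, 3, period)  # days from Jan 17.0 UT
    print(period, t_ut)
```

Comparing the times produced by the two candidate periods against the published UT lightcurve for January 20 is exactly the discrimination test described in the following paragraph.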
Based on the UT plot for January 20, the zero point for phasing was near a value of -0.5 day from UT January 17.0 (observations began on the 17th). An offset of exactly -0.50 day would imply that Julian Dates had been used, and Fay and Wisniewski (1978) had used 0 hr Julian Date for the zero point in their rotational study of Comet 6P/d'Arrest. This zero point and the shorter period of 1.134 day (listed within the phase plot) both gave matching times (to within 10 minutes, consistent with the jitter) for the lightcurve plot on January 20. Extracted times on all eight nights were also compared to the comet's ephemeris, confirming that for this scenario the comet was always quite accessible; in fact, observations usually started and stopped when the comet reached 50° altitude on either side of the meridian. In contrast, using the longer period (1.138 day) required a zero point offset of about -0.55 day, which does not correspond to a sensible starting point. We also compared the derived lightcurves with those of Millis et al. (1988) and Jewitt & Meech (1986) for nights in common, and the longer period exhibited an unacceptable systematic drift between the times of lightcurve maxima over the apparition. Having eliminated the 1.138 day solution, we concluded that the last digit in the text was a simple typo, and that the authors had indeed originally phased the data using a period of 1.134 day.

The extracted magnitudes and derived decimal dates are listed in Table 7. As with January 20, we assume that similar jitter affects all eight nights of data and that similar uncertainties to those detailed above are present throughout. However, as with January 20, the ensemble lightcurve on each night should be reasonable, especially when determining the timing of maxima and minima, the critical constraints for period determinations.

Table 7: Dates, times and magnitudes extracted from Wisniewski et al. (1986)
Eigenstate versus Zeeman-based approaches to the solid-effect

The solid effect is one of the simplest and most effective mechanisms for Dynamic Nuclear Polarization. It involves the exchange of polarization between one electron and one nuclear spin coupled via the hyperfine interaction. Even for such a small spin system, the theoretical understanding is complicated by the contact with the lattice and the microwave irradiation. Both being weak, they can be treated within perturbation theory. In this work, we analyze the two most popular perturbation schemes: the Zeeman and the eigenstate-based approaches, which differ in the way the hyperfine interaction is treated. For both schemes, we derive from first principles an effective Liouville equation which describes the density matrix of the spin system; we then study numerically the behavior of the nuclear polarization for several values of the hyperfine coupling. In general, we obtain that the Zeeman-based approach underestimates the value of the nuclear polarization. By performing a projection onto the diagonal part of the spin-system density matrix, we are able to understand the origin of the discrepancy, which is due to the presence of parasite leakage transitions appearing whenever the Zeeman basis is employed.

I. INTRODUCTION

Dynamic nuclear polarization (DNP) is an extremely promising technique to improve the signal-to-noise ratio in magnetic resonance imaging (MRI). DNP was predicted and discovered in the fifties [1, 2], and is nowadays a powerful method to enhance the polarization of nuclear spins, both in the solid [3] and liquid [4] states. In the DNP setup [5, 6], a compound doped with electron radicals, at low temperatures and in the presence of a magnetic field, is irradiated with microwaves at a frequency close to the electron Larmor frequency. After some time, a steady state is reached where polarization has been transferred from the electron to the nuclear spins. The stationary nuclear polarization reaches extremely high values, well beyond those expected at thermal equilibrium in similar conditions.

A large number of mechanisms are responsible for such a polarization transfer, and which one is the most relevant among them depends on the temperature, the magnetic field and the radical's concentration [3, 7-12]. The microscopic description of the DNP protocol has the advantage of indicating the most effective mechanisms and the main obstacles for the hyperpolarization of the nuclear spins given the setup conditions. The first step in this direction is to derive an effective equation for the time evolution of the density matrix of the spin system. This task cannot be done exactly, but requires approximations. In particular, the Zeeman interaction with the magnetic field is much stronger than the coupling with the lattice and the action of the microwaves, which can be treated within perturbation theory. As a result, the effective time evolution of the density matrix is obtained from a perturbative expansion up to the second order [13].
How to account for hyperfine and dipolar interactions between spins is more controversial. The protocol developed in [12, 14-16] is based on the exact eigenstates of the spin Hamiltonian, which includes the dipolar and hyperfine interactions. On the contrary, the protocol developed in [11, 17] treats the interactions perturbatively and is based on the Zeeman eigenstates. The former approach thus requires the exact diagonalization of the interacting spin Hamiltonian, while the latter can be described with a much simpler Zeeman basis.

An important tool for the microscopic understanding of DNP are numerical simulations, but with the strong limitation that their complexity grows exponentially with the total number of spins, N [10, 11, 18]. In particular, the Liouville scheme, which targets the time evolution and steady state of the full spin density matrix, requires the diagonalization of 2^N × 2^N matrices, and one is restricted to at most N ≈ 7 spins. One can reach system sizes of N ≈ 14 within the Hilbert approximation, obtained by projecting the time evolution onto the diagonal elements only and reducing the dynamics to transitions between the 2^N eigenstates [15, 16].

In this paper, we derive and compare the time evolution obtained within the eigenstate and the Zeeman-based approaches. For simplicity, we focus on the minimal DNP system: an electron spin hyperpolarizes a nuclear spin via the well-resolved solid effect, which consists in the microwave irradiation of forbidden two-spin transitions, ultimately allowed by hyperfine interactions [19, 20]. In both cases the evolution equations for the density matrix are derived using the Lindblad formalism, which can be easily generalized to more complex spin systems. Moreover, we show that using the Schrieffer-Wolf perturbation theory [21] the transition rates of the Hilbert approximation can be systematically computed. We prove that for weak hyperfine interactions, both time evolutions provide similar results. On the contrary, for large values of the hyperfine interactions, the Zeeman approach is no longer reliable, as parasite leakage transitions come into play. Nevertheless, the Zeeman approach is very useful to study how the nuclear spin hyperpolarization diffuses in real systems, as the transition rates between Zeeman eigenstates do not require matrix diagonalization and can be computed for very large N with Monte Carlo methods [22].

The paper is organized as follows: In Sec. II we derive the general form of the time evolution of the reduced density matrix within the Lindblad formalism. In Sec. III, we further simplify the time evolution by restricting ourselves to the diagonal part of the reduced density matrix ρ (the aforementioned Hilbert approximation), and in Sec. IV, we present the numerical results for the steady state obtained for the eigenstate and the Zeeman-based approaches using both the Liouville and the Hilbert schemes.
II. DERIVATION OF THE TIME EVOLUTION EQUATION FOR THE SPIN DENSITY MATRIX

The Hamiltonian

We consider an electron and a nuclear spin, interacting via hyperfine interactions and weakly coupled to the lattice, which plays the role of a thermal bath at the temperature β⁻¹. The spins are irradiated by a microwave field resonant at ω_MW. The full Hamiltonian reads:

Ĥ_tot = Ĥ_S + Ĥ_MW + Ĥ_S-L + Ĥ_L .    (1)

Let us go through all the terms in (1), one by one:

• The time-independent Hamiltonian Ĥ_S contains the spins' degrees of freedom only. In our two-spin system, it writes as the sum of the Zeeman and hyperfine contributions [14, 23-25]:

Ĥ_S = Ĥ_Z + Ĥ_hf ,    (2)

Ĥ_Z = ℏω_e Ŝ_z − ℏω_n Î_z ,   Ĥ_hf = Ŝ · Ā · Î ,    (3)

where Ŝ/Î are the electron/nuclear spin operators, ω_e/n their respective Larmor frequencies, and Ā is the hyperfine interaction matrix, which includes both isotropic and anisotropic contributions [3, 7, 19].

• The microwave Hamiltonian Ĥ_MW is time-dependent and reads:

Ĥ_MW(t) = 2ℏω_1 cos(ω_MW t) Ŝ_x ,    (4)

with ω_1 the microwave field amplitude. To avoid dealing with an explicitly time-dependent problem, here we employ the rotating wave approximation (RWA) [26], which will be detailed at the end of this section. It is based on considering a reference frame rotating at the frequency ω_MW and neglecting terms with fast frequencies (i.e., 2ω_MW). With this approximation, the Hamiltonian in the rotating frame becomes time-independent.

• We write the coupling between the lattice and the spin system in the form:

Ĥ_S-L = Σ_{α=x,y,z} ( λ_S Ŝ_α φ̂^S_α + λ_I Î_α φ̂^I_α ) ,    (5)

where φ̂^S_α and φ̂^I_α are the lattice modes that linearly couple to the spin operators. The constants λ_S and λ_I describe the strength of the coupling to the two spin species.

• Ĥ_L contains the lattice modes, which we assume to be at thermal equilibrium at the temperature β⁻¹. We will see [13] that the detailed form of this Hamiltonian is not important for the evolution of the spin system.

Time evolution of the reduced density matrix

The time evolution [27] of the density matrix ρ_tot of an isolated system described by the Hamiltonian Ĥ_tot is given by the equation:

dρ_tot/dt = −(i/ℏ) [Ĥ_tot, ρ_tot] .    (6)

The Hamiltonian given in (1) encodes all the degrees of freedom of both the spin system and the lattice, so obtaining the exact time evolution from Eq. (6) is an impossible task. One then needs to turn to some approximations in order to treat the problem. As the couplings of the spins' degrees of freedom to the microwaves and the lattice are much weaker than the Zeeman term (λ_S, λ_I ≪ ℏω_n), we can focus on the effective time evolution of the reduced density matrix of the spin system once the lattice degrees of freedom are traced out:

ρ(t) = Tr_L [ρ_tot(t)] .    (7)

A set of approximations needs to be performed in order to obtain the effective time evolution of ρ from Eq. (6):

• The approximation of weak coupling between the spin system and the lattice. In practice this allows performing a second-order perturbative expansion in λ_I, λ_S that leads to an effective time evolution for the much smaller reduced density matrix ρ instead of ρ_tot.

• The Born-Markov approximation, which supposes that the characteristic time of the lattice is much faster than the spin-lattice relaxation times, T_1e and T_1n. In this limit the lattice remains at thermal equilibrium at temperature β⁻¹ and its state is not influenced by that of the spins. As a result, the reduced density matrix ρ at time t + dt only depends on the state at time t instead of on the full history at times t′ < t.
• The approximations above do not guarantee that the resulting evolution for ρ(t) is physical: it should additionally be linear and preserve the trace and positivity of ρ. This can be enforced if one also assumes the secular approximation, according to which oscillating phases in off-diagonal elements of ρ are neglected [13]. The validity of this approximation relies on the assumption that T_2e, T_2n ≪ T_1e, T_1n.

These hypotheses lead to a Lindblad formulation of the dynamics of the spin system, the full derivation being detailed in appendix VI and in reference [13]:

dρ/dt = −(i/ℏ) [Ĥ_S + Ĥ_MW, ρ] + L[ρ] .    (8)

The commutator in (8) corresponds to the standard time evolution of an isolated quantum system, while the last term corresponds to the Lindblad super-operator L; it is responsible for a non-unitary evolution which still acts linearly and preserves the trace, hermiticity and positivity of ρ. These requirements strongly constrain its form, and the final result reads (see appendix VI):

L[ρ] = Σ_{Ô∈O} Σ_ω J_Ô(ω) [ Ô_ω ρ Ô_ω† − (1/2){ Ô_ω† Ô_ω, ρ } ] ,    (9)

where J_Ô(ω) is the spectral function

J_Ô(ω) = (λ_Ô²/ℏ²) ∫ dt e^{iωt} ⟨φ̂_Ô(t) φ̂_Ô(0)⟩ ,    (10)

and O = { Ŝ_x, Ŝ_y, Ŝ_z, Î_x, Î_y, Î_z } are the different spin-flip operators linearly coupled to the lattice modes. To understand the subscript in Ô_ω, note that in this weak-coupling limit the lattice exchanges energy quanta ℏω with the spins by inducing transitions between the well-resolved energy levels of the unperturbed spin Hamiltonian Ĥ_0. As a result, the sum over ω runs over all its energy gaps, and Ô_ω is obtained from Ô by selecting only the transitions with an energy gap ℏω, i.e. it reads:

Ô_ω = Σ_{n,m : ε_m − ε_n = ℏω} |n⟩⟨n| Ô |m⟩⟨m| ,    (11)

where |n⟩ and |m⟩ are the eigenstates of the Hamiltonian Ĥ_0.

Consequently, the precise time evolution of the system depends on which terms in Ĥ_tot are considered to be large and are included in Ĥ_0, determining the spectrum of well-resolved energy levels. The Zeeman term Ĥ_Z is the largest contribution, while Ĥ_MW and Ĥ_S-L are always weak. How to account for the hyperfine interactions is more questionable.

In this work, we compare and discuss two possible choices for Ĥ_0 considered in the literature:

(i) A Zeeman-based approach [11, 17, 28, 29], for which we consider Ĥ_0 = Ĥ_Z as the non-perturbed Hamiltonian, so that the eigenstates are factorized in the Zeeman basis. Here the hyperfine interactions are treated perturbatively, at the same level as the microwave irradiation Ĥ_MW and the lattice coupling Ĥ_S-L.

(ii) An eigenstate-based approach [10, 12, 14-16, 30, 31], for which the non-perturbed Hamiltonian is Ĥ_0 = Ĥ_S. This approach treats the hyperfine interactions exactly, but requires the exact diagonalization of the interacting spin Hamiltonian, implying in general a drastic restriction of the accessible system sizes.

The difference between the two approaches can be seen also in the absence of microwave irradiation. Indeed, within the Born-Markov approximation, ρ_latt is assumed to be at thermal equilibrium, which translates into the condition

J_Ô(−ω) = e^{−βℏω} J_Ô(ω) .    (12)

This implies that the rates of the transitions generated by the Lindblad super-operator respect detailed balance at the temperature β⁻¹, so that

W_{|n⟩→|m⟩} / W_{|m⟩→|n⟩} = e^{−β(ε_m − ε_n)} .    (13)

Therefore, in the eigenstate-based approach, where Ĥ_0 = Ĥ_S, one finds that [ρ_Gibbs, Ĥ_S] = 0, so that the steady state coincides with the Gibbs equilibrium. If we consider the Zeeman-based approach, the dynamics is more complex, as [ρ_Gibbs, Ĥ_S] ≠ 0: on the one side the lattice tries to thermalize the spins at the Gibbs equilibrium e^{−βĤ_Z}/Z, while the hyperfine interactions induce additional parasite transitions, which slightly modify the final stationary state.
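Both choices of Ĥ_0 are easy to explore numerically. The sketch below builds the two-spin Hamiltonian from Pauli matrices and constructs the eigenoperators Ô_ω of Eq. (11) by grouping matrix elements by energy gap. All numerical values are placeholders of our own choosing (loosely inspired by the 3.3 T setup of Table II), not the paper's parameters, and we use the pseudo-secular hyperfine form B Ŝ_z Î_x introduced in the next subsection (ℏ = 1):

```python
import numpy as np
from collections import defaultdict

# Spin-1/2 operators (hbar = 1), electron (x) nucleus tensor-product ordering
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
id2 = np.eye(2, dtype=complex)
Sz, Iz = np.kron(sz, id2), np.kron(id2, sz)
Sx, Ix = np.kron(sx, id2), np.kron(id2, sx)

# Placeholder frequencies in rad/s (roughly 3.3 T, a 13C-like nucleus)
w_e = 2 * np.pi * 92.4e9      # electron Larmor frequency
w_n = 2 * np.pi * 35.3e6      # nuclear Larmor frequency
B   = 2 * np.pi * 40e3        # pseudo-secular hyperfine strength

H_Z = w_e * Sz - w_n * Iz     # Zeeman-based choice: H0 = H_Z
H_S = H_Z + B * Sz @ Ix       # eigenstate-based choice: H0 = H_S

def jump_operators(H0, O, decimals=-3):
    """Eigenoperators O_w of H0, following Eq. (11): group the matrix
    elements of O by the energy gap e_m - e_n they connect (gaps are
    rounded so numerically equal gaps coalesce into one operator)."""
    evals, V = np.linalg.eigh(H0)
    O_eig = V.conj().T @ O @ V
    ops = defaultdict(lambda: np.zeros_like(O_eig))
    for n in range(len(evals)):
        for m in range(len(evals)):
            w = np.round(evals[m] - evals[n], decimals)
            ops[w][n, m] = O_eig[n, m]
    return {w: V @ A @ V.conj().T for w, A in ops.items()}

# e.g., the electron spin-flip channels entering the Lindbladian (9):
channels = jump_operators(H_S, Sx)
```

The dictionary keys are the Bohr frequencies ω entering Eq. (9); passing H_Z instead of H_S yields the Zeeman-based jump operators, which is the entire difference between the two approaches at this level.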
If the system is irradiated by the microwaves, which are continuously injecting energy, no relaxation to thermal equilibrium is expected. In the rest of the section we detail how to treat a time-dependent Hamiltonian such as the one in Eq. (4), in order to obtain quantum jumps analogous to the ones previously discussed.

The rotating-wave approximation (RWA)

In order to deal with an effective time-independent Hamiltonian instead of the original one in Eq. (4), we perform the so-called rotating wave approximation (see appendix VII and reference [26]). In practice, we work in a frame that is rotating at the same frequency as the microwave field, ω_MW, and define the density matrix in the rotating frame:

ρ^(r)(t) = Û(t) ρ(t) Û†(t) ,    (14)

with Û(t) = e^{iŜ_z ω_MW t/ℏ}. Once this transformation is applied to Ĥ_S, it removes the time dependence in the microwave field, but generates rapidly oscillating terms with frequencies 2ω_MW. Since ω_MW ≈ ω_e is much larger than the other energy scales in Ĥ_S, we perform the rotating-wave approximation, where all such high-frequency terms are neglected. After this approximation, the Hamiltonian of the spin system Ĥ_S in the rotating frame has become time-independent and commutes with Ŝ_z but not with Î_z.

In particular, for the hyperfine Hamiltonian in Eq. (3), we obtain the simplified pseudo-secular form that reads

Ĥ_hf = B Ŝ_z Î_x ,    (15)

where B is the hyperfine strength, which depends on the distance between the two spins and in this paper will take values in the range of tens and hundreds of 2π kHz. For the sake of simplicity, in Eq. (15) we have neglected the secular term ∝ Ŝ_z Î_z, as it only induces a small shift in the effective Zeeman gaps but does not imply any nuclear spin flip, thus being inessential for the solid-effect transitions considered here.

Thanks to the RWA, Equation (8), once rewritten in the rotating frame, assumes the form:

dρ^(r)/dt = −(i/ℏ) [Ĥ^(r), ρ^(r)] + L[ρ^(r)] ,    (16)

where:

Ĥ^(r) = ℏ(ω_e − ω_MW) Ŝ_z − ℏω_n Î_z + B Ŝ_z Î_x + ℏω_1 Ŝ_x .    (17)

Note that the Zeeman Hamiltonian in the rotating frame implies a shift of the electron Larmor frequency ω_e → ω_e − ω_MW. In deriving (16), we used that, consistently with the RWA, [Ŝ_z, Ĥ_0] = 0 both for the Zeeman and the eigenstate-based approaches, and therefore the Lindblad super-operator retains the same form in the rotating frame:

Û(t) L[ρ] Û†(t) = L[ Û(t) ρ Û†(t) ] .    (18)

This is coherent with the fact that the lattice brings the system to thermal equilibrium, which is unchanged in the rotating frame, as Û(t) ρ_Gibbs Û†(t) = ρ_Gibbs. While the RWA is accurate in our DNP context, it does not allow one to systematically go beyond the approximation in (18). More accurate treatments can be obtained considering the average Hamiltonian theory (AHT) [32] or the Floquet theory [33]. The former consists of time-averaging the original Hamiltonian by discretizing the time intervals, and the latter, which is analogous to the Bloch theorem but for temporal periodicity, translates to an expansion in higher harmonics, multiples of ω_MW. As explained above, the RWA is equivalent to truncating at the lowest harmonic in the Floquet theory, and for simplicity here we restrict ourselves to doing so. Interestingly, if one considers the second harmonics and applies AHT, a Bloch-Siegert [34] type of shift (an extra renormalization of the electron Larmor frequency) would be recovered.

At this point, one can exactly compute the time evolution in Eq. (16).
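Continuing the numerical sketch above, the rotating-frame generator of Eqs. (16)-(17) involves only modest frequencies and is therefore numerically benign; the microwave amplitude ω_1 below is a placeholder of ours, consistent with the notation of Eq. (4):

```python
# Rotating-frame Hamiltonian near the zero-quantum condition w_MW ~ w_e + w_n.
w_mw = w_e + w_n               # drive frequency at the zero-quantum resonance
w_1  = 2 * np.pi * 25e3        # placeholder microwave (Rabi) amplitude

H_rot = (w_e - w_mw) * Sz - w_n * Iz + B * Sz @ Ix + w_1 * Sx
# All terms are now MHz-scale or smaller, unlike the ~100 GHz lab frame,
# so the problem can be diagonalized without numerical stiffness.
evals_r, states_r = np.linalg.eigh(H_rot)
```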
More directly, we can obtain the stationary state ρ_stat by setting dρ^(r)/dt = 0 and solving the resulting linear system. This approach has nevertheless an important drawback: the complexity of the problem grows extremely fast, as the number of components of the density matrix ρ^(r) equals 2^{2(N_e+N_n)}. This can be done straightforwardly for the 2-spin example (N_e = N_n = 1), and the numerical results are shown in Sec. IV. However, as soon as one wants to consider larger systems, this fast exponential growth makes any numerical treatment prohibitive. For this reason, one considers the Hilbert approximation that we detail in the following section.

III. HILBERT APPROACH: TOWARDS A SEMI-CLASSICAL MASTER EQUATION

The Hilbert approximation consists of projecting the dynamics of the density matrix ρ onto its diagonal components ρ_nn ≡ π_n, in the basis which diagonalizes Ĥ_0. Indeed, the commutator in (16) induces oscillations of the off-diagonal terms, which are then exponentially suppressed by the Lindbladian term on time scales T_2e and T_2n (respectively, if |n⟩ and |m⟩ differ by an electronic or a nuclear transition). In this limit, the off-diagonal elements are always small and the state of the system can be described by the occupation probability π_n of each eigenstate |n⟩. Lattice, microwaves and perturbative terms of the Hamiltonian (if any) induce slow transitions between pairs of those eigenstates. In practice, one wants to obtain a master equation for the occupation probabilities π_n and determine the transition rates W_{|n⟩→|m⟩} between eigenstates. Once those rates are determined, one can define the transition matrix w:

w_{mn} = W_{|n⟩→|m⟩} − δ_{mn} Σ_k W_{|n⟩→|k⟩} .    (21)

The stationary state for the occupation probabilities π_stat is the eigenvector of the matrix w with eigenvalue 0, i.e.

w π_stat = 0 .    (22)

The transition rates for the Zeeman and eigenstate-based approaches are in general different. Here we compute them for the two-spin system. As a starting point, the eigenstates and energy levels are given in Table I. Note that the Zeeman eigenstates are completely polarized along the z-direction, while in the exact eigenstate-based approach there is, for the nuclear spin, some mixing (|α̃_n⟩ or |β̃_n⟩) induced by the hyperfine interactions:

|α̃_n⟩ = cos φ |α_n⟩ + sin φ |β_n⟩ ,   |β̃_n⟩ = −sin φ |α_n⟩ + cos φ |β_n⟩ ,    (23)

where tan φ = (B/2)/(ω_n + Ω_n). Here, we introduced Ω_n as a hyperfine-induced shift of the nuclear Larmor frequency (Eq. (24)). In Fig. 1 we show the possible transitions when the microwaves irradiate at the frequency of the double-quantum transition (i.e., |↑_e, ↑_n⟩ ↔ |↓_e, ↓_n⟩).

To derive the expression of the transition rates it is useful to write the Hamiltonian in terms of the non-perturbative term Ĥ_0 and the perturbation V = Ĥ_tot − Ĥ_0. The lattice correlation functions J_Ô(ω) are evaluated at thermal equilibrium at the temperature β⁻¹ (see Eq. (12)); in order to specify their structure, we distinguish fast dephasing processes at ω ≃ 0, on the scale of T_2e and T_2n, from relaxation (decaying) processes at ω ≠ 0, on the scale of T_1e, T_1n. In particular, we set J_Ô(ω) ∝ h(ω)/T_1 for Ô_e = Ŝ_{x,y} (Ô_n = Î_{x,y}), with h(ω) = 1/(e^{−βω} + 1), and an analogous dephasing contribution on the scale of 1/T_2 for n = m (the explicit rates are collected in appendix IX).
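Extracting π_stat as the null vector of w (Eq. (22)) is a small linear-algebra exercise; a minimal sketch with an arbitrary illustrative rate matrix:

```python
import numpy as np

def stationary_distribution(W):
    """Stationary occupations from a matrix of rates W[n, m] = W(n -> m).
    Builds the generator w of Eq. (21), whose columns sum to zero, then
    extracts the eigenvector with eigenvalue 0 and normalizes it."""
    w = W.T.copy()                            # w[m, n]: inflow from n to m
    w[np.diag_indices_from(w)] -= W.sum(axis=1)   # diagonal: total outflow
    vals, vecs = np.linalg.eig(w)
    pi = np.real(vecs[:, np.argmin(np.abs(vals))])
    return pi / pi.sum()

# Toy 4-level example (rates are illustrative only)
W = np.array([[0.0, 1.0, 0.2, 0.0],
              [0.8, 0.0, 0.0, 0.3],
              [0.1, 0.0, 0.0, 0.9],
              [0.0, 0.2, 1.1, 0.0]])
print(stationary_distribution(W))
```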
The full dynamics in (16) can now be decomposed into fast contributions, with rates of the order T_2e⁻¹, T_2n⁻¹, and slow ones, with rates of the order T_1e⁻¹, T_1n⁻¹. Explicitly, the decomposition involves the gaps ω_kn = ε_k − ε_n; note that in ω_kn the energy difference is taken in the lab frame, while the term ε_m is in the rotating frame. More compactly, we can write

dρ/dt = (L_0 + L_1)[ρ] ,

where L_0 accounts for the fast dynamics (T_2e, T_2n) while L_1 accounts for the slow dynamics (T_1e, T_1n and the perturbative terms in V). Note that L_0 is a super-operator which preserves the diagonal part of ρ, i.e., L_0[|n⟩⟨n|] = 0; it has therefore a degenerate subspace corresponding to the projectors on the diagonal entries of the density matrix, ρ_nn = π_n, with eigenvalue 0.

We are interested in an effective dynamics restricted to the diagonal entries of ρ. As these transitions are induced by the small perturbation L_1, we treat the problem perturbatively. We have two contributions:

i) A dissipative part that acts only on the subspace of the diagonal elements of the density matrix. These lattice-induced transitions thus come naturally at the first order and read

W^latt_{|n⟩→|m⟩} = Σ_{Ô∈O} J_Ô(ω_mn) |⟨m|Ô|n⟩|² .    (30)

ii) A part containing the perturbative terms of the Hamiltonian, V. These terms are responsible for the transition rates which connect different diagonal elements of the density matrix and can be used to build the transition matrix in Eq. (21). To do so, we have implemented a perturbation theory based on the Schrieffer-Wolf transformation [21] (see appendix VIII). In this procedure, we keep only the lowest order term that gives a non-zero contribution between two given eigenstates.

At the second order in L_1, one obtains the following transition:

W^(2)_{|n⟩→|m⟩} = 2 |⟨n|V|m⟩|² (1/T_2) / ((1/T_2)² + ω_nm²) ,    (31)

where T_2 = T_2^e if the states |n⟩ and |m⟩ differ by an electron spin flip, and T_2 = T_2^n otherwise. The term in Eq. (31) is responsible for the microwave-induced transitions, which read:

W^MW_{|n⟩→|m⟩} = 2 |⟨n|Ĥ_MW|m⟩|² (1/T_2) / ((1/T_2)² + ω_nm²) .    (32)

In the Zeeman approach, only the single-quantum transitions are driven directly by the microwaves at this order; the zero-quantum and double-quantum transitions are, at second order, only allowed in the eigenstate-based approach. Indeed, there the numerator has a non-vanishing contribution thanks to the mixing of nuclear states in Eq. (23). As a result, in the eigenstate-based approach we can restrict ourselves to the second order in the perturbation theory without the need to seek higher-order transitions.

Contrarily, in the Zeeman approach the numerator of Eq. (32) vanishes for the zero-quantum and double-quantum transitions. Obtaining the solid-effect transition rates in this approach is laborious, and one must go up to the fourth order in the perturbation theory. The details are given in appendix VIII. The final result for this transition, Eq. (33), requires the joint action of microwave irradiation and hyperfine interactions.

An additional consequence of this Zeeman approach is that the hyperfine interactions also induce a transition between different nuclear states. This comes straightforwardly from the second-order formula in Eq. (31), applied with V given by the hyperfine coupling. The transition is dubbed leakage, and its rate is given in Eq. (34). To sum up, all transition rates between pairs of eigenstates are given in appendix IX.

TABLE II. Microscopic parameters modeling the 1-electron and 1-nuclear spin system in a magnetic field of H = 3.3 T and in contact with a lattice at a temperature β⁻¹ = 12 K. We have chosen different values for the hyperfine interactions in order to check their effect on the relaxation basis.
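Continuing the same numerical setup, the Lorentzian-broadened second-order rate of Eq. (31) can be sketched as follows; the overall prefactor follows our reconstruction above and should be taken as schematic, and the level indices and T_2 value are placeholders:

```python
def second_order_rate(H0, V, n, m, T2):
    """Rate between eigenstates n and m of H0 induced by a weak perturbation V,
    broadened into a Lorentzian of half-width 1/T2 (cf. Eq. (31))."""
    evals, U = np.linalg.eigh(H0)
    V_eig = U.conj().T @ V @ U
    gamma = 1.0 / T2
    w_nm = evals[n] - evals[m]
    return 2.0 * abs(V_eig[n, m]) ** 2 * gamma / (gamma ** 2 + w_nm ** 2)

# e.g., a microwave-induced transition in the rotating frame (Eq. (32)):
rate = second_order_rate(H_rot, w_1 * Sx, 0, 3, T2=1e-6)
```

Evaluated in the eigenbasis of the full rotating-frame Hamiltonian, this single formula already produces non-zero zero- and double-quantum rates through the hyperfine mixing, which is the eigenstate-based shortcut described in the text.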
IV. RESULTS AND DISCUSSION

Following the formalisms introduced in sections II and III, we have computed (i) the steady-state density matrix ρ_stat (Liouville scheme) and (ii) the occupation probabilities, π^stat_n, of the eigenstates of Ĥ_0 (Hilbert approximation). The nuclear polarization along the z axis reads:

P_n = 2 Tr[ρ_stat Î_z] ,   P_n = 2 Σ_n π^stat_n ⟨n|Î_z|n⟩ .

The first definition applies for the Liouville scheme, while the second one is used within the Hilbert approximation. To compute ρ_stat we have used Eq. (16), and to compute π^stat_n, Eq. (22) has been employed. The numerical parameters are given in Table II and are chosen to be realistic for an experiment involving 13C, though we vary some physical parameters in order to obtain richer physics.

We now present the study of the DNP profile (i.e., the nuclear polarization as a function of the frequency of irradiation of the microwaves). In particular, we will focus on the zero-quantum transition to illustrate our results, but they are applicable all over the spectrum.

A. An exact treatment: Zeeman vs. eigenstate-based approaches within the Liouville formalism

The full DNP profile obtained from the exact Liouville treatment is shown in Fig. 2 (left). The resonances corresponding to the zero-quantum and double-quantum transitions at microwave frequencies of ω_MW ≈ ω_e ± ω_n are clearly recognizable. Our interest lies in how the strength of the hyperfine interactions affects the results, and which basis is the most accurate choice given the scenario. It is natural to assume that the Zeeman approach requires the hyperfine strength B to be weak, as it treats such an interaction as a small perturbation.

In Fig. 2 (right), we show a zoom of the DNP profile around the zero-quantum transition: red and blue lines correspond respectively to the Zeeman and eigenstate-based approaches. Two values of the hyperfine interactions are considered: the solid line shows a modest hyperfine strength, B = 40 × 2π kHz, while the dashed lines show the case of B = 160 × 2π kHz. The width of the peak is proportional to B, and we also note that on increasing the interaction strength the two approaches become significantly more different.

B. Zeeman vs. eigenstate-based approaches in the Hilbert formalism

We now explore how the hyperfine interactions affect the performance of the Hilbert approach with respect to the exact Liouville treatment. In Fig. 3 (left and right) we compare the exact Liouville treatment (lines) with the Hilbert approach (symbols), for the Zeeman and the eigenstate-based approaches, respectively. Again, the values B = 40 × 2π kHz and B = 160 × 2π kHz are considered. We observe that:

- Within the Zeeman-based approach (left), the two treatments slightly differ for the weak hyperfines.
- For the eigenstate-based approach (right), the Liouville and Hilbert treatments give perfectly matching results.

In both cases, as one increases the hyperfine interaction strength the two treatments give more and more different results. Note that the loss of resolution in the resonance peak as we increase the hyperfine strength is more remarkable in the Hilbert scheme than in the Liouville one.

Summarizing, we observe that when the hyperfine interactions are weak, the Zeeman and the eigenstate-based approaches give similar results. However, in general the Zeeman-based approach is only accurate for small values of B: it underestimates the nuclear polarization, and the effect becomes more and more evident on increasing the value of B. In the next section, we discuss further the origin of this discrepancy.
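Both polarization definitions above are one-liners given the stationary states (which would come from solving Eq. (16) or Eq. (22)); a minimal sketch continuing the NumPy setup, assuming the conventional spin-1/2 normalization P = 2⟨Î_z⟩:

```python
def polarization_liouville(rho_stat, Iz):
    """Nuclear polarization from the full stationary density matrix."""
    return 2.0 * np.real(np.trace(rho_stat @ Iz))

def polarization_hilbert(pi_stat, H0, Iz):
    """Nuclear polarization from the stationary occupations of H0's eigenstates."""
    _, V = np.linalg.eigh(H0)
    expval = np.real(np.einsum('in,ij,jn->n', V.conj(), Iz, V))  # <n|Iz|n>
    return 2.0 * np.dot(pi_stat, expval)
```

Sweeping ω_MW, rebuilding H_rot at each point, and evaluating either function reproduces the structure of a DNP profile like those in Figs. 2-4.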
C. The role of leakage

To investigate the difference between the Zeeman and the eigenstate-based approaches, we now consider a much larger hyperfine strength, B = 320 × 2π kHz. The results are shown in Fig. 4 and, similarly to what we observed in Fig. 3, the Zeeman approach (blue circles) leads to a smaller value of the nuclear polarization compared with the eigenstate-based approach (yellow dots). However, it is hard to understand why the two differ, as the two polarizations are the result of two separate steps: i) the Liouville description, obtained considering Ĥ_0 = Ĥ_S (eigenstate-based approach) or Ĥ_0 = Ĥ_Z (Zeeman-based approach); ii) the Hilbert scheme, obtained by projecting the density matrix ρ onto its diagonal in the basis of eigenstates of Ĥ_0.

Instead of following ii), it is possible to achieve a quantitative understanding of the difference between the two Liouville formulations if, for both of them, we perform a Hilbert projection onto the same basis, i.e., the eigenstates of Ĥ_S. Of course, for the eigenstate-based approach this projection coincides with the already-considered Hilbert scheme, plotted in yellow in Fig. 4. However, for the Zeeman-based approach we obtain a new transition matrix, still in the basis of eigenstates of Ĥ_S (red squares in Fig. 4). Now the two transition matrices can be compared and, remarkably, the difference between the two is pinned down as an extra transition term: it is clear that this additional transition suppresses the nuclear polarization, and in the limit of small B it coincides with the leakage rate introduced in Eq. (34). So, in practice, the choice of the Zeeman basis in deriving the Liouville formulation already induces a fictitious leakage term, which suppresses the polarization when compared with the eigenstate-based approach.

V. CONCLUSIONS

In this paper, we have studied one of the simplest DNP mechanisms, the solid effect in a two-spin system, within two different approaches. On the one hand, the Zeeman-based approach treats the hyperfine interaction perturbatively and considers jumps between completely polarized eigenstates. On the other hand, the eigenstate-based approach treats the hyperfine interactions exactly, and jumps occur between eigenstates which present mixing for the nuclear spin. The two schemes give very similar results when the hyperfine interactions are weak. The Zeeman approach could be more convenient for many-body systems, as it does not require the exact diagonalization of the Hamiltonian of size 2^{N_e+N_n}. However, the validity of this method requires weak interactions between spins. Indeed, our results show that in the presence of strong hyperfine interactions the Zeeman and eigenstate-based schemes disagree, and the hyperfine interactions must then be treated exactly. As discussed in Sec. IV C, the origin of the two different behaviors is not in the precise form of the eigenstates, but rather in the presence of parasite leakage transitions, induced by the perturbative treatment of the hyperfine interactions in the Zeeman approach.
Supporting information

Eigenstate versus Zeeman-based approaches to the solid-effect

In the supporting information we provide the technical details of our computations. In App. VI we develop the formalism to treat a system weakly coupled to a lattice at a given temperature. In App. VII we provide the details on the rotating wave approximation used to treat time-dependent periodic Hamiltonians. In App. VIII we develop the perturbation theory of Schrieffer-Wolf type that allows us to systematically compute the transition rates between eigenstates of the Hamiltonian. Finally, in App. IX we summarize all the transition rates between those eigenstates in the different scenarios introduced in the main text.

VI. FORMAL TREATMENT OF THE COUPLING BETWEEN SYSTEM AND LATTICE

Consider a system described by the Hamiltonian Ĥ_S. The system is not isolated but in contact with a larger system described by Ĥ_L, which we will call the lattice. The latter is considered to be a very large reservoir at equilibrium at the external temperature β⁻¹. The coupling between system and lattice is considered to be weak, so that the state of the system will not affect that of the lattice. We first deal with the eigenstate-based approach, for which Ĥ_0 = Ĥ_S and the total Hamiltonian reads Ĥ_tot = Ĥ_S + Ĥ_L + λĤ_S-L, with

λĤ_S-L = Σ_{α=x,y,z} Σ_j λ_j Ô^j_α φ̂^j_α ,    (37)

where j labels whether we are referring to the electron or the nuclear spin, Ô^j_α is the respective spin operator in the direction α, and φ̂^j_α represents the lattice modes that linearly couple (with coupling constant λ_j) to the spin operator Ô^j_α. Within the assumptions of the Born-Markov approximation detailed in the main text, one can perform perturbation theory in the coupling λ and find [13] an integro-differential equation, Eq. (38), for the reduced density matrix ρ of the spin system, provided that ρ_tot = ρ ⊗ ρ_latt.

In order to solve (38), one would like to decompose the interaction Hamiltonian into eigenoperators of the Hamiltonian of the spin system. If we denote the eigenstates of the unperturbed Hamiltonian as |n⟩, |m⟩, with respective energies ε_n, ε_m, we can define the operators

Ô_ω = Σ_{n,m : ε_m − ε_n = ℏω} |n⟩⟨n| Ô |m⟩⟨m| ,    (39)

where the sum runs over all pairs of eigenstates provided that they differ by an energy ℏω. From the definition of these projected operators, we get that

[Ĥ_0, Ô_ω] = −ℏω Ô_ω .    (40)

Additionally, one can verify that their time evolution reads:

e^{iĤ_0 t/ℏ} Ô_ω e^{−iĤ_0 t/ℏ} = e^{−iωt} Ô_ω .    (41)

If we now sum over all the energy gaps, we obtain the original operator:

Σ_ω Ô_ω = Ô .    (42)

As a result, we get that the Hamiltonian of the coupling between lattice and system reads:

λĤ_S-L = Σ_{α,j} Σ_ω λ_j Ô^j_{α,ω} φ̂^j_α .    (43)

Finally, we can substitute these definitions in Eq. (38) and obtain, after some algebra, the pre-secular master equation (44) for dρ(t)/dt, a sum over α, β = x, y, z, over j, and over pairs of gaps ω, ω′, in which each term carries a phase factor oscillating at the frequency difference ω − ω′. Here, h.c. stands for Hermitian conjugate, and, if we assume spatial and time homogeneity, the correlation functions of the lattice depend only on time differences (Eq. (45)). Finally, one can perform the so-called secular approximation, which is analogous to the rotating wave approximation, in order to remove the time dependence in Eq.
(44). Retaining only the terms with ω = ω′ is justified if the correlation times of the spin system are much larger than the characteristic times of the correlation functions of the lattice. In that case, over the time variations for which we appreciate a change of ρ, the exponentials in (44) oscillate very rapidly, so they are averaged out. In the end, we obtain that the full time evolution of the reduced density matrix of the spin system (without taking into account the microwaves yet) reads

dρ/dt = −(i/ℏ) [Ĥ_S, ρ] + L[ρ] ,    (46)

with L the Lindblad super-operator that acts on the density matrix as follows:

L[ρ] = Σ_{α,j} Σ_ω J_{Ô^j_α}(ω) [ Ô^j_{α,ω} ρ Ô^{j†}_{α,ω} − (1/2){ Ô^{j†}_{α,ω} Ô^j_{α,ω}, ρ } ] ,    (47)

and all the role of the lattice is encoded in the spectral function (48). In the main text, we have substituted the sum over α and j in Eq. (47) by the sum over Ô ∈ O = { Ŝ_x, Ŝ_y, Ŝ_z, Î_x, Î_y, Î_z }. Note that the definition of the Lindblad super-operator strongly depends on the approach that we follow, as the operators Ô_{α,ω} that enter in equation (46) are projected precisely onto the eigenstates of the unperturbed Hamiltonian Ĥ_0. In particular, when one implements the Zeeman approach, the evolution equation is still given by (46), but the operators Ô_{α,ω} are projected onto the Zeeman basis.

VII. THE ROTATING FRAME AND THE ROTATING WAVE APPROXIMATION

The rotating wave approximation is one of the options to which we can turn in order to treat time-periodic Hamiltonians like the microwave drive in (4). To do so, we consider a new frame of reference that is rotating around the z-axis at the same frequency as the periodic Hamiltonian, ω_MW. This can be expressed by writing a new state, which in the density-matrix language reads:

ρ^(r)(t) = Û(t) ρ(t) Û†(t) ,    (49)

introducing the operator Û(t) = e^{iŜ_z ω_MW t/ℏ}. If we want to study the time evolution of this recently introduced density matrix, we simply apply the chain rule (Eq. (50)). To compute this, we introduce the time evolution of the density matrix from the main text, Eq. (8). We thus obtain Eq. (51). Here, in the first step, we took into account the fact that both Ĥ_Z and Ĥ_hf commute with Û(t). In the second step, we took into account the fact that Û(t)Û†(t) = 1 and redefined the microwave Hamiltonian in the rotating frame (Eq. (52)). Note that we have neglected the terms that oscillate with a frequency 2ω_MW, as they are quickly averaged out. Finally, we want to see the action of the rotating frame on the Lindblad super-operator.

It is clear that, for the subset of super-operators acting on the nuclear spin (i.e., L_{Ô^n_{α,ω}}), the only action of the rotating frame is to project the static density matrix into the rotating frame, as the operators commute (Eq. (53)), where the superscript n stands for nuclear spin. Additionally, we will prove that this is also the case for the super-operators associated with the electron spin, labeled with the superscript e. One can see that the rotating frame only multiplies each electron eigenoperator by a phase (Eq. (54)). This translates to the fact that for any operator Ô^j_{α,ω} the Lindblad contribution is unchanged (Eq. (55)), as in every term of the definition of the Lindbladian (9) we find the product Ô^j_{α,ω} Ô^{j†}_{α,ω}.
VIII. SCHRIEFFER-WOLF PERTURBATION THEORY FOR NON-HERMITIAN OPERATORS

In this appendix we develop the perturbation theory based on the Schrieffer-Wolf transformation. To be general, we assume a spin system composed of N_n nuclear spins and N_e electron spins. Consider the Liouville equation for the density matrix, which takes the form

dρ/dt = (L_0 + εV) ρ .    (58)

From the mathematical point of view, this is a linear differential equation, and ρ is a vector with N = 2^{2(N_e+N_n)} components. As discussed in the text, L_0 preserves the diagonal part of the density matrix ρ, which translates into

L_0[|n⟩⟨n|] = 0 ,    (59)

where |n⟩ is an eigenstate of Ĥ_0, which, as discussed in the text, can be Ĥ_0 = Ĥ_S (eigenstate-based approach) or Ĥ_0 = Ĥ_Z (Zeeman-based approach). It is clear that for ε = 0 each projector |n⟩⟨n| would be a stable stationary state. For ε ≠ 0 but small, we can derive an effective dynamics restricted to the eigenspace G_0 of L_0 with eigenvalue λ_0, where we introduced λ_0 = 0 to keep the treatment general to any eigenspace of L_0. Note that L_0 has a large degeneracy, as dim G_0 = 2^{N_e+N_n}. Moreover, the validity of the expansion is controlled by the ratio of ε‖V‖ to the smallest non-zero eigenvalue of L_0, where σ(L_0) indicates the spectrum of L_0: L_0 ρ_λ = λρ_λ. In other words, the perturbation εV is assumed to be too small to generate transitions outside of the subspace G_0; nevertheless, it can induce virtual transitions which, by going outside of and back inside G_0, can generate an effective dynamics within the subspace.

To quantitatively compute the dynamics within G_0, we use an analogue of the Schrieffer-Wolf transformation. The idea behind this method is to consider a linear transformation ρ → Uρ. Under this transformation, the matrix L in Eq. (58) is transformed as ULU⁻¹. We look for a transformation U such that the subspace G_0 remains decoupled from the rest. Of course, if we were able to find the transformation U which completely diagonalizes L, we would have decoupled G_0 from all the other subspaces. But the diagonalization of L is the hard problem that we want to avoid, so we settle for the simpler requirement of decoupling G_0 from the other subspaces, order by order in ε.

Let us indicate with P the projector on the subspace G_0 and Q = 1 − P. We also introduce the projectors P_λ onto all the other eigenspaces G_λ of L_0, such that P_λ G_λ′ = δ_{λ,λ′} G_λ′ and

Q = Σ′_{λ∈σ(L_0)} P_λ ,    (62)

where we use the primed notation to indicate the sum over all the eigenvalues but λ = λ_0 = 0. It is useful to set U = e^{iS}, and we then demand that

Q e^{iS} (L_0 + εV) e^{−iS} P = 0 ,   P e^{iS} (L_0 + εV) e^{−iS} Q = 0 .    (63)

The effective operator L_eff in the subspace G_0 can then be written as

L_eff = P e^{iS} L e^{−iS} P .    (64)

This equation can be solved perturbatively in ε, by writing S = S^(0) + S^(1) + ... At order zero, we trivially obtain S^(0) = 0, as QL_0P = 0. In the following we derive the subsequent orders.

One can introduce an Ansatz for S^(1) built from the possible combinations of P and P_λ connected once by V, with coefficients a_λ and b_λ (Eq. (67)); substituting it in Eq. (65) and using that PQ = PP_λ = 0 for λ ≠ λ_0, it is easy to get the values of a_λ and b_λ, and we finally obtain S^(1) (Eq. (68)). In general, the matrices S^(j) are "off-diagonal", in the sense that they connect the space G_0 with the rest (and vice versa). This means that the matrix S^(1) is already enough to obtain the effective operator at second order in ε.
Expanding the solution in Eq. (64), and after some algebra, we obtain the first-order contribution to the effective operator. The final effective solution for the operator L, up to fourth order, is given in Eq. (74). As is clear, at the fourth order one obtains a large number of terms. For the sake of simplicity, we have kept, for each transition between a pair of eigenstates |n⟩ and |m⟩, the lowest order at which it does not vanish. For example, the lattice-induced transitions can be simply obtained with the first-order contribution P V P. The solid-effect transitions, on the contrary, must be obtained using Eq. (74).

FIG. 1. (color online) Sketch of the possible transitions between eigenstates for the Zeeman-based approach (left) and for the eigenstate-based approach (right). Note that in the Zeeman-based approach, hyperfine interactions (B) induce nuclear spin-flip transitions that we dub leakage and that are absent in the eigenstate-based approach. The scheme corresponds to microwave irradiation of the double-quantum transition (i.e., |↑e, ↑n⟩ ↔ |↓e, ↓n⟩), namely ω_MW ≈ ω_e − ω_n (or ω_e − Ω_n). In this limit, we can neglect other microwave-induced transitions (zero-quantum, single-quantum, ...).

FIG. 2. (color online) Steady-state DNP profile within the Liouville formalism in the Zeeman (blue) and eigenstate-based (red) approaches. (left) Full DNP profile for a hyperfine strength of B = 40 × 2π kHz. We observe the two solid-effect resonances at ω_MW ≈ ω_e ± ω_n corresponding to the double-quantum and zero-quantum transitions. (right) Zoom of the DNP profile around the zero-quantum transition frequency ω_MW ≈ ω_e + ω_n. The solid line corresponds to a hyperfine strength of B = 40 × 2π kHz, while the dashed line shows B = 160 × 2π kHz.

FIG. 3. (color online) DNP profile around the zero-quantum transition. (left) An exact Liouville treatment in the Zeeman approach is plotted in navy. Different values of the hyperfine interaction are considered: B = 40 × 2π kHz (solid lines) and B = 160 × 2π kHz (dashed lines). In light blue we show the Hilbert approach in stars/circles for the same two values of the hyperfine interaction. (right) An analogous plot to the left one in the eigenstate-based approach. The red lines show the exact Liouville formalism and the yellow ones show the Hilbert one.

FIG. 4. (color online) DNP profile around the zero-quantum transition within the Hilbert formalism for a hyperfine interaction strength of B = 320 × 2π kHz. We compare the Zeeman (light blue) and eigenstate-based (yellow) approaches. In red squares, we show the results for the Zeeman-based approach projected on the exact eigenstates, which almost overlap with the pure Zeeman approach.

TABLE I. Spectrum of the system in the Zeeman and eigenstate-based approaches.
Potential impacts of general practitioners working in or alongside emergency departments in England: initial qualitative findings from a national mixed-methods evaluation

Objectives To explore the potential impacts of introducing General Practitioners into Emergency Departments (GPED) from the perspectives of service leaders, health professionals and patients. These 'expectations of impact' can be used to generate hypotheses that will inform future implementations and evaluations of GPED.
Design Qualitative study consisting of 228 semistructured interviews.
Setting 10 acute National Health Service (NHS) hospitals and the wider healthcare system in England. Interviews were undertaken face to face or via telephone. Data were analysed thematically.
Participants 124 health professionals and 94 patients and carers. 10 service leaders representing a range of national organisations and government departments across England (eg, NHS England and Department of Health) were also interviewed.
Results A range of GPED models are being implemented across the NHS due to different interpretations of national policy and variation in local context. This has resulted in stakeholders and organisations interpreting the aims of GPED differently and anticipating a range of potential impacts. Participants expected GPED to affect the following areas: ED performance indicators; patient outcome and experience; service access; staffing and workforce experience; and resources. Across these 'domains of influence', arguments for positive, negative and no effect of GPED were proposed.
Conclusions Evaluating whether GPED has been successful will be challenging. However, despite uncertainty surrounding the direction of effect, there was agreement across all stakeholder groups on the areas that GPED would influence. As a result, we propose eight domains of influence that will inform our subsequent mixed-methods evaluation of GPED.
Trial registration number ISRCTN51780222.

BACKGROUND

Urgent and emergency care is experiencing increasing demand globally. 1 In 2019, attendances at emergency departments (EDs) in England stood at record levels. The year 2018-2019 saw an increase of 4.4% compared with 2017-2018 and 21% since 2009-2010. 2 High levels of ED occupancy lead to crowding, 3 and this can undermine patient safety, clinical outcomes and quality of care, 3-5 delay service delivery, 6 increase associated mortality and reduce patient and clinician satisfaction. 7

Numerous initiatives have been introduced to address the challenge of rising demand in ED attendance globally. 8-12 Examples of UK initiatives include the introduction of telephone advice and guidance (National Health Service (NHS) 111/NHS Direct) and the provision of alternative facilities (eg, walk-in centres and urgent treatment centres) for patients to access primary care for non-urgent conditions. 1 13 It is estimated that between 15% and 40% of patients attending the ED could be treated in general practice. 14-16 Over the past decade, EDs across the UK and Europe have started to introduce general practice (GP) services in or alongside EDs. 17

Strengths and limitations of this study
► A unique primary study of 10 National Health Service case sites explores the anticipated effects of introducing General Practitioners in Emergency Departments (GPED).
► Our analysis uses a large qualitative data set and incorporates the views of multiple stakeholders.
► Data are from England only and so may not be generalisable to other healthcare settings.
► Data represent the views of those individuals who agreed to take part and so may not be exhaustive.

In addition to being introduced to try and tackle a rise in demand from perceived general practice patients, it was anticipated that introducing GPs in or alongside EDs would, by providing specific general practice skills and expertise, lead to improvements in patient care and control costs by reducing admission and investigation rates. 18

In 2015, a review of NHS Urgent and Emergency Care in England proposed that selected patients should be directed to an alternative healthcare provider who could better meet their needs, thereby reducing ED attendances. 19 In 2017, this recommendation was translated into policy in the 'Next Steps on the NHS Five Year Forward View', stating that 'Every hospital must have comprehensive front door streaming by October 2017' (p. 15). 20 To provide financial support for the introduction of GPs working in or alongside the ED, the UK government also announced a capital fund of £100 million to which hospitals in England could apply. 21-24

Despite the recent political and financial commitment by the UK government to introducing GPs in or alongside EDs, recent guidance from the National Institute for Health and Care Excellence stated that, based on current research, 25-27 there is currently 'insufficient evidence to reach a recommendation on co-located GP units'. 28 It remains uncertain how the implicit hypotheses about the effect of GPs in an ED are articulated and understood by policymakers, service leaders, health professionals and patients. These initiatives have not been subject to rigorous, independent evaluation, and there is a lack of clarity regarding the assumptions and mechanism(s) through which the predicted performance benefits for these initiatives might be achieved. 29

In this paper, we report findings from qualitative data, which were collected as part of a wider mixed methods study evaluating the impact of GPs working in or alongside the ED (GPED). Further details of the GPED study are outlined in box 1 and in the study protocol. 29 This paper uses qualitative data from service leaders, health professionals and patients to explore the expected impact of introducing GPs into the ED to generate hypotheses that inform how GPED will be evaluated in subsequent research and implemented into practice.

METHODS

Design

We completed a qualitative study consisting of interviews with service leaders, health professionals and patients from 10 case study sites.

Sampling and recruitment

Data were collected from 10 case study sites. Sites were selected purposively to ensure maximum variation according to: GPED model; GPED duration; geographical location; and deprivation index and ED volume (ED attendances). 30 Participants were sampled opportunistically by the research team while undertaking on-site data collection. Service leaders were contacted directly via email.

Data collection

Telephone interviews with service leaders were conducted between December 2017 and January 2018 following informed verbal consent. During interviews, participants were asked to describe their involvement in GPED and the background to the policy, as well as the expected impact of GPED and any potential unintended consequences (online supplemental material 1).
Case study interviews with patients and health professionals were largely conducted face to face at hospital sites during GPED study data collection. Some interviews were conducted via telephone at the request of the participant. Written informed consent was provided by all participants, and all interviews were audio-recorded. Data collection took place between October 2017 and November 2018 at 10 EDs throughout England. Interviews with health professionals, patients and carers were semistructured and followed a topic guide (online supplemental material 2-7). During interviews, health professionals were asked: their current role in ED; details of their GPED model; and expected impact. Patients and carers were asked to describe why they chose to attend the ED as well as their experiences. Patients were also asked about their views on introducing GPED and its potential impact.
Box 1 The general practitioners working in or alongside the emergency department (GPED) Study
Objectives: to evaluate the impact of GPED on patient care, the primary care and acute hospital team and the wider urgent care system.
Design: a mixed methods study consisting of three work packages.
► Work package A: mapping, description and classification of current models of GPED in all emergency departments (EDs) in England and interviews with key policymakers to examine the hypotheses that underpin GPED.
► Work package B: quantitative analysis of national data to measure the effectiveness, costs and consequences of the GPED models identified in work package A using retrospective analysis of Hospital Episode Statistics.
► Work package C: detailed mixed methods case studies of different GPED models consisting of: non-participant observation of clinical care, semistructured interviews with staff, patients and carers, workforce surveys with ED staff and analysis of locally available routinely collected hospital data.
Patient and Public Involvement (PPI): a study PPI group has contributed to research design and materials and data interpretation and dissemination through a series of face-to-face workshops.
Trial status: in progress (ISRCTN51780222). Funder: National Institute for Health Research Health Services and Delivery Programme.
Analysis AS, HA, HL and members of the wider GPED research team undertook data collection and analysis. HA is a registered nurse with experience of working in primary care. All other members of the research team involved in data collection and analysis are health services researchers. Analysis was facilitated by use of the qualitative data management programme NVivo. After familiarisation, a coding framework was developed through a series of roundtable discussions by the research team and was continually refined and revisited during researcher meetings on an ongoing basis throughout data collection and analysis. This framework was used to produce a series of summaries and pen portraits to describe each case site, 21 which informed a final thematic analysis during which themes were refined further for the purpose of this paper. 22 All participants and case sites were allocated unique personal IDs to protect anonymity and confidentiality. Unless otherwise specified, we use the term staff to collectively refer to GP and ED staff throughout the results section. Patient and public involvement Ten public contributors with experience of using ED services have been directly involved in the design, development and interpretation of the GPED study.
In addition to attending external steering group meetings and supporting the development of our original application for research funding and key study materials (eg, information sheets), our 10 public contributors have participated in regular workshops throughout the GPED study. During these workshops, public contributors were given copies of anonymised interview transcripts along with pen portraits from two of our study sites. Public contributors initially discussed how they interpreted the data, before being asked to consider whether their own interpretations resonated with the research team's framework. Additional workshops are also being held to discuss the wider GPED study's findings where both quantitative and qualitative data will be presented and discussed with the group. RESULTS Service leaders and site staff perceived the national implementation of GPED as a response to increasing pressure on EDs, with a lack of supporting research evidence. Many viewed GPED as a top-down, generalised strategy that had been imposed on them without consideration of local context. Ultimately, variations in local context, ED demand and existing GP services in or alongside the ED meant it was not considered possible to implement the same system everywhere. This resulted in a 'proliferation of different models', which in turn implied that the impact of GPED on ED performance would vary substantially. Our qualitative data highlight the challenges associated with a top-down national policy that is implemented in different ways according to local context. We hope to demonstrate the complexity and uncertainty this brings when trying to predict and then evaluate how the policy may impact patients, EDs and the wider urgent care system. Our results are therefore presented as a series of areas that stakeholders believed would be affected by the introduction of GPED and the direction of the anticipated effect. Performance indicators The premise that ED staff and GPs have inherently different approaches to risk was central to the concept of GPED. GPs were perceived to frame health and illness in a different way to ED staff, with the 'wait and see' culture of primary care leading many to view GPs as more 'risk tolerant' and more appropriately qualified to care for lower acuity patients than their 'risk averse' ED colleagues. This in turn was thought to be beneficial for GPED by making GPs less likely to order unnecessary investigations, or admit or refer lower acuity patients unnecessarily, thereby reducing the time spent in the ED and enhancing patient flow. Despite this general articulation of potential performance benefits, there was significant uncertainty about the impact of GPED within the local systems included in our case studies. One of the main areas of disagreement among site staff and service leaders was whether GPs were more tolerant of risk and if so whether this would have adverse consequences for patient safety. This resulted in variation in GPED models across sites. Individual views largely varied according to the degree of integration and the specific role of GPs within the system, making it difficult to identify generalised predictions relating to the potential impact of GPED. Use of investigations Many participants were accepting of models that asked GPs to work in a hybrid ED-GP role and encouraged GPs to 'go native', becoming highly integrated within ED teams.
Some models were based on the premise that GP access to investigations was crucial to GPED effectiveness, with concerns that the potential scope of GPED would be limited by GPs not being able to undertake investigations and refer to specialties. In contrast, other GPED models limited GPs to working as they would in the community, and service leaders felt strongly that for the model to run effectively, GPs and the ED should work separately. There was an idea that GPs 'going native' would encourage them to behave in a similar way to ED doctors, thereby negating any assumed benefits from GPs' different attitudes to risk, investigation and referral. Therefore, prior expectations relating to unnecessary testing were mostly factored into the GPED model at the outset. Hospital admissions and the 4-hour target Reducing hospital admissions and improving performance against the '4-hour standard' (that 95% of ED patients should be discharged, admitted or transferred within 4 hours of arrival) were often quoted as among the potential benefits of GPED. However, this was not universally accepted. For example, some felt that admissions would not be affected, because the population being targeted are not those that would normally be admitted from the ED. Equally, targeting primary care patients was welcomed by ED managers, as although GP patients can be dealt with quickly in theory, in many localities these patients are present in high volumes and were perceived to be at risk of breaching the 4-hour standard. However, some feared there might be an unintended worsening effect: diverting people with minor conditions that are theoretically quick to resolve increases the acuity of the remaining ED patient workload. If the ED is left with only high-acuity patients, there is a possibility that both the time spent in the ED and the proportion of patients who are admitted will increase, worsening the reported '4-hour' performance.
Table 2 (excerpt):
• Use of investigations: GPs lack skills to work in the ED. By 'going native' and having access to investigations/testing, GPs may lose their unique skills and work similarly to ED doctors. Whether GPs were given access to investigations varied depending on the GPED model in place, and so any impacts associated with this would be negligible. 'It was suggested that those problems could be better dealt with by primary care clinicians who had the appropriate skills for the job and would be perhaps confident about seeing and treating and discharging without over-investigation'. (Rowan, staff interview, 07)
• Admissions: Avoid unnecessary admissions of lower acuity patients and improve patient flow. If the ED is left with only high-acuity patients, the proportion of ED attendances who are admitted will increase. Admissions not affected as the population targeted is not those that would be admitted from the ED.
When stakeholders discussed possible effects of GPED on performance indicators, it was not always clear, and was not model dependent, whether GPED streamed patients were to be included or excluded from the ED figures, and assumptions regarding this influenced participants' views. Generally, performance indicators were considered blunt tools with which to evaluate impact, reflecting potential measurement issues and artefacts rather than good clinical practice. It was also anticipated that the 'visibility' and impact of GPED would be obscured by a year-on-year increase in patient attendances and hospital admissions (table 2).
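The denominator question can be made concrete with a small worked calculation. The sketch below is purely illustrative: the attendance figures are invented for the example and are not drawn from the study sites.

# Illustrative only: invented figures showing how including or excluding
# GPED-streamed patients changes reported 4-hour performance without any
# change in clinical practice.
ed_attendances = 1000   # patients managed within the ED itself
ed_within_4h = 930      # of those, admitted/discharged/transferred within 4 h

gped_streamed = 250     # low-acuity patients streamed to the GP service
gped_within_4h = 245    # nearly all of these clear the 4-hour mark

excluding = ed_within_4h / ed_attendances
including = (ed_within_4h + gped_within_4h) / (ed_attendances + gped_streamed)

print(f"4-hour performance, GPED patients excluded: {excluding:.1%}")  # 93.0%
print(f"4-hour performance, GPED patients included: {including:.1%}")  # 94.0%

The same service activity thus yields a different headline figure depending solely on the denominator convention, which is the measurement artefact participants alluded to.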
Patient outcome and experience A process of front door 'streaming' of patients on arrival at the ED was intended to facilitate the identification of low-acuity patients and match them with the availability and skills of the treating clinician (eg, a general practitioner). This differs from 'triage' which, although often used interchangeably with streaming, refers to the identification of high-acuity patients to ensure that more urgent cases are identified and treated in a timely way. By introducing front door streaming, 31 EDs were expected to see improvements in patient outcomes (some of which are reflected in the performance standards) and experience (table 3). Streaming lower acuity patients to a GP was anticipated to improve patient care by enabling ED staff to focus on higher acuity patients and ensuring that GP-appropriate patients are treated in GPED rather than being 'sent round the houses'. Patients were aware of the significant resourcing and financial pressures placed on the NHS and so saw value in placing GPs in the ED. There were concerns, however, from service leaders and ED staff, that patient flow could be negatively affected by GPED, with a backlog created by patients being required to disclose clinical information on multiple occasions before seeing a GP. There were also concerns that GPED patients would prevent those with higher acuity needs being seen in a timely manner, owing to beliefs that GPED may increase the number of patients attending the ED and the associated crowding (see below). There was strong and divided opinion between staff groups and even service leaders as to what is considered a 'GPED appropriate' patient. These opinions were often underpinned by cultural differences between GPs and ED staff and staff perceptions regarding professional competencies, boundaries and skillsets. ED staff in particular made certain assumptions about the skill set of GPs, which influenced these views. In some cases, GPs were perceived to lack the appropriate skills and experience to work in the ED, which in turn was felt to limit the potential effectiveness of GPED. Models that required GPs to 'go native' were thought to ask GPs to work beyond their clinical competency, with some staff claiming that GPs are not up to date with ED knowledge and lacking in key clinical skills such as x-ray interpretation and suturing. There were also concerns that GPs may not recognise higher acuity patients, with associated risks to patient safety. Service access There was divided opinion as to how GPED may affect ED attendance (table 4). Despite one of the aims of GPED being to create a more efficient service, both staff and patients were concerned that GPED may become a victim of its own success by encouraging people to attend the ED with primary care problems repeatedly, and that GPED would become a replacement GP service. It was felt that despite any 'educational' component, whereby patients are encouraged to use their own GP when attending GPED, the fact that GPED guaranteed same-day access to a GP was in conflict with this message and could encourage 'inappropriate' attendance with routine rather than urgent care needs. Concerns that GPED could create additional demand on the ED were supported by anecdotal reports from established GPED models highlighting that the volume of patients had increased since introduction. This rise was attributed to the service generating new demand from primary care patients.
Others highlighted the potential influence of general practice opening times; because primary care patients tend to present out of hours, GPED could cause peaks in ED attendance when general practice surgeries are closed. Yet this view was not universal; service leaders provided various reasons why the policy was unlikely to cause an increase in ED attendance. For example, service leaders argued that given the average person attends the ED less than once a year, it is unlikely that they would start using the ED as their main access to general practice. Additionally, as many ED patients present with higher acuity, GPED was not expected to be a supply driver in the same way as a walk-in centre. To this end, GPED was not viewed as being about access to GPs but about streaming patients to the most clinically appropriate professional. A lack of advertising or promotion of the availability of GPED services, the fact that most cases would still be treated in the ED and a lack of patient awareness of GPED meant that GPED was expected to have a negligible impact on demand. Staffing and workforce experience Staffing issues dominated discussions about the potential impact of GPED and were seen to pose a major threat to its success (table 5). Service leaders and site staff expressed concern that GPED could draw GPs away from primary care and cause competition for GP staff. Consequently, GPED was perceived to have the potential to worsen general practice staffing issues, which in turn could increase waits for a GP appointment and further encourage people to attend the ED. GPED was considered an attractive prospect for those GPs seeking portfolio careers and wishing to expand their practice, knowledge and skills. Traditional general practice was seen as a more stressful and less attractive workplace than newer service models. This was due to several pressures including the increasing volume and complexity of workload and depleted community and social care provision. There was some debate as to how the flexible hours associated with GPED would impact on job satisfaction. For example, some anticipated that this flexibility would make it easier to fill rotas, while others felt that shift working goes against one of the main reasons why people choose to be a GP. Many staff perceived GPED to have training and educational benefits for junior doctors who would, in some models, become more confident about discharging patients and build up their primary care knowledge (table 6). Conversely, diverting patients with minor conditions to GPED was seen to have benefits for ED juniors and trainees by exposing them to more acutely ill patients.
Table 5 (excerpt):
• GPED is an attractive place to work for those wanting portfolio careers. Working 'beyond the walls of the surgery' is not appealing to all and may cause competition for GP staff between primary and secondary care. 'A concern [is] that it would, it would spread the primary care resource more thinly, so it would be less able to respond to, you know, would be less able to respond to sagittal primary care demand…' (Service leader interview, 05)
• Flexible working hours: Flexible working hours may make it easier to fill rotas. Working out of hours is a deterrent for those who chose to work in general practice. 'Just because I'm a locum I can avoid doing nights, and chose not to do nights'. (Chestnut, staff interview, 22)
• Locum working: Working on a locum or ad hoc basis can be attractive to some and may mitigate against GP staffing issues. Difficult to ensure the quality of locum staff, and an inconsistent workforce supply negatively affects collaborative working between the ED and GPs. 'The barriers, yes. Often, the GPs are not there all the time, it's not the same person. They're often locum. So, the GP will, sort of, arrive, go straight into their room and then stay in the room unless you call them out for huddle … whereas A&E nurses and all of our doctors are all quite social, we're a team, we're really visible to each other. I think just the mentality of a GP is you sit in your room all day, don't you, on your own?' (Nutmeg, staff interview, 15)
ED, emergency department; GPED, GPs working in or alongside the ED; GP, general practice.
However, there was a perceived lack of suitably qualified GPs with the necessary skills and experience to work effectively in GPED. Site staff placed importance on making GPED an attractive place to work and ensuring that GPs feel valued, supported and appropriately remunerated for effective implementation. Emphasis was also placed on ensuring GPs feel protected and supported to work within their scope of practice. As a result, some felt that GPs needed to be upskilled or would require extra training. To compensate for this, some respondents emphasised the importance of recruiting experienced GPs, who had previously worked in the ED, or employing GPs that were trained at their hospital site as juniors. There was also concern that experienced nursing staff may prefer to work in GPED due to 'better' working hours and it being perceived as an easier job. This has implications for ED staffing and for streaming, which many felt should be undertaken by an experienced nurse. However, some nurses perceived streaming to be a waste of their clinical skills and believed that it took them away from their central role and left the ED short-staffed. ED nurse practitioners were also concerned that although they continued to see patients with minor injuries, minor illnesses would be streamed to GPED, which could result in deskilling of the ED nursing workforce. Resources Staff and patients predicted that GPED would incur higher costs due to the cost of GP employment and placed importance on ensuring staffing and resources are carefully matched (table 7). Staff considered GPs a costly resource and felt that GPs needed to demonstrate their effectiveness. Furthermore, the employment of locums and agency staff to fill these positions was expected to lead to greater costs. There were some concerns that the funding could be better spent improving general practice provision, which may lead to the same outcome. Incidental costs, such as paying for training and the setup and management of new IT systems, were considered an added cost and time burden that staff felt had not always been taken into consideration. Positively, GPED was seen by some as a cost-effective initiative through its presumed effect of reducing hospital admissions and unnecessary patient investigations. If patients were seen by a GP, this would release ED staff to treat more unwell patients, with a potential cost saving arising from the more effective use of staff resources (i.e. patients being seen by the most appropriate staff member). Main findings Since the 2017 implementation of 'comprehensive front door streaming', supported by capital funding, 14-18 a variety of different GPED models have been introduced throughout the NHS. This is in part a response to varying local needs and contexts, and also different interpretations of what GPED means on a practical level.
This has resulted in disagreement at an individual, stakeholder and organisational level about the purpose and anticipated benefits and disbenefits of GPED, and a lack of clarity about the impact of introducing GPED on these effects. Indeed, for each domain of influence we present, there were, in most cases, arguments for positive, negative and no effects of GPED (tables 2-6). Despite disagreeing about the 'direction of effect', stakeholders agreed about which areas of the healthcare system and patient care were most likely to be impacted by GPED. This has enabled us to generate 'domains of influence', which will form the basis of our subsequent mixed-methods evaluation of the impact of GPED on patient care, the general practice and acute hospital team and the wider urgent care system during the wider GPED study (box 2). While the domains of influence provide the foundation for our wider mixed-methods evaluation of GPED, a lack of agreement surrounding the policy's aims, coupled with uncertainty as to how the anticipated impacts will be achieved, poses a significant challenge when evaluating whether GPED can be considered a successful national policy. It is also unclear whether the success of GPED should be determined by its effect on EDs or the wider healthcare system. This warrants careful consideration since some domains, such as ED costs or performance, may be improved at the expense of the wider NHS. Additionally, many of the differences in opinion surrounding the potential impact of GPED are underpinned by confusion as to whether patients attending the GPED are considered part of, or separate from, the denominator used for measuring ED performance. This has implications for understanding the effect of GPED on key performance indicators, particularly the '4-hour target'. Comparison with existing literature In 2010, Carson et al 18 explored rationales for the introduction of GPED through an online survey. They report that 'The main reason was to meet the needs of patients or improve quality of care. This was followed by achieving the four-hour target and reducing cost'. Similar assumptions have persisted and were seen to be drivers of the policy initiative to roll out GPED in all EDs across England. Benefits of GPED, particularly to address the increasing demand in emergency care, were perpetuated through rhetoric presented in the national press, 32 clinical press releases, 33 medical journals 23 34 and within the policy documents produced at the time. 35 36 Early studies appeared to underpin some of these assumptions. Evaluations of early adopters in the UK and Europe suggested that GPs in the ED could 'result in reduced rates of investigations, prescriptions, and referrals', 9 37 increase patient satisfaction 8 and offer patients a greater range of healthcare provision. 38 However, these studies have generally been of poor quality. More recently, these assumed benefits have been challenged. A realist review concluded that despite a reduction in process time for non-urgent patients, this does not necessarily increase capacity to care for the sickest patients. 31 The main cause of ED crowding is a lack of beds and congestion in the flow of sicker patients rather than absolute attendance numbers. 39 In addition, GPED may encourage patients to present to the ED with a primary care problem, with consequent increases in ED attendance. 26 40
To date, reviews that examine GPED in more detail have concluded that there is insufficient evidence to support national policy or local system change. 25 26 41 Two Cochrane reviews (2012 and 2018) concluded that there was 'insufficient evidence upon which to draw conclusions for practice or policy regarding the effectiveness and safety of care provided to non-urgent patients by GPs vs EPs in the ED to mitigate problems of overcrowding, wait-times and patient flow' (p. 2). 27 42 Strengths and limitations The 'domains of influence' that we have identified in this paper were generated from a large evaluation that used 'big qualitative data' (228 interviews) and the views of multiple stakeholders. This provided a rich and nuanced understanding of the complexity surrounding a current national policy: GPED. Our data apply to England only, and so may not be generalisable to other healthcare settings. In addition, we could only interview those who agreed to take part, and while we did not 'strive for saturation', the range of views may not be exhaustive. However, our maximum variation approach did achieve data that span a very wide range of individuals. 30 The detail we have obtained has enabled us to propose the domains of influence that will be used to inform our wider GPED study, the aim of which is to evaluate the impact of GPED on each of the domains of influence in detail. It could be argued that the data we present here represent the inherent uncertainty and resistance to change that most healthcare policy encounters prior to or during early implementation, and so are representative of typical 'teething problems'. However, while it is assumed that such issues will improve over time, recent research suggests that issues that are identified early in the implementation process often persist long after establishment. 43 It is our hope that by identifying 'domains of influence', rather than a set of hypotheses, we have mitigated against this and have identified many of the key areas that the GPED policy is likely to affect, while providing a framework to guide our forthcoming mixed methods evaluation. CONCLUSION In 2017, a significant financial commitment to support hospitals to introduce GPs in the ED was made in a direct attempt to address growing concerns surrounding the pressures on EDs. However, the reality of introducing GPs in the ED is complex. Throughout the NHS, the policy is being interpreted differently, which has led to a range of GPED models being implemented in ever-changing and variable local contexts. This variation, both in how the policy is being interpreted and introduced and in the different 'baseline levels' of GPED, together with the lack of agreement from stakeholders surrounding the potential benefits and dis-benefits of the policy, means that the impact of GPED is difficult to predict. However, our findings suggest that GPED will affect eight key areas. These 'domains of influence' will be used as the foundation for our subsequent mixed-methods evaluation.
Proteomics-Based Serum Alterations of the Human Protein Expression after Out-of-Hospital Cardiac Arrest: Pilot Study for Prognostication of Survivors vs. Non-Survivors at Day 1 after Return of Spontaneous Circulation (ROSC) Background: Targeted temperature management (TTM) is considered standard therapy for patients after out-of-hospital cardiac arrest (OHCA), cardiopulmonary resuscitation (CPR), and return of spontaneous circulation (ROSC). To date, valid protein markers do not exist to prognosticate survivors and non-survivors before the end of TTM. The aim of this study is to identify specific protein patterns/arrays, which are useful for prediction in the very early phase after ROSC. Material and Methods: A total of 20 adult patients with ROSC (19 male, 1 female; 69.9 ± 9.5 years) were included and dichotomized into two groups (survivors and non-survivors at day 30). Serum samples were drawn at day 1 after ROSC (during TTM). Three panels (organ failure, metabolic, neurology, inflammation; OLINK, Uppsala, Sweden) were utilised. A total of four proteins were found to be differentially regulated (>2-fold increase or <0.5-fold decrease; t-test). Bioinformatic platforms were utilised to analyse pathways and identify signalling cascades and to screen for potential biomarkers. Results: A total of 276 proteins were analysed and revealed only 11 statistically significant protein alterations (Siglec-9, LAYN, SKR3, JAM-B, N2DL-2, TNF-B, BAMBI, NUCB2, STX8, PTK7, and PVLAB). Following the Bonferroni correction, no proteins were found to be regulated as statistically significant. Concerning the protein fold change for clinical significance, four proteins (IL-1 alpha, N-CDase, IL5, CRH) were found to be regulated in a clinically relevant context. Conclusions: Early analysis at day 1 after ROSC, during TTM, was not sufficient to prognosticate survival or non-survival after OHCA. Future studies should evaluate protein expression later in the course after ROSC to identify promising protein candidates. Introduction Sudden cardiac arrest is a sword of Damocles causing around 20% of all deaths [1] in the world. Additionally, each year, 375,000 people in Europe [2] require immediate cardiopulmonary resuscitation (CPR) after out-of-hospital cardiac arrest (OHCA). Targeted temperature management (TTM) has been considered a grade IB recommendation [3] since 2010, as it significantly reduces mortality and improves neurological outcomes after the return of spontaneous circulation (ROSC). The case-fatality rate for patients after ROSC is still very high, reaching around 71.5% after 1 year [4]. In this context, prognostication and predicting the outcome are of cardinal importance, since brain injury is the determinant of morbidity and mortality in these patients [5]. According to the 2015 ERC guidelines [6], the earliest time to predict a poor neurological outcome using clinical examination in comatose patients is 72 h after cardiac arrest or 72 h after the restoration of normothermia in patients treated with TTM. The majority of mortality after ROSC is due to hypoxic-ischemic brain injury (HIBI) [7]. In addition, good prognostication is essential to minimise falsely pessimistic predictions in comatose patients [8]. To date, multiple prognostic tests have been evaluated for neurological prognostication [5], such as cranial computer tomography (CCT), detailed clinical neurological assessment, electroencephalography (EEG), and measurement of somatosensory evoked potentials (SEPs).
Nonetheless, these are not always accurate predictors of the neurological outcome, and specifically of survival, for several reasons, such as the sedation required for induction. In addition, the maintenance of TTM decreases the validity of prognostication [7]. Enolase-2 (NSE) or S-100B serum markers are other possible methods. However, these markers alone do not provide a valid prognostication of the clinical outcome, since they are influenced by multiple factors [9]. Further limitations are based on the fact that prognostication cannot be made prior to the return of normothermia [6]. Additionally, to date, no other reliable single-protein markers exist to prognosticate survivors and non-survivors prior to the end of TTM. However, the current guidelines recommend a multimodal strategy for prognostication. In this prospective cohort study, serum proteins of survivors and non-survivors of OHCA are analysed by both proteomic and bioinformatic methods to identify proteins of interest, which could allow for prognostication if used in an array or as a set of proteins. This study aims to investigate the proteome in the context of TTM after cardiac arrest and to identify specific protein patterns that are employable in prognostication, which could be useful as a complete set for estimating survival. Material and Methods The main goal of this study was to identify protein markers that are useful for clinical outcome prediction in the early phase of treatment (i.e., 24 h after ROSC and during TTM) in patients after CPR and ROSC. Study Design This prospective observational study included 20 patients with cardiac causes, resuscitated from a non-traumatic, non-hypoxic-related, out-of-hospital cardiac arrest (OHCA), and treated with TTM, according to the standard protocol of the hospital, for at least 24 h. Patients Adult patients, admitted for an OHCA after CPR, ROSC, and TTM, were included in the study. Patients presenting with hypoxia-related, traumatic or other causes were excluded from the study. Sample Collection In all patients, blood was drawn on day 0 (i.e., the day of CPR in the emergency department or ICU after arrival) and day 1 (i.e., after 24 h and during TTM). If no blood sample was available for one of the two days, patients were excluded from the analysis (Figure 1). Patients who were not eligible for TTM were also excluded from the study. Patients were dichotomized on the 30th day after CPR into the two compared groups: survivors vs. non-survivors. For all (n = 20) patients, demographic and clinical variables were collected from the electronic files of the patients via the ORBIS software (AGFA HealthCare, Bonn, Germany). Blood samples were collected daily at the same time from an arterial line into serum tubes. After the acquisition, the serum was centrifuged at 5000× g for 5 min and stored at −80 °C until proteomic analysis at the end of the study. TTM Therapy In clinical routine, the ERC guideline recommendations [10] for the treatment of cardiac arrest and post-cardiac arrest care management were utilised. Briefly, patients were treated with TTM for 24 h, according to our standard operating procedure (SOP). The emergency medical service initiated peripheral cooling with ice packs to the femoral and/or neck area in patients with OHCA.
The controlled cooling to a target temperature of 32-34 °C was continued in the intensive care unit (ICU) using an endovascular cooling device (Thermogard XP® catheter, Zoll Medical Corp., Chelmsford, MA, USA) and maintained for 24 h. TTM was terminated by rewarming through the same endovascular device at a controlled rate of 0.3 °C/h until the physiologic body temperature of 36.5 °C was reached. This temperature was maintained for a further 48 h. The basic metabolic panel, magnesium, phosphorus, ionized calcium, CBC with differential, and PT/PTT were monitored every 6 h during the clinical routine. Proteomic Analysis Proteomic and biostatistical analysis is defined as the separation, identification, and quantification of the entire protein complement of a cell, organism or tissue under specific conditions. Cardiac arrest leads to a critical whole-body ischemia and, in the case of ROSC, additional damage occurs during and after reperfusion. The so-called post-cardiac arrest syndrome is a combination of pathophysiological processes, which is associated with post-cardiac arrest brain injury, post-cardiac arrest myocardial dysfunction, and systemic ischemia/reperfusion response. To capture the complex interaction between the different organ systems, we chose the following panels: Inflammation panel, organ damage panel, and neurology panel. In summary, as a first step, statistically significantly regulated proteins were identified by OLINK and analysed by bioinformatic network analyses (GeneMania®, Toronto, ON, Canada; http://www.genemania.org, accessed on 14 December 2021). Thereafter, these statistically significant proteins were grouped using a hierarchical cluster analysis (Perseus®, Martinsried, Germany). As a third step, proteins of similarly early upregulated clusters underwent further network analysis to evaluate possible corresponding proteins or functions. This approach, related to pooled proteomic data, is described in detail below. Sample Preparation The collected and stored serum samples were sent to OLINK (Analysis Service, Uppsala, Sweden) on dry ice for further proteomic analysis to allow for high-quality and blinded proteomic analysis by a certified laboratory. The preparation was conducted according to their quality-checked protocol (ISO/IEC 17025:2005). Four internal controls were added to each sample to monitor the quality of the assay performance, as well as the quality of individual samples. The quality control (QC) is performed in two steps: Evaluation of each sample plate, based on the standard deviation of the internal controls, and the median value of the controls. Ninety percent of the samples passed for the OLINK inflammation panel, 95% for the neurology panel, and 100% for the organ damage panel. OLINK Panels Three OLINK panels were used for the analysis: Inflammation panel, organ damage panel, and neurology panel. For each protein, a unique pair of oligonucleotide-labelled antibody probes binds to the targeted protein, and if the two probes are close, a new PCR target sequence is formed by a proximity-dependent DNA polymerization event. The resulting sequence is subsequently detected and quantified using standard real-time PCR. Then, the data are normalized and transformed using internal extension controls and inter-plate controls, to adjust for intra- and inter-run variation.
The final assay read-out is given in normalized protein expression (NPX), which is an arbitrary unit on a log2 scale where a high value corresponds to higher protein expression. Each proximity extension assay (PEA) measurement has a lower detection limit (LOD) calculated based on negative controls that are included in each run, and measurements below the LOD were removed from further analysis. All of the assay characteristics, including detection limits and measurements of assay performance and validations, are available from the manufacturer's webpage (http://www.olink.com, accessed on 14 December 2021). The analyses were based on 1 µL of serum for each panel of 92 assays [11]: • The inflammation panel covers a wide range of inflammation-related protein biomarkers, which enables the analysis of 92 biomarkers through a multiplex immunoassay. The panel is assembled to detect an assortment of traditional, as well as exploratory, biomarkers within the inflammation research field. • The organ damage panel investigates 92 biomarkers from 1 µL of the biological sample. It provides the optimal dynamic range and focuses on proteins that are relevant for processes involved in the biological response to organ damage. The proteins analysed in this panel are important in processes of response to stress, regulation of cell proliferation, cell cycle, and cell death/apoptosis. • The neurology panel consists of a proximity extension assay (PEA) technology, which tests 92 neurology-related protein biomarkers across 96 samples simultaneously without compromising on data quality. Bioinformatic Analysis of Proteins After the protein expression analysis by OLINK, the identified and altered proteins were used for further bioinformatic investigations to classify underlying networks, signalling cascades, and affected pathways. Biological functions of regulated proteins were identified using functional network analysis. • Heatmapper (http://www.heatmapper.ca/, accessed on 14 December 2021) is an online server, which allows for the visualization of the results of gene expression profiling and cluster analysis in the form of heat maps through a graphical interface [12]. It allows for the accurate inspection of combinations of dataset characteristics to identify correlations and clustering results, as well as sample-related characteristics (e.g., survival time and gene expression levels). This approach allows for the visualization, as well as the accurate and rapid interpretation, of the data obtained by large scale gene expression profiling [13]. A heat map performs two actions on a data matrix: First, it reorders the rows and columns to ensure that rows (and columns) with similar profiles are closer to one another, causing these profiles to be more visible. Second, each entry in the data matrix is displayed as a colour, making it possible to view the patterns graphically [14]. • GeneMANIA (http://www.genemania.org/, accessed on 14 December 2021) is a tool that helps in predicting the interactions and functions of a list of genes in a network form or, when feasible, in pathways [15,16].
GeneMANIA provides the possibility of customizing the network, allowing for the choice of data sources or highlighting specific functions, with a more comfortable graphic experience [16]. GeneMANIA knowledge is based on data from large databases, which comprise Gene Expression Omnibus, BioGRID, EMBL-EBI, Pfam, Ensembl, Mouse Genome Informatics, the National Center for Biotechnology Information, InParanoid, and Pathway Commons [15,16]. A network of interactions is created and the strength of the interaction is weighed. In the case of no interaction, an association weight of zero is assigned, while in the case of interaction, a positive value reflecting the strength of the interaction and the reliability of the finding is assigned [17]. For example, the association of a pair of genes in a gene expression dataset is the Pearson correlation coefficient of their expression levels across multiple conditions in an experiment. The more the genes are co-expressed, the higher the weight they are linked by, ranging up to 1.0, indicating a perfectly correlated expression [15]. • WebGestalt is a tool to interpret lists of genes from large scale x-OMICS (proteomics, genomics) studies [18]. The proteins of interest were uploaded to the tool, where user IDs are unambiguously mapped to unique Entrez gene IDs, and all of them are mapped from a selected platform genome. Through the GoSlim classification plot, it is possible to examine the distribution of the genes of interest across the major branches of the gene ontology (GO) biological process, cellular component, and molecular function ontologies [19]. Each biological process, cellular component, and molecular function category is represented by a red, blue, and green bar, respectively. Statistics A p-value of <0.05 was considered statistically significant. For the analysis of demographic parameters, the U-test was utilised. For the protein expression analysis (OLINK data), the t-test was primarily utilised and supplemented with a Bonferroni correction to avoid type I error due to multiple testing (n = 276 tests). In addition to statistical significance, the fold changes (FC) in protein regulation were analysed to address clinical relevance. Proteins with a fold change ≥2.0 or ≤0.5 were considered clinically relevant and utilised for a second analysis approach. The sample size was calculated using the t-test. From a preliminary set of patients and protein changes, as well as the assumption of an alpha error of 5% and a statistical power of 80%, a sample of 20 adult patients was considered sufficient for the analysis. Ethical Registration This prospective observational cohort study was approved by the Ethics Committee of the University of Cologne, Faculty of Medicine, Cologne, Germany (No. 14-053) and was registered with ClinicalTrials.gov (Identifier: NCT02247947). Results A total of 20 patients were included in this study (Figure 1). The mean patient age was 69.9 ± 9.5 years (survivors: 60.9 ± 3.8 years; deceased: 69.2 ± 12.2 years; each, n = 10; p = 0.697; Table 1). All of the studied patients had a cardiac cause, which primarily led to cardiac arrest. Bioinformatic Analysis Bioinformatic analysis was conducted on both the group of proteins with a significant t-test (prior to the Bonferroni correction) and the group of proteins with a significant fold change. Heat map analysis for the four clinically relevant proteins in survivors and non-survivors showed no difference in clustering (Figure 2A,B).
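To make the screening procedure from the Statistics section concrete, the following is a minimal sketch of per-protein t-tests with Bonferroni correction and an NPX-based fold-change filter. The data-frame layout (rows = patients, one column per protein, plus a 'group' column) is an assumption for illustration, not the OLINK export format, and the lower cut-off is read here as a 0.5-fold change.

import numpy as np
import pandas as pd
from scipy import stats

def screen_proteins(df: pd.DataFrame) -> pd.DataFrame:
    """Per-protein Welch t-tests with Bonferroni correction and fold changes.

    Assumes `df` holds NPX values (log2 scale), one column per protein,
    plus a 'group' column with 'survivor' / 'non-survivor' labels.
    """
    proteins = [c for c in df.columns if c != "group"]
    surv = df[df["group"] == "survivor"]
    nonsurv = df[df["group"] == "non-survivor"]
    rows = []
    for p in proteins:
        _, pval = stats.ttest_ind(surv[p], nonsurv[p], equal_var=False)
        # NPX is log2-scaled, so a difference in mean NPX is a log2 fold change.
        log2_fc = surv[p].mean() - nonsurv[p].mean()
        rows.append({"protein": p, "p": pval, "fold_change": 2.0 ** log2_fc})
    res = pd.DataFrame(rows)
    # Bonferroni: multiply each p-value by the number of tests (276 in the study).
    res["p_bonferroni"] = np.minimum(res["p"] * len(proteins), 1.0)
    res["clinically_relevant"] = (res["fold_change"] >= 2.0) | (res["fold_change"] <= 0.5)
    return res.sort_values("p")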
IL1A was downregulated, and CRH, IL5, and N-CDase were upregulated in both groups. Concerning the analysis of the 11 statistically significant proteins, clustering for regulation was different for the proteins in surviving and non-surviving patients (Figure 2C,D). Only the clustering of TNF-B differed between the two groups, being allocated to another cluster. From WebGestalt, all four proteins with >2- or <0.5-fold changes were shown to be involved in metabolic processes, response to stimuli, and cell communication (biological process category) (Figure 3). Three proteins were involved in extracellular space (cellular component category) and protein binding (molecular function category). The GeneMania software was utilised to examine the network and correlating proteins for each group. Discussion The aim of the present study was to identify protein biomarkers to facilitate the prognostication of survival in OHCA patients after CPR, ROSC, and TTM. Of the 276 proteins analysed from the three OLINK panels, four showed a clinically relevant regulation and 11 proteins showed statistical significance. However, after Bonferroni correction, the statistical significance was no longer demonstrated. Bioinformatic analysis revealed the pathways involved and the related proteins which were significantly altered. Patient Population For the present study, the patient group was dichotomized into surviving and non-surviving patients for the investigation of specific protein regulation patterns in each respective group. Since survival is most often used as a hard outcome parameter after CPR [20,21], it was chosen to separate the patient groups. Concerning the demographic parameters of the patients, the ages of the patients were comparable between the survivors and non-survivors (70.8 vs. 69.2 years), and the mean age (69.9 years) is comparable to the age of patients presented in other papers regarding cardiac arrest and CPR [22,23]. In the present study, at least the age of the patients indicates that this patient group may be comparable to other cohorts. However, the male:female proportion (19:1) is heavily skewed, although gender aspects seem of low relevance for this specific aspect of protein expression. Nevertheless, the role of gender aspects remains controversially discussed [24]. Protein Identification In the present study, four vs. 11 proteins were found to have a significantly different serum expression to discriminate survivors and non-survivors. Although significance was not achieved after Bonferroni correction, the proteins are of interest for a clinically relevant approach (fold change >2 or <0.5). The aim of this study was not to find single biomarkers for a definite answer, but a full set or array of proteins which could facilitate prognostication. Of all the proteins found to be significantly regulated in the present study, TNF-B seems the most promising. TNF-B was significantly downregulated in the non-survivors' group, which could be an early indicator of low survival. However, from the cluster analysis, up- and downregulation of all the other proteins analysed was comparable in both groups. In addition to the findings of the present study, a recent trial from Cakmak et al. [25] found that serum copeptin levels predict ROSC and the short-term prognosis of patients with OHCA.
Therefore, the authors concluded that serum copeptin levels may serve as a guide in diagnostic decision making to predict ROSC in patients undergoing CPR and in determining the short-term prognosis of patients with ROSC. In that study, blood was drawn at patient admission. However, the aim of the present study was to identify protein alterations with a potential to predict the overall outcome after TTM, for use in a later protein expression profile. Biological Processes and Cascades Concerning the biological function and the associated cascades, proteins were mainly involved in the metabolic process and biological regulation with a protein binding function. In addition, the proteins originated from the membrane and extracellular space. Although this indication does not directly reveal the relevant proteins, it may give some suggestion for identifying the important proteins in this context. Since the present study was not designed to find these new or unknown proteins, it can provide a suggestion for which other proteins may be interesting for analysis in future studies. Moreover, this evidence provides insight into the function of the identified proteins in the metabolic and cellular processes, which might be relevantly affected and consequently require attention for treatment. Limitations In the present study, three different panels, each containing 92 different proteins, were used for the analysis. For a careful interpretation of the results, several limitations are noteworthy. First, the approach utilised specific proteins and not a broad analysis of potentially unknown proteins. Therefore, it was not possible to identify new biomarkers for prognostication. Second, this can be considered only as a first pilot study for future analysis, and only 20 patients were examined. Potentially, if several hundred or thousands of patients are analysed, additional markers could be found. This study could be a solid base for future clinical studies, even if it has specific limitations. Conclusions The present study aimed to identify proteins associated with survival or death at day 30 after out-of-hospital cardiac arrest and ROSC. Although several proteins were identified to reveal statistical or clinical relevance, bioinformatic analyses unveiled no promising candidates. Therefore, early analysis at 24 h after ROSC was not sufficiently possible during TTM to prognosticate survival or non-survival after cardiac-induced OHCA. As a result, further studies are necessary to better evaluate protein expression after ROSC and to identify promising protein candidates for prognostication, e.g., 6 h post-ROSC. This study could be considered as a launching platform for future multi-centric studies. Institutional Review Board Statement: No. 14-053; NCT02247947. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study or the relatives, respectively. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations: VTI1A, vesicle transport through interaction with t-SNAREs 1A; VTI1B, vesicle transport through interaction with t-SNAREs 1B.
Referee comment on acp-2021-396 The manuscript describes very well-designed studies of vanillin photooxidation in bulk liquid solutions where pH, concentrations, reactant ratios, dissolved gases (N2 or O2), ions (nitrate, bicarbonate) and other species (isopropanol) were varied in many combinations. The work is technically sound, with the loss of reactants, the identification and quantification of products, and the absorbance changes in solution all monitored hourly. The authors exhaustively discuss the differences between each experimental variation, pulling out as much detail as possible. This paper will be of interest to those interested in biomass burning aerosol and brown carbon formation, and is publishable after major revision to address the following points. In places the discussion veers off into speculation, or suggests theories that aren't explained well enough to be convincing to the reader, as noted below. Generally the discussion is convincing and well-connected to the literature, but the discussion section reads like it has a thousand detailed conclusions, leaving the reader often feeling "lost in the weeds" and blunting the impact of the work. In general, the focus of the paper could be improved by moving Table 1 to the SI, removing a lot of speculative discussion, and bringing Tables S2 and maybe S3 from the SI to the main paper. These tables are more vital to the discussion at many points, in my opinion. I do not trust using results for IPA to make generalizations about the effect of all VOCs on vanillin photooxidation. The authors repeat this questionable generalization several times throughout the manuscript, including twice in the abstract. Especially because the authors' explanation for the effect of IPA on their results relies on alcohol / water microstructure arguments, generalization to all VOCs seems unwarranted. Plus, IPA would be present only at very low concentrations in aqueous aerosol or cloud droplets due to its high volatility. It would be more appropriate if the authors remove (or heavily qualify) all statements about VOCs. At several points, the authors discuss rather small differences between experiments (factors of 1.2 to 1.5) as significant, but the uncertainties in the parameter values being compared are never quantified. This raises doubts in readers' minds about which differences are actually statistically significant. Some discussion of uncertainties and random error is needed. The argument that 3VL* is more reactive in its protonated form as an explanation for the observed pH effects does not make sense to me. The pKa of VL is 7.4, which means that more than 99.9% of it is protonated in all experiments, negating the possibility of any detectable acceleration at low pH by this mechanism. Furthermore, the authors describe reasonable alternative explanations for their observed pH effects, such as the more efficient photolysis of HONO versus NO2−, producing more OH radicals at low pH. However, the questionable claim that 3VL* is more reactive in its protonated form is repeated several times throughout the manuscript (for example, lines 267, 270, 280, 449 and 500). This claim needs to be convincingly justified or removed from the manuscript.
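The protonation argument above can be checked directly with the Henderson-Hasselbalch relation; in the short sketch below the pKa comes from the comment, while the pH values are assumed for illustration.

# Fraction of vanillin in the protonated (neutral phenol) form:
# f = 1 / (1 + 10**(pH - pKa)), from the Henderson-Hasselbalch equation.
pKa = 7.4  # pKa of vanillin, as stated in the comment

for pH in (2.5, 4.2, 7.4):  # example values; the experimental pHs are assumed
    f = 1.0 / (1.0 + 10.0 ** (pH - pKa))
    print(f"pH {pH}: {f:.4%} protonated")
# pH 2.5 -> 99.9987%, pH 4.2 -> 99.94%, pH 7.4 (= pKa) -> 50%

At any pH a few units below the pKa, the protonated fraction is effectively constant, which is why protonation of 3VL* cannot by itself produce a detectable low-pH acceleration.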
Specific comments: Line 25: The authors conclude that photosensitized reactions of VL were "more efficient" relative to nitrate-mediated photo-oxidation. However, as pointed out by the authors, VL is much more light-absorbing than nitrate. Can the authors make a comparative statement after taking this difference into account? Which is more efficient on a per-photon-absorbed basis? This would be a more appropriate comparison of reaction efficiency. Line 226: The authors at several points claim that VL triplet states and nitrate photolysis products have a "synergistic effect," but evidence in support of this claim is lacking, or at best the evidence supporting it is not adequately explained. The inadequately supported claim is repeated in line 497. Line 258: This explanation of opposite pH trends at 0.1 and 0.005 mM VL is extremely speculative. Line 272: For greater clarity, it would be helpful if the manuscript would always match product formulas mentioned in the text to the structures shown in Table S3. Is this product structure #21 in Table S3? Line 297: is this dimer product structure #5 in Table S3? Line 334: The solvent cage effect explanation seems questionable. Why would two negatively charged ions share a solvent cage, given their electrostatic repulsion? Furthermore, in line 339 the authors state that "NaBC did not cause any substantial change in the decay of VL," thus making this whole solvent cage discussion irrelevant to the data at hand. Line 341 - 346: the authors state that "no tetramers were observed in VL*+NaBC" and "VL+AN+IPA had more oligomers," and then go on to suggest that the formation of oligomers can be promoted by inorganic ions, likely via the generation of radicals such as CO3•−. No evidence has been provided, as far as I can tell, that NaBC promotes oligomer formation, so I was confused by the authors' claim here that bicarbonate does in fact promote oligomer formation via CO3•− radicals. Line 363: ESI-MS is routinely used to detect macromolecules in biochemistry. This suggestion that the method cannot detect molecules with more than 25 carbons is an erroneous conclusion to draw from Lin et al. (2018). Line 379: The logic needs to be better spelled out here. Why is the formation of more oxidized products suggested by a larger fraction of small-mass products observed for 1:1 VL/nitrate mixtures compared to 1:100? Do small product masses imply fragmentation, or is there a competition with oligomerization? Line 389: C8H9NO3 should be identified as product structure #2 (an amine) on Table S3. Line 408: The nitrate photolysis explanations may not be needed, given that the observed enhancement of nitrate on guaiacol decay rates was only a factor of 1.2. Is this a statistically significant change? Line 418: The word "Similarly" is being used to relate two seemingly dissimilar observations, causing needless confusion. In the previous sentence, VL shows much higher absorbance enhancement than nitrate, but in this sentence nitrate is being compared to an experiment without nitrate. Line 471: This sentence is confusing. Doesn't this work address (among other things) the effects of nitration on triplet-generating aromatics? Line 481: Why would VL photodegrade 10 times slower in ALW relative to dilute cloudwater? This effect is important for applying this work to the atmosphere. Could the authors provide some theory or explanation here? On Table S2, experiments without nitrate are listed as "-" in the column of normalized abundances of N-containing compounds.
Is this because no N-containing compounds were detected in the top 50, or because these samples were not analyzed for N-containing compounds? It would be helpful to map the reactant molecule onto the Figure S12 graph. Technical Corrections: Line 349: "increased" should be "increase". Line 377: "an important" should be "a more important". Line 459: "decompose" should be "decomposes". Sodium nitrate, in my opinion, would be better abbreviated "NaN" to be more consistent with other abbreviations such as "NaBC." Table S3: Compound number 4, the most abundant product in some studies, is missing an oxygen atom. It should be clarified that structure #1 is the reactant molecule vanillin rather than a product.
2021-08-04T12:20:29.004Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "bdca8462a935101593d47b035c0e80288de233d8", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/preprints/acp-2021-396/acp-2021-396.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bdca8462a935101593d47b035c0e80288de233d8", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
231899036
pes2o/s2orc
v3-fos-license
Greater Power but Not Strength Gains Using Flywheel Versus Equivolumed Traditional Strength Training in Junior Basketball Players The main aim of the present study was to compare the effects of flywheel strength training and traditional strength training on fitness attributes. Thirty-six well trained junior basketball players (n = 36; 17.58 ± 0.50 years) were recruited and randomly allocated into: flywheel group (FST; n = 12), traditional strength training group (TST; n = 12) and control group (CON; n = 12). All groups attended 5 basketball practices and one official match a week during the study period. Experimental groups additionally participated in the eight-week, 1-2 d/w equivolume intervention conducted using a flywheel device (inertia = 0.075 kg·m²) for FST or free weights (80% 1 RM) for TST. Pre- to post-intervention changes in lower limb isometric strength (ISOMET), 5 and 20 m sprint time (SPR5m and SPR20m), countermovement jump height (CMJ) and change of direction ability (t-test) were assessed with analyses of variance (3 × 2 ANOVA). A significant group-by-time interaction was found for ISOMET (F = 6.40; p = 0.000), CMJ (F = 7.45; p = 0.001), SPR5m (F = 7.45; p = 0.010) and the t-test (F = 10.46; p = 0.000). The results showed a significantly higher improvement in CMJ (p = 0.006; 11.7% vs. 6.8%), SPR5m (p = 0.001; 10.3% vs. 5.9%) and the t-test (p = 0.045; 2.4% vs. 1.5%) for FST compared to the TST group. Simultaneously, the FST group had higher improvement in ISOMET (p = 0.014; 18.7% vs. 2.9%), CMJ (p = 0.000; 11.7% vs. 0.3%), SPR5m (p = 0.000; 10.3% vs. 3.4%) and the t-test (p = 0.000; 2.4% vs. 0.6%) compared to the CON group. Players from the TST group showed better results in CMJ (p = 0.006; 6.8% vs. 0.3%) and the t-test (p = 0.018; 1.5% vs. 0.6%) compared to players from the CON group. No significant group-by-time interaction was found for the 20 m sprint (F = 2.52; p = 0.088). Eight weeks of flywheel training (1-2 sessions per week) performed at maximum concentric intensity induces superior improvements in CMJ, 5 m sprint time and change of direction ability compared with equivolumed traditional weight training in well trained junior basketball players. Accordingly, coaches and trainers could be advised to use flywheel training for developing power-related performance attributes in young basketball players. Introduction It has been acknowledged in the scientific literature that strength training produces several morphological and neural adaptive changes in the human body, including increases in muscle cross-sectional area, muscle fiber pennation angles and musculotendinous stiffness, as well as in motor unit recruitment, rate coding (firing frequency), synchronous motor unit activity and neuromuscular inhibition [1]. These types of adaptations enable increases in strength and power, both of which have been extensively proven to be related to sport performance across a continuum of sports events [2]. Consequently, strength training has become a cornerstone of strength and conditioning programs for athletes [3]. In addition, optimizing the load and time spent in strength training may be one of the most important considerations for strength and conditioning coaches (especially in team sports), where success is multifaceted, with a broad spectrum of physical, physiological, technical and tactical abilities that need to be targeted regularly in the training process and integrated periodization [4].
Consequently, in both sport science and everyday practice there is a need for elucidating and incorporating effective but also time-sparing strength training methods [5]. In this vein, many different strength training methods have been presented in the past, including the use of free weights, kettlebells, elastic bands and resistance training machines [6]. These different traditional strength training methods, including both eccentric and concentric muscle actions, are prescribed based on concentric force parameters, with a propensity to underload the lengthening phase of movement, as muscle produces more force during the eccentric phase of movement [7]. There is a growing body of research asserting that strength training programs which adequately load the lengthening phase of movement, called eccentric training, might induce superior neuromuscular adaptations (faster cortical activity, inversed motoneuron activity pattern, improved muscle-tendon unit morphology and structure) compared with traditional strength training. In addition, there is increasing evidence in the recent scientific literature implying that eccentric strength training is a potent stimulus for boosting physical performance [8,9], with flywheel iso-inertial resistance training especially highlighted recently for its efficiency in both performance and clinical settings [10] as well as its specificity [11]. Concisely, flywheel training is a relatively new training method in which participants accelerate a flywheel during the concentric phase of movement, with the kinetic energy returned during the eccentric phase of movement, thus requiring significant eccentric muscle action (eccentric overload) to slow the flywheel (see the numerical sketch below). This presents an alternative means of providing external load in resistance exercises, which can be achieved by flywheel resistance [12]. Flywheel training enables overload in the eccentric phase by resisting the eccentric force later in the eccentric range of motion [13]. Considering performance outcomes in the athletic population, eight to eleven weeks of flywheel training with one or two sessions a week has been found effective in enhancing countermovement jump height (CMJ), change of direction ability and linear sprint in young and adult soccer players [14][15][16][17][18]. Furthermore, the literature has found six and seven weeks of flywheel training (two to three and one session per week, respectively) to be a robust tool for significantly enhancing CMJ, squat jump, 20 m sprint, change of direction (t-test) and maximal strength [19], as well as maximal strength (half squat 1 RM) and 20 m sprint [20], in professional handball players. Interestingly, data on the effects of flywheel training on performance outcomes in basketball are scarce. To the best of the authors' knowledge, only one study [21] reported significant improvements in countermovement jump and squat power after implementing one session a week of flywheel training (four sets of eight repetitions of the squat, 24 weeks) in a sample of 26 regional-level adult basketball players (males and females). Change of direction, muscular strength, vertical jumping ability and repetitive short-distance sprints are all important fitness attributes required for the physical demands of a basketball game [22]. In addition, the relevance of performing explosive and fast movements, such as sprints, jumps and changes of direction, has increased in modern basketball [22]. Finally, lower body strength has been extensively reported to be related to lower body power performance [23].
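To make the flywheel loading principle concrete: the energy an athlete must dissipate during the braking (eccentric) phase equals the rotational kinetic energy stored during the concentric phase, E = ½Iω². The sketch below uses the moderate inertia employed in the present study (0.075 kg·m²); the wheel speeds are illustrative assumptions, not measured values:

```python
import math

# Rotational kinetic energy stored in a flywheel: E = 0.5 * I * omega^2.
# The athlete spins the wheel up concentrically and must absorb the same
# energy by braking it eccentrically (the "eccentric overload").
def flywheel_energy_joules(inertia_kg_m2: float, rpm: float) -> float:
    omega = rpm * 2.0 * math.pi / 60.0  # convert rev/min to rad/s
    return 0.5 * inertia_kg_m2 * omega ** 2

INERTIA = 0.075  # kg*m^2, the moderate load used in this study
for rpm in (200.0, 300.0, 400.0):  # hypothetical peak wheel speeds
    energy = flywheel_energy_joules(INERTIA, rpm)
    print(f"{rpm:.0f} rpm -> {energy:.1f} J to absorb in the braking phase")
```

Because the same energy must be absorbed over a shorter range of motion when braking is delayed to the later part of the movement, the mean eccentric force rises above the mean concentric force, which is the overload mechanism the training protocols described later exploit.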
Therefore, it is of interest for both sport scientists and basketball practitioners to elucidate the effects of innovative training methods for power and strength development in basketball. Although strength and conditioning coaches use various methods to develop neuromuscular factors in youth basketball players [24], no studies to date, as far as we know, have investigated the effects of flywheel training on strength and power attributes in young basketball players. It should be recognized that the continuation of habitual team sport practice during puberty has been proven to induce substantial improvements in lower body strength per se, without additional resistance training performed [25]. Consequently, the inclusion of a control group with regular basketball practice improves clarity as to whether performance adaptations are a consequence of strength training or of specific sport training linked to possible growth and muscular development. This was not the case with previous similar investigations conducted on young soccer players [14,15]. Furthermore, to the best of our knowledge, there are no studies with relatively old (U-18), highly trained and resistance-training-experienced adolescents that have compared the effects of continuing with specific sport practice versus adding flywheel or traditional strength training to regular basketball practice. Recently, a meta-analysis exploring flywheel training performance effects revealed that most interventions were carried out over 5- to 10-week training periods [13]. Further, in vertical inertial flywheel training similar to our research design [14,19,26], differences in strength and power performance were found over 6- and 8-week training periods. As a result of this analysis, an eight-week training period is consistent with previous research. Taking all of the aforementioned into account, the main aim of this study was to compare the in-season effects of eight weeks of equivolumed flywheel vs. traditional strength training on lower body strength, countermovement jump, t-test and 5 and 20 m sprint performance in well-trained young basketball players. We hypothesized that flywheel training would produce superior effects in all observed fitness attributes. All players were regional level, from Novi Sad (Serbia), and played for teams competing in the junior league of the Vojvodina province during the season in which the investigation took place. All players had basketball training experience of a minimum of 4 years, without lower limb injury or illness in the 4 months prior to the study. During the program, all participants had 5 basketball trainings (90 min per training) and one game a week. In addition, participants were all familiar with resistance training regularly employed throughout the season, but had no previous experience with the flywheel device. The requirements and obligations during the study were explained to all participants, as well as the purpose of the research. Each participant could withdraw from the research at any time. No players reported injuries throughout the study duration and no one withdrew from the research. The study conforms to the Declaration of Helsinki (2008), as revised in Fortaleza in 2013 [27], for medical research involving human participants. The study protocol was reviewed and approved by the ethics committee of the University of Novi Sad, Serbia (Ref. No. 44-01-02/2019-3). All participants voluntarily agreed to enroll in the study and signed an informed consent form, while parents or legal representatives signed for underage subjects.
Study Design The experimental program was organized during the second part of the competition period, in March and April 2019. Initial testing was organized seven days prior to the first practice session, and after testing, players from the FST and TST groups attempted two sets of six to eight repetitions on the isoinertial device (FST group) or with weights (TST group) in order to familiarize themselves with the training protocol. Three days before starting the program, both experimental groups had a second familiarization training with exercises on the isoinertial device and with free weights. Supervised strength training for the experimental groups was conducted during the morning hours in the lab facility at the Faculty of Sport and Physical Education, University of Novi Sad, supplied with all necessary equipment (flywheel device, bars, plates, elastic bands, etc.). All participants were supervised by PhD students with extensive strength training experience to help ensure high-quality training sessions. The sessions were performed every Tuesday, Wednesday and Thursday, in groups of no more than 6 players, and monitored by at least two PhD students at all times. Three to six days after the intervention period, final testing was conducted, identical to the initial one with respect to time of testing, order and protocols of testing procedures, and examiners. All participants were strongly advised to avoid any strenuous activity 24 h before testing. The control group did not receive any additional training apart from regular basketball trainings and weekend games during the intervention. During the week, but not on the same day as the experimental program, one basketball training session was supplemented with bodyweight strength training for all groups. This training was regularly implemented throughout the season, at the beginning of the training session, lasting 25 to 30 min. The participants were not allowed to take stimulants or any other performance-enhancing substances during the study. Measurements Anthropometric measurements were taken by an International Society for the Advancement of Kinanthropometry (ISAK) level three anthropometrist, following the standard procedures, prior to initial testing [28]. The height and body mass technical error of measurement (TEM) was less than 0.02%; height and body mass were measured with a SECA measuring rod (Seca GmbH, Hamburg, Germany; precision of 1 mm; range: 130-210 cm) and a SECA model scale (precision of 0.1 kg; range: 2-130 kg). Prior to initial testing, data on training experience and anthropometric measures of standing height and body weight were taken for each subject. The lower extremity isometric strength test (ISOMET) was performed with peak force measured on an isoinertial device (D11 full, Desmotec, Biella, Italy). The participant was connected to the device by a strap with one end tied to the device and the other to a waistcoat worn by the participant. The strap was tightened so that the participant could not move upward. The Desmotec device has two contact panels that are connected to a computer equipped with the software (D.Soft, Desmotec, Biella, Italy). The participant stands in a semi-squat position, with knee flexion at a 100-degree angle, and hands placed on the hips. On the signal, the subject exerts pressure on the plates for 10 s in a maximal voluntary isometric contraction. The contact panels measure the force that the participant produces, which is read on the computer.
The test was done twice, with a rest period of 2 min, and the better result, expressed in kilograms, was recorded. Good test-retest reliability (α = 0.889) was found for this parameter. The countermovement jump test (CMJ) was conducted according to the Bosco protocol [29] on a contact platform (Just Jump, Probotics, USA). During the CMJ, all participants were instructed to start with an upright posture and their hands on their hips. After a swift downward phase to a semi-squat position, participants jumped up maximally, keeping hands on hips, and landed in an upright position with their knees extended. Three attempts were allowed, with 45 s of passive recovery between trials. The best jump performance was registered and used for further analysis. The CMJ is characterized by very low variability between tests (coefficient of variation of 3.0%) [30], and excellent test-retest reliability (α = 0.918) was found in our study. Subjects performed a 20 m sprint test with a 5 m split time; times were recorded using light gates (Microgate-Witty, Italy). Two submaximal efforts (around 90% of maximum speed) were included at the end of the specific warm-up, followed by two 20 m sprint trials with two minutes of passive recovery between trials. The subject started from a crouched position with the front foot positioned 0.3 m behind the first timing gate; players started voluntarily and accelerated maximally to the finish line. During the test, the participants were verbally encouraged to run with maximum effort. The better results were used for further statistical analysis (SPR5m and SPR20m). The 20 m sprint test demonstrated a high level of reliability in our study (α = 0.901 and α = 0.914 for the 5 m and 20 m sprint, respectively), which is similar to previous study findings [31]. The agility t-test was conducted according to Semenick [32]. Participants started with the front foot positioned 0.3 m before the light gate. The test includes forward running, shuffling sideways and, at the end, backwards running. The trial was not counted if the player crossed one foot over the other while shuffling or failed to touch the base of the cones. Times were recorded using light gates (Microgate-Witty, Italy) placed at the start/end position. Two trials were completed with 2 min of passive recovery, and the better result was taken for analysis [33]. Good test-retest reliability (α = 0.875) was found for this parameter. Training Interventions Two sessions were conducted to familiarize participants with the training method in order to optimize training adaptations. The two experimental groups (FST and TST) attended 8 weeks of individually supervised strength training, 1-2 training sessions per week, with 12 training sessions in total. The number of training sessions and sets increased progressively throughout the program (Table 1), with at least 48 h of rest between sessions. The experimental groups (FST and TST) had the same number of training sessions, sets and repetitions per set during the experimental treatment for each training session (equivolumed training protocols). A moderate inertial load (0.075 kg·m²) was chosen for the half squat and Romanian deadlift in the FST group, based on findings by Sabido et al. [34] reporting that these loads maximized eccentric overload. All other exercises except the rotational Pallof press were conducted with 85% of 1 RM for both FST and TST participants.
Each training session consisted of 5 drills, differing only in two exercises: while the FST group practiced the Romanian deadlift (RDL) and half squats (HS) on the isoinertial device, the TST group practiced the half squats (HS) and Romanian deadlift (RDL) with free weights. Two minutes of passive recovery was allowed between exercises and sets. For flywheel exercises, each set began with two submaximal attempts that were not counted in the total number of repetitions, after which the subject continued with maximal voluntary effort for the required number of repetitions. For the half squat exercise, the subject begins with the concentric phase, carried out from an approximately 90-degree knee angle to near full extension, and then continues, without stopping, into the eccentric phase. Participants were briefed to perform the concentric phase with maximum effort, while applying maximal force after the first third of the lengthening phase in order to stop the flywheel at about 90° of knee flexion, thus achieving eccentric overload [21]. It has been recognized that special eccentric strategies are required to apply braking force over the entire range of motion at certain joint angles to achieve the desired eccentric overload [35]. For the Romanian deadlift, the participant stands upright on the isoinertial device with feet shoulder-width apart, holding a Kbar in front of the body, connected to the device by a strap. In the initial position the participant is bent at the hips, the back is straight, the arms are outstretched and the bar is below the knee (knee almost fully extended). The exercise begins by raising the body with maximal voluntary contraction (concentric phase) to an upright position, at which point the strap is stretched to its maximum. As the strap immediately begins rewinding, the participant enters the braking phase in order to stop in the initial position (eccentric phase), after which the next repetition follows without a pause. The bar moves close to the body during the exercise. The exercises per session (Table 1) were: one-arm dumbbell row (4 × 8), rotational Pallof press 2 × (4 × 12-15) and biceps curls + upright row complex (4 × 8) for both groups; half squat (4 × 8) and Romanian deadlift (RDL) (4 × 8) performed on the isoinertial device by FST or with free weights by TST. Statistical Analysis Data are presented as mean ± standard deviation (SD). Normality of distribution was examined using the Shapiro-Wilk test. Levene's test for the assessment of homoscedasticity was applied. At pre-test, between-group comparisons were analyzed by univariate analysis of variance (ANOVA) with the factor group (FST, TST and CON), and between-group comparisons under the influence of the experimental treatment were analyzed by a two-way ANOVA (3 × 2). Statistical significance was set a priori at p ≤ 0.05. A post-hoc test (Least Significant Difference test, LSD) following ANOVA was used to determine the significance of factor interactions. Cohen's d, as the measure of the effect size of the mean difference, was calculated by subtracting the means and dividing the result by the pooled standard deviation. A Cohen's d of ≤0.20 = trivial, 0.20-0.60 = small, 0.61-1.20 = moderate, 1.21-2.0 = large and ≥2.01 = very large, as suggested by Hopkins et al. [36]. Data were processed using the SPSS statistical software package, version 20 (Chicago, IL, USA).
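The effect-size calculation described above is simple to reproduce; the following is a minimal sketch of Cohen's d with a pooled standard deviation and the Hopkins et al. [36] thresholds, using hypothetical sample values rather than data from this study:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def interpret(d):
    """Qualitative label following Hopkins et al. [36]."""
    d = abs(d)
    if d <= 0.20:
        return "trivial"
    if d <= 0.60:
        return "small"
    if d <= 1.20:
        return "moderate"
    if d <= 2.0:
        return "large"
    return "very large"

# Hypothetical post-test CMJ heights (cm) for two groups of n = 12
d = cohens_d(38.5, 3.1, 12, 34.2, 3.4, 12)
print(f"d = {d:.2f} ({interpret(d)})")  # prints: d = 1.32 (large)
```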
Results No significant between-group differences were detected at pretest for any variable analyzed. In addition, no meaningful group-by-time interaction was found for the 20 m sprint (F = 2.52; p = 0.088) (Table 2). A significant group-by-time interaction was found for ISOMET (F = 6.40, p = 0.000), and post hoc analysis revealed differences between the FST and CON groups (p = 0.014). Comparing the results of the initial and final measurements, the FST group had an improvement of 18.7% (large effect size), the TST group achieved an improvement of 16.6% (large effect size), while the CON group's result improved by 2.9% (small effect size). A significant group-by-time interaction was found for CMJ (F = 7.45; p = 0.001), with post hoc analysis revealing differences between the FST and TST groups (p = 0.006), but also between FST and CON (p = 0.000) as well as TST and CON (p = 0.006). The experimental groups, FST and TST, achieved progress of 11.7% (very large effect size) and 6.8% (large effect size), respectively. The CON group had an improvement of 0.3% (trivial effect size). The group-by-time interaction for the 5 m sprint variable (SPR5m) showed a significant difference between groups (F = 7.45; p = 0.010). Post hoc analysis showed that there were significant differences between the FST and TST groups (p = 0.001) and between the FST and CON groups (p = 0.000), while there was no significant difference between TST and CON (p = 0.333). Considering the percentage improvements, 10.3% (very large effect size), 5.9% (moderate effect size) and 3.4% (moderate effect size) were reported for the FST, TST and CON groups, respectively. For the t-test, an analysis of the group-by-time interaction showed statistically significant differences (F = 10.46; p = 0.000) between groups. Post hoc analysis showed a significant difference (p = 0.000) between the FST and CON groups as well as between the TST and CON groups (p = 0.018). Furthermore, a statistically significant difference was also found between the FST and TST groups (p = 0.045). When expressed as a percentage, the reported improvements were 2.4% (very large effect size) for the FST group, 1.4% (large effect size) for the TST group and 0.6% for the CON group (moderate effect size). Discussion It has been proposed that flywheel training is an efficient method for enhancing a myriad of fitness attributes in team sport athletes [13]. However, studies exploring the effectiveness of flywheel training with basketball athletes are lacking. Therefore, the aim of the present investigation was to compare the in-season effects of equivolumed flywheel vs. traditional strength training on lower body strength, countermovement jump, change of direction ability and sprint performance in well-trained young basketball players. The results of this research indicate that there were no differences in strength improvements between the two experimental protocols, while flywheel training proved superior for developing agility, vertical jump and 5 m sprint time. The flywheel group displayed significantly higher improvements in strength, vertical jump, 5 m sprint time and change of direction ability compared to the control group. Players from the traditional strength training group showed better results in vertical jump and change of direction ability compared to players from the control group.
Interestingly, adding one to two sessions a week of flywheel training appears to be an appropriate strategy for enhancing lower body strength during the competitive period in young basketball players, while adding equivolumed traditional strength training seems less effective. Finally, neither training modality proved effective for enhancing 20 m sprint performance. Although this type of practice has been very popular in recent decades [13], few studies have compared the effects of flywheel and traditional weight training on performance in the athletic population [17,19,37], and they have generally presented data similar to our study findings. In a six-week study by Maroto-Izquierdo et al. [19], 15 flywheel training sessions (4 × 7 maximal intensity half squats done with a 0.145 kg·m² moment of inertia) produced superior improvements (p < 0.05-0.001) compared to traditional weight training (4 × 7 leg presses with a load corresponding to 7 repetitions maximum (7 RM) for each set) for vertical jump (9.8% vs. 3.4%) and change of direction ability (−7% vs. −4.4%), but also 20 m sprint time (−10% vs. −5.1%), in professional handball players. In addition, no significant differences between strength training modalities were observed for maximum strength improvement (12.2% and 7.9% for flywheel and traditional weight training, respectively). The outcomes of the 8-week study by Coratella et al. [17] demonstrated that flywheel strength training performed once per week with up to 6 sets of 8 repetitions of squats produced superior improvements to equivolumed traditional weight training (80% of 1 RM) for change of direction ability (−7% vs. −2%, respectively) and 20 + 20 m sprints (−4% vs. −1%, respectively), but not for jumping (squat jump and countermovement jump) and sprinting abilities (10 m sprint and 30 m sprint), in professional soccer players. Furthermore, lower body strength increased significantly and similarly in both groups. Finally, the effects of flywheel and traditional strength training on 10 m sprint, CMJ and lower body strength (1 RM squat) were examined in 38 active male football players by Sagelv et al. [37]. During the six weeks of intervention (2 sessions per week), both flywheel and traditional strength training progressively increased the squat exercise from 3 sets of 6 repetitions (week one) to 4 sets of 4 repetitions (week six). The flywheel group performed the exercise with individually adjusted inertia enabling high power outputs (>4 W·kg−1), while traditional strength training, comprising 4 sets of 4 repetitions (85% of 1 RM), was performed with maximum intended velocity. In addition, an equivolumed Nordic hamstring exercise was included for both groups, with three sets of 4-10 repetitions, to counteract expected strength gains in the quadriceps muscle. Both groups significantly improved CMJ (9% and 8% for the flywheel and traditional strength groups, respectively) and identically decreased 10 m sprint time (2%), without between-group differences for either variable. Interestingly, traditional strength training proved superior to flywheel training in improving lower body strength (46% vs. 19%, respectively), with the noteworthy observation that the traditional weight training was conducted with high loads (85%) and maximal intended velocity, which is likely the primary reason for the observed improvements [38].
Collectively, the aforementioned studies corroborate our findings that flywheel training induces superior power-related performance outcomes, but not strength outcomes, compared with traditional weight training modalities in the athletic population. In addition, these studies suggest that flywheel training is a potent tool for improving strength and power-related performance attributes in the well-trained population, which is broadly supported by several other studies. Indeed, a recent meta-analysis reported flywheel-training-induced strength improvements, but also no difference in strength increase after flywheel vs. traditional weight training [39]. After 10 weeks of flywheel training (16 and 17 sessions, respectively) in elite soccer players (seniors and juniors, respectively), Askling et al. [16] and de Hoyo et al. [14] reported significant strength (p ≤ 0.05; 19% and 15% for eccentric and concentric strength, respectively) and 30 m sprint time (p ≤ 0.05; 2.4%) improvements, as well as vertical jump (7.6%) and sprint time (20 m sprint, 1.5%; 10 m flying sprint, 3.3%) improvements, respectively. In addition, six weeks of flywheel training, performed twice a week, has been shown to induce statistically higher improvements in squat jump and drop jump performance, as well as change of direction ability, compared to volume-matched plyometric training in well-trained junior soccer players [18]. On the contrary, implementing one flywheel training session per week for 7 weeks was found ineffective for lower body strength (1 RM in the half squat), 20 m sprint time, and CMJ improvements in professional handball players [20], suggesting that more than one flywheel training session per week, with up to 4 sets (7 reps), is needed for substantial power-related performance improvements in the athletic population [1]. Indeed, Coratella et al. [17] reported significant improvements in change of direction ability and vertical jump performance (SJ and CMJ) after 6 weeks of flywheel strength training performed just once per week but with a higher number of sets and reps (6 and 8, respectively). Collectively, these data support the efficacy of flywheel training for improving a broad range of strength and power-related performance attributes in the well-trained population, with the noteworthy caution that a threshold load needs to be met in order to obtain significant improvements. Clearly, additional investigations into the topic are warranted. It is interesting to note that we found no significant effects of either flywheel or traditional strength training on 20 m sprint performance in our participants. Somewhat in line with our findings, no change in 20 m sprint time was reported after horizontal flywheel training in physically active men [40]. It has been previously reported that low-velocity strength training may not be effective in improving sprinting ability in adolescents, especially well-trained athletes [41]. However, two to three flywheel sessions per week have been proven to increase sprinting ability in handball players [19]. In addition, sprint time (10 and 20 m) significantly improved following 9 weeks of strength training in youth soccer players [42]. We can speculate that our study results are, on the one hand, a consequence of the training status of our participants (well trained), as it has been shown that trained adolescents display smaller improvements in sprint outcomes with strength training compared to untrained ones [43].
On the other hand, training and testing specificity could also be responsible, as upward force-vector application during training is likely an important determinant in inducing specific functional adaptations [27]. In addition, the 20 m sprint is rarely seen in basketball games and practice; consequently, sprint tests over shorter distances (5-10 m) might be more specific, with acceleration and deceleration, rather than speed, being far stronger predictors of basketball performance [44,45]. Although beyond the scope of this study, the mechanisms that enable the reported improvements in strength and power-related performance outcomes should be concisely hypothesized. Flywheel training enables maximal force output throughout the entire concentric part of the movement, but also short periods of overload in the eccentric phase of the movement [13]. As exercise intensity has been acknowledged as a major determinant of strength-training-induced adaptations [46,47], it can be speculated that this flywheel-specific loading pattern (concentrically maximally loaded, eccentrically overloaded) is most likely responsible for the superior effects on power-related performance outcomes in our study. Furthermore, eccentric overload induces specific neuromuscular adaptations such as dampened motor recruitment [48], with preferential recruitment of high-threshold motor units and higher cortical activity [49]. Finally, it has been reported that an increase in eccentric-phase force output leads to an increase in the following concentric-phase force output [50][51][52]. Collectively, this physiological distinctiveness supports our study findings and the beneficial use of flywheel training to optimize strength and power adaptations in young basketball athletes. Several limitations of the study should be highlighted. We did not monitor the load of the regular basketball practice done by all participants with their respective coaches, which could somewhat blur the picture of the obtained strength training effects. In addition, this study engaged trained male basketball players without previous experience of flywheel training. Accordingly, the results may not translate to flywheel-experienced athletes. Finally, our study lasted for 8 weeks only, while comparative investigations with traditional strength modalities of longer duration are needed. Conclusions In summary, eight weeks of flywheel training with 1-2 sessions per week, including up to 4 sets of 8 repetitions of the half squat and Romanian deadlift exercises performed with maximum concentric intensity, produces superior enhancement in vertical jump, 5 m sprint time and change of direction ability compared with equivolumed traditional strength training in well-trained young basketball players. In addition, both strength training modalities were equally effective for maximal strength gains. Therefore, low-volume/high-intensity flywheel strength training seems to be an efficient tool to induce strength and power-related adaptations in well-trained young basketball players. Informed Consent Statement: Written informed consent has been obtained from the participant(s) to publish this paper. Conflicts of Interest: We declare no conflict of interest.
2021-02-13T06:16:37.461Z
2021-01-29T00:00:00.000
{ "year": 2021, "sha1": "f988984d8af9cc751e3df0367eb457176e0bdcb8", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7908554", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "7bd5c312698373ced5fa1be4a5b89c718adf5759", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
53010179
pes2o/s2orc
v3-fos-license
Therapeutic role of long non-coding RNA TCONS_00019174 in depressive disorders is dependent on Wnt/β-catenin signaling pathway. Chronic stress is one of the major causes that lead to major depressive disorder, which is a prevalent mood disorder worldwide. Many patients with major depressive disorder do not benefit from available medication due to the complex etiology of the condition. Recently, long non-coding RNAs, molecular switches of downstream gene expression, have been reported to be involved in the pathogenesis of major depressive disorder. The long non-coding RNA TCONS_00019174 has been implicated in major depressive disorder risk and antidepressant effects. However, the effect of long non-coding RNA TCONS_00019174 on antidepressant responses has not been investigated. This study is designed to determine whether altered expression of long non-coding RNA TCONS_00019174 contributes to depression-like behaviors associated with chronic stress. We found that mice exposed to chronic ultra-mild stress displayed apparent depression-like behaviors and decreased expression of long non-coding RNA TCONS_00019174 in the hippocampus. Both the changed behaviors and the long non-coding RNA TCONS_00019174 expression level were rescued by chronic treatment with imipramine. Viral-mediated long non-coding RNA TCONS_00019174 overexpression in hippocampal neurons improved the behaviors of mice exposed to chronic ultra-mild stress. Further, it was found that long non-coding RNA TCONS_00019174 overexpression upregulated phosphorylated GSK3β (p-GSK3β) protein and β-catenin in the hippocampus. These findings suggest that long non-coding RNA TCONS_00019174 exerts an antidepressant-like effect in mice by activating the Wnt/β-catenin pathway, and that long non-coding RNA may serve as a potential therapeutic target for major depressive disorder in clinical application. Introduction Stress is a potent negative factor that can lead to psychiatric diseases such as depression and anxiety disorders [1,2]. Major depressive disorder (MDD) is one of the most common mental disorders, associated with depressed mood, anhedonia, and low self-esteem [3]. Nowadays, more than 350 million people suffer from MDD worldwide. It has been reported that MDD might become the world's second leading cause of disease burden by 2020 [4]. Although the pathological mechanisms underlying depression remain to be fully clarified, stress-induced aberrant synaptic function, neuronal plasticity, and nerve cell metabolism may be key participants in the pathological changes of major depression [1-3, 5, 6]. Great efforts have been made to examine metabolic dysregulation [7] and structural and functional changes in depression-related brain regions [8]. Changes in growth factors, pro-inflammatory cytokines, and endocrine factors [9] have been found that presumably lead to major depression. Recent research has reported that either the environment or behavior can cause epigenetic changes at specific gene loci, such as dysregulated gene transcription, contributing to the pathogenesis of depression [10]. Initially, studies focused on the differential expression of microRNAs (miRNAs) in subjects with depression. MiRNAs are a class of small non-coding RNAs that mediate cleavage or translational repression of target mRNAs [11,12]. Other evidence shows that miRNAs such as miRNA-26b, miRNA-1972, miRNA-4743, miRNA-4485, and miRNA-4498 are involved in the pathophysiology of depression and antidepressant treatment [6,[13][14][15][16][17]].
Long non-coding RNAs (lncRNAs), defined as non-protein-coding RNAs, play an important role in the epigenetic regulation of the human genome, such as DNA methylation, histone modification, and chromatin remodeling, resulting in gene activation or silencing. LncRNAs participate in various biological processes and conditions, such as human cancer, cardiovascular diseases, and diseases of the central nervous system [18,19]. Recently, it has been found that six lncRNAs (TCONS_00019174, ENST00000566208, NONHSAG045500, ENST00000517573, NONHSAT034045, and NONHSAT142707) were significantly downregulated in patients with MDD compared with healthy controls, suggesting the diagnostic and therapeutic value of lncRNAs for MDD [20]. To reveal the potential mechanism of lncRNAs in the process of depression, the behavioral changes and the expression level of lncRNA TCONS_00019174 were evaluated in the hippocampus of mice exposed to chronic unpredictable mild stress (CUMS). It was found that lncRNA TCONS_00019174 significantly decreased in mice with depression. Further, the depressive behavioral effects and the downstream signaling pathway were examined by manipulating hippocampal lncRNA TCONS_00019174 expression. The results suggested that CUMS-induced downregulation of the hippocampal lncRNA TCONS_00019174 level and concomitant dysregulation of glycogen synthase kinase 3β (GSK3β) contribute to depressive behaviors and antidepressant-like effects. It is well documented that activation of the Wnt pathway leads to inhibition of GSK-3β, stabilization of cytosolic β-catenin, subsequent nuclear translocation, and further activation of downstream target gene transcription [21]. A previous study has shown that a heterozygous GSK-3β deletion manifested an antidepressant effect in the forced swimming test (FST) in mutant mice [21], while infusion of a selective GSK-3 inhibitor induced the same change in the FST [22]. Therefore, upregulated β-catenin signaling has been viewed as a marker for antidepression-like behavior [22]. In this study, the potential antidepressant effect of lncRNA TCONS_00019174 was explored in mice with depression, as well as the antidepressant-like effect of β-catenin signaling. The results confirmed the diagnostic and therapeutic value of lncRNAs for depression. Animals Eight-week-old adult male BALB/c mice (SLAC Laboratory Animal) were maintained on a 12 h/12 h light/dark cycle with ad libitum access to normal food and water. All experimental procedures were performed according to the "Principles of laboratory animal care" (National Institutes of Health 1985). The mice were randomly divided into groups for the behavioral experiments. Experimenters were blinded to the groups when collecting behavioral data. Chronic stress The procedure used in this study was a revised version of the induction of chronic mild stress. The stress was based on environmental and social stressors without food/water deprivation or any nociceptive event [23]. Briefly, mice were subjected to various mild stressors such as paired housing, cage tilt (30°), a soiled cage (50 ml of water/L of sawdust bedding), and confinement to small cages (11 × 8 × 8 cm). A reversed light/dark cycle was used from Friday evening to Monday morning. This procedure was scheduled over a one-week period and repeated six times, but the reversed light/dark cycle was omitted during the weekend of the last session. Mice were subjected to a one-hour period of morning stress, a two-hour period of afternoon stress, and overnight stress.
Drug treatments Imipramine (IMI; Sigma-Aldrich) was dissolved in drinking water at a concentration of 160 mg/L for 3 weeks [23]. Social interaction (SI) test A single mouse was placed in a measuring cage for 120 min. A male juvenile (4-5 weeks old) was then introduced into the cage, and the amount of time spent in SI, such as sniffing, licking, grooming, or crawling over or under the other mouse, was recorded during a 3-min session. Sucrose preference test (SPT) Mice were subjected to water deprivation for 16 h, and then two preweighed bottles, one containing 1.5% sucrose solution and the other containing tap water, were presented for 1.5 h. The positions of the sucrose and water bottles were switched every 30 min. To calculate the volume consumed from each bottle, bottles were weighed and the weight difference during the last 60 min was recorded. The combined water and sucrose consumption was defined as the total intake, and sucrose preference was expressed as the percentage of sucrose intake relative to the total intake. Forced swim test (FST) Experiments were carried out in a transparent plexiglass cylinder (diameter 20 cm, height 30 cm). Before the experiment, the tank was filled with water to a depth of about 25 cm. Water temperature was maintained at 23-25 °C. On the first day, mice were placed in the cylinder to adapt to swimming for 5 min. On the second day, mice were placed in the cylinders to swim for six minutes, and the duration of any immobility during the 2-6 min period of the swim was recorded. A mouse was considered immobile when it floated on the surface of the water, making no movements except tiny movements of its head to keep it out of the water. Following each experiment, the mice were removed from the water, towel dried, and allowed to return to normal temperature in warm air. The total duration of immobility was analyzed from video recordings. Novelty-suppressed feeding test A one-cm-thick layer of sawdust was placed at the bottom of a plastic box (76 × 76 × 46 cm), where 12 similarly sized food pellets were evenly distributed in the center. The mice were fasted for 24 hours (h), placed in the experimental apparatus, and the latency to feeding was calculated. The criterion for eating was that mice began chewing rather than just sniffing or playing with the food. The latency was employed as a parameter to determine the behavioral effects of the drug. In the experiment, the test environment differed from the feeding environment in that the light intensity was greater in the former. Mice were placed at the same position and orientation each time. Quantitative real-time PCR Adult mouse hippocampi were isolated as described, with modification (Sabine, 2011). In brief, the brain was dissected out and placed on ice, dorsal side up, in a petri dish. Two parallel cuts about two mm apart were made on the cortex from the midpoint of the midline to the caudal-lateral side (about 40° to the midline), the cortical tissue in between was removed, and the 'C'-shaped hippocampus was exposed. After dissociating the convex side with fine forceps, the hippocampus was dissected out for RNA isolation or protein extraction. Total RNA was extracted using TRIzol Reagent (Life Technologies) and digested with DNase (DNA-free; Life Technologies). One µg of total RNA was reverse-transcribed to cDNA using the QuantiTect Reverse Transcription Kit (Qiagen).
Real-time PCR was performed using the Applied Biosystems StepOne Real-Time PCR System with SYBR Green PCR Master Mix (Applied Biosystems) according to the manufacturer's protocol. PCR conditions were 15 min at 95 °C, followed by 45 cycles of 15 s at 95 °C and 30 s at 60 °C. The relative quantification method, following the manufacturer's protocol, was used for the estimation of target mRNA expression (a numerical sketch of this calculation is given below). All measurements were performed in duplicate. Gapdh mRNA or U6 snRNA was used to normalize the relative expression levels of target mRNAs. Construction of viral vectors AAV-mediated gene transfer was performed as previously described [23,24]. A DNA fragment of the mouse Camk2a promoter (1.3 kb) was amplified from BALB/c mouse genomic DNA and inserted into the pAAV-CMV-MCS vector, yielding the pAAV-Camk2a-MCS vector. EGFP cDNA was then inserted into the pAAV-Camk2a-MCS vector, yielding the pAAV-Camk2a-EGFP vector. To generate the lncRNA TCONS_00019174 expression plasmids, the DNA fragment of lncRNA TCONS_00019174 was PCR-amplified from mouse genomic DNA using the forward primer 5'-TCGTGGGGGGAGCGCTCC-3' and the reverse primer 5'-ACCATCCCCTTGAGGTGAA-3'. The fragments were inserted in front of an EGFP-coding region within the pAAV-Camk2a-EGFP vector, yielding the pAAV-Camk2a-lncRNA TCONS_00019174 vector. Recombinant viruses (AAV serotype 8) were generated at Vector Laboratories. The genomic titer of each virus was determined using real-time PCR. The titers of AAV8-Camk2a-EGFP (AAV-GFP) and AAV8-Camk2a-lncRNA TCONS_00019174-EGFP (AAV-Lnc) were measured as 6.0 × 10^13 and 6.7 × 10^13 viral genomes/ml, respectively. For viral vector injections, mice were anesthetized intraperitoneally with sodium pentobarbital (50 mg/kg) and placed in a stereotaxic frame. The skull was exposed and a small portion of the skull over the hippocampus was removed bilaterally with a dental drill. Subsequently, AAV vectors were dissolved in physiological saline (0.5 µl) and injected bilaterally into the hippocampus (AP, −2.0 mm; ML, ±1.5 mm; DV, −2.0 mm) at 0.1 µl/min. After surgery, the mice were first maintained in cages with a heat lamp until fully recovered from anesthesia, then transferred to normal housing conditions for three weeks for maximum transgene induction. Successful transduction in the hippocampus was confirmed by immunohistochemistry with an antibody against GFP. Cell cultures Primary hippocampal neurons were isolated from embryonic day 17 (E17) mouse embryos and cultured with minor modifications as previously reported [25]. In brief, the entire E17 mouse brain was dissected out and placed on sterile gauze under a laminar flow hood. After removing the cerebellum, the brain was separated into two hemispheres along the midline. Under a dissection microscope, the meninges were carefully removed and the hippocampus was fully exposed as a curved structure at the distal part of the hemisphere. The hippocampus was dissected out, cut into pieces, and incubated in 2.5% trypsin (Life Technologies) for 20 min at 37 °C, then incubated with a trypsin inhibitor (Worthington), triturated into single cells with a polished Pasteur pipette, and subsequently resuspended in DMEM supplemented with 10% fetal bovine serum. Viable cells were seeded on poly-D-lysine-coated 24-well dishes. After four hours, the medium was replaced with Neurobasal medium (Life Technologies) containing 1% B27 supplement (Life Technologies) and 50 µg/ml streptomycin. The cultures were maintained at 37 °C in a humidified 5% CO2 atmosphere.
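The relative quantification referred to in the qPCR protocol above is, in a typical SYBR Green workflow, the comparative Ct (2^-ΔΔCt) calculation; below is a minimal sketch assuming Gapdh as the reference gene, with hypothetical Ct values rather than measurements from this study:

```python
# Comparative Ct (2^-ddCt) relative quantification, the usual approach
# for SYBR Green assays normalized to a reference gene such as Gapdh.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # normalize sample to reference
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to reference
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values: lncRNA in CUMS hippocampus vs. a
# non-stressed (NS) control, each normalized to Gapdh.
fold = relative_expression(ct_target=27.8, ct_ref=18.2,
                           ct_target_ctrl=26.4, ct_ref_ctrl=18.1)
print(f"Expression relative to control: {fold:.2f}-fold")
# A value below 1.0 indicates downregulation, the direction reported
# for TCONS_00019174 in stressed mice.
```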
On day 2 in vitro (2 DIV), hippocampal neurons were treated with 2 µM 1-β-D-arabinofuranosylcytosine (Sigma-Aldrich) to remove any proliferating non-neuronal cells. siRNA oligos targeting TCONS_00019174 were designed and synthesized by GenePharma (Shanghai). Hippocampal neurons at 5 DIV were treated with control or TCONS_00019174-targeting siRNA together with Lipofectamine™ RNAiMAX Transfection Reagent according to the manufacturer's instructions. Statistical analyses Multiple groups were compared using ANOVA (one-way or two-way). Unpaired t-tests were used for two-group comparisons. Tests were two-tailed and considered significant when p < 0.05. All data are presented as mean ± SEM. Chronic stress reduces the hippocampal lncRNA TCONS_00019174 expression level It has been reported that BALB/c mice are more vulnerable to a CUMS challenge [23]; thus, BALB/c mice were chosen for this study. BALB/c mice were either exposed to a 6-week CUMS environment or not, with or without IMI, and assessed for depressive behavior. It was shown that mice exposed to CUMS had a reduced SI time compared with non-stressed (NS) mice; this change could be rescued by IMI treatment (Fig. 1a). In the FST, the CUMS group increased their immobility time, whereas IMI treatment decreased immobility time in mice following both NS and CUMS exposure. This indicated the antidepressant efficacy of IMI [23,26,27] (Fig. 1b). In the SPT, mouse anhedonia (diminished interest or satisfaction) was verified, and mice exposed to CUMS displayed decreased sucrose preference, which was rescued by IMI (Fig. 1c). In the novelty-suppressed feeding (NSF) test, which uses the latency to eat food in the center of an open field to assess anxiety and antidepressant-like responses [24,26,28], the CUMS group showed increased latency to feeding compared with NS mice, which also was rescued by IMI treatment (Fig. 1d). The preceding behavioral data demonstrate that stressed BALB/c mice manifested depression-like behavior. The hippocampal expression level of lncRNA TCONS_00019174 was then examined in these groups by Northern blotting; it was found that the lncRNA TCONS_00019174 level was significantly reduced in stressed mice compared with NS mice and that the expression level of lncRNA TCONS_00019174 improved after IMI treatment (Fig. 1e-1f). Effect of lncRNA TCONS_00019174 on depression suppression in mice To explore the role of hippocampal lncRNA TCONS_00019174 in the pathogenesis of depression, a mouse model was constructed by bilateral hippocampal injection of an AAV vector expressing pre-lncRNA TCONS_00019174 and GFP under the control of the Camk2a promoter, which limited expression to excitatory neurons. Mice injected with an AAV vector expressing GFP alone were used as the control treatment (Fig. 2a and 2b). Mice were grouped for behavioral experiments. It was found that NS mice overexpressing lncRNA TCONS_00019174 showed no significant behavioral changes compared with NS control mice (Fig. 2c-2f), while CUMS mice overexpressing lncRNA TCONS_00019174 developed resistance to depression-like behavior compared with CUMS control mice. In the SI test, lncRNA TCONS_00019174-overexpressing mice exposed to CUMS spent more time in social interaction than CUMS-stressed mice infected with the control vector (Fig. 2c). In the FST, there was no significant difference between CUMS-exposed mice with lncRNA TCONS_00019174 overexpression and control mice (Fig. 2d).
However, antidepression-like phenotypes similar to those in the SI test were found in the SPT and NSF tests: CUMS control mice exhibited decreased sucrose preference and increased latency to feeding compared with NS control mice (Fig. 2e and 2f), while CUMS-exposed mice overexpressing lncRNA TCONS_00019174 showed no significant difference in these tests when compared with the NS control mice. lncRNA TCONS_00019174 exerts an antidepressant-like effect in mice by activating the Wnt/β-catenin pathway The activation of the Wnt pathway, which leads to inhibition of GSK-3β and upregulation of β-catenin signaling, has been interpreted as a marker for antidepressive-like behavior. It was hypothesized here that alteration of the lncRNA TCONS_00019174 expression level by chronic stress and/or chronic IMI treatment would mediate these behaviors through the GSK-3β pathway, based on findings that chronic stress significantly decreased β-catenin mRNA and protein, while increasing GSK-3β mRNA and protein levels, in the hippocampus of mice exposed to CUMS compared with NS controls (Fig. 3a-3c). However, these expression variations were reversed by IMI treatment (Fig. 3a-3c), suggesting that the lncRNA TCONS_00019174-GSK-3β-β-catenin pathway may contribute to the behavioral responses to stress, whereas the depression-like behaviors could be rescued by IMI treatment. Subsequently, it was investigated whether the expression levels of p-GSK-3β and β-catenin are regulated by lncRNA TCONS_00019174 in vivo. It was found that the pattern of the Wnt signaling pathway resembled that seen with IMI treatment in CUMS mice. The decrease in p-GSK-3β and β-catenin levels caused by CUMS was reversed by lncRNA TCONS_00019174 overexpression (Fig. 3e-3f). Overall, these results suggest that lncRNA TCONS_00019174 regulates depression-like changes in mice through the Wnt/β-catenin signaling pathway under CUMS stimulation. lncRNA TCONS_00019174 knock-down inhibits activation of the canonical Wnt pathway in hippocampal neurons To further explore the effects of lncRNA TCONS_00019174 on the Wnt pathway, RNAi experiments were undertaken in cultured hippocampal neurons with two different siRNA oligonucleotides targeting TCONS_00019174. As shown in Fig. 4a, compared with control siRNA, the TCONS_00019174 knock-down dramatically reduced GSK-3β phosphorylation. Consequently, protein levels of β-catenin in both RNAi groups were significantly upregulated, confirming the activation of the canonical Wnt pathway when TCONS_00019174 expression was blocked. It is of interest to note that, contrary to the p-GSK-3β trend, total GSK-3β protein levels were elevated after TCONS_00019174 knock-down, which might be the result of a negative feedback effect. Discussion The antidepressant effect of lncRNA TCONS_00019174 in mice was explored in this study. It was found that an elevated lncRNA TCONS_00019174 expression level alleviated depression-like behaviors in CUMS mice, as indicated by increased time of communication in the SI test, decreased immobility in the FST, reduced latency to feeding in the NSF test, and an elevated sucrose preference index in the SPT. Moreover, lncRNA TCONS_00019174 overexpression induced activation of the canonical Wnt/β-catenin pathway in the hippocampus of CUMS mice, which was mainly responsible for the observed antidepressive effects of lncRNA TCONS_00019174 in mice.
Currently, there is accumulating evidence for epigenetic biomarkers of MDD, such as DNA methylation, miRNAs and lncRNAs, indicating significant contributions of miRNAs and lncRNAs to depression, anxiety, and antidepressant actions [29]. miRNAs including miR-16 [30,31], miR-221-3p, miR-34a-5p, let-7d-3p, and miR-451a [32], as well as miRNA-26b, miRNA-1972, miRNA-4743, miRNA-4498, and miRNA-4485 [33], have been found to be involved in the occurrence and development of MDD, and changes in miRNA expression have been closely associated with the improvement of MDD symptoms [34,35]. More research attention has been focused on the expression profiles of another class of non-coding RNA, the lncRNAs, in MDD. The biological functions of differentially regulated lncRNAs in the body demonstrate the important role of lncRNAs in multiple biological processes, including translational elongation, protein transport and localization, and protein complex biogenesis and assembly. LncRNAs are also involved in central nervous system diseases such as Alzheimer's disease, Parkinson's disease, and even depression. Six lncRNAs (TCONS_00019174, ENST00000566208, NONHSAG045500, ENST00000517573, NONHSAT034045, and NONHSAT142707) were significantly downregulated in MDD patients compared to controls, indicating the potential diagnostic and therapeutic biomarker roles of lncRNAs for MDD [20]. Thus, post-transcriptional regulation by miRNA and lncRNA networks in depression and antidepressant drug action attracts increasing attention. It has been demonstrated here that hippocampal lncRNA TCONS_00019174 influences behavioral responses to chronic stress in mice. Inhibition of hippocampal lncRNA TCONS_00019174 enhanced depression-like behavioral responses under mild stress stimulation. These behavioral effects were paralleled by changes in downstream gene expression, as it was found that lncRNA TCONS_00019174 regulates the Wnt/β-catenin signaling pathway by mediating the p-GSK3β/GSK3β level. It is known that both genetic background and social stress may have effects on gene expression profiles. Further studies have been conducted to clarify how gene-environment interactions change lncRNA expression in depression. AAV vectors with the Camk2a promoter were used to overexpress lncRNA TCONS_00019174 specifically in excitatory neurons, and it was found that lncRNA TCONS_00019174 overexpression markedly improved depression-like behaviors in mice exposed to chronic stress. Although chronic stress decreased both the mRNA and protein levels of p-GSK3β/GSK3β, this effect was blocked by lncRNA TCONS_00019174 overexpression as a therapeutic antidepressant treatment. In contrast, lncRNA TCONS_00019174 overexpression did not affect GSK3β mRNA and protein levels in the control condition. LncRNAs participate in mediating transcriptional or translational repression of target genes, and lncRNAs have been implicated in biological processes of the central nervous system, especially hippocampal development, which has been associated with the development of depression. The data suggest that lncRNA TCONS_00019174 might increase GSK3β expression in response to chronic stress and suppress β-catenin signaling under conditions of stress. Intriguingly, both GSK3β and β-catenin have been associated with depression and antidepressant drug effects. Enhancement of GSK-3β activity helps to reduce synaptic spine density in response to stress [1,36], and patients with MDD have increased GSK-3β activity [37].
Increasing evidence suggests that inhibition of GSK-3β may contribute to antidepressant treatment [38]. Consistent with this, lower p-GSK-3β and p-GSK-3β/GSK-3β levels in the hippocampus of SCH rats lead to depression-like behavior [39]. These results suggest a role for GSK-3β in antidepressive effects and highlight GSK-3β as a potential target in the treatment of depression. β-catenin, a substrate of GSK-3β [40], has been implicated in brain development and growth [41]. Phosphorylation of β-catenin by GSK-3β induces degradation of the protein, while phosphorylation of GSK-3β stabilizes β-catenin in the cell cytoplasm. In depressed subjects, the level of β-catenin was decreased in postmortem prefrontal cortices compared with controls [42]; thus, the β-catenin level can be viewed as an index for antidepressant behaviors [43]. Above all, depression brings about a high GSK-3β activation state and low β-catenin levels.

In conclusion, it is proposed that lncRNA TCONS 00019174 acts as an important regulator of behavioral responses to chronic stress. Detailed investigation of the lncRNA TCONS 00019174-associated gene networks involved in activating the Wnt/β-catenin signaling pathway should be carried out. The behavioral changes and stress-induced epigenetic gene regulation reported here reveal new aspects of depression pathophysiology and treatment. At least in part, direct modulation of lncRNA expression and the Wnt/β-catenin pathway may be an effective therapeutic strategy for depression.
Liquid Levothyroxine Formulation Taken during Lunch in Italy: A Case Report and Review of the Literature

Levothyroxine (L-T4) is among the most widely prescribed medications in the world, and it is considered by the World Health Organization an essential medicine for basic health care. Replacement therapy has always been considered straightforward, although different factors may interfere with intestinal absorption of L-T4, including food, dietary fibre, coffee, drugs, and gastrointestinal diseases. For these reasons, current guidelines recommend that L-T4 should be taken in a fasting state, because its absorption is maximised when it is taken on an empty stomach, reflecting the importance of gastric acidity in the absorption process. In addition to sodium L-T4 in tablet form, various formulations (soft-gel capsules and liquid solutions) have become available for clinical use in recent years, promising improved absorption. We describe a 31-year-old Italian man who took a liquid levothyroxine formulation during lunch. He had been under replacement therapy with liquid levothyroxine 75 mcg daily for three years for hypothyroidism due to Hashimoto thyroiditis. During confirmation of the L-T4 replacement therapy, the patient stated that he was going to continue to "take liquid levothyroxine during (his) lunch every day." We recommended taking the medication correctly in the morning, at least thirty minutes before breakfast, and repeating TSH, fT4, and fT3 after three months. The thyroid hormonal profiles taken after 3 and 6 months were comparable to those obtained when the patient was taking the medication during lunch. In conclusion, the liquid levothyroxine formulation should be preferred in case of malabsorption or potential malabsorption. It should also be preferred because it can be taken during breakfast, which significantly improves patient compliance. Further studies are needed to evaluate the possibility of taking liquid L-T4 during lunch.

Introduction

Levothyroxine is among the most widely prescribed medications in the world, and it is considered by the World Health Organization an essential medicine for basic health care [1][2][3]. The first use of thyroid hormone for the treatment of hypothyroidism was documented in the 1890s, when Bettencourt and Serrano described a patient grafted with an ovine thyroid gland to treat severe hypothyroidism [4]. Synthetic formulations of thyroxine have been available for use since the 1950s, although desiccated animal thyroid gland remained the mainstay of therapy until the 1970s [5]. In addition to sodium levothyroxine (L-T4) in tablet form, various L-T4 formulations (soft-gel capsules and liquid solutions) have come into clinical use in recent years in most, but not all, countries [6]. Approximately 60-90% of a tablet L-T4 dose is absorbed within 3 h of ingestion, and the absorption is maximal when it is taken on an empty stomach, reflecting the importance of gastric acidity in the process. In contrast, many reports have shown that liquid formulations circumvent gastric acidity, improving their absorption even if ingested with breakfast [7][8][9][10][11]. We describe a young Italian man who took a liquid levothyroxine formulation during lunch. He exhibited two thyroid hormonal profiles (TSH, fT4, and fT3) taken, respectively, in the previous fifteen days and 6 months earlier at an outpatient evaluation (Table 1). The patient underwent thyroid ultrasound, which showed an atrophic gland with markedly hypoechoic tissue, without nodules (Figure 1).
During confirmation of the L-T4 replacement therapy, the patient stated that he was going to continue to "take liquid levothyroxine during (his) lunch every day." The patient said that only during the first year had he taken liquid L-T4 at least 30 minutes before breakfast, while for the last two years he had been taking liquid L-T4 with a glass of water during his lunch "for convenience." We recommended taking the medication correctly in the morning, at least thirty minutes before breakfast, and repeating TSH, fT4, and fT3 after three months. The patient was reassessed on 27 July. He was in good health. The thyroid hormonal profiles are reported in Table 1. No difference in TSH, fT4, and fT3 was observed compared to the values found when the patient took the medication during lunch. New examinations were performed in the same laboratory after 6 months; TSH, fT4, and fT3 were still superimposable on the previous values (Table 1).

Discussion

We describe, for the first time, a patient taking liquid L-T4 during lunch in Italy. There was no change in serum TSH level after changing the timing of L-T4 ingestion to at least 30 minutes before breakfast. Until ten years ago, the only formulation available was the tablet. Replacement therapy has always been considered straightforward, although different factors may interfere with intestinal absorption of L-T4, including food, dietary fibre, coffee, drugs, and gastrointestinal diseases [12]. For these reasons, current guidelines recommend that L-T4 should be taken in a fasting state at least 30 minutes before breakfast or at bedtime (at least three hours after the evening meal), because its absorption is maximised when it is taken on an empty stomach, reflecting the importance of gastric acidity in the absorption process [13,14]. Indeed, discordant results on the timing of L-T4 administration with respect to the main meal have been reported [15,16]. Ten years ago, pharmaceutical companies introduced new levothyroxine formulations (liquid and soft-gel capsules) promising improved absorption. This could be due to the fact that these formulations circumvent the phase of gastric dissolution, which is closely dependent on gastric pH. This is true for liquid formulations because the active ingredient is already dissolved in 85% glycerol and 96% ethanol. In 2013, Cassio et al. showed for the first time that there is no full bioequivalence between drops and tablets, especially for infants with severe congenital hypothyroidism, suggesting that the liquid form could be more effective and/or better absorbed than tablets [17]. The following year, our group observed by chance a series of euthyroid patients who wrongly took liquid L-T4 with coffee at breakfast; after changing the time of intake to 30 min before breakfast, no change in thyroid hormonal profile was observed. This feedback suggested that liquid L-T4 can be taken orally at breakfast both with water and with coffee. Taking these data into account, we hypothesized that high temperatures (i.e., coffee temperature) do not alter the molecular properties or stability of L-T4 [10]. This was later clearly demonstrated by Bernareggi and colleagues [18]. Moreover, the TICO study, a double-blind placebo-controlled crossover trial, confirmed that liquid L-T4 can be ingested directly at breakfast, thus potentially improving therapeutic compliance [9], a finding recently confirmed in more than seven hundred patients [10].
Many other subsequent studies and observations have shown the significant superiority, in terms of TSH normalization, of the liquid formulation compared to tablets, even in different subsets of patients, such as those with or without gastrointestinal malabsorption, submitted to bariatric surgery, or taking multiple concomitant drugs [19][20][21][22][23][24]. Furthermore, the use of liquid L-T4 showed a significantly reduced variability in TSH values, both in young and older people, with a higher number of patients who remained euthyroid during follow-up [25,26]. This is relevant for at least two reasons: firstly, the simultaneous reduction in the risk of developing other disorders mainly associated, but not exclusively, with clinical or subclinical hyperthyroidism, such as atrial fibrillation, osteoporosis, and coronary heart disease [27,28]; secondly, subjects with stable euthyroidism during replacement L-T4 therapy require fewer blood checks, with a predictable relative reduction in total health care expenditure [29,30]. Finally, the possibility of taking levothyroxine treatment during breakfast improves quality of life [31] and significantly improves adherence to the treatment [27], as recently reported by two Italian surveys. Differently from the liquid solution, soft-gel capsules contain the drug dissolved in glycerin and enclosed in a gelatinous matrix. This structure should provide protection from variations in gastric pH. In agreement, in vitro research showed that the dissolution profile was more consistent than for tablets in the entire pH range after 60-120 minutes [32]. This has been confirmed in vivo by Fiorni et al., showing a total dissolution time of about 20 minutes [33]. The literature shows that, compared to tablets, this formulation improves the TSH profile in patients with impaired gastric secretion [34], taking pump inhibitors [35], with central hypothyroidism [36], in postmenopausal women taking calcium supplements [37], and also when taken a few minutes before breakfast with coffee [38]. On the contrary, Di Donna et al. demonstrated that there was no difference in L-T4 requirement between soft-gel capsules and tablets in patients without malabsorption, although the serum TSH was lower in patients taking soft-gel capsules [39]. To the best of our knowledge, only one study, the "TITI" study, evaluated whether a soft-gel capsule of L-T4 could also be ingested at breakfast instead of the liquid formulation. The study showed that both the liquid and soft-gel capsule formulations of L-T4 can be taken with breakfast, although a significant decrease in fT4 and fT3 was observed 6 months after the switch from liquid to soft-gel capsules [40]. For these reasons, the authors suggested that "liquid L-T4 would be the preferred formulation for patients in whom even small changes in fT4 and fT3 levels are to be avoided." Further studies are needed to clarify this important issue.

Conclusion

The new levothyroxine formulations should be preferred in case of malabsorption or potential malabsorption. The liquid formulation should be preferred because it can be taken during breakfast, which significantly improves patient compliance. Further studies are needed to evaluate the possibility of taking liquid L-T4 during lunch.

Data Availability

The data used to support the findings of the study are available on request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.
A chromosome-scale reference genome assembly of the great sand eel, Hyperoplus lanceolatus

Abstract

Despite increasing sequencing efforts, numerous fish families still lack a reference genome, which complicates genetic research. One such understudied family is the sand lances (Ammodytidae, literally: "sand burrower"), a globally distributed clade of over 30 fish species that tend to avoid tidal currents by burrowing into the sand. Here, we present the first annotated chromosome-level genome assembly of the great sand eel (Hyperoplus lanceolatus). The genome assembly was generated using Oxford Nanopore Technologies long sequencing reads and Illumina short reads for polishing. The final assembly has a total length of 808.5 Mbp, of which 97.1% were anchored into 24 chromosome-scale scaffolds using proximity-ligation scaffolding. It is highly contiguous, with a scaffold and contig N50 of 33.7 and 31.3 Mbp, respectively, and has a BUSCO completeness score of 96.9%. The presented genome assembly is a valuable resource for future studies of sand lances, as this family is of great ecological and commercial importance, and may also contribute to studies aiming to resolve the suprafamiliar taxonomy of bony fishes.

Introduction

The great sand eel Hyperoplus lanceolatus (Le Sauvage, 1824) (Fig. 1) is a coastal species that is distributed in the northeastern Atlantic, more particularly on the European continental shelves between Portugal and Murmansk and the Baltic Sea, at a maximum depth of 60 m (Rutkowicz 1982). The species occurs along the continental coastline and around islands, most notably Iceland, Svalbard, and the British Isles (Rutkowicz 1982; Nadolna-Ałtyn et al. 2017). Sand eels are commercially and ecologically important due to their high abundance and high fat content. Natural predators include sea mammals, piscivorous birds, and predatory fish. Industrial fisheries target the species for fish meal and oil production, while small-scale fisheries aim for human consumption and fishing bait (Frimodt 1995). Therefore, concerns have been raised about the potentially detrimental effects of sand eel stock depletion on the marine food web (Dunn 2021). The great sand eel is included in the family of sand lances (Ammodytidae), which contains 33 species in 7 genera (Fricke et al. 2022). Sand lances feed primarily on small crustaceans and small fishes and are characterized by an elongated body with long dorsal fins, reduced or missing pelvic fins, and the absence of a swim bladder (Muus and Nielsen 1999). The latter trait is likely an adaptation to a burrowing lifestyle (Muus and Nielsen 1999). Besides the great sand eel, the genus Hyperoplus includes one other species, the less common Corbin's sand eel (H. immaculatus), which also occurs in the northeastern Atlantic. These 2 species can be distinguished from other sand lances by their 2 sharply pointed vomerine teeth and by the relatively short pectoral fins, which do not extend to the base of the dorsal fin. Within the genus itself, the great sand eel can be distinguished from the Corbin's sand eel by its larger size (up to 20 to 40 cm length) and a species-specific dark spot on either side of the snout below the anterior nostril (Reay 1986). The great sand eel is also more piscivorous, sometimes even feeding on other sand lances (Frimodt 1995). The sand lance family was originally classified as part of the large order Perciformes but has recently been moved into other orders, either Trachiniformes or Uranoscopiformes (Nelson et al.
2016; Betancur-R et al. 2017). These taxonomic revisions illustrate the uncertainty surrounding the phylogenetic relationships within the series Eupercaria as a whole, many of which are still unresolved and in need of clarification through genetic studies (Betancur-R et al. 2017). To date, the only genetic data available for the great sand eel are mitochondrial gene sequences. As part of a master's course at the Goethe University in Frankfurt am Main, Germany (Prost et al. 2020), we generated a de novo, chromosome-level genome assembly of H. lanceolatus. This genome has been assembled from Nanopore long reads, polished with Illumina short reads, and scaffolded into chromosomes with Omni-C proximity-ligation data. The genome assembly represents the first in the genus Hyperoplus and the second within the family of sand lances, after the recently published Ammodytes dubius assembly (Jones et al. 2023), and may facilitate future studies which aim to resolve the suprafamiliar taxonomy of sand lances or to evaluate fisheries' effects on individual species.

Materials and methods

Sampling, DNA extraction, and sequencing

Two adult H. lanceolatus individuals were collected in the North Sea during a regular monitoring expedition to the Dogger Bank (Hlan001: N 54°59ʹ37.0608, E 2°56ʹ26.9772; Hlan002: N 55°1ʹ30.054, E 1°34ʹ57.2952) with the permission of the Maritime Policy Unit of the UK Foreign and Commonwealth Office in 2020. One specimen (Hlan001) was initially frozen at −20 °C on the ship and later stored at −80 °C until further processing. High molecular weight genomic DNA of this individual was extracted from muscle tissue using the protocol of Mayjonade et al. (2016) with the addition of Proteinase K during lysis. We used the Genomic DNA ScreenTape on the Agilent 2200 TapeStation system (Agilent Technologies) to evaluate DNA quantity and quality. In addition, we dissected the second specimen (Hlan002) during the expedition and preserved tissues from different inner organs (brain, heart, gills, muscle, liver, gonads, and pyloric gland) in RNALater for RNA extraction. These tissue samples, along with a DNA sample from the first individual, were sent to Novogene (UK) Company Limited for RNA extraction and sequencing. A standard 150 base pair (bp) paired-end whole-genome sequencing library from genomic DNA was prepared using the NEBNext Ultra II library preparation kit for Illumina sequencing (New England Biolabs Inc., Ipswich, USA) and sequenced on a Novaseq 6000 Illumina platform (Illumina, Inc., San Diego, California, USA). In addition, short-read paired-end RNA-Seq libraries for each of the RNA extracts from the different tissue types were prepared and sequenced on the same Illumina platform. Furthermore, we prepared five long-read libraries for sequencing on the Oxford Nanopore Technologies (ONT, Oxford, UK) MinION v.Mk1B sequencer following the protocol of ONT's Rapid Sequencing Kit (SQK-RAD004). Each library was sequenced on an individual flow cell (FLO-MIN106 v.9.41). Lastly, we prepared a proximity-ligation library from muscle tissue using the Dovetail Omni-C Kit (Dovetail Genomics, Santa Cruz, California, USA). The library was sent to Novogene (UK) for sequencing on the Illumina Novaseq 6000.

Scaffolding and quality assessment

To anchor the contigs into chromosome-scale scaffolds, we used the Dovetail Genomics scaffolding service. For that, we sent the Omni-C data, generated in this study, and the polished assembly to Dovetail Genomics as input for the HiRise pipeline (Putnam et al. 2016).
Afterwards, gaps in the scaffolded assembly were filled with TGS-GapCloser v.1.1.1 (RRID: SCR_017633) (Xu et al. 2020), using the same long reads as for the initial assembly. Finally, haplotypic duplications (haplotigs) were identified and removed using purge_dups v.1.2.5 (RRID: SCR_021173) (Guan et al. 2020), adjusting the command for minimap2 in the pipeline to the preset for ONT reads.

Transcriptome assembly and quality assessments

In addition to the genome assembly, we assembled the transcriptome of H. lanceolatus. The RNA-seq data for the 7 different tissue types were combined into a single dataset and used to assemble the transcriptome with Trinity v2.9.0 (RRID: SCR_013048) (Grabherr et al. 2011; Haas et al. 2013), following the step-by-step protocol of Freedman and Weeks (2020). The completeness of the transcriptome assembly and assembly statistics were assessed with BUSCO v5.3.1 (RRID: SCR_015008) (Seppey et al. 2019) and Quast v.5.0.2 (RRID: SCR_001228) (Mikheenko et al. 2018) using the same settings as described previously.

Repeat annotation

To annotate repeats in the assembly, we first generated a de novo repeat library with RepeatModeler v.2.0.1 (RRID: SCR_015027) (Flynn et al. 2020), which was combined with an Actinopterygii database, derived from the RepeatMasker v.4.1.0 (http://www.repeatmasker.org/RepeatMasker/; RRID: SCR_012954) Repeat Sequence Database using the utility script "queryRepeatDatabase.pl", into a custom repeat library. RepeatMasker was then used with the custom library to annotate and mask the repeats in the assembly. We hard-masked interspersed repeats and soft-masked simple repeats to increase the accuracy of the subsequent gene annotation.

Genome sequencing and assembly

The 5 sequencing runs on the ONT MinION generated a total of 32 Gbp, or an approximately 40-fold coverage, of long-read data with a mean read length of 4.56 kbp and a mean read quality of 12 (Supplementary Table 1A, Supplementary Fig. 1). Illumina whole-genome and Omni-C sequencing generated 42.3 and 43.1 Gbp of short-read and proximity-ligation data, respectively (Supplementary Table 1B). The final chromosome-scale scaffolded, gap-closed, and haplotig-purged de novo genome assembly of H. lanceolatus has 965 scaffolds (incl. mitochondrial genome), a length of 808.5 Mbp, and 6 gaps of 100 N's each, resulting in a scaffold/contig N50 of 33.7 and 31.3 Mbp, respectively (Table 1). Proximity-ligation scaffolding resulted in 97.1% of the total assembly length being anchored into 24 chromosome-scale scaffolds larger than 15 Mbp (Fig. 2A and B), which is the expected haploid number of exclusively acrocentric chromosomes (2n = 48) described for the species (Ocalewicz et al. 2019). The remaining 2.9% are comprised of scaffolds/contigs smaller than 400 kbp. The separately conducted mitochondrial genome assembly resulted in a circular mitochondrial sequence with a length of 16,509 bp, which conforms to the standard vertebrate gene organization (Supplementary Fig. 2).

Genome completeness and quality assessment

The heterozygosity and haploid genome size of H. lanceolatus were estimated by GenomeScope as 0.48% and 695 Mbp, respectively, the latter being about 113 Mbp shorter than the length of the haplotig-free assembly. A high percentage of identified complete BUSCO genes (96.9%) (Fig. 2C) of the Actinopterygii dataset and the k-mer completeness of 91.5% calculated by Merqury suggest an overall high completeness of the assembly.
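As an aside, the two quality numbers most often quoted for such assemblies are straightforward to recompute: N50 is the length at which length-sorted scaffolds accumulate half of the total assembly, and Merqury's QV is a Phred-scaled error rate, QV = −10 log10(E). A minimal sketch with toy values (not the actual assembly data):

    import numpy as np

    def n50(lengths):
        """N50: length L such that scaffolds of length >= L
        contain at least half of the total assembly."""
        s = np.sort(np.asarray(lengths, dtype=float))[::-1]
        csum = np.cumsum(s)
        return s[np.searchsorted(csum, csum[-1]/2)]

    def phred_qv(error_rate):
        """Phred-scaled consensus quality: QV = -10*log10(error rate)."""
        return -10*np.log10(error_rate)

    print(n50([35.0, 33.7, 30.0, 1.2, 0.4]))  # toy scaffold lengths in Mbp
    print(phred_qv(4.4e-4))  # ~33.6 for the ~0.04% error rate reported below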
In addition, Merqury also suggests a low base-level error rate of 0.04% and a corresponding QV of 33.6. Furthermore, both long and short reads mapped to the assembly with high mapping rates of 94.8% and 98.9%, respectively, and the BlobPlot generated with BlobToolKit shows no clear evidence of contamination (Fig. 2D). Yet, a congregation of "no-hit" and "Chordata" scaffolds with a lower GC content (~34%) compared with the chromosome-scale scaffolds (41%) might be a sign of contamination of unknown origin, due to a lack of sequences in the nucleotide database, or simply scaffolds containing AT-rich repeats that were not placed into the chromosome-scale scaffolds.

Transcriptome assembly

The final transcriptome assembly is based on 50.9 Gbp of short-read RNA-seq data (Supplementary Table 1B) and has a total length of 223.6 Mbp (Table 1). BUSCO analyses found 91.6% of Actinopterygii orthologous genes in the transcriptome, indicating high transcriptome completeness (Fig. 2C).

Repeat annotation

The de novo repeat library generated by RepeatModeler2 comprised 2,515 sequences (for details, see Supplementary Table 2). The annotation of repetitive elements identified 44.37% of the genome assembly of H. lanceolatus (359 Mbp) as repeats (Supplementary Table 3). DNA transposons were found to be the most common repeat elements, spanning 16.6% of the genome, followed by Long Interspersed Nuclear Elements (LINEs) with 6.1% and simple repeats with 4.5%. However, a large percentage of repeats, spanning 13.6% of the genome, could not be classified.

Gene annotation

The homology-based gene prediction with GeMoMa identified 22,274 genes with a median length of 6,597 bp, spanning 294.8 Mbp of the assembly. BUSCO analysis found 94.4% of the Actinopterygii dataset as complete orthologs, indicating high completeness of the predicted genes (Fig. 2C). Furthermore, InterProScan functionally annotated 50,694 (99.5%) of the 50,935 predicted proteins and assigned at least 1 GO term to 39,171 (76.9%) of the proteins. In addition, 43,600 proteins (95.2%) were assigned to entries within the Swiss-Prot database.

Conclusion

The chromosome-level reference assembly of H. lanceolatus presented here is not only the second genome assembly for the family Ammodytidae but, in fact, also the second of the order Uranoscopiformes, which contains approximately 174 recognized species (Encyclopedia of Life, http://eol.org, 2018). It will be an invaluable resource for future phylogenomic and population genomic studies of sand lances and bony fishes in general, as the systematics of Eupercaria is not fully resolved yet (Betancur-R et al. 2017). In addition, it is an important reference for genomic assessments of fisheries stocks, as sand lances are a valuable resource and play an irreplaceable role in the survival and breeding success of many seabirds (Frimodt 1995; Dunn 2021).

Supplementary material

Supplementary material is available at Journal of Heredity online.

Funding

The present study is a result of the LOEWE-Centre for Translational Biodiversity Genomics (LOEWE-TBG) and was supported through the programme "LOEWE-Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz" of Hesse's Ministry of Higher Education, Research, and the Arts.
Finite-length effects on cylindrical Langmuir probes

Kinetic simulations are used to compute current characteristics of finite-length cylindrical probes, with particular attention to end effects. Currents collected per unit length, as a function of distance to the ends, are calculated and fitted to empirical analytic functions. These fits, in turn, can be interpolated and used to predict probe characteristics, that is, collected current as a function of applied voltage, for a broad range of physical parameters of relevance to laboratory and space plasma.

I. INTRODUCTION

The attracted-species current is virtually unaffected by finite-radius effects for cylinders of radius up to one Debye length (see Table 6c in [2]). Finite-length effects, on the other hand, are a different matter. Cylindrical Langmuir probes are often equipped with a guard at one end, as illustrated in Fig. 1, which itself is attached to a supporting body such as a spacecraft bus [3,5,12]. The guard is ideally an extension of the cylindrical probe, having the same voltage as the probe, but being electrically insulated from it such that the current collected by the guard can be disregarded in the measurement. Edge effects due to the mounting point are then supposed to only affect the guard. Nevertheless, many practical probes still exhibit finite-length effects, which degrades the accuracy of the inferred plasma density when based on the theory developed for an infinitely long cylindrical probe. Increasing the probe length will of course help, although there is an upper limit to what is practical. Besides, the literature reports different results on how long probes must be to overcome these effects, ranging from about 10 to more than 50 Debye lengths [8,[13][14][15][16]]. Some of these references also attempt to find expressions for the characteristics of the probes. However, present studies on finite-length effects are limited to very specific ranges of probe lengths, plasma parameters and/or experimental setup, which may explain the discrepancy between these studies; one laboratory experiment is performed using a conducting guard [13], the other with an insulating one [14]. Numerical experiments and analytical results often disregard the guard altogether, focusing on an idealized cylinder, possibly with some discussion on the expected changes due to the guard [8,15,16]. Clearly, there is a need for a more fundamental understanding of finite-length effects in cylindrical Langmuir probes. In this paper, in order to better understand edge effects, we investigate the attracted-species current per unit length, i(z), as a function of the position z along a thin, cylindrical probe with both ends free. This approach is widely applicable: edge effects are clearly visible on i(z), which can be used to evaluate how long a guard must be in order to mitigate edge effects. Moreover, by integrating i(z) along the probe, a current-voltage characteristic can be obtained, and used to assess the applicability of the OML theory. If the probe has a guard, edge effects can be removed from one end of the function i(z). A natural next step, which is not pursued here, would be to use the new characteristic to more accurately infer plasma parameters, for new as well as existing missions with available data. Through theoretical derivations and particle-in-cell (PIC) simulations, we characterize the function i(z) for thin, cylindrical probes of virtually arbitrary lengths, for normalized voltages up to 100.
The plasma is assumed to be collisionless, nonmagnetized, and nondrifting Maxwellian. A possible application where the above conditions are well satisfied is Langmuir probes on board satellites and rockets at hundreds of kilometers altitude. In particular, we are interested in multineedle Langmuir probes (m-NLPs) [9]. The geometrical extent of these probes, including the sheath, is small enough compared to both the mean free path and the gyroradius of both electrons and ions that the plasma can be approximated as both collisionless and nonmagnetized, at least to first approximation [9,13]. Moreover, the m-NLP is operated exclusively at positive voltages with respect to the background plasma, such that electrons are always the attracted species and ions the repelled species. The plasma is typically streaming towards a satellite at roughly the orbital velocity of v_d ≈ 8 km/s. However, this drift velocity is orders of magnitude smaller than the thermal speed of the electrons, which have a temperature in the order of 1000 K, and can therefore be neglected. While space plasmas are usually not in thermodynamic equilibrium, at mid latitudes ionospheric electrons are generally well described by Maxwellian velocity distributions [17,18]. As for the ions, they also have a temperature in the order of 1000 K, which leads to a thermal speed that is much less than the drift velocity. However, it is easily demonstrated that the ions contribute negligibly towards the total collected current: a rough estimate of the ion current can be computed as the flux of ions streaming through the perpendicular cross section of the probe, I_i ≲ 2rlq_i n_i v_d, where r and l are the radius and length of the probe, respectively, and q_i and n_i are the charge and density of the ions. The ions are singly charged oxygen of the same density as the electrons. The "less than" sign accounts for drift velocities not perpendicular to the probe as well as ion repulsion leading to a smaller effective cross section. For estimating the ion-to-electron current ratio I_i/I_e, the electron current I_e may be approximated using OML theory, which should at least be accurate enough for the sake of this argument (in reality I_e will be somewhat larger than predicted by OML theory). I_i/I_e is then in the order of 1% for a probe voltage of 1 V. The ion current can therefore be neglected, and the remaining electron current should be well described by the attracted-species current under the assumptions used in this paper. It is also worth noting that while other accurate instruments exist, the m-NLP offers exceptional spatial resolution on board high-speed vehicles due to its high sampling frequency [9]. For satellites, the resolution is in the order of 10 m. Efforts towards a better understanding of such probes are therefore most relevant. The remainder of the paper is organized as follows. Following a review of some background equations in Sec. II, the theory is presented in Sec. III and the simulations characterizing i(z) in Sec. IV. Several applications are briefly discussed in Sec. V, and finally the conclusion is given in Sec. VI.

II. BACKGROUND

It will be useful to briefly revisit OML theory [1]. Let us assume a collisionless, nonmagnetized, and nondrifting Maxwellian plasma with a species with charge q, mass m, density n, and temperature T. For a probe situated in this plasma with a voltage V with respect to the background, we can define a normalized voltage

    η = −qV/kT,    (1)

where k is Boltzmann's constant.
According to OML theory, an infinite cylindrical probe would collect the following current from an attracted species, i.e., when qV < 0 or η > 0 [1]:

    I_OML(η) = I_th [(2/√π)√η + e^η erfc(√η)] ≈ I_th (2/√π)√(η + 1),    (2)

where the approximation on the second line is usually considered accurate for η ≳ 2, and

    I_th = qnS√(kT/(2πm))

is the current that would pass through the surface of the probe due to random thermal particle motion if it were at the same potential as the background plasma. v_th = √(kT/m) is the thermal speed of the species, and S = 2πrl is the surface area of the probe, with r and l being the probe radius and length, respectively. For repelled species, i.e., when qV > 0 or η < 0, the current collected is given by [1]

    I_ret(η) = I_th exp(η).    (5)

It is remarkable that this latter equation is in fact true regardless of the shape of the probe (given the correct surface area S), including finite-length cylinders, and that the collected current is distributed evenly on the probe's surface [1]. We shall therefore limit our study of finite-length cylinders to the attracted-species current. For multiple species, the current collected according to OML theory is the sum of the currents due to the different species.

III. THEORY

In the following, we are interested in determining the attracted-species current i(z) per unit length along a cylindrical Langmuir probe of radius r and voltage V with respect to the background plasma. z is the position along the probe, which stretches from z = 0 to z = l. We assume an attracted species to be collisionless, nonmagnetized, nondrifting Maxwellian and uniform in the background. The species is then fully described by its charge q, mass m, background density n and thermal energy kT, where k is Boltzmann's constant and T the temperature. Moreover, since this is an electrostatic problem, it is also reasonable to expect the vacuum permittivity ε_0 to enter the equations. We also assume, as in OML theory, that the current contributions due to different species can simply be superposed linearly. This means that the density and temperature of other species do not enter the equations of our attracted-species current (other species may well have different densities and temperatures). The attracted-species current i per unit length can then be described through a relation between all mentioned quantities:

    F(i, V, z, l, r, q, m, n, kT, ε_0) = 0.    (6)

The physical dimensions of the variables in this relation are given in Table I. Since this forms a 4 × 10 matrix of rank 4, according to Buckingham's π theorem [19], [20, pp. 22-26], the above relation can be written as a relation between 10 − 4 = 6 dimensionless variables, which can be chosen freely as long as they are independent. We choose the normalized lengths z/λ_D, l/λ_D, and r/λ_D, where λ_D = √(ε_0 kT/(q²n)) is the attracted-species Debye length, the normalized voltage and current, η and i/i_th, where i_th = I_th/l = qnr√(2πkT/m), and finally, the plasma parameter, nλ_D³. Equation (6) can thus be reduced to

    F(i/i_th, η, z/λ_D, l/λ_D, r/λ_D, nλ_D³) = 0.    (7)

We shall limit the discussion to thin probes, r/λ_D → 0, and weakly coupled plasmas, nλ_D³ → ∞, and can therefore disregard the latter two variables. According to Laframboise [2] the first assumption is well justified for infinitely long probes when r/λ_D < 1. It is reasonable to assume that the same holds for finite-length probes. Equation (7) can then be inverted with respect to the first argument to yield

    i = i_th G(z/λ_D; l/λ_D, η),    (8)

where G is a hitherto unknown function. Alternatively, one can extract a factor from G, allowing the expression to be rewritten as a modification to the OML theory,

    i = i_OML g(z/λ_D; l/λ_D, η),    (9)

where i_OML = I_OML/l, and I_OML is given by Eq. (2).
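For later reference, the OML expressions above are straightforward to evaluate numerically. A minimal sketch (the probe and plasma values are illustrative; erfcx(x) = exp(x²)erfc(x) keeps the exact expression numerically well-behaved at large η):

    import numpy as np
    from scipy.special import erfcx  # erfcx(x) = exp(x**2)*erfc(x)

    def oml_current(eta, n, kT, m, r, l, q=1.602e-19):
        """Attracted-species OML current to an infinite cylinder, Eq. (2),
        returning the exact expression and its large-eta approximation."""
        S = 2*np.pi*r*l
        I_th = q*n*S*np.sqrt(kT/(2*np.pi*m))
        exact = I_th*(2/np.sqrt(np.pi)*np.sqrt(eta) + erfcx(np.sqrt(eta)))
        approx = I_th*2/np.sqrt(np.pi)*np.sqrt(eta + 1.0)
        return exact, approx

    # Example: electrons at 0.08 eV, n = 3.5e11 m^-3, a 1 mm x 20 mm probe
    q_e, m_e = 1.602e-19, 9.109e-31
    print(oml_current(25.0, 3.5e11, 0.08*q_e, m_e, 1e-3, 20e-3))
    # The exact and approximate forms differ by ~1% at eta = 2 and by
    # much less at higher eta, consistent with the eta >~ 2 criterion.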
We refer to G and g as normalized current profile functions or just profile functions, and while either can be used, we use g for easier comparison with OML theory. Note that the use of g does not rely on the correctness of OML theory, since the OML current merely appears as a normalization. For convenience, we introduce the following dimensionless variables:

    ζ = z/λ_D,    λ = l/λ_D.

Given Eq. (9), the problem has been reduced from finding a relation between all quantities in Eq. (6) to characterizing the function g(ζ; λ, η). Before characterizing the profile function, we shall make a few theoretical predictions on its shape. First, due to the geometrical symmetry of the problem, it must be left-right symmetric about the center of the probe, g(λ − ζ) = g(ζ). Second, suppose the probe is lengthened until a section emerges in the middle, from which the edges cannot be "seen." In this region g will necessarily be flat, and equal to the value of an infinitely long probe. We can define this value rigorously as

    C = lim_{λ→∞} g(λ/2; λ, η).

For our model to comply with OML theory, C should equal 1. However, we do not enforce compliance with OML theory, but instead treat C as a coefficient to be determined. This allows the degree of compliance to be measured as the deviation of C from 1. Third, when the flat region of g emerges, the edge effects no longer overlap. That is, any point on g close enough to either end to experience edge effects does not "see" the other end, and as such is independent of the distance to it. As a consequence, the shape of the edge effects near either end remains the same as the probe is further extended to arbitrary lengths.

IV. SIMULATIONS

The characterization of g(ζ; λ, η) is constructed from simulation results obtained with PTETRA [21,22], an electrostatic PIC code in which space is discretized with unstructured tetrahedral cells. PTETRA records the current density through the probe surface by counting the number of particles passing each boundary facet every time step. An example of this simulated surface current density can be seen in the lower part of Fig. 2, which is for a probe with λ = 5.63 and η = 25. This can then be used to infer g(ζ; λ, η), as represented by a curve fit, for these particular values of λ and η. In order to fully characterize g, we have to do a sweep of simulations for different values of λ and η. Interpolation of the fitting coefficients is used to obtain g between the simulated values of λ and η.

A. Simulation setup

We simulate a plasma consisting of two species, electrons and singly charged ions. Both species are nonmagnetized, nondrifting Maxwellian and have a density of 3.5 × 10¹¹ m⁻³ and a temperature of 0.08 eV for all simulations, yielding an electron Debye length of approximately 3.55 mm. Recall that this arbitrary choice is by no means limiting, since g does not depend upon density and temperature independently, but only on its arguments ζ, λ and η. Simulations have been run for probes of length (l) 2, 5, 10, 20, 30, 40, 80, 200, 400, 1000, 2000 mm and in each case, for voltages (V) 0.16, 0.48, 0.8, 1.36, 2, 2.56, 4, 6, 8 V, in total 99 simulations. The lengths and voltages were chosen heuristically to cover realistic probes, span a wide parameter range, and have a finer representation in regions where the fitting coefficients of g change more rapidly. Notice also that for these voltages, the electron current is the attracted-species current.
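These normalizations are quick to verify; a minimal sketch converting the simulated lengths and voltages to the dimensionless (λ, η) grid (parameter values as stated above):

    import numpy as np
    from scipy.constants import e, epsilon_0

    n = 3.5e11       # electron density [m^-3]
    kT = 0.08*e      # thermal energy corresponding to 0.08 eV [J]

    lambda_D = np.sqrt(epsilon_0*kT/(e**2*n))
    print(lambda_D)  # ~3.55e-3 m, as stated above

    l = np.array([2, 5, 10, 20, 30, 40, 80, 200, 400, 1000, 2000])*1e-3
    V = np.array([0.16, 0.48, 0.8, 1.36, 2, 2.56, 4, 6, 8])

    lam = l/lambda_D  # ~0.56 up to ~563
    eta = e*V/kT      # 2 up to 100 for the attracted electrons
    print(lam, eta)

This reproduces the λ = 5.63 quoted for the 20 mm probe of Fig. 2 and confirms that the voltage sweep spans η ∈ [2, 100].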
Since Laframboise [2] found finite-radius effects to be negligible for radii up to the Debye length for infinite-length cylinders, we have chosen a probe radius of 1 mm, well below the Debye length. The cylindrical probe is centered inside a cylindrical simulation domain of radius 40 mm and a length extending 40 mm beyond the probe in the ±z directions. It is important that the outer boundary be sufficiently far away from the probe, since the potential is set to 0 V there (Dirichlet boundary conditions). The mesh is generated using Gmsh [23], with a resolution of 6 mm on the outer boundary and 0.2 mm on the probe surface. Extensive experiments were carried out with different domain sizes and resolutions prior to settling at these values, to ensure the results are as accurate as practically possible. PTETRA automatically computes a suitable time step that resolves the plasma period, as well as being small enough that a typical particle trajectory does not cross more than one Voronoi cell in any given time step. The time step also accounts for possible increases in particle energies and speeds near objects biased to various potentials. Whereas the profile function g (representing the attracted-species current) should be independent of the mass of the repelled species (ions), PTETRA does not discriminate species when recording surface currents. It is therefore important that we reduce the ion current in the simulations to an acceptably low level by having sufficiently massive ions. We have chosen to use an ion mass of 1 atomic mass unit (hydrogen ions) for η < 10, and a reduced mass of 1/16 atomic mass unit for η ≥ 10, where the ions are more strongly repelled. Using artificially low mass ions in the simulations has the advantage of reducing the time needed to reach steady state, while having a negligible effect on the collected currents. Indeed, using OML theory as an order-of-magnitude estimate (see Sec. II), the resulting repelled-species current should be less than roughly 0.2% of the attracted-species currents. Whereas we run the η < 10 simulations to 40 μs to reach a steady state, the higher voltage simulations only need to run 10 μs due to the lower ion mass, which is convenient, since these simulations have a smaller time step to account for more energetic electrons near the probe. As is standard practice in PIC simulations, we employ simulation particles that correspond to multiple physical particles in order to reduce the cost of the simulations. In each simulation, we prefill the domain uniformly with 50 million simulation particles of each species, meaning that each simulation particle corresponds to between 2.9 and 73 physical particles depending on the size of the domain (which depends on the probe length). Experiments were also carried out with different numbers of simulation particles to verify that these values were indeed sufficient. It is also worth noting that the current density through the probe is averaged with a relaxation time of 1 μs according to the relaxation scheme in [21].

B. Selected results

An example from one of our simulations is shown in Fig. 2. The lower part shows the probe surface, with the current density through each facet. The current density at each facet is then multiplied by 2πr to get current per unit length, and then divided by i_OML to get data points for the profile function, as indicated in the scatter plot. Currents from the circular faces at the ends of the probe are excluded.
Clearly, there is a significant amount of particle noise which is not part of the underlying profile function. From the dimensional argument in Sec. III, the profile function cannot vary on spatial scales much smaller than the Debye length. To smooth out this noise without making any assumption on the form of the profile function (to perform a nonparametric regression), we use a local quadratic regression with a Gaussian weighting window [24, pp. 191-199]. The window has a standard deviation of 0.1 Debye lengths in order not to suppress actual variations in the profile function. Local polynomial regression is superior to, for instance, a moving average in that it does not underestimate the slope near the edges. Note that we do not employ the popular method for enhanced robustness described in Ref. [25], since in our case this mostly treats higher-valued data points as outliers, leading to a small negative bias in the obtained profile function. The global regression also seen in Fig. 2 will be explained in Sec. IV C. The barely visible yellow band in Fig. 2 is a pointwise 99% confidence interval of the regression, created by performing local regression on 2000 bootstrap datasets [24, pp. 249-250] and taking the middle 99% as the confidence interval. Bear in mind that this is only an estimate of the distribution of data points mapped through the local regression operation. It accounts neither for errors due to an incorrectly applied window nor for errors in the underlying PIC simulation. Figure 3 shows the profile function for a selection of probe lengths and voltages. The profile functions feature a characteristic peak near each end of the probe, and the peaks' magnitudes increase with increasing voltage (mind the axes). For shorter probes, the peaks merge into one another, whereas for longer probes, there is a flat mid-section which approaches the value predicted by OML theory for an infinitely long probe. The simulations with η = 2 exhibit more noise than the other simulations, even after the local regression. This is reasonable, given that a probe with a lower potential attracts fewer electrons, i.e., collects less current, and that the signal-to-shot-noise ratio is proportional to the square root of the current [26], [27, pp. 475-476]. VTK files with surface current densities are made publicly available for all 99 probe simulations [28], as is the computer program LOCALREG [29] used to perform local regressions. The characteristic profile of g(ζ) can be understood from the velocity distribution of particles at selected points along the cylinder. This is illustrated in Fig. 4, where cross sections of the particle velocity distribution function f at the probe surface (x = 0, y = 1 mm) are plotted at ζ = 0.56 and 11.25; the former corresponding to the left peak in Fig. 3 (η = 2, λ = 22.51), and the latter, to the middle of the probe. These positions are identified with two vertical lines in the third panel from the left, in the third row in Fig. 3. The cross sections, corresponding to the v_x = 0 plane, were chosen so as to illustrate the left-right asymmetry near the left end of the probe. Distributions were calculated using Liouville's theorem for the one-particle distribution function in a collisionless plasma, and particle backtracking [30]. The figure shows the distribution function multiplied by the thermal speed to the third power, f × v_th³, while the velocity coordinates v_y and v_z are normalized to the thermal speed.
The peak in the collected current density at ζ = 0.56 is consistent with the larger extent of the distribution function for v_z > 0, which in turn corresponds to particles coming from the left which, for an infinite probe, would have been collected farther to the left. Indeed, in that case, particles approaching from the left with a velocity nearly parallel to the probe axis would be collected to the left of ζ = 0. Moreover, particles grazing the end of the probe from below can be deflected by the attractive sheath electric field, and be collected at y = 1 mm near the edge, while with an infinite probe, they would have been collected below the probe and to the left of ζ = 0. The energization of particles collected at the probe is clearly seen in Fig. 4. The background nondrifting Maxwellian electron distribution function assumed in the simulations has its maximum at v = 0, and it is spherically symmetric in velocity space. At the probe, all particles must have an energy of at least ηkT = 0.16 eV, corresponding to a normalized speed of 2. In particular, the maximum of the distribution function, corresponding to particles at rest away from the probe, is exactly at a normalized speed of 2 in both figures, and it marks the boundary between where f vanishes identically (v/v_th < 2) and where it has nonzero values (v/v_th ≥ 2). In the upper panel of Fig. 4, the extension to the right in the circle-arc boundary is due to particles coming from the left that would have been collected elsewhere on an infinitely long cylinder, as explained above. In the lower panel, the boundary in f is nearly straight at v_y/v_th = −2, but a close look reveals a small upward curvature due to finite-length effects. In this case, with the values of λ and η considered, it is possible for particles with the right energy and grazing incidence coming from below on either side to be deflected by the sheath and be collected at the probe center. It should also be noted that the boundary between f = 0 and f > 0 in the figure is not perfectly smooth, as one might expect analytically. This is due in part to the discretization of f on a finite grid, and the discretization of the cylinder in terms of triangular facets.

C. Curve fits

As mentioned in Sec. IV A, many more simulations (99 in total) were made than illustrated in Sec. IV B. In all cases, similar basic profiles were found as illustrated in Fig. 3. With shorter probes, with small values of λ, there is strong overlap between end effects: g(ζ) has a single maximum at the probe center, and a monotonic decrease toward the ends. As λ increases, the characteristic central hump progressively splits, leading to two humps that remain near each end. It turns out that the profile function, as indicated by the nonparametric local regression in Fig. 3, can be parametrized rather well with the following analytic expression:

    g̃(ζ; λ, η) = C + h̃(ζ) + h̃(λ − ζ),    (12)

where g̃(ζ) is constructed so as to be left-right symmetric about the probe center. In addition, each h̃ function describes the edge effects due to one end, approaching zero far away from its respective end. For long probes, δ is the distance from either end to the peak in g̃ (this interpretation is the reason for including the α⁻¹ term). Further on, α relates to how fast the edge effects decay when moving inwards from the peak, and A relates to the amplitude of the peaks, the amplitude being Aα⁻¹ exp(αδ). Due to the normalizations, all coefficients are expected to be of order unity.
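Before detailing the fitting procedure, here is a minimal, self-contained sketch of fitting such a symmetric edge-peak profile to noisy data. The form used for h̃ below (a peak at distance δ from the end, with decay rate α) is an illustrative stand-in, not the paper's exact expression, and the synthetic data stand in for the PTETRA surface currents; the edge weighting follows the scheme described in the next paragraph:

    import numpy as np
    from scipy.optimize import curve_fit

    lam = 30.0  # normalized probe length

    # Illustrative edge term: peak of height A/alpha at zeta = delta,
    # exponential decay with rate alpha (an assumed stand-in for h-tilde)
    def h(zeta, A, alpha, delta):
        return A*(zeta - delta + 1/alpha)*np.exp(-alpha*(zeta - delta))

    def g_model(zeta, C, A, alpha, delta):
        return C + h(zeta, A, alpha, delta) + h(lam - zeta, A, alpha, delta)

    # Synthetic noisy stand-in for the simulated surface-current data
    rng = np.random.default_rng(0)
    zeta = np.linspace(0.0, lam, 600)
    g_data = g_model(zeta, 1.0, 0.5, 1.0, 1.0) \
        + 0.05*rng.standard_normal(zeta.size)

    # At least half of the total weight on points within 5 Debye lengths
    # of either edge (reduces to equal weights for short probes)
    edge = (zeta < 5.0) | (zeta > lam - 5.0)
    w = np.where(edge, max(1.0, (~edge).sum()/edge.sum()), 1.0)

    # curve_fit minimizes sum(((y - f)/sigma)**2), so sigma = 1/sqrt(w)
    # realizes the weighted sum of squared residuals
    popt, _ = curve_fit(g_model, zeta, g_data, p0=[1.0, 0.5, 1.0, 1.0],
                        sigma=1.0/np.sqrt(w))
    print(dict(zip(["C", "A", "alpha", "delta"], popt)))

With the true values (C, A, α, δ) = (1, 0.5, 1, 1), the fit recovers the parameters to well within the noise level.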
The fitting coefficients C, A, α, and δ are determined using a weighted nonlinear least squares method, i.e., by minimizing the sum of weighted squared residuals,

    Σ_{i=1}^{N} w_i [g̃(ζ_i) − g_i]²,    (14)

where (ζ_i, g_i) are the data points in the scatter plot, w_i the weight assigned to each data point, and N the number of data points/facets. As the probe lengthens, a decreasing fraction of the data points will be within the peak region near the edges, and if equal weights were used, the algorithm would fail to capture the peaks near the edges for long probes with small voltages. To correct this, at least half of the total weight is spent on the N_e data points within 5 Debye lengths of either edge. More precisely, if w_e is the weight of the N_e data points within 5 Debye lengths of either edge, and w_m is the weight of the N_m remaining data points in the mid-section, the weights satisfy the following equations:

    N_e w_e = max(N_m w_m, N_e w_m),    N_e w_e + N_m w_m = 1.    (15)

The max operator prevents w_e < w_m for λ < 20 by making the scheme reduce to a nonweighted least squares method. The fitting coefficients are illustrated graphically in Fig. 5. For λ > 3, the coefficients form more or less smooth surfaces, which can then be used to do interpolation. For the three shortest lengths, i.e., λ < 3, the coefficients are less well-behaved. In addition, it was necessary to manually bound some of the coefficients for these simulations, since there are several minima in the sum of squared residuals. Coefficients for λ < 3 should therefore be regarded as less reliable, and used with caution. Since the coefficients are determined independently for each simulation, they are in effect functions of λ and η as independent variables. It is interesting to observe, however, that as λ increases, all the fitting coefficients asymptotically approach a value dependent only on η. This is not a coincidence; since each h̃ function describes the edge effects due to one end, its shape will necessarily become independent of λ for sufficiently long probes, when edge effects do not overlap, as described in Sec. III. This happens at λ ∼ 300. It follows that the coefficients can be extrapolated to arbitrary lengths, by using the right-most values as the asymptotic values. The model can thus be applied to probes of arbitrary length, all the way down to below the Debye length. As for the probe potential, we have simulated the range η ∈ [2, 100]. It further follows from the definition of g and the thermal current i_th that g(ζ; λ, 0) = 1 everywhere. This can be represented in terms of the parametrization Eq. (12) by setting C = 1 and A = 0. α and δ are not uniquely defined, but we choose them as for η = 2 to provide for smooth interpolation, and add data points such as to make the grid of coefficients rectangular and structured. With this, the model covers the range η ∈ [0, 100]. For η < 0, Eq. (5) can be used. Numerical values of the coefficients are included in the dataset along with the VTK files [28]. Moreover, current profiles and integrated total currents can be programmatically accessed through the Langmuir library [31], in which we have implemented linear interpolation between the coefficients, as used for the remainder of this paper. We have also made use of [32,33]. While it is visually evident from Fig. 3 that the parametrization is quite good, it is of interest to quantify the errors of the fits. However, due to the large particle noise, the sum of squared residuals [Eq. (14)] is a large number and is not representative of the statistical errors in the fits.
The coefficient of determination (often referred to as R²) is also not suitable, due to the nonlinearity of the fit. Instead, we report two measures. The first is the error in the total current collected by the probe. The total current collected by the probe can be computed from the fitted expression g̃ as

    I = i_OML λ_D ∫₀^λ g̃(ζ) dζ.    (16)

This current agrees with the total current collected by all facets of the probe (excluding the circular end faces) within 0.4% for all 99 simulations. The smallness of the error according to this measure is certainly desirable. Nonetheless, it does not say much about how well the shape of g̃ matches the true g. The second measure is therefore a relative L² error norm against the local regression ḡ, defined as follows:

    ε = √( ∫₀^λ [g̃(ζ) − ḡ(ζ)]² dζ / ∫₀^λ ḡ(ζ)² dζ ).

The integrals were evaluated numerically using the mid-point rule [34, p. 286] with a step size Δζ = 0.02. For the shortest probes (2 mm), this error is up to 0.05 and the fit visibly deviates from the local regression. For all other probes, the error is below 0.03, and there are only small visible deviations. It should be understood that this error is not all in g̃, but also includes leftover noise in the local regression ḡ that is more effectively filtered out by g̃. The fact that the fits are so good, while there are still some irregularities in the coefficients in Fig. 5, indicates that the fits are not very sensitive to the exact values of the coefficients. It is also of interest to see how the model compares to OML theory. Since g → C as λ → ∞ and C is between 1.00 and 1.05 for all η for the largest value of λ, our model is within 5% of OML theory. The error is due in part to inaccuracy in the PIC simulations, and in part to the fitted expression g̃'s inability to exactly match the flat, middle part of g, as witnessed in Fig. 3. These errors are believed to be roughly equal contributors, and as such the PIC simulations are believed to be accurate to within a few percent. Considering δ, when λ is sufficiently large and the peaks do not merge together, the peaks are always roughly 1 Debye length from the edges; a little more for the higher voltages, and a little less for lower ones.

V. APPLICATIONS AND DISCUSSION

In this section, we briefly describe possible applications of the results presented above.

A. Finite-length probe characteristics

It can be seen from Eq. (2) that the current collected by an infinitely long cylindrical probe can be written as a power law:

    I(η) = c η^β,    (18)

with β = 0.5 for η ≫ 1. It turns out that this expression also holds true for spherical probes, but with β = 1 [1]. For this reason, it has several times been assumed that this expression will also hold for thin, finite-length cylindrical probes, with β approaching 0.5 as the length increases and 1 as it decreases and the probe approaches the shape of a small grain [13,15,16]. With our model, we can compute the characteristics for a probe by evaluating Eq. (16) for a sweep of voltages η ∈ [10, 100]. Figure 6 shows the characteristics obtained for five different probe lengths. We remark that the interpolation is not linear in I but in the coefficients, and that this may lead to irregular behavior for short probes where the coefficients change more rapidly. This is seen for the case λ = 1. An alternative approach could be to compute I on the grid of (λ, η) values for which simulations were carried out, and then interpolate I. We also made a least squares curve fit to Eq. (18) in order to test the power-law hypothesis, as indicated by dotted lines. The estimated β values are also shown in the figure.
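The power-law test itself is easy to reproduce: sample a characteristic over η ∈ [10, 100] and fit a straight line in log-log space. A minimal sketch, using the infinite-cylinder OML approximation as a stand-in characteristic (a finite-length model would instead integrate the fitted profile g̃):

    import numpy as np

    eta = np.linspace(10.0, 100.0, 50)
    I = 2/np.sqrt(np.pi)*np.sqrt(eta + 1.0)  # I/I_th, infinite cylinder

    # Least-squares fit of I = c*eta**beta, linear in log-log space
    beta, log_c = np.polyfit(np.log(eta), np.log(I), 1)
    print(beta)  # ~0.48, approaching the asymptotic beta = 0.5 for eta >> 1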
It is clear that the power law is indeed a very good approximation, and moreover, that β behaves as expected. One should remember, however, that for a practical probe β is not constant. As the density and temperature vary, the probe length-to-Debye-length ratio λ changes, and β is a function of λ. Recall also that the current collected by the circular end faces is not included in the integrated expression, and that this may change the characteristics for the shortest probes somewhat. B. Guarded probes Until now, we have mainly been concerned with probes with both ends free. The current collected near the end will certainly be altered in a nontrivial way when the probe is attached to some other, arbitrary object. To get a more predictable behavior, it is customary to use a "guard", which should ideally be an extension of the cylindrical probe, and which often has the same potential as the probe but is not in direct contact with it, such that the current collected by the guard can be excluded from measurements [3,5,12]. This is believed to eliminate end effects near the guard. In practice, the guard usually has a slightly larger radius, and there is an insulating transition in between the guard and the probe, but these differences are kept as small as possible such that they can be ignored, at least to a first approximation. There are two ways of accounting for a guard with our model. The first approach is to let λ = λ_g + λ_p, where λ_g is the length of the guard and λ_p is the length of the actual probe. The current collected by the probe can then be computed similarly as in Eq. (16), except that the integration should start at the lower limit λ_g. Figure 7 shows the current profile for a probe with λ_p = 30 and η = 10, both without a guard (λ_g = 0) and with a finite-length guard with λ_g = 5. The dotted lines indicate the part excluded from the integration. Notice also that the profile function g̃ (or even just the h part of it) can be used as a first approximation of how long the guard must be in order to suppress edge effects to a certain level. We emphasize, however, that the dotted line for λ_g = 5 is not representative of the actual current profile throughout the guard, since the leftmost end of the guard is not free but instead attached to some other carrier object, probably at another voltage, as indicated in Fig. 1. This alters the current profile, possibly also extending the edge effects, especially if the voltage difference between the probe and the carrier is large. If significant edge effects from the carrier object extend into the probe, the model presented herein may no longer apply. It is therefore important to have a sufficiently long guard, which leads us to the second method of accounting for a guard. If we define an ideal guard as one which lets no edge effects extend into the probe whatsoever, the end effects may be eliminated from the profile function by removing the second term h(ζ) from Eq. (12) and using the fitting coefficients for a very long probe. This is, in fact, the same as letting λ_g → ∞, and is also illustrated in Fig. 7. This approach will not include trace edge effects from a nonideal guard, which in any case may not be representative. As in Sec. V A, this has been evaluated for a sweep of voltages η ∈ [10, 100], and used to estimate the β parameter, as indicated in the figure. The guard is seen to lower β a bit compared to a free probe, but not by much. β is similar for the two approaches of including a guard, to within the accuracy of the model.
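A sketch of the two guard treatments described above, under the same hypothetical stand-in profile as before: the first truncates the integration at λ_g, the second drops the guard-side edge term entirely (the ideal guard, λ_g → ∞).

```python
# Sketch: two ways of accounting for a guard, following the text.
import numpy as np

def gtilde(zeta, lam, eta):
    h = lambda z: np.exp(-np.abs(z - 1.0))
    return np.sqrt(eta) * (1.0 + 0.5 * (h(zeta) + h(lam - zeta)))

def current_with_guard(lam_p, lam_g, eta, dzeta=0.02):
    # Approach 1: model a probe of total length lam_g + lam_p, but only
    # integrate the current over the probe section [lam_g, lam_g + lam_p].
    lam = lam_g + lam_p
    zeta = np.arange(lam_g + dzeta / 2, lam, dzeta)
    return np.sum(gtilde(zeta, lam, eta)) * dzeta

def current_ideal_guard(lam_p, eta, dzeta=0.02):
    # Approach 2: an ideal guard lets no edge effects into the probe, so the
    # guard-side edge term is dropped and only the free-end hump remains.
    h = lambda z: np.exp(-np.abs(z - 1.0))
    zeta = np.arange(dzeta / 2, lam_p, dzeta)
    g = np.sqrt(eta) * (1.0 + 0.5 * h(lam_p - zeta))
    return np.sum(g) * dzeta

print(current_with_guard(30, 5, 10), current_ideal_guard(30, 10))
```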
Our results support previous findings [14,16] that cylindrical Langmuir probes must be much longer than ten Debye lengths in order to make end effects negligible, even with an ideal guard. Too short a guard may have nontrivial effects on the characteristics, and on β in particular, and is not covered by our model. C. Inferring plasma parameters Langmuir probes are used to infer plasma parameters such as the electron density by inverting the current-voltage characteristics for a set of measured currents at known bias voltages with respect to, for instance, a spacecraft or a rocket [1,2,9,13]. As an example, several recent space missions have used the fixed-bias m-NLP instrument [9-12], where a few cylindrical probes at different, positive voltages are used to measure dI²/dV, and OML theory is used to make predictions of the electron density. According to OML theory, dI²/dV is proportional to the electron density, and independent of other unknowns, such as the ion density, the electron and ion temperatures, and the floating potential. However, if an inaccurate current-voltage characteristic is assumed, for instance by neglecting finite-length effects, it will necessarily lead to errors in the measurements. Using a more accurate, finite-length characteristic should alleviate these errors. While this is left as a future study, two ways of doing this are envisaged. Consider a set of measured currents {I_p} collected by probes p with known voltages {V_0p} with respect to some common ground, i.e., the unknown floating potential V_0 of a spacecraft. In addition to the voltage, the probe current can be considered a function of plasma parameters such as the electron density and temperature, I(V_0 + V_0p; n, T). One may then find the parameters (n, T, V_0) by minimizing the mean squared deviation between the measured currents I_p and the model predictions I(V_0 + V_0p; n, T) for all p. This requires the problem to have a unique solution. In particular, the characteristic must be sufficiently sensitive to all parameters involved. Similar methods have been considered where an analytical expression is available for I(V_0 + V_0p; n, T) [13,16,35], but we remark that the method may be feasible even without one. It is sufficient for minimization algorithms to have a single callable function, and this function may very well interpolate between coefficients internally. It may be difficult to arrive at a sufficiently robust and efficient inference algorithm by means of minimization. A promising alternative, avoiding the inversion problem altogether, is to use the measured currents {I_p} as input to a machine learning network or multivariate kriging regression, which is trained to predict plasma parameters such as the density. Such an approach has already been investigated for synthetic data [36]. For further studies along these lines, it would be natural to make comparisons with existing analysis techniques on real data. It should also be remembered that a cylindrical probe attached to a low Earth orbit satellite moving with velocity v_d through a geomagnetic flux density B would be affected by an induced motional potential gradient v_d × B, which would in particular deteriorate measurements by long probes [5]. This variation of potential would have a similar "energy smearing" effect as probe contamination. For example, assuming an orbital speed v_d ≈ 7500 m/s and a magnetic flux density B ≈ 35 μT, the motional potential gradient would be ∼0.26 V/m.
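A minimal sketch of the first (minimization-based) approach is given below, assuming a placeholder forward model; in practice I(V; n, T) would come from interpolating the fitted coefficients, e.g., through the Langmuir library, rather than from the toy power law used here.

```python
# Sketch: inferring (n, T, V0) from a set of measured probe currents by
# least-squares minimization, as outlined in the text. model_current() is
# a hypothetical placeholder characteristic, not the paper's model.
import numpy as np
from scipy.optimize import minimize

KB, QE = 1.380649e-23, 1.602176634e-19

def model_current(V, n, T):
    # Placeholder finite-length characteristic: I = c * n * sqrt(T) * eta^beta
    eta = np.maximum(QE * V / (KB * T), 1e-3)
    return 1e-14 * n * np.sqrt(T) * eta**0.6

V_bias = np.array([2.0, 3.0, 4.0, 5.0])             # known bias voltages V_0p
I_meas = model_current(V_bias + 0.5, 1e11, 2000.0)  # synthetic "measurements"

def cost(p):
    n, T, V0 = abs(p[0]), abs(p[1]), p[2]           # guard against negatives
    return np.mean((I_meas - model_current(V0 + V_bias, n, T))**2)

res = minimize(cost, x0=[5e10, 1500.0, 0.0], method="Nelder-Mead")
print(res.x)  # approximate (n, T, V0); uniqueness requires sensitivity to all
```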
For a cylindrical probe of length 5 cm oriented perpendicularly to B, however, this would amount to a potential smearing on a given probe of approximately 13 mV, which is small compared to typical probe bias voltages. On the other hand, if multiple fixed-bias and well separated probes are used on a given satellite, then depending on the distance between probes and their relative orientations with respect to B, the motional potential difference between probes could be significant, and would have to be accounted for in processing probe data. VI. CONCLUSION Collected current profiles along thin finite-length cylindrical probes have been studied with particular attention to end effects. Making use of Buckingham's π theorem, the current profile along a probe was shown to depend on only five independent dimensionless parameters. By limiting our attention to probes with a radius smaller than the attracted-species Debye length λ_D, and assuming a large plasma parameter, nλ_D³ ≫ 1, the number of dimensionless parameters was reduced from five to three, consisting of the normalized position along the cylinder ζ = z/λ_D, the normalized probe length λ = l/λ_D, and the normalized probe potential η = −qV/kT with respect to the background plasma. Simulations have been made to cover a broad range of parameters relevant to probes used on many recent satellites deployed in ionospheric plasmas [11,12]. Based on kinetic simulation results for the normalized current collected per unit length g(ζ), as a function of the normalized axial position ζ, it was possible to construct accurate empirical fits involving coefficients that can be interpolated in parameter space, in order to predict magnitudes and profiles of collected currents for arbitrary values of ζ, λ, and η within the range of parameters considered. The empirical formula derived, along with the fitted parameters, has also been found to predict total collected currents with excellent accuracy. Our main result is the quantification and parametrization of the current collected from the attracted species along thin finite-length cylindrical probes. The reported parametrization can be interpolated and used to calculate the current collected by positive, fixed-bias multineedle Langmuir probes used on several satellites. Near the ends of a probe, the collected current per unit length is enhanced. For short probes, the overlap between this enhancement at the two ends leads to a single maximum in g(ζ). As the normalized length of a probe increases, however, the two end effects separate, and g(ζ) exhibits a distinctive two-hump profile, with maxima approximately one Debye length from either end. For long probes, sufficiently far from these maxima, the collected current per unit length accurately reproduces currents reported by Laframboise [2]. We show that for sufficiently large voltages (η = −qV/kT > 10), the current collected by a thin probe does scale approximately as η^β. For probes of practical lengths in ionospheric plasma conditions, however, we find that β is typically larger than the 0.5 value predicted with OML theory for an infinite probe. This has implications, for example, for the proposed use of fixed-bias needle probes on satellites to infer plasma electron density independently of the temperature [9]. Indeed, referring to Eq. (18), it can be seen that with 0.5 < β < 1, the derivative of I² with respect to V is proportional to n²V^{2β−1}/T^{2β−1}, and with β > 0.5, this derivative depends on the density, the temperature, as well as on the probe voltage.
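For completeness, the scaling quoted above follows in two lines, assuming only that the thermal-current prefactor scales as n√T, as in OML theory:

```latex
% Sketch: why dI^2/dV retains T and V dependence when beta > 1/2.
\[
  I \propto n\sqrt{T}\,\eta^{\beta}
    = n\sqrt{T}\left(\frac{-qV}{kT}\right)^{\!\beta}
    \propto n\,V^{\beta}\,T^{1/2-\beta}
  \quad\Longrightarrow\quad
  \frac{dI^{2}}{dV} \propto n^{2}\,\frac{V^{2\beta-1}}{T^{2\beta-1}} .
\]
% For beta = 1/2 (OML, infinite probe) the V and T dependence cancels,
% leaving dI^2/dV proportional to n^2 alone.
```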
The inference of the density based on the OML value β = 0.5 must therefore lead to discrepancies at levels depending on these three parameters. Extending our analysis to correct this predicament is beyond the scope of the present study, but we believe that the characterization of finite-length probe characteristics presented here provides the needed tools to better interpret fixed-bias multi-needle probe measurements in terms of plasma density and temperature. We note in closing that our analysis was based on several simplifying assumptions, and we recognize that it does not answer all questions concerning current collection with a cylindrical probe. It was assumed that finite-radius effects are negligible for radii less than the Debye length, which, according to Laframboise [2], must be the case in the middle of long probes. A secondary finite-radius effect may, however, still exist within a few radii of either end. Considering space applications, the assumption of a Maxwellian background distribution for electrons is well justified by the fact that mid-latitude ionospheric plasma is sufficiently collisional for the electron distribution to be near Maxwellian. The neglect of a drift electron velocity is also justified by the fact that the electron thermal speed is generally much larger than the ram speed of low Earth orbit (LEO) satellites and ionospheric winds. As for the assumption of zero magnetic flux density, with ionospheric electron thermal Larmor radii of the order of a few centimeters, geomagnetic flux densities will likely affect current characteristics and collected current profiles. In addition, as pointed out by Brace [5], a cylindrical probe attached to a low Earth orbit satellite moving through a geomagnetic flux density would be affected by an induced motional potential gradient leading to "energy smearing." In future work our analysis could be repeated with a magnetic flux density included. It would then become considerably more complex, as two nontrivial parameters, consisting of the magnitude of the magnetic flux density and its angle with respect to the probe axis, would have to be taken into account. Buckingham's π theorem would then lead to i(z) depending on five parameters, instead of the three in Eq. (9). Kinetic simulations covering a broad range of the relevant parameter space would then be significantly more computer intensive. This could be carried out for specific missions operating in restricted space environment conditions, and should be considered in future studies. Additional complexities could be considered, such as those associated with the proximity of other satellite components and their detailed geometry, but detailed analyses of such cases should be made for specific missions, well ahead of deployment in space.
Tensile Strength of Carbon Fiber: Carbon fiber is a material with a vast area of utilization. When combined with a resin, we get a composite, which provides great mechanical properties, has low mass and can be utilized in many different applications. Carbon fiber cloth can be found in a wide range of forms and shapes, differing in weight and in the shape of the weave. Manufacturing a composite part can be achieved with a variety of techniques and processes, each of them having advantages and disadvantages over the others. One of the most common processes for producing composite parts is the open moulded hand lay-up process. This process is economically viable, has a low equipment cost, and can produce parts with satisfactory results. A composite sheet is manufactured using the hand lay-up process. Three testing samples are taken from the composite sheet, and their tensile strength is determined using the ASTM D3039 standard. INTRODUCTION The composite material produced with the combination of a polymer matrix and reinforcement fibers provides a high-performance composite. The composite material obtained with the combination of carbon reinforcement fibers and an epoxy-based polymer matrix can provide the same mechanical properties as conventional materials used in the industry, like aluminum or steel, with the benefit of being lightweight. A carbon fiber composite, compared to aluminum, can give a weight reduction of more than 20%, while compared to steel, the weight reduction exceeds 50% [1]. Because of the weight saving and the rest of its mechanical properties, carbon fiber is primarily used in the aerospace and space industries. This material is also making major inroads into the high-end car industry, civil engineering, wind turbine blades and sporting goods [1]. A large number of companies and research facilities are running research and development programs to develop processes and technologies for mass production. To maximize the performance of the fibers in the composite structure, the orientation of the fibers has to be arranged in the direction of the loads, according to the geometry of the part. The fibers can be arranged in a unidirectional configuration, in which all fibers are placed in parallel, or in a quasi-isotropic configuration, in which fibers are placed at 0, 90 and 45°, as presented in Fig. 1 [1]. Figure 1 Unidirectional and quasi-isotropic laminates. To take full advantage of the mechanical properties of the reinforcement fibers, it is important to have a high volume fraction of fibers in the material (55-60%), for the purpose of avoiding fiber curvature or misalignment as well as limiting the void content in the resin (<3%) [1]. There are different types of technologies and processes to manufacture a part from carbon fiber material. These technologies and processes depend on the form of the fibers. The carbon fibers can be found and used in the form of long fibers, short fibers or woven in the form of sheets (Fig. 2 [3]). Most promising is the application of carbon fiber weave in combination with thermoset resins, of which epoxy resins are the most commonly used. Epoxy resins have high mechanical strength and good chemical corrosion resistance, and perform well at elevated temperatures of up to 120 °C [2]. Based on these custom-built sheets, complex structures can be produced. The anisotropy and inhomogeneity offer the designer many options for structural optimization, but these characteristics also lead to complex stress and strain states within the material.
The easiest way to manufacture composite parts is hand lamination. Reaching high quality requires only simple tooling, but good manual skills. This process is performed at room temperature. The volume of reinforcement fiber in this process is limited to 40% due to the nature of the process. This process produces a composite part with many imperfections and a high void content, due to the lack of vacuum assistance or another type of pressure on the composite part. The reinforcement fibers can be found in a wide variety of weaves (Fig. 3), with different weights and forms, each offering different benefits. To achieve the best mechanical properties, a combination of weaves is required. TESTING METHOD AND MECHANICAL PROPERTIES The test method determines the in-plane tensile properties of polymer matrix composite materials reinforced by high-modulus fibers. The composite material forms are limited to continuous fiber or discontinuous fiber-reinforced composites in which the laminate is balanced and symmetric with respect to the test direction. A thin flat strip of material having a constant rectangular cross section is mounted in the grips of a mechanical testing machine and monotonically loaded in tension while recording the load. The ultimate strength of the composite material is determined by testing the maximum load that the composite coupon can carry before failure. By monitoring the strain of the coupon with strain or displacement transducers throughout the testing process, a stress-strain curve can be obtained, from which the ultimate tensile strength, Poisson's ratio, tensile modulus of elasticity and transition strain can be derived. ASTM D3039 provides the minimum requirements for designing a specimen for composite coupons, presented in Tab. 1. These requirements are insufficient for producing a properly dimensioned coupon drawing with tolerances. Therefore, recommendations from Tab. 2 of ASTM D3039 are taken into account for manufacturing a specimen [4]. This method of testing provides valuable information for the material's specifications, which is beneficial for research and development, analysis, quality assurance and structural design. The tensile strength of the composite material depends on many factors: type of material, lay-up of the material, preparation, specimen stacking, specimen preparation, specimen conditioning, specimen alignment, the environment in which the testing is conducted, type of grips used on the machine, speed of testing, temperature, void content in the coupon and volume percent reinforcement. A carbon fiber sheet is manufactured for the purpose of testing the mechanical properties of a composite produced using the hand lay-up process. From the composite sheet, three samples are taken which are used for testing the tensile strength of the composite; their dimensions are presented in Fig. 4. CARBON FIBER MANUFACTURING PROCESSES There are many different processes for manufacturing carbon fiber components, varying from a manual process of manufacturing, with hand lay-up of carbon fibre cloth, to more complicated processes using autoclave ovens with pre-impregnated carbon fiber. Each of these processes has its own advantages and disadvantages when compared to the others. The requirements of the manufactured part should be considered carefully beforehand in order to choose the process best suited to the project [7].
This process has low demands and requirements for tools and environmental conditions, but places high requirements on the worker's skill [6]. The final result achieved with this process can vary from a high-grade product with satisfactory results to a low-grade product with many imperfections, weak areas (due to excess resin), voids, pin holes, etc., depending on the worker's skill. The hand lay-up process consists of laying carbon fiber cloth in the mould and soaking the cloth with a predetermined amount of resin. The wetting of the carbon fiber cloth can be achieved with a brush, squeegee or roller. This task is repeated until the desired thickness of the laminate is achieved. Before starting the hand lay-up of the carbon fiber cloth, the mould surface has to be thoroughly cleaned with acetone to remove any traces of wax or grease. The mould surface also has to be dust free, to prevent inclusions of dust particles in the surface of the part. To prevent sticking between the part and the mould, there has to be a barrier between the two. This barrier is attained with the help of a release agent, which can be PVA (polyvinyl alcohol), wax, or a suitable chemical release agent. One of the oldest and most common release agents is PVA, a release agent dissolved in ethanol. It is commonly used in combination with wax and provides excellent results in releasing the part from the mould. Peel ply is a woven cloth typically made of nylon, glass or other synthetic materials. This layer of woven fabric is applied as the final layer of the composite part to provide a porous surface suitable for adhesive bonding to other surfaces. In addition, peel ply is used for drawing excess resin from the part, in combination with a bleeder layer. MANUFACTURING OF A CARBON FIBER SHEET Manufacturing of the composite sheet takes place in a room with low humidity and a temperature of 21 °C, which is in the range of the optimal room temperature for producing composite parts. The carbon fiber sheet is manufactured using the open moulded hand lay-up process. The sheet is made of three individual layers of 12K carbon fiber cloth, whose weave is a 2×2 twill weave pattern, presented in Fig. 6. This weave pattern is composed of tows running in the 0° and 90° directions, giving the illusion of a diagonal pattern. The 2×2 in the name (a 4×4 weave can also be found) stands for tows going under two tows and then over two tows of carbon fibers. This weave is often used for complex shapes, because its looser weave can be easily manipulated to fit more complex shapes in the mould. The dimensions of the cloths are 350×200 mm, with approximately 0.6 mm thickness. In theory these layers should produce a ~1.8 mm thick carbon fiber sheet. The fabric to epoxy ratio in this process is 50:50. The weight of the carbon fiber cloth used for manufacturing the carbon fiber sheet is approximately 130 g, so the amount of resin used will be about 155-160 g. The amount of resin used is higher due to the method of application with a brush; this is done to prevent a shortage of resin, because of the tendency of the brush to soak up a small amount of resin. Manufacturing of this carbon fiber sheet can be done in moulds made of a variety of materials, ranging from wood and aluminium to composite and 3D-printed plastic moulds.
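As a quick sanity check on the resin bookkeeping above, a minimal sketch; the 20% brush-loss allowance is a rough figure chosen to reproduce the quoted 155-160 g of resin for 130 g of cloth at a 50:50 fabric-to-epoxy ratio, not a value stated in the text.

```python
# Sketch: resin budgeting for the hand lay-up described above.
def resin_budget(cloth_mass_g, fabric_to_epoxy=1.0, brush_loss=0.20):
    nominal = cloth_mass_g / fabric_to_epoxy   # 50:50 ratio -> equal masses
    return nominal * (1.0 + brush_loss)        # allowance for brush soak-up

print(f"{resin_budget(130):.0f} g")  # ~156 g, within the 155-160 g range
```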
Manufacturing Process The manufacturing of the carbon fiber sheet is done on a glass surface, which serves as the mould surface because it is flat (allowing production of a carbon fiber sheet with a smooth finish) and has no porosity (there is no need for a sealant to prevent a mechanical bond between the mould and the carbon fiber sheet). The glass surface is cleaned beforehand with acetone, to remove any traces of grease or dust. A layer of Rexco Formula Five Mould Release Wax is applied using a clean dry cloth, according to the instructions given in the manual. After applying the final layer of wax, the mould is left for one hour for the residual solvents to gas out before applying PVA. A uniform layer of PVA is applied using a spray gun. After applying the release agent, the PVA is left to dry for 10-15 min, to create a very thin release film. The resin used in this process of producing a carbon fiber sheet is SIKA CR-82 epoxy resin, with SIKA CH80-1 catalyst. This resin has low viscosity and is suitable for use in hand lay-up and vacuum bagging processes where curing temperatures above 75 °C cannot be achieved. The mixing ratio of this epoxy system per the manufacturer's instructions is 100:27 when mixing the two parts by weight [9]. To reduce the number of voids and pinholes on the surface of the manufactured sheet, a layer of resin is applied to the mould surface. After wetting the mould surface, a layer of carbon fiber cloth is applied on the surface. The cloth is wetted out using a brush, and the excess resin is removed with the help of a squeegee or a roller. This step is repeated with the remaining layers of carbon fiber cloth. After applying the last layer of carbon fiber cloth, a final layer of peel ply is applied on the composite sheet, and using a roller, the excess resin is drawn out of the part. After finishing the stacking of the layers of carbon fiber cloth and finalizing with the peel ply, the carbon fiber sheet is left to cure for 24 hours at room temperature. Additionally, the composite sheet is post-cured in an oven for eight hours at 80 °C (Fig. 7) [9]. This is done to acquire a fully cured part and obtain the best properties. With this epoxy resin system, different glass transition temperatures can be achieved, depending on the curing conditions of the composite part. Curing the composite sheet at an elevated temperature of 80 °C for an additional 8 hours raises the glass transition temperature of the composite sheet. TESTING AND TEST RESULTS From the manufactured carbon fiber sheet, three samples were cut out with dimensions of 300 mm length and 25 mm width, presented in Fig. 8. The three coupons are used to test the tensile strength of the manufactured carbon fiber sheet. The test is implemented using the ASTM D3039 standard on a 250 kN tensile strength tester. The test of the three coupons was conducted in a room temperature environment. The test sample is mounted on the tensile strength testing machine using suitable grips for holding the specimen, providing a sufficient and even distribution of pressure to prevent slippage of the specimen during the test. The two grips hold a combined length of 120 mm of the specimen (60 mm on each side), as presented in Fig. 9. The displacement rate between the grips determined by the ASTM D3039 standard is 2 mm/s. To calculate the tensile strength of the composite samples, Eq.
(1) was applied:

σ_max = F_max / A,    (1)

where σ_max is the ultimate tensile strength (MPa), F_max is the maximum load before failure (N), and A is the average cross section of the coupon (mm²). Figure 8 Representation of the test samples before the testing procedure. The results of the ultimate tensile strength of the coupons are shown in Tab. 3. CONCLUSION The open moulded hand lay-up process, combined with its low cost for tools, provides an adequate product, whose quality depends almost entirely on the worker's ability. The final result of the manufacturing of the composite sheet is a product with some surface voids, pinholes and trapped air. The surface of the carbon sheet, due to the nature of the process (lack of a vacuum environment), has many voids. To get a surface free of any imperfections and voids, the sheet needs to undergo additional treatment to achieve a perfectly smooth finish. The final thickness of the manufactured sheet is 2.5 mm. When compared to the thickness of the carbon fiber cloth, which is approximately 1.8 mm, it is concluded that the remaining 0.7 mm is gained from the resin. With this procedure, the final product is resin rich, which in turn gives a part with increased mass and lower mechanical properties in contrast to the other carbon fiber manufacturing processes, due to the excess resin.
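A minimal sketch of Eq. (1) applied to the three coupons. The cross section follows from the 25 mm width and 2.5 mm final thickness given in the text; the failure loads below are hypothetical placeholders, since the values of Tab. 3 are not reproduced here.

```python
# Sketch: sigma_max = F_max / A for the three coupons (N / mm^2 == MPa).
width_mm, thickness_mm = 25.0, 2.5
A_mm2 = width_mm * thickness_mm          # average cross section, 62.5 mm^2

F_max_N = [28000.0, 30500.0, 29200.0]    # hypothetical maximum loads (N)
for i, F in enumerate(F_max_N, start=1):
    print(f"coupon {i}: sigma_max = {F / A_mm2:.0f} MPa")
```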
Spatial variation in automated burst suppression detection in pharmacologically induced coma Burst suppression is actively studied as a control signal to guide anesthetic dosing in patients undergoing medically induced coma. The ability to automatically identify periods of EEG suppression and compactly summarize the depth of coma using the burst suppression probability (BSP) is crucial to effective and safe monitoring and control of medical coma. The current literature, however, does not explicitly account for the potential variation in burst suppression parameters across different scalp locations. In this study we analyzed standard 19-channel EEG recordings from 8 patients with refractory status epilepticus who underwent pharmacologically induced burst suppression as medical treatment for refractory seizures. We found that although burst suppression is generally considered a global phenomenon, BSP obtained using a previously validated algorithm varies systematically across different channels. A global representation of information from individual channels is proposed that takes into account the burst suppression characteristics recorded at multiple electrodes. BSP computed from this representative burst suppression pattern may be more resilient to noise and a better representation of the brain state of patients. Multichannel data integration may enhance the reliability of estimates of the depth of medical coma. I. Introduction Burst suppression is a stereotypical time domain EEG pattern characterized by high-voltage activity alternating with periods of low-voltage activity ('suppressions'). It generally reflects a state of profound brain inactivation and unconsciousness, associated with various normal developmental (early development), pathological (hypothermia, diffuse anoxic brain injury) and therapeutic (deep anesthesia) scenarios. While the neurophysiology of burst suppression remains an area of active investigation, current theories suggest that it arises from a nonlinear interaction between a 'fast' dynamical process that generates background EEG activity and a 'slow' process that periodically interrupts the fast process, leading to suppression of background activity. The slow process is thought to be a depletion-recovery cycle in which some energy resource (such as ATP stores) necessary for the maintenance of background activity is periodically depleted during high-voltage EEG 'bursts', and regenerated during suppressions [1]. Significantly for medical engineers, this process exhibits robust parametric sensitivity to the depth of anesthesia. That is, the duration of suppressions becomes progressively longer and bursts become progressively briefer as the concentration of anesthetic in the brain increases. This makes burst suppression a neurophysiology-based EEG signature that can be used to non-invasively monitor the depth of pharmacologically induced coma in real time. Pharmacologically induced coma is currently used in clinical settings as treatment for patients at high risk of brain injury, whether from physical trauma, drug overdose, or diseases such as intracranial hypertension and status epilepticus. In the case of refractory status epilepticus, defined as ongoing seizure activity resistant to first-line and second-line anticonvulsant agents and lasting more than 30 min, burst suppression-targeting pharmacologically induced coma over extended periods of time is the standard of care [2]. It is thought to stop seizure activity and thereby achieve neuroprotection [3].
A standard medical goal in such pharmacologically induced coma is to maintain the brain in a burst-suppressed state with less than 1 burst per 10 seconds for 12-24 hours or more. This duration is significantly longer than any human operator can maintain tight control over. Therefore, defining a precise quantitative target level of burst suppression and maintaining the target automatically using a closed-loop feedback system would be a much more efficient and pragmatic approach. Over the past years, there has been tangible effort by researchers to advance towards this goal. A statistically rigorous algorithm based on Bayesian estimation and pharmacokinetic and pharmacodynamic models has been developed to compactly quantify the state of burst suppression in real time as the burst suppression probability (BSP), and models have been developed to relate the BSP to the underlying anesthetic states [4]. This has enabled real-time monitoring of coma depth and has allowed rapid advances in the design of closed-loop anesthetic delivery systems (CLAD) to control burst suppression. Such systems have been built and shown to work in rodent experiments with high reliability and precision [5]-[7]. Furthering the work to translate these research devices into a clinical tool requires considerable effort to adapt to and account for the differences between laboratory rodent experiments and real clinical applications. Two such differences in terms of data collection are that 1) rodent experiments use a single intradural electrode for recording, while multichannel scalp EEG is often collected in the clinical setting; and 2) rodent experiments are conducted in a controlled environment with the electrode affixed to the scalp for a short duration of 1-4 hours. With rodent recordings, there is thus less concern for movement artifacts, dislodged electrodes and other disruptions in recording; these issues can be significant in a clinical setting with extended recording periods and can affect detection of burst suppression patterns. Many groups have discussed methods of automated burst suppression detection, but no work to our knowledge has explicitly addressed how multi-channel EEG data affects burst suppression detection and monitoring [8]-[12]. Research effort has been directed at describing how suppression patterns can be successfully extracted automatically, usually by first identifying one or more features that distinguish bursts from suppressions (e.g. instantaneous variance, amplitude, median, standard deviation, entropy, 95% edge frequency, non-linear energy operator, etc.) followed by some form of classification (e.g. segmentation using hard and soft thresholds, classification by artificial neural networks, etc.). None of these studies explicitly addressed how data from multiple channels are optimally combined. In work that did utilize EEG signals across multiple channels, the authors described integrating instantaneous amplitude across all channels, stating that a priori, this should help to overcome contamination of channels by artifacts, but no further systematic analysis of the impact on detector behavior was performed [8], [12]. In the following study we explore the spatial characteristics of single channel BSP recordings and the impact and potential value of using multi-channel data in burst suppression monitoring. We hypothesized that different channels may capture different local information, while having multiple channels may facilitate artifact rejection. II. Methods A.
EEG Recording and Patient Profile EEG data were recorded from 8 patients with refractory status epilepticus (RSE) who were placed under pharmacologically induced burst suppression with either propofol or midazolam or both in the Neurosciences Intensive Care Unit of Massachusetts General Hospital (MGH). The retrospective analysis in this paper was performed with the approval of the Institutional Review Board. Three of the 8 patients had a period of cardiac arrest before the onset of RSE (post-anoxic RSE, pRSE). Retrospective data collection was done under an MGH IRB approved protocol. All EEGs were recorded using 19 silver/silver chloride electrodes, affixed to the scalp according to the international 10-20 system. Data were recorded at 512 or 256 Hz, using XLTEK clinical EEG equipment (Natus Medical Inc., Oakville, Canada), and subsequently down-sampled to 200 Hz. B. Preprocessing and Burst Suppression Detection To precondition the data for analysis, a sliding window of 1 sec is used to remove regions where the average power is > 40 dB, in order to remove high-voltage artifacts with frequent zero-crossings, such as those due to loose electrodes. Further artifact removal is done by rejecting instantaneous high-amplitude data (> 500 μV) and electromyography artifacts (> 5 SD in the 15-30 Hz band). Finally, the data are put into an average montage and band-pass filtered at 3-35 Hz. The lower bound of this filter is higher than conventionally used for EEG analysis but is well suited for burst suppression detection. Next, a previously validated algorithm for burst suppression segmentation is used to generate the binary signals representing suppressions [8]. This algorithm detects suppression by thresholding a recursive estimate of the local signal variance, as expressed in the following equations:

μ_t = β μ_{t−1} + (1 − β) x_t,
σ_t = β σ_{t−1} + (1 − β)(x_t − μ_t)²,
z_t = δ[σ_t < θ],

where x_t is the EEG signal at time t, μ_t is the mean, σ_t is the variance, z_t is the current value of the binary signal produced, β is a parameter called the "forgetting factor", δ[·] is the indicator function (equal to 1 if the inequality is satisfied and 0 otherwise) and θ is the classification threshold. We set the forgetting factor to the globally optimal value reported in the referenced paper. The threshold θ is set to 1.75, which was determined by visually scoring the performance of 6 candidate thresholds (1.4-3.5) in 93 30-sec single channel test segments randomly extracted from the recordings. C. Burst Suppression Probability (BSP) The BSP is a compact representation of the burst suppression pattern that allows for second-to-second analysis and across-time comparison. It defines the brain's instantaneous propensity for being in the suppressed state, using a link function to map the amount of anesthetic in the brain onto a well-defined probability, as shown in Fig. 1 [4]. Here we used a real-time binary filter to calculate the BSP from the binary signal obtained in the previous step [5]. D. Global Representation of Binary Data To provide a basis for comparing BSP values between channels, we designed a simple global representation of the binary data. This is formulated by implementing a voting system among the binary data obtained from individual channels, whereby at any instant in time at least 60% of the valid (i.e., artifact-free) channels have to agree on the observation of suppression for it to be considered a 'true' suppression. This scheme is motivated by the observation in previous studies that burst suppression is a global phenomenon [13]. The new binary signal summarizes the data from all channels considered.
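A minimal Python sketch of the recursive detector defined by the equations above, applied to one channel; the forgetting factor and the synthetic signal are illustrative (the study takes β from the referenced paper and sets θ = 1.75).

```python
# Sketch: recursive variance-threshold suppression detector.
import numpy as np

def detect_suppressions(x, beta=0.95, theta=1.75):
    mu, var = x[0], 0.0
    z = np.zeros(len(x), dtype=int)       # 1 = suppression, 0 = burst
    for t, xt in enumerate(x):
        mu = beta * mu + (1 - beta) * xt              # recursive mean
        var = beta * var + (1 - beta) * (xt - mu)**2  # recursive variance
        z[t] = 1 if var < theta else 0                # threshold -> binary signal
    return z

# Synthetic demo: a high-variance "burst" followed by a near-flat "suppression"
fs = 200
x = np.concatenate([5 * np.random.randn(3 * fs), 0.1 * np.random.randn(5 * fs)])
print(detect_suppressions(x).mean())  # fraction of samples flagged suppressed
```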
From it, a global representation of the burst suppression probability (BSP) is found using the binary filter algorithm described earlier. To distinguish this BSP from the BSP calculated from the data of just one channel, hereafter we refer to the former as the global BSP and the latter as the single channel BSP. III. Results A total of 210 hours of recording were analyzed, with 20-25 hours of data coming from each patient. Overall, patients had BSP > 0.1 for 91 hours, or 43% of the total recording time. Patients with post-anoxic refractory status epilepticus (pRSE) accounted for about 67 hours of the total recording and had BSP > 0.1 for 27.8 hours, or 41.5% of this time. A. Significant Differences in Burst Suppression Probability Obtained from Data in Different Channels To study spatial variation in single channel BSP values, we first studied the evolution of BSP for individual patients over time. Fig. 2AI shows a set of 5 representative 12-hour single channel BSPs estimated from an EEG recording of one pRSE patient. As in most other patients, it is evident from visual inspection that single channel BSP varied substantially between channels and that the differences were consistent over time. Taking 1 min segments and comparing the variance signal obtained from the processed EEG in the burst suppression detection algorithm (see example in Fig. 2AII) shows that while bursts can be seen globally, their magnitude can vary substantially and systematically. Some bursts are detected similarly across all channels while others are only registered by the automated detection algorithm in certain channels. This explains the source of variation in single channel BSP estimation among different channels. B. Global Representation and Associated Burst Suppression Probability (BSP) A global representation of the binary signals is obtained from the multi-channel EEG recording by the voting method described earlier. The same binary filter algorithm applied to individual channels can then be used to estimate the global BSP from this global binary signal. Fig. 2BI shows the global BSP overlaid on the single channel BSPs from all 19 channels in the same pRSE patient. It can be seen that the global BSP tracks the overall trend of the single channel BSPs. In a similar plot for a different patient (Fig. 2BII), we highlight that the global BSP remains relatively unaffected by an outlier BSP in one electrode. This global representation also allows us to further investigate the extent of the variation in single channel BSP estimation as described in part A. We compared the single channel BSP estimates in each of the 19 channels with the global BSP for time points where the global BSP ≥ 0.1 and plotted the mean difference and standard deviation topographically (see Fig. 2C). pRSE and non-anoxic RSE (npRSE) patients are handled separately. In both groups, the frontal and temporal leads tend to report lower single channel BSP while the occipital and central leads tend to report higher single channel BSP. The expected deviation of single channel BSP from the global BSP in pRSE patients is (0.13 ± 0.11), while that in npRSE patients is (0.06 ± 0.04). These findings suggest that (a) burst amplitudes tend to be higher in frontotemporal regions, leading to an increased probability of burst detection and lower BSP values when using identical threshold values at all scalp locations, and (b) the degree of spatial heterogeneity in burst amplitude tends to be greater in patients with pRSE, at least in this cohort.
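For concreteness, a sketch of the ≥ 60% voting rule from Sec. II-D, used to form the global binary signal underlying the global BSP above; the array shapes and toy inputs are illustrative.

```python
# Sketch: voting across per-channel binary signals (rows = channels,
# columns = time samples); invalid channels are excluded per sample.
import numpy as np

def global_binary(z, valid):
    # z, valid: (n_channels, n_samples) arrays; valid[c, t] = channel usable
    votes = (z * valid).sum(axis=0)              # suppressed votes per sample
    n_valid = np.maximum(valid.sum(axis=0), 1)   # avoid division by zero
    return (votes / n_valid >= 0.6).astype(int)  # 'true' suppression by vote

z = (np.random.rand(19, 1000) < 0.5).astype(int)  # toy per-channel signals
valid = np.ones_like(z, dtype=bool)
print(global_binary(z, valid).mean())
```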
IV. Discussion Developing quantitative, real-time algorithms to monitor the pattern of EEG burst suppression is a medical innovation necessitated by the need to safely and effectively maintain medically induced coma at a desired depth for extended periods. Burst suppression is quantified in real time using the burst suppression probability (BSP), which has been shown to have parametric sensitivity to the depth of coma. Although burst suppression is generally considered to be a spatially homogeneous phenomenon in scalp EEG, herein we observed that, when a validated segmentation algorithm is applied, significant differences can result between the burst suppression probabilities in different channels. A close analysis of the burst suppression detection procedure reveals that there are different types of bursts in the burst suppression state. Some bursts, although they visually seem to occur in all channels, have characteristically smaller amplitudes in certain channels, such that the segmentation threshold is crossed less often, or often only briefly. This leads to systematic spatial differences in the BSP estimated from different channels. These spatial differences likely reflect the well-known functional differences that exist between regions of cortex, which are well described outside of burst suppression. These differences can manifest, e.g., in the relatively high amplitude of alpha (8-10 Hz) activity in posterior head regions during the resting awake state, and in the centrotemporal location of vertex waves in sleep. That such spatial differentiation persists in burst suppression is in keeping with recent theoretical work which views bursting activity as a continuation of processes active in the pre-burst-suppression state [1], [14]. The difference between single channel BSPs means that the location on the scalp where we collect data for monitoring matters. Using only forehead electrodes, such as in Bispectral Index monitoring, is likely to result in administering more anesthetic than if the patient is monitored with the global BSP summarized from a standard full scalp EEG, as the former would report a lower BSP than the latter. Our observations on the use of a global representation of data obtained from individual channels by means of channel 'voting' indicate that this method may be more resilient to noise than single channel BSP. This method is therefore an improvement on previous methods for utilizing multichannel data, in which signals from individual channels were simply integrated, so that data from an electrode with aberrant behavior would still be included in the final result. Alternative enhancements to be explored in future work include using thresholds that vary as a function of head location, and the use of more sophisticated probabilistic detection methods as opposed to our current hard-thresholding method of detecting suppressions. Overall, we observed significant variation in the BSP recorded from different channels of a standard clinical EEG system. The variation follows a specific spatial pattern and is consistent over time. A global representation of information from individual channels that takes into account the burst suppression pattern recorded from multiple electrodes is likely to be useful for providing a more noise-resilient and accurate representation of the underlying brain state. Further work can be done to develop more robust ways of combining information from multiple electrodes for this purpose.
Figure 2 caption (partial): … Note that the green box highlights a burst that is observed only in frontal lead Fp1 but not in C3 or O1, while the orange box encloses a burst that is detected similarly across all three channels. BI) Global BSP overlaid on all single channel BSPs; data from the post-anoxic RSE patient described in Fig. 2AI.
APA Style 7th Edition: The Reference List. This guide provides examples of how to cite sources using the American Psychological Association (APA) citation style. In APA style, a source is briefly cited within the text of a research paper using the author's surname (family name) and the date of publication. This is known as an in-text citation. A detailed list of all in-text citations is provided at the end of the research paper on a separate page with the word References (in bold) centered at the top of the page. Reference list entries are organized alphabetically by author, and by title for entries with no author. All entries are double-spaced and have a hanging indent, meaning the second and subsequent lines of an entry are indented 1.27 cm (0.5 in) from the left margin.

• For an online source that is continuously updated, include the year of last update for the specific entry that you are citing, if clearly indicated. Otherwise, use "n.d." (no date) for the year of publication and include a retrieval date, because the content may change over time.

Entry in a dictionary with a group author: American Psychological Association. (n.d.). Organizational culture. In APA dictionary of psychology. Retrieved April 7, 2020, from https://dictionary.apa.org/organizational-culture

• When the author is also the publisher of the work, omit the publisher from the reference.

Reports, Conference Presentations, Dissertations and Theses, Preprints

• Create a description in square brackets following the title that best describes the content you are citing, e.g., Data set; Data set and code book; Unpublished raw data; etc.

Audiovisual Media. The following examples illustrate how to cite common types of audio works, visual works, and audiovisual works (e.g., https://www.slideshare.net/AndreasVonderHeydt/the-magic-to-think-big).

• Lecture notes or PowerPoint slides that are retrievable (e.g., posted to a public website) are included in the reference list.
• If notes or slides are posted on a learning management system such as Brightspace, and your reader is able to access that resource, include a citation in your reference list. Provide the name of the site and its URL in the citation (e.g., Brightspace. https://smu.brightspace.com/d2l/login).
• Lecture notes, PowerPoint slides, or other materials that are not retrievable by others (e.g., lecture notes taken by a student during a class) are cited as personal communications in the text of the paper only, not in the reference list.

Social Media. When citing social media (e.g., Twitter, Facebook, Instagram):

• Include the text of a social media post up to the first 20 words; do not alter the spelling or capitalization as found in the post; include hashtags, links, and emojis. Reproduce emojis if possible, or provide the emoji's name in square brackets. Names of emojis can be found on the Unicode Consortium's website: https://unicode.org/emoji/charts/index.html
• Indicate audiovisuals, if present, in square brackets following the text of the post.

The following examples illustrate how to cite common types of social media. Please refer to the Publication Manual of the American Psychological Association, 7th ed. (pp. 348-350).

Personal Communications. Personal communications include sources that are not recoverable by readers, such as e-mail messages, private letters, telephone conversations, and notes taken during a class lecture. These types of sources are not included in the reference list, but are cited in the text of the paper only.
Include the initials and surname of the communicator and the exact date: In a telephone interview with the association's vice president … (H. Klein, personal communication, November 15, 2019). Traditional Knowledge or Oral Traditions of Indigenous Peoples that are not recoverable by readers are also cited as personal communications in the text of the paper only. Provide sufficient detail to describe the content and origin of the information, including the communicator's full name, the nation or Indigenous group to which they belong, their location and any other relevant details, followed by "personal communication" and the date that the communication took place. Capitalize most terms relating to Indigenous Peoples or Indigenous culture (e.g., Indigenous, Elder, Traditional Knowledge, etc.). Please refer to the Publication Manual of the American Psychological Association, 7th ed. (pp. 260-261) for more information on how to cite personal communications. Additional Resources For more detailed information, please consult the following resources:
Odontometric Analysis of Permanent Mandibular Canine to Determine Sexual Dimorphism: A Preliminary Study. Introduction: Crown diameters of teeth are reasonably accurate predictors of sex and are good adjuncts for sex determination. The aim of the study was to determine the reliability of the mesiodistal width of the mandibular canine in sexual dimorphism. Materials and methods: Medical students of Nepalgunj Medical College, Chisapani, Banke, Nepal were selected for data collection. The sample consisted of 300 subjects, which included 150 males and 150 females of the age group 18-25 years. The mesiodistal widths of the mandibular right and left canine teeth were recorded by Vernier calliper. Descriptive statistical analysis was done on the odontometric measurement data to calculate sexual dimorphism for the mandibular right and left canines. The Student t-test was used to determine the level of significance among the parameters measured. Results: The mean values for the mesiodistal width of the mandibular right canine for male and female subjects were 7.1665±0.28576 and 6.3777±0.37875 respectively. The sexual dimorphism for the mandibular right canine was calculated to be 12.368%. The mean values for the mesiodistal width of the mandibular left canine for male and female subjects were 7.3875±0.35506 and 6.2847±0.41115 respectively. The sexual dimorphism for the mandibular left canine was calculated to be 17.5%. Conclusion: Statistical analysis showed significant sexual dimorphism in the odontometric analysis of permanent mandibular canines between males and females, with the mandibular left canine showing the highest percentage. Introduction Sexual dimorphism refers to the differences in size, stature and appearance between males and females in relation to various structures of the human body. The skeleton as a whole in general, and individual bones such as the vertebrae (especially the first cervical, or Atlas, vertebra), sacrum, pelvis and clavicle in particular, have been reported to be of great significance in relation to sex differences in various populations. Odontometric analyses have also been reported to be of immense value for sex identification because no two mouths are alike [1]. In cases of mass disasters, where there are no personal items on the victims, or where the circumstances of the accident destroy soft tissue of the body that might help identify the individual, we can use techniques such as facial reconstruction, different laboratory procedures on bones, and identification from DNA studies. But of all morphological structures, including the human skeleton, there is only one structure that does not change in size or shape after its initial development: the teeth. In the process of identification of skeletal human remains subjected to deterioration by chemical or physical agents, teeth play a fundamental role [2]. Teeth have been identified as showing extreme durability, being the hardest as well as chemically the most stable tissues in the body. The permanent canines offer definite advantages: they are less affected by periodontal diseases, are the least extracted teeth, are exposed to less plaque, show minimal abrasion from brushing and are the last teeth to be extracted with advancing age [3]. The above factors led earlier workers to use measurements of the mesiodistal and buccolingual widths of practically all human teeth in the assessment of sexual dimorphism in worldwide populations. Mandibular canines are regarded as showing the greatest sexual dimorphism amongst all teeth [4,5].
Studies on the mandibular canines by earlier workers [5-9] indicated them as key teeth for personal identification of individuals. Teeth in general have been reported to be larger in males than in females [10-15]. Only scanty reports are available on the above-mentioned dental measurements and associated indices in the Nepalese population. Hence, the present work on the mesiodistal widths of the mandibular canine would be of great importance for comparison with the data analysed by earlier workers in non-Nepalese subjects. Materials and Methods This cross-sectional study was conducted in the Department of Anatomy, Nepalgunj Medical College, after approval from the ethical review board. The duration of the study was 12 months. Medical students of Nepalgunj Medical College, Chisapani, Banke, Nepal were selected for data collection. Each individual was informed regarding the objective and method of the study, and written consent was obtained. Personal information regarding name, age and sex was recorded. The resultant study sample consisted of 300 subjects, which included 150 males and 150 females of the age group 18-25 years. The inclusion criteria were: medical students of Nepalgunj Medical College between the ages of 18 and 25 years, with healthy anterior teeth free of any pathology, with standard overjet and overbite (between 2 and 3 mm), and with an absence of spacing and rotation in the anterior region of the jaw. Subjects with missing anterior teeth on either side, with prostheses on the concerned teeth, or with a past history of any trauma or surgical treatment on the concerned teeth were excluded from this study. Measurement of Mesiodistal width Each individual was asked to sit comfortably on a chair. Intraoral examination of the anterior mandibular teeth was done to detect occlusion, overjet, overbite, and rotation and/or malpositioning. The mesial and distal surfaces of the right and left mandibular canines were identified (figure 1) and the distance between the crests of curvature on the mesial and distal surfaces was recorded by Vernier caliper (figure 2). Calculation of Sexual dimorphism From the above odontometric measurements, sexual dimorphism for the right and left mandibular canines was calculated by using the following formula [17]:

Sexual dimorphism (%) = [(X_m / X_f) − 1] × 100,

where X_m = mean mesiodistal width in males and X_f = mean mesiodistal width in females. Statistical Analysis Descriptive statistics were calculated from the obtained measurements and indices. For each parameter, the differences between the means for males and females were assessed for statistical significance using SPSS version 16 at the p < 0.05 level of significance. The Student t-test was used to determine the level of significance among the parameters measured. Mesiodistal width of Mandibular Right Canine: The mean mesiodistal widths of the mandibular right canine for male and female subjects (Table 1) were 7.1665±0.28576 and 6.3777±0.37875 respectively. The mean value for the total sample was 6.7721±0.51794. The independent t-test revealed p < 0.001, which was statistically highly significant. The sexual dimorphism for the mandibular right canine was calculated to be 12.368%. Mesiodistal width of Mandibular Left Canine: The mean mesiodistal widths of the mandibular left canine for male and female subjects were 7.3875±0.35506 and 6.2847±0.41115 respectively, and the sexual dimorphism for the mandibular left canine was calculated to be 17.5%.
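As a quick check, the formula reproduces the reported percentages from the mean widths given above:

```python
# Sketch: sexual dimorphism formula applied to the study's reported means.
def dimorphism_pct(x_m, x_f):
    return (x_m / x_f - 1.0) * 100.0

print(f"right canine: {dimorphism_pct(7.1665, 6.3777):.3f}%")  # ~12.368%
print(f"left canine:  {dimorphism_pct(7.3875, 6.2847):.1f}%")  # ~17.5%
```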
Whenever it is possible to predict sex, identification is simplified because only missing persons of that sex need to be considered 18 . Although DNA profiling gives accurate results, odontometric parameters can be used for sex determination in a large population because they are simple, reliable, cost-effective and easy. Because there are differences in odontometric features between specific populations, and even within the same population in a historical and evolutionary context, it is necessary to determine population-specific values in order to make identification possible on the basis of dental measurements 19 . Doris et al. have indicated that the early permanent dentition provides the best sample for tooth size measurements because early adulthood dentition has less mutilation and attrition 20 . Consequently, the effect of these factors on the actual mesiodistal width would be minimal. Thus only subjects in the 18-25 years age group were included in the study sample. Most commonly, the width and length of the crown are taken into consideration, and of these the former is considered to be more reliable 21 . In the present study, the mean values for mesiodistal width of the right canine were found to be 7.16±0.28 mm in males and 6.38±0.38 mm in females.
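For readers who want to reproduce the dimorphism figures reported above, the following short Python sketch applies the same formula to the published summary statistics. Because the raw per-subject measurements are not available here, the t statistic is approximated from the reported means and standard deviations; the function names are illustrative, not taken from the study.

import math

def dimorphism_percent(mean_male, mean_female):
    # Garn-Lewis style sexual dimorphism: ((Xm / Xf) - 1) * 100
    return (mean_male / mean_female - 1.0) * 100.0

def t_from_summary(m1, s1, n1, m2, s2, n2):
    # Two-sample t statistic approximated from summary statistics
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / se

# Mesiodistal widths (mm), n = 150 males and 150 females, as reported
print(dimorphism_percent(7.1665, 6.3777))  # right canine -> ~12.368%
print(dimorphism_percent(7.3875, 6.2847))  # left canine  -> ~17.5%
print(t_from_summary(7.1665, 0.28576, 150, 6.3777, 0.37875, 150))
# The resulting t statistic (about 20) is far beyond the p < 0.001
# cutoff, matching the "statistically highly significant" result above.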
2019-05-12T13:39:12.613Z
2018-12-31T00:00:00.000
{ "year": 2018, "sha1": "102948f151b257de536e0e4ad7502138301ff751", "oa_license": "CCBY", "oa_url": "https://www.nepjol.info/index.php/jnprossoc/article/download/23861/20219", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aa3cb44a4f76ff2faecc591b9490ff8b65ff8b7c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
11135288
pes2o/s2orc
v3-fos-license
PP13, Maternal ABO Blood Groups and the Risk Assessment of Pregnancy Complications Background Placental Protein 13 (PP13), an early biomarker of preeclampsia, is a placenta-specific galectin that binds beta-galactosides, building-blocks of ABO blood-group antigens, possibly affecting its bioavailability in blood. Methods and Findings We studied PP13-binding to erythrocytes, maternal blood-group effect on serum PP13 and its performance as a predictor of preeclampsia and intrauterine growth restriction (IUGR). Datasets of maternal serum PP13 in Caucasian (n = 1078) and Hispanic (n = 242) women were analyzed according to blood groups. In vivo, in vitro and in silico PP13-binding to ABO blood-group antigens and erythrocytes were studied by PP13-immunostainings of placental tissue-microarrays, flow-cytometry of erythrocyte-bound PP13, and model-building of PP13 - blood-group H antigen complex, respectively. Women with blood group AB had the lowest serum PP13 in the first trimester, while those with blood group B had the highest PP13 throughout pregnancy. In accordance, PP13-binding was the strongest to blood-group AB erythrocytes and weakest to blood-group B erythrocytes. PP13-staining of maternal and fetal erythrocytes was revealed, and a plausible molecular model of PP13 complexed with blood-group H antigen was built. Adjustment of PP13 MoMs to maternal ABO blood group improved the prediction accuracy of first trimester maternal serum PP13 MoMs for preeclampsia and IUGR. Conclusions ABO blood group can alter PP13-bioavailability in blood, and it may also be a key determinant for other lectins' bioavailability in the circulation. The adjustment of PP13 MoMs to ABO blood group improves the predictive accuracy of this test. Introduction ABO blood-group antigens are oligosaccharides attached to cell-surface glycoconjugates expressed by epithelia, endothelia and erythrocytes (RBCs) in primates [1,2]. Although their function has not yet been revealed, ABO antigens might have been evolutionarily advantageous in conferring resistance against pathogens [3]. The susceptibility to various diseases, such as infections, cancer, cardiovascular diseases and hematologic disorders, have been associated with ABO blood groups [3][4][5][6][7][8][9][10]. Interestingly, ABO blood group is a key determinant of coagulation factor VIII and von Willebrand factor plasma concentrations [4,5]. Low plasma concentrations of these glycoproteins in blood-group O individuals may lead to excess bleeding, while elevated plasma concentrations of these factors in non-O blood-group individuals have been implicated in increasing the risk of thromboembolic and ischemic heart diseases [5][6][7][8][9]. Preeclampsia, a syndrome unique to human pregnancy and one of the leading causes of maternal and fetal morbidity and mortality [11,12], is also associated with maternal blood group [13][14][15]. Patients with blood group AB have an increased risk of severe-, early-onset-, or intrauterine growth restriction (IUGR) associated forms of preeclampsia [14,15]. Although AB blood group and low first trimester maternal serum PP13 concentrations may separately be associated with increased risk of preeclampsia, we hypothesized that ABO blood group may affect PP13 bioavailability in maternal blood in normal and disease conditions. 
Indeed, PP13 may bind to beta-galactosides on ABO antigens and be sequestered on cell surfaces covered by these antigens similar to other galectins [37][38][39], and this phenomenon may affect maternal serum PP13 concentrations and the prediction accuracy of the PP13 test for pregnancy complications. Therefore, the objectives of this study were to 1) determine the relation between maternal serum PP13 and maternal blood groups throughout pregnancy; 2) confirm the differential binding of PP13 to RBCs of various ABO blood types; and 3) investigate whether the adjustment of maternal serum PP13 multiples of the medians (MoMs) to maternal blood groups could improve the predictive value of the PP13 test for preeclampsia and IUGR. Ethics statement The reported studies were approved by the Institutional Review Boards of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health (NIH), Department of Health and Human Services (DHHS, Bethesda, MD, USA) and the Sótero del Río Hospital (Santiago de Chile, Chile), the Maccabi Institutional Review Board (Israel), the Health Science Board of Hungary (Budapest, Hungary) and the Human Investigation Committee of Wayne State University (Detroit, MI, USA), respectively. Written informed consent was obtained from women prior to sample collection. Specimens were coded and data were stored anonymously. Determination of the effect of maternal blood groups on maternal serum PP13 Longitudinal and cross-sectional study on Caucasian patients. Gonen et al. [21] performed a prospective, longitudinal, multi-center study in Maccabi Healthcare Services, enrolling pregnant women with singleton pregnancy at prenatal community clinics in Israel. From the recruited 1366 women, 254 were excluded due to missed abortion (n = 95), non-compliance with the protocol (n = 32), or lack of blood-group information (n = 127). From the 1078 women included in this analysis, 20 patients developed preeclampsia (five complicated by IUGR), 52 patients had a fetus with IUGR, while 1006 women had pregnancies unaffected by these conditions. Patient characteristics are provided in Table S1. Maternal blood was obtained at 6-10, 16-20 and 24-28 weeks of gestation; sera were stored at -20°C and tested for PP13 with ELISA (Diagnostic Technologies Ltd, Yokneam, Israel). Intra- and inter-assay variations were 6.5% and 9.4%, respectively [21]. Cross-sectional study on Hispanic patients. Romero et al. [19] performed a nested case-control study on samples from a prospective, longitudinal study at the Perinatology Research Branch of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NIH, DHHS, USA), enrolling pregnant women with singleton pregnancy at the Sótero del Río Hospital (Santiago, Chile). Two hundred and forty-two normal pregnant women with blood-group information were included in this analysis. Patient characteristics are provided in Table S2. First trimester serum samples were collected between 8 and 13+6 weeks of gestation, stored at -80°C and tested for PP13 with ELISA (Diagnostic Technologies Ltd). Intra- and inter-assay variations were 7.3% and 19.5%, respectively [19]. Clinical definitions. Gestational age was determined by the last menstrual period and verified by crown rump length (CRL) [21] or by CRL and fetal biometry [19].
Preeclampsia was either defined [21] by the International Society for the Study of Hypertension in Pregnancy [40], or defined [19] by the Report of the National High Blood Pressure Education Program Working Group on High Blood Pressure in Pregnancy [41] and Sibai et al. [11]. IUGR was defined as birth-weight below the gestational age-specific 5th percentile according to local growth charts and birth-weight percentiles [42,43]. Determination of in vivo PP13-binding to RBCs Placental tissue collection. PP13 immunostaining of RBCs was investigated in maternal and fetal blood spaces of placentas (n = 9) from normal pregnant women with no medical complications, delivering a term newborn with birth-weight appropriate for gestational age [44]. Placentas were collected at the First Department of Obstetrics and Gynecology (Semmelweis University, Budapest, Hungary, Federalwide Assurance: FWA00002527). Patients with a multiple pregnancy or a fetus having congenital or chromosomal abnormalities were excluded. Statistical analyses Maternal serum PP13 concentrations were not normally distributed; therefore, the Wilcoxon rank-sum test was used for group-comparisons. A stepwise multiple regression analysis was performed to reveal the correlation of covariates to PP13, including gestational age (GA), body mass index (BMI), ethnicity, smoking, maternal age, and parity. Possible significant interactions were evaluated by specifying a regression equation that included each individual covariate and any interaction between covariate-pairs. The following correlations were found in the Caucasian cohort [21]: GA, P<0.001; BMI, P = 0.099; ethnicity, P = 0.135; smoking, P = 0.497; maternal age, P = 0.07; parity, P = 0.204; BMI*ethnicity, P = 0.001; GA*BMI, P = 0.025. Correlations found in the Hispanic cohort [19] are the following: GA, P<0.001; BMI, P = 0.092; ethnicity, none (all Hispanic); smoking, P = 0.249; maternal age, P = 0.888; parity, P = 0.312; GA*BMI, P = 0.035. PP13 concentrations were converted into gestational week-specific multiples of the medians (MoMs) among unaffected women [19,21]. Gestational age-adjusted MoMs were sequentially adjusted to BMI, ethnicity, smoking, maternal age, and parity, and then further adjusted to ABO blood groups. Changes in PP13 concentrations and MoMs between the test periods were calculated as (X2-X1)/(W2-W1), where X1 and X2 were PP13 values at gestational weeks W1 and W2 [21]. Cross-sectional comparisons were performed with Kruskal-Wallis, Mann-Whitney, and Wilcoxon rank-sum tests. The dataset used to 'fit' the regression models included individual subjects whose risk of preeclampsia we aimed to predict. To avoid potential bias due to 'over-fitting' of the models, the risk of preeclampsia for each woman was calculated using the 'out of sample' model, in which values were calculated by running the analysis repeatedly, each time excluding one subject from the group. Sensitivities and specificities were calculated from PP13 MoMs for the disease groups (IUGR, preeclampsia and preeclampsia with IUGR) before and after adjustment for ABO blood groups. Receiver-operating characteristic (ROC) curves were generated to assess the test accuracy. The overall accuracy of the test was estimated with the area under the curves (AUCs). Data were analyzed using SAS 9.1.3 (SAS Institute, Cary, NC, USA). A p<0.05 was considered statistically significant.
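As an illustration of two steps in this analysis pipeline, the conversion of raw PP13 concentrations into gestational week-specific MoMs and the 'out of sample' leave-one-out evaluation used to avoid over-fitting, here is a minimal Python sketch. The median table, the toy risk model and all names are assumptions for illustration only, not values from the study.

from statistics import median

def to_mom(pp13_pg_ml, ga_week, week_medians):
    # Multiple of the median: raw value divided by the gestational
    # week-specific median among unaffected pregnancies.
    return pp13_pg_ml / week_medians[ga_week]

def leave_one_out_scores(samples, fit, score):
    # For each subject, fit the model on all *other* subjects and
    # score the held-out one, mirroring the 'out of sample' approach.
    results = []
    for i in range(len(samples)):
        training = samples[:i] + samples[i + 1:]
        model = fit(training)
        results.append(score(model, samples[i]))
    return results

week_medians = {10: 80.0, 11: 95.0, 12: 110.0}   # assumed pg/ml medians
moms = [to_mom(70, 10, week_medians), to_mom(120, 11, week_medians),
        to_mom(100, 12, week_medians), to_mom(90, 11, week_medians)]
risk = leave_one_out_scores(moms,
                            fit=lambda train: median(train),
                            score=lambda m, x: x / m)
print([round(r, 2) for r in risk])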
Maternal serum PP13 bioavailability in pregnant women is dependent on ABO blood groups To test whether maternal serum PP13 concentrations may be influenced by ABO blood groups, we re-analyzed published datasets on maternal serum PP13 in Caucasian [21] and Hispanic [19] populations. Changes in maternal serum PP13 concentrations and MoMs according to maternal ABO blood group in Caucasian women [21]. Among unaffected women, maternal serum PP13 concentrations (expressed in pg/ml before adjustment) increased with advancing gestation in all ABO blood groups. The regression slope of PP13 concentrations across the three trimesters was steeper in blood group B than in blood groups A (P = 0.019) and O (P = 0.024), but not in blood group AB (Figure 1A). Similarly, the regression slope of PP13 MoMs (adjusted to 6 confounders) across the three trimesters was steeper in blood group B than in blood groups A (P = 0.020) and O (P = 0.008), but not in blood group AB. Of note, the regression slope in blood group AB ran below the regression slopes in all other blood groups when comparing either PP13 concentrations or MoMs. Regression slopes of PP13 concentrations or MoMs did not differ according to maternal Rh status (data not shown). When comparing the data in the three trimesters separately, we found that 1) women with blood group AB had the lowest median PP13 MoM in the first trimester, while median PP13 MoM in this blood group was similar to those in blood groups O and A in the third trimester, and 2) women with blood group B had the highest median PP13 MoMs throughout pregnancy (Figure 1B). Changes in maternal serum PP13 concentrations and MoMs according to maternal ABO blood group in Hispanic women [19]. To validate these observations, we re-analyzed the Hispanic cohort data. Among controls, PP13 MoM was also the lowest in blood group AB and the highest in blood group B in the first trimester (Table 1). Similar to the Caucasian cohort, PP13 concentrations or MoMs were not different between Rh+ and Rh- women (data not shown). PP13 binds to maternal and fetal RBCs in vivo To test whether PP13 binds to RBCs in vivo, TMAs of normal term placentas were immunostained for PP13. Similar to earlier data [28,29,33,34], the syncytiotrophoblast and endothelial cells of fetal vessels, unique sources of PP13 [29], were stained in all specimens. Although endothelial cells carry ABO antigens, we were unable to evaluate their PP13-binding regarding ABO blood groups or disease status because of their PP13 expression.

Figure 1 [21]. (A) The slope of the regression line (fitted on the medians) was steeper in blood group B than in blood groups A (P = 0.019) and O (P = 0.024). (B) Median PP13 concentrations and median PP13 multiples of the medians (MoMs) (both provided with 95% CIs) were compared among unaffected women with various blood groups in the three trimesters. Median PP13 MoMs were calculated after converting gestational-age specific PP13 medians to MoMs and then step-wise adjusting them to BMI, smoking, ethnicity, maternal age and parity, but not to ABO blood groups. For statistical analysis, median PP13 concentrations and median PP13 MoMs in each blood group were compared to blood group A by the Wilcoxon rank-sum test; *P<0.05 and **P<0.001. The distribution of PP13 medians and median MoMs was significantly different among the four blood groups in the first, second and third trimesters, with P values of <0.05, <0.05, and <0.001, respectively (Kruskal-Wallis test).
Of note, PP13 staining of fetal and maternal RBCs was also found, suggesting that PP13 binds to these cells. Interestingly, not all RBCs were stained for PP13, and the PP13 immunostaining intensity varied between immunopositive RBCs in each specimen (Figure 2). PP13 has a differential binding to RBCs of different ABO blood types in vitro To reveal differential binding, we incubated PP13 and control proteins with four ABO blood-type RBCs. PP13-binding to all blood-type RBCs was detected, while BSA and trPP13, a truncated protein that lacks the functional CRD of PP13, had minimal binding to RBCs, proving that PP13-binding was specific and mediated by its CRD (Figure 3A). Consistent with its differential binding to sugars on terminal positions of ABO blood-group antigens [28,29], PP13 had differential binding to RBCs according to ABO blood types. PP13-binding was similar in blood groups A and O, the weakest in blood group B, and the strongest in blood group AB in comparison to other blood groups (Figure 3B). As with other galectins [37,38], PP13-binding to various blood-type RBCs dynamically changed with increasing PP13 concentrations (Figure 3C) and inversely mirrored the changes seen in serum PP13 with advancing gestation and concentrations (Figure 4). The quantity of bound PP13 on individual cells varied within a wide range (1000-fold) in each blood group, as with the binding of other lectins to RBCs [47]. Senescent RBCs, characterized by smaller size and higher granularity [48], bound 1.5-2-fold more PP13 than young RBCs within each blood group (data not shown). PP13 binds to blood-group H antigen in silico Multiple sequence alignment revealed that out of seven conserved residues in human galectin CRDs, four are conserved in PP13 (Figure 5A). Three of these four residues form the core binding-site [27,29], while residues in the opposing side of the CRD, which have been under positive selection in PP13 [27,29], form a positive binding groove. The B-site in PP13 CRD resembles B-sites in human galectins, which participate in blood-group antigen binding [38,39]. Structural alignment revealed that the structural similarity of PP13 [27] to fungal galectin CGL2 [46] is high (TM-score = 0.77), suggesting that the same oligosaccharides, such as blood-group antigens [46], may be bound by their CRDs. Indeed, our 3D modeling revealed a very similar accommodation of the blood-group H trisaccharide in PP13 CRD as in CGL2 CRD [46] (Figure 5B). Prediction of pregnancy complications is improved by including ABO blood group in the test Using the Caucasian dataset [21], we re-evaluated the performance of the PP13 test in predicting pregnancy complications after the adjustment of PP13 MoMs to maternal ABO blood groups. In this cohort, the frequency of ABO blood groups was not significantly different in women with preeclampsia compared to unaffected women (Table S1). PP13 concentrations and MoMs adjusted to six confounders (GA, BMI, ethnicity, smoking, maternal age, and parity) were significantly lower in all disease groups than in unaffected women in the first trimester, while these were significantly higher in all disease groups than in unaffected women in the second and third trimesters. Women with preeclampsia associated with IUGR had the lowest PP13 MoMs in the first trimester and the highest MoMs in the second and third trimesters (Table 2). First trimester medians of PP13 MoMs in the three disease groups were further lowered after adjusting MoMs to ABO blood groups.
In the second and third trimesters, medians of PP13 MoMs in the three disease groups were further raised after adjusting MoMs to ABO blood groups. Blood-group B patients had the highest PP13 MoMs among the disease groups in the second and third trimesters (Table 2). Thus, the adjustment to ABO blood groups increased the differences in PP13 MoMs in all disease groups compared to unaffected controls and improved the prediction accuracy of the PP13 test. In accord, the sensitivities derived from ROC curves (Figure 6) showed an increase of ≤13% for a fixed specificity of 20% false positive rate (FPR) and ≤25% for a fixed specificity of 15% FPR when examined in the first trimester. These differences in sensitivities of the PP13 test after adjustment to ABO blood groups were statistically significant (Table 3). The corresponding increases in areas under the curves (AUCs) after adjustment to ABO blood groups were 6%, 5% and 5% for IUGR, preeclampsia and preeclampsia with IUGR, respectively (Figure 6). Discussion Principal findings of this study 1) PP13 binds to ABO blood-group antigens on RBCs by its CRD. 2) The differential binding of PP13 to ABO blood-group antigens affects maternal serum PP13 concentrations. 3) Individuals with blood group B have the highest maternal serum PP13 MoM, while those with blood group AB have the lowest PP13 MoM in the first trimester. 4) By adjusting to ABO blood group, the prediction accuracy of the PP13 test is improved for preeclampsia, IUGR and preeclampsia with IUGR. ABO blood group confers susceptibility to disease Glycosylation is the most common post-translational modification in humans, affecting approximately 50-70% of our proteins. Glycans on glycoproteins and other glycoconjugates constitute a complex array termed the "glycome". Lectins are glycan-binding proteins that decode the high-density "glycocode" stored in the glycome [1,49,50]. ABO blood-group antigens are oligosaccharides conjugated to cell-surface glycoproteins and glycolipids or secreted into body fluids by "secretor" individuals [2]. These antigens are synthesized by glycosyltransferases encoded by the H, Se and ABO loci in RBCs, epithelial and endothelial cells, and are also called "histo-blood-group antigens" [2]. The common precursor H antigen is synthesized by fucosyltransferase 1 (H locus) in RBCs and by fucosyltransferase 2 (Se locus) in the secretory epithelium of the gastrointestinal and respiratory tracts of "secretor" individuals [2]. The final synthetic step for ABO antigens depends on the ABO locus, which has three major alleles [51].

Figure 3. PP13 differentially binds to erythrocytes of distinct ABO blood groups in vitro. Erythrocyte-binding assay was run with recombinant PP13, truncated PP13 (TrPP13), bovine serum albumin (BSA) and buffer (PBS), and quantified with flow-cytometry. A) PP13-binding to RBCs was specific and mediated by its CRD, as trPP13 bound negligibly to RBCs, similar to BSA. B) PP13 bound to blood-group AB RBCs with the strongest affinity and to blood-group B RBCs with the weakest affinity (data presented for 50 µg/ml PP13 concentration). C) PP13-binding to RBCs of different ABO blood groups dynamically changed according to the applied PP13 concentrations, similar to that observed for other galectins [37,38]. Mean values of mean fluorescence intensities (±SEM) are presented from five independent experiments that were run in triplicate. doi:10.1371/journal.pone.0021564.g003
The A allele encodes alpha-1,3-N-acetylgalactosaminyltransferase, which catalyzes the transfer of N-acetylgalactosamine to the terminal position of the A antigen; the B allele encodes alpha-1,3-galactosyltransferase, placing D-galactose into the terminal position of the B antigen; the O allele harbors a frame-shift deletion, resulting in the synthesis of a protein without enzymatic activity that leaves the common precursor H antigen unmodified [51]. There are six major genotypes and four phenotypes in the ABO blood group, with differing frequencies among various populations, which might have been evolutionarily advantageous in conferring resistance against pathogens [3]. Indeed, ABO antigens may alter the presentation of cell-surface glycans and modulate their interactions with pathogens [52] or may provide receptors for pathogen attachment [3]. For example, P. falciparum binding to sialoglycans on erythrocytes is indirectly affected by ABO antigens [52]. On the other hand, C. jejuni strains directly attach to H antigen, and E. coli enterotoxin attaches to A and B antigens in the gastrointestinal tract, while uropathogenic E. coli strains bind to A antigen, and S. saprophyticus strains bind to A antigen in the urinary tract [3]. In contrast, natural antibodies against ABO antigens can protect the host against pathogens; for example, blood-group B individuals are protected against an E. coli strain (O86) that presents blood-group B antigen on its surface [3]. Gastric cancer is also associated with ABO blood group, having an increased incidence in blood-group A individuals, while blood-group O individuals more frequently have ulcer of the stomach or duodenum [5,10]. ABO blood-group antigens are linked to the protein backbone of coagulation factor VIII and von Willebrand factor and critically affect coagulation [4,5]. Indeed, patients with blood-group O are prone to excess bleeding because of the approximately 25% lower plasma concentrations of these coagulation factors [5], which is the consequence of the increased clearance of these glycoproteins, a phenomenon that is related to the H antigen linked to their backbone [5]. Conversely, the elevated plasma concentrations of coagulation factor VIII and von Willebrand factor in non-O blood-group individuals have been implicated in the increased risk for thromboembolic disease and ischemic heart disease [5][6][7][8][9]. It was recently suggested that blood group differences in the glycosylation of these glycoproteins may alter their interaction with galectins and siglecs, and influence systemic immune functions [53]. Blood group as a risk factor for preeclampsia ABO antigens may play a role at the cross-roads of the immune and coagulation systems by influencing gene-environment interactions. As the "great obstetrical syndromes" [54] (e.g. IUGR, preeclampsia, preterm labor) are characterized by changes in the maternal immune and coagulation systems, differences in ABO blood groups may put a patient at a specific risk according to her inherited antigens. Indeed, large cohort studies identified blood-group AB women as at risk to develop preeclampsia [13][14][15]. A population-based case-control study including 100,000 pregnant women revealed that women with blood-group AB were at elevated risk to develop severe preeclampsia (OR: 2.3, 95%CI: 1.3-3.9), early-onset preeclampsia (OR: 3.8, 95%CI: 2.0-7.1), and preeclampsia with IUGR (OR: 3.4, 95%CI: 1.6-7.1) [15].
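The allele logic just described, with the A and B transferases codominant and O an enzymatically inactive null, is what collapses the six major ABO genotypes into four phenotypes. A small Python sketch of that mapping follows; it is an illustration only, and none of the names are from the paper.

def abo_phenotype(allele1, allele2):
    # A and B are codominant; O contributes no active transferase.
    alleles = {allele1, allele2}
    if alleles == {"A", "B"}:
        return "AB"        # both transferases decorate the H antigen
    if "A" in alleles:
        return "A"         # AA or AO
    if "B" in alleles:
        return "B"         # BB or BO
    return "O"             # OO: the precursor H antigen stays unmodified

for genotype in ["AA", "AO", "BB", "BO", "AB", "OO"]:
    print(genotype, "->", abo_phenotype(genotype[0], genotype[1]))
# Six genotypes print, but only the four phenotypes A, B, AB and O appear.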
As the proportion of Caucasian women with preeclampsia and those with blood groups AB and B were low in our study, it was impossible to accurately evaluate the correlation between these blood groups and preeclampsia. The only confirmation that can be derived from our study of the blood-group effect on the risk of preeclampsia is the increase in the significance of the likelihood ratio of developing preeclampsia, particularly preeclampsia with IUGR, following the adjustment of PP13 MoMs to ABO blood groups. Why would blood group be a risk factor for preeclampsia? An earlier view suggested that inherited thrombophilias may confer increased risk for preeclampsia [55,56], and increased plasma concentrations of coagulation factors in blood-group AB individuals may have a prothrombotic effect [15], triggering or exacerbating the pathophysiologic events leading to preeclampsia [11]. The current view on preeclampsia suggests that preeclampsia has an exaggerated maternal systemic immune response component [12,57,58], and indeed, blood-group antigens influence the bioavailability of E-selectin, TNF-alpha and ICAM1 [59], factors implicated in the pathogenesis of preeclampsia [60]. As galectins are at the cross-roads of the immune and coagulation systems, differences in their bioavailability in different blood groups may suggest a role for galectins in the pathophysiologic regulation of these systems [53,61]. ABO blood groups, maternal serum PP13 and preeclampsia We found ABO blood-group-related differences in maternal serum PP13 in two ethnic populations and in vivo and in vitro sequestration of this galectin on RBCs, the main sources of ABO antigens in the circulation. Confirming our clinical data, PP13-binding to RBCs inversely mirrored serum PP13 concentrations according to ABO blood groups. PP13 values were almost identical in blood-group O and A women throughout pregnancy, as was PP13-binding to blood-group O and A RBCs. Blood-group B women had the highest serum PP13 values throughout pregnancy, and PP13-binding was the weakest to blood-group B RBCs. The lowest first trimester PP13 values were found in blood-group AB women in parallel with the strongest PP13-binding to blood-group AB RBCs. In this context it is important to note that in the placenta of anthropoid primates PP13 is primarily produced by the syncytiotrophoblast [28][29][30][31][32]. This galectin localizes to the cytoplasm and also to the brush border membrane of the syncytiotrophoblast, from where it can be secreted and/or shed into the maternal circulation [28][29][30][32][33][34]. In normal pregnancies, there is a continuous rise in maternal serum concentrations of PP13 with advancing gestational age [21,33], similar to the increase in maternal serum concentrations of other proteins synthesized by the syncytiotrophoblast (e.g. Placental Protein 5, alkaline phosphatase, pregnancy-specific beta1-glycoprotein) [62], and similar to the increase in trophoblast cell volumes [63]. Thus, in normal pregnancies, maternal serum concentrations of PP13 primarily depend on the trophoblast volume and the trophoblastic synthesis of PP13 [33]. Of importance, several case-control studies revealed reduced first trimester maternal serum PP13 concentrations in patients who subsequently developed preterm severe preeclampsia [16][17][18][19][20][21][22][23][24][25][26]. This can be the consequence of the decreased placental PP13 mRNA expression observed in these patients as early as in the first trimester and throughout pregnancy [33,35,36].
This is important since the origins of preeclampsia can be dated back to the very early events in placentation [11,12,57,58], and the reduced first trimester placental expression of PP13, a galectin that may have important immunobiological functions at the maternal-fetal interface [28,64], may contribute to the early events in the placental pathogenesis of preeclampsia in these patients. In this context, the reduced bioavailability of PP13 in blood group AB women in the first trimester may hypothetically contribute to the early pathophysiologic events at the maternal-fetal interfaces and increase the risk of preeclampsia in these women. This study has also shown that as maternal serum PP13 concentrations increase during pregnancy, these become similar in women with blood group AB to those in women with blood groups A and O in the third trimester. At this phase an exaggerated maternal systemic inflammatory response already dominates preeclampsia [11,12,57,58], and maternal serum concentrations of PP13 and its bioavailability at the maternal-fetal interface may not have a similar effect on the development of preeclampsia compared to the first trimester. The structural basis for the differential binding of PP13 to ABO blood group antigens In the current study we revealed that the differential binding of PP13 to various ABO blood-group RBCs is mediated by the CRD of PP13, consistent with our previous in vitro and in silico studies [27][28][29] demonstrating the affinity of PP13 to sugars present at terminal positions on ABO blood-group antigens. Importantly, serum PP13 was not affected by Rh antigens, which do not carry glycans. Similarly, several galectins were also demonstrated to bind differentially to various ABO antigens or RBCs carrying various ABO antigens [37][38][39][46], and ABO antigen-binding was suggested to be mediated by an extended pocket in the CRDs of these galectins [39,46]. Our sequence alignment and 3D modeling showed that three residues in the core binding-site of galectins which are involved in disaccharide-binding are also conserved in PP13 [27][28][29]. Moreover, the B-site in PP13 CRD resembles the B-sites of other galectins (e.g. galectin-8), which are involved in blood-group antigen binding [38,39]. In accord with its overall structural similarity to fungal galectin CGL2 [46], PP13 accommodated the blood-group H trisaccharide in its CRD similar to CGL2 [46], suggesting the structural basis for the observed in vitro and in vivo blood group antigen-binding capability of PP13.

Table 2. Median PP13 concentrations and median PP13 MoMs (before and after adjustment to ABO blood groups) (all presented with 95% confidence intervals) are provided for the four study groups in the first, second and third trimesters. Although most of the patients gave three blood samples during the study of Gonen et al. [21], some of them gave only two; thus, the number of investigated blood specimens decreases from the first to the third trimester. doi:10.1371/journal.pone.0021564.t002

Figure 6. Receiver-operating characteristic (ROC) curves depicting the sensitivity and specificity of PP13 MoM for pregnancy disorders with or without its adjustment to ABO blood groups. ROC curve analysis was used to evaluate the accuracy of PP13 MoM for first trimester prediction of intrauterine growth restriction (IUGR; N = 52), preeclampsia (N = 20) and preeclampsia complicated with IUGR (N = 5) before (A) and after (B) adjustment to ABO blood groups.
Areas under the ROC curves (AUCs) for all disease groups were above (P<0.001) the diagonal lines, which represent random prediction. As galectin interactions with oligosaccharides become stronger by cross-linking a large number of ligands on cell surfaces [38,46,65,66], the differences observed in PP13-binding affinities in vitro and in vivo cannot simply be explained by differences in antigen-binding energies between PP13 and its ligands. Other determinants that may also contribute to the differential binding of PP13 to RBCs with various ABO blood types include the following: 1) there is a larger number of A and H antigen-sites compared to B antigen-sites on the RBCs of individuals with the respective blood groups; 2) there is a dynamically changing affinity of galectins to the RBCs with changing lectin concentrations (0.06-10 µM) [37,39], also found for PP13 (0.175-1.4 µM); 3) the mode of the presentation of glycans on cell-surfaces strongly influences their galectin specificity [37]; and 4) the availability of the B antigen for galectin-binding may be different in blood-group B and AB RBCs due to antigen proximity differences. Indeed, there is a different localization of ABO blood-group antigen clusters on RBC surfaces, since H and A antigen clusters are localized outside or in the periphery of sialylated glycophorin clusters, while B antigen clusters are localized in the center of these sialylated clusters [52]. It is possible that a stronger steric inhibition by sialic acids decreases PP13-binding to B antigens. As indirect evidence for this inhibition, we observed a 1.5-2-fold increase in PP13-binding to "old" compared to "young" RBCs, as "old" RBCs lose approximately half of their terminal sialic acid residues [48]. In blood group AB, the close proximity of A and B antigens may be the basis for the stronger binding of PP13 to blood-group AB erythrocytes, leading to its sequestration and lower first trimester serum concentrations, which was also independently observed in cases of preterm preeclampsia, secondary to diminished placental PP13 expression [33,35]. In light of our findings, we hypothesize that the bioavailability of other galectins that were previously shown to bind ABO blood group antigens [37][38][39][46] may also be associated with ABO blood groups in the circulation. Improvement of the PP13 test for predicting preeclampsia and IUGR An important outcome of this study is that the adjustment to ABO blood groups further improved the predictive accuracy of first trimester PP13 MoMs for IUGR, preeclampsia and preeclampsia with IUGR. The degree of improvement is not negligible, as at false positive rates of 15-20% the adjustment of PP13 MoMs to ABO blood groups improved the detection rate by 13-25%, a change which usually requires the engagement of additional markers into concurrent tests. When further adjusted to ABO blood group, this improvement turned PP13 into a reasonable marker for IUGR, bringing its value to the clinically relevant range for use as a potential predictor. Blood-group adjustment of PP13 MoMs also improved the prediction accuracy for severe preeclampsia (term and preterm combined) complicated by IUGR. This is remarkable since PP13 was earlier shown to be a good marker only for early and preterm preeclampsia [16][17][18][19][20][21][22][23][24][25][26]. However, the potential value of the PP13 test for predicting term severe preeclampsia can only be revealed by investigating larger cohorts.
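To make the quoted detection-rate figures easier to interpret, here is a minimal Python sketch of how sensitivity at a fixed false-positive rate can be read off when comparing two versions of a marker, such as PP13 MoMs before and after blood-group adjustment. It assumes that low first-trimester values flag disease; all data, cutoffs and names are invented for illustration and are not the study's values.

def sensitivity_at_fpr(case_scores, control_scores, fpr):
    # With low values flagging disease, the cutoff is the fpr-quantile
    # of the controls; sensitivity is the fraction of cases below it.
    controls = sorted(control_scores)
    cutoff = controls[int(fpr * len(controls))]
    return sum(1 for s in case_scores if s < cutoff) / len(case_scores)

controls = [0.60 + 0.05 * i for i in range(20)]       # assumed control MoMs
cases_raw = [0.55, 0.70, 0.85, 0.95, 1.05]            # assumed case MoMs
cases_adjusted = [0.45, 0.60, 0.70, 0.90, 1.00]       # after adjustment

for label, cases in [("raw", cases_raw), ("adjusted", cases_adjusted)]:
    print(label, sensitivity_at_fpr(cases, controls, fpr=0.20))
# In this toy example the adjusted marker detects 60% of cases at a 20%
# false-positive rate versus 40% for the raw marker, the same kind of
# sensitivity gain at fixed specificity that is reported above.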
Conclusions and implications Our study revealed that ABO blood group affects maternal serum PP13, requiring the addition of blood group as an important confounder in the risk prediction for preeclampsia. This is also the first report suggesting that maternal blood group may be important in the first trimester risk assessment for the subsequent development of IUGR as well. In light of these findings, we hypothesize that the bioavailability of galectins other than PP13 may also be associated with ABO blood group in the circulation, and we propose that when assaying galectins or other lectins as biomarkers in blood, ABO blood group status needs to be taken into account. Our results showed that there is a greater sequestration and lower maternal serum concentration of PP13 in blood-group AB individuals in the first trimester. Blood group AB, similar to low first trimester maternal serum PP13, is a risk factor for severe preeclampsia. It is possible that the low bioavailability of PP13 in pregnant women with blood group AB in the first trimester contributes to the increased risk of preeclampsia in these patients, and that the coincidence of blood group AB and low PP13 expression may exacerbate the severity of preeclampsia. Although the exact functions of PP13 at the maternal-fetal interface have not been completely discovered, it was recently shown that PP13 can induce apoptosis of activated T cells to a similar extent as galectin-1 [29], a protein implicated in maternal-fetal immune tolerance [67,68]. Supporting Information Table S1 Patient characteristics in the Caucasian cohort. *P<0.05, **P<0.01, ***P<0.001 compared to unaffected women in the Caucasian cohort. Values are presented as median (interquartile range) a or number of patients (percentage) b . (DOC)
2014-10-01T00:00:00.000Z
2011-07-25T00:00:00.000
{ "year": 2011, "sha1": "0cd50023109ad205f0ce9b11e8cab11d7e677cd4", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0021564&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0cd50023109ad205f0ce9b11e8cab11d7e677cd4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
33213634
pes2o/s2orc
v3-fos-license
The Effects of Using Academic Role-Playing in a Teacher Education Service-Learning Course Popular abstract: Academic role-playing is one of the more effective active-learning instructional strategies currently being used at the American university level in the preparation of future educators. This mixed methods study is an investigation of the use of role-play in an undergraduate university course designed to prepare students to become public school coaches and physical education teachers. The five original vignettes that were role-played were specifically written to prepare the students to successfully handle situations they might reasonably encounter in their future work. The role-play model used in the research was originally created by the Shaftels in the 1960s, but several creative variations devised by the current investigators were added to that model for this study, making it an adapted version. Data collected included questionnaire responses from two different questionnaires, information from a focus group, and observations by the two investigators. Investigators concluded that the students not only exhibited skill in the techniques used to resolve the issues in the vignettes, but that students gained confidence the more they participated in the role-plays, which occurred over a 4-week period. The students themselves reported that learning from one's peers, trying out their ideas in a safe environment, being forced to plan an intended outcome in advance, and hearing feedback from others were their most valued experiences. They also overwhelmingly reported preferring role-play to the more traditional university lecture method. INTRODUCTION Academic role-playing is one of the more effective and frequently used active learning instructional strategies currently being used at the American university level in the preparation of future educators. 1 If the focus of instruction is the learning of new skill sets, role-playing those skills in a realistic yet safe classroom environment allows students to implement them correctly in a mentored and structured learning setting. It also allows students to gain the confidence to execute them appropriately in the real world. The investigators used several varieties of a classic academic role-playing strategy in a course designed to prepare students to become future coaches and physical education teachers. This particular course was also designed as a service-learning course in which students served as volunteer student teachers in after-school programs for area schools by coaching team sports and facilitating content-specific events and activities. The role-playing vignettes that were acted out were specifically designed to prepare the students to successfully handle situations that might reasonably arise both in their service-learning activities and when they become full-time physical educators. The purpose of this study was to determine the effects of using the role-playing strategy with a group of student teachers. The success, or lack of same, was to be determined according to three factors: student attitudes toward participating in the strategy, their skill level while role-playing the required skills, and the degree of confidence they expressed toward the use of these skills in the future as compared to learning the same content through a lecture format.
The specific research questions were: will the use of the adapted version of Shaftel's role-play model (1) increase students' classroom interaction with peers and with instructors? (2) increase students' positive responses to course content? (3) increase students' confidence toward their future participation in the service-learning activity as well as in their student teaching? Academic role-playing (not the same as role-playing games) can be defined as the involvement of participants and observers in a real problem situation along with the desire for resolution and understanding that this involvement engenders (Joyce, Weil, and Calhoun, 2009). The role-playing process provides a live sample of human behavior that serves as a vehicle for students to (1) explore their feelings; (2) gain insights into their attitudes, values, and perceptions; (3) develop their problem-solving skills and attitudes; and (4) explore subject matter in varied ways (Joyce and Weil, 1980). According to Henriksen (2004), role-play is "…a medium where a person, through immersion into a role and the world of this role, is given the opportunity to participate in, and interact with the contents of this world, and its participants" (p. 108). Seaton, Dell'Angelo, Spencer, and Youngblood (2007) suggest the use of role-play to help in the development of self-awareness, self-regulation, and self-monitoring. In a Finnish study of role-playing games, Meriläinen (2012) describes the self-reported social and mental development of role-players. Specific skills that can be gained by role-play include modifying one's performance in light of feedback, becoming a good listener, showing sensitivity to social cues, managing emotions in relationships, and exercising assertiveness, leadership, and persuasion (Elias et al., 1997).
Karwowski and Soszynski (2008) used role-play successfully to train undergraduate education students in creativity, but they also believe that it can develop a capability for constructive criticism. Sileo, Prater, Lukner, Rhine, and Rude (1998) suggest role-playing as well as service-learning as appropriate strategies to facilitate pre-service teachers' active involvement in learning. According to Randel, Morris, Wetzel, and Whitehill (1992), students should not be expected to learn to deal with complexity unless they have the opportunity to do so, and the authors of the current study believe that role-playing provides an opportunity to address such complexity. In a study designed to compare lecture versus role-playing in the training of the use of positive reinforcement, Adams, Tallon, and Rimell (1980) found that the performance of the lecture-trained staff was stable or declined after an initial improvement, whereas the performance of staff that role-played continued to improve. Moore (2005) reminds us that teachers often use role-playing to facilitate learner involvement and interaction in the process of decision making. Svinicki and McKeachie (2011) see the chief advantage of role-playing to be that students are active participants rather than passive observers and therefore must make decisions, solve problems and react to the results of their decisions. Dell'Olio and Donk (2007) believe that role-playing helps students make responsible autonomous choices because it provides a forum for exploring multiple ways of acting and reacting in a given situation. Hall, Quinn, and Gollnick (2008) state that experiences gained through role-play can take the place of firsthand experiences that may be impossible to otherwise achieve, and further explain that teacher-education candidates often cite such experiences as the most informative and influential part of their teacher-education coursework. Randel et al. (1992) found that students reported more interest in role-playing when compared to traditional methods of teaching. A concern, however, regarding the use of role-play is raised by Shepard (2002), who describes the anxiety often experienced by students who have not role-played before, particularly since they would be required to do it in front of their classmates. Henriksen (2004) too expresses concerns that not only might students be anxious but also that they may think that role-play is associated with a childish image. For their part, teachers are attracted to role-play, particularly if their theoretical orientation is constructivism, which allows students to learn by making connections between their own knowledge and experience and the real world (Kindsvatter, Wilen, and Ishler, 1996). CONSTRUCTIVISM AND THE NATURE OF THE LEARNING PROCESS As used in this study, the nature of the learning process is that it is an intentional process on the part of the learner of constructing meaning from information and experience. Academic role-playing is an example of the use of constructivism and student-centered learning wherein students are enabled to create their own meaning from participating in realistic life situations. According to Lainema (2009), constructivism has recently gained popularity again although it is certainly not new, but even today it is difficult to define it unambiguously. Building on the ideas of Dewey (1910), Piaget (1970, 1972), and Vygotsky (1978), constructivism can be defined in a variety of ways with differing areas of focus.
Kauchak and Eggen (2007) define it as an "eclectic view of learning that emphasizes four key components: (1) learners construct their own understandings rather than having them delivered or transmitted to them; (2) new learning depends on prior understanding and knowledge; (3) learning is enhanced by social interaction; and (4) authentic learning tasks promote meaningful learning" (p. 9). Ormrod (2000) says that while there may be no single constructivist theory, most adherents recommend these same five beliefs: complex, challenging learning environments and authentic tasks; social negotiation and shared responsibility as a part of learning; multiple representations of content; understanding that knowledge is constructed; and student-centered instruction. Lainema (2009) agrees that constructivism has been described by some as more a set of principles than a coherent theory and that all advocates do not necessarily share the same view of these principles. Marlowe and Page (2005) contrast constructivism with the more traditional lecture approach in four ways: constructivist learning is about constructing knowledge, not receiving it; constructivist learning is about understanding and applying, not recall; constructivist learning is about thinking and analyzing, not accumulating and memorizing; and constructivist learning is about being active, not passive. Most constructivists agree that constructivism focuses on what students do and experience, and learners are therefore encouraged to take control of and become increasingly responsible for their own learning. Building then on the theory of constructivism, we further define learning as the intentional, meaningful, coherent representation of knowledge. It occurs best when learners are goal-directed, and it is successful when they can link new information with existing knowledge in meaningful ways. It can be enhanced when learners have the opportunity to interact and collaborate with others (American Psychological Association, 1997). Role-play as an instructional strategy takes advantage of these practices: connecting new experiences to previous knowledge and experience, and doing it in the company of others. According to Gunter, Estes, and Schwab (2002), the only thing that really matters to learners is the meaning students construct for themselves. Lainema (2009) defines learning as an active process of constructing rather than communicating knowledge. It also is best when learners experience insight, which is defined by Bigge and Shermis (2004) as getting the feel of, or catching on to, a situation. All of these conditions are further enhanced when students feel psychologically safe (Rogers, 1969). Overall, learning should involve purpose and movement toward a goal. To design curriculum so that this type of experience occurs for students, professors should design active, learner-centered strategies that ideally start with relevant problems that students are motivated to resolve and apply to their own lives. In our opinion, role-playing satisfies these criteria. In our use of role-playing to prepare students to become effective teacher/coaches, we defined our roles as facilitators and discussion leaders. Participants of the Study Participants of the study were undergraduate physical education seniors from a large, diverse, urban American research university who were taking a capstone secondary teaching methods course with a sustained service-learning component (i.e., coaching an after-school soccer program).
The course was specifically designed to prepare pre-service teachers to become physical education teachers and coaches in the public schools. Those taking the course in the fall of 2010 and spring of 2011 were in a control group (N=50), and the other students who took the course in the fall of 2011 and spring of 2012 participated in the role-play intervention (N=52). A subset of 24 of the 52 intervention group students (13 males and 11 females) participated in the specific role-play activities and responded to both of the two questionnaires administered in this study. The two investigators were professors in the same College of Education and Health Professions (one from a Department of Curriculum and Instruction, and the other from a Department of Kinesiology). An internal review board for research approved protocols for this study. Role-Play Activities (The Intervention) The role-playing model used in the study is by George and Fannie Shaftel and consists of nine steps: (1) warm up the group, (2) select participants, (3) set the stage, (4) prepare the observers, (5) enact, (6) discuss and evaluate, (7) reenact, (8) discuss and evaluate, and (9) share experiences and generalize (Shaftel and Shaftel, 1967). The intent of the Shaftel model, and of the investigators' several variations of it, was to teach problem-solving attitudes and skills such as the ability to identify a problem, to design a plan to resolve it along with alternative techniques, and to experience the consequences of a variety of different ways to handle problem situations. No game-like elements or rewards were added to the role-playing used in this study. A unique educational advantage of the Shaftel model is their fourth stage - preparing the observers. By assigning students who are not actually playing one of the roles to specifically observe one of the players, all members of the class become directly involved in the process. Then, during the sixth stage - discuss and evaluate - non-participating students are asked to report out on their reaction to the role that was played: was it realistic, was it successful, what values were being upheld by the players, is there another way the role could be played to reach the same conclusion or a different conclusion? In large university classes, students are more likely to become and remain engaged in the role-play if they have been given a direct assignment to observe and critique one particular player than if they are simply present in the room when other students are role-playing. In therapeutic settings, when role-play is used, participants are encouraged to focus on feelings, and that type of role-playing, known as psychodrama or sociodrama, is therefore designed to allow for feelings to be expressed along with insight into one's own behavior and that of others. On the other hand, in the educational setting, the Shaftel model emphasizes the intellectual content as much as the emotional content, and the analysis and discussion that follow the enactment are as important as the role-play itself (Joyce and Weil, 1980). In the role-plays in the current study, students were encouraged to do both - to acknowledge their feelings and to address the cognitive course content being tapped by the vignette. Further, they were asked to look for the assumptions which underlie people's verbalizations and behavior. As the post-role-play discussions continued, students were also asked to identify the values that were being expressed.
The Shaftel model is designed to deemphasize the traditional role of the professor and instead for the professor to listen and learn from the group. When the learner has an opportunity to interact and to collaborate with others on instructional tasks, learning is enhanced (American Psychological Association, 1997). A final goal of the Shaftel model and of this research was therefore to allow students the opportunity to bring to their conscious awareness their own values while testing them against the views of others. In teacher education this is of significant importance, as instructors try to move students to where they may either validate their current values or revise them as they learn from other positions and value systems. An original vignette or written scenario was provided to the students on an overhead projector, and students were instructed to determine what they thought the "intended outcome" or solution to the problem should be. They were then instructed to plan techniques or dialogue they would use to accomplish their "intended outcome". While the students were writing their plans, a table and chairs were placed in front of the classroom. At that point students volunteered (and in some cases were selected) to role-play the parts in the vignette. After the role-play was concluded, the investigators and the other class members provided feedback and reactions. Additional role-plays were then conducted using the same vignette to give other students the opportunity to try out their own implementation ideas and interaction styles. Variations or adaptations that were added to the Shaftel model for this study included having the students plan in advance and write out how they would act out their roles, focusing on their designation of an "intended outcome." A second variation allowed the student portraying the coach/physical education teacher to pick a "back-up" to sit behind him/her during the role-play to serve as a helper (to make helpful suggestions from the sidelines) if he/she hit an impasse with the person playing the other character in the vignette. A final, very popular variation called for all the students, in pairs, to do practice role-plays at their seats (to try out their ideas and plans) before volunteering to role-play in front of the class. Each of these variations was used with some of the vignettes, but not with all of the vignettes. The Vignettes Used for Role-Plays According to Schick (2008), role-play participants are more likely to give full effort, to accomplish the tasks, and thereby to acquire the skills being taught when a role-play is about something that they find personally meaningful. The five original vignettes used for the role-plays were composed because the content was deemed to be personally meaningful to this group of students. The vignettes addressed the following issues that students might reasonably encounter both in their service-learning activity and as beginning teacher/coaches: public school students not motivated to participate; aggressive students who are hurting other students; sexual harassment toward the teacher/coach; challenging the teacher/coach's authority; and establishing a working relationship with a senior coach who is not interested in the school's physical education program. In all of the vignettes except the one with the senior coach, all roles were played by members of the class. In the vignette about the senior coach, one of the investigators played the role of the coach.
When the investigator was role-playing, the students loved getting a chance to "outsmart" their professor. One of the most interesting responses after each of the post-role-play discussions was that students volunteered other, similar situations that they would also like to role-play. After the role-play on sexual harassment by a male student toward the young female teacher/coach, for example, students suggested they role-play sexual harassment toward a male from a female student and also same-sex harassment for both genders. Like the pre-service students described by Sobel and Taylor (2005), our students too requested more real-world scenarios to solve.

This is the vignette used for the too-aggressive student: Fifth period rolls around and this time the juniors and seniors enter the gym for a class called "team sports." They tell you they have been playing a flag football unit, and a few students go into the closet and pull out the necessary equipment. A senior named Dominick divides up teams and runs the class very efficiently, leaving you very little time and opportunity to manage and/or control anything. The game begins and Dominick exhibits extremely aggressive behavior toward the opposing team, hitting students hard and tripping and tackling them to the ground violently. He is also abusive to his own teammates, yelling at them when they make mistakes and blaming them for anything that goes wrong on their team. It is obvious the students are afraid of him and will do anything to try to appease him and/or stay out of the way. You ask Dominick to speak with you in the office. What is your next move?

Sources of Data: Quantitative

Two sources of quantitative data were used for analysis in this study. The first utilized a 14-item Likert-scale questionnaire developed by the provost's office regarding course effectiveness and class interaction in a university course. This instrument was administered three times at even intervals to pre-service teachers throughout each of the control and role-play semesters. Based on relevance to this study, only six of the original 14 questions were retained for analysis. Because data was collected on participants in this course during the academic year before the role-play interventions were conducted, a quasi-experimental non-equivalent groups design was applied to this dataset using a paired-samples t-test analysis. This test compares the means of two variables by computing the difference between them for each case and testing whether the average difference is significantly different from zero at the p < .05 level (a minimal sketch of this analysis appears at the end of this section). The second set of quantitative data was collected from a summative and descriptive questionnaire specifically addressing the usefulness of the role-play activities in the course and comparing them to traditional lecture-style methods. This questionnaire was administered only to the pre-service teachers who participated in the role-play activities during the very last semester of the intervention (i.e., intervention group, spring 2012 [N=24]).

Sources of Data: Qualitative

Using a naturalistic approach (Lincoln and Guba, 1985), qualitative data was collected in the form of a role-play questionnaire, a student focus group, and individual reflections written by the instructors. This data was recorded, transcribed, and analyzed, noting all salient and recurring units of meaning that were reported.
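Before turning to the findings, here is a minimal sketch of the paired-samples t-test used for the quantitative comparison above, written in Python with SciPy. The six item means are hypothetical placeholders for illustration only, not the study's actual scores.

```python
from scipy import stats

# Hypothetical per-item mean scores (5-point Likert scale) for the six
# retained course-effectiveness items: control vs. role-play semesters.
control = [3.8, 4.0, 3.5, 3.9, 3.7, 4.1]
role_play = [4.2, 4.4, 3.6, 4.0, 4.3, 4.5]

# Paired t-test: computes the per-item difference and tests whether the
# mean difference is significantly different from zero.
t_stat, p_value = stats.ttest_rel(role_play, control)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Significant at p < .05" if p_value < 0.05 else "Not significant")
```

Pairing here treats each questionnaire item as its own case, so the test asks whether the same items scored systematically higher in the intervention semesters.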
The themes that emerged from this qualitative data not only helped explain and clarify the quantitative findings, but they also served to address some of the quantitative limitations and provided a more complete and in-depth description of phenomena happening within the study.

Course Effectiveness Questionnaire

Findings from the course effectiveness questionnaire showed significantly higher scores among pre-service teachers who participated in the role-play activities on two of the six items (Figure 1). The first item, "The instructor asked students in class to participate in a discussion of the topic at hand," exhibits how using role-play in a course can force the instructor to engage students with the content at hand and create more of a student-centered teaching and learning environment. The second item, "The student asked or responded to a question from the instructor or fellow students," demonstrates and supports what others have found in the literature about the level of participant engagement required by role-play activities and the effect it can have on participants.

Role-Play Questionnaire

Responses to the five descriptive questions on the summative role-play questionnaire were as follows:

Q1) Have you had previous participation in role-play activities? Yes: 7. Note: The 7 students who said they had previously participated in role-play activities stated that they had all experienced role-play in their university teacher education courses, except for one student who said she had experienced role-play in a high school theatre arts course.

Q2) Describe your reaction to the use of role-play as preparation for your service learning as well as for your first teaching job: Very helpful: 24; Not helpful: 0.

Q3) Comparing role-play to the traditional university lecture method, which do you prefer? Prefer traditional lecture method: 0; Prefer role-play scenarios: 22; Like both equally: 2.

Q4) Describe your learning engagement level during role-plays compared to lecture style: More engaged during role-play: 21; Mentally engaged but did not volunteer to role-play in front of the class: 3. Note: One of these three students explained: "There were sometimes where I could have participated, but opted not to. In my mind I was engaged with classmates during their individual role-plays." It became obvious to the researchers that these three students had misunderstood the use of the term "engaged" as used in this role-play questionnaire.

Q5) Regarding your critical thinking ability, compare the two styles: More engaged in critical thinking during role-play: 23; More engaged in critical thinking during lecture: 1. Note: This second student's explanation was "Because everyone was thinking at one time, I didn't have to." But further in the questionnaire, he wrote: "I'm a hands-on learner, and the role-playing scenarios actually put me in the situation instead of just reading about it in a book."

Other specific comments on the questionnaire included: "Role-play gets you closer to the real deal rather than listening to someone just tell you how to react. It was never boring; I was always eager to see how different people would respond. I looked forward to seeing all the different techniques. I feel like it forces you to respond quickly while thinking critically as opposed to lecture where people can just act like they are paying attention." "I had to pay attention because I didn't have the situations in a book to read later."
"Being able to reflect back on these role-plays and notes I took will help me handle that situation better than I would if I had no prior experiences." "They help me figure out the "goal" because we may not have really known it before then. I need to focus on the goal and not allow my emotions to overtake the goal." (Note: The comment by this student refers to their instructions to write out the intended goal before attempting the role play). "Role-Play gives me a lot better idea of "real world" situations and it has put more tools in my bag." "Only when you find yourself in a problem situation do you learn the feelings, obstacles, etc. as if you really would in the real world. It really doesn't help me personally to be told how to handle a situation. It is easier to learn through DOING." "The biggest benefit was that I was able to hear how others would respond to specific situations. As I watched others participate, I was able to place myself in the situation and think more critically about my answers." Overall, their answers on the questionnaires revealed that learning from one's peers, trying out ideas in a safe environment, being forced to plan an intended outcome in advance, and hearing feedback from others were their most valued experiences. The Focus Group Discussions that emerged from the focus group included themes relevant to situations likely to occur while working in a secondary public school setting (e.g., planning for success, confidence building, effective communication, and utility of process). All comments were in one way or another reflections about the authenticity of training for life in schools despite the fact that it was not literally "on the job" training. Finding one's style or strategy for dealing with the challenges and realities of the profession was explicit, as it was noted time and again that this learning strategy was effective at bringing out individual strengths and weaknesses when it came to dealing with common situations educators are faced with every day. As a result, the role-play experience provided an initial or baseline realization about how pre-service teachers are likely to respond on the job, allowing for deep-seated reflection and self-analysis of how to handle similar situations that are just around the corner in their service-learning projects and/or student teaching residency. The other part of the focus group discussion reflected some of the benefits of going through an authentic and mentored kind of learning exercise without being tied to a "real" situation with direct consequences. Not too often are novice teachers allowed a trial "run-through" that encourages mistakes to be made without any real consequences to students. This includes the affordability role-play allows to take a time-out, consider multiple angles and solutions, and re-think how to approach a particular situation. These exercises allowed for extra time and space for questions, new ideas, elaborations, and redirection of an experience in order to gain depth and understanding of the appropriate (and inappropriate) ways to approach or handle teaching interactions and learning situations. This is critically important since we know that a teacher's word choice, body language, and personal disposition represents everything meaningful when working with students. 
It was also discussed that this platform makes it possible to learn from multiple people with diverse experiences (not just the professor), and to gain a multi-dimensional perspective on how to deal with a problem effectively and in different contexts. Finally, there was consensus among participants that the role-play strategy helped pre-professionals better foresee challenges and take the necessary time (or buy time) to prepare for precarious situations that are likely to occur at some point in their careers. In effect, the role-play activities enabled teacher candidates to be on the lookout for conflict or divergence, to be proactive rather than reactive, and to know how best to take advantage of an opportunity when presented.

Investigators' Observations

Much to the credit of teacher education research over the past few decades, the literature has repeatedly pointed to experiential training as an effective means of preparing student teachers for their work in education (Coffee, 2010; Domangue and Carson, 2008; Wasserman, 2009). This work has largely focused on the implementation of theoretical and content knowledge mixed with practical field experience provisions (e.g., service-learning), beyond mere lecture and examination of generalized course material. Although this push has enhanced teacher education methodology to include practical experience and guided reflection with experienced mentorship, service-learning in itself still has its limitations. Foremost, service-learning affects learners in real time, and you do not get "do-overs." You can't just call "time-out" and reexamine how you would handle a situation, or take a moment to analyze all the variables that go into split-second decision-making when working with large groups of students; you are still teaching in real time. By adding a third component like role-play to this teacher-training trifecta, teacher educators gain another tool for preparing for likely situations by evaluating, analyzing, and redirecting a preparatory experience before the actual service-learning experience takes place. In determining whether learning did or did not take place, we agree with Jonassen, Peck, and Wilson (1999) that assessment of this type of activity is process-oriented, and one of the most valid forms of assessment is therefore to assess while the learning is occurring.

[Figure 1: Paired Samples Tests of Course Effectiveness Survey Items]

CONCLUSION

Every secondary-level teacher knows that working with teenage students is not always an easy job. Every day there is a new challenge that educators must face, and it takes time and experience to learn how to handle situations appropriately with this population. Gaining real-world experience in a university setting is oftentimes difficult because access to schools and students is never easy or convenient. Using role-play techniques to prepare future educators for those likely difficult encounters is an effective way to construct a platform for the exploration of issues, provide practical mentorship, and inspire reflection about best practices. This study has shown that academic role-play in a teacher education course with a service-learning component can improve course interaction between instructors and students and also between students and students, thereby strengthening the active-learning dynamic in a university classroom.
With regard to the specific questions addressed in this research, we conclude that the use of the adapted version of Shaftel's role-play model did (1) increase students' classroom interactions with peers and with the instructors; (2) increase students' positive responses to course content, especially as compared to the same content taught without the use of role-play; and (3) increase students' confidence in their ability to succeed in the service-learning activity as well as in their student teaching. Future research, however, is needed to explore whether and to what extent student background variables such as age, gender, performance anxiety level, and previous academic as well as non-academic role-playing experience make a difference in students' reactions and responses. Since this research utilized an adapted version of the Shaftel role-play model, results may have been different if only the original nine-step Shaftel model had been used. It would also be interesting to determine whether the students would have responded in the same way if they were only preparing to become future teacher/coaches and were not also preparing to do a service-learning project that would affect their course grades. Because this study did not control for those variables, and because of the small N, generalizations regarding the use of role-play with all teacher education students studying to become coaches and physical education teachers while enrolled in service-learning courses should be made with caution.

Notes

1. To prepare future teachers for the student teaching/practicum/residency semester required by most states in America, teacher educators often require students to role-play a teacher in a micro-teach format, within which they teach a lesson to a simulated class of students.
Interleukin-8/CXCR2 signaling regulates therapy-induced plasticity and enhances tumorigenicity in glioblastoma

Emerging evidence reveals enrichment of glioma-initiating cells (GICs) following therapeutic intervention. One factor known to contribute to this enrichment is cellular plasticity, the ability of glioma cells to attain multiple phenotypes. To elucidate the molecular mechanisms governing therapy-induced cellular plasticity, we performed genome-wide chromatin immunoprecipitation sequencing (ChIP-Seq) and gene expression analysis (gene microarray analysis) during treatment with standard-of-care temozolomide (TMZ) chemotherapy. Analysis revealed significant enhancement of open-chromatin marks in known astrocytic enhancers for interleukin-8 (IL-8) loci as well as elevated expression during anti-glioma chemotherapy. The Cancer Genome Atlas and Ivy Glioblastoma Atlas Project data demonstrated that IL-8 transcript expression is negatively correlated with GBM patient survival (p = 0.001) and positively correlated with that of genes associated with the GIC phenotype, such as KLF4, c-Myc, and HIF2α (p < 0.001). Immunohistochemical analysis of patient samples demonstrated elevated IL-8 expression in about 60% of recurrent GBM tumors relative to matched primary tumors, and this expression also positively correlates with time to recurrence. Exposure to IL-8 significantly enhanced the self-renewing capacity of PDX GBM (average threefold, p < 0.0005), as well as increasing the expression of GIC markers in the CXCR2+ population. Furthermore, IL-8 knockdown significantly delayed PDX GBM tumor growth in vivo (p < 0.0005). Finally, guided by in silico analysis of TCGA data, we examined the effect of therapy-induced IL-8 expression on the epigenomic landscape of GBM cells and observed increased trimethylation of H3K9 and H3K27. Our results show that autocrine IL-8 alters cellular plasticity and mediates alterations in histone status. These findings suggest that IL-8 signaling participates in regulating GBM adaptation to therapeutic stress and therefore represents a promising target for combination with conventional chemotherapy in order to limit GBM recurrence.

Introduction

Glioblastoma (GBM) is the most aggressive and prevalent primary brain tumor in adults, with 10,000 new diagnoses each year. Recurrent tumors, with increased invasive capacity and therapeutic resistance, are an inevitability for GBM patients despite aggressive therapeutic intervention. Glioma-initiating cells (GICs) are considered a key driver of primary tumor development, as well as major contributors to tumor recurrence 1. Recent reports demonstrate that differentiated GBM cells undergo cellular and molecular changes to acquire GIC-like states 2,3. This cellular plasticity dramatically complicates our ability to prevent tumor recurrence in the clinical setting. Our group and others have shown that therapeutic stress and microenvironmental dynamics induce cellular plasticity in GBM, driving the conversion of differentiated GBM cells to GIC states 3-5. However, the exact mechanisms governing post-therapy GBM plasticity remain unknown. Unraveling the signaling pathways that drive this plasticity will provide key insight for improving treatment of GBM. Using gene expression and bioinformatic analysis, we identified interleukin-8 (IL-8) as a key player in promoting post-therapy cellular plasticity. IL-8 enhances the self-renewal capacity of patient-derived xenograft (PDX) GBM cells and is elevated in recurrent GBM patient specimens.
Reducing levels of IL-8 in murine models significantly improved survival and enhanced the efficacy of temozolomide chemotherapy. We demonstrate that IL-8/CXCR2 signaling alters the epigenomic landscape in GBM cells, inducing a GIC-like state and increasing the proportion of GICs after treatment. This study highlights IL-8 signaling as a key influence on GBM plasticity and recurrence, as well as a potential novel therapeutic target in GBM.

Materials and methods

Cell culture

U251 human glioma cell lines were procured from the American Type Culture Collection (Manassas, VA, USA). These cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM; HyClone, Thermo Fisher Scientific, San Jose, CA, USA) supplemented with 10% fetal bovine serum (FBS; Atlanta Biologicals, Lawrenceville, GA, USA) and 1% penicillin-streptomycin (P/S) antibiotic mixture (Cellgro; Mediatech, Herndon, VA, USA). PDX glioma specimens (GBM43, GBM12, GBM6, GBM5, and GBM39) were obtained from Dr. C. David James at Northwestern University and maintained according to published protocols 6. Cells were propagated in vivo by injection into the flanks of nu/nu athymic nude mice. In vitro experiments with these cells were performed using DMEM supplemented with 1% FBS and 1% P/S antibiotic mixture. All cells were maintained in a humidified atmosphere, with CO2 and temperature carefully kept at 5% and 37°C, respectively. Dissociations were performed enzymatically using 0.05% trypsin and 2.21 mmol/L EDTA solution (Mediatech, Corning, Corning, NY, USA). For experiments, cells were cultured in their appropriate cell culture media treated with temozolomide (TMZ; Schering-Plough; stock solution 50 mmol/L in DMSO), interleukin-8 (IL-8; Peprotech, Rocky Hill, NJ, USA), or equimolar DMSO vehicle control. For IL-8 neutralizing antibody experiments, cells were cultured as described above in the presence of an IL-8 neutralizing antibody or an IgG control antibody (R&D Systems, Minneapolis, MN, USA).

Animals

Athymic nude mice (nu/nu; Charles River, Skokie, IL, USA) were housed according to all Institutional Animal Care and Use Committee (IACUC) guidelines and in compliance with all applicable federal and state statutes governing the use of animals for biomedical research. Briefly, animals were housed in shoebox cages, with no more than five mice per cage, in a temperature- and humidity-controlled room. Food and water were available ad libitum. A strict 12-h light-dark cycle was maintained. Intracranial implantation of glioblastoma cells was performed as previously published 7. Briefly, animals received a prophylactic intraperitoneal (i.p.) injection of Buprenex and Metacam, followed by an i.p. injection of a ketamine/xylazine anesthesia mixture (Henry Schein, New York, NY, USA). Sedation was confirmed by foot pinch. Artificial tears were applied to each eye, and the scalp was sterilized repeatedly with betadine and ethanol. The scalp was then bisected using a scalpel to expose the skull. A drill was used to make a small burr hole above the right frontal lobe (~1 mm in diameter). Animals were then placed into the stereotactic rig, and a Hamilton syringe loaded with the cells was brought into the burr hole. The needle tip was lowered 3 mm from the dura, and 5 µL of cell mixture was injected over 1 min. The needle was then raised slightly and left undisturbed for 1 min to ensure proper release of the cell mixture. After this, the syringe was carefully removed.
The animal's head position was maintained, and the skin of the scalp was closed with sutures (Ethicon, Cincinnati, OH, USA). Animals were then placed in fresh cages with circulating heat underneath and monitored for recovery. All instruments were sterilized with a bead sterilizer between animals, and all other necessary procedures to maintain a sterile field were performed. Drug treatments were initiated 7 days after intracranial implantation. Animals received i.p. injections of either TMZ (2.5 mg/kg) or equimolar DMSO. Injections were performed daily for 5 consecutive days. After injections, animals were monitored daily by a blinded experimenter for signs of sickness, including reduction in body weight, lowered body temperature, lack of grooming, hunched appearance, and behavioral changes. Animals were euthanized when, in the opinion of the blinded experimenter, they would not survive until the next day. Killing of animals was performed according to Northwestern University guidelines. Briefly, animals were placed into CO2 chambers and the flow of CO2 was initiated; the flow rate did not exceed 2 L CO2/min while the animals were conscious. Whole brains were removed and washed in ice-cold phosphate-buffered saline (PBS; Corning, Corning, NY, USA). For brains used for FACS analysis, please see the Flow cytometry section of the Materials and methods. For brains used for immunohistochemistry analysis, please see the Immunohistochemistry section of the Materials and methods.

RNA isolation and microarray

After treatment, cells were dissociated with trypsin and washed with PBS. RNA extraction was performed using Qiagen's RNeasy kit (Qiagen Inc., Germantown, MD, USA) according to the manufacturer's instructions. RNA concentrations were quantified using a NanoDrop (Thermo Fisher), and cDNA was synthesized according to established protocols with Bio-Rad's iScript kit, using 1000 ng of total RNA per sample (Bio-Rad, Hercules, CA, USA). The following cycles were used in a C1000 Thermal Cycler (Bio-Rad) to synthesize cDNA: 5 min at 25°C, 30 min at 42°C, 5 min at 85°C, and then the temperature was stabilized at 4°C. For gene expression analysis, the Illumina HumanHT-12 v3 BeadChip array (Illumina, San Diego, USA) was used. GBM43 PDX cells were treated with TMZ (50 μM) for 8 days. Cells were harvested and mRNA was isolated as described above. Sample preparation and mRNA array hybridization were performed according to the manufacturer's guidelines. The Illumina BeadArray Reader was used to read the array, and Illumina's GenomeStudio software, as well as the Gene Expression Module, was utilized for data analysis. For quality control, samples with fewer than 6000 significant detection probes (detection p-value < 0.01) were excluded. Normalization was performed for each microarray using the Lumina package, and a log transformation to base two was performed on the normalized data. To determine the expression of various genes of interest, quantitative polymerase chain reaction (qPCR) was performed. Briefly, cDNA was diluted and combined with SYBR Green (Bio-Rad) and the corresponding primers; qPCR was then performed using Bio-Rad's CFX Connect Real-Time machine with the following protocol: an initial activation stage of 10 min at 95°C, followed by 40 cycles of 3 min at 95°C and 30 s at 60°C. After these cycles were completed, the temperature was brought to 65°C for 5 min and then 95°C for 5 min. Primers were obtained from Integrated DNA Technologies (IDT; Coralville, IA, USA).
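The text does not name its qPCR quantification scheme, but the Results figure legends note normalization to GAPDH. Assuming the common 2^-ΔΔCt method, a minimal sketch of relative quantification looks like this; all Ct values below are hypothetical.

```python
# Minimal sketch of relative qPCR quantification via the 2^-ΔΔCt method.
# The ΔΔCt scheme itself is an assumption; the paper only states that
# IL-8 values were normalized to GAPDH.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of target gene vs. control condition, normalized to a
    reference gene (e.g., GAPDH)."""
    delta_ct_treated = ct_target - ct_reference            # normalize treated sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl  # normalize control sample
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: IL-8 and GAPDH in TMZ-treated vs. DMSO control cells.
fold = relative_expression(ct_target=24.1, ct_reference=18.0,
                           ct_target_ctrl=27.3, ct_reference_ctrl=18.1)
print(f"IL-8 fold change vs. control: {fold:.2f}")
```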
Western blot analysis

To analyze protein expression in total extracts, cells were dissociated using trypsin after the appropriate number of days following treatment, washed with PBS, and resuspended in mammalian protein extraction reagent (M-PER; Thermo Fisher) supplemented with protease and phosphatase inhibitor (PPI; Thermo Fisher) and EDTA (Thermo Fisher). Cells were then sonicated in a water bath sonicator for 30 s, followed by a resting phase of 30 s, for a total of five cycles. Lysates were centrifuged at 21,000 × g for 10 min in a temperature-controlled centrifuge held at 4°C. Supernatants were collected and protein concentration was determined by the Pierce BCA assay (Thermo Fisher). In the case of nuclear and cytoplasmic fractionation, a cytoplasmic and nuclear protein extraction kit was used (Pierce; Thermo Fisher). Briefly, cells were dissociated and washed with PBS. Next, cells were pelleted and resuspended in ice-cold cytoplasmic extraction buffer, and extraction was performed according to the manufacturer's instructions. After collecting the cytoplasmic contents, the remaining nuclear pellets were pelleted and resuspended in ice-cold nuclear extraction buffer. Nuclear extraction was completed as instructed. All samples were stored at −80°C when not in use.

Coimmunoprecipitation

For coimmunoprecipitation (Co-IP) experiments, proteins were extracted and quantified as described above. Then 50-100 µg of protein was incubated with primary antibody overnight at 4°C with gentle rocking. The next day, anti-rabbit IgG antibodies conjugated to agarose beads were added to the cell lysates and incubated for at least 1 h at 22°C. Next, the mixture was spun down and washed several times in PBS. Finally, proteins were eluted from the mixture and loaded into gels, as described above.

Flow cytometry analysis

For in vitro experiments, cells were collected at serial time points after the beginning of treatment (days 2, 4, 6, and 8), and fresh surface staining was performed. Next, cells were treated with fixation and permeabilization buffers (eBioscience, San Diego, CA, USA) according to the manufacturer's instructions. For cells that were collected based on surface expression, no fixation or permeabilization was performed, to maintain cell integrity. After fixation, intracellular staining was performed overnight, followed by triplicate washing and the addition of appropriate secondary antibodies. In vivo studies began with the killing of tumor-bearing mice and immediate removal of the whole brain. Brains were washed in ice-cold PBS and then bisected down the longitudinal fissure, and the right brains (tumor-bearing) were passed through a 70 µm strainer. These single-cell suspensions were then incubated in ACK lysis buffer (Lonza, Walkersville, MA, USA) for 5 min at 20-25°C to lyse any blood cells. After washing with PBS, cells were stained as in the in vitro experiments. Human leukocyte antigen (HLA) staining was used to identify human tumor cells. All cells were collected in PBS supplemented with 1% BSA (Fisher Scientific, Fair Lawn, NJ, USA) and sodium azide and kept on ice until read. The following antibodies were used: anti-HLA-PB. For reporter cells, antibody staining was not performed. Rather, cells were dissociated using trypsin and washed with PBS. They were then resuspended in PBS with 1% BSA and sodium azide and kept on ice until analysis.

Enzyme-linked immunosorbent assay

ELISAs were performed to determine the level of IL-8 protein in cell supernatants.
After the noted number of days post-treatment, supernatants were collected and centrifuged at 1200 × g for 5 min to pellet any cellular debris. Supernatants were then collected in fresh tubes. ELISA kits were obtained from eBioscience and used according to the manufacturer's instructions. In summary, supernatants were collected from culture flasks on the appointed day, centrifuged to pellet any floating cells or debris, and placed into clean microcentrifuge tubes. Samples were then added to ELISA plates that had been coated overnight at room temperature with capture antibody diluted in coating buffer, washed five times, and blocked for 1 h. Initial optimization runs with supernatants from non-treated cells showed that GBM43 expressed high levels of IL-8 and that its supernatants needed to be diluted 1:20 to keep the signal within the range of our standard curve; all other cell lines were diluted 1:5. After 2 h of incubation at room temperature, supernatants were removed and the ELISA plate was washed five times. Detection antibody was then added for 1 h, followed by washing and the addition of avidin-horseradish peroxidase solution for 30 min. The plate was then washed seven times to ensure no false-positive signals were generated. Tetramethylbenzidine (TMB) solution was added and incubated for 15 min. Reactions were halted using 1 N hydrochloric acid. Plates were immediately read using a BioTek plate reader (Abs450 nm − Abs570 nm). Standard curves were generated and IL-8 concentrations were determined (a minimal sketch of this interpolation step appears below).

Immunohistochemistry

After whole brains were removed from the animals' skulls, they were washed in ice-cold PBS. Brains were then flash frozen in optimal cutting temperature compound (OCT; Electron Microscopy Sciences, Hatfield, PA, USA). Sections, 8 µm thick, were obtained with a cryostat (Leica Biosystems, Wetzlar, Germany) and kept frozen at −30°C until IHC was begun. Staining for IL-8 was performed as follows: sections were allowed to dry at 22°C for 30 min. Excess OCT compound was then scraped away from the margins, and an Immuno-Pen was used to create a border around each section. After one wash with ice-cold PBS, sections were fixed with 4% paraformaldehyde (Boston BioProducts, Boston, MA, USA) for 10 min. Sections were then washed three times with ice-cold PBS. A solution of 1% BSA and 0.3% Triton X-100 was then placed on top of each section for 1 h to permeabilize and block the section. Primary mouse anti-IL-8 antibodies (R&D) diluted in 1% BSA solution with 0.3% Triton X-100 were then added, and sections were incubated overnight at 4°C. The following day, antibodies were removed and sections were washed three times with ice-cold PBS. An appropriate secondary goat anti-mouse IgG conjugated to FITC was diluted 1:2000 in 1% BSA with Triton X-100 and incubated for 1 h at 22°C. Sections were then washed with ice-cold PBS three times. Fluoro-Gold with DAPI (Thermo Fisher) was gently applied to each section, and coverslips were carefully placed on top. A Leica microscope was utilized for IHC analysis (Leica). For each section, tumors were identified by cell morphology and density. A blinded experimenter analyzed the slides for IL-8 expression and generated each image. ImageJ (National Institutes of Health) was used for final image processing and the generation of images for publication.
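Circling back to the ELISA protocol above, here is a minimal sketch of the standard-curve interpolation step, assuming a linear standard curve for simplicity; real ELISA curves are often fit with a four-parameter logistic model instead, and all values below are hypothetical.

```python
import numpy as np

# Hypothetical IL-8 standards (pg/mL) and their background-corrected
# absorbances (Abs450 - Abs570), plus two unknown sample absorbances.
std_conc = np.array([0, 15.6, 31.25, 62.5, 125, 250, 500, 1000])
std_abs = np.array([0.05, 0.09, 0.14, 0.24, 0.45, 0.86, 1.65, 3.10])
sample_abs = np.array([0.52, 1.10])

# Fit a straight line through the standards (assumption: linear range).
slope, intercept = np.polyfit(std_conc, std_abs, 1)

# Invert the curve to interpolate sample concentrations, then apply the
# dilution factor used for GBM43 supernatants in the text (1:20).
conc = (sample_abs - intercept) / slope
print("IL-8 (pg/mL), corrected for 1:20 dilution:", conc * 20)
```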
Human sample histology

Human primary and matched recurrent GBM tissues were obtained from Northwestern University's Nervous System Tumor Bank. All patients were consented according to Institutional Review Board (IRB) policies prior to the collection of samples. Samples were formalin-fixed and paraffin-embedded (FFPE). Immunohistochemistry of tumor samples was performed on 4-μm-thick sections heated at 60°C for at least 1 h. Staining for IL-8 was carried out manually, and antigen retrieval was performed with a BioCare Medical decloaking chamber using high (LC3) or low pH antigen retrieval buffer from Dako. Primary antibodies were incubated for 1 h at room temperature. The secondary antibody was EnVision-labeled polymer-HRP (horseradish peroxidase) anti-mouse or anti-rabbit, as appropriate. Staining was visualized using 3,3′-diaminobenzidine (DAB) chromogen (Dako, K8000). IL-8 immunohistochemical results on TMAs were semiquantified on a relative scale from 0 to 3, with 0 = negative and 3 = strongest (see Supplementary Fig. 1). Each tumor was represented by three separate cores on three separate blocks.

Bioinformatics analysis

We utilized the publicly available The Cancer Genome Atlas (TCGA) GBM database for all examination of gene expression. We performed the following analyses. Correlation between IL-8 and all other genes was determined by Pearson correlation coefficients. Genes with coefficients > 0.5 or < −0.5 and a false discovery rate (FDR) < 0.05 were considered correlated with IL-8 (a minimal sketch of this screen appears at the end of this subsection). Non-negative matrix factorization (NMF) was then employed to identify clusters among all the genes correlated with IL-8, using the R package "NMF" 8. The Brunet algorithm was used to estimate the factorization. We performed 40 runs for each value of the factorization rank r in the range 2-7 to build a consensus map. The optimal clustering was determined by the observed cophenetic correlation between clusters and validated by silhouette plot and principal component analysis (PCA). The function "aheatmap" was used for plotting the heatmap and clustering, with "euclidean" as the distance measure and "complete" as the clustering method. Differences in IL-8 expression among WHO grades and subtypes were examined using one-way ANOVA, followed by Bonferroni correction for multiple comparisons. GBM patients were stratified into IL-8-upregulated and IL-8-downregulated groups based on IL-8 gene expression, using the quartiles (Q1, Q3) as split points. Survival curves were generated via the Kaplan-Meier method and compared by log-rank test. For clinical factor comparison, we used the TCGA U133a dataset. A Cox proportional hazards model with stepwise variable selection was conducted to examine whether IL-8 could be an independent factor for predicting survival, with major clinical variables adjusted. The C index (95% CI), or C statistic, was provided to assess how well the models fit, and a likelihood ratio test was conducted to compare the multivariable models with and without the targeted variable. For expression localization, we utilized the Ivy Glioblastoma Atlas Project (IVY GAP; Allen Institute for Brain Science, Seattle, WA, USA) and its online platform (glioblastoma.alleninstitute.org). ChIP-Seq reads underwent FastQC quality analysis after sequencing, and no abnormalities were detected. Alignment was performed using Bowtie2, and peak calling was performed using the MACS2 callpeak function with the p-value cutoff set to 0.05. All ChIP-Seq data were visualized using the Integrative Genomics Viewer (IGV).
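As a rough illustration of the correlation screen described above, here is a minimal Python sketch, assuming an expression matrix with genes as columns; SciPy and statsmodels stand in for whatever TCGA tooling the authors actually used, and the matrix is random stand-in data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical expression matrix: 100 patients x 500 genes (TCGA stand-in).
expr = rng.normal(size=(100, 500))
il8 = expr[:, 0]  # pretend column 0 is IL-8

# Pearson correlation of IL-8 against every other gene.
results = [stats.pearsonr(il8, expr[:, j]) for j in range(1, expr.shape[1])]
r_vals = np.array([r for r, p in results])
p_vals = np.array([p for r, p in results])

# Benjamini-Hochberg FDR correction, then the paper's thresholds:
# |r| > 0.5 and FDR < 0.05.
rejected, fdr, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
selected = np.where((np.abs(r_vals) > 0.5) & (fdr < 0.05))[0] + 1
print(f"{selected.size} genes pass |r| > 0.5 and FDR < 0.05")
```

With random data essentially nothing passes, which is the point of pairing a correlation threshold with FDR control: it guards against the many false positives a 12,000-gene screen would otherwise produce.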
Extreme limiting dilution analysis and neurosphere assays

PDX cells fresh from the flanks of nu/nu mice were washed with PBS and plated in serial dilutions of 200, 150, 100, 50, 25, 12, 6, and 3 cells per well, 12 wells per dilution, in neurobasal media (Gibco cat. no. 21103049, Thermo Fisher) supplemented with the following growth factors: B27 (without vitamin A, Invitrogen), basic fibroblast growth factor (10 ng/mL; Invitrogen), epidermal growth factor (10 ng/mL; Invitrogen), and N2 (Invitrogen), treated with either PBS or IL-8. A blinded experimenter examined each well after 7 and 14 days and counted the number of formed neurospheres with a diameter greater than 20 cells. These counts were analyzed using the Walter and Eliza Hall Institute of Medical Research online ELDA platform (http://bioinf.wehi.edu.au/software/elda/). This platform allows the determination of stem cell frequency, as well as quantification of differences between the IL-8-treated and non-treated samples (a minimal sketch of the underlying single-hit model appears below).

Generation of shRNA constructs

To knock down the expression of IL-8, short-hairpin RNA (shRNA) constructs were obtained commercially (Genecopoeia, Rockville, MD, USA). All shRNA constructs expressed IL-8 shRNA under the control of a CMV promoter and included a GFP construct for simple identification of successfully transfected cells. Plasmids for these constructs were packaged into lentiviral vectors using X293 cells. Briefly, plasmids and all necessary transfection reagents were added to low-passage X293 cells growing as adherent cultures in DMEM fortified with 10% FBS and 1% P/S antibiotic mixture. After 3 days, supernatants were collected and ultracentrifugation was performed. Viral titer was determined by sequential dosing of the collected virus in X293 cells, followed by analysis of GFP expression. After determination of viral titer, human glioma and PDX cells were transduced with 25 IU of the lentiviral vector. This transduction was performed in suspension for 30 min at 22°C with gentle agitation every 5 min. For in vitro experiments in human glioma cells, cells were propagated and GFP+ populations were purified using FACS sorting. IL-8 knockdown was confirmed via ELISA and/or FACS analysis.

Cell cycle analysis

These analyses were completed using the propidium iodide/RNase staining buffer (BD Pharmingen, cat. no. 550825) according to the manufacturer's guidelines. Briefly, after the desired number of days of treatment with either DMSO or TMZ, cells were dissociated from the plate and washed with PBS. They were then treated with a 70% ethanol solution as a fixative, followed by permeabilization. Cells were then treated with RNase and proteases to ensure maximum DNA staining. After incubation with these reagents, cells were washed thoroughly with PBS. Cells were then stained with propidium iodide for 30 min at 4°C. After washing, cells were analyzed by flow cytometry. The gating strategy was as follows: SSC-A and FSC-A were plotted and cellular debris was excluded. Then, SSC-W and FSC-W were plotted to identify only single cells. Then, SSC-A and PI staining were plotted. Unstained controls were used to establish background signal. Finally, PI staining was plotted as a histogram. DNA content was assayed, and the progression of the cell cycle was determined based on the histogram plot. Any aberrations caused by the introduction of shRNA constructs were noted, and those constructs were excluded.
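For intuition about what the ELDA platform computes, here is a minimal sketch of the single-hit Poisson model underlying limiting-dilution analysis. The well counts are hypothetical, and the ELDA software itself uses a more careful generalized linear model fit with confidence intervals rather than this simple least-squares estimate.

```python
import numpy as np

# Hypothetical limiting-dilution data: cells plated per well and the number
# of negative (sphere-free) wells out of 12 at each dose.
doses = np.array([200, 150, 100, 50, 25, 12, 6, 3])
negative = np.array([1, 2, 4, 7, 9, 11, 11, 12])
wells = 12

# Single-hit Poisson model: P(no sphere) = exp(-f * dose), so
# ln(fraction negative) = -f * dose. Estimate f by least squares
# through the origin (ELDA proper uses a complementary log-log GLM).
frac_neg = negative / wells
mask = frac_neg > 0  # log undefined if a dose had no negative wells
f = -np.sum(doses[mask] * np.log(frac_neg[mask])) / np.sum(doses[mask] ** 2)
print(f"Estimated stem-cell frequency: 1 in {1 / f:.0f} cells")
```

Comparing the estimated frequency f between IL-8-treated and untreated plates is what yields the fold changes in GIC frequency reported in the Results.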
Statistical analysis

All statistical analyses were performed using GraphPad Prism Software v4.0 (GraphPad Software, San Diego, CA, USA). Where applicable, one-way ANOVA, unpaired t test, and log-rank test were applied. Survival distributions were estimated with the Kaplan-Meier method. A p-value < 0.05 was considered statistically significant.

Results

Therapeutic stress increases IL-8 expression in vitro and in vivo

To investigate whether temozolomide (TMZ) chemotherapy promotes the adoption of a GIC state via cellular plasticity, gene set enrichment analysis (GSEA) using the Affymetrix platform was performed. Data from PDX GBM43 cells 4 and 8 days post-treatment with either vehicle control (DMSO) or physiological doses of TMZ (50 μM, see Supplementary Fig. 1) 9-12 revealed a significant (FDR q = 0.08, FWER p-value = 0.046) enrichment of a network of genes responsible for supporting the GIC phenotype (Fig. 1a) 13. Interestingly, gene expression analysis revealed that interleukin-8 (IL-8) is significantly upregulated post-TMZ therapy (Supplementary Table 1). To investigate epigenetic plasticity during TMZ therapy, we performed genome-wide ChIP-Seq analysis of TMZ-treated PDX GBM43 cells for histone 3 lysine 27 (H3K27) acetylation (ac), a marker of open chromatin, and H3K27 trimethylation (me3), a marker of closed chromatin. TMZ significantly augments H3K27ac levels, but not H3K27me3 levels, at an IL8 enhancer locus identified in astrocytes (Fig. 1b; chromosome 4: 74783222-74783418, fold enrichment 3.02 compared with input, p-value < 0.0001, FDR = 0.004). Therefore, TMZ may promote a GIC state by altering the epigenomic landscape of GBM. Quantitative polymerase chain reaction (qRT-PCR) confirmed that TMZ treatment increased IL-8 mRNA levels in a time-dependent manner (Fig. 1c) (***p < 0.0001). To validate this effect in vivo, immunofluorescence analysis was performed on a previously established orthotopic recurrent GBM model, which confirmed that recurrent tumors had increased IL-8 expression (Fig. 1e). Considering the ability of therapy to induce both IL-8 expression and a GIC state, we examined whether alterations in cell state can influence such expression without therapy. PDX lines were cultured in GIC maintenance media (neurobasal media supplemented with appropriate growth factors) or differentiation-condition media (1% FBS), with or without TMZ (50 μM). Even without any chemotherapy, culturing GBM PDX lines in the GIC maintenance media significantly elevated the expression of IL-8, as measured by enzyme-linked immunosorbent assay (ELISA, Fig. 1c) (****p < 0.0001). Proneural subtype GBM43 and classical subtype GBM6 expressed about 20- and 200-fold higher IL-8, respectively, in the GIC maintenance media. TMZ exposure induced IL-8 expression in both culture conditions (****p < 0.0001). Moreover, this induction was specific to TMZ, as another anti-glioma alkylating agent, BCNU, failed to promote IL-8 expression in any GBM line tested (***p < 0.001 and ****p < 0.0001, Fig. 1g). Based on these observations, we investigated the role of IL-8 in promoting therapy-induced cellular plasticity and disease recurrence. Critically, we utilized differentiation-condition media (1% FBS or neurobasal media with BMP2), as these conditions initiate differentiation of GBM cells and enable us to observe how stimuli induce dedifferentiation to the GIC state during therapy 5,7. Culturing cells in a GIC-promoting media (neurobasal supplemented with EGF and FGF) would be inappropriate for this study, as it would force the cells to a GIC state and mask GIC-inducing effects of our experimental manipulations.

Fig. 1 legend (panels c-g): c Expression of IL-8 mRNA after exposure to 50 µM TMZ was determined by quantitative real-time polymerase chain reaction (qPCR) across 8 days of treatment. All IL-8 values were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Bars represent means from three independent experiments and error bars represent the standard deviation. Multiple Student's t tests were performed. **p < 0.01, ***p < 0.001. d Immunohistochemistry was performed on mouse brains with intracranial xenografts of GBM43. 1.5 × 10⁵ GBM43 PDX cells were implanted to establish orthotopic xenograft tumors. Animals received 2.5 mg/kg of either DMSO or TMZ for five consecutive days, beginning 7 days after tumor implantation. Animals were killed 5 days after the cessation of treatment, and whole brains were extracted, flash frozen, then sectioned (8 µm) and analyzed by immunofluorescence. 4′,6-diamidino-2-phenylindole (DAPI) stained DNA (blue) in the nuclei, and an allophycocyanin-conjugated secondary antibody (orange) was used against the primary antibody for IL-8. Dotted lines represent the edge of the tumor based on cell density and morphology. e-f PDX GBM xenografts were harvested and immediately plated in either mild differentiation media (DMEM containing 1% FBS) or GIC maintenance media (neurobasal supplemented with FGF and EGF). Cells were then treated with either DMSO or TMZ (50 μM). After 4 days, IL-8 levels were determined by ELISA. g GBM cells were treated with a physiologically relevant dose of TMZ (50 μM), carmustine (BCNU, 100 μM), or equimolar DMSO. After 24 h, conditioned media was collected and IL-8 levels were quantified by ELISA. Bars represent means from three independent experiments and error bars represent the standard deviation. Multiple Student's t tests were performed. **p < 0.01, ***p < 0.001.

In silico analysis establishes IL-8 importance in GBM progression and patient outcomes

In order to examine the contribution of IL-8 to GBM clinical progression, we employed the Cancer Genome Atlas (TCGA) patient gene expression dataset, including wild-type and IDH mutation tumors. Analysis showed that IL-8 transcript expression is elevated in World Health Organization (WHO) Grade IV glioma (GBM) (Fig. 2a). To investigate IL-8 expression patterns across different tumor compartments, we utilized the Ivy Glioblastoma Atlas Project (IVY GAP) 14, which demonstrated that IL-8 mRNA is elevated in pseudopalisading cells and the perinecrotic zone, two areas linked to the GIC subpopulation (Fig. 2f). All of these data further justify our interest in IL-8 as a critical participant in GBM progression and therapy-induced plasticity (the stratified survival analyses are described in the Fig. 2 legend below, followed by a small illustrative sketch).

IL-8 receptor CXCR2+ GBM cells acquire CD133 expression during anti-glioma chemotherapy

Next, we set out to investigate how IL-8 signaling influences GBM proliferation and cellular signaling. CXC motif chemokine receptors 1 and 2 (CXCR1 and CXCR2) are the major receptors for IL-8 15. To investigate their role in IL-8-mediated signaling in GBM, we interrogated TCGA data. We observed that expression of both these receptors was significantly elevated in GBM tumors compared with low-grade gliomas (Figure S2-A). Analysis of our ChIP-seq data showed post-therapy accumulation of an open chromatin mark at a known enhancer site for CXCR2 16 (Fig. 4a, p-value < 0.0001), as well as significant decreases in H3K27me3 levels in the gene body (Figure S2 C-H, p-value < 0.001). In contrast, gene body H3K27me3 levels were significantly increased at the CXCR1 gene locus (Figure S2-B). FACS analysis for CXCR1/2 demonstrated that all cell lines increased expression of both receptors post-TMZ treatment (Fig. 4b, p = 0.00015). CXCR2 expression was also elevated in the CD133+ GIC population (Fig. 4c). Time-course FACS following TMZ treatment revealed that a CXCR2+ cell population exists prior to TMZ treatment; this population rapidly gains CD133 expression during treatment (Fig. 4d). CXCR1 expression was not altered during therapy (Fig. 4b and Figure S2).

Fig. 2 legend (panels a-g): a GBM patients had higher IL-8 expression levels than low-grade (Grade II, Grade III) glioma patients. b All GBM patients were stratified into IL-8-upregulated and IL-8-downregulated groups based on IL-8 gene expression, using the quartiles (Q1, Q3) as split points. High expression of IL-8 correlated with reduced median survival. Survival curves were generated via the Kaplan-Meier method and compared by log-rank test. **p < 0.01. A multivariate Cox proportional hazards model with stepwise variable selection was conducted to examine whether IL-8 could be an independent factor for predicting survival, with major clinical variables adjusted. This analysis confirmed that IL-8 was an independent prognostic factor for survival in all GBM patients (HR [95% CI]: all GBM 1.07 [1.01, 1.14], p = 0.0467). The C index (95% CI), or C statistic, is provided. c Within grade IV glioma subtypes, proneural GBM patients had the lowest level of IL-8 expression. Boxplots represent means and interquartile range. One-way ANOVAs with Bonferroni correction for multiple comparisons were performed. *p < 0.05, ***p < 0.001. d Patients with proneural GBM were stratified into IL-8-upregulated and IL-8-downregulated groups based on IL-8 gene expression, using the quartiles (Q1, Q3) as split points. Kaplan-Meier survival curves and multivariate stepwise Cox proportional hazards models were generated as in b. e In the TCGA (U133a) database, IL-8-downregulated GBM patients also have a longer time to recurrence compared with IL-8-upregulated patients. f The Ivy Glioblastoma Atlas Project (IVY GAP) was employed to determine the location of IL-8 in glioblastoma samples. Each column represents the data for one biopsy from a tumor. Microdissection of the noted anatomical portions of the tumor and subsequent mRNA extraction and expression analysis demonstrated that IL-8 is upregulated in the perinecrotic zone and pseudopalisading cells. The heatmap illustrates the most significantly differentially expressed genes, with a false discovery rate < 0.01. mRNA expression in each anatomical compartment was compared; IL-8 was significantly upregulated in the perinecrotic zone. Bars represent means from three independent experiments and error bars represent the standard deviation. Multiple Student's t tests were performed. ***p < 0.001. g Brain tumor samples from primary biopsies or surgical resections were stained for IL-8 at the Northwestern Brain Tumor Tissue Bank. Histological and morphological analysis confirmed that IL-8 is present in the perinecrotic zone and pseudopalisading cells. Scale bar, 50 microns.
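As a rough illustration of the quartile-stratified survival comparison described above and in the Fig. 2 legend, here is a minimal sketch using the Python lifelines package as a stand-in for the authors' actual tooling; all survival times are hypothetical.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Hypothetical survival times (months) and event flags (1 = death observed)
# for IL-8-upregulated (> Q3) and IL-8-downregulated (< Q1) patients.
t_high = rng.exponential(10, size=40)
t_low = rng.exponential(16, size=40)
e_high = np.ones(40)
e_low = np.ones(40)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="IL-8 high")
print("Median survival, IL-8 high:", kmf.median_survival_time_)
kmf.fit(t_low, event_observed=e_low, label="IL-8 low")
print("Median survival, IL-8 low:", kmf.median_survival_time_)

# Log-rank test between the two strata, as in the paper's methods.
res = logrank_test(t_high, t_low,
                   event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.4f}")
```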
IL-8 increases the self-renewing capacity of GBM cells and the expression of GIC markers

To determine whether the IL-8-CXCR signaling axis promotes induction of the GIC state in GBM, we performed extreme limiting dilution assays (ELDA) on GBM6 and GBM43 cells in neurosphere media containing IL-8 (50 ng/ml). IL-8 increased GIC frequency about 3.3-fold for GBM43 and 2.3-fold for GBM6 (Fig. 5a, p = 0.001). We next examined how IL-8 alters the expression of known GIC-promoting genes, using our proprietary GIC-specific reporter cell line 7. IL-8 treatment significantly increased reporter activity (Fig. 5b and Figure S3-A for the SOX2-RFP and Nanog-RFP reporters, p = 0.0005). To investigate how the IL-8-CXCR signaling cascade interacts with GIC-promoting genes in patient tumors, we selected the top GIC-associated genes activated during TMZ therapy (Fig. 1a and Supplementary Table 2) and correlated their levels with IL-8 mRNA using the GlioVis data portal 16-18. IL-8 expression was significantly correlated with critical GIC-associated genes, including KLF4, CD44, HIF1A, HIF2A, Myc, and Twist (Fig. 5c). IVY GAP was used to explore colocalization of GIC-specific genes with IL-8 expression within tumor compartments. We observed that GIC-specific genes were correlated with IL-8 expression in both the perinecrotic zone and pseudopalisading cells (Fig. 5c, heatmap; Ivy Glioblastoma Atlas Project). Immunoblot analysis of PDX lines exposed to IL-8 shows time-dependent induction of these genes (Fig. 5d), as well as of various critical GIC-associated transcription factors, such as c-Myc, Nanog, Sox2, and OCT4 (Figure S3-D). Finally, to examine the role of post-therapy IL-8 in inducing GIC-specific gene expression, we combined DMSO or 50 µM TMZ with either control IgG or an anti-IL-8 neutralizing antibody. Blocking of IL-8 both reduced basal expression of GIC markers and prevented TMZ-induced increases in SOX2 and c-Myc (Fig. 5e).

IL-8 enhances GBM growth and therapy resistance in vivo

To elucidate IL-8's role during in vivo GBM growth, U251 cells with stable IL-8 knockdown were established using shRNA technology. IL-8 secretion was effectively knocked down by two shRNA constructs (Figure S4-A and B, p < 0.0005). Proliferation capacity was altered significantly in cells with the highest IL-8 knockdown compared with the control population (Figure S4-C, p = 0.018), but cell cycle profiles remained stable (Figure S4-D and E). Athymic, immunodeficient mice were then implanted in the right cerebral hemisphere with GBM cells expressing either sh-Control or anti-IL-8 shRNA#1. Each group was then divided into two groups that received either DMSO or TMZ (2.5 mg/kg, i.p.) (n = 7/group). IL-8 knockdown significantly increased the median survival of animals with orthotopic GBM regardless of chemotherapy exposure (Fig. 5f, first graph; median survival, sh-control 38 days vs. sh#1 IL-8 140 days; hazard ratio of survival = 4.737, 95% CI = 4.190 to 101.3, p = 0.0021). In a clinically relevant model, PDX GBM43 cells (which express high basal levels of IL-8; Fig. 1c and e) were infected with a lentivirus carrying shRNA against IL-8, reducing IL-8 expression by 50% after transient transfection (Figure S6-A, p < 0.0005). Implantation of these IL-8 knockdown GBM43 cells prolonged median survival by about 38% compared with control shRNA (Fig. 5f, second graph; median survival, sh-control 29 days vs. sh#2 IL-8 40 days, p = 0.0003). Moreover, IL-8 knockdown significantly enhanced the therapeutic efficacy of TMZ, improving survival by about 51 days (Fig. 5f).

IL-8 signaling promotes epigenetic alterations in GBM

To identify potential mechanisms by which IL-8 influences growth and promotes therapeutic resistance, we analyzed correlations between IL-8 and 12,042 other genes in TCGA GBM patient data via Pearson correlation coefficients.
Our analysis returned 68 genes with coefficients > 0.5 or < −0.5 and FDR < 0.05 that correlate with IL-8; we then conducted unsupervised hierarchical clustering. The observed cophenetic correlation determined the optimal clustering, which we validated by silhouette plots and principal component analysis (PCA). Two clusters, with the highest cophenetic coefficient at 0.95 and an average silhouette width of 0.46 (Figure S5 and Fig. 6a), separated well upon PCA visualization (a toy sketch of this clustering step follows below). Using the Enrichr platform, we determined that the first of these clusters (Group A) is involved in regulating cell chemotaxis (GO: 0060326, adjusted p-value 1.058e-11) and cytokine activity (GO: 0005125, adjusted p-value 2.495e-9), well-established canonical roles for IL-8 17,18. Cluster B includes genes enriched for wounding (GO: 0009611, adjusted p-value 0.002) and hypoxia (GO: 0001666, adjusted p-value 0.004), also known IL-8 connections 19,20. Interestingly, IL-8 signaling also positively correlated with genes known to regulate epigenetic processes, specifically histone 3 lysine 27 trimethylation (H3K27me3) (GO: 0001666, adjusted p-value 0.03). Trimethylation of H3K27 suppresses gene expression via recruitment of the polycomb repressor complex (PRC), predominately regulated by two methyltransferases, EZH2 and G9a 21,22.
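As promised above, here is a toy sketch of the NMF clustering step, using scikit-learn in place of the R "NMF" package named in the Methods; the input matrix is random stand-in data, and a real analysis would repeat the factorization (40 runs per rank, as in the paper) to build a consensus map and compute cophenetic correlations.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Toy non-negative matrix: 68 IL-8-correlated genes x 100 patients
# (stand-in for the TCGA expression submatrix).
X = rng.random((68, 100))

# Rank-2 factorization X ~ W @ H; W gives per-gene loadings on the
# two latent components.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)

# Assign each gene to the component with the larger loading -- a crude
# single-run analog of the consensus clusters (Groups A and B).
labels = W.argmax(axis=1)
print("Cluster sizes:", np.bincount(labels))
```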
To determine the ability of IL-8 to influence cellular plasticity, we employed a reporter cell line in which RFP expression is controlled by the OCT4 promoter. Cells were treated with 50 ng/ml of IL-8, and RFP expression was monitored by FACS over 6 days. Treatment increased both Oct4 and Sox2. Bars represent means from three independent experiments and error bars represent the standard deviation. Multiple Student's t tests were performed. **p < 0.01, ***p < 0.001. c To examine the network of GIC-promoting genes in patient tumors, we selected the top GIC-associated genes activated during TMZ therapy (Fig. 1a and Supplementary Table 2) and correlated their levels with IL-8 mRNA using the GlioVis data portal for visualization and analysis of brain tumor expression datasets (gliovis.bioinfo.cnio.es; dataset LeeY) 16-18 . IL-8 expression was significantly correlated with critical GIC-associated genes including KLF4, CD44, HIF1A, HIF2A, Myc, and Twist (Fig. 5c). IL-8 expression in different anatomical locations and the potential colocalization of these GIC-specific genes with areas of high IL-8 transcript levels are shown (Fig. 5c, heatmap). d Immunoblot analysis of endogenous glioma-initiating cell-associated transcription factor expression upon stimulation with escalating doses of IL-8 (0-100 ng/ml) for 24 h. Protein extracts of IL-8-treated PDX lines GBM43 and GBM6 were immunoblotted with antibodies against several GIC markers, including c-myc, Sox2, Nanog, KLF4, and OCT4, or an antibody against β-actin as a control for equal loading. e GBM43 PDX cells were treated with a neutralizing antibody against IL-8 or a control IgG antibody (100 ng/ml) prior to treatment with DMSO or 50 µM TMZ. Neutralizing antibody was added every day for 8 days, and protein extracts from this experiment were immunoblotted with antibodies against c-myc, Sox2, and OCT4, or an antibody against β-actin as a control for equal loading. f Schematic diagram of the experimental design for in vivo testing. Top graph: U251 cells were infected with lentivirus (Sigma Mission shRNA) carrying shRNA against IL-8 or scrambled shRNA (control) at 10 infectious units/cell. In total, 2 × 10^5 transduced cells were stereotactically injected into the right hemisphere of the brain of athymic nude mice (n = 8 per group, four males and four females). Two weeks after implantation, the two groups of mice, control and knockdown, were treated with vehicle (DMSO, top curve) or TMZ (2.5 mg/kg) intraperitoneally. Survival curves were obtained by the Kaplan-Meier method, and overall survival time was compared between groups using the log-rank test. All statistical tests were two-sided. Bottom graph: to examine the role of IL-8 in GBM progression in a more clinically relevant manner, the same method was used to knock down IL-8 expression in the GBM43 PDX line. In total, 1.5 × 10^5 cells were injected stereotactically into the right hemisphere of the brain of athymic nude mice (n = 8 per group, four males and four females). Survival curves were obtained by the Kaplan-Meier method, and overall survival time was compared between groups using the log-rank test. Fig. 6 IL-8 signaling alters histone marks, promoting post-therapy epigenetic plasticity. a Correlation between IL-8 and 12042 genes from TCGA was determined by Pearson correlation coefficients. Sixty-eight genes with coefficients > 0.5 or < −0.5 and false discovery rate (FDR) < 0.05 were selected. Unsupervised hierarchical clustering of those genes found two clusters with the highest cophenetic coefficient at 0.95 and an average silhouette width at 0.46. Principal component analysis was used to validate these clusters.
Enrichment analysis found one cluster of genes enriched for cell chemotaxis (GO: 0060326, adjusted p-value 1.058e-11) and cytokine activity (GO: 0005125, adjusted p-value 2.495e-9), while another cluster of genes was enriched for wounding (GO: 009611, adjusted p-value 0.002) and hypoxia (GO: 0001666, adjusted p-value 0.004). b Representative immunoblot of different histone marks. A panel of PDX lines from different GBM subtypes was exposed to IL-8 (50 ng/ml) for 24 h. Nuclei were extracted from the harvested cells and subjected to immunoblot analysis for the suppressive histone marks H3K27 and H3K9 trimethylation (me3) and the activating mark H3K27 acetylation (ac). Immunoblotting for total histone H3 was performed to confirm equal loading. c The extracted nuclei from the U251 IL-8 knockdown cells described in Fig. 5f were subjected to immunoblot analysis for various histone marks as described above. d GBM43 cells were treated with IL-8 (50 ng/ml) for 2 and 24 h; cells were harvested, cytoplasmic and nuclear extracts were prepared, and immunoprecipitation assays were performed with an anti-EZH2 antibody. Immunoprecipitated protein was subjected to immunoblot analysis with antibodies against phospho-EZH2 (Ser21 and Thr345) and SUZ12. β-Actin and histone H3 were used as loading controls. e IL-8 (50 ng/ml)-treated GBM43 PDX lines were harvested at 6, 12, and 24 h post IL-8 exposure. mRNA was extracted and subjected to reverse-transcription polymerase chain reaction (RT-PCR) analysis of OLIG2, SERPINB2, IL6R, and BMP2K transcripts. Bars represent means from two experiments in triplicate and error bars represent the standard deviation. Multiple Student's t tests were performed. **p < 0.01, ****p < 0.0001. f A panel of GBM PDX lines was treated with TMZ (50 µM) for 48 h. Nuclei were extracted from the harvested cells and subjected to immunoblot analysis for the suppressive histone marks H3K27me3 and H3K9me2 and the activating marks H3K27ac and H3K4me3. Immunoblotting for total histone H3 was performed to confirm equal loading. g The U251 IL-8 control and knockdown cells described in Fig. 5a were treated with TMZ (50 µM) for 4 days. Nuclei were extracted from the harvested cells and subjected to immunoblot analysis for the suppressive histone marks H3K27me3 and H3K9me2 and the activating mark H3K27ac. Left, representative densitometry analysis expressed as percent of control shRNA (shCtrl). h The GBM43 PDX line was treated with TMZ (50 µM) in the presence of 3-deazaneplanocin A (DZNep, EZH2-I, 5 µmol/L), a histone methyltransferase EZH2 inhibitor, for 8 days. Cells were harvested and the GIC population was analyzed by FACS analysis of the CD133+ and CD15+ cells. Bars represent means from two experiments in triplicate and error bars represent the standard deviation. Multiple Student's t tests were performed. ***p < 0.001

Confirming our in silico result, treatment with IL-8 significantly increased trimethylation of H3K27, as well as another PRC complex target, H3K9 23 , in three PDX lines (Fig. 6b). Moreover, reduction of IL-8 expression via shRNA abolished methylation of the H3K27 and H3K9 residues (Fig. 6c), with dose-dependent effects (Supplementary Figure S6-B). To further elucidate the connection between IL-8 and PRC, we examined the status of PRC members following IL-8 exposure. Phosphorylation of EZH2 in response to various extracellular stimuli remodels the epigenomic landscape, allowing cellular adaptation 24 .
Specifically, phosphorylation at Thr345 enhances recognition of target genes, leading to recruitment of PRC2 and suppression of transcription via H3K27me3 25 . By contrast, extracellular AKT signaling suppresses the methyltransferase activity of EZH2 by phosphorylating Ser21 26 . Our previous results illustrated that the IL-8-CXCR interaction can activate various downstream signaling cascades, including PI3K-AKT (Figure S6-C and D) 27 . We therefore examined the alteration of the phosphorylation status of EZH2 by IL-8 via immunoprecipitation (IP) in the nuclear and cytoplasmic fractions (Fig. 6d). IL-8 stimulation enhanced phosphorylation of EZH2 at both Ser21 and Thr345 exclusively in the cytoplasmic fraction. However, within 2 h of IL-8 stimulation, Ser21-phosphorylated (inhibited) EZH2 levels were decreased in the nucleus, while Thr345-phosphorylated (activated) EZH2 accumulated. Binding of EZH2 to SUZ12, an essential PRC protein, increased only in the cytoplasmic compartment after IL-8 exposure, while the nuclear accumulation of the EZH2-SUZ12 complex gradually decreased. We conclude that within 2 h of IL-8 exposure PRC2 activity may increase, but that by 24 h nuclear accumulation of the PRC2 complex is reduced by heightened phosphorylation of EZH2 at Ser21. To determine the functional effect of IL-8 on EZH2 activity, we analyzed genes positively regulated by EZH2 (OLIG2, SERPINB2) and negatively regulated genes (IL6R, BMP2K) via qRT-PCR (Fig. 6e) 28 . Remarkably, IL-8 exposure reversed the expression of all four EZH2 target genes, indicating that IL-8-induced EZH2 modifications do alter gene transcription. Next, we expanded our investigation to alterations induced by TMZ therapeutic stress in histone marks that are targets of EZH2/PRC2. Treatment with TMZ induced subtype- and time-dependent global changes in the epigenetic markers H3K27me3 and H3K9me3 and the open-chromatin marker H3K27 acetylation (ac) (Fig. 6f). Additionally, the PRC2 target H3K9me3 was upregulated in GBM6. H3K27 trimethylation and acetylation were upregulated within 48 h post-TMZ exposure and stayed elevated. H3K4 trimethylation also increased within 96 h, indicating acquisition of bivalency 29 . Given that TMZ induces IL-8 signaling and EZH2-dependent changes in histone status, we next examined the relationship between TMZ, IL-8, and EZH2 target histones. We treated IL-8-knockdown cells with TMZ for 96 h and analyzed histone status. Reduced IL-8 levels abolished methylation of the EZH2 targets H3K27 and H3K9; the decrease in H3K27ac, however, was minimal. Consequently, we suspect that chemotherapy-induced IL-8 signaling participates in EZH2-dependent epigenetic modifications during therapeutic stress. Finally, to investigate the role of EZH2/PRC2 complex activity in promoting therapy-induced cellular plasticity, the GBM43 PDX line was treated with TMZ in the presence of 3-deazaneplanocin A (DZNep, EZH2-I), a histone methyltransferase EZH2 inhibitor. DZNep abolished the induction of a GIC population after therapy, as measured by FACS analysis of the CD133 + and CD15 + populations (Fig. 6h, p < 0.0005).

Discussion
The ability of GBM cells to adapt to current therapies and generate treatment-resistant recurrences represents a critical challenge facing brain tumor researchers and clinicians. Here, we provide data illustrating new mechanisms that may underlie this powerful ability to react to and overcome standard-of-care therapies.
This study highlights the IL-8/CXCR2 signaling pathway as a critical player in this process and a potential target for blocking GBM cellular plasticity during therapy. Specifically, our data show that: (1) therapeutic stress alters the epigenetic regulation of IL-8, leading to increased expression and secretion of IL-8; (2) bioinformatics analysis and IHC analysis of matched primary and recurrent patient tissues suggest that IL-8 significantly influences patient progression and time to recurrence; (3) therapeutic stress-induced IL-8 alters the phenotype of GBM cells, shifting them to a more GIC-like state; (4) IL-8 supports GBM aggression and resistance to chemotherapy in vivo; and (5) IL-8 signaling may influence the acquisition of the GIC state via modulation of the histone-modifying PRC2 complex. Induction of cellular plasticity is a well-established player in the formation of GBM recurrence. Indeed, we and several other groups have demonstrated how standard therapies can initiate this process and enrich GBM tumors with GIC cells 4,5,7,30 . However, targetable players in this process have yet to be identified. Here, we provide evidence that IL-8 represents one such target. Our results show that IL-8 is sufficient to induce the GIC state on its own and that it is both sufficient and necessary for the adoption of the GIC state during therapeutic stress. These results highlight IL-8 signaling as a potent regulator of GBM phenotype. Not only was IL-8 able to induce the expression of CD133 and CD15, two GIC phenotypic markers, it also increased the expression of many transcription factors known to promote the GIC phenotype, including HIF, c-myc, Sox-2, and CD44. In light of the ongoing debate regarding the precise gene expression profile of GICs, this robust induction, combined with our matched patient data, provides strong evidence that IL-8 is capable of activating cell reprogramming toward a more GIC-like state during chemotherapy. Another key aspect of this study concerns the tumor microenvironment and how therapy can influence its composition. We show that GBM cells manipulate their microenvironment following treatment with chemotherapy. This further corroborates a growing body of evidence that tumor cells cultivate a pro-growth microenvironment. An interesting facet of our data is that culturing GBM cells in pro-GIC conditions alone led to an increase in IL-8 levels, suggesting that tumor cells may utilize positive-feedback loops to respond rapidly to their surroundings, a potential new therapeutic target. One open question from this study concerns potential non-tumor sources of IL-8 in the tumor microenvironment. While our results illustrate a robust role for GBM autocrine IL-8, it remains an open question how IL-8 from surrounding non-tumor cells might participate in the recurrence process. Cytokines from the stroma and infiltrating immune cells have been identified as regulators of tumor behavior in other cancer types, including breast cancer 31,32 . Previous work has shown that IL-8 signaling in the perivascular niche is critical to GIC behavior and phenotype 33 . In fact, our IHC analysis shows that infiltrating macrophages express IL-8 in the tumor microenvironment. Moreover, astrocytes and brain endothelial cells are both known to release cytokines, especially in response to damage [34][35][36] . Indeed, one of the top hits from our bioinformatics analysis of pathways correlated with IL-8 expression in patient samples was wound healing.
These data and the potential involvement of non-tumor cells provide further evidence for the theory that cancer represents an "un-healing" wound caused by inappropriate activation of damage-response and inflammatory pathways. New research will shed light on this hypothesis. Given that GBM is highly sensitive to changes in the microenvironment 2,37 and to factors secreted by nearby neurons 38 , it is highly possible that non-tumor IL-8 participates in the processes described here. Another open question concerns the specific mechanisms activating the production of IL-8 during therapy. We have previously reported that therapeutic stress activates hypoxia-inducible factor (HIF) signaling in GBM and promotes cellular plasticity. Evidence exists that HIF signaling can promote the synthesis and secretion of cytokines, including IL-8 [39][40][41] . The fact that IL-8 is strongly correlated with HIF levels in patient samples supports the idea that HIF and IL-8 may also be linked in GBM's response to therapeutic stress. Another potential mechanism driving IL-8 induction is IKK-regulated transcription, a key process shown to regulate responses to chemotherapy in other cells 42 . Further research will illuminate the driving force of chemotherapy-activated IL-8 induction and secretion. Our data indicate for the first time that IL-8 is capable of causing alterations in the epigenetic status of key gene-regulatory histone marks, such as H3K27. While evidence increasingly shows the importance of epigenetic regulation in GBM growth and therapy resistance, the mechanisms activating these processes remain incompletely understood. IL-8 appears to act as one tumor-derived trigger for activating epigenetic responses to tumor therapy via modulation of the canonical PRC2 complex. In sum, our data show that IL-8 is a key microenvironmental factor involved in promoting cellular plasticity in GBM. Through analysis of murine models and patient data, we illustrate the high degree of influence IL-8 holds over tumor progression. Furthermore, our work connects IL-8 signaling to the increasingly important area of epigenetic regulation of gene expression in tumor growth. These results highlight IL-8/CXCR signaling as a key target for GBM drug development, especially in combination with standard-of-care therapies.
2019-03-30T13:37:55.744Z
2018-11-09T00:00:00.000
{ "year": 2019, "sha1": "34789613747ee01cf4351b809d16e12269539ee7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41419-019-1387-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "34789613747ee01cf4351b809d16e12269539ee7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
238025748
pes2o/s2orc
v3-fos-license
Socialization of educational services online marketing in the context of personality harmonization The harmonious development of personality in modern conditions is largely associated with the socialization of educational services based on Internet marketing technologies. This article is devoted to the search for effective methods of promoting educational services on the Internet in order to attract the target audience and increase the effectiveness of marketing activities. The activities of the subjects of the education market are not only commercial in nature but also have a high social purpose - the preservation and development of the intellectual potential of the nation. All these factors require a high level of corporate social responsibility from educational institutions. The article presents an algorithm for developing an Internet marketing program for promoting the educational services of a training center, the distinguishing feature of which is the socialization of processes focused on the accessibility of target markets and the provision of maximum information on obtaining educational services. The selected tools make it possible to increase competitiveness in the educational services market and to ensure communication with external and internal users. Accordingly, the authors identified the most effective advertising platforms, taking into account an analysis of consumers, their preferences, and their interests. Based on the results of the study, a promotion program was developed and the economic efficiency of the proposed measures was assessed. Introduction The market of educational services in Russia has long been a socially significant arena and is intended to create conditions for the harmonious development of the individual. The construction of a European zone of higher education and the development of information technologies give new impetus to the modernization of vocational education and open up additional opportunities for universities to participate in the national educational space. In the context of state regulation and increased competition in the market of educational services, institutions of higher and additional education should pay close attention to finding effective methods of promotion on the Internet in order to attract the target audience and increase the efficiency of their activities. The development of educational institutions' marketing activities with the use of modern online marketing tools is becoming increasingly important [1][2][3]. The importance of email marketing is difficult to overestimate: this tool builds loyalty, trust, and interest in the company and its products. The development and practical use of Internet technologies allow an organization to achieve its marketing goals and objectives and to promote its services to the relevant markets by meeting the needs of the subjects of the educational services market. A correctly chosen strategy and tactics for promoting services on the Internet allow a company to reach a leading position, raise general awareness, and generate interest among the Internet audience.
The advantages of the Bologna process are thus: increased access to higher education, a subsequent increase in the attractiveness and quality of European higher education, increased mobility of teachers and students, and the successful employment of university graduates, since all academic degrees and other qualifications should be oriented toward the labor market [4]. Many scientific articles and monographs are devoted to the socialization of the educational market and the harmonization of personality development, and numerous attempts have been made to analyze the promotion systems of educational organizations. Certain results of theoretical and practical research on marketing development in educational organizations have now been accumulated [5][6][7][8][9][10][11]. However, not enough attention has been paid to research on educational institutions in this area based on the use of Internet technologies. At the present stage, specialists in the field of Internet marketing lack a clear understanding of an integrated approach to assessing the functioning of a company's website; the most effective channels for attracting customers and tools for promoting educational services have not been identified on the basis of their specific characteristics and the company's goals on the Internet. The peculiarities of the educational services market are determined by the presence of state regulation of this sphere, a significant share of the public sector, and the strong influence of the state on the activities of non-state educational institutions, which actively shape the formation of a harmonious human personality. Research materials and methods Promotion of educational services through an educational portal can draw on a huge arsenal of tools, including search engine optimization, contextual advertising, banner advertising, e-mail marketing, and partner (affiliate) marketing. However, the main and perhaps most important tool is the institution's official website. The effectiveness of promoting educational services through the website is achieved not only by overall audience attendance but by attendance of the target audience; if the share of target visitors is small, the effectiveness for the educational institution will be very low. The site should provide an optimal array of information to a wide range of customers. It is important for students to be able to quickly find out the class schedule, changes to the schedule, and any other information that helps them orient themselves in the educational and extracurricular activities of the university. The main advantage of Internet marketing in the educational sphere is the availability of, and unrestricted consumer access to, information about educational services. In promoting educational services, Internet marketing can include such elements as display advertising, contextual advertising, search marketing, direct marketing, mobile marketing, social marketing, time marketing, and confidential marketing. For the effective promotion of educational services, search engine marketing in general and SEO (search engine optimization) in particular are used, along with SMO (social media optimization, promotion of a site in social media networks) and SMM (social media marketing, marketing in social media networks).
The main goal of this study is to develop a plan for promoting an organization's educational services on the Internet. The research on an Internet promotion strategy was carried out on the basis of the educational computer center «Arena Center». «Arena Center» is a training center operating on the basis of the Ural State Economic University (USUE). The center started its work in 2007 and has been operating successfully for 8 years, providing educational and consulting services and training for professions in line with trends in IT technologies, design, and animation. Results and discussion When choosing the types of marketing communications and promotion tools, the goals of the promotion and the target audience are taken into account. In Table 1 the authors give the main groups of the target audience and the corresponding promotion tools. An analysis of the site's traffic showed that 47% of the site's audience comes from search engines, 24% from advertising, 17% from direct visits, 11% from referrals, and 1% from social networks. To identify the factors influencing the choice of an educational institution and to obtain data on sources of information about it, the results of a written survey of students at the Arena Center were recorded. The results showed that the choice of an educational institution is influenced primarily by the quality of service provision (47%) and the qualifications and experience of the teaching staff (19%). The survey revealed that the majority of new clients (90%) at Arena Center, unlike regular ones (10%), usually choose no more than 1-2 training directions. In general, the goals of study are obtaining new and modern knowledge for personal self-development (39%), gaining the knowledge and experience necessary for current work (28%), and the possibility of additional earnings (31%). The majority of students learned about the institution through Internet sources (59% of respondents), and 19% through the recommendations of acquaintances. The survey also measured students' satisfaction with the convenience of the training center's site; on the whole, respondents gave a rather positive assessment. However, 12% of respondents noted that they visited the site from mobile phones, which made it difficult to obtain the necessary information. It is also important to correlate the purpose of visiting the site with the result of its fulfillment. To segment the audience by the purpose of the visit, one can, for example, divide key phrases into commercial and informational queries. Closely related to this are the bounce rate, the average time on site, and the viewing depth. A high bounce rate does not necessarily mean that visitors find the site uninteresting; they may obtain all the necessary information on one page and then close it. The same applies to viewing depth. The overall efficiency of the site is tracked through conversion. In statistical systems, this indicator is counted automatically: it shows the ratio of goal completions to visits over a certain period of time. The most accurate data are obtained for periods of 1 to 6 months, because in this case deferred conversions are also included in the final result.
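Since conversion is the central site-efficiency metric here, a minimal Python sketch of the calculation may help; the visit and goal figures below are invented for illustration and are not the Arena Center's data.

```python
def conversion_rate(goal_completions: int, visits: int) -> float:
    """Conversion = goal completions / visits over the chosen period."""
    return goal_completions / visits if visits else 0.0

# e.g., 230 course sign-ups from 12,400 visits over a three-month window:
print(f"{conversion_rate(230, 12_400):.2%}")  # -> 1.85%
```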
The authors provide a comprehensive assessment of the site of the educational computer center. The results of the site analysis allowed us to formulate recommendations for improving its performance on the Internet. Among the most important, the authors note the correction of technical parameters (page indexing optimization, download speed) and the need to adapt the site for mobile devices. As the main marketing communications of the educational institution with the subjects of the educational services market on the Internet, the authors identify direct marketing, PR, and advertising, which include a wide range of tools for online promotion. On this basis, the authors propose the following tools to promote the educational services of the «Arena Center»: the official website, e-mail, contextual advertising, and media advertising. The selected tools make it possible to increase competitiveness in the educational services market and to ensure communication with external and internal users. Accordingly, the authors identified the most effective advertising platforms (presented in the media plan, Table 2), taking into account an analysis of consumers, their preferences, and their interests. As a result of the work, the effectiveness of the proposed activities was assessed. It was revealed that the most suitable platforms, in addition to those already used by the Arena Center training center, are Avito.ru, Mail.ru, and the already used E1.ru. These resources are the most visited and are of the greatest interest to the target audience. It was therefore necessary to assess the volume of overlapping audiences and to determine the share of unique visitors for the further economic justification of the proposed measures. Having determined the efficiency of each platform, it was possible to establish an approximate number of attracted customers and, accordingly, to calculate the conversion rate of each resource. The data obtained supported a conclusion about the planned increase in the flow of visitors to the site arenaekb.ru and in the number of possible customers. Thus, data on the possible number of customers and on advertising costs make it possible to determine the cash flow and to evaluate the payback period of the investment. Evaluating the payback period requires special attention owing to the heterogeneity of the types of advertising, the large differences in customer-acquisition periods, the optimal response time for potential customers, and the company's practice of granting deferred payment to its customers. The amount of deferred payments should be accounted for on the basis of the receivables turnover indicator and the average volume, which establishes the average payment term for the services of the center's clients. Based on the above data, a monthly cash flow matrix was compiled for each of the proposed advertising platforms over a 12-month period. The calculations established the average expected payback period of the investments, as well as the fact that investments in each of the proposed platforms pay off within the established period (Table 3). The indicators selected for the analysis were the conversion rate and the profitability of customers; the results of the calculations are shown in Table 4. The data show that negative profitability is observed when both indicators decrease by 30% or 45%.
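To illustrate the payback-period logic applied to the monthly cash-flow matrices above, here is a small Python sketch; the cost and cash-flow figures are invented and do not reproduce the authors' Table 3.

```python
def payback_month(initial_cost: float, monthly_cash_flows: list[float]) -> int | None:
    """Return the first month in which cumulative cash flow covers the cost."""
    cumulative = 0.0
    for month, cf in enumerate(monthly_cash_flows, start=1):
        cumulative += cf
        if cumulative >= initial_cost:
            return month
    return None  # does not pay back within the planning horizon

# e.g., an advertising platform costing 60,000 rubles with growing inflows:
flows = [9_000, 12_000, 15_000, 18_000] + [20_000] * 8  # rubles per month
print(payback_month(60_000, flows))  # -> 5 (pays back in the fifth month)
```

Running this for each platform over the 12-month horizon, as the authors describe, yields the per-platform payback periods that are then compared against the established limit.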
The total economic effect of the proposed activities in the accounting period will be 1459 thousand rubles, which is 14% more than in the previous year, while revenue will grow by 21% and amount to 16,392 thousand rubles. The calculated indicators should be compared with those of the previous period; for this it is necessary to refer to Table 5. The analysis of Table 5 revealed that the figures for revenue and net profit after tax are positive in the period under review. Relative to the 2016 indicators, the indicators of the planned period show a positive trend: revenue increases by 21% (by 2,876,529.56 rubles), and net profit increases by 12%, which indicates the economic feasibility of the proposed activities. The authors also note the change in expenditures in the planning period, which increased by 5% over the 2016 level. The change in costs was calculated on the basis of the change in advertising costs in the planning period, while the fixed part of the costs remained unchanged. The cost of marketing activities using Internet platforms increased by 237,702 rubles relative to the previous period. Conclusion The research showed that the proposed advertising campaign will allow the company to increase its revenue by 21% and its profit by 13% in the future. The assessment of the economic feasibility of the proposed measures to promote the educational services of the computer center «Arena Center» demonstrates the usefulness of the proposed tools for optimizing sales through an effective advertising campaign. The final stage of implementing the Internet marketing program is the development of directions for its realization. Thus, the set of proposed recommendations for improving the marketing activities of an educational institution, based on a plan for promoting educational services on the Internet, will improve the efficiency of the company as a whole and increase the flow of consumers, which is an important component of competitiveness in the context of socialization and helps to strengthen loyal relationships with target audiences.
2021-08-27T16:54:48.999Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "1d2a3bf311e4355ee55a6bf5595a22955be28d67", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/67/e3sconf_sdgg2021_05046.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "14432dba349397b5f6c8e9897db3f793e2ebc436", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
169210935
pes2o/s2orc
v3-fos-license
DID model based research on the policy effect of national independent innovative demonstration zones . National Independent Innovation Demonstration Zones are set up as combinations of regional technology innovation policies. Taking nine areas - with Beijing Zhongguancun, Wuhan East Lake, and Shanghai Zhangjiang as the national independent innovation demonstration zones - as a sample, this paper uses the Difference-in-Differences (DID) model to study the impact of this policy mix on regional technological innovation capability. The study shows that, after the implementation of the policy, regional technological innovation capability increased; relative to regions that did not implement the policy, such as Chongqing and Shenyang, regions that did, such as Beijing and Wuhan, show a more pronounced improvement in regional technological innovation capability. Total fixed-asset investment of the whole society, population density, and regional GDP inhibit regional technological innovation, while human capital, research funding, actual foreign investment, and government funding support promote it. Re-estimating the same models as a stability test yields similar results. Introduction Since the 21st century, competition among enterprises has intensified and the development environment has shown a high degree of uncertainty. In an era of economic marketization, information technology, and diversified consumer demand, adjusting the industrial structure and optimizing the economy have become the main economic tasks, and science and technology play an increasingly important role in modern economic transformation, with technological innovation acting as a catalyst of this transition. Improving the technological innovation ability of enterprises enhances their core competitiveness and has become the determinant of enterprises' market position and of the competition in comprehensive national strength. Since the establishment of the first National Independent Innovation Demonstration Zone - Zhongguancun in Beijing in March 2009 - 14 national independent innovation demonstration zones, including Wuhan East Lake, Shanghai Zhangjiang hi-tech park, and Tianjin Binhai, had been established by the end of April 2016. The purpose of establishing demonstration zones is to play a role in demonstrating independent innovation and the development of high and new technology, and to lead, radiate, and drive regional economic development. In measuring the effect of the demonstration zones, the most important criterion is whether the demonstration zone policy can promote regional technological innovation capability. The effect of the National Independent Innovation Demonstration Zone policy on regional innovation capability is far-reaching. Therefore, evaluating the net policy effect and adjusting the policy and the direction of its implementation has both theoretical and practical significance, and research on the national independent innovation demonstration zone policy has reference value for its implementation and improvement. This article uses the mainstream method of policy effect evaluation - difference-in-differences analysis - to analyze and compare, from a quantitative perspective, the net effect of the National Independent Innovation Demonstration Zone policy combination. Literature review Research on technological innovation policy.
Much of this research concerns the efficiency of technology innovation policy. Tijssen proposed principles for evaluating innovation effects (including design principles for the indicator system), namely that indicators should be specific, measurable, acceptable, and able to change over time. Forster et al. proposed using the industrial income tax revenue from scientific and technological achievements as a measure of the efficiency of scientific research and supplied a range of tools for evaluating the benefits of existing research [1]. Research on policy performance and policy effect. Wang Baoshun used data envelopment analysis to evaluate the efficiency of local urban environmental governance in China and empirically tested the environmental variables that affect the efficiency of fiscal expenditure [2]. Deng Li calculated and analyzed the total factor productivity of Guangdong province using the non-parametric DEA-Malmquist index method, estimating total factor productivity growth, efficiency change, and the rate of technological progress for 21 prefecture-level cities in Guangdong Province from 1980 to 2004; the analysis showed that Guangdong's economic growth was mainly driven by increased factor inputs [3]. Liu Qilin analyzed the sources of productivity growth in China's energy industry - technical progress and technical efficiency, their differences and trends - through empirical analysis with the Malmquist DEA method, using 1999-2010 panel data on the energy industry for 29 Chinese provinces [4]; the trend of total factor productivity, its key factors, and its convergence conditions were analyzed. Chen Qiuying used the DEA CCR model, the BCC model, and Malmquist index analysis of science and technology policy performance in Xiamen, Zhangzhou, and Quanzhou to obtain indices of pure technical efficiency change, scale change, total factor productivity, technical efficiency change, and technical change [5]. Research on the application of the DID method. Cao Honghua et al. analyzed the control effect of ecological agriculture policy on point-source pollution from industry and services, selecting the Erhai River Basin as the study area and revealing the effect of the policy [6]. Wang Rongcheng et al. constructed a "mountain town" policy effect evaluation system to predict and test the policy effect for plateau towns [7]. Yang Shali et al. surveyed, at the micro level of enterprises, the business environment, economic benefits, and policy effects following the 2009 transformation of the value-added tax [8]. Zhao Luan et al. assessed the effect of the reform of rural credit cooperatives on improving financial support for agriculture [9]. Lin Chen et al. studied the effect of the higher education expansion policy on technological innovation efficiency [10].
Given the long-term, far-reaching influence of the National Independent Innovation Demonstration Zone policy on regional technological levels and regional technological innovation capacity, this article uses the DID model on 2010-2017 technology innovation capability indicators for nine regions - policy areas such as Beijing Zhongguancun, Shanghai Zhangjiang, and Wuhan East Lake, and non-policy areas - to perform a double-difference analysis: comparing the variation between policy and non-policy regions, measuring the impact of the policy on regional technological innovation capacity, and analyzing in depth the influencing factors in the implementation process and the extent of their impact. Model setting Analyses of policy effects mainly rely on causal models and treatment effect models; although causal models are widely used, whether the estimated causality is a true causal relationship, and to what extent it reflects causality, remains in doubt [11]. The DID (difference-in-differences) model has in recent years been used to evaluate the effects of policy implementation. Its principle is to divide observations into an experimental group and a control group and compare their different performance before and after the implementation of the policy (or with and without the policy); the double difference of the changes in a specified indicator yields the net effect of the policy on the experimental group. Following this principle, regions where a national independent innovation demonstration zone was established form the experimental group, and regions without one form the control group. The general model is as follows:

$$Y_{i,t} = \beta_0 + \beta_1 D_{i,t} + \beta_2 T_{i,t} + \beta_3 D_{i,t} T_{i,t} + \varepsilon_{i,t}$$

where $Y_{i,t}$ is the outcome of individual $i$ in period $t$; $D_{i,t}$ is the group dummy variable ($D_{i,t} = 1$ indicates that the individual belongs to the policy group, $D_{i,t} = 0$ that it belongs to the non-policy group); $T_{i,t}$ is the period dummy variable ($T_{i,t} = 1$ for the post-policy period, $T_{i,t} = 0$ otherwise); $D_{i,t} T_{i,t}$ is the interaction term; $\beta_0$-$\beta_3$ are the parameters to be estimated; and $\varepsilon_{i,t}$ is the random disturbance term. According to purpose, the observations can be divided into four groups: the control group before and after the implementation of the policy, and the experimental group before and after the implementation of the policy; the net impact of the policy is the difference-in-differences of the outcomes, $\beta_3$, as shown in Table 1. Source: Authors. Variable selection and description Following the logic of the double-difference model, the experimental and control groups must first be selected. Because China currently has 14 national independent innovation demonstration zones, newly established zones with no obvious policy effect were excluded; Beijing Zhongguancun, Shanghai Zhangjiang, and Wuhan East Lake were finally selected as the experimental group, and Xi'an, Shenzhen, Chengdu, Chongqing, Guangzhou, and Shenyang as the control group. The selected variables are: invention patents, number of scientific research personnel in high-tech enterprises, high-tech enterprise R&D expenditures, total fixed-asset investment of the whole society, Torch Plan government funding support, actual foreign investment, population density, and GDP.
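Before turning to the specific models, here is a hedged Python sketch of how the general DID specification above could be estimated with statsmodels; the column names (patents, policy, post, city) are assumptions about the panel layout, not the authors' Stata code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def did_effect(df: pd.DataFrame) -> float:
    """df columns (assumed): patents (Y), policy (D, 1 = demonstration zone),
    post (T, 1 = after establishment), city (for clustered standard errors)."""
    # 'policy * post' expands to policy + post + policy:post,
    # matching Y = b0 + b1*D + b2*T + b3*D*T + e
    model = smf.ols("patents ~ policy * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["city"]})
    print(model.summary())
    return model.params["policy:post"]  # beta3: the net policy effect
```

Control variables (R&D personnel, R&D expenditure, fixed-asset investment, and so on) would simply be appended to the formula.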
Specific models are as follows; they follow the general specification above, adding the explanatory variables stepwise. The data in this paper come mainly from the China Torch Statistical Yearbook. The statistical characteristics of the above variables are shown in Table 2; the sample comprises 72 city-year observations for the 9 cities of the National Independent Innovation Demonstration Zone study over 2010-2017. Overall, apart from regional GDP, the maximum values of the data come from the experimental group and the minimum values generally come from the control group, showing that the experimental group is markedly stronger than the control group in funding and personnel inputs and in innovation output. Empirical results and analysis For the panel data, both a fixed-effects variable-intercept model and a random-effects variable-intercept model were estimated; the Hausman test rejected the assumptions of the fixed-effects model, so the random-effects model is more appropriate. Using Stata 12.0 to estimate the model, the regression results are shown in Table 3. Model 1 is the basic model, containing the scientific research personnel of high-tech enterprises, high-tech enterprises' internal R&D expenditures, and the control-group, time, and interaction terms. Models 2 to 6 successively add the other explanatory variables; model 6 includes scientific research personnel of high-tech enterprises, high-tech enterprise R&D expenditures, total fixed-asset investment of the whole society, Torch Plan project government funding support, population density, actual foreign investment, and regional GDP as explanatory variables. Note: t statistics in parentheses; * p < 0.1, **p < 0.05, ***p < 0.01. The coefficient of the dummy-variable interaction term is positive, indicating that the establishment of a national independent innovation demonstration zone can indeed significantly raise the number of authorized invention patents in the locality where the park is situated and helps to improve regional technological innovation capability, as the facts have confirmed. The coefficient of the proportion of scientific research personnel in high-tech enterprises is positive, demonstrating a positive effect on regional technological innovation capability: an increase in the number of researchers, supported by substantial funding, transforms human capital into output and greatly raises the level of R&D technology. The coefficient of high-tech enterprise R&D expenditures is positive, which is consistent with reality and reflects enterprises' growing attention to technological innovation; in general, the greater the intensity of corporate R&D investment, the greater the innovation output and the stronger the technological innovation capability. The coefficient of total fixed-asset investment of the whole society is negative, although the adverse effect is small; in more developed regions, once the innovation-supporting infrastructure has reached a certain level, technological innovation capability no longer improves significantly with further investment, as it did at the beginning of reform and opening up.
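For readers who want to reproduce the fixed- versus random-effects choice, the following sketch shows a manual Hausman test with the linearmodels package; the formula and variable names are illustrative assumptions, and the original analysis was carried out in Stata 12.0.

```python
import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

def hausman_test(df):
    """df: DataFrame with a (city, year) MultiIndex; column names assumed."""
    fe = PanelOLS.from_formula(
        "patents ~ rd_staff + rd_funds + EntityEffects", df).fit()
    re = RandomEffects.from_formula(
        "patents ~ 1 + rd_staff + rd_funds", df).fit()
    common = [p for p in fe.params.index if p in re.params.index]
    b = (fe.params[common] - re.params[common]).values
    v = (fe.cov.loc[common, common] - re.cov.loc[common, common]).values
    stat = float(b @ np.linalg.inv(v) @ b)      # Hausman chi-squared statistic
    p = stats.chi2.sf(stat, df=len(common))
    return stat, p  # conventionally, a small p favors fixed effects
```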
The coefficient of population density is negative. In theory, areas with greater population density should be more conducive to innovation, but in many cases - for example, when officials' term-of-office incentives lead to a pure pursuit of GDP growth at the expense of education, which is more favorable to innovation output in the long run - this does not hold; in addition, the greater the population density of a region, the larger its migrant population and the correspondingly lower the quality of human capital, which makes population density negatively correlated with innovation output. The coefficient of actual foreign investment is positive, showing a positive effect: foreign investment has consciously flowed into high-tech and strategic emerging industries, the government has improved its market-for-technology strategy, and the competition introduced by foreign capital stimulates domestic enterprises' desire for technological innovation. Under this dual stimulus, the advantages of introducing foreign investment for technological innovation output slightly outweigh the disadvantages, although the effect is small. The coefficient of regional GDP is negative, although this result is not significant. In general, regions with higher GDP have more developed economies, yet the share of R&D investment has not risen with GDP. At the same time, the coefficient of the government funding support variable is positive, indicating that this capital investment benefits regional technological innovation capability. In conclusion, in view of the interaction term coefficient, the establishment of a National Independent Innovation Demonstration Zone improves regional technological innovation capability; that is, the policy measures are significantly effective. Robustness test To verify the conclusion, this paper re-estimates the benchmark regression with a fixed-effects model. Table 4 reports the results of the robustness test: the coefficients of the interaction term (T*D) differ somewhat, but the differences are small and both remain positive and significant at the 1% level. The result still holds after the robustness test; that is, the DID estimate of the National Independent Innovation Demonstration Zone policy effect is robust, and the policy has a positive role in promoting regional technological innovation output. Conclusions and prospects This paper adopts the DID model to evaluate the policy effect of the National Independent Innovation Demonstration Zones, using granted invention patents as the outcome variable and scientific research personnel of high-tech enterprises, high-tech enterprise R&D expenditures, total fixed-asset investment of the whole society, Torch Plan project government funding support, population density, actual foreign investment, and regional GDP as explanatory variables. Through the analysis, we draw the following conclusions. The establishment of the National Independent Innovation Demonstration Zones plays a significant role in promoting regional technological innovation capability. This shows that China, planning at all levels from the overall layout, establishes demonstration zones and then extends them across the country, producing a technology diffusion effect and a leading role, and through policy increases the output of technological innovation activities and enhances regional technological innovation capability.
From the perspective of the foregoing measurement, the establishment of a national independent innovation demonstration zone raises the technological innovation capability of its region: the establishment of a demonstration zone makes government funding support more targeted and concentrated and stimulates enterprises' investment in scientific and technological personnel and in research and development. The establishment of a National Independent Innovation Demonstration Zone also gives enterprises more confidence in the future. The demonstration role of the zones should operate through investment policy, talent introduction, tax policy, and financial policy, with the promotion of technological innovation capability as the core goal; to achieve this, the outstanding problems of technological innovation capability and efficiency that arise in the current implementation of the policy must be solved. The innovation environment plays a certain role in promoting regional technological innovation capability. To a certain extent, actual foreign investment stimulates business activity, bringing enterprises a sense of crisis and a "catfish effect": to survive, develop, and grow, enterprises must improve their technological innovation capability. At the same time, we should be wary of foreign capital seeking to occupy the Chinese market and control core technologies; at this stage we should introduce core technologies while keeping the initiative in our own hands, and rely on human capital, technology absorption, and knowledge spillover effects. Scientific and technical personnel and research funding play an important role in improving regional technological innovation capability. The contribution of human capital to regional technological innovation capability is especially important, a viewpoint verified in previous studies; however, population density does not obviously raise the quality of human capital, so its effect on regional technological innovation capability is not very obvious. On the one hand, improvements in the quality of human capital do not happen overnight, and population mobility limits the government's support for education; on the other hand, the implementation of regional policy attracts a larger migrant population, keeping the quality of human capital at a lower level, which plays a negative role in regional technological innovation. Government funding support and R&D expenditure signal an emphasis on R&D, which can stimulate enterprise development and give enterprises confidence in their technological innovation capability. In the future, special talent-introduction mechanisms should be implemented to accelerate the accumulation of human capital; government innovation funds should be further concentrated, with advanced industries as the focus of support; and, while advocating corporate R&D investment, the risks of enterprise development should be reduced so as to maximize the enthusiasm of demonstration zone enterprises for technological innovation. Bringing high and new technology industries together is conducive to the diffusion of technology and readily produces agglomeration effects.
Owing to imperfect policies, the short implementation period, the small implementation area, and possible policy lags, the effect has not yet been fully demonstrated; future work will track and analyze the policy effect of the national independent innovation demonstration zones from multiple perspectives. In addition, owing to data availability, we did not refine the policy variables; the near future will see analysis of the policy's impact on technological innovation capability.
2019-05-30T23:46:47.150Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "fc0f2f56e681046b701c60af43ee1e9eb4ec0d66", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2019/02/shsconf_ies2018_01033.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8238ed63c61089ea6a88d4729bc5294702ce376f", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
257180074
pes2o/s2orc
v3-fos-license
Effect of Quercetin Nanoparticles on Hepatic and Intestinal Enzymes and Stress-Related Genes in Nile Tilapia Fish Exposed to Silver Nanoparticles Recently, nanotechnology has become an important research field involved in the improvement of animals' productivity, including aquaculture. In this field, silver nanoparticles (AgNPs) have gained interest as antibacterial, antiviral, and antifungal agents. On the other hand, their extensive use in other fields has increased natural water pollution, causing hazardous effects on aquatic organisms. Quercetin is a natural polyphenolic compound found in many plants and vegetables, and it acts as a potent antioxidant and therapeutic agent in biological systems. The current study investigated the potential mitigative effect of quercetin nanoparticles (QNPs) against AgNPs-induced toxicity in Nile tilapia by investigating liver function markers, hepatic antioxidant status, apoptosis, and the bioaccumulation of silver residues in hepatic tissue, in addition to the whole-body chemical composition, hormonal assay, intestinal enzymes' activity, and gut microbiota. Fish were grouped into: control fish, fish exposed to 1.98 mg L−1 AgNPs, fish that received 400 mg kg−1 QNPs, and fish that received QNPs and AgNPs at the same concentrations. All groups were exposed for 60 days. The moisture and ash contents of the AgNP group were significantly higher than those of the other groups. In contrast, the crude lipid and protein decreased in the whole body. AgNPs significantly increased serum levels of ALT, AST, total cholesterol, and triglycerides and decreased glycogen and growth hormone (*** p < 0.001). The liver and intestinal enzymes' activities were significantly inhibited (*** p < 0.001), while the hepatic oxidative damage markers, the intestinal total bacterial and Aeromonas counts, and the Ag residues in the liver were significantly increased (*** p < 0.001 and * p < 0.05). AgNPs also significantly upregulated the expression of hepatic Hsp70, caspase3, and p53 genes (* p < 0.05). These findings indicate the oxidative and hepatotoxic effects of AgNPs. QNPs enhanced and restored physiological parameters and health status under normal conditions and after exposure to AgNPs. Introduction Thanks to nanotechnology, it has become possible to manage compounds with smaller dimensions (less than 100 nm), which facilitates their uptake by cells and makes them effective in small doses. Recently, nanotechnology applications have increased in veterinary medicine. In the present study, the protective role of dietary QNPs against AgNPs-induced toxicity was evaluated through the whole-body chemical composition, hormonal assay, intestinal enzymes' activity, and gut microbiota. In addition, the relative mRNA levels of some stress- and apoptosis-related genes were investigated in Nile tilapia (Oreochromis niloticus), the predominant and most commonly cultured species in many countries, especially for intensive aquaculture. AgNPs and QNPs Preparation To obtain AgNPs, the Bacillus subtilis MT38 isolate was inoculated in Luria Bertani (LB) broth medium and incubated at 35 °C for 24 h. Twenty milliliters of the bacterial suspension, obtained after centrifugation at 8000 rpm for 20 min, were added to 80 mL of AgNO3 (3 mM) at pH 6 and 30 °C and subjected to an agitation speed of 150 rpm for 24 h. All chemicals were purchased from Sigma-Aldrich International GmbH (St. Louis, MO, USA). To obtain QNPs, a solution of 50 mL of ethanol containing 100 mg of quercetin was prepared.
The internal organic phase solution was quickly injected into a 150 mL external aqueous solution containing the appropriate amount of polyvinyl alcohol (PVA), and the mixture was then homogenized at 20,000 rpm for 30 min. The ethanol was evaporated using a rotary vacuum evaporator at 45 °C, and the obtained material was lyophilized using a freeze dryer. The obtained AgNPs and QNPs were characterized using a UV-Vis spectrophotometer (Laxco dual-beam spectrophotometer, Lake Forest, IL, USA), dynamic light scattering (DLS, Malvern, Worcestershire, UK), a technique used to study the size and charge of suspended nanoparticles, and transmission electron microscopy (TEM, JEOL 1010, Tokyo, Japan) to measure the AgNPs' size in colloidal solution. Zeta potential analysis was carried out to determine the surface charge of the nanoparticles. Fish and Diet Formulations Two hundred and forty O. niloticus (40 ± 0.45 g body weight) were purchased from a hatchery (El-Abbassa Fish Hatchery, El-Abbassa, Al-Sharkia, Egypt) and subjected to an acclimatization period of 14 days in dechlorinated tap water in glass aquaria. Fish were fed a basal diet (without AgNPs or QNPs) 3 times daily, corresponding to 5% of their biomass. The recommendations of the American Public Health Association regarding water quality parameters were followed [45]. The same rearing conditions were maintained in all glass aquaria, including temperature, pH, ammonia, and dissolved oxygen, with a photoperiod of 10 h:14 h (light:dark). The QNPs (400 mg/kg) were mechanically mixed with the basal diet ingredients, pelletized, and left to dry at 25 °C for 24 h. The prepared diet was kept in the refrigerator at 4 °C until use. The composition of the basal diet was 32% crude protein, 4.55% fat, 4.25% fiber, 7.3% ash, and 51.8% nitrogen-free extract. Nile tilapias were allocated into four groups (n = 60/group), each with four replicates (fifteen fish/replicate). Fish were kept in glass aquaria (100 × 50 × 40 cm) containing 160 L of dechlorinated tap water. The first group (control) did not receive AgNPs or QNPs in the water or the diet. The second group was fed a basal diet supplemented with 400 mg QNPs per kg diet (QNPs-supplemented group). The third group was fed a basal diet and exposed to AgNPs (1.98 mg/L; corresponding to 1/10th of the LC50). The fourth group (AgNPs/QNPs co-administered group) received QNPs and was exposed to AgNPs at the previously mentioned concentration. The daily feeding regime was performed three times, at 7:00 a.m., 11:00 a.m., and 4:00 p.m., throughout the experimental period (60 days), and the amount of feed was adjusted every two weeks according to body weight. Chemical Composition of the Whole Body On the 60th day of the experiment, five fish were randomly selected (n = 5/replicate) from each group to estimate the proximate chemical composition of the whole body, represented as percentages of the wet weight [46]. The crude protein was estimated by the Kjeldahl method (Velp Scientifica, Usmate Velate, MB, Italy). The moisture was estimated by a natural convection oven (JSON-100, Gongju-City, Republic of Korea). Ash and fats were estimated by muffle furnace and Soxhlet extraction (Thermo Scientific, Greenville, NC, USA), respectively. Blood and Tissue Sampling Blood samples were collected from the caudal vein by sterile syringes and then placed in sterile tubes (free from anticoagulant).
The samples were left to coagulate, centrifuged at 1075× g for 20 min to separate the serum, and then stored at −20 °C until physiological, biochemical, and hormonal analyses. Fish from the different groups were sacrificed by spinal cord sectioning, and the liver and whole intestine were collected. The collected organs (100 mg each) were homogenized in 10 mM phosphate/20 mM Tris, pH 7.0, using a mechanical homogenizer at 600× g for 3 min at 4 °C, and the supernatant was collected after centrifugation. Intestinal and liver enzymes' activities were also analyzed. Parts of the livers were frozen until the determination of silver residues. Another set of liver tissue samples was quickly transferred to liquid nitrogen and then stored at −80 °C until RNA extraction. Other intestine samples were used for the bacterial count. Oxidative Injury Assays and Antioxidant Status The activities of the antioxidants catalase (CAT) and superoxide dismutase (SOD), the concentration of reduced glutathione (GSH), and the oxidative injury marker malondialdehyde (MDA) were assessed in the liver tissue using colorimetric methods [52][53][54][55]. The same approach was also used to monitor the protein carbonyl (PC) content in hepatic tissue (Cayman Chemical Company, Ann Arbor, MI, USA). Expression of Liver Apoptosis and Stress-Related Genes RNA was extracted from the hepatic tissue, and its integrity and concentration were checked by 1% agarose gel electrophoresis and spectrophotometry. First-strand cDNA was synthesized using a QuantiTect RT kit (Qiagen, Hilden, Germany). The primers of the tested genes (caspase3, casp3; heat shock protein 70, Hsp70; tumor suppressor protein, p53; the internal housekeeping gene β-actin) are presented in Table 1. Real-time PCR was performed using a QuantiTect SYBR Green PCR kit (Qiagen, Hilden, Germany) and a Rotor-Gene Q apparatus. The thermocycler conditions were 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 30 s. The relative expression of the studied genes was analyzed using the 2^−ΔΔCt equation [60]. Intestinal Enzyme Activities The intestinal lipase and α-amylase activities were estimated with fast colorimetric kits (Spectrum Diagnostic Co., Cairo, Egypt) [61,62], according to the manufacturer's directives. The intestinal protease activity was estimated according to the method proposed by Bezerra et al. [63]. Determination of Aeromonas Counts and Total Intestinal Bacteria Intestine samples were taken from 5 fish/group to enumerate Aeromonas and total bacteria. The samples were homogenized in sterile saline peptone water (8.5 g L−1 NaCl and 1 g L−1 peptone), followed by serial dilution up to 10^7. The total bacteria and Aeromonas were counted after incubation at 37 °C for 24 h on plate count agar [64] and agar medium [65], respectively. Determination of Silver Residues The liver samples were subjected to acid digestion [66]. One gram of each sample was transferred to a screw-capped glass bottle and exposed to 4 mL of a digestion solution of nitric and perchloric acid (1:1). The samples were left at room temperature for 24 h for an initial digestion and then heated for 2 h at 110 °C. After that, the samples were cooled, and deionized water was added. Then, the solutions were warmed in a water bath for 1 h to eliminate nitrous gases. The digestion products were filtered, and deionized water was added up to 25 mL. Silver residues were determined by flame atomic absorption spectrophotometry (FAAS).
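The 2^−ΔΔCt calculation cited above [60] is compact enough to show directly. The following Python sketch is a generic illustration with made-up Ct values, not the study's data:

```python
import numpy as np

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method: the target gene is first
    normalized to the housekeeping gene (beta-actin), then expressed
    relative to the mean delta-Ct of the control group."""
    d_ct = np.asarray(ct_target, float) - np.asarray(ct_ref, float)
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl, float)
                        - np.asarray(ct_ref_ctrl, float))
    return 2.0 ** (-(d_ct - d_ct_ctrl))

# Illustrative Ct values for a stress gene in an exposed group vs. control:
print(fold_change_ddct([22.1, 21.8, 22.4], [15.0, 15.2, 14.9],
                       [24.6, 24.9, 24.7], [15.1, 14.8, 15.0]))
```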
Statistical Analysis The obtained data were statistically analyzed by SPSS (version 16.0, SPSS Inc., Chicago, IL, USA). All data are presented as means ± standard deviation. One-way analysis of variance (ANOVA) with Tukey's multiple comparison post hoc test was applied to compare means among groups (* p < 0.05). AgNPs and QNPs Characterization (Surface Chemistry) The results of the characterization of AgNPs are presented in Figure 1. UV-Vis spectroscopy results showed the maximum peak at 420 nm. TEM analysis revealed a spherical shape with an average size of 30-60 nm and a net surface charge of −22 mV. According to the DLS analysis, the exact size was 59 nm. Regarding the QNPs, TEM analysis revealed a spherical shape absorbing UV at 310 nm, an average size of 45-65 nm, and a net surface charge of −23 mV. DLS analysis showed an exact size of 77 nm (Figure 2). Whole-Body Chemical Composition The moisture percent of fish that received AgNPs was significantly higher than that of the other groups by approximately 3.5% (Table 2). The same trend was also observed in the ash, which recorded an increase of 1.8% compared to the QNPs and control groups. Fish that received AgNPs and QNPs showed increased ash percentages; however, these increases were nonsignificant and lower than those in the AgNP group. The crude lipid percentage showed significant changes among the treated groups; the lowest and highest values were observed in the AgNPs and control groups, respectively. The crude lipid percentage of groups that received AgNPs + QNPs or AgNPs was around 5%. AgNPs markedly reduced the crude protein percentage, and such a decrease remained significantly lower than the values of the QNPs and control groups. Serum Physiological Assays AgNPs notably increased serum levels of ALT and AST, with values double to triple those of the control, while QNPs significantly reduced these to values close to those of the control (Table 3). The glycogen level was significantly low in the AgNP group; however, this effect was rescued in the AgNPs + QNPs group. QNPs significantly reduced the serum levels of TG and TC in the QNPs group and kept them at lower values than those of the control. Antioxidant Status and Oxidative Injury Assays The activities of CAT, SOD, and GSH were significantly inhibited in the liver of the AgNP group (Table 4). Notably, GSH recorded a very low activity in the AgNP group, reaching a third of the value of the control group. MDA and PC levels were increased in the liver in response to AgNPs exposure. QNPs improved the negative effect of AgNPs on the activities of SOD, CAT, and GSH and, to a reasonable extent, reduced the levels of MDA and PC in the liver. Values are presented as mean ± SEM. Values with different superscript letters (a, b, c) differ significantly (p < 0.001).
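The study ran these statistics in SPSS; an equivalent one-way ANOVA with Tukey's post hoc test can be sketched in Python as follows (the group values are invented purely for illustration):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ALT-like measurements (U/L) for the four experimental groups.
control  = np.array([21.0, 23.5, 22.1, 20.8, 22.9])
qnps     = np.array([20.5, 21.9, 21.2, 22.0, 20.9])
agnps    = np.array([55.3, 60.1, 58.7, 57.2, 59.4])
agnp_qnp = np.array([27.8, 29.4, 26.9, 28.3, 27.5])

f_stat, p_value = stats.f_oneway(control, qnps, agnps, agnp_qnp)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

values = np.concatenate([control, qnps, agnps, agnp_qnp])
groups = ["control"] * 5 + ["QNPs"] * 5 + ["AgNPs"] * 5 + ["AgNPs+QNPs"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```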
Expression of Apoptosis and Stress-Related Genes The expression of the hepatic Hsp70, casp3, and p53 genes was significantly upregulated in the AgNP group, with values between five- and six-fold increases (Figure 3). The expression of these genes was unaffected by QNPs treatment. Interestingly, the expression levels of these genes returned to the normal range in the AgNPs + QNPs group, except for Hsp70, which decreased by two-fold but remained at higher levels than the control. Intestinal Enzyme Activity QNPs increased intestinal enzyme activities (i.e., amylase, lipase, and protease) (Table 5). QNPs preserved much of the reduced intestinal enzyme activities resulting from AgNPs challenge. QNPs showed a marked effect on intestinal lipase activity in the QNP and AgNP + QNP groups. Hormonal Assay The GH, T3, T4, and glucagon levels were lowered in the AgNP group; however, QNPs kept them at normal levels in the AgNP + QNP group (Table 6). The changes in GH were statistically significant, while those in T3, T4, and glucagon were not significant. Total Intestinal Bacteria and Aeromonas Counts Notably, AgNPs markedly increased the total intestinal bacteria and Aeromonas counts in the AgNP group (Figure 4). However, QNPs significantly decreased the total intestinal bacterial and Aeromonas counts in the QNP and AgNP + QNP groups compared to the control and AgNP groups. Silver Residues The highest level of silver residues was detected in the liver of the AgNP group compared to other groups (Figure 5). QNPs lowered the silver residues in the liver.
Discussion The rapid expansion in the applications of engineered nanomaterials showed environmental impacts that are gaining greater and greater attention, associated with their novel advantages and potential hazards to living creatures. The AgNPs' toxicity was investigated and found to be dependent on the shape, coating material, size, dose, duration of exposure, and species differences [9,67]. Characterization of AgNPs showed a spherical shape with an average size of 30-60 nm under TEM. UV-Vis spectroscopy showed the maximum peak at 420 nm with a −22 mV net surface charge by zeta potential analysis, while the DLS analysis showed a hydrodynamic size of 59 nm. AgNPs have already been characterized for size and dispersity using UV-Vis spectroscopy and TEM, showing a peak at 431 nm with a size distribution ranging from 60 to 80 nm, respectively [68]. Shaluei et al. (2013) reported an average nanoparticle size of 61 nm [69]. The morphological characteristics of AgNPs by TEM showed mono-dispersed, roughly spherical particles with average sizes from 80 to 90 nm without any agglomeration. The spherical configuration of AgNPs under TEM was also observed by Srinonate et al. [70]. The data of DLS analysis showed that the Z-average was 32.20 nm [71]. Sibiya et al. (2022) reported a typical high-pitched peak of absorbance recorded on a UV-Vis spectrophotometer at 450 nm due to the absorption of the AgNPs surface plasmon resonance, which confirmed the reduction of silver nitrate [72]. The same authors examined the size, shape, and morphology of AgNPs using TEM, proving that AgNPs were globular in shape. Other studies reported spherical and scattered smaller-sized AgNPs of approximately 20 nm in size [73,74].
The variations among previous studies and the present one might be ascribed to the different methods of AgNPs synthesis. AgNPs significantly increased serum ALT and AST, with double to triple values compared to the control. Indeed, elevated serum ALT and AST levels are considered liver injury and stress markers [75,76]. Both enzymes regulate the transamination process, particularly during stress, to fulfill the increased energy requirement of the body [77], and modulate the metabolism of carbohydrates and proteins [78][79][80]. Thus, the activities of ALT and AST, but also ALP, are strong indicators for measuring fish toxicity and recovery patterns [81]. In agreement, the ALT and ALP activities in common carp and the ALP and acid phosphatase activities in Labeo rohita were significantly enhanced following exposure to AgNPs [22,82]. This increased activity could be ascribed to disruption of hepatocyte membranes and leakage of such enzymes from the hepatic cells into the bloodstream [25]. At the same time, the liver is an early target of detoxification and accumulation of various toxic substances [21]. The exposure to AgNPs enhanced reactive oxygen species (ROS) production in a hepatoma cell line derived from fish [83], which is also confirmed by the increased MDA and PC levels in our findings. This oxidative stress could disrupt the function of mitochondria and lead to toxic effects by decreasing the integrity of the cell membrane and oxidizing the constituents of the cell [84]. ALT serum levels have been shown to be associated with liver fat [85,86]. In fish and mammals, de novo lipogenesis plays a crucial role in glucose homeostasis, in which lipogenic enzyme activities are modulated by dietary carbohydrate intake [87][88][89] and thus modulate glycogen levels [90]. Since the liver appeared to be targeted by AgNPs, hepatic glucagon signaling seemed to be inhibited, leading to decreased serum glucagon, as seen in the present study. Glucagon receptor signaling is linked to the metabolism of lipids [91] and amino acids [92]. Blockade of the glucagon receptor decreased hepatic amino acid catabolism and increased serum amino acids in animal models, including zebrafish [92][93][94]. Knockdown of the glucagon receptor upregulated the expression of hepatic lipogenic genes, increased hepatic lipid contents, and enhanced de novo lipid synthesis [95]. Glucagon inhibits hepatic de novo lipogenesis via the cyclic AMP-responsive element-binding protein H-insulin-induced gene-2a signaling pathway [96]. In the AgNP group, the whole body's crude lipid and protein percentages were lower than the control. Accordingly, AgNPs may modulate glucagon receptor signaling. Although QNPs decreased the crude lipid content compared to the control (i.e., by approximately 1%), they beneficially increased the protein content in the whole body. QNPs also restored the crude lipid and protein percentages lowered by AgNPs. Glucagon is secreted to regulate blood glucose levels and is strongly suggested to promote ureagenesis to regulate amino acid metabolism [97][98][99]. Hepatic knockdown of the glucagon receptor increased total plasma cholesterol and triglycerides [95]. Quercetin inhibited the increases in plasma cholesterol and protected pancreatic β-cells from oxidative stress, mitochondrial dysfunction (e.g., decreased ATP levels), and lipid peroxidation induced by high cholesterol treatment in vivo and in vitro [100].
Quercetin facilitates cholesterol excretion and helps protect cells from excessive accumulation of cholesterol by enhancing reverse cholesterol transport through the upregulation of related protein expression [101]. Consistently, our findings indicated that QNPs decreased the TC and TG in the QNP and AgNP + QNP groups to lower levels than in the control group. AgNPs have a direct effect on SOD, CAT, GSH, MDA, and glutathione peroxidase (GPx), which can change the antioxidant capacity [102]; they also initiate the production of ROS [103]. These enzymes are responsible for the detoxification of ROS and the maintenance of normal homeostasis. If the antioxidant system cannot maintain safe levels of ROS, oxidative stress occurs, and cellular damage may develop [32]. Mansour et al. (2021) showed a depletion of the activities of antioxidant enzymes and significant MDA production, as an indicator of ROS, in fish exposed to AgNPs at high levels. Similarly, O. niloticus and Tilapia zillii exposed to AgNPs (4 mg/L) showed reduced gene expression and activity of antioxidant enzymes and enhanced levels of MDA in the brain of treated fish [16]. The SOD, CAT, and GST activities were significantly reduced in different organs of Labeo rohita following exposure to increasing AgNPs concentrations [82]. AgNPs from wastewater led to oxidative damage and reduction of SOD activity in rainbow trout [104]. Moreover, exposure of common carp (C. carpio) to AgNPs at 12.5% of the LC50 increased the activity of CAT and SOD, while exposure to 25% and 50% of the LC50 showed opposite effects [22]. Sibiya et al. (2022) showed that AgNPs induced oxidative stress by increasing PC and lipid peroxidation in the gills and altered antioxidants such as GPx, glutathione-S-transferase (GST), CAT, SOD, and GSH in O. mossambicus [72]. Furthermore, AgNPs can interfere with the synthesis of antioxidant enzymes [105]. Therefore, the decrease in antioxidant enzyme activity observed in the present study could be attributed to the depression of antioxidant gene expression and the enzyme synthesis process, leading to the weakening of the cell's antioxidant capacity [84,106]. The mechanism behind this weakening is the nanoparticles' metallic nature and the existence of ionic forms of transition metals that encourage ROS production, leading to oxidative stress [107]. Our results showed that QNPs have effective antioxidant activities against the oxidative damage induced by AgNPs in the liver. Earlier reports indicated that quercetin markedly protected against the decreased activities of SOD and GPx induced by high cholesterol supplementation in animal models [100]. In zebrafish, nano-encapsulated quercetin maintained redox status after exposure to AgNPs [108]. QNPs moderately but effectively preserved the MDA content; however, they could not restore MDA to physiological levels. This finding could be explained by the variable resistance of the antioxidant activities toward AgNPs, in which MDA showed less resistance to AgNPs and Ag+ [102]. However, the other antioxidant enzymes had variable resistance against AgNPs and Ag+, and SOD showed stronger resistance to both forms of silver [102]. According to our results, AgNPs upregulated the expression of Hsp70, p53, and casp3. AgNPs were already shown to induce inflammatory responses, oxidative stress, and upregulation of Hsp70 stress gene expression in Nile tilapia [8]. AgNPs toxicity also induced p53 expression in the liver tissue of adult zebrafish [74].
Moreover, p53 activation in response to DNA damage can lead to cell cycle arrest or apoptosis, preventing cell proliferation [113,114]. However, this action was rescued by QNPs, with a lesser effect on Hsp70, which exerts an antiapoptotic effect by suppressing casp3 activation and cytochrome c release [100]. In the present study, intestinal enzyme activities (i.e., amylase, lipase, and protease) and GH, T3, and T4 were checked to assess the physiological status of the digestion process and growth. The results of the exposure to AgNPs are consistent with the disrupted growth performance observed after increasing the concentration of AgNPs in Nile tilapia [71]. The findings revealed improved intestinal enzyme activities by QNPs in both the QNP and AgNP + QNP groups. Importantly, QNPs exhibited a pronounced effect on intestinal enzyme activities in the QNP group. This could be attributed to the protection of quercetin against intestinal oxidative damage and the maintenance of the intestinal barrier function [115,116]. Furthermore, total intestinal bacteria and Aeromonas counts were unexpectedly increased in the AgNP group despite the known antibacterial activity of silver [103]. A possible explanation of this observation is that a high concentration of AgNPs negatively modulated the intestinal microbiota and increased harmful bacteria such as Aeromonas. Earlier studies support this hypothesis, showing that AgNPs caused gut dysbiosis in animal models, including fish [117][118][119][120]. Silver residues were highly detected in the liver, and the current findings indicate a primary role of the liver in the detoxification of silver and in AgNP-induced liver cell injury. Similar results were observed in Clarias gariepinus and the Indian major carp Labeo rohita, in which AgNPs were highly detected in the liver even after 15 days of recovery [121,122]. Additionally, AgNPs were massively accumulated in the liver of common carp (C. carpio) [22]. Further, silver residues showed the highest levels in the gills compared to other tissues in common carp and African catfish (C. gariepinus) [22,123]. Such variations may depend on species-specific differences or variable experimental conditions. For instance, 15 days of exposure to silver led to its considerable accumulation in the liver of C. gariepinus [121,122]. Treatment of freshwater rainbow trout with AgNPs resulted in the accumulation of large quantities of silver in the liver, intestine, muscles, and gills [124]. Moreover, 100 µg/L of AgNPs or AgNO3, individually or combined with 10 mg/L of humic acids, bioaccumulated Ag in the gills and altered the antioxidant status of Piaractus mesopotamicus [125]. The ability of freshwater fish to accumulate AgNPs and AgNO3 may impair their biochemical and physiological parameters [126]. Conclusions In conclusion, our findings showed that AgNPs (1.98 mg/L) have a deleterious effect on the physiological status and antioxidant system of Nile tilapia. They markedly increased serum levels of ALT, AST, TC, and TG. SOD, CAT, and GSH were significantly inhibited in the liver, and the expression of hepatic stress-related genes was upregulated after exposure to AgNPs. In addition, the intestinal enzyme activities and bacterial counts were disrupted. This indicates a hepatotoxic effect of AgNPs. QNPs showed promising protective action against the impact of AgNPs. Additionally, QNPs exhibited beneficial effects in enhancing the physiological and health status and growth parameters of Nile tilapia when used under normal conditions.
Limitations and Future Perspectives Various reports have documented the possible toxic effects of AgNPs in vitro and in vivo. Therefore, investigating the mechanism of interaction between biological cells and AgNPs, to better understand their potential risks as antibacterial agents, has become a significant issue. Moreover, transforming some natural polyphenolic compounds, such as quercetin, into QNPs may provide better physicochemical properties, thus enhancing their pharmaceutical efficacy.
Field-free spin-orbit torque switching of an antiferromagnet with perpendicular Néel vector The field-free spin-orbit torque induced 180° reorientation of perpendicular magnetization is beneficial for high performance magnetic memory. The antiferromagnetic material (AFM) can provide higher operation speed than the ferromagnetic counterpart. In this paper, we propose a trilayer AFM/Insulator/Heavy Metal structure as the AFM memory device. We show that the field-free switching of the AFM with perpendicular Néel vector can be achieved by using two orthogonal currents, which provide the uniform damping-like torque and the stagger field-like torque, respectively. The reversible switching can be obtained by reversing either current. A current density of 1.79 × 10^11 A/m^2 is sufficient to induce the switching. In addition, the two magnetic moments become noncollinear during the switching. This enables an ultrafast switching within 40 picoseconds. The device and switching mechanism proposed in this work offer a promising approach to deterministically switch the AFM with perpendicular Néel vector. It can also stimulate the development of ultrafast AFM-based MRAM. INTRODUCTION The electrical control of magnetic states provides promising solutions for non-volatile data storage. Previous research has focused on the study of ferromagnets (FMs), where nanosecond switching and sub-10 nm devices have been demonstrated. 1 To further improve the performance of magnetic devices, the recent research focus has shifted from FMs to antiferromagnets (AFMs). Due to the strong exchange interactions in the AFM, it is able to produce ultrafast spin dynamics on the picosecond timescale. [2][3][4][5] In addition, since the magnetic moments in the AFM are antiparallelly aligned, this material does not produce a stray field, which enables the further scale down of spintronic devices and makes it robust against external magnetic perturbations. 6 To change the magnetic state of FMs, the current-induced spin-transfer torque (STT) is widely used. 7,8 This stimulated the development of STT-MRAM. However, since the current flows directly through the memory cell, the device is prone to breakdown. To avoid this, it was found that the spin current can also be generated through the spin-orbit torque (SOT), [9][10][11][12] which is even more efficient than the STT. Previous studies have demonstrated that the SOT can be classified into the damping-like torque (DLT), τDLT = −m×(m×σ), and the field-like torque (FLT), τFLT = m×σ. [13][14][15][16] Here σ is the spin polarization. The SOT has been demonstrated to switch both the in-plane 17 and perpendicular magnetization 9,13,[18][19][20][21] of FMs, where it is commonly known that the DLT is responsible for the switching and the FLT only plays a secondary role. 22 Moreover, utilizing two orthogonal currents to achieve field-free switching in FMs has also been demonstrated. 23,24 In contrast, neither the DLT nor the FLT alone can switch the magnetization in an AFM, due to cancellation of the torques induced on the antiparallelly aligned magnetic moments. When only the DLT is applied, both magnetic moments experience the same torque, i.e., τDLT,A = τDLT,B = −mi×(mi×σi) with σA = σB. The subscript i denotes the different sublattices. This is defined as the uniform DLT, under which the magnetization evolves into oscillation in the plane perpendicular to the spin polarization. 25
When only the FLT is applied, its effect is the same as that of a magnetic field, resulting in spin flop, and the magnetization is reoriented to be perpendicular to the spin polarization. 26 Recently, it has been demonstrated both theoretically 27,28 and experimentally 29-31 that 90° magnetization reorientation can be achieved in in-plane AFMs with locally broken inversion symmetry, such as CuMnAs and Mn2Au. In these materials, the adjacent magnetic moments form inversion partners. When an electrical current is applied, it generates opposite σ acting on the different sublattices in the form of the FLT, i.e., τFLT,A = mA×σA and τFLT,B = mB×σB with σA = −σB. This is known as the stagger FLT. Furthermore, some theoretical studies have investigated the current induced switching of AFMs with perpendicular Néel vector, such as Mn3Sn. 32,33 Currently, FMs are mainly used in MRAM as the storage element. It has been proposed that the use of FMs with perpendicular magnetization can reduce the critical switching current and enable the further scale down of memory cells. 34 However, the deterministic SOT switching of perpendicular magnetization requires an external magnetic field. 18,35 Many studies have been carried out to realize field-free switching in the FM system. [36][37][38] In order to further improve the performance of MRAM, it is desirable to replace the perpendicular FM with an AFM with perpendicular Néel vector. Therefore, it is important to realize the field-free 180° switching of the AFM with perpendicular Néel vector, [39][40][41][42][43][44][45] where the opposite states can be distinguished by the second-harmonic magnetoresistance effect combined with a second-order magnetotransport effect caused by an alternating probing current Jac along the x axis 44 or by the anomalous Hall effect (AHE). 33,40 Recent studies demonstrated that a giant tunneling magnetoresistance (TMR) effect can be produced in antiferromagnetic tunnel junctions (AFMTJs) in response to the 180° rotation of the Néel vector, which enables efficient readout of the compensated AFM states. 46,47 In this paper, we propose to apply two currents to an AFM/Insulator/Heavy Metal (HM) heterostructure, where the AFM is required to possess locally broken inversion symmetry. The two currents are perpendicular to each other to produce orthogonal spin polarizations. The deterministic 180° switching of the AFM with perpendicular Néel vector can then be achieved by the combined effect of the uniform DLT induced by the HM and the stagger FLT induced within the AFM. Furthermore, the proposed switching mechanism is robust against current delay. This is similar to the toggle MRAM, which was proposed to eliminate the half-selection problem. More generally, we find that successful switching can be achieved using two torques with the following properties. Initially, their combined strength should be sufficient to induce the reorientation of the magnetic moments. After the magnetic moments are switched to the opposite direction, one of the torques should reverse its direction to balance the other torque, so that the magnetic moments can stay switched. By analyzing the torques experienced by the AFM sublattices in our device, we find that the abovementioned two conditions are satisfied due to the different symmetry of the uniform DLT and the stagger FLT. METHODOLOGY The device studied in this paper is shown in Fig. 1(a). It consists of an AFM layer and a HM layer separated by an insulating layer. 48,49
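Before the device details, the opposite symmetry of the two torques invoked above (the DLT is even in m, the FLT is odd in m) can be checked numerically. A minimal numpy sketch, with unit moments and arbitrary torque magnitudes, following the sign conventions quoted earlier:

```python
import numpy as np

def dlt(m, sigma):
    """Damping-like torque, -m x (m x sigma); unchanged under m -> -m."""
    return -np.cross(m, np.cross(m, sigma))

def flt(m, sigma):
    """Field-like torque, m x sigma; reverses sign under m -> -m."""
    return np.cross(m, sigma)

mA = np.array([0.0, 0.0, 1.0])          # sublattice A along +z
mB = -mA                                 # sublattice B along -z

sigma_she = np.array([0.0, -1.0, 0.0])   # uniform DLT polarization from Jc1
print(dlt(mA, sigma_she), dlt(mB, sigma_she))   # identical on both sublattices

sigA = np.array([-1.0, 0.0, 0.0])        # stagger FLT: sigma_B = -sigma_A
print(flt(mA, sigA), flt(mB, -sigA))     # also identical, so the moments co-rotate
```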
The AFM material consists of two sublattices with local inversion asymmetry. The magnetization in the AFM is assumed to be perpendicular to the film plane. Two currents are applied to the device. Jc1 is injected into the HM layer along the x direction, which induces a vertical spin current Js = θSH σ×Jc due to the spin-Hall effect (SHE). Here θSH = −0.1 denotes the spin Hall angle. The insulating layer is used to electrically isolate the AFM and HM layers but is transparent to the pure spin current Js, which can pass through the insulating layer and then acts on the two sublattices in the form of the magnon transfer torque, [48][49][50] i.e., a uniform DLT [−mi×(mi×σi)] with σA = σB. Note that the direction of the DLT is the same for the two sublattices. This is easily understood since the DLT is even in m. For example, when Jc1 is along the −x direction, the SHE gives σ = −y. Therefore, both mA and mB experience a DLT pointing in the +y direction. When Jc1 is further increased, mA and mB are not able to maintain the original state, and the magnetization oscillates in the plane perpendicular to the spin polarization, i.e., the xz plane [see Fig. 1(b)]. On the other hand, when Jc1 is removed and an orthogonal current Jc2 is applied to the AFM layer along the y direction, due to the locally broken symmetry, the sublattices experience the stagger FLT (mi×σi) in which σ is opposite, i.e., σA = +z×Jc2 and σB = −z×Jc2. 27 Under a large Jc2, both mA and mB are stabilized in the direction along the spin polarization (i.e., the x direction). 27,28,30,31 The corresponding magnetization dynamics is illustrated in Fig. 1(c). We adopt a macrospin model to describe the magnetic dynamics. For simplicity, it neglects atomic-scale variations, such as the domain wall or nucleation switching process. In the dynamic simulation, we first set the initial directions of the magnetic moments mA and mB along +z and −z, respectively. The dynamics of the sublattice moments m is described by the LLG equation, dmi/dt = −γi mi×Heff,i + αi mi×(dmi/dt) − γi HDLT mi×(mi×σi) + γi HFLT mi×σi. The four terms on the right hand side represent the precession torque (τpre), the Gilbert damping torque (τdamping), the DLT (τDLT), and the FLT (τFLT), respectively. γi is the gyromagnetic ratio and αi is the damping constant. In this study, we set γA = γB and αA = αB = 0.005. 51 The effective field, Heff, consists of the crystalline anisotropy with the anisotropy constant ku = 0.196 meV 32 and the exchange interaction with Aex = 3.74×10^−21 J. 32 We also consider thermal fluctuations and the Oersted field in the effective field calculation; however, they have a negligible effect on the results, which is attributed to the robustness of AFMs. The strength of the DLT is given by HDLT = ℏθSH Jc1/(2e Ms d), where ℏ is the reduced Planck constant, e is the electron charge, d is the thickness of the AFM layer, and Ms,A = Ms,B = 1.2×10^4 A/m is the saturation magnetization. 54 The strength of the FLT, HFLT, is similarly given in terms of Jc2. [55][56][57] The Runge-Kutta fourth-order method 58 has been used to solve the LLG equation. RESULTS AND DISCUSSION Compared to magnetic materials with in-plane magnetization, the use of materials with perpendicular magnetization ensures a lower switching current, higher thermal stability, and a smaller footprint. 34,59 Therefore, it is desirable to achieve deterministic switching of the perpendicular Néel order parameter, l = (mA−mB)/2. After reproducing the well known results by applying Jc1 and Jc2 separately in Fig. 1(b) and Fig. 1(c), we further show that the deterministic switching of the perpendicular Néel order parameter can be achieved by simultaneously applying Jc1 and Jc2.
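A self-contained macrospin sketch of this coupled-sublattice LLG integration is given below. It uses reduced units, and the field and torque magnitudes are placeholders rather than the paper's parameters, so it illustrates the structure of the model rather than reproducing the reported switching thresholds:

```python
import numpy as np

GAMMA, ALPHA = 1.0, 0.005       # reduced gyromagnetic ratio; damping from the text
H_EX, H_K = 100.0, 1.0          # exchange and perpendicular anisotropy fields (assumed)
H_DLT, H_FLT = 0.5, 0.5         # effective SOT field strengths (assumed)
Z = np.array([0.0, 0.0, 1.0])

def h_eff(m, m_other):
    """Antiferromagnetic exchange plus uniaxial (z) anisotropy."""
    return -H_EX * m_other + H_K * m[2] * Z

def llg(m, m_other, sig_dlt, sig_flt):
    """Small-alpha Landau-Lifshitz form: precession, damping, DLT, FLT."""
    h = h_eff(m, m_other)
    return GAMMA * (-np.cross(m, h)
                    - ALPHA * np.cross(m, np.cross(m, h))
                    - H_DLT * np.cross(m, np.cross(m, sig_dlt))
                    + H_FLT * np.cross(m, sig_flt))

def rhs(y, sigA, sigB, sig_dlt):
    mA, mB = y[:3], y[3:]
    return np.concatenate([llg(mA, mB, sig_dlt, sigA),
                           llg(mB, mA, sig_dlt, sigB)])

def rk4_step(y, dt, *args):
    k1 = rhs(y, *args)
    k2 = rhs(y + 0.5 * dt * k1, *args)
    k3 = rhs(y + 0.5 * dt * k2, *args)
    k4 = rhs(y + dt * k3, *args)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    y[:3] /= np.linalg.norm(y[:3])       # renormalize unit moments
    y[3:] /= np.linalg.norm(y[3:])
    return y

y = np.concatenate([Z, -Z])              # mA = +z, mB = -z
sig_dlt = np.array([0.0, -1.0, 0.0])     # uniform DLT polarization from Jc1
sigA = np.array([-1.0, 0.0, 0.0])        # stagger FLT polarizations from Jc2
for _ in range(200_000):
    y = rk4_step(y, 1e-4, sigA, -sigA, sig_dlt)
l = 0.5 * (y[:3] - y[3:])                # Neel order parameter
print("final l =", np.round(l, 3))       # with suitable torque strengths, l_z reverses
```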
As shown in Fig. 2(a), when Jc1 is applied in the −x direction and Jc2 is applied in the +y direction, l is switched from up to down. The switching trajectory is illustrated in Fig. 2(b). The switching starts with a fast reorientation of m, followed by the precession of m in a small angle, and finally m is stabilized in the opposite direction. This switching trajectory is very similar to that in the SOT induced switching of the perpendicular FM. 60 However, the switching of the perpendicular FM requires an external field to break the symmetry. In contrast, we use the combined effect of two orthogonal currents to realize field-free switching. In addition, we notice that the switching is completed within 40 ps, which is two orders of magnitude faster than that in ferromagnetic devices. We also find that mA and mB are not always collinear during the switching. As shown on the right y axis of Fig. 2(a), the maximum angle difference is 9.14°. This noncollinearity gives rise to a strong exchange field, leading to the ultrafast switching. 3 Similarly, the down to up switching can be achieved by reversing either Jc1 or Jc2. Fig. 2(c) and 2(d) show the magnetization dynamics when Jc1 is applied in the +x direction and Jc2 remains in the +y direction. The observed AFM switching can be understood by analyzing the torques experienced by the sublattices. For example, as shown in Fig. 3(a), when mA is initialized in the +z direction, the torques acting on it are summarized in Table 1. From these discussions, one notices that there exists a competition between τDLT and τFLT, which originate from Jc1 and Jc2, respectively. To understand the complete magnetization dynamics, we then study the switching phase diagrams (SPD) shown in Fig. 4. Once the Néel order parameter is switched, τFLT is reversed since it is odd in m, resulting in a balanced τDLT and τFLT (Fig. 3). Thus, the Néel order parameter can remain in the switched direction. In addition, the blue region in the SPD can be divided into two cases. When Jc1 ranges from 1×10^10 A/m^2 to 6.5×10^10 A/m^2, the critical Jc2 required for switching decreases as Jc1 increases. This also supports our explanation that one needs the addition of τDLT and τFLT in the beginning to initiate the reorientation of the Néel order parameter. However, when Jc1 is larger than 6.5×10^10 A/m^2, the critical Jc2 increases with Jc1. Under a very large Jc1, although the initial torque is sufficient to overcome the anisotropy to start the AFM switching, a comparable Jc2 is also required so that the Néel order parameter can be stabilized in the opposite direction. Otherwise, the Néel order parameter will evolve into oscillation, as shown in the green region at the edge of the SPD. Therefore, the SPD shown in Fig. 4 clearly illustrates our proposed picture of 180° deterministic switching of the AFM with perpendicular Néel vector, i.e., the two torques should add up to initialize the magnetization switching. After l is switched to the opposite direction, one of the torques is required to be reversed to balance the other torque, so that l can stay switched. CONCLUSION In summary, we study the spin-orbit torque-induced 180° magnetization switching in the AFM with perpendicular Néel vector/Insulator/Heavy Metal heterojunction. When only the uniform DLT or the stagger FLT is applied, the Néel order parameter develops into oscillation or reorients to the perpendicular direction, respectively. In contrast, when the uniform DLT and the stagger FLT are applied simultaneously, field-free 180° magnetization switching of the AFM with perpendicular Néel vector can be achieved.
By analyzing the torques experienced by the sublattices and the switching phase diagram, we conclude that the switching is initiated by the addition of the two torques in the beginning to overcome the anisotropy; one of the torques then reverses to balance the other torque after the Néel order parameter is switched to the opposite direction.
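This add-then-balance picture can be verified with a direct torque evaluation. In the sketch below (illustrative unit magnitudes), the DLT and FLT on a +z moment add up, while after reversal to −z the FLT flips sign and cancels the DLT:

```python
import numpy as np

def net_torque(m, s_dlt, s_flt, h_d=1.0, h_f=1.0):
    """Sum of damping-like and field-like torques on one sublattice moment."""
    return -h_d * np.cross(m, np.cross(m, s_dlt)) + h_f * np.cross(m, s_flt)

s_dlt = np.array([0.0, -1.0, 0.0])   # uniform DLT polarization (from Jc1)
s_flt = np.array([-1.0, 0.0, 0.0])   # stagger FLT polarization on sublattice A (from Jc2)

print(net_torque(np.array([0.0, 0.0,  1.0]), s_dlt, s_flt))  # torques add: reorientation starts
print(net_torque(np.array([0.0, 0.0, -1.0]), s_dlt, s_flt))  # torques cancel: state stays switched
```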
Progressive Thoracic Kyphoscoliosis leading to Paraplegia in a Child with Neurofibromatosis Type-1 In this case report we describe a 13-year-old male with neurofibromatosis type-1, who had been suffering from progressive thoracic kyphoscoliosis that resulted in spinal cord compression and paraplegia. The report is meant to highlight the importance of timely detection and adequate follow-up of spinal deformities associated with neurofibromatosis. Such deformities, if not effectively addressed, may progress with skeletal growth and lead to secondary problems, as was seen in our patient, who ultimately developed paraplegia. INTRODUCTION Neurofibromatosis (NF) is an autosomal dominant disorder of neural crest cell origin. 1 It has two subtypes (type 1 and type 2). NF1 accounts for 90% of all cases of NF, with no sex or racial predilection. 1 The genetic makeup in NF1 predisposes the patients to develop multiple neurofibromas. Neurofibromas are benign tumors of the endoneurium and are frequently associated with compression of the spinal cord if they arise from spinal nerve roots. 2 The spine in NF1 may be defective irrespective of compression effects from a neurofibroma. The spinal deformity observed in these individuals may result from gradual scoliotic rotation and progression, or it may occur early in the disease with an abrupt angular kyphotic curve. 3 This scoliotic deformity is categorised into two types: dystrophic and non-dystrophic. Dystrophic scoliosis is manifested by erosions of vertebral body margins, rotation of the apical vertebra, rib pencilling, wedging of one or more vertebral bodies, paraspinal or intraspinal soft tissue masses, and dural ectasias. 4,5 Scoliotic deformity is very rarely associated with neurological paralysis. We report herein a teenager with NF1, who presented with a giant paraspinal neurofibroma and progressive thoracic dystrophic kyphoscoliosis, ultimately leading to spinal cord compression and paraplegia. CASE REPORT A 13-year-old male presented with complaints of inability to walk, with progressive weakness and stiffness of the legs for the past one and a half years. There was associated bowel and bladder incontinence. He did not report breathing difficulty, dizziness, or auditory or visual problems. According to his parents, he had had a swelling on his upper back since birth. They further reported that a tissue biopsy was taken from the swelling at the age of 4 years. The patient was neither advised any treatment at that time, nor did he have the biopsy report with him. His swelling progressively increased in size, and at the age of 7 years, he underwent a neurosurgical consultation. X-rays and magnetic resonance imaging (MRI) of his spine (Figure 1) were carried out, which showed a severe kyphotic deformity in the mid-dorsal region with wedge collapse of the involved vertebrae. There were mixed intensity signals in the regional paraspinal areas that could be tracked in the prevertebral area as well as along the left psoas muscle. Enhancing pockets of collection were also seen in the region of the quadratus lumborum as well as in the left renal bed. A severe degree of cord compression was also appreciated, secondary to the gibbus deformity. Based on the MRI report, the neurosurgeon suspected tuberculous spondylitis and performed exploratory surgery in an attempt to drain the abscess and fix the spine. However, during surgery, no abscess was found; instead, a tumor was found, the bulk of which was removed.
Histopathological evaluation of the tumor showed hypocellular proliferation of elongated spindle-shaped cells with few mast cells. As a precautionary measure, after the operation, he was prescribed anti-tuberculosis treatment for 12 months. The tumor again started increasing in size, and at the age of 11½ years, the patient developed stiffness in the legs, incontinence, and difficulty in walking. His symptoms were progressive. By the age of 13 years, he was unable to walk and had become completely dependent in activities of daily living. On examination, he had thoracic kyphosis with a head size proportionately larger than the trunk. Cutaneous examination revealed a hyperpigmented patch, around 40 × 50 cm, over the back, extending over the thoracolumbar region and crossing the midline (giant neurofibroma). There was increased hair growth and rugosity of the skin over the patch. Two atrophic scar marks from previous surgery were also visible in the patch (Figure 2). Multiple café-au-lait patches (around 40 in number), varying in size from 0.5 to 3 cm, were present on the limbs, trunk, and face. A further probe into the family history revealed the occurrence of similar marks in his brother. Neurological examination revealed spasticity of grade 2+ in the lower limbs, according to the modified Ashworth scale. Power was 0/5 in the legs, according to the Medical Research Council scale. 6 The deep tendon reflexes were grade 3+ in the knees and grade 4+ in the ankles, bilaterally. Sensations were intact in the legs, including anal sensation, but anal tone was absent. Plantars were upgoing bilaterally. The rest of the systemic examinations, including ocular examination and examination of the oral cavity, were unremarkable. Based on the above findings, a diagnosis of spinal cord compression at the level of L1 with American Spinal Injury Association impairment scale (AIS) 7 grade B was made. The laboratory evaluation revealed normal values for complete blood count, C-reactive protein, erythrocyte sedimentation rate, parathormone, calcium, phosphate, liver and renal function tests, and urine routine examination. He was advised surgical correction of his deformity by the neurosurgeon, which he refused. He also refused any sort of bracing for the legs. Thereupon, he was instructed in exercises, including stretching of the hamstrings, gastrocnemius, soleus, and hip adductor muscles. To control spasticity, baclofen 10 mg thrice a day was advised. The child is under training for wheelchair mobility, transfer techniques, and bowel/bladder care. DISCUSSION NF1 is associated with spinal deformities that may result from gradual scoliotic rotation and progression, or may occur early in the disease, characterised by an abrupt angular kyphotic curve. Central nervous system manifestations affect 15% of children with NF1. 8 When a neurological deficit is present, it is usually caused by increasing deformity, pressure effects of an intraspinal or paraspinal tumor, penetration of the ribs into the spinal canal, structural instability of the vertebral column, fibrofatty tissue reaction, or dural ectasia. 4 We reviewed reports of 20 cases with spinal deformity producing neurological deficits. The mean age of the patients was 22.2 ± 14.5 years (range: 8-54 years). Among these, 15 (75%) were males while five (25%) were females. Of these, 10 (50%) had a cervical and 10 (50%) a thoracic level of primary spinal deformity. A coexisting large neurofibroma was present in seven (35%) cases. Surgery was refused by one patient, whereas the remaining 19 underwent a surgical procedure.
In one case, the outcome was not mentioned. Sixteen of the remaining 18 cases (88.9%) had improvement in their neurological outcome, while two showed no change. Halo-traction was attempted in one patient without subsequent surgery, which resulted in aggravation of the neurological deficit. However, in four cases where halo-traction was followed by surgery, there was complete recovery in three cases, while one had a partial motor recovery. The dystrophic spinal deformities in NF1 should be treated promptly, as there is a strong tendency for curve progression even following spinal fusion. 4 Attempts at correction through bracing are mostly ineffective, and early aggressive surgical intervention is the accepted treatment. 4,9 Curves less than 20° can be closely observed at 6-month intervals until there is rapid progression that would prompt surgical management. 4 Except for patients with scoliotic deformities measuring 20-40° with less than 50° of kyphosis, combined anterior and posterior spinal fusion is the most effective surgical approach. 4,10 The primary goal of surgery is to remove compression on the neurological elements and stabilise the vertebral column to halt further progression of the deformity. 4 Total correction of the deformity may sometimes aggravate the neurological deficit. 4 In conclusion, spinal deformity in patients with NF1 may cause significant morbidity. A holistic approach towards the diagnosis of dystrophic changes on plain radiography and MRI is needed to establish prognosis and management choices. Dystrophic scoliotic curvatures or multiplanar spinal deformities with significant sagittal decompensation necessitate early aggressive surgical management. This report highlights the importance of early aggressive treatment of dystrophic kyphoscoliosis in a child with NF1. Timely surgical management could have protected this child from permanent disability. PATIENT'S CONSENT: Informed consent was obtained from the father of the patient to publish the data concerning the patient.
Enhancing protein backbone angle prediction by using simpler models of deep neural networks Protein structure prediction is a grand challenge. Prediction of protein structures via representations using backbone dihedral angles has recently achieved significant progress along with the on-going surge of deep neural network (DNN) research in general. However, we observe that in protein backbone angle prediction research, there is an overall trend to employ more and more complex neural networks and then to throw more and more features at the neural networks. While more features might add more predictive power to the neural network, we argue that redundant features could rather clutter the scenario and more complex neural networks then just could counterbalance the noise. From artificial intelligence and machine learning perspectives, problem representations and solution approaches do mutually interact and thus affect performance. We also argue that comparatively simpler predictors can more easily be reconstructed than the more complex ones. With these arguments in mind, we present a deep learning method named Simpler Angle Predictor (SAP) to train simpler DNN models that enhance protein backbone angle prediction. We then empirically show that SAP significantly outperforms existing state-of-the-art methods on well-known benchmark datasets: for some types of angles, the differences are above 3° in mean absolute error (MAE). The SAP program along with its data is available from the website https://gitlab.com/mahnewton/sap. Input features. As shown in Fig. 2, we use a sliding window of size W: up to ⌊W/2⌋ amino acids at each side of a given amino acid. Depending on the window sizes, sliding windows can capture short or long range interactions between residues and secondary structures. Some backbone angle prediction methods that use recurrent neural networks (RNNs) and CNNs take whole protein sequences as input to capture interactions in the entire protein. However, in the absence of a firmly known energy function, it is not clear whether very long range interactions are really effective. So any choice between using sliding windows and using entire proteins has to be made based on empirical evaluation. To make it clearer, in any distance-based energy component, e.g. Lennard-Jones or charge-based potentials, the values are in effect zero beyond a certain distance. Moreover, if we look at the state-of-the-art backbone angle prediction method SPOT-1D, we see that, besides using entire proteins, it still uses windowing to capture contact information. Our intent in this work is to explore simple models that can still achieve very good accuracy levels. While the window size effectively ensures context dependence of assumed local conformations, arguably there is not enough data in the training set, even in the protein data bank, to cover all possible combinations of amino acids (e.g. 20^5) with a given window size (e.g. 5). So the context has to be captured via a 3-state or 8-state secondary structure model that can specify the average range of angle values for each amino acid in a given protein. The data deficiency for larger windows even further spoils the training. In this work, for each amino acid, we consider one of the 8 values G, H, I, T, S, E, B, and C to represent predicted 8-state SS and then encode that using a one-hot vector.
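To make this encoding concrete, the sketch below builds per-residue vectors of 20 PSSM values plus an 8-state one-hot SS code and concatenates them over a sliding window; the zero padding at the chain termini and the exact feature ordering are our assumptions for illustration:

```python
import numpy as np

SS8 = "GHITSEBC"   # the 8 secondary structure states listed above

def one_hot_ss(ss_char):
    v = np.zeros(len(SS8))
    v[SS8.index(ss_char)] = 1.0
    return v

def window_features(pssm, ss_string, window=5):
    """pssm: (L, 20) array; ss_string: length-L predicted 8-state SS string.
    Returns an (L, window * 28) matrix, one row per residue."""
    L, half = len(ss_string), window // 2
    per_res = [np.concatenate([pssm[i], one_hot_ss(ss_string[i])])
               for i in range(L)]
    pad = np.zeros_like(per_res[0])      # zero vector outside the chain
    rows = []
    for i in range(L):
        rows.append(np.concatenate(
            [per_res[j] if 0 <= j < L else pad
             for j in range(i - half, i + half + 1)]))
    return np.array(rows)

# Toy example: a 4-residue protein with random PSSM values.
X = window_features(np.random.rand(4, 20), "CHHE", window=5)
print(X.shape)   # (4, 140)
```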
The 8-state SS prediction is obtained by running SSpro8 14 on each protein. The training set of SSpro8 comprises 5772 proteins that were released before August 20, 2013. SSpro8 uses sequence similarity and sequence-based structural similarity in SS prediction and achieves respectively 92% and 79% accuracy on proteins with and without homologs in the PDB. On the one hand, we have already discussed that these highly accurate SS predictions do not necessarily solve the backbone angle prediction problem when high quality protein structures are to be constructed. On the other hand, we note that we have removed all SSpro8's training proteins from our training, validation, and test sets, using BLAST 28 for this purpose with e-value 0.01. In this aspect, our method differs from the state-of-the-art backbone angle predictor SPOT-1D, which uses homologous sequences to generate its HMM-based features. For each amino acid, we consider 20 values obtained from the PSSM matrix generated by three iterations of PSI-BLAST 28 against the UniRef90 sequence database updated in April 2018. We also use 7PCP (seven physicochemical properties) and ASA, and experiment with their various combinations. These features are very common in the literature. In summary, we have 20 + 8 = 28 PSSM and SS features plus various combinations of 7 or 1 feature values for 7PCP or ASA for each amino acid residue in each protein. This will be multiplied by the size of the sliding window used. We experiment with sliding windows of sizes 1, 5, 9, 13, 17, and 21, as SPIDER 23 tried up to size 21. Predicted outputs. We consider 4 outputs, one for each of the φ, ψ, θ, and τ angles. Each φ and ψ can be associated with exactly one residue or Cα. A θ angle involves three consecutive Cα atoms, while a τ angle involves four consecutive Cα atoms. In one set of experiments, we consider these angles directly, handling their periodicity (−180° to 180°) within the loss function of the DNN used. In another set of experiments, just like the state-of-the-art method SPOT-1D, we use both sine and cosine ratios for each of the 4 angles, and thus use 8 outputs. The trigonometric ratios handle the periodicity issue of the angles, and the tangent values obtained from the sine and cosine values can give the predicted angle within −180° to 180°. DNN architecture. Figure 3 shows the DNN architecture used in our method. The DNN in fact is a fully connected neural network (FCNN) with three hidden layers, each having 150 neurons. This architecture is similar to that used in SPIDER 23 and SPIDER2 24. SPIDER2, however, uses a series of 3 DNNs, feeding a previous DNN's output as input to the next DNN. In our experiments, we have used only one DNN with three hidden layers, although we have trialled two and four hidden layers as well and show the results later. The inputs and the outputs of the DNN are on a per amino acid basis. Depending on the size of the sliding window and the combinations of 7PCP and ASA, the input layer will have different numbers of inputs. The output layer has one output for each angle when we want to predict an angle directly. However, if we consider sine and cosine ratios of an angle and consequently later calculate the angle, then the output layer will have two outputs for each angle. DNN implementation. The DNN has been implemented in the Python language using the Keras library and the SGD optimiser with momentum 0.9. The learning rate starts from 0.01 and, if the loss function does not improve in 3 iterations, the learning rate is reduced by a factor of 0.5 until it reaches 10^−15.
The activation function is linear in the output layer and sigmoid in the input and the hidden layers. The kernel initialiser is glorot_uniform. We run programs on NVIDIA Tesla V100-PCIE-32GB machines.

Benchmark datasets. We briefly describe the dataset used by SPOT-1D 27. This dataset has 12450 proteins that were culled from PISCES 32 in Feb 2017 with the constraints of high resolution (< 2.5 Å), an R-free < 1, and a sequence identity cutoff of 25% according to BlastClust 28. Among those proteins, 1250 proteins deposited after June 2015 were separated into an independent test set, leaving 11200 proteins, which were then randomly divided into a training set (10200 proteins) and a validation set (1000 proteins). Then, some proteins were removed to keep the calculations efficient. This reduced the training, validation, and independent test sets to 10029, 983, and 1213 proteins, respectively. In the SPOT-1D dataset, another independent test set was obtained from the PDB. These proteins were released between January 01, 2018 and July 16, 2018 and solved with resolution < 2.5 Å and R-free < 0.25. In order to minimise evaluation bias associated with partially overlapping training data, proteins with > 25% sequence identity to structures released prior to 2018 were removed. This dataset was also filtered to remove redundancy at a 25% sequence identity cutoff, and another 13 proteins with length > 700 were removed, leaving 250 high-quality, non-redundant targets. For convenience, these two independent test sets are denoted TEST2016 (1213 proteins) and TEST2018 (250 proteins), as they were deposited between June 2015 and Feb 2017 and between Jan 2018 and July 2018, respectively.

We use the same dataset used by SPOT-1D 27. However, we have performed additional filtering, since it is not precisely clear to us how SPOT-1D handles the proteins that have mismatches in the amino acid sequences specified in their various data source files (e.g. .t, .pssm, .dssp, and .fasta files). To be clearer, we have found that for some proteins, the amino acid sequence specified in one data source file has additional residues at the beginning or end compared to that specified in another data source file. For such proteins, we have taken the part common to the amino acid sequences specified in the various source files. However, when there is any mismatch in the middle of two amino acid sequences specified in two different data source files for the same protein, we have removed the protein from the dataset. Also, we have removed proteins that have X in the secondary structure sequences in their corresponding DSSP files, although we do not use the secondary structure data from the DSSP files in our learning model. As mentioned before, apart from using subsets of features from SPOT-1D, we generate 8-state SS predictions using SSpro8 14. The training set for SSpro8 comprised 5772 proteins released in the PDB before August 20, 2013. In order to avoid over-training with SSpro8 predictions as input to our method, we have removed 3259 proteins from SPOT-1D's proteins using BLAST 28 against SSpro8's training set with e-value 0.01. We show in Table 1 the numbers of proteins and residues in the training, validation, and testing datasets after performing the above-mentioned filtering. As we can see later in Table 5, the remaining dataset after this filtering does not degrade the performance of SPOT-1D.
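Pulling the "DNN architecture" and "DNN implementation" paragraphs above together, a minimal Keras sketch of the selected network follows. The input width (window size 5 with 35 features per residue, i.e. PSSM + SS + 7PCP) and the exact form of the periodic loss are our assumptions, made only to keep the example self-contained; this is not the authors' code.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def periodic_mae(y_true, y_pred):
    # Handle angle periodicity inside the loss: AE = min(D, 360 - D), D = |P - A|.
    d = tf.abs(y_true - y_pred)
    return tf.reduce_mean(tf.minimum(d, 360.0 - d))

n_inputs = 5 * (20 + 8 + 7)  # window 5, PSSM + 8-state SS + 7PCP (assumed best setting)

model = keras.Sequential([
    keras.Input(shape=(n_inputs,)),
    layers.Dense(150, activation="sigmoid", kernel_initializer="glorot_uniform"),
    layers.Dense(150, activation="sigmoid", kernel_initializer="glorot_uniform"),
    layers.Dense(150, activation="sigmoid", kernel_initializer="glorot_uniform"),
    layers.Dense(4, activation="linear"),  # phi, psi, theta, tau predicted directly
])

model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss=periodic_mae)

# Halve the learning rate when the loss stalls for 3 epochs, down to 1e-15.
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="loss", factor=0.5,
                                              patience=3, min_lr=1e-15)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[reduce_lr])
```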
While our main training and test proteins are from the SPOT-1D dataset, for further independent testing we use the PDB150 34 and CAMEO93 35 datasets. The PDB150 dataset contains 150 proteins released between February 1, 2019 and May 15, 2019. For each protein, PSI-BLAST 28 was applied against the whole CullPDB 32 dataset with e-value smaller than 0.005. The CAMEO93 dataset contains 93 proteins that were released between February 2020 and March 2020 and that have been used by OPUS-TASS in its evaluation. For both datasets, we have applied a 25% sequence similarity cutoff w.r.t. our and SSpro8's training and validation datasets and have also removed proteins having X in their fasta files. For proteins with discontinuity in their amino acid sequences, we have considered the largest segment of each protein so that our sliding window method can still be applied. In the end, we have obtained 71 and 55 proteins from the PDB150 and CAMEO93 datasets, respectively; we use them for independent testing of our method and the state-of-the-art method OPUS-TASS, and compare their performance.

Results

We compare various settings of SAP to find the best setting for each of the 4 types of angles to be predicted. This comparison helps us understand the impact of the various features and encodings. Then, we compare the best settings with the current state-of-the-art predictors. Moreover, we show various other analyses of the results obtained for the best settings.

Calculating absolute errors. For each predicted angle P against the actual angle A, we calculate the difference D = |P − A|. Then, we take AE = min(D, 360 − D) as the absolute error (AE) for that predicted angle. This addresses the periodicity issue that each angle must be in the range −180° to 180°. When angles are predicted directly, we implement the AE calculation within the loss function for training and validation, and also later for testing. When we use sine and cosine ratios, we calculate AE only during testing. In all cases, angles that are not defined for the amino acids at the beginning or end of a protein are ignored.

Determining best settings. We run 96 settings of SAP. All of these settings have 20 PSSM and 8 SS one-hot features. The 96 settings are obtained by using or not using ASA, by using or not using 7PCP, by using range-based or Z-score-based normalisation for input feature encoding, by using 6 window sizes (1, 5, 9, 13, 17, 21), and by using direct angles or trigonometric ratios to encode the output angles. However, Table 2 presents the performance of only 16 settings, selecting the best window size for each combination of the other parameters. From these results, it appears that window sizes 5 and 9 in most cases lead to better performance. Moreover, predicting direct angles is better than predicting trigonometric ratios. While not using ASA appears to be better than using it, using 7PCP appears to be better than not using it. Overall, the best SAP setting uses 7PCP, range-based normalisation, direct angle prediction, and window size 5. Henceforth, we use this setting in further analysis. It is worth noting here that, in our observation, training a DNN simultaneously for several outputs is not much different from training the DNN separately for each output in terms of the accuracy level obtained for each output. All results presented in Table 2 are for DNNs having 3 hidden layers. The choice of the number of layers was inspired by SPIDER 23.
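The count of 96 settings is simply the Cartesian product of the option dimensions listed above, which is easy to verify:

```python
from itertools import product

asa_opts    = ["yes", "no"]
pcp_opts    = ["yes", "no"]
norm_opts   = ["range", "zscore"]
window_opts = [1, 5, 9, 13, 17, 21]
output_opts = ["direct", "ratios"]

settings = list(product(asa_opts, pcp_opts, norm_opts, window_opts, output_opts))
print(len(settings))  # 96 = 2 * 2 * 2 * 6 * 2
```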
However, in Table 3, we show the performance of the best SAP setting when run with DNNs having 2 and 4 hidden layers. In most cases DNNs having 3 hidden layers obtain the best results (shown in bold in Table 3); where this is not the case, DNNs with three hidden layers are a close second (shown in italics in Table 3), with the difference being < 0.09. So for the rest of the paper, we have chosen the DNN with 3 hidden layers as the selected SAP setting.

Performing cross-validation. When we train a DNN, we specify the validation set. Consequently, the MAE values for the validation set as well as for the testing set for each SAP setting are shown in Table 2. In Table 4, we again show the MAE values, but only for the best setting of SAP. However, to check the robustness of SAP, we perform 10-fold cross-validation, where the training and validation sets are merged. The merged proteins are then randomly divided into 10 folds. Then, 9 of the 10 folds are used in turn for training while the remaining one is used for testing.

Table 2. Performance of SAP settings on 1206 testing proteins. In the table, column ASA denotes whether accessible surface area is used (Yes/No), column 7PCP denotes whether 7 physicochemical properties are used (Yes/No), column OR denotes whether the output representation is direct angles (D) or trigonometric ratios (R), column NM denotes whether the normalisation method for input feature encoding is [0,1] range-based (R) or Z-score-based (Z), and WS denotes the best size of the sliding window. Note that the emboldened cells denote the best performance for each combination of ASA and 7PCP, while the boxed plus emboldened cells in each respective column denote the best performance among all SAP settings.

Comparison with state-of-the-art predictors. We mainly compare the performance of SAP with that of SPIDER2 24, SPOT-1D 27, and OPUS-TASS 6 in Table 5. We have run these systems on the testing dataset used in this work, which is a subset of the SPOT-1D dataset because of our more rigorous filtering. Moreover, we use the 71 and 55 proteins from the PDB150 34 and CAMEO93 35 datasets obtained after performing the filtering mentioned before. However, we also compare SAP's performance with that of SPIDER2, SPOT-1D, and OPUS-TASS as reported in their respective publications. Below we briefly describe SPIDER2, SPOT-1D, and OPUS-TASS. Among them, OPUS-TASS introduces a new feature 38, which classifies the 20 residues into 19 rigid-body blocks depending on their local structures. It also introduces a new constrained/output feature named CSF3 39, which is a local backbone structure descriptor. Further, it uses a multi-task learning strategy 40 to maximise generalisation of the neural network and an ensemble of neural networks for further improvement. Since SPOT-1D and OPUS-TASS show their performance on the two subsets TEST2016 and TEST2018 of the testing proteins, we do the same, although we also show the accumulated results for all testing proteins. Notice from the table that SAP significantly outperforms both SPOT-1D and OPUS-TASS in all cases. We have performed t-tests to compare the performances of SPOT-1D and OPUS-TASS with SAP, and the p values are < 0.01 in all cases, indicating that the differences are statistically significant. The differences are particularly large for ψ and τ. These results demonstrate the effectiveness of SAP in enhancing protein backbone angle prediction accuracy.
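A sketch of the 10-fold protocol described under "Performing cross-validation"; the use of scikit-learn here is our assumption, made only for brevity, and the protein identifiers are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder IDs for the merged training + validation proteins (10029 + 983 per Table 1).
proteins = np.array([f"protein_{i}" for i in range(10029 + 983)])

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(proteins)):
    train_set, test_set = proteins[train_idx], proteins[test_idx]
    # Train the selected SAP setting on train_set, then report MAE on test_set.
```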
Although our main results are in Table 5, to test the generality of SAP's performance on other datasets, we have run SAP on the 71 proteins of the PDB150 dataset and the 55 proteins of the CAMEO93 dataset. In Table 6, we also compare SAP's performance with SPOT-1D's performance on the PDB150 proteins and with OPUS-TASS's performance on the CAMEO93 proteins.

Table 6. Performances of SPIDER2, SPOT-1D, OPUS-TASS, and SAP on filtered PDB150 and CAMEO93 proteins. The emboldened values are the winning numbers for the corresponding types of angles and datasets. OPUS-TASS does not predict the θ and τ angles, while the other three methods predict all four types of angles.

Comparison on secondary structure groups.

Comparison on amino acid groups.

Using angle ranges from predicted secondary structures. Given the SS predictions and their suggested ranges of φ and ψ values as shown in Table 8.

Comparison of angle distributions. Figure 5 shows the distributions of the actual angles and of the predicted values obtained from SAP, OPUS-TASS, SPOT-1D, and SPIDER2. As we can see from the charts, the distribution of values predicted by SAP aligns very well with the distribution of the actual values. The peaks and troughs of the distributions align quite well; even multiple peaks and troughs are captured well. While the peaks of the predicted distributions are larger and narrower than those of the actual distributions, the troughs of the predicted distributions are somewhat smaller and wider than those of the actual distributions. When SAP's curves are compared with OPUS-TASS's, SPOT-1D's, and SPIDER2's, we see that SAP's curves are occasionally closer to the curves for the actual values. We also see that the distributions of φ and ψ angles for OPUS-TASS and SPOT-1D are almost identical. Notice that the largest peaks of the predicted values are higher than the largest peaks of the actual values. One noticeable fact is in the θ chart: there are actual values between 0 and 90, although with almost zero probability, and these values are not well captured by the predictors. Overall, there is a tendency to predict the peak values with probabilities larger than those of the actual values. The results below are as obtained by running all of the systems on our datasets.

Protein structure generation and refinement. Given the improvement in angle prediction accuracy, an interesting question is as follows: "Can predicted angles be directly employed in building accurate protein structures?" The direct answer to this question is yes, if we reach a very high accuracy level. This is in fact the aim of this line of study: to enhance the performance gradually to a level that would allow protein structures to be predicted with very high accuracy, which is very challenging. Given the 27 proteins in our TEST2018 set, we have tried to generate entire protein structures from the predicted values obtained from SAP, OPUS-TASS, and SPOT-1D, assuming ω = 180° and standard bond distances. From Fig. 6, we can see very high root mean square distance (RMSD) values for most proteins; only for 2-3 proteins are the RMSD values less than 6 Å, a distance considered to be practically meaningful. Although this is the case for direct structure generation, structure refinement via ab initio structure sampling and evaluation using perturbation techniques would still obtain significant help from the predicted angles. This is because, given a prediction ρ and an estimated error ε, with some level of certainty one can focus the search within the region [ρ − ε, ρ + ε].
These soft constraints can thus reduce the search space significantly. With more proteins having more dihedral angles predicted with smaller absolute errors, ab initio or refinement search for protein structures would benefit more from SAP's predictions than from OPUS-TASS's or SPOT-1D's.

Comparison on correct prediction per protein. Following the discussion of structure generation and refinement, we compare SAP, OPUS-TASS, and SPOT-1D on what portions of the angles of the proteins are predicted within certain error levels. Figure 7 shows the percentages of proteins that have a given percentage of particular angles with absolute errors at most a given threshold. We choose the threshold values to be 6 and 18 in the charts. Notice that SPOT-1D's and OPUS-TASS's performances are very close in the charts for φ and ψ. Moreover, SAP outperforms the other methods for all angles at all threshold levels.

Conclusions

Input features and neural network architectures interact with each other when employed in prediction systems. Consequently, simply including more features might cause cluttering, and more complex networks might then be needed to counterbalance. In protein backbone angle prediction research, the existing state-of-the-art prediction method uses ensembles of several types of deep neural networks and a number of features. In this paper, we present simpler deep neural network models for protein backbone angle prediction. Our models use fewer features and simpler neural networks, but on a standard benchmark dataset obtain significantly better mean absolute errors than the state-of-the-art predictor. Our program, named Simpler Angle Predictor (SAP), along with its data, is available from the website https://gitlab.com/mahnewton/sap.

Received: 6 May 2020; Accepted: 23 October 2020

Figure 7. Percentages of proteins (y-axis) that have a given percentage of residues (x-axis) with AE at most a given threshold T, where T is 6 or 18, denoted by T6 and T18. The lower the threshold, the better the prediction quality.
2020-11-12T09:09:29.220Z
2020-11-10T00:00:00.000
{ "year": 2020, "sha1": "f347d61c27277a7a1d31330d02c5a9b47ba10861", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-76317-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8b9c9f887f22d8f3c5aec8ce1f4000a08f80ebcd", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
222125029
pes2o/s2orc
v3-fos-license
Single-objective selective-volume illumination microscopy enables high-contrast light-field imaging The performance of light-field microscopy is improved by selectively illuminating the relevant subvolume of the specimen with a second objective lens [1-3]. Here we advance this approach to a single-objective geometry, using an oblique one-photon illumination path or two-photon illumination to accomplish selective-volume excitation. The elimination of the second orthogonally oriented objective to selectively excite the volume of interest simplifies specimen mounting; yet, this single-objective approach still reduces out-of-volume background, resulting in improvements in image contrast, effective resolution, and volume reconstruction quality. We validate our new approach through imaging live developing zebrafish, demonstrating the technology's ability to capture imaging data from large volumes synchronously with high contrast, while remaining compatible with standard microscope sample mounting. Biological processes often depend on the tight spatiotemporal coordination between cells across tissue-level length scales, extending over hundreds of microns in three dimensions (3D). Functional understanding of such processes would be greatly aided by imaging tools that offer the combined speed and sensitivity needed to observe 3D cellular dynamics without compromising the normal biology [4,5]. Light-field microscopy (LFM) is a fast, synchronous 3D imaging technique [6-8]. Unlike popular volumetric imaging methods that reconstruct a 3D image from intensity information collected one voxel, one line, or one plane at a time, LFM captures both the 2D spatial and 2D angular information of light emitted from the sample [Fig. 1(A)], permitting computational reconstruction of the signal from a full volume in just one shot. Because lateral spatial resolution must be compromised to capture the angular distribution of the emitted light and yield the extended depth coverage, LFM sacrifices some resolution for its dramatically increased acquisition speed. While 3D deconvolution can be used to enhance LFM performance [7,8], out-of-volume fluorescence background, coming from parts of the sample outside of the volume of interest, limits signal detection, image contrast, and resolution. Conventional wide-field illumination excites significant out-of-volume background [Fig. 1(B)], especially for volumes within thick or densely fluorescent samples, precluding LFM's full potential in intact tissues. We recently introduced an improved light-field-based imaging approach, selective-volume illumination microscopy (SVIM), where confining excitation preferentially to the volume of interest reduces extraneous out-of-volume background, thereby sharpening image contrast, reducing unwanted photodamage, and improving the effective resolution in thick specimens (hundreds of microns or more) [1,3]. SVIM was implemented with two objective lenses: one to selectively illuminate the volume of interest, and a second objective, orthogonally aligned, to acquire the fluorescent light-field [Fig. 1(C)]. This two-objective geometry requires that the sample be mounted within the mutual intersecting volume defined by the perpendicular objectives, complicating sample mounting and limiting sample size. Here, we implement SVIM in a single-objective geometry, eliminating the need for two orthogonally oriented objectives, greatly simplifying sample mounting and broadening its utility for biological research.
This new technique, termed axial single-objective SVIM (ASO-SVIM), selectively illuminates the sample volume through the same objective used for high-numerical-aperture (NA) detection (Supplement 1, Section 1, and Fig. S1). The volume of interest is preferentially excited by either one-photon or two-photon processes (1P- or 2P-ASO-SVIM). 1P-ASO-SVIM is accomplished by using a 2D light-sheet oriented obliquely to the axial axis [Fig. 1(D)], created via a cylindrical lens; the sample is illuminated by sweeping this oblique sheet in 1D to excite fluorescence within the desired region of interest, multiple times within a single camera exposure. 2P-ASO-SVIM [Fig. 1(E)] is accomplished using a low-NA Gaussian beam that is raster-scanned in 2D to excite the 3D sample volume of interest. To capture fluorescence light-fields emitted from the excited volume, a lenslet array is placed at the native image plane [7]; the foci of the lenslets are imaged onto a camera sensor [Fig. 1(A)]. To enable direct, quantitative comparison of our technique to more established methods, our microscope is designed to offer seamless switching to light-sheet (also known as selective-plane illumination microscopy; SPIM) or wide-field LFM modes [Supplement 1, Section 1, and Figs. S1-S2].

(A) Simplified schematic of light-field microscopy (LFM). Fluorescence light is collected from the sample volume by an objective lens, separated and filtered from the excitation light by the appropriate dichroic mirror and bandpass filter, and focused by a tube lens at an intermediate image plane where a lenslet array is positioned. The lenslet array refocuses the light onto a camera, so that each position in the 3D sample volume is mapped onto the camera as a unique light intensity pattern. The fluorescence light-field illustrated was captured with point-sources located at, above, and below the native focal plane. Such light-fields can be reconstructed to full volumes by solving the inverse problem [3]. (B) LFM with conventional wide-field illumination is compatible with standard forms of sample preparation but excites regions outside of the volume of interest (VOI). (C) Inspired by light-sheet microscopy (SPIM), SVIM selectively illuminates the VOI using orthogonal illumination and detection objectives. Shown previously [1-3], SVIM reduces background fluorescence outside the VOI, increasing image resolution and contrast. (D, E) ASO-SVIM preferentially excites the VOI and collects the fluorescence using a single objective lens, providing flexibility in sample mounting similar to traditional microscopy. (D) 1P-ASO-SVIM uses an oblique light sheet that is scanned in 1D to define the excitation volume. (E) 2P-ASO-SVIM uses nonlinear excitation of a pulsed near-infrared (NIR) beam that is raster-scanned to define the VOI. In each figure, 1P excitation is depicted in cyan (A-D) and 2P excitation in red (E); fluorescence emission is depicted in green. See also Supplement 1, Section 1, and Figs. S1-S2.

We benchmarked ASO-SVIM performance by measuring the point-spread function (PSF) with 175-nm fluorescent beads suspended in agarose. After 3D deconvolution [7,8], we obtained volumetric images with the expected maximum resolution, consistent with the optical design: 2.4 ± 0.3 μm lateral full-width at half-maximum (FWHM); 5.7 ± 0.2 μm axial FWHM [Supplement 1, Fig. S3(C)].
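The paper does not state its exact FWHM extraction procedure, but a typical approach, given here as an assumed sketch, is to fit a 1D Gaussian to each bead's intensity profile and convert the fitted width:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, offset):
    return a * np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2)) + offset

def fwhm_um(profile, pixel_size_um):
    """Fit a Gaussian to a 1D bead intensity profile; return the FWHM in microns."""
    x = np.arange(len(profile)) * pixel_size_um
    p0 = [profile.max() - profile.min(),  # amplitude guess
          x[np.argmax(profile)],          # center guess
          pixel_size_um,                  # width guess
          profile.min()]                  # offset guess
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2)*sigma
```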
Due to diffraction and non-uniform sampling of the light-field volume [7,8], the 3D resolution was depth-dependent (varying up to ~46% over a z range of −50 to 50 μm) [Supplement 1, Fig. S3(B)], and reconstructions contained grid-like artifacts near the native focal plane, as previously reported [7]. To reduce such artifacts in the reconstructions presented here, we applied a low-pass filter in Fourier space (k-space), truncating spurious spatial frequencies beyond the resolution limit of the native focal plane (Supplement 1, Section 2, Figs. S4-S5). This simple process of k-space filtering across the nominal focus dampened the artifacts and improved visualization of the 3D reconstructions, without any major loss of 3D resolution or spatial information (Supplement 1, Figs. S5-S6).

To test ASO-SVIM on biological samples, we imaged the vasculature of live larval zebrafish (at 5 days post-fertilization, dpf), labeled with green fluorescent protein (GFP). Zebrafish embryos and larvae are ideal for studies involving multicellular and multiscale imaging because of their small size, transparency, and amenability to genetic engineering. As expected, ASO-SVIM, using either 1P or 2P excitation, produced less out-of-volume background than did wide-field illumination [Fig. 2(A)], and this reduced background fluorescence yields higher contrast images, as we previously reported for SVIM [1,3]. This is clearly revealed in an x-z slice through the 3D volume [Fig. 2(A), bottom]. Although the ASO-SVIM reconstructed images all fell short of the quality of the ground truth images (a deconvolved SPIM image stack), all three dimensions were acquired simultaneously, generating the 3D image more than 100-fold faster. To obtain quantitative measures of the enhanced performance of 1P-ASO-SVIM and 2P-ASO-SVIM, we calculated the image contrast (defined as the standard deviation of the pixel intensities normalized to the mean intensity) for each x-y image plane [Fig. 2].

To test ASO-SVIM on a more demanding application, we recorded the activity of large populations of neurons in larval zebrafish. Imaging the nervous system in action within the intact brain is challenging because it requires cellular resolution over thousands of cells with sufficient temporal resolution to capture the transient firing of neurons. LFM is an attractive technique to meet these neuroimaging challenges because it can synchronously capture large volumes; however, the high level of background fluorescence in wide-field LFM has remained an impediment to efforts aimed at capturing brain-wide activity with cellular resolution [8,10-16]. We previously showed that the improved contrast and effective resolution of SVIM improved brain-wide functional imaging over conventional LFM [3]. We extend that demonstration and analysis to our new ASO-SVIM approach here. To permit direct comparisons between modalities, we used 1P-ASO-SVIM, 2P-ASO-SVIM, and wide-field LFM to image the spontaneous brain activity of the same 5-dpf zebrafish, labeled with a genetically encoded pan-neuronal fluorescent calcium indicator [Fig. 3]. The reconstructed 4D recordings are compared by taking the standard deviation along the temporal axis [Fig. 3(A)] to highlight their capability in capturing active neurons, whose transient firings produce voxels with large intensity variation in time that thus appear as high-contrast puncta in the resulting projections.
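For reference, the contrast metric used in these comparisons, the standard deviation of the pixel intensities normalized to the mean, computed per x-y plane, is straightforward; a NumPy sketch (the (z, y, x) axis ordering is our assumption):

```python
import numpy as np

def plane_contrast(volume):
    """Per-plane contrast: std of pixel intensities divided by their mean.

    volume: 3D array indexed as (z, y, x); returns one contrast value per z-plane.
    """
    planes = volume.reshape(volume.shape[0], -1)
    return planes.std(axis=1) / planes.mean(axis=1)
```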
We calculated the image contrast of these temporal-standard-deviation projections: 2P-ASO-SVIM achieved the highest contrast, followed by 1P-ASO-SVIM, and then by wide-field LFM [Fig. 3(B)], suggesting that the ASO-SVIM modalities excel over wide-field LFM in capturing neuronal activity. To quantitatively compare the performance of the different modalities in capturing brain activity at cellular resolution, we identified neurons in the 4D recordings by spot-segmenting the temporal-standard-deviation projections. This standard protocol [3] produced spatial masks corresponding to neurons that were active during the time-lapse. These masks were then applied to the 4D datasets to extract temporal signals that represent single-neuron activity traces [Fig. 3(C)]. The improved contrast of 2P-ASO-SVIM and 1P-ASO-SVIM allowed us to detect a greater number of active neurons in the brain compared to conventional wide-field illumination [Fig. 3(C)]. 2P-ASO-SVIM captured the largest number of active neurons, due not only to its higher contrast than its 1P counterpart (expanded upon below) but also because the NIR excitation light is invisible to the fish and thereby significantly reduces the response of the animal's visual system to the illumination, which would otherwise cloud spontaneous activity [3]. 2P-ASO-SVIM is thus an optimal tool for studies of visually sensitive neural behaviors.

1P-ASO-SVIM and 2P-ASO-SVIM offer distinct strengths. 1P-ASO-SVIM commands lower laser costs, and offers optical simplicity and exceptionally high volumetric acquisition speed, limited largely by the rate of the camera [3]. However, the 1P excitation volume is larger and intersects the sample obliquely [Fig. 1(D)], making 1P-ASO-SVIM less efficient at reducing background than SVIM. Like all forms of linear excitation, visible 1P excitation light increasingly scatters with depth, resulting in unavoidable background from outside the volume of interest. 2P-ASO-SVIM effectively eliminates background from out-of-volume fluorescence [Fig. 2(A)] due to nonlinear excitation: the quadratic dependence of 2P-excited fluorescence on the laser intensity restricts the excitation volume to near the focus [3,9], resulting in negligible background even with single-objective designs. The NIR excitation light is scattered much less than visible wavelengths, and any scattered light is unlikely to generate background, as it is unlikely to achieve the intensity required to excite fluorescence or autofluorescence in tissue. Through the judicious selection of illumination NA and beam-scanning, it is straightforward to match the 2P excitation volume to the desired light-field region of interest (Supplement 1, Section 1). This advantage is partially tempered by the reduced speed of 2P-ASO-SVIM, as the lower 2P excitation cross section yields lower fluorescence signal for a given laser intensity, which cannot be increased without bounds out of concern for photodamage.

As a final example of the combination of high-contrast, ultrahigh-speed volumetric imaging at cellular resolution and the sample-mounting flexibility of ASO-SVIM, we imaged 3D blood flow in nearly the entire larval zebrafish brain, covering a 670 μm × 470 μm × 200 μm volume at ~50 Hz, in 9 zebrafish mounted in a standard multi-well plate (Supplement 1, Fig. S9, and Visualizations 2-4).
Together, our results show that ASO-SVIM offers a convenient middle ground between SPIM and traditional wide-field LFM, providing improved contrast and effective resolution compared to LFM while outperforming the 3D imaging speed of SPIM by approximately two orders of magnitude, as it requires only a single camera exposure to capture an extended volume. Compared to our earlier form of SVIM [1,3], ASO-SVIM relaxes steric constraints by using only one objective, similar to recent developments in single-objective light-sheet-based microscopy [18-21], easing sample preparation and expanding the application space to multicellular systems that are impractical for a dual-objective design. Finally, the simplicity of ASO-SVIM renders it compatible and synergistic with many recent refinements of LFM [10-13], and we envision that together they will bring LFM-based imaging techniques into a wide range of biological systems and applications.

Microscope optics

We describe here the light-field-based selective-volume illumination microscope used in our work. Refer to Fig. S1 for the beam paths and key components.

ASO-SVIM: oblique-angled one-photon excitation and wide-field illumination modes

The illumination path for one-photon (1P) excitation, represented by the blue line, is provided by a bank of continuous-wave (CW) fiber lasers (Coherent OBIS LX, UFC Galaxy: 488 nm, 30 mW; 514 nm, 50 mW; 640 nm, 75 mW) and high-power CW lasers (488 nm, 300 mW, Coherent Sapphire LP; and 532 nm, 5 W, Coherent Verdi). Light from the CW laser bank is collimated and expanded by an objective (BE; Nikon, Plan Fluorite 10×, 0.3 NA, 16 mm WD), directed by a dichroic mirror (DC1; FF750-SDi02-25x36, Semrock), and passed through a remote refocus module, which is composed of lens pair T11 and T12 (both 75-mm focal length, Thorlabs AC254-075-A-ML). Adjusting the position of T12 refocuses the beam waist so that it is coincident with the nominal detection focal plane at the sample. The illumination beam is then sent to a 2D (x-y) scanning galvo system (G; 6-mm aperture silver mirrors, Cambridge Technology H8363) before being passed through a scan lens (SL; 110-mm focal length, Thorlabs LSM05-BB), a tube lens (TL; 150-mm focal length, Thorlabs AC508-150-B), and a water-dipping objective (ASO; Nikon, CFI LWD Plan Fluorite 16×, 0.8 NA, 3 mm WD). G, SL, and TL are mounted on a computer-controlled motorized translational stage (Newport 436 and Newport LTA-HS) to control the inclination angle in ASO-SVIM mode (tilted 26.5° relative to the optical axis of ASO; purple dashed line) and to easily port the beam back to the wide-field illumination mode. The illumination NA is adjusted to be ~0.04 to 0.06, depending on the selective illumination extent, yielding a fluorescence Gaussian-beam waist of ~4 to 6 µm with an axial (z) extent ranging from ~150 to 230 µm (measured as the confocal parameter of the focal volume). As G is conjugate to the back pupil of ASO, scanning along the x- and y-axes with the appropriate voltages selectively paints out the desired sample volume. For fast volumetric 1P imaging, the high-power CW laser was used to provide the high laser intensity needed beyond what the CW laser bank could provide.
Light from the high-power CW lasers is collimated and expanded by BE (Thorlabs BE052-A) and directed by mirrors to a cylindrical beam-shaping module, composed of a pair of cylindrical lenses C1 and C2 (−50-mm focal length, Thorlabs LK1662L1, or −30-mm focal length, Thorlabs LK1982L1; and 150-mm focal length, Thorlabs LJ1629L1), which expand the beam elliptically in the y-direction. This expanded beam is reflected by a mirror mounted on a motorized motion-control stage (MM1; Newport 436 and Newport LTA-HS), where it is directed through T11 and T12 and then focused into a 2D (y-z) sheet by C3 (75-mm focal length, Thorlabs LJ703RM-A) onto G. Thus, G only needs to provide scanning along the x-axis to selectively paint out the desired volume at the sample. Note that C3 is used only for 1D scanning and is omitted in the other imaging modes. All 1P imaging data were acquired with 1D scanning except for Fig. 3, where 2D scanning was employed to provide a more precise selectively-illuminated volume, in order to avoid direct illumination of the animal's eyes. An inspection camera (not shown; PCO pco.edge 5.5), conjugate to the sample volume and coincident with the x-z plane, aids in alignment and calibration of the illumination tilt angle and the G scanning parameters. Tradeoffs associated with volume-scanning as well as alternative implementations of selective-volume illumination are discussed in [1].

2P-ASO-SVIM: two-photon excitation mode

The illumination path for two-photon (2P) excitation is shown in red. Near-infrared (NIR) pulsed illumination is provided by a Ti:Sapphire ultrafast laser (Coherent Chameleon Ultra II), and the illumination power is controlled by a Pockels cell (PC; Conoptics 350-80). A polarizing beamsplitter (PBS; Thorlabs PBS102) is used to combine the visible and NIR beams into a colinear beam and to split the combined beam into two integrated excitation paths (towards the ASO and SPIM objectives). Visible and NIR half-wave plates (λ/2VIS and λ/2NIR; Thorlabs AHWP05M-600 and AHWP05M-980), each mounted in a manual rotation mount, are used to adjust the laser power delivered to ASO and SPIM as appropriate. In the ASO path, the NIR illumination beam is transmitted through DC1 and then through lens pair T21 and T22 (100-mm focal length, Thorlabs AC254-100-B, and 75-mm focal length, Thorlabs AC254-75-B-ML), used to expand and refocus the beam waist before being sent to the same illumination-scanning optics as in the aforementioned 1P mode (G, SL, TL, and ASO). A mirror mounted on a motion-control stage (MM2) allows automated switching between 2P- and 1P-ASO excitation. The illumination NA is adjusted to be ~0.055 to 0.08, yielding fluorescence Gaussian-beam characteristics similar to the 1P mode: ~4 to 5 µm waist and ~150 to 230 µm axial extent. For all 2P imaging experiments presented (Figs. 2-3), ~525 mW of average laser power was delivered to the specimen.

Light-field detection and reconstruction

Excited fluorescence at the sample is collected by the ASO objective. A dichroic mirror (DC2; Di01-R488/561 or Di01-R405/488/543/635-25x36) and a filter wheel (Sutter Instrument Lambda 10-3, 32 mm diameter) equipped with emission filters (FF01-470/28-32, FF03-525/50-32, FF01-609/54-32, and FF01-680/42-32) together block the excitation light and transmit the fluorescence signal emitted from the sample (green).
An intermediate image at an overall magnification of 24× is projected onto a lenslet array (LA; 2.06-mm focal length, 18 × 18 mm, 136-μm pitch, AR coated, OKO Technologies APO-Q-P192-F3.17; f-number matched to the NA of ASO) by a tube lens (TL; 300-mm focal length, Edmund Optics 88-597). With LA placed at the native image plane, an array of fluorescence focal spots is created, which encodes 4D spatio-angular information for each position in the 3D volume, referred to as the light-field [3,4]. The generated light-field is imaged onto an sCMOS camera (C; Andor Zyla 5.5) by a pair of photographic lenses R1 and R2 (both 50-mm focal length, Nikon NIKKOR f/1.4). These raw light-fields are reconstructed into full volumes as described in refs. [1,5]. Unless otherwise noted, all image stacks are further processed using the filtering algorithm described in Section 2.

Fig. S2. 3D opto-mechanical model of the ASO-SVIM light-field detection path. Inset shows a photograph of the sample chamber, the axial single objective (ASO) used both to deliver selective-volume illumination at the sample and to collect the excited fluorescence, as well as the light-sheet excitation objective (SPIM). Owing to the ASO design, samples can be mounted using a caddy and dive bar system as described in ref. [2] and are entirely compatible with standard sample preparation protocols (e.g., Fig. S9). Fluorescence collected from ASO passes through a dichroic mirror (DC), a filter wheel (FW), a tube lens (TL), a lenslet array (LA), and onto an imaging module. R: detection relay lens, where the subscripts refer to the sequence of lenses; C: camera.

SPIM: one-photon and two-photon light-sheet imaging modes

In order to operate in SPIM mode, either λ/2VIS or λ/2NIR is rotated so that enough excitation energy is transmitted through PBS and delivered to the sample. After PBS, the illumination beam is routed to a 2D (x-z) scanning galvo system (G; 5-mm aperture silver mirrors, Thorlabs GVSM002), and then passed through SL, TL, and an objective (SPIM; Olympus, LMPLN-IR 10×, 0.3 NA, 18 mm WD) to excite the sample with a scanned Gaussian-beam light-sheet. The SPIM objective is mounted on a manual translational stage to create more sample space for ASO-SVIM mode if needed. In order to collect images in SPIM mode, LA is moved entirely out of the detection path, and the entire imaging module (R1, R2, and C) is moved in −z by the focal length of LA. As shown in Fig. S2, LA and the imaging module are each mounted on motorized linear translational stages (Newport 436 and Newport LTA-HS), enabling high-precision positioning and seamless switching between light-field and conventional wide-field/SPIM detection via computer command. The stages also serve to aid in fine alignment. To assemble a 3D volume, 2D images are recorded in series by scanning the sample in z through the stationary light-sheet with a motorized stage (Newport 436 and Newport LTA-HS).

Instrument control

Instrument control is similar to our previous implementation [1], with the primary changes concerning the coordination between the scanning system and camera triggering. In our new single-objective configuration, a combination of custom software developed in LabView (National Instruments), ScanImage [6], and Micro-Manager [7] synchronizes the scanning system, laser intensity, and camera triggering so that the volume of interest is illuminated an integer number of times within one camera exposure and the excitation intensity is near-uniform frame-to-frame during acquisition.
All the motorized linear translational stages used to switch between modes are controlled by an XPS Universal Motion Controller (Newport XPS-Q8). The 3D stage stack-up (Sutter MP-285) used for sample positioning is controlled with its corresponding controller; the sample-scanning z-stage (noted in Section 1.4) is controlled via Micro-Manager.

Characterizing system resolution

To quantify resolution in volumetric reconstructions of light-fields, we measured the point-spread function (PSF) with 175-nm fluorescent beads sparsely suspended in agarose (Fig. S3). We stepped the sparse bead sample in z by 2 μm over a 200-μm volume, imaging the same field of beads at different axial depths and thereby facilitating multiple measurements of isolated beads throughout the light-field volume. The z-series of light-field images was then reconstructed to yield a series of 3D stacks with overlapping z-extents, from which we calculated the resolution as a function of relative z-depth (Fig. S3B). The observed relation between relative z-depth and the PSF is consistent with results derived from wave-optics theory [4]: at different axial depths, the PSF size differs, generally broadening symmetrically away from the native focus; on the other hand, bead-measured PSFs across reconstructed 2D (x-y) slices at each corresponding z-depth are nearly identical.

k-space filtering

We describe here our k-space filtering process to alleviate light-field microscopy (LFM) reconstruction artifacts. These grid-like artifacts are due to the degeneracy in spatio-angular sampling at the native focal plane and have been described theoretically and experimentally [4]. Our method is motivated by two empirical observations. First, the grid-like artifacts are mainly composed of spatial frequencies beyond the theoretical resolution limit of the detection optics (Fig. S5A, left column). Second, the artifacts are most prominent at the native focal plane and in the immediate axial range around it (Fig. S5B, left column, and Fig. S6C). With these observations in mind, we devised the following filtering procedure, which selectively removes the bulk of the reconstruction artifacts without compromising the resolution of the 3D volume. At the native focal plane, the theoretical maximum lateral resolution is determined by the diffraction-limited sampling rate of LA: the lenslet pitch divided by the effective magnification [4], which we experimentally confirmed (theory: 5.7 μm; experiment: 5.2 ± 0.2 μm). This resolution limit sets a cutoff frequency in Fourier space (k-space) where we can impose a low-pass filter to remove high-frequency noise, the main source of the image artifacts (Fig. S5A, left column). We apply this low-pass filter to the native focal plane and adjacent planes extending across a 10-μm depth, a small subvolume defined by the experimental axial PSF (see dashed yellow rectangles in Fig. S5B). Image planes outside of this subvolume are not low-pass filtered. Note that in LFM the resolution changes as a function of depth, and maximum resolution is achieved at z positions away from the native focal plane [4], as experimentally shown in Fig. S3B. Because only the subvolume that extends across the focal plane (where artifacts are most prominent) is k-space filtered, the higher resolution present elsewhere in the volume is unscathed.
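A minimal sketch of the per-slice filtering step just described, including the median filter used later in the pipeline (Fig. S4) to damp ringing; the cutoff convention (1/resolution, in cycles per micron) and all parameter names are our assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def kspace_lowpass(plane, pixel_size_um, resolution_um, median_size=3):
    """Low-pass filter one x-y slice near the native focal plane.

    Zeroes spatial frequencies beyond the native-focal-plane resolution limit
    (e.g. ~5.7 um here), then median-filters to minimize ringing artifacts.
    """
    ky = np.fft.fftfreq(plane.shape[0], d=pixel_size_um)
    kx = np.fft.fftfreq(plane.shape[1], d=pixel_size_um)
    kr = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)  # radial frequency map
    spectrum = np.fft.fft2(plane)
    spectrum[kr > 1.0 / resolution_um] = 0.0           # truncate spurious frequencies
    filtered = np.real(np.fft.ifft2(spectrum))
    return median_filter(filtered, size=median_size)
```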
Experimental aberrations, background, scattering, and other sources of noise break the underlying assumptions in the reconstruction [1,4], generally decreasing the highest non-zero spatial frequency achievable (i.e., the effective resolution limit), or artificially increasing it, making our k-space filter a conservative approach. Our filtering process is outlined in Fig. S4 and can be combined with any LFM reconstruction algorithm. To quantitatively assess how well k-space filtering mitigates reconstruction artifacts, we compared standard LFM and k-space filtered reconstructions of a 300- by 200- by 200-μm field of beads in agarose (Fig. S5). In large part the fields of beads are similar, but it is clear that artifacts are visible in both lateral and axial maximum-intensity projection (MIP) views of the conventional reconstruction that are not apparent with k-space filtering (Fig. S5A). Even though the periodic artifacts are concentrated only at the native focus (Fig. S5B, left column), they persist and lift the noise floor throughout the lateral MIP view (Fig. S5A, left column). High-frequency artifacts can swamp the signal intensity of weak point sources, making it difficult to differentiate artifacts from real signal; in contrast, the k-space filtered signal intensities are weighted as expected, where real point sources are located (Fig. S5D, line 2). In addition, filtering significantly decreases reconstruction artifacts without any loss of spatial resolution throughout the 3D volume, as measured by line cuts through several PSFs (Fig. S5D, line 1).

We further tested k-space filtering in vivo, where background and noise can critically affect the reconstruction quality [1]. We acquired volumetric data of transgenic zebrafish embryos expressing green fluorescent protein in the cranial vasculature by means of LFM and light-sheet microscopy (also known as selective-plane illumination microscopy; SPIM), which provided an additional ground-truth (higher resolution) structural image against which to compare our filtering method (same dataset as Fig. 2). When applied to living tissue, we observe a dramatic reduction in grid-like artifacts at the native focal plane compared to the conventional LFM reconstruction (i.e., no filter), as shown in Fig. S6C. Comparing volumetric contrast in the standard and k-space filtered reconstructions, we see a dip near the native focal plane (Fig. S6B). This is to be expected, as the grid-like patterns lead to an artificial increase in contrast. Similar to the experimentally measured PSFs, line intensity profiles along filtered blood vessels show an important decrease in spurious spatial signal without loss of resolution (Fig. S6D), alteration of structural features, or additional artifacts (Fig. S6F).

Fig. S4. k-space filtering algorithm. LFM (top) reconstructs a complete 3D volume with depth-dependent resolution and artifacts near the native focal plane [4,5]. Due to the non-uniform resolution across the entire volume, a single cutoff frequency cannot be applied without compromising peak resolution at other image planes. k-space filtering (bottom) splits the deconvolved volume into smaller subvolumes and independently processes the subvolume that extends across the native focal plane. Retrieved image slices are low-pass filtered in Fourier (k) space, based on the experimental optical transfer function (OTF) bounds for that subvolume. Next, image slices are inverse transformed back into real space, and a median filter is applied to minimize ringing artifacts.
The filtered image slices are then combined to assemble the final, denoised volume. Each inset shows the spatial frequency content of the corresponding axially-centered PSF at the native focus, as indicated by the dashed yellow rectangle in the image. Both real- and frequency-space representations show the ability of k-space filtering to reduce high-frequency artifacts, laterally and axially. OTF images were equally gamma-contrast-adjusted to aid in visualizing weak features. Scale bar, 50 μm. (C) Overlap of x-z MIPs shows excellent spatial correspondence of PSFs before and after filtering. (D) Comparative line profiles as indicated by the yellow lines in (C). As expected, there is no appreciable loss of resolution with k-space filtering (line 1). Away from the native focus, bead-measured signal intensities show full quantitative correspondence, while at the native focal plane, periodic reconstruction artifacts are effectively suppressed (line 2).

x-y slice from a 100-μm-thick slab (same dataset as Fig. 2), centered approximately 86 μm into the specimen (z = −14 μm), comparing the performance of the indicated modalities. Remaining rows: zoomed-in regions of the structures in the yellow boxes in the x-y plane (top row), along with corresponding line intensity profiles (as indicated by the 50-μm yellow line in the images) plotted on the right. Given the intrinsically higher spatial resolution of SPIM, full quantitative correspondence of the light-field-based images is not expected. All three line profiles were used to quantify the average FWHM and standard deviation for each modality (right column, top). Of the light-field-based methods, 2P-ASO-SVIM achieves the highest biological resolution (owing to nonlinear excitation as well as reduced background and scattering), approaching the performance of SPIM, followed by ASO-SVIM in 1P mode, and last, wide-field LFM. Scale bar, 100 μm. See also Figs. 2 and S7.
2020-10-05T01:00:49.610Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "745fbbc6cab183e6c9d52c48a3b18ffc8c9ad5c0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2010.00644", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "745fbbc6cab183e6c9d52c48a3b18ffc8c9ad5c0", "s2fieldsofstudy": [ "Biology", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Engineering", "Medicine" ] }
55864302
pes2o/s2orc
v3-fos-license
Escaping the Debt Constraint on Growth: A Suggested Monetary Policy for Brazil

Existing interest rates imply explosive debt dynamics for Brazil. It also faces rising inflation from earlier currency depreciations, which could trigger future depreciation. These conditions impose a policy contradiction. Brazil needs lower interest rates for debt sustainability, but tight monetary policy to avoid exchange rate depreciation and inflation. The paper develops a strategy to escape this contradiction. Policy must bolster investor confidence to lower external interest rates, lower domestic interest rates to reduce debt service burdens, and implement domestic credit creation controls to control inflation.

Brazilian Journal of Political Economy, vol. 24, nº 1 (93), January-March/2004

THOMAS I. PALLEY*

I. BRAZIL'S MACROECONOMIC POLICY AT A CROSSROADS

Taking office in January 2003, President Lula da Silva inherited a difficult economic situation marked by great uncertainty following the financial crisis of 2002. Fortunately, in the first six months of the new president's term, financial markets have responded positively. The Real has rebounded from its crisis overshoot, and the strength of Brazil's currency and stock market is being interpreted as a sign that current macroeconomic policy is working. To some, the implication is that Brazil only needs to stay the course with its existing regime of tight money.

This paper argues that this position is mistaken, and that the existing policy configuration cannot achieve economic growth. At best, it can ensure financial stability. More likely, however, it will result in stagnation that is ultimately joined by renewed financial crisis. Instead, Brazil must adopt a new monetary policy mix that enables it to escape the debt constraint on growth. Analytically, this constraint can be understood through the highly indebted developing country trilemma, which has countries grappling to control the exchange rate and external interest rate, the domestic inflation rate, and debt sustainability. Escaping the trilemma calls for an external financial strategy that bolsters investor confidence, domestic credit creation controls that keep the lid on inflation, and lower domestic interest rates that ensure debt sustainability.
Turning to specifics, the recent strength of the Real is a welcome development, and one that the paper supports, provided it does not go too far. But that strength must be capitalized upon. At this stage, the strong Real should be used as an occasion to twist Brazil's debt structure by reducing the extent of foreign currency denominated and indexed debt, and replacing it with pure domestic debt. In addition, domestic interest rates must be brought down. Absent this, Brazil's economy stands to be impaled by the twin forces of a strengthening exchange rate and a high cost of capital. The former stands to reduce exports, raise imports, and undermine investment by lowering manufacturing profitability; the latter stands to compound investment spending weakness by imposing too high a required rate of return.

Finally, in twisting the debt structure and lowering domestic interest rates, there remains the persistent danger of inflation. Higher inflation could derail the proposed policy shift by causing renewed currency weakness. This would raise the domestic currency burden of Brazil's foreign currency linked debts, inflicting damage on both the public and private sectors. The recent strength of the Real has reduced inflation pressures, creating an opportune moment for initiating a changed stance of monetary policy. However, to the extent that inflationary fears persist, the paper recommends adoption of a new form of credit regulation based on asset based reserve requirements. Such regulation can control inflation without recourse to high domestic interest rates, thereby enabling Brazil to have low inflation, a stable currency, and domestic economic expansion.

II. OUTLINE OF THE PAPER

The argument for the proposed new policy configuration is developed in stages. Section III focuses on the problem of debt instability, which persists despite the current strength of the Real and stock market. Brazil's implied debt dynamics cannot be solved by additional fiscal austerity and a higher primary budget surplus. 1 Instead, additional austerity risks compounding existing domestic demand weakness, while not tackling the root problem of excessively high interest rates.

Section IV describes the composition of Brazil's public debt, which is both foreign and domestic currency denominated. In addition, there are significant private sector foreign currency debts. Section V then explores the policy contradiction imposed by Brazil's debt composition. On one hand, Brazil needs lower interest rates to restore debt sustainability, but on the other hand it needs tight money to guard against a vicious cycle of exchange rate depreciation and inflation. This contradiction points to the need for a coordinated external and internal financial strategy. The external strategy must bring down the cost of foreign borrowing, while maintaining confidence in the exchange rate. The internal strategy must lower domestic interest rates, while keeping the lid on inflation.
Section VI then develops a multiple equilibrium model of international financial markets, and argues that Brazil is trapped in the bad, high interest rate equilibrium. Section VII builds on this analysis, and develops an external financial strategy for escaping the pull of this bad equilibrium. Section VIII then turns to the development of an internal financial strategy for lowering domestic interest rates. The combined external-internal financial strategy mix can be viewed as a means of escaping the highly indebted developing country trilemma whereby countries grapple to control the exchange rate and external interest rate, the domestic inflation rate, and debt sustainability.

III. THE ALGEBRA OF DEBT INSTABILITY

The unsustainability of Brazil's existing financial condition can be seen through a simple model of debt algebra. Such a model shows that existing real interest rates imply an exploding trajectory for the debt-to-GDP ratio that must eventually lead to default. The growth of the debt-to-GDP ratio is given by

(1) g D/Y = g D - g Y

where g D/Y = growth of the debt-to-income ratio, g D = growth of debt, and g Y = growth of real GDP. The growth of the debt is given by

(2) g D = i + d/D

where i = real interest rate on debt, d = primary budget deficit, and D = national debt. This can in turn be written as

(3) g D/Y = i + d/D - g Y

If the debt-to-GDP ratio is to be stabilized (i.e., g D/Y = 0), the real interest rate must fall to 9.75%. If real GDP growth falls to 2%, then the real interest rate must fall to 8%.

The above simple debt-GDP algebra reveals that Brazil's problem is one of excessively high interest rates. Brazil has a primary budget surplus, a reasonable real growth rate, and a debt-to-GDP ratio that is still within sustainable bounds. The one parameter that is out of balance is the interest rate. Given current real interest rates of 12% and an optimistic projected growth rate of 3.5%, the primary budget surplus must rise to 5.1% of GDP just to maintain the existing debt-to-GDP ratio. Yet such a level of surplus will likely trigger economic stagnation, and it also risks engendering adverse political consequences from the implied cuts in government spending. In other words, sticking with the current policy may yield short term financial stability, but it is unlikely to generate economic growth, and it also stands to be joined later by renewed financial instability.

IV. THE COMPOSITION OF BRAZIL'S DEBT: WHY BRAZIL CANNOT IGNORE INTERNATIONAL FINANCIAL MARKETS

The above analysis of Brazil's debt dynamics makes no distinction between internal and external debt. This raises the question of whether Brazil can pursue a "go it alone" policy that disregards international financial markets. With its debt purely domestic in character, Brazil would be able to unilaterally use domestic monetary policy to lower interest rates, just as the U.S. has done. However, Brazil's debt is a complicated mix of conventional domestic currency denominated debt, domestic currency debt that is indexed to the exchange rate, and foreign currency denominated debt. These features mean that the exchange rate and the confidence of international investors are critical for the success of monetary policy. In effect, Brazil is handcuffed by external constraints, and hence the necessity of an external strategy.
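Before turning to the composition data, the instability implied by equations (1)-(3) can be checked numerically. In the sketch below, the 3.75%-of-GDP primary surplus is the value implied by the stabilizing rates quoted in the text (9.75% interest, 3.5% growth, 60% debt ratio); it is inferred, not stated in the paper.

```python
def simulate_debt_ratio(debt_ratio, i, g_y, primary_surplus_gdp, years=10):
    """Iterate D/Y using g D/Y = i + d/D - g Y (equations 1-3).

    debt_ratio: initial debt-to-GDP ratio (e.g. 0.60)
    i, g_y: real interest rate and real GDP growth rate
    primary_surplus_gdp: primary surplus as a share of GDP (so d/Y = -surplus)
    """
    path = [debt_ratio]
    for _ in range(years):
        growth = i - primary_surplus_gdp / path[-1] - g_y
        path.append(path[-1] * (1.0 + growth))
    return path

# At the current 12% real rate the ratio explodes; at 9.75% it holds steady.
print(simulate_debt_ratio(0.60, 0.12, 0.035, 0.0375))
print(simulate_debt_ratio(0.60, 0.0975, 0.035, 0.0375))
```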
IV. THE COMPOSITION OF BRAZIL'S DEBT: WHY BRAZIL CANNOT IGNORE INTERNATIONAL FINANCIAL MARKETS

The above analysis of Brazil's debt dynamics makes no distinction between internal and external debt. This raises the question of whether Brazil can pursue a "go it alone" policy that disregards international financial markets. With its debt purely domestic in character, Brazil would be able unilaterally to use domestic monetary policy to lower interest rates - just as the U.S. has done. However, Brazil's debt is a complicated mix of conventional domestic currency denominated debt, domestic currency debt that is indexed to the exchange rate, and foreign currency denominated debt. These features mean that the exchange rate and the confidence of international investors are critical for the success of monetary policy. In effect, Brazil is handcuffed by external constraints, and hence the necessity of an external strategy.

Table 1 shows the composition of Brazil's public debt.2 Brazil's public debt-to-GDP ratio stands at 60%. Of this total, 40% is traditional domestic currency denominated debt, 30% is domestic currency debt that is indexed to the exchange rate, and 30% is foreign currency denominated debt. This means that 60% of the total public debt is affected by the exchange rate, and 70% of the debt has its interest rate determined by domestic market rates. Hence, the need for an internal and external interest rate strategy.

Table 2 shows the composition of Brazil's external debt. The total value of external debt is $210 billion, or 41% of GDP.3 Of this total foreign debt, 45% is attributable to public sector borrowing and 55% is attributable to private sector borrowing. The significance of this is that both the public and private sectors stand to be negatively impacted by exchange rate depreciation.
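The composition figures combine in two ways that matter for policy, and the split is easy to mislay. The small sketch below, with the shares taken from the text and variable names invented for illustration, just reproduces the arithmetic behind the 60% and 70% figures.

```python
# Sketch reproducing the Table 1 arithmetic quoted in the text.
shares = {
    "plain_domestic": 0.40,       # conventional domestic currency debt
    "fx_indexed_domestic": 0.30,  # domestic debt indexed to the exchange rate
    "foreign_currency": 0.30,     # foreign currency denominated debt
}

fx_exposed = shares["fx_indexed_domestic"] + shares["foreign_currency"]
domestic_rate_linked = shares["plain_domestic"] + shares["fx_indexed_domestic"]

print(f"share of public debt affected by the exchange rate: {fx_exposed:.0%}")    # 60%
print(f"share priced off domestic market interest rates: {domestic_rate_linked:.0%}")  # 70%
```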
V. POLICY IMPLICATIONS: WHY BRAZIL MUST WATCH INFLATION AND THE EXCHANGE RATE

The composition of Brazil's debt has critical policy implications. The large amount of foreign and exchange rate indexed debt means that investor confidence is critical. If investors lose confidence and exit, this will cause an immediate large exchange rate depreciation that impacts both the public and private sectors. Regarding the public sector, it will cause an increase in the burden of the foreign currency denominated public debt, and it will also increase that part of the domestic currency denominated public debt that is indexed to the exchange rate. For the private sector, it will increase the burden of foreign currency denominated borrowings. Private sector firms located in the export sector would be protected, as their earnings are denominated in foreign currency terms. However, private sector firms producing for the domestic market which have borrowed abroad to finance the purchase of imported capital goods could face bankruptcy.

In addition to these negative "balance sheet" impacts, exchange rate depreciation would have negative "inflation" impacts owing to exchange rate pass-through. Increased inflation would be economically deleterious for several reasons. First, inflation is bad for working people. Money is an important store of value for the poor, and the value of money is eroded by inflation. Second, wage increases tend to lag inflation, so that rising inflation effectively lowers real wages. Third, inflation makes lenders less willing to lend long, and therefore tends to undermine the market for long term finance. This is a market that Brazil needs to develop, and undermining it runs counter to Brazil's development goals. Fourth, higher inflation stands to cause further exchange rate weakness to the extent that investors sell Brazil's currency to avoid being saddled with depreciation losses, thereby engendering more pass-through inflation and also raising the burden of foreign currency denominated debts.

This inflation, debt burden, exchange rate nexus is illustrated in Figure 1. Effectively there is a danger of a vicious cycle, with exchange rate depreciation triggering rising inflation and increased debt burdens, and rising inflation and higher debt burdens then spurring further exchange rate depreciation. These conditions imply a policy contradiction. On one hand, Brazil needs lower interest rates to restore debt sustainability, yet on the other hand it needs tight monetary policy and high interest rates to head off a vicious cycle of exchange rate depreciation and inflation. To escape this contradiction, the paper develops an unorthodox two part monetary strategy. The external component of the strategy focuses on escaping a high interest rate equilibrium trap in international financial markets through new measures to instill investor confidence. Moreover, the strengthening of the Real in the first half of 2003 provides the perfect platform from which to implement these new policies. The internal component of the strategy lowers domestic interest rates to reduce the burden of servicing the public debt, while imposing domestic credit creation controls to prevent a resurgence of inflation that could undermine investor confidence and trigger exchange rate depreciation.

VI. BRAZIL AND INTERNATIONAL FINANCIAL MARKETS: A PROBLEM OF MULTIPLE EQUILIBRIUM4

Brazil's foreign interest rate problem can be interpreted in terms of a model of multiple equilibria, with Brazil now stuck in the bad equilibrium with high interest rates. Because interest rates are high, investors expect a greater likelihood of Brazilian default, and because they hold such expectations they need a high interest rate to compensate them for bearing the risk of default. In this fashion, expectations sustain the bad equilibrium. The policy challenge is how to move financial markets from the "bad" high interest rate equilibrium to the "good" low interest rate equilibrium.

A formal model of multiple equilibria is developed in the appendix. Figure 2 displays a graphical analogue of the model. The horizontal axis displays the domestic market interest rate (i), while the vertical axis displays the return on foreign investments (1 + i* + z) and domestic financial investments (E(R)). The risk adjusted return on foreign investments is described by the horizontal line equal to 1 + i* + z. The non-linear wave function, E(R), describes the expected return on domestic Brazilian bonds. In equilibrium the expected return on Brazilian domestic financial investments must equal the risk adjusted return in global markets.

The expected return function is highly non-linear with respect to the domestic interest rate, and Figure 2 shows the case where it partakes of a wave motion. Initially, a higher domestic interest rate raises the expected return on Brazilian assets. However, as interest rates rise, this pulls down the expected return owing to increased bankruptcy risk from higher debt service burdens. As the interest rate continues to rise, the expected return function may increase, perhaps because government moves to increase the primary surplus. But further rises in the interest rate then bring about a decline in the expected return as unstable debt dynamics kick in.

Figure 2: Multiple equilibria in the domestic bond market.

There are four equilibria in Figure 2.
Equilibrium A is the stable "good" equilibrium with low interest rates. Equilibrium B has a low interest rate but is unstable. Equilibrium C is the stable "bad" equilibrium with high interest rates, and equilibrium D is the unstable high interest rate equilibrium. Brazil can be thought of as trapped in the bad high interest rate equilibrium given by C or D. The policy challenge is to move the economy to A.

Figure 3 shows how exogenous shocks originating in the global financial center can negatively impact periphery countries such as Brazil. In Figure 3 the external interest rate increases, perhaps as a result of a tightening of monetary policy in the U.S. by the Federal Reserve. If the tightening is sufficiently strong, a developing country could potentially be driven from the good low interest rate equilibrium at A to the bad high interest rate equilibrium at C. An interesting feature is that when rates in the center come down again, the developing country remains trapped in the bad equilibrium.

This scenario can reasonably be interpreted to apply to Brazil. In 1997 the East Asian financial crisis caused an increase in the risk premium, z, demanded by global investors. This in turn shifted up the required rate of return (i* + z), though this shift was mitigated by the Federal Reserve's lowering of rates to counter the crisis. The Fed's actions helped keep the U.S. economic boom going, but they also meant that subsequently (July 1999) the Fed started raising interest rates, thereby driving up the externally available interest rate, i*. Moreover, these adverse developments were compounded by appreciation of the U.S. dollar during this period. This caused Brazil's national debt to increase, since much of it is either dollar denominated or indexed to the dollar. In effect, dollar appreciation served to push the E(R) function down, compounding the effect of investors' rising required rate of return.

VII. AN EXTERNAL MONETARY STRATEGY TO ADDRESS BRAZIL'S EXTERNAL DEBT PROBLEM

The above multiple equilibrium interpretation identifies Brazil's predicament. The challenge is how to bring down external interest rates. George Soros (2002) has proposed that the IMF issue some form of price guarantee to holders of Brazilian debt: "The challenge would be how to bring interest rates down to that level. That might require some international credit enhancements or guarantees, and the task would be to find the right instruments that keep the real risks as distinct from moral hazard within tolerable bounds" (Financial Times, August 13, 2002). This would have the effect of increasing the expected return to Brazilian debt by reducing perceived default risk. In terms of the model, it would shift up the E(R) function, as is illustrated in Figure 4.

A second Soros suggestion is that the central banks of center countries accept Brazilian government paper at their discount windows (On Globalization, p. 136).
In terms of the above model, this too would have the effect of shifting up the E(R) function. The reason is that it would increase the perceived liquidity of Brazilian paper to international investors, since they would be able to use such paper as collateral at pre-set prices with center country central banks.

Figure 4: Effect of an increase in the default payment, which shifts up the E(R) function and can shift the economy to the good equilibrium.

Another idea, proposed by Lerrick and Meltzer (2001) in connection with the discussion surrounding bailouts, is that the IMF should set a floor to the price of country debt. In times of crisis, rather than bailing out countries, the IMF would establish a facility allowing countries to buy their own debt at the deeply discounted floor price. This would provide support to bond prices, set a ceiling on interest rates, help retire part of a country's debt at below par prices, and give collateral to the IMF in the form of an asset traded on financial markets rather than the un-tradable promise of repayment it currently gets.

The Lerrick-Meltzer proposal is intended to provide an alternative orderly means of handling financial crises. However, in a multiple equilibrium model a similar mechanism can be used to free economies from the pull of a bad equilibrium before a crisis has taken hold. An announced guarantee floor price would effectively increase the value of the default state payment, thereby raising the E(R) function. By simply setting up a facility to support an announced guaranteed floor price for Brazilian bonds, the IMF could potentially free Brazil from the high interest rate equilibrium in which it is trapped. In a world of expectations driven multiple equilibria, this might even be accomplished without actually buying any bonds.

The above arguments identify high interest rates as the cause of Brazil's financial crisis, with high interest rates driving a process of unsustainable debt dynamics. These debt dynamics have also impacted Brazil's exchange rate dynamics, with unsustainable debt leading to capital flight that depreciates the exchange rate. And exchange rate depreciation in turn generates worsened debt dynamics, because a significant portion of Brazil's debt is foreign currency denominated or indexed to the dollar.

This pattern has important policy implications. Too much attention has been focused on the exchange rate, and scarce foreign currency has been wasted defending it. Exchange rate weakness is a symptom of the problem, not the cause. The real problem is expectations of default created by high interest rates. This suggests that Brazil's monetary authorities should mount a "bear squeeze" in Brazilian debt markets. Rather than giving Brazil $30 billion in dribs and drabs, the IMF should give Brazil $30 billion in one shot to buy up foreign currency denominated Brazilian debt. This would drive bond prices up and interest rates down. By contrast, giving the money in small chunks will result in it bleeding away without changing market expectations.
In addition, given the strength of the Real, which has reduced the value of foreign currency indexed domestic bonds, the monetary authority should now buy these bonds. These purchases can in turn be sterilized by sales of pure domestic currency bonds. This measure would eliminate a major source of domestic financial fragility, whereby the Brazilian domestic debt burden can be whip-sawed by foreign exchange movements. These indexed bonds are a hang-over from mistaken earlier "nominal anchor" anti-inflation policies that purchased lower inflation at the cost of massively increased domestic financial fragility.

VIII. AN INTERNAL MONETARY STRATEGY FOR BRAZIL'S INTERNAL DEBT PROBLEM

A successful external strategy can bring down Brazil's external interest rate. Moreover, to the extent that investors feel more confident about Brazil, this will tend to appreciate the exchange rate, thereby reducing that part of the public debt that is indexed to the exchange rate. However, Brazil also needs to bring down domestic interest rates.

Falling yields on the external debt will make domestic assets more attractive, generating some portfolio substitution that will automatically drive down domestic interest rates. However, more may be needed. To this end, the central bank could lower its own short term interest rate, and it could also create a bear squeeze in domestic bond markets by buying up non-dollar indexed domestic currency denominated debt. Doing so would signal to the market that the authorities expect bond prices to rise and the exchange rate to appreciate - which is why they are not buying dollar indexed debt, on which they would incur large capital losses.

How low the Brazilian monetary authorities should push the domestic interest rate is a judgment resting on a difficult trade-off. A higher domestic interest rate will tend to appreciate the exchange rate, thereby lowering the value of that part of the debt that is dollar indexed or denominated in dollars. Moreover, a higher exchange rate is also anti-inflationary. Balanced against this, a lower interest rate lowers interest payments on the domestic currency denominated debt, and it is also good for aggregate demand and the real economy.

Inflation is a second critical element in the calculus of domestic interest rate policy. It is critical that Brazil avoid higher inflation, both because of its negative impacts on working people and because of its negative impact on international financial markets. If these markets perceive that Brazilian inflation is about to take off, they will start selling Brazilian financial assets. This will depreciate the exchange rate and drive up Brazil's borrowing rate in international markets. In effect, lack of attention to the inflationary impacts of expansionary domestic monetary policy could take back the ground won by a successful external strategy.
Yet, Brazil also needs lower domestic rates to lower the interest burden on the 70% of its public debt that is denominated in domestic currency terms. In effect, Brazil is stuck in what can be termed "the highly indebted developing country" trilemma, illustrated in Figure 5.A. The manner in which debt is denominated means that Brazil must watch the exchange rate. It is also concerned about inflation, both because of its adverse internal effects and because of its adverse impact on the exchange rate. Finally, Brazil is concerned with the public finance burden of servicing the domestic public debt, and this makes it problematic to use high interest rates to combat incipient inflation. Can this trilemma be solved? The answer is yes. An appropriate external strategy that shifts Brazil to the low interest rate equilibrium can strengthen the exchange rate and lower the burden of the foreign currency denominated debt. Lower domestic interest rates can then lower the burden of the domestic currency debt. However, lowering domestic interest rates risks triggering domestic credit expansion and inflation. To prevent this outcome, Brazil's monetary authorities need to restrict domestic credit expansion. This last "unorthodox" element is missing in Brazil's current domestic monetary policy deliberations. In its absence, the trilemma cannot be resolved.

Given this, Brazil should move to establish domestic credit creation controls. In particular, Brazil should consider imposing a system of asset based reserve requirements (ABRR), under which reserve requirement holdings are imposed on financial intermediary asset holdings. The full details and economic logic of ABRR are described in Palley (2000). In the current application, holdings of Brazilian government bonds would be zero-rated so as to keep down the interest rate. However, bank loans would carry positive reserve requirements so as to raise the cost of such loans and discourage excessive domestic credit creation. In this fashion, the inflation threat can be dealt with, and the trilemma is resolved as shown in Figure 5.B. A minimal numerical sketch of the ABRR mechanism follows at the end of this section.

Finally, in addition to the above interest rate policy measures, Brazil should make two structural changes to its debt management policies. First, all new Brazilian debt and roll-overs of existing debt should include a call feature allowing for early redemption. This would send another clear signal that the authorities anticipate interest rates will come down. Second, Brazil should permanently abandon the practice of indexing its public debt to the exchange rate. This practice has some parallels with dollarization, and as with dollarization, it has created enormous financial fragility. In the absence of capital controls, Brazil cannot reliably control its exchange rate. This means that exchange rate indexing makes the public debt susceptible to massive expansion from exchange rate depreciation, as has happened. Debt expansion has in turn created twin fears of default and inflation, leading to further exchange rate weakness. Put bluntly, Brazil cannot issue dollars, and it should therefore never tie its internal financial liability structure to the dollar.
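As promised above, here is a minimal numerical sketch of the ABRR mechanism: reserve requirements attach to intermediary asset holdings, government bonds are zero-rated, and bank loans carry a positive rate. The requirement rates and portfolio figures below are invented purely for illustration; Palley (2000) gives the full design.

```python
# Hypothetical sketch of asset based reserve requirements (ABRR).
# Requirement rates attach to ASSET holdings, not deposits. The rates
# below are illustrative, not proposals from the paper.

ABRR_RATES = {
    "government_bonds": 0.00,  # zero-rated to hold down the public borrowing rate
    "bank_loans":       0.10,  # positive rate to raise the cost of credit creation
}

def required_reserves(asset_holdings):
    """Reserves an intermediary must hold against its asset portfolio."""
    return sum(ABRR_RATES.get(asset, 0.0) * amount
               for asset, amount in asset_holdings.items())

portfolio = {"government_bonds": 500.0, "bank_loans": 300.0}
print(required_reserves(portfolio))  # 30.0: only the loan book attracts reserves
```

The design choice is visible in the numbers: raising the loan requirement rate tightens private credit creation without touching the policy interest rate, which is what allows low rates and inflation control to coexist.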
IX. CONCLUSION: GROWTH WITH FINANCIAL STABILITY

Brazil is currently trapped in a high interest rate "bad" equilibrium. The existing monetary policy mix has produced financial stability, but it is unlikely to produce growth. Moreover, because the debt dynamics implied by current interest rates are unsustainable, financial instability and the possibility of default will likely reappear if policy remains unchanged.

The policy program described in the current paper offers a plausible escape that can generate growth with financial stability. An external strategy aimed at pulling Brazil from the high interest rate equilibrium to the low interest rate equilibrium can lower the cost of foreign borrowing and appreciate the exchange rate. This will help restore debt sustainability, and also lower inflation. Appropriate domestic monetary policy can lower the domestic interest rate, thereby reducing the domestic debt service burden and solidifying debt sustainability. Finally, appropriately structured credit creation controls can guard against the inflationary consequences that may follow from lower domestic interest rates, thereby ensuring the viability of the external strategy. So much remains to be done in Brazil regarding matters of income distribution and social policy. However, it will be hard to make progress on these issues until Brazil escapes the debt constraint on growth. Existing policy recommendations do not remove this constraint, and it is for this reason that the new government should embrace a new path for monetary policy.

APPENDIX

The appendix describes a model of multiple equilibrium in the market for Brazilian debt. Holders of Brazilian debt need to earn a risk adjusted expected return equal to that which they could earn on other international investments, implying the following condition:

(A.1) 1 + i* + z = E(R)

where i* = foreign interest rate, z = risk premium required for investing in Brazil, and E(R) = expected return to investments in Brazil. The expected return to Brazilian investments is given by

(A.2) E(R) = p(i, d(i), D/Y, e(i, ..), ..)X + [1 - p(i, d(i), D/Y, e(i, ..), ..)][1 + i]

where p(.) = probability of default, X = payment in the default state, i = Brazilian real domestic interest rate, and e = real exchange rate. An increase in e represents an appreciation of the Brazilian currency. The probability of default is such that 0 ≤ p ≤ 1, and the default payment is such that 0 ≤ X ≤ 1. The assumed signs of the partial derivatives are as follows. An increase in the Brazilian domestic interest rate increases the probability of default by increasing the government's debt service burden. An increase in the primary deficit has an ambiguous effect: on one hand there is a positive Keynesian aggregate demand effect, but on the other hand it adds to the government's indebtedness. The primary deficit may also be negatively impacted by higher interest rates, with the government being forced to cut back spending as rates rise. The impact of the real exchange rate on the probability of default is ambiguous: on one hand an appreciation reduces the debt burden, since much Brazilian debt is indexed to the dollar, but on the other hand it reduces aggregate demand and economic activity by reducing net exports. Finally, the impact of the domestic interest rate on the exchange rate is ambiguous. The expected return function described by (A.2) is therefore highly non-linear, and can partake of a wave motion such as that drawn in Figure 2.
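Since the appendix leaves p(.) unspecified, any numerical illustration must invent one. The sketch below picks a simple logistic default probability and scans for domestic interest rates satisfying (A.1); everything about p(i) and the parameter values is hypothetical. With this particular p(i) the scan finds two crossings, whereas the wave-shaped E(R) of Figure 2 produces four.

```python
# Numerical sketch of appendix equations (A.1)-(A.2). The default-probability
# function p(i) below is an invented illustration -- the paper leaves p(.)
# unspecified -- so this choice yields two equilibria rather than the four
# of the wave-shaped E(R) in Figure 2.
import math

X = 0.4                 # payment in the default state, 0 <= X <= 1
i_star, z = 0.02, 0.02  # foreign interest rate and risk premium (illustrative)
REQUIRED = 1.0 + i_star + z  # (A.1): required risk adjusted return

def p_default(i):
    """Probability of default, rising steeply once debt service gets heavy."""
    return 0.03 + 0.9 / (1.0 + math.exp(-(i - 0.25) / 0.03))

def expected_return(i):
    """(A.2): E(R) = p X + (1 - p)(1 + i)."""
    p = p_default(i)
    return p * X + (1.0 - p) * (1.0 + i)

# Scan for equilibria: domestic rates i where E(R) crosses the required return.
grid = [j / 1000.0 for j in range(0, 401)]
for a, b in zip(grid, grid[1:]):
    fa = expected_return(a) - REQUIRED
    fb = expected_return(b) - REQUIRED
    if fa * fb <= 0.0:
        print(f"equilibrium near i = {(a + b) / 2:.3f}")
```

The mechanics of the trap are visible here: at low i the default probability is small and E(R) tracks 1 + i, while past the steep region of p(i) higher promised rates actually lower the expected return, which is exactly why expectations can sustain a high interest rate equilibrium.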
The model is closed by adding a dynamic adjustment mechanism under which the domestic interest rate adjusts in response to the gap between the expected return, E(R), and the required risk adjusted return, 1 + i* + z.

Figure 3: Effect of a rise in foreign interest rates, which can shift the economy from the good to the bad equilibrium.

Figure 5.A: The highly indebted developing country trilemma.

Figure 5.B: Policy mix for resolving the highly indebted developing country trilemma.
2015-03-21T21:52:17.000Z
2004-03-01T00:00:00.000
{ "year": 2004, "sha1": "d34cc65fd2d6beb1073cebcae8cac4d20ad546c3", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rep/v24n1/1809-4538-rep-24-01-38.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3a6ef9541afaee545d1a9175b87e54d8fad0e1dd", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
55506727
pes2o/s2orc
v3-fos-license
The effects of chronic carbon monoxide treatment on diet-induced obesity and Rev-erb-alpha target gene expression in adipose tissue

The present study determined if carbon monoxide (CO) treatment reversed diet-induced obesity and insulin resistance. Because CO is a potential ligand for the nuclear receptor Rev-erb-α, we also determined if CO treatment altered the mRNA expression of Rev-erb-α targets. Mice were fed a low fat diet (LFD) or a high fat diet (HFD) for 150 days. Following 110 days of the dietary intervention the HFD-fed mice were assigned to a CO inhalation group or a control group. The HFD-CO group was exposed to 250 ppm CO for one hour per day for 40 consecutive days. Body weight and epididymal white adipose tissue (EWAT) mass were significantly elevated in HFD and HFD-CO mice compared to LFD mice, but no differences existed between HFD and HFD-CO mice. Area under the insulin-assisted glucose tolerance curve was significantly elevated in HFD and HFD-CO mice compared to LFD mice, but no differences existed between HFD and HFD-CO mice. Heme oxygenase-1 (HO-1) mRNA was significantly higher in EWAT of HFD-CO compared to HFD mice, indicating effective delivery of CO to adipose tissue. CO treatment did not alter Rev-erb-α expression or Rev-erb-α targets. These results indicate that CO inhalation does not attenuate diet-induced obesity/insulin resistance.

Introduction

Obesity is an increasingly prevalent metabolic disease that affects more than one third of United States adults [1]. Obesity is associated with insulin resistance and increases the risk of developing type 2 diabetes, a disease that results in an estimated $245 billion in total healthcare costs [2]. Despite the prevalence and health risks associated with obesity, there are few viable and effective treatment options. Therefore, the development of new interventions that can promote weight loss and improve insulin sensitivity is paramount to reducing the prevalence of obesity and its associated diseases. Recent studies demonstrate that increasing heme oxygenase-1 (HO-1) expression can reduce adiposity and heighten insulin sensitivity [3-6]. Although HO-1 induction by porphyrin molecules or genetic manipulations can improve obesity and insulin resistance, the cellular mechanism responsible for these changes remains to be established. One hypothesis is that the products of HO-1 enzyme activity, CO and biliverdin, exert the metabolically beneficial effects. CO has been shown to reduce inflammatory markers in both cultured cells and mice [7], and a novel CO releasing molecule can attenuate weight gain and preserve insulin sensitivity when mice are placed on a HFD [8]. However, whether or not CO alone can reverse already established obesity and insulin resistance is not established, although a recent study indicates that CO inhalation produces a transient reduction in body fat of obese mice [9]. Further, the mechanism by which CO might promote weight loss and improve insulin sensitivity is not known. Establishing the efficacy of CO inhalation to reduce adiposity as well as identifying the mechanism of CO action may lead to novel interventions to combat obesity and type 2 diabetes. CO may bind to the nuclear receptor Rev-erb-α [10]. Rev-erb-α is a ligand regulated transcription factor that acts as a transcriptional repressor [11,12] regulating circadian rhythms and metabolism [13,14].
Although heme has been identified as the ligand for Rev-erb-α [15,16], evidence suggests the Drosophila orthologue to Rev-erb-α binds CO and represses gene expression [10]. Whether or not the expression of Rev-erb-α targets is altered in tissues following CO treatment is not known, although a Rev-erb agonist has been shown to promote weight loss and suppress PGC-1α mRNA expression, an established target of Rev-erb-α [14]. The purpose of the present study was to determine if CO treatment could reverse obesity and improve insulin resistance due to a high fat diet (HFD) in a manner that was associated with changes in the mRNA expression of well-established Rev-erb-α targets in white adipose tissue. We hypothesized that daily CO treatments would reduce adiposity and improve insulin action in mice with established diet-induced obesity. We also hypothesized that mRNA expression of the Rev-erb-α targets PGC-1α, PPARγ, Bmal1, and Rev-erb-α itself would be reduced by CO treatment.

Methods

Treatment of animals

Wildtype C57B6 male mice were purchased from the Jackson Laboratory (Bar Harbor, ME).
Upon arrival to the animal facility at Skidmore College, all mice (~4-6 weeks old) were housed individually with cage enrichment nest-lets and fed ad libitum chow and water for one week. Mice were then randomly assigned to either a HFD (Test Diets, Catalog #1810251, 60% kcal from fat) or a low fat diet (LFD) (Test Diets, Catalog #58145, 12% kcal from fat). Following 15 weeks of the dietary intervention, HFD-fed mice were body weight-matched-assigned to a CO treatment group (HFD-CO) or a control group. Mice from the HFD-CO group were placed in individual induction chambers where they received CO gas (250 ppm) for 60 min per day for 40 consecutive days [17,18]. This CO concentration is slightly higher than what has been shown to increase carboxyhemoglobin levels [9]. Induction chamber CO levels were confirmed by a CO monitor. Control mice from the HFD and LFD groups were placed in the induction chamber for 5 min daily and received air. During the intervention mice were maintained on their HFD or LFD. All animal care and surgery were conducted in accordance with the National Research Council's Guide for Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, Commission on Life Sciences, 1996). All experimental protocols were approved by Skidmore College's Institutional Animal Care and Use Committee.

In vivo insulin action

To assess the effects of a HFD and CO treatment on in vivo insulin action, mice were subjected to an insulin-assisted glucose tolerance (IAGT) test. Prior to the IAGT test mice were fasted overnight. Following the overnight fast, mice were simultaneously administered glucose (2 g/kg body weight) and insulin (2 U/kg body weight) via intraperitoneal injection. Glucose was measured by a glucometer (ACCU-CHEK Aviva, Roche Diagnostics) in blood collected via the tail vein at 0, 20, 40, and 60 min following the glucose/insulin injection. We have previously shown that the IAGT test detects insulin resistance as well as an insulin tolerance test in mice fed a HFD, but avoids the severe hypoglycemia that is typically observed during and after insulin tolerance testing [19].
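For readers who want the abstract's "area under the insulin-assisted glucose tolerance curve" made concrete, the sketch below computes an AUC from the four sampling times above using the trapezoidal rule, a common choice for tolerance-test data, though the paper does not state which rule it used. The glucose values are invented for illustration.

```python
# Sketch of an area-under-the-curve calculation for the IAGT test.
# Blood glucose is sampled at 0, 20, 40 and 60 min after injection.

def auc_trapezoid(times_min, glucose_mg_dl):
    """Area under the glucose curve, in mg/dL x min, by the trapezoidal rule."""
    area = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_min, glucose_mg_dl),
                                  zip(times_min[1:], glucose_mg_dl[1:])):
        area += 0.5 * (g0 + g1) * (t1 - t0)
    return area

times = [0, 20, 40, 60]
lfd_mouse = [110, 180, 150, 120]   # hypothetical insulin-sensitive response
hfd_mouse = [130, 280, 260, 220]   # hypothetical insulin-resistant response

print(auc_trapezoid(times, lfd_mouse))  # 8900:  smaller area, better insulin action
print(auc_trapezoid(times, hfd_mouse))  # 14300: larger area, insulin resistance
```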
Surgical procedures

Prior to harvesting tissues, mice were fasted overnight and then anesthetized with a 1:1:1 mixture of promace, ketamine hydrochloride, and xylazine by an intraperitoneal injection (0.015 ml/10 g body weight). Epididymal white adipose tissue (EWAT) was rapidly dissected, frozen in liquid nitrogen, and stored at -80°C until analysis.

EWAT RNA extraction and real time quantitative PCR

Total RNA was extracted from EWAT using RNA extraction kits for high lipid content tissue (Qiagen). RNA was quantified by measuring absorbance at 260 nm using a spectrophotometer (Beckman Coulter, Brea, CA). A 1 µg aliquot of total RNA was reverse transcribed using the RETROscript kit from Ambion (Austin, TX). The resultant cDNA (20 ng cDNA/sample in duplicate) was then subjected to quantitative polymerase chain reaction (qPCR) using standard target specific TaqMan gene expression assays for Rev-erb-α (Assay ID: Mm00441730_m1), HO-1 (Assay ID: Mm00516005_m1), peroxisome proliferator activated receptor gamma (PPARγ) (Assay ID: Mm01184322_m1), PPARγ coactivator-1 alpha (PGC-1α) (Assay ID: Mm01208835_m1), and Bmal1 (Assay ID: Mm00500226_m1) and a real time PCR system (StepOne Plus Real-Time PCR System, Applied Biosystems, Foster City, CA). Relative quantitation of amplified cDNA targets was determined by the ΔΔCT method using StepOne v2.1 software (Applied Biosystems).

Statistical analysis

A one-way analysis of variance (ANOVA) was utilized to detect the effects of diet (LFD vs. HFD) and CO treatment on gene expression. A repeated measures one-way ANOVA was utilized to detect differences in blood glucose levels during the IAGT test. Following a significant F ratio and inspection of interactions, Fisher's LSD post-hoc test was used to locate statistically significant differences between groups. Data are expressed as means ± SEM, and the level of statistical significance was set at p < 0.05. For mRNA expression, means ± SEM were expressed relative to the LFD group mean (% LFD). Data were analyzed using StatView statistical software (SAS Institute Inc., Version 5.0, Cary, NC).
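The ΔΔCT step named above reduces to a short formula, spelled out in the sketch below. The reference gene and the CT values are placeholders, since the endogenous control gene is not named in the text.

```python
# Minimal sketch of the 2^(-ddCT) relative-quantitation step named in the
# methods. The endogenous control and all CT values here are hypothetical.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target gene by the delta-delta-CT method."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # compare to control group
    return 2.0 ** (-dd_ct)

# Hypothetical CT values: HO-1 in a HFD-CO mouse vs. a HFD control mouse.
print(fold_change(ct_target_sample=24.0, ct_ref_sample=18.0,
                  ct_target_control=26.0, ct_ref_control=18.5))  # ~2.8-fold induction
```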
Effect of CO treatment on adiposity

To assess the effects of a HFD and CO treatment on adiposity, we measured body weight and EWAT mass. As expected, a HFD produced a significant increase in body weight and EWAT mass compared to mice fed a LFD (Figures 1A and 1B). Following the 15 week dietary intervention, HFD mice were body weight-match-assigned to a CO treatment group or a control group (room air) and maintained on the HFD. In HFD mice, treatment with CO (1 hr/day) did not significantly change body weight or EWAT mass (Figures 1A and 1B), indicating that CO does not reverse pre-existing diet-induced obesity.

Effect of CO treatment on insulin action

In vivo insulin action was determined by conducting IAGT tests, which detect diet-induced insulin resistance at more physiologically relevant blood glucose levels than insulin tolerance tests. Blood glucose values were higher during the IAGT test in mice fed a HFD compared to a LFD, indicating the presence of diet-induced insulin resistance (Figures 2A and 2B). In HFD mice treated with CO, blood glucose levels during the IAGT test were similar to those of HFD mice treated with room air (Figures 2A and 2B), indicating that CO treatment does not improve insulin resistance in mice with pre-existing diet-induced obesity.

Effect of CO treatment on HO-1 mRNA expression

Because CO treatment, whether administered by gas inhalation or CO releasing molecules, increases HO-1 expression and activity [8,17,20], we used HO-1 mRNA expression in EWAT as a positive control to demonstrate that our CO gas treatment regimen was effective in adipose tissue. As expected, treating mice with CO gas (250 ppm, 1 hr/day) for six weeks resulted in a significant increase in HO-1 mRNA expression in EWAT from mice fed a HFD (Figure 3).

Effect of CO treatment on the expression of Rev-erb-α targets

Rev-erb-α is a gas responsive transcriptional repressor that has been shown to regulate circadian rhythms, metabolism, and adiposity [10,13,14]. Because CO is thought to bind Rev-erb and promote transcriptional repression [10], we examined the mRNA expression of Rev-erb-α targets in EWAT from mice with pre-existing diet-induced obesity that were treated with CO for six weeks. CO treatment did not alter the expression of the Rev-erb-α targets Bmal1, Pparγ, and Pgc-1α (Figures 4A-4C). Since Rev-erb-α represses its own expression, we assessed its mRNA expression in response to CO treatment and observed no significant changes (Figure 4A). These findings indicate that in vivo CO treatment does not alter the expression of established Rev-erb-α targets in mice with established diet-induced obesity.

Discussion

The present study examined the effect of chronic CO treatment on adiposity, insulin action, and the expression of Rev-erb-α targets in mice with established diet-induced obesity. We observed no significant changes in adiposity, as assessed by body weight and EWAT mass, in HFD-fed mice that were treated with CO compared to HFD-fed mice that were treated with air. Further, chronic CO treatments did not improve HFD-induced insulin resistance, as blood glucose values during the IAGT test were almost identical between HFD-CO and control HFD-fed mice. Finally, the expression of Rev-erb-α targets was not altered in EWAT of mice that received daily CO treatment. However, CO treatment increased HO-1 mRNA levels in adipose tissue, thereby uncoupling the relationship between HO-1, insulin sensitivity, and weight loss [3]. CO is a product of HO-1 enzymatic activity and is thought to mediate the improved insulin action and weight loss observed when the HO-1 gene is overexpressed [3-6]. In this context, the most interesting finding of the present study is the observation that CO treatment increased the expression of HO-1 mRNA without altering insulin action or adiposity. Our present observation is unique and differs from studies demonstrating that increasing HO-1 expression by metalloporphyrin administration results in a reduction in body weight [3,21,22]. Importantly, it appears that the ability of metalloporphyrins to decrease adiposity and improve insulin sensitivity is dependent on HO-1 induction, because the HO-1 inhibitor, stannous mesoporphyrin, abolished the metalloporphyrin-mediated decrease in body weight [3]. However, other studies support our lack of an effect of CO mediated increases in HO-1 mRNA expression on insulin action and obesity. For example, overexpression of HO-1 in adipose tissue does not alter HFD-induced obesity or insulin resistance in mice [23], and HO-1 expression was a strong predictor of metabolic disease in humans [24]. The present study provides additional evidence that inducing HO-1 expression, at least by CO inhalation, is not sufficient for improving insulin action or reducing adiposity. Our results showing that chronic CO inhalation does not improve HFD-induced obesity and insulin resistance may be related to how CO was administered. Hosick et al. demonstrated that carbon monoxide releasing molecules promote weight loss and improve insulin action [8]. Although 10 weeks of CO inhalation appears to reduce body weight, this effect is transient and completely absent at 35 weeks [9].
Interestingly, the present results show no effect of approximately six weeks of daily CO inhalation (1 hour/day for 40 days) on body weight or insulin action in mice with established obesity. It is difficult to reconcile the differences between our findings and those of Hosick et al. [8,9]. Since some of the beneficial effects of CO are mediated by an increase in reactive oxygen species [25], this response may be abrogated in mice with established weight-stable obesity because of the already high levels of ROS [26]. Finally, the lack of a CO effect in the present study and that of Hosick et al. [9] may be related to the delivery of the gas by inhalation rather than via CO releasing molecules [8], particularly since CO gas is primarily transported bound to hemoglobin whereas CO releasing molecules reach tissues independent of hemoglobin. However, we observed a significant increase in HO-1 expression in adipose tissue following the CO inhalation treatment, indicating effective CO delivery to cells [8,17,20], although we did not directly measure tissue CO content or assess markers of CO action in other tissues. Future work is needed to determine the mechanism for the different responses between CO delivered by inhalation compared to carbon monoxide releasing molecules.

The present study also tested the novel hypothesis that CO would alter Rev-erb-α dependent gene expression. Rev-erb-α is a nuclear receptor and is thought to be a gas responsive transcription factor that regulates circadian rhythms and metabolism [10,13,14,27]. Recently, a Rev-erb agonist has been shown to promote weight loss and suppress PGC-1α mRNA expression, an established target of Rev-erb-α [14]. Previous work demonstrated that E75, a Drosophila orthologue of human Rev-erb-α, binds CO and NO [10]. Subsequent work in cultured HEK 293T and HepG2 cells confirmed that CO and NO bind Rev-erb-α; however, CO only had modest effects on the expression of Rev-erb-α targets compared to NO [27]. The present study clearly demonstrates that CO inhalation does not alter the expression of Rev-erb-α targets in adipose tissue of obese mice. It is unlikely that the gas delivery method explains the ineffectiveness of CO treatment on the expression of Rev-erb-α targets, because CO inhalation has been shown to stimulate PGC-1α dependent gene expression in skeletal muscle and anti-inflammatory signaling in liver [17,28]. Furthermore, we observed an increase in HO-1 mRNA expression in adipose tissue in HFD-fed mice that were treated with CO. However, it is possible that obese mice are resistant to CO action, as we have observed a significant decline in insulin action and PGC-1α in adipose tissue of lean mice that were treated with CO for 7 days (unpublished data).

Conclusion

We have shown that CO treatment does not reverse established diet-induced obesity and insulin resistance despite an increase in HO-1 mRNA levels. Furthermore, CO treatment does not alter the expression of Rev-erb-α targets or Rev-erb-α itself. Future studies examining the effects of CO releasing molecules on established weight-stable obesity and Rev-erb-α are warranted. Because NO binds Rev-erb-α with greater affinity than CO, future studies that interrogate the effect of nitrate donors on adiposity and the expression of Rev-erb-α targets are needed.

Authorship and contributorship

THR is the principal investigator who designed and directed all aspects of the study. TB, RM, and BS all contributed to the manuscript by conducting experiments, analyzing data, and creating figures.
TB and RM assisted in preparing the manuscript for submission.
2019-03-16T13:06:40.530Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "0ad3d6a939f92e219e1f079e9f889418832b1f69", "oa_license": "CCBY", "oa_url": "https://www.oatext.com/pdf/IOD-2-152.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8925b39d752bbeb85faeb1346f6135a4cdf96b5d", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
138200173
pes2o/s2orc
v3-fos-license
Strain Aging Behavior of Microalloyed Low Carbon Seamless Pipeline Steel

Qianlin WU,1) Zhonghua ZHANG2)* and Yaoheng LIU2)
Strain aging behavior of microalloyed low carbon seamless pipeline steel with normalized (ferritic-pearlitic) and quenched-and-tempered (ferritic-cementitic) microstructures has been investigated at different pre-strains at 250°C. The yield ratios of the steel with ferritic-pearlitic microstructure at different pre-strains are significantly lower than those of the steel with ferritic-cementitic microstructure. This is attributed to a stronger interaction between particles and dislocations in the quenched-and-tempered steel than in the normalized steel. Therefore, the strain aging resistance of the normalized steel is, to a great extent, better than that of the quenched-and-tempered steel. Unlike in welded pipe, few carbon atoms in supersaturated solid solution diffuse to the mobile dislocations in seamless pipe, so Cottrell atmosphere formation contributes little to the strain aging phenomenon. This difference is attributed to the different pipe making techniques: TMCP (Thermo-Mechanical Control Process) for welded pipe and traditional heat-treatment for seamless pipe.

KEY WORDS: microalloyed low carbon steel; seamless pipe; strain aging; microstructure; mechanical properties.

Introduction

Ocean oil exploitation has become more and more important as oil demand rapidly increases and reserves on land are reduced. It is well known that pipeline is one of the most important oil transportation tools in ocean oil exploitation. Whether or not it is safe will have a direct influence on the production of the petroleum industry. The vibration of pipeline is inevitable due to the action of the waves and currents in the sea, resulting in the occurrence of deformation of the pipeline. Meanwhile, oil pipeline is often heated by high temperature oil from heavy oil thermal recovery technologies in ocean oil exploitation. This means that the strain aging phenomenon will be caused during oil transportation. In addition, pipeline steel is also subjected to natural aging over a long time period at room temperature, resulting in changes in the microstructure and mechanical properties after long-term service.1,2) Therefore, pipeline steels are critically required to have not only high strength but also excellent deformability in ocean oil exploitation. Besides the marine environment, to meet other severe environments such as earthquake, landslide, debris flow, etc., pipeline steels are also required to have excellent deformability.3)

The effects of strain aging on mechanical properties can include the increase of yield strength and yield ratio, and the decrease of toughness and ductility. In view of its engineering significance, strain aging has drawn more and more attention from scientists and steel industries, particularly for strain-based design applications.4-8) Compared with welded pipe, seamless pipe has been increasingly applied for oil transportation in severe environments, for example the marine environment, due to the homogeneity of mechanical properties without welded joints. At present, increasing demand for the strain aging design of seamless pipe has arisen in steel industries. However, strain aging behavior has been extensively investigated in welded pipes,3-5,9,10) but rarely in seamless pipes. To date, only very few investigations have been performed on the effect of various microstructures on the strain aging behavior of pipe. Thus, the aim of this work is to understand the role of the material structure in the strain aging behavior of microalloyed low carbon steel subjected to varied heat-treatments.

Experimental Procedure

A microalloyed low carbon steel was provided by Baoshan Iron & Steel Co., Ltd., with the composition listed in Table 1. The steel was melted in a vacuum furnace, cast into ingots and hot-rolled into a 10 mm thick plate. The plates were heat-treated; the heat-treatment methods and corresponding microstructures are shown in Table 2. The plates were pre-strained at room temperature on a Shimadzu tensile testing machine to reach tensile strains of 2.5%, 5.0% and 7.5%, respectively. The amount of strain was measured by an extensometer attached to the sample. After that, all samples were unloaded and aged at 250°C for 1 h. The tensile specimens and Charpy impact specimens were prepared in accordance with the relevant ASTM standards. Round tensile specimens with a gauge diameter of 8 mm and length of 50 mm were obtained along the longitudinal direction. For Charpy impact tests, Charpy V-notch impact specimens with a size of 10 mm × 10 mm × 55 mm were cut along the transverse direction. In this study, each value of a mechanical property was obtained as the average of two tests. Vickers hardness tests were performed at room temperature under a 0.2 kg load. Microstructural evaluations of test samples were carried out using EVO MA25 scanning electron microscopy (SEM) and JEM 2100F transmission electron microscopy (TEM) equipped with energy-dispersive X-ray spectroscopy (EDX). The TEM thin foils of 3 mm diameter were prepared by the twin-jet polishing technique with a solution containing 4% perchloric acid and 96% ethanol at -30°C.

Microstructure

The microstructures of X52NS and X65QO before strain aging are shown in Fig. 1.
The microstructure of X52NS consists of pre-eutectoid ferrite and pearlite, and no apparent particles are observed in the microstructure (see Figs. 1(a)-1(b)). Details of the microstructure of X52NS are further examined by TEM, as shown in Figs. 1(c)-1(d). It can be seen that there are two types of particles in the pre-eutectoid ferrite: one is square-shaped with sizes ranging over 50-200 nm and the other is spherical nano-particles. Microanalysis has been performed on these two types of particles and the results reveal that the first type is (Ti, Nb)C and the second is V-rich carbide, as shown in Fig. 2. For pearlite, no particles are seen in the eutectoid ferrite and the typical lamellar cementite is shown clearly. Unlike X52NS, the microstructure of X65QO consists of polygonal ferrite, lath ferrite and particulate precipitates distributed within the grains and at the grain boundaries (see Figs. 1(e)-1(f)). EDX microanalysis shows that the more or less uniformly dispersed precipitates are particulate cementite. Details of the microstructure are further examined by TEM, as shown in Figs. 1(g)-1(h). Further TEM observations confirm the presence of particulate cementite with sizes ranging from 50 to 300 nm, as shown in Figs. 1(g) and 2(d). In addition, square-shaped (Ti, Nb)C and spherical nano-particle V-rich carbides are also revealed (Figs. 1(g)-1(h)).

SEM images do not exhibit any visible changes before and after strain aging. However, significant changes in the microstructure are revealed in TEM images. Bowing of dislocations resulting from the pinning effect of nano-particles (shown by arrows) is clearly seen in Figs. 3(a) and 3(c). The density of dislocations in the steels after strain aging increases significantly in comparison with that of the steels before strain aging, as shown in Figs. 1 and 3. It is noted that severe bowing and breaking of the lamellar cementite in pearlite for X52NS can be observed due to the large pre-strain, as shown in Fig. 3(b).

Mechanical Properties

The mechanical properties of all the samples are shown in Fig. 4 and Table 3. As seen in Figs. 4(a)-4(b), with the increase of pre-strain the strength increases remarkably, while the elongation and impact toughness decrease slightly. It is worth noting that the elongation of X65QO with 7.5% pre-strain dramatically decreases and reaches about 1% due to the occurrence of brittle fracture, as shown in Fig. 4(b). The yield ratios of X65QO with pre-strains of 2.5%, 5.0% and 7.5% are 0.963, 1.0 and 1.0, respectively, and the corresponding values for X52NS are 0.879, 0.947 and 0.970, respectively. Clearly, the yield ratios of X65QO at the different pre-strains are significantly higher than those of X52NS. It is well known that the yield ratio of a sample is a key factor for strain-based design applications, so the strain aging resistance of X52NS is, to a great extent, better than that of X65QO. Although the impact toughness of X65QO is higher than that of X52NS, the impact toughness of X52NS after strain aging (> 120 J) is sufficient for applications.

Discussion

The age hardening response of samples with varying amounts of pre-strain is shown in Fig. 5. It can be seen that the hardness of the two samples gradually increases with the increase of pre-strain and that the hardness of X65QO is higher than that of X52NS at the different pre-strains. The increase of hardness can mainly be attributed to an increase in the dislocation density in ferrite during the strain aging process.
The dislocation density in a pre-strained material can be evaluated from the relation of Ref. 11), in which ρ is the dislocation density, G is the shear modulus, υ is the Poisson's ratio, σ is the pre-strain stress and b is the Burgers vector. The value of σ can be determined from the microhardness indentations using the relation of Ref. 12), in which F and d are, respectively, the load and the indentation diameter during the microhardness test. The dislocation density determined with the help of these relations has been plotted against the amount of pre-strain in Fig. 6. The following constants have been used for the calculations: υ = 0.33; G = 8 × 10^10 N m^-2; b = 2.03 × 10^-10 m.13) Figure 6 reveals that the dislocation density of X65QO is higher than that of X52NS at the different pre-strains.

Several factors contribute to the observed change in hardness during aging of a pre-strained material. These include the amount of pre-strain (dislocation density), recovery and precipitation. In this case, recovery is insignificant due to the low aging temperature (250°C) and small pre-strain (≤7.5%),12) and no precipitation is observed in the microstructure after strain aging. Therefore, the change of hardness can mainly be attributed to the pre-strain during the strain aging process. In other words, the increase of hardness can mainly be attributed to an increase in the dislocation density in ferrite during the strain aging process. Clearly, it seems appropriate to employ these relations in estimating the dislocation density during the strain aging process.

In addition, it can be seen from Fig. 1 that the amount of particles in X65QO is remarkably larger than that in X52NS. Clearly, the interaction between particles and dislocations is stronger for X65QO during the deformation process in comparison with that for X52NS. Therefore, the strain aging response of X65QO is stronger than that of X52NS at the different pre-strains, which is why the yield ratios of X65QO at different pre-strains are significantly higher than those of X52NS. The dislocation density will increase with the increase in pre-strain from 2.5% and 5.0% to 7.5%. It is therefore concluded that the strength and yield ratio gradually increase with the increase of pre-strain.
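The equations of Refs. 11) and 12) do not survive in this copy of the text, so the sketch below substitutes two standard stand-ins and labels them as such: the Tabor relation (flow stress ≈ HV/3) and a Taylor-type relation σ = MαGb√ρ, using the constants quoted above. It should be read as an illustration of the kind of estimate plotted in Fig. 6, not as the paper's own equations.

```python
# Hedged sketch of a dislocation-density estimate from Vickers hardness.
# Stand-in relations (NOT necessarily those of Refs. 11 and 12):
#   Tabor:  flow stress sigma ~ HV/3
#   Taylor: sigma = M * alpha * G * b * sqrt(rho)
# G and b are the constants quoted in the text; M and alpha are assumed.

G = 8.0e10      # shear modulus, N/m^2 (from the text)
b = 2.03e-10    # Burgers vector, m (from the text)
M = 3.0         # Taylor factor for polycrystalline ferrite (assumed)
ALPHA = 0.3     # dislocation-interaction constant (assumed)

def flow_stress_from_hv(hv_kgf_mm2):
    """Tabor relation: flow stress in Pa from Vickers hardness (kgf/mm^2)."""
    hv_pa = hv_kgf_mm2 * 9.80665e6   # kgf/mm^2 -> Pa
    return hv_pa / 3.0

def dislocation_density(sigma_pa):
    """Invert sigma = M*alpha*G*b*sqrt(rho) for rho, in m^-2."""
    return (sigma_pa / (M * ALPHA * G * b)) ** 2

for hv in (222, 235, 250, 281):   # X65QO hardness values quoted in the text
    rho = dislocation_density(flow_stress_from_hv(hv))
    print(f"HV {hv}: rho ~ {rho:.2e} m^-2")
```

With these assumed constants the estimate rises monotonically with hardness, reproducing the qualitative trend of Fig. 6, although the absolute values depend strongly on the choice of M and alpha.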
In addition, it can be seen from Fig. 1 that the amount of particles in X65QO is remarkably larger than that in X52NS. Clearly, the interaction between particles and dislocations during deformation is stronger for X65QO than for X52NS. Therefore, the strain aging response of X65QO is stronger than that of X52NS at every pre-strain, which is reflected in the yield ratios of X65QO being significantly higher than those of X52NS. The dislocation density increases as the pre-strain rises from 2.5% through 5.0% to 7.5%; it follows that the strength and yield ratio gradually increase with increasing pre-strain. During the controlled rolling stage of TMCP (Thermo-Mechanical Control Process) for welded pipe, carbon and nitrogen remain in supersaturated solid solution in the ferrite phase because of the rapid cooling after controlled rolling. Aging of the pre-strained material then allows the interstitial solute atoms to diffuse to the existing dislocations, forming Cottrell atmospheres and pinning them, which causes the strain aging phenomenon.3) In the present case, during the heat treatment of X65QO (quenched and tempered) and X52NS (normalized), essentially all carbon has precipitated in the form of cementite, (Ti, Nb)C and V-rich carbides, in accordance with the Fe-C binary phase diagram. This indicates that few free carbon atoms exist in the ferrite of X65QO and X52NS. Therefore, unlike in welded pipe, few carbon atoms in supersaturated solid solution diffuse to the mobile dislocations to form Cottrell atmospheres and produce the strain aging phenomenon in seamless pipe. This difference is attributed to the different pipe-making techniques: TMCP for welded pipe and traditional heat treatment for seamless pipe. The impact toughness can be related to the formation and propagation of microcracks in the matrix. For X52NS, the continuous lamellar cementite in pearlite provides fast-propagation paths, which significantly decrease the impact toughness. On the contrary, the many particles with high hardness, such as cementite, (Ti, Nb)C and V-rich carbides, uniformly dispersed in the soft ferrite matrix act as effective obstacles to crack propagation. This means that the impact toughness is higher for X65QO than for X52NS. For X52NS, strain aging can take place in the ferrite, which generally leads to a decrease in impact toughness. In addition, the continuous lamellar cementite in pearlite was damaged by the large pre-strain, producing the severe bowing and breaking of lamellar cementite in pearlite, which may reduce the fast-propagation paths for cracks in the matrix and result in an increase in impact toughness. Consequently, the impact toughness of X52NS can be either decreased or increased during the strain aging process, depending on which of these events dominates. In the present case, the impact toughness decreases only slightly after strain aging, because the damage to the continuous lamellar cementite in pearlite partly compensates for the aging-induced loss. After strain aging, the hardness of X65QO (initially 222 HV) with pre-strains of 2.5%, 5.0% and 7.5% increased to 235 HV, 250 HV and 281 HV, respectively. The increase in hardness can mainly be attributed to an increase in the dislocation density in the ferrite, thus producing a ferrite matrix of higher hardness. However, the increase in hardness of the ferrite matrix after strain aging is negligible in comparison with the hardness of the particles in the ferrite: cementite (950-1 050 HV), (Ti, Nb)C (3 200 HV) and V-rich carbides (2 100 HV). Therefore, the microstructure after strain aging consists of many high-hardness particles dispersed in a soft ferrite matrix, which is similar to that of X65QO without strain aging. Clearly, if the error bars are taken into account, the impact toughness of X65QO does not change appreciably for pre-strains of 2.5%, 5.0% and 7.5% after strain aging. Conclusions Static strain aging tests with different amounts of pre-strain were performed on a microalloyed low carbon steel with different microstructures. According to the changes in the microstructure and mechanical properties of the steel after strain aging, the main results can be summarized as follows: (1) The yield ratios of the steel with a ferritic-cementitic microstructure at different pre-strains are significantly higher than those of the steel with a ferritic-pearlitic microstructure, so the strain aging resistance of the normalized steel is, to a great extent, better than that of the quenched-and-tempered steel. (2) The interaction between particles and dislocations is stronger for the quenched-and-tempered steel than for the normalized steel. Therefore, the strain aging response of the quenched-and-tempered steel is stronger than that of the normalized steel at every pre-strain. (3) Unlike in welded pipe, few carbon atoms in supersaturated solid solution diffuse to the mobile dislocations to form Cottrell atmospheres and produce the strain aging phenomenon in seamless pipe. This difference is attributed to the different pipe-making techniques: TMCP for welded pipe and traditional heat treatment for seamless pipe.
C242T Polymorphism in CYBA Gene (p22phox) and Risk of Coronary Artery Disease in a Population of Caucasian Italians Background: specific polymorphisms of genes regulating intracellular redox balance and oxidative stress are related to atherogenesis. Some studies have identified a relationship between progression of atherosclerosis and the C242T mutation in the CYBA gene coding for p22phox, a subunit of the NADH/NADPH oxidase system. Design: we investigated whether the C242T nucleotide transition is associated with the presence of coronary artery disease (CAD) in a population of 494 Caucasian Italians undergoing coronary angiography to diagnose the cause of chest pain. Results: the frequency of the T mutant allele that we found in 276 patients with angiographically documented CAD was significantly higher compared to what we observed in 218 subjects with normal coronary arteries (Controls) (respectively: 0.400 and 0.332, p < 0.01). The prevalence of the T allele was even stronger when we compared: 1) early onset (age ≤55) vs late onset (age ≥65) single-vessel CAD patients (respectively: 0.75 and 0.48, p < 0.05), and 2) the subgroup of CAD patients with at least one ≥98% stenosis in a coronary vessel vs those with no ≥98% stenosis in a coronary vessel (respectively: 0.425 and 0.365, p < 0.05). Conclusions: these results support an increased risk of developing early CAD and of having rapid progression of coronary stenosis in subjects carrying the C242T nucleotide transition among the Italian population. Introduction A wide variety of pro-atherogenic functions have been attributed to elevated plasma and intracellular levels of reactive oxygen species (ROS), including altering endothelial cell (EC) function, promoting macrophage infiltration, and promoting smooth muscle cell (SMC) dysfunction. The pro-oxidant theory of the development of atherosclerosis proposed, and it has been demonstrated, that oxidative stress, which is often associated with elevated plasma lipid levels, induces EC dysfunction, accumulation of inflammatory cells, and a decrease in atheroprotective nitric oxide (NO) levels [2,17]. Superoxide anion (O2−) is a ROS that reacts with NO, the most potent endogenous vasodilator, which also has a well-known endothelium-protective property, producing peroxynitrite, a strong oxidant [3,16]. Furthermore, O2− is involved in the oxidation of LDL, which promotes a robust inflammatory burst at the level of atherosclerotic plaques [8]. It has been shown that oxidants regulate and promote the expression of genes directly involved in the pathogenesis of atherosclerosis through the activation of specific transcription factors, such as nuclear factor (NF)-kappa B. p22phox is a subunit of the NADH/NAD(P)H oxidase system, a sophisticated enzymatic complex that represents the main source of O2− in human blood vessels. The NAD(P)H oxidases of the cardiovascular system, which were first identified in phagocytes [9], are membrane-associated enzymes that catalyze the one-electron reduction of oxygen using NADH or NADPH as the electron donor. The p22phox subunit (encoded by the CYBA gene) is essential for the assembly and activation of the NAD(P)H oxidase, and plays a major role in NADPH-dependent O2− production in the vessel wall [22]. In addition to diet, physical activity, hormones, growth factors, and physical forces, the overall ROS production may be influenced by genetic factors.
The CYBA gene has a common functional variant that results in a substitution of Tyr for His at residue 72 (C242T) of p22phox, which could disrupt heme binding at the active site [18]. The presence of the 242T allele exerts a dominant effect, resulting in significantly reduced vascular NADH/NAD(P)H oxidase activity in both carrier genotypes, i.e., heterozygous CT and homozygous TT [13]. Initial evidence contrasted the low levels of p22phox in normal vessels with p22phox up-regulation in subjects with atherosclerosis and hypertension [1]. On the other hand, CT and TT patients exhibited an increased rate of angiographic worsening of the coronary vessels at a mean 2.5-year follow-up after a baseline coronary angiography [6]. However, methodological factors and different definitions of disease end-points may have contributed to these controversial results. Therefore, uncertainty still remains over whether this p22phox gene polymorphism actually influences atherogenesis and the later steps of progression of coronary atherosclerosis. Thus, in an attempt to determine whether the phenotypic impact of p22phox gene variation is associated with coronary artery disease (CAD) in Italy, we investigated the frequency of CYBA genotypes in a group of Caucasian Italians. The cohort was sub-divided on the basis of their coronary angiograms, and was made up of 276 patients with coronary atherosclerosis, defined as CAD Patients, and 218 healthy adults, defined as Controls. Since the genetic basis of CAD may have a direct effect on setting off the disease process, or a modifying effect on the development of the process once it has started, gene frequencies were also assessed according to single-vessel or multivessel disease as well as on the basis of age at clinical coronary artery disease onset. Study population We recruited 494 consecutive Caucasian Italians presenting at our University Hospital with typical chest pain of recent onset. They were referred to our Cardiology Unit to undergo coronary angiography. Angiographically documented CAD was found in 276 of them (CAD patients), while normal vessels were found in the remaining 218 subjects (Controls). Informed consent was obtained from all the participants, and the study was approved by the Ethics Board of the Genova University Hospital. Risk factor assessment included a physician-administered, pre-formed questionnaire about health behavior in order to acquire information regarding previous cardiovascular medical history, incidence of cardiovascular risk factors, and medication being taken at admission. Arterial hypertension was defined either by chronic treatment or, in untreated patients, by the finding of systolic blood pressure >160 mm Hg or diastolic blood pressure >90 mm Hg on two consecutive measurements. Diabetes mellitus was defined as fasting blood sugar >126 mg/dL, glycosylated hemoglobin >7.5%, or antidiabetic therapy, while hyperlipidemia was defined either by chronic treatment or by serum total cholesterol >220 mg/dL. Smoking habit included subjects who were current cigarette smokers (>2 pack-years) or who had quit smoking but had a previous habit of >2 pack-years. Specially trained nurses measured height, weight, body mass index (BMI) (kg/m²) and blood pressure using a standardized protocol at our Institute. A history of premature ischemic heart disease was defined as a cardiac event taking place in a first-degree relative and occurring at a relatively young age (i.e., ≤55 years).
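The clinical definitions above amount to a small set of screening rules that can be written down directly. The sketch below is a minimal illustration of that logic, assuming the thresholds given in the text; the record fields and the data structure itself are hypothetical and are not part of the study.

```python
# Minimal sketch of the study's risk-factor definitions.
# Field names (sbp_readings, fasting_glucose, ...) are illustrative only.

from dataclasses import dataclass

@dataclass
class Subject:
    on_antihypertensives: bool
    sbp_readings: tuple[float, float]   # two consecutive measurements, mm Hg
    dbp_readings: tuple[float, float]
    on_antidiabetics: bool
    fasting_glucose: float              # mg/dL
    hba1c: float                        # %
    on_lipid_lowering: bool
    total_cholesterol: float            # mg/dL
    pack_years: float

def hypertensive(s: Subject) -> bool:
    # Chronic treatment, or (if untreated) SBP > 160 or DBP > 90 mm Hg
    # on two consecutive measurements.
    return s.on_antihypertensives or (
        all(x > 160 for x in s.sbp_readings)
        or all(x > 90 for x in s.dbp_readings)
    )

def diabetic(s: Subject) -> bool:
    return s.on_antidiabetics or s.fasting_glucose > 126 or s.hba1c > 7.5

def hyperlipidemic(s: Subject) -> bool:
    return s.on_lipid_lowering or s.total_cholesterol > 220

def smoker(s: Subject) -> bool:
    # Current smokers (>2 pack-years) or former smokers with a >2 pack-year habit.
    return s.pack_years > 2
```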
Subjects were excluded if there was consanguinity within the study population, chronic (≥6-month) intake of antioxidants or dietary supplements, or a concomitant diagnosis of chronic co-morbidities (cancer, chronic renal or liver failure), since these conditions are likely to promote a robust alteration in the redox balance. Angiographic documentation of CAD in our study required the presence of at least one major epicardial coronary vessel with ≥70% luminal obstruction (CAD patients). Our CAD patients were further sub-grouped to detect whether the p22phox C242T polymorphism represented a risk factor for a specific subset of these patients. This was done on the basis of age at clinical onset of ischemic heart disease (≤55 years and ≥65 years), extent of coronary atherosclerosis (single-vessel vs multivessel disease), and lesion severity (presence of at least 1 coronary vessel with a ≥98% lesion). Our choice of the two age-based subgroups (≤55 years and ≥65 years) is arbitrary and only reflects our effort to separate patients with true premature onset of clinically overt coronary atherosclerosis from those without it. Controls were defined as healthy adults with normal coronary arteries at angiography, or with vessels containing irregularities causing <70% reduction in lumen diameter. Genotyping The CYBA genotypes were analyzed on genomic DNA isolated from 10 mL of EDTA-anticoagulated peripheral blood. Genotyping was performed by a polymerase chain reaction (PCR)-based method. Briefly, we carried out PCR amplification of a 348 bp fragment from our study subjects using primers described elsewhere [14]. Restriction fragment length polymorphism (RFLP) analysis was used to assay this polymorphic site in the p22phox gene. The C→T mutation in exon 4 of the CYBA gene produces an RsaI digestion site that yields 188 and 160 bp fragments, whereas RsaI does not cut the PCR product of the wild type. The digestion products were separated on a 2.5% agarose gel, and bands were measured with an image analyzer system (GeneGenius, Syngene, Cambridge, UK) and referred to a standard molecular weight ΦX174 Marker 9 (MBI Fermentas, Milano, Italy). Statistical analysis Data regarding age, BMI, serum fibrinogen, protein and HDL-cholesterol plasma values are presented as mean ± SEM. Differences between demographic details and categorical data in Table 1 were assessed by unpaired Student's t test and by Fisher's exact test. Analysis of variance (ANOVA) was used to assess the association between genotypes and baseline characteristics. Chi-square analysis was used to test deviations of the genotype distribution from Hardy-Weinberg equilibrium, and to determine whether there were any significant differences in allele or genotype frequencies between CAD patients and Controls. The criterion for statistical significance was set at p < 0.05. Baseline genotype in relation to environmental factors The C242T genotype was obtained from 276 CAD patients and 218 Controls. Neither group had a genotype distribution that deviated significantly from what was expected for a population in Hardy-Weinberg equilibrium (Controls: χ² = 0.092, DF = 1, p = 0.9; CAD Patients: χ² = 2.731, DF = 1, p > 0.3).
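Both χ² procedures named in the statistical-analysis section, the 1-degree-of-freedom Hardy-Weinberg test and the case-control comparison of allele frequencies, can be sketched in a few lines. The genotype counts below are hypothetical placeholders chosen only so that they reproduce the reported allele frequencies (T = 0.400 in CAD patients, 0.332 in Controls); the paper itself reports frequencies, not counts.

```python
# Sketch of the two chi-square procedures described above (SciPy assumed).
from scipy.stats import chi2, chi2_contingency

def hwe_chi2(n_cc: int, n_ct: int, n_tt: int) -> tuple[float, float]:
    """Chi-square test of Hardy-Weinberg equilibrium (1 degree of freedom)."""
    n = n_cc + n_ct + n_tt
    p = (2 * n_cc + n_ct) / (2 * n)              # frequency of the C allele
    q = 1 - p                                     # frequency of the T allele
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    stat = sum((o - e) ** 2 / e for o, e in zip([n_cc, n_ct, n_tt], expected))
    return stat, chi2.sf(stat, df=1)

def t_allele_freq(n_cc: int, n_ct: int, n_tt: int) -> float:
    """Frequency of the T allele from genotype counts."""
    return (n_ct + 2 * n_tt) / (2 * (n_cc + n_ct + n_tt))

# Hypothetical counts consistent with the reported frequencies.
cad = (100, 131, 45)        # CC, CT, TT among 276 CAD patients
ctl = (97, 97, 24)          # CC, CT, TT among 218 Controls

print("HWE (CAD):", hwe_chi2(*cad))
print("T allele:", t_allele_freq(*cad), "vs", t_allele_freq(*ctl))

# 2x2 table of C vs T allele counts, CAD patients vs Controls.
table = [[2 * cad[0] + cad[1], cad[1] + 2 * cad[2]],
         [2 * ctl[0] + ctl[1], ctl[1] + 2 * ctl[2]]]
stat, pval, dof, _ = chi2_contingency(table)
print(f"allele chi2 = {stat:.2f}, p = {pval:.4f}")
```

Counting two alleles per subject, as in the 2×2 comparison above, is the conventional allele-based test; the paper's Tables 2-4 may instead compare genotype categories (CC vs CT/TT), which would use the same chi2_contingency call on genotype counts.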
Demographic information for the Control and CAD populations is shown in Table 1. Both the proportion of male subjects and the mean age within the CAD patient group were significantly higher than what was observed in the Control group. The percentage of affected patients who had a personal history of arterial hypertension, diabetes mellitus, or hyperlipidemia was statistically higher than in the Control group. Similarly, mean values for serum fibrinogen, total protein and HDL-cholesterol differed between CAD patients and Controls. The presence of the C242T allelic variants in the 494 study subjects (either as a whole, or sub-divided into CAD patients and Controls) had no effect on the mean plasma levels of LDL-cholesterol, total cholesterol, or triglycerides, or on the incidence of major cardiovascular risk factors (personal history of arterial hypertension, hyperlipidemia, or diabetes mellitus, smoking status, or family history of ischemic heart disease) (data not reported). Baseline genotype in relation to CAD status When genotype results were stratified by CAD status (Table 2), we found an increased prevalence of the CT/TT genotype in CAD patients as compared to Controls, with T allele frequencies of 0.400 and 0.332, respectively (p < 0.01). Genetic polymorphism related to pattern of CAD Next, we examined whether the frequency of the C242T mutation among CAD patients differed between those with single-vessel (n = 121) and those with multivessel (two or three stenosed vessels; n = 155) disease. We found no significant relationship between the CT/TT genotype and the extent of CAD (p = NS). With regard to the pre-defined criterion of CAD lesion severity (lesion ≥98%), we demonstrated that more CAD patients with the CT/TT genotype (n = 110/154) had severe stenotic lesions than CAD patients with the CC genotype (n = 44/154) (p < 0.05), as reported in Table 3. Genetic polymorphism in CAD patients as related to time of onset of clinical coronary artery disease Lastly, we observed that within the subgroup of CAD patients with single-vessel disease, those with early clinical onset of ischemic heart disease (n = 36; age ≤55 years) had a significantly increased prevalence of CT/TT genotypes as compared to patients with late onset of ischemic heart disease (n = 48; age ≥65 years) (χ² = 5.404; DF = 1; p < 0.05) (Table 4). Discussion Over the last few years, it has been shown that in addition to proteins and lipids, even simple chemical molecules, such as O2− and other ROS, work as molecular switches for physiological and pathological cellular processes such as growth, migration, inflammation and apoptosis. Thus, regulation of the vascular NAD(P)H oxidases, which are a source of O2−, is potentially relevant to atherogenesis and to the progression of atherosclerotic lesions. Berry demonstrated that NAD(P)H oxidases and xanthine oxidases were a relevant source of O2− in arterial vessels [5]. The p22phox protein is essential for the assembly and activation of the NAD(P)H oxidase, and furthermore, it plays a major role in NAD(P)H-dependent O2− production in the vessel wall. We initially concentrated our efforts on characterizing the C242T polymorphism because p22phox expression in vascular tissue was found to increase as atherosclerosis progresses. Along the same lines, Schächinger demonstrated that the C242T polymorphism plays a role in regulating the endothelium-dependent vasodilator function. He suggested that there may be a link between a blunted flow-dependent dilation within normal and diseased vessels and the presence of the allelic CC genotype in 93 patients referred for routine diagnostic catheterization [19].
Accordingly, we decided to test whether the genetic "protective" effect of carrying T alleles was relevant with regard to decreased coronary atherogenesis and progression of atherosclerosis in a group of Caucasian Italians. Family studies are more powerful than transversal observations that include cases (CAD patients) and control subjects (Controls) [20]. We nevertheless chose a case-control-like study, even though it is vulnerable to spurious results, on account of the criteria we used to define the CAD phenotype for our study, i.e., angiographically documented coronary disease. Strikingly, our data suggest that: 1) bearing the CT/TT genotype is significantly associated with the presence of coronary atherosclerosis in our population; and 2) among CAD patients, the CT/TT genotype is also associated with premature onset of coronary atherosclerosis and with a more frequent finding of at least one ≥98% stenosis in a coronary vessel. Genetic polymorphism and angiographic parameters The finding that the T allele is related to worse coronary anatomy in our Italian subjects was unexpected. It is tempting to speculate that while enhanced production of ROS in the vascular endothelium is relevant to early atherogenesis, ROS generation may play a different role in the later steps of atherosclerosis. Reduced ROS production (associated with carrying the T allele) in the diseased vessels of patients with CAD may be sensed as a signal for reduced vascular SMC survival. In fact, some studies suggest that elevated ROS levels, and specifically O2−, are mitogenic to SMCs and stimulate pro-survival kinase systems, thus acting as both antiapoptotic and intracellular signaling molecules in order to maintain proliferation [2,11]. SMC-depleted plaques are more prone to progression through erosion or disruption of their caps [4,21]. If we consider that plaque progression is associated with clinical manifestations of coronary atherosclerosis, and that thicker fibrous caps protect against contact with the blood, then the decreased number of SMCs associated with reduced intracellular generation of ROS in subjects with lower NAD(P)H oxidase activity may be viewed as a potentially dangerous mechanism in a stage of progression towards occlusion of coronary vessels. Recently, mice lacking a subunit of the NAD(P)H oxidase complex (p47phox) were shown to exhibit an exacerbated pattern of inflammation as compared to wild-type mice [23]. Our finding of an increased number of CAD patients with a ≥98% lesion among carriers of the T allele is in line with this interpretation, and suggests that this polymorphism is associated with a specific pattern of CAD development. These data are consistent with a previous report which demonstrated increased angiographic progression of CAD at 2.5 years associated with the presence of this mutant allele [6]. In that study, the CAD population consisted of 313 subjects of the LCAS study. Those CAD patients had at least one coronary lesion causing 30% to 75% stenosis [6]. Similarly to what was reported by Cai et al. [7] in 689 Australian Caucasians, we were able to demonstrate that our Caucasian Italians with early clinical onset of ischemic heart disease were more frequently carriers of the T allele than late-onset patients. Since a genetically related component of CAD risk is more likely to trigger the disease at an early age, our observations imply that the C242T polymorphism is not neutral for the clinical expression of early-onset CAD. Our data, together with those reported by Cai et al.
[7] and by Cahilly et al. [6], are in contrast with previous investigations, which highlighted discrepancies regarding the vascular risk related to carrying T alleles. In fact, Inoue et al. reported that carrying T alleles has a protective effect on coronary risk [14], while Gardemann et al. [10] and Li et al. [15] did not confirm this finding. Ethnic variation certainly plays a role in the different genotype frequencies among these studies. Further confirmation of our findings is needed to understand whether p22phox modulation is an important mechanistic target that should be explored in order to develop novel therapeutic options to fight the progression of atherosclerosis. Limits of the study Our study includes results that were derived from subgroups with small numbers of subjects, and thus lacking in power. Larger study groups of CAD patients with premature clinical coronary atherosclerosis are required. In younger patients there is a lower likelihood that genotype combinations associated with impaired survival have introduced a bias into the prevalence estimates of genotypes. Complementary strategies that we intend to apply include performing prospective gene-association studies, in which unrelated healthy Caucasian Italians will be prospectively followed up over the years to establish whether this p22phox polymorphism is associated with a different incidence of cardiovascular events.
Relationships Between Exhaled Nitric Oxide and Atopy Profiles in Children With Asthma Purpose We examined whether fractional exhaled nitric oxide (FeNO) levels are associated with atopy profiles in terms of mono-sensitization and poly-sensitization in asthmatic children. Methods A total of 119 children underwent an assessment that included FeNO measurements, spirometry, methacholine challenge, and measurement of blood eosinophil count, serum total IgE, and serum eosinophil cationic protein (ECP). We also examined sensitization to five classes of aeroallergens (house dust mites, animal danders, pollens, molds, and cockroach) using skin prick testing. The children were divided into three groups according to their sensitization profiles to these aeroallergens (non-sensitized, mono-sensitized, and poly-sensitized). Results The geometric means (range of 1 SD) of FeNO were significantly different between the three groups (non-sensitized, 18.6 ppb [10.0-34.7 ppb]; mono-sensitized, 28.8 ppb [16.6-50.1 ppb]; and poly-sensitized, 44.7 ppb [24.5-81.3 ppb], P=0.001). FeNO levels were correlated with serum total IgE concentrations, peripheral blood eosinophilia, and serum ECP levels to different degrees. Conclusions FeNO levels vary according to the profile of atopy, as determined by positive skin prick test results to various classes of aeroallergens. FeNO is also moderately correlated with serum total IgE, blood eosinophilia, and serum ECP. These results suggest that poly-sensitized asthmatic children may have the highest risk of airway inflammation. INTRODUCTION Atopy and airway inflammation are fundamental features of asthma; however, the relationship between them is complex, and reports on their association are conflicting. 1 Atopy is defined as at least one positive reaction to allergens. The atopic population consists of two groups of individuals: those sensitized to only one class of aeroallergen (mono-sensitized) and those sensitized to multiple classes of allergen (poly-sensitized). 2 After it became clear that chronic airway inflammation characterizes asthma, interest in noninvasive methods for measuring airway inflammation rapidly increased. 3 Nitric oxide (NO) has been proposed as a marker of airway inflammation in asthma based on findings that fractional exhaled NO (FeNO) levels are higher in asthmatic children and correlate with the degree of eosinophilic inflammation in the airway mucosa. 4,5 FeNO appears to be a useful clinical tool for assessing airway inflammation in asthma, with the advantage of being noninvasive, and FeNO levels are increased in asthmatic children. 9 Several studies have indicated that both the presence and the degree of atopy are important factors for increased FeNO levels in asthmatic patients. 10 Because of the high prevalence of atopy among asthmatic patients, it is important to determine the influence of these atopic conditions on FeNO. Although atopic status itself has been found to be closely correlated with elevated FeNO levels, it is important to assess how strongly atopy affects FeNO levels in asthmatic children. The relationship between elevated FeNO levels and the degree of atopic sensitization requires clarification. To date, the relationship of both the presence of atopy and its profile, in terms of mono-sensitization and poly-sensitization, with FeNO, a validated indirect marker of airway inflammation, has not been clearly established.
We evaluated the possible relationships between the presence and profiles of aeroallergen sensitization and FeNO levels in asthmatic children. Study subjects A total of 119 children with mild to moderate asthma, aged 10-13 years, were enrolled. All subjects had a history of episodic wheezing and/or dyspnea during the previous year, which was resolved with bronchodilators. The clinical severity of the asthma was assessed according to the National Asthma Education and Prevention Program criteria. 11 Subjects were treated with inhaled short-acting β2-agonists on demand to relieve symptoms, with or without controller medications (inhaled corticosteroids, leukotriene receptor antagonists, or inhaled long-acting β2-agonists). All participants underwent a battery of tests, including FeNO, baseline spirometry, methacholine challenge tests, skin prick tests, and blood sampling at the Allergy Clinic and Environmental Health Center for Childhood Asthma of Korea University Anam Hospital. Subjects with concomitant allergic rhinitis, identified after careful review of medical records, were excluded. Also excluded were subjects with any history of symptoms suggestive of allergic rhinitis, such as recurrent sneezing, rhinorrhea, and nasal stuffiness or itching (with the exception of the common cold) during the year prior to the study, or in whom allergic rhinitis had been diagnosed by a physician. Patients with a history of near-fatal asthma, major exacerbations necessitating the use of systemic corticosteroids, or serious respiratory diseases other than asthma were also excluded. Parents gave written informed consent for their children to participate in the study. The study protocol was approved by the Institutional Review Board of Korea University Anam Hospital (No. ED12055). Pulmonary function test Spirometry (forced expiratory volume in 1 second [FEV1] and forced vital capacity [FVC]) was performed using a computerized spirometer (Microspiro-HI 298, Chest; Tokyo, Japan) in accordance with the recommendations of the American Thoracic Society. 12 All subjects were asked to perform spirometry in the standard manner and were required to have an FEV1 of at least 70% of the predicted value. Methacholine challenge test After obtaining baseline spirometry values, all subjects underwent bronchoprovocation with increasing concentrations of methacholine. The methacholine inhalation test was performed using a modification of the method described by Chai et al. 13 Patients had been free of acute respiratory tract infections and asthma exacerbations for a minimum of 4 weeks prior to the test. All patients were asked to discontinue inhaled short-acting β-agonists for 24 hours and inhaled long-acting β-agonists, leukotriene modifiers, and corticosteroids for 7 days prior to testing. Methacholine solutions (Sigma Diagnostics, St. Louis, MO, USA) were prepared at different concentrations (0.075, 0.15, 0.3, 0.625, 1.25, 2.5, 5, 10, and 25 mg/mL) in a buffered saline solution (pH 7.4). A Rosenthal-French dosimeter (Laboratory for Applied Immunology, Baltimore, MD, USA), triggered by a solenoid valve set to remain open for 0.6 sec, was used to generate aerosols from a DeVilbiss 646 nebulizer (DeVilbiss Health Care, Somerset, PA, USA) with pressurized air at 20 psi. Each patient inhaled five inspiratory capacity breaths of the buffered saline solution and increasing concentrations of methacholine at 5-min intervals.
FEV1 and FVC were measured 90 seconds after inhalation at each concentration, and the largest three FEV1 or FVC measurements were analyzed. The procedure was terminated when FEV1 decreased by more than 20% of its post-saline value or when the highest methacholine concentration (25 mg/mL) was reached. Percentage declines in FEV1 from the post-saline value were plotted against log concentrations of inhaled methacholine. The provocative concentration of methacholine (PC20) producing a 20% fall in FEV1 was calculated by interpolating between two adjacent data points. Measurement of FeNO FeNO was measured using a chemiluminescence analyzer (NIOX analyzer, Aerocrine, Sweden) during single-breath exhalation according to the ERS/ATS recommendations. 14 We followed the manufacturer's recommendations: inhalation of NO-free air to total lung capacity, immediately followed by full exhalation against a positive mouthpiece counter pressure at a flow rate of 50 mL/sec into an on-line chemiluminescence analyzer to avoid any nasal contamination. Three measurements were made, and were considered validated when <10% variability was obtained. Each subject had been free of acute respiratory tract infections for at least 4 weeks prior to the measurement. Skin prick testing Skin prick testing was performed using five classes of 13 common aeroallergens: house dust mites (Dermatophagoides pteronyssinus and Dermatophagoides farinae), animal danders (cat epithelium and dog epithelium), pollens (mugwort, ryegrass, ragweed, hazel, alder, and oak), molds (Aspergillus fumigatus and Alternaria alternata), and cockroach (Blatella germanica). The allergens were supplied by Allergopharma (Reinbek, Germany). A mean wheal diameter >3 mm in the absence of any reaction to the negative control was considered to indicate a positive reaction. 15 Atopy was defined as the presence of at least one positive reaction to these allergens. As for mono-sensitization and poly-sensitization, atopic subjects are frequently sensitized to more than one allergen belonging to clusters of allergen classes. 16,17 Thus, subjects sensitized to only one class of allergens were considered to be mono-sensitized, while those sensitized to two or more classes of allergens were considered to be poly-sensitized. Measurement of serum total IgE levels, peripheral blood eosinophil counts, and eosinophil cationic protein levels Serum total IgE levels were measured using a Coat-A-Count Total IgE IRMA (Diagnostic Products Co., Los Angeles, CA, USA) according to the manufacturer's instructions. The number of peripheral blood eosinophils was counted in blood samples containing EDTA using an automated hematology analyzer (Coulter Counter STKS, Beckman Coulter, Fullerton, CA, USA). Serum eosinophil cationic protein (ECP) levels were measured using a commercially available fluoroimmunoassay kit (Pharmacia ECP UniCAP System FEIA, Pharmacia Diagnostics, Uppsala, Sweden) with a detection limit <2.0 µg/L. Statistical analysis FEV1 and FVC are expressed as percent predicted values based on data from our local population. The values for FeNO, methacholine PC20, serum total IgE levels, blood eosinophil counts, and serum ECP levels were log transformed before statistical analysis. Data are presented as mean±SD or geometric mean (range of 1 SD), as appropriate.
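Two of the computations described in the Methods are easy to make concrete: the log-scale interpolation used to obtain PC20, and the geometric mean with a "range of 1 SD" obtained by back-transforming the log-transformed values. The sketch below assumes a log10 transformation and uses made-up measurements; none of the example data or helper names come from the study.

```python
# Sketch of two Methods computations: log-interpolated PC20, and geometric
# mean with a back-transformed 1-SD range. All numeric data are illustrative.
import math
import statistics

def pc20(concs, falls):
    """Interpolate, on a log10 concentration scale, the methacholine
    concentration producing a 20% fall in FEV1."""
    for (c1, f1), (c2, f2) in zip(zip(concs, falls), zip(concs[1:], falls[1:])):
        if f1 < 20 <= f2:
            logc = math.log10(c1) + (20 - f1) * (
                math.log10(c2) - math.log10(c1)) / (f2 - f1)
            return 10 ** logc
    return None  # FEV1 never fell by 20% up to the highest concentration

def geometric_mean_1sd(values):
    """Geometric mean and the back-transformed range of 1 SD."""
    logs = [math.log10(v) for v in values]
    m, s = statistics.mean(logs), statistics.stdev(logs)
    return 10 ** m, (10 ** (m - s), 10 ** (m + s))

doses = [0.075, 0.15, 0.3, 0.625, 1.25, 2.5, 5, 10, 25]   # mg/mL (protocol)
falls = [1, 2, 4, 6, 9, 13, 17, 24, 30]                    # % FEV1 fall (made up)
print("PC20 =", pc20(doses, falls), "mg/mL")

feno_ppb = [22, 35, 18, 51, 40, 29]                        # made-up FeNO values
print("GM (1 SD range):", geometric_mean_1sd(feno_ppb))
```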
The variables were compared between the three groups using one-way analysis of variance (ANOVA) and the post hoc Tukey's honestly significant difference test or the chi-square test for multiple comparisons, as appropriate. Correlations between FeNO and serum total IgE levels, blood eosinophil counts, serum ECP levels, or various pulmonary function parameters were calculated using a linear regression model. All statistical analyses were performed using the SPSS statistical package (SPSS Inc., Chicago, IL, USA). A P value <0.05 was considered to be statistically significant. RESULTS The clinical characteristics of the three groups are shown in Table 1. Of the 119 children, 32 were designated as non-sensitized, 46 as mono-sensitized, and 41 as poly-sensitized. The mean ages, BMIs, and the gender distribution were not significantly different between the three groups. The geometric means (range of 1 SD) of serum total IgE were significantly different between the three groups. Blood eosinophil counts and serum ECP concentrations were also different between the three groups, with marginal significance (P=0.011 and P=0.093, respectively). Table 2 shows the pulmonary function parameters of the subjects. The baseline FEV1, FVC, and FEV1/FVC levels were not significantly different between the three groups (P=0.781, P=0.862, and P=0.260, respectively). However, the geometric mean (range of 1 SD) of methacholine PC20 was significantly lower in poly-sensitized subjects (2.51 mg/mL [0.34-18.6 mg/mL]) than in the other two groups (mono-sensitized subjects, 6.46 mg/mL; see Figure). We also analyzed possible correlations between FeNO values and various biological markers of asthma. FeNO levels correlated significantly with serum total IgE levels and blood eosinophils, and displayed inverse relationships with FEV1, FVC, FEV1/FVC, and methacholine PC20 (Table 3). DISCUSSION We found significant relationships between FeNO levels and the degree of sensitization to aeroallergens in asthmatic children. We noted a weak but significantly greater elevation of FeNO levels in poly-sensitized asthmatics than in mono-sensitized asthmatics. FeNO levels were correlated with serum total IgE levels, blood eosinophils, serum ECP levels, and some spirometric parameters, to varying extents. These results are consistent with those of previous studies, and suggest that FeNO may be moderately correlated with the profiles of atopic sensitization and allergic inflammation. Although several studies have shown that asthma and/or atopy are closely related to FeNO, 9,10 these relationships are inconsistent in clinically selected samples. 18,19 A previous study of 222 asthmatic children showed that FeNO levels are elevated in some, but not all, children with atopic asthma. 7 Prasad et al. 8 demonstrated that FeNO levels are higher in atopic children than in non-atopic asthmatic children and non-atopic, non-asthmatic children in a study with a large pediatric asthmatic population. Furthermore, they showed that non-atopic children have no significant difference in FeNO levels whether they are asthmatic or not. Previous studies suggested that the degree of atopy is associated with FeNO levels, and that increasing FeNO levels are related to the number of positive skin prick test results in asthmatic children. 20 However, Moore et al. 21 demonstrated that FeNO is not always associated with the number of positive skin test responses, blood eosinophils, or serum IgE levels in asthmatics.
FeNO displays even greater independence from asthma and asthma-like symptoms after controlling for atopy. The relationship between FeNO and asthma is complex, and the mechanisms for increased FeNO levels require further investigation. We divided the asthmatic children into three groups according to their different profiles of atopic sensitization. It was hypothesized that atopic sensitization could be a marker of airway inflammation in asthmatic children. This hypothesis is supported by a previous study, which reported correlations between serum total IgE levels and FeNO concentrations in asthmatics. 7 However, it is problematic to explain airway inflammation in asthmatics by the presence of atopy alone. The degree of atopic sensitization is more important than simply the presence of atopy in further elucidating the relationship between FeNO levels and atopy. Figure. Mean and percentile (10th, 25th, 75th, and 90th) distributions of FeNO in non-sensitized, mono-sensitized, and poly-sensitized asthma groups. We also found a positive relationship between FeNO levels and atopy profiles, which is in agreement with previous studies. The relationship between FeNO levels and the degree of atopic sensitization potentially indicates that allergen sensitization may contribute to atopic airway inflammation in asthmatic subjects. A significant correlation between atopy scores and the severity of exercise-induced bronchoconstriction, which has been shown to be correlated with markers of eosinophilic airway inflammation, 22 also supports our contention that the degree of allergen sensitization may contribute to allergic airway inflammation. In contrast, Silvestri et al. 23 did not find significant differences in FeNO levels between mono-sensitized and poly-sensitized asthmatic children. This discrepancy may be explained by the small number of mono-sensitized children in that study. Our results indicate that aeroallergen sensitization clearly plays an important role in determining FeNO levels in asthmatic children. Thus, it could be speculated that airway eosinophilic inflammation may increase with the degree of atopy. Although the mechanism by which an increase in the degree of atopic responsiveness induces a rise in FeNO is not fully understood, it is becoming evident that NO production correlates more specifically with airway inflammation where eosinophilia and clinical atopic features dominate. 8 An increasing degree of atopic responsiveness leads to greater cellular activation with inflammation, and consequently upregulated production of inducible nitric oxide synthase (iNOS). 4 Elevated FeNO levels are thought to result from increased expression and activity of iNOS in airway epithelial and inflammatory cells. 24,25 In atopic asthmatics, the degree of atopic sensitization appears to reflect systemic (blood eosinophils) and organ-specific (FeNO) markers of allergic inflammation. 7,26 In the present study, the elevated FeNO concentrations in the poly-sensitized subjects reflected the presence of clinical inflammation and induction of iNOS by inflammatory cytokines. 27 Inhaled allergens are thought to be an important cause of ongoing inflammation in the lungs of sensitized children who develop asthma. 28 In the present study, exposure to inhaled allergens in the sensitized groups may have contributed to the elevated FeNO.
FeNO is significantly correlated with atopy in children with recent exposure to allergens, independent of symptoms; during remission, asymptomatic adolescents with atopic asthma show elevated FeNO levels with increased eosinophilic activity in biopsies of the bronchial mucosa. 29 Airway eosinophilia is a common feature of atopy, and FeNO levels may determine the clinical expression of atopy, because atopy can be considered an immune disorder associated with increased airway inflammation. 30 In contrast, we found an increasing tendency towards higher ECP in poly-sensitized than in mono-sensitized subjects, without statistical significance. Our borderline association between serum ECP concentrations and FeNO levels suggests that both markers participate in eosinophilic inflammation in asthmatics. ECP is a potent biomarker of asthma and has been used to assess inflammation in asthmatic children. FeNO has been reported to be well correlated with ECP concentrations in sputum 4 or in bronchoalveolar lavage fluid. 5 Thomas et al. 4 reported that FeNO levels were significantly correlated with sputum ECP concentrations in a cohort of Australian children and had a positive relationship with sputum eosinophilia. In contrast, Piacentini et al. 18 found no significant correlation between FeNO levels and serum ECP concentrations. Furthermore, although serum ECP levels are higher in children with atopic asthma, the wide ranges are likely to limit the relevance of measuring serum ECP levels in children as a guide to the diagnosis or management of asthma. 31 The measurement of FeNO is easier and more reliable than that of serum ECP in pediatric patients, because the effort and coordination required are minimal and the measurement is more organ-specific. Indeed, FeNO is a more sensitive indicator of disease severity than circulating markers of inflammation, such as serum ECP. 27 In the present study, the non-sensitized subjects showed relatively high serum total IgE levels. It should be acknowledged that atopy is not always detectable by skin tests. We did not obtain sensitization profiles for food allergens or use other definitions of atopy, such as specific IgE levels or atopic scores. Therefore, it is possible that some of our non-sensitized subjects may have belonged in a sensitized group. In addition, we enrolled only mild to moderate asthmatic subjects, rather than those with the most severe disease. However, asthma severity itself might be associated with the degree of sensitization or FeNO levels; investigators should be aware that asthma severity is an important confounding factor. Moreover, although our subjects were asked to cease medication 1 week prior to the test, FeNO could still be affected by asthma medications. Some investigators have reported elevated FeNO levels in subjects with allergic rhinitis who are not asthmatic. 19,32 In the present study, this is unlikely to have influenced our results, because we excluded asthmatic patients who had concomitant allergic rhinitis symptoms. In summary, FeNO levels were associated with both the presence and the profiles of atopic sensitization, as determined by positive skin prick test results to various classes of aeroallergens in asthmatic children. Meaningful interpretation of FeNO may only be possible when the presence and the degree of sensitization are considered. 33
FeNO is a good biomarker for the severity of atopic inflammation, which indicates that poly-sensitized and/or strongly sensitized children are at high risk of airway inflammation. ACKNOWLEDGMENTS This study was supported in part by grants from the Environ-
Implementation Barriers of Multidisciplinary Care in Chronic Kidney Disease Through a CFIR Framework: a Narrative Review Introduction 37 million Americans suffer from chronic kidney disease, which affects multiple organ systems and requires multidisciplinary care. Multidisciplinary care is an inherently broad and complex topic, and while it is being implemented across health care in the United States and abroad, multidisciplinary care outcomes are poor in this patient population. It is possible that there exist gaps in the literature regarding implementation and replication of multidisciplinary care interventions such that health care practices are unable to fully take advantage of multidisciplinary care publications for chronic kidney disease. This narrative review utilizes the five domains of the Consolidated Framework for Implementation Research to address barriers to multidisciplinary care implementation for chronic kidney disease. Methods A systematized review of peer-reviewed literature, including systematic reviews and meta-analyses related to chronic kidney disease and multidisciplinary care through January 1, 2021, was conducted. The five interventions with the most barriers qualitatively identified were analyzed. Results Twelve potentially eligible reviews were identified, and 5 unique systematic reviews and meta-analyses were selected for a total of 48 articles; ultimately, 5 articles were selected for inclusion. Based on the Consolidated Framework for Implementation Research, which comprises 5 domains of barriers, we discuss barriers to implementation in all 5 domains within the 5 articles. Discussion Because it is essential that multidisciplinary care for patients with chronic kidney disease be improved and implemented to the fullest extent, researchers should be aware of barriers to implementation and publish results by taking into account the Consolidated Framework for Implementation Research. We would like to thank Professor Jennifer Campbell for her guidance on implementation science and chronic kidney disease. BACKGROUND It is estimated that more than 1 in 7, or 37 million, United States adults suffer from chronic kidney disease (CKD). 1 CKD occurs when the kidneys become damaged and are unable to filter electrolytes and toxins from the blood or to regulate extracellular water. Depending on the glomerular filtration rate (GFR), CKD is classified into five stages of progressively worsening kidney damage and clinical outcomes; it can cause or be caused by other devastating comorbidities and is associated with a worsening risk of death. 2,3 The most common causes of CKD include, but are not limited to, hypertension, diabetes, obstruction, malignancy, injury, and congenital conditions. The fifth CKD stage is end-stage renal disease (ESRD), which is characterized by permanent failure of the kidneys to function. Almost 800,000 adult patients are treated annually for ESRD in the United States, amounting to a prevalence of 2,382 per million. 1 Besides ESRD being a fatal disease, CKD is associated with multisystem complications, including cardiovascular, rheumatological, gastrointestinal, hematological, and neurological conditions, that reduce quality of life and life expectancy. 4,5 Because CKD affects the body on a multisystem level, its comorbidities and symptoms may be too complex for one specialist, such as a primary care physician or a nephrologist, to treat. Multidisciplinary care (MDC), or the practice of a team of various specialty healthcare workers, is encouraged for CKD patients and is being implemented widely across healthcare practices; however, there are debates about which forms of MDC are most effective, if any. As the following research studies demonstrate, it is difficult to implement effective MDC models from both a clinical and a research standpoint, which ultimately impacts clinical outcomes and quality of life in CKD patients. CKD is inherently multifaceted, which may explain why MDC outcomes are often poor; 6,7 in addition, many components of the very structure of designing and describing a CKD-MDC intervention present as barriers to its success. However, CKD-MDC research lacks studies that address how interventions fail to meet the needs of their stakeholders, including but not limited to CKD patients, healthcare practices, and the healthcare team, or how they fail to be adequately described in the literature. FRAMEWORK The Consolidated Framework for Implementation Research (CFIR) is a conceptual framework that guides analysis through the contexts of an intervention to identify factors that influenced its implementation and effectiveness. 8 CFIR was first published in 2009 and is most commonly used for complex health care delivery interventions to address barriers to implementation based on five domains: intervention characteristics, inner setting, outer setting, characteristics of individuals, and implementation process. Based on its description, CFIR is likely an effective model to address the knowledge gap faced in CKD-MDC research.
There are many frameworks within implementation science; another that was considered for this paper was Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM). RE-AIM provides practical, effective implementation planning for evidence-based interventions but lacks the ability to evaluate the ways in which implementation succeeds and fails. In one study that compared CFIR to RE-AIM in terms of their implementation planning processes for an asthma intervention, it was concluded that CFIR was capable of "explain(ing) why implementation succeeded or failed, and when used proactively, identifies relevant modifiable factors that can promote or undermine adoption, implementation, and maintenance." 9 CHARACTERISTICS OF THE INTERVENTION The first domain of CFIR is the characteristics of the intervention being implemented. This includes adaptability, as many interventions cannot be implemented into practice unless the authors provide a detailed methodology for others to attempt. Another component is the intervention source, or how key stakeholders came to develop the intervention, whether through stakeholders themselves or through previous research. Characteristics of the intervention are also best described through the evidence strength or quality; this is typically defined through the perceptions of stakeholders, anecdotes, quantitative data, or other publications. Stakeholders may also desire a comparison of the intervention to alternatives and previously used interventions. Since the objective of any implementation research is to inform decisions about further utilization, the complexity of the intervention, whether measured by the number of steps, overlapping points, teams, patient types, or other aspects that reflect the difficulty of implementation, should be described. A similar characteristic is trialability, or how easy or difficult it is to test an intervention on a small scale relative to the organization. Finally, the cost of the intervention, whether in monetary value, supply usage, opportunity cost, or time, should be mentioned; this is arguably the most important characteristic for investors, yet it is often not included in publications. OUTER SETTING The outer setting concerns the barriers that patients may face when interacting with healthcare interventions and organizations within the context of a community or society. This requires organizations to take as patient-centered an approach as possible without sacrificing quality care. A component of this is how connected or bridged an organization is to other external organizations, which is described as cosmopolitanism. An organization may also be more likely to attempt or implement interventions if it faces competition from other organizations in the healthcare market, or it may be compelled to attempt an intervention if mandated by external policies. These points should be mentioned in implementation research to provide readers context, especially since research participants in any clinical research experience many external structures. INNER SETTING Another domain is the inner setting, or the structures that influence and interact beyond the individual person within an intervention. Most notably, this includes the structural characteristics of an organization, such as how large it is, how established it is, and whether the organization is made up of divisions or is centralized. Other components include how the organization communicates internally, whether formally or informally, and whether hierarchy plays a role in how communication is done.
Culture is essential as well; the organization must encourage change to its method of operation for an intervention to succeed. This is different from the climate, or whether an organization has the appropriate priorities, policies, and learning aptitude for implementation. Typically, for implementation research to be completed, at least one of the aforementioned aspects of the inner setting must be in place, and describing it helps readers understand the organizational environment in which the intervention took place. INDIVIDUALS INVOLVED Unlike the inner setting, which involves the structure of an organization, this component focuses on an organization's employees, such as their roles, skills, beliefs, behaviors, and other personal attributes. Often, an organization either attracts or is impacted by the individuals who are employed there; so while the individuals and the inner setting can appear overlapping, they are actually different. For example, individuals involved must be familiar with the intervention and how to operate it, which is known as self-efficacy. Whether they are enthusiastic about the intervention or merely doing it as part of their job is also very important to the outcomes of an intervention, yet this is seldom discussed. Some individuals may even be resistant to the intervention, whether out of tradition or fear of being replaced or of the potential extra workload. These points are important to mention so that readers can get a sense of who was involved and how feasible it would be to replicate the intervention in their own practice. IMPLEMENTATION PROCESS The final component is the process of implementation. It may involve how an intervention is planned, whether stakeholders are considered, whether strategies are tailored to patients, and whether simulations are created. Engagement is also a part of the implementation process and includes carefully selecting members and leaders. Some people may require training, and it is useful to mention whether members volunteered for or were appointed to their roles. Carrying out the implementation is part of its execution and is best understood through fidelity. The last aspect of the implementation process is reflecting on and evaluating progress, successes, and failures, and giving and receiving feedback during and after implementation. OBJECTIVE In this systematized narrative review, CFIR will be utilized to identify barriers in five CKD-MDC intervention research publications to inform stakeholders on how they should change their approach towards maximizing outcomes. METHODS PubMed was searched from inception to January 1, 2021. The search was limited to articles in English. The search strategy used was (chronic kidney disease, end-stage renal disease OR chronic renal failure AND multidisciplinary care OR interdisciplinary care OR team-based care AND metaanalysis AND systematic review (meta-analysis[Filter] OR review[Filter] OR systematicreview[Filter])) AND (chronic kidney disease). Each article was screened by one reviewer for eligibility, screening the title followed by the abstract and then the full text. Meta-analyses or systematic reviews of MDCs that investigated the associations between their intervention and CKD-related outcomes were eligible. A reviewer searched within the reviews for studies that compared an intervention to a type of control, such as standard therapy or placebo; single-arm studies were excluded.
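The selection rule described above, keeping the five articles with the most barriers qualitatively identified under CFIR, amounts to tallying the coded barriers per domain and ranking the articles by their totals. The sketch below illustrates that bookkeeping only; the article labels and counts are hypothetical and do not reproduce this review's actual coding.

```python
# Illustrative tally of CFIR-coded barriers used to rank candidate articles.
from collections import Counter

CFIR_DOMAINS = (
    "intervention characteristics",
    "outer setting",
    "inner setting",
    "characteristics of individuals",
    "implementation process",
)

# article -> {CFIR domain: number of barriers coded during screening}
coded = {
    "article_A": {"intervention characteristics": 4, "inner setting": 2},
    "article_B": {"outer setting": 1, "implementation process": 3,
                  "characteristics of individuals": 2},
    "article_C": {"intervention characteristics": 1},
}

def total_barriers(domain_counts: dict) -> int:
    # Count only barriers that fall under one of the five CFIR domains.
    return sum(n for d, n in domain_counts.items() if d in CFIR_DOMAINS)

ranked = sorted(coded, key=lambda a: total_barriers(coded[a]), reverse=True)
top_five = ranked[:5]
print("selected:", top_five)

# Per-domain totals across the selected articles (cf. Table 1 of the review).
domain_totals = Counter()
for a in top_five:
    domain_totals.update(coded[a])
print(dict(domain_totals))
```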
OBJECTIVE

In this systematized narrative review, CFIR will be utilized to identify barriers in five CKD-MDC intervention research publications to inform stakeholders on how they should change their approach towards maximizing outcomes.

METHODS

PubMed was searched from inception to January 1, 2021. The search was limited to articles in English. The search strategy used was (chronic kidney disease, end-stage renal disease OR chronic renal failure AND multidisciplinary care OR interdisciplinary care OR team-based care AND metaanalysis AND systematic review (meta-analysis[Filter] OR review[Filter] OR systematicreview[Filter])) AND (chronic kidney disease). Each article was screened for eligibility by one reviewer, who assessed the title, then the abstract, and then the full text. Meta-analyses or systematic reviews of MDCs that investigated the associations between their intervention and CKD-related outcomes were eligible. A reviewer searched within the reviews for studies that compared an intervention to a type of control, such as standard therapy or placebo; single-arm studies were excluded. It did not matter if meta-analyses and systematic reviews utilized the same publications, as our analysis remained relevant to the outcomes of individual interventions and CFIR. If reviews highlighted similar interventions, the intervention with the largest number of patients was considered. It was decided prior to conducting the review that the five articles with the most barriers qualitatively identified under CFIR during screening would be analyzed. The search strategy identified 12 potentially eligible reviews. After the screening process, five unique systematic reviews and meta-analyses were chosen, which accounted for a total of 48 articles. 6,7,10-12 Based on the criteria above, 28 articles were excluded, which left 20 articles eligible. Ultimately, 5 articles were selected for inclusion for having the most identifiable CFIR barriers (see Figure 1). 13-17

ARTICLE RESULTS AND CFIR INTERPRETATIONS

Article results and interpretation of the barriers faced in each intervention will be discussed based on the 5 domains of CFIR and are also represented in Table 1. In Hemmelgarn et al. in 2007, patients with stage 3 or greater CKD were referred to an MDC, which included a specialized clinic nurse, a registered dietician, and a social worker, and were educated on the effects of medication, complications, fluid and diet, blood pressure, exercise, and more, with a focus on lifestyle modification and medical management. 13 The MDC group was almost 30 times more likely to initiate dialysis and was more likely to survive over 3.5 years, both unadjusted and when adjusted for factors such as age, gender, GFR, diabetes, and comorbidity score. Even when adjusted for hemoglobin and albumin, these results did not change. Whether unadjusted or adjusted for the aforementioned factors, however, there was no difference in the risk of hospitalization. The study therefore finds that an MDC of this type can improve survival outcomes but does not change the risk of hospitalization. In a 2005 investigation by Patel et al., 5% of adults diagnosed in 2002 with diabetes or hypertension in a primary care clinic network in Columbus, Ohio were screened for CKD based on two laboratory values for GFR. 14 If a patient was diagnosed with CKD, a clinical pharmacist reviewed the patient's medication records and, if a drug-related problem was found based on standardized criteria, recommended changes in the patient's chart. Altogether, 69% of screened patients had some stage of CKD, and the pharmacist found an average of 3.2 drug-related problems in 99% of CKD patients. Unfortunately, only 41% of the pharmacist's recommendations were accepted by the patients' physicians, often due to patient nonadherence, patient resistance, and prescriber preference. In Blakeman et al. in 2014, stage 3 CKD patients from 24 practices in the bottom 20% most deprived areas of England were randomized to be guided by a lay health worker in using a kidney information guidebook and a self-assessment tool/community resource booklet and website. 15 Patients were then phoned by a lay health worker one week later to help them identify needs and preferences and to offer local resources. One month later, patients were called again to see whether they had attempted the local recommendations and to assist them once more.
Altogether, there was a modest improvement in quality of life, as well as maintenance of blood pressure and a cost saving of around £175. In a 2011 study by Barrett et al., older adult patients with a GFR between 25 and 60 mL/min per 1.73 m² across five urban centers in Canada were randomized to meet every four months with a nurse team that was also connected to a nephrologist and a general practitioner. 16 All patients had the same aims, and all received annual lab tests, which were made available in their medical records. However, no difference of any kind was found between the intervention group and the control group. Even though satisfaction was very high, the results suggest that a nurse-coordinated model alone is inadequate for CKD care. Finally, in the study by Scherpbier-de Haan et al. in 2013, nine general practices in the Netherlands, which included 181 CKD patients, ran an MDC consisting of a nurse practitioner, a general practitioner, and a nephrology team. 17 The nurse practitioners and general practitioners were trained by a nephrology team on topics including, but not limited to, blood pressure measurement, blood-glucose management, and lifestyle advice, with the protocol based on the Kidney Disease Outcomes Quality Initiative guideline. Patients enrolled in the intervention saw a nurse practitioner every 3 months for 20 minutes over one year and worked on treatment goals and priorities. General practitioners supervised, and the two then consulted the nephrology team digitally. Altogether, blood pressure in the intervention group decreased by 8.1/1.1 mm Hg on average after one year, while in the control group it slightly increased by 0.2/0.5 mm Hg, and 44% of those in the intervention group reached their treatment goals compared to 21% in the control group. It was also found that the intervention group was placed on more lipid-lowering drugs, angiotensin-system inhibitors, and vitamin D than the control group, and that parathyroid hormone and low-density lipoprotein levels were lower in the intervention group than in the control group.

CHARACTERISTICS OF THE INTERVENTION

Of the five articles selected, all five had barriers related to the characteristics of the intervention, whether as a component of the methodology or identified through the results. In Hemmelgarn et al., 13 many characteristics of the intervention were not highlighted. While the authors said the MDC was adjusted per patient, no information was provided to support that statement, which calls the investigation's adaptability into question. Little information regarding the methodology or the strength of the evidence was provided either. No mention of cost was made, so potential stakeholders, including investors, would likely be hesitant to implement the results of this article as well. Similarly, while the results of the study by Scherpbier-de Haan et al. were promising, with strong evidence quality, 17 the lack of other intervention characteristics mentioned in the paper makes it difficult to replicate or implement again. How the training was conducted, the duration of the training sessions, and who was included in the nephrology teams were not described. The mechanism of the digital environment used between members was not discussed, nor was it explained whether certain consultations were meant to assist with goals or with future patient visits. The trialability of the intervention and its cost are also unknown. In contrast, Patel et al. mention some characteristics of the intervention. 14
The authors highlighted that their intervention did not appear to be costly from a resources perspective, nor was it complex, unadaptable, or difficult to trial. In even more detail, Blakeman et al. included a thorough description of many characteristics, including an intervention source that originated in work on other complex diseases, 15 in addition to descriptions of training the lay health workers, monitoring patients, and designing the guidebook and community resources. However, the defining feature noted in this article was cost. The paper mentioned that this intervention saved an average of around £175 per patient for the healthcare infrastructure, but it did not include how costly it was to train and employ the lay workers, or the cost of producing and maintaining the guidebooks and websites offered. Prospective stakeholders may question implementing this research, as £175 saved on average may not amount to enough given the complexity of this intervention. Additionally, a majority of the intervention characteristics in the paper by Barrett et al. were well addressed, such as design quality, in that very specific outcomes, including time and objective health measurements, were studied. 16 The intervention was also very adaptable for further implementation research, in that the authors set very specific health aims for their patients. The authors made a thorough attempt to describe the trialability of this intervention within urban centers, but no mention of cost was made beyond the time added to nurses' workloads. This can be seen as disadvantageous, as some stakeholders may want to understand how much money was spent implementing the intervention. However, financial stakeholders may be able to reach their own conclusions about the cost based on their own nurse salaries and the additional time nurses spent operating the intervention. Altogether, many of the barriers to CKD-MDC implementation research relating to the characteristics of the intervention lie not only in what the authors specifically identify but in what they fail to mention.

OUTER SETTING

Of the five articles included, only Blakeman et al. attempted an MDC intervention based directly on the outer setting. 15 While the researchers designed a kidney information guidebook, a community resource booklet, and a website with lay health workers for CKD patients, no formal agreement was made between the 24 English practices and the external resources, which makes the cosmopolitanism questionable. This represents a barrier in that established connections may be lacking between nephrology clinics and centers and the external resources that could potentially improve outcomes and save costs. A further concern is that few patients utilized local resources; in fact, fewer patients in the intervention arm utilized their own social networks compared to control patients. Given the results, it is noted that patients may be overconfident about their level of CKD self-management. Another explanation is that patients do not perceive community resources to be helpful, or that community resources are difficult to access. This patient population consisted of the lowest 20% socioeconomically in England, so access would be a potential concern. Overall, there are not many CKD-MDC interventions that utilize or acknowledge the outer setting.
This is unfortunate, as over half of late-stage CKD patients are non-adherent to treatment, which has been linked to the outer setting. 18

INNER SETTING

The inner setting is a common barrier to CKD-MDC research. A barrier found within the five studies was how infrequently the inner setting was described. For example, little was said about the structural characteristics of the investigation by Hemmelgarn et al. 13 Even though they mentioned that the clinic had almost 7000 patients, they did not describe what type of clinic it was or whether it served CKD patients only. This is arguably more important than the database they used to acquire patient information, which, by contrast, was well described as covering the Calgary Health Region and over 1.1 million residents' health information. No information can be gathered about how the organization communicates or is structured, nor about its culture or climate. Similar points can be made about the study by Scherpbier-de Haan et al. 17 This is in contrast to Patel et al., 14 who specifically identified a weak inner setting in their primary care clinic network, in that few physicians accepted recommendations from pharmacists. The results are partly due to the indirect method of communication between the organization's divisions, whereby communication took place through tabs in patients' electronic medical records. By describing the inner setting among the characteristics of the intervention, the authors could recommend structural changes to patient monitoring and greater collaboration between the physician and pharmacist divisions. The paper also included a specific section about the setting of the primary care network, including whom it serves, how many patients it sees, the size of the clinics, and what services it offers. This gives readers an idea of how the inner setting shaped both the structure of the investigation and the outcomes found. Given that the investigation took place in one healthcare network, a thorough summary of the inner setting is much more feasible in the Patel et al. paper than in Blakeman et al., 15 which took place within 24 practices across England. Such practices, given their differences in location, size, and patient population, would be difficult to describe but likely played a role in the outcomes of the investigation. It does appear overall, though, that the practices allowed the lay health workers an independent role between themselves and community resources, which implies a division between the practices and their communities. Unlike Patel et al., 14 which seemed to lack organizational communication, Barrett et al. had the opposite dilemma of having too much organizational communication. 16 The authors noted that the five urban Canadian centers involved in the study all had open communication across their divisions, such that actions taken by the nurses and physicians in the intervention could be altered by healthcare workers who were not part of the experiment. This is an interesting phenomenon whereby open communication played a potentially negative role in the outcomes of the intervention. We can assume overall, however, that the culture and climate within the organizations of all five papers were positive towards change, given their attempts to establish and test new MDCs.

INDIVIDUALS INVOLVED

The "individuals involved" component of CFIR is often neglected in research.
However, of the five articles, four mentioned the individuals involved to some extent, as described above. For example, in Patel et al., 14 only 41% of the pharmacist's recommendations were accepted by the patients' physicians, one reason being prescriber preference. This result can be read as the physicians of that network being resistant to the intervention's attempt to give pharmacists a greater role in patient care. Little can be said about resistance or enthusiasm in Blakeman et al.; 15 however, a major barrier faced was the lay workers' skillsets. The lay workers were staff members, postgraduate students, and undergraduates at the University of Manchester. While the intervention was meant for the lower 20% socioeconomically in England, most of the lay workers were described as having limited knowledge of health and social care. Their skillset consisted of facilitating referrals of CKD patients to local resources and explaining how to use the guidebooks over the telephone. It is questionable, then, how capably the lay workers could refer CKD patients to the correct community resources without advanced knowledge of CKD and of the social barriers CKD patients face. Just as the lay workers in Blakeman et al. may have had too much expected of them, the same may be said of the nurses in Barrett et al. 16 The study by Barrett et al. found that their methodology resulted in nurses spending 16 times the number of minutes with each patient that the doctors did, without significant improvement in patient outcomes. It is unlikely that stakeholders would be interested in an intervention that overutilizes nurses without any benefit. It is also questionable how well trained the nurses were for the intervention and whether they were familiarized with the potential additional workload. A similar criticism can be made of Scherpbier-de Haan et al., 17 which found benefits in involving a nurse practitioner and a general practitioner with the nephrology team for CKD patients but failed to describe how they were trained for the intervention. None of the articles mentioned the beliefs and behaviors of the individuals involved, which may play an important role in whether an intervention succeeds.

IMPLEMENTATION PROCESS

Of the five articles, only two included aspects of the implementation process. In Blakeman et al., 15 the authors specified that the guidebook they developed was completed with stage 3 CKD patients to gear it more towards the participants of their study. This can be viewed as the investigators planning for, and considering their stakeholders in, the implementation of the study. Additionally, of the 8 lay health workers who would guide patients through the guidebook and community resources, only one was explicitly mentioned as being employed to oversee the telephone support. Whether the other lay health workers were volunteers or employees is unknown. However, it was mentioned that the lay health workers were trained in a 3-hour session by one of the authors. Training was also mentioned in Scherpbier-de Haan et al., 17 in which the nurse practitioners and general practitioners of the intervention were trained by a nephrology team. Beyond these points, none of the papers discussed how their interventions were planned, or even how they were tailored to patients, in either the introduction or the methods sections.
Engagement was unclear, as was fidelity. There was also no mention of reflection or feedback during the implementation of the interventions. The wider implication is that authors are not gearing their papers to be easily replicated, whether for future research or for clinical practice.

DISCUSSION

The purpose of this paper was to highlight how CFIR can be used to understand barriers faced in MDC-CKD research design and publications. Limitations of this study are that data may have been misinterpreted from the methods of the investigations, and that a majority of the articles included in the umbrella reviews were not analyzed. As a systematized narrative review, the analysis may be considered a biased and limited qualitative summary compared to systematic literature reviews; however, this is justified in order to provide readers with examples and ideas of how to avoid CFIR barriers. One proposed limitation of this study is that papers from both North America and Europe were utilized, though this is not believed to be a true limitation, since the barriers outlined in CFIR are universal from a reader's point of view. The findings suggest that MDC-CKD research publications can suffer implementation barriers across all five domains of CFIR, and that MDC-CKD researchers may contribute to those barriers by publishing vague accounts of their implementation. Additionally, the lack of meaningful outcomes in MDC-CKD research may be due to the barriers researchers experience and contribute to, whether in designing clinical interventions or in attempting to repeat previous research that was too unclear to follow. Therefore, MDC-CKD researchers are encouraged to utilize the CFIR framework in order to identify and amend the barriers they encounter across its domains. Doing so could inform future MDC studies and encourage the exploration of implementation science components that have not previously been utilized in CKD treatment, which could help maximize the impact of MDC interventions.
The effect mechanism of perceived entrepreneurial environment on Chinese college students' entrepreneurial intention: chain mediation model test

Introduction: This study aimed to explore the effect of the perceived entrepreneurial environment on Chinese college students' entrepreneurial intention and its underlying mechanism. Methods: Data were collected from 445 college students at 5 universities using the perceived entrepreneurial environment assessment scale, the achievement motivation scale, the entrepreneurial self-efficacy scale, and the entrepreneurial intention questionnaire. Results: There were significant correlations among perceived entrepreneurial environment, achievement motivation, entrepreneurial self-efficacy, and entrepreneurial intention, and the perceived entrepreneurial environment significantly and positively predicted entrepreneurial intention. Achievement motivation and entrepreneurial self-efficacy played significant mediating roles between the perceived entrepreneurial environment and entrepreneurial intention. There were three paths through which the perceived entrepreneurial environment influenced entrepreneurial intention: the first was the mediating role of achievement motivation; the second was the mediating role of entrepreneurial self-efficacy; the third was the chain-mediated role of achievement motivation and entrepreneurial self-efficacy together. Discussion: Clarifying the internal mechanism of the relationship between perceived entrepreneurial environment and entrepreneurial intention enriches the research on entrepreneurial psychology among college students and provides a theoretical basis for training and guiding the entrepreneurship of college students.

Introduction

Entrepreneurial intention is a critical initial step in the entrepreneurial process. It refers to an individual's conscious plan to pursue entrepreneurial activities and is an important predictor of entrepreneurial behavior (Krueger et al., 2000; Fan and Wang, 2006; Thompson, 2009). Entrepreneurship can not only increase the employment of undergraduates and promote sustainable socio-economic growth, but also bring market innovation, improve economic efficiency, and alleviate social tensions (Li and Zhang, 2015; Chang and Lyu, 2022).
As a result, entrepreneurship has become an important career choice among Chinese college students. Existing research mainly focuses on single aspects that influence entrepreneurial intention, such as entrepreneurial ability or the entrepreneurial environment, but insufficient attention has been paid to the systematic influencing factors of entrepreneurial intention and their mediating effects (Ankita and Parwinder, 2024). In addition, the academic community has not paid enough attention to the specific group of college student entrepreneurs, mainly because different scholars take different research perspectives. Previous studies have focused more on the internal and external factors of entrepreneurs (Mohammad and Majed, 2024). In fact, the influencing factors of entrepreneurial intention among college students are diverse, driven by both internal and external factors (Lyu et al., 2024). Therefore, this study systematically investigates the influencing factors of college students' entrepreneurial intention, taking the perception of the entrepreneurial environment as the starting point, and reveals the theoretical and practical value of college students' achievement motivation and entrepreneurial self-efficacy in their entrepreneurial growth. It can also provide theoretical guidance and an empirical basis for guiding and encouraging college students to start businesses. Models of entrepreneurial intention argue that the perceived entrepreneurial environment is one of the important factors influencing individual entrepreneurial acts (Lüthje and Franke, 2003). The perceived entrepreneurial environment is the sum of the external factors perceived by individuals that influence the emergence and development of entrepreneurial activity, including regulatory, normative, and cognitive factors (Wei, 2015). Studies have confirmed that individuals are less likely to have entrepreneurial intentions when they perceive that environmental factors from school, family, and society hinder entrepreneurial acts (Zhao and Du, 2016). Conversely, individuals' entrepreneurial intention increases when they perceive a supportive entrepreneurial environment. Based on this, hypothesis 1 of this study is that the perceived entrepreneurial environment is an important factor affecting entrepreneurial intention. In addition, social cognitive theory suggests that individual behavior changes with changes in both individual and environmental factors. Entrepreneurial intention is affected not only by an individual's perception of the objective environment (entrepreneurial environment assessment) but also by the individual's intrinsic psychological dynamics (achievement motivation) and the individual's belief in being competent for the role or able to complete the task (entrepreneurial self-efficacy). Therefore, this study investigates how the perceived entrepreneurial environment acts on entrepreneurial intention through an internal psychological variable (achievement motivation) and an individual characteristic variable (entrepreneurial self-efficacy).
Expectancy-value theory argues that when individuals are in a competitive environment, they experience two psychological tendencies: the motivation to pursue success and the motivation to avoid failure (Atkinson, 1957). When the motivation to pursue success is greater than the motivation to avoid failure, individuals tend to set higher estimates of success goals and work hard to achieve them. Achievement motivation, an important internal trait, is the intrinsic motivation that drives individuals to pursue meaningful, valuable goals and strive to attain them (McClelland, 1961). The formation of achievement motivation is the result of the interaction between individuals and their environment. Studies have confirmed that a good entrepreneurial environment is an important factor that can significantly predict individual achievement motivation (Yang et al., 2019). Therefore, the perceived entrepreneurial environment can significantly and positively predict individual achievement motivation. On the other hand, studies have found that entrepreneurs often have a strong motivation for achievement. This is because entrepreneurship differs from ordinary employment: when individuals have a strong motivation for achievement, they are more courageous about taking the risks of starting a business, and their willingness to start a business is stronger. According to the theory of planned behavior (Ajzen, 1991), behavioral intention is the most direct factor influencing behavior, and attitude and motivation play a key role in influencing individual behavior. Studies have also confirmed that individual achievement motivation can significantly and positively predict entrepreneurial intention (Brandstatter, 2011). The perceived entrepreneurial environment may therefore influence entrepreneurial intention through the mediating role of achievement motivation. Based on the relationships among perceived entrepreneurial environment, achievement motivation, and entrepreneurial intention, hypothesis 2 of this study is that achievement motivation is a mediating variable between entrepreneurial environment assessment and entrepreneurial intention.
Entrepreneurial self-efficacy is the belief that an individual can be successful in the role of an entrepreneur or can complete entrepreneurial tasks (Chen et al., 1998). Social cognitive theory argues that individuals, cognition, and the environment form a dynamic, reciprocal interaction within the social context. The individual's analysis of tasks, attribution of past successes and failures, and evaluation of the environment affect the formation of self-efficacy (Bandura, 1997). Entrepreneurial self-efficacy is based on the assessment of the entrepreneurial environment. When individuals feel the support of the objective environment, their sense of security is enhanced, their fear of failure is reduced, and the formation of entrepreneurial self-efficacy is promoted (Hopp and Stephan, 2012). Studies have confirmed that the perception of an entrepreneurial environment composed of individual growth experience and social support can significantly predict entrepreneurial self-efficacy and influence entrepreneurial behavior (Arias et al., 2021). On the other hand, according to Bandura's theory of self-efficacy, self-efficacy is a decisive factor in individual behavior: it is the psychological driving force behind the continuous regulation of individual behavior. Entrepreneurial self-efficacy is considered a key predictor of individual entrepreneurial intention and, ultimately, of entrepreneurial action, playing a key role in the formation and development of individual entrepreneurial intention (Wang et al., 2023). Numerous studies have confirmed that higher entrepreneurial self-efficacy leads to a greater conviction that one's own entrepreneurship is feasible, and that it can significantly predict an individual's entrepreneurial intention, behavior, and performance (Wilson et al., 2010; Su and Qiao, 2019). In view of the above, hypothesis 3 of this study is that entrepreneurial self-efficacy is a mediating variable between perceived entrepreneurial environment and entrepreneurial intention.

Achievement motivation is closely related to entrepreneurial self-efficacy (Li and Zeng, 2018) and positively predicts it (Wang and Yu, 2023). College students with higher achievement motivation have stronger self-learning ability, career adaptability, entrepreneurial ability, and career maturity, as well as a higher sense of entrepreneurial self-efficacy (Wu et al., 2016). Therefore, the impact of achievement motivation on entrepreneurial intention may be realized through entrepreneurial self-efficacy. Based on this, hypothesis 4 of this study is that the perceived entrepreneurial environment can influence entrepreneurial intention through the chain mediating effect of achievement motivation and entrepreneurial self-efficacy. In summary, this study systematically investigates the effects of perceived entrepreneurial environment, achievement motivation, and entrepreneurial self-efficacy on entrepreneurial intention, as well as the mediating effects of achievement motivation and entrepreneurial self-efficacy in the relationship between perceived entrepreneurial environment and entrepreneurial intention. Finally, it reveals how the perceived entrepreneurial environment affects entrepreneurial intention. Considering that previous studies have found gender, grade, and family economic status to be significantly related to entrepreneurial intention, these variables were controlled for in this study.
Measures

Perceived entrepreneurial environment assessment scale

The Perceived Entrepreneurial Environment Assessment Scale prepared by Huang et al. (2020), compiled on the basis of Wei Hongmei's entrepreneurial environment theory, was used. The scale was revised appropriately for this study according to the actual situation of college students. For example, the second question, "The school's leaders are very supportive of the teachers' entrepreneurial activities," was changed to "The teachers are very supportive of our entrepreneurial activities"; the question "My colleagues have supported me in entrepreneurial activities and provided the necessary help" was changed to "My teachers have supported me in entrepreneurial activities and provided the necessary help"; and the question "When I participate in various forums, my contacts or discussions with my colleagues can provide me with useful entrepreneurial information and entrepreneurial skills" was changed to "My contacts or discussions with my friends can provide me with useful entrepreneurial information and entrepreneurial skills." The scale is divided into three dimensions with a total of 8 questions: micro-regulatory environment assessment (e.g., "The school provides us with services such as an entrepreneurial platform") reflects the school's institutional support for entrepreneurship; micro-normative environment assessment (e.g., "My family members have supported me in entrepreneurial activities and provided the necessary help") reflects the atmosphere of social encouragement for entrepreneurship and effective action support; and micro-cognitive environment assessment (e.g., "Various industry conferences, academic conferences, workshops, and courses help me acquire the information and skills I need to start a business") examines the acquisition of entrepreneurial knowledge and skills. A 5-point scale is used, with 1 indicating complete disagreement and 5 indicating full agreement; higher scores indicate a better perceived entrepreneurial environment. The measurement model fit indices are χ²/df = 2.50, GFI = 0.98, AGFI = 0.95, NFI = 0.97, IFI = 0.98, TLI = 0.96, CFI = 0.98, RMSEA = 0.06. The Cronbach's α coefficient for the total questionnaire in this study was 0.83.

Achievement motivation scale

The achievement motivation scale revised by Ye and Hagtvet (1992) was used to test the subjects. It has a total of 30 items covering the motive to achieve success (Ms), for example, "I like to work tirelessly on problems that I am not sure about solving," and the motive to avoid failure (Mf), for example, "I hate working in situations where I'm completely unsure if I'll fail." A 4-point scale is used, with 1 indicating complete disagreement and 4 indicating full agreement, and total achievement motivation is computed as Mt = Ms - Mf. The measurement model fit indices are χ²/df = 2.49, IFI = 0.91, TLI = 0.90, CFI = 0.91, RMSEA = 0.06. The Cronbach's α coefficient for the total questionnaire in this study was 0.93.
Entrepreneurial self-efficacy scale

The entrepreneurial self-efficacy scale compiled by Li (2017) was used to test the subjects. The scale consists of 6 dimensions and 32 items: opportunity recognition efficacy (e.g., "I am able to respond promptly to business opportunities"), leadership efficacy (e.g., "I am able to articulate the company's vision and values"), human resource management efficacy (e.g., "I am able to persuade others to accept my perspective"), product innovation efficacy (e.g., "I am able to identify new areas with high growth potential"), willpower efficacy (e.g., "I am able to respond quickly to unexpected changes and failures"), and risk tolerance efficacy (e.g., "I am able to work efficiently in a continuously high-pressure work environment"). The scale uses a 5-point rating, from 1 (completely inconsistent) to 5 (completely consistent), with higher scores indicating stronger entrepreneurial self-efficacy. The measurement model fit indices are χ²/df = 3.10, IFI = 0.92, TLI = 0.90, CFI = 0.91, RMSEA = 0.07. The Cronbach's α coefficient for the total questionnaire in this study was 0.95.

Entrepreneurial intention questionnaire

The entrepreneurial intention questionnaire compiled by Gelderen (2006) and revised by Li et al. (2008) was used to test the subjects. The scale has 5 items in total. The first four questions are "I think I will start a business in the future," "I have considered running my own company," "If I have the opportunity and am free to make decisions, I will choose to start my own business," and "Considering my current situation and various limitations (such as lack of funds), I will still choose to start my own business," using a 5-point scale from 1 (completely disagree) to 5 (completely agree). The fifth question, "What do you think is your likelihood of starting a business in the next 5 years?", uses a percentage score ranging from 1 (0%) to 5 (100%). The higher the total score, the stronger the willingness to start a business. The measurement model fit indices are χ²/df = 2.16, GFI = 0.99, AGFI = 0.97, NFI = 0.99, IFI = 0.99, TLI = 0.99, CFI = 0.99, RMSEA = 0.05. The Cronbach's α coefficient for the total questionnaire in this study was 0.90.

Procedure

After obtaining informed consent from the schools and individuals, professional teachers distributed paper questionnaires to the participating students, uniformly introduced the guidelines, and informed them of the testing content and requirements. The questionnaire was anonymous to ensure the authenticity and reliability of the survey. Testing took about 15 minutes, and all questionnaires were collected on site. This study received funding from the first author's institution for scientific research and innovation and was approved by the school's review committee. After the test, participants could receive a small gift worth one dollar as a reward.
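To make the scoring of these instruments concrete, the sketch below computes the four scale scores from a hypothetical respondent-by-item data frame. The column names and the item layout (15 Ms and 15 Mf items for achievement motivation) are assumptions for illustration, since the full item keys are not reproduced here; only the formula Mt = Ms - Mf is taken directly from the scale description above.

```python
import pandas as pd

def score_scales(df: pd.DataFrame) -> pd.DataFrame:
    """Compute scale scores; assumes columns env_1..env_8, ms_1..ms_15,
    mf_1..mf_15, ese_1..ese_32 and ei_1..ei_5 (hypothetical layout)."""
    scores = pd.DataFrame(index=df.index)
    # Perceived entrepreneurial environment: 8 items on a 5-point scale.
    scores["environment"] = df[[f"env_{i}" for i in range(1, 9)]].mean(axis=1)
    # Achievement motivation: Mt = Ms - Mf (4-point items).
    ms = df[[f"ms_{i}" for i in range(1, 16)]].sum(axis=1)
    mf = df[[f"mf_{i}" for i in range(1, 16)]].sum(axis=1)
    scores["achievement_motivation"] = ms - mf
    # Entrepreneurial self-efficacy: 32 items on a 5-point scale.
    scores["self_efficacy"] = df[[f"ese_{i}" for i in range(1, 33)]].mean(axis=1)
    # Entrepreneurial intention: 5 items; a higher total means stronger intention.
    scores["intention"] = df[[f"ei_{i}" for i in range(1, 6)]].sum(axis=1)
    return scores
```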
Analyses

In the present study, missing values accounted for less than 5% of the participants' responses, so no data were deleted. The data were sorted and statistically analyzed using SPSS version 24.0, and Amos version 24.0 was used for confirmatory factor analysis. First, descriptive analyses and Pearson correlation analyses were conducted to report means, standard deviations, and correlations for the variables of interest. Second, the mediation effect was calculated in SPSS 24.0 with Hayes's PROCESS macro (Hayes, 2017) to further explore the relationships among the variables of interest.

Common method bias

In this study, anonymous testing and reverse-scored questions were used to control for common method bias. We used Harman's one-factor test (Podsakoff et al., 2003) to determine whether common method bias was present in the data. In this test, we used the SPSS factor analysis routine to identify the first eigenvalue of the data matrix. The results reveal that the first factor accounts for 24.71% of the total variance, well below the 40% threshold at which a single factor would explain the majority of the variance. Thus, according to Harman's one-factor test, common method bias is not likely to bias the results (Zhou and Long, 2004).

Descriptive statistics

The descriptive statistics and results of the correlation analyses of the main variables in this study are shown in Table 1. The perceived entrepreneurial environment was positively correlated with achievement motivation (r = 0.43), entrepreneurial self-efficacy (r = 0.58), and entrepreneurial intention (r = 0.54). Achievement motivation was positively correlated with entrepreneurial self-efficacy (r = 0.60) and entrepreneurial intention (r = 0.43). Entrepreneurial self-efficacy was positively correlated with entrepreneurial intention (r = 0.48).

Mediating effect analysis

After controlling for gender, grade, and family economic status, the mediating model analysis showed that the path coefficients of gender, grade, and family economic status were not significant. We first examined the direct pathway from the perceived entrepreneurial environment to entrepreneurial intention. The results showed that the perceived entrepreneurial environment positively predicted entrepreneurial intention (β = 0.54, p < 0.001). Then, the mediation model was reanalyzed. The results of the mediating effects of achievement motivation and entrepreneurial self-efficacy are shown in Figure 1. The perceived entrepreneurial environment had a significant positive effect on achievement motivation (β = 0.43, p < 0.001) and a significant positive effect on entrepreneurial self-efficacy (β = 0.39, p < 0.001). Achievement motivation had significant positive effects on entrepreneurial self-efficacy and entrepreneurial intention (β = 0.42, p < 0.001 and β = 0.13, p < 0.01, respectively), and entrepreneurial self-efficacy had a significant positive effect on entrepreneurial intention (β = 0.17, p < 0.01). In addition, the perceived entrepreneurial environment retained a significant positive direct effect on entrepreneurial intention (β = 0.38, p < 0.001).
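For readers who want to reproduce this kind of analysis outside SPSS, the sketch below estimates the same chain mediation model (the analogue of PROCESS Model 6) with ordinary least squares and a percentile bootstrap. It is a minimal illustration that assumes standardized numpy arrays x (perceived entrepreneurial environment), m1 (achievement motivation), m2 (entrepreneurial self-efficacy), and y (entrepreneurial intention), and it omits the gender, grade, and family economic status covariates for brevity.

```python
import numpy as np

def _slopes(y, *regressors):
    """OLS slope coefficients of y on the regressors (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

def chain_mediation(x, m1, m2, y, n_boot=5000, seed=0):
    """Percentile-bootstrap indirect effects for x -> m1 -> m2 -> y."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(x))
    draws = []
    for _ in range(n_boot):
        s = rng.choice(idx, size=len(idx), replace=True)
        a1 = _slopes(m1[s], x[s])[0]                   # x -> m1
        a2, d21 = _slopes(m2[s], x[s], m1[s])          # x -> m2, m1 -> m2
        _, b1, b2 = _slopes(y[s], x[s], m1[s], m2[s])  # c', m1 -> y, m2 -> y
        draws.append((a1 * b1, a2 * b2, a1 * d21 * b2))
    draws = np.asarray(draws)
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    paths = ["x -> m1 -> y", "x -> m2 -> y", "x -> m1 -> m2 -> y"]
    return {p: (draws[:, i].mean(), lo[i], hi[i]) for i, p in enumerate(paths)}
```

An indirect effect is judged significant when its 95% percentile confidence interval excludes zero, which is the criterion behind the intervals reported next.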
The bootstrap 95% CIs confirm the significant indirect effects of achievement motivation and entrepreneurial self-efficacy in the relationship between perceived entrepreneurial environment and entrepreneurial intention (Table 2). These results indicate that achievement motivation and entrepreneurial self-efficacy not only each partially mediate the relationship between perceived entrepreneurial environment and entrepreneurial intention, but also exert a chain mediating effect on it (Tables 2 and 3).

Discussion

The relationship between perceived entrepreneurial environment and entrepreneurial intention

This study explored the relationship between the perceived entrepreneurial environment and entrepreneurial intention, as well as its internal mechanism, based on the entrepreneurial intention model. The results indicate that the perceived entrepreneurial environment is significantly and positively correlated with entrepreneurial intention and can significantly predict college students' entrepreneurial intention, which is consistent with previous research (Saeed et al., 2014; Du et al., 2016; Ye and Fang, 2017). It also has a significant positive effect in promoting entrepreneurial intention. It is worth noting that after introducing the two mediating variables of achievement motivation and entrepreneurial self-efficacy, the influence of the perceived entrepreneurial environment on entrepreneurial intention remained significant. The results support the entrepreneurial intention model, and the perceived entrepreneurial environment is an important factor affecting college students' entrepreneurial intention. Therefore, to enhance Chinese college students' entrepreneurial intention, attention must be paid to factors of the entrepreneurial environment. A good entrepreneurial environment not only provides a crucial channel for college students to access entrepreneurial resources but also stimulates their entrepreneurial passion (Zhou and Wang, 2019).

The separate mediating effects of achievement motivation and entrepreneurial self-efficacy

The results show that the perceived entrepreneurial environment can influence entrepreneurial intention through the separate mediating roles of achievement motivation and entrepreneurial self-efficacy. These results are consistent with previous studies of college students in mainland China, which found that achievement motivation mediates the relationship between perceived entrepreneurial environment and entrepreneurial intention, and that entrepreneurial self-efficacy likewise mediates this relationship.
The subjective environmental system (perceived entrepreneurial environment) can only act on the individual through the individual's internal psychological dynamic system (achievement motivation). Compared to those with a low perceived entrepreneurial environment, college students with a high entrepreneurial environment assessment perceive more support from their entrepreneurial environment (Virginia et al., 2024). This perception can increase their achievement motivation, foster entrepreneurial passion, and buffer the negative impact of learning from individual entrepreneurial failure (Zhou and Wang, 2019). When individuals perceive support from family, school, friends, and social institutions, their achievement motivation increases, leading to a stronger entrepreneurial intention. Conversely, when individuals perceive hindrances in their environment, their achievement motivation decreases, resulting in a decline in entrepreneurial intention. Achievement motivation thus acts as a "bridge" between the perceived entrepreneurial environment and entrepreneurial intention. The perceived entrepreneurial environment affects the internal factors of the individual, while entrepreneurial environment factors (external factors) such as social support, government policies, and infrastructure act on the individual's achievement motivation (internal factors).

The perceived entrepreneurial environment refers to an individual's perception of the external factors that affect entrepreneurial activities. Entrepreneurial self-efficacy is formed on the basis of the perception of factors within the entrepreneurial environment. When an individual is supported by the objective environment, their sense of security is enhanced and their sense of fear is reduced, which promotes the formation of entrepreneurial self-efficacy. Furthermore, the environment in which the individual is located leads to differences in the growth of entrepreneurial ability. College students, in particular, are in a transitional stage from school to society, and differences in the entrepreneurial environment may lead to differences in their entrepreneurial knowledge and ability. The diversified driving mechanism of entrepreneurial growth holds that a good entrepreneurial environment can effectively promote the development of entrepreneurial ability, which in turn can enhance entrepreneurial self-efficacy and promote the improvement of entrepreneurial intention. Therefore, the subjective environment (perceived entrepreneurial environment) affects individual entrepreneurial intention not only through intrinsic psychological motivation (achievement motivation) but also through individual characteristics (entrepreneurial self-efficacy).
In addition, this study found that the mediating effects of achievement motivation and entrepreneurial self-efficacy did not differ significantly. The specific path analysis showed that the positive predictive effect of the perceived entrepreneurial environment on achievement motivation was greater than its effect on entrepreneurial self-efficacy, while the positive predictive effect of entrepreneurial self-efficacy on entrepreneurial intention was greater than that of achievement motivation. A possible explanation for this result is that the perceived entrepreneurial environment directly affects an individual's achievement motivation, while entrepreneurial self-efficacy is more closely tied to entrepreneurial intention. An individual's achievement motivation is formed through interaction with the perceived entrepreneurial environment, which in turn promotes the improvement of entrepreneurial self-efficacy. Therefore, the perceived entrepreneurial environment is more closely related to achievement motivation. Moreover, since the perceived entrepreneurial environment is an external cause and entrepreneurial self-efficacy is an internal cause, external conditions and factors should operate through the individual's internal psychology. When individuals have stronger confidence in their ability to start a business successfully, their willingness to start a business is stronger. Therefore, entrepreneurial self-efficacy is more closely related to entrepreneurial intention than achievement motivation is.

The chain mediating role of achievement motivation and entrepreneurial self-efficacy

The results of this study show that achievement motivation and entrepreneurial self-efficacy play a chain mediating role between the perceived entrepreneurial environment and entrepreneurial intention, with a total mediation effect of 28.48%. In other words, 28.48% of the effect of the perceived entrepreneurial environment on entrepreneurial intention is realized through achievement motivation and entrepreneurial self-efficacy. This highlights that an individual's entrepreneurial willingness is affected by both external environmental factors and internal factors, and that the interaction between the environment and the individual shapes the individual's entrepreneurial intention. Considered separately, the largest mediating effect was found for entrepreneurial self-efficacy at 12.27%, followed by achievement motivation at 10.50%. Achievement motivation positively predicts college students' entrepreneurial self-efficacy, which is consistent with previous research (Sun et al., 2013). Entrepreneurial self-efficacy builds on achievement motivation: the higher the achievement motivation, the stronger the independent learning ability, career adaptability, entrepreneurial ability, and career maturity, resulting in a higher sense of entrepreneurial self-efficacy. Conversely, individuals with low achievement motivation lack internal psychological drive; they are unwilling to take the initiative to learn entrepreneurship-related knowledge and also lack career adaptability and entrepreneurial ability, which in turn reduces their sense of entrepreneurial self-efficacy, resulting in a low willingness to start a business.
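These proportions can be recovered, up to rounding of the published coefficients, directly from the standardized paths reported in the Results (a1 = 0.43 and a2 = 0.39 from the environment to the two mediators, d21 = 0.42 between the mediators, b1 = 0.13 and b2 = 0.17 on intention, and a total effect of 0.54):

```python
a1, a2, d21, b1, b2, total = 0.43, 0.39, 0.42, 0.13, 0.17, 0.54
ie_m1 = a1 * b1           # via achievement motivation
ie_m2 = a2 * b2           # via entrepreneurial self-efficacy
ie_chain = a1 * d21 * b2  # chain path through both mediators
for label, ie in [("achievement motivation", ie_m1),
                  ("self-efficacy", ie_m2),
                  ("chain", ie_chain),
                  ("all indirect paths", ie_m1 + ie_m2 + ie_chain)]:
    print(f"{label}: {ie:.3f} ({ie / total:.1%} of the total effect)")
# Yields roughly 10.4%, 12.3%, 5.7% and 28.3%, matching the reported
# 10.50%, 12.27% and 28.48% within rounding of the coefficients.
```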
Research limitations

Based on the entrepreneurial intention model, this study examined the relationship between Chinese college students' perceived entrepreneurial environment and their entrepreneurial intention, along with its underlying mechanisms. The results show that the perceived entrepreneurial environment affects Chinese college students' entrepreneurial intention, that this effect is mediated by achievement motivation and by entrepreneurial self-efficacy, and that it is also mediated by the chain of achievement motivation → entrepreneurial self-efficacy. These findings not only provide a comprehensive understanding of the psychological mechanism underlying the influence of the perceived entrepreneurial environment on Chinese college students' entrepreneurial intention but also contribute to theoretical research on Chinese college students' entrepreneurial intention. The practical implications of this study include widening employment opportunities for Chinese college students and elevating their employment status, as well as providing theoretical guidance and empirical support to foster, aid, and motivate Chinese college students' entrepreneurship.

There are several limitations to this study that should be noted. First, this study only examined the student population of five higher education institutions, with a relatively large proportion of Chinese language students surveyed; consequently, the standard deviation of each variable in the survey data is relatively small. Future research should further expand the study population to confirm whether this conclusion can be generalized to other groups. Second, the perceived entrepreneurial environment is a dynamic and evolving process, and Chinese college students' entrepreneurial intentions may be influenced by changes in the economic and environmental context. Follow-up studies could explore how changes in the external environment affect entrepreneurial intention over time. Finally, this study controlled for variables such as gender, grade, and family economic level. Future research could investigate differences between subgroups of Chinese college students based on their family socioeconomic status.
Conclusion

There were significant correlations among perceived entrepreneurial environment, achievement motivation, entrepreneurial self-efficacy, and Chinese college students' entrepreneurial intention. Furthermore, the study found that the perceived entrepreneurial environment significantly predicted Chinese college students' entrepreneurial intentions. The mediating effects of achievement motivation and entrepreneurial self-efficacy were also found to be significant in the relationship between Chinese college students' perceived entrepreneurial environment and entrepreneurial intention. Specifically, three mediating paths were identified: the first was the separate mediating role of achievement motivation, the second was the separate mediating role of entrepreneurial self-efficacy, and the third was the chain mediating role of achievement motivation and entrepreneurial self-efficacy. This study has both theoretical and practical significance, providing specific ideas for universities to carry out innovation and entrepreneurship education. Universities should attach importance to creating a good entrepreneurial atmosphere, adopting rich club activities and innovation and entrepreneurship competitions to stimulate college students' achievement motivation, providing opportunities for college students to participate in entrepreneurial activities, enhancing their entrepreneurial self-efficacy, and thereby promoting their entrepreneurial intentions.

TABLE 2: Results of hierarchical regression analysis of study variables.
TABLE 3: Summary of hypotheses testing.
Murine embryos exposed to human endometrial MSCs-derived extracellular vesicles exhibit higher VEGF/PDGF-AA release, increased blastomere count and hatching rates

Endometrial Mesenchymal Stromal Cells (endMSCs) are multipotent cells with immunomodulatory and pro-regenerative activity that is mainly mediated by a paracrine effect. The exosomes released by MSCs have become a promising therapeutic tool for the treatment of immune-mediated diseases. More specifically, extracellular vesicles derived from endMSCs (EV-endMSCs) have demonstrated a cardioprotective effect through the release of anti-apoptotic and pro-angiogenic factors. Here we hypothesize that EV-endMSCs may be used as a co-adjuvant to improve in vitro fertilization outcomes and embryo quality. First, endMSCs and EV-endMSCs were isolated and phenotypically characterized for in vitro assays. Then, in vitro studies were performed on murine embryos co-cultured with EV-endMSCs at different concentrations. Our results demonstrated a significant increase in the total blastomere count of expanded murine blastocysts. Moreover, EV-endMSCs triggered the release of pro-angiogenic molecules from embryos, demonstrating an EV-endMSCs concentration-dependent increase of VEGF and PDGF-AA. The release of VEGF and PDGF-AA by the embryos may indicate that the beneficial effect of EV-endMSCs could mediate not only an increase in the blastocyst's total cell number but also the promotion of endometrial angiogenesis, vascularization, differentiation, and tissue remodeling. In summary, these results could be relevant for assisted reproduction, this being the first report describing the beneficial effect of human EV-endMSCs on embryo development.

Introduction

The analysis of factors released by co-cultured embryos evidenced a significant increase of VEGF and PDGF-AA secretion, which has been associated with enhanced angiogenesis, vascularization, differentiation, and tissue remodeling, possibly aiming to enhance endometrial receptivity.

Isolation and in vitro expansion of endometrial mesenchymal stromal cells

Endometrial Mesenchymal Stromal Cells (endMSCs) were isolated from the menstrual blood of four healthy women (Fig 1) according to previously described protocols [20,21]. The inclusion criteria were: women without infection, not under hormone therapy, and aged between 30 and 40 years. The exclusion criteria were: women with HBV, HCV or HIV infection, immune disorders, or hormone treatments. The samples were collected on day 2 or 3 of the menstrual cycle using a menstrual cup. Written informed consent was obtained from all donors under the auspices of the Minimally Invasive Surgery Centre Research Ethics Committee, which approved this study. Briefly, menstrual blood was diluted 1:2 in PBS and centrifuged at 450 x g for 10 minutes. Supernatants were discarded in order to remove residues of cervical mucus, and cells were re-suspended in DMEM containing 10% fetal bovine serum (FBS), 1% penicillin/streptomycin, and 1% glutamine. Cells were seeded onto tissue culture flasks and expanded at 37˚C in a 95% air / 5% CO2 atmosphere. Non-adherent cells were removed after 24 h. Adherent cells were cultured to 80% confluency and detached using PBS containing 0.25% trypsin (v/v). Cells were then re-seeded at a density of 5000 cells/cm². The culture medium was changed every three days.

Phenotypical and functional characterization of endMSCs

The differentiation assay of endMSCs was performed when the cells reached 80% confluence.
Standard in vitro differentiation assays were used to promote osteogenic, adipogenic, and chondrogenic differentiation. Cells were cultured for 21 days in differentiation-specific media (StemPro Adipogenesis, Chondrogenesis and Osteogenesis Differentiation Kits, Gibco, Thermo Fisher Scientific, MA, USA), which was replaced every three days. Cells were stained to evidence adipogenic, chondrogenic, and osteogenic differentiation with Oil Red O, Alcian Blue 8GX, and Alizarin Red S stains, respectively. Differentiated cells were observed by optical microscopy. The degree of adipogenic, chondrogenic, and osteogenic differentiation was determined by extracting the stain with 6 M guanidine-HCl (Alcian Blue 8GX and Alizarin Red S stains) or pure isopropanol (Oil Red O stain). The absorbance of the extracts was quantified at 490 nm (Oil Red O and Alizarin Red S stains) and at 600 nm (Alcian Blue 8GX). For the phenotypic analysis, 2 × 10⁵ endMSCs were stained with human monoclonal antibodies (mAbs) against CD29, CD31, CD34, CD44, CD45, CD49d, CD49f, CD56, CD73, CD90, CD105, and HLA-DR, using the appropriate concentrations of mAbs in the presence of PBS containing 2% FBS. The endMSCs and mAbs were incubated for 30 min at 4˚C. The cells were then washed and re-suspended in PBS. Isotype-matched antibodies were used as negative controls, and the percentage of positive cells above the negative control (isotype controls) was determined. The flow cytometric analysis was performed on a FACSCalibur cytometer (BD Biosciences, CA, USA) after acquisition of 10⁵ events. Viable cells were selected using forward and side scatter characteristics, and fluorescence was analyzed using CellQuest software (BD Biosciences, CA, USA). Cells were analyzed at passages 3-4.

Isolation, purification and characterization of extracellular vesicles from endMSCs

An enriched fraction of endMSCs-derived exosomes, contained in the isolated extracellular vesicles, was obtained from endMSCs cultured in 175 cm² flasks using a previously optimized protocol [22]. When cells reached 80% confluence, the culture medium (DMEM containing 10% FBS) was replaced by exosome isolation medium (DMEM containing 1% insulin-transferrin-selenium). The endMSCs supernatants were collected every 3-4 days and centrifuged at 1000 x g for 10 min and 5000 x g for 20 min at 4˚C to eliminate dead cells and debris. Subsequently, the supernatants were filtered twice, first through a sterile cellulose acetate filter of 0.45 μm and then through one of 0.20 μm (Corning, NY, USA). About 15 ml of these endMSCs supernatants were ultra-filtered through 3 kDa MWCO Amicon Ultra devices (Merck-Millipore, MA, USA) at 4000 x g for 1 hour at 4˚C. The concentrated supernatants were collected and stored at -20˚C. Prior to the in vitro experiments, the concentration of microvesicles was indirectly measured by quantifying the protein content with a Bradford assay (BioRad Laboratories, CA, USA). The concentration and size of purified EV-endMSCs were quantified by nanoparticle tracking analysis (NanoSight Ltd, Amesbury, UK), using equipment with fast video capture that relates the rate of Brownian motion to particle size. Results were analyzed using the particle-tracking analysis software package version 2.2. The equipment configuration for this analysis was: frames processed: 900.
frames per second: 30; calibration: 166 nm/pixel; automatic defocus: auto; detection threshold: 4 Multi; minimum expected size: auto; minimum track size: 100 nm; temperature: 24-28 °C; viscosity: 0.80-0.95 cP.

For the flow cytometric analysis by fluorescence-activated cell sorting, EV-endMSCs were bound to latex beads and labeled with fluorophore-conjugated antibodies as described by Théry et al. [23]. Briefly, EV-endMSCs (5 μg of exosomal protein) were conjugated overnight at 4 °C with 10 μl of aldehyde/sulfate latex beads (4% w/v, 4 μm; Molecular Probes, OR, USA). Then, 110 μl of 1 M glycine was added to each tube and incubated for 30 minutes. The samples were centrifuged and re-suspended in a final volume of 0.5 ml PBS containing 0.5% bovine serum albumin (BSA; w/v). These EV-endMSC-coated beads were incubated for 30 minutes at 4 °C with anti-CD9 and anti-CD63 human monoclonal antibodies (BD Biosciences, CA, USA). The EV-endMSC-coated beads were washed and re-suspended in PBS+BSA. The flow cytometry analysis was performed on a FACSCalibur cytometer (BD Biosciences, CA, USA) after acquisition of 10^5 events. Isotype-matched negative control antibodies were used in all the experiments, and the percentage of positive cells above the negative (isotype) control was determined.

Soluble factor analysis

On the third day of embryo culture, the embryos were fixed as described below, and the cell culture media were individually collected and stored at -20 °C for later soluble factor analyses (Fig 1). As negative controls, KSOM medium with or without EV-endMSCs was used, and these quantifications were subtracted from the corresponding study groups. The murine-specific soluble factors analyzed were chosen for their importance in the first phases of embryo development: GM-CSF, VEGF, IGF-I, IL-6, M-CSF, EGF and PDGF-AA. These soluble factors were analyzed using a bead-based magnetic multiplexed Luminex assay (LXSAMSM, R&D Systems, MN, USA) according to the manufacturer's instructions. The concentrations of the different factors were expressed as pg/ml.

Animals and superovulation protocol

All experimental procedures were reviewed and approved by the Ethics Committee of the Jesús Usón Minimally Invasive Surgery Centre. B6D2F1 mice were housed under a 12 h light/12 h dark cycle at a controlled temperature (19-23 °C) with free access to food and water. Females were injected intraperitoneally (IP) with 8 IU of equine chorionic gonadotropin (eCG; Veterin Corion, Divasa Farmavic), followed 49 h later by an IP injection of 8 IU of human chorionic gonadotropin (hCG; Foligon, MSD) to trigger ovulation (Fig 1).

In vivo embryo recovery and culture

Female mice (8-12 weeks old) were hormonally stimulated to trigger ovulation as described above; after hCG injection, females were paired with B6D2 males at a 1:1 ratio. Thirty-six hours after the hCG injection, females were sacrificed by cervical dislocation, and the embryos were collected from the oviducts in M2 medium (Sigma-Aldrich, Barcelona, Spain). These 2-cell embryos were washed in fresh KSOM (Merck-Millipore, Madrid, Spain) and placed either in 100 μl droplets (9-12 embryos/droplet) devoid of EV-endMSCs (control) or in KSOM droplets supplemented with 10, 20, 40 or 80 μg/ml of EV-endMSCs from the four different donors individually; the medium was not replaced throughout the entire culture. Embryos were incubated in a 5% CO2/95% air atmosphere at 37 °C and 100% humidity.
For each experiment, 2-cell murine embryos were obtained from 3 different females (36 hours post-hCG) and pooled prior to embryo culture (12 different females in total). The number of initial 2-cell embryos, the percentage of embryos reaching the expanded blastocyst stage and the percentage of embryo hatching after 75 hours in culture were recorded (Fig 1).

Total cell number

The number of cells in an embryo is a well-known indicator of embryo viability and quality [24]. Therefore, in view of the previous data, after embryo hatching assessment, the blastocysts obtained were fixed in 4% formaldehyde in PBS supplemented with 0.01% polyvinyl alcohol (PVA; w/v) at 4 °C for 12 hours and stained with 2.5 μg/ml Hoechst 33342 (Eugene, OR, USA) in PBS with PVA for 10 minutes at 37 °C. Then, the blastocysts were mounted on glass slides with glycerol, covered with coverslips and sealed. Embryos were then visualized (Fig 1) using a fluorescence microscope (Nikon Eclipse TE2000-S) equipped with an ultraviolet lamp. Cell number was analyzed using the Fiji ImageJ software (1.45q, Wayne Rasband, NIH, USA).

Statistical analysis

For the total cell number analysis, data were tested for normality using a Shapiro-Wilk test; results are reported as mean ± standard deviation (SD). Groups were compared using a one-way ANOVA, given their Gaussian distribution and homoscedasticity. When statistically significant differences were found, a Bonferroni post-hoc test was used to compare pairs of values. Blastocyst and hatching rates were compared among groups by a Chi-square test with the Yates correction for continuity; Fisher's exact test was used when a value of less than 5 was expected in any treatment. A Student t-test for paired comparisons was performed on the VEGF and PDGF-AA measurements. The correlation between VEGF and PDGF-AA was calculated using the Pearson correlation coefficient. Statistical analyses were performed using SigmaPlot version 12.3 for Windows (Systat Software, IL, USA) or SPSS-21 (SPSS, IL, USA); p < 0.05 was considered statistically significant.

Size distribution, concentration and exosome-specific markers in EV-endMSCs

In order to quantify the proteins in the enriched fraction of exosomes, a Bradford assay was performed; protein concentrations ranged between 350 and 750 μg/ml. Nanoparticle tracking analysis of EV-endMSCs revealed that the mean size (± standard deviation) of vesicles isolated from the four different donors was 153.5 ± 63.05 nm, while their concentration was 3.31 × 10^11 ± 3.8 × 10^9 particles/ml. Fig 3A shows a representative nanoparticle tracking analysis. Finally, the EV-endMSCs were phenotypically characterized by flow cytometry with specific exosomal markers; the analysis of CD9 and CD63 demonstrated positive expression of these exosome-related proteins (Fig 3B).

Development to the blastocyst stage, embryo hatching and total cell number count

Our results showed that the developmental competence of the embryos did not vary, regardless of EV-endMSCs addition or the dosage used. The blastocyst rate of the control embryos devoid of EV-endMSCs was 86.8%; this percentage was 98.2% for the 10 μg/ml dose, 92.9% for 20 μg/ml, 79.6% when 40 μg/ml of EV-endMSCs was added and 84.9% for the 80 μg/ml dose, and no statistically significant differences were observed between groups (p > 0.05; Table 1).
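The group comparisons reported here and below follow the statistical pipeline described above. As a minimal sketch of that pipeline in Python (using scipy only; the arrays are placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

# Placeholder cell-count samples (NOT the study's data) for two dosage groups.
control = np.array([58, 63, 61, 66, 59, 60], dtype=float)
dose_20 = np.array([72, 75, 74, 78, 71, 73], dtype=float)

# Normality check (Shapiro-Wilk) before the parametric comparisons.
print(stats.shapiro(control).pvalue, stats.shapiro(dose_20).pvalue)

# One-way ANOVA across groups (list all dosage groups in a real analysis).
print(stats.f_oneway(control, dose_20).pvalue)

# Hatching: 2x2 table (hatched vs not) -> chi-square with Yates correction,
# or Fisher's exact test when an expected count falls below 5.
table = np.array([[9, 35], [20, 17]])
print(stats.chi2_contingency(table, correction=True)[1])
print(stats.fisher_exact(table)[1])

# Paired t-test and Pearson correlation for VEGF / PDGF-AA readouts.
vegf = np.array([10.2, 14.8, 21.5, 30.1])
pdgf = np.array([5.1, 7.9, 11.0, 16.3])
print(stats.ttest_rel(vegf, pdgf).pvalue)
print(stats.pearsonr(vegf, pdgf))
```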
Conversely, total cell number was significantly enhanced, ranging from 73.8 to 75.6 cells/embryo (37-41 blastocysts evaluated per treatment) when EV-endMSCs were added, exceeding the control without EV-endMSCs (61.2 ± 19.6 cells/embryo, n = 44; p < 0.05) and demonstrating that EV-endMSCs significantly promoted blastomere division during embryonic development (Fig 4). Furthermore, as seen in Table 1, embryo hatching was consistently enhanced for all the EV-endMSCs dosages tested, although statistically significant differences were only observed between the control (20.5% hatching embryos) and the 20 μg/ml and 40 μg/ml EV-endMSCs dosages (54.1% and 47.6% hatching embryos, respectively; p < 0.05).

Quantification of soluble factors secreted during embryo development in vitro

The analysis of soluble factors was performed on the third day of embryo culture. Unfortunately, the murine molecules GM-CSF, IGF-I, IL-6, M-CSF and EGF were below the detection limits of the technique, and the quantification of these molecules was not possible at this time point (data not acquired). In contrast, blastocyst-released PDGF-AA and VEGF were detectable in all supernatants at day 3 of embryo culture for all the EV-endMSCs dosages tested. Our results demonstrated an EV-endMSCs concentration-dependent increase of the VEGF and PDGF-AA released by the embryos (Fig 5).

Table 1 caption: The initial number of 2-cell embryos retrieved (n), the blastocyst rate (%) and hatching embryos (%) are provided (hatching rates were calculated as the number of blastocysts that hatched divided by the total blastocyst number per treatment); different superscripts represent statistically significant differences (p < 0.05).

It is important to note that the hypothetical presence of VEGF or PDGF-AA from the EV-endMSCs themselves was considered, and negative controls (culture medium with/without EV-endMSCs) were included for the different samples; this background was subtracted for each sample. However, as we could not discard cross-reactivity between human and mouse PDGF-AA in the Luminex assay, there is a possibility that residual human PDGF-AA of exosomal origin could have interfered with the measurements. The separate values of the experimental samples and the negative control samples are shown in S1 Fig. In the case of PDGF-AA, significant differences were found between negative controls (embryos cultured in the absence of EV-endMSCs) and embryos co-cultured with EV-endMSCs; EV-endMSCs added at 10 μg/ml, 20 μg/ml, 40 μg/ml and 80 μg/ml showed p = 0.021, p = 0.014, p = 0.064 and p = 0.031, respectively (Fig 5A). Finally, as shown in Fig 5C, our results demonstrated that the VEGF and PDGF-AA released by the blastocysts during their development in vitro were very significantly correlated (r = 0.812, p < 0.001).

Discussion

Exosomes derived from MSCs are a matter of study, as they are known to exert a therapeutic role in different physiological and pathological conditions. In the case of endMSCs, these cells participate in the tissue remodeling that is essential for the endometrium [25], exhibiting immunomodulatory potential through the release of soluble factors [26]. These cells can be easily obtained during menstruation, and their expansion, characterization and subsequent exosome retrieval can be achieved in the laboratory. Specifically, human exosomes released in the uterus have been demonstrated to profoundly influence implantation and embryo-maternal crosstalk [8].
Additionally, it has been demonstrated that in vitro produced bovine embryos release exosomes and that their composition varies depending on embryo competence [27]; on the other hand, it has been shown that, when the culture medium is not replaced during the entire embryo culture, the blastocyst rate, total cell number and calving rate significantly improve in bovine cloned embryos [28]. These effects have been related to exosome secretion by the embryos into the culture medium, as removal of these exosomes by medium replacement impairs embryo development, while their supplementation to exosome-depleted embryos increases embryo quality [28]. Taken together, these reports reflect the complex interplay between the maternal environment and the embryo, as exosome release and uptake determine embryo competence, quality and birth rate. Although exosomal content is still under study, it is already known that exosomes are locally released and are meant to interact with and transfer their cargo to target cells. The exact mechanisms remain under study and, in the case of the uterine environment, are even more intricate, as the exosome content also depends on the hormonal environment and gestational status [29,30]. Several studies have been conducted to understand their in vitro features, such as protein cargo [8] or microRNA content [31], although the role they may play in vivo and the optimal dosage to be used remain unexplored.

Based on the literature mentioned above, we aimed to elucidate whether human EV-endMSCs influence the soluble factors released by blastocysts, the development and hatching of blastocysts, or the blastomere proliferation of murine embryos obtained in vivo. Our results did not evidence any effect of EV-endMSCs on embryo development, as the blastocyst rates remained unchanged between treatments (over 79% for all the treatments tested). These data are in agreement with previous reports in the bovine species in which exosomes derived from homologous oviductal explants did not increase the blastocyst yield [32,33]. However, core differences exist between the two experimental settings, as bovine in vitro produced embryos exhibit consistently lower developmental competence than B6D2F1 murine in vivo obtained two-cell embryos [34]. Nevertheless, our results do not evidence any toxic effect of the EV-endMSCs dosages tested, and thus the ones used in the present study can be considered non-toxic.

As embryo development to the blastocyst stage does not predict embryo quality, the expanded blastocysts obtained were fixed and stained with Hoechst 33342 in order to determine their quality by total blastomere counts [35]. Interestingly, our results demonstrated an increase in blastomere count at all the EV-endMSCs dosages tested over the control, demonstrating a positive effect of these microvesicles on embryonic total cell number. However, embryo transfer will be performed in future studies to rule out an increased incidence of embryonic aneuploidies in embryos supplemented with EV-endMSCs, although their incidence is very low in this species [36]. As previously mentioned, homologous oviductal exosomes increase the quality of bovine embryos in terms of survival rate after cryopreservation and gene expression [32,33]. In addition, exosomes derived from cloned bovine embryos also enhanced the total cell number and the inner cell mass/trophectoderm cell ratio of the embryos [28], and these results parallel the ones obtained in the present work.
To further confirm that EV-endMSCs increase embryo quality in vitro, hatching rates were also evaluated. Interestingly, statistically significant differences in hatching rates were only detected for the 20 and 40 μg/ml dosages compared with the control, although for the 10 and 80 μg/ml dosages hatching rates were almost doubled compared to the treatment without microvesicles. It is known that embryo hatching in the mouse is achieved by an embryo-mediated lytic mechanism and by the pressure of the expanding blastocoele/blastocyst against the zona [37]. However, the exact mechanism of hatching is still controversial, as some authors claim that the lytic mechanism is predominant and the blastocyst pressure less relevant [38]. Our results parallel those of Gordon et al., who described that enzymatic digestion of the zona pellucida using Tyrode's solution induces embryo hatching better than mineral oil injection under the zona to mimic the mechanical embryo pressure: even though total cell number significantly increased in every EV-endMSCs treatment, hatching rates improved significantly only for the 20 and 40 μg/ml dosages. These results suggest that 10 μg/ml is a low dose, while with the 80 μg/ml dosage we may be working at saturation conditions; therefore, the 20-40 μg/ml dosages seem to be adequate reference ranges to work with.

Our preliminary proteomic analyses of the EV-endMSCs have revealed a wide range of proteins closely related to embryo development and implantation (manuscript in preparation). As examples, transferrin [39], vinculin [40] and fibronectin [41,42] have been demonstrated to promote embryo development, being essential for early embryo development and survival. Thus, EV-endMSCs could be promoting embryo blastomere division and hatching through the specific protein cargo released into the embryo culture medium. In addition, core proteins related to embryo implantation, such as matrix metalloproteinase-2, -3 and -9 [43,44] or E-cadherin [45], were also identified. In summary, according to the proteomic profile of EV-endMSCs and the enhanced hatching observed in our EV-endMSCs-treated embryos (positively related to embryo implantation [46]), our results may indicate a higher implantation potential of the obtained embryos, although this hypothesis has to be further confirmed.

Apart from the analysis of embryo developmental competence, total blastomere count and blastocyst/hatching rates, we aimed to evaluate the release of soluble factors by murine embryos. It is known that embryos release cytokines that play a role in embryonic development, embryo-maternal recognition and maintenance of the proper hormonal environment [47]. However, the exact molecules that modulate the development of the pre-implantation embryo remain a matter of study [48]. In this study, our analysis first focused on the quantification of GM-CSF, which has been demonstrated to promote blastocyst development [49], embryo implantation [50] and embryo survival [51]. Unfortunately, under our experimental conditions, the quantification analysis by Luminex xMAP detection was not sensitive enough to quantify murine GM-CSF. In fact, the detection limit of our Luminex assay was 11.65 pg/ml, while the detection limit of commercially available ELISAs is usually between 4 and 5 pg/ml. Based on that, future studies quantifying murine GM-CSF will be performed by ELISA tests or by using blastocyst supernatants at different time points.
Similarly to murine GM-CSF, the analyses of murine IL-6, M-CSF, EGF and IGF-I were also below the detection limit. Previous studies on human embryo culture-conditioned media have demonstrated that IL-6 was undetectable in embryo supernatants [52]. However, we should not rule out that ELISA (with a better detection limit: 24.56 pg/ml for Luminex vs. 7-8 pg/ml for ELISA) may provide a more reliable quantitative analysis of this cytokine, which has been found to be secreted by trophoblast cells [53]. In the case of IGF-I and EGF, these two growth factors were also undetectable under our experimental conditions. IGF-I has been reported to promote blastocyst development [49] and is positively correlated with embryo quality when present at high concentrations in the follicular fluid [54]. EGF stimulates trophoblast development, playing a key role in the implantation process [55,56].

Although several soluble molecules were undetectable in the culture medium of murine embryos, detectable levels were observed for VEGF and PDGF-AA. VEGF is intensely synthesized by blastocysts during embryo development in humans [57], and PDGF-AA has been associated with enhanced embryo quality and developmental potential [58]. Under our experimental conditions, the addition of EV-endMSCs to the embryo culture triggered the release of these two growth factors, and this release was EV-endMSCs concentration-dependent. In the case of VEGF, the synthesis of this molecule was initially described during embryonic angiogenesis [59], and it seems to be responsible for vascularization in the placenta and decidua when secreted by trophoblastic giant cells [60]. In fact, the angiogenic potential of exosomes from different origins (human placenta-derived MSCs [61], umbilical cord blood [62], endothelial cells [63], human induced pluripotent stem cell-derived MSCs [64] or bone marrow-derived MSCs [65], among others) has been reported. Finally, and similarly to our experimental conditions, in mouse models the role of VEGF has been associated with embryo implantation and embryonic vasculogenesis [66].

Regarding PDGF-AA and embryo development, the first expression analyses demonstrated the expression of PDGF-AA in embryonic murine cells [67]. This molecule has been linked with early embryo development and, more recently, with the regulation of programmed cell death, mediating the fine-tuned formation of the primitive endoderm at the end of the preimplantation period [68]. In general, PDGF-AA has been defined as a mitogenic factor driving the proliferation of undifferentiated cells [69], and at later maturation stages it has been associated with cell differentiation, tissue remodeling, patterning and morphogenesis [70]. Interestingly, even though the effect of human EV-endMSCs was tested on murine embryos, the embryos increased their secretion of VEGF and PDGF-AA of murine origin. This fact highlights that EV-endMSCs exert their effect in a non-species-specific manner and suggests that the murine model can be a good candidate to further investigate the efficacy of EV-endMSCs of human origin on these embryos. To summarize, to the authors' best knowledge, this is the first report describing the lack of toxicity and the beneficial effect of human EV-endMSCs on embryos of any species.
The increased release of VEGF and PDGF-AA may indicate that the beneficial effect is mediated not only through enhanced embryo quality, reflected by a significant increase in total cell number per blastocyst and in embryo hatching, but also through support of angiogenesis, vascularization, differentiation and tissue remodeling of the endometrium after embryo hatching, in view of the soluble factors released. These results confirm a beneficial effect of EV-endMSCs in the field of assisted reproduction and should stimulate future research in this still underexplored area.
On the Degrees-of-Freedom of the 3-user MISO Broadcast Channel with Hybrid CSIT

The 3-user multiple-input single-output (MISO) broadcast channel (BC) with hybrid channel state information at the transmitter (CSIT) is considered. In this framework, there is perfect and instantaneous CSIT from a subset of users and delayed CSIT from the remaining users. We present new results on the degrees of freedom (DoF) of the 3-user MISO BC with hybrid CSIT. In particular, for the case of 2 transmit antennas, we show that with perfect CSIT from one user and delayed CSIT from the remaining two users, the optimal DoF is 5/3. For the case of 3 transmit antennas and the same hybrid CSIT setting, it is shown that a higher DoF of 9/5 is achievable, and this result improves upon the best known bound. Furthermore, with 3 transmit antennas and the hybrid CSIT setting in which there is perfect CSIT from two users and delayed CSIT from the third one, a novel scheme is presented which achieves 9/4 DoF. Our results also reveal new insights on how to utilize hybrid channel knowledge in multi-user scenarios.

I. INTRODUCTION

There has been significant recent interest in understanding the impact of delayed CSIT on the DoF of multi-user MIMO systems. Maddah-Ali and Tse [1] showed that for the K-user MISO broadcast channel, with a K-antenna transmitter and K single-antenna users, the optimal sum DoF is given by the elegant formula K/(1 + 1/2 + ... + 1/K). This result shows that even completely delayed CSIT can significantly increase the DoF by exploiting overheard side information at the users/receivers. However, this result assumes homogeneity in channel knowledge in the following sense: CSIT from every user is delayed. This assumption may not always hold in practice, and the delays experienced in acquiring CSIT can vary across users. Such scenarios can arise when some of the users can supply timely CSIT whereas others supply CSIT with delay (which could be a result of factors such as uplink overhead or infrequent feedback). This heterogeneity of channel knowledge motivates the framework of hybrid CSIT.

To formalize the hybrid CSIT framework, we denote the availability of CSIT from a particular receiver through a variable I_CSIT, which can take the values P or D. For receiver k, the state I_CSIT^k = P indicates that it supplies perfect and instantaneous CSIT, and the state I_CSIT^k = D indicates that it supplies completely delayed CSIT. Thus, for an M-antenna transmitter and K single-antenna receivers, i.e., the (M, K) MISO BC, there are a total of 2^K possible CSIT configurations.

The understanding of how to optimally utilize hybrid CSIT is far from complete, and optimal results are known only for the case of the (2, 2) MISO BC. If the transmitter has perfect CSIT from both receivers (PP), then the optimal DoF is 2, which can be achieved using beamforming techniques [2]. When there is delayed CSIT from both users (DD), the optimal DoF reduces to 4/3 [1]. For the hybrid CSIT scenario in which the transmitter has instantaneous CSI from receiver 1 and delayed CSI from receiver 2 (hybrid CSIT: PD), it was shown in [3] that the optimal DoF is 3/2. We next come to the simplest non-trivial extension of the hybrid CSIT setting beyond 2 receivers, i.e., the three-receiver (M, 3) MISO BC, which is the main focus of this paper. Here, a total of 2^3 = 8 possible CSIT configurations, namely PPP, PPD, PDP, DPP, PDD, DPD, DDP, DDD, can arise.
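As a quick worked example of the numbers above, the following Python snippet (our own illustration; the function name is arbitrary) evaluates the Maddah-Ali-Tse sum DoF K/(1 + 1/2 + ... + 1/K) exactly and collects the known 2-user values for comparison:

```python
from fractions import Fraction

def delayed_csit_dof(K: int) -> Fraction:
    # Maddah-Ali-Tse sum DoF of the K-user MISO BC with fully delayed CSIT:
    # K / (1 + 1/2 + ... + 1/K).
    harmonic = sum(Fraction(1, i) for i in range(1, K + 1))
    return Fraction(K) / harmonic

for K in (2, 3):
    print(K, delayed_csit_dof(K))   # K=2 -> 4/3 (DD), K=3 -> 18/11 (DDD)

# Known (2,2) MISO BC sum DoF for the three CSIT configurations:
known_2user = {"PP": Fraction(2), "PD": Fraction(3, 2), "DD": delayed_csit_dof(2)}
print(known_2user)
```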
Essentially, we have 4 non-degenerate CSIT configurations, namely PPP, PDD, PPD and DDD, which, depending on the number of transmit antennas (M = 2 or M = 3), lead to two scenarios. The optimal DoF of the (2, 3) MISO BC is 2 with either the PPP or the PPD configuration (limited by the 2 transmit antennas). For the DDD configuration, it has been shown in [1] that the optimal DoF is 3/2. Therefore, the only remaining case for which the optimal DoF was not known prior to this work is the PDD configuration. In this paper, we show that the optimal DoF for this CSIT configuration is 5/3. We next consider the (3, 3) MISO BC with hybrid CSIT. For this setting, the optimal DoF is 3 with the PPP configuration [2] and reduces to 18/11 in the DDD configuration [1]. However, the optimal DoF values for the remaining two CSIT configurations, i.e., PDD and PPD, are unknown. We present novel schemes for these two configurations which exploit hybrid channel knowledge to achieve sum DoF values of 9/5 and 9/4, respectively.

The paper most relevant to this work is [8], in which outer bounds for the (K, K) MISO BC are obtained for general hybrid CSIT configurations. In addition, a coding scheme for the (3, 3) MISO BC with PDD configuration is given there which achieves 5/3 DoF. Our results improve upon this bound to achieve 9/5 DoF, as well as establish that the optimal sum DoF of the (2, 3) MISO BC with PDD configuration is 5/3. Our results show how to utilize hybrid CSIT, which is a mixture of instantaneous and outdated CSIT from different receivers. While the core idea of exploiting overheard side information at the receivers supplying delayed CSIT bears similarities to [1], the new technical challenge is that this exploitation must be done so that instantaneous CSIT (from the other receivers) can be simultaneously harnessed.

II. SYSTEM MODEL

An (M, K) MISO broadcast channel with an M-antenna transmitter and K single-antenna receivers with hybrid CSIT is considered. The received signal at the kth receiver is given by

y_k(t) = h_k(t) x(t) + z_k(t),

where x(t) is the M × 1 channel input at time t with E[||x(t)||^2] ≤ P_T, where P_T is the average input power constraint, and h_k(t) is the 1 × M channel vector from the transmitter to receiver k at time t. Without loss of generality, h_k(t) is assumed to be sampled from a continuous distribution (e.g., Rayleigh) with an identity covariance matrix, independent and identically distributed (i.i.d.) across time and across receivers. The additive noise z_k(t) is distributed according to CN(0, 1) for k = 1, ..., K and is assumed to be independent of all other random variables. Throughout the paper, we assume the availability of global channel state information at the receivers (i.e., full CSIR). The rate tuple (R_1, R_2, ..., R_K), with R_k = log(|W_k|)/n, where W_k is the message intended for the kth receiver, is achievable if there exist an encoding function and K decoding functions (one for each receiver) such that the probability of decoding error at each receiver can be made arbitrarily small. The encoding function depends on the specific hybrid CSIT configuration. For example, when the transmitter has perfect and instantaneous CSIT from the 1st receiver and delayed CSIT from the remaining (K − 1) receivers, the encoding function depends on the current and past CSIT of the 1st receiver and only the past CSIT of the other (K − 1) receivers. For other hybrid CSIT settings, the encoding and decoding functions can be defined similarly.
In this paper, we focus on the sum DoF of the M-antenna, K-receiver MISO BC (henceforth referred to as the (M, K) MISO BC), which is defined as

DoF(M, K) = lim_{P_T → ∞} max (R_1 + ... + R_K)/log(P_T),

where the maximum is over all achievable K-tuples (R_1, ..., R_K).

III. MAIN RESULTS

Theorem 1: The optimal sum DoF of the (2, 3) MISO BC with instantaneous CSIT from receiver 1 and delayed CSIT from receivers 2, 3, i.e., I_CSIT^1 = P, I_CSIT^2 = I_CSIT^3 = D, is 5/3.

Theorem 2: The sum DoF of the (3, 3) MISO BC with instantaneous CSIT from receiver 1 and delayed CSIT from receivers 2, 3, i.e., I_CSIT^1 = P, I_CSIT^2 = I_CSIT^3 = D, is at least 9/5.

Theorem 3: The sum DoF of the (3, 3) MISO BC with instantaneous CSIT from receivers 1 and 2, and delayed CSIT from receiver 3, i.e., I_CSIT^1 = I_CSIT^2 = P, I_CSIT^3 = D, is at least 9/4.

The converse proofs (upper bounds) follow directly from the arguments in [1], [8], [9] and are therefore omitted. Henceforth, we present new achievable schemes (lower bounds on the DoF), which are the main contributions of this paper.

IV. ACHIEVABILITY PROOFS

We first recall the optimal scheme for the (2, 2) MISO BC [3] with hybrid CSIT configuration PD (instantaneous CSIT from user 1, delayed CSIT from user 2), which achieves the DoF value of 3/2. Here, the transmitter sends two symbols (a_1, a_2) to user 1 and one symbol b to user 2 in two time slots as follows: in the first time slot, it sends [a_1 a_2]^T + h_1^⊥(1)[b 0]^T, so that user 1 receives an interference-free linear combination L_1(a_1, a_2), whereas user 2 gets a linear combination of (a_1, a_2) and b, denoted by L_2(a_1, a_2) + b. Thus, L_2(a_1, a_2) is a symbol which is desirable to both users, since user 1 can decode (a_1, a_2) from L_1(a_1, a_2), L_2(a_1, a_2), whereas user 2 can use it to decode b. We therefore call L_2(a_1, a_2) an order-2 symbol, i.e., a symbol desired by 2 users; this symbol can be sent in the second time slot, achieving 3/2 DoF.

While the order-2 DoF in the (2, 2) MISO BC is 1 (one order-2 symbol can be delivered to two receivers in one time slot), one can do better in the 3-user MISO BC. For the 3-user MISO BC, there are 3 possible types of order-2 symbols, namely symbols desired by receivers (1, 2), symbols desired by receivers (1, 3), and symbols desired by receivers (2, 3). We first present the optimal degrees of freedom region for delivering order-2 symbols with hybrid CSIT. This result forms the basis of the achievability proofs of Theorems 1 and 2.

Lemma 1: The order-2 DoF region of the (2, 3) MISO BC with PDD CSIT configuration is given by the inequalities (5)-(7).

Proof: The converse proof follows from arguments similar to those in [1], [8], [9] and is therefore omitted. From (5)-(7), the optimal order-2 sum DoF is 5/4, corresponding to the tuple (d_12, d_23, d_13) = (1/2, 1/4, 1/2). We next present a novel coding scheme which achieves this tuple. To this end, we denote by ab-, bc- and ac-symbols the order-2 symbols desired by receivers (1, 2), (2, 3) and (1, 3), respectively, and present a scheme which sends (ab_1, ab_2, ac_1, ac_2, bc) to the corresponding receivers in 4 time slots. This scheme is shown in Fig. 2 and is described next:

• At t = 1, the transmitter sends the ab-symbols along with bc; the receiver outputs are shown in Fig. 2, where G_i(L_j(ab_1, ab_2), bc) indicates a linear combination of L_j(ab_1, ab_2) and bc. At the end of t = 1, we note that the linear combination L_3(ab_1, ab_2) is useful for all 3 receivers, i.e., it is an order-3 symbol.

• At t = 2, an identical scenario is created with the ac-symbols. The corresponding outputs at the receivers are shown in Fig. 2, and, similarly to L_3(ab_1, ab_2), F_2(ac_1, ac_2) is also an order-3 symbol.
• We now note that these order-3 symbols, i.e., L_3(ab_1, ab_2) and F_2(ac_1, ac_2), can be reconstructed at the transmitter via delayed CSIT from users 2 and 3. These symbols can subsequently be delivered in time slots t = 3 and t = 4 (as the order-3 DoF of a 3-receiver MISO BC is 1).

Finally, at the end of these 4 time slots, the 1st receiver can decode ab_1, ab_2 using L_1(ab_1, ab_2), L_3(ab_1, ab_2), and ac_1, ac_2 using F_1(ac_1, ac_2), F_2(ac_1, ac_2). The 2nd receiver can decode bc by canceling the interference from F_2(ac_1, ac_2). This bc symbol is used to recover L_2(ab_1, ab_2) from G_1(L_2(ab_1, ab_2), bc). Thus, using L_2(ab_1, ab_2), L_3(ab_1, ab_2), it can reconstruct the symbols ab_1, ab_2 by solving two linear combinations in two unknowns. Along similar lines, the 3rd receiver can reconstruct (ac_1, ac_2, bc). Thus, the order-2 DoF tuple (d_12, d_23, d_13) = (1/2, 1/4, 1/2) is achievable.

We next present the optimal scheme that uses the result of Lemma 1 to achieve the DoF tuple (d_1, d_2, d_3) = (1, 1/3, 1/3). Specifically, the scheme sends 12 symbols to the 1st receiver and 4 symbols each to receivers 2 and 3, in 12 time slots.

Stage 1 - generating order-2 symbols: This stage consists of 3 phases, denoted phase-bc, phase-ab and phase-ac. Phase-bc corresponds to creating one order-2 symbol for receivers (2, 3). Phase-ab and phase-ac are used to create {ab_1, ab_2} and {ac_1, ac_2}, respectively. Fig. 3 shows the outputs at the receivers in this stage and the mechanism for generating order-2 symbols.

Phase-bc: creating 1 bc-symbol

• At t = 1, the transmitted and received signals are as shown in Fig. 3. At the end of t = 1, notice that A_2 is useful for both the 1st and 2nd receivers. Since the transmitter has delayed CSI from the 2nd receiver, it can reconstruct A_2. Note that A_1, A_2 and A_{1,2} are linear combinations of the two symbols (a_1, a_2).

• At t = 2, the transmitter sends A_2 along with a new symbol b_2 as x(2) = [A_2 0]^T + h_1^⊥(2)[b_2 0]^T, and the outputs at the receivers are shown in Fig. 3. At the end of t = 2, the 1st receiver can decode the symbols a_1, a_2 using the two linearly independent combinations A_1 and A_2. L_4(A_2, b_2), overheard at receiver 3, is useful for the 2nd receiver, as it helps decode b_1, b_2 (since it can recover the 3 symbols A_2, b_1, b_2 from L_1, L_3 and L_4). Thus, L_4(A_2, b_2) is an ingredient for creating the bc-symbol.

• At t = 3 and t = 4, the transmitter sends new symbols a_3, a_4 for the 1st receiver along with c_1, c_2 for the 3rd receiver, as shown in Fig. 3. The side information L_7(A_4, c_2) at receiver 2 is useful for the 3rd receiver (to recover c_1, c_2). Therefore, from t = 2 and t = 4, L_4(A_2, b_2) + L_7(A_4, c_2) is the bc-symbol.

Stage 2 - delivering order-2 symbols: The 5 order-2 symbols created in stage 1 can be delivered in 4 time slots using the scheme developed in Lemma 1. Upon receiving these order-2 symbols, all receivers can decode their desired symbols. For example, upon receiving A_6, the 1st receiver can decode a_5, a_6 using A_5, which was already received at t = 5. The 2nd receiver can use A_6 to cancel the interference in the symbol L_9(A_6, b_3) and decode b_3. Similar reasoning holds for the other symbols at the receivers. Overall, the transmitter spent 8 time slots in the 1st stage and 4 time slots in the 2nd stage, which gives the optimal DoF tuple (12/12, 4/12, 4/12) = (1, 1/3, 1/3), i.e., DoF_PDD(2, 3) = 5/3.
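To make the overheard-side-information mechanism concrete, here is a small numerical sketch (our own, not from the paper) of the two-slot (2, 2) PD scheme recalled at the start of this section, checking decodability with random channels (noise omitted, i.e., the high-SNR limit):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=2)            # two symbols intended for user 1
b = rng.normal()                  # one symbol intended for user 2
h1 = rng.normal(size=2)           # slot-1 channel to user 1 (known instantly)
h2 = rng.normal(size=2)           # slot-1 channel to user 2 (known with delay)

# Slot 1: superpose [a1 a2]^T with b beamformed into the null space of h1.
h1_perp = np.array([-h1[1], h1[0]])          # h1 @ h1_perp == 0
x1 = a + h1_perp * b
y1 = h1 @ x1                      # user 1 sees L1(a1, a2): b is nulled out
y2 = h2 @ x1                      # user 2 sees L2(a1, a2) + (h2 @ h1_perp) b

# Slot 2: with delayed CSIT (h2), the transmitter reconstructs the
# order-2 symbol L2(a1, a2) = h2 @ a and broadcasts it to both users.
L2 = h2 @ a
a_hat = np.linalg.solve(np.vstack([h1, h2]), np.array([y1, L2]))
b_hat = (y2 - L2) / (h2 @ h1_perp)

print(np.allclose(a_hat, a), np.isclose(b_hat, b))  # True True: 3 symbols / 2 slots
```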
In this section, we present a scheme that achieves the tuple (d_1, d_2, d_3) = (1, 2/5, 2/5), i.e., a total of 9/5 DoF, improving upon the best known bound of 5/3 [8]. The scheme sends 10 symbols to the 1st receiver and 4 symbols each to receivers 2 and 3 in a total of 10 time slots; the outputs at the receivers are shown in Fig. 4.

Remark 1: Similar to the scheme for the (2, 3) MISO BC, this scheme also has two stages: stage 1 is dedicated to generating order-2 symbols and stage 2 to their delivery using Lemma 1. However, there are two key distinctions: a) the mechanism for generating order-2 symbols is different, and b) the rate of creation of order-2 symbols is higher for the (3, 3) MISO BC, leading to a higher DoF value of 9/5 compared to 5/3.

Stage 1 - generating order-2 symbols: This stage (shown in Fig. 4) is split into two distinct phases: phase-bc takes 3 time slots and generates 1 bc-symbol, and phase-(ab, ac) takes 3 time slots to jointly generate 2 ab-symbols and 2 ac-symbols.

Phase-bc: creating 1 bc-symbol. This phase sends (a_1, a_2, a_3) along with (b_1, b_2) and (c_1, c_2) in three time slots; the transmitted signals and receiver outputs are shown in Fig. 4. It is clear that at the end of this phase, receiver 1 is able to decode (a_1, a_2, a_3), and the transmitter can create the following bc-symbol that is useful for both receivers 2 and 3: bc = L_2(A_2, B_2) + G_1(A_3, C_1).

Phase-(ab, ac): creating 2 ab-symbols and 2 ac-symbols. This phase sends {a_j, j = 4, ..., 10} for receiver 1, (b_3, b_4) for receiver 2 and (c_3, c_4) for receiver 3, to generate 2 ab- and 2 ac-symbols in 3 time slots (at a higher rate in comparison to Theorem 1). The transmissions at t = 4 and t = 5 are shown in Fig. 4; at t = 6, the transmitter sends the resulting side-information symbols along with a_10, a new symbol for receiver 1. We next note that in phase-(ab, ac), receiver 1 has obtained a total of 3 interference-free symbols and requires 4 more useful symbols in order to decode {a_j, j = 4, ..., 10}. Receiver 2 uses y_2(5) and y_2(6) to eliminate G_3(A_8, C_3) and obtain a linear combination of (a_10, A_6) and B_4, denoted A_6 + B_4. Receiver 2 also obtains L_3(A_5, B_3) at t = 4, which leads to the generation of the 2 ab-symbols: A_5 and A_6. Receiver 3 uses y_3(4) and y_3(6) to eliminate L_4(A_6, B_4) and obtain a linear combination of (a_10, A_8) and C_3, denoted A_8 + C_3. Thus, A_9 and A_8 are the 2 ac-symbols.

We now turn to the scheme for the (3, 3) MISO BC with PPD configuration (Theorem 3). At t = 1, we send 2 symbols each to the 1st and 2nd receivers, (a_1, a_2) and (b_1, b_2), in directions orthogonal to h_2(1) and h_1(1), respectively, so as not to create cross interference. Additionally, we transmit one symbol c for the 3rd receiver along the direction h_{(1,2)}^⊥(1), orthogonal to both h_1(1) and h_2(1). This is repeated at t = 2 with new symbols (a_3, a_4) and (b_3, b_4) for the 1st and 2nd receivers, but with the same symbol c for the 3rd receiver. At the end of these two time slots, notice from Fig. 5 that receiver 1 requires the linear combinations A_2 and A_4. These symbols are also useful to the 3rd receiver, as they help cancel interference and thereby decode the symbol c. Similarly, B_2 and B_4 are required at the 2nd and 3rd receivers. Thus, the goal of the next two time slots is to send these symbols to the receivers in an efficient manner. Using delayed CSIT from the 3rd receiver, the transmitter can reconstruct A_2, A_4, B_2 and B_4. At t = 3, the transmitter sends the symbols A_2 and B_4, and the receivers obtain linear combinations involving G_1(B_2, B_4) and c, L_5(G_2(A_2, A_4), c), and L_6(G_1(B_2, B_4), G_2(A_2, A_4)).
In summary, this scheme achieves the DoF triplet (1, 1, 1/4), or in other words a sum DoF of 9/4. This concludes the achievability proof of Theorem 3.

V. CONCLUSIONS

In this paper, we investigated the impact of hybrid CSIT on the degrees of freedom of the 3-receiver MISO BC. Novel achievable schemes were presented for various hybrid CSIT configurations, which established the optimal DoF (for the (2, 3) MISO BC) and improved upon the best known achievable DoF (for the (3, 3) MISO BC). Our results show that an important aspect of dealing with hybrid CSIT is the generation and transmission of higher-order symbols which are desired by multiple receivers. As our schemes show, this problem is far from trivial even for the 3-receiver broadcast channel. Proving the optimality of these schemes, and extending these ideas to more than K = 3 receivers, are interesting open problems for future work.
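As a compact closing check of the symbol/slot accounting used by the schemes above (our own bookkeeping, not part of the paper):

```python
from fractions import Fraction as F

# (symbols delivered per receiver, time slots used) for each scheme above.
schemes = {
    "Lemma 1 order-2 delivery, (2,3) PDD": ([2, 1, 2], 4),   # ab / bc / ac symbols
    "Theorem 1, (2,3) PDD":                ([12, 4, 4], 12),
    "Theorem 2, (3,3) PDD":                ([10, 4, 4], 10),
    "Theorem 3, (3,3) PPD":                ([4, 4, 1], 4),
}
for name, (symbols, slots) in schemes.items():
    dof = sum(F(s, slots) for s in symbols)
    print(f"{name}: sum DoF = {dof}")   # 5/4, 5/3, 9/5, 9/4
```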
Turbulence in exciton-polariton condensates

Nonequilibrium condensate systems such as exciton-polariton condensates are capable of supporting spontaneous vortex nucleation. The spatial inhomogeneity of the pumping field and/or a disordered potential creates velocity flow fields that may become unstable to vortex formation. This letter considers ways in which turbulent states of interacting vortices can be created. It is shown that by combining just two pumping intensities it is possible to create a superfluid turbulence state of well-separated vortices, a strong turbulence state of de-structured vortices, or a weak turbulence state in which all coherence of the field is lost and the motion is driven by weakly interacting dispersive waves. The decay of turbulence can be obtained by replacing the inhomogeneous pumping with a uniform one. We show that both in quasi-equilibrium and during the turbulence decay there exists an inertial range dominated by four-wave interactions of acoustic waves.

Introduction. The phenomenon of turbulence, the chaotic motion of vortices of many different length scales, is ubiquitous in nature, and its quantitative understanding is a notoriously difficult problem of classical physics. Turbulence occurs in many usual fluid flows as well as in exotic systems such as plasmas and superfluids. Vorticity in superfluids is quantized in units of h/m, where m is the mass of the boson, in contrast with the continuously distributed vorticity of a classical Navier-Stokes fluid. In superfluids, quantized vorticity is considered to be evidence of a macroscopically occupied quantum state that can be described by a classical complex-valued wave function ψ(x, t). The quantization of velocity circulation in superfluids leads to significant differences between superfluid turbulence (ST) and classical turbulence. On the other hand, at large Reynolds numbers the motion of well-separated vortices in an incompressible classical flow may have features similar to ST. In this case the vortex dynamics in superfluids is almost classical, in accordance with the Biot-Savart law (BSL). The decay of the turbulence (loss of the vortex line density) occurs due to dissipative effects induced by interactions with a normal fluid component (with a thermal cloud).

Recently, by introducing an external oscillatory perturbation in a trapped atomic BEC, it became possible to obtain a disordered system of many topological defects [1]. The dynamics of this matter field differs from both the dynamics of vortices in classical turbulence and that in superfluid helium turbulence. Firstly, the characteristic distance between vortices is comparable to their core sizes, so the chaotic behavior is seen at the level of a single vortex; secondly, these vortices are not structured, so they do not obey the BSL; finally, the system is in a strongly non-equilibrium state. This creates a novel, nontrivial regime of a classical complex matter field, the "strong turbulence" state, whose evolution is quite different from that of an ordered condensate. In analogy with other nonlinear systems such as plasmas, fluids and nonlinear optics, apart from the regime of strong turbulence there exists the regime of weak turbulence, in which all phases of the complex amplitudes of the matter field are random. Recently [2], these three regimes (superfluid, strong turbulence and weak turbulence) have been observed at different temperatures in 2D cold atomic gases, showing a universal scaling. Weak turbulence plays a crucial role in the kinetics of Bose-Einstein condensation [3].
It was shown that a strongly non-equilibrium Bose gas evolves from the regime of weak turbulence to superfluid turbulence via states of strong turbulence in the long-wavelength region of energy space. An important question remains whether it is possible to force a condensate system to pass through these stages in the reverse order. It has been suggested [4] that if a sufficiently strong external perturbation is applied to the trap, it is in principle possible to obtain the weak turbulence state. When this is done, it will lead to the discovery of nontrivial transitional regimes of classical matter fields in atomic systems [5].

In the last few years Bose-Einstein condensation has been achieved in solid-state systems [6], such as microcavities, ferromagnetic insulators and superfluid phases of 3He. Microcavity exciton-polaritons are quasi-particles that consist of superpositions of photons in semiconductor microcavities and excitons in quantum wells. The Bragg reflectors confining the photon component are imperfect, so exciton-polaritons have a finite lifetime and have to be continuously re-populated. Such a combination of pumping and decay leads to quasi-particle flow even at steady states of the system. At sufficiently low densities these quasi-particles can form a Bose-Einstein condensate, so the many-particle quantum system can be described by a classical equation in the form of the complex Ginzburg-Landau equation (cGLE) [8,9],

2i ∂ψ/∂t = (1 - iη)(-∇²ψ + |ψ|²ψ) + i(α - σ|ψ|²)ψ,   (1)

where α is an effective gain that represents the intensity of the pumping field and σ represents nonlinear losses. The unit of length is the healing length ξ = ħ/√(2mUρ∞), which defines the size of the vortex core, and the unit of time is ħ/2Uρ∞, where U is the strength of a δ-function interaction potential. We shall assume that α = α0 is constant away from some localized nonuniformities, so that the number density there is ρ∞. It is possible to include a disorder potential of the microcavity by adding V_ext(x)ψ to the right-hand side of Eq. (1). This equation is a mean-field description of the condensate; it can also be derived from the saddle point in a path-integral formalism [10]. In the absence of pumping and dissipation, Eq. (1) reduces to the Gross-Pitaevskii equation describing an equilibrium Bose-Einstein condensate. Energy relaxation has been noted to be of importance in experiments on extended 1D waveguides [11,12]. These effects can be included in the cGLE by means of the parameter η [13]. This is the same term that has been incorporated into the Gross-Pitaevskii equation to represent dissipation of the condensate component due to interactions with a thermal cloud [14].

FIG. 1: Density plots for η = 0, σ = 0.3 and (i) α(x) = 2 for |x - (±5, 0)| < 2 and 1/2 otherwise, at t = 100 (left panel); (ii) V_ext = x² + y² and α(x) = 5 for x² + 2y² < 64, α = -0.5 otherwise (right panel). The red ellipse indicates the pumping spot. Luminosity of the density plots is proportional to density. Vortices are seen as black dots.

The turbulence and the mechanisms of vortex generation in equilibrium condensates are well known. These include (i) interactions of finite-amplitude sound waves (e.g. energy exchange between rarefaction pulses may lead to vortex formation) [15]; (ii) the existence of critical velocities of the flow (e.g. moving objects generate vortices if the Landau critical velocity is reached on their surfaces [16]); and (iii) modulational instabilities of density variations (e.g. the transverse instability of a dark soliton in 2D generates vortices) [17].
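Several of the regimes discussed in this letter can be explored directly by integrating Eq. (1) numerically. The following split-step Fourier sketch is our own illustration (grid, time step and pumping geometry are arbitrary choices, and η = 0): the Laplacian is treated spectrally, and the local nonlinearity, gain and loss pointwise:

```python
import numpy as np

# Grid and illustrative parameters (healing-length units; eta = 0 here).
N, L, dt = 256, 50.0, 0.02
alpha0, alpha1, sigma = 0.5, 3.0, 0.3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2

# Pumping: uniform background plus one higher-intensity spot.
alpha = alpha0 + alpha1 * ((X**2 + Y**2) < 4.0)

# Uniform steady state has rho = alpha0/sigma away from the spot (eta = 0).
psi = np.sqrt(alpha0 / sigma) * (1 + 0.01 * np.random.randn(N, N))
for _ in range(2000):
    # Half-step of the kinetic part of 2i dpsi/dt = -lap(psi) + ...
    psi = np.fft.ifft2(np.exp(-1j * K2 * dt / 4) * np.fft.fft2(psi))
    # Full step of |psi|^2 psi + i(alpha - sigma |psi|^2) psi, pointwise.
    rho = np.abs(psi) ** 2
    psi *= np.exp((-1j * rho + (alpha - sigma * rho)) * dt / 2)
    # Second kinetic half-step (Strang splitting).
    psi = np.fft.ifft2(np.exp(-1j * K2 * dt / 4) * np.fft.fft2(psi))
```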
Some of these mechanisms may produce vortices in the cGLE as well. For instance, the flow of exciton-polaritons around a spatially extended defect may produce vortex pairs of opposite circulation, depending on the flow velocity [18]. In addition, Eq. (1) with gain and dissipation can form vortices by other physical mechanisms. For instance, an inhomogeneity of the pumping or/and disorder potentials forms steady currents which may produce vortices through a pattern-forming symmetry-breaking mechanism [8]. Although the formation of vortices has been observed in experiments [7], they seem to appear due to the intrinsic disorder potential in CdTe. The vortices become pinned at the local minima of such a potential and remain stationary. So the conditions under which a turbulent state of matter can be obtained in exciton-polariton condensates remained unclear. The purpose of this letter is to suggest how the turbulent state can be created in such a system, to study the properties and structure of the turbulence, and to propose how different regimes can be detected experimentally. It will be shown that turbulence can be created by deliberately designed pumping fields and, depending on the characteristics of such fields, the system can reach various regimes of turbulence, from superfluid turbulence to strong and finally weak turbulence.

Vortex formation. To illustrate the basic mechanism that drives the formation of vortices, we first consider a pumping field in the form of a step function in 1D, so that α = α1 + α0, σ = σ1, η = η1 for x < 0 and α = α0, σ = σ0, η = η0 for x > 0. The steady-state mass continuity and Bernoulli equations resulting from the Madelung transformation ψ = √ρ exp(iS) applied to Eq. (1) are µ = u² + ρ − (d²√ρ/dx²)/(2√ρ) and d(ρu)/dx = (α − ηµ − σρ)ρ, where ρ is the number density, u = S'(x) is the velocity, and the chemical potential µ is introduced by 2i∂ψ/∂t = µψ. Away from large density fluctuations we can drop the quantum pressure term (d²√ρ/dx²)/(2√ρ). We expect that u → 0 as x → −∞, so µ → (α1 + α0)/(σ1 + η1). As x → ∞, therefore, there will be a steady current

u = [(α1(η0 + σ0) + α0(η0 − η1 + σ0 − σ1)) / (σ0(η1 + σ1))]^{1/2}

generated by the step. The presence of boundaries or other sources of outflow generates interference fringes, seen, for instance, in recent experiments in 1D [11]. In 2D, fringes that meet at nonzero angles evolve into a pair of vortices of opposite circulation, as seen in the left panel of Fig. 1. The mechanism leading to vortex formation in this case is analogous to the transverse instability of a density depletion in the conservative Gross-Pitaevskii equation [17]: the speed of grey solitons is inversely proportional to their depth, so modulation in the transverse direction forces different parts of the front to move with different velocities, leading to vortex pair formation. This suggests that several sources of such flows may continuously generate a large number of vortices, leading to a turbulent flow. Another possibility to create a turbulent flow is related to the formation of a vortex lattice in a harmonic trapping potential due to an instability of a
These decay rates are in contrast with a power-decay rates of the order t −3/4 in classical 2D viscous fluids and in the limit of the cGLE equation with zero dispersion [21]. non-rotating solution [8]. By removing the circular symmetry of either the trapping potential or pumping field it is possible to create a turbulent flow of vortices instead of a regular vortex lattice (see the right panel of Fig. 1). Numerical set-up. In order to engineer a turbulent formation and interaction of vortices we shall consider an inhomogeneous pump α(x) that can be obtained by passing the laser beam through a spatial phase (light) modulator. This will be even further simplified by assuming that only two laser intensities are allowed: the background with a superimposed set of almost periodical spots of a higher intensity, so that α(x) = α 0 everywhere except for x inside the circles |x − a i | 2 < c i where α(x) = α 1 + α 0 and c i = c+χ i , a i = (a 1 ±iT +δ i , a 2 ±iT +φ i ), T is the period of spots, c is the square of the spot radius, i = 0, 1, 2... and χ i , δ i and φ i are random displacements of the order of the healing length. Both η and σ take different values for different pumping intensities. In practice, setting different values for these quantities does not change the qualitative behaviour of the system. In what follows both η and σ with be set to be constants across the fields [19]. Through the time evolution one can observe the formation of vortices until their number begins to fluctuate about a constant value; see Fig. 2. The larger difference between the two pumping intensities, α 1 , leads to the faster outflows and a larger number of vortices generated. The relaxation has a negative effect on the number of vortices (compare the vortex densities for α 1 = 3 and η = 0 or η = 0.01 on Fig. 2). At time t s = 500 (well after the quasi-equilibrium is reached) we remove the nonuniformity of the pump by setting α 1 = 0. After that the vortices start annihilating each other leading to the decay of the turbulence. This stage can be compared and contrasted with the wave turbulence of the Gross-Pitaevskii equation where the dissipation is at a given (high) momenta and so has a different physical meaning [20]. By tuning the nonuniformity of the pumping field it is possible to reach different turbulent regimes. If the difference between intensities, α 1 , is below a threshold or the distance between the spots of higher intensity is large, no vortices will be created. In a case of a moderate α 1 and only few spots a set of several well-formed wellseparated vortex pairs is created and the system is in a superfluid turbulence state (see the left panel of Fig. 1 and the left inset of Fig. 3). By increasing the difference between intensities α 1 it is possible to create the state of strong turbulence (where vortex cores start to overlap; see the top inset of Fig. 3). It is, therefore, tempting to see if the system can be driven even further to enter the regime of weak turbulence in which all coherence is lost and all Fourier amplitudes have random phases. To verify this we calculated the second moment of the correlation function g 2 = |ψ| 4 / |ψ| 2 2 . By Wick's theorem the state of the weak turbulence corresponds to g 2 = 2. As shown on Fig. 3 by raising α 1 it is possible for the system to reach the weak turbulence state. Note that the relaxation η increases g 2 . 
This occurs because the relaxation increases the rate at which vortex pairs annihilate by bringing the vortex cores closer to each other; this effect can be seen in Fig. 2, showing the number of vortices in quasi-equilibrium. The energy released by vortex annihilation is converted into acoustic energy, therefore increasing g2.

In order to describe the turbulence in Eq. (1), we shall assume that there exists an inertial range in momentum space and that the role of pumping and dissipation is insignificant there. The evolution equation for the wave spectrum, defined by ⟨a_k1 a*_k2⟩ = n_k1 δ(k1 − k2), with a_k the Fourier transform of ψ and k_i the discrete wave vectors, can be obtained by using a random-phase approximation and expanding in the small nonlinearity [22]. The equation takes the form

∂t n_k1(t) = ∫ d²k2 d²k3 d²k4 W_{k1,k2;k3,k4} (n_k3 n_k4 n_k1 + n_k3 n_k4 n_k2 − n_k1 n_k2 n_k3 − n_k1 n_k2 n_k4).

Two solutions of this evolution equation correspond to a thermodynamic equipartition of the total kinetic energy E = ∫ k² n_k dk, so that n_k ∼ k^{−2}, and to an equipartition of the total number of particles N = ∫ n_k dk, so that n_k ∼ const. These correspond to the two limits of the Rayleigh-Jeans distribution T/(k² + µ), where T is the temperature.

We verified the existence of the inertial range in our simulations. Although the system is in a quasi-equilibrium rather than in true thermodynamic equilibrium, we observed both spectra. For the nonuniform pumping, the wave spectrum shows the particle equipartition for strong turbulence (see the top (blue) curve of Fig. 4), whereas the superfluid turbulence spectrum corresponds to energy equipartition (the Kolmogorov-Zakharov energy cascade n_k ∼ k^{−2}); see the grey (green) curve in the inset of Fig. 4. During the turbulence decay stage, the wave spectrum corresponds to n_k ∼ k^{−2}; see the bottom (red) curve of Fig. 4. This suggests that at these intermediate scales of the inertial range the turbulence is dominated by four-wave interactions, and the wave field is weakly nonlinear and dominated by acoustic modes.

FIG. 3: Starting from an initial constant density profile, the nonuniform pumping is applied, so that g2 rapidly grows, reaching a quasi-stationary state after t ∼ 20. After that, g2 fluctuates about a constant value. The blue dots (η = 0) and red squares (η = 0.01) show the average of g2 during the time interval [20, t_s]. Time snapshots of the normalized density |ψ|² are shown for the superfluid turbulence state (left inset), strong turbulence (bottom inset) and the weak turbulence state (top inset), for t < t_s.

In summary, we proposed a way to generate various regimes of turbulence in nonequilibrium condensates, such as exciton-polariton condensates. By designing a nonuniform pumping field that leads to sufficiently strongly interacting fluxes, it is possible to create superfluid turbulence with well-separated quantised vortices, strong turbulence with overlapping and de-structured vortices, or the weak turbulence state with a complete loss of coherence. Nonequilibrium condensates, therefore, are new and exciting systems with a nontrivial evolution of a complex matter field, with turbulence that may span regimes fundamentally different from classical fluid turbulence. The author acknowledges useful discussions with A. Amo, C. Ciuti, J. Keeling and B. Svistunov.
FIG. 4: (color online) The wave spectrum, log(n_k) vs. log(k), for the state of strong turbulence established for parameters η = 0, σ = 0.3, at t = 450 (nonuniform pumping field with α_1 = 3, α_0 = 1/2; top blue curve) and at t = 650 (uniform pumping field, α_1 = 0, α_0 = 1/2; bottom red curve). Lines corresponding to n_k ∼ const and n_k ∼ k^{−2} are included. The inset shows the wave spectrum for the state of superfluid turbulence, α_1 = 1, with an n_k ∼ k^{−2} spectrum in the inertial range, and for the transitional state, α_1 = 2, both for t < t_s.
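As a sketch of how such spectra can be extracted from a simulated field ψ, one can compute the Fourier amplitudes a_k and average |a_k|² over angular shells in k-space; the bin count and grid spacing below are illustrative assumptions.

```python
import numpy as np

def wave_spectrum(psi, dx=1.0, nbins=64):
    """Angle-averaged wave spectrum n_k from a 2D complex field psi."""
    a_k = np.fft.fft2(psi) / psi.size           # Fourier amplitudes a_k
    n2d = np.abs(a_k) ** 2                      # |a_k|^2 on the k-grid
    kx = 2 * np.pi * np.fft.fftfreq(psi.shape[0], d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(psi.shape[1], d=dx)
    K = np.hypot(kx[:, None], ky[None, :])      # k = |k| for every mode
    edges = np.linspace(0.0, K.max(), nbins + 1)
    idx = np.clip(np.digitize(K.ravel(), edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=n2d.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    k_mid = 0.5 * (edges[:-1] + edges[1:])
    return k_mid, sums / np.maximum(counts, 1)  # compare slopes: const vs k**-2
```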
2010-11-05T16:01:08.000Z
2010-10-25T00:00:00.000
{ "year": 2010, "sha1": "24846e8a779f3208abc5f7bfdd4c10cdb5dfe7e5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "24846e8a779f3208abc5f7bfdd4c10cdb5dfe7e5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
894342
pes2o/s2orc
v3-fos-license
On Functional Module Detection in Metabolic Networks
Functional modules of metabolic networks are essential for understanding the metabolism of an organism as a whole. With the vast amount of experimental data and the construction of complex and large-scale, often genome-wide, models, the computer-aided identification of functional modules becomes more and more important. Since steady states play a key role in biology, many methods have been developed in that context, for example, elementary flux modes, extreme pathways, transition invariants and place invariants. Metabolic networks can also be studied from the point of view of graph theory, and algorithms for graph decomposition have been applied for the identification of functional modules. A prominent and currently intensively discussed field of methods in graph theory addresses the Q-modularity. In this paper, we recall known concepts of module detection based on the steady-state assumption, focusing on transition-invariants (elementary modes) and their computation as minimal solutions of systems of Diophantine equations. We present the Fourier-Motzkin algorithm in detail. Afterwards, we introduce the Q-modularity as an example of a useful non-steady-state method and its application to metabolic networks. To illustrate and discuss the concepts of invariants and Q-modularity, we use a part of the central carbon metabolism in potato tubers (Solanum tuberosum) as a running example. The intention of the paper is to give a compact presentation of known steady-state concepts from a graph-theoretical viewpoint in the context of network decomposition and reduction and to introduce the application of Q-modularity to metabolic Petri net models.

Introduction

The knowledge of biochemical networks, in particular of metabolic networks, increases daily with the capabilities of new high-throughput technologies to measure all the participating molecules and the relations between them. This enables us to construct large and complex models for many pathways of different species. In particular, the modeling of metabolism helps us to understand biological function. A prerequisite for a quantitative model is the complete knowledge of metabolite concentrations and reaction constants and/or rates or, at least, a critical amount of them. However, in most cases, quantitative data in sufficient amounts and of high quality are rare and only available for rather small metabolic systems. This situation motivated the development of qualitative methods, which enable statements on the functional behavior and dynamic properties of a system without any knowledge of the kinetic parameters.

Metabolism is commonly understood as a system of interacting and hierarchically organized functional modules [1]. Scale-freeness, with the appearance of super-hubs such as ATP or NADH, is a typical feature of metabolic networks [2]. The evolutionary reason for, and advantage of, this organizational structure is a topic of ongoing controversial discussion; see, for example, [2,3]. Currently, bioinformatics takes up the formidable challenges of characterizing the structural properties common to different metabolic systems and of identifying functional modules and their hierarchical organization. Many concepts, methods and algorithms have emerged for network validation, decomposition and reduction. All are based on mathematical grounds and allow rigorous statements, even though the running time behavior becomes an issue for large networks.
Graph-theoretical methods are based on topological properties, mainly connectivity, and do not account for stoichiometric relations or steady-state conditions. Such non-steady-state methods have been developed in various scientific fields, for example, in physics [4], social science [5], economics [6], marketing [7,8], production processes [9] and communication [10]. Many modularization techniques based on graph partitioning have been developed and studied over decades [11]. Recently, the Q-modularity introduced by Newman & Girvan [12] has boosted the research on community detection in graphs [13]. Most techniques have been developed for networks of one-to-one (unipartite) interrelations between components. These methods are suitable for biological interaction networks, such as protein-protein interaction networks in proteomics; see, for example, [14]. However, for reaction systems, such as metabolic pathways, it is beneficial to consider bipartite graphs, where the metabolites cover the passive part and the enzyme-catalyzed reactions the active part of the system. This distinction enables a unique and exhaustive examination of the concurrent processes inherent in biological networks. Bipartiteness is a typical, intuitive feature of complex networks [15] and, thus, also of biochemical networks. The literature in this field of ongoing research is extensive, and we abstain from giving a representative overview.

The aim of this paper is, first, to present known steady-state methods for network decomposition from a graph-theoretical point of view; second, to introduce the application of Q-modularity to metabolic networks; and, third, to give a compact and understandable review of module detection, discussed from both perspectives, with and without the steady-state assumption. In the paper, we deviate from the traditional division into Methods and Results sections, because we partly present known concepts, but from a different point of view, in order to explain the new concepts. Thus, the organization of the paper is method driven. We start with a description of computer-science notions of computability. Afterwards, we continue with a recapitulation of steady-state network decomposition methods and their application to metabolic systems, including a brief consideration of network representation as hypergraphs and bipartite graphs, the definition of Petri nets and a detailed explanation of the Fourier-Motzkin algorithm for invariant computation. Addressing graph-theoretical concepts, we define and discuss communities, Q-modularity and network reduction. In this context, we consider the use of functional modules for network verification and reduction. To illustrate the concepts for network decomposition and reduction, we apply a small biochemical running example. Finally, we summarize and give conclusions.

Complexity Definitions of Algorithms and Problem Classification

In practice, we are interested in developing algorithms with the shortest possible running time. In computer science, problems formalized as algorithms are classified according to their running time behavior. This makes the formal estimation of the running times of algorithms essential, including the development of a unique notation. We consider the running time as dependent on the size of the input data and want to estimate the evolution of the computing time for large input sizes. Distinguishing the worst case, the best case and the average case, the worst case is of general interest and most widely applied.
For pairwise sequence alignment, the size of the input data is defined by the sequence length; for multiple sequence alignment, the number of sequences to be compared needs to be included as well. For graph-theoretical problems, the number of vertices, n, and edges, m, defines the size of the input data. Now, we have to find a mathematical function that behaves similarly to the running time function, representing an upper, lower or tight bound. Commonly, the Landau notation [23] is used to denote asymptotic upper bounds (O and o notation), lower bounds (Ω and ω notation) and tight bounds (Θ and θ notation). As the Big-O notation for the worst case is most widely used, we explicitly give its definition. For a more detailed description, we refer, for example, to [24].

Definition 1 (Big-O notation [24]): Let f(n) be the mathematical function that describes the behavior of our running time function. For a given function, g(n), we denote by O(g(n)) the set of functions O(g(n)) = { f(n) : there exist positive constants, c and n_0, such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n_0 }.

Complexity theory classifies problems according to their running time behavior in the worst case. Algorithms whose running time grows no faster than O(n^a m^b), with the exponents, a and b, as small as possible, are favorable. Problems whose algorithms exhibit such polynomial behavior are classified to be in the complexity class P (polynomial). Problems whose solutions can be verified in polynomial time, but for which no polynomial-time algorithms are known, belong to the complexity class NP (non-deterministic polynomial). Problems like the Traveling Salesman, Boolean Satisfiability or Integer Linear Programming are in NP. These problems are also NP-complete: NP-complete problems are decision problems in NP that are as hard as any other problem in NP. If there existed a polynomial-time algorithm for one NP-complete problem, then every problem in NP would also have a polynomial-time algorithm; this would settle the question "P = NP?", a fundamental open problem in computer science. For a list of NP-complete problems in graph theory, we refer to [25]. NP-hard problems are at least as hard as any NP-complete problem, but do not have to be in NP. There exist many other subclass definitions for special problems. One of these definitions, which we will need, is the class EXPSPACE, containing the problems solvable with O(2^{p(n)}) memory, where p(n) is a polynomial function of n.

In practical applications, the complexity class of a task gives a reasonable indicator for the chance of success when we search for solutions in large graphs. Please keep in mind that the complexity class describes the worst-case scaling property. The simplex algorithm for linear programming represents a well-known example: it has an impressive record of running fast in practice, despite having exponential-time complexity when applied to hard problem instances [26,27]. Note that the complexity class for the averaged scaling behavior is an independent (and interesting) question of its own. We will touch on the issue of complexity and computability again later. The rather long and growing list of NP-complete problems motivated the development of alternative concepts, such as DNA computing [28,29], quantum computing [30] and membrane computing [31]. However, a discussion of the capabilities and limitations of these concepts is outside the scope of this work.
Network Diagrams: Hypergraphs and Bipartite Graphs

Graph-theoretical representations are widely applied to illustrate networks. For biochemical networks, these graphs are usually directed. Traditionally, biologists and physicians use the hypergraph representation; see Figure 1a. A hypergraph consists of a finite set of vertices, representing metabolites, and a finite set of hyperedges, denoting an arbitrary number of reactions that transform metabolites. In metabolic networks, a hyperedge covers one reaction, which is usually named after the enzyme that catalyzes this reaction. Figure 1a illustrates a hypergraph representation of a part of the central carbon metabolism in young Solanum tuberosum (potato tubers). The edges are weighted by an integer number that corresponds to the stoichiometric coefficient of the chemical reaction. For example, the hyperedge, glycolysis, in Figure 1a represents the underlying stoichiometric equation: Fructose-6-P + 29 ADP → 29 ATP.

The delineation of a metabolic reaction system as a bipartite graph is more detailed. Bipartite graphs are widely used in computer science. In bipartite graphs, two types of vertices exist, whereby edges are only allowed between vertices of different types, i.e., the edges separate the vertex set into two vertex sets. Researchers in biology and medicine are accustomed to the metabolic pathway maps of the KEGG database [32] (see Figure 2) and, hence, inclined to apply bipartite graphs for visual representation.

Figure 1. Part of the central carbon metabolism in young potato tubers (Solanum tuberosum) in the hypergraph representation on the left side and, on the right side, the corresponding bipartite graph as a Petri net. The metabolites are modeled as places and the reactions as transitions, which are labeled by the corresponding enzyme names. Transitions without pre-places or post-places model the interface of the system to its environment and are drawn as flat rectangles. Edges only exist between vertices of different types. Additionally, we see two other vertex types, which were introduced for a clearly arranged layout. The filled places stand for logical or fusion vertices. Logical places of the same name represent exactly one vertex in the underlying graph structure. A transition depicted by two nested rectangles stands for a hierarchical transition, meaning that it covers a subnetwork; here, the forward and backward reaction of the transition, phosphoglucoisomerase. If the edge label is not explicitly indicated, the edge weights equal one. The transition, glycolysis, is enabled if there are at least one molecule or mole of fructose-6-P and 29 molecules or moles of ADP, and it produces 29 molecules or moles of ATP when it fires.

Figure 2. The KEGG [32] reference map, number 00500, of the starch/sucrose metabolism depicts a bipartite graph. The circles correspond to the metabolites, and the edges represent the enzyme-catalyzed reactions, where the enzyme is denoted by its EC-number in rectangles. The edges are directed and carry no information on stoichiometry.

Petri Nets

Petri nets (PN) were defined by Carl Adam Petri to describe systems with causal, concurrent processes [33]. PN are directed, bipartite graphs. The concept is developed under the strong division into passive and active system elements, represented by two vertex types, the set of places, P, and the set of transitions, T. The vertices are connected by directed edges, defining a flux relation, F: ((P × T) ∪ (T × P)) → N_0.
An edge never connects vertices of the same type, i.e., the edges divide the set of vertices into two disjunct vertex sets. For an example, see Figure 1b. The metabolites are modeled as places and the reactions as transitions, which usually carry the name of the catalyzing enzyme. Transitions without pre-places or post-places model the interface of the system to its environment and are drawn as flat rectangles. Additionally, we see two other vertex types, which were introduced for a clearly arranged layout. The filled places stand for logical or fusion vertices. Logical places of the same name represent exactly one vertex in the underlying graph structure. Two nested rectangles stand for a hierarchical transition, hiding subnetworks. In Figure 1b, the nested rectangle covers the forward and backward reaction of the transition, phosphoglucoisomerase. If the edge label is not explicitly given, the edge weight equals one.

Places can carry movable objects, the tokens. The distribution of tokens over all places defines a certain system state. The flow of tokens describes the dynamics of a system. The marking, m: P → N_0, determines the number of entities (e.g., molecules or moles) of each metabolite (place) and describes the current state of the metabolic network. Because tokens can be interpreted in different ways, for example, as objects of manufacturing or financial processes or as the number of moles or molecules, the token flow can be interpreted in various ways, strongly dependent on the application field. In metabolic networks, we consider a flow of substances, whereas in signal transduction networks, we consider a flow of signals, i.e., information. A token flow may take place if a transition is enabled or activated and operates or fires according to a specific firing rule, producing a new system state. In Figure 1b, the transition, glycolysis, is enabled if there are at least 29 tokens of ADP and one token of fructose-6-P, and the capacity of the corresponding post-place is large enough to accept the produced 29 tokens of ATP, in addition to the existing marking. In most cases, places with unbounded, i.e., infinite, capacity are defined.

Figure 3. On the left side, the place, glucose, carries one token, and the place, ATP, depicted by three logical places, carries three tokens. Thus, the transition, hexokinase, is enabled or has concession and can fire. After firing (right side), one token of glucose-6 phosphate (place glucose-6-P) was produced, consuming one token of ATP (place ATP). One token of sucrose was generated by firing of the transition, sucrose input, which is always enabled.

In this paper, we consider the untimed firing rule of classical place/transition nets (P/T-nets). That means that firing, i.e., token movement, takes no time. The numbers of consumed and produced tokens are defined by the weights of the corresponding edges to the pre- and post-places, respectively, of the firing transition. Note that the total number of consumed tokens need not equal the total number of produced tokens. Thus, a PN may not conserve the total number of tokens in the system. Figure 3 shows two states of the PN in Figure 1. On the left side, the place, glucose, carries one token and the place, ATP, depicted by three logical places, three tokens. Thus, transition hexokinase is enabled and can fire. After firing (on the right side), one token of glucose-6 phosphate is generated, consuming one token of ATP. Moreover, one token of sucrose has entered the system by the firing of the transition, sucrose input, which is always enabled.
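To illustrate the enabling and firing rules just described, here is a minimal Python sketch of a place/transition net using the lumped glycolysis reaction of Figure 1b; the class and method names are our own and do not refer to any particular Petri net tool.

```python
class PetriNet:
    """Minimal place/transition net with the untimed firing rule."""

    def __init__(self, marking):
        self.marking = dict(marking)            # place -> number of tokens
        self.transitions = {}                   # name -> (pre, post) weights

    def add_transition(self, name, pre, post):
        self.transitions[name] = (dict(pre), dict(post))

    def enabled(self, name):
        pre, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in pre.items())

    def fire(self, name):
        pre, post = self.transitions[name]
        assert self.enabled(name), f"transition {name} is not enabled"
        for p, w in pre.items():                # consume tokens from pre-places
            self.marking[p] -= w
        for p, w in post.items():               # produce tokens on post-places
            self.marking[p] = self.marking.get(p, 0) + w

net = PetriNet({"fructose-6-P": 1, "ADP": 29, "ATP": 0})
net.add_transition("glycolysis",
                   pre={"fructose-6-P": 1, "ADP": 29}, post={"ATP": 29})
if net.enabled("glycolysis"):                   # needs 1 F6P and 29 ADP
    net.fire("glycolysis")                      # produces 29 ATP
```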
To explore the entire dynamic behavior, all reachable states have to be computed.

Reachability Analysis

The reachability analysis aims to enumerate and investigate all possible system states, starting from an arbitrary initial marking. In the analysis, we have to follow all alternatives of firing in the case of conflicts and concurrency. This results in a semi-ordered (partial-order, interleaving) semantics that reflects the nondeterministic choice of the processes to be executed. In the case of simulation, we have to decide, for example, which of two or more conflicting transitions fires in which order. Figure 4 illustrates a small subnet of the central carbon metabolism in young potato tubers of Figure 1. The place, fructose-6-P, has two post-transitions, PGI_f and glycolysis, which both compete for the tokens on the place, fructose-6-P. For the reachability analysis, we have to consider the two cases: (1) transition PGI_f fires first, or (2) transition glycolysis fires first. To represent all possible states and the transitions that cause the respective new states, we define the reachability graph (RG). The vertices of an RG encode system states, each defined by a certain token distribution on all places. The directed edges, labeled by the reaction whose firing induces the change of the system state, indicate the direction of the state transformations.

Figure 4. The place, fructose-6-P, has two post-transitions, PGI_f and glycolysis, which compete for the tokens on the place, fructose-6-P. For the reachability analysis, we have to consider the two cases: (1) transition PGI_f fires first, or (2) transition glycolysis fires first.

Usually, a standard graph-theoretical algorithm, called Breadth-First Search (BFS) (see, for example, [24]), is used as the basis for the computation of the RG. This algorithm explores all vertices of a graph, starting with an arbitrary vertex and all its neighbors. The visited vertices are labeled such that they are not processed again. The algorithm continues with the unvisited neighbors until all vertices of the graph have been explored. Thus, for example, all connected components of a graph can be determined. The BFS algorithm runs in linear time, in O(n + m), where n and m are the numbers of vertices and edges, respectively. Here, the BFS examines all enabled transitions as neighbors of the considered state. The exponentially growing number of system states can lead to a state space explosion. In biology, even for small networks with up to 20 places and 30 transitions, the state space may become very large. Therefore, in the last few years, special data structures, e.g., binary decision diagrams (BDD), have been developed to cope with the state space explosion [34].
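The BFS-based construction of the RG can be sketched as follows, reusing the PetriNet class from the previous example. Because input transitions such as sucrose input are always enabled, the state space may be unbounded, so the sketch caps the exploration with an explicit max_states guard (an assumption, standing in for the BDD techniques mentioned above).

```python
from collections import deque

def reachability_graph(net, max_states=10_000):
    """BFS over markings: vertices are system states, edges carry the
    transition whose firing induced the state change."""
    freeze = lambda m: tuple(sorted(m.items()))
    start = freeze(net.marking)
    seen, edges, queue = {start}, [], deque([start])
    while queue and len(seen) < max_states:     # cap: state space explosion
        state = queue.popleft()
        for t in net.transitions:
            net.marking = dict(state)           # examine t from this state
            if net.enabled(t):
                net.fire(t)
                nxt = freeze(net.marking)
                edges.append((state, t, nxt))
                if nxt not in seen:             # label visited states once
                    seen.add(nxt)
                    queue.append(nxt)
    return seen, edges
```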
Incidence Matrix and Stoichiometric Matrix

Let us consider a sequence of reactions, s = (t_{i1}, t_{i2}, ..., t_{in}), also called a firing sequence, which transforms one marking of the system into another. The number of occurrences of a reaction, t_k ∈ T, in the firing sequence, s, is given by the component, τ_k = #t_k, of the frequency vector, τ: T → N_0. We call τ the Parikh vector of the sequence, s. Originally, Parikh vectors were defined for context-free languages, indicating the number of occurrences of a letter in a word [35].

Generally, an incidence matrix, C, describes the relationships between two sets of objects, for example, T and P, which correspond to the columns and rows of the matrix, respectively. The matrix entry, C(x, y), is nonzero if x and y are related, and zero otherwise. For a weighted, directed, bipartite graph with the edge weights, w_tp and w_pt, the two sets are defined by the two vertex types, i.e., t ∈ T and p ∈ P. The two possible directions, forward and backward, of an edge are specified by the numbers, d_f = 1 and d_b = −1, respectively. An entry, C(p, t), of the incidence matrix is given by d_f · w_tp + d_b · w_pt and determines the change of the token number on a place, p, after the firing of a transition, t; see Table 1. In such a way, we describe the effect of a sequence of firing transitions (reactions) on the marking of the system by the incidence matrix, C: P × T → Z. Table 1 illustrates the incidence matrix, C, of the PN in Figure 1, covering eight places and nine transitions. The token change of the marking on the places is then given by Δm = C · τ.

Table 1. The incidence (stoichiometric) matrix for the network in Figure 1. p_i stands for a metabolite (place) and t_j for a reaction (transition).

The firing of transition t_8 (sucrose input) produces a new token of sucrose on p_1; see Figure 1. In this case, the Parikh vector, τ, has solely one nonzero component, #t_8 = 1, and we obtain Δm = C · τ, which equals the t_8 column of C: one token is added on p_1.

We may use the incidence matrix, C, to compute the changes of token (metabolite) numbers resulting from the firing sequence of transitions (reactions) (t_4, t_5). The change of metabolites produced by this firing sums up to zero. In terms of PN theory, the two reactions, t_4 and t_5, form a transition-invariant. One of these two reactions, let us say, t_4, may fire spontaneously and drive the system away from the steady state, but the other reaction, t_5, of the transition-invariant can compensate for the effect of the firing of t_4. Such stochastic fluctuation is a natural and inherent property of a metabolic reaction system, in which all reactions are constantly active and the time-dependent state of the system fluctuates around an ideal steady state.

Invariants

Let us now consider the invariant properties of the system. The invariants hold in every system state reachable from an arbitrary initial marking. We define invariant properties for the active and the passive part of the system. Considering the active part and the equation system C · t = 0, we define a nontrivial, nonnegative integer solution, t, and name the solution vector a transition-invariant or t-invariant. The solution, t, has to be an integer, because we consider discrete objects, the tokens, and nonnegative, because any sequence of firing transitions (reactions), s = (t_{i1}, t_{i2}, ..., t_{in}), gives rise to an integer and nonnegative Parikh vector. Parikh vectors with negative components are senseless in the biological context. Let C^T be the transposed incidence matrix. Considering the passive part, we define the nontrivial, nonnegative integer solution, p, of the equation system C^T · p = 0 and call it a place-invariant or p-invariant.
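A small NumPy sketch of the state equation and of the t-invariant condition C · t = 0; the three-reaction cycle used here is a toy incidence matrix, not Table 1.

```python
import numpy as np

# Toy incidence matrix: rows are places p1..p3, columns are transitions t1..t3.
# C[p, t] = tokens produced minus tokens consumed on place p when t fires.
C = np.array([[-1,  0,  1],     # p1: consumed by t1, produced by t3
              [ 1, -1,  0],     # p2: produced by t1, consumed by t2
              [ 0,  1, -1]])    # p3: produced by t2, consumed by t3

tau = np.array([1, 1, 1])       # Parikh vector: fire each transition once
delta_m = C @ tau               # state equation: change of the marking
assert (delta_m == 0).all()     # C . t = 0, so (t1, t2, t3) is a t-invariant
```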
The solution space of such linear equation systems is, in general, unbounded, i.e., infinite. However, we are interested in a finite solution set, from which we can compute all possible solutions by positive integer linear combinations of the solution vectors. Such a set is given by all minimal solutions of the invariant equations, where minimal means: for an invariant, x, there exists no invariant, z, whose support is a proper part of the support of x, i.e., supp(z) ⊂ supp(x), and the greatest common divisor of all entries of x is one. The support of x, written as supp(x), is the set of the nonzero entries of the vector, x. In the following, we consider minimal, nontrivial, nonnegative t- and p-invariants, shortened as t-invariants or TI and p-invariants or PI, respectively.

Invariant properties have important applications in systems biology. P-invariants represent a set of places whose weighted token sum always remains constant, thus representing a conservation of substances. T-invariants describe a cyclic firing behavior, because the firing of all transitions of a t-invariant leads back to the initial marking, forming a cycle in the RG. The TI represent basic pathways in biochemical networks at steady state and, thus, describe the basic network behavior. Before explaining the application of invariants in more detail, we first want to discuss their computation.

Fourier-Motzkin Elimination Method

The Fourier-Motzkin elimination method (FM) [36,37] is a classical algorithm for solving such equation systems for their minimal, nontrivial, nonnegative integer solutions, i.e., for the computation of t-invariants. The working principle of the FM can easily be demonstrated for the network in Figure 1. Initially, we construct a table that consists of the transposed incidence matrix and the |T| × |T| identity matrix. We find one column for each reaction (t_1, t_2, ..., t_9) and one column for the change of metabolites on each place (p_1, p_2, ..., p_8). We can read the table as follows: each line describes a sequence of reactions and its effect on the metabolites. The first line corresponds to the sequence (t_1), because the entry for reaction t_1 is one and the entries for all other reactions are zero. This sequence of reactions (t_1) removes one metabolite of p_1 and adds one metabolite of p_2 and one metabolite of p_3, because in the first line, the entry is −1 for p_1 and the entries for p_2 and p_3 are both +1. The interpretation of the second to the eighth line is analogous.

The basic idea of the algorithm is to combine lines in such a way that the entries for all metabolites become zero. In the first step, we have to select a metabolite, let us say p_1, and construct the combinatorial diversity of all sequences of reactions that yield Δm_1 = 0. Checking the column for p_1, we find an entry, −1, in the first line (t_1) and an entry, +1, in the eighth line (t_8). Each of the reactions, t_1 and t_8, influences the metabolite, p_1, and the only possible combination of these reactions that yields Δm_1 = 0 is reaction t_1 plus reaction t_8. Consequently, we add the first line to the eighth line to get one new line with an entry of zero for p_1. We append this new line (t_1, t_8) to the table and delete the lines utilized to construct that new line. In the resulting table, all entries in the column for p_1 are zero. The new line (t_1, t_8) has an entry, one, in the column for t_1 and an entry, one, in the column for t_8. Hence, the new line corresponds to the reaction sequence (t_1, t_8). According to the FM, we still have to check whether the new sequence of reactions is minimal, because the support of a new sequence may be a superset of the support of another sequence.
In this case, the new sequence would not correspond to a minimal solution and would have to be eliminated from the table. In the particular case of the sequence (t_1, t_8), the candidate solution is minimal. The FM algorithm proceeds with another metabolite that has nonzero entries in its column. For large networks, the metabolites should be chosen according to an advantageous heuristic [37]. For simplicity, we leave such heuristics aside and just choose the metabolite, p_2. In the column for p_2, we find the entry, −1, in the line (t_2) and the entry, one, in the line (t_1, t_8). Again, we can construct only one new combination that zeroes the entry for p_2. We construct the new line, (t_1, t_2, t_8), by adding the line (t_2) to the line (t_1, t_8). Then, we delete the utilized lines and append the new line. The new line corresponds to a minimal candidate solution, and the FM proceeds with another metabolite that still has nonzero entries in its column. Proceeding to the next step of the FM for metabolite p_3, two lines are utilized to append a new (minimal) line, (t_1, t_2, t_3, t_8). In the next step of the FM, the combination of the fourth line, (t_7), with the fifth line, (t_9), zeroes all entries for the metabolite on p_6.

Note that, up to now, each step of the FM has reduced the number of lines in the table. Such a reduction of the number of lines is favorable, because, in general, the number of lines grows in each step of the FM. The growth can be exceptionally fast and presents a serious problem, known as the state explosion problem, for the computation of all minimal solutions of Diophantine equation systems. We chose (carefully) a network as an example for which the FM does not run into the state explosion problem. The next step of the FM, for metabolite p_4, shows that the number of lines does not have to decrease in each step: proceeding with metabolite p_4, we find two positive and two negative entries in its column. The next step, for metabolite p_5, and a further step, for metabolite p_7, lead to the final table, in which the entries for the metabolites, p_1 to p_8, are all zero.

The entries for the transitions describe two t-invariants of the network. The t-invariant, {t_4, t_5}, represents a simple reversible reaction and is called a trivial t-invariant. The more complex t-invariant, {15 t_1, 15 t_2, 13 t_3, 2 t_5, 28 t_6, 15 t_7, t_8, 28 t_9}, describes the basic functional pathway of the network in Figure 1b. It is easy to verify that all reactions are members of at least one TI, and hence, the network is covered by t-invariants (CTI). Reactions that cannot be compensated by other reactions have to be discussed carefully for their biological relevance. Such reactions are strong indicators of missing reactions or errors in the model. The identification of a reaction that contradicts a steady-state behavior is a computational challenge for large metabolic models. Standard approaches are based on the computation of a minimal generator set of all TI. In general, the computation of all TI requires exhausting resources in terms of computer time and memory [38].
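The elimination just walked through can be condensed into the following naive Python sketch for small nets: the working table [C^T | I] is processed column by column, rows with opposite signs are combined, each new row is normalized by its greatest common divisor, and rows whose support strictly contains the support of another row are discarded. The heuristic choice of the elimination order mentioned above, as well as the merging of duplicate rows, is omitted.

```python
import numpy as np

def t_invariants_fm(C):
    """Naive Fourier-Motzkin sketch: minimal t-invariants of C (small nets)."""
    nP, nT = C.shape
    table = [row for row in np.hstack([C.T, np.eye(nT)]).astype(int)]
    for p in range(nP):                          # zero one metabolite column
        pos = [r for r in table if r[p] > 0]
        neg = [r for r in table if r[p] < 0]
        rows = [r for r in table if r[p] == 0]
        for rp in pos:
            for rn in neg:
                new = (-rn[p]) * rp + rp[p] * rn          # entry p becomes 0
                g = np.gcd.reduce(np.abs(new[new != 0])) if new.any() else 1
                rows.append(new // g)
        table = []                               # keep support-minimal rows
        for r in rows:
            s = set(np.flatnonzero(r[nP:]))
            if not any(set(np.flatnonzero(q[nP:])) < s
                       for q in rows if q is not r):
                table.append(r)
    return [r[nP:] for r in table]               # Parikh parts: the TI
```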
Several groups have developed advanced algorithms to speed up the computation of all TI, for example, the canonical basis approach by Schuster & Hilgetag [39], the nullspace approach by Wagner [40], the concept of bit pattern trees by Terzer & Stelling [41], and a parallel divide-and-conquer approach by Jevremovic et al. [42]. Even with all these methods and modern (super-)computers, only models of moderate size have been tractable until now. The number of TI of a metabolic network of moderate size can easily reach tens of millions [42]. This leads us to the next problem: how to interpret this huge number of basic pathways. Which pathways are the most important ones? To give an answer, let us first consider the CTI question without the computation of all t-invariants.

The CTI Property

We want to define another property, which is helpful, in particular, for verifying biochemical systems. This property represents a completeness condition, which may be applied in network verification. If each transition belongs to at least one t-invariant, we say that the PN is covered by t-invariants (CTI). Accordingly, we call a PN covered by p-invariants (CPI) if each place is a member of at least one p-invariant. The CPI property can be used to decide boundedness, i.e., whether the number of tokens is finite for all places. Only for bounded PN can a finite reachability graph be generated. Though the CPI property is important for many questions, we will not consider it in more detail in this paper.

The CTI Question

Despite the fact that the knowledge of even a huge number of t-invariants is valuable and represents a prerequisite for more advanced analytical techniques, we want to decide whether a network is CTI without computing all TI. Since the set of all t-invariants describes a minimal set of all functional modes of the system at steady state, each transition should belong to at least one t-invariant. To show the CTI property for a PN, we have to find one integer solution, t, of the equation C · t = 0 with nonzero components (t_i ≥ 1, i = 1, 2, ..., |T|) or to exclude the existence of such a solution. Thus, to decide the CTI question without computing all TI, a less expensive strategy would be beneficial. Lipton [43] gives the proof that the reachability problem for vector addition systems requires exponential space in the worst case. Accordingly, the CTI decision problem is EXPSPACE-hard. For a vector addition system, (s, e, {v_1, v_2, ..., v_n}), of dimension k, the reachability problem reads: do vectors w_1, ..., w_m ∈ N^k exist, such that w_1 = s, w_m = e and, for each i, w_{i+1} = w_i + v_j for some j? We can easily see the equivalence to the CTI decision problem. Let the dimension, k, be the number of places (k = |P|); let the incremental change vectors, (v_1, v_2, ..., v_n) ∈ Z^k, be the column vectors of the incidence matrix, C; let the end vector, e ∈ N^k, be sufficiently large; and let the starting vector, s ∈ N^k, be the sum of the end vector, e ∈ N^k, and the change of metabolites resulting from firing each transition once, Δm = C · y, where all components of the Parikh vector, y, equal one (y_i = 1, i = 1, 2, ..., |T|). Any solution of this vector addition problem represents a solution of Equation (9) and shows the CTI property of the network.
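The CTI question itself, in contrast to the enumeration of all TI, can also be attacked by a linear programming relaxation: any rational solution of C · t = 0 with t ≥ 1 can be scaled by a common denominator to an integer one, so feasibility of the relaxed problem already decides CTI. A sketch using SciPy follows; the choice of solver backend is incidental.

```python
import numpy as np
from scipy.optimize import linprog

def is_cti(C):
    """Decide CTI via the LP relaxation: does t >= 1 with C . t = 0 exist?"""
    nP, nT = C.shape
    res = linprog(c=np.zeros(nT),               # pure feasibility problem
                  A_eq=C, b_eq=np.zeros(nP),
                  bounds=[(1, None)] * nT,      # every transition covered
                  method="highs")
    return res.status == 0                      # feasible -> network is CTI
```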
Geometric Point of View

The CTI question and the concept of TI are closely related to the theory of convex cones. In this context, Schuster et al. [39] defined the elementary flux modes or elementary modes (EM), which correspond to the TI [44]. It is obvious that the set of all changes of metabolites producible by the net, S = {C · x | x ∈ N_0^{|T|}}, forms a convex cone. These changes of metabolites have to be compensated by reactions of the net. Therefore, we have to find an element, s, in the convex cone, S, that compensates for Δm, i.e., s = −Δm. Geometrically, the network is CTI if the vector b = −Δm = −C · y is inside the convex cone, S, where the Parikh vector, y, has all components equal to one (y_i = 1, i = 1, 2, ..., |T|). We have to check whether b is inside (b ∈ S) or outside (b ∉ S) of the cone. Here, the Lemma of Farkas [45] provides a useful statement: the vector, b, is either inside of the convex cone, S, or it is possible to find a hyperplane, S^⊥, that separates b from the convex cone. Such a separating hyperplane must be a tangent hyperplane to the convex cone, S. Without loss of generality, we can choose a surface normal, s_⊥, that points into the same direction as the cone, i.e., the angle between s_⊥ and all vectors in the cone is not greater than 90°: s^T s_⊥ ≥ 0 for all s ∈ S. This inequality can be expressed as C^T · s_⊥ − I · ν = 0 with ν ≥ 0 (17), where I is the identity matrix and ν is an arbitrary vector with nonnegative integer components. The surface normal, s_⊥, determines a tangent hyperplane, S^⊥ = {s ∈ Z^k | s^T s_⊥ = 0}. Now, we have to prove whether the vector, b, is located on the "wrong" side of the hyperplane, i.e., opposite of the convex cone, S. It turns out that the vector, b, is located opposite of the convex cone if a solution, (s_⊥, ν), of Equation (17) with nonzero positive components of ν exists [46]. The nonzero components of ν identify the reactions not covered by TI. Applying this strategy to the network in Figure 1, we have to construct all solutions, (s_⊥, ν), of the dual system (17). The FM can again be applied to construct all positive integer solutions, (s_⊥, ν), of this system of Diophantine equations. In general, solutions of Equation (17) with zero components, ν = 0, are called place-invariants in the PN formalism [22] and describe conservation laws of metabolites [47]; see Section 4.3. Note that the computation of solutions of the system of Diophantine Equations (6) and the computation of solutions of the dual system of Diophantine Equations (17) are both EXPSPACE-hard problems [46].

Network Decomposition into Functional Modules

Functional modules are important for representing, understanding, reducing and verifying general networks. This is true, in particular, for biochemical networks, which are big and complex and for which an experimental validation can be difficult or even impossible. Several definitions of functional modules have been proposed in various scientific fields. Definitions inspired by biology are mainly derived manually, guided by biological knowledge. They often rely on the experience of the individual researcher. With the growing amount of data, the automatic detection of modules becomes of great interest. All known definitions are at least implicitly based on graph-theoretical properties. For biochemical systems, we distinguish between module definitions that are based on the steady-state assumption and definitions that ignore it. Both types of definitions are advantageous for solving specific biological questions.

Steady-State Modules

The reactions (transitions) of each EM (TI) and the metabolites (places) in between, including the corresponding edges between them, build connected subnetworks that stand for a certain biological function. Thus, a subnetwork defined by a TI can be understood as a functional module.
The careful evaluation of the biological interpretation of functional modules, often done manually, is part of proving the correctness of the model. There are many studies that provide exactly this kind of analysis. Some of them report the detection of new pathways that have later been experimentally validated. An example is the prediction of the glyoxylate pathway [48,49] and its validation [50]. Because the number of TI can grow exponentially, thousands to millions and more TI can exist, even for middle-sized networks of two or three hundred vertices. To handle such a huge number of functional modules, further differentiation becomes necessary and has been developed by several groups. We distinguish between methods that are based on the support of a TI vector and others that consider the actual numbers in the Parikh vector.

Support Vector-Based Methods

Methods based on the support vector do not explicitly take into account the integer numbers of the Parikh vector and, thus, implicitly ignore the stoichiometric relations. Instead, we consider the binary information of whether a reaction or enzyme (transition) is a member of a TI or not. An example of such a method to define modules is minimal cut sets.

Minimal Cut Sets (MCS) [51]: MCS have been introduced to study the fragility of metabolic networks and possible knockout strategies to prevent or avoid a specific biological function. An MCS is defined as a minimal set of reactions (enzymes) that blocks, after its removal, all feasible, balanced fluxes that involve an objective reaction (enzyme). Applying the Lemma of Farkas, MCS can be computed without the computation of the TI [52]. The next two module definitions are suitable for large networks. Since our running example is too small to illustrate the usefulness of these definitions, we refer to examples in [53-55].

Maximal Common Transition sets (MCT-sets) [56,57]: Inspired by maximal common subgraphs, we summarize equal parts of the solution vectors into new sets, the MCT-sets. An MCT-set is defined by a set of reactions, {1, ..., m}, in which each pair of reactions, t_i and t_j, with i, j ∈ 1, ..., m, occurs in exclusively the same TI, such that χ_{0}(x_i) = χ_{0}(x_j) for all x ∈ X, where X denotes the set of all TI, x, and χ_{0} denotes the characteristic binary function, indicating whether its argument equals zero. This grouping leads to maximal sets of transitions, where each set of transitions, ϑ, fulfills this condition for all of its pairs. Because of the exclusive membership of transitions, MCT-sets and the places and edges in between define disjunctive subnetworks. Thus, MCT-sets can be interpreted as building blocks, for example, in synthetic biology. Please note that the reactions of an MCT-set do not necessarily represent connected subnetworks, i.e., they do not necessarily form consecutive firing sequences.
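A sketch of the MCT-set grouping: each transition is keyed by its binary TI-membership pattern, so two transitions land in the same set exactly when they occur in the same t-invariants. The input format, a list of Parikh vectors, is an assumption.

```python
import numpy as np

def mct_sets(invariants):
    """Group transitions that occur in exactly the same t-invariants."""
    X = np.asarray(invariants)          # rows: TI (Parikh vectors)
    member = X != 0                     # binary occurrence matrix
    groups = {}
    for t in range(member.shape[1]):
        key = tuple(member[:, t])       # TI-membership pattern of transition t
        groups.setdefault(key, []).append(t)
    return list(groups.values())        # each group is one MCT-set
```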
T-clusters [54,58]: Whereas MCT-sets define disjunctive subnetworks, caused by the strong criterion of exclusiveness in their definition, we may wish to allow overlapping subnetworks with a broader, specific biological function. We define t-clusters based on hierarchical clustering methods, such as UPGMA or Neighbor Joining. As a distance measure, we use the Tanimoto coefficient [59]. The similarity between two t-invariants, t_i and t_j, is then s_ij = |supp(t_i) ∩ supp(t_j)| / |supp(t_i) ∪ supp(t_j)|, where supp(t_i) and supp(t_j) denote the support vectors of the t-invariants, t_i and t_j. The pairwise similarity, s_ij, expressed by this coefficient is transformed into a distance measure for dissimilarity, d_ij = 1 − s_ij [60]. For a detailed description of clustering techniques, see, for example, [61]. The definition of the best number of clusters, which is a fundamental problem in unsupervised classification, is implemented as a user-defined parameter. Additionally, cluster validity measures can be applied to identify the number of clusters which "best" represents the intrinsic grouping of the data [62]. The silhouette width [63], which is computed as the average silhouette value over all data samples, seems to be a suitable measure for biochemical applications. The silhouette value for an individual data sample, i, is defined as S(i) = (b_i − a_i) / max(a_i, b_i), where a_i denotes the average distance between i and all the data samples in the same cluster and b_i denotes the average distance between i and all data samples in the nearest other cluster. In contrast to MCT-sets, subnetworks based on t-clusters can overlap. MCT-sets and t-clusters have been applied to metabolic systems, but also to signal transduction pathways [57] and gene regulatory networks [64]. An interesting biological interpretation is that the reactions of an MCT-set always take place together, i.e., the expression behavior of the participating genes should be similar.
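Both quantities can be computed directly from the TI supports; the sketch below implements the Tanimoto distance and the silhouette width from scratch (assuming a precomputed distance matrix, at least two clusters and treating singleton clusters with a_i = 0), rather than relying on a particular clustering library.

```python
import numpy as np

def tanimoto_distance(ti, tj):
    """d_ij = 1 - |supp(ti) & supp(tj)| / |supp(ti) | supp(tj)|."""
    si, sj = set(np.flatnonzero(ti)), set(np.flatnonzero(tj))
    return 1.0 - len(si & sj) / len(si | sj)

def silhouette_width(D, labels):
    """Average of S(i) = (b_i - a_i) / max(a_i, b_i); needs >= 2 clusters."""
    labels = np.asarray(labels)
    values = []
    for i in range(len(labels)):
        same = labels == labels[i]
        same[i] = False                              # exclude i itself
        a = D[i, same].mean() if same.any() else 0.0  # own-cluster distance
        b = min(D[i, labels == c].mean()             # nearest other cluster
                for c in set(labels) if c != labels[i])
        values.append((b - a) / max(a, b))
    return float(np.mean(values))
```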
ACoM (Aggregation around Common Motif) [65]: Starting with a common motif, defined as the set of transitions that belong to all TI, as a seed, the motif is extended according to specific rules. This seed motif is of a determined length and is successively extended until a certain threshold is reached. Similar to t-clusters, overlapping aggregations of common motifs are defined.

Elementary Flux Patterns [66]: The concept of elementary flux patterns is similar to EM analysis. It explicitly takes into account possible steady-state fluxes through a genome-scale metabolic network when analyzing pathways in a subsystem. Thus, many EM can be computed in reasonable time, although not the complete set of all EM or TI. The concept of elementary flux patterns allows for the application of many EM-based tools to genome-scale metabolic networks.

A Parikh Vector-Based Method

Enzyme subsets (ES) [67]: Enzyme subsets are enzymes that always operate together in fixed flux proportions in all steady states of the system. In the context of Metabolic Control Analysis, groups of enzymes were introduced as monofunctional units or super-enzymes [68,69]. In monofunctional units, all Parikh entries of the TI, i.e., the ratios of the (nonzero) frequencies of the reactions, have to be identical. This requirement represents a restrictive criterion for the definition of functional modules.

Communities As Non-Steady-State Modules

Communities play a prominent role in a broad range of scientific fields, including, e.g., social science, economics, computer science, engineering, politics and biology. Examples of communities are friends in a school class, readers of books sharing similar interests, electronic components to be placed together on the layout of a solid-state circuit board, co-authors of scientific articles, interacting proteins or words with similar associations. For an excellent review, we refer to the work of Fortunato [13]. Communities are intuitively understood as a group of members of a network. The members should have many connections within the community and only a few connections to vertices outside the community. Interrelations inside the communities should be dense and between the communities, sparse. The well-accepted quality criterion for a partition into communities, called Q-modularity, is defined for a unipartite network with m unweighted, undirected edges and communities, C, by [12]

Q = (1/2m) Σ_{ij} (A_ij − P_ij) δ(C_i, C_j).

The formula sums the entries of the adjacency matrix, A_ij, over all pairs of vertices, i, j, in the same community. The Kronecker delta, δ(C_i, C_j), becomes one if both vertices, i and j, are in the same community, and zero otherwise. The summation over A_ij gives the number of edges inside of all communities, a number that cannot exceed the total number of edges times two. The pre-factor, 1/(2m), guarantees a value which is equal to or less than one. Each entry, A_ij, is reduced by the probability, P_ij, to find the edge, (i, j), by chance in an appropriately chosen statistical null model. A random network with an identical distribution of the vertex degrees leads to a simple sum over the n_c modules,

Q = Σ_{c=1}^{n_c} [ l_c/m − (d_c/(2m))² ],

where l_c is the number of edges in the module, c, and d_c is the total sum of the vertex degrees in the module, c. This formula for the Q-modularity is not directly applicable to metabolic networks, because metabolic networks are bipartite graphs with directed edges and edge weights. An easy way to apply the formula above would be to transform a metabolic network into a unipartite network with undirected and unweighted edges. This can be done in different ways, but a loss of crucial information (e.g., the direction of the flow of metabolites) cannot be avoided.

Q-Modularity

A partition of a PN is given by disjoint modules, C_i, with i = 1, 2, ..., n_c. The vertices of a module can be transitions and/or places. An appropriate formula for the Q-modularity of metabolic networks has to consider the direction of edges within modules and between modules in a bipartite metabolic network [70]. Note that finding modules for which the value of Q reaches its maximum is an NP-hard problem [13]. We apply a genetic algorithm to obtain an optimized structure of modules for metabolic networks. The value of the Q-modularity increases from generation to generation and reaches a maximum after a sufficient number of steps. Figure 5 shows an application of this algorithm to the network in Figure 1b.

Figure 5. The structure of modules optimized using a genetic algorithm. Each colored subnet represents a community: the red subnet describes the sucrose uptake and cleavage by invertase; the green subnet covers all reactions in which ADP and ATP participate; the blue subnet describes the only reversible reaction in the system; and the yellow one stands for the starch production.
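For the unipartite formula above, Q can be evaluated directly from the adjacency matrix. The didactic O(n²) sketch below implements the Newman-Girvan version with the null model P_ij = d_i d_j/(2m); it is neither the bipartite, directed variant of [70] nor the genetic algorithm used for the optimization.

```python
import numpy as np

def q_modularity(A, communities):
    """Newman-Girvan Q for an undirected, unweighted unipartite graph.

    A: adjacency matrix; communities: community label of each vertex.
    Null model: P_ij = d_i * d_j / (2m)."""
    m = A.sum() / 2.0                               # number of edges
    d = A.sum(axis=1)                               # vertex degrees
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if communities[i] == communities[j]:    # Kronecker delta
                Q += A[i, j] - d[i] * d[j] / (2.0 * m)
    return Q / (2.0 * m)
```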
Application to Network Reduction and Verification

We have already discussed the complexity classes of various methods for analyzing qualitative properties of metabolic networks. The search for the best possible partition into modules is an NP-hard task, and the CTI question is EXPSPACE-hard. For example, the rather medium-sized metabolic network of Saccharomyces cerevisiae, with 63 metabolites and 117 reactions, considered in Jevremovic et al. [42], has about 50 million TI. Keeping in mind this huge number of invariants and the extensive computational effort required to compute them, it seems hopeless to apply an invariant analysis to metabolic networks of thousands of reactions, as published in current databases [71]. The computational effort may explode with the increasing number of network components. This explosion problem is a well-known drawback in practical computations. However, it is instructive to see how the explosion problem can be circumvented using special network properties: metabolic networks are usually expected to be scale-free; reaction chains appear often; there are super-hubs of metabolites playing an essential role in most reactions (e.g., ATP); many reactions are reversible and most have a small number of one or two input metabolites. Such properties make metabolic networks special and well-distinguishable from random networks or technical networks. It may, for example, be possible to reduce the computational effort needed to answer the CTI question by transforming a network into a smaller one. Thus, network reduction enables insights into coarse-grained structural properties of the network [19-22,72]. Useful network reduction techniques for the CTI question are transformations of networks that preserve the CTI property. These CTI-conservative reduction techniques are favorable for deciding the CTI question for large networks. For most biological networks, a significant reduction of the computational complexity is possible. A typical kind of reduction step is inspired by MCT-sets [57] or enzyme subsets [67] (see Section 5.1). The basic idea is that chains of reactions can be summarized into one reaction if they consist of common transition pairs (CTP). A CTP is a local structure of a place that has exactly one pre-transition and one post-transition. Intuitively, the pre-transition produces tokens on a place that can be removed by the post-transition only. Another local structure useful in this context is the invariant transition pair (ITP). An ITP is a reversible reaction, consisting of a forward and a backward reaction. Figure 6 depicts an example of network reduction. For a detailed definition and discussion, we refer to [46].
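CTP candidates can be detected from purely local structure. The fragment below assumes a simple adjacency format (mappings from each place to the transitions producing and consuming it) and ignores edge weights and ITPs, which a full CTI-conserving reduction must also handle.

```python
def ctp_candidates(pre, post):
    """Detect common transition pairs (CTP).

    pre[p]:  transitions producing tokens on place p (its pre-transitions)
    post[p]: transitions consuming tokens from place p (its post-transitions)
    A place with exactly one producer and one consumer marks a chain link
    whose two reactions can be summarized into one lumped reaction."""
    pairs = []
    for p in pre:
        if len(pre[p]) == 1 and len(post[p]) == 1 and pre[p][0] != post[p][0]:
            pairs.append((p, pre[p][0], post[p][0]))
    return pairs
```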
The starting point for the analysis of a newly constructed model should always be the theoretical verification of the model. Standard approaches are based on the condition that the model should have the ability to establish an equilibrium with the environment, i.e., external resources have to be supplied by the environment, and accumulating metabolites have to be discharged. We may find dynamic properties of a model that contradict such a steady-state behavior of the system. An iterative process of verification and remodeling is necessary to improve the model and to correct fundamental errors. Thus, laborious computations based on mass action kinetics or stochastic simulations of a non-validated and possibly erroneous model can be avoided. Metabolic networks are commonly described in terms of mass action kinetics, using kinetic parameters such as the concentrations of the metabolites, reaction constants and rates. The steady-state behavior of the model may, in principle, be evaluated by applying bifurcation theory, local stability analysis and the theory of dynamical systems [74,75]. However, the nonlinear character and the high number of the resulting equations hinder such an approach for most metabolic reaction systems, besides the fact that the kinetic parameters are in most cases unknown. Moreover, such a view of metabolism is well-satisfied only for well-mixed systems of large spatial dimension. Biological systems, for example, a cell or a mitochondrion, are characterized by a complex spatial organization in a small volume. The assumption of well-mixed concentrations of freely diffusing proteins, complexes and small metabolites that react by mass action kinetics inside a large macroscopic volume is obviously not always met for such systems. Even for small metabolites, the functional role of concentration gradients and non-diffusive transport processes (e.g., the electron transport chain in mitochondria) hampers the application of mass action kinetics. The numbers of enzymes and metabolites are discrete, countable and not even nearly of the order of the Avogadro constant, N_A = 6.02214 × 10^23. A theoretical description in terms of probability functions and solutions of the stochastic master equation would be more realistic to specify the fluctuations of species in the system [76,77]. Even at the steady state, the numbers of molecules are not constant, but fluctuate around average values, where the average number of molecules of a species depends on its chemical concentration.

Figure 6. Reduced network of a part of the central carbon metabolism in young potato tubers (Solanum tuberosum) of Figure 1b. The original network, consisting of eight metabolites and nine reactions, is reduced by four common transition pairs (CTP) and one invariant transition pair (ITP). The three reactions of the reduced network represent the concerted action of several reactions of the original system. The places, glucose-6-P and fructose-6-P, lump together the metabolites, glucose-6 phosphate and fructose-6 phosphate. The reduced network is CTI and has only one TI. The frequencies of the individual reactions in the invariant are given by the numbers in the red rectangles. Note that the two minimal invariants of the original network (compare Section 4.4) can be constructed by an appropriate extension procedure [46]. The picture was generated using MONALISA [73].

Summary and Conclusions

This work aims to give an overview of important methods for functional module detection, both connectivity-based and steady-state-based. In this paper, we report two types of approaches: those that are based on the steady-state assumption and those that are based on graph-theoretical methods without a steady-state consideration. The first type considers a bipartite graph representation of metabolic networks, whereas the second works on unipartite graphs. For the first case, we describe the computation of t-invariants (EM), which can be further decomposed by several approaches into disjunctive or overlapping subnetworks. We introduce Petri nets as a widely used and suitable formalism to model systems with concurrent processes. In the context of PN, we define the system's invariants, which give us insight into the dynamic behavior of the system without any kinetic knowledge. To illustrate the idea, we provide a detailed example for the computation of t-invariants (EM) using the Fourier-Motzkin method. From the geometric point of view, t-invariants are equivalent to the extreme rays of a convex cone. We consider the CTI question, which is important for the verification of a biochemical model. Using the proof by Lipton, we show that this question corresponds to the reachability problem for vector addition systems and is EXPSPACE-hard. To consider connectivity-based methods, we define communities. We introduce the Q-modularity measure to assess the partitions produced by these algorithms.
In addition, we illustrate the described methods using a small metabolic network and discuss the development of new methods for the structural analysis of metabolic systems. Network reduction plays an important role, in particular, in handling genome-scale networks. We explain how common transition pairs (CTP) and invariant transition pairs (ITP) enable us to compute t-invariants of large networks, even if we do not obtain a minimal set of t-invariants. Finally, we briefly discuss network verification with respect to kinetic analysis techniques.
Extended Technology Acceptance Model to Examine the Use of Google Forms-based Lesson Playlist in Online Distance Learning

Shifting to online distance learning due to the COVID-19 pandemic challenged educators' roles as instructional materials designers. This study aimed to examine the students' acceptance of the teacher-designed e-learning tool called Google Forms-based Lesson Playlist (GF-LP) in a home-based online distance learning environment. This quantitative research analyzed 570 responses from Grades 11 and 12 students at a private school in the Philippines using partial least squares-structural equation modeling. Results showed that perceived self-efficacy and system quality strongly affect the users' perceived ease of use, while perceived ease of use highly influenced the students' perceived usefulness of GF-LP. Facilitating conditions do not affect the users' attitudes towards using the e-learning tool, which confirmed the effective utilization of GF-LP in online distance learning. The relationships between the original constructs of the Technology Acceptance Model (TAM) were also presented. This study recommends the use of GF-LP or its features for remote learning.

Introduction

The COVID-19 pandemic challenged educators to adapt to teaching online (König et al., 2020) as a response to control the spread of the coronavirus disease (Toquero, 2020). Hence, they need to explore designing online learning tools and appropriate teaching pedagogy to assist their students in online distance learning. Today, designing appropriate instructional materials comes with great ease brought by technological advancements in the digital age of the 21st century (Fei & Hung, 2016). Digital Learning Technology (DLT) adoption in times like this addresses the physical limitations of delivering the lessons and allows students to continue learning anytime and anywhere (Mothibi, 2015). Adopting various digital learning technologies into the learning environment helps achieve the desired goals, learning process, and learning outcomes of educational activities (Ozerbas & Erdogan, 2016). However, DLT adoption poses barriers, especially during online distance learning. According to Panda and Mishra (2007) and Ancho (2020), teachers perceived that the significant challenges in utilizing DLT include weak internet connection, insufficient training, lack of institutional policy, and poor instructional design support for e-learning. The majority of the reviewed literature focused on teachers' perceptions of DLT adoption barriers. Thus, there is a need to elicit information periodically from the students to address these barriers and improve digital learning technologies continuously (Adnan & Anwar, 2020). Similarly, Almaiah, Alamri, and Al-rahmi (2019) emphasized that students' acceptance of mobile learning is essential to guarantee online education success. Hence, designing a specific e-learning tool that would address the DLT adoption barriers in an online learning environment is needed. Furthermore, adopting the online distance learning environment can be successful through a significant pedagogical transformation aided by an appropriate e-learning tool that is designed carefully such that it considers the students' realities at home. In this study, the teachers of the respondents developed a Google Forms-based Lesson Playlist (GF-LP), a self-made e-learning tool used to address the digital technology adoption barriers in the first implementation of online distance learning in the Philippines.
A sample template of GF-LP was provided to the teachers for uniformity. Thus, this study sought to examine the use of the Google Forms-based Lesson Playlist as the students' e-learning tool in their different subjects in a home-based online distance learning environment.

Google Forms-Based Lesson Playlist as a Digital Learning Technology in ODL

The abrupt transition from traditional to online instruction as an emergency response to the pandemic provides an opportunity for a wider acceptance of technology-aided education (Rajab et al., 2020). Digital Learning Technology (DLT) refers to a set of hardware and software devices that allow information and knowledge to be communicated, accessed, distributed, and stored in a digital environment (Mercader & Gairín, 2020), giving students control over the place, time, path, and/or pace (Kashada et al., 2018). These learning technologies include web-based learning systems, active learning classrooms (ALC), social media in the education context, mobile technologies, video conferencing applications, YouTube, and email (Granić & Marangunić, 2019). However, students may experience difficulties utilizing these technologies because they are challenging to navigate and primarily dependent on internet access. Also, too many tools and links might distract them from their primary tasks (Zanjani et al., 2017). Therefore, designing a user-friendly e-learning tool with specific features allowing the students to navigate freely and easily access the instructional materials can positively affect their use and acceptance. Since DLT adoption and utilization are not always successful in students' learning outcomes (Kashada et al., 2018), this study utilized a teacher-designed e-learning tool (the Google Forms-based Lesson Playlist) based on the features of DLT that affect users' acceptance. Table 1 shows the identified characteristics of DLT that affect user satisfaction and approval of a DLT. Teachers' efforts to utilize innovative e-learning tools enable online distance learning (ODL) during the COVID-19 pandemic. In this study, the Google Forms-based Lesson Playlist was created considering the features mentioned in Table 1. A lesson playlist is a set of activities in a lesson that includes online links to video lessons, online articles, and interactive game-based activities. It may also include online formative assessments, summative assessments, reading assignments from student textbooks, etc. (Bohol & Prudente, 2020). The lesson playlists used by the respondents of this study have the same template as this example: http://bit.ly/SampleGF-LP. To ensure familiarity with the e-learning tool (see Table 3), the teachers used Google Forms as the canvas to embed their instructional materials. The Technology Acceptance Model (TAM) (Scherer et al., 2020) is a valid model to test different learning technologies (Granić & Marangunić, 2019) and to grasp the attitude of students and teachers towards the use of educational technology (Siyam, 2019).

Table 1. Features of DLT and their sources:
- availability and flexibility, communication, simplicity in learning, portability, cost, and efficiency (Asiimwe and Grönlund, 2015)
- design factors such as perceived usefulness, appropriate course management tools, interactivity of the system, and perceived ease of use (Zanjani et al., 2017)
- familiarity with the technology (Kashada et al., 2018)
- system quality, interaction quality, service quality, and platform availability
Conceptual Framework and Research Hypotheses

To examine the use of the Google Forms-based Lesson Playlist in online distance learning using the Extended Technology Acceptance Model, the variables in Figure 1 are defined operationally. System quality (SQ) is defined as the satisfaction of the students in using the Google Forms-based Lesson Playlist (GF-LP) in terms of features, content quality, system interactivity and navigation, and adaptability in the ODL context (Binyamin et al., 2019; Fearnley & Amora, 2020). The studies of Fathema et al. (2015) and Fearnley and Amora (2020) showed that system quality influences perceived usefulness and ease of use. Nevertheless, they do not support each other on the effect of SQ on attitude towards the use of technology. This study hypothesized that if the students are satisfied with the quality of the e-learning tool, they have positive attitudes towards its use. These attitudes may affect their intention to use this tool in the future. Facilitating conditions (FC) refer to an individual's perceived enablers that aid technology adoption (Teo et al., 2018). When a user expects assistance and specialized instruction when using technology, it positively affects his or her perceived ease of use and attitude towards using the technology (Fathema et al., 2015; Teo et al., 2018). But this study hypothesized that facilitating conditions do not affect the students' attitude towards using the teacher-designed e-learning tool: the teachers designed the tool to provide students all the necessary assistance by making it flexible enough to compensate for the uncertainties and barriers of the online distance learning environment. This study measured facilitating conditions based on the teacher's assistance to students using the Google Forms-based Lesson Playlist. According to Fathema et al. (2015), perceived self-efficacy (PSE) is the users' belief in their ability to perform the tasks necessary to achieve the required outcomes. Their study showed that PSE is a predictor of perceived usefulness and perceived ease of use. When the students can carry out the tasks, they will find the GF-LP valuable and easy to use despite the barriers of digital technology adoption in an online distance learning environment. Perceived usefulness (PU) was defined by Davis (1989) as "the degree to which a person believes that using a particular system would enhance his or her job performance" (p. 320). Similarly, he defined perceived ease of use (PEOU) as "the degree to which a person believes that using a particular technology would be free from effort." The systematic literature review conducted by Granić and Marangunić (2019) showed that perceived usefulness and perceived ease of use are the predictors of technology adoption in an educational context, with perceived usefulness the stronger predictor. In particular, PU and PEOU directly affect the attitude and the behavioral intention to use the technology (Fathema et al., 2015; Teo & Huang, 2019; Fearnley & Amora, 2020). These studies also found that perceived ease of use predicts perceived usefulness. Hence, in this study, it is hypothesized that if the students perceive the GF-LP as easy to use, they will consider it useful. Consequently, this will influence their attitude and intention to use this e-learning tool in future ODL activities despite the barriers of DLT adoption.
Hence, this study posited the following hypotheses to confirm the results of the previous studies on a self-made e-learning tool called the Google Forms-based Lesson Playlist (GF-LP).

Methodology

The current study examined the students' utilization of the Google Forms-based Lesson Playlist as an e-learning tool in a home-based online distance learning environment. This study adopted the conceptual and operational definitions of home-based online distance learning from the next-generation digital learning environment (NGDLE) of Brown et al. (2015). It describes the home-based online distance learning environment as a virtual learning space composed of an instructor, students, learning materials, and tools to support self-paced learning.

Participants of the Study

Five hundred seventy (570) responses were obtained from Grades 11 and 12 students whose subjects utilized the Google Forms-based Lesson Playlist, as presented in Table 3. The respondents' ages range from 16 to 19 years old. The consent letters were uploaded to the parents' portal and also distributed to the students. The current study used the inverse square root and gamma exponential post-hoc power analysis methods. Table 4 presents the demographic profile of the respondents. There were 253 (44.39%) male and 317 (55.61%) female students, the largest share of whom (27.72%) used a combination of tablet, smartphone, and laptop/desktop computer during home-based online distance learning. The majority of the respondents (91.76%) are moderately to extremely familiar with Google Forms. Approximately 12 percent of the students reported that they were not satisfied with their internet access; a plurality (39.47%) were slightly satisfied; 35.61 percent were moderately satisfied; 8.60 percent were very satisfied; and 3.86 percent were extremely satisfied. The survey items were all rated using a four-point Likert scale: 1 = strongly disagree, 2 = disagree, 3 = agree, and 4 = strongly agree.

Data Gathering Procedure

The participants of the study answered the online survey questionnaire with the help of their respective subject teachers. The study was conducted during the third term (January 25, 2021 to April 8, 2021) of the academic year, when the school adopted home-based online distance learning to address the limitation on meeting face-to-face brought by the COVID-19 pandemic. The institution approved the conduct of the study following the ethical guidelines in data collection.

Data Analysis

The students' acceptance of the Google Forms-based Lesson Playlist was analyzed using partial least squares-structural equation modeling (PLS-SEM).

Results and Discussion

Reliability and Validity

Fornell and Larcker (1981) and Kock (2020) suggested that construct reliability is acceptable if the composite reliability (CR) is ≥ 0.70 and Cronbach's alpha (CA) is ≥ 0.70. Since each diagonal value in Table 6 is greater than any of the off-diagonal values, discriminant validity was achieved by all the constructs. The APC, ARS, and AARS must be significant; that is, their corresponding p-values must be less than 0.05 (Kock, 2020). Regarding AVIF and AFVIF, their coefficients must have values less than 5, or ideally less than 3.3 (Kock, 2020). Table 6 shows that both the AVIF and AFVIF coefficients are within acceptable ranges. GoF is a measure of the explanatory power of the structural model (Tenenhaus et al., 2005); it is small if GoF ≥ 0.1, medium if GoF ≥ 0.25, and large if GoF ≥ 0.36 (Kock, 2020). Since GoF = 0.760, the explanatory power of the model is large. Overall, the structural model shown in the research framework is highly acceptable.
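As a sanity check on the thresholds just quoted, the sketch below (a minimal Python illustration with simulated four-point Likert data and hypothetical loadings; the authors' actual computations were done within their PLS-SEM tooling) computes Cronbach's alpha, composite reliability, and the Tenenhaus GoF:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a (respondents x items) score matrix X:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR from standardized factor loadings:
    CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def gof(communalities, r_squared):
    """Tenenhaus goodness of fit: sqrt(mean communality * mean R^2);
    GoF >= 0.36 is conventionally read as large explanatory power."""
    return float(np.sqrt(np.mean(communalities) * np.mean(r_squared)))

# Hypothetical four-point Likert responses for one 4-item construct:
# a shared "base" response plus small item-level noise, clipped to 1..4.
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(100, 1))
X = np.clip(base + rng.integers(-1, 2, size=(100, 4)), 1, 4)

print(round(cronbach_alpha(X), 3))                                # above 0.70
print(round(composite_reliability([0.80, 0.75, 0.82, 0.78]), 3))  # ~0.87
print(round(gof([0.60, 0.65, 0.70], [0.50, 0.55]), 3))            # ~0.58
```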
Table 8 shows that system quality (SQ) has a positive effect on perceived usefulness (PU), perceived ease of use (PEOU), attitude (ATT) towards the use of the Google Forms-based Lesson Playlist (GF-LP), and the behavioral intention to use this e-learning tool. This result implies that a high SQ will result in high PU, PEOU, and ATT towards using the GF-LP. In particular, SQ has a significant moderate effect on PEOU (f² = 0.223) and PU (f² = 0.217), and a small effect on ATT (f² = 0.140) and BI (f² = 0.160). This result supports the findings of Fathema et al. (2015), Salloum et al. (2019), and Fearnley and Amora (2020). SQ proved to be a powerful predictor of students' PEOU of the GF-LP, which implies that designing the e-learning tool considering students' realities during a pandemic helps them utilize the technology with less mental effort. Consequently, they become aware of its usefulness in learning the lessons despite DLT adoption barriers. Furthermore, PEOU and PU mediate the influence of SQ on ATT, which emphasizes that the desirable features and the flexibility of the e-learning tool to adjust to the learning environment assure positive attitudes towards the use of the digital learning technology during the pandemic. This result confirms the findings of Granić and Marangunić (2019) that PEOU and PU influence users' attitudes towards using a DLT irrespective of its type, features, and navigation tools. This study also extends the finding that PEOU and PU are significant predictors of ATT to a different learning environment, which is, in this study, a home-based online distance learning environment.

Test of Research Hypotheses

Table 8 also shows that perceived self-efficacy (PSE) positively affects PU (β = 0.232) and PEOU (β = 0.308). This result means that students' familiarity with a digital technology positively and strongly influences the perceived ease of use and usefulness of that digital technology. This study reaffirms that students' PSE influences PU and PEOU, which means that the students showed confidence in using the features and functions of the GF-LP to carry out the required tasks in a home-based online distance learning environment. Similarly, they felt confident in controlling the time and pacing of the lessons with the e-learning tool. This result agrees with the findings of Fathema et al. (2015), Nikou and Economides (2019), and Fearnley and Amora (2020) that PEOU mediates the effect of PSE on PU across different DLTs, users, and learning environments. Facilitating conditions (FC) significantly affect PEOU (β = 0.276), but the positive effect on ATT is not significant (f² = 0.037, p > 0.05). This relationship implies that the students' perceived ease of use of the GF-LP is directly affected by the teacher's support, but this support does not influence their attitudes towards using the GF-LP. Consistent with the hypothesis in this study, FC does not affect ATT, which contradicts Fathema et al. (2015) and Teo et al. (2018). This difference may be due to a different learning environment and digital learning technology. This study confirms that the GF-LP is a useful e-learning tool during home-based online distance learning, since the students' attitudes towards using the playlists were not affected by the facilitating conditions. The possible reason could be that the teachers designed the e-learning tool so that the students can work independently on their activities.
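The f² values reported above are Cohen's effect sizes for individual predictors; a short sketch with hypothetical R² values (not figures from the paper) makes the formula explicit:

```python
def cohens_f2(r2_full, r2_reduced):
    """Effect size of a single predictor in a structural model:
    f^2 = (R^2_full - R^2_reduced) / (1 - R^2_full), where R^2_reduced is the
    model re-estimated without that predictor. Conventional reading:
    ~0.02 small, ~0.15 medium, ~0.35 large."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Hypothetical R^2 for PEOU with and without SQ as a predictor; a value near
# 0.22 would correspond to the moderate effect reported above.
print(round(cohens_f2(0.55, 0.45), 3))   # 0.222
```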
With this result, the Google Forms-based Lesson Playlist is an effective e-learning tool during home-based online distance learning. PEOU positively and significantly affects ATT and PU, and consequently, PU positively affects both ATT and BI. These findings suggest that when there is a high value on PEOU, the students perceive high usefulness in the GF-LP, which leads to a high impact on their attitudes and behavioral intentions to utilize it. This study reports that PEOU and PU are strong predictors of accepting the GF-LP, consistent with TAM-related studies (Granić & Marangunić, 2019). In particular, PU significantly mediates the effect of PEOU on the students' attitudes towards the use of GF-LP in a home-based ODL environment. When the students feel that the e-learning tool is easy to use, useful in acquiring the course's competencies, and relevant in increasing their productivity despite the pandemic, they have positive attitudes and intentions to use that e-learning tool. Perceived usefulness is the most vital determinant of the students' attitude towards the use of GF-LP, consistent with the findings of Teeroovengadum et al. (2017), Granić and Marangunić (2019), and Fearnley and Amora (2020). This result shows that PU is consistently the most robust determinant of technology acceptance across different countries and users, various digital learning technologies, and diverse learning environments. The strong positive effect of the students' attitudes on their behavioral intentions towards GF-LP use consequently proves the effective utilization of the GF-LP. In conclusion, GF-LP adoption in a home-based online distance learning environment during the COVID-19 pandemic was successful. ATT positively influences BI, and in turn, BI positively and significantly affects the actual use (AU) of GF-LP. The high effect of ATT on BI (f² = 0.510) led to a stronger behavioral intention to use the GF-LP and consequently positively affects the technology's actual use (f² = 0.645).

Conclusion and Recommendation

This study utilized the Extended Technology Acceptance Model of Fathema et al. (2015) to examine the use of the Google Forms-based Lesson Playlist (GF-LP) as an e-learning tool in home-based online distance learning. Perceived self-efficacy (PSE) and system quality (SQ) have the strongest effects on users' perceived ease of use (PEOU), while PEOU strongly influences the digital learning technology's perceived usefulness (PU). Facilitating conditions (FC) have a small impact on PEOU but no significant effect on the students' attitudes towards using the Google Forms-based Lesson Playlist. This result implies that when educators carefully design a digital learning technology (DLT) addressing its adoption barriers, the students can independently use that technology without difficulty while increasing their productivity in accomplishing the tasks with less assistance from the teacher. This study also found that PU highly influences the students' attitudes towards using GF-LP compared to SQ and PEOU. The positive and high effect of ATT on the students' behavioral intentions (BI) to use GF-LP, and the similar effect of BI on the actual use of GF-LP, imply that the use of GF-LP during online distance learning was favorable to the students. This study contains limitations that open opportunities for future studies. The constructs used in examining students' acceptance of technology were adopted from Fathema et al. (2015).
Hence, this study highly recommends that future researchers explore other Extended TAM constructs with users of different cultures (Hanif et al., 2018), different ages and education levels (Putra, 2019), and various learning environments. The current research also suggests conducting a longitudinal survey (Salloum et al., 2019) and a mixed-method approach (Fearnley & Amora, 2020) to provide a deeper understanding of the power of the Extended TAM constructs.
Effect of Coronavirus Disease-2019 on the Workload of Neonatologists

Objective: To describe the impact of coronavirus disease-2019 (COVID-19) on the neonatology workforce, focusing on professional and domestic workloads. Study design: We surveyed US neonatologists in December 2020 regarding the impact of COVID-19 on professional and domestic work during the pandemic. We estimated associations between changes in time spent on types of professional and domestic work and demographic variables with multivariable logistic regression analyses. Results: Two-thirds (67.6%) of the 758 participants were women. Higher proportions of women than men were in the younger age group (63.3% vs 29.3%), held no leadership position (61.4% vs 46.3%), had dependents at home (68.8% vs 56.3%), did not have a partner or other adult at home (10.6% vs 3.2%), and had an employed partner (88.1% vs 64.6%) (P < .01 for all). A higher proportion of women than men reported a decrease in time spent on scholarly work (35.0% vs 29.0%; P = .02) and career development (44.2% vs 34.9%; P < .01). A higher proportion of women than men reported spending more time caring for children (74.2% vs 55.8%; P < .01). Reduced time spent on career development was associated with younger age (aOR, 2.21; 95% CI, 1.20-4.08) and number of dependents (aOR, 1.21; 95% CI, 1.01-1.45). Women were more likely to report an increase in time spent doing domestic work (aOR, 1.53; 95% CI, 1.07-2.19) and a reduction in time on self-care (aOR, 0.49; 95% CI, 0.29-0.81). Conclusions: COVID-19 significantly impacts the neonatology workforce, disproportionately affecting younger, parent, and women physicians. Targeted interventions are needed to support postpandemic career recovery and advance physician contributions to the field.

The coronavirus disease-2019 (COVID-19) pandemic has significant effects on the physician workforce that have yet to be well characterized. The pandemic's adverse consequences have not affected physicians equally and threaten to intensify existing gender disparities in medical careers. 1-7 Additionally, parents have faced extra challenges, with often inadequate workplace and domestic support. 4,8 Women and parent physicians disproportionately have shouldered increased burdens and anxiety associated with domestic duties, restrictions in childcare access, home schooling, and higher productivity expectations given a perceived increase in available time during lockdown. 7-15 The lack of in-person networking and conferences and a decrease in supplemental professional funds have negative impacts on career advancement. 5,16 Healthcare workers, especially women, have reported increased stress and mental health concerns. 2,3,15,17 Although women represent a significant proportion of the physician workforce, 18 gender disparities still existed prepandemic in most aspects of medical careers, including inadequate representation in leadership and higher academic ranks, lower scholarly productivity and publication rates, lower compensation, delayed career success, and fewer opportunities for promotion among women physicians. 1,12,19-35 Numerous factors contribute to these differences, including bias, traditional gender role definitions, sexual harassment, and a relative paucity of women in leadership roles. 19,36-38 An increased burden of domestic responsibilities is an additional challenge that often differentially affects women. 4,39
Overall, women physicians are more likely than men to report spending more time on household activities and childcare and to state that domestic responsibility interferes with their professional duties. 21,22,25,32,40-46 Women physicians, particularly those in intensive care fields, are more likely to experience burnout and mental health issues. 19,45,47-49 The increasing proportion of women physicians in neonatology, a subspecialty with the rigorous demands of intensive care, 50 has led to greater recognition of the gender gap's impact on professional advancement in this field. In the present study, we sought to describe the impact of the COVID-19 pandemic on the time spent on professional and domestic work by neonatologists and hypothesized that women and men had differential experiences. The COVID-19 pandemic has amplified the need to better support the workforce. A deeper understanding of the workload burdens that surfaced during the pandemic is key to building interventions that address the recovery of career trajectories and may support future gender equity for the pediatric workforce.

Sample and Data Collection

A cross-sectional survey study was conducted to examine the impact of the pandemic on neonatologists' professional and domestic workloads. A survey methodologist assisted with the survey design, and the survey was programmed and stored in the REDCap database, a secure, internet-based research application designed for data storage and online surveys. Before survey dissemination, a core group of neonatologists tested the survey for constructive feedback and for time and ease of completion. The Institutional Review Board at the Stanley Manne Institute at Ann & Robert H. Lurie Children's Hospital of Chicago deemed this study exempt. Signed written informed consent was not required, given that completion of the study implied participant consent. We surveyed a convenience sample of academic and private-practice neonatologists in the United States and Puerto Rico. We distributed the internet survey link and an invitation to participate through the following listservs: the American Academy of Pediatrics (AAP) Section on Neonatal Perinatal Medicine (AAP SoNPM, approximately 3000 neonatologist members), the Neonatology Academic Division Chiefs' listserv (division chiefs were requested, but not required, to forward the survey to their faculty members), MEDNAX Neonatology, and the Southern California Permanente Medical Group. We encouraged survey participation on social media platforms, including LinkedIn, Facebook, and Twitter. A single reminder email was sent via listserv 2 weeks after initial distribution. The survey was administered from December 1, 2020 to December 18, 2020. It began with a short paragraph that stated the aim, instructions to complete the survey only once, and assurance of voluntary anonymous participation and confidentiality. The survey questions asked participants to reflect on their experience since the start of the pandemic.

Measures

The primary outcome of this study was the change in time spent on professional and domestic work since the start of the COVID-19 pandemic.
The survey instrument (Appendix; available at www.jpeds.com) included 22 questions measuring the impact of the COVID-19 pandemic on the following: time spent on professional work (clinical care, scholarly work, institutional and national service, medical education, career development, and administrative work); accomplishments, productivity, and other professional issues, such as compensation and new work related to COVID-19 (eg, research or clinical redeployment to other units); time spent on domestic work (dependent care, housework); and time spent on self-care. Response options for time spent included "more," "same," "less," or "not applicable" compared with the time before the pandemic. These categorical responses were chosen to minimize recall error and bias of precise hours given the longitudinal nature of the study, and to capture summative experience, even if individual workloads varied week to week during the pandemic. Demographic and professional characteristics included age, sex, race/ethnicity, years since completion of fellowship, nature of the practice, academic rank, full-time/part-time employment status, changes in employment status (including leave of absence), local and/or national leadership position (self-defined), number and age of dependents, and relationship status.

Statistical Analyses

Participant age was dichotomized into 2 groups based on the median age of the participating cohort: 31-47 years (younger) and 48-86 years (older). Career levels were determined by years since fellowship completion, in accordance with AAP SoNPM categorization: 0-7 years (early), 8-17 years (mid), and ≥18 years (late). Those who reported not spending time in a specific type of work (eg, clinical, dependent care) before and/or during the pandemic were excluded from the analysis for that specific type of work. Descriptive analyses, including frequency analyses, were conducted using the χ² test to detect differences (although not directionality) in participant characteristics and outcomes by gender. We selected outcomes that were statistically significantly different by gender in the bivariate analysis to fit multivariable logistic regression models. Multivariable logistic regression analyses were conducted to predict the probability of a reduction in time spent on the selected types of professional work and an increase in time spent on the selected types of domestic work. Potential covariates chosen based on risk factors noted in the literature 4,7,12,22,34 included age (younger vs older), career level (early vs mid vs late), practice type (academic vs nonacademic), holding a leadership position (yes vs no), having a partner or living with other adults (yes vs no), having at least 1 young dependent (yes vs no), and having at least 1 school-age dependent (yes vs no). aORs and 95% CIs were estimated for these variables. Statistical significance was set at P < .05. C-statistics were used as overall model fit statistics. 51 All analyses were conducted in SAS 9.4 (SAS Institute).

Results

Among 768 survey respondents, we excluded those who reported nonbinary gender (n = 1), preferred not to answer about gender (n = 4), or did not report any total work hours before or during the COVID-19 pandemic (n = 5). Of the remaining 758 respondents, 514 (68%) were women and 246 (32%) were men (Table I).
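The analytic pipeline described above (bivariate χ² screening followed by multivariable logistic regression reported as aORs with 95% CIs) can be sketched as follows; the authors worked in SAS 9.4, so this Python version with simulated data and made-up variable names is purely illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 758

# Hypothetical respondent-level data mimicking the survey variables;
# every column name here is invented for illustration.
df = pd.DataFrame({
    "woman":      rng.integers(0, 2, n),
    "younger":    rng.integers(0, 2, n),
    "academic":   rng.integers(0, 2, n),
    "leadership": rng.integers(0, 2, n),
    "young_dep":  rng.integers(0, 2, n),
})
# Simulated outcome: reported reduction in time spent on career development.
logit = -1.0 + 0.5 * df["younger"] + 0.2 * df["young_dep"]
df["less_career_dev"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Bivariate screen, as in the descriptive analyses: chi-square on a 2x2 table.
table = pd.crosstab(df["woman"], df["less_career_dev"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Multivariable logistic regression; exponentiated coefficients are aORs.
model = smf.logit(
    "less_career_dev ~ woman + younger + academic + leadership + young_dep",
    data=df,
).fit(disp=0)
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"aOR": aor, "CI lower": ci[0], "CI upper": ci[1]}).round(2))
```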
A higher proportion of women were in the younger age group, did not hold a leadership position, had dependents at home, did not have a partner or other adult in the home, and had a partner who was also employed (P < .01 for all). Table II presents the change in time spent on various types of professional and domestic work during the COVID-19 pandemic by gender. Most neonatologists reported working the same (60%) or more (32%) since the start of the pandemic, but a higher proportion of women (35%) than men (26%) reported this increase. The majority of respondents reported working the same amount of time in clinical care (76%). A higher proportion of women than men reported a decrease in time spent on scholarly work (35% vs 29%; P = .02) and on career development (44% vs 35%; P < .01). Most survey respondents with dependents reported spending more time caring for children since the start of the pandemic, but significantly more women than men reported this increase (74% vs 56%; P < .01). Most respondents also reported spending more time managing their children's education, and although a higher proportion of women than men reported this increase, the difference was not statistically significant. More women than men reported spending more time on domestic work. Although most respondents spent less time on self-care, significantly more women reported this change. In the multivariable logistic regression analysis, women and men were similar when reporting a reduction in time spent on scholarly work. Although women tended to report a reduction in time spent on scholarly work more frequently than men, this bivariate finding was not statistically significant in the multivariable model (Figure 1; available at www.jpeds.com). A reduction in time spent on career development was associated with younger age and the number of dependents (Figure 2), and women were more likely to report an increase in time spent on domestic work and a reduction in time spent on self-care (Figure 3). No other demographic factors differed significantly in these analyses. C-statistics for the multivariable logistic regression models ranged from 0.60 to 0.76. The survey also indicated that 37.9% of respondents took on new work specifically related to COVID-19, 60% worked more from home, 14% took a leave of absence, and 45% experienced a reduction in salary and/or benefits, with no significant gender differences in these responses. In addition, 31% of respondents reported decreased career satisfaction, and 23% faced new or worsened mental health concerns. Of those respondents scheduled to take the Neonatal-Perinatal Certifying Exam in 2020 (n = 96), 30% deferred. Of those who took the examination, 22% did not pass.

Discussion

This study explored the effects of the COVID-19 pandemic on the neonatology physician workforce. Similar to other medical subspecialties, the neonatology workforce had to adapt to clinical uncertainties and constantly changing management of COVID-19-positive pregnant women and their newborns. Different from some fields, patient care in neonatology did not change significantly in volume, was not elective, and could not be provided remotely during the pandemic, as reflected in our results showing that most neonatologists worked the same or more clinically during this time. Most neonatologists reported a negative impact in important nonclinical domains (scholarly work, medical education, national and institutional service, and career development) during the pandemic.
Nearly one-half experienced a reduction in salary or benefits, even though the majority worked the same amount or more, and many experienced other negative career impacts (decreased career satisfaction, deferred board-certifying examination) and worsened mental health. The American Board of Pediatrics also reported a decrease in initial certifying examination takers (15%) but similar pass rates in 2020 compared with previous years. 52 Studies have highlighted the stress of the pandemic on physician wellness and work-life balance, especially in those who continued to work in person during the pandemic as frontline providers. 53,54 In addition to professional demands, domestic demands increased. Most neonatologists reported spending the same or more time caring for children, managing children's education, and performing housework, with less time on self-care. We speculate that this intersection of professional and domestic demands contributed to the negative impact in nonclinical domains for neonatologists. These impacts were experienced differently by different segments of the neonatologist workforce. We found the COVID-19 pandemic especially affected early-career neonatologists, those with younger dependents at home, and women, a finding echoed in studies of other subspecialties. 6,55 The fact that the groups most affected by the pandemic (women of childbearing age and those with children at home 33,50) represent the largest segment of the pediatric workforce has serious implications not only for individual physicians but for the field overall. Gender-based disparities, such as salary gaps and fewer leadership positions, adversely affected women in neonatology before the pandemic. 16,22 We found that time spent on career-advancing work (scholarly work and career development) decreased for women neonatologists since the start of the pandemic. Although the number of self-defined leadership positions in our older age category was similar between genders, the differences in our younger category, and those that other studies show in position types and importance, remain concerning. 16,36 Our gender-related findings were less significant than earlier pandemic reports from other specialties, 1,6,17,29,56 perhaps due to our comprehensive inclusion of private-practice neonatologists, who tend to have a smaller proportion of time devoted to scholarly work at baseline. Manuscript submissions increased for some journals in late spring 2020 (personal communication, P. Gallagher, 2021), with the greatest increase among male international authors. 29 At baseline, early-career and women neonatologists have fewer primary-authored publications 20 and may have less protected scholarly time and resources (owing to fewer leadership positions 16) and thus may be more affected by the pandemic. Because time spent on professional growth and scholarly work is needed for career advancement, this creates a vicious cycle that hinders early-career and women neonatologists from advancing into leadership positions. The COVID-19 pandemic disrupted the balance between professional and domestic life, with inadequate support for household and parenting needs. 2,4,7,8,26 Our study confirms that neonatologists, especially parents and women, faced increased responsibilities at home. Closed daycare centers, difficulty retaining in-home providers for physicians' children, and home-schooling responsibilities related to the pandemic disproportionately affected parents with young children.
Caring for aging parents, relatives in long-term care, and ill family members, along with decreases in external household help and time for self-care, were additional burdens. Given that women in our study tended to be younger (and earlier in their careers), to have younger dependents, and to have an employed partner compared with men, similar to the prepandemic findings reported by Horowitz et al, 16 these domestic effects may have burdened women differentially. Women physicians are at higher risk for burnout at baseline due to domestic demands compounding their professional responsibilities, 27,48 especially women in intensive care fields. 24 The additional domestic and professional stresses posed by the pandemic may increase the risk of burnout in women physicians. 4 The number of women entering pediatric careers continues to rise, contributing to a predominance of women in the pediatric workforce. 18 Yet gender inequity in medicine contributes to a leaky pipeline, with women's advancement slowed, stalled, or regressing. 57 Gender-equity initiatives are driven mostly by underpaid and underrecognized women volunteers, with little institutional recognition or support. As we map our recovery from COVID-19, including women and parents disproportionately affected by COVID-19 as leaders at the decision-making table is key. Building a stronger infrastructure to support the domestic needs of both men and women, and viewing it as an investment with long-term benefits rather than an additional cost, could allow women and parents to remain fully professionally engaged in the workforce. Importantly, this work would not simply support individuals and their careers, but also maximize their clinical and scholarly efforts to advance the care of their patients and the field of neonatology. This study has some limitations. Our sample might not be truly representative of the complete neonatology physician workforce. The precise numbers, including gender, age, and career path, of actively practicing neonatologists in the US are not tracked comprehensively. More than one-half of neonatologists, three-quarters of recent fellows, and a lower proportion of those aged >60 years are women (personal communication, AAP Section on Neonatal Perinatal Medicine, 2021), 50 which approximates our sample and allows reasonable generalizability to the population of US neonatologists. We gathered responses from both academic and private-practice neonatologists; however, survey engagement was greater from academic neonatology practices. The expectations for research and scholarly productivity may be relevant mainly to neonatologists in academia; however, the negative impact on career development would be similar in both practice settings. Our results may be confounded by respondent bias. Neonatologists who experienced a greater adverse effect of the pandemic may have been more likely to respond to our survey, and those with heavier workloads during the pandemic might not have had time to complete the survey. As with any survey instrument, recall bias also may have impacted the responses. However, the survey was sent out in the fourth quarter of 2020, at which point the effects of the pandemic had been felt for a long enough period for respondents to give an objective response, but not so distant as to affect recall. We did not ask for a direct measure of domestic support but used relationship status as a proxy, while recognizing that its utility is variable and limited.
Our results call for the development of targeted interventions to support postpandemic career recovery, to support individual physicians, and to advance their contributions to the field. We thank Patricia Labellarte for her assistance with survey design and Leonardo Barrera for his assistance with statistical analysis and figure construction.
Cenozoic tectono-thermal history of the southern Talkeetna Mountains, Alaska: Insights into a potentially alternating convergent and transform plate margin

■ ABSTRACT

The Mesozoic-Cenozoic convergent margin history of southern Alaska has been dominated by arc magmatism, terrane accretion, strike-slip fault systems, and possible spreading-ridge subduction. We apply 40Ar/39Ar, apatite fission-track (AFT), and apatite (U-Th)/He (AHe) geochronology and thermochronology to plutonic and volcanic rocks in the southern Talkeetna Mountains of Alaska to document regional magmatism, rock cooling, and inferred exhumation patterns as proxies for the region's deformation history and to better delineate the overall tectonic history of southern Alaska. High-temperature 40Ar/39Ar thermochronology on muscovite, biotite, and K-feldspar from Jurassic granitoids indicates postemplacement (ca. 158-125 Ma) cooling and Paleocene (ca. 61 Ma) thermal resetting. 40Ar/39Ar whole-rock volcanic ages and 45 AFT cooling ages in the southern Talkeetna Mountains are predominantly Paleocene-Eocene, suggesting that the mountain range has a component of paleotopography that formed during an earlier tectonic setting. Miocene AHe cooling ages within ~10 km of the Castle Mountain fault suggest ~2-3 km of vertical displacement and that the Castle Mountain fault also contributed to topographic development in the Talkeetna Mountains, likely in response to the flat-slab subduction of the Yakutat microplate. Paleocene-Eocene volcanic and exhumation-related cooling ages across southern Alaska north of the Border Ranges fault system are similar and show no S-N or W-E progressions, suggesting a broadly synchronous and widespread volcanic and exhumation event that conflicts with the proposed diachronous subduction of an active west-east-sweeping spreading ridge beneath south-central Alaska. To reconcile this, we propose a new model for the Cenozoic tectonic evolution of southern Alaska. We infer that subparallel-to-the-trench slab breakoff initiated at ca. 60 Ma and led to exhumation and rock cooling synchronously across south-central Alaska, played a primary role in the development of the southern Talkeetna Mountains, and was potentially followed by a period of southern Alaska transform-margin tectonics.

■ INTRODUCTION

The Talkeetna Mountains of south-central Alaska are positioned north of the Border Ranges fault system (BRFS) and more than 350 km inboard from the Pacific-North American plate boundary (Fig. 1). Today the Talkeetna Mountains completely overlie the subducted portion of the Yakutat microplate, a buoyant oceanic plateau that has been undergoing flat-slab subduction beneath southern Alaska since the late Oligocene (Figs. 1 and 2) (Benowitz et al., 2011; Lease et al., 2016). The Talkeetna Mountains region also experienced Eocene slab-window magmatism (Cole et al., 2006). The active transpressive Castle Mountain fault (CMF), which is thought to have formed as early as the Late Cretaceous (Bunds, 2001), defines the southern border of the Talkeetna Mountains (Fig. 1) (Parry et al., 2001; Haeussler et al., 2002). This spatial relationship suggests that vertical tectonics along the CMF may have contributed to the development of the Talkeetna Mountains (Fuchs, 1980; Clardy, 1974; Trop et al., 2003).
In order to better understand the overall Mesozoic-Cenozoic tectonic history of southern Alaska, we have applied 40Ar/39Ar (whole-rock, hornblende, muscovite, biotite, and K-feldspar), apatite fission-track, and apatite (U-Th)/He geochronology and thermochronology to bedrock samples collected from transects along and across the strike of the CMF and along vertical profiles throughout the glaciated high-peak region of the southern Talkeetna Mountains. The specific objectives of this study were to: (1) determine if the Talkeetna Mountains are primarily a paleotopographic expression of an earlier phase of tectonism and if the production of topography was driven by a Paleocene-Eocene plate boundary event prior to the current late Oligocene to present Yakutat flat-slab plate configuration; and (2) determine if late Oligocene to present subduction of the Yakutat microplate primarily drove topographic development in the Talkeetna Mountains.

Our 40Ar/39Ar and fission-track results suggest that the Talkeetna Mountains are in part residual topography that formed in response to a Paleocene-Eocene thermal event, as proposed for the western Alaska Range (Fig. 1) (Benowitz et al., 2012a). A compilation of new and regional thermochronology and dating of volcanics shows no west-east progression in the timing of initiation of Paleocene-Eocene exhumation or magmatism, which is inconsistent with the current sweeping spreading-ridge subduction model. Finally, based on apatite (U-Th)/He thermochronology, we infer that there has been ~2-3 km of vertical displacement along the CMF since the Miocene in response to flat-slab subduction of the Yakutat microplate, which has also contributed to the creation of topography in the Talkeetna Mountains.

Geology of the Talkeetna Mountains

The Talkeetna Mountains are bordered to the west by a Cenozoic intraplate and forearc composite basin (Susitna Basin) and to the east and south by remnant Cenozoic forearc basins (Copper River Basin and Matanuska Valley Basin, respectively) (Fig. 2B) (Stanley et al., 2014). The subvertical Talkeetna fault bisects the Talkeetna Mountains and acts as a lithospheric terrane boundary between the Wrangell composite terrane (WCT) and the Alaska Range suture zone (ARSZ) (Fig. 1) (Brennan et al., 2011; Fitzgerald et al., 2014). Three allochthonous crustal fragments making up the WCT, the Alexander, Wrangellia, and Peninsular terranes, amalgamated by early Mesozoic time, collided with the former continental margin at lower paleolatitudes, and were translated northward along regional strike-slip faults (e.g., the Denali fault; Plafker and Berg, 1994; Cowan et al., 1997; Stamatakos et al., 2001; Roeske et al., 2003; Gabrielse et al., 2006). In the southern Talkeetna Mountains, our main study area, the Peninsular terrane consists of a >5-km-thick succession of lavas, tuffs, and volcaniclastic sedimentary strata and adjacent plutonic rocks that are interpreted to reflect an oceanic arc (Rioux et al., 2007, 2010). Large batholiths intruding the WCT are referred to as the Late Triassic-Early Jurassic Talkeetna Arc; these batholiths were emplaced in this region from ca. 183-153 Ma (Hacker et al., 2011). A series of Late Jurassic trondhjemite plutons make up the bulk of bedrock exposures in the interior Talkeetna Mountains and constitute Mount Sovereign, the highest peak in the range (~2700 m) (Fig. 3) (Rioux et al., 2007).
North of the Talkeetna fault, the ARSZ primarily consists of a ~3- to ~5-km-thick package of Kahiltna Basin marine sedimentary strata; this package was subaerially uplifted during the Mesozoic WCT collision (Ridgway et al., 2002). The ARSZ is a complex suture zone between outboard accreted terranes (WCT) and inboard pericratonic terranes (e.g., Foster et al., 1994; Dusel-Bacon et al., 2006). The erosion-resistant nature of the southern Talkeetna Mountains plutonic bodies and the mafic crust of the WCT relative to the Kahiltna Basin rocks to the north may contribute in part to the difference in relief between the northern and southern regions of the Talkeetna Mountains. Overall, the structural configuration of the southern Talkeetna Mountains is not well constrained due to a lack of systematic, detailed geologic mapping across the entire range. Generally northwest- and southeast-striking extensional fault systems bisect the region of the Caribou Creek volcanic field and the Hatcher Pass region (Fig. 3) (Cole et al., 2006). These extensional faults are thought to have been active during a period of Paleocene-Eocene volcanism in the Caribou Creek volcanic field and regional Paleocene-Eocene crustal extension (Cole et al., 2006). Faults also appear to partially bound the high-peak region of the southern Talkeetna Mountains, consisting primarily of the Jurassic trondhjemite pluton (Fig. 3), suggesting that the region may have exhumed as an independent crustal block. A series of mesoscale folds and reverse faults deform Paleocene-Eocene strata exposed within ~10 km of the CMF, recording post-ca. 50 Ma shortening along the CMF (Bartsch-Winkler and Schmoll, 1992; Kassab et al., 2009; Robertson, 2014).

Talkeetna Mountains Mesozoic-Cenozoic Paleogeography

Paleomagnetism studies of rocks from the Talkeetna Mountains, located in the WCT of southern Alaska (Fig. 1), suggest that during the Late Cretaceous (ca. 80 Ma) the region was positioned at a paleolatitude 15° ± 8° to the south of its current location (Stamatakos et al., 2001). Northward displacement of the WCT is inferred to have occurred along structures such as the Denali and Tintina fault systems, which accommodated at least ~1000 km of combined dextral slip based on offset geologic features (Denali fault: Lowey, 1998; Benowitz et al., 2012b; Tintina fault: Gabrielse, 1985). The WCT was near its present-day latitude by ca. 54-40 Ma judging from paleomagnetic constraints from southern Talkeetna Mountains Eocene lava flows (Panuska et al., 1990).

The Castle Mountain Fault

The subvertical CMF extends ~250 km along the southern border of the Talkeetna Mountains without any obvious restraining or releasing bends and ends in a horsetail splay at the eastern end of the Talkeetna Mountains (Figs. 1 and 3). The CMF separates denser rocks to the south from less dense rocks to the north (Mankhemthong et al., 2013), and these strength heterogeneities may play a role in where deformation is focused along the fault zone. This fault zone is also rheologically weaker than the crust surrounding it and accommodates strain transferred inboard from the plate margin (Bunds, 2001). Approximately 130 km of Cenozoic right-lateral horizontal displacement has been suggested along the CMF (Trop et al., 2005; Pavlis and Roeske, 2007).
Willis et al. (2007) constrained a Holocene horizontal slip-rate estimate along the western portion of the CMF at ~2-3 mm/yr based on an offset postglacial outwash channel, although this slip rate may decrease significantly to the east, where slip is likely being partitioned into an oblique component. Fuchs (1980) suggested a post-Eocene slip rate of ~0.5 mm/yr based on field mapping observations, and finite element models by Kalbas et al. (2008) suggested a Holocene slip rate of ~1 mm/yr. Conversely, light detection and ranging (lidar)-based geomorphic studies along the western segment of the CMF suggest a much-diminished Holocene dextral slip rate (<0.3 mm/yr) and vertical motion at a rate of ~0.5 mm/yr (Koehler et al., 2014). The overall vertical displacement history along the CMF is not well constrained; regional mapping studies document up to 3 km of north-side-up Neogene vertical slip based on offset Jurassic to Paleogene strata (Grantz, 1966; Detterman et al., 1976), but detailed cross sections have not been reported.

Kula-Resurrection Spreading-Ridge Subduction Hypothesis

It has been proposed that during late Paleocene-Eocene time, southern Alaska experienced diachronous subduction of the active Kula-Resurrection oceanic spreading ridge (Fig. 2A) (Bradley et al., 1993; Haeussler et al., 2003). The Kula-Resurrection ridge is interpreted to have subducted at an oblique angle and along an eastward-sweeping trajectory in a subparallel motion with respect to the paleo-trench (Haeussler et al., 2003). This model stems chiefly from a ~2000-km-long string of near-trench plutons in the accretionary prism, the Sanak-Baranof belt (Figs. 1 and 2A); the string shows an eastward progression in the timing of magmatism from ca. 63 Ma to ca. 47 Ma. Many other data sets also document a regional ca. 63-47 Ma "near-trench" thermal event within the prism, including high-temperature and low-pressure metamorphism, mafic underplating, extensive fluid circulation, and rapid exhumation and erosion (e.g., Haeussler et al., 1995; Kusky et al., 1997; Pavlis and Sisson, 2003; Gasser et al., 2011). However, the Sanak-Baranof near-trench magmatic belt may have been positioned >~2500 km to the south along the western margin of North America from ca. 63 Ma to ca. 47 Ma and subsequently translated to its current location by the late Eocene along orogen-parallel faults such as the BRFS (Cowan, 2003; Garver and Davidson, 2015; Garver, 2017; Davidson and Garver, 2017). This competing model is based in part on a paleomagnetism study by Bol et al. (1992) and detrital-zircon studies by Garver and Davidson (2015) and Davidson and Garver (2017). If this is the case, then a relatively stationary Paleocene-Eocene slab window or other thermal perturbation (hot spot?) led to the emplacement of the Sanak-Baranof suite ~2500 km to the south as the overlying plate was translated to the north over the thermal perturbation (Cowan, 2003). Independent of the Sanak-Baranof near-trench plutonic suite, there is compelling supporting evidence for a widespread Paleocene-Eocene slab-window-related thermal event across southern and interior Alaska; this event drove rapid rock cooling (e.g., Yukon-Tanana: Dusel-Bacon and Murphy, 2001; western Alaska Range: Benowitz et al., 2012a; Saint Elias Range: Enkelmann et al., 2017), thermal resetting (Finzel et al., 2016), basin subsidence and inversion (Ridgway et al., 2012; Kortyna et al., 2013), and volcanism (Cole et al., 1999, 2006, 2007).
If these geologic events are related to a west-to-east-sweeping ridge subduction event to the south, with minimal strike-slip displacement (<500 km) between the accretionary prism and the region north of the BRFS since ca. 63-47 Ma, there should be a rock record north of the BRFS of west-to-east and south-to-north progressions in the initiation of these deformation events. Alternatively, during the Paleocene-Eocene, the accretionary prism Sanak-Baranof belt and southern Alaska north of the BRFS may have been distal from each other and experienced different slab-window mechanisms.

Flat-Slab Subduction of the Yakutat Microplate

Since ca. 30 Ma, the primary driver of orogenic processes in southern Alaska has been the ongoing flat-slab subduction of the Yakutat microplate, a buoyant ~15- to ~30-km-thick oceanic plateau (Fig. 2B) (Worthington et al., 2012; Brueseke et al., 2019). The Yakutat flat-slab extends ~350 km inboard before the dip angle increases (Eberhart-Phillips et al., 2006), and this has been suggested to cause the almost complete gap in magmatism between the Aleutian and Wrangell Arcs (Finzel et al., 2011; Trop et al., 2012; Brueseke et al., 2019). Active transpressional fault systems across southern Alaska accommodate the oblique convergence of the Yakutat flat-slab, resulting in numerous regions of deformation far inboard from the trench interface (Riccio et al., 2014; Burkett et al., 2016; Waldien et al., 2018). It has been proposed that the topographic development of the Talkeetna Mountains coincided with the flat-slab subduction of the Yakutat microplate (Figs. 1 and 2B) (Hoffman and Armstrong, 2006; Finzel et al., 2011). This is primarily based on the modern position of the Alaska Range over the subducted portion of the flat-slab, limited Miocene Talkeetna Mountains AHe bedrock cooling ages (e.g., Arkle et al., 2013), and enhanced sediment accumulation rates and sediment delivery from bedrock sources exhumed above the flat-slab region (Cook Inlet: Finzel et al., 2011, 2016; Tanana Basin: Benowitz et al., 2019). Geodynamic computational modeling provides a potential test of whether the Yakutat microplate has primarily driven topographic development in the Talkeetna Mountains. Jadamec et al. (2013) used three-dimensional numerical models to test whether the deformational patterns across southern Alaska can be explained by the modern plate configuration (Fig. 4A). The models produce results that match most of Alaska's modern topography. However, in the Talkeetna Mountains region, they do not predict a topographic high, but rather a basin (Figs. 4A and 4B). The models suggest that basins correlate with where the slab-dip angle increases and the slab separates from the overriding plate, dynamically pulling down the overlying crust (Fig. 4B). The high topography of the Talkeetna Mountains contradicts the topographic predictions of Jadamec et al. (2013) unless a significant portion of the topographic relief predates the modern configuration, which would suggest that the modern plate configuration is not the dominant control on topography in the range. The Jadamec et al. (2013) modeling also includes a rheologically weak zone where the Denali fault is located and correctly predicts topographic construction in the Alaska Range along this strike-slip structure. Conversely, this model does not account for the existence of the CMF and its potential for focusing deformation and vertical displacement, which may also contribute to its failure to correctly predict the topography of the Talkeetna Mountains.
Therefore, the possibility of topographic development due to vertical tectonics along the CMF is not eliminated by these models, if the CMF is also a rheologically weak zone, as argued by Bunds (2001). A cross section of seismicity from the Aleutian Trench to interior Alaska displays the flat-slab subduction of the Yakutat microplate under the North American plate (Fig. 5) and highlights the significant active structural elements, such as the Denali fault, which is clearly shown as a crustal-scale feature. The Yakutat slab dips subhorizontally until it reaches the Talkeetna Mountains region, where the dip angle increases to ~20°. Beneath the Talkeetna Mountains, seismicity appears to be diffuse, and the CMF does not appear to display significantly more seismicity compared to the area immediately to the north. The limited shallow seismicity and the imaged depth of the downgoing Yakutat slab suggest that the interacting plates are not highly coupled and that the buoyant slab is not acting as an upward force on the crust of the Talkeetna Mountains. Given this framework, we can use thermochronology to test whether the Talkeetna Mountains reflect a paleotopography contribution that formed during a previous tectonic event and the role, if any, of the CMF in the region's topographic development history.

Cenozoic Thermal History of Southern Alaska

The known varied convergent margin configurations that the region has undergone suggest that the thermal regime of southern Alaska has changed throughout the Cenozoic (e.g., Riccio et al., 2014; Lease et al., 2016). Thermochronology in the western Alaska Range (Fig. 1) shows evidence for a higher than normal geothermal gradient (>50 °C/km) during the Eocene (Benowitz et al., 2012a), suggesting that high heat flow and the injection of magma into the upper crust contributed to regional mountain building. Finzel et al. (2016) also infer a possibly high geothermal gradient (>100 °C/km) across southern Alaska during the Paleocene-Eocene based on reset detrital-zircon fission-track ages from Cretaceous-Cenozoic strata. This anomalously high geothermal gradient event likely extended across southern Alaska and persisted for ~20 m.y. (O'Sullivan and Currie, 1996; Cole and Stewart, 2009). However, it is not known when the modern thermal regime was established. The southern Talkeetna Mountains are thought to occupy a region that was subjected to elevated heat flow above a slab window to the asthenosphere and was subsequently cooled by flat-slab subduction of the Yakutat microplate (Cole et al., 2006; Finzel et al., 2011). The basis for this inference is a package of Eocene volcanic rocks that have been linked to an inferred Paleocene-Eocene spreading-ridge subduction event (Fig. 1) (Cole et al., 2006; Cole and Stewart, 2009) and the current position of the range over the subducted portion of the Yakutat microplate (Fig. 1). Therefore, the regional rock record should register a marked shift in its thermal structure through time and space. The Talkeetna Mountains Jurassic trondhjemite plutons have been intruded by mafic dikes and K-feldspar-rich fluids (Fig. 6; P1, P2, and P3) (see Results section), providing additional evidence for a regional thermal event. Currently there are no active hot spring systems in the region; there are, however, abundant outcrops displaying hydrothermal alteration, especially within the trondhjemite plutons (Fig. 6; P3). Spatially, the Talkeetna Mountains are located ~300 km from Neogene to presently active volcanoes (Fig. 1), so the region's thermal history has not been overprinted by Neogene volcanism.
Previously published low-temperature thermochronology data have generally focused on the southernmost region of the Talkeetna Mountains near the CMF (Little and Naeser, 1989; Parry et al., 2001; Hoffman and Armstrong, 2006; Hacker et al., 2011; Bleick et al., 2012) and have indicated temporal-spatial variability in the timing of Talkeetna Mountains rock cooling and inferred exhumation. AFT ages in the Hatcher Pass region (Figs. 3 and 7) record Paleocene-Eocene structural and erosional exhumation (Bleick et al., 2012). Miocene AHe ages near the CMF are indicative of a more recent rock cooling and inferred exhumation event in the southern Talkeetna Mountains (Hoffman and Armstrong, 2006). Younger (ca. 16-22 Ma) AFT ages south of the CMF suggest a period of rapid exhumation that coincided with the highly coupled flat-slab subduction of the Yakutat microplate (Little and Naeser, 1989; Hoffman and Armstrong, 2006). However, only scant low-temperature thermochronology data are available in the high-peak region of the Talkeetna Mountains, and previously published ages were not collected in a systematic way that could elucidate age-elevation relationships, thermal resetting, or a CMF structural control on cooling-age patterns. Hence, this study utilizes a multi-thermochronometer and geochronological approach applied to bedrock samples collected between the Talkeetna fault and the CMF and one sample south of the CMF, combined with previously published results (Hacker et al., 2011; Bleick et al., 2012; Arkle et al., 2013), to constrain rock-cooling histories.

Sampling Strategy

We use a range of geochronology (40Ar/39Ar on whole-rock volcanic samples) and multi-method high-temperature (40Ar/39Ar on hornblende, muscovite, K-feldspar, and biotite) and low-temperature (apatite fission-track [AFT] and apatite (U-Th)/He [AHe]) thermochronology techniques to constrain regional patterns of volcanism and time-temperature histories for rock samples in our study area (Fig. 7). In order to discern regional cooling-age patterns with respect to the CMF and elevation, our sampling strategy included bedrock sampling transects along and across strike of the CMF and over a substantial portion of the high-peak region of the Talkeetna Mountains (Fig. 7) (Spotila, 2005). We also conducted two vertical profiles, collecting bedrock samples every ~100 m over a ~1300 m vertical distance. Age-elevation profiles allow the possible identification of distinct slope inflection points, which are interpreted to mark changing rock-cooling and inferred exhumation rates (e.g., Fitzgerald et al., 1993). One vertical profile was collected along Mount Sovereign (~2700 m) and one along a peak off the Sheep Glacier (~2250 m), referred to herein as Sheep Mountain (Fig. 3). Most of our samples were collected within a large Jurassic trondhjemite pluton and a Jurassic granite pluton (Fig. 3). Sample 01Chic is a metabasalt collected in the CMF zone (Fig. 7). One tonalite sample was collected south of the CMF (05King). We collected samples at different distances from Eocene volcanic intrusions on the outskirts of the Jurassic trondhjemite pluton (Fig. 6; P5 and P6) to test for thermal resetting. Volcanic rocks representing five different phases of magmatism were sampled at a minimum distance of ~5 m from trondhjemite sample 01Sov (Figs. 6 and 7) to further test for thermal resetting. Our new ages were integrated with existing thermochronology (Silberman and Grantz, 1984; Little and Naeser, 1989; Parry et al., 2001; Cole et al., 2006; Hoffman and Armstrong, 2006; Hacker et al., 2011; Bleick et al., 2012; Arkle et al., 2013) to constrain the Cenozoic exhumation and magmatic history of the Talkeetna Mountains.
Geochronology and Thermochronology Techniques: 40Ar/39Ar

40Ar/39Ar geochronology and thermochronology were performed at the University of Alaska Fairbanks Geochronology Facility on hornblende (05King), muscovite (01Sov, 03Sov, and 13Sov), sericite (01Red), biotite (13Sov), K-feldspar (01Sov and 03Sov), and phenocryst-free groundmass separates from whole-rock volcanic samples (01Sov-1, 01Sov-2, 01Sov-3, 01Sov-4, 02Sov, and 14Sov). Samples were crushed, sieved for the 250-1000 µm grain size, washed, put through heavy liquids, and then separated using magnetic and handpicking mineral separation techniques. Samples were analyzed on a VG-3600 mass spectrometer using laser step-heating techniques. Dating multiple minerals in the same sample provides information about a rock's thermal history from ~150-450 °C. Whole-rock volcanic ages provide information about the timing of magmatism and diking. For a more detailed description of the 40Ar/39Ar analytical methods used and how uncertainties were derived, see the Supplemental Materials (Text S1 [footnote 1]). The K-feldspar age spectra for samples 01Sov and 03Sov (Fig. 8) are interpreted using multi-domain diffusion modeling (Lovera et al., 2002) to understand their thermal histories. Instead of performing diffusion experiments, we look at the timing of closure of the high-temperature (KFATmax: ~350 °C) and low-temperature (KFATmin: ~150 °C) domains for K-feldspar (Löbens et al., 2017). A summary of all the 40Ar/39Ar results is given in Table 1, with all ages quoted at ±1σ and calculated using the constants of Renne et al. (2010). For detailed isotopic tables and figures, see the Supplemental Materials (Table S1 and Fig. S1 [footnote 1]).

Thermochronology Techniques: Apatite Fission Track

Under typical continental geothermal gradients, AFT thermochronology provides information about the thermal history of a rock sample in the upper ~3-5 km of the crust (Dodson, 1973). This technique involves analysis of the damage tracks formed by the spontaneous fission of 238U (Tagami and O'Sullivan, 2005). Depending on the apatite grain composition and cooling rate, fission tracks will partially anneal at temperatures >60 °C and completely anneal at temperatures >120 °C. This temperature window is referred to as the partial annealing zone (PAZ). The temperature sensitivity of fission tracks allows for analysis of a rock sample's thermal history by measuring track lengths; shorter tracks indicate a longer residence time in the PAZ (60-120 °C) and a relatively slower cooling rate (Donelick et al., 2005). Track-length distributions that include both long and partially annealed tracks indicate more complex thermal histories. For this study, AFT analyses were performed by Paul O'Sullivan at the GeoSep Services facilities in Moscow, Idaho, on 21 samples. Age and track-length information is reported in Table 2, and AFT analytical data are reported in Table 3. For a detailed description of the methods used and how uncertainties were derived, see the Supplemental Materials (Text S2 [footnote 1]).
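To make the quoted depth windows concrete, the temperature sensitivity of a thermochronometer can be converted to a crustal depth range with a back-of-the-envelope calculation (an illustrative sketch only, assuming a uniform geothermal gradient of ~20-25 °C/km and a near-0 °C mean surface temperature):

$$ z \approx \frac{T}{\mathrm{d}T/\mathrm{d}z} \quad\Rightarrow\quad z_{\mathrm{PAZ}} \approx \frac{60\text{--}120\;^{\circ}\mathrm{C}}{20\text{--}25\;^{\circ}\mathrm{C\,km^{-1}}} \approx 2.5\text{--}6\;\mathrm{km}, $$

which brackets the upper ~3-5 km quoted above; the same arithmetic applied to the ~40-80 °C AHe window of the next subsection yields ~1.5-4 km, consistent with the upper ~2-4 km quoted there.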
Thermochronology Techniques: Apatite (U-Th)/He

(U-Th)/He thermochronology involves the analysis of alpha particles (4He) accumulated in a mineral due to the radioactive decay of uranium and thorium (Reiners and Brandon, 2006). With a nominal closure temperature of 40-80 °C, apatite (U-Th)/He thermochronology (AHe) provides information about the thermal history of a rock sample in the upper ~2-4 km of the crust (Farley, 2002). 4He particles travel ~20 microns from their parent atoms during radioactive decay, resulting in the ejection of 4He produced near the edge of a grain and requiring corrections referred to as the FT correction (Farley et al., 1996; Ketcham, 2005). The closure temperature of an apatite grain should vary depending on the grain size, cooling rate, and radiation damage accumulated in the crystal lattice and should be reflected in intra-sample and overall grain age dispersion (Reiners and Farley, 2001; Flowers et al., 2009). However, closure temperature in individual apatite grains is often not clearly controlled by these kinetic factors (Fitzgerald et al., 2006). Twelve AHe ages were determined for this study on four to seven grains for each rock sample by Jim Metcalf at the University of Colorado Boulder. Corrections based on grain size (FT) were applied to raw ages to correct for alpha-particle ejection effects (Farley, 2002). Single-grain outliers, which were significantly older or younger than the mean age of grains in a sample, were found in three analyses. In general, this was due to low concentrations of uranium or 4He in that particular grain. We excluded these outliers from our sample average age calculations, and doing so did not affect our results or interpretations. Given the natural dispersion of intra-sample single-grain AHe ages, we calculated the standard deviation for each sample grain set and applied this as the best approximation of the geologic error for the analysis (Spotila and Berger, 2010). Sample AHe average ages, uncertainties, and analytical data are reported in Table 4. For a more detailed description of the AHe methods used and how uncertainties were derived, see the Supplemental Materials (Text S3 [footnote 1]).
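As a minimal illustration of this reduction workflow (a sketch only, not the authors' code: the grain ages and FT values below are hypothetical, and a median-based screen stands in for the qualitative outlier exclusion described above):

```python
import numpy as np

def reduce_ahe_sample(raw_ages_ma, ft_factors, mad_cutoff=3.0):
    """Illustrative AHe data reduction: alpha-ejection (FT) correction of
    raw single-grain ages, robust screening of single-grain outliers, and
    a sample mean whose standard deviation is taken as the geologic error
    (cf. Spotila and Berger, 2010)."""
    raw = np.asarray(raw_ages_ma, dtype=float)
    ft = np.asarray(ft_factors, dtype=float)
    corrected = raw / ft  # FT correction inflates the raw age (Farley, 2002)
    # Median/MAD screen: flag grains far from the rest of the aliquot
    # (e.g., grains with anomalously low U or 4He concentrations).
    med = np.median(corrected)
    mad = 1.4826 * np.median(np.abs(corrected - med))
    keep = np.abs(corrected - med) <= mad_cutoff * mad
    retained = corrected[keep]
    return retained.mean(), retained.std(ddof=1), corrected[~keep]

# Hypothetical four-grain aliquot; ages (Ma) and FT values are illustrative.
mean_age, geol_err, excluded = reduce_ahe_sample(
    [30.1, 28.7, 31.5, 12.0], [0.70, 0.68, 0.72, 0.69])
print(f"sample AHe age: {mean_age:.1f} +/- {geol_err:.1f} Ma; excluded: {excluded}")
```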
HeFTy Thermal Modeling

Inverse thermal models were created for each of our samples using the program HeFTy (Ketcham, 2005). Using an estimate of the present-day surface temperature and higher-temperature (40Ar/39Ar) thermochronology data as constraints, HeFTy models the time-temperature cooling history of a sample. The program evaluates "best-fit" cooling paths and slopes based on input age and AFT track-length constraints. We present Monte Carlo inverse models (50,000 tried paths) showing acceptable and good cooling paths constrained in envelopes and weighted-mean T-t paths. Input constraints for the models include 40Ar/39Ar hornblende (~400-600 °C), muscovite (~400-425 °C), biotite (~250-350 °C), and K-feldspar (~180-350 °C) ages, AFT data (~60-120 °C) (single-grain ages, Dpar, track lengths, and angle of tracks to the c-axis), and average AHe ages. We use a broad temperature window (~40-80 °C) for sample average AHe ages because intra-sample grain age dispersion and overall grain age dispersion were not correlated with either grain size or effective uranium (Fig. S3 [footnote 1]).

■ RESULTS AND INTERPRETATIONS

Field Observations

Samples collected for this study outside of the Jurassic trondhjemite pluton (Fig. 3) were spot samples collected via helicopter. Hence, this study does not document any field relationships outside of the trondhjemite pluton and the region immediately surrounding it. Samples collected outside of the trondhjemite pluton were generally granitoids (01King, 02King, 03King, and 05King) but also included a metabasalt collected in the CMF zone (01Chic). The northeast edge of the Jurassic trondhjemite pluton is characterized by a contact with Paleocene-Eocene volcanic rocks and numerous exhumed lithified volcanic bodies and mafic dikes that intrude the trondhjemite pluton for ~3 km from the contact (Figs. 3 and 6; P1 and P2). The mafic dikes intrude along exfoliation joints in the trondhjemite pluton and are evidence for some degree of unroofing prior to dike emplacement. Circa 60-50 Ma sedimentary strata locally overlie these Jurassic plutons and volcanic rocks along a prominent nonconformity (Sunderlin et al., 2014; Wilson et al., 2015), requiring significant unroofing prior to emplacement of these Eocene dikes. The concentration of dikes decreases significantly moving southwest from sample 03Sov. There are rare dikes diffusely dispersed across the interior of the trondhjemite pluton, such as at samples 14Sov and 12Talk (Figs. 3 and 7). We did not observe any faults in the region of our vertical profiles (Fig. 3), although there are mapped structures that appear to partially bound the edges of the trondhjemite pluton (Fig. 3) (Wilson et al., 2015). Between samples 13Talk and 14Talk, there is a distinct ~S-N-striking shear zone consisting of exhumed amphibolite with extensive mineralization and a mapped ~NW-SE-striking fault (Figs. 3 and 7).
Throughout the trondhjemite pluton, outcrop faces show evidence for fluid infiltration, hydrothermal alteration, and subsequent mineralization (Fig. 6; P3 and P4). This is most apparent along the Sheep Mountain profile where sample 01Red was collected (Fig. 7). Here a portion of the trondhjemite pluton has been metasomatized to K-feldspar- and sericite-rich rocks (Fig. 6; P4). In this area, we observed a staked mining claim that speaks to the extent of alteration.

40Ar/39Ar Geochronology and Thermochronology Results

Fifteen 40Ar/39Ar ages were produced for this study and are presented below, organized by mineral type. Ages are reported at ±1σ uncertainty (Table 1). Age spectra are shown for each sample in Figures 8 and 9 (also see Fig. S1 [footnote 1]) and in general are flat, suggesting minimal argon loss. Isochron ages were calculated when possible and are shown in Figure S1 (footnote 1), and isotopic analytical data are reported in Table S1 (footnote 1).

Hornblende Age

A homogeneous hornblende separate from sample 05King, a tonalite collected from hypabyssal intrusions south of the CMF (unit Jktm on Fig. 3), was analyzed (Figs. 8 and S1 [footnote 1]; Table 1). The integrated age (59.9 ± 13.5 Ma) and the plateau age (47.6 ± 11.9 Ma) are within uncertainty. We prefer the plateau age of 47.6 ± 11.9 Ma for sample 05King because of the higher atmospheric content of the lower-temperature step-heat release. The large uncertainty is likely due to the low K concentration of the hornblende separate.

A homogeneous biotite separate from sample 13Sov was analyzed (Figs. 8 and 9; Table 1). The integrated age (148.2 ± 0.6 Ma) and the plateau age (148.7 ± 0.6 Ma) are within uncertainty. We prefer the plateau age of 148.7 ± 0.6 Ma because of the high atmospheric 40Ar content of the low-temperature step heat. The time between closure of the muscovite and biotite mineral systems in sample 13Sov is ~1.5 m.y. (Fig. 9). These mica ages are similar to the U-Pb zircon crystallization age of the trondhjemite pluton (Fig. 7) (157 Ma to ca. 159 Ma; Rioux et al., 2007). The duration of time between closure of the muscovite and biotite mineral phases (a closure-temperature separation of ~100 °C) is geologically instantaneous (~1.5 m.y.) (Fig. 9). This suggests rapid rock cooling (~67 °C/m.y.; see the worked example below) following the Late Jurassic emplacement of the trondhjemite pluton, which may have been protracted (Hacker et al., 2011). A homogeneous sericite separate from sample 01Red, collected in unit Jtr (Fig. 3), was analyzed (Figs. 8 and S1 [footnote 1]; Table 1). The integrated age (102.8 ± 1.2 Ma) and the plateau age (99.1 ± 0.9 Ma) are not within uncertainty. We prefer the plateau age of 99.1 ± 0.9 Ma because of the anomalously older age of the lowest-temperature step heat.

K-Feldspar Ages

Homogeneous K-feldspar separates from samples 01Sov and 03Sov were analyzed (Figs. 8, 9, and S1 [footnote 1]; Table 1). For sample 01Sov, the age spectrum is bimodal, suggesting a more complex thermal history. The age spectrum did not meet the criteria for a plateau age (three consecutive steps); therefore, weighted average ages are reported. The integrated age (163.6 ± 4.4 Ma), maximum weighted average age (86.5 ± 2.5 Ma), and minimum weighted average age (61.0 ± 3.1 Ma) are not within uncertainty. We prefer a maximum weighted average age (KFATmax) of 86.5 ± 2.5 Ma and a minimum weighted average age (KFATmin) of 61.0 ± 3.1 Ma for sample 01Sov.
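The cooling rates quoted for closure-temperature pairs throughout this section reduce to the closure-temperature separation divided by the age gap between the two mineral phases; for the muscovite-biotite pair discussed above (using the ~100 °C separation and ~1.5 m.y. age gap stated there):

$$ \frac{\Delta T}{\Delta t} \approx \frac{100\;^{\circ}\mathrm{C}}{1.5\;\mathrm{m.y.}} \approx 67\;^{\circ}\mathrm{C/m.y.} $$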
Figure 9. Muscovite/biotite and muscovite/K-feldspar 40Ar/39Ar age spectra pairs for samples 01Sov, 03Sov, and 13Sov. The "age gap" represents the closure interval between the two mineral phases. Filled red, brown, and orange bars represent the steps used for the muscovite, biotite, and K-feldspar ages, respectively. MSWD—mean square of weighted deviates.

The duration of time between closure of the ~350 °C and ~150 °C nominal temperature domains for K-feldspar is ~26.5 m.y. The duration between closure of the muscovite and the high-temperature K-feldspar mineral phases in sample 01Sov is ~63.4 m.y. The age spectrum for sample 03Sov is flatter and suggests a less complex thermal history (Fig. 8). The integrated age (133.3 ± 2.3 Ma) and the plateau age (124.9 ± 1.8 Ma) are not within uncertainty. We prefer the plateau age of 124.9 ± 1.8 Ma for sample 03Sov because of the anomalously older age of the lowest-temperature step heat. An isochron age of 129.1 ± 3.2 Ma was determined for the K-feldspar separate from sample 03Sov and is within uncertainty of the plateau age. The duration between closure of the muscovite and K-feldspar mineral phases in sample 03Sov is ~33 m.y. (Figs. 8 and 9). Sample 03Sov records relatively slow Cretaceous rock cooling between closure of the muscovite and K-feldspar temperature domains (~5 °C/m.y.) (Fig. 9). Sample 01Sov records relatively slow Cretaceous rock cooling between closure of the muscovite and high-temperature K-feldspar domains (~2 °C/m.y.). 40Ar/39Ar K-feldspar thermochronology on sample 01Sov, located ~200 m below sample 03Sov (Fig. 7), yields a bimodal age spectrum that we infer demonstrates thermal resetting at ca. 61 Ma and subsequent rock cooling (Figs. 8 and 9; Table 1), indicating a more complex cooling history. The partial thermal resetting of K-feldspar in sample 01Sov (ca. 61 Ma) happened prior to the main episode of regional dike emplacement at ca. 50-40 Ma (Fig. 7). We attribute the thermal resetting of K-feldspar in sample 01Sov to an elevated Paleocene geothermal gradient induced by high heat flow through a slab window beneath the Talkeetna Mountains, as proposed by Cole et al. (2006), followed by subsequent rock cooling related to exhumation. This time period (ca. 61 Ma) also overlaps with a period of thermal resetting constrained by detrital-zircon fission-track analyses on Cook Inlet (Fig. 2) Cretaceous strata (Finzel et al., 2016), and this inference is consistent with regional evidence for an elevated geothermal gradient (Benowitz et al., 2012a).

Whole-Rock Ages

Homogeneous, phenocryst-free whole-rock separates from samples 01Sov-1, 01Sov-2, 01Sov-3, 01Sov-4, and 01Sov-5 (five different phases of magmatism sampled meters from each other), 02Sov, and 14Sov, which are mafic dikes intruding the trondhjemite pluton (unit Jtr in Fig. 3), were analyzed (Figs. 8 and S1 [footnote 1]; Table 1). The sites for the 01Sov sample series and 02Sov are located at the northeast edge of the trondhjemite pluton, and sample 14Sov is located toward its interior (Figs. 3 and 7). The five different magmatic phases of sample 01Sov (located at the contact of units Tepv and Jtr in Fig. 3) have plateau ages from ca. 46.5 Ma to ca. 42.3 Ma (Fig. 8). Four of the five different magmatic phases provided isochron ages of 42.2 ± 0.2 Ma, 42.7 ± 1.0 Ma, 44.6 ± 0.6 Ma, and 44.5 ± 0.5 Ma (see Fig. S1 [footnote 1]). We prefer the plateau ages for these samples because of the higher atmospheric content of the lower-temperature step-heat releases.
For sample 02Sov, the integrated age (46.5 ± 0.5 Ma), plateau age (46.3 ± 0.4 Ma), and isochron age (46.3 ± 0.4 Ma) are all within uncertainty. We prefer the plateau age of 46.3 ± 0.4 Ma because of its higher precision. For sample 14Sov, the integrated age (52.6 ± 2.5 Ma), plateau age (52.4 ± 2.5 Ma), and isochron age (49.6 ± 5.8 Ma) are all within uncertainty. We prefer the plateau age of 52.4 ± 2.5 Ma because of the anomalously high age of the highest-temperature step heat. When our new whole-rock ages are integrated with 19 previously published whole-rock 40Ar/39Ar ages in the Talkeetna Mountains (north of the CMF and south of the Talkeetna fault), ages range from ca. 61 to ca. 30 Ma (Fig. 7) (Silberman and Grantz, 1984; Cole et al., 2006; Oswald, 2006; Cole et al., 2007). Our ages support the interpretation by Cole et al. (2006) of a period of high-flux Talkeetna Mountains regional volcanism that persisted for millions of years during the Paleocene-Eocene and sparse magmatism that continued during the Oligocene.

Apatite Fission-Track Thermochronology Results

Twenty-one AFT cooling ages were produced for this study on intrusive rocks (Fig. 7 and Table 2) and are compiled with previously existing AFT cooling ages in the region (Little and Naeser, 1989; Parry et al., 2001; Bleick et al., 2012). We report pooled ages with calculated uncertainties representing the ±95% confidence interval (2σ) (ranging from ±ca. 3 Ma to ca. 31 Ma; Table 2). Dpar was measured in most dated grains, and average sample Dpar values range from ~1.7-2.7 µm (Table 3). There is no correlation between Dpar values and age (Fig. S4A [footnote 1]) or track lengths (Fig. S4B [footnote 1]), suggesting similar annealing kinetics for all samples. Confined track-length distributions are reported in Figure S2 (footnote 1).

AFT North of the CMF

Eighteen samples north of the CMF (Fig. 7) have Paleocene-Eocene cooling ages ranging from ca. 63.0 Ma to ca. 41.1 Ma (Fig. 10A). The highest elevation sample (13Sov) has a Cretaceous cooling age of ca. 74.2 Ma (Fig. 10A). Sample 03King was collected within ~5 m of the granite/metamorphic rock contact between units Jgr and PSm (Fig. 3) and has an Oligocene cooling age of ca. 34.2 Ma. These results agree with other AFT cooling ages north of the CMF (Bleick et al., 2012), which are predominantly Paleocene-Eocene. Mean track lengths from our sample set range from 12.9 µm to 14.4 µm (Table 2). AFT cooling ages in the southern Talkeetna Mountains fall into two separate cooling domains divided by elevation. Samples located outside of the trondhjemite pluton at lower elevations (less than ~1500 m) and near the CMF do not have an age-elevation relationship (Fig. 10A). AFT cooling ages from samples collected along vertical profiles of Mount Sovereign and Sheep Mountain have an age-elevation relationship with an inflection point at ca. 59 Ma that suggests more rapid rock cooling and inferred exhumation after that time at a maximum rate of ~188 m/m.y. (Fig. 10B).

Figure 10. (A) Apatite fission-track (AFT) age versus elevation plot including all cooling ages from this study and other published sources. (B) AFT age versus elevation plot for samples (this study) from the Mount Sovereign and Sheep Mountain vertical profiles. The yellow bar is the inflection point that we interpret to reflect a change to more rapid rock cooling and inferred exhumation; the width of the bar is a qualitative uncertainty. Exhumation rates estimated from lines qualitatively fit through sample ages are shown in bold font. Blue circles are apatite (U-Th)/He (AHe) cooling ages that may have been thermally reset during peak regional volcanism (red bar). (C) AFT age versus distance from the Castle Mountain fault (CMF) along a S-N transect approaching the CMF. (D) AFT age versus elevation along the same S-N transect approaching the CMF. There is an apparent relationship of AFT ages getting younger approaching the CMF in Figure 10C. However, all AFT ages in the southern Talkeetna Mountains do not show a trend of getting younger toward the CMF, and the R2 relationship in Figure 10D is slightly stronger, suggesting these age patterns are controlled by elevation rather than distance from the CMF.

Samples 06Sov, 10Sov, and 12Sov have AFT ages of ca. 42.6 Ma, ca. 45.0 Ma, and ca. 51.9 Ma, respectively, and are distinct outliers from the general age-elevation relationship (Fig. 10B). This is likely due to thermal resetting from the injection of hydrothermal fluids during middle Eocene magmatism, based on field observations of hydrothermal alteration (Fig. 6; P3), new whole-rock 40Ar/39Ar constraints on Mount Sovereign Eocene magmatism (Fig. 7), and the apparently elevation-invariant AHe cooling ages along the same vertical profile (Fig. 10B). Sample 04King has an AFT age of ca. 44.0 Ma and is another outlier to the age-elevation trend. This sample is located away from the main vertical-profile sample cluster (Fig. 7) and across a mapped fault that may be affecting its age (Fig. 3). Sample 03King, the closest sample to 04King, also has a regionally young AFT age of ca. 34.2 Ma, adding credence to the possibility of an unmapped structure in the region. Alternatively, the young AFT age of sample 03King may be due to fluid flow along the unit contact with the metamorphic rocks (Fig. 3). To test whether proximity to and differential unroofing along the CMF might be controlling these AFT age-elevation patterns, we collected eight samples along a S-N transect approaching the CMF. The AFT cooling ages (Figs. 7 and 10C) have an apparent pattern of younging toward the fault. However, these samples also decrease in elevation moving toward the CMF, and the correlation between age and elevation along the same transect is slightly stronger (Fig. 10D), making it more likely that block exhumation along a vertical trajectory (reflected in age-elevation relationships) is the primary control on these cooling-age patterns rather than proximity to the CMF (a simple quantitative version of this comparison is sketched at the end of this section). Sample 01Devil produced an AFT age of ca. 40.1 Ma, which is relatively young compared to the full Talkeetna Mountains AFT data set. This sample is the westernmost and northernmost sample in our Talkeetna Mountains AFT data set. Because sample 01Devil is a single cooling age, it is difficult to weigh its significance, but we report it for completeness.

AFT South of the CMF

Sample 05King has an AFT cooling age of ca. 31.2 Ma (Figs. 7 and 10A). This result is consistent with regional AFT cooling ages from Little and Naeser (1989) and Parry et al. (2001), who document distinctly younger cooling ages (ca. 21-32 Ma) south of the CMF. From this AFT cooling-age pattern, we infer that the north side of the CMF did not have a significant vertical component of displacement during the Eocene-early Oligocene. This is consistent with mapping studies that infer chiefly Neogene vertical displacement across the fault (Grantz, 1966; Fuchs, 1980; Trop et al., 2003).
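Returning to the transect comparison above (Figs. 10C and 10D), the elevation-versus-fault-distance question can be made quantitative with two simple least-squares fits, as sketched below (illustrative only: the transect values are hypothetical stand-ins, not our measured samples, and an ordinary coefficient of determination stands in for the comparison behind Figures 10C and 10D):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for an ordinary least-squares line y(x)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

# Hypothetical S-N transect toward the CMF: AFT ages (Ma), sample
# elevations (m), and map distances to the fault trace (km).
age  = np.array([63.0, 60.0, 55.0, 54.0, 50.0, 46.0, 45.0, 44.0])
elev = np.array([2100., 2000., 1800., 1750., 1600., 1450., 1400., 1350.])
dist = np.array([40.0, 37.0, 25.0, 31.0, 18.0, 10.0, 14.0, 5.0])

# The stronger correlation is taken to identify the dominant control on the
# cooling-age pattern (elevation versus proximity to the fault).
print(f"R2(age ~ elevation): {r_squared(elev, age):.3f}")
print(f"R2(age ~ distance):  {r_squared(dist, age):.3f}")
```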
Possible Thermal Resetting of AFT Cooling Ages

To test for thermal resetting due to diking, AFT analyses were performed on samples 01Sov and 02Sov (Fig. 7 and Table 2), which are trondhjemite rocks collected at a minimum distance of ~5 m and a maximum of ~50 m from Eocene volcanic intrusions at the northeastern edge of the pluton (Fig. 6; P5 and P6). The AFT cooling ages of samples 01Sov and 02Sov (ca. 49 Ma and ca. 48 Ma, respectively) are older than the volcanic ages of the proximal dikes (ca. 46 Ma to ca. 42 Ma) (Fig. 7 and Table 1), providing evidence that the rocks of the trondhjemite pluton were not thermally reset during dike emplacement. The customary large uncertainty on the AFT ages of samples 01Sov and 02Sov (±ca. 9 Ma; Table 3) does overlap with the 40Ar/39Ar dike ages (uncertainties <±ca. 1 Ma); hence, additional approaches (HeFTy kinetic modeling and age-elevation patterns) are required to further support an interpretation of no thermal resetting, as discussed below. Throughout the trondhjemite pluton, outcrops show variable evidence for alteration by hydrothermal fluids (Fig. 6; P3) that were likely injected during the period of peak Eocene magmatism (Cole et al., 2006). Previous studies have demonstrated that the heat effects from hydrothermal fluids can result in thermal resetting of the AFT system (Roden and Miller, 1989). Samples 06Sov, 10Sov, and 12Sov have AFT cooling ages of ca. 42 Ma, ca. 45 Ma, and ca. 51 Ma, respectively, and are distinct outliers from the Mount Sovereign to Sheep Mountain AFT age-elevation relationship (Fig. 10B), suggesting they have been thermally reset. HeFTy thermal models of these three outlier samples show more rapid Paleocene-Eocene rock-cooling rates (up to ~30 °C/m.y.) compared to the other samples in the AFT age-elevation profile (~16 °C/m.y.) (see Fig. S2 [footnote 1]), indicating the two sample sets have experienced different thermal histories. AHe cooling ages are invariant with elevation along the Mount Sovereign to Sheep Mountain vertical profile, with ages generally ca. 45 Ma, adding support to this being a period of peak hydrothermal fluid injection and thermal resetting. We test this inference with HeFTy kinetic modeling and find that the thermal models provide better fits if reheating is allowed (Fig. S2).

Apatite (U-Th)/He Thermochronology Results

Twelve AHe sample cooling ages were produced for this study on intrusive rock samples collected north of the CMF. Sample average ages and 1σ uncertainties are reported (uncertainties range from ±ca. 1 Ma to ca. 11 Ma; Table 4) and were calculated following the techniques outlined in the Methods section. Sample 05Talk yielded a large spread of individual apatite grain cooling ages (Table 4) that did not meet the parameters to calculate an average age, and it is therefore excluded from our interpretations. AHe sample cooling ages range from ca. 45.3 Ma to ca. 10.5 Ma. These new AHe results were compiled with published ages from Hoffman and Armstrong (2006) and Hacker et al. (2011) and, combined, show a distinct pattern of ages younging approaching the CMF (Fig. 11A). Sample 01King, located directly north of the continuous strand of the CMF, is ca. 42.4 Ma (Figs. 7 and 11C). Sample 02King, located north of a splay off of the CMF, is ca. 12.1 Ma. There is no relationship between AHe cooling age and elevation (Fig. 11B).
Eight samples along a S-N transect approaching the CMF show a distinct pattern of younging toward the fault (Fig. 11C); young cooling ages near the CMF suggest there has been exhumation along this structure since at least the Miocene. There is only a weak relationship between age and elevation along the same transect (Fig. 11D), adding support to the notion that vertical displacement along the CMF is controlling AHe cooling-age patterns.

Figure 11. (A) All apatite (U-Th)/He (AHe) cooling ages existing in our study area versus distance from the Castle Mountain fault (CMF). Orange circles are ages analyzed for this study, and blue dots are previously published ages (Hoffman and Armstrong, 2006; Hacker et al., 2011). Sample 01King, a distinct outlier from the data set, is north of the continuous strand of the CMF but south of the northernmost strand of the CMF, suggesting that the northernmost strand is the active strand of the CMF. (B) There is no relationship between AHe age and elevation. (C) AHe age versus distance from the CMF along a S-N transect approaching the CMF shows a pattern of ages getting younger approaching the northernmost (and inferred active) strand of the CMF. (D) AHe age versus elevation along the same transect shows no relationship.

The AHe age from sample 01King, located directly north of the continuous strand of the CMF, is 42.4 ± 6.9 Ma (Fig. 7). The AHe age from sample 02King, located north of the northern splay of the fault, is 12.1 ± 0.8 Ma. This recognizably younger age is evidence that the northern splay has been the active strand of the CMF since at least the Miocene. The compilation of AHe cooling ages (Fig. 12) indicates parts of the southern Talkeetna Mountains cooled below the ~80 °C isotherm during the Eocene. Samples 01Sov, 06Sov, and 13Sov have AHe cooling ages that may reflect thermal resetting from hydrothermal fluids during Eocene magmatism (Fig. 10B), based on their invariant age-elevation relationship and regional evidence of hydrothermal fluid injection at the time. Overall, the AHe data support a southern Talkeetna Mountains rock-cooling event during the Oligocene-Miocene. Support for this interpretation includes deposition of Miocene fluvial strata in the footwall of the CMF in the southern Talkeetna Mountains (Bristol et al., 2017). This is consistent with our interpretation that exhumation was driven by north-side-up vertical displacement along the CMF. In the context of regional basin analysis and the magmatic record, a compilation of the geochronology and thermochronology data from the southern Talkeetna Mountains supports Paleocene-Eocene and Oligocene-Miocene exhumation events with evidence of spatially limited thermal resetting related to hydrothermal fluid injection. The occurrence of spatially variable resetting should be taken into account by future thermochronology studies in this region (detrital studies in particular) because of the difficulty in distinguishing monotonic cooling ages from thermally reset ages without field evidence of outcrop alteration from hydrothermal fluids, age-elevation relationships, and structural control.

Cooling and Magmatic Patterns in Time and Space

Previously published and our new whole-rock 40Ar/39Ar, non-reset AFT, and AHe cooling ages confined to north of the CMF and south of the Talkeetna fault are compiled into a normalized probability density plot (Fig. 12; Silberman and Grantz, 1984; Parry et al., 2001; Cole et al., 2006; Hoffman and Armstrong, 2006; Oswald, 2006; Hacker et al., 2011; Bleick et al., 2012; Arkle et al., 2013).
AFT cooling ages that we interpret to be thermally reset or to reflect displacement along unmapped structures (06Sov, 10Sov, 12Sov, and 03King) are excluded. The whole-rock volcanic peak is younger than the AFT peak, suggesting that the AFT cooling ages represent exhumation and not thermal resetting. Our interpretation that the AFT cooling ages are exhumation-related is consistent with field observations and geochronology constraints (from the Matanuska Valley region) that document rapid accumulation of a >2-km-thick succession of coeval strata (Kortyna et al., 2013; Sunderlin et al., 2014; Trop et al., 2015). When plotted versus latitude and longitude, whole-rock 40Ar/39Ar data in the southern Talkeetna Mountains show no S-N or W-E age progressions (Figs. 13A and 13B). When plotted versus latitude and longitude, AFT cooling ages in the southern Talkeetna Mountains likewise show no S-N or W-E age progressions (Figs. 13A and 13B). There is also no clear evidence of regional resetting of AFT or AHe cooling ages due to Eocene volcanism (Figs. 12 and 13). Similarly, region-wide, Paleocene-Eocene exhumation-related cooling ages and volcanic ages across southern Alaska have no apparent S-N or W-E age progressions: from southwest Alaska (O'Sullivan et al., 2010), the Revelation Mountains region (Reed and Lanphere, 1972), the Tordrillo Mountains (Benowitz et al., 2012a), the Kichatna Mountains (Ward et al., 2012), the Kenai Mountains (Valentino et al., 2016), the Foraker Glacier region (Reed and Lanphere, 1972; Cole and Layer, 2002), the Susitna Basin (Stanley et al., 2014), and the Cantwell volcanics (Cole et al., 1999), among other regions (Fig. 14).

Figure 12 caption (fragment): Yellow bar to the right represents the inflection point from our AFT age-elevation profile (Fig. 10B). Light-blue bar to the left represents the approximate timing of initiation of Yakutat microplate flat-slab subduction.

HeFTy Thermal Models

To construct a detailed thermal history of the region, inverse thermal models were produced for all our samples using all available U-Pb zircon, 40Ar/39Ar, AFT, and AHe age constraints. Representative thermal models are shown in Figures 15 and 16 (for all thermal models, see Fig. S2 [footnote 1]). The thermal models display some spatially and elevation-controlled variations but in general record three main rock-cooling events: (1) the highest elevation sample from the Mount Sovereign vertical profile (13Sov) records relatively slow rock cooling from the Cretaceous to present (~1-4 °C/m.y.) (Figs. 15 and 16); (2) thermal models of lower-elevation samples (01Sov and 03Sov) show relatively slow but not well-constrained cooling until ca. 60 Ma (~1-4 °C/m.y.), when the cooling rate significantly increases for a period of ~20 m.y. (>16 °C/m.y.); and (3) slow cooling follows, with relative tectonic quiescence from ca. 45 Ma to present (~1 °C/m.y.). Three samples near the CMF (01Trop, 02King, and 04King) show a second relatively rapid cooling event (~4 °C/m.y.) that initiated in the Miocene (Fig. 15), although the exact timing of onset is not well constrained by our current cooling-age data set.

Southern Talkeetna Mountains Paleogeothermal Gradient

Geothermal gradient constraints must be known to quantify the total amount of exhumation. We have no quantitative measurement of paleogeothermal gradients for the Talkeetna Mountains.
Therefore, we use our exhumation- and cooling-rate calculations along with petrological observations and other regional geothermal gradient constraints to assess qualitative variations in the geothermal gradient through time, allowing us to make inferences about the amount of southern Talkeetna Mountains exhumation. Overall, documenting variations in the regional geothermal gradient through time is integral to understanding the Cenozoic tectonic evolution of southern Alaska.

Apatite Fission-Track Cooling Age-Elevation Relationships

We use age-elevation relationships to calculate variations in the rate of exhumation through time (Fig. 10); we interpret breaks in slope as reflecting a change in the exhumation rate (Fitzgerald et al., 1993). Rock-cooling rates and estimated exhumation rates are also calculated using results from the HeFTy kinetic modeling program (Ketcham, 2005). Seventeen samples north of the CMF have Paleocene-Eocene AFT cooling ages (ca. 63 Ma to ca. 44 Ma) (Figs. 7 and 10A; Tables 2 and 3). When these ages are compiled with previously published AFT cooling ages in the region (Parry et al., 2001; Bleick et al., 2012), there is a complex age-elevation relationship that can be divided into two different cooling domains (Fig. 10A). Samples at lower elevations (<1500 m) do not have an age-elevation relationship (Fig. 10A). AFT cooling ages at higher elevations (>1500 m) that are within the trondhjemite pluton along the Mount Sovereign to Sheep Mountain vertical profile have a positive age-elevation relationship (Fig. 10B). As discussed above, a few of the samples along the AFT vertical profile show evidence for spatially variable thermal resetting that we infer is related to hydrothermal fluid injection. The Mount Sovereign to Sheep Mountain AFT vertical profile shows three distinct periods of exhumation: (1) relatively slow Late Cretaceous-early Paleocene (ca. 74 to ca. 60 Ma) exhumation at a rate of ~15 m/m.y.; (2) a break in slope at ca. 60-58 Ma indicating relatively rapid exhumation at a maximum rate of ~188 m/m.y.; and (3) a second break in slope at ca. 56 Ma indicating a less rapid exhumation rate of ~65 m/m.y. Alternatively, the AFT cooling ages have large uncertainty bars that allow only one inflection point at ca. 60-58 Ma with a more moderate exhumation rate. We do not favor this interpretation because it is not supported by the HeFTy thermal models, which show a significant increase in rock-cooling rates at ca. 60 Ma and an inferred increase in exhumation rates at this time (Fig. 15). These results suggest that rapid cooling and inferred exhumation in the Talkeetna Mountains began immediately after the thermal resetting and subsequent cooling of K-feldspar in sample 01Sov (ca. 61 Ma) and continued during the main episode of dike emplacement from ca. 50-40 Ma (Figs. 7 and 12). AFT cooling ages from samples at lower elevations (<1500 m) and closer to the CMF do not have an age-elevation relationship (Fig. 10A). The two most likely explanations for the lack of an AFT age-elevation relationship at lower elevations are potential differential erosion after closure of the AFT system (possibly related to Cenozoic deformation or late Cenozoic glaciation) (Williams et al., 1989) and the perturbation of isotherms at lower elevations around the trondhjemite pluton by Paleocene-Eocene volcanism, resulting in possibly erratic rock-cooling profiles (Reiners, 2007).
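Both rate estimates used in this section reduce to simple arithmetic, sketched below (illustrative only: the profile ages and elevations are hypothetical stand-ins, while the ~16 °C/m.y. cooling rate and the 20 versus 80 °C/km gradients anticipate the values compared in the discussion of the paleogeothermal gradient):

```python
import numpy as np

# (1) Exhumation rate from an age-elevation profile: the slope of elevation
# (m) versus AFT age (m.y.) approximates the exhumation rate in m/m.y.
ages_ma = np.array([60.0, 59.0, 58.5, 58.0, 57.0])      # hypothetical AFT ages
elev_m = np.array([1500., 1700., 1800., 1900., 2100.])  # hypothetical elevations
slope_m_per_my, _ = np.polyfit(ages_ma, elev_m, 1)
print(f"apparent exhumation rate: {abs(slope_m_per_my):.0f} m/m.y.")

# (2) Converting a HeFTy cooling rate to an exhumation rate requires an
# assumed geothermal gradient: exhumation = cooling rate / gradient.
cooling_c_per_my = 16.0
for grad_c_per_km in (20.0, 80.0):
    rate = cooling_c_per_my / grad_c_per_km * 1000.0  # km/m.y. -> m/m.y.
    print(f"gradient {grad_c_per_km:.0f} C/km -> {rate:.0f} m/m.y.")
```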
AFT cooling ages south of the CMF are distinctly younger than those to the north (Arkle et al., 2013), which is counter to what is expected if the CMF experienced significant Eocene-Oligocene north-side-up displacement and is the primary control on Cenozoic AFT cooling-age patterns. There are only four AFT cooling ages available in the Talkeetna Mountains region south of the CMF (Fig. 7), including sample 05King from this study (ca. 31 Ma), one sample located in the fault zone from Parry et al. (2001) (ca. 31 Ma), and two samples from Little and Naeser (1989) (ca. 24 Ma and ca. 21 Ma). The lack of data makes it difficult to draw interpretations from these cooling ages. However, Arkle et al. (2013) attribute these regionally younger ages to exhumation in the Chugach syntaxis driven predominantly by underplating from the Yakutat flat-slab since the Oligocene. According to their model, the exhumation effects driven by underplating die out north of the CMF, which may explain why ages south of the fault are younger than ages north of the CMF. The AFT ages south of the CMF are also located near the BRFS, and one sample is located south of a BRFS strand. Therefore, it is possible that the ages were affected by the Neogene contractional reactivation of the BRFS (Little and Naeser, 1989).

HeFTy Thermal Modeling Constraints on Rock-Cooling Events in the Talkeetna Mountains

Rock-cooling paths were constructed for all our samples using the HeFTy kinetic modeling program (Ketcham, 2005) and all available thermochronology constraints. These time-temperature paths highlight the approximate timing and duration of multiple rock-cooling events. Representative HeFTy thermal models are shown in Figure 15. HeFTy thermal models for all samples can be found in Figure S2 (footnote 1). Rock-cooling paths for samples 13Sov, 03Sov, and 01Sov show cooling patterns that vary with elevation (Figs. 15 and 16). The highest elevation sample (13Sov) has a much slower cooling rate relative to the lower-elevation samples. Sample 03Sov shows a cooling rate that significantly increases after ca. 60 Ma. Sample 01Sov also shows a similarly increased cooling rate after the low-temperature K-feldspar domain was thermally reset and subsequently cooled at ca. 61 Ma (Figs. 8 and 9). All our HeFTy thermal models show distinct rock-cooling patterns that vary both spatially and with elevation. To highlight these variations, we divided the thermal models into three groups with similar cooling histories (Fig. 17): (1) the two highest elevation samples, collected from the summits of Mount Sovereign and Sheep Mountain (13Sov and 07Talk) (Fig. 7); (2) seven samples from the interior Talkeetna Mountains along the two vertical profiles (01Sov, 02Sov, 03Sov, 08Sov, 11Sov, 05Talk, and 13Talk); and (3) three samples near the CMF (01Trop, 02King, and 04King). The HeFTy thermal models suggest four distinct rock-cooling events in the southern Talkeetna Mountains topographic development history: (1) the highest elevation samples record relatively slow rock cooling from the Cretaceous to present (~1-3 °C/m.y.) (Fig. 17); mapping studies in the southern Talkeetna Mountains document Cretaceous crustal shortening (Csejtey et al., 1978; Fuchs, 1980), suggesting that this rock cooling was related to exhumation. (2) The interior Talkeetna Mountains samples collected along vertical profiles and the samples near the CMF both record a rapid rock-cooling event (>16 °C/m.y.) initiating at ca. 60 Ma (Fig. 17).
The HeFTy models suggest that this elevated cooling rate persisted for ~20 m.y. This is consistent with the inferred timing of a prolonged period of Paleocene-Eocene volcanism (Cole et al., 2006) and our constrained onset of rapid exhumation at ca. 60-58 Ma (Fig. 10B) following the thermal resetting of K-feldspar in sample 01Sov at ca. 61 Ma (Figs. 8 and 9). (3) The rapid rock-cooling event is followed by a period of relative middle Eocene-Miocene tectonic quiescence with a rock-cooling rate of ~1 °C/m.y. More thermochronology data are needed to determine the exact duration of this rock-cooling event, which is unclear from our data set. (4) The samples collected near the CMF record a second, more rapid rock-cooling event (~4 °C/m.y.) during the Miocene, likely in response to vertical displacement and exhumation along the CMF. More low-temperature thermochronology data are needed near the CMF to define the timing of initiation of this rock-cooling event. Our AFT age-elevation relationship suggests a maximum exhumation rate of ~200 m/m.y. (Fig. 10B). When the maximum sustained cooling rate from our HeFTy thermal models (~16 °C/m.y.) is converted to an exhumation rate (800 m/m.y.) using a normal continental geothermal gradient of 20 °C/km, the two individual calculations disagree. Alternatively, when the ~16 °C/m.y. cooling rate is converted to an exhumation rate using a much higher geothermal gradient of ~80 °C/km, the two exhumation calculations agree well (200 m/m.y.). When geothermal gradients are obtained from the time-averaged cooling rates between ca. 60 Ma and ca. 45 Ma from five individual HeFTy thermal models, the geothermal gradient is ~55 °C/km on average, indicating a nonsteady-state geothermal gradient during this time period. This is expected given the dynamic nature of slab windows and the associated upwelling of the asthenosphere (Thorkelson, 1996). Hence, thermochronology results from this study provide independent evidence for an anomalously high geothermal gradient during Paleocene-Eocene time; this evidence aligns with previous southern Alaska paleogeothermal gradient interpretations (e.g., Benowitz et al., 2012a; Finzel et al., 2016). We acknowledge that the age-elevation profile does not provide a unique rock-cooling scenario given the large uncertainty associated with our AFT ages (±ca. 4 Ma to ca. 17 Ma; Fig. 10 and Table 3). It is also possible that large amounts of tilting could explain the disconnect between the exhumation rate calculated from our AFT age-elevation relationship and the exhumation rate calculated from our HeFTy thermal models. However, vertical lithified volcanic bodies crosscut the mafic dikes that appear to intrude along exfoliation joints (Fig. 6; P1 and P2), suggesting there has not been significant tilting since the Eocene. Given this and the regional evidence for a high Paleocene-Eocene geothermal gradient across southern Alaska (O'Sullivan and Currie, 1996; Dusel-Bacon and Murphy, 2001; Benowitz et al., 2012a; Riccio et al., 2014; Finzel et al., 2016), we favor the interpretation of an elevated Paleocene-Eocene southern Talkeetna Mountains geothermal gradient averaging ~55 °C/km.

Paleocene-Eocene Paleotopography

Geophysical models of southern Alaska from Jadamec et al. (2013) predict that the modern plate configuration would result in the development of a basin in the region of the Talkeetna Mountains rather than high topography (Fig. 4). AFT cooling ages from the Talkeetna Mountains are predominantly Paleocene-Eocene (Fig. 12).
These results partly reconcile the models of Jadamec et al. (2013) by suggesting that the southern Talkeetna Mountains have a significant paleotopography component that formed prior to the modern Yakutat flat-slab plate configuration (Fig. 2B). Adding support to this interpretation, Eocene dikes intrude subhorizontally into the Mount Sovereign trondhjemite pluton along exfoliation joints and are crosscut by vertical dikes and lithified volcanic bodies that display a visible lack of tilting (Fig. 6; P1 and P2). This suggests that southern Talkeetna Mountains unroofing initiated before dike emplacement, consistent with previous studies in the region (e.g., Trop, 2008), and that the region was uplifted as a uniform crustal block. We speculate that the prolonged episode of dike emplacement, along with possible magmatic underplating during slab-window magmatism (Li et al., 2012a, 2012b), could have thickened the crust and may in part explain the sustained high topography into modern times. We interpret these overall findings to suggest that Paleocene-Eocene topographic development across the southern Talkeetna Mountains is related to the creation of a Paleocene-Eocene slab window (Cole et al., 2006; Benowitz et al., 2012a).

Structural Control on the Paleocene-Eocene Topographic Development of the Southern Talkeetna Mountains

The structures involved in accommodating southern Talkeetna Mountains exhumation related to an inferred Paleocene-Eocene slab window are not well constrained (Fig. 3). Faults appear to partially bound the edges of the trondhjemite pluton (Fig. 3) (Wilson et al., 2015). Our AFT age-elevation relationship suggests that there are two separate rock-cooling domains defined by elevation (Fig. 10A), based on the relatively well-defined age-elevation relationship within the trondhjemite pluton. Sample 04King is located across a fault on the western edge of the pluton, away from the Mount Sovereign vertical-profile sample cluster (Fig. 3), and is an outlier from our AFT age-elevation relationship (Fig. 10B). Similarly, samples 13Talk and 14Talk are located near a mapped fault (Fig. 3), and their AFT cooling ages fall in the cooling domain that does not display an age-elevation relationship (Fig. 10A). This is evidence that at least the structure on the western boundary of the trondhjemite pluton was active and/or a conduit for hydrothermal fluids in the Paleocene-Eocene and supports the notion that the high-peak region, established by the trondhjemite pluton, exhumed as an independent crustal block along these structures (Fig. S6 [footnote 1]). There are also numerous mapped NW-SE-trending normal faults to the east of our study area. It is unclear whether these NW-trending normal faults were created and reactivated to allow for volcanism and exhumation driven solely by mantle processes (i.e., a slab window) (Trop et al., 2003; Cole et al., 2006) or, conversely, whether crustal extension and the creation and reactivation of structures were influenced by the hypothesized counterclockwise rotation and oroclinal bending of southern Alaska (Hillhouse and Coe, 1994) in the presence or absence of a slab window. The orientation of these structures is also consistent with transtension linked to dextral slip along the CMF (Cole et al., 2006).

The Castle Mountain Fault

The geophysical models by Jadamec et al. (2013) do not account for the existence of the CMF, which may in part explain the disparity between the predicted and actual topography (Fig. 4).
To test how southern Alaska structures control patterns of deformation, we compiled the youngest AFT cooling ages along an ~S-N transect across southern Alaska (Figs. 1 and 18A), including published data (Kveton, 1989; Parry et al., 2001; Bleick et al., 2012; Arkle et al., 2013; Frohman, 2014) and ages from this study. This compilation shows a pattern of ages changing abruptly across major faults, supporting the notion that Cenozoic deformation has been focused along these structures. However, the ages do not change as distinctly across the CMF, suggesting it has experienced less vertical displacement than other major structures along the transect. AFT ages directly to the north of the CMF are ca. 63 Ma to ca. 44 Ma (Table 2), suggesting there has been less than ~3-5 km of vertical displacement and unroofing along the CMF since the Eocene, and possibly even less exhumation considering the evidence for an elevated geothermal gradient. This is consistent with previous estimates of ~3 km of Neogene vertical slip based on mapping studies (Grantz, 1966; Fuchs, 1980). There is some regional evidence for Eocene displacement along the CMF; for example, Wishbone Formation strata south of the CMF are deformed in the footwall of the fault (Haeussler et al., 2003). If the proposed eastward-sweeping ridge subduction model is correct, a west-to-east and south-to-north progression in the timing of volcanism and exhumation across southern Alaska inboard from the BRFS would be expected. We applied whole-rock 40Ar/39Ar geochronology to volcanic rocks in the Talkeetna Mountains (Table 1) and compiled our results with previously published regional Paleocene-Eocene volcanic ages to test this prediction. We also applied AFT thermochronology to plutonic rocks in the Talkeetna Mountains (Table 2) and compiled our results with previously published regional Cenozoic cooling ages. Whole-rock 40Ar/39Ar ages in the southern Talkeetna Mountains show no overall S-N or W-E relationships, suggesting no local spatial progressions in the timing of volcanism (Fig. 13). AFT cooling ages in the southern Talkeetna Mountains also show no overall S-N or W-E relationships, suggesting no local spatial progressions in the timing of exhumation (Fig. 13). More importantly, region-wide Paleocene-Eocene exhumation-related cooling ages and volcanic ages from across southern Alaska have no apparent S-N or W-E relationships: southwest Alaska (O'Sullivan et al., 2010), the Revelation Mountains region (Reed and Lanphere, 1972), the Tordrillo Mountains (Benowitz et al., 2012a), the Kichatna Mountains (Ward et al., 2012), the Kenai Mountains (Valentino et al., 2016), the Foraker Glacier region (Reed and Lanphere, 1972; Cole and Layer, 2002), the Susitna Basin (Stanley et al., 2014), the Cantwell volcanics (Cole et al., 1999), the Jack River volcanics (Cole et al., 2007), the Talkeetna Mountains (Silberman and Grantz, 1984; Parry et al., 2001; Cole et al., 2006; Hoffman and Armstrong, 2006; Oswald, 2006; Cole et al., 2007; Bleick et al., 2012; Hacker et al., 2011), the St. Elias Mountains (Enkelmann et al., 2017), and three sites in the Yukon-Tanana Terrane (Tempelman-Kluit and Wanless, 1975; Dusel-Bacon and Murphy, 2001; Enkelmann et al., 2017) (Fig. 14). The Paleocene-Eocene cooling and volcanic ages across southern Alaska are all broadly similar, suggesting a synchronous exhumation and volcanic event that was widespread across southern Alaska and persisted for millions of years.
The apparent lack of any S-N or W-E progression in the timing of Paleocene-Eocene volcanism and exhumation across southern Alaska north of the BRFS conflicts with the proposed model of an eastward-sweeping active spreading ridge impacting the region. In the Chugach accretionary prism south of the BRFS, a diachronous thermal perturbation is evidenced by the varied ages of the near-trench Sanak-Baranof belt plutons (Fig. 1); the absence of any comparable Paleocene-Eocene spatial age pattern suggests southern Alaska was likely not influenced by sweeping ridge subduction. Oblique ridge-trench convergence does prompt an unzipping pattern, whereby slab-window geometry is triangular and the opening widens progressively as the ridge descends into the mantle; thus, spatial patterns may be more diffuse farther inboard of the trench (Dickinson and Snyder, 1979; Thorkelson, 1996; Breitsprecher and Thorkelson, 2009). However, the absence of any age progression, regardless of rate, across a >800-km-wide swath of southern Alaska makes it difficult to link a Paleocene-Eocene west-to-east sweeping ridge subduction event to the overall regional geology north of the BRFS. Therefore, a different mechanism is required to explain the regional synchronous and long-lived slab-window event recorded in southern Alaska. To reconcile this, we propose a new model for the Paleocene-Eocene tectonic configuration of southern Alaska. We suggest that a Paleocene-Eocene slab window formed subparallel to the trench (Fig. 19) and drove exhumation and volcanism synchronously across southern Alaska while also significantly increasing the regional geothermal gradient. The cause of this Paleocene-Eocene slab-window event is unclear, but Baja California provides an analog tectonic setting: a Miocene slab-window event there has been attributed to the subduction of an active spreading ridge that was parallel to the trench and led to slab detachment and the opening of a slab window subparallel to the trench (Michaud et al., 2006). Another possible mechanism for the opening of a Paleocene-Eocene slab window across southern Alaska is the subduction of an inactive bathymetric high (e.g., an aseismic ridge or seamount chain) that was part of the Kula plate, leading to slab breakoff. Our new model, in which a trench-parallel bathymetric high shut off subduction, is consistent with the lack of evidence for southern Alaska subduction-related magmatism during late Paleocene-early Eocene time (Cole et al., 2006) and with stratigraphic and/or detrital geochronologic evidence for subaerial uplift and exhumation of the formerly marine forearc region followed by subsidence and nonmarine sedimentation (e.g., Trop, 2008; Ridgway et al., 2012; Kortyna et al., 2013; Finzel et al., 2015). In our new model, southern Alaska was located ~1000 km from the Chugach accretionary prism during late Paleocene-early Eocene time, while the ca. 63 to ca. 47 Ma near-trench plutons were emplaced into the Chugach accretionary prism. The Paleocene-early Eocene margin outboard of southern Alaska was likely a transform setting characterized by dextral slip along the BRFS or along unidentified faults to the south of the BRFS.
Both regions were subsequently shuffled laterally by dextral displacement along orogen-parallel strike-slip faults, consistent with paleomagnetic data indicating that southern Alaska (WCT) and the Chugach accretionary prism were positioned hundreds of kilometers south of their current position during latest Cretaceous-Paleocene time, but still distal from each other (Bol et al., 1992; Stamatakos et al., 2001; Cowan, 2003; Garver and Davidson, 2015). Rocks making up our study area in southern Alaska were positioned at a paleolatitude ~15° to the south of their current location at ca. 80 Ma (Stamatakos et al., 2001) and, judging from paleomagnetic data (Panuska et al., 1990), were translated to near their current latitude by ca. 54-40 Ma, consistent with significant northward translation of the WCT during late Paleocene-early Eocene time (Figs. 20 and 21). Large-scale translation of the Chugach accretionary prism and Sanak-Baranof belt was likely accommodated along the BRFS and other orogen-parallel fault systems (Fig. 20). The slip history of the BRFS is prolonged and complex, with multiple episodes of displacement suggested during the Late Cretaceous-Paleogene and reactivation during the Neogene (Pavlis and Roeske, 2007). Roeske et al. (2003) proposed at least ~600-1000 km of Late Cretaceous-Eocene BRFS slip. Slip may have been partitioned onto other structures across southern Alaska (Fig. 1), such as the Castle Mountain fault, which has been suggested to accommodate ~130 km of dextral slip (Pavlis and Roeske, 2007); the Denali fault, which has an inferred ~400 km of post-Early Cretaceous dextral displacement (Lowey, 1998; Benowitz et al., 2012b); or faults within the Chugach accretionary prism with poorly understood slip histories, such as the Eagle River fault (Kochelek et al., 2011) or the Glacier Creek fault (Little, 1990).

Eocene Oroclinal Bending

Paleo-vectors of Pacific plate motion relative to the North American plate do not favor large Pacific-driven translation along the North American plate boundary, given the modern geographic configuration of North America (Fig. 21) (Doubrovine and Tarduno, 2008). However, if the southern Alaska orocline was unbent during the Paleocene-middle Eocene (Fig. 21), the paleo-vectors are more compatible with the northward translation of the near-trench plutons along the western margin of North America (Garver and Davidson, 2015). Paleomagnetic declinations of Late Cretaceous-Paleocene rocks support ~30°-50° counterclockwise rotation of southern Alaska by the late Eocene (e.g., Hillhouse and Coe, 1994; Betka et al., 2017). This oft-cited but loosely constrained model explains the curvature of regional structures and mountain ranges (e.g., the Denali fault and Alaska Range) and is known as the southern Alaska orocline (e.g., Cole et al., 2007). Given the heating of the southern Alaska thermal regime during the inferred slab-window event (Figs. 19 and 20), it is possible that oroclinal bending was facilitated in part by thermally induced weakening of the crust, making it less elastic and more deformable. A similar mechanism of oroclinal bending driven by thermal weakening has been suggested for the Pamir Mountains of Central Asia (Yin et al., 2001). As southern Alaska rotated, the angle of convergence between the plate boundary and the Pacific plate would have increased, allowing normal subduction to resume by the late Eocene (Fig. 20) (Jicha et al., 2006; Stern and Gerya, 2017).
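The effect of rotation on convergence can be illustrated with simple vector geometry: the trench-normal component of plate motion scales with the sine of the angle between the plate motion vector and the margin strike. The speed, azimuth, and strike values below are hypothetical and chosen only to show how a few tens of degrees of rotation can turn a dominantly transform margin into a convergent one.

```python
import math

def trench_normal_convergence(speed_mm_yr, plate_azimuth_deg, margin_strike_deg):
    """Trench-normal component of plate motion (mm/yr)."""
    angle = math.radians(plate_azimuth_deg - margin_strike_deg)
    return abs(speed_mm_yr * math.sin(angle))

v, az = 60.0, 340.0  # hypothetical plate speed (mm/yr) and motion azimuth (deg)
print(trench_normal_convergence(v, az, 300.0))  # margin before rotation: ~38.6
print(trench_normal_convergence(v, az, 260.0))  # after ~40 deg CCW rotation: ~59.1
```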
The re-initiation of normal subduction by the late Eocene is also supported by a study of the Hawaii-Emperor chain bend documenting a major change in Pacific plate motion by ca. 47 Ma (Torsvik et al., 2017). The new, increased convergence angle of the Pacific plate would favor subduction along the southern Alaska plate boundary. Our results and interpretations align with the proposed middle-late Eocene oroclinal bending of southern Alaska. However, the loosely constrained orocline model would benefit from higher-resolution integrated paleomagnetic and geochronologic studies across southern Alaska. Furthermore, the unbending of the southern Alaska orocline may not be necessary for a Paleocene-Eocene plate boundary configuration that favored a transform margin. Paleomagnetic studies of Eocene volcanic rocks in the southern Talkeetna Mountains (Panuska et al., 1990; Stamatakos et al., 2001) suggest the sampled rocks (and the underlying WCT north of the BRFS) were not in their current location, but rather were positioned at lower latitudes at the time of our proposed Paleocene-Eocene slab breakoff event. The southern Talkeetna Mountains are thought to have then been translated northward along structures such as the Denali fault and Tintina fault systems, which are believed to have accommodated at least ~1000 km of combined displacement since the Cretaceous (Denali fault: Lowey, 1998; Benowitz et al., 2012b; Tintina fault: Gabrielse, 1985). This paleoposition would favor transform margin tectonics given known constraints on the incoming plate convergence angle with North America (Fig. 21). Hence, if the Talkeetna Mountains were located ~1000 km to the southeast of their present location at the time of our proposed slab breakoff event, their position along western North America would still favor transform margin tectonics with or without Cenozoic oroclinal bending of southern Alaska. The Sanak-Baranof near-trench magmatic belt located south of the BRFS (Cowan, 2003) would have been ~1000 km south of the Talkeetna Mountains in this palinspastic reconstruction. Farris and Paterson (2009) proposed an alternative Kula-Resurrection ridge model that involves varying obliquity of the incoming sweeping ridge along a convergent margin that becomes progressively more curved due to oroclinal bending (Hillhouse and Coe, 1994). However, the Farris and Paterson (2009) model assumes the Sanak-Baranof belt was translated <100 km since emplacement and infers the incoming ridge was mostly parallel to the margin ca. 53-50 Ma. This model does not fit our interpretation of the initiation of a slab window across interior south-central Alaska (north of the BRFS) by ca. 60 Ma, nor the inferred large Paleocene-Eocene translation of the Sanak-Baranof belt along western North America (Cowan, 2003; Garver, 2017). Spreading-ridge geometry can also change through time (Thorkelson, 1996), leading to variations in obliquity regardless of the strike of the margin. We do not have enough geologic constraints to infer the plate configuration or the bathymetric high responsible for the Paleocene-Eocene slab window that we infer synchronously affected interior south-central Alaska. We acknowledge that a heavily modified Kula-Resurrection ridge configuration could be reconciled with large translation of the Sanak-Baranof belt and a Paleocene-Eocene slab window under southern Alaska.
Summary

Our proposed Cenozoic tectonic evolution of southern Alaska is summarized in Figure 20 and can be divided into four separate plate configurations: (1) the Late Cretaceous-early Paleocene plate configuration was characterized by normal subduction and the approach of what we infer to be a trench-parallel bathymetric high (e.g., an aseismic or active ridge or seamount chain) (Fig. 20A). (2) The middle Paleocene-middle Eocene plate configuration was characterized by a slab-window event beneath southern Alaska, related region-wide volcanism and exhumation, an increase of the regional geothermal gradient (Fig. 20B), and synorogenic sedimentation. We infer that at this time southern Alaska had a transform margin, allowing for the northward translation of the near-trench intrusions within the prism along the western margin of North America. The rotation and oroclinal bending of southern Alaska, possibly due in part to thermally induced weakening of the crust above a slab window, may have initiated by the middle Eocene. (3) The late Eocene-Oligocene plate configuration was characterized by the resumption of normal subduction (Fig. 20C) and a period of relative tectonic quiescence. (4) The Miocene-present plate configuration is characterized by the flat-slab subduction of the Yakutat microplate, displacement and mountain building along southern Alaska structures, and the lowering of the geothermal gradient due to the removal of the mantle wedge during flat-slab subduction (Fig. 20D).

■ CONCLUSIONS

40Ar/39Ar (hornblende, muscovite, biotite, K-feldspar, and whole-rock), AFT, and AHe thermochronology data indicate that the southern Talkeetna Mountains have a polyphase topographic development history (Fig. S6 [footnote 1]) that can be divided into four distinct rock-cooling events (Fig. 17): (1) slow rock cooling (~1-3 °C/m.y.) and exhumation from the Late Cretaceous-early Paleocene (ca. 74 Ma to ca. 60 Ma); (2) rapid rock cooling (>16 °C/m.y.) and exhumation initiating by the middle Paleocene (ca. 60 Ma) and persisting for ~15 m.y.; (3) a period of slow rock cooling (~1 °C/m.y.) and relative tectonic quiescence during the late Eocene-Oligocene (starting by ca. 45 Ma, with Oligocene constraints not well defined by our results); and (4) more rapid rock cooling (~4 °C/m.y.) and exhumation focused along the CMF and initiated by the Miocene (ca. 12 Ma). 40Ar/39Ar whole-rock volcanic ages and AFT cooling ages in the southern Talkeetna Mountains are predominantly Paleocene-Eocene (Fig. 12), suggesting that the Talkeetna Mountains have a component of paleotopography that formed prior to the current Yakutat flat-slab plate configuration. Our thermochronology data set also provides evidence for an elevated Paleocene-Eocene geothermal gradient (~55 °C/km on average) and suggests that the thermal effects of a slab window beneath southern Alaska drove exhumation. Miocene AHe cooling ages near the CMF (Fig. 7) suggest ~2-3 km of near-fault vertical displacement since ~11 Ma, which is consistent with the vertical offset of Paleocene-Eocene strata across the CMF and indicates that the CMF played a role in the Miocene topographic development of the Talkeetna Mountains. Miocene-Holocene vertical slip along the CMF was likely driven by the highly coupled flat-slab subduction of the Yakutat microplate (Fig. 20D). Paleocene-Eocene volcanic ages and cooling ages across southern Alaska north of the BRFS are generally similar and show no apparent S-N or W-E relationships (Figs. 13 and 14), suggesting a synchronous and widespread volcanic and exhumation event. To reconcile this, we propose a new model for the Paleocene-Eocene tectonic configuration of southern Alaska. We suggest that region-wide Paleocene-Eocene volcanism and exhumation were driven by a trench-parallel slab-window event beneath southern Alaska (Fig. 19) and that at this time southern Alaska had a transform margin, allowing for the northward translation of the near-trench plutons and the Chugach accretionary prism to their current positions. The combination of possible oroclinal bending of Alaska and a change in the Pacific plate motion vector to a more northerly direction led to a more convergent southern Alaska margin and the resumption of normal subduction during the middle-late Eocene. Finally, the Oligocene to present-day flat-slab subduction of the Yakutat microplate developed the modern tectono-thermal regime of southern Alaska.
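The cooling rates and geothermal gradient quoted in these conclusions imply approximate exhumation rates under the simplifying assumption that cooling records steady exhumation through a static gradient (advection and transient thermal effects are ignored). A minimal conversion sketch:

```python
# Exhumation rate (km/m.y.) implied by a cooling rate and a geothermal gradient.
def exhumation_rate_km_per_myr(cooling_c_per_myr, gradient_c_per_km):
    return cooling_c_per_myr / gradient_c_per_km

# Using the ~55 C/km Paleocene-Eocene gradient inferred above:
print(exhumation_rate_km_per_myr(16, 55))  # rapid phase: ~0.29 km/m.y.
print(exhumation_rate_km_per_myr(1, 55))   # quiescent phases: ~0.02 km/m.y.
```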
Normalisation and Stigmatisation of Obesity in UK Newspapers: a Visual Content Analysis

Obesity represents a major and growing global public health concern. The mass media play an important role in shaping public understandings of health, and obesity attracts much media coverage. This study offers the first content analysis of photographs illustrating UK newspaper articles about obesity. The researchers studied 119 articles and images from five major national newspapers. Researchers coded the manifest content of each image and article and used a graphical scale to estimate the body size of each image subject. Data were analysed with regard to the concepts of the normalisation and stigmatisation of obesity. Articles' descriptions of subjects' body sizes were often found to differ from coders' estimates, and subjects described as obese tended to represent the higher values of the obese BMI range, differing from the distribution of BMI values of obese adults in the UK. Researchers identified a tendency for image subjects described as overweight or obese to be depicted in stereotypical ways that could reinforce stigma. These findings are interpreted as illustrations of how newspaper portrayals of obesity may contribute to the societal normalisation and stigmatisation of obesity, two forces that threaten to harm obese individuals and undermine public health efforts to reverse trends in obesity.

Introduction

Obesity is a major, and growing, public health concern. Globally, obesity affects more than one in ten adults, and prevalence has more than doubled since 1980 [1]. In 2009, 22% of men and 24% of women in England were obese [2] (defined as a BMI greater than, or equal to, 30 [3]), as were 27% of men and 28% of women in Scotland [4]. Obesity's rapid growth and links to increased mortality and morbidity [5] have led the global obesity problem to be described as an epidemic [6]. Explanations for the causes of obesity have changed over time. Focus has recently shifted somewhat away from viewing obesity as a consequence of negative individual behaviour and towards viewing it as a social and environmental phenomenon [7,8], one that can be seen as a natural human response to overwhelming environmental influences [5,6]. In their history of the medicalisation of obesity, Chang and Christakis [9] observe that: 'Initially cast as a social parasite, the [obese] patient is later transformed into a societal victim' (p. 155). Underpinning the structurally driven obesity epidemic is the 'obesogenic environment', a combination of features of the post-industrial built, economic, political and sociocultural environments that create barriers to healthy eating and active lifestyles [10,11]. Hill and colleagues [6] suggest that: 'in pursuing the good life people have created an environment and a society that unintentionally promote weight gain and obesity, given peoples' genetic and biological make-up' (p. 20). The mass media are an important part of the sociocultural environment. Agenda-setting theory illustrates how mass media are instrumental in setting the public agenda, determining the issues to which people are exposed, and what information they receive about those issues [12].
The mass media reflect, reinforce and shape common culture, including public health-related beliefs and behaviours [12,13]. Media interest in obesity has grown quickly over the past two decades [8,14], coexisting with increases in the incidence of overweight and obesity in the UK and worldwide [15]. The increasing quantity of reporting about obesity, coupled with the ability of mass media to help define public understandings of health issues, means that the media represent an important element of the obesogenic environment. One way that mass media could influence public understandings and perceptions of obesity is by contributing to its normalisation. Normalisation of obesity is a cyclical process by which shifting public perceptions of weight lead to increases in population adiposity, exacerbating the obesity problem [16-18]. Underpinning this theory is the concept that as average body mass increases within a population, so does that population's familiarity with, and acceptance of, increased body mass. Increased acceptance may prevent individuals from recognising, and attempting to regulate, unhealthy adiposity in themselves, exacerbating the prevalence of obesity and likely increasing population mortality and morbidity [5]. Keightley and colleagues [18] describe how normalisation might condition individuals to rationalise obesity in themselves: 'It is possible that the increase in the proportion of the population who are overweight or obese may have resulted in a normalising effect on perceptions of weight and as a result, thus changing the social ideology of being fat. That is, the threshold of what has been deemed 'fat' in the community may be rising to accommodate increased average weights in the population. It is possible therefore, that through social conditioning, individuals may rationalise the extent and/or risks of obesity based on a perception of physical fitness and social conditioning of body morphology.' (Keightley, Chur-Hansen, Princi and Wittert, p. E342). Moffat [19] suggests that, despite objections by some researchers that the obesity epidemic is characterised by unhealthy moral panic and alarmism, many health professionals fear that the normalisation of obesity has generated a dangerous apathy about the health risks of obesity. In addition to media representations, potential drivers of normalisation include 'vanity sizing', the phenomenon of clothing retailers labelling their garments as smaller than they are [20], growing food portion sizes [21] and the increasing medicalisation of obesity [17,22]. A wealth of evidence highlights shifting societal perceptions of weight [23]. Overweight and obese individuals increasingly underestimate their own weight [16,24], and parents often fail to recognise obesity in their children [25,26]. For example, Johnson and colleagues' [16] comparison of two UK household surveys from 1999 and 2007 found that increases in self-reported weight over time were matched by an increase in the body-size threshold at which respondents deemed themselves to be overweight. Overweight and obese respondents to the 2007 survey were less likely to describe their weight status accurately than were their 1999 counterparts. The researchers note that this shift occurred despite public health campaigns and elevated news reporting on the topic of overweight and obesity. Duncan and colleagues [27] studied the relationship between weight perceptions and weight-related attitudes in the United States.
Their analysis of survey data found that overweight and obese respondents who misperceived their weight were much less likely to want to lose weight, and to have tried to lose weight, than those who perceived their weight accurately. This suggests misperception of weight can act as a barrier to adopting healthy lifestyles. In addition to a decline in individuals' ability to accurately assess their own weight, there is evidence that obesity stigma could undermine efforts to tackle the obesity problem [28]. Stigma is commonly defined in terms of identifying certain characteristics as deviant from widely accepted societal norms, and therefore marking individuals who embody those characteristics as undesirable outsiders [29]. Link and Phelan [29] identify four interrelated components that converge to create stigma: distinguishing and labelling human differences; linking the labelled individuals to negative stereotypes; separating labelled individuals from those without the undesirable characteristics; and finally discrimination and the resulting social disadvantage of the labelled persons. This model can be applied to the process of stigmatisation of obese individuals: people are labelled by their BMI category; obese BMI is often associated with negative stereotypes including greed, sloth and lack of discipline [30]; the obese population is often discussed as a specific societal group; and obese individuals can be subject to discrimination and disadvantage in various social spheres [31]. Obesity stigma has consequences for both psychological and physical health. Psychological consequences include depression, low self-esteem, body-image dissatisfaction, and unhealthy coping strategies. Crucially, stigma does not appear to provoke the adoption of healthier lifestyles. On the contrary, evidence suggests that stigmatisation increases binge-eating [32,33] and threatens physical health [31]. As such, it is vital that public health efforts to reduce obesity do not stigmatise it. There is some evidence that media representations might contribute to the stigmatisation of obesity [28,30], but as yet this issue has received relatively little attention. One aspect of newsprint coverage that content analyses often overlook is the images that illustrate articles. There is evidence that images can significantly influence readers' interest in, and interpretations of, news articles [34,35], and that news consumers can recall news images long after their memory of the content of the accompanying text has faded [36]. The power of news images is such that there is value in analysing them in addition to text. Gollust and colleagues [37] analysed descriptive and demographic features of images of overweight and obese individuals published in American news magazines, and Heuer and colleagues [38] performed a similar analysis of photographs accompanying American online news stories about obesity. Both of these studies found that image subjects were often depicted engaged in stereotypical behaviours, including eating junk food and watching television. Due to news images' potential to influence readers' perceptions, these stereotypical depictions may reinforce damaging stigma. Furthermore, Lewis and colleagues [39] suggest that the subtle forms of stigma reproduced in banal forms such as newspaper representations tend to be the most harmful in terms of health and social wellbeing.
Heuer and colleagues [38] suggest that the stigmatising depictions may cause blame for obesity to be attributed to obese individuals, which is directly at odds with the goals of public health policy to address obesity as a social and environmental issue. The normalisation and stigmatisation of obesity are two damaging phenomena in which mass media portrayals may play a role. In this study, we investigate how UK newspapers might contribute to each of those phenomena. We analyse the photographs used to illustrate newspaper articles about obesity with reference to the text that accompanies them to examine how articles represent obesity. Our research questions are, firstly, to what extent might newspaper images of obesity contribute to the normalisation of obesity, and secondly, how might they contribute to the stigmatisation of obesity. To answer the first research question, we analyse the differences between article authors' written descriptions of image subjects' body sizes and researchers' visual estimates of those subjects' body sizes. Visual estimation of BMI is less accurate than true physical measures, but is used routinely by doctors to diagnose obesity [40]. Disparities between these descriptions and evaluations may be important because they could cause readers to form an inaccurate impression of what range of body sizes is considered to be obese, particularly if these skewed perceptions are reinforced repeatedly over time. In answering the second research question, we analyse the occurrence of a set of potentially stigmatising and stereotyping features in images, and how the appearance of these features relates to the body size represented. To our knowledge, this is the first content analysis of UK newspapers' coverage of obesity that analyses both images and text, and the first that employs visual estimates of body size.

Method

Sample Selection and Collection

Newspapers were sampled from across the main genres of the UK national press; this typology has been used in other analyses of print media discourse to select a broad sample of newspapers with various readership profiles and political orientations [41]. Publications were chosen on the basis of having high circulation figures (www.nrs.co.uk) and indicating the inclusion of images in their database entries for articles. Keyword searches were conducted on the Nexis UK and NewsBank databases to identify articles related to obesity published between 1st January 1996 and 31st December 2010. The time period was chosen to incorporate a short period prior to the WHO's 1997 warning about the obesity epidemic [42] and the subsequent rise in newspaper reporting on obesity over the following 15 years [8]. An initial search was carried out for articles featuring the search terms "obesity", "obese", "fat nation", "fatties" or "lardy" in the headline. To determine relevant search terms, two researchers read a selection of articles about obesity and noted terms that were used commonly. The initial search retrieved 3,878 articles. The articles were manually sorted based on two initial inclusion criteria: human obesity must be the primary topic of the article, and the article must not be from the letters, television guide or television reviews sections of the publication. Following application of the inclusion criteria, 1,698 relevant articles were retained. The remaining articles were scrutinised for indications that they contained images, either in the form of references to an image in the text, or in the inclusion of image captions. Of the 1,698 relevant articles, 344 indicated that they contained images.
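For illustration, the headline keyword screen described above can be reproduced with a simple case-insensitive regular expression over article headlines; the sample headlines below are invented.

```python
import re

# The five search terms listed above, matched as whole words/phrases.
TERMS = re.compile(r"\b(obesity|obese|fat nation|fatties|lardy)\b", re.IGNORECASE)

headlines = [
    "Obesity crisis costs NHS billions",        # matches
    "New diet trend sweeps the capital",        # no match
    "Fat nation: why we keep gaining weight",   # matches
]
print([h for h in headlines if TERMS.search(h)])
```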
As the online newspaper databases used do not store images with articles, original printed copies of the articles were retrieved from the newspaper archives of the National Library of Scotland (NLS). Due to limitations of the archives, 133 of the 344 articles with images were retrieved. These 133 images were each examined, and those that were cartoons or did not feature people were excluded. The final sample comprised 119 articles and images (Table 1). In the case of articles that contained more than one image, the largest or most prominent image was used. If more than one person was pictured in the image, the most central or prominent person was used.

The Figure Rating Scale

A figure rating scale was used to assess subjects' body sizes. Figure rating scales are commonly used in studies of body image disturbance [43] and generally do not include BMI values. For this study it was necessary to use a scale that attributes a BMI value to each portrait so that body sizes observed by the coders could be assigned to BMI categories. The body image instrument developed by Pulvers and colleagues [44], which has been tested for content validity, was chosen, and BMI values ranging from 16 to 40 were applied to each portrait in increments of three BMI points based on the authors' guidance [44, p. 1642] (Fig. 1). Coders identified the portrait on the scale that most closely resembled each newspaper image, and assigned each image a rating between one and nine accordingly. To minimise the effect of pre-existing knowledge of the BMI scale, BMI values and categories were not included in the scale provided to coders. Values and categories based on World Health Organisation [3] classifications have been included in Fig. (1) for illustrative purposes.

The Coding Frame

A coding frame for recording features of the images and articles was developed. Researchers (CP, SH) examined images to create thematic categories capturing information about image subjects and the contexts in which they were photographed. Additional categories were developed to record descriptive details of articles including publication date, publication title and how the subject's body size is described in the text. While articles did not always specifically describe their image subjects' body size, such as when a stock image was used to illustrate obesity in general, coders attributed the predominant body size description used in the article to the image used to illustrate it. This approach was chosen to take into account the associations that the reader might perceive, rather than associations that the author may have intended to create. The initial coding frame was piloted with seven researchers who coded batches of images and suggested further improvements. The final coding frame included two contextual codes and eleven conceptual codes. The contextual codes comprised a unique identification code assigned to each image, and the caption associated with the image, if any. Conceptual codes comprised: body size described in article text; sex; age group; clothing; pose; body parts visible; body angle depicted; photography location; facial expression; the presence of family or others in the image; and obesity-related behaviours depicted.

Coding and Analysis

The thematic content of each image and its accompanying text was coded by CP. The body size depicted in each image was coded by four coders who assigned each image a value between one and nine using the figure rating scale.
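The scale-to-BMI mapping described above is deterministic and can be written out directly; this sketch encodes the nine portrait values (BMI 16 to 40 in steps of three) and the WHO category bins used later in the analysis.

```python
# Map a figure-rating-scale portrait number (1-9) to its BMI value, then bin
# the BMI into WHO categories (normal 18.5-24.9, overweight 25-29.9, obese 30+).
def rating_to_bmi(rating: int) -> int:
    assert 1 <= rating <= 9
    return 16 + 3 * (rating - 1)  # 16, 19, 22, 25, 28, 31, 34, 37, 40

def who_category(bmi: float) -> str:
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal range"
    if bmi < 30:
        return "overweight"
    return "obese"

for r in range(1, 10):
    print(r, rating_to_bmi(r), who_category(rating_to_bmi(r)))
# Portraits 6-9 (BMI 31-40) fall in the obese category, as used in the Results.
```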
Using four coders ensured that any systematic coding biases could be identified. Discrepancies between coders' evaluations also allowed researchers to identify images that were posed in such a way that parts of the body were obscured, making reasonable estimation of body size difficult. Images that produced significant disagreement between coders were not coded. The coded images were assigned BMI categories based on WHO classifications [1]: a BMI between 18.5 and 24.9 was considered 'normal range', 25-29.9 'overweight' and 30+ 'obese'. Data from completed coding frames were entered into SPSS 15. A key part of the analysis was identifying the degree to which articles' written descriptions of subjects' body sizes agreed with coders' evaluations of those body sizes. Any article in which the written description of the subject differed from coders' evaluations could be interpreted as misrepresenting body size, and if a large proportion of articles in the sample were found to be misrepresentative, this might indicate a trend of misrepresentation of body size in newsprint coverage of obesity. Fleiss' Kappa was used to measure inter-rater agreement between coders' ratings of image subjects' BMI categories, and Cohen's Kappa was used to measure agreement between article authors' written descriptions and coders' visual evaluations.

Sample Characteristics

The sample comprised 119 images from articles published between 1998 and 2010 (Table 1). Almost half of subjects were male (n=53) and just over half female (n=64). The sex of two subjects could not be determined. A third (n=39) of subjects were assessed to be young children (≤12 years), a tenth (n=12) teenagers (13-18 years), and half (n=58) adults (≥19 years). The age groups of ten subjects could not be determined. Almost two thirds (n=74) of subjects were pictured alone, and a third (n=45) with others. Two thirds (n=79) of subjects were dressed in casual clothes, 17 were smartly dressed and three were depicted as untidy. Five subjects wore clothing associated with being a medical patient, while a tenth (n=14) of subjects were partially clothed (Table 2).

Subject Behaviours

Subjects' obesity-related behaviours were recorded. Five were pictured watching television, and 28 were pictured with food, often junk food. Subjects' poses were also coded. A quarter (n=29) were sitting or reclining, six were engaged in exercise and the remaining 82 (68.9%) were standing or walking. Of those subjects with visible facial expressions, 37 (45.1%) were happy, 10 unhappy and 35 (42.7%) neutral (Table 2).

Varying Descriptions of Body Size

Eighty-three articles described subjects' body sizes in the article text. Ten subjects were described as 'normal' (including 'healthy' and 'slim'), 13 as overweight and 60 as obese. Coders assessed the body sizes of 105 (88.2%) subjects using the figure rating scale. Fourteen were not coded because they were either too small or too awkwardly posed to be evaluated reliably, as highlighted by a lack of agreement between coders. Of the subjects coded, seven were judged to be in the 'normal' weight range (BMI 18.50-24.99), 13 overweight (BMI 25.00-29.99) and 85 obese (BMI 30.00+). Of the seven images coded as normal weight, four were of individuals who were once obese but had lost weight, two were from articles about exercise classes in schools, and one was from a story about a trend of dieting among girls aged between 11 and 16.
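The two agreement statistics named above can be computed as follows; the rating matrices here are invented placeholders rather than study data, and the category coding (0 = normal, 1 = overweight, 2 = obese) is only for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Four coders' BMI-category codes for six images (rows = images).
ratings = np.array([
    [2, 2, 2, 2],
    [1, 1, 2, 1],
    [2, 2, 2, 2],
    [0, 0, 0, 1],
    [2, 1, 2, 2],
    [2, 2, 2, 2],
])
table, _ = aggregate_raters(ratings)          # images x categories count table
print("Fleiss' kappa:", fleiss_kappa(table))  # inter-coder agreement

# Agreement between article descriptions and consensus coder categories.
described = [2, 1, 2, 0, 1, 2]
coded = [2, 1, 2, 0, 2, 2]
print("Cohen's kappa:", cohen_kappa_score(described, coded))
```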
A Fleiss' Kappa test of agreement on BMI category between the four coders returned a Kappa of 0.617, which can be interpreted as substantial agreement [45]. Articles' descriptions of body sizes were then compared with coders' estimates of those subjects' body sizes. A Cohen's Kappa test of agreement returned a result of 0.361, which can be interpreted as fair agreement [45]. Table 3 provides an overview of the lack of agreement between descriptions and coders' estimates. Of the eight subjects estimated by coders to be overweight, two were described as overweight and the remaining six as normal. Of the 64 subjects coded as obese, one was described as normal range, 10 as overweight and 53 as obese. Table 4 details the distribution of the BMI values of the 53 subjects that were both described as 'obese' in the article text and coded as obese. On the figure rating scale (Fig. 1), the obese category is represented by portraits 6, 7, 8 and 9, representing BMI values 31, 34, 37 and 40 respectively. Table 4 demonstrates that BMI values were not evenly distributed between subjects described by articles as being obese. Subjects tended to represent higher BMI values within the obese range, and the most commonly represented BMI value was 40.

Relationships Between Body Size and Other Characteristics

Researchers recorded the angle from which each subject was photographed and the visibility of each subject's face. The 10 subjects described as normal weight range were all pictured with their faces visible and facing the camera. Of the 37 subjects shown without their faces visible, five were described as overweight and 28 as obese (Table 2). Subjects described as overweight or obese were depicted as untidy, casually dressed, wearing clothing associated with being a medical patient, or partially clothed more frequently than those described as 'normal' weight (Table 2). Subjects described as overweight or obese had unhappy expressions more commonly than did those described as normal weight (Table 2). Only subjects described as obese were pictured engaged in activities associated with sedentary lifestyles (n=5), and they were more commonly photographed eating (n=19) than were those described as being of other body sizes. No subjects described as being of normal weight were untidy, wearing medical clothing, pictured with unhappy or obscured facial expressions, engaged in sedentary activities or eating (Table 2).

Discussion

The findings help to illustrate two mechanisms by which newspapers may contribute to the normalisation of obesity. Firstly, we identified statistically significant disparity between the articles' descriptions and coders' evaluations of subjects' body sizes. Subjects were frequently of higher BMI categories than described in the accompanying text, suggesting that journalists may have a tendency to underestimate body sizes. Secondly, we showed that BMI is neither evenly nor normally distributed between subjects described by articles as obese: nearly three quarters of these subjects represented BMI values of 37 or higher, and nearly one third represented a BMI of 40, often categorised as 'morbidly obese' [46]. This distribution suggests that newspapers tend to use images of relatively extreme obesity to illustrate articles about obesity. In addition, the negatively skewed BMI distribution within obese subjects in the sample differs starkly from the positively skewed distribution of BMI values within the obese population of the UK [47].
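The skewness contrast described in the last sentence can be checked numerically. The counts below are hypothetical but chosen to be consistent with the text (53 described-obese subjects, roughly three quarters at BMI 37 or higher and about one third at BMI 40):

```python
import numpy as np
from scipy.stats import skew

bmi_values = [31, 34, 37, 40]   # obese portraits on the figure rating scale
counts = [6, 8, 22, 17]         # hypothetical Table 4-style counts (sum = 53)

sample = np.repeat(bmi_values, counts)
print(skew(sample))  # negative: mass is piled at the top of the obese range
```

A positively skewed population distribution, by contrast, has its mass piled at the low end of the obese range, which is the pattern reported for obese adults in the UK [47].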
These findings are not, in isolation, evidence of the normalisation of obesity. However, when considered in light of the power of news images to influence readers' perceptions [34,35], our findings illustrate how newsprint misrepresentations may play a role in reinforcing and exacerbating misconceptions about body size. If the trends identified in this study are extant in wider mass media reporting on obesity, they may play an important role in determining societal perceptions of obesity, and therefore in driving the normalisation of obesity. Normalisation is important because it may prevent overweight and obese individuals from adopting healthy lifestyles, and wider society from embracing legislative solutions to obesity [17,18]. In addition to normalisation, signs of stigmatisation were identified. The findings echoed those of previous research [37,38], highlighting a tendency for newspaper photographs of overweight and obese individuals to include negative stereotypes that may reproduce weight stigma. Compared with subjects described as normal weight, subjects illustrating overweight and obesity were more frequently depicted with unhappy or neutral facial expressions, obscured heads or faces, and eating food, often junk food. Unhappy or neutral facial expressions may stigmatise overweight and obese individuals as unhappy or deserving of pity. Excluding subjects' heads or faces, while likely intended to protect the subject's privacy, may serve to dehumanise overweight and obese people. Depicting subjects eating food, while not an inherently unhealthy behaviour in itself, may serve to focus readers' attention on individual overeating as a driver of obesity to the exclusion of other drivers, which could reinforce the stereotype of the obese individual as lacking self-control, and undermine recognition of the social and environmental drivers of obesity. These trends could be harmful if found in wider mass media coverage of obesity, serving to reproduce negative stereotypes of obesity, leading to further prejudice, discrimination and damage to psychological and physical health [28]. Certain limitations of the research should be taken into account. Firstly, compromises were unavoidable in choosing the coding instrument. Figure rating scales are predominantly used to study body image perception, not for evaluating BMI. Furthermore, visual estimation is a much less reliable measure of BMI than physical measurement. Despite this, visual estimation of BMI is used routinely by doctors, not necessarily with the aid of graphical scales, to estimate patients' BMI [40]. In a blind study of cardiology doctors' visual estimations of BMI, Husin and colleagues [40] found that 81% of obese patients were correctly estimated to be obese, with the remaining obese patients estimated to be overweight. Additionally, the scale used was initially designed for measuring body image perception in African Americans, while the majority of the image subjects in our sample were Caucasian, and body composition is known to vary by ethnicity [48]. While acknowledging the compromises made in choosing a scale, we are confident that the instrument represented a robust tool for a relatively novel research design. The use of a team of coders blind-coding each image allowed individual systematic coding biases to be identified and eliminated.
Images that were difficult to code due to their composition or the subject's pose were identified by substantial disagreement between coders, and removed accordingly, and a Fleiss' Kappa test of inter-rater agreement indicated substantial agreement on the remaining images. Any uniform bias among the coders could not be detected. However, if any uniform bias existed, Husin and colleagues' [40] findings suggest that coders were likely to underestimate subjects' BMI values. If this were so, it would logically follow that the disparities between article text descriptions and image subjects' true BMI categories were greater than our findings suggest, which would strengthen the conclusion that newsprint representations misrepresent the range of body sizes classed as obese. The second limitation of the study is its sample size. Inconsistencies in data about images in online newspaper article databases and the incompleteness of the library archive meant that the final sample of 119 articles and images was smaller than we anticipated. As a result, the trends identified in the sample cannot necessarily be generalised to wider newsprint coverage. In addition, the sample size limited our ability to analyse how variables such as publication genre and publication date related to articles' representations of obesity. Inconsistencies and incompleteness in the database and archive may also have produced the variation in the number of articles published in different publications. For example, the relatively high frequency of illustrated articles about obesity in the Mirror & Sunday Mirror could result from between-publication variations in the way that specific elements of articles are submitted to the database. However, there is no reason to believe that these articles and images were in any way atypical. In addition, due to the disproportionately powerful influence of news images compared to that of article text [34-36], it seems reasonable to suggest that the images analysed may have influenced readers' perceptions more than would text-only articles. The third limitation of the study is inherent to content analysis: one can only describe the content of material, and cannot provide insight into its creators' motives or intentions. This is particularly relevant to newspaper articles as they can be modified by a number of individuals between inception and publication, each of whom may have different motivations. Furthermore, images may have been chosen by a picture editor working independently of the original author of the text. In addition, analysing media content alone cannot tell us what messages the audience will take away, as forming meaning is a collaborative process between the text and the audience, and the context within which the text is consumed plays a role in how it is interpreted [49]. However, regardless of the intent of publishing decisions, the final article presented to readers is important, due to the role of media portrayals in influencing public understandings of health issues [12]. Further research in this area might benefit from these limitations being taken into account in its research design. Firstly, a figure rating scale designed specifically for visually estimating BMI, with normative BMI values for each portrait, would be of value.
Secondly, taking into account the difficulties inherent in sourcing newspaper articles with images, further research might benefit from focusing instead on online news articles, as did Heuer and colleagues [38]. In addition, researchers interested in images of obesity may find that images are more numerous in other news media, such as magazine articles or television news, and there may be value in comparing images in articles about obesity with images in unrelated articles. The issue of the complex authorship of newspaper articles may warrant study in itself, which could investigate the roles and motivations of the personnel involved in putting together an article. As Gibson and Zillmann [50] suggest, journalists should be aware of the potentially harmful power of news images. This study adds to evidence that could lead news media producers with an interest in accuracy and integrity to reconsider their editorial processes with regard to illustrative images. If editors wish to illustrate obesity to readers in an accurate, informative and socially responsible manner, they might consider seeking illustrative images that represent the full range of body sizes within the obese category and avoiding images that reinforce negative stereotypes of obesity. Alternatively, if public health campaigners wish to combat misleading and negative images of obesity, they might consider developing informational campaigns aimed specifically at counteracting those images. Mass media coverage can influence how ideas develop, spread and enter public discourse [12]. This study suggests that there may be a tendency for newspapers to misrepresent the range of body sizes within the obese category, and to disproportionately use images of extreme obesity to illustrate general societal obesity. These trends demonstrate a possible mechanism by which newspapers might contribute to the normalisation of obesity in society. This study also contributes to the existing literature on mass media stigmatisation of obesity [37,38], demonstrating how newspapers' photographic representations of overweight and obesity could serve to reinforce stigmatisation. In conclusion, this study contributes to a growing body of literature on mass media portrayals of obesity. It does so by illustrating two ways in which newspapers' pictorial depictions of overweight and obesity could harm both public understanding and public health: by exacerbating a process of normalisation that distorts public perceptions of healthy weight; and by contributing to the stigmatisation of overweight and obesity that harms the psychological and physical health of overweight and obese individuals [28].
Optimal balance of efficacy and tolerability of oral triptans and telcagepant: a review and a clinical comment

Dose-response curves for headache relief and adverse events (AEs) are presented for five triptans: sumatriptan, zolmitriptan, naratriptan, almotriptan, and frovatriptan, and for the CGRP antagonist telcagepant. The upper part of the efficacy curve of the triptans is generally flat, the so-called ceiling effect, and none of the oral triptans, even in high doses, are as effective as subcutaneous sumatriptan. In contrast, the incidence of AEs increases with increasing dose, without a ceiling effect. The optimal dose for the triptans is therefore mainly determined by tolerability. Telcagepant has an excellent tolerability and can be used in migraine patients with cardiovascular co-morbidity. Based on the literature, the triptans and telcagepant are rated in a table for efficacy and tolerability.

Introduction

The vignette suggests that ''the philosopher's stone'' has been found with the introduction of sumatriptan. In clinical practice with oral triptans, however, not all migraine patients respond to a triptan, and AEs can be a problem. The optimal balance of efficacy and tolerability depends on the combined dose-response curves for both antimigraine effect and incidence of AEs. These dose-response curves for oral triptans will be reviewed, the findings discussed, and finally my clinical comments will be presented.

Methods and results

Dose-defining, randomised, controlled trials (RCTs) of triptans were searched for in PubMed and in The Headaches [5]. Studies defining the dose-response curves of oral triptans for both efficacy and the incidence of AEs were selected for analysis. In addition, large dose-defining studies on the CGRP antagonist telcagepant were searched for. For three triptans (zolmitriptan, naratriptan, and almotriptan) the balance of efficacy and tolerability could be evaluated by drawing the curves from one dose-defining study, as shown in Figs. 2, 3, and 4. Two dose-defining studies [5,6] were needed to evaluate the full dose-response curves for sumatriptan and frovatriptan (Figs. 1, 2, and 6). For rizatriptan and eletriptan the incidence of AEs was not presented [7-11], and only the results for efficacy of these two triptans are mentioned briefly. Sumatriptan is the first and standard triptan, and it took two studies, from 1991 and 1998, before the dose-response curve for oral sumatriptan could be established (Figs. 1, 2) [6,12]. It is evident from Figs. 1 and 2 that there is an upper flat part of the dose-response curve for efficacy, starting at sumatriptan 50 mg, with no increase in efficacy up to the 300 mg dose. The incidence of AEs increases with increasing dose of sumatriptan, reaching a maximum of 53% after 300 mg. Sumatriptan 25 mg was the minimum effective dose [6]. For sumatriptan 50 mg there were 7% more AEs than after placebo (Fig. 1a), which is quite similar to the 9% found in one meta-analysis [13]. The recommended starting dose of oral sumatriptan is 50 mg. This choice is based on maximal efficacy and reasonable tolerability (Figs. 1, 2). The dose-response curves for zolmitriptan are shown in Fig. 3 [14]. Again there is a flat upper part for efficacy. The starting dose for this plateau is 2.5 mg zolmitriptan. The AEs increase with increasing dose and reach a maximum of 67% after 10 mg zolmitriptan. For zolmitriptan 2.5 mg there were 14% more AEs than after placebo. This incidence is quite similar to the 15% found in a meta-analysis [13].
The biggest difference between efficacy and AEs (Fig. 2) was observed at the 2.5 mg dose, which is therefore the recommended dose for zolmitriptan [15]. Oral naratriptan apparently has a dose-response curve for efficacy [16] with a plateau which starts at 7.5 mg (Fig. 4). For AEs there is a similar plateau in this dose range. At 2.5 mg there are no more AEs than with placebo, as has also been observed in a meta-analysis [13]. The 2.5 mg dose of naratriptan was subsequently chosen as the recommended dose without any more AEs than placebo, the so-called ''gentle triptan'' [17].

(Fig. 4 caption: Effect of naratriptan 1, 2.5, 5, 7.5, and 10 mg on headache relief and adverse events in one RCT [16].)

The dose-response curves for almotriptan are shown in Fig. 5 [18], and there is a slight increase in efficacy from 6.25 mg (56%) to 25 mg (66%). The incidences of AEs are remarkably low, and only at 25 mg is there a slight increase compared with placebo. The AEs up to 12.5 mg (16-18%) were described as being mild in the majority of patients, whereas the AEs after 25 mg (25%) were described as being of moderate intensity in 48% of cases. Also in a meta-analysis, almotriptan 12.5 mg was found to have AEs at the placebo level [13]. Mostly based on the change in intensity of AEs, almotriptan 12.5 mg was chosen as the recommended dose [15,18]. The efficacy of frovatriptan was evaluated by pooling the results of two RCTs [19]. The combined results are shown in Fig. 6. From 2.5 mg and with higher doses there is a flat dose-response curve. Below 2.5 mg there is no efficacy. The incidence of AEs increases with dose, with a maximum of 72% at 40 mg. The recommended dose is frovatriptan 2.5 mg, the lowest dose with efficacy. For rizatriptan and eletriptan the total incidences of AEs (any patient with an AE) are not reported, but the incidences of individual AEs are given in tables [8-11]. Thus only the dose-response curves for efficacy of these two triptans can be evaluated. In one dose-finding RCT (n = 417), headache relief was 18% with placebo, and 21, 45, and 48% with rizatriptan doses of 2.5, 5, and 10 mg, respectively [10]. In a RCT (n = 449) exploring the upper part of the dose-response curve for rizatriptan, headache relief was 18% with placebo and 52, 56, and 67% with 10, 20, and 40 mg doses of rizatriptan. AEs occurred more frequently after a 40 mg dose of rizatriptan [11]. In one RCT (n = 1,190) investigating the effect of eletriptan, headache relief was 20% with placebo and 47, 62, and 59% with 20, 40, and 80 mg doses of eletriptan [9], and in another RCT (n = 1,334) [8] headache relief was 22% with placebo and 47, 62, and 59% with the eletriptan doses of 20, 40, and 80 mg, respectively. In both RCTs AEs were comparable for eletriptan 20 mg and placebo [8,9]. AEs from different trial programmes are difficult to compare because of differences in the methodology of collecting AEs. In one meta-analysis, any AE (placebo-subtracted) was 7 and 13% after 5 and 10 mg doses of rizatriptan, and 2, 6, and 18% after 20, 40, and 80 mg doses of eletriptan, respectively [13]. There is thus also for these two triptans an increase in the incidence of AEs with increasing dose. Telcagepant, a calcitonin gene-related peptide (CGRP) receptor antagonist, is currently being developed for the acute treatment of migraine. In one small dose-defining RCT [20], doses of 300 and 600 mg telcagepant were found comparable, and the 300 mg dose was selected for further investigation.
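The ceiling effect described above is conventionally captured by an Emax dose-response model, E(d) = E0 + Emax·d/(ED50 + d). As an illustration only, the sketch below fits such a model to the rizatriptan headache-relief figures quoted above, pooling the two RCTs; this is not an analysis performed in the source studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    # Hyperbolic Emax model: placebo effect e0 plus a saturating drug effect.
    return e0 + emax * dose / (ed50 + dose)

dose = np.array([0, 2.5, 5, 10, 10, 20, 40], dtype=float)     # mg
relief = np.array([18, 21, 45, 48, 52, 56, 67], dtype=float)  # % headache relief

params, _ = curve_fit(emax_model, dose, relief, p0=[18, 50, 5],
                      bounds=([0, 0, 0.1], [40, 100, 100]))
e0, emax, ed50 = params
print(f"E0 = {e0:.0f}%, Emax = {emax:.0f}%, ED50 = {ed50:.1f} mg")
# Far above ED50 the fitted curve flattens: doubling the dose buys little
# extra relief, while AE incidence (not modelled here) keeps rising.
```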
The dose-response curves for telcagepant in doses from 50 to 300 mg are shown in Fig. 7 [21]. The incidence of AEs is at the placebo level, consistent with the lack of effect of CGRP antagonists on the human vasculature [22], and there is probably a plateau for efficacy from 150 or 300 mg and upwards [21,23]. The recommended dose will probably be 300 mg telcagepant, a dose with maximum effect and AEs at the placebo level.

Discussion

In 2002, it was stated that triptans have served as the foot soldiers of the advances in migraine research during the latter part of the twentieth century [24]. How effective are these revolutionary drugs then in clinical practice? The triptans are per se highly effective drugs, cf. the 85-91% headache relief at 2 h after subcutaneous sumatriptan and naratriptan [1-3]. Theoretically, it should be possible by increasing the oral dose of a triptan to obtain similarly high response rates. This is, however, not the case. Even with similar plasma concentrations of sumatriptan and naratriptan after oral and subcutaneous administration, the injection is still superior to the oral form [4]. As shown in Figs. 1, 2, 3, and 5, there is for several triptans, sumatriptan, zolmitriptan, and frovatriptan, a flat upper part of the dose-response curve. In addition, the efficacy even with very high doses, e.g., the 40 mg dose of frovatriptan (42%) and of rizatriptan (67%), is not near the efficacy of the subcutaneous form, vide supra. This higher efficacy of injected triptans compared with the oral form is most likely due to a quicker rise in blood concentrations after subcutaneous injection [4]. The upper parts of the dose-effect curves for several triptans, sumatriptan, zolmitriptan, and frovatriptan (Figs. 1, 2, 3, and 6), demonstrate a ceiling effect for the response on migraine pain. This ceiling effect is especially pronounced for frovatriptan, for which a 16-fold increase from the 2.5 mg dose to 40 mg did not result in an increase in efficacy (see Fig. 6). In contrast, the dose-response curves for AEs show that the incidence of AEs increases with increasing doses (Figs. 1, 2, 3, 4, 5, and 6), and there are no indications of a ceiling effect. Only reporting the incidence of AEs does not in all cases give the full picture of the clinical impact of the AEs. Thus for almotriptan 12.5 mg AEs were reported as mild, whereas for 25 mg they were reported as moderate [18]. The global impact of AEs should be measured on suitable quality-of-life scales in the future [25]. Compared with the traditionally used drug, ergotamine, which in addition to its 5-HT1B/1D agonism has agonistic effects on, e.g., the dopamine D2 receptor [26], the triptans act selectively on the 5-HT1B/1D receptors [15,27] and should thus have a better tolerability profile than ergotamine. Thus in one RCT rectal ergotamine 2 mg (73%) was slightly superior to rectal sumatriptan 25 mg (63%) for headache relief but caused significantly more nausea and/or vomiting: 28 and 7%, respectively [15,28]. Even if just recording the incidence of AEs in the balance between efficacy and tolerability is not the ideal measure of tolerability, it is a fair measure of a triptan's potential for AEs in the migraine population, and in several cases the incidence of AEs has determined the recommended doses of the triptans. The recommended doses are in most cases a realistic compromise between efficacy and tolerability. The new CGRP antagonist telcagepant has an excellent tolerability, with AEs at the placebo level (see Fig. 7) [21,23].
Clinical comments

My personal rating of the triptans and telcagepant is given in Table 1. It is based on comparative RCTs [5], two systematic reviews [27,30], and a meta-analysis [13]. For efficacy, + is given for a drug somewhat better than placebo, ++ for an effective drug, and +++ for a highly effective drug. For tolerability, 0 is given for no more AEs than placebo, + for <10% more AEs than placebo, ++ for <25% more AEs than placebo, and +++ for >25% more AEs than placebo.

[Table 1. Efficacy and tolerability of triptans and telcagepant. For explanation of (+ to +++) for efficacy and of (0 to +++) for AE potential, see text. The rating is based on [13,15,21,23,27,30]. The only row recoverable from the source is: telcagepant 300 mg, efficacy ++, AE potential 0.]

It should be noted that there are most likely inter-individual differences in responses to triptans. Thus, one patient may use one triptan successfully, whereas another patient may prefer a different triptan. This variability among triptans is most likely due to both pharmacokinetic and pharmacodynamic variability among the drugs [31]. From a pharmacokinetic point of view, almotriptan has the advantage of a high oral bioavailability of 80% and is less likely to vary among subjects than, e.g., sumatriptan with an oral bioavailability of 14% [15,27]. Because of no more AEs in RCTs than placebo (see Fig. 5), almotriptan 12.5 mg can apparently (see Table 1) be a first-choice triptan if no AEs are tolerated. It should be noted, however, that some patients can experience so-called ''triptan'' symptoms (see below) after almotriptan just as after other triptans. Sumatriptan is now off patent in most countries, and sumatriptan 50-100 mg should therefore in clinical practice be the triptan of first choice when triptans are used de novo in migraine patients. Even if the AEs after triptans are in most cases mild to moderate and transient, they can be frightening for patients, who should be informed about possible AEs. Somnolence and asthenia are reported as AEs of triptans, but they are most likely partly treatment-emergent CNS symptoms of the migraine attack following the treatment with triptans [26]. Even so, they are experienced by the patients as bothersome AEs. The so-called ''triptan'' symptoms [32] are shown for placebo and the recommended 2.5 mg dose of zolmitriptan in Table 2 [15,33]. Note that zolmitriptan 2.5 mg caused 17% more adverse events than placebo. Chest symptoms (mainly tightness and pressure) have been reported to occur at some time in up to 20% (tablets) and 40% (subcutaneous injection) of the patients treated with sumatriptan [15,34]. Such symptoms can be a frightening experience for the patients, and they should be warned in advance of the risk of these symptoms and informed about their transient and generally benign nature. If telcagepant becomes available, it will be the drug of first choice for patients with migraine and cardiovascular diseases or a high risk of such diseases. It will also be a good choice if the migraine patient has intolerable AEs when treated with triptans. It should be noted that with any drug used in acute migraine treatment there is a different balance of efficacy and tolerability in the individual patient, and there is thus no standard dose that suits every patient. In addition, some patients may prefer a very effective drug with some AEs to a drug with lower efficacy and virtually no AEs.
Drugs and doses should thus be tailored to the needs of the individual patient. Finally, it is important to note that the majority of patients experience no AEs with use of the oral specific 5-HT1B/1D receptor agonists, the triptans, in the recommended doses (see Figs. 1, 2, 3, 4, and 5). When AEs occur, they are in most cases mild to moderate and transient. On balance, the triptans, with their proven efficacy and an acceptable tolerability profile, have been a major step forward in the acute treatment of migraine.

Conflict of interest None.

Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
2014-10-01T00:00:00.000Z
2011-02-25T00:00:00.000
{ "year": 2011, "sha1": "a538b50eb293d04f9b3aef4449fc33fbdf94843d", "oa_license": "CCBY", "oa_url": "https://thejournalofheadacheandpain.biomedcentral.com/track/pdf/10.1007/s10194-011-0309-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e3f40d6b76f97933fd5e9a2c10c6b42039655b0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210072352
pes2o/s2orc
v3-fos-license
Regulation of Garcinol on Histone Acetylation in the Amygdala and on the Reconsolidation of a Cocaine-Associated Memory

Exposure to drug-related cues often disrupts abstinence from cocaine use by triggering memories of drug effects, leading to craving and possible relapse. One prospective method of treatment is weakening cocaine-associated memories via impairment of memory reconsolidation. Previous experiments have shown that systemic injection of the amnestic agent garcinol impairs the reconsolidation of cocaine-cue memories in a temporally constrained, cue-specific, and persistent manner. Here, we investigated garcinol's effect on cocaine-cue memory reconsolidation when administered to the lateral nucleus of the amygdala (LA), as well as its epigenetic activity following systemic garcinol administration and also when given in conjunction with trichostatin A (TSA), a histone deacetylase (HDAC) inhibitor. Rats received 12 days of cocaine self-administration training during which time an active lever press resulted in an i.v. cocaine infusion that was concurrently paired with the presentation of a light/tone cue. After 8 days of lever extinction, rats received a memory reactivation session followed by a cue-induced reinstatement test. Intra-LA garcinol given after the reactivation session significantly impaired reconsolidation, and only if the memory was reactivated. Additional studies revealed a significant reduction in histone H3 K27 acetylation and reduced expression of the immediate-early genes Arc and Egr-1 in the LA. When administered alone, TSA enhanced the reinstatement of a cocaine-cue memory, an effect that was prevented when garcinol was concurrently administered. These data indicate the LA is a key structure responsive to garcinol, suggest that one of garcinol's mechanisms of action is through the reduction of memory-related gene expression in the LA, implicate changes in histone acetylation in memory reconsolidation, and support garcinol as a potential therapeutic tool for sustaining abstinence.

INTRODUCTION

One commonality among substance use disorders is the tendency to relapse to drug-seeking behavior, a major detrimental factor for long-term abstinence. Much research has been focused on identifying novel neural mechanisms that underlie craving and drug-seeking behavior in an effort to develop more effective treatments. One important contributing factor that may initiate relapse is exposure to environments or cues that have become associated with drug-taking. These cues can elicit memories of the pleasurable effects of taking the drug and ultimately result in craving and drug-seeking behavior, preventing sustained abstinence. Therefore, one potentially therapeutic treatment option is to identify new mechanisms and methods to weaken the strength of these drug-associated memories. Reconsolidation and extinction have been widely identified as two primary processes by which existing memories can be modified. Although extinction or exposure-based treatment methods are promising, used alone they have failed to be successful (Taylor et al., 2009; Torregrossa and Taylor, 2013; Everitt et al., 2018). Memory extinction is the process whereby repeated exposure to the conditioned stimulus (CS) is performed in a context lacking the unconditioned stimulus (US). Following multiple exposures to the CS, a new memory is formed in which the CS no longer elicits the conditioned response (Kindt et al., 2009). In a clinical setting, this consists of repeatedly exposing patients to the cues.
Such processes can be anxiety-provoking for patients; furthermore, extinction learning may actually be impaired in patients with psychiatric disorders (Holt et al., 2009; Singewald and Holmes, 2019). In addition, memories that are successfully extinguished are not ''erased''; they are susceptible to spontaneous recovery with the passage of time, are capable of being reinstated during periods of stress (Singewald et al., 2015; Mantsch et al., 2016), and may renew in contexts other than that in which the memory was extinguished (Crombag and Shaham, 2002; Kindt et al., 2009). During the process of reconsolidation, a memory that has previously been consolidated is recalled and subsequently enters a destabilized state for a short period of time, allowing the memory to be updated with new information before becoming restabilized, which can either strengthen or weaken the original memory (Zhang et al., 2018; Kida, 2019). Importantly, when memory is disrupted through interference with the reconsolidation process, the impairment appears to be persistent and is not prone to the constraints of extinction (Kindt et al., 2009; Singewald et al., 2015; Kida, 2019). In light of this, it has been suggested by researchers that ''reconsolidation''-based therapy may be more clinically effective and less stressful. Utilizing behavioral strategies and/or pharmacological methods to interfere with this process has been shown to successfully impair the reconsolidation of cue memories in animals (Lewis, 1979; Nader, 2003; Lee et al., 2005, 2006, 2017; Tronson and Taylor, 2007; Sanchez et al., 2010; Sorg, 2012; Xue et al., 2012; Sartor and Aston-Jones, 2014; Taylor and Torregrossa, 2015; Torregrossa and Taylor, 2016; Dunbar and Taylor, 2017a; Monsey et al., 2017; Haubrich and Nader, 2018). It is widely accepted that associative memories, such as a cocaine-cue memory, are formed and stored in an important brain region, the lateral nucleus of the amygdala (LA), which has therefore been the target of much research (Thomas et al., 2003; Lee et al., 2005, 2006; Tipps et al., 2014; Rich et al., 2016, 2019). Numerous studies have demonstrated that interfering with signaling cascades within the LA impairs its ability to form and store cocaine-associated memories (Wan et al., 2014; Shi et al., 2015; Rich et al., 2016, 2019). Previous work from our lab has indicated that one such compound, the amnestic agent garcinol, can impair the reconsolidation of a cocaine-cue memory in a manner that requires memory reactivation and is temporally regulated, long-lasting, persistent after extended access to cocaine, and cue-specific (Monsey et al., 2017). Further studies showed that garcinol can also impair conditioned reinforcement learning, weaken the ability to acquire a new response, and impair reinstatement following a US (cocaine) reactivation session (for review see Dunbar and Taylor, 2017a,b; Monsey et al., 2017). Garcinol, a natural compound derived from the fruit rind of Garcinia indica, the Kokum tree, has been investigated in therapeutic contexts as a treatment for AIDS, HIV, and cancer (Yamaguchi et al., 2000; Koeberle et al., 2009; Padhye et al., 2009; Ahmad et al., 2010).
Studies have found garcinol to be a potent histone acetyltransferase (HAT) inhibitor of the transcriptional coactivators of the p300 (EP300)/CBP (CREB-binding protein) family and the PCAF (p300/CBP-associated factor) family; p300/CBP activity has been found to play a key role in the reconsolidation of auditory fear memories (Maddox et al., 2013a; Merschbaecher et al., 2016). Garcinol is also of interest because of its role in modulating epigenetic processes in the LA, for example, histone acetylation (Ac), which may underlie memory reconsolidation mechanisms (Maddox and Schafe, 2011; Monsey et al., 2011; Maddox et al., 2013a,b; Hitchcock et al., 2019). Memory reactivation has also been shown to regulate levels of histone H3 protein in the amygdala (Maddox and Schafe, 2011). Further, administration of a histone deacetylase (HDAC) inhibitor in the LA following memory reactivation enhances reconsolidation of an auditory fear memory, while use of a HAT inhibitor, like the amnestic agent garcinol, impairs memory reconsolidation (Maddox et al., 2013a,b). These epigenetic mechanisms are thought to play an important role in the reconsolidation of auditory fear memories, yet little is known about their involvement in the reconsolidation of appetitive memories, and drug-associated memories in particular. Thus, we explored the effect of the HAT inhibitor garcinol on the reconsolidation of cocaine-associated cue memories.

Subjects

Adult male Sprague-Dawley rats (Charles River), aged 2-3 months and weighing 275-300 g, were singly housed and kept on a 12 h light/dark cycle. Following recovery from surgery, food was restricted for the duration of the experiment to maintain rats at 90-95% of their pre-surgery body weight. Water was provided ad libitum.

Surgical Procedures

Rats were anesthetized with a mixture of ketamine (75 mg/kg) and xylazine (5 mg/kg, i.p.). They also received 5 mg/kg Rimadyl and 5 ml s.c. of lactated Ringer's solution. Indwelling catheters were implanted into the right jugular vein. Catheters were perfused with heparinized saline every other day to maintain patency. For intra-cranial infusion experiments, during the same surgery immediately following the catheterization, rats were implanted bilaterally with 26-gauge stainless steel guide cannulas that were aimed at the LA (Bregma −3.2 AP, ±5.0 ML, −8.0 DV). The cannulas were adhered to several screws in the skull using a mixture of dental cement and acrylic. Dummy cannulas (31-gauge) were inserted into the guide cannulas to keep them from clogging. Following surgery, rats received 1 week of recovery time, during which they were singly housed and provided with ad libitum food and water. Rats were weighed daily throughout the remainder of all experiments.

Behavioral Procedures

For self-administration training, rats were placed in sound-attenuated operant conditioning chambers (Med Associates). The boxes contained two extendable levers (on the same wall), a cue light, a separate house light, a speaker for the tone, and a background noise-generating fan. Rats received 12 days of cocaine self-administration (SA) training occurring in 1-h sessions. Throughout the session, an active and an inactive lever were extended. Each active lever press resulted in an immediate i.v. infusion of cocaine (1 mg/kg) while simultaneously a cue light and tone (75 dB) were presented in the chamber for 10 s. An inactive lever press did not result in cocaine infusion or cue presentation. For self-administration training a fixed ratio 1 (FR1) schedule was used; one active lever press = 1 cocaine infusion/cue presentation (a minimal sketch of this contingency is given below).
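The FR1 contingency just described can be summarized in a few lines of code. This is only an illustrative sketch of the programmed contingency; the class and variable names are ours and are not part of the Med Associates software.

```python
from dataclasses import dataclass, field

@dataclass
class FR1Session:
    """Minimal sketch of the FR1 self-administration contingency described
    above: every active lever press triggers one 1 mg/kg cocaine infusion
    plus a 10 s light/tone cue; inactive presses have no programmed outcome."""
    infusions: int = 0
    cue_events: list = field(default_factory=list)

    def lever_press(self, lever: str, t: float) -> None:
        if lever == "active":
            self.infusions += 1                  # 1 press = 1 infusion (FR1)
            self.cue_events.append((t, t + 10))  # 10 s light/tone cue window
        # inactive presses are recorded by the rig but trigger nothing

session = FR1Session()
session.lever_press("active", t=12.0)
session.lever_press("inactive", t=30.5)
print(session.infusions, session.cue_events)  # -> 1 [(12.0, 22.0)]
```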
Rats then underwent 8 days of lever extinction, where pressing either lever had no outcome. Rats were required to meet acquisition criteria of ≥6 infusions for each of the last 3 days of self-administration. These criteria are, on average, met by 90-95% of rats. These rats were then divided into to-be-vehicle or to-be-garcinol groups, balanced for the total number of infusions over all days of SA and for comparable levels of extinction. Twenty-four hours after the last extinction day, rats were placed in a novel chamber (addition of a novel lemon-scented odor, changes in floor texture, and different lighting) for a memory reactivation session. Here, rats received three presentations of the light and tone cues to recall the cocaine-cue memory. There were no levers present. For no-reactivation controls, rats were placed in the same novel chamber; however, they did not receive cue presentations. For studies using systemic administration of garcinol or vehicle, rats received a 10 mg/kg i.p. injection 30 min after reactivation (and an additional injection of 2.5 mg/kg trichostatin A (TSA) or vehicle 45 min after reactivation in the rescue experiment) and were returned to the animal colony. In experiments using intra-LA infusion of garcinol or vehicle, rats received a 500 ng (0.5 µl/side) infusion 1 h after reactivation and were returned to the animal colony. For qRT-PCR experiments, rats were sacrificed 1 h after reactivation (30 min after garcinol or vehicle treatment) and brains were stored at −80 °C until processed. In behavioral studies, rats were tested for cue-induced reinstatement 24 h after reactivation in the original chamber. During this test, an active lever press resulted in a 10 s light/tone cue presentation but did not result in cocaine infusion.

Quantitative Real-Time PCR

Punches were taken from the LA on a sliding freezing microtome using a 1 mm punch tool from 400 µm thick sections. For RNA isolation, samples were processed using an RNAqueous Micro Kit (ThermoFisher). For cDNA synthesis, a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems) was used. Quantitative real-time PCR (qRT-PCR) was performed using the ∆∆Ct method with custom primers (Integrated DNA Technologies, Coralville, IA, USA) for Arc (Forward CCCTGCAGCCCAAGTTCAAG; Reverse GAAGGCTCAGCTGCCTGCTC) and Egr-1 (Forward AGCGAACAACCCTATGAGCA; Reverse TCGTTTGGCTGGGATAACTC). Relative gene concentrations were normalized to GAPDH (Forward GCATCCTGCACCACCAACTG; Reverse ACGCCACAGCTTTCCAGAGG). Data were analyzed using a two-tailed t-test with a significance threshold of p < 0.05. Data are normalized to GAPDH and then expressed as the average threshold cycle (Ct) difference between groups. Average fold change values were then calculated and expressed as a percentage of the control (the calculation is sketched below).
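A minimal sketch of the ∆∆Ct fold-change calculation described above follows; the Ct values are hypothetical and serve only to illustrate the arithmetic.

```python
import statistics

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Comparative Ct (delta-delta-Ct) fold change, as described above:
    dCt = Ct(target) - Ct(GAPDH) per group;
    ddCt = dCt(treated) - dCt(control);
    fold change = 2 ** (-ddCt). Inputs are per-animal Ct value lists."""
    dct_treated = statistics.mean(ct_target_treated) - statistics.mean(ct_ref_treated)
    dct_control = statistics.mean(ct_target_control) - statistics.mean(ct_ref_control)
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Hypothetical Ct values for Arc vs. GAPDH (garcinol group vs. vehicle group):
fold = ddct_fold_change([26.1, 26.4, 26.0], [18.2, 18.3, 18.1],
                        [24.9, 25.1, 25.0], [18.2, 18.4, 18.0])
print(f"Arc expression: {100 * fold:.0f}% of control")  # < 100% = reduction
```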
Statistical Analysis

Self-administration data were analyzed by repeated-measures (RM) analysis of variance (ANOVA) across days for total infusions and total active and inactive lever presses. Extinction training was also analyzed using ANOVA across days to measure total active and inactive lever responses. Reinstatement tests were analyzed using RM-ANOVAs measuring total active and inactive lever responses on the last day of extinction and the reinstatement test day. Bonferroni adjustment and post hoc tests were used where appropriate. qRT-PCR and Western blotting data were analyzed with two-tailed t-tests.

Intra-LA Garcinol Impairs Cue-Induced Reinstatement of a Cocaine-Associated Memory Following Memory Reactivation

In our first experiment we examined whether the intra-LA infusion of garcinol impairs the reconsolidation of a cocaine-associated memory following either memory reactivation or in no-reactivation controls (see Figure 1A). Here, rats received 12 days of cocaine self-administration training where each active lever press resulted in a 1 mg/kg i.v. infusion of cocaine as well as the simultaneous presentation of a light/tone cue. Following this, rats underwent 8 days of lever extinction where active lever presses no longer resulted in cocaine infusion or cue presentation. Twenty-four hours after the last day of extinction, rats underwent a reactivation session in a novel context where the cue was presented. No differences were seen in the acquisition of the self-administration task between to-be-vehicle (N = 7) and to-be-garcinol (N = 8) groups. We report no significant differences in the number of total cocaine infusions between these groups (p > 0.05; Figure 1B). There were also no differences during lever extinction between groups for total active lever presses (p > 0.05; Figure 1C). The ANOVA across the final extinction day and the cue reinstatement test day revealed a significant main effect of day (F(1,13) = 102, p < 0.0001) and of drug (F(1,13) = 46.79, p < 0.0001), as well as a significant interaction in active lever presses between garcinol (500 ng/side; 0.5 µl) and vehicle-infused rats (F(1,13) = 54.21, p < 0.0001; Figure 1D). We found that post-reactivation intra-LA garcinol decreased active lever pressing during the reinstatement test when compared to vehicle controls [p < 0.05 (see Figure 1E for estimates of infusion sites)]; however, no differences were observed between groups on the last day of extinction. Bonferroni's test revealed a significant difference in vehicle-infused rats between the last day of extinction and the reinstatement day (p < 0.0001), while no significant difference from extinction to reinstatement day was observed in garcinol-injected rats (p > 0.05). This suggests that intra-LA garcinol, like systemically administered garcinol, can block the reconsolidation of a cocaine-cue memory and decrease drug-seeking behavior. To enhance its clinical utility, it is important to show that garcinol only blocks memories that have been reactivated and not those that have not. To control for this, we next examined whether garcinol would have an effect on a cocaine-cue memory receiving ''no reactivation'' (see Figure 1F). Here, a second group of rats went through self-administration training and lever extinction as in our reactivation experiment. However, on the reactivation day, rats were placed in the reactivation chamber for the same length of time as the reactivated groups, but they were not exposed to cue presentations. One hour after this session rats received an injection of either vehicle or garcinol, as above, and then underwent a cue reinstatement test 24 h later. Similar to our reactivation experiments, we found no differences in the number of infusions between to-be-vehicle (N = 5) and to-be-garcinol (N = 5) groups during acquisition of cocaine self-administration (p > 0.05; Figure 1G). Likewise, there were no differences in active lever presses between groups throughout lever extinction (p > 0.05; Figure 1H).
The ANOVA for the reinstatement test day compared to the last day of lever extinction revealed a significant main effect of day (F(11,99) = 5.38, p < 0.01), but a nonsignificant main effect of group and no significant interaction between garcinol- and vehicle-injected groups [p > 0.05; Figure 1I (see Figure 1J for estimates of infusion sites)]. These findings suggest that the LA is a key region involved in garcinol's effects on cocaine-cue memory reconsolidation. Moreover, because garcinol does not alter reinstatement in the absence of memory reactivation, these results also suggest that garcinol's effects on reconsolidation are predicated on active reactivation of the cocaine-associated memory.

Systemic Garcinol Decreases the Expression of Immediate-Early Genes in the LA of Reactivated Rats

Our next set of experiments examined the expression of genes previously shown to be regulated by memory reactivation in the LA and the effects of garcinol on the expression of these genes (Figure 2; Maddox et al., 2010; Maddox and Schafe, 2011; Ziókowska et al., 2011; Alaghband et al., 2014). Of the many genes involved in the consolidation and reconsolidation of long-term memories, several immediate-early genes (IEGs) are quickly transcribed in response to powerful external stimuli, such as fear conditioning. This first wave of gene expression is thought to be key for the later consolidation of the memory trace, ultimately leading to structural (i.e., morphological) changes at LA synapses. Here, we chose to examine the IEGs Arc and Egr-1, as they have been shown to be required for memory reconsolidation processes (Thomas et al., 2003; Lee et al., 2004, 2006; Maddox et al., 2010; Maddox and Schafe, 2011; Everitt, 2014) and are enhanced in the LA following reactivation of a drug-associated cue in a cocaine self-administration model (Ziókowska et al., 2011). For this set of experiments rats received systemic vehicle or garcinol (10 mg/kg, i.p.) administration following memory reactivation (vehicle N = 5; garcinol N = 6; see Figure 2A) or no reactivation (vehicle N = 5; garcinol N = 6; see Figure 2D). Again, there were no differences in cocaine infusions between to-be-vehicle and to-be-garcinol groups across self-administration (p > 0.05) or across extinction sessions in the reactivated and non-reactivated groups (p > 0.05; Supplementary Figure S1). Rats were sacrificed 30 min following reactivation or no reactivation and LA tissue was processed for qRT-PCR. The results revealed a significant reduction in Arc (t(9) = 2.71, p < 0.05; Figure 2B) and Egr-1 (t(9) = 3.03, p < 0.05; Figure 2C) mRNA in rats that received garcinol treatment following cocaine-cue memory reactivation.

[Figure 2 caption: (B) Quantification of Arc mRNA in the LA in vehicle- and garcinol-treated rats following memory reactivation using qRT-PCR; *p < 0.05, significant decrease relative to vehicle group. (C) Quantification of Egr-1 mRNA in the LA in vehicle- and garcinol-treated rats following memory reactivation using qRT-PCR; *p < 0.05, significant decrease relative to vehicle group. (D) Schematic of the behavioral protocol. (E) Quantification of Arc mRNA in the LA in vehicle- and garcinol-treated rats following no memory reactivation using qRT-PCR. (F) Quantification of Egr-1 mRNA in the LA in vehicle- and garcinol-treated rats following no memory reactivation using qRT-PCR. *p < 0.05.]
Conversely, in rats receiving no memory reactivation, we did not see any significant differences in Arc or Egr-1 mRNA expression in garcinol-treated rats compared to vehicle (p > 0.05; Figures 2E,F). These data suggest that systemic garcinol administration is capable of decreasing levels of IEG expression in the LA in rats receiving cocaine-cue memory reactivation, but it does not alter expression patterns in no-reactivation controls.

Systemic Garcinol Reduces Histone Acetylation in the LA of Reactivated Rats

In our next set of experiments, we sought to examine whether systemic garcinol (10 mg/kg, i.p.) administration resulted in molecular epigenetic changes in the LA. Garcinol is an inhibitor of the two HATs CBP/p300 and PCAF (Balasubramanyam et al., 2004). Further, previous studies have demonstrated that CBP/p300 activity is responsible for acetylating lysine residues 18 and 27 (K18 and K27) and that deletion of CBP/p300 in cells specifically reduces acetylation levels on these residues compared to others (Jin et al., 2011). We hypothesized that following reactivation of a cocaine-associated cue memory (but not in non-reactivated controls), levels of acetylated histone H3 K18 and K27 would be significantly decreased in the LA in response to systemic garcinol administration. For these experiments, rats received either vehicle or garcinol injection after memory reactivation (vehicle N = 7; garcinol N = 8; see Figure 3A) or no reactivation (vehicle N = 6; garcinol N = 7; see Figure 3D). There were no differences in total drug infusions between vehicle- and garcinol-treated rats in the reactivated and no-reactivation groups during the 12 days of cocaine self-administration training (p > 0.05; Supplementary Figure S2). Additionally, no differences were seen in the number of active lever presses during the 8 days of lever extinction between groups (p > 0.05; Supplementary Figure S2). Rats were then sacrificed 90 min after memory reactivation or no reactivation and Western blotting was performed on LA tissue. The results revealed a nonsignificant difference in AcH3 K18 (p > 0.05, Cohen's d = 0.76; Figure 3B); however, levels of AcH3 K27 were significantly decreased in the garcinol group compared to the vehicle group in reactivated rats (t(13) = 2.20, p < 0.05, Cohen's d = 1.14; Figure 3C). In rats receiving no memory reactivation session, neither AcH3 K18 nor AcH3 K27 levels were altered in response to garcinol administration when compared to the vehicle group (p > 0.05; Figures 3E,F). These data suggest that systemic garcinol treatment following cocaine-cue memory reactivation is capable of decreasing levels of histone H3 acetylation in the LA.

Systemic Inhibition of HDAC Activity Rescues the Reinstatement Impairment Induced by Garcinol

Previous studies utilizing a fear conditioning paradigm have reported that intra-LA infusion of the HDAC inhibitor TSA, following training and following memory reactivation, leads to an increase in histone H3 acetylation levels in the amygdala (Maddox and Schafe, 2011; Monsey et al., 2011). Further, it was shown that intra-LA TSA is capable of enhancing both the consolidation and reconsolidation of auditory fear memory (Maddox and Schafe, 2011; Monsey et al., 2011). In light of this, in our final set of experiments, we hypothesized that by using the HAT inhibitor garcinol as well as the HDAC inhibitor TSA we could bi-directionally regulate reinstatement (see Figure 4A).
There was no significant difference observed between to-be-vehicle and to-be-garcinol treated groups across the 12 days of cocaine self-administration (p > 0.05; Figure 4B). Likewise, there were no significant differences between groups across extinction (p > 0.05; Figure 4C). However, when comparing the last day of extinction and the cue reinstatement test day, an RM-ANOVA revealed a significant main effect of day (F(1,16) = 224.30, p < 0.0001) and of group (F(3,16) = 51.68, p < 0.0001) and a day by group interaction (F(3,16) = 36.68, p < 0.0001; Figure 4D). Bonferroni's test comparing groups on the cue reinstatement day revealed a significant decrease in lever pressing in the Garcinol/TSA group compared to the Vehicle/TSA group (p < 0.0001). Further, there was a significant decrease in lever pressing in the Garcinol/Vehicle group when compared to the Garcinol/TSA group (p < 0.0001). These data indicate that inhibiting HATs leads to an impairment in cocaine-cue memory reinstatement and, conversely, inhibiting HDACs leads to an enhancement in the reinstatement of drug-seeking behavior. When given together, TSA appears to rescue (or prevent) this garcinol-induced impairment in reconsolidation, confirming that altering levels of histone acetylation plays an important role in modulating reconsolidation processes.

[Figure 4 caption: Total active lever presses on the last day of extinction compared to during the cue-induced reinstatement test. *p < 0.05, Veh/trichostatin A (TSA) and Garc/Veh significant increase and decrease, respectively, relative to all other groups.]

DISCUSSION

In the present study, we sought to investigate the effects of the naturally derived compound garcinol on downstream targets such as acetylation of histones and immediate-early gene expression in the amygdala. Here, we also examined whether using modulators of histone acetylation would alter the reinstatement of a cocaine-cue memory. We identified the LA as an important structure responding to garcinol, as intra-LA infusion of this compound following cocaine-cue memory reactivation was sufficient to block the reconsolidation of this memory. This impairment was isolated to only those memories that were reactivated, as we did not observe any memory deficits in our non-reactivated controls. Following the systemic injection of garcinol after memory reactivation, we observed a decrease in expression of the IEGs Arc and Egr-1 in the LA as well as a decrease in histone H3 K27 acetylation. Finally, we showed that systemic injection of the HDAC inhibitor TSA is capable of increasing reinstatement of a cocaine-associated memory; however, when given in conjunction with garcinol, this effect is prevented and reinstatement is reduced to baseline levels. Collectively, these data are consistent with our previous findings that characterize garcinol as a potentially clinically useful amnestic agent to treat mnemonic pathologies such as substance use disorder (Monsey et al., 2017). Prior studies established garcinol's ability, when administered systemically after cue reactivation, to impair a cocaine-associated memory in a manner that is specific to reactivated memories only, long-lasting, cue-specific, temporally constrained, and persistent following extended cocaine access (Monsey et al., 2017).
Additionally, it has been shown that garcinol can impair reconsolidation when given systemically following a US (cocaine) reactivation session and that garcinol's effects are only observed if it is administered during the labile period following memory reactivation (Dunbar and Taylor, 2017a). While these studies hold promise for clinical utility due to the systemic administration of garcinol, it is also important to identify brain-region-specific sites where garcinol could be exerting its effects on mnemonic processing. We chose to examine the LA due to its involvement in the formation and storage of emotionally salient associative memories (Sorg, 2012; Taylor, 2013, 2016; Taylor and Torregrossa, 2015). Our observation that intra-LA garcinol infusion is sufficient to impair the reinstatement of a cocaine-cue memory suggests that this may be one target brain region affected following systemic administration. In agreement with previous data, this effect was constrained only to memories that had been reactivated, because we did not observe a reconsolidation impairment in our non-reactivated controls following intra-LA garcinol infusion. Garcinol's precise mechanisms of action that might contribute to its ability to impair the reconsolidation of a cocaine-cue memory following reactivation remain uncertain. In light of this and our previous data, we also examined potential downstream molecular modifications known to be altered in response to garcinol. The results of our molecular experiments confirmed that systemic garcinol does indeed alter levels of mRNA expression and histone acetylation in the LA when administered after memory reactivation. We first examined the expression of the immediate-early genes Arc and Egr-1 and found that mRNA levels of both IEGs were reduced in garcinol-injected rats. This cascade of IEGs is important, as induction of IEG expression in brain regions occurs during cognitive processing such as neuronal activation during behavioral tasks (Guzowski et al., 2001; Ziókowska et al., 2011; Minatohara et al., 2015; Li et al., 2016). Both Arc and Egr-1 have been previously reported to play a critical role in consolidation and reconsolidation processes of associative memories (Ploski et al., 2008; Maddox et al., 2010; Maddox and Schafe, 2011; Ziókowska et al., 2011; Alaghband et al., 2014). One study utilizing a mouse model of drug self-administration found that cue-induced reinstatement of cocaine-seeking led to an induction of Arc and Egr-1 expression in the medial prefrontal cortex as well as the amygdala (Ziókowska et al., 2011). Others have reported that intra-LA infusion of Egr-1 antisense oligodeoxynucleotides prior to a cue-induced reinstatement test abolishes reinstatement and cocaine-seeking behavior (Lee et al., 2006). Further, Arc knockout mice exhibit impairments in long-term memory despite intact short-term memory formation and also show altered long-term potentiation and long-term depression (Plath et al., 2006). Thus, garcinol's ability to reduce the expression of these IEGs after memory reactivation may be one way in which it exerts its effects and impairs cocaine-cue memory reconsolidation. Previous research has established that epigenetic modulation of histone acetylation in the LA also contributes to the reconsolidation process in other models of memory formation, such as fear conditioning (Maddox and Schafe, 2011; Maddox et al., 2013a,b; Monsey et al., 2015).
One study using a fear memory reconsolidation paradigm revealed that levels of histone H3 acetylation were significantly elevated following memory reactivation and that HAT inhibitors such as garcinol and c646 were capable of diminishing these reactivation-related increases and impaired reconsolidation (Maddox and Schafe, 2011; Maddox et al., 2013a,b). Others have shown that similar mechanisms are crucial for responses following cocaine self-administration. It has been reported that changes in histone H3 acetylation in the striatum and nucleus accumbens are essential for cocaine-induced neuroplasticity, in addition to motivation for the reinforcing effects of cocaine (Kumar et al., 2005; Wang et al., 2010). In agreement with this, we also observed regulation of histone acetylation in response to memory reactivation in our own experiments. We report a decrease in histone H3 K27 acetylation in the LA following memory reactivation and systemic garcinol administration. Epigenetic alterations and posttranslational modifications such as histone acetylation have been widely implicated in the pathogenesis of numerous psychiatric diagnoses including depression, PTSD, schizophrenia, Rett syndrome, and addiction (Tsankova et al., 2004, 2006, 2007; Kumar et al., 2005; Maddox and Schafe, 2011; Monsey et al., 2011, 2017; Nott et al., 2016; Thomas, 2017). Acetylation of histones occurs on lysine residues on the protein tail. Increasing levels of histone acetylation shift the chromatin structure toward an open state, allowing access for transcriptional machinery and thus enhancing gene transcription. Conversely, a reduction in histone acetylation is thought to lead to a compact form of chromatin and subsequent transcriptional repression. In the present study, we observed a decrease in acetylation of histone H3 K27, but not of H3 K18, following memory reactivation and garcinol treatment, which is consistent with garcinol's role as a HAT inhibitor. We initially predicted that garcinol would decrease acetylation of both of these lysine residues; however, we may have failed to observe a difference in H3 K18 expression due to low power in this experiment. Further studies including more animals and the examination of acetylation changes on other lysine residues might yield alternate results. We hypothesize that acetylation is reduced on other lysine residues in response to garcinol as well. This, in turn, could lead to a decrease in mRNA transcription, as we observed with Arc and Egr-1 in the present set of experiments. It would be of interest to explore this theory further and examine acetylation levels on promoter regions of these IEGs as well as on other late-phase genes known to underlie the reconsolidation process. More comprehensive measures of changes in acetylation on proteins regulated by cocaine-cue memory reactivation and garcinol administration, using proteomic analysis, would also contribute to our understanding of the role these epigenetic modifications play in our behavioral paradigm (see Rich et al., 2016; Torregrossa et al., 2019). In our final set of experiments, we built on the hypothesis that changing levels of histone acetylation would result in altered levels of reinstatement of a cocaine-associated memory and drug-seeking behavior. For these experiments, we investigated the effects of garcinol as well as the HDAC inhibitor TSA, both alone and given in conjunction.
Previous studies have shown that intra-LA infusion of TSA elevates levels of histone H3 K9/14 acetylation in the amygdala and enhances the consolidation and reconsolidation of auditory fear memory (Maddox and Schafe, 2011; Monsey et al., 2011). Interestingly, when TSA was given alone following a cue reactivation session we also observed an enhancement in reinstatement, consistent with the notion that levels of histone acetylation may contribute to changes in the reconsolidation process. Conversely, when garcinol and TSA are administered together following memory reactivation, reinstatement returns to baseline levels; therefore, it appears that TSA can prevent the garcinol-induced impairment in reconsolidation observed in our initial studies. This suggests that inhibiting both HATs and HDACs together might result in subthreshold changes in histone acetylation levels and thus have no net effect on behavior in our specific paradigm. Further testing is required to more thoroughly dissect the interplay between garcinol and TSA and how they both individually and collectively contribute to modifications of cocaine-associated memory reconsolidation and drug-seeking behavior. In summary, the results of the present study provide further support for garcinol as a promising pharmacological tool for impairing reconsolidation of an appetitive cocaine-associated cue memory, and suggest that one avenue of action may be through changes in histone acetylation in the LA. Thus, epigenetic modulation of memory-related gene expression and the role histone acetylation plays in memory reconsolidation, particularly in the LA, warrant further study. Examining these mechanisms may reveal the processes involved in the maintenance of drug-associated memories that disrupt sustained abstinence.

DATA AVAILABILITY STATEMENT

The datasets generated for this study are available on request to the corresponding author.

ETHICS STATEMENT

The animal study was reviewed and approved by the Yale University IACUC.

AUTHOR CONTRIBUTIONS

MM designed the study, conducted experiments and data analysis, and wrote and edited the manuscript. SR assisted with conducting experiments, data analysis, and manuscript editing. JT assisted with study design, interpretation of results, and manuscript editing.

ACKNOWLEDGMENTS

We would like to thank Dr. Danielle M. Gerhard and Dr. Ronald S. Duman for their help with qRT-PCR experiments.
2020-01-09T09:15:09.901Z
2020-01-08T00:00:00.000
{ "year": 2019, "sha1": "0bb45075896f5aba7e4693eaf7c58815aa0fef0c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnbeh.2019.00281/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a2dedc26b67e5c1943d6c2dadada69df9bd41038", "s2fieldsofstudy": [ "Psychology", "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248898808
pes2o/s2orc
v3-fos-license
Comparison of the Mortality Prediction Value of Soluble Urokinase Plasminogen Activator Receptor (suPAR) in COVID-19 and Sepsis

In the last years, biomarkers of infection, such as the soluble urokinase plasminogen activator receptor (suPAR), have been extensively studied as potential diagnostic and prognostic biomarkers in the intensive care unit (ICU). In this study, we investigated whether this biomarker can be used in COVID-19 and non-COVID-19 septic patients for mortality prediction. Serum suPAR levels were measured in 79 non-COVID-19 critically ill patients upon sepsis diagnosis (within 6 h), and on admission in 95 COVID-19 patients (66 critical and 29 moderate/severe). The non-COVID-19 septic patients were matched for age, sex, and disease severity, while the site of infection was the respiratory system. On admission, COVID-19 patients presented with higher suPAR levels compared to non-COVID-19 septic patients (p < 0.01). More importantly, suPAR measured upon sepsis diagnosis could not differentiate survivors from non-survivors (p > 0.05), as opposed to suPAR measured on admission in COVID-19 survivors and non-survivors (p < 0.0001). By the generated ROC curve, the prognostic value of suPAR in COVID-19 was 0.81, at a cut-off value of 6.3 ng/mL (p < 0.0001). suPAR measured early (within 24 h) after hospital admission seems like a specific and sensitive mortality risk predictor in COVID-19 patients. On the contrary, suPAR measured at sepsis diagnosis in non-COVID-19 critically ill patients does not seem to be a prognostic factor of mortality.

Introduction

Since endothelial damage was recognized as an important pathobiological mechanism involved in COVID-19 [1], sepsis biomarkers have been shown to be relevant in this disease [2][3][4][5]. Apart from the families of cell adhesion molecules, various receptor biomarkers implicated in sepsis have also been shown to be elevated in COVID-19 [6]. The urokinase plasminogen activator (uPA) system is central to a spectrum of biological processes, including inflammation, fibrinolysis, cell proliferation, migration, and adhesion [7]. The binding of uPA to its receptor (uPAR) results in the conversion of plasminogen to plasmin via a proteolytic cascade [8]. Proteolytic cleavage of uPAR releases its soluble form, suPAR. suPAR was first identified in 1985 as a cellular binding site for urokinase [9], and has since been investigated as a potential prognostic marker in the intensive care unit (ICU). suPAR has been suggested to reflect the activation status of the immune system, rather than exerting inflammatory actions [10]. It is considered a proinflammatory biomarker associated with immune activation and fibrinolysis inhibition; its levels have been found to increase in several systemic diseases, including sepsis, cardiovascular disease, cancer, autoimmune conditions, kidney disease, and other organ failures [11]. Hence, it is not considered a disease-specific diagnostic marker. The study by Corban and colleagues [12] provided evidence for the link between fibrinolytic and inflammatory pathways and endothelial dysfunction. However, whether such biomarkers, including suPAR, can cause endothelial dysfunction or are only associated with it is still unknown. In a recent study, we demonstrated that suPAR is useful in predicting mortality in critically ill COVID-19 patients [5]. Recently, a distinctive biomarker motif of endotheliopathy was shown for COVID-19 and septic syndromes [13].
Hence, in the present study, we aimed to characterize suPAR's prognostic ability in COVID-19 pneumonia compared to sepsis arising from the respiratory tract.

Materials and Methods

This observational, single-center study included 95 consecutive COVID-19 patients admitted either to the ICU (N = 66, critically ill COVID-19 patients) or the ward (N = 29, moderate/severe patients), and 79 consecutive non-COVID-19, critically ill septic patients admitted to the general, multi-disciplinary ICU of Evangelismos Hospital. Septic patients were matched for age, sex, and critical illness severity, while the source of sepsis was the respiratory tract. Septic patients with the site of infection at the abdomen (N = 1), the central nervous system (CNS) (N = 1), or the bloodstream (N = 6) were excluded from further analyses. SARS-CoV-2 infection was diagnosed by real-time reverse transcription PCR (RT-PCR) in nasopharyngeal swabs. Sepsis was defined as a life-threatening organ dysfunction caused by a dysregulated host response to infection, according to the third consensus definition for sepsis and septic shock [14]. All patients included in the study had pneumonia with respiratory failure, while all ICU patients and 19/29 (66%) of the ward patients fulfilled the Berlin criteria for acute respiratory distress syndrome (ARDS) [15]. The study was approved by the Hospital's Research Ethics Committee (129/19-3-2020), and all procedures carried out on patients were in compliance with the Helsinki Declaration. Informed written consent was obtained from all patients' next-of-kin. The critically ill COVID-19 patients were hospitalized in the ICU immediately after the Emergency Department (ED). Following study enrolment, demographic characteristics, comorbidities, symptoms, vital signs, and laboratory findings were recorded. The acute physiology and chronic health evaluation (APACHE II) and sequential organ failure assessment (SOFA) scores were calculated on ICU admission. The outcome was defined as overall mortality. Four milliliters (4 mL) of venous blood were collected within the first 24 h post hospital admission in the COVID-19 patients, and within 6 h from sepsis diagnosis in the non-COVID-19 patients. Blood was drawn in BD Vacutainer® Plus plastic serum tubes; serum was collected, portioned into 0.5 mL aliquots, and stored at −80 °C until used. suPAR was measured by enzyme-linked immunosorbent assay (ELISA) (R & D Systems Inc., Minneapolis, MN, USA). Data are given as individual values, N (%), mean ± standard deviation (SD), or median with interquartile range (IQR), accordingly. Comparisons were performed by the t-test, the non-parametric Mann-Whitney test, the chi-square test, or one-way ANOVA, as appropriate. Correlations were performed by Spearman's correlation coefficient. Receiver operating characteristic (ROC) curves were plotted using overall ICU mortality (or hospital mortality for the ward patients) as the classification variable and suPAR levels as the prognostic variable. The optimal cut-off value for predicting mortality was calculated as the point with the greatest combined sensitivity and specificity (i.e., the Youden index; a minimal sketch of this computation is given below). This value was then used to divide the patients into two groups, higher or lower than the ROC-curve-generated cut-off value, to perform Kaplan-Meier analysis for survival probability estimation; the log-rank test was used for the two-group comparison.
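For readers who want to reproduce this kind of cut-off selection, a minimal Python sketch using scikit-learn is shown below; the outcome labels and suPAR values are hypothetical and only illustrate the computation, they are not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: 1 = died, 0 = survived, with admission suPAR (ng/mL).
died  = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
supar = np.array([3.1, 4.0, 5.2, 6.5, 5.8, 7.9, 9.4, 4.7, 8.8, 6.9])

fpr, tpr, thresholds = roc_curve(died, supar)
j = tpr - fpr                 # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))      # point of greatest combined sensitivity/specificity
print(f"AUC = {roc_auc_score(died, supar):.2f}")
print(f"cut-off = {thresholds[best]:.1f} ng/mL, "
      f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```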
The analyses were performed with the IBM SPSS statistical package, version 22.0 (IBM Software Group, Armonk, NY, USA), and GraphPad Prism, version 8.0 (GraphPad Software, San Diego, CA, USA). Statistical significance was set at p < 0.05.

Patient Characteristics

The demographics of the three patient groups are shown in Table 1. The groups did not differ in terms of age and sex. The non-COVID-19 septic patients had fewer comorbidities, as expected, since this group included 43% trauma patients. Thirty-four percent (34%) of the patients had been subjected to emergent surgery, while 23% of the patients suffered from mainly CNS-related pathologies. The two ICU groups had a comparable disease severity. The site of infection in the septic patients was the respiratory system, in 69 of them (87.3%) due to a Gram-negative bacterium and in the remaining seven due to a Gram-positive bacterium. Sepsis occurred between days 4-11 of ICU stay (median seven days). Twenty-six patients (33%) also developed septic shock later on during their ICU stay. The ICU COVID-19 group had a higher mortality, whereas the ICU non-COVID-19 septic group had a higher length of stay and mechanical ventilation duration. Twenty-nine (44%) of the ICU COVID-19 patients, from the second wave, received dexamethasone as per guidelines; it should be noted that in a previous study we demonstrated that dexamethasone lowered suPAR levels, however, not in a statistically significant manner [5].

[Table 1 notes: * p-value < 0.05; ** p-value < 0.01; **** p-value < 0.0001 versus the ICU non-COVID-19 septic patient group. † Twenty-nine patients were from the second wave and received dexamethasone, whereas 37 were recruited at the start of the pandemic, prior to the adoption of the dexamethasone administration guidelines. Data are expressed as number of patients (N), percentages of the total related variable (%), or mean ± SD for normally distributed variables and median (IQR) for skewed data. The Kruskal-Wallis test or the chi-square test was used, as appropriate. Characteristics were measured within 24 h from admission in the COVID-19 patients, whereas in the ICU non-COVID-19 septic patients the data given are at sepsis onset (within 6 h). Abbreviations: APACHE = Acute physiology and chronic health evaluation; CRP = C-reactive protein; ICU = Intensive care unit; PCT = Procalcitonin; SOFA = Sequential organ failure assessment; suPAR = Soluble urokinase plasminogen activator receptor.]

Comparative Analysis of suPAR in COVID-19 and ICU-Acquired Sepsis

Both critical and moderate/severe COVID-19 patients had elevated suPAR levels compared to the ICU non-COVID-19 septic patients (Figure 1).

[Figure 1 caption: Horizontal lines, medians of the two groups. The groups were compared by ANOVA followed by Kruskal-Wallis (panel A, p-values against the septic group) or Mann-Whitney (panel B, between survivors and non-survivors within each group). * p < 0.05, ** p < 0.01, **** p < 0.0001. suPAR = soluble urokinase-type plasminogen activator receptor.]

A ROC curve was generated to determine the prognostic accuracy of suPAR in predicting mortality in COVID-19; the area under the curve (AUC) of suPAR levels was 0.81 (95% CI = 0.71-0.91), p < 0.0001 (Figure 2). According to the ROC curve analysis, the optimal cut-off point for suPAR was 6.3 ng/mL, with a greatest combined sensitivity of 74.2% (95% CI = 56.8-...). Patients with suPAR lower than the cut-off value generated from the ROC curve (low group, 0) were subsequently compared to the patients who had values higher than the cut-off value (high group, 1), using the survival analysis sketched below.
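The dichotomized survival comparison reported next can be reproduced along the following lines; this sketch uses the lifelines library rather than the SPSS/GraphPad tools named above, and the follow-up data are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time (days), event (1 = death), group by cut-off.
df = pd.DataFrame({
    "days":  [37, 45, 51, 23, 40, 25, 22, 28, 24, 26],
    "death": [0,  0,  1,  1,  0,  1,  1,  1,  0,  1],
    "high":  [0,  0,  0,  0,  0,  1,  1,  1,  1,  1],  # 1 = suPAR >= cut-off
})

low, high = df[df.high == 0], df[df.high == 1]
for label, grp in (("low", low), ("high", high)):
    kmf = KaplanMeierFitter()
    kmf.fit(grp.days, grp.death, label=f"suPAR {label}")  # KM survival estimate
    print(label, "median survival:", kmf.median_survival_time_)

res = logrank_test(low.days, high.days,
                   event_observed_A=low.death, event_observed_B=high.death)
print("log-rank p =", round(res.p_value, 3))
```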
We found that the high group was associated with a higher overall mortality in the Kaplan-Meier analysis. The respective median times to mortality were 37 (23-51) days for the low group and 25 (22-28) days for the high group (log-rank test, p = 0.006; Figure 3).

[Figure 3 caption: suPAR levels on admission and COVID-19 survival probability. suPAR levels were measured on hospital admission (within 24 h). The Kaplan-Meier method was used for survival probability estimation and the log-rank test for the two-group comparison. The COVID-19 group was dichotomized above and below the cut-off value generated from the ROC curve (6.31 ng/mL). Dashed line, ≥ cut-off value (high group, 1); solid line, < cut-off value (low group, 0). The respective median times to mortality for the two groups were 37 (23-51) days for the low group and 25 (22-28) days for the high group (log-rank test, p = 0.006). suPAR = soluble urokinase-type plasminogen activator receptor.]

COVID-19 survivors and non-survivors differed only in terms of age, APACHE II scores (for the critically ill subpopulation), and suPAR levels on admission. The univariate and multivariate regression analyses showed that suPAR levels could be assumed to be an independent predictor of mortality in COVID-19, in the presence of age [1.000 (1.000-1.001), p = 0.001]. Of note, a combined ROC curve of suPAR levels and age did not improve the prognostic value.

Discussion

To our knowledge, this is the first report to compare the prognostic value of suPAR levels in critically ill COVID-19 and non-COVID-19 septic patients with respiratory infections in terms of mortality. Our results showed that, in our cohorts, suPAR can differentiate COVID-19 patients who will not survive their illness, but not septic patients. In COVID-19, suPAR has been shown to be elevated in severe versus moderate disease and, furthermore, has been attributed a prognostic value in identifying patients with a poor prognosis, such as prolonged stay or requirement of mechanical ventilation [16][17][18][19][20][21]. Very few studies have investigated its role in mortality [5,22,23]. We previously showed that critically ill COVID-19 patients who will not survive their disease have much higher ICU admission suPAR levels compared to survivors [5]. The prognostic ability of suPAR did not seem to be affected by the administration of dexamethasone. Similarly, Napolitano et al. proposed suPAR as a serum biomarker of clinical severity and outcome in patients who were hospitalized with COVID-19 [23]. They demonstrated that the non-survivor group exhibited higher levels of serum suPAR (4.5 ng/mL) than the survivor group (3.2 ng/mL), suggesting that suPAR may be predictive of survival. On the other hand, the utility of suPAR in diagnosing sepsis has been confirmed in many studies [24]. Its role, however, as a prognosticator of poor sepsis outcomes, and specifically in predicting mortality in the context of sepsis, has raised controversy. In a very recent study, it was shown that suPAR could predict early mortality among sepsis patients at a cut-off value of 13.4 ng/mL [25]. Furthermore, suPAR could be considered a predictor of bad prognosis and poor survival in sepsis seven days following admission [26]. High suPAR levels on admission in initially septic and non-septic patients were shown to be an independent predictor of ICU and 28-day mortality [27].
suPAR was shown to be stably elevated during the first week of treatment in the ICU, and could predict mortality in both septic and non-septic critically ill patients [28]. The authors suggested that the high suPAR concentrations upon ICU admission likely reflected activation of the immune system prior to sepsis development. It is possible that the lower prevalence of comorbidities in our cohort may influence suPAR levels and affect its predictive ability in the septic ICU population. Limitations of our study include its single-center nature, the moderate number of patients, and the low mortality rate (19%) in the non-COVID-19 critically ill septic patients. Furthermore, the COVID-19 group had a higher incidence of comorbidities, which might explain the higher mortality. However, a strength was the well-characterized non-COVID-19 patients, who on ICU admission were initially non-septic. Patients who developed sepsis within 24-48 h from ICU admission were excluded, in order to rule out underlying sepsis on admission. suPAR was measured in samples collected within 6 h from sepsis occurrence. We also matched our septic patients to the COVID-19 patients based on age and critical illness severity, as expressed by the APACHE II and SOFA scores, and selected patients with a site of infection in the respiratory system.

Conclusions

Our results demonstrated the presence of high levels of circulating suPAR in COVID-19 compared to another severe inflammatory syndrome, namely sepsis, caused mostly by Gram-negative bacteria in the respiratory system. Furthermore, in COVID-19, yet not in sepsis, suPAR levels could differentiate survivors from non-survivors, with a good prognostic accuracy from the generated ROC curve. Combined with other biomarkers, a profile can be built for COVID-19 patients that is clearly associated with poor outcome and exhibits a pattern distinct from septic syndromes.
Need of the Hour— COVID-19 for Cardiologists

The most distressing pandemic at present is coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]. Even though COVID-19 predominantly affects the lungs by causing acute respiratory distress syndrome, the heart is not completely spared, and people with underlying heart disease are at risk. The main aim of this article is to summarize the available evidence of cardiac involvement in COVID-19 patients and to outline precautions for patients with underlying cardiovascular disease (CVD). People with cardiopulmonary disease are at higher risk. As the virus survives at low temperatures, the nose and sinuses are the main sources of infection. This virus can affect the heart, especially a diseased heart. Although information about COVID-19 is changing on an hourly basis, information from the previous severe acute respiratory syndrome (SARS)- and Middle East respiratory syndrome (MERS)-producing coronaviruses offers insight [2]. They were linked to cardiac disease, as they produced inflammation of the heart muscle, myocardial infarction, and rapid-onset heart failure.

Most of the published information on COVID-19 is from China. Three important publications in the New England Journal of Medicine, Lancet, and Allergy are based on cases from China [3-5]. Even though the initial study published in Lancet showed male preponderance (70% males), shortly afterwards another publication in Allergy showed a 1:1 ratio of male (50.7%) and female involvement. Initial studies showed a low association of chronic cardiac diseases (10%) in COVID-19 patients, with acute cardiac injury accounting for 23%. Acute cardiac injury was diagnosed when high-sensitivity cardiac troponin I was > 28 pg/mL. More recent studies have shown an increased association of CVD, up to 40%, in COVID-19 patients. Fifty percent of COVID-19 patients had comorbidities, the most common being hypertension (30%), diabetes (19%), and coronary artery disease (8%). This high proportion of CVD was the cause of high mortality in patients with COVID-19. COVID-19 in patients with compensated heart failure may precipitate heart failure [6].
In COVID-19 patients, cardiovascular disorders including arrhythmias may occur due to drug therapy, especially the antiviral drugs or drug interactions, so these patients also require close monitoring. Even though the association of comorbid conditions with COVID-19 was high, acute myocardial infarction was reported in only one young female, who had normal coronaries on angiogram. These reports urge cardiologists to warn patients about the potential risk and to encourage those with underlying heart disease to practice "additional, reasonable precautions."

The mechanism of the increased risk of COVID-19 for cardiovascular disease patients is not clear [7]. The virus penetrates the cell through the angiotensin-converting enzyme 2 (ACE2) receptor and then multiplies to produce the disease. These receptors are present on epithelial cells of the lung, intestine, kidney, and blood vessels [8]. Patients with hypertension and diabetes who receive angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) have higher expression of ACE2 receptors on the target cells, which may facilitate entry of the virus (Fig. 1); a similar effect is not seen with calcium channel blockers. Previously, acute myocarditis and heart failure were reported with MERS-CoV. As SARS-CoV-2 and MERS-CoV have similar pathogenicity, myocardial injury caused by SARS-CoV-2 infection may be immune mediated through the ACE2 receptor, or due to cytokine storm and/or hypoxia from acute respiratory distress syndrome (ARDS) [9]. Added myocardial damage along with ARDS makes the patient's prognosis worse, and treatment becomes difficult and complex. During the course of COVID-19, more frequent cardiac involvement occurs due to the intense systemic inflammatory response. Deaths from COVID-19 are due to cytokine storm syndrome, which culminates in ARDS, and fulminant myocarditis. Fulminant myocarditis is primarily caused by viral infection, with mortality rates as high as 50 to 70%.

Concern about the continuation of ACEIs and ARBs for patients already taking them has been discussed by different hypertension societies [10]. Other drugs used in CAD patients, such as statins, antiplatelets, and β-blockers, were discussed in the webinar jointly released by the American College of Cardiology (ACC) and the Chinese cardiology association. They recommend continuing statins, but with close monitoring; antiplatelets and β-blockers are also to be continued. If steroids are required for fulminant myocarditis, the dose should be low to moderate. If a COVID-19 patient presents with ST-elevation myocardial infarction (STEMI), thrombolysis should be considered. If primary percutaneous coronary intervention (PCI) is required, it is better to perform it in isolated catheterization laboratories; if these are not available and the procedure is mandatory, central air circulation in the catheterization laboratory should be kept to a minimum.

At present we know only about the acute course of the disease; these patients need to be followed for long-term effects. SARS-CoV-infected patients on long-term follow-up of 12 years showed hyperlipidemia (68%), cardiovascular system abnormalities (44%), and glucose metabolism disorders (60%) [24]. As SARS-CoV-2 has similar pathogenicity to SARS-CoV, follow-up of COVID-19 patients for cardiac events and altered metabolic status is required.
With this increasing need for awareness of cardiac disease in COVID-19, the ACC released a clinical bulletin for the cardiac care team [20]. Cardiac complications in COVID-19 resemble those of SARS, MERS, and influenza. The cardiologist has to assist other clinical specialties in managing cases of COVID-19 with cardiac complications. Echocardiography should be performed in patients with electrocardiogram (ECG) changes and in those demonstrating heart failure, arrhythmia, or cardiomegaly. Patients with CVD are at risk of contracting COVID-19 and have a worse prognosis. As there is an increased risk of secondary infections with COVID-19, patients are advised to remain current with vaccinations, including the pneumococcal and influenza vaccines, in accordance with current ACC/American Heart Association (AHA) guidelines. In patients with heart failure or volume-overload conditions, fluid administration should be carefully monitored. Strategies should include separating cardiovascular patients with COVID-19 symptoms from other patients, including outpatients, and substituting telephone or telehealth consultations for in-person reviews of stable CAD patients, in order to prevent possible nosocomial COVID-19 infections [21]. Care should be taken to avoid underdiagnosis of acute myocardial infarction (AMI) in the COVID-19 setting.

Based on a small study of 26 patients in France and 16 nonrandomized trials, it is believed that the time taken to resolve viral shedding in COVID-19 patients is decreased by treatment with hydroxychloroquine alone or in combination with azithromycin [25]. With raised concerns about the increased risk of arrhythmic death caused by QT prolongation associated with the use of chloroquine, hydroxychloroquine, or azithromycin (alone or in combination), hydroxychloroquine or chloroquine therapy should occur in the context of a clinical trial or registry until sufficient evidence is available for use in clinical practice [26]. Regarding the management of patients with acute coronary syndrome (ACS) in the setting of COVID-19, the SCAI consensus is to preferably take true STEMI patients for primary angioplasty and to avoid diagnostic or therapeutic interventions in NSTEMI ACS patients with low-risk features. It is also recommended to avoid endotracheal intubation in the catheterization laboratory as much as possible; if it must be done, all nonessential personnel should be removed from the laboratory to avoid potential exposure to aerosolized virus. For patients in respiratory distress, intubation before transfer to the catheterization laboratory is advised to avoid aerosolization [27].

In conclusion, the exact mechanism through which SARS-CoV-2 causes COVID-19 is not known; it may be through the ACE2 receptors and immune mechanisms. SARS-CoV-2 infection in patients with underlying CVD has a worse prognosis. Cardiovascular protection should be given attention during treatment for COVID-19.
Domain Expert Platform for Goal-Oriented Dialog Collection

Today, most dialogue systems are fully or partly built using neural network architectures. A crucial prerequisite for the creation of a goal-oriented neural network dialogue system is a dataset that represents typical dialogue scenarios and includes various semantic annotations, e.g. intents, slots and dialogue actions, that are necessary for training a particular neural network architecture. In this demonstration paper, we present an easy-to-use interface and its back-end, oriented to domain experts, for the collection of goal-oriented dialogue samples. The platform not only allows collecting or writing sample dialogues in a structured way, but also provides a means for simple annotation and interpretation of the dialogues. The platform itself is language-independent; it depends only on the availability of particular language processing components for a specific language. It is currently being used to collect dialogue samples in Latvian (a highly inflected language) which represent typical communication between students and the student service.

Introduction

Modeling of human-computer interaction through dialogue systems and chatbots has attracted great interest among researchers and industry ever since the first chatbot, Eliza (Weizenbaum, 1966), was created. This interest surged after the successful introduction of Siri (Bellegarda, 2013). Today, virtual assistants, virtual agents and chatbots are present everywhere: on mobile devices, on different social networking platforms, on many websites, and through smart home devices and robots. Virtual conversational agents are usually implemented as end-to-end neural network models, or their components are implemented through neural network architectures (Louvan and Magnini, 2020). Such architectures require the creation of datasets that represent various dialogue scenarios, as well as knowledge of the specific domain. This information has to be provided in specific data formats that in many cases are too complicated for domain experts. Moreover, the required training datasets usually include various annotation layers, such as named entities, dialogue acts, intents, etc. The creation of such datasets is a complex task, and the datasets are not completely isolated and abstracted from the particular dialogue system. Thus, domain experts involved in the creation of the datasets must have a high-level understanding of the overall structure of the dialogue system and its components, and of how it is reflected in the annotated dialogue samples. This demonstration paper addresses this issue by presenting a web-based platform for the creation and maintenance of dialogue datasets. The interface of the platform is very simple and high-level: it allows a domain expert without detailed technical knowledge of the underlying dialogue system to create and update a representative training dataset, as well as to maintain the underlying database of domain- and organisation-specific information that will be used for question answering. The platform provides tools for the creation of goal-oriented dialogue systems, in particular:
• creation of datasets for dialogue systems that provide (or generate) responses depending on user input, intents, and the previous actions of the dialogue system;
• creation of datasets for dialogue systems that cover one or several topics;
• slot filling, including slot filler (e.g., named entity) normalization and annotation;
• creation and maintenance of slot filler aliases;
• creation and maintenance of the knowledge base and interactive response selection;
• response generation, including the generation of inflectional forms for syntactic agreement.

Our platform not only supports the collection of dialogue scenarios, but also simulates prototypical interaction between human and computer. The tool has been successfully used for the creation of a dialogue dataset for the virtual assistant that supports the work of the student service in relation to three frequently asked topics: working hours and contacts of the personnel and structural units (e.g. libraries), issues regarding academic leave, and enrollment requirements and documents to be submitted (Skadina and Gosko, 2020). In the next chapters of this paper, we describe our motivation to develop the platform, its overall architecture and main components, and the domain expert interface and its main components.

Background and Motivation

For English and several other widely used languages, many publicly available dialogue datasets have been created and are reused for different research and development needs (e.g., Budzianowski et al. (2018), Zeng et al. (2020)). However, in the case of less-resourced languages, few or no training datasets are available (Serban et al., 2018). To our knowledge, there is no publicly available dataset for Latvian that could be used for goal-oriented dialogue system modelling. To overcome this obstacle, some research groups machine-translate existing English datasets into the low-resourced languages, while others try to build training datasets from scratch. When possible, crowdsourcing, including gamification (Ogawa et al., 2020), is used as well. However, there is no best recipe for obtaining or collecting dialogue samples for a specific NLP task (in our case, dialogue modeling) for a less-resourced language with a relatively small number of speakers.

The motivation of our work is the necessity to build virtual assistants in less-resourced settings. The practical use case to test the platform has been the everyday communication between students and the student service of the University of Latvia. Since this communication has never been intentionally recorded, we started with the analysis of data retrieved from an online student forum to identify the most common topics, question and answer templates, and the typical dialogue scenarios. For demonstration purposes, we have chosen three common topics: working hours, academic leave, and enrollment requirements. Elaborated and annotated sample dialogues constituting the training dataset have been specified by a domain expert using the dialogue management platform presented in this paper.

Since we focus on goal-oriented virtual assistants, the Hybrid Code Networks (HCN) architecture has been selected for the implementation (Williams et al., 2017), allowing us to combine recurrent neural networks (RNNs) with the domain-specific knowledge and action templates of the dialogue system. The concrete dialogue system is implemented within the DeepPavlov framework.

Overall Architecture and Components

The platform presented in this paper is designed to support three use cases:
1. To create and gradually improve a collection of dialogue samples necessary for developing and testing a goal-oriented dialogue system.
2. To support (re-)training of a goal-oriented dialogue system.
3. To support dialogue testing in the inference mode (a minimal sketch of the latter two use cases is given below).
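As a rough illustration of use cases 2 and 3, a DeepPavlov-based system can be re-trained and queried along the following lines. The configuration path is hypothetical, and the exact entry points and call signatures may differ between DeepPavlov versions; the platform's actual configuration additionally wires in its custom SQLite data reader and slot filler.

```python
# Hedged sketch of (re-)training and inference for a DeepPavlov go-bot model.
from deeppavlov import train_model, build_model

CONFIG = "configs/gobot_student_service.json"  # hypothetical platform config

# Use case 2: (re-)train the goal-oriented bot on dialogues exported from the
# Dialogues database by the custom data reader referenced in the config.
train_model(CONFIG)

# Use case 3: run the trained bot in inference mode for dialogue testing.
bot = build_model(CONFIG)
print(bot(["When does the dean's office open?"]))  # batch of one utterance
```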
Training and running a dialogue system in the inference mode is performed through the DeepPavlov framework by passing the goal-oriented bot model configuration along with relations to other objects that are specific to the platform. Figure 1 illustrates the architecture of the platform. Apart from the domain expert user interface, described in detail in Section 4, the key components of the platform are four databases that store the dialogue scenarios, the relevant entities and their aliases for slot filling, the required external knowledge for question answering, and reusable templates for response generation.

Dialogue Database

Dialogues created by the users of the platform (i.e., by domain experts, not end-users) are stored in the SQLite database Dialogues to support concurrent modification. The dialogue database stores potential end-user utterances together with their respective annotations. Intents for the particular dialogue dataset are defined in a separate view of the platform's interface. The predefined intents are linked to utterances during the dialogue writing process (for details, see Section 4) and later used for training. In our demonstration dialogue system, we use a Keras classification model with Latvian fastText word embeddings for intent detection. The configurator of the platform uses a custom data reader that reads training data from a custom-schema SQLite database. SQLite is used because of the high modification rate produced by the platform's user interface for dialogue editing.

Knowledge Base

The database Knowledge base stores the external knowledge that is necessary for the dialogue system to provide answers to the end-users. Such knowledge is usually dynamic and can change while the dialogue system is deployed (e.g., working hours of the university personnel in our demonstration case).

Entity Database

For our use case, the named entity recognition (NER) model combines a neural network model and a rule-based model. The neural network model is based on Latvian BERT word embeddings (Znotins and Barzdins, 2020). To support entity classes of a particular domain, the NER model is trained on a larger general-domain dataset (Gruzitis et al., 2018; Paikens et al., 2020) and a smaller domain-specific dataset. The combined model recognizes not only commonly used entity classes like persons, locations and organizations, but also domain-specific entities like job positions and working hours. The rule-based NER is based on the Aho-Corasick algorithm (Aho and Corasick, 1975) with additional regular expression rules to ensure entity detection in various inflectional forms, as well as detection of very specific domain entities, like room names and specific job positions, that would not be recognised otherwise due to the limited amount of training data. In our demonstration dialogue system, a custom slot filler is implemented, which relies on normalized entities returned by the NER module to be directly filled into the respective slots. The normalization is done in two steps. First, after the recognition of named entities (NEs), an external NE normalization service is called, which provides base forms for both single-word and multi-word entities. Second, the database Entities is consulted to align the recognised and normalised entities (entered by the end-user) with the corresponding entities in the database. This also includes resolving NE aliases (for more details see Subsection 4.5).
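To make the rule-based matcher concrete, here is a minimal sketch using the pyahocorasick package. The alias entries are illustrative stand-ins for the Entities database, and the regular-expression layer for Latvian inflectional forms and word boundaries is omitted.

```python
# Hedged sketch of Aho-Corasick alias matching with normalization.
import ahocorasick

ALIASES = {  # surface form -> (entity type, normalised form); illustrative only
    "faculty of computing": ("ORG", "Faculty of Computing"),
    "df": ("ORG", "Faculty of Computing"),
    "fc": ("ORG", "Faculty of Computing"),
    "dean": ("position", "dekāns"),
}

automaton = ahocorasick.Automaton()
for surface, canon in ALIASES.items():
    automaton.add_word(surface, canon)
automaton.make_automaton()

def match_entities(text):
    """Scan lowercased text; return (type, normalised form) for each hit.
    A production system would also enforce word boundaries and inflection rules."""
    return [canon for _end, canon in automaton.iter(text.lower())]

print(match_entities("Working hours of the dean of DF?"))
# -> [('position', 'dekāns'), ('ORG', 'Faculty of Computing')]
```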
Template Database

Responses to the end-user are generated using a template-based approach, which depends on the recognised intent and slots. The response templates support additional markup for slot filler inflection, which is replaced with the correctly inflected word forms during the response generation step. In our demonstration system, word forms are inflected using a Latvian morphological analyser (Paikens et al., 2013) as an external service. All template data are stored in and reused from the database Templates.

User Interface

In this section we present the overall interface and the constituents (sub-windows) of our dialogue data preparation platform: the window for dialogue collection, the action template editing window, the knowledge base preparation and management window, the window for intent definition, and the window for creation and maintenance of slot filler aliases (see Figure 2). The user interface for data editing is powered by a Python HTTP backend that serves static files and API calls. The backend modifies all four databases directly and uses the slot filler to extract slots for the user interface. The frontend is written in JavaScript and VueJS and runs inside a web browser.

Dialogue Collection Window

The central part of the dialogue collection platform is the dialogue collection window. It contains all dialogues submitted by the user. The dialogues can be changed at any time:
• by adding or deleting one or several utterances,
• by changing the text of an utterance and the corresponding slot and intent values,
• by changing the corresponding dialogue act.

To enter a new dialogue, the user pushes the "Add dialogue" button and writes an utterance (Figure 3). By pushing the "Extract" button, entities (slots) in the user's utterance are automatically identified by the named entity recognizer, extracted and grammatically normalised (see Subsection 3.3), and, if necessary, semantically normalised (for details, see Subsection 4.5). The user can then specify an intent and select an action that needs to be performed by the dialogue system. When the action of the dialogue system is selected, the expected response from the bot is displayed to the user, allowing the possible answer to be checked and changed in case a mistake has been identified (for details see Subsection 4.2). The utterance entering process continues until dialogue writing is completed. After pushing the "Done" button, the dialogue is added to the Dialogues SQLite database.

Action Definition and Editing

The action template window is used to define the action performed by the dialogue system. For our purposes we have introduced two types of actions: (1) templates of bot answers for a particular action, and (2) information retrieval requests from the knowledge base, depending on identified slot values. In the simplest case, the system's action is a fixed utterance specified in the action template window. However, in most cases the dialogue action and answer depend on previous actions and information gathered from the user. Therefore we introduced mechanisms for generating context-dependent, grammatically correct forms of the slot values identified during the dialogue. The slot values used for answer generation can come from the last utterance or previous ones, as well as from the bot's previous answers.
For example, the template for the action 'info working hours' contains the utterance template 'Working hours for #position of the #ORG #PERSON: #time', where '#position', '#ORG', '#PERSON' and '#time' represent slot values (entities) identified during the dialogue or retrieved from the knowledge base. Action templates can also include form generation instructions, which are very important for fluent output generation in the case of inflected languages. For example, in the Latvian template 'Kuras fakultātes #position@g darbalaiku Jūs vēlētos noskaidrot?' (Which faculty's #position@g working hours would you like to know?), the item #position represents a slot (entity) identified during the dialogue (e.g., dekāns (dean)), while '@g' requests generation of the genitive form of this entity (e.g., dekāna instead of the lemma dekāns). When an action requires information retrieval from the database (template api call), the previously extracted information from the user's input (slots) is used. For example, if the answer to the previous question is "Faculty of Computing", then the query to the database will ask for the working hours of the dean of the Faculty of Computing. Similarly to dialogue utterances, actions and their templates can be easily modified during the dialogue writing process; new actions can be added and unnecessary actions removed.

Knowledge Base Preparation

Goal-oriented dialogue systems often include means for knowledge retrieval from a database or any other type of knowledge base. In some cases the database already exists, while often the creation of the knowledge base is part of the dialogue system building process. To ensure consistency between dialogues and the information in the database, the database can be created, filled and modified during the dialogue collection process. The knowledge base preparation and maintenance window has a very simple interface allowing the user to enter new entries, change the existing ones, or even modify the database structure. To ensure consistency between the different information pieces of the dialogue system, the names of columns in the database need to correspond to the entity types of the dialogue system.

Intents

For intent management, a small and simple window is provided, allowing the user to add, modify and delete intents. Intents defined in this window are used during the dialogue writing process: they can be assigned to each user utterance (for details see Subsection 4.1).

Creation and Maintenance of Slot Filler Aliases

A common problem in dialogue systems that include knowledge retrieval is the mismatch between an entity in the utterance submitted by the user and the correct, normalised entity stored in the database. For instance, when the dialogue system asks to specify the name of a particular organization, the user can enter its abbreviation (e.g., "DF" or "FC" instead of Faculty of Computing), a commonly used shortened form, jargon, or make spelling errors (e.g., errors in capitalization: faculty of computing instead of Faculty of Computing). To overcome this bottleneck we introduce the entity alias management window, where the user can specify the official (normalised) form of the entity as stored in the knowledge base, together with its typical aliases (see Figure 4). Similarly to other windows of this platform, the entity editing window allows adding, editing and deleting entities and their aliases. Each "official" entity can have several aliases (synonyms). We also keep the entity type, in case the same string belongs to several types.

Figure 3: Demonstration of dialogue preparation: utterance writing, slot filling, intent identification and retrieval from the database.
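As an illustration of the template markup described above, the following minimal sketch substitutes #slot markers and applies the @g (genitive) tag. The dictionary-based inflect function is a stub standing in for the external Latvian morphological analyser the platform actually calls.

```python
# Hedged sketch of template rendering with slot-inflection markup (#slot, @g).
import re

GENITIVE = {"dekāns": "dekāna"}  # stub for the morphology service

def inflect(word, case_tag):
    """Return the requested inflected form; pass through if unknown."""
    if case_tag == "g":
        return GENITIVE.get(word, word)
    return word

def render(template, slots):
    def repl(match):
        name, case_tag = match.group(1), match.group(2)
        value = slots.get(name, match.group(0))  # leave marker if slot unfilled
        return inflect(value, case_tag) if case_tag else value
    return re.sub(r"#(\w+)(?:@(\w))?", repl, template)

print(render("Kuras fakultātes #position@g darbalaiku Jūs vēlētos noskaidrot?",
             {"position": "dekāns"}))
# -> Kuras fakultātes dekāna darbalaiku Jūs vēlētos noskaidrot?
```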
Conclusion

In this paper, we have presented a configurable platform for dialogue collection that supports synchronization of the information necessary for building a goal-oriented dialogue system, in addition to the specification of dialogue scripts. The presented platform is publicly available at http://bots.ailab.lv/. It has been used for the creation of a Latvian-specific dataset of dialogues between students and a student service. Following recommendations from reviewers, the platform is currently demonstrated on prototypical English dialogue samples, demonstrating its scalability to other languages. We will add a short walkthrough video demonstrating the main features of the platform. The next development tasks include simple export of dialogue data in commonly used formats to facilitate experiments with various neural dialogue system architectures, and support for a one-click re-training process, which is currently implemented as a separate background process.

Figure 4: Window for creation and maintenance of slot filler aliases.
Mst1 and Mst2 kinases: regulations and diseases

The Hippo signaling pathway has emerged as a critical regulator of organ size control. The serine/threonine protein kinases Mst1 and Mst2, mammalian homologs of the Hippo kinase from Drosophila, play central roles in the Hippo pathway, controlling cell proliferation, differentiation, and apoptosis during development. Mst1/2 can be activated by cellular stressors, and the activation of Mst1/2 might enforce a feedback stimulation system to regulate oxidant levels through several mechanisms, in which regulation of the cellular redox state might represent a tumor suppressor function of Mst1/2. As in Drosophila, murine Mst1/Mst2, in a redundant manner, negatively regulate the Yorkie ortholog YAP in multiple organs, although considerable diversification in pathway composition and regulation is observed in some of them. Generally, loss of both Mst1 and Mst2 results in hyperproliferation and tumorigenesis that can be largely negated by the reduction or elimination of YAP. The Hippo pathway integrates with other signaling pathways, e.g. the Wnt and Notch pathways, and coordinates with them to impact tumor pathogenesis and development. Furthermore, the Mst1/2 kinases also act as important regulators of immune cell activation, adhesion, migration, growth, and apoptosis. This review will focus on recent updates on those aspects of the roles of the Mst1/2 kinases.

Introduction

The Hippo pathway plays a very important role in controlling cell proliferation and differentiation, and in monitoring organ size and oncogenesis. This pathway was first discovered in Drosophila through genetic screens for regulators of organ size. The loss-of-function (LOF) mutant of the protein kinase "Hippo" exhibits tissue overgrowth and tumorigenesis, in which the increased cell number is associated with accelerated cell cycle progression and a failure of developmental apoptosis [1-5]. The Hippo phenotype closely resembles the phenotypes of LOF mutants of the protein kinase Warts [6,7] and the small noncatalytic protein Mats [8], as well as a milder phenotype of another noncatalytic scaffold protein, Salvador (Sav) [9,10]. Sav binds both Hippo and Warts and promotes Hippo phosphorylation of Warts; Mats is another Hippo substrate that binds to and promotes Warts activation. With the activation of those downstream elements, the key role of Hippo signaling is to inhibit Yorkie [11,12], a transcriptional coactivator of proliferative and pro-survival genes. These studies in Drosophila defined a developmentally regulated growth-suppressive and proapoptotic pathway operated by the Hippo kinase. Each of the core components of this pathway is evolutionarily conserved, and their counterparts have been identified in mammals.
In general, the mammalian Ste20-like kinases Mst1 and Mst2 [13,14] (Mst1/2, corresponding to Hippo in Drosophila) associate, through their respective SARAH coiled-coil domains, with the WW-domain scaffolding protein WW45 (corresponding to Sav in Drosophila), which binds both Mst1/2 and the large tumor suppressor kinases Lats1/2 (corresponding to Warts in Drosophila) [15], thereby promoting Mst1/2 phosphorylation of Lats. Mst1/2 also phosphorylate Mps one binder kinase activator-like 1 (Mob1A/B, corresponding to Mats in Drosophila) [16,17], which enhances Mob1's ability to bind and activate Lats1/2; phospho-Mob1 binds to and promotes Wts/Lats autophosphorylation and activation. Lats1/2 phosphorylate Yes-associated protein (YAP, corresponding to Yki in Drosophila) [18], which promotes 14-3-3 binding to YAP, causing YAP nuclear exit and thereby inhibiting its function. Intranuclear YAP/Yki mainly promotes cell proliferation and resists cell death through the Scalloped/TEAD transcription factor(s). Loss of Mst1/Mst2 results in YAP-dependent accelerated proliferation, resistance to apoptosis, and massive organ overgrowth. Many aspects of the Hippo signaling pathway are discussed in depth in several recent reviews [19-24]. In this review, we will focus on recent updates on the roles of the mammalian "Hippo" kinases, i.e., Mst1 and Mst2, in the regulation of the cellular redox state and their involvement in organ size control, tumorigenesis, and immune regulation.

Mst1/2 and the cellular redox state

Oxidative stress induces the activation of Mst1/2 [25]. Thioredoxin-1 (Trx-1), a conserved antioxidant protein well known for its disulfide reductase activity, can physically associate with the SARAH domain of Mst1 in intact cells and inhibit the homodimerization and autophosphorylation of Mst1, thereby preventing Mst1 activation, whereas H2O2 abolishes this interaction and eventually causes the activation of Mst1. Thus, Trx-1 might function as a molecular switch to turn off the oxidative stress-induced activation of Mst1 [26]. Beyond Trx-1 as a redox-sensitive inhibitor of Mst1, the molecular mechanism of reactive oxygen species (ROS)-induced Mst1 activation remains to be defined. The Hippo/Mst1 kinase directly phosphorylates and activates the forkhead box proteins (FOXO), which causes expression of proapoptotic genes, such as the FASL and TRAIL genes, under stress conditions. The apoptosis of cultured neurons induced by oxidative stress or by Mst1 overexpression is blocked by RNAi depletion of FOXO [27]. Mst1 mediates oxidative stress-induced neuronal cell death by phosphorylating the transcription factor FOXO3 at serine 207 [27], or FOXO1 at serine 212 [28]. Mst1 and its scaffold protein Nore1 are required for the death of granule neurons upon growth factor deprivation and neuronal activity [28]. Yuan's group further demonstrated that oxidative stress induces c-Abl-dependent tyrosine phosphorylation of Mst1 and increases the interaction between Mst1 and FOXO3, thereby activating the Mst1-FOXO signaling pathway and leading to cell death in both primary cultured neurons and rat hippocampal neurons. These results suggest that the c-Abl-Mst-FOXO signaling cascade plays an important role in cellular responses to oxidative stress and might contribute to pathological states, including neurodegenerative diseases, in the mammalian central nervous system (CNS) [29,30].
Indeed, Mst1-mediated FOXO3 activation in response to β-amyloid (Aβ) has been shown to mediate the death of selective neurons in Alzheimer's disease (AD) [31]. Furthermore, the amyotrophic lateral sclerosis (ALS)-associated SOD1(G93A) mutant induces dissociation of Mst1 from the redox protein Trx-1 and promotes Mst1 activation in spinal cord neurons in a reactive oxygen species-dependent manner. Genetic deficiency of Mst1 delays disease onset and extends survival in mice expressing the ALS-associated G93A mutant of human SOD1 [32]. Lim's group recently also showed that the Hippo-Foxa2 signaling pathway plays a role in peripheral lung maturation and surfactant homeostasis [33]. In the immune system, Mst1-deficient peripheral T cells have impaired FOXO1/3 activity and decreased FOXO protein levels, indicating a crucial role of the Mst1-FOXO signaling pathway in the maintenance of naive T cell homeostasis [34]. Mst1-deficient lymphocytes and neutrophils exhibit enhanced loss of mitochondrial membrane potential and increased susceptibility to apoptosis [35]. More recently, Valis et al. further demonstrated that the activation of Hippo/Mst1 is able to stimulate the transcription of another proapoptotic mediator, NOXA, in a FOXO1-dependent manner via acetylation of the histone proteins in the NOXA promoter [36]. The Hippo/Mst1-FOXO1-NOXA axis is a novel tumor suppressor pathway that controls apoptosis in cancer cells exposed to anticancer drugs such as α-TOS [36]. In contrast, a recent study demonstrated that Ras activation and mitochondrial dysfunction cooperatively stimulate production of ROS, resulting in activation of JNK signaling, which cooperates with oncogenic Ras to inactivate the Hippo pathway, leading to upregulation of the YAP targets Unpaired (an Interleukin-6 homologue) and Wingless (a Wnt homologue) in Drosophila [37], although an earlier study showed that activated K-Ras induces apoptosis by engaging the RASSF1A-Mst2-Lats1 pathway [38]. Recently, Morinaka et al. demonstrated that peroxiredoxin-1 (Prdx1), a cysteine-containing, highly conserved enzyme that reduces H2O2 to H2O and O2, interacts with Mst1 under conditions of oxidative stress and that Prdx1 is required for Mst1 activation by H2O2, as knockdown of Prdx1 is associated with loss of Mst1 activity [39]. Chernoff's group also showed that both Mst1 and Mst2 interact with Prdx1 in HEK-293 cells or in human hepatocarcinoma HepG2 cells under oxidative stress conditions [40]. However, the latter study supports Prdx1 as a downstream target, rather than an upstream regulator, of Mst1. Mst1 phosphorylates Prdx1 at the highly conserved Thr-183 site, resulting in inactivation of Prdx1 with subsequently increased H2O2 levels in cells. As Mst1 can be activated by increased H2O2 levels, inactivation of Prdx1 by activated Mst1 might enforce a feedback stimulation system to prolong or intensify Mst1 activation. Such a feedback stimulation system, resulting in higher oxidant levels and DNA damage, might represent a tumor suppressor function of Mst1/2 to prevent the accumulation of mutations [40]. Consistently, our recent study shows that elimination of Mst1/2 from liver cells is accompanied by increased expression of a cohort of antioxidant enzymes important for ROS elimination [41].
The increased expression of those enzymes, such as glutathione reductase (GSR), NAD(P)H:quinone oxidoreductase (NQO1), γ-glutamyl-cysteine ligase (GCL, including the catalytic subunit (GCLC) and modifier subunit (GCLM)), catalase (CAT), copper/zinc superoxide dismutase (SOD), cytosolic thioredoxin (Txn1) and mitochondrial thioredoxin (Txn2), promotes the accumulation of glutathione (GSH). The accumulation of GSH in the Mst1/2-deficient liver results in the activation of the GA-binding protein (GABP), a critical transcription factor for the expression of YAP [41,42]. In addition, Mst2-Lats1 can physically bind to and promote phosphorylation of GABPβ, which interrupts GABPα/β dimerization, prevents their nuclear localization, and inhibits their transcriptional activity. Thus, in addition to inhibiting YAP function by phosphorylating YAP and promoting YAP nuclear exit, Mst1/2-Lats signaling can also inhibit YAP function by downregulating its expression [41]. In contrast to the Mst1-FOXO signaling pathway, which leads to decreased ROS production, activation of the Mst1/2 pathway inhibiting YAP in liver tissues maintains higher levels of ROS (Figure 1). There is no doubt that oxidative stress activates Mst1/2 signaling; however, conflicting effects on the regulation of the cellular oxidative state upon Mst1/2 activation have been reported in different cell contexts. It is possible that the Mst-FOXO signaling pathway is predominantly activated in neurons or immune cells, resulting in decreased ROS production, whereas in other cell types, such as hepatocytes, the activation of Mst1/2-GABP-YAP signaling leads to increased ROS production. These critical but inconsistent findings indicate the importance and complexity of the inter-regulation among mitochondrial function, oxidant generation and/or clearance, and the Hippo signaling pathway. Increased production of ROS under pro-oxidant conditions would lead to Mst1/2 activation, resulting in phosphorylation of GABP, inhibition of its transcriptional activity, and downregulation of YAP expression, and consequently decreased expression of a variety of genes that encode mitochondrial proteins and proteins with antioxidant properties, resulting in increased cellular ROS and a diminished GSH/GSSG ratio [41]. On the other hand, GABP itself helps modulate the oxidative metabolism of the cell by regulating the expression of many genes necessary for cellular respiration in mitochondria, including enzymes involved in oxidative phosphorylation, such as cytochrome c oxidase subunits IV and Vb [43]. Growing evidence indicates that the cellular redox state and redox signaling have significant roles in regulating the metabolic fate and regenerative potential of adult tissues [44,45]. GABP is emerging as a critical component of the Hippo signaling pathway for its role in regulating the cellular redox state and cell growth.

The roles of Mst1/2 in organ size control and tumorigenesis

The Hippo signaling pathway is a tumor suppressor pathway. Mst1 or Mst2 single-knockout mice are viable and do not exhibit obvious organ overgrowth or tumor development, whereas Mst1 and Mst2 double-knockout (DKO) mice exhibit early embryonic lethality [46,47]. To define the roles of Mst1 and Mst2 in vivo, conditional knockout mice of Mst1 and Mst2 in a variety of tissues were generated, and severe context-dependent phenotypes were observed (Table 1).
For example, Hippo seems to control cell-cycle exit and terminal differentiation in some tissues without having major effects on organ growth, whereas in other tissues Hippo signaling maintains stem cell/progenitor compartments. The Hippo-Lats-Yorkie tumor-suppressor pathway established in Drosophila does not prevail in all mammalian tissues. In mammalian liver, Mst1/Mst2 negatively regulate Yap1, whereas in mouse embryo fibroblasts (MEFs), cell-cell contact results in Yap1 phosphorylation and nuclear exclusion equally well in wild-type and Mst1/Mst2 DKO MEFs [46]; in mouse keratinocytes, Yap inactivation during cellular differentiation occurs independently of Mst1/2 and Lats1/2 [48]. Thus, it appears that the wiring upstream of Yap1 and downstream of Mst1/Mst2 has diversified considerably in mammals compared with the Drosophila Hippo pathway.

Liver

We and other groups have demonstrated that Mst1 and Mst2 are the most potent tumor suppressors in the liver, and a single copy of either Mst1 or Mst2 can significantly inhibit tumor formation in the liver [46,49,50]. Elimination of both alleles of Mst1 together with heterozygosity for Mst2, and vice versa, results in the development of spontaneous hepatocellular carcinomas associated with loss of the remaining wild-type Mst1 or Mst2 allele in the tumors, whereas no tumors were observed in other organs of these mice. Conditional inactivation of Mst1/Mst2 in the liver results in the immediate onset of dramatic hepatocyte proliferation and hepatomegaly, followed by the development of hepatocellular carcinoma (HCC) and cholangiocarcinoma within 2 months, in which loss of Mst1/2-dependent inhibition of YAP contributes to liver cell proliferation and tumorigenesis. Inactivation of Mst1/Mst2 in the liver leads to loss of YAP(Ser127) phosphorylation and increased YAP nuclear localization. Knocking down YAP in Mst1/Mst2-deficient HCC cell lines results in massive cell death and cell cycle arrest; similarly, the restoration of Mst1 expression in these cells restores YAP(Ser127) phosphorylation and leads to cell cycle arrest and apoptosis. In contrast to Drosophila, Lats1/2 do not serve as the Mst1/Mst2-activated YAP kinase in hepatocytes, indicating the existence of a novel, as yet unidentified intermediary kinase downstream of Mst1/Mst2 that is critical for YAP(Ser127) phosphorylation in the liver [46]. However, our recent study shows that activation of Mst2/Lats1 can downregulate the expression of YAP by regulating GABPβ1 phosphorylation and cytoplasmic retention in HepG2 cells. Besides reduced YAP(Ser127) phosphorylation, the relative expression levels of YAP have also been shown to be significantly increased in human HCCs compared with nontumorous livers [41]. Nevertheless, both the upstream regulation of Mst1/2 and the full spectrum of Mst1/2 antiproliferative targets remain to be defined, as do the relative roles of these pathways in promoting hepatic carcinogenesis [51].

Intestines

The intestines of Mst1 or Mst2 single-knockout mice are indistinguishable from those of their wild-type counterparts. Mst1/2 intestinal DKO mice (Mst1−/−Mst2fl/fl-villin-Cre), with ablation of both Mst1 and Mst2 in the intestinal compartment, are normal at birth; however, they develop colonic adenomas within 3 months of age and survive for only about 13 weeks (median), accompanied by severe wasting.
Both the small and large intestines of Mst1−/−Mst2fl/fl-villin-Cre mice exhibit an expansion of stem-like undifferentiated cells expressing high levels of CD133, leucine-rich repeat-containing G-protein-coupled receptor 5 (Lgr5) and achaete-scute complex homolog 2 (Ascl2), which are stem cell markers in the intestine, an increased number of cells expressing CD44 and CD24, markers associated with colon cancer stem cells, and an almost complete absence of all secretory lineages. The loss of Mst1/2 in the intestine decreases phosphorylation of YAP(Ser127 and Ser384) and causes an increase in both YAP abundance and nuclear localization. The hyperproliferation and loss of differentiation caused by Mst1/2 deficiency can be entirely negated by the reduction or elimination of YAP [52,53]. The inactivation of Mst1/2 in the intestinal compartment, which promotes the hyperproliferation of intestinal stem cells and inhibits intestinal epithelial differentiation, is attributed largely to an enhancement of β-catenin action and an activation of Notch signaling. The enhanced β-catenin transcriptional activity in the intestinal compartment of the Mst1−/−Mst2fl/fl-villin-Cre mouse is evident from the increased abundance of the activated form of β-catenin (dephospho-Ser37/Thr41) and the Wnt targets Lgr5 and Ascl2 [52]. The expression levels of the Notch ligand Jagged 1, mediated possibly in part through upregulated Wnt signaling [54,55], the intranuclear Notch intracellular domain (NICD), and the abundance of hairy and enhancer of split 1 (Hes1), a Notch target gene, are all increased in the Mst1/Mst2-deficient intestine. This evidence indicates that the Notch signaling pathway is highly activated in the intestine of the Mst1−/−Mst2fl/fl-villin-Cre mouse. Mst1/Mst2-deficient intestines develop colonic adenomas, and unlike the polyps described in the Sav1-deficient colon [56], the polypoid lesions in the Mst1/Mst2-deficient colon do not exhibit a sawtooth/serrated architecture but hyperproliferative adenomas, which might result from an activation of β-catenin and/or the inactivation of the Hippo signaling pathway in these lesions [52,57].

Pancreas

The Hippo pathway is necessary for proper development and to preserve homeostasis in the liver and intestines, both of which, as well as the pancreas, develop from a primitive gut tube derived from the embryonic endoderm [58]. Thus, pancreas-specific Mst1 and Mst2 conditional knockout mice using Pdx1-Cre were generated to study the effect of the Hippo pathway during mouse pancreas development. Mst1/2 pancreas-specific knockout (Mst1/2-Pdx-Cre) mice were born with no distinctive pancreatic defects; however, in contrast to the hepatomegaly phenotype of Mst1/2 liver-specific knockout mice, Mst1/2-Pdx-Cre mice have a significantly decreased pancreas mass relative to that of wild-type littermate controls at adult age [59,60]. These mice exhibit obvious morphologic alterations, including acinar cell atrophy, overabundance of ductal structures, and smaller islets with abnormal α/β cell ratios in the pancreas. In brief, the pancreas becomes more ductal and less acinar in phenotype. Furthermore, a YAP-dependent loss of acinar cell identity and extensive disorganization in the Mst1/2-deficient exocrine tissue lead to pancreatitis-like autodigestion, which might result in tissue necrosis and decreased pancreas mass. In the mouse embryo, normal pancreatic differentiation is divided into two stages, the primary transition and the secondary transition.
The primary transition, occurring between embryonic days 9.5 and 12.5 (E9.5 and E12.5, respectively), marks the appearance of very low levels of acinar digestive enzymes and the first wave of glucagon- and subsequently insulin-expressing cells. The secondary transition (between E13.5 and E16.5), characterized by intense proliferation and differentiation throughout the pancreatic epithelium, spans the geometric increase of acinar digestive enzymes and insulin [61]. Mst1 (but not Mst2) and YAP proteins are detected in the wild-type pancreas during the secondary transition stage, are almost undetectable at birth, and return to higher levels at postnatal day 7 (P7) and P14. Mst1/2 deficiency does not affect YAP protein levels in the embryonic pancreas, but loss of Mst1/2 is associated with higher levels of total YAP at adult age [59]. Within the adult pancreas, YAP expression is limited to the exocrine compartment, including ductal and acinar cells, whereas loss of Mst1/2 increases the YAP protein level and nuclear accumulation in nearly all exocrine cells, accompanied by an increased cell proliferation rate. This evidence suggests that Mst1/2 signaling does not play a major role in pancreas organogenesis but becomes functionally active during the secondary transition. The activation of Mst1/2 is required to regulate postnatal YAP levels and phosphorylation status in acinar cells to maintain differentiation [59,60].

Heart

It has been shown that Mst1 regulates heart size by activating its downstream kinase, Lats2, and inhibiting YAP activity, thereby attenuating compensatory cardiomyocyte growth. In cardiomyocytes, Mst1 is activated by pathological stimuli, such as hypoxia/reoxygenation in vitro and ischemia/reperfusion in vivo [62]. Mst1 mediates cardiac troponin I phosphorylation and plays a critical role in the modulation of myofilament function in the heart. The function of Mst1 in cardiomyocytes can also be negatively regulated by a newly identified Mst1-interacting protein, protein-L-isoaspartate (D-aspartate) O-methyltransferase (PCMT1) [63]. Cardiac-specific overexpression of Mst1 in mice results in activation of caspases, increased apoptosis, and dilated cardiomyopathy, whereas inhibition of endogenous Mst1 prevents apoptosis of cardiomyocytes and cardiac dysfunction after myocardial infarction without producing cardiac hypertrophy [62,64]. Furthermore, Del Re and colleagues showed that Rassf1A is an endogenous activator of Mst1 in the heart and that the function of the Rassf1A/Mst1 pathway differs between cardiomyocytes and fibroblasts. The Rassf1A/Mst1 pathway promotes apoptosis in cardiomyocytes, playing a detrimental role, while the same pathway inhibits fibroblast proliferation and cardiac hypertrophy through both cell-autonomous and autocrine/paracrine mechanisms, playing a protective role during pressure overload [65]. More recently, cardiac conditional knockout mice of either WW45, Lats2, or Mst1/2 using Nkx2.5-Cre were shown to exhibit expansion of the trabecular and subcompact ventricular myocardial layers, thickened ventricular walls, and enlarged ventricular chambers without a change in myocardial cell size [66]. Yap1 protein is robustly detected in the neonatal and juvenile mouse heart and declines with age. Cardiomyocyte-restricted loss of Yap1 in the fetal heart results in marked, lethal myocardial hypoplasia and decreased cardiomyocyte proliferation, whereas fetal activation of Yap1 stimulates cardiomyocyte proliferation [67].
Thus, the Mst1/2-WW45/Lats2-Yap1 pathway is critical for cardiomyocyte proliferation, cardiac morphogenesis, and myocardial trabeculation, but it does not influence physiological hypertrophic growth of cardiomyocytes in the experimental context. Gene expression profiling and chromatin immunoprecipitation revealed that Hippo signaling negatively regulates a subset of Wnt target genes in cardiomyocytes [66].

The functions of Mst1/2 in the immune system

The murine Mst1 and Mst2 kinases are most abundant in tissues of the lymphoid system. The Mst1 kinase acts as an important regulator of T cell selection, adhesion, migration, growth, and apoptosis [68-73]. Mst1-deficient mice exhibit a reduction in white pulp, decreased numbers of total CD4+ T cells, CD8+ T cells, and B220+ B cells, and an absence of marginal zone B cells. Compared to wild-type littermates, Mst1-deficient mice have far fewer CD62Lhi/CD44lo naïve peripheral T cells and a high proportion of CD62Llo/CD44hi effector/memory T cells in tissues such as liver and lung. Inactivation of Mst1 and Mst2 does not have an obvious effect on thymocyte development, although a slightly smaller thymus is found in the Mst1−/−Mst2fl/fl-VavCre mouse. This might be due to the very low abundance and activity of Mst1/2 kinases in double-positive (DP) cells and developmentally earlier thymocytes. Recently, patients bearing LOF mutations of Mst1 were reported with a primary immunodeficiency syndrome characterized by naïve CD4+ and CD8+ T-cell lymphopenia in particular, as well as neutropenia, closely resembling the major defect in lymphocyte homeostasis of Mst1-deficient mice. Those patients have recurrent bacterial infections, viral infections, and autoimmune manifestations with autoantibodies [35,74,75]. In contrast to the defects seen with deletion of Mst1, global deletion of Mst2 caused no changes in lymphocyte numbers in any compartment. However, the additional elimination of Mst2 in the entire hematopoietic lineage on an Mst1-deficient background (the Mst1−/−Mst2fl/fl-VavCre mouse) causes a marked exacerbation of the deficits seen in Mst1-deficient T cells, suggesting that Mst2 might play a redundant role in lymphoid tissues in the absence of Mst1 [69]. The kinase activity of Mst1 is essential for T cell homeostasis, since the defective phenotype of Mst1/Mst2 deficiency in the lymphoid compartment can only be restored by transgenic expression of wild-type, but not catalytically inactive, Mst1. Mst1-deficient naive T cells proliferate vigorously in response to TCR stimulation and show enhanced ongoing apoptosis in vivo. Mst1, but not Mst2, is greatly reduced in effector/memory T cells compared to naïve T cells; thus, Mst1 might serve as a likely determinant of the threshold for activation of naïve T cells. Upon T cell receptor (TCR) stimulation, the increase in tyrosine phosphorylation of CD3ζ, ZAP70, Lck, and PLCγ is similar in splenic T cells from wild-type and Mst1-deficient mice, whereas the phosphorylation of Mob1A/B observed in wild-type T cells is lost entirely in Mst1-deficient T cells. Elimination of Mst1 has little effect on Lats1 carboxyl-terminal phosphorylation, Lats1/2 autophosphorylation, and YAP phosphorylation in T cells. Thus, the activation of Mob1A/B might serve as the effector of Mst1's antiproliferative effect in naïve T cells [69,71].
The disruption of Mst1, or of both Mst1 and Mst2, impairs thymocyte egress and causes an accumulation of mature T cells in the thymus, seen as an increased proportion of single-positive (SP) thymocytes in the thymus and a decreased number of lymphocytes in the circulation. Mst1-deficient mice show defects in adhesion, homing, and intranodal migration in vivo. Furthermore, two independent pools of the ADAP/SKAP55 module have been identified, one of which associates with RAPL, Mst1, and Rap1, whereas the other interacts with RIAM, Mst1, Kindlin-3, and Talin; they are independently recruited to the α- or β-chain of LFA-1 and coordinate CCR7-mediated activation of LFA-1 as well as T-cell adhesion and migration [76]. Thymocytes express multiple Rac1/2 GEFs [77], among which deletion of Dock2 results in defects in migration, actin polarization, and Rac GTPase activation similar to those seen in Rac1/Rac2-deficient thymocytes [78]. Mst1/Mst2 double-knockout thymocytes lack the ability to activate RhoA as well as Rac; however, there is no evidence that Dock2 is a regulated downstream target of Mst1/Mst2. Despite the limited overlap between the Dock8 and Mst1/Mst2 deficiency phenotypes, loss of phospho-Mob1A/B activation of Dock8 might contribute to the defective chemokine-stimulated Rac1 activation in Mst1/Mst2-deficient thymocytes and, in turn, to the failure of thymic egress [69]. More recently, Mst1 in thymocytes has also been shown to be involved in LFA-1/ICAM-1-dependent high-velocity medullary migration and to be required for migrating thymocytes to associate with rare populations of Aire+ ICAM-1hi mTECs in a negatively selecting environment. Thus, Mst1 might have a key role in regulating thymocyte self-antigen scanning in the medulla [79].

Conclusion

The mammalian Hippo pathway has generated great interest, and significant progress has been made in the past few years. In addition to its conserved role in growth control and tumor prevention, the Hippo pathway has been shown to integrate with other critical signaling pathways, such as the Wnt and Notch pathways, and to extend its function to many other critical biological events. Many open questions in the Hippo pathway field remain to be fully elucidated, especially the mechanisms by which upstream regulators of the Hippo pathway initiate or terminate signaling, and how the cellular redox state plays a role in this process. Advances in understanding the regulation of Hippo signaling may not only address fundamental scientific questions, such as organ size control and developmental regulation, but also provide new therapeutic targets for human diseases.
2016-05-04T20:20:58.661Z
2013-08-28T00:00:00.000
{ "year": 2013, "sha1": "0be22281e8c6410af7e245e48cb6110c9440240d", "oa_license": "CCBY", "oa_url": "https://cellandbioscience.biomedcentral.com/track/pdf/10.1186/2045-3701-3-31", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9fbba9fe423f0494d076f966aa69de8c508ed8e7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221667076
pes2o/s2orc
v3-fos-license
Nicotine Reduces Human Brain Microvascular Endothelial Cell Response to Escherichia coli K1 Infection by Inhibiting Autophagy

Studies have shown that exposure to environmental tobacco smoke can increase the risk of bacterial meningitis, and nicotine is the core component of environmental tobacco smoke. Autophagy is an important way for host cells to eliminate invasive pathogens and resist infection. The Escherichia coli K1 strain (E. coli K1) is the most common Gram-negative bacterial pathogen causing neonatal meningitis. The mechanism by which nicotine promotes E. coli K1 invasion of human brain microvascular endothelial cells (HBMECs), the main component of the blood-brain barrier, is not yet clear. Our study found that the increase in HBMEC autophagy during E. coli K1 infection decreases the survival of intracellular bacteria, while nicotine exposure inhibits the HBMEC autophagic response to E. coli K1 infection by activating the NF-kappa B and PI3K/Akt/mTOR pathways. We conclude that nicotine inhibits HBMEC autophagy upon E. coli K1 infection and decreases the scavenging of E. coli K1, thus promoting the occurrence and development of neonatal meningitis.

INTRODUCTION

Neonatal bacterial meningitis (NBM) is a serious, life-threatening infectious disease of the central nervous system (Heckenberg et al., 2014; Iovino et al., 2014). Although antibiotic treatment has significantly reduced the mortality rate of NBM, the morbidity rates remain unchanged; survivors often suffer from permanent neurological sequelae such as cerebral palsy, seizures, deafness, and blindness (Furyk et al., 2011; van de Beek et al., 2012; Gradstedt et al., 2013). Escherichia coli K1 (the E. coli K1 strain) is the most common Gram-negative bacterial pathogen causing NBM (Wang et al., 2016). A high level of bacteremia is a necessary condition for meningitis (Doran et al., 2016; Kim, 2016), and nicotine (NT), the main component of tobacco smoke, can significantly enhance the invasion of human brain microvascular endothelial cells (HBMECs), which are the main component of the blood-brain barrier (Wang et al., 2003). However, the molecular mechanism by which nicotine affects the pathogenesis of E. coli meningitis has not been clearly understood (Bredfeldt et al., 1995; Iles et al., 2001; Chi et al., 2011b; Huang et al., 2016; Liu et al., 2019). Autophagy allows host cells to eliminate many invading pathogens, whereas autophagy induced by some other microorganisms plays the opposite role; for example, Brucella melitensis can induce incomplete autophagy, which helps B. melitensis survive and replicate in host cells for a long time (Wang and Cheng, 1994; Siddiqi et al., 2015). In E. coli meningitis, α7 nAChR is critical for the activation of the NF-κB signaling pathway in HBMECs (Chen et al., 2002; Chi et al., 2011a). Under different conditions, continuous activation of the NF-κB signaling pathway can negatively or positively regulate autophagy; it is unclear whether activation of the NF-κB signaling pathway regulates autophagy in HBMECs infected with E. coli K1 (Koedel et al., 2000; Vallabhapurapu and Karin, 2009). There is a close correlation between the NF-κB signal transduction pathway and the mammalian target of rapamycin (mTOR) pathway. The activation of mTOR inhibits autophagy, and mTOR is regulated by multiple upstream signals, of which PI3K/Akt is one of the most important regulatory signaling pathways (Deretic et al., 2013). Whether NT can regulate the autophagy of E. coli K1-infected HBMECs through the PI3K/Akt/mTOR pathway remains to be explored.
In this study, we explored the effects of nicotine on HBMEC autophagy and on the scavenging by HBMECs of invading intracellular bacteria, and we clarified the role of the NF-kappa B and PI3K/Akt/mTOR pathways in this process. This will help us to understand the pathogenesis of neonatal E. coli meningitis and provide an important theoretical basis for its prevention and treatment.

Cell Culture and Bacterial Infection

The human brain microvascular endothelial cell line was isolated and cultured as described in previous studies (Chi et al., 2011a, 2012). HBMECs are the main component of the blood-brain barrier; they form many tight junctions between cells, which produce high transendothelial impedance, and they express cell adhesion molecules. HBMECs were routinely cultured in RPMI 1640 medium, supplemented with 10% heat-inactivated fetal bovine serum, 2 mM glutamine, 1 mM sodium pyruvate, essential amino acids, vitamins, penicillin G (50 µg/ml), and streptomycin (100 µg/ml), at 37 °C in 5% CO2. E44 is a rifampicin-resistant derivative of E. coli RS218 (O18:K1:H7), which was isolated from the cerebrospinal fluid of a patient with neonatal E. coli meningitis (Chi et al., 2011a,b), and was grown overnight in Luria-Bertani broth supplemented with rifampicin (100 µg/ml) at 37 °C. For infection assays, cells were infected with E44 at a multiplicity of infection of 100 E44 to 1 HBMEC in experimental medium (a 1:1 mixture of M199:Ham's F-12 containing 5% heat-inactivated fetal bovine serum). To test the effects of NT on autophagy of E44-infected HBMECs, cells were preincubated with or without 5 nM αBTX for 1 h and then treated with 10^-6 M NT for 24 h before infection. To inhibit NF-κB activation, cells were pretreated with 5 µM BAY11-7082 for 1 h before infection. In conditions where autophagy was inhibited or activated, cells were preincubated with 5 mM 3-MA or 200 nM rapamycin for 2 h before infection.

Intracellular Bacterial Survival Assay

HBMECs were cultured in 24-well plates, pre-treated with or without αBTX, NT, BAY11-7082, 3-MA, and rapamycin, and then infected with E44 in experimental medium at 37 °C and 5% CO2 for 1 h. The cells were washed three times with PBS to remove free bacteria and then incubated in experimental medium containing gentamicin (100 µg/ml) for 1 h to kill extracellular bacteria. Half of the wells in each treatment group were washed three times with PBS and lysed with 0.5% Triton X-100, and the intracellular bacteria were then counted by plating serial dilutions of the lysates on Luria-Bertani solid medium plates to enumerate CFU_t1. The remaining wells were subjected to further incubation for 1 h, and intracellular bacteria were enumerated as described previously (CFU_t2). Intracellular survival (%) was calculated as CFU_t2/CFU_t1 × 100%.

Transmission Electron Microscopy

Samples were fixed with 2.5% glutaraldehyde, incubated with 1% osmium tetroxide in 0.1 M sodium cacodylate buffer, and embedded in epoxy resin. Sections were cut at a nominal thickness of 80 nm and stained with uranyl acetate and lead citrate. Images were recorded using a Hitachi-7700 transmission electron microscope (Hitachi Limited, Tokyo, Japan).

Confocal Laser Scanning Microscopy

HBMECs were transfected with 100 µl OptiMEM medium (Gibco/BRL) containing 1% Lipofectamine 2000 (Invitrogen) and 1 µg of mCherry-GFP-LC3B plasmid. After 6 h, the medium was changed to normal medium, and after transfection for 48 h, the transfected cells were treated with or without NT, E44, and BAY11-7082. Transfected cells were examined by confocal microscopy.

Statistical Analysis

All experiments were performed in triplicate and repeated at least three times. Statistically significant differences between groups were determined using two-tailed one-way ANOVA, followed by a Student-Newman-Keuls test, or by Student's t-test. P < 0.05 was considered statistically significant.
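As an illustration of the survival calculation and the statistics just described, the following minimal Python sketch applies them to hypothetical CFU counts; all numbers, group names, and replicate values are made up for illustration only (scipy's f_oneway implements a standard one-way ANOVA, not the exact software used in the study).

import numpy as np
from scipy.stats import f_oneway

def intracellular_survival(cfu_t1, cfu_t2):
    """Intracellular survival (%) = CFU_t2 / CFU_t1 x 100, as defined in the assay."""
    return cfu_t2 / cfu_t1 * 100.0

# Hypothetical triplicate CFU counts (t1, t2) per treatment group.
groups = {
    "E44":        [(1.8e4, 0.9e4), (2.0e4, 1.1e4), (1.9e4, 1.0e4)],
    "NT+E44":     [(1.7e4, 1.5e4), (1.9e4, 1.6e4), (1.8e4, 1.7e4)],
    "NT+RAP+E44": [(1.8e4, 0.6e4), (2.0e4, 0.8e4), (1.9e4, 0.7e4)],
}
survival = {g: [intracellular_survival(t1, t2) for t1, t2 in reps]
            for g, reps in groups.items()}
for g, s in survival.items():
    print(f"{g}: {np.mean(s):.1f} +/- {np.std(s, ddof=1):.1f} %")

f_stat, p_val = f_oneway(*survival.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.2g}")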
Nicotine Inhibits Autophagy in E. coli K1-Infected HBMECs

To determine whether E. coli K1 infection of HBMECs acted directly on autophagy, we exposed HBMECs to E44 in the culture medium for different lengths of time and assessed the levels of autophagy-associated proteins. LC3 is ubiquitous in mammalian cells and is the most characteristic core autophagy-related protein. Upon initiation of autophagy, LC3I in the cytoplasm is converted to LC3II, which is covalently linked to phosphatidylethanolamine and bound to the autophagosome membrane (Tanida et al., 2008). Subsequently, the p62/SQSTM1 protein transports cargo, such as ubiquitinated protein aggregates and ubiquitin-labeled bacteria, into the autophagosome by interacting with the LC3 protein (Pankiv et al., 2007). Finally, p62/SQSTM1 proteins are degraded by the autolysosome (Bjorkoy et al., 2005). Therefore, the LC3II and p62/SQSTM1 proteins can be used as markers for detecting autophagy. To investigate the autophagy level of HBMECs in the context of E. coli K1 infection, HBMECs were infected with E. coli strain E44 for different amounts of time, and the results showed expression of LC3II in E44-infected HBMECs in a time-dependent manner, with peak conversion at 2 h (Supplementary Figure 1). Next, HBMECs were exposed to low doses of NT (10 µM) for 24 h and then infected with E44 for 2 h. Western blotting results showed that NT significantly blocked the expression of LC3II and promoted the expression of p62 (Figure 1A). To conclusively determine whether nicotine inhibits autophagy in E44-infected HBMECs, 5 nM α-bungarotoxin (αBTX, a nicotinic acetylcholine receptor antagonist) was added for 1 h before treatment with NT. Western blotting results indicated that αBTX significantly blocked the effect of NT on the autophagy-related proteins p62 and LC3 in E44-infected HBMECs (Figure 1B). We further transfected HBMECs with the mCherry-GFP-LC3B plasmid. In the early stages, autophagosomes are double-labeled by mCherry and GFP and appear yellow. In the later stage, autophagosomes fuse with lysosomes, and the acidic lysosomal environment quenches GFP, so they appear red. We found that E44 infection increased the abundance of both yellow and red puncta compared with the uninfected group, and that the abundance of both kinds of puncta decreased in the NT pretreatment group (Figure 1C). Together, these results suggest that E44 infection promoted autophagy in HBMECs and that NT significantly blocked E44-induced autophagy. Although our results showed that autophagy of HBMECs is significantly activated when cells are infected with E44 for 2 h, it was unclear whether E44 resides in the double-membrane structures typical of autophagosomes. Using transmission electron microscopy (TEM) analysis, we found that after 2 h of infection, E44 was indeed found within double-membrane autophagosome structures (Figure 1D).
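As an illustration of the tandem-reporter logic just described (yellow = GFP-positive and mCherry-positive autophagosomes; red-only = autolysosomes), here is a small Python sketch that classifies puncta from per-punctum channel intensities. The intensity values and threshold are hypothetical, not measurements from this study.

from collections import Counter

def classify_punctum(gfp, mcherry, threshold=0.5):
    """Yellow (GFP+ and mCherry+) = autophagosome; red-only = autolysosome."""
    if mcherry < threshold:
        return "background"
    return "autophagosome" if gfp >= threshold else "autolysosome"

# Hypothetical normalized (GFP, mCherry) intensities for detected puncta.
puncta = [(0.9, 0.8), (0.1, 0.9), (0.2, 0.7), (0.8, 0.9), (0.3, 0.2)]
print(Counter(classify_punctum(g, m) for g, m in puncta))
# Counter({'autophagosome': 2, 'autolysosome': 2, 'background': 1})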
Nicotine Promotes NF-κB Activation to Inhibit Autophagy

We next assessed whether NT promotes NF-κB activation to inhibit autophagy in E. coli K1-infected HBMECs. Previous studies have shown that α7 nAChR plays an important role in the activation of NF-κB during the pathogenesis of E. coli meningitis (Huang et al., 2016). However, it was unclear whether NT regulates the activation of NF-κB in E44-infected HBMECs. We pretreated HBMECs with or without NT for 24 h and then infected them with E44 for 2 h. Studies have confirmed that phosphorylation of the Ser-536 site of the p65 subunit can serve as a marker of NF-κB activation (Ahmed et al., 2014). Western blotting results showed that NT significantly promoted the expression level of p-p65 (Figure 2A). Next, αBTX (5 nM) was added for 1 h before treatment with NT, and the protein level of p-p65 decreased significantly (Figure 2B). To further investigate whether NF-κB activation affects autophagy, we inhibited NF-κB with BAY11-7082 (5 µM); the results showed that the protein level of LC3II was significantly increased and that p62 levels were significantly decreased (Figure 2C). We transfected HBMECs with the mCherry-GFP-LC3B plasmid and found that inhibition of NF-κB activation significantly increased the abundance of yellow and red puncta (Figure 2D). Together, these results suggest that NT promotes NF-κB activation during E44 infection and thereby inhibits autophagy.

Nicotine Promotes Activation of the Autophagy-Related PI3K/Akt/mTOR Signaling Pathway

We began our investigation of the autophagy-related PI3K/Akt/mTOR signaling pathway in E. coli K1-infected HBMECs by hypothesizing that NT, which plays a detrimental role in the HBMEC defense against E. coli meningitis, may act as an infection-associated regulator of autophagy. mTOR is a serine/threonine kinase that forms two signaling complexes, mTORC1 and mTORC2, and mTORC1 activation inhibits autophagy (Jung et al., 2009). Activation of mTORC1 directly promotes phosphorylation of the downstream target ribosomal protein S6 kinase (P70S6K), and the phosphorylation level of P70S6K can therefore reflect the activation of mTORC1 (Laplante and Sabatini, 2012). Previous studies have shown that E. coli K1 activates PI3K/Akt in a time-dependent manner, peaking at 10-15 min of infection (Sukumaran et al., 2003; Zhao et al., 2010), and that NT significantly promotes activation of Akt in HBMECs infected with E. coli K1 for 20 min (Chen et al., 2002). Our results confirmed that E44 infection for 2 h significantly promoted autophagy of HBMECs, but that this was inhibited under nicotine exposure. It was unclear whether the PI3K/Akt/mTOR signaling pathway is involved in the regulation of autophagy in E44-infected HBMECs. Therefore, the present study further examined whether NT affects autophagy in E44-infected HBMECs through the PI3K/Akt/mTOR pathway. The levels of PI3K, P-PI3K, Akt, P-Akt, and P-P70S6K were determined by western blotting. The results showed that the PI3K/Akt/mTOR pathway was inhibited by 2 h of E44 infection of HBMECs, and that NT pretreatment significantly activated the PI3K/Akt/mTOR pathway (Figure 3A). Next, αBTX was used to block NT binding to α7 nAChR, and the activation of the PI3K/Akt/mTOR pathway was blocked (Figure 3B). To further investigate whether NT affects autophagy in E44-infected HBMECs via the PI3K/Akt/mTOR signaling pathway, we inhibited mTOR with rapamycin; the results showed that the protein level of LC3II was significantly increased and that p62 levels were significantly decreased (Figure 4B).
These results indicate that NT significantly promotes activation of the autophagy-related PI3K/Akt/mTOR signaling pathway in E44-infected HBMECs.

The Autophagy Pathway Limits Intracellular E44 Survival

Although nicotine inhibits autophagy in E44-infected HBMECs, it was unclear whether this inhibition of autophagy affects intracellular E44 survival. Thus, we tested the survival rate of E44 in HBMECs. Compared with the E44-only treatment group, the survival rate of intracellular E44 was significantly increased in the NT + E44 treatment group, and the survival rate of E44 decreased significantly after αBTX blocked NT binding to α7 nAChR (Figure 3C). Compared with the NT + E44 treatment group, the survival rate of E44 was significantly decreased in the BAY11-7082 treatment group (Figure 3D). To investigate the effect of autophagy on intracellular E44 survival, we activated and inhibited autophagy with the autophagy inducer rapamycin (200 nM) and the autophagy inhibitor 3-MA (5 mM), respectively (Figure 4B); rapamycin significantly reduced the intracellular E44 survival rate (Figure 3E). These results indicate that autophagy facilitates the clearance of intracellular E. coli K1 by HBMECs, and that NT promotes the survival of E44 in HBMECs by inhibiting autophagy.

Induction of Autophagy Inhibits the Expression of ICAM-1

Finally, we sought to determine whether autophagy inhibits the expression of ICAM-1. The adhesion molecule ICAM-1 plays an important role in the migration of PMNs. Western blotting results showed that NT promotes the expression of ICAM-1 in E44-infected HBMECs (Figure 4A). To further investigate the effect of autophagy on ICAM-1 expression, we used the autophagy inhibitor 3-MA and the autophagy inducer rapamycin to inhibit and activate autophagy, respectively, in HBMECs. Western blotting results showed that ICAM-1 expression was downregulated after treatment with the autophagy inducer rapamycin (Figure 4B). Together, these results suggest that NT promotes the activation of NF-κB in E44-infected HBMECs and promotes the expression of ICAM-1, while induction of autophagy suppresses ICAM-1 expression.

DISCUSSION

NT is a major component of environmental tobacco smoke, with multiple damaging effects on blood vessels, immunity, and the nervous system. Studies have shown that nicotine promotes the pathogenesis of E. coli meningitis through the cholinergic α7 nAChR pathway. Autophagy is a fundamental eukaryotic pathway that balances the beneficial and adverse effects of immunity and inflammation and may thereby prevent infectious, autoimmune, and inflammatory diseases (Levine et al., 2011; Deretic et al., 2013). However, information has been lacking about whether and how nicotine affects the autophagy of E. coli K1-infected HBMECs. In this study, we confirmed the effect of nicotine on autophagy in E44-infected HBMECs. Furthermore, the function of autophagy in E. coli K1-infected HBMECs and the underlying molecular mechanism of NT-mediated regulation of these effects were elucidated. Bacterial pathogens invade the blood-brain barrier endothelium, and traversing the barrier is a critical step in the development of neonatal E. coli meningitis. Previous studies have confirmed that autophagy is essential for HBMECs to clear Group B Streptococcus, the most common Gram-positive bacterial pathogen causing neonatal meningitis (Cutting et al., 2014).
In this study, our results provide new evidence that autophagy is activated in E. coli K1-infected HBMECs at early time points and can limit intracellular bacterial survival, whereas chronic nicotine exposure significantly inhibits the formation of this intracellular antibacterial autophagy. mTOR is a serine/threonine kinase that forms two signaling complexes, mTORC1 and mTORC2, and mTORC1 activation inhibits autophagy (Jung et al., 2009). The activation of mTOR is regulated by multiple upstream signals, and PI3K/Akt is one of the most important regulatory signaling pathways (Manning and Cantley, 2007). Previous studies have shown that E. coli K1 activates PI3K/Akt in a time-dependent manner, peaking at 10-15 min of infection (Sukumaran et al., 2003; Zhao et al., 2010), and that NT significantly promotes activation of Akt in HBMECs infected with E. coli K1 for 20 min (Chen et al., 2002). In this study, HBMECs were infected with E. coli K1 for 2 h, and we found that activation of the PI3K/Akt/mTOR signaling pathway was inhibited and that NT significantly reversed this inhibition. The NF-κB signaling pathway is activated during bacterial meningitis and plays an important role in mediating E. coli K1 invasion of HBMECs and PMN migration across the blood-brain barrier (Chi et al., 2012); blocking NF-κB signaling can suppress bacterial meningitis (Koedel et al., 2000; Chi et al., 2012). Recent findings have shown that phosphorylation of the p65 subunit at Ser-536 is a novel mechanism of NF-κB transcriptional activation (Douillette et al., 2006; Nicholas et al., 2007; Ahmed et al., 2014). In this study, we showed that chronic NT exposure significantly promoted the phosphorylation of p65 at Ser-536. Next, we inhibited the activation of NF-κB and found that the level of autophagy increased significantly and that the intracellular survival rate of E. coli K1 decreased. Activation of NF-κB promotes the expression of ICAM-1, which is critical for PMN binding to HBMECs and migration. Our results show that promoting autophagy significantly inhibits ICAM-1 protein expression. Taken together, these findings suggest that NT promotes the activation of the NF-κB and PI3K/Akt/mTOR signaling pathways in E. coli K1-infected HBMECs, which inhibits autophagy, promotes the survival of intracellular pathogens, and facilitates the expression of ICAM-1.

[Figure legends (condensed): HBMECs received the following treatments: CON (no treatment); E44 (infection with E44 for 2 h); NT+E44 (NT for 24 h, then E44 for 2 h); αBTX+NT+E44 (αBTX for 1 h, NT for 24 h, then E44 for 2 h); NT+BAY+E44 (NT for 24 h and BAY11-7082 for 1 h, then E44 for 2 h); NT+3-MA+E44 and NT+RAP+E44 (NT for 24 h and 3-MA or rapamycin (RAP) for 2 h, then E44 for 2 h). Phosphorylated p65, total p65, P-PI3K, P-Akt, P-P70S6K, PI3K, Akt, ICAM-1, p62, and LC3 were analyzed by Western blotting; quantitative analysis was performed with ImageJ to determine P-P65/P65, P-P65/GAPDH, p62/GAPDH, LC3II/LC3I, and ICAM-1/GAPDH ratios. Representative confocal images show HBMECs transfected with mCherry-GFP-LC3 (white arrows indicate autophagosomes; scale bar, 25 µm). One panel shows the effect of autophagy on the survival of E44 in HBMECs. Data are means ± SEM of three independent experiments conducted in triplicate. GAPDH, glyceraldehyde 3-phosphate dehydrogenase; DAPI, 4',6-diamidino-2-phenylindole. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.]

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS

CW and MY: conceived the project, carried out the experiments, and drafted the article. RL and HH: study design, analysis of the data, and critical revision of the article. LJ and XZ: data collection and critical revision of the article. SH: revision of the article. LW: study design and critical revision of the article. All authors contributed to the article and approved the submitted version.
2020-09-15T13:06:26.887Z
2020-09-15T00:00:00.000
{ "year": 2020, "sha1": "82e82895d37140b64111e9faea17e94de68441e0", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcimb.2020.00484/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82e82895d37140b64111e9faea17e94de68441e0", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
27270032
pes2o/s2orc
v3-fos-license
Twistorial versus space-time formulations: unification of various string models

We introduce the D=4 twistorial tensionful bosonic string by considering the canonical twistorial 2-form in two-twistor space. We demonstrate its equivalence to two bosonic string models: the one due to Siegel (with covariant worldsheet vectorial string momenta $P^m_\mu(\tau,\sigma)$) and the one with tensorial string momenta $P_{[\mu\nu]}(\tau,\sigma)$. We show how to obtain, in the mixed space-time-twistor formulation, the Soroka-Sorokin-Tkach-Volkov (SSTV) string model and subsequently, by harmonic gauge fixing, the Bandos-Zheltukhin (BZ) model, with constrained spinorial coordinates.

* On leave from Ukr. Eng. Pedag. Academy, Kharkov, Ukraine.
† Supported by KBN grant 1 P03B 01828.

1. Introduction. Twistors and supertwistors (see e.g. [1,2,3]) have recently been widely used [4,5,6,7,8] for the description of (super)particles and (super)strings, as an alternative to the space-time approach. We stress also that recently a large class of perturbative amplitudes in N = 4, D = 4 supersymmetric Yang-Mills theory [9,10,11] and in conformal supergravity (see e.g. [12]) were described in a simple way by using strings moving in supertwistor space. Such a deep connection between supertwistors and non-Abelian supersymmetric gauge fields, first observed from another perspective almost thirty years ago, should promote geometric investigations of the links between the space-time and twistor descriptions of the string model. In this paper we derive a fourlinear twistorial classical string action, with target space described by two-twistor space. Our main aim is to show that the twistorial master action for several string models, all of which are classically equivalent to the D = 4 Nambu-Goto string model, can also be described by the fundamental Liouville 2-form in two-twistor space. Recently, models describing free relativistic massive particles with spin were also formulated in D = 4 two-twistor space T(2) = T ⊗ T [5,13,14,15]. The corresponding action was derived, by a suitable choice of variables, from the free two-twistor one-form (1) (here A = 1,...,4, i = 1,2; no summation over i), given by (2), with suitable constraints imposed. In this paper we shall study the canonical Liouville two-form (3) in two-twistor space T(2), restricted further by suitable constraints. We shall show that from the action which follows from (3) one can derive various formulations of the D = 4 free bosonic string theory. We start our considerations from the first-order formulation of the tensionful Nambu-Goto string in flat Minkowski space, due to Siegel [16,29], Eq. (4). The kinetic part of the action (4) is described equivalently by the two-form (5), where $P_\mu = P^m_\mu \epsilon_{mn}\,d\xi^n$ and $dX^\mu = d\xi^m \partial_m X^\mu$, i.e. in the Siegel formulation the pair $(P^0_\mu, P^1_\mu)$ of generalized string momenta is represented by a one-form. Expressing the string momenta through spinors by the composite Cartan-Penrose-type formula (6), we shall obtain the SSTV bosonic string model [17],

S = \int d^2\xi \; e \, \Big[ \bar\lambda_{\dot\alpha}\,\rho^m\,\lambda_\alpha \, \partial_m X^{\dot\alpha\alpha} + \frac{1}{2T}\,(\lambda^{\alpha i}\lambda_{\alpha i})(\bar\lambda_{j\dot\alpha}\bar\lambda^{\dot\alpha j}) \Big]   (7)

where $\sqrt{-h} = e = \det(e^a_m) = -\frac{1}{2}\epsilon^{mn}\epsilon_{ab}\,e^a_m e^b_n$. Further we shall discuss the local gauge freedom in the spinorial sector of (7) and consider a suitable gauge fixing. We shall show that, by imposing suitable constraints in spinorial space, we obtain the BZ formulation [18], which interprets the D = 4 spinors $\lambda_{\alpha i}$, $\bar\lambda^i_{\dot\alpha}$ as spinorial Lorentz harmonics. Finally we shall derive the second-order action for the twistorial string model described by the two-form (3).
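Since the displayed equations (1)-(3) did not survive in this version of the text, it may help to recall the canonical structure on twistor space from which such Liouville forms are built. The following is a sketch under standard twistor conventions, not necessarily the authors' exact normalization:

\Theta = \frac{i}{2}\sum_{i=1,2}\big(\bar{Z}^{\,i}_{A}\,dZ^{A}_{\,i} - Z^{A}_{\,i}\,d\bar{Z}^{\,i}_{A}\big), \qquad \Omega = d\Theta = i\sum_{i=1,2} d\bar{Z}^{\,i}_{A}\wedge dZ^{A}_{\,i},

so that each twistor component $Z^A_i$ is canonically conjugate to $\bar Z^i_A$.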
Further we shall consider the bosonic string model with tensorial momenta, obtained from the Liouville two-form (8) [19,20]. Such a model is directly related to the interpretation of strings as dynamical world sheets, with surface elements built from the tangent bivectors of the embedded world sheet (see the sketch below, after Sec. 5). If we introduce the composite formula $P_{\alpha\beta} = P_{\mu\nu}\,\sigma^{\mu\nu}_{\alpha\beta}$, $\bar P_{\dot\alpha\dot\beta} = -P_{\mu\nu}\,\bar\sigma^{\mu\nu}_{\dot\alpha\dot\beta}$, expressing the tensorial momenta in terms of spinors (see also [20]), then by passing to the first-order action we obtain the mixed spinor-space-time SSTV and BZ string formulations. We see therefore that both bosonic string models, based on (8) and (5), lead via SSTV to the purely twistorial bosonic string with the null twistor constraints and the constraint (10) determining the string tension T. If we wish to obtain the BZ formulation, one should introduce in place of (10) two constraints (11) providing a particular solution of the constraint (10).

2. Siegel bosonic string. The equations of motion following from the action (4) are given by (13)-(14). If we solve the half of the equations of motion (13) that contain no time derivatives, where $P^0_\mu = P_\mu$ denotes the string momentum and $\lambda = \sqrt{-h}/h^{11}$, $\rho = h^{01}/h^{11}$, the action (4) takes the form (16). It is easy to see that (16) describes the phase-space formulation of the tensionful Nambu-Goto string (17), where $g^{(2)}$ is the determinant of the induced D = 2 metric (18) and T is the string tension; the string Hamiltonian (see (16)) is a sum of first-class constraints generating the Virasoro algebra. By substituting the equations of motion (13) into the Siegel action (4) one obtains the Polyakov action. Note that the equations (14) describe the Virasoro first-class constraints.
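Because the displays (13)-(17) are not recoverable here, the following sketch recalls the standard first-order (phase-space) form of the Nambu-Goto string that this discussion refers to; signs and factor placements are convention-dependent and may differ from the authors':

S = \int d^2\xi \, \Big[ P_\mu \dot{X}^\mu - \frac{\lambda}{2T}\big(P_\mu P^\mu + T^2 X'_\mu X'^\mu\big) - \rho\, P_\mu X'^\mu \Big],

where $\dot X^\mu = \partial_0 X^\mu$ and $X'^\mu = \partial_1 X^\mu$. The multipliers $\lambda$ and $\rho$ enforce the two first-class Virasoro constraints $P^2 + T^2 X'^2 \approx 0$ and $P\cdot X' \approx 0$; eliminating $P_\mu$ reproduces the Nambu-Goto action $S = -T\int d^2\xi\,\sqrt{-g^{(2)}}$.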
3. SSTV string model and its restriction to the BZ model. In order to obtain from the action (4) the mixed spinor-space-time action (7), we should eliminate the four-momenta $P^m_\mu$ by means of the formula (6). We obtain that the second term in the string action (4) takes the form (20), where we used $\mathrm{Tr}(\rho^m\rho^n) = 2h^{mn}$. Note that $\bar\lambda^{i\dot\alpha}\bar\lambda_{\dot\alpha i} = \bar\lambda_{i\dot\alpha}\bar\lambda^{\dot\alpha i}$. Putting (6) and (20) into the action (4) we obtain the SSTV string action (7), which provides the mixed space-time-twistor formulation of the bosonic string. We stress that in the SSTV formulation the twistor spinors $\lambda_{\alpha i}$ are not constrained. Further, the algebraic field equation (14), after the substitution (6), is satisfied as an identity. Calculating from the action (7) the momenta $\pi^{\alpha i}$, $\bar\pi^{\dot\alpha}_i$, $p^{(e)m}_a$ conjugate to the variables $\lambda_{\alpha i}$, $\bar\lambda^i_{\dot\alpha}$, $e^a_m$, one can introduce two first-class constraints generating local transformations of the spinorial sector. In particular, one can fix the real parameters b, c in such a way that we obtain the constraints (11). The relations (11) can be rewritten in an SU(2)-covariant way as (23) (we recall that T is real). If we introduce the variables $v_{\alpha i} = \sqrt{2/T}\,\lambda_{\alpha i}$, $\bar v^i_{\dot\alpha} = \sqrt{2/T}\,\bar\lambda^i_{\dot\alpha}$, we get the orthonormality relations for the spinorial Lorentz harmonics [18]. If we impose the constraints (11), the model (7) can be rewritten in the form (24).

4. Purely twistorial formulation. Let us introduce the second half of the twistor coordinates, $\mu^{\dot\alpha}_i$, $\bar\mu^{\alpha i}$, by employing the Penrose incidence relations (25), generalized here to the string. The incidence relations (25), with real space-time string position $X^{\dot\alpha\alpha}$, imply that the twistor variables satisfy the constraints (26), which are anti-Hermitian ($\overline{V_i{}^j} = -V_j{}^i$). Let us insert the relations (25) into (24). Using these relations we obtain the first-order string action (27) in the twistor formulation. The variation of the action (27) with respect to the zweibein $e^a_m$ gives the equations (29) (we use that $e\,e^m_a = -\epsilon_{ab}\epsilon^{mn}e^b_n$). For compact notation we introduce the string twistors; then (29) and the constraints (26) can be rewritten as (30). Substituting (29) and (30) into the action (27) we obtain our basic twistorial string action (31). Using the explicit form of the D = 2 Dirac matrices, one can see that the first term in the action (31) equals the world-sheet pull-back of the canonical two-form, i.e. the action (31) is induced on the world sheet by the canonical 2-form (3), supplemented by the constraints (23) and (26).

5. From the SSTV action to the tensorial momentum formulation. The zweibein $e^a_m$ can be expressed from the action (7) as in (32) (here $(\lambda\lambda) \equiv \lambda^{\alpha i}\lambda_{\alpha i}$, $(\bar\lambda\bar\lambda) \equiv \bar\lambda_{i\dot\alpha}\bar\lambda^{\dot\alpha i}$). Substitution of the relation (32) into the action (27) provides the string action (33). Using identities for the D = 2 Dirac matrices, after contractions of the spinorial indices we obtain the action (34), where the composite second-rank spinors (35) satisfy the constraints (36). Using four-vector notation, the relations (36) take the form (37), where $\tilde P^{\mu\nu} = \frac{1}{2}\epsilon^{\mu\nu\lambda\rho}P_{\lambda\rho}$. The action (34) is the Ferber-Shirafuji form of the string action with tensorial momenta (38). Expressing $P^{\mu\nu}$ through its equation of motion, we get (39). After substituting (39) into the action (38) we obtain the second-order action (40) (see e.g. [23]). Eliminating further Λ, and using the definition of the induced metric (see also (18)), we obtain the Nambu-Goto string action (17). It is important to notice that the solution (39) satisfies the constraint $P_{\mu\nu}\tilde P^{\mu\nu} = 0$ as an identity. We see therefore that in the action (38) it is sufficient to impose, by a Lagrange multiplier, only the first of the constraints (37).
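As a complement to the tensorial-momentum formulation just described, the geometric content of $P^{\mu\nu}$ can be sketched under standard conventions (again, normalizations and signs may differ from the authors'):

P^{\mu\nu} = T\,\epsilon^{mn}\,\partial_m X^\mu\,\partial_n X^\nu, \qquad \tfrac{1}{2}\,P_{\mu\nu}P^{\mu\nu} = T^2\, g^{(2)},

i.e. $P^{\mu\nu}$ is proportional to the world-sheet surface element, and the Nambu-Goto action can then be written as $S = -\int d^2\xi\,\sqrt{-\tfrac{1}{2}P_{\mu\nu}P^{\mu\nu}}$, consistent with the identification of strings as dynamical world sheets.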
Conclusions. We have shown the equivalence of five formulations of the D = 4 tensionful bosonic string:
• two space-time formulations, with vectorial string momenta (see (4)) and tensorial ones (see (38));
• two mixed twistor-space-time models, SSTV (see (7)) and BZ (see (24));
• the generic purely twistorial formulation, with the action given by formula (31).
Following the massive relativistic particle case (see [13,14,15]), the main tools in the equivalence proof are the string generalizations of the Cartan-Penrose formulae for the string momenta (see (6) and (35)) and the incidence relations (25). The action (27) in the conformal gauge $e^a_m = \delta^a_m$ is the commonly used bilinear action for the twistorial string. We would like to stress that the model (31) is substantially different from the one proposed by Witten et al. [9][10][11]. In the Witten twistor string model, described by a $CP^{(3|4)}$ (N = 4 supertwistor) σ-model, the target space is described by a single supertwistor, and the Penrose incidence relation, introducing space-time coordinates, appears only after quantization, as the step permitting the space-time interpretation of holomorphic twistorial fields. In our approach composite space-time variables enter already into the formulation of the classical string model, in a way enforcing the complete equivalence of the classical twistorial string and the Nambu-Goto action, provided that we treat the space-time target coordinates as 2-twistor composites. In this paper we restricted the presentation to the case of the D = 4 bosonic string. The generalization to D = 6 is rather straightforward; the extension to D = 10 requires clarification of how to introduce the D = 10 conformal spinors, i.e. D = 10 twistors. Other possible generalizations are the following:

i) If we quantize the model (27) canonically, one can show that the Poisson brackets of the constraints $V_i{}^j$ satisfy an internal U(2) algebra (see [24]). One can introduce, contrary to (26), nonvanishing $V_i{}^j$. The degrees of freedom described by $V_i{}^j$ can be interpreted (see also [2,14,15]) as introducing on the string a local density of covariantly described spin components and electric charge;

ii) We presented here the links between various bosonic string models. Introducing two-supertwistor space and following known supersymmetrization techniques (see [21,22]), one can extend the presented equivalence proofs to relations between different superstring formulations with manifest world-sheet supersymmetry which involve twistor variables (see e.g. [25]-[28]);

iii) Particularly interesting would be the twistorial formulation of the D = 4, N = 4 Green-Schwarz superstring, which should be derivable by dimensional reduction from the D = 10, N = 1 Green-Schwarz superstring. Such a twistorial D = 4, N = 4 superstring model could be, in our formulation, the counterpart of the twistorial N = 4 superstring considered in [9][10][11].
2017-10-20T13:05:56.385Z
2006-06-26T00:00:00.000
{ "year": 2006, "sha1": "d6879937112c8d98193acbde9c711e0e6142eda4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0606245", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d6879937112c8d98193acbde9c711e0e6142eda4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
209516085
pes2o/s2orc
v3-fos-license
SU$(3)_1$ Chiral Spin Liquid on the Square Lattice: a View from Symmetric PEPS

Quantum spin liquids can be faithfully represented and efficiently characterized within the framework of Projected Entangled Pair States (PEPS). Guided by extensive exact diagonalization and density matrix renormalization group calculations, we construct an optimized symmetric PEPS for a SU$(3)_1$ chiral spin liquid on the square lattice. Characteristic features are revealed by the entanglement spectrum (ES) on an infinitely long cylinder. In all three $\mathbb{Z}_3$ sectors, the level counting of the linearly dispersing modes is in full agreement with the SU$(3)_1$ Wess-Zumino-Witten conformal field theory prediction. Special features in the ES are shown to be in correspondence with bulk anyonic correlations, indicating a fine structure in the holographic bulk-edge correspondence. Possible universal properties of topological SU$(N)_k$ chiral PEPS are discussed.

Introduction - Quantum spin liquids are long-range entangled states of matter of two-dimensional electronic spin systems [1][2][3]. Among the various classes [4], spin liquids with broken time-reversal symmetry, i.e., chiral spin liquids (CSL) [5,6], exhibit chiral topological order [7]. Intimately related to fractional quantum Hall (FQH) states [8], CSL host both anyonic quasi-particles in the bulk [9] and chiral gapless modes on the edge [10]. It was suggested early on that CSL can naturally appear in systems with enhanced SU(N) symmetry [12], realizable with alkaline-earth atoms loaded in optical lattices [11]. Later on, many SU(N)$_1$ CSL with different N were identified on the triangular lattice [13], while the original proposal on the square lattice [12] remains controversial. In recent years, Projected Entangled Pair States (PEPS) [14] have progressively emerged as a powerful tool to study quantum spin liquids. As an ansatz, PEPS provide variational ground states competitive with other methods [15,16] and, equally importantly, offer a powerful framework to encode topological order [17] and to construct non-chiral [18,19] and chiral, both Abelian [20] and non-Abelian [21], SU(2) spin liquids. Generically, SU(2) CSL described by PEPS exhibit linearly dispersing chiral branches in the entanglement spectrum (ES), well described by the Wess-Zumino-Witten (WZW) SU(2)$_k$ (with k = 1 for Abelian CSL) conformal field theory (CFT) for one-dimensional edges [22]. However, to our knowledge, there is no known example of a more general SU(N) PEPS with unambiguous chiral edge modes. Thus it remains unclear whether symmetric PEPS can describe higher SU(N) CSL faithfully. In order to address these issues, we propose and investigate a frustrated SU(3)-symmetric spin model on the square lattice with a symmetric PEPS ansatz, thereby taking the first step towards describing general SU(N)$_k$ CSL with PEPS.

Model and exact diagonalization - On every site we place a three-dimensional spin degree of freedom, which transforms as the fundamental representation of SU(3). The Hamiltonian, defined on a square lattice, includes the most general SU(3)-symmetric short-range three-site interaction,

H = J_1 \sum_{\langle i,j\rangle} P_{ij} + J_2 \sum_{\langle\langle i,j\rangle\rangle} P_{ij} + J_R \sum_{\triangle} \big(P_{ijk} + P^{-1}_{ijk}\big) + i J_I \sum_{\triangle} \big(P_{ijk} - P^{-1}_{ijk}\big),   (1)

where the first (second) term corresponds to two-site permutations over all (next-)nearest-neighbor bonds, and the third and fourth terms are three-site (clockwise) permutations on all triangles of every plaquette.
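To make the ingredients of Eq. (1) concrete, the following minimal Python sketch builds the two-site and three-site (cyclic) permutation operators on the 27-dimensional Hilbert space of one triangle; these are the standard definitions, not the authors' code.

import numpy as np

d = 3                                  # local dimension (fundamental of SU(3))
I3 = np.eye(d**3).reshape([d] * 6)     # identity as a 6-index tensor

# Two-site permutations on a triangle (sites 1,2,3).
P12 = I3.transpose(1, 0, 2, 3, 4, 5).reshape(d**3, d**3)  # |s1 s2 s3> -> |s2 s1 s3>
P23 = I3.transpose(0, 2, 1, 3, 4, 5).reshape(d**3, d**3)  # |s1 s2 s3> -> |s1 s3 s2>
# Three-site cyclic permutation: |s1 s2 s3> -> |s2 s3 s1>.
P123 = I3.transpose(2, 0, 1, 3, 4, 5).reshape(d**3, d**3)

print(np.allclose(P123, P23 @ P12))                       # True: cycle = two swaps
print(np.allclose(np.linalg.matrix_power(P123, 3),
                  np.eye(d**3)))                          # True: (P_123)^3 = 1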
We have chosen $J_2 = J_1/2$ so that the two-body part ($J_1$ and $J_2$) on the interacting triangular units becomes $S_3$ symmetric, hence mimicking the corresponding Hamiltonian on the triangular lattice [23], and we have further parameterized the amplitude of each term as $J_1 = 2J_2 = \frac{4}{3}\cos\theta\sin\phi$, $J_R = \cos\theta\cos\phi$, $J_I = \sin\theta$. We have performed extensive exact diagonalization (ED) calculations [24,25] on various periodic $N_s$-site clusters to locate the CSL phase in parameter space. We expect (1) to host a SU(3)$_1$ CSL equivalent to the 221 Halperin FQH state [26], whose spectral signatures on small tori can be established precisely [27,28]. A careful scan in θ and φ reveals that there is a small region where, for $N_s = 3p$ ($p \in \mathbb{N}^+$), there are three low-lying singlets below the octet gap, reflecting the expected topological degeneracy of the CSL. For $N_s \neq 3p$, the low-energy quasi-degenerate manifold reflects perfectly the anyon content of the CSL. In both cases, the momenta of the low-energy states match the heuristic counting rules of the 221 Halperin FQH state with 0, 1 or 2 quasi-holes [28-31] (see Fig. 1 and supplemental material (SM) [32]). In the following, we shall focus on the angles $\theta = \phi = \pi/4$, for which clear evidence of a gapped CSL is found. [33]

Symmetric PEPS ansatz - Representing the CSL in terms of a symmetric PEPS allows us not only to obtain short-range properties such as the energy density efficiently, but also to reveal its topological properties by examining the entanglement structure [34,35]. This can be accomplished by using SU(3)-symmetric tensors, analogously to the SU(2) case [36]. The simplest virtual space available here is $V = 3 \oplus \bar{3} \oplus 1$, such that (i) a symmetric maximally entangled SU(3) singlet $|\Omega\rangle$ can be realized on every bond by pairing two neighboring virtual particles, and (ii) the four virtual particles around each site can be fused into the 3-dimensional physical spin by an on-site projector $\mathcal{P}$ [37,38], see Fig. 2(a). The so-called bond dimension is thus D = 7. In addition to the continuous rotation symmetry, full account of the discrete $C_{4v}$ point-group symmetry (shown as purple arrows in Fig. 2(a)) can be taken [36], and tensors can be classified according to the corresponding irreps. By linearly combining on-site projectors of two different irreps with opposite ±1 characters w.r.t. axis reflections, one can construct a complex PEPS ansatz breaking both parity (P) and time-reversal (T) symmetries while preserving PT, as required for a CSL ground state. Details about this construction, used for SU(2) CSL [20,21], are provided in the SM. For ease of the following discussion, it is convenient to define the tensor A by absorbing the adjacent singlets on, e.g., the right and down bonds around each site into the on-site projector, forming an equivalent way to express the wavefunction. The fact that the center of the SU(N) group is isomorphic to the $\mathbb{Z}_N$ group allows one to associate a $\mathbb{Z}_3$ charge Q = +1 to the physical space of the tensor A, while the virtual space carries $\mathbb{Z}_3$ charges Q = {+1, −1, 0}, i.e. it contains a regular representation of $\mathbb{Z}_3$ [39]. Hence, the tensor A bears an important $\mathbb{Z}_3$ gauge symmetry (2) associated with local charge conservation, where the action on the virtual indices (left, right, up, and down) involves $\omega = e^{i2\pi/3}$ and $Z = \mathrm{diag}(\omega, \omega, \omega, \omega^2, \omega^2, \omega^2, 1)$, the representation of the $\mathbb{Z}_3$ generator in V. This built-in gauge symmetry is central to topological properties, such as the topological degeneracy on the torus and anyonic excitations [17].
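The following small Python sketch checks the $\mathbb{Z}_3$ structure on the virtual space $V = 3 \oplus \bar{3} \oplus 1$ quoted above; it simply builds the stated generator and charge assignment, nothing beyond what is defined in the text.

import numpy as np

w = np.exp(2j * np.pi / 3)
Z = np.diag([w, w, w, w**2, w**2, w**2, 1])   # Z3 generator in V = 3 + 3bar + 1
charges = [1, 1, 1, -1, -1, -1, 0]            # Z acts as w**Q on each basis state

print(np.allclose(np.linalg.matrix_power(Z, 3), np.eye(7)))   # True: Z^3 = 1
print(np.allclose(np.diag(Z), [w**q for q in charges]))       # True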
Let us emphasize that the Z 3 gauge symmetry naturally appears from the physical SU(3) and point group symmetry, and is not a symmetry we imposed ad hoc. Variational optimization -The best variational ground Ns, respectively. The PEPS energy is optimized at χ = D 2 and extrapolated to χ → ∞ (red circles). Blue squares stand for DMRG data on several finite-width cylinders (Nv = 3,4,5,6), and the ED results on tori with Ns = 12, 15, 18, 21, 24 sites and different geometries are indicated by stars. The dotted (dash-dotted) line is an exponential fit of the DMRG (ED) data to the thermodynamic limit. state is obtained by taking the ansatz P = N1 a=1 λ a is the number of linearly independent projectors in the B 1 (B 2 ) class, and optimizing the (few) variational parameters {λ a 1 , λ b 2 } ∈ R with a conjugate-gradient method [40]. For a given tensor the energy is obtained via the corner transfer matrix renormalization group (CTMRG) method, computing an effective environment of bond dimension χ surrounding an active 2 × 2 region embedded in the infinite plane (so-called iPEPS) [41][42][43][44]. The gradient is then simply obtained by a finite difference approach [45]. U(1) quantum number is also used occasionally to speed up the computation [46][47][48]. The exact contraction scheme corresponds to the limit χ → ∞. To establish the relevance of our symmetric PEPS ansatz for the model (1), we compare the PEPS energy density with that obtained by ED on several different tori up to size N s = 24, and by the density matrix renormalization group (DMRG) method [49, 50] on various finite cylinders. In DMRG, for each cylinder width N v , we compute the ground state energies at two different cylinder lengths to subtract the contribution from the edges [51]. A detailed description of the DMRG method and additional data can be found in the SM. As shown in Fig. 2(b), the PEPS energy density obtained on the infinite plane turns out to lie close to the energy density in the thermodynamic limit, estimated from a finite-size scaling of the ED and DMRG data. Entanglement spectrum -To get further insight into the nature of the CSL phase, we now explore the properties of our symmetric PEPS, where the Z 3 gauge symmetry implies topological degeneracy on closed manifolds. On finite-width cylinders, quasi-degenerate ground states can be constructed by restricting the virtual boundary of PEPS to fixed Z 3 charges Q = 0, ±1, with or without inserting Z 3 flux line through the cylinder. Here we shall focus on states without Z 3 flux line, and briefly discuss those with flux line in the SM. The topological properties can be most easily obtained through a study of the entanglement spectrum, which is defined to be minus log of the spectrum of reduced density matrix (RDM) of subsystem, say the left half of a cylinder [34]. For a PEPS on an infinite long cylinder, the RDM can be constructed from the leading eigenvector of the transfer operator [35]. Since the onsite tensor carries charge +1, the cylinder width N v must be multiple of three. In our current setting with bond dimension D = 7, exactly contracting the transfer operator is not feasible for large enough N v . Instead we use the iPEPS environment tensors computed with CTMRG to construct the approximate leading eigenvector [52], where large enough environment dimension χ is needed to get converged results [21]. The constructed RDM is fully invariant under translation and SU(3) rotations, which allows to block diagonalize it, introducing appropriate quantum numbers. 
In practice, we use the $\mathbb{Z}_3$ charges (associated with the $\mathbb{Z}_3$ gauge symmetry), two U(1) quantum numbers, and the momentum quantum number to carry out the diagonalization. The results with $N_v = 6$, χ = 343 are shown in Fig. 3 for the three different charge sectors, i.e., Q = 0, ±1. Linearly dispersing chiral modes, well separated from the high-energy continuum and sharing the same velocity, are seen: one mode in the Q = 0 sector and three modes in the Q = +1 sector. The Q = ±1 sectors have identical spectra: as both the bare tensor and the bond $|\Omega\rangle$ are PT-symmetric, so is the wavefunction, but after the reflection the bonds are on the other side of the entanglement cut, and since the bonds exchange $3 \leftrightarrow \bar{3}$, this maps between the Q = ±1 spectra. Interestingly, for all different χ we have considered, the lowest level in the Q = 0 sector appears at finite momentum $K_0 = -\pi/3$, while the three branches in the Q = ±1 sectors start at momenta $K_{\pm 1} = -\pi/3$, $\pi/3$, and π.

[Fig. 3 caption: Entanglement spectra in the Q = 0 and Q = +1 sectors. The spectrum in the Q = −1 sector (not shown) is identical to that in the Q = +1 sector but with conjugated SU(3) irreps, and is shown in the SM. For convenience, the lowest eigenvalue is subtracted in each plot. One chiral branch is seen in (a), starting at momentum K₀ = −π/3, and three branches are seen in (b), starting at momenta K±1 = −π/3, π/3 and π. In each sector, the irreps encircled by the red boxes (or the blue boxes and the arrows) agree with the level counting of the SU(3)₁ WZW CFT (shown on the plot vertically).]

We believe the momentum shift is due to a quantum of magnetic flux trapped in the cylinder; it is an intrinsic property of the optimized PEPS, which constrains us to choose $N_v = 6p$, p integer (for $N_v = 3(2p+1)$, K₀ and K±1 do not belong to the reciprocal space; see SM). Reconstructing the SU(3) irreps from the two U(1) quantum numbers (the Young tableaux for the relevant SU(3) irreps are provided in the SM), we find that the level contents follow the prediction of the Virasoro levels of the SU(3)₁ WZW CFT [22,53]. However, we observe a tripling of the branches in the Q = ±1 sectors, which we shall discuss later.

Bulk correlations - The above entanglement spectrum provides strong evidence of SU(3)₁ chiral topological order. However, it has been shown that in PEPS describing chiral phases, certain bulk correlation lengths computed from the transfer matrix spectrum diverge [20,21,52,54,55,56,57,58,59]. Nevertheless, a priori it is not known which type of correlation is quasi-long-ranged, and how critical bulk correlations are related to the observed chiral edge modes. Here we address this question with our symmetric PEPS ansatz, where both the SU(3) symmetry and the associated $\mathbb{Z}_3$ gauge symmetry provide valuable insights. Within the PEPS methodology, correlation lengths of different types of operators, including the anyonic type, can be obtained from two complementary methods. On one hand, correlation functions of usual local operators, e.g., spin-spin, can be obtained directly by applying the local operator to the physical indices. Here the spin operators are the eight generators of the su(3) algebra in the fundamental representation (see SM for explicit expressions), and the dimer operator is defined as $D^x_i = \mathbf{S}_i \cdot \mathbf{S}_{i+\hat{e}_x}$. The $\mathbb{Z}_3$ gauge symmetry makes it possible to define topologically nontrivial local excitations such as the spinon, the vison, and their bound state [17,60,61].
A local excitation in the spinon sector can be created by applying an operator X satisfying $XZ = \omega ZX$ to the virtual indices of the local tensor, such that it carries zero $\mathbb{Z}_3$ charge instead of the original charge 1. Similarly, $X^2$ can create a charge −1 spinon, since $X^2 Z = \omega^2 Z X^2$. A pair of vison excitations can be created by putting a string of Z (or $Z^2$) operators on the virtual level, whose end points correspond to the vison excitations. Parafermions, bound states of a spinon and a vison, can be created by putting spinons at the end points of the Z string. All these real-space correlations can be obtained using the CTMRG environment tensors; see the SM for further details. The correlations of the different types are shown in Fig. 4(a), computed with χ = 392. On the other hand, correlation lengths can also be extracted from the spectrum of the transfer matrix of the environment tensors provided by CTMRG (see SM), also termed the channel operator, whose eigenvalue degeneracies carry information about the types of correlation. Correlation lengths along the horizontal and vertical directions are found to be the same, as expected.

[Fig. 4 caption: (a) Real-space correlations of the different types; (b) correlation lengths extracted from exponential fits of the real-space correlations (same symbols), along with those extracted from the transfer matrix spectrum with or without flux inserted (shown as lines), with g the degeneracy of the eigenvalue. Both approaches agree for the spinon, vison and dimer correlation lengths, which show no saturation with increasing χ.]

Denoting the distinct transfer matrix eigenvalues as $t_a$ (a = 0, 1, ...) with $|t_0| > |t_1| > |t_2| > \ldots$, it turns out that $t_0$ is non-degenerate, suggesting that there is no long-range order in the variational wave function (confirming the ED results). The sub-leading eigenvalues $t_a$ (a = 1, 2, 3) are six-fold degenerate, followed by a non-degenerate $t_4$. These eigenvalues give direct access to a series of correlation lengths $\xi^{(a)} = -1/\log(|t_a/t_0|)$, which therefore carry the same degeneracies. We have also computed the correlation length with a ±1 $\mathbb{Z}_3$ flux, by inserting a string of Z (or $Z^2$) operators, where the leading eigenvalue of the corresponding transfer matrix is denoted $t_{Z,1}$ [62]. From $t_{Z,1}$, which is non-degenerate, one obtains the leading correlation length in the flux sector, $\xi^{(1)}_Z = -1/\log(|t_{Z,1}/t_0|)$. A summary of the various correlation lengths versus χ from both methods is shown in Fig. 4(b). We find that the largest one in all sectors, $\xi^{(1)}_Z$, is equal to the correlation length found between a pair of visons; it is non-degenerate, in agreement with the fact that visons carry no spin. In the sector without flux, the leading correlation length $\xi^{(1)}$ is in perfect agreement with the correlation length $\xi_{\mathrm{spinon}}$ extracted from placing a spinon-antispinon pair. Moreover, since PT symmetry maps spinons placed on reflected bonds to anti-spinons, we expect the spinon correlations to have a degeneracy structure $3 \oplus \bar{3}$, which indeed agrees with the six-fold degeneracy of $\xi^{(1)}$ and is further supported by checking the U(1) quantum numbers of the $t_1$ multiplet. The U(1) quantum numbers further suggest that $t_{2,3}$, which are also six-fold degenerate, also carry the SU(3) representation $3 \oplus \bar{3}$. Thus, $\xi^{(1,2,3)}$ all correspond to the spinon correlation length. This, in fact, is in correspondence with the three linearly dispersing branches in the ES in the Q = ±1 charged sectors, as we shall discuss later.
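A minimal Python sketch of the conversion from transfer-matrix eigenvalues to correlation lengths, using $\xi^{(a)} = -1/\log|t_a/t_0|$ as defined above; the eigenvalues below are hypothetical numbers for illustration.

import numpy as np

# Hypothetical distinct transfer-matrix eigenvalues, leading one first.
t = np.array([1.00, 0.82, 0.78, 0.75, 0.64])
xi = -1.0 / np.log(np.abs(t[1:] / t[0]))   # xi^(a) = -1/log|t_a/t_0|
for a, x in enumerate(xi, start=1):
    print(f"xi^({a}) = {x:.2f} lattice spacings")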
Further examining the correlation lengths, we find that $\xi^{(4)}$ is identical to the dimer correlation length; its non-degeneracy agrees with the fact that the dimer operator is invariant under SU(3) rotations. Depending on the parafermion type, the parafermion correlation lengths take different values, both of which are smaller than the spinon correlation length. Interestingly, all of these correlation lengths, except the spin correlation length $\xi_{\mathrm{spin}}$, show no sign of saturation with increasing χ, in agreement with our expectation that the state is not in the $\mathbb{Z}_3$ quantum double phase.

Degeneracy structure of topological chiral PEPS - A remarkable feature of our results is the correspondence between the leading four eigenvalues of the transfer matrix and the different sectors in the ES: the Q = 0 sector has one branch, while Q = ±1 each have three almost degenerate branches. This is in direct analogy to the unique leading eigenvalue $t_0$, which has trivial spin, and the approximate three-fold degeneracy of $t_1, t_2, t_3$, which have perfectly degenerate spins $3$ and $\bar{3}$, matching the perfect degeneracy between Q = ±1. A similar correspondence between the (approximate) degeneracy of the (2D) transfer operator and of the ES branches was observed for chiral PEPS with SU(2)₁ counting, where it could be explained as arising from the symmetry of the tensors, and was subsequently used to remove the degeneracy in the non-trivial sector in the vicinity of a (fine-tuned) perfectly degenerate point [59]. Furthermore, we checked that the same correspondence also holds in the PEPS description of the non-Abelian SU(2)₂ CSL [21]. It is suggestive that such a correspondence in the (approximate) degeneracy structure is a general feature of chiral PEPS and will also hold for general SU(N)$_k$ models; indeed, both the ES and the eigenvalues $t_i$ are extracted from the same objects, namely the left/right (or up/down) fixed points of the CTMRG environment. It would be interesting to see whether such a correspondence holds in the context of general chiral models, and whether it could possibly even be used to further characterize the precise nature of a chiral theory.

Conclusion and outlook - In this work, we have proposed a model for a SU(3)₁ CSL on the square lattice and unambiguously identified the relevant parameter space using ED. Guided by ED and DMRG results, we have focused on constructing and optimizing a symmetric PEPS ansatz for the CSL, whose variational energy is remarkably good. For the first time, linearly dispersing branches in all three sectors of the SU(3)₁ WZW CFT can be obtained with PEPS. A comparison between the edge spectrum and bulk correlations reveals a fine structure in the bulk-edge correspondence, which will be tested in further studies of SU(N)$_k$ PEPS CSL. Certain unresolved issues, e.g., the number of variationally degenerate ground states on the torus and the anyon statistics of the chiral topological order, remain open, which we hope to uncover in the future.

Acknowledgements

[Displaced fragment, apparently from the SM discussion of flux insertion: the transfer operator with a flux string on both layers has the same spectrum as the one without flux, and the spectrum with Z or Z² on only one layer shares the same absolute values but with different phases ω or ω².]

To complement the main findings in the manuscript, we provide several relevant details in this supplementary material, organized as follows: basic knowledge about the SU(3) group and its irreducible representations (irreps) in Sec. I, the exact diagonalization study on various small tori in Sec. II, the DMRG study on finite cylinders in Sec. III, the construction of the SU(3)-symmetric PEPS and the tensor classification scheme in Sec. IV, the specific corner transfer matrix renormalization group (CTMRG) method we use and the optimization procedure in Sec. V, additional data for the entanglement spectrum (ES) in Sec. VI, and topological excitations and correlation functions of the symmetric PEPS in Sec. VII; at the end we also list the nonzero elements of the tensors in Sec. VIII.
the construction of the SU(3) symmetric PEPS and the tensor classification scheme in Sec. IV, the specific corner transfer matrix renormalization group (CTMRG) method we use and the optimization procedure in Sec. V, additional data for the entanglement spectrum (ES) in Sec. VI, topological excitations and correlation functions of the symmetric PEPS in Sec. VII, and in the end we also list the nonzero elements of the tensors in Sec. VIII.

I. BRIEF OVERVIEW OF SU(3) IRREPS

Since the theory of the SU(3) group and its irreps can be found in many textbooks, e.g. Ref. 1, here we only list the relevant known results without derivation. As a special case of the general representation theory of the SU(N) group, to each irrep of SU(3) we can associate a Young tableau containing a maximum of two rows (see Fig. S1). Denoting by p (q) the number of columns in the first (second) row, with p ≥ q, the dimension of the corresponding irrep is (1/2)(p + 2)(q + 1)(p − q + 1).

FIG. S1. A generic Young tableau characterizing an irrep of SU(3).

Unlike the SU(2) case, where the states of a given multiplet are labeled by a unique U(1) quantum number (the eigenvalue of S^z) and related to each other by a unique ladder operator S^− (or S^+), multiplets of SU(3) should rather be seen as two-dimensional objects whose states are characterized by two U(1) quantum numbers S^z = (s^z_1, s^z_2) and related by two ladder operators (S^−_1, S^−_2) (or (S^+_1, S^+_2)). Note that a given couple (s^z_1, s^z_2) is no longer necessarily associated with a unique state. We note that, in irreps of SU(N), there is some arbitrariness in defining the N − 1 diagonal generators (the so-called Cartan subalgebra): without a basis change, it is possible to linearly combine them. The two U(1) quantum numbers shown in Tab. S1 indeed correspond to the eigenvalues of S_7 and of a second diagonal generator [equation not recovered]. The generator of the center of SU(3) can also be expressed in terms of the two diagonal generators [equation not recovered]. The two-site permutation operator P_ij used in defining the Hamiltonian can be expressed with su(3) generators in the irrep 3 [equation not recovered].

II. EXACT DIAGONALIZATION ON VARIOUS CLUSTERS

For exact diagonalization, we have used the Lanczos (respectively Davidson) algorithm to compute the ground state (respectively the low-energy excitations) of our model on various finite-size clusters of N_s sites with periodic boundary conditions, see Tab. S3. Since we are looking for a quantum spin liquid state, all clusters are adequate even though they possess different momenta in their Brillouin zone. Moreover, we have considered some clusters which are not perfectly square to get additional signatures: we define the eccentricity of a cluster by the ratio of the two smallest inequivalent loops of the nearest-neighbor graph around the torus, which is a measure of the "two-dimensionality" of the cluster, where a value close to one is considered fully two-dimensional. In order to reduce the size of the Hilbert space, we have used all space symmetries (translation and point group) as well as color conservation, which is equivalent to fixing the values of the two U(1) quantum numbers S^z: namely, we diagonalize the Hamiltonian in a subspace with a given number of particles per color (N_1, N_2, N_3) with a constraint of single occupation. [Tab. S3, listing the clusters with their N_s, eccentricity and point group, was not recovered.]
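As a quick numerical check of the two counting formulas above, here is a small standalone sketch (Python; our own illustration, not code from the paper) evaluating the SU(3) irrep dimension from a Young tableau (p, q) and the size of an ED color sector with fixed (N_1, N_2, N_3) under single occupation:

from math import comb

def su3_irrep_dim(p, q):
    # dimension of the SU(3) irrep whose Young tableau has p (q)
    # columns in the first (second) row, with p >= q
    assert p >= q >= 0
    return (p + 2) * (q + 1) * (p - q + 1) // 2

def color_sector_dim(n1, n2, n3):
    # number of singly occupied basis states with fixed particle numbers
    # per color: the multinomial (n1+n2+n3)! / (n1! n2! n3!)
    n = n1 + n2 + n3
    return comb(n, n1) * comb(n - n1, n2)

print(su3_irrep_dim(1, 0))        # fundamental 3    -> 3
print(su3_irrep_dim(1, 1))        # conjugate 3bar   -> 3
print(su3_irrep_dim(2, 1))        # adjoint 8        -> 8
print(color_sector_dim(7, 7, 7))  # N_s = 21 balanced-color sector size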
In the seminal work by Halperin [2], an SU(2) spin-singlet fractional quantum Hall (FQH) state was introduced for hardcore spin-1/2 bosons at filling ν = 2/3. As stated in the main text, the SU(3)_1 CSL that we are investigating is a lattice realization of such a phase [3]. The simplest signatures of a FQH phase are given by the ground-state degeneracy and the quasi-hole properties (quasi-degeneracy and momentum quantum numbers) [4,5]. Moreover, there is a simple generalized exclusion principle [6,7] with (2,1) clustering properties in our case: for instance, in the spinful bosonic language, when N_s = 3p, the three (quasi-)degenerate ground states are given by the occupations (↑, ↓, 0, ↑, ↓, 0, ...) and its translations. These occupations have to be understood in terms of the N_s orbitals which are obtained when folding the Brillouin zone [4,5]. This exclusion rule simply enforces that there are no more than 2 particles in 3 consecutive orbitals and that a ↓ particle must necessarily be followed by a hole, as illustrated in the sketch after this paragraph. As a result, for all clusters N_s = 3p ranging from N_s = 12 to N_s = 24, we have confirmed that our model does indeed show a quasi three-fold degeneracy of the ground state, and their quantum numbers are given by the above generalized exclusion principle. This is a non-trivial feature and it is different from what would be expected for a charge-density wave phase. For example, in the cluster 18a, the three states are found at momentum Γ = (0, 0), while in the cluster 18b, one state is found at Γ and the two others at ±(2π/3, 2π/3), as predicted. For N_s = 21, in all three considered clusters, we also find three quasi-degenerate states with the correct momenta, see Fig. 1a in the main text and Fig. S2.

FIG. S2. Low-energy spectra obtained from ED on additional N_s = 21 clusters: (a) 21b and (b) 21a. Each Brillouin zone is shown as an inset. On these clusters, the ground state is a global SU(3) singlet. We confirm the presence of two additional low-energy singlet states with the expected momenta, see text.

Moreover, as shown in the main text, the ground-state energy shows a rather quick saturation with N_s, compatible with a gapped phase. Regarding the quasi-hole case, it can be obtained from clusters with N_s = 3p − 1. In such a case, the counting in the sector (N_1, N_2, N_3) = (p, p, p − 1) (whose states are three-fold degenerate due to SU(3) symmetry) is given by the generalized exclusion principle for spinful particles with N_↑ = p and N_↓ = p − 1 (the number of holes being p). For example, for all clusters with N_s = 20, we predict N_s low-energy quasi-hole states, more precisely one (three-fold degenerate) state per momentum sector using the heuristic rule [4,5], which is indeed observed in Fig. S3(a) (other data not shown), and each low-energy state transforms as a 3̄ irrep. For clusters with N_s = 3p − 2, we expect that the low-energy physics would be best described by two quasi-hole states, since a single quasi-hole excitation is rather localized. As a result, we would observe low-energy excitations both in the sectors (N_1, N_2, N_3) = (p, p, p − 2) and (p, p − 1, p − 1) (and their equivalent sectors). This is indeed what we have found: for instance, in both clusters with N_s = 19, the counting in the sector (7, 7, 5) predicts 3 states per momentum and the one in the sector (7, 6, 6) leads to 7 states per momentum, which is indeed observed in Fig. S3(b) (other data not shown). Reconstructing the SU(3) irreps from the states' quantum numbers, one then finds four 3 and three 6̄ multiplets in each momentum sector, as predicted.
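To make the (2,1) clustering rule concrete, the following brute-force sketch (Python; a self-contained illustration under our reading of the rule, not code from the paper) enumerates {↑, ↓, 0} occupation patterns on a ring of N_s orbitals, keeping those with at most 2 particles in any 3 consecutive orbitals and a hole directly after every ↓; for N_s = 3p with N_↑ = N_↓ = p it recovers exactly the three translated root configurations:

from itertools import product

def admissible(pattern):
    # (2,1) clustering rule on a ring: at most 2 particles in any 3
    # consecutive orbitals, and every 'd' (down) followed by a hole '0'
    n = len(pattern)
    for i in range(n):
        window = (pattern[i], pattern[(i + 1) % n], pattern[(i + 2) % n])
        if sum(s != '0' for s in window) > 2:
            return False
        if pattern[i] == 'd' and pattern[(i + 1) % n] != '0':
            return False
    return True

def count_roots(ns):
    # admissible root configurations with N_up = N_down = ns // 3
    p = ns // 3
    hits = []
    for pattern in product('ud0', repeat=ns):
        if (pattern.count('u') == p and pattern.count('d') == p
                and admissible(pattern)):
            hits.append(''.join(pattern))
    return hits

for ns in (6, 9, 12):
    roots = count_roots(ns)
    print(ns, len(roots), roots)  # expect 3 translated copies of 'ud0' * p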
III. RELEVANT DETAILS OF DMRG METHOD

For DMRG, we have computed the ground-state wave function on various cylinders N_s = L_x × L_y (with open/periodic boundary conditions in the long/short direction). We have used explicitly the two U(1) quantum numbers to ease convergence. Using up to m = 4 000 states, we can obtain reliable energies (discarded weight below 5e-5) up to L_y = 6. In order to stabilize a global SU(3) singlet ground state, we have chosen the system size N_s to be a multiple of 3; more specifically, we took L_x = 3p, the multiple of 3 closest to 2L_y. By computing the total energy for cylinders L_x × L_y and (L_x + 3) × L_y, we can obtain an accurate estimate of the ground-state energy density (per site) by subtraction, providing the data plotted in Fig. 2b in the main article. Quite remarkably, there is a very fast convergence, since all data for L_y = 4, 5 or 6 are compatible with a ground-state energy density e_0 = −2.05(1), very close to the ED estimate. In Fig. S4, we plot the bond strengths P_ij on nearest-neighbor bonds, which do not show any modulation at all. Moreover, we have measured the local Cartan (s^z_1, s^z_2) average values and found that they are vanishing (below 1e-6). All these measurements are indicative of a featureless phase. Since our model possesses one SU(3) fermion per unit cell (equivalent to 1/3 filling), a trivial featureless gapped ground state is impossible according to the Lieb-Schultz-Mattis theorem for SU(N) spin systems and its generalization to two dimensions [8][9][10][11]. Therefore, our ED and DMRG data are suggestive of a gapped topological phase.
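The subtraction estimate above is simple enough to state in code; a minimal sketch (Python, with made-up total energies standing in for the DMRG outputs):

def bulk_energy_density(E_long, E_short, ly, dlx=3):
    # ground-state energy per site from two cylinders (Lx + dlx) x Ly and
    # Lx x Ly: the open-boundary contributions cancel in the difference
    return (E_long - E_short) / (dlx * ly)

# hypothetical DMRG total energies for Ly = 6 cylinders (illustration only)
E_12x6 = -140.0
E_15x6 = -176.9
print(bulk_energy_density(E_15x6, E_12x6, ly=6))  # ~ -2.05 per site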
IV. SU(3) SYMMETRIC PEPS ON THE SQUARE LATTICE

Here we present details about the symmetric PEPS construction, following the same spirit as Refs. 12-14. To construct a faithful PEPS representation of a chiral spin liquid wave function, we encode the symmetry properties of the desired wave function into the local tensors. On the microscopic lattice scale, the symmetries that we need to take into account are: (1) the wave function |ψ⟩ is invariant under global SU(3) rotations, i.e., it is an SU(3) singlet; (2) under one-site translation and π/2 lattice rotation, |ψ⟩ is invariant up to a phase; (3) under lattice reflection P or the time-reversal action T, |ψ⟩ is transformed into its complex conjugate (also up to a possible phase), but is invariant under their combination PT. These symmetry requirements can be fulfilled by taking a suitable unit cell of tensors, where these tensors satisfy certain symmetry constraints. To implement the global SU(3) symmetry, the PEPS wave function can take a form in which a virtual SU(3) singlet formed by two virtual spins, denoted as |Ω⟩, is put on every bond, and the on-site tensor P projects the four virtual spins on every site onto the physical spin. Using translation symmetry, we can put the same virtual singlet on every bond, and the same local projector on every site, so as to work directly in the thermodynamic limit. The wave function then takes a simple form [equation not recovered], where i stands for the site index, and l, r, u, d, s label the left, right, up and down virtual spins and the physical spin on every site, respectively. The lattice point group symmetry imposes strong constraints on these local tensors: up to a phase, the local projector P should be invariant under π/2 lattice rotation and become its complex conjugate under reflection, and the virtual singlet |Ω⟩ should be invariant under reflection.

One noticeable difference between the SU(3) (more generally SU(N)) and SU(2) groups is that the SU(2) group is self-conjugate, such that one can form a singlet with two spins carrying the same irrep. Such a property is absent for general SU(3) spins, and we need to combine two spins carrying irreps with opposite Z_3 charges to form the singlet. The virtual space we use in this work, V = 3 ⊕ 3̄ ⊕ 1 with bond dimension D = 7, satisfies this SU(3) symmetry requirement and allows us to construct the virtual singlet [equation not recovered], where the labeling of the basis for each irrep follows Tab. S1. This (unnormalized) maximally entangled virtual singlet is indeed symmetric under reflection. Unlike the SU(2) case, where an on-site unitary transformation on one sublattice can transform the bond singlet into an identity matrix without changing the on-site projector, the same trick cannot be applied to the SU(3) virtual singlet |Ω⟩. Nevertheless, we can absorb the two neighboring bond matrices, e.g. the right and down ones, into the on-site projector P, forming the tensor A, without enlarging the unit cell. This is the strategy adopted in the numerical calculations in this work.

To systematically construct the on-site projector, we first performed a classification of the rank-5 tensors. According to the fusion rules of SU(3), the allowed occupation-number channels are then determined. For each occupation-number channel, the highest-weight states (corresponding to S^z = (1/2, 0)) are expressed in the tensor product basis of V⊗4. A point group analysis (C_4v) is then performed (see Tab. S5), and the highest-weight states are symmetrized accordingly. Lower-weight states (namely S^z = (−1/2, 1/2) and S^z = (0, −1/2)) are determined using lowering operators expressed in the tensor product basis of V⊗4. As a result of the classification, the local projectors are now classified according to the irreps of the square lattice point group C_4v, denoted as A_1, A_2, B_1, B_2 and E. One can then construct the on-site projector P by linearly combining different classes of tensors, such that it is invariant under π/2 lattice rotation but becomes its complex conjugate upon reflection (up to an irrelevant phase). One choice we considered is Eq. (S5) [expression not recovered], where {λ^a_1, λ^b_2} are real coefficients, as mentioned in the main text. Here N_1 = 6 and N_2 = 5 are the numbers of tensors in the B_1 and B_2 classes, respectively. We note that one could also use the A_1 and A_2 classes to build a chiral PEPS [14], whose energy turns out to be significantly higher than that of Eq. (S5) (data not shown); thus we do not examine the detailed properties of the latter. The expressions for the classes of tensors considered in this work are provided in Sec. VIII. See also Fig. S5 for a pictorial illustration.
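For concreteness, a tiny standalone sketch (Python) of the Z_3 grading of V = 3 ⊕ 3̄ ⊕ 1, with an assumed basis ordering (the 3 block first, then 3̄, then the singlet); this only illustrates the charge assignment, not the paper's actual tensors:

import numpy as np

omega = np.exp(2j * np.pi / 3)

# virtual space V = 3 + 3bar + 1 (D = 7); Z3 charges are +1, -1, 0
Z = np.diag([omega] * 3 + [omega**2] * 3 + [1.0])

assert np.allclose(np.linalg.matrix_power(Z, 3), np.eye(7))  # Z^3 = 1

# a bond singlet pairs opposite charges, so it carries total charge 0:
# acting with Z on both ends multiplies the 3-3bar blocks by w * w^2 = 1
print(np.round(omega * omega**2, 12))  # ~ (1+0j): the singlet is Z3 neutral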
V. CTMRG METHOD AND VARIATIONAL OPTIMIZATION

For completeness, here we briefly describe the specific CTMRG method we used in this work, which follows Ref. 15 and is further simplified in Ref. 16. In our setting, for the tensor network of the wave-function norm, the unit cell contains only one tensor, denoted as E (see Fig. S6(a)), which is obtained by contracting the tensor A and its complex conjugate over the physical index. The CTMRG method allows us to approximately contract the whole network on the infinite plane by computing the effective environment tensors surrounding the unit cell. In our case, the environment tensors are composed of corner tensors and edge tensors {C_i, T_i} (i = 1, 2, 3, 4); see Fig. S6(b) for the graphical notation. The accuracy of the CTMRG method is controlled by the environment bond dimension, denoted as χ, and typically we choose χ = kD^2 (k a positive integer). In the CTMRG procedure, we dynamically increase χ by a small amount to keep the complete SU(3) multiplet structure. To further speed up the CTMRG procedure, we have explicitly kept track of the first U(1) quantum number of the SU(3) multiplets [17]. We note that, although the wave function has certain lattice symmetries, we do not exploit them in the CTMRG procedure, since after absorbing the bond singlets into the on-site projector to construct the tensor A, the tensor A is not invariant under π/2 lattice rotation. As a result, the four corner tensors {C_i} (and edge tensors {T_i}), i = 1, 2, 3, 4, are not necessarily the same. Nevertheless, we have checked that the physical observables, e.g. correlation lengths, along the horizontal and vertical directions are the same, as expected.

For a given set of variational parameters {λ^a_1, λ^b_2}, we can now compute the energy density with the environment tensors, simply by inserting the identity operator or local Hamiltonian terms in the central region, see Fig. S6(b). The energy gradient can then be easily obtained by a finite-difference method, which is feasible due to the significantly reduced number of variational parameters (compared to a generic PEPS ansatz). The conjugate-gradient method [18] is then utilized to find the optimal variational parameters. In practice, this optimization procedure is carried out with χ = D^2. Then we evaluate the energy density of the optimized ansatz with several larger χ = kD^2 (k = 2, ..., 6) and eventually extrapolate to the χ → ∞ limit.

VI. ADDITIONAL DATA FOR ENTANGLEMENT SPECTRUM

The entanglement properties of a PEPS can be most easily characterized by studying the entanglement spectrum on finite-width cylinders, which is defined as minus the logarithm of the spectrum of the reduced density matrix (RDM) of a subsystem [19], say the left half of the cylinder. For a PEPS on an infinitely long cylinder, the RDM can be constructed from the leading eigenvectors of the transfer operator through a relation [equation not recovered] in which U is an isometry relating the physical degrees of freedom to the virtual ones [20], and we have adopted the convention that the first index of σ_{L,R} is in the bra layer. This RDM further shares the same spectrum as ρ = σ_L σ_R^T, which we diagonalize to get information about the edge properties. As mentioned in the main text, for our case with bond dimension D = 7, it is not feasible to compute σ_{L,R} exactly, except for cylinders with small width. Instead, we can use the environment tensors computed from the CTMRG method to approximate σ_{L,R}; see Fig. S7 for an illustration. This is justified by the fact that the CTMRG method essentially approximates the fixed point of the transfer operator by a matrix product state formed by the environment tensors. One advantage of this approach is that, using σ_{L,R} constructed in this way, one can find the RDM in all different charge sectors simultaneously, while with exact contraction one has to find them separately. The RDM has both translation symmetry and SU(3) symmetry, allowing us to block-diagonalize it with the momentum quantum number K, the two U(1) quantum numbers, and the Z_3 quantum number (charge Q). A typical result of the full diagonalization is shown in Fig. S8, where linearly dispersing chiral modes can be seen in the low-energy spectrum.
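A minimal sketch of this diagnostic (Python; random positive-definite matrices stand in for the fixed-point boundary tensors σ_L, σ_R, and no symmetry resolution is attempted):

import numpy as np

def entanglement_spectrum(sigma_L, sigma_R):
    # ES levels -log(lambda_i) from the spectrum of rho = sigma_L sigma_R^T,
    # normalized so that the eigenvalues sum to one
    rho = sigma_L @ sigma_R.T
    lam = np.linalg.eigvals(rho).real  # similar to a Hermitian matrix here
    lam = np.sort(lam[lam > 1e-14])[::-1]
    lam /= lam.sum()
    return -np.log(lam)

# stand-in positive-definite boundary matrices (illustration only)
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)); sigma_L = M @ M.T
M = rng.normal(size=(8, 8)); sigma_R = M @ M.T
print(entanglement_spectrum(sigma_L, sigma_R)[:5])  # lowest five ES levels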
FIG. S7. Using CTMRG environment tensors to construct the RDM for the left half. (a) shows the transfer operator on a width N_v = 6 cylinder, whose right leading eigenvector σ_R is approximated by a ring of T_2 tensors, shown in (b); similarly for the left leading eigenvector σ_L. The RDM is then obtained by contracting a ring of T_4 and T_2 tensors, shown in (c).

The degeneracy between the charge Q = +1 and Q = −1 sectors can be identified in Fig. S8(b) and (c), where the same energy levels with the same momenta but conjugate SU(3) irreps appear in the Q = +1 and Q = −1 sectors separately, confirming the degeneracy mentioned in the main text. In all three Z_3 sectors, an entanglement gap [19] separating the chiral mode from the high-energy continuum can be identified, although the magnitude of the gap in the charge Q = 0 sector is much larger than the gap in the charge Q = ±1 sectors. In the Q = 0 sector, the chiral branch starts at momentum K_0 = −π/3, while the three quasi-degenerate chiral branches in the Q = ±1 sectors start at momenta K_±1 = −π/3, π/3 and π, respectively. Further examining the level counting of the chiral modes confirms that they satisfy the SU(3)_1 Wess-Zumino-Witten (WZW) conformal field theory (CFT) prediction [21,22]. See Tab. S6 for a list of the tower of states in the SU(3)_1 WZW CFT. It is interesting to see that the Virasoro level contents also exhibit a degeneracy between the charge Q = +1 and Q = −1 sectors, which is perfectly recovered by the numerically computed entanglement spectrum. Due to this degeneracy, in the following we have plotted the Q = +1 sector and the Q = −1 sector together, using open symbols and filled symbols respectively, to stress this symmetry (see Fig. S9 and Fig. S10).

It should be noted that, when using CTMRG environment tensors to construct the approximate RDM, the environment bond dimension χ is the only tuning parameter. Since χ controls the accuracy of the CTMRG procedure, we expect the level contents of the chiral CFT modes to become more complete with increasing χ. This finite-χ effect is shown in Fig. S9. Certain features of the ES, e.g., the momentum shift in all sectors and the three branches in the charged sectors, are present for all the different χ we have considered. This is reasonable since the low-energy spectrum converges first with increasing χ, and suggests that these features are intrinsic properties of the optimized PEPS wave function, i.e., not artifacts of the approximation.

FIG. S9. In each sector, the (in)complete Virasoro levels have been indicated by (blue) red boxes when necessary, and the missing levels can be found in the higher-energy spectrum, marked by blue arrows. Their contents are shown in red vertically (see also Tab. S6). (Since the Q = +1 sector is degenerate with the Q = −1 sector, only Virasoro levels in the former sector are marked.) With increasing χ, more Virasoro levels become complete, which is evidently seen in the charge Q = 0 sector. This trend is not monotonic in the charged sectors, due to the large number of branches (three) on a relatively small width (N_v = 6) cylinder, which mix with each other in the higher-energy spectrum. Nevertheless, the CFT spectrum in all sectors gets more separated from the high-energy continuum with increasing χ.

The momentum shift observed in the ES on the N_v = 6 cylinder has dramatic consequences for the ES on N_v = 3 and N_v = 9 cylinders, shown in Fig. S10. In the charge Q = 0 sector, for both N_v = 3 and 9, a linearly dispersing mode can be vaguely identified. However, the content of each level is doubled, i.e., two singlets for the n = 0 level, two 8's for the n = 1
level, since the finite momentum of the ground state, K_0 = −π/3, is incommensurate with N_v = 3, 9. This scenario is further confirmed by exact contraction for the N_v = 3 case (data not shown). In the charged sectors with N_v = 3, the lowest level can appear at different momenta, which typically depends on χ, see Fig. S10(c) and (d). This is in agreement with the three quasi-degenerate branches with different momenta found on the N_v = 6 cylinder.

Finally, we close this section by briefly discussing the ES in the flux sectors. In this work we have mainly focused on the ES in the topological sectors without flux, and succeeded in finding signatures of chiral topological order. The Z_3 gauge symmetry implies that we can also construct topological sectors with flux, see Fig. S11 for an illustration. However, previous studies in the SU(2) case [23,24] suggest that the ES in flux sectors does not follow a simple CFT description. Therefore, we do not explore it here but leave it to further study.
FIG. S11. On infinitely long cylinders, topologically quasi-degenerate states can be constructed by choosing virtual boundary states ⟨B_L|, |B_R⟩ belonging to a fixed Z_3 charge sector, with or without a nontrivial Z_3 flux insertion (shown as blue squares).

VII. TOPOLOGICAL EXCITATIONS AND CORRELATIONS IN SYMMETRIC PEPS

As discussed in the main text, the Z_3 gauge symmetry, generated by Z (with Z^3 = I_D), implies topological excitations on the infinite plane, whose types can be labeled by the group elements and the group irreps. Here we will not describe the full details of the theory, but refer the interested reader to Ref. [25]. The spinon, one of the topologically nontrivial elementary excitations, can be created by modifying a local tensor such that it belongs to a different irrep of Z_3. This can be achieved by acting on the virtual level of the local tensor A with an operator X, see Fig. S12. Its anti-particle can then be similarly created with an operator X^2, also acting on the virtual index. Apart from the basic algebraic relation of X and Z, XZ = ωZX with ω = e^{i2π/3}, which generalizes the anti-commutation relation between the Pauli matrices σ_x and σ_z to Z_3, the choice of X is not unique, due to the internal SU(3) symmetry of the ansatz. The specific X we use to compute the spinon-antispinon correlation function is [expression not recovered]. In principle, the operator X can be put on any of the four virtual indices of A. However, unlike the Pauli matrix σ_x in the Z_2 case, X in the Z_3 case cannot be chosen to be symmetric. Thus the order of the indices matters when computing anyonic correlation functions using X. In practice, we always put X or X^2 in the ket layer, with the first index contracted with the down index of the local tensor A. Apart from spinons, the Z_3 gauge symmetry also allows us to construct vison excitations, which are the end points of a Z or Z^2 string operator acting on the virtual level. Their bound states, so-called parafermions, can be created by attaching spinons to the end points of the virtual string. With these topological excitations at hand, the calculation of their correlation functions is straightforward, as shown in Fig. S13(a) and (b). The corresponding transfer matrices can also be constructed, without or with flux, see Fig. S13(c) and (d).

VIII. NONZERO ELEMENTS OF SYMMETRIC TENSORS

For the sake of completeness, we now present the tensors resulting from the classification. According to the irreps of the C_4v group, the on-site projectors can be classified into four real classes and two complex classes. Since only the real classes are used, we list their nonzero elements below, denoted as A_1, A_2, B_1 and B_2. See also Tab. S1 for the U(1) quantum numbers of each basis state in both the physical and virtual spaces. The expressions of the three components of each real tensor are provided in Tables S17-S27, where the tensor indices are in [up, left, down, right] order.
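To close, a concrete illustration (Python) of the Z_3 clock-and-shift algebra invoked in Sec. VII, on a single 3-dimensional charge block; this verifies the relation XZ = ωZX but is not the paper's specific 7×7 operator X:

import numpy as np

omega = np.exp(2j * np.pi / 3)

# Z3 clock and shift on a 3-dimensional space (generalized Pauli matrices)
Z = np.diag([1, omega, omega**2])
X = np.roll(np.eye(3), -1, axis=0)  # shift: X|j> = |j-1 mod 3>

assert np.allclose(X @ Z, omega * Z @ X)  # XZ = w ZX
assert np.allclose(np.linalg.matrix_power(X, 3), np.eye(3))
assert np.allclose(np.linalg.matrix_power(Z, 3), np.eye(3))

# X^2 winds the charge the other way: X^2 Z = w^2 Z X^2
assert np.allclose(X @ X @ Z, omega**2 * Z @ X @ X)
print("Z3 clock-shift algebra verified")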
Is fat suppression in T1 and T2 FSE with mDixon superior to the frequency selection-based SPAIR technique in musculoskeletal tumor imaging?

Objective To determine the image quality of fast spin echo (FSE) with mDixon relative to spectral attenuated inversion recovery (SPAIR) FSE sequences in musculoskeletal tumor imaging on a 1.5-T MRI system.

Materials and methods In a HIPAA-compliant prospective study, 265 patients requiring musculoskeletal tumor MRI scans were included. Patient consent was waived by the medical ethical committee. Two radiologists compared SPAIR and mDixon FSE water-only images in both T2- and T1-weighted gadolinium-enhanced (T1-Gd) sequences using a five-point scale (paired samples t test and visual grading characteristics (VGC) curves). Homogeneity of fat suppression, noise, contrast, several artifacts (motion, phase, edge blurring and water-fat swap) and subjective preference were evaluated.

Results Readers did not have a subjective preference for either sequence in 71% and 55% of cases (readers 1 and 2, respectively). Scores for homogeneous fat suppression were significantly (p < 0.01) higher for mDixon (4.88 in T2 and 4.87 in T1-Gd) than for SPAIR (4.31 for T2 and 4.21 for T1-Gd). All VGC curves for homogeneity demonstrated a preference for mDixon. In 57 individual mDixon cases, fat-suppression homogeneity was strikingly better (≥ 2 points higher), notably in areas with field heterogeneity. Average noise and contrast scores were slightly higher for mDixon, as were motion artifact scores for SPAIR (< 0.5 points difference).

Conclusions mDixon fat suppression was significantly more homogeneous than SPAIR on both T2 and T1-Gd FSE images in musculoskeletal tumor protocols. In areas of field inhomogeneity, mDixon outperforms SPAIR. SPAIR had slightly fewer motion artifacts than mDixon.

Introduction

Fat suppression in musculoskeletal oncology magnetic resonance imaging (MRI) is used for improving lesion conspicuity and lesion characterization. Since the introduction of turbo or fast spin echo (FSE) sequences, fat suppression has become indispensable because decreased J-coupling causes high signal intensity of fat in these sequences [1]. Some radiologists, however, prefer using T2-weighted images without fat suppression because the signal-to-noise ratio (SNR) is higher, depiction of anatomy is easier, and problems with inhomogeneous fat suppression do not exist. Several methods of fat suppression have been developed and implemented over the years, each with their own advantages and limitations. These techniques are often based on chemical shift, non-selective inversion pulses, and hybrid techniques. In the 1980s, Dixon introduced a chemical shift method based on the phase shift secondary to water-fat resonance frequency differences. The method allowed the separation of water and fat signals to be postponed to the image post-processing phase, and required only a single data acquisition sequence with multiple echo times [2]. Relative to other fat-suppression techniques, the classic Dixon techniques have long acquisition times and high sensitivity to B0 heterogeneity. However, because the separation of fat and water takes place during image reconstruction, the main advantages are independence of field strength and decreased sensitivity to B1 field heterogeneity [3].
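For intuition, a minimal sketch (Python; idealized toy numbers, not a reconstruction pipeline) of the classic two-point separation: with in-phase signal W + F and opposed-phase signal W − F, water and fat follow from half-sums and half-differences. Real implementations must additionally correct the B0-induced phase, which is exactly the sensitivity noted above:

import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    # classic two-point Dixon separation (idealized: assumes perfect
    # B0 homogeneity, i.e. no off-resonance phase between the echoes)
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat

# toy 2x2 'images': in-phase = W + F, opposed-phase = W - F
W = np.array([[10.0, 8.0], [9.0, 0.5]])
F = np.array([[1.0, 6.0], [0.2, 9.0]])
water, fat = two_point_dixon(W + F, W - F)
print(np.allclose(water, W), np.allclose(fat, F))  # True True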
Despite several improvements in hardware and software, including multiple-point sequences, phase correction, and parallel imaging [4,5], the still relatively long acquisition times made this sequence unpopular in routine clinical practice. A more recent modified Dixon (mDixon FSE) sequence uses two-point Dixon with flexible echo times rather than fixed in- and opposed-phase echoes [3,6]. Decreased echo times and lowered pixel bandwidth result in more efficient data acquisition and higher SNR. This allows a reduction of acquisition times and reduces sensitivity to B0 heterogeneity compared to classic two-point Dixon, while maintaining the advantages of accurate water and fat separation in reconstructed images and reduced dependency on high field strength. Several studies have described superior image quality of various modern Dixon techniques compared to conventional fat-suppression methods in MSK imaging [7][8][9][10][11][12][13][14]. These studies mostly assessed image quality in specific anatomical sites. Our aim was to determine the image quality of water-only mDixon FSE in musculoskeletal tumor imaging compared to frequency-selective fat-suppression-based spectral attenuated inversion recovery (SPAIR) FSE sequences in T2- and T1-weighted gadolinium-chelate enhanced (T1-Gd) images, in a specific tumor protocol on a 1.5-Tesla MRI system.

Patients

In this prospective study, during a 2-year period (2015-2016), all consecutive patients who required a musculoskeletal tumor scan on the same 1.5-T MRI scanner were included. Because all studies were clinically indicated, patient consent was waived by the local medical ethical committee. All untreated new patients and patients being treated or under surveillance were eligible. Indications were diagnosis, staging, therapy monitoring or detection of recurrence. From 330 eligible patients, 66 were excluded because of technical protocol violations such as the use of tailored pulse sequence improvements. During the first year (129 patients), a T1-Gd mDixon sequence was added to the standard protocol. In the second year (135 patients), a T2-weighted mDixon sequence was used instead. Out of 264 patients, 132 were male. Mean age for males was 49.7 (range, 11-83) years and for females was 48.2 (range, 12-88) years. Mean age for all patients was 48.9 (range, 11-88) years.

Protocol

All scans were performed using the same 1.5-T MRI system (Philips Ingenia, Release 5.3.0.3, Best, The Netherlands). Surface coils and scanning parameters depended on the body part imaged: for the shoulder, mediastinum, trunk, and pelvis the 32-channel torso surface coil was used, for the knee the 16-channel knee coil, and for the other extremities the eight-channel small extremity coil. All elements of these coils were active during scanning. The standard musculoskeletal tumor protocol included SPAIR fat suppression for both axial T2- and multiplanar T1-weighted FSE Gd-chelate (0.2 cc/kg of Gadoterate Meglumine, Dotarem, Guerbet, Cedex, France) enhanced sequences.
To this protocol, either a T2-weighted mDixon FSE or a Gd-chelate enhanced T1-weighted mDixon FSE sequence was added. The mDixon and SPAIR sequences were performed in the same session, on the same patient, and the surface coil, scanning plane, slice thickness, and resolution were the same for both sequences. Between patients, however, these parameters varied, depending on the body part that was imaged. Default shimming was used in all sequences; no additional shimming was performed. The reference tissue selected for mDixon was 'skeletal muscle'. From the mDixon reconstructions, the water-only reconstructions were used for comparison to the corresponding SPAIR images. In T2-weighted images, the mean imaging parameters were as follows for mDixon FSE and SPAIR FSE, respectively: repetition time (TR) 2679 (standard deviation (SD) 53) and 3321 (SD 71) ms, echo time (TE) 63 (SD 0.6) and 60 (SD 0.1) ms, echo train length 16 (SD 0.4) and 14 (SD 0.5), number of signal averages (NSA) 1.4 (SD 0.04) and 1.5 (SD 0.04). In T1-Gd-chelate images with mDixon FSE and SPAIR FSE, the mean parameters were, respectively: TR 676 (SD 16) and 701 (SD 22) ms, TE 13 (SD 5) and 13 (SD 0.6) ms, echo train length 6 (SD 0.1) and 6 (SD 0.1), and NSA 1.1 (SD 0.03) and 1.2 (SD 0.03). The inversion time for the SPAIR sequences was 95 ms.

Image analysis

mDixon and SPAIR stacks were compared on adjacent monitors using a Sectra viewing system (IDS7 PACS, Linköping, Sweden). The left or right position of the sequences had been randomized by one of the authors (W. H., not one of the readers). Two radiologists with 33 years (J.B.) and 15 years (C. v R.) of musculoskeletal MRI imaging experience, blinded to the sequence names and clinical information, separately completed a questionnaire comparing eight parameters. Readers based their grade on the whole image stack. Patient order was the same for both readers, and the study population was read in multiple sessions. The primary image quality parameter was homogeneity of fat suppression throughout the sequence. Five other parameters included muscle-fat contrast, image noise, random motion artifacts, phase-encoded motion artifacts, and edge blurring. Motion artifacts were defined as artifacts due to random motion such as bowel contractions, and phase artifacts as ghosting due to repetitive motion, such as breathing. Edge blurring was defined as small lines across the borders of anatomic structures. These six parameters were graded on a five-point scale as follows: 5, perfect image without artifacts; 4, small artifacts at the periphery of the image; 3, prominent artifacts but no interference with the region of interest; 2, prominent artifacts in the region of interest; and 1, the region of interest could not be evaluated due to artifacts. The water-fat swap artifact was graded as present or absent.

Fig. 2 Contrast and noise. Axial T2-weighted images of the right upper leg. The contrast between normal fat and muscle tissue differs between the mDixon (a) and spectral attenuated inversion recovery (SPAIR) (b) images in this case. In the mDixon image, fat exhibits a lower signal intensity compared to muscle because the signal from fat is more efficiently eliminated. In the SPAIR image, the signal intensities of both tissues are similar. Both images, scanned in the same location and with the same coil, experience problems with field homogeneity and noise in the periphery (dorsal side of the leg).
In the SPAIR image, noise is more prominent and interferes with the visibility of anatomical structures.

Finally, subjective preference for either technique was recorded when present. Differences between mDixon and SPAIR of two points or more were defined as outliers and were analyzed at a later time by two observers in consensus to determine the reason for this difference.

Statistical analysis

With a desired power of 0.8, a significance level of 0.05, an anticipated difference of 0.5 points on a scale of 5 in the semiquantitative scoring of imaging parameters, and an expected standard deviation of 1.5 points, a minimal study population of 129 patients was needed [15]. Results were collected using the Formdesk questionnaire system (Innovero Software Solutions, Wassenaar, The Netherlands). Statistical analysis was performed in collaboration with the department statistician, using SPSS Statistics version 23 (IBM Corporation, New York, USA). For scan time and inter-reader reliability, the scores were compared with a paired Student's two-tailed t test. To compare mDixon and SPAIR image quality, the average scores of the readers were compared with a paired Student's two-tailed t test using a 5% level of significance. A mean difference of less than 0.5 points on the five-point scale was considered a non-relevant finding. Six parameters were analyzed with visual grading characteristics (VGC) analysis and the area under the curve (AUC). This method, described by Båth [16], was developed to compare image quality on a multiple-point scale and uses frequency tables to produce ROC-like curves in order to depict the readers' preference. Contingency tables were used to detect outlier cases in the primary parameter.

Results

The scanned body parts are listed in Table 1. The upper trunk included spine, neck, thoracic wall, mediastinum and shoulder. The lower trunk included abdominal wall, retroperitoneum, pelvis, and hips. Extremity scans included elbow, wrist, hand, knee, ankle, and foot. The majority of the scans were performed on knees and pelvises (42 (31%) knees and 24 pelvises (18%) of 135 T2 scans, and 64 (50%) knees and 26 pelvises (20%) of 129 T1-Gd scans). A tumor was seen in 81 (60%) of 135 T2 scans and in 68 (53%) of 129 T1-Gd scans. In the cases without a tumor, only post-treatment changes or normal findings were present. The mean acquisition time of the T2-weighted images was 190 (SD 61) seconds for SPAIR and 188 (SD 63) seconds for mDixon (p = 0.70, paired t test). For T1-weighted post-Gd images, the acquisition times were 152 (SD 51) seconds for SPAIR and 204 (SD 60) seconds for mDixon (p < 0.001, paired t test). In several parameters (Table 2), inter-reader differences reached significance. However, the difference between average scores was never larger than 0.5 points. The scores for mDixon and SPAIR, averaged over the two observers, are listed in Table 3. Average scores for fat-suppression homogeneity in the T2-weighted scans were 4.88 (SD 0.35) for mDixon and 4.31 (SD 1.02) for SPAIR (p < 0.01). In the T1-Gd-chelate scans, mean scores were 4.87 (SD 0.39) for mDixon and 4.21 (SD 1.01) for SPAIR (p < 0.01). An example of fat-suppression heterogeneity is shown in Fig. 1. For contrast and noise (Fig. 2), mDixon received slightly higher scores. However, this difference was smaller than 0.5 points and thus not large enough to be relevant according to our predefined criteria. Motion, phase (Fig. 3) and blur artifacts (Fig. 4) were more prominent in mDixon imaging, but again these differences did not reach the 0.5 threshold.
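As an illustration of the primary comparison described above, a minimal sketch (Python with SciPy; the score arrays are made-up stand-ins, not study data) of the two-tailed paired Student's t test and the 0.5-point relevance criterion:

import numpy as np
from scipy.stats import ttest_rel

# made-up five-point homogeneity grades (stand-ins, not study data)
mdixon = np.array([5, 5, 4, 5, 5, 4, 5, 5, 5, 4], float)
spair = np.array([4, 5, 3, 4, 5, 4, 4, 3, 5, 4], float)

t, p = ttest_rel(mdixon, spair)  # two-tailed paired Student's t test
diff = (mdixon - spair).mean()
print(f"mean difference = {diff:.2f} points, t = {t:.2f}, p = {p:.4f}")
# per the predefined criterion, a difference is clinically relevant
# only if it exceeds 0.5 points on the five-point scale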
Water-fat swap artifacts (Fig. 5) were present in four out of 135 cases (3%) in T2-weighted mDixon images and in two out of 129 cases (2%) in the T1-Gd group. The visual grading characteristics curves, shown in Fig. 6, demonstrate the degree of preference of each reader for either the mDixon or the SPAIR sequence. Both readers showed a preference for mDixon concerning fat-suppression homogeneity, the primary parameter. Areas under the curve were 0.67 for reader 1 and 0.79 for reader 2 in the T2 group, and 0.68 for reader 1 and 0.69 for reader 2 in the T1-Gd group. A slight preference for mDixon was found for contrast for reader 1 (AUC 0.61 (T2)), and a slight preference for SPAIR for phase artifacts and blur for reader 1 (AUC 0.39 (T2) and 0.40 (T1-Gd)). The other areas under the curve for noise, contrast, and artifacts fell within 0.1 points of 0.5 and were thus categorized as no preference.

For additional outlier analysis of the primary parameter (homogeneity of fat suppression), a distinction was made between scores that were higher by 0 or 1 point and scores that were higher by 2 or more points for either mDixon or SPAIR. For overview purposes, all fat-suppression homogeneity scores of both readers are shown in contingency tables in Table 4 (each case was scored by two readers, thus resulting in twice as many scores as cases). The majority of scores fell into the 0-1 point difference group: 228 (84%) in T2 and 222 (86%) in T1-Gd. Of these cases, 95% of sequences contained few artifacts (scoring either four or five points: 218 out of 228 in T2 and 210 out of 222 in T1-Gd). In a smaller number of scores, the difference was two points or more (from here on referred to as 'outliers'). These outliers were gathered for further analysis. It should be noted that, because each case was scored by both readers, two scores could refer to the same case. The T2-weighted images yielded 42 outlier scores, corresponding to 29 separate cases in favor of mDixon. In the T1-Gd group, outliers were found 35 times, corresponding to 28 separate cases in favor of mDixon and one case in favor of SPAIR. These outliers were analyzed in order to determine the reason for the fat-suppression heterogeneity. Among the outliers in favor of mDixon in the T2 group (29 scans), six SPAIR scans (21%) suffered from bulk susceptibility artifacts (three cervical spines, two shoulders, one foot), 20 scans (69%) from B0 field inhomogeneity problems at the edge of the gantry (seven shoulders, seven elbows, six hips), and six scans (21%) from field inhomogeneity problems at the edge of the coil (one cervical spine, one wrist, three knees, and one foot). Among the outliers in favor of mDixon in the T1-Gd group (28 scans), five SPAIR scans (18%) suffered from bulk susceptibility artifacts (three cervical spines, one shoulder and one thoracic wall), 16 scans (57%) from field inhomogeneity at the edge of the gantry (five shoulders, five elbows, six hips), and 14 scans (50%) from field inhomogeneity at the edge of the coil (one shoulder, five hips, six knees, one ankle, and one thoracic wall). In this group, mostly sagittal and coronal scans were performed, using the full length of the coil. This is why artifacts at the edge of the coil were more conspicuous than in the T2 group, which was mostly scanned in an axial plane. The scan of the outlier in favor of SPAIR in T1-Gd was performed in the knee and showed field inhomogeneity problems in the form of water-fat swap artifacts.
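The VGC areas under the curve can be estimated nonparametrically: the AUC equals the probability that a randomly drawn rating of one technique exceeds that of the other, with ties counted as one half. A minimal sketch (Python; illustrative scores, not study data):

import numpy as np

def vgc_auc(scores_a, scores_b):
    # nonparametric area under the VGC curve for ordinal ratings:
    # P(A > B) + 0.5 * P(A == B); 0.5 means no preference
    a = np.asarray(scores_a, float)[:, None]
    b = np.asarray(scores_b, float)[None, :]
    return ((a > b).sum() + 0.5 * (a == b).sum()) / (a.size * b.size)

# illustrative homogeneity ratings (five-point scale)
mdixon = [5, 5, 5, 4, 5, 5, 4, 5]
spair = [4, 5, 3, 4, 5, 2, 4, 4]
print(f"AUC = {vgc_auc(mdixon, spair):.2f}")  # > 0.5 favors mDixon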
The overall subjective preference scores are listed per reader in Table 5. In more than half of the cases, readers had no preference for either mDixon or SPAIR (55% and 71%). In 19 and 28% of cases they preferred mDixon, and in 9 and 18% of cases they preferred SPAIR.

Discussion

The current study shows that the theoretical advantages of mDixon FSE, a fast adaptation of the classic two-point Dixon FSE technique, result in superior quality of T2 fat-suppressed images relative to the SPAIR FSE technique, without disadvantages such as long acquisition times or significant interference by blurring artifacts. Since differences between pulse sequences may vary secondary to specific protocols tailored to clinical indications, we limited our study to a homogeneous population in which a musculoskeletal tumor protocol was clinically indicated. In the majority of patients, the two observers did not have an overall subjective preference for either mDixon or SPAIR in both T2 and T1 Gd-chelate enhanced sequences, but when looking at the individual image quality parameters, fat suppression with mDixon proved to be significantly more homogeneous than with SPAIR on both T2- and T1-Gd enhanced imaging. Using the predefined threshold of a minimum 0.5-point difference on a five-point scale, this was the only parameter of six reaching both a significant difference in the averaged reader scores and a substantial preference in the visual grading characteristics curves for both individual readers. However, the difference in averaged scores was only 0.57 for T2 and 0.66 for T1-Gd sequences. This means that, overall, in clinical practice the difference between the two sequences is minimal. However, our consensus analysis of outliers showed that mDixon performed strikingly better in a subset of cases with B0 field inhomogeneity at the edge of the gantry or coil, and in areas with bulk susceptibility, i.e., around the cervical spine, thoracic wall, shoulder girdle, elbows, and hips.

Fig. 5 (caption fragment) In the mDixon water-only image (a), this causes water-fat swap artifacts: the signal intensities of fat and water are swapped. Thus, high signal is erroneously assigned to the intramedullary cavity, the intermuscular tissue (arrowheads), and the subcutaneous fat tissue. Note the sharp demarcation of the water-fat swap artifact (arrow) in the mDixon image.

The secondary parameters showed some significant differences that were considered to be clinically irrelevant because they did not reach the predefined 0.5-point threshold. These differences included better scores for contrast and noise for mDixon, and for motion and edge blurring artifacts for SPAIR. We found a recognizable water-fat swap in only six out of 264 mDixon exams (2%). Advantages of Dixon techniques relative to other fat-suppression techniques include reduced dependency on high field strength, high SNR, a low specific absorption rate, reduced sensitivity to metal artifacts, and insensitivity to B1 inhomogeneity. With two-point mDixon and three-point or four-point Dixon techniques, decreased sensitivity to B0 inhomogeneity is achieved [3,5,17]. The main disadvantages are the long acquisition time and the dependency on reconstruction algorithms, with edge blurring due to long echo trains, sensitivity to motion and phase shifts, and water-fat swap artifacts [3,4,18].
In the current study, the use of asymmetrical rather than the classic symmetrical echoes, in combination with the reconstruction algorithms of the mDixon technique, provides further decreased sensitivity to B0 inhomogeneity and eddy currents, and decreased echo spacing with shortened acquisition times and less blurring [3]. Dixon fat-water separation can be either gradient echo based or FSE based. The mDixon sequence can also be applied to both, using a bipolar gradient readout for gradient echo and a multi-repetition spin echo sequence with flexible TE values for FSE [6].

Fig. 6 Visual grading characteristics (VGC) curves. The VGC curves of fat-suppression homogeneity for each reader with the corresponding area under the curve (AUC) in the bottom right corner. a VGC curves for reader 1 (red) and reader 2 (blue). b VGC curves for noise (both readers purple) and contrast (both readers green). c VGC curves for motion artifacts (dark blue), phase artifacts (green) and blurring (red).

Because we aimed to evaluate the routinely used FSE techniques in MSK tumor imaging, we did not use gradient echo mDixon techniques. Mentioned in the European Society of Musculoskeletal Radiology guidelines for soft tissue tumor imaging [19] and practiced in many tumor centers is the use of T2-weighted images both with and without fat saturation. The possibility of saving time by using both the water-only and in-phase reconstructions from the same Dixon acquisition to replace these sequences was beyond the scope of the current study. Interestingly, the use of the other Dixon reconstructions (in-phase, out-of-phase and fat-only reconstructions) in MSK imaging is being studied by other groups, including the possibilities for fat quantification [20,21], and might contribute to saving time by replacing other sequences. In the future, mDixon may allow other innovations such as real-time adjustment of the water-fat contributions to an image at the workstation.

There are several limitations to our study. Firstly, although we aimed to use the same acquisition times for both mDixon FSE and SPAIR FSE, the T1-Gd mDixon sequence was on average 50 s longer than SPAIR. Unfortunately, the technicians took the liberty of deviating from the protocol in the T1-Gd-chelate sequences in an attempt to improve image quality, but there was no difference in acquisition times between the T2-weighted SPAIR and mDixon sequences. Secondly, although readers were blinded to the protocol name, it is likely that they could deduce which sequence was the mDixon sequence due to inherent differences in contrast and based on the type of artifacts. Thirdly, the image quality was only studied at 1.5 T, because we aimed at studying a homogeneous data set and we schedule more musculoskeletal oncology patients on our 1.5-T than on our 3-T scanners. Potentially, differences in image quality will be larger when using a 3-T system, because magnet heterogeneity increases with field strength. Image quality is difficult to quantify and depends on personal preference and display settings. Finally, there were significant differences between reader scores. However, these differences were small and considered irrelevant.

Table 4 (caption) All scores are listed (270 scores for 135 cases from the T2 group and 258 scores for 129 cases from the T1-Gd group). Scores with two or more points of difference between modified Dixon (mDixon) and spectral attenuated inversion recovery (SPAIR) are indicated in red.
The scans that received high scores (4 or 5) and differed by less than 2 points are marked with a turquoise box. (T2 = T2-weighted images, T1-Gd = T1-weighted gadolinium-chelate enhanced images.)

Concluding, in a musculoskeletal oncology population, mDixon FSE at 1.5 T allows time-effective creation of T2-weighted images with superior elimination of fat relative to SPAIR FSE images, and without disadvantages. Especially in areas of field inhomogeneity, mDixon is preferred. When motion is an issue, SPAIR is preferred. In our tumor protocols, we have replaced T2 SPAIR FSE by mDixon FSE.
Retrospective analysis of paediatric tracheostomy in Indian tertiary care centre

INTRODUCTION

Tracheostomy is one of the most frequently performed planned or emergency surgical procedures in critically ill patients who are on prolonged ventilatory support, and in cases of retained pulmonary secretions and respiratory insufficiency.1 Around the middle of the nineteenth century, Trousseau performed approximately 200 tracheostomies for diphtheria with airway obstruction.2 Paediatric patients may have 2 to 3 times more morbidity than adults and have high complication rates. The infantile larynx and trachea have a small diameter; therefore, severe life-threatening airway obstruction can develop from even mild mucosal oedema. The infantile larynx occupies a higher position in the neck. Sometimes the cricoid cartilage may not be prominent on palpation and may make it difficult for the surgeon to ascertain the level of the airway.3 The anatomical differences between adults and children may add to the difficulty of managing such patients. These paediatric patients are medically vulnerable and at high risk, with higher rates of complications.4 Previously, infective causes were a much more common indication for paediatric tracheostomy; however, the indications have now shifted to airway obstruction, long-term ventilatory dependence, neurological impairment and respiratory problems.5 This tertiary care centre is a referral centre and receives many such paediatric patients in whom tracheostomy is indicated. The aim of this study was to present our clinical experience with paediatric tracheostomy cases in terms of indications, intra-operative surgical challenges, complications and outcome of the procedure.

METHODS

This study is a retrospective analysis of paediatric tracheostomy procedures performed between June 2015 and June 2019 at Sri Aurobindo Institute of Medical Sciences (SAIMS), Indore, Madhya Pradesh. The inclusion criteria consisted of children less than 12 years of age who underwent tracheostomy. Patient records were analysed in terms of indications for tracheostomy, any specific issues during the procedure, complications and outcome. The study was approved by the ethics committee, and consent was also obtained from the parents of all patients. All tracheostomies were performed by a consultant ENT surgeon in the operation theatre. The airway was secured with an endotracheal tube in all cases except two, in which intubation was not possible. The patient's neck was hyperextended, and local infiltration with 2% lignocaine with adrenaline was performed according to the patient's weight. The thyroid cartilage, cricoid cartilage and tracheal rings were palpated and marked. The tracheostomy performed at our institution consists of a vertical skin incision below the level of the cricoid cartilage, measuring 1 to 1.5 cm in length.
Dissection of the underlying strap muscles was performed layer by layer, staying strictly in the midline, and the muscles were retracted laterally to expose the trachea. When encountered, the thyroid isthmus was carefully dissected off and retracted superiorly with retractors. Bipolar-assisted dissection was used when needed for haemostasis. The cricoid cartilage was identified and used as a landmark for the tracheal incision. The trachea was confirmed by aspiration of air into a saline-filled syringe, and conventional stay sutures were applied to the anterolateral wall of the trachea for traction and fixed. A vertical incision was placed over the 2nd and 3rd tracheal rings; in a few patients, a horizontal intercartilaginous incision was made between the 2nd and 3rd tracheal rings. The tracheal incision was dilated, and a tracheostomy tube of appropriate size was then inserted into the trachea and secured. In the early post-operative period, patients were managed in the paediatric intensive care unit (ICU) under cardiorespiratory monitoring. All patients underwent chest X-ray (anteroposterior view) to ascertain the position of the tube and the condition of the lungs. Post-operative care involved endotracheal suction of mucus, blood clots and debris to prevent occlusion of the tracheostomy tube. The first change of tube was performed seven days after the surgical procedure. Patients were planned for decannulation or continued on the tracheostomy tube depending on their clinical condition and progress. The process of decannulation was started once patients were off ventilatory support and had no airway obstruction. Laryngotracheal endoscopy was performed prior to the decannulation procedure to ascertain the status of the airway above the tracheostomy and to rule out any vocal cord pathology or palsy. Decannulation involved gradually reducing the calibre of the tracheostomy tube.

RESULTS

A total of 32 patients underwent tracheostomy during the 4-year period from June 2015 to June 2019. Of the 32 patients, 2 (6.3%) were in the 1-4 year age group, 14 (43.7%) in the 5-8 year age group and 16 (50%) in the 9-12 year age group (Figure 1). Twenty patients (62.5%) were male and 12 (37.5%) were female. The number of patients each year ranged from 6 to 8 (Figure 2). Of the 32 patients, two underwent emergency tracheostomy, whereas the rest of the tracheostomies were planned. The various indications for paediatric tracheostomy in our study are listed in Table 1. Patients tracheostomised due to prolonged intubation included cases with respiratory infections and laryngotracheobronchitis (15.6% each), followed by neuromuscular disease (15.6%), seizure disorder (9.4%), metabolic disease (9.4%) and neurological infections (6.3%). Airway obstructive causes included cases with head injury (9.4%), followed by sub-glottic stenosis (6.3%), malignancy (6.3%) and craniofacial anomaly (3.1%). During the surgical procedure and in the follow-up period, the following complications were encountered. In two patients, the tracheostomy tube was inserted into a false passage, which was detected on the table, and the tube was reinserted. One patient had excessive bleeding from the region of the isthmus, which was controlled on the table with cautery and ligature. In the post-operative period, i.e. on the 4th-5th post-operative day, partial blockage of the tube was noted in four patients and was treated by a change of tube. Five patients developed granulations around the stoma, of whom one had excoriation of the peristomal skin.
In the paediatric ICU, two patients had accidental decannulation due to a strong cough reflex, which was managed by re-insertion of the tube by ICU staff (Figure 3). Twenty-two of the 32 patients are still on follow-up. Of these, six could not be decannulated despite recovery from the primary disease. Of the 32 patients who underwent tracheostomy, sixteen were successfully decannulated after recovery from the primary illness. Six patients were lost to follow-up. Six patients remain on a tracheostomy tube, and four patients died in hospital due to fatal worsening of the primary illness (Figure 4). For decannulation, patients underwent direct laryngoscopy using an endoscope in the operating room. Endoscopic evaluation assessed vocal cord mobility and any airway narrowing. Patients who had normal laryngoscopy findings and no features suggestive of aspiration of feeds were decannulated. Decannulation was done by gradual reduction in the size of the tube. Once patients were decannulated, the tracheostomy site was strapped with an adhesive bandage. They were kept under 24-hour observation following decannulation to confirm their ability to breathe through the nose without distress and to detect any need for re-tracheostomy due to breathing difficulty.

DISCUSSION

Tracheostomy is a common surgical procedure performed in all age groups. Significant anatomical differences exist between the paediatric and adult airway. The air passage is relatively smaller in children than in adults, so performing tracheostomy in children requires good expertise and exposure to paediatric cases compared with adult tracheostomy. Paediatric tracheostomies are associated with a high rate of complications and mortality.⁶ Morbidity and mortality are about 2 to 3 times higher than in adults.⁷ However, studies have suggested that paediatric tracheostomy is relatively safe and carries less risk of complications if performed by a trained, experienced team at a tertiary care centre.⁸ Of the 32 patients in our study, 62.5% were male and 37.5% were female. Except for two, all tracheostomies were planned elective procedures done with oral intubation for airway control. In the last two to three decades, there has been a change in the indications for tracheostomy. Tracheostomy performed for infective conditions such as epiglottitis, laryngotracheobronchitis, and retropharyngeal abscess has declined owing to the development of better anaesthetic techniques and safer endotracheal intubation. Furthermore, immunization has reduced the incidence of these infectious diseases. 9,22 In our series, prolonged intubation was a more common indication for tracheostomy than airway obstruction, similar to other studies. 12,17,18 Carter et al reported neurological causes and causes requiring prolonged intubation as the major indications for paediatric tracheostomy. 11 Ward et al reported an increase from 22% (1980-85) to 45% (1985-90) in tracheostomies performed for prolonged intubation, while those done for airway obstruction fell from 67% to 42%. 12 Decannulation in paediatric patients is considered a difficult process. At our institution, we follow a protocol of pre-decannulation check endoscopy, gradual reduction of the tube size, and gradual capping of the tube. A similar protocol is followed at centres where higher numbers of paediatric tracheostomies are done. 14,15 In our study, 50% of patients (16 of 32) were decannulated successfully.
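As a quick arithmetic cross-check of the outcome and complication tallies reported above (and of the overall complication rate discussed below), the short sketch here uses only the counts stated in this study; note that 14 complication events in 32 patients is 43.75%, which the paper quotes (truncated) as 43.7%.

```python
# Sanity check of the outcome and complication tallies reported in this study (n = 32).

n = 32

# Outcomes at last follow-up, as stated in the results
outcomes = {
    "decannulated": 16,
    "still on tracheostomy tube": 6,
    "lost to follow-up": 6,
    "expired (primary illness)": 4,
}
assert sum(outcomes.values()) == n  # the four outcome groups account for all patients

# Complication events listed in the results section
complications = {
    "false passage (reinserted on table)": 2,
    "excessive isthmus bleeding": 1,
    "partial tube blockage": 4,
    "peristomal granulations": 5,
    "accidental decannulation in ICU": 2,
}
events = sum(complications.values())

print(f"complication events: {events}/{n} = {100 * events / n:.2f}%")   # 14/32 = 43.75%
print(f"decannulation rate : {outcomes['decannulated']}/{n} = "
      f"{100 * outcomes['decannulated'] / n:.1f}%")                     # 16/32 = 50.0%
```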
Similarly, Putra et al reported 66.6% successful decannulation. 9 However, Carr et al reported a decannulation rate of 34% in 142 children, whereas Dursum and Ozel reported successful decannulation in only 5 of 30 paediatric tracheostomies. 12,13 The decannulation rate at our institution is reasonably good in comparison. The overall complication rate in our study was 43.7%, which is comparable to other reported studies, whose complication rates ranged from 36% to 49%. [17][18][19][20] No pneumothorax, tracheocutaneous fistula, or tracheoesophageal fistula was reported in our series. Early complications mainly included partial tube blockage and accidental decannulation, while the late complications consisted of peri-stomal granulations. There was no tracheostomy-related death in our study. Various studies have reported mortality rates in paediatric tracheostomy ranging from 0.5% to 5%. 21 Overall, complications of paediatric tracheostomy have declined significantly, which is mainly attributed to tracheostomy being carried out in the operation theatre by a trained ENT surgeon and to better paediatric ICU care. 10

CONCLUSION

Based on our experience, there is a changing trend in the indications for paediatric tracheostomy, from infective causes to causes requiring prolonged ventilatory support. Paediatric tracheostomy is a relatively safe procedure compared with earlier eras if carried out meticulously by a trained ENT surgeon, preferably in the operation theatre. The complication rate of paediatric tracheostomy has declined owing to improvements in care facilities, trained teams, and proper decannulation protocols.
Challenges of Periodontal Tissue Engineering: Increasing Biomimicry through 3D Printing and Controlled Dynamic Environment

In recent years, tissue engineering studies have proposed several approaches to regenerate periodontium based on the use of three-dimensional (3D) tissue scaffolds alone or in association with periodontal ligament stem cells (PDLSCs). The rapid evolution of bioprinting has sped up classic regenerative medicine, making the fabrication of multilayered scaffolds—which are essential in targeting the periodontal ligament (PDL)—conceivable. Physiological mechanical loading is fundamental to generate this complex anatomical structure ex vivo. Indeed, loading induces the correct orientation of the fibers forming the PDL and maintains tissue homeostasis, whereas overloading, or a failure to adapt to mechanical load, can be at least in part responsible for incorrect tissue regeneration when using PDLSCs. This review provides a brief overview of the most recent achievements in periodontal tissue engineering, with a particular focus on the use of PDLSCs, which are the best choice for regenerating PDL as well as alveolar bone and cementum. Different scaffolds associated with various manufacturing methods, and data derived from the application of different mechanical loading protocols, have been analyzed, demonstrating that periodontal tissue engineering represents a proof of concept with high potential for innovative therapies in the near future.

Introduction

The periodontium is a complex system composed of gingiva, periodontal ligament (PDL), cementum, and alveolar bone, featuring a hierarchically compartmentalized architecture [1]. The homeostasis of this system is maintained by the PDL, a specialized connective tissue located between the cementum and alveolar bone that articulates (gomphosis) the teeth to the jaws [2,3]. Embryologically, the PDL derives from the dental follicle cells under the guidance of Hertwig's epithelial root sheath (HERS), which secretes numerous epithelium-derived factors [4] before obliterating almost completely. From a histological perspective, the PDL is an aligned fibrous network with a thickness ranging between 100 and 400 µm and is characterized by an extensive blood supply and a neural network [5]. The PDL is constituted by a heterogeneous population of cells (namely PDL cells) that includes periodontal ligament fibroblasts (PDLFs), which represent by far the largest population and are responsible for the deposition and maintenance of the extracellular matrix (ECM), and periodontal ligament stem cells (PDLSCs), showing both osteogenic and tendo/ligamentogenic characteristics. Collagen type I and, in lesser amounts, type III constitute cross-banded fibrils, named Sharpey's fibers, that provide mechanical support and are usually classified as dentinogingival, transseptal, or alveolodental.
The complex architecture of the periodontal apparatus (Figure 1), including dual-tissue interfaces (alveolar bone-PDL and PDL-cementum of the tooth root), is difficult to regenerate due to the small dimensions of the PDL and the challenging oral environment [16]. Any strategy aiming at periodontal regeneration should entail studying the specific events that guide the formation and remodeling of the PDL as well as understanding the intimate bond of tissue histology and function. Hence, this review outlines the most recent achievements in the field, with a particular emphasis on the biomimetic approach (foreseeing the use of PDLSCs in combination with biomimetic scaffolds, subjected in vitro to controlled native-like mechanical loading), showing that periodontal tissue engineering represents a promising strategy for innovative therapies in the near future.

The Role of PDL Cells

In 2004, Kawaguchi et al. proposed the autograft of bone-marrow-derived mesenchymal stem cells (BMMSCs) to enhance the healing of periodontal defects, which proved successful in a dog model [17]. This approach highlighted the potential of cellular therapy and paved the way to other pre-clinical studies dealing with BMMSCs [18], adipose-derived stem cells (ASCs) [19], and PDLSCs [20]. Among all the possible sources of MSCs, PDLSCs (Figure 2), which are characterized by the expression of many markers summarized in Table 1, may be selected for PDL regeneration owing to their commitment capacity, as they express scleraxis, i.e., a tendon/ligament-specific transcription factor, more than BMMSCs or dental pulp stem cells (DPSCs), and have the potential to form cementum and PDL-like structures [21,22]. Indeed, the preservation of the PDL is essential to achieving proper regeneration of the periodontium and avoiding the ankylosis of the tooth, i.e., direct contact between the root and the alveolar bone.
From this perspective, the role of transforming growth factor-β1 (TGF-β1) signaling becomes interesting, since its activation enables the commitment of cementocytes, while its inhibition promotes fibroblastic differentiation of the ligament progenitors [23]. PDLSCs transplanted into a periodontal lesion in a rat model generated typical PDL-like structures in vivo by forming Sharpey's-fiber-like collagen bundles connected to cementum-like structures [20]. New insights into the molecular regulation of periodontal attachment have been brought by Bai S et al., who investigated the regulatory mechanism of copine 7 (CPNE7) and cementum attachment protein (CAP) in coordination with cytoskeleton arrangement [24]. Regenerating cementum is regarded as key to promoting new fibrous tissue attachment, preserving the PDL, and avoiding tooth ankylosis [25]. Important issues should be considered in order to implement successful PDLSC-based protocols for PDL regeneration. Owing to the reduced volume of the source tissue, the yield of PDLSCs isolated from a single tooth may be poor, and the in vitro expansion of these cells has been linked to morphological changes and diminished expression of genes associated with the pluripotency of embryonic stem cells, such as NANOG and OCT4 [26]. Additionally, reducing the ex vivo manipulation of cells is a goal in every cell transplantation procedure, since ex vivo expansion of PDLSCs, as well as of ASCs, causes senescence and a decline in multipotency, and is subject to complex regulatory issues under the compelling requirements of good manufacturing practice (GMP) guidelines [27]. Therefore, it becomes paramount to select the best harvest conditions and methods to maximize the number of cells available. The outgrowth method has been suggested to outperform enzymatic digestion in terms of efficiency and preservation of the commitment capacity, regarding both the formation of mineralized nodules and the expression of cementoblast-like genes [28]. Consistently, the paper by Abuarqoub et al. compared the features of PDLSCs expanded using either fetal bovine serum (FBS) or platelet lysate (PL) and concluded that the latter outperformed the former [29]. Regarding the possible sources of heterogeneity influencing PDLSC behavior pertaining to differentiative potential, donor age plays an important role [30,31]. By analyzing the impact of donor age on PDLSC function, it was observed that PDLSCs from elderly populations exhibited decreased expression of osteogenesis-related genes, such as osteocalcin (OCN), COLL-1, and runt-related transcription factor 2 (RUNX2), as well as reduced osteogenic activity [30]. This has prompted the possibility of banking cells as soon as teeth are extracted, creating a reservoir for any future usage. Little is known, however, about the differences between PDLSCs deriving from deciduous and permanent teeth.
The latter seem to be less adipogenic than those from deciduous teeth according to some authors [32,33], while others disagree [34]. Moreover, PDLSCs from deciduous teeth express higher levels of cytokines that regulate host immunity, and of secreted factors involved in tissue degradation and catalytic activities, than PDLSCs from permanent teeth [35]. Distinguished researchers [36,37] explored the feasibility of using PDLSCs to form cementum and PDL-like structures in proper animal models, reporting very encouraging results. Unfortunately, the enthusiastic outcome of these in vivo studies does not seem to be fully supported by clinical evidence, which is surprisingly scarce. In fact, only one randomized controlled trial [38] assessed the safety and feasibility of using autologous PDLSCs in combination with bovine-derived bone mineral materials for treating periodontal bone defects, but it could not demonstrate any statistically significant difference "between the Cell group and the Control group (p > 0.05)", implying that further, larger studies will possibly shed light on the matter. It appears that in humans, cells, however important, may alone not be sufficient to regenerate the PDL, which is endowed with a peculiar and rather complex architecture.

Biomimetic Scaffolds to Reproduce the Micro-Environment of PDL

Traditionally, periodontal regeneration achieved through guided tissue regeneration (GTR) has been based on the concept of avoiding epithelial invasion of the bone defect to be treated by means of barrier membranes, allowing PDL and bone repopulation of the dental root. Thus, a specific avenue of research has been paved toward the improvement of these membranes, from the original non-resorbable expanded polytetrafluoroethylene (e-PTFE) [48] to the high technology level of recent developments, such as that proposed by Nasajpour et al. [49]. In parallel, the promising potential unleashed by tissue engineering, which relies upon combining biomaterials functioning as scaffolds and stem cells, has opened a range of new therapeutic strategies in the periodontal field. In a pioneering proof of concept, transplanted PDLSCs were shown to generate cementum/PDL-like structures in an animal model [36]. More recently, Shi et al., after culturing PDLSCs in osteogenic conditions and seeding them on a biphasic calcium phosphate scaffold, observed a periodontal regeneration in the recipient animal constituted by new bone formation and PDL-organized fibers correctly inserted into adjacent cementum and bone, along with neo-vascularization, after 12 weeks [37]. Given these premises, it has become increasingly evident that the PDL is itself the key to attaining the complete regeneration of the periodontium, since this thin tissue of less than 500 µm, interconnecting the dental root and alveolar bone through a series of collagen fiber bundles [21,22], is obliterated in periodontal disease. Many research efforts have been and are currently spent on identifying the best methods for fabricating biomimetic 3D scaffolds able to reproduce the PDL microenvironment. In Table 2, we report some relevant in vivo studies.

Cell Sheet Technology

One of the first tissue engineering approaches is cell sheet technology, based on culturing cells in hyper-confluency until they produce their own extracellular matrix (ECM), forming a cell sheet [58]. Proposed by Okano et al., this technique entails the use of poly-N-isopropylacrylamide (PIPAAm) as a convenient cell substrate capable of both supporting the growth of a cell monolayer at 37 °C and releasing it below 20 °C without any enzymatic degradation [59].
The adhesion of the cell sheet to the root surface was enhanced through the preservation of the integrin-fibronectin complex [60]. In 2009, Iwata et al. isolated canine PDLSCs and seeded them on temperature-responsive culture dishes until sheet formation. Three-layered PDL cell sheets supported with woven poly(glycolic acid) (PGA) were transplanted to the exposed dental root surfaces, and bone defects were filled with porous β-TCP, inducing both the regeneration of new bone and the connection of cementum with well-oriented collagen fibers [50]. In 2012, Vaquette et al. combined fused deposition modeling with electrospinning, obtaining a biphasic scaffold with bone and PDL compartments hosting cell sheets, and demonstrating that the presence of cell sheets promoted periodontal fiber attachment and cementum-like cells [51]. Takahashi et al. demonstrated that PIPAAm is useful for fabricating a brush surface with selective patterns capable of supporting cell growth while preserving orientation [61]. To overcome the pitfalls of single cell sheets in large-scale tissue injuries, Raju et al. proposed 3D complex cell sheets composed of multiple types of cells, attaining the functional connection of collagen fibers to the tooth root and alveolar bone [62]. Similarly, the co-culture of PDLSCs and human umbilical vein endothelial cells (HUVECs) allowed the generation of 3D cell sheet constructs that were wrapped around human tooth roots and implanted into the subcutaneous layer of mice. The presence of HUVECs contributed to the regulation of the thickness of the PDL, which was thicker than in mice treated with PDLSCs alone [52]. An unsolved issue of PDL cell sheet technology [63], however, is achieving directional control of the fibrous network within the constructs. To verify this crucial role of the ECM, as a proof of concept, microfibrous scaffolds were prepared by removing the cellular component from tooth slices using sodium dodecyl sulfate and Triton X-100, supporting the repopulation and differentiation of PDL cells [64].

3D Printing

Among the most promising techniques implemented to physically control the orientation of the PDL, researchers have focused recently on additive manufacturing, a technique that allows one to precisely control the macro- and micro-structure of the scaffolds [65,66]. Ideally, this technique can build complex tissues by depositing different materials layer by layer following 3D digital models, and can even embed cells directly within the constructs during fabrication in a process called bio-printing [67]. Hence, the main advantage over traditional tissue engineering protocols lies in the possibility of fine-tuning the creation of tissues to be akin to the native cellular micro-environment [68,69]. This approach may be used as a sophisticated means to reproduce proper fiber orientation, creating specific micro-grooved surfaces for aligning human PDL cells with high predictability [54]. Such is the case of polymeric 3D scaffolds capable of replicating the peculiar micro-patterned histological architecture [70]. In vitro, this arrangement could be maintained for prolonged periods of time in the presence of growing cell populations [25].

Synthetic Polymers and Surface Modifications of Printed Scaffolds

Among the materials suitable for 3D printing, polycaprolactone (PCL) is widely used due to its convenient rheological, mechanical, and biological features [71].
PCL scaffolds endowed with meso/microscale architectural features were also fabricated to form de novo bone-ligament-cementum complexes in vivo [53]. A 3D-printed bone region with grooved pillars, seeded with fibroblasts overexpressing bone morphogenetic protein (BMP)-7, was covered with a tooth dentin segment and subsequently positioned subcutaneously in a murine model, with a very encouraging outcome [53]. To improve cell adhesion efficiency on PCL, various surface modification treatments aimed at reducing its hydrophobic interface have been proposed, such as graphene oxide (GO), oxygen plasma, and gelatin coatings (Figure 3) [72,73]. Through plasma treatment, it is possible to vary the surface roughness of nanosized PCL scaffolds, conveniently modulating cell adhesion [74]. Moreover, through the electrospinning technique, PCL allows the preparation of nanofibrils or nanocellulose membranes, which can be utilized to encapsulate and carry drugs. For instance, membranes made of PCL encapsulated in gelatin nanocellulose were prepared, and magnesium oxide nanoparticles were incorporated inside them. This system showed high biocompatibility and hydrophilicity that promoted PDLSC proliferation [73]. Vera-Sánchez et al. studied the biocompatibility and potential of a composite coating with GO to induce differentiation of human PDLSCs [75], showing that the GO coating technology increases the hydrophilicity of the PCL surface, promoting cell adhesion. Additionally, poly(d,l-lactide-co-glycolide)/hyaluronic acid (PLGA/HA) biodegradable microcarriers were treated with GO, improving osteogenic differentiation of stem cells [76]. PCL is fundamental in allowing the printability of the scaffold, since it can be conveniently integrated with proper hydrogels functioning as cell carriers and possibly other components, such as a mineralized compartment in a multilayered construct. From an anecdotal point of view, a human case of a large periodontal bone defect treated with a 3D-printed PCL-based scaffold enriched with platelet-derived growth factors has been reported [77]. Unfortunately, the scaffold was removed after 13 months due to exposure and bacterial contamination. This unsuccessful outcome likely depended on the slow resorbability of PCL and the geometry of the construct, which was too bulky and scarcely interconnected.

Figure 3. Polymers, such as PCL and PLGA, can be used to print scaffolds functionalized to promote PDLSC adhesion and proliferation. Different strategies of functionalization can be adopted: gelatin nanocellulose can be mixed with PCL or used as an envelope to incorporate other nanoparticles; graphene oxide (GO) coating increases the hydrophilicity of the PCL surface; oxygen plasma varies the surface roughness.
Melt electrowriting (MEW), a novel technology particularly suitable for PCL [78], is expected to overcome the common limitations of 3D-printed scaffolds, such as porosity not being well matched to tissue, poor resolution, and inflexible shapes [79]. MEW enables the fabrication of micron- to nanodiameter filaments arranged in highly ordered architectures [80] within multicompartmental scaffolds [81] that may mimic the biochemical composition and/or structural organization of the hierarchical structure of the periodontium, incorporating not only the PDL but also its interfacial tissues [82]. By presenting selective regulatory cues within each compartment, multicompartmental scaffolds can guide cells to form the desired tissue types within the designed anatomical locations [83], promoting cell/tissue in-growth [84]. In 2022, the research group led by William V. Giannobile proposed the design of tricompartmental scaffolds obtained via MEW. Thereby, human PDLFs and primary osteoblasts were co-cultured, achieving "a mineral gradient from calcified to uncalcified regions with PDL-like insertions within the transition region" [85]. As the authors claim, their "process effectively recapitulates the key feature of interfacial tissues in periodontium", offering "a fundament for engineering periodontal tissue constructs with characteristic 3D microenvironments similar to native tissues".

Natural Polymers

A viable and natural alternative to PCL is collagen, the primary extracellular PDL protein, which has been employed widely as a grafting material owing to its outstanding biocompatibility [86,87]. Unfortunately, collagen alone is not easily printable because of its low viscosity and denaturation temperature [88]. To address this problem, a novel technology called freeform reversible embedding of suspended hydrogels (FRESH) was introduced, whereby collagen is deposited in a hydrogel that functions as a transient mold to be removed non-destructively afterwards [89]. FRESH allowed the printing of collagen parts of human hearts with a satisfactory resolution (20-200 µm) [90]. A good candidate combining antibacterial properties and printability is chitosan, a natural biodegradable polysaccharide already used for guided tissue regeneration [91,92]. A nanohydroxyapatite-chitosan scaffold combined with PDLSCs resulted in effective promotion of bone regeneration in a calvaria bone repair model [93]. In 2018, Varoni et al. [94] prepared a tri-layer scaffold characterized by highly oriented channels, obtained through electrochemical deposition, aimed at guiding PDL fiber growth. The other compartments were produced with medium- and low-molecular-weight chitosan for regenerating gingiva and bone, respectively.
Excellent results supported the feasibility of this resorbable tri-layered structure both in vitro, with high cell survival rates, and in vivo, achieving selective differentiation in terms of mineralized deposits. From this perspective, the development of a new chitosan-based bioink incorporating cellulose nanocrystals, although it has only been tested on murine pre-osteoblasts, is to be regarded as most interesting [95], since it could extend the application of 3D bioprinting for periodontal regeneration to a natural polymer.

Hydrogels

The above-described natural polymers, collagen and chitosan, along with fibrin, can also be used as hydrogels, being biodegradable and biocompatible and resembling the original ECM components. The ideal carrier is meant to mimic the ECM, which forms an intricate fibrillar architecture, and can also deeply affect PDLSC colonizing capabilities. Indeed, when seeded on a fibrin sponge, PDLSCs produced abundant ECM that was positively stained by Alizarin Red S [96]. The effect of a biomimetic electrospun fish-collagen/bioactive-glass/chitosan composite nanofiber membrane (Col/BG/CS) on periodontal regeneration was investigated, showing that the composite membrane promoted cell growth and osteogenic gene expression in vitro and was also effective in promoting PDL and bone formation in a canine model [55]. The combination of collagen and methacrylate has garnered growing interest owing to the suitability of the latter for 3D printing; indeed, by adding methacrylate, the collagen may crosslink via UV light in a more controlled way, in lieu of thermal crosslinking. A customized 3D cell-laden hydrogel array with a gradient of gelatin methacrylate (GelMA) and poly(ethylene glycol) (PEG) dimethacrylate compositions showed that the higher the ratio of PEG, the better the performance of the PDLSCs in cell proliferation and cell spreading on the scaffold [97]. Nonetheless, as inert ECM-based scaffolds alone may be poorly efficient at generating durable tissue repair, they are usually functionalized to release active compounds. PDLSC sheets combined with platelet-rich plasma were useful for increasing the production of ECM and enhancing cell differentiation [98]. PDLSCs engineered to overexpress platelet-derived growth factor-BB showed increased osteogenic power and were tested in a rat model to induce alveolar bone regeneration [99,100]. The presence of signaling molecules, such as connective tissue growth factor (CTGF), BMP-2, and BMP-7, promotes tissue regeneration and cementogenic differentiation [101][102][103]. These three factors have been incorporated into 3D-printed PLGA microspheres, and the results indicated that BMP-7 triggered thicker cementum-like layers, better integration with the dentin surface, and higher expression of cementum protein-1 [56]. In situ tissue engineering (iTE) has allowed the production of an iTE scaffold made with a PLGA/poly(L-lactic acid) (PLLA) shell/core structure, functionalized to allow the sequential delivery of basic fibroblast growth factor (bFGF), which promotes regeneration of the periodontium [104], and BMP-2, significantly facilitating stem cell homing, proliferation, and periodontal bone regeneration [57,105]. This iTE scaffold, implanted in a rat model of periodontal defect, demonstrated an anti-inflammatory response, provided adequate blood supply, and achieved the desired bone repair [106].
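The "sequential delivery" behavior of such a shell/core construct can be pictured with simple first-order release kinetics. The sketch below is a generic illustration only: the rate constants are placeholder values, not fitted to the iTE scaffold of [57,104,105], chosen merely so that the shell compound (bFGF) releases quickly while the core compound (BMP-2) releases slowly.

```python
import math

def first_order_release(t_h: float, k_per_h: float) -> float:
    """Cumulative fraction released at time t for first-order kinetics:
    f(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k_per_h * t_h)

# Placeholder rate constants illustrating "shell fast, core slow";
# they are NOT experimental values from the cited works.
K_SHELL_BFGF = 0.10   # 1/h -> ~90% released within ~24 h
K_CORE_BMP2  = 0.005  # 1/h -> ~50% released only after ~140 h

for t in (6, 24, 72, 168):  # hours
    print(f"t = {t:4d} h : bFGF {first_order_release(t, K_SHELL_BFGF):6.1%}, "
          f"BMP-2 {first_order_release(t, K_CORE_BMP2):6.1%}")
```

The point of the two-timescale design is visible directly in the printout: the shell factor is nearly exhausted within the first day (early cell recruitment), while the core factor keeps releasing over the following weeks (sustained osteoinduction).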
Regrettably, over the years, safety concerns have hindered the usage of bioactive molecules such as BMP-2 at the high concentrations required to be effective [107], somehow questioning the classic paradigm of tissue engineering based on cells/scaffolds/signaling cues and favoring the development of smart materials [108]. To avoid functionalization through bio-active molecules, increasing interest has been directed toward the synthesis of conductive polymers that are also printable, such as poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) [109], which can be conveniently enriched with poly(vinyl alcohol) (PVA) to form a hydrogel strain sensor [110]. Conductive hydrogels may become ideal interfaces with the human body, but they rarely simultaneously possess the satisfying electrical, mechanical, and adhesive properties shown by the Ti3C2Tx-polyacrylic acid hydrogel, which can be printed into complex geometries with high resolution [111].

Mimicking the Physical Micro-Environment of PDL

During normal oral functions, intermittent occlusal contacts accompanied by pressure from the tongue occur, and the PDL periodically undergoes different combinations of mechanical loading (i.e., compression, stretch, fluid-induced shear stress) that contribute to maintaining PDL homeostasis [112]. The mechanical response of the PDL to the experienced loading is determined by the combination of the oriented collagen fiber bundles and the distribution of the interstitial fluid, which makes the PDL act as a shock absorber, increasing the tooth's ability to withstand loading via the hydrostatic effect [113,114]. This is accompanied by a mechano-biological response of the PDL that, depending on the location under the tooth, changes structure and functions and induces remodeling in the surrounding tissues [5]. Cells detect and transduce mechanical signals from their membrane to the nucleus through a molecular process named mechanotransduction [115]. Among the most relevant cell membrane mechanosensors, integrins play a fundamental role, mediating direct contact with the ECM. As transmembrane constituents of the focal adhesions (FA), integrins interact with scaffolding, docking, and signaling proteins linked to the actin cytoskeleton [116]. The variable composition of the FA core depends on the ECM and on mechanical stimuli. Cells modulate their own cytoskeletal architecture in response to applied forces [117] and remain in a sort of tensional homeostasis, i.e., a basal equilibrium stress state [118]. From a molecular point of view, the role of two actin-binding proteins that associate with the cytoplasmic tail of the β1 integrin, talin and filamin A (FLNa) [119], is noteworthy. By interacting with integrins, talin enhances cell adhesion to the ECM [120]. Without mechanical stimuli, talin is fully structured, but under increasing force regimes, talin progressively exposes more vinculin binding sites (VBS), thus activating more vinculin proteins [121]. Vinculin may mediate reorganization of cell polarity, helping the cell to adapt to increased tensile forces [122]. FLNa competes with talin for binding to β1 integrin [123] and is thought to antagonize integrin-mediated cell adhesion [124]. For example, Shifrin et al. showed that, through the Rac/Pak/p38 signaling pathway, FLNa may prevent apoptosis in PDL in response to tensile forces [125].
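Before surveying the loading studies, a note on units: the compression experiments below quote their doses in a mix of gram-force per square centimetre, kilopascals, and megapascals. For orientation, 1 g/cm² corresponds to only about 0.098 kPa. The short helper below (our addition for comparison; the conversion factors are standard, and the example values are taken from the studies cited in the following paragraphs) puts representative doses on a single kPa scale.

```python
# Normalize the loading doses quoted below to kPa.
# 1 gram-force = 9.80665e-3 N; spread over 1 cm^2 = 1e-4 m^2 -> 98.0665 Pa.
G_PER_CM2_TO_PA = 98.0665

def to_kpa(value: float, unit: str) -> float:
    """Convert a pressure given in 'g/cm2', 'kPa', or 'MPa' to kPa."""
    factors = {"g/cm2": G_PER_CM2_TO_PA / 1000.0, "kPa": 1.0, "MPa": 1000.0}
    return value * factors[unit]

# Representative doses quoted in the reviewed studies
doses = [
    ("weight method, low end",        5.0,   "g/cm2"),
    ("weight method, high end",       35.0,  "g/cm2"),
    ("weight method, typical",        2.0,   "g/cm2"),
    ("hydrostatic, PDLSCs",           100.0, "kPa"),
    ("hydrostatic, hPDL cells",       1.0,   "MPa"),
]
for label, v, u in doses:
    print(f"{label:28s}: {v:g} {u:6s} = {to_kpa(v, u):9.3f} kPa")
```

The conversion makes one contrast explicit: the weight-method doses are in the sub-kPa range, several orders of magnitude below the 100 kPa-MPa pressures used in the hydrostatic studies.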
Due to the fundamental role played by mechanical loading in vivo, a compelling strategy for directing cell commitment in periodontal tissue engineering may be the in vitro reproduction of the dynamic environment in which PDL cells operate. Several groups investigated the sensitivity of PDL cells (i.e., cells harvested from the PDL, including not only PDLSCs but also more committed cells and even fibroblasts) to mechanical loading and their involvement in periodontal and bone remodeling in vitro. In the following paragraphs, investigations performed in the last decade on in vitro mechanically stimulated PDL cells and constructs are reported per type of force (i.e., compression, stretch, shear stress; Figure 4 and Table S1) and per adopted in vitro mechanical loading method (Figure 5), highlighting the use of conventional two-dimensional (2D) or more physiological 3D cell culture techniques.

Figure 5. In vitro mechanical loading methods applied for PDL investigations. Schematic representation of the in vitro mechanical loading methods adopted in the reviewed studies for exposing PDL cells cultured in 2D layers or in 3D constructs to compression, stretch, and shear stress stimuli.

Weight Method

This method, based on cover glasses or cylinders containing metal granules and allowing the application of tunable static compressive forces to the culture, has been widely adopted to investigate in vitro how continuous compression influences PDL cells. In 2011, Li et al. established a 3D model of PDL tissue based on hPDL cells seeded on a sheet of porous PLGA scaffold and exposed it to static compression (5-35 g/cm² for 6-72 h), observing a significant induction of osteoclastogenic genes that did not occur when human gingival fibroblasts were used [126]. Moreover, a predominant upregulation of osteoclastogenesis inducers was observed at the early stage (6 h), while osteoclastogenesis inhibitor genes increased at the late stage (24-72 h), although cell proliferation was reduced [127]. In 2013, a comparison between hPDL cells cultured under compressive forces (2.0 g/cm² for 2 or 48 h) in conventional 2D culture dishes or in 3D collagen gel highlighted significant alterations of the expression levels of several genes [128]. In particular, the number of activated integrin-focal adhesion kinase (FAK) complexes was higher in 3D than in 2D culture, supporting that cellular attachment to the ECM can strongly influence cellular responses to mechanical forces [128]. In 2016, it was demonstrated that a compressive force (1 g/cm² for 24 h) applied to hPDLSCs altered cell morphology and repressed collagen expression, both of which recovered after force withdrawal [129]. Recently, continuous compression (0-1.5 g/cm² for 12 h) on PDLSCs was shown to reduce differentiation ability and increase macrophage migration, osteoclast differentiation, and proinflammatory factor expression.
Moreover, a universal upregulation of the transient receptor potential calcium channel subfamily V member 4 (TRPV4) was shown, which regulated osteoclast differentiation by affecting the receptor activator of nuclear factor kappa-B ligand (RANKL)/osteoprotegerin (OPG) system via extracellular signal-regulated kinase (ERK) signaling (Figure 6a) [130]. Brockhaus et al. reported that hPDLFs cultured under compression (2 g/cm² for 24, 48, and 72 h) changed their morphology towards more unstructured, unsorted actin filaments, with a significant reduction of proliferation followed by recovery after 48 h, demonstrating that hPDLFs restore homeostasis and adapt to the compressive force through a lower cell division rate and a slowed cell cycle [131]. Moreover, Stemmler et al. reported that the inflammatory response of hPDLFs caused by periodontal pathogens combined with compressive load (2 g/cm² for 6 h) was supported by growth differentiation factor 15 (GDF15), which modulated the inflammatory response of PDLFs, also regulating the levels of the key inflammatory molecule TNFα [132]. Recently, Jiang and colleagues showed a novel cellular mechanism: continuous compressive force (0.5-2.5 g/cm², 12 h) activated autophagy in hPDLSCs, which induced M1 macrophage polarization via inhibition of the AKT signaling pathway, contributing to force-induced bone remodeling and tooth movement [133].

Hydrostatic Pressure Method

The hydrostatic pressure method exploits air pressure applied on the culture medium to impose static or fluctuating compressive forces. When a static compressive stress was exerted on PDLSCs (100 kPa for 1, 6, or 12 h) [134] and on hPDL cells (1 or 6 MPa for 10 or 60 min) [135], the expression of genes regulating osteoclastogenesis and osteoblastogenesis was induced. To mimic the physiological loading during mastication, cyclic hydrostatic pressure was applied (1 MPa, 0.1 Hz, 3 h/day for 2 days) on hPDLFs, showing that the expression of several integrins, collagens, and metalloproteinases was significantly upregulated [136]. Recently, the inflammatory, osteogenic, and pro-osteoclastic effects of different cyclic compressive loading conditions (50-150 kPa, 0.1 Hz, 1 h/day for 5 days) were investigated by stimulating hPDL cells in an inflammatory environment using a customized bioreactor [137]. According to the level of cyclic pressure, cells released different levels of inflammatory and pro-osteoclastic factors, modulating the downregulation (with 150 kPa) or the upregulation (with 90 kPa) of osteogenic genes (alkaline phosphatase (ALP), collagen type I (COLL-1), RUNX2, OCN, osteopontin (OPN), and osterix (OSX)) [138].

Substrate Deformation Methods

Compression can also be achieved through several commercial and customized devices based on substrate deformation, in which an elastic membrane is deformed by force and the cells/constructs cultured on it are exposed to strain. In 2014, a 3D construct composed of hPDL cells seeded into a matrix of hyaluronan, gelatin, and COLL-1 was exposed to cyclic compression (340.6 g/cm² for 1 s every 60 s for 6, 12, and 24 h) using the commercial Flexercell FX-4000C Strain Unit (Flexcell International Corporation, Hillsborough, NC, USA). Compression increased cell death and the expression of several apoptosis-related genes.
ECM genes were mostly upregulated after 6-12 h but all were downregulated at 24 h, except for the three major ECM-degrading enzymes (MMP-1-3) and connective tissue growth factor (CTGF), with upregulated matrix metalloproteinase-1 (MMP-1) and tissue inhibitor of metalloproteinases-1 (TIMP-1) protein levels, while no changes were observed in RANKL, OPG, and basic fibroblast growth factor (FGF-2) expression [139]. Using the same device, Nettelhoff et al. exposed hPDLFs to compressive force (5 and 10% for 12 h). The 5% compression induced the highest ALP gene expression and the highest RANKL/OPG ratio, while 10% compression decreased cell viability without promoting apoptosis, but resulted in tissue damage [140]. Thus, short-term static compression (for about 1 h) can promote osteogenic differentiation of PDLSCs, while long-term static compression (for 12 h or longer) can alter the morphology of hPDL cells, may inhibit cell proliferation and the osteogenic differentiation of PDLSCs, and can promote the secretion of osteoclastogenesis-stimulating cytokines and ECM degradation.

Substrate Deformation-Vacuum Approach

The vacuum approach stretches the cell-seeded membrane across a loading post by applying vacuum pressure, delivering a tunable biaxial or uniaxial tensile strain. This method is commonly applied using commercial devices (e.g., the Flexercell tension system; Flexcell International Corporation, Hillsborough, NC, USA). In 2013, Saminathan et al. embedded hPDLFs in an 80-100 µm thick 3D collagen membrane and cultured the construct under equibiaxial cyclic stretching (12%, 0.2 Hz, 5 s every 60 s, for 6 h/day up to 21 days) to investigate the influence on ECM homeostasis. Mechanical loading did not affect the cell number, but it significantly upregulated the release of MMP-1 and TIMP-1 in the supernatants, suggesting that fibroblasts were remodeling the surrounding ECM [141]. This was confirmed by Chen et al., who exposed hPDLCs to equibiaxial cyclic stretching (12%, 0.1 Hz for 24 h), showing the upregulation of major periodontal ECM genes, such as COL1A1, COL3A1, and COL5A1 [142]. Several studies reported that cyclic stretching enhanced the osteogenic differentiation of hPDLSCs cultured in osteoinductive medium [143,144], whether maintaining the same stimulation parameters while reducing the stimulation time [145] or keeping the same strain magnitude but halving the frequency [146,147]. In particular, Xi et al. demonstrated that cyclic stretching (10%, 0.5 Hz, for up to 36 h) could increase the generation of reactive oxygen species (ROS), which may lead to the osteogenic differentiation of hPDLSCs (Figure 6b) [146]. In 2017, Liu et al., culturing healthy or pathological donor-derived PDLSCs under static strain (6-14% for 12 h), showed that PDLSCs from patients affected by periodontitis were more sensitive to physical load than PDLSCs from healthy patients, likely due to the inflammatory milieu [148]. Recently, Salim et al. cultured human PDL cells under static strain (2.5, 5, and 10% for 24 h) and performed in vivo analyses on teeth with and without orthodontic tooth movement (OTM) [149]. Interestingly, they found that chaperone-assisted selective autophagy (CASA) machinery genes (the chaperones HSPA8 and HSPB8, the cochaperones BAG3 and STUB1, and the molecule SYNPO2, which interacts with BAG3 for autophagosome membrane formation) were inherently expressed in PDL cells and exhibited transcriptional induction upon in vitro mechanical strain and in vivo after OTM. The role of FLNa was also investigated, pointing out that it acts as a flexible actin crosslinker that is stretched under tension and degraded by CASA when damaged, which is consistent with previous works [150,151], further supporting the importance of the dynamic environment as a key factor in the homeostatic maintenance of the PDL in both physiologic and treatment conditions [152].
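As a bookkeeping aid for the intermittent protocols above, the sketch below tabulates the cumulative loading implied by the parameters quoted for Saminathan et al. (12%, 0.2 Hz, 5 s of stretch in every 60 s window, 6 h/day, up to 21 days). The arithmetic is our addition; the parameters are the ones stated in the text.

```python
# Cumulative loading implied by an intermittent cyclic-stretch protocol
# (parameters as quoted above for Saminathan et al. [141]).

FREQ_HZ       = 0.2   # cycle frequency during each active burst
BURST_S       = 5.0   # seconds of stretching per window
WINDOW_S      = 60.0  # window length
HOURS_PER_DAY = 6.0
DAYS          = 21

windows_per_day = HOURS_PER_DAY * 3600.0 / WINDOW_S            # 360 windows/day
strained_s_day  = windows_per_day * BURST_S                    # 1800 s/day under strain
cycles_total    = windows_per_day * BURST_S * FREQ_HZ * DAYS   # cycles over 21 days

print(f"duty cycle            : {BURST_S / WINDOW_S:.1%}")       # 8.3%
print(f"time under strain/day : {strained_s_day / 3600:.2f} h")  # 0.50 h
print(f"total cycles (21 d)   : {cycles_total:.0f}")             # 7560
```

Seen this way, a seemingly aggressive 21-day protocol corresponds to only half an hour of actual strain per day, which helps when comparing it against the continuous 24-36 h stimulations used elsewhere in this section.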
Substrate Deformation-Pulling Approach

The substrate-pulling approach is based on a system that clamps the cell-seeded membrane and imposes uniaxial stretch via a controlled actuator. Adopting the commercial STB-140 STREX cell stretch system (Strex Co., Osaka, Japan), it was reported that a long-term cyclic stimulation (5%, 60 s/return, resting time = 29 s, for 7 days) could increase collagen mRNA and protein expression, suggesting that cyclic stretch on hPDLFs may contribute to the homeostasis of PDL fibers and to ECM remodeling [153]. In 2012, a 3D construct based on a collagen film laden with rat PDLFs was exposed to uniaxial cyclic stretch (8%, 1 Hz (15 min stretch + 15 min rest) for 8 h/day for 5 days) using a customized device. After mechanical stimulation, the cells were oriented perpendicularly to the stimulation direction and, based on the expression of several genes (COLL-1, RUNX2, c-fos, and Cox-2), the authors concluded that PDL cells under loading might tend to exhibit bone-like and, at the same time, tendon-like behavior [154]. When the same stimulation was applied for a shorter period (16 h), either cyclically or statically, the same cellular orientation was reached, and three different pathways (ERK, p38, and JNK) were activated [155]. In 2021, Yu et al. demonstrated that exposing an hPDLSC-laden 3D collagen membrane to uniaxial stretching (20% for 5 days) dramatically enhanced the bioactivity of PDLSC-derived exosomes [156].

Substrate Deformation-Inflation and Bending Approaches

Studies based on the substrate inflation approach were inspired by Howard et al. [157]. A cell-seeded membrane is clamped and deflected by hydrostatic pressure applied to the underside, providing uniform biaxial stretch. In 2012, Xu and colleagues showed that hPDL cells exposed to cyclic stretching (1-20%, 0.1 Hz, up to 24 h) appeared aligned perpendicularly to the stretching direction, and the expression of the membrane connexin 43 (Cx43) protein could be modulated in a time- and magnitude-dependent manner via cyclic stretching [158]. By adopting the substrate-bending approach, the osteogenic differentiation of hPDLSCs cultured under tensile stress (3000 µstrain, 0.5 Hz) was reached after 24 h of stimulation, demonstrating upregulation of different osteogenic markers [159]. In summary, the aforementioned studies demonstrated that tensile stress can promote osteogenic differentiation. In particular, cyclic stretch can upregulate the protein and mRNA expression of osteogenic genes and the synthesis of osteoclastogenesis-inhibitory molecules, complemented by an increased expression of the major periodontal ECM genes, leading to the homeostasis and organization of the PDL fibers and to ECM remodeling.

Shear Stress

The effect of shear stress on PDL cells has been poorly investigated in the literature, even though this stimulus is an important cue in the physiological environment. The most adopted method for investigating the effect of shear stress on PDL behavior is based on tangential fluid flow provided by parallel-plate flow chambers.
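For a parallel-plate flow chamber, the wall shear stress obeys the standard lubrication relation τ = 6μQ/(wh²), where μ is the medium viscosity, Q the volumetric flow rate, and w and h the channel width and height. The sketch below inverts this relation to estimate the flow rate needed for the 6 and 12 dyn/cm² set-points used in the studies that follow; the viscosity and chamber dimensions are illustrative assumptions, not values from the cited works.

```python
# Flow rate required to impose a target wall shear stress in a
# parallel-plate flow chamber: tau = 6*mu*Q / (w*h^2)  =>  Q = tau*w*h^2 / (6*mu)

MU = 7.5e-4   # Pa*s, approximate viscosity of culture medium at 37 C (assumption)
W  = 10e-3    # m, channel width  (placeholder geometry)
H  = 0.25e-3  # m, channel height (placeholder geometry)

DYN_PER_CM2_TO_PA = 0.1  # 1 dyn/cm^2 = 0.1 Pa

for tau_dyn in (6.0, 12.0):
    tau = tau_dyn * DYN_PER_CM2_TO_PA       # target wall shear stress, Pa
    q = tau * W * H**2 / (6.0 * MU)         # required flow rate, m^3/s
    print(f"{tau_dyn:4.1f} dyn/cm^2 -> Q = {q * 6e7:.2f} mL/min")
```

With this (assumed) geometry, the 6 and 12 dyn/cm² conditions correspond to roughly 5 and 10 mL/min; the strong h² dependence is why chamber height tolerances dominate the accuracy of the imposed shear.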
In 2014, Tang and colleagues cultured hPDL cells in osteogenic medium under steady fluid shear stress (12 dyn/cm² for 2 h), showing an early morphologic change and rearrangement of filamentous actin with significant increases in ALP activity, mRNA levels of osteogenic genes, and osteoid nodules [160]. Similarly, Zheng et al. exposed hPDL cells to fluid shear stress (6 dyn/cm² for up to 12 h), reporting a rearranged cell alignment, inhibited cell proliferation and migration, and osteogenic differentiation [161]. Very recently, Shi et al. observed that fluid shear stress (6 dyn/cm² for 4 h) promoted cell proliferation by activating mechanotransduction pathways involving the p38 mitogen-activated protein kinases, angiomotin (AMOT), and Yes-associated protein (YAP) (Figure 6c) [162]. In the sliding-plate method, the 3D construct is housed between two parallel plates, one of which is connected to an actuator that imposes a controlled sliding motion and, consequently, shear stress on the construct. With this method, a static shear stress was applied to a construct composed of hPDL cells embedded in a collagen gel. After 24 h of stimulation, the cells and collagen fibers aligned in the direction of the principal strain vector [163]. Recently, a model of PDL regeneration based on a fiber-guiding scaffold seeded with PDL cells and subjected to shear stress in a laminar flow-based bioreactor (6 dyn/cm² for 1-4 h) showed increased viability, adhesion, and cytoskeleton arrangement compared to cells in the absence of load [164].

Figure 6. Mechanotransduction pathways in mechanically loaded PDL cells: (a) compression-induced TRPV4 regulates osteoclast differentiation through the RANKL/OPG system via ERK signaling [130]; (b) external cyclic stretch can promote the osteogenic differentiation of hPDLSCs by activating Nrf2 [146]; (c) shear stress applied to hPDL cells can activate p38, which regulates the nuclear translocation of YAP, promoting cell proliferation [162].

Overall, the above-mentioned findings highlight how shear stress can represent an effective approach for guiding cell and fiber alignment and for promoting osteogenic differentiation.

Future Perspectives

Exciting advances in engineering and cell biology have begun to address the numerous obstacles hampering the implementation of effective protocols for periodontal regeneration. Despite the flurry of research regarding osteogenic differentiation, however, only a few papers deal with tenogenic differentiation, which is essential to preserve the PDL tissue within the mineralized interfaces of cementum and bone. This lack of knowledge could delay the goal of making cost-effective, patient-specific treatment options available in the future. To this end, it is necessary to assess whether and how mechanical stimulation protocols affect the tenogenic differentiation of PDLSCs, as it is conceivable that various types of physical stimulation, with different application times, are required to achieve the combined modulation of osteogenic and tenogenic activities that allows PDL regeneration. Therefore, in-depth in vitro investigations combined with high-throughput analyses are increasingly necessary to unravel the molecular and cellular mechanisms activated by different spatial and physical culture conditions.
Finally, from a long-term perspective, developing reliable protocols for periodontal regeneration requires the solution of the following challenges: (1) the definition of the most suitable mechanical stimuli and of the optimal stimulation parameters for promoting tissue regeneration; (2) assessing how the biomechanical load affects the tissues to be regenerated overall; (3) handling the construct at the wound site, avoiding microbial infection/reinfection.

Conclusions

Despite the remarkable endeavors made in recent years to identify new regenerative treatments for periodontitis, periodontal regeneration remains a tricky challenge. GTR was proven to be as effective as open flap debridement [165] and seemingly more efficient in regenerating bone than the PDL, which is the key for a true restitutio ad integrum of the periodontium. Cell sheet engineering is technically demanding owing to difficulties in handling and stabilizing the construct, which has greatly limited its clinical application [81]. Several pre-clinical studies [166] have proven the effectiveness of PDLSC-based therapies in regenerating the PDL, but the clinical feasibility of this therapeutic approach is far from being achieved. Indeed, well-designed randomized controlled trials (RCTs) are mandatory for assessing the long-term success of innovative procedures, but they must abide by stringent safety and regulatory requirements, delaying the implementation of protocols based on PDLSC transplantation in humans. To the authors' knowledge, only one limited RCT has been retrieved in the scientific literature so far (2022). It reported the clinical safety of the treatment with PDLSCs but with inconclusive evidence regarding PDLSC-based periodontal regeneration [38]. In order to create PDL-cementum complexes, it becomes paramount to tailor the cell environment (mimicking the ECM) by selecting the best scaffold capable of supporting PDLSC growth and differentiation in humans, which implies the adoption of the most up-to-date 3D printing techniques. In parallel, the biomimetic approach demands the use of technological devices for supporting the in vitro maturation of the tissue to be grafted, particularly bioreactors providing controlled physical stimuli to reproduce the native environment [167][168][169]. Indeed, mechanical stimulation can be essential for promoting specific cellular and tissue processes, with the final aim of inducing the correct orientation of the fibers forming the PDL and of maintaining tissue homeostasis. In particular (Figure 4): (i) moderate compressive forces promote active tissue remodeling, while long-term static compression can alter the morphology of hPDL cells, inhibiting the proliferation and the osteogenic differentiation of PDLSCs while inducing osteoclastogenesis and ECM degradation; (ii) tensile stress enhances the osteogenic differentiation of PDLSCs, and cyclic stretching can contribute to the homeostasis and organization of the PDL fibers and to ECM remodeling; (iii) shear stress promotes osteogenic differentiation and guides cell and PDL fiber alignment. Since the PDL is a complex multilayer tissue that is physiologically subjected to all these types of forces, a biomimetic in vitro model should both mimic the PDL microstructure and provide, in a controlled manner, the actual combined mechanical loading experienced in vivo.
Such an advanced model would allow the mechano-biological behavior of the PDL to be comprehensively unveiled and the best culture protocols for obtaining reliable results in terms of PDL regeneration to be defined. Finally, synergic collaborations between biologists, biotechnologists, biomaterials scientists, bioengineers, and dentists are urgently needed to develop advanced solutions for more reproducible, standardized, and biomimetic studies that could lead, in the near future, to the successful production of functionally engineered PDL for clinical applications.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12213878/s1, Table S1: List of the reviewed in vitro PDL mechanical loading studies.

Funding: This research was funded by Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR), program "Dipartimenti di Eccellenza" ex L.232/2016 of the Dept. of Surgical Sciences, University of Turin. MIUR had no part whatsoever in conducting the research, in the preparation of the article, or in the decision to submit the paper for publication. The APC was funded by a grant of the University of Turin. This work was also supported by CRT Foundations (CRT2020) and Fondazione Ricerca Molinette ONLUS.

Conflicts of Interest: The authors declare no conflict of interest.
Hydrogen-bond memory and water-skin supersolidity resolving the Mpemba paradox

The Mpemba paradox, that is, hotter water freezes faster than colder water, has baffled thinkers like Francis Bacon, René Descartes, and Aristotle since 350 B.C. However, a commonly accepted understanding or theoretical reproduction of this effect remains challenging. Numerical reproduction of observations, shown herewith, confirms that water-skin supersolidity [Zhang et al., Phys. Chem. Chem. Phys., DOI: 10.1039/C4CP02516D] enhances the local thermal diffusivity, favoring heat flowing outwardly along the liquid path. Analysis of the experimental database reveals that the hydrogen bond (O:H-O) possesses memory, emitting energy at a rate that depends on its initial storage. Unlike usual materials, which lengthen and soften all bonds when they absorb thermal energy, water behaves abnormally under heating, lengthening the O:H nonbond and shortening the H-O covalent bond through inter-oxygen Coulomb coupling [Sun et al., J. Phys. Chem. Lett., 2013, 4, 3238]. Cooling does the opposite to release energy, like releasing a coupled pair of bungees, at a history-dependent rate. Being sensitive to the source volume, skin radiation, and the drain temperature, the Mpemba effect proceeds only in a strictly non-adiabatic "source-path-drain" cycling system for the heat "emission-conduction-dissipation" dynamics, with a relaxation time that drops exponentially with the rise of the initial temperature of the liquid source.

Introduction

Proposed factors explaining this effect include evaporation [6], frosting [7], solutes [8], supercooling [7,9], thermal convection [10,11], etc. According to the winner [12] of a competition held in 2012 by the Royal Society of Chemistry, thermal convection rationalizes the energy "emission-conduction-dissipation" dynamics in the "source-path-drain" system in which the Mpemba paradox takes place. However, little attention has yet been paid to the intrinsic nature and the relaxation dynamics of the hydrogen bond (O:H-O) [13], the primary component of the liquid source for heat emission and of the liquid path for heat conduction. In this communication, we show quantitatively that the O:H-O bond memory and the water-skin supersolidity [14,15] resolve this paradox, with reproduction of the observed attributes [2,12].

Numerical solution: water-skin supersolidity

Fourier thermal-fluid equation

We first conducted a numerical calculation by introducing the skin supersolidity [14,15] into the path of heat conduction. Molecular under-coordination shortens and stiffens the H-O bond and meanwhile lengthens and softens the O:H nonbond through Coulomb repulsion between electron pairs on adjacent oxygen ions. This process turns the skin of water and ice into a supersolid phase that is elastic, polarized, thermally stable, highly tensile, hydrophobic, and self-lubricant [14,16]. A mass density of 0.75 g cm^-3, a high-frequency phonon of 3450 cm^-1, an O 1s binding energy of 538.1 eV, and a melting point of 315 K, compared with the bulk values listed in Table 1, characterize the skin supersolidity.

The Fourier equation [17] with appropriate initial and boundary conditions best describes the process of thermal-fluid transport in liquid water, but the skin supersolidity is necessary. In order to examine all possible factors contributing to the Mpemba effect, we solved this initial-and-boundary-condition problem using the finite element method.
Fig. 1 illustrates the adiabatically walled, open-ended, one-dimensional tube cell containing water at the initial temperature θ_i. We divide the tube cell into the bulk (B, from -l1 = -9 mm to 0) and the skin (S, from 0 to l2 = 1 mm) regions along the x-axis and cool it in a drain of constant temperature θ_f. The θ_f is subject to variation.

The rate of temperature change at any point x of the partitioned tube cell follows, for simplicity, a step function across the interface, together with the initial and boundary conditions. The governing equation is

∂θ/∂t = α ∂²θ/∂x² - v ∂θ/∂x,

where the first term describes thermal diffusion and the second thermal convection, with α being the thermal diffusivity and v the convection rate. Using a slope function at the interface would complicate the calculation without changing the physical meaning. The known temperature dependence of the thermal conductivity k(θ), the mass density ρ(θ), and the specific heat at constant pressure C_p(θ), given in Fig. 8 in the Appendix, determine the thermal diffusivity of bulk water, α_B. The skin supersolidity [14] contributes to the skin value in the form α_S(θ) ≈ (4/3)α_B(θ), because the skin mass density of 0.75 g cm^-3 is 3/4 of the standard value at 4 °C. The α_S(θ) is subject to optimization, as the skin supersolidity may also modify the k(θ)/C_p(θ) value in a yet unknown way. The boundary conditions state that, at t > 0, both the temperature θ and its gradient θ_x = ∂θ/∂x are continuous at the skin-bulk interface (x = 0), and that the thermal flux h(θ_f - θ) is conserved at both ends of the tube. The velocity field of heat convection takes the bulk value v_S = v_B = 10^-4 or 0 m s^-1 for examination. As the heat transfer (radiation) coefficient h_j depends linearly on the thermal conductivity k in the respective region [18], we took the standard value h_1/k_B = h_2/k_S = 30 W m^-2 K^-1 [19] in solving the problem. The h_2/k_S term contains the boundary heat reflection, which is also negligible. An h_2/h_1 ratio > 1 describes the possible effect of thermal radiation from the skin.

Examination of the thermal convection and diffusivity

The computer reads in the digitized ρ(θ), k(θ), and C_p(θ) of Fig. 8 to compose α_B(θ) before each iteration over the partitioned elemental cells. Besides the thermal diffusivity and the convection velocity field in the Fourier equation, we examined all possible parameters in the initial and boundary conditions. The results in Fig. 2 and 3 revealed the following:
(1) Characterized by the crossing temperature of the relaxation θ(θ_i, t) curves, the Mpemba effect occurs only in the presence of the skin supersolidity (α_S/α_B > 1), disregarding the thermal convection.
(2) Complementing the skin supersolidity, thermal convection raises only slightly the skin-bulk temperature difference Δθ and the crossing temperature.
(3) The Mpemba effect is sensitive to the source volume, the α_S/α_B ratio, the radiation h_2, and the drain temperature θ_f.
(4) The bulk/skin thickness ratio (l1 : l2) and the thermal convection velocity have little effect on the observations.

For instance, increasing the liquid volume may annihilate the Mpemba effect because of the non-adiabatic process of heat dissipation. It is understandable that cooling one drop of 1 mL of water takes less time than cooling one cup of 200 mL of water from the same θ_i under the same conditions. Higher skin radiation, h_2/h_1 > 1, promotes the Mpemba effect. The conditions for the Mpemba effect are therefore indeed very critical, which explains why the Mpemba effect occurs infrequently.
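To make the numerical setup above concrete, here is a deliberately minimal finite-difference sketch of the same advection-diffusion problem; the paper used a finite-element solver with digitized, temperature-dependent material properties, so this is an illustration, not a reproduction. The grid, time step, convection velocity and lumped boundary heat-loss rate are all placeholder values.

import numpy as np

# 1D advection-diffusion:  dtheta/dt = alpha(x) d2theta/dx2 - v dtheta/dx,
# with a "skin" region of boosted diffusivity, alpha_S = 1.48 * alpha_B
# (the optimized ratio quoted with Fig. 2), and crude Newton-type heat loss
# toward the drain at the two open ends.
L1, L2 = 9e-3, 1e-3                 # bulk and skin lengths (m)
N = 200
x = np.linspace(-L1, L2, N)
dx = x[1] - x[0]

alpha_B = 1.4e-7                    # bulk thermal diffusivity of water (m^2/s)
alpha = np.where(x < 0, alpha_B, 1.48 * alpha_B)   # supersolid skin region
v = 1e-4                            # convection velocity (m/s)
h = 0.05                            # lumped boundary heat-loss rate (1/s), assumed
theta_f = 0.0                       # drain temperature (deg C)

def cool(theta_i, t_end, dt=1e-3):
    """Return the skin-end temperature history for initial temperature theta_i."""
    theta = np.full(N, float(theta_i))
    n_steps = int(t_end / dt)
    hist = np.empty(n_steps)
    for k in range(n_steps):
        lap = np.zeros(N)
        lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
        grad = np.zeros(N)
        grad[1:-1] = (theta[2:] - theta[:-2]) / (2 * dx)
        theta += dt * (alpha * lap - v * grad)
        # Crude boundary treatment: end nodes relax toward the drain at rate h
        # instead of a proper Robin flux condition.
        theta[0]  += dt * h * (theta_f - theta[0])
        theta[-1] += dt * h * (theta_f - theta[-1])
        hist[k] = theta[-1]
    return hist

hot  = cool(theta_i=75.0, t_end=60.0)
cold = cool(theta_i=35.0, t_end=60.0)
# With constant coefficients the problem is linear in theta, so the hot curve
# can never cross the cold one; the crossing reported in the paper enters
# through the temperature-dependent alpha_B(theta) = k/(rho*Cp) of its Fig. 8,
# which this sketch deliberately omits.
print(hot[-1], cold[-1])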
Reproduction of the Mpemba attributes

Fig. 4 shows the numerical reproduction of the observed Mpemba attributes (insets) [2,12], which confirmed the following:
(1) Hotter water freezes faster than colder water does under the same conditions.
(2) The liquid temperature θ drops exponentially with cooling time t in transiting water into ice, with a relaxation time τ that drops as θ_i is increased.
(3) The water skin is warmer than sites inside the liquid, and the skin of hotter water is even warmer throughout the course of cooling.

θ_i dependence of the H-O bond linear velocity

The following formulates the decay curve θ(θ_i, t) shown in Fig. 4a [12]:

dθ = -(1/τ_i) θ dt    (decay function) (2)

The θ_i-dependent relaxation time τ_i is the sum of τ_ji over all possible jth processes of heat loss during cooling. Excitingly, the documented experimental profiles of θ(θ_i, t) [12] (Fig. 4a) and d_H(θ) [16] (Fig. 5a) allow us to show directly the memory of the O:H-O bond without needing any assumption or approximation. The θ(θ_i, t) curve provides the slope dθ/dt = -(1/τ_i)θ, and the curve d_H(θ) = 1.0042 - 2.7912 × 10^-5 exp[(θ + 273)/57.2887] (Å) [16] formulates the measured θ dependence of the H-O bond relaxation. Multiplying the slopes of both curves immediately yields the d_H linear velocity under cooling.

The O:H nonbond is correlated with the H-O bond in relaxation by the equation in Table 1. As E_x = k_x(Δd_x)²/2 approximates the energy stored in the respective bond, with k_x being the force constant, one can readily obtain the velocities of d_x and E_x (x = L and H denote the O:H nonbond and the H-O bond, respectively). For simplicity and conciseness, we focus on the instantaneous velocity of d_H during relaxation:

v_H = dd_H/dt = (dd_H/dθ)(dθ/dt).

Fig. 5b plots the θ_i dependence of the d_H linear velocity, which confirms that the O:H-O bond indeed possesses memory. Although passing through the same temperature on the way to freezing, the initially shorter H-O bond starting at a higher temperature remains highly active compared with the initially longer ones starting at lower temperatures when they meet on the way to freezing.

θ_i dependence of the relaxation time

Solving the decay function (2) yields the relaxation time τ_i(t_i, θ_i, θ_f). An offset of θ_f (= 0 °C) and θ_i by a constant b_i is necessary to ensure θ_f + b_i ≥ 0 in the solution (b_i = 5 was taken, with reference to the fitting in Fig. 4a):

τ_i = t_i / ln[(θ_i + b_i)/(θ_f + b_i)].

With the measured t_i, θ_i, and θ_f given in Fig. 6a (scattered data) as input, one can find the respective τ_i, shown as the solid line. According to the fitting, τ_i drops exponentially with the increase of θ_i, or with the increase of the initial energy storage or vibration frequency, both of which are experimental results [20], as shown in Fig. 6b.
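A few lines of code make these two relations concrete. The sketch below evaluates the quoted d_H(θ) parametrization and the offset-exponential relaxation time; the numerical inputs (θ_i, θ_f, freezing time) are illustrative only, and the τ_i expression is the reconstruction written above, not a formula copied verbatim from the paper.

import numpy as np

def d_H(theta_C):
    """H-O bond length (Angstrom) vs temperature (deg C), as quoted with Fig. 5."""
    return 1.0042 - 2.7912e-5 * np.exp((theta_C + 273.0) / 57.2887)

def ddH_dtheta(theta_C):
    """Analytic slope of d_H(theta), in Angstrom per K."""
    return -2.7912e-5 / 57.2887 * np.exp((theta_C + 273.0) / 57.2887)

def tau_from_decay(t, theta_i, theta_f, b=5.0):
    """Relaxation time from the offset-exponential solution,
    tau = t / ln[(theta_i + b)/(theta_f + b)], with b_i = 5 as in the fitting."""
    return t / np.log((theta_i + b) / (theta_f + b))

theta_i, theta_f, t_freeze = 75.0, 0.0, 300.0    # illustrative numbers
tau = tau_from_decay(t_freeze, theta_i, theta_f)
dtheta_dt = -theta_i / tau                       # initial cooling rate (K/s)
v_H = ddH_dtheta(theta_i) * dtheta_dt            # d_H "linear velocity" (A/s)
print(f"tau ~ {tau:.1f} s, initial d_H velocity ~ {v_H:.2e} A/s")

Note the signs: d_H(θ) decreases with temperature, so on cooling (dθ/dt < 0) the product is positive, i.e. the H-O bond lengthens as energy is emitted, consistent with the cooperative relaxation described in the text.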
Liquid source and path: heat emission and conduction

The O:H-O bond approximates a pair of asymmetric, coupled, H-bridged oscillators with short-range interactions and memory [21]. Fig. 7 illustrates the interactions and the cooperative relaxation of the O:H-O bond in water under thermal excitation cycling.

Source-drain interface: non-adiabatic cycling

It is necessary to emphasize that the Mpemba effect occurs only under the circumstance that the temperature drops abruptly from θ_i to θ_f at the source-drain interface. The Fourier solution indicates that the Mpemba crossing temperature is sensitive to the volume of the liquid source (Fig. 3a). Too large a liquid volume may prevent this effect by hindering heat dissipation. As confirmed by Brownridge [7], any spatial temperature decay between the source and the drain could prevent the Mpemba effect from occurring. Procedures that introduce such decay include sealing the tube ends, covering with an oil film, source-drain vacuum isolation, connecting muffin-tin-like containers, or putting multiple sources into the limited volume of a fridge. Conducting experiments under identical conditions is necessary to minimize artifacts arising from radiation, the source/drain volume ratio, the exposed area, the container material, etc.

Other factors: supercooling, solutes, and evaporation

Supercooling is associated with the slower relaxation of the longer H-O bond at an initially lower temperature. It has been confirmed that E_H determines the critical temperature for phase transition [22]. Generally, superheating is associated with the shorter H-O bond pertaining to water molecules with fewer than four neighbors, such as those forming the skin, a monolayer film, or a droplet on a hydrophobic surface [25]. Supercooling is associated with the longer H-O bond between molecules in contact with a hydrophilic surface [26] or under compression [22]. A 210 MPa compression lowers the melting point to -22 °C according to the phase diagram [15]. The supercooling of the colder water in the Mpemba process [7] evidences that the initially longer H-O bond of colder water relaxes more slowly at icing than those in the warmer water, because of the smaller momentum of relaxation (the memory effect).

The involvement of ionic solutes or impurities [27,28] mediates the Coulomb coupling because of the alteration of charge quantities and ionic volumes [29,30]. Salting shares the same effect as heating on the H-O phonon blue shift [31,32], which is expected to enhance the velocity of heat ejection under cooling. Mass loss due to evaporation of the liquid source [3] affects the O:H-O relaxation little, as the amount of evaporation is negligible under cooling. We have confirmed that the mass loss is only 1.5% or lower in repeated experiments freezing 75 °C water to -40 °C ice.

Conclusion

Reproduction of observations revealed the following pertaining to the Mpemba paradox:
(1) The O:H-O bond possesses memory, whose thermal relaxation intrinsically defines the rate of energy emission. Heating stores energy in water by O:H-O bond deformation. The H-O bond is shorter and stiffer in hotter water than in colder water. Cooling does the opposite, emitting energy with a thermal momentum that is history dependent.
(2) Heating enhances the skin supersolidity and the skin thermal diffusivity by α_S/α_B ≥ ρ_B/ρ_S = 4/3. Convection alone produces no Mpemba effect but only raises the skin temperature slightly.
(3) A highly non-adiabatic ambient system is necessary to ensure immediate energy dissipation at the source-drain interface. The Mpemba crossing temperature is sensitive not only to the volume of the liquid source but also to the drain temperature and to the radiation rate.
(4) The Mpemba effect takes place with a characteristic relaxation time that drops exponentially with the increase of the initial temperature, or the initial energy storage, of the liquid.
(5) The O:H-O bond memory may be implicated in living cells, in which hydrogen bond relaxation dominates signaling, messaging, and damage recovery.

Appendix

Table 1. Summary of the skin supersolidity and the O:H-O bond-electron-phonon attributes under various conditions. Quantities are derived from measurements (indicated with references) using the equations of ref. 16, which relate the lengths of the O:H nonbond and the H-O bond to the measured mass density under an applied stimulus, such as molecular under-coordination [14] or heating [34], with Y being the elastic modulus, proportional to the energy density E d^-3 [29].

Figure captions

Fig. 1. Water in the adiabatically walled, open-ended, one-dimensional tube cell at initial temperature θ_i is cooled in a drain of temperature θ_f. The liquid source is divided into the bulk (B: (-l1 = -9 mm, 0)) and the skin (S: (0, l2 = 1 mm), right-hand side) regions along the x-axis, with thermal diffusivities α_B and α_S and a mass density ratio ρ_S/ρ_B = 3/4 [14,16] in the respective regions. x = 0 is the bulk-skin interface. h_j is the heat transfer (radiation) coefficient at the tube ends in the absence (j = 1) and presence (j = 2) of the skin.

Fig. 2. Thermal relaxation curves θ(θ_i, t) (at x = 0) in the (a, b) absence (α_S/α_B = 1) and (c, d) presence (optimized at α_S/α_B = 1.48) of the skin supersolidity, and in the (a, c) absence (v_S = v_B = 0) and (b, d) presence (v_S = v_B = 10^-4 m s^-1) of the thermal convection of the liquid heat source. The Mpemba effect, characterized by the crossing temperature of the θ(θ_i, t) curves, occurs only in the presence of the skin supersolidity, disregarding the thermal convection. Insets (a) and (b) show the time-dependent thermal field in the tube cell. Supplementing the skin supersolidity, convection only slightly raises Δθ and the crossing temperature.

Fig. 3. Sensitivity of the Mpemba effect (crossing temperature) to the (a) source volume, (b) bulk/skin thickness ratio (l1 : l2), (c) α_S/α_B ratio, (d) radiation rate (h_2/h_1), and (e) drain temperature θ_f. Volume inflation (from 1 to 5 cm) in (a) prolongs the time for reaching the crossing temperature and raises the skin temperature (see inset). (b) The l1 : l2 ratio has little effect on the relaxation curve. Increasing (c) the α_S/α_B and (d) the h_2/h_1 ratio promotes the Mpemba effect. (e) Lowering θ_f shortens the time to the crossing temperature. The sensitivity examination is conducted under the conditions α_S/α_B = 1.48, v_S = v_B = 10^-4 m s^-1, θ_f = 0 °C, l1 = 10 mm, l2 = 1 mm, h_1/k_B = h_2/k_S = 30 W m^-2 K^-1, unless indicated otherwise.

Fig. 4. Numerical reproduction of the measured (insets) (a) thermal relaxation θ(θ_i, t) and (b) skin-bulk temperature difference Δθ(θ_i, t) curves [2,12] for water cooling from different θ_i. Results were obtained using the conditions given in Fig. 3.

Fig. 5. The (a) measured (scattered data) and simulated (solid line) d_H(θ), and (b) the experimentally derived θ_i dependence (each line starts at its θ_i) of the d_H velocity during relaxation under cooling. The velocity of the initially shorter H-O bond at higher θ_i always remains higher than those of the initially longer ones at lower θ_i when they meet.
Fig. 6. (a) Cooling time t (scattered circles) and the fitted relaxation time τ_i (solid line), correlated with (b) the initial energy E_H (solid black line) and vibration frequency ω_H (dashed blue line) [20] of the liquid source cooling from different θ_i.

Fig. 7. O:H-O bond short-range interactions and the cooperative relaxation dynamics of the O:H and H-O segments (denoted d_L and d_H) [23,24]. d_H0 and d_L0 are the respective references at 4 °C. Indicated are the van der Waals-like interaction (vdW approximates the nonbonding interaction) of the O:H nonbond (E_L ≈ 0.1 eV, left-hand side), the exchange interaction of the H-O bond (E_H ≈ 4.0 eV, right-hand side), and the Coulomb repulsion between electron pairs (paired green dots) on oxygen ions. A combination of these interactions and the specific-heat disparity between the O:H and the H-O segments dislocates the O atoms in the same direction by different amounts under cooling. The relaxation proceeds along the O:H-O bond potentials with the H atoms (in grey) as the coordination origin, under heating (red-line-linked spheres, denoted hot) or cooling (blue-line-linked spheres, denoted cold). Springs of different diameters represent the strengths of the respective interactions.

Fig. 8. Temperature dependence [33] of (a) the mass density ρ (in black) and specific heat C_p (in blue), and (b) the thermal conductivity k [19] of liquid water, which together form the thermal diffusivity of bulk water, α_B(θ) = k(θ)/[ρ(θ)C_p(θ)].
Mass Spectrometry-Based Glycoproteomics and Prostate Cancer

Aberrant glycosylation has long been known to be associated with cancer, since it is involved in key mechanisms such as tumour onset, development and progression. This review will focus on protein glycosylation studies in cells, tissue, urine and serum in the context of prostate cancer. A dedicated section will cover the glycoforms of prostate specific antigen, the molecule that, despite some important limitations, is routinely tested to aid prostate cancer diagnosis. Our aim is to provide readers with an overview of mass spectrometry-based glycoproteomics of prostate cancer. From this perspective, the first part of this review will illustrate the main strategies for glycopeptide enrichment and mass spectrometric analysis. The molecular information obtained by glycoproteomic analysis performed by mass spectrometry has led to new insights into the mechanisms linking aberrant glycosylation to cancer cell proliferation, migration and immunoescape.

Introduction

Prostate cancer (PCa) is the most common and the second most lethal cancer in men in the United States, with an estimated 248,530 new cases and 34,130 deaths in the current year [1]. To date, PCa detection is based on the digital rectal exam (DRE) and on the serum dosage of PSA, but the use of PSA is hampered by its low specificity and sensitivity and by its inability to discriminate between aggressive (AG) and indolent tumours [2,3]. In fact, PCa treatment poses some concerns related to its clinical behaviour, since this tumour is silent and often non-aggressive (NAG) in some individuals (low-risk PCa) and AG, with early progression and metastasis onset, in others (high-risk PCa) [4]. Consequently, the discovery of new diagnostic and prognostic biomarkers of PCa is of pivotal importance. Most of the known biomarkers are proteins. Thus, in recent years, proteomics has become a supporting science for clinical research. Proteomic analysis, in fact, sheds light on the alterations correlated with pathological events by characterizing and quantifying proteins and their post-translational modifications (PTMs) [5]. Among protein PTMs, glycosylation is the most widespread modification, and it is implicated in several biological processes [6]. About 250-500 genes are responsible for protein glycosylation [7], generating some 3000 different glycan structures; this microheterogeneity is further complicated by the many possible combinations of different glycan linkages, generating a high number of variants. Glycans can be attached to the polypeptide chain in two ways: O-glycosylation on serine or threonine residues, and N-glycosylation on asparagine residues within the consensus sequence asparagine-X-serine/threonine, where X is any amino acid except proline.
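As a side note for readers who work with sequence data, candidate N-glycosylation sites can be flagged directly from this consensus sequon. A minimal sketch follows; the example sequence is invented, and a regex lookahead is used so that overlapping sequons (e.g. ...N-N-S-T...) are all reported.

import re

def n_glyco_sites(protein_seq: str) -> list[int]:
    """Return 1-based positions of Asn residues lying in N-X-S/T sequons
    (X != P), i.e. matches of the pattern N[^P][ST]."""
    return [m.start() + 1 for m in re.finditer(r"N(?=[^P][ST])", protein_seq)]

seq = "MKTNLSAAPNPTGNNSTQW"      # hypothetical sequence
print(n_glyco_sites(seq))        # -> [4, 14, 15]; N10 is skipped (N-P-T)

Of course, a sequon is only a necessary condition: whether a given site is actually occupied must be established experimentally, which is exactly what the MS workflows discussed below address.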
The glycoproteome is not static but extremely dynamic, depending on cell type, tissue differentiation and disease status [8]. The observed changes are not the direct product of gene expression but the result of an intricate balance between different elements: the correct synthesis and function of the enzymes involved in the process, the availability of sugars, and the activity of the enzymes needed for sugar metabolism [9]. When the harmony between these elements is perturbed, as happens during malignant transformation, altered glycoform expression (over-expression, under-expression, neo-expression) can be encountered [10]. Glycoproteins are usually secreted or membrane proteins. As a consequence, glycoproteome alterations greatly impact crucial cellular processes such as cell signalling, invasion, immune modulation, angiogenesis and cell-cell interaction [11,12]. In the light of the wealth of information that can be collected through glycoprotein analysis, and considering the current lack of a specific biomarker for PCa, the application of glycoproteomic analysis could be a winning path towards a clearer picture of this disorder. To date, glycoproteomics of PCa is an expanding branch of research because it offers the possibility of identifying proteins with a central role in PCa, thus unveiling aspects that straightforward whole-proteome analysis could not detect. Moreover, the study of glycoproteins in PCa is important because the prostate gland produces secreted proteins, and these are usually glycosylated. When the prostate gland undergoes malignant transformation, its architecture is altered, with evident changes in size and shape and progressive epithelium reduction; these modifications have important effects on the secretory pathways, and altered, aberrant glycosylation is a probable consequence of the perturbations occurring after tumour onset [13]. Over the years, many efforts have focused on the study of prostate specific antigen (PSA) and its glycosylation. Notably, several studies have demonstrated that integrating PSA glycosylation analysis with the serum dosage of PSA could help distinguish PCa from benign prostate hyperplasia (BPH). Moreover, PSA from PCa patients shows increased levels of fucosylation [14] and of α2,3-linked sialic acid [15] compared to PSA from BPH patients. The increase in fucosylation in PCa patients seems to be correlated with the up-regulation of fucosyltransferases, the enzymes that catalyse fucosylation; for instance, fucosyltransferase 6 (FUT6) is over-expressed in patients with bone metastasis [16]. Many goals in the glycoproteomic field have been achieved through the development and improvement of several techniques, notably mass spectrometry (MS). MS-based approaches have the great advantage of potentially allowing the full characterization of glycoproteins through analysis of the amino acid sequence, identification of the glycosylation sites and study of the attached sugars. However, the simultaneous analysis and characterization of these different components by MS is a challenging task [17]. For this reason, the choice of the proper workflow for sample preparation and analysis is of primary importance. To date, there is no universal method for glycoproteomic analysis; in particular, two different approaches can be used: top-down and bottom-up. Both strategies are based on the isolation of glycoproteins from the whole proteome. The top-down approach consists in the direct analysis of intact proteins by MS to obtain information about the protein sequence, sugar composition and glycosylation sites. The drawback of this experimental design is that its applicability is limited to simple mixtures of small glycoproteins [18]. By far the most frequently used approach for glycoproteomics is bottom-up, which is based on the MS analysis of proteolytic peptides.
After enzymatic digestion and glycopeptide enrichment (see below), two different paths can be undertaken: the first is based on the chemical or enzymatic removal of glycans from glycopeptides [19], followed by mass spectrometric analysis of the formerly glycosylated peptides; the second is the direct analysis of intact glycopeptides by MS [20] (Figure 1). These two bottom-up strategies provide complementary information for glycoprotein characterization. In fact, the removal of the attached glycans produces peptides that are more suitable for MS/MS analysis, but it only provides information about the identity of the proteins and their glycosylation sites. Conversely, intact glycopeptide analysis, despite a greater analytical complexity, provides information about the attached glycans. An additional challenge in glycoproteomics is the low abundance of many glycoproteins; this problem is addressed by enrichment methods (Figure 2). Both glycoproteins and glycopeptides can be separated from non-glycosylated species using lectins [21]; lectins are carbohydrate-binding proteins exhibiting characteristic glycan-binding specificity: some bind specific glycans, while others recognize a wider range of glycans [22]. The advantage of lectin affinity enrichment is the reversibility of the bond between the lectin and the sugars, which allows the glycan originally present on the glycoprotein to be characterized. Another enrichment method is solid phase extraction of glycopeptides (SPEG) [23]. In this method, which exploits hydrazide chemistry, oxidation of the carbohydrate moieties is the step needed to covalently attach the glycopeptides to a solid support. After enrichment, formerly N-glycosylated peptides are released from the solid support by Peptide:N-Glycosidase F (PNGase F). The de-glycosylated peptides are then characterized by MS. In 2007, Larsen et al. developed a protocol utilizing titanium dioxide (TiO2) to capture sialic acid-containing glycopeptides; this method is based on the high affinity of sialic acids for TiO2 beads in a specific buffer [24]. All enrichment methods, since they involve several steps, constitute a potential source of error. For this reason, the quality of the results can be improved by the use of labelling protocols [25].
For instance, a labelling approach known as metabolic oligosaccharide engineering uses synthetic monosaccharides to label glycoproteins directly in cells and in experimental animals. Among the available molecules, azido monosaccharides are often used because they are small and not naturally present in cells [26]; glycoproteins incorporating the functional groups are separated from the other cellular components through a reaction with affinity probes [25]. The isolated glycopeptides can then be identified and quantified by MS, yielding a relative quantification between the different conditions. Finally, an important step towards improving glycoproteomic analysis could be the development of a database in which the glycoforms associated with specific pathologies are collected. A potential source of such information is UniCarbKB, a database gathering sugar structures and characterized glycosylation sites; these data are freely available, and anyone can explore the glycoforms associated with specific disorders [27]. In the future, the implementation of new software will probably allow the realization of the "Human Glycome Project" [28]. The progress made is there for all to see, but much more needs to be done to improve both glycoproteomic workflows and software for data interpretation. Despite the gaps to fill, MS has proven to be a great support for clinical research, contributing to a deep proteomic map for several sample types. The big challenge of the future will be to detect proteins indicating early tissue alterations in easily collectable biofluids, thus limiting the use of invasive exams. In this context, we will discuss the most promising works related to MS-based glycoproteomic studies on PCa. In the text, to allow a more fluent reading, and since MS actually identifies gene products, protein names are replaced with gene names.

Glycoproteomics of Cells and Tissues

Glycoproteomic analysis of cells and tissues is an attractive approach for investigating the mechanisms involved in PCa development and progression, giving insights into the signalling pathways that trigger tumour onset and promote metastasization. We will start by reporting studies performed on cell cultures, which highlighted how differences in glycoprotein profiles can be caused by alterations in the expression of the enzymes involved in the glycosylation process. LNCaP (androgen-dependent) and PC3 (androgen-independent) cells were used by Shah et al. [29] to analyse glycoprotein structures, glycosylation sites and bound sugars; this work characterized the glycoproteome of these cell lines for the first time. Peptides were labelled with the iTRAQ reagent (isobaric Tags for Relative and Absolute Quantification) and fractionated to analyse the whole proteome and to perform quantitative analysis on the intact glycopeptides. An aliquot of the labelled peptide mixture was enriched by SPEG and analysed after PNGase treatment. Overall, 8063 peptides and 653 N-glycoproteins were identified, 176 of which were found differentially abundant between LNCaP and PC3 cells. The comparison between the differentially expressed glycoproteins (176) and the global proteome showed that for 156 glycoproteins (88.11%) the observed differences were attributable to changes in the amount of the corresponding protein, while for 21 glycoproteins (11.9%) the variations were ascribed to differential glycosylation.
For this reason, the observed changes in the two cell lines could reflect either a variation in protein abundance or a change in the expression of enzymes involved in the glycosylation process. The whole-proteome analysis highlighted that two fucosyltransferases (FUT8 and FUT11) were up-regulated in PC3 cells. This differential enzymatic expression could be the cause of the observed changes in the glycan structures of some glycoproteins. Notably, APMAP, HSP90B1 and PSAP showed an increase in fucosylation levels and a decrease in oligomannose content in PC3 cells with respect to LNCaP cells. The function of FUT8 was further investigated by Wang et al. [30]. They showed that FUT8 inhibition (by siRNA technology) in PC3 cells decreased cell motility, a crucial process in neoplastic evolution. Moreover, using tissue microarrays, the authors showed that FUT8 was elevated in metastatic versus normal tissue and that over-expression of FUT8 was associated with a high Gleason score; they concluded that FUT8 could be used as a potential prognostic biomarker. The same group published a follow-up study focused on protein fucosylation in PCa cell lines [31]. After comparing six different lectins known to have specificity for fucosylated peptides, they adopted the best available protocol for glycopeptide enrichment, consisting of protein digestion, fucosylated glycopeptide capture by Lens culinaris agglutinin (LAC), further purification of the glycopeptides by a hydrophilic interaction chromatography-based mechanism, and glycopeptide quantification based on isobaric mass tagging of the intact glycopeptides. The protocol was applied to four different PCa cell lines, namely two NAG types (LNCaP, LAPC4) and two more AG types (PCA3, DU145), and allowed the quantification of 973 glycopeptides (on 252 distinct glycoproteins), among which 51 were significantly increased in the AG cell lines according to orthogonal partial least squares-discriminant analysis (OPLS-DA). Among the differential peptides, 13 belonged to three proteins: ITGA2, ITGA3 and ITGB1. The authors postulate that increased fucosylation, for which the fucosyltransferase FUT8 seems to be responsible, might activate integrin-mediated cell migration and signalling.
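Since (O)PLS-DA screens like the one just described recur throughout this literature, a minimal sketch of the underlying idea may be helpful. It uses plain PLS regression from scikit-learn on simulated intensities; the sample sizes, features and injected signal are all invented, and with real data one would rank features by their weights (or VIP scores) on the predictive component.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Rows = samples (NAG vs AG cell lines), columns = glycopeptide intensities.
rng = np.random.default_rng(0)
n_samples, n_features = 12, 50
X = rng.normal(size=(n_samples, n_features))
y = np.array([0] * 6 + [1] * 6)        # 0 = NAG, 1 = AG
X[y == 1, :5] += 1.5                   # inject signal into the first 5 features

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.transform(X)              # per-sample coordinates in latent space
weights = np.abs(pls.x_weights_[:, 0]) # feature influence on component 1
print("top features:", np.argsort(weights)[::-1][:5])  # should include 0..4

Orthogonal PLS-DA additionally separates class-predictive variation from orthogonal (class-unrelated) variation, but the feature-ranking logic is the same.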
Immunohistochemistry data showed induction of FUT8 under androgen-restricted conditions and overall increase of core-fucosylation in cancer tissue. The information reported above brought light on an important aspect: the study of differentially expressed glycoproteins can guide the identification of altered activity of enzymes involved in the glycosylation process providing a crucial information on the tumour environment. A better knowledge of cancer perturbations is of pivotal importance for the development of a targeted therapeutic strategy. Glycosylation is abundant on proteins localized on the cell surface. Since these proteins are in direct contact with the extracellular microenvironment, they could be involved in mechanisms that support tumour onset and progression. Some studies have been focusing on the analysis of sialylated surface glycoproteins, since sialylation may correlate with cancer progression, possibly favouring immunoescape [33]. An elegant workflow, specifically targeted at surface/extracellular sialoglycoproteins based on bioorthogonal labelling, has been described in a few studies [34,35]. Cells were labelled with peracetylated N-acetylmannosamine (Ac 4 ManNAc), which would be converted in the corresponding sialic acid derivative, and finally incorporated into the glycoproteins in vitro. The reaction with an enrichment probe (generally a terminal alkyne-biotin) would allow the isolation of the labelled sialoglycoproteins by affinity capture. Yang et al. have applied the workflow described above to the analysis of cell surface sialoglycoproteins in non-metastatic (N2) and highly metastatic (ML2) PC3 cancer cell lines [36]. Proteomic analysis of the captured sialoglycoproteins was achieved by SDS-PAGE separation at the protein level followed by in-gel digestion and nLC-MS/MS. After enrichment they have cumulatively identified 538 proteins (372 from ML2 and 324 from N2 cells). The efficacy of the enrichment method was demonstrated by the fact that 26% and 30% of the identified proteins were, respectively, secreted proteins and cell-surface proteins. Moreover, a semi-quantitative comparison by spectral counting has detected, in ML2 cells, the overexpression of several proteins involved in growth process (SLC38A2, JUP, PTPN1, POSTN, CALR, BSG, PNN, CDCP1, POLR1E, COL6A1), invasion (BSG, POSTN, GPI, CALR, LRRC15, MARCKS), migration and cell mobility (JUP, PTPN1, NPC1, SLC38A2, CAP1), all processes linked to malignant transformation. Protein sialylation was also studied by Spiciarich et al. [35] using a similar approach to that used by Yang et al. based on metabolic labelling with AC 4 ManAz and covalent immobilization on beads by click chemistry. Compared to the previously described work, in Spiciarich et al. normal (n = 8) and PCa (n = 8) tissues were analysed using a more complex model. In total, the authors have identified 972 proteins, 68% of which were either membrane-bound or secreted. The comparative analysis between cancerous and normal tissue resulted in 24 proteins found increased in PCa by a factor higher than 4-fold (VDAC1, DPP2, EZRI, GDF15, FOLH, CATB, AMPN), though quantitative analysis was performed using a low precision approach (spectral counting). Interestingly, the authors found an average 22-fold increase of VDAC1 in PCa versus normal tissue. This observation could be justified by the increase in concentration of the protein itself, or by an increase of its glycosylation levels. 
Protein sialylation was also studied by Spiciarich et al. [35], using an approach similar to that of Yang et al., based on metabolic labelling with Ac4ManAz and covalent immobilization on beads by click chemistry. Compared with the previously described work, Spiciarich et al. analysed a more complex model: normal (n = 8) and PCa (n = 8) tissues. In total, the authors identified 972 proteins, 68% of which were either membrane-bound or secreted. The comparative analysis between cancerous and normal tissue yielded 24 proteins increased in PCa by a factor higher than 4-fold (VDAC1, DPP2, EZRI, GDF15, FOLH, CATB, AMPN), though the quantitative analysis was performed using a low-precision approach (spectral counting). Interestingly, the authors found an average 22-fold increase of VDAC1 in PCa versus normal tissue. This observation could be explained by an increase in the concentration of the protein itself, or by an increase in its glycosylation levels. The role of VDAC1 in PCa was confirmed by another study on PC3 cells, in which silencing of VDAC1 expression led to the inhibition of both cell proliferation and tumour growth in xenografts [37]. The overall results of this small-scale study confirmed that the cancer has a specific sialylation profile. Discriminating between aggressive and indolent behaviour of PCa is of pivotal importance. Chen et al. searched for glycoproteins associated with AG PCa by applying SPEG technology to optimal cutting temperature (OCT)-embedded tissue slices [38]. This work was an extension of a proof-of-principle paper published by the same group two years earlier [39], in which the authors described the methodology and applied it to a limited number of cases (eight in total). In this follow-up work, their discovery sample set consisted of 31 specimens from NAG and 24 from AG PCa. LC-MS/MS analysis identified 350 formerly N-glycosylated peptides belonging to 242 glycoproteins. Among the identified proteins, using a low-precision quantitative approach based on spectral counting, they revealed 17 differentially abundant proteins, among them COMP, CTSL, APMAP, AOC3, POSTN and CSPG2. In order to perform ELISA validation on a second sample set (27 NAG, 20 AG), they chose the most promising candidates through an interesting prioritization approach based on literature analysis: proteins previously associated with PCa and with cancer aggressiveness were selected for validation. This second set of experiments confirmed the overexpression of COMP and POSTN and the decreased expression of CSPG2 in AG compared to NAG PCa. Biomarkers of tumour aggressiveness were also investigated by Liu et al., who carried out a comparative analysis between normal prostate (n = 10), NAG (n = 24), AG (n = 16) and metastatic (n = 25) cancer tissues [40]. After protein extraction and digestion, glycopeptides were isolated by SPEG, treated with PNGase F and analysed by sequential window acquisition of all theoretical mass spectra (SWATH-MS). A spectral library for protein identification and quantification was previously generated by the analysis of pooled samples and synthetic reference peptides, resulting in a total of 2188 N-glycosites and 897 N-glycoproteins. The increased library space allowed the identification of, on average, 1430 N-glycosites per analysis. Compared to previous studies, these results largely expanded the number of identified N-glycosites. Moreover, statistical analysis led to a list of 220 proteins with altered expression profiles among the different groups. An extensive literature search revealed that the majority of these regulated proteins had previously been correlated with PCa and/or associated with general tumour aggressiveness. Besides, 50 out of the 220 proteins were significantly differentially expressed between AG and NAG. The glycoproteomic analysis revealed that NAAA and PTK7 were associated with AG cancer. Validation on a tissue microarray indicated that these two candidates could be used to discriminate between the AG and NAG phenotypes. Their protein list is of particular interest because 17 proteins had previously been identified in human serum by selected reaction monitoring (SRM), and 7 of these (TIMP1, ATRN, ASPN, CADM1, BTD, HYOU1, NCAM1) were considered potential diagnostic PCa biomarkers [41]. Notably, more than 75% of the proteins identified in this tissue study are found in human plasma at low concentrations (<100 ng/mL) but are nonetheless still detectable by MS.
Thus, the obtained candidates are suitable for further validation studies. For the reasons mentioned above, this study represents an excellent starting point for the identification of novel biomarkers able to discriminate between normal tissue and the different cancer stages: AG, NAG and metastatic. In order to evaluate whether PCa progression is accompanied by glycoproteome perturbations, Kawahara et al. [42] performed a glycoproteomic analysis on 5 tissues from patients with BPH and 50 from PCa patients. The latter were further divided into five different groups (n = 10 per group) based on Gleason score. Briefly, after protein digestion, peptides were labelled with Tandem Mass Tags (TMT) and N-glycopeptides were enriched by ZIC-HILIC (zwitterionic hydrophilic interaction liquid chromatography). Non-modified peptides (in the flow-through), intact glycopeptides (from the ZIC-HILIC enrichment) and de-N-glycosylated peptides (from the ZIC-HILIC enrichment after PNGase F treatment) were all subjected to high-pH reversed-phase solid-phase extraction (SPE) fractionation followed by LC-MS/MS analysis; the glycans released by the PNGase treatment were also analysed. Through this approach it was possible to identify approximately 500 N-glycoproteins and 200 O-glycoproteins, obtaining a deep map of the tissue glycoproteome and, as a consequence, increasing the probability of detecting differences between the BPH and PCa glycoproteomes, possibly highlighting glycoforms correlated with tumour progression. Most importantly, the glycan analysis made it possible to evaluate alterations in the abundance profiles of some glycosylation types, highlighting changes between the PCa groups and BPH (t-test, p < 0.05). In particular, the low-grade PCa groups were characterized by elevated levels of paucimannosidic (glycans with a low percentage of mannose) and monoantennary complex-type N-glycans with respect to the BPH group. High-grade PCa, instead, was richer in highly branched complex-type N-glycans, while oligomannosidic-, hybrid- and biantennary complex-type N-glycans showed decreased levels when compared to the BPH group. Moreover, to untangle this rich web of molecular information, the identified proteins were compared with the proteins of prostatic tissue, the proteins of bone marrow and the extracellular matrix (ECM) proteins, using the information collected in the Protein Atlas. These data demonstrated changes (increased oligomannosylation) correlated with tumour stage for glycoproteins of tissue and ECM origin. In addition, the identified paucimannosidic glycans belonged to glycoproteins of immune cells, a further indication of the extreme complexity of the tumour environment. This study clearly demonstrates that, at present, a complete picture of the glycoproteome is only achievable using very complex workflows. PCa, like several neoplastic diseases, is characterized by dynamic events, released mediators and additional components, such as immune cells, that play a key role in determining tumour fate. The tumour microenvironment significantly contributes to cancer proliferation, metastasis and resistance to therapy. In particular, it is known that metastatic castration-resistant prostate cancer (mCRPC) tumours are often enriched in M2 macrophages; these immune cells promote a permissive cell-growth environment through the secretion of cytokines, matrix-degrading enzymes, angiogenic factors and multiple growth factors. Based on this information, Zarif et al.
aimed at identifying M2-enriched surface glycoproteins that may serve as therapeutic targets [43]. Glycoproteomic analysis was performed on human CD14+ monocytes and homogeneous macrophage populations. After SPEG enrichment and de-glycosylation by PNGase F, N-linked glycopeptides were analysed by LC-MS/MS. Overall, 176 unique glycopeptides from 114 proteins were identified at 1% FDR. Label-free semi-quantitative analysis based on spectral counts demonstrated that MRC1, CTSL, ITGA3, LGMN and SLC9A7 were enriched in M2. Moreover, as a further confirmation of the permissive role played by M2 in cancer development, they discovered that homogeneous populations of M2 macrophages secreted anti-inflammatory cytokines such as IL-10 and, more importantly, the angiogenic factor VEGF-A. The confirmation of M2 infiltration in mCRPC was then carried out by flow cytometry, using MRC1 and the M2 macrophage scavenger receptor CD163, whereas M2 infiltration in bone metastases (a common site of mCRPC metastasis) was demonstrated by immunohistochemistry, by staining for MRC1. In conclusion, this study demonstrated M2 macrophage infiltration in human mCRPC. Besides, some surface glycoproteins were found to be enriched in these immune cells. These proteins, and in particular MRC1, seem to be specific to M2 macrophages and, for this reason, could be potential therapeutic targets. Some of the most relevant findings obtained in the studies described in this paragraph are summarized in Table 1.

Glycoproteomics of Biofluids

After tumour onset, the tissue architecture is modified; this morphological change leads to the release into the systemic circulation of molecular cues highlighting the tumour's presence. Proteomics and, in particular, glycoproteomics of body fluids, by retrieving information about membrane and/or secreted proteins, represents a rich source of precious information potentially useful for monitoring cancer onset and/or progression.

Urine

The analysis of the urinary glycoproteome is of great relevance for the characterization of binding sites, carbohydrate-chain structures and glycoproteins released by the prostate gland. Unlike biopsies and blood, urine samples offer distinct advantages for PCa biomarker discovery. Primarily, sample collection is not invasive, and urinary proteins do not degrade quickly after collection [44]. Furthermore, due to the anatomical proximity of the prostate to the bladder and urethra, urine may contain prostatic secretions and exfoliated prostate epithelial cells, empowering the identification of glycoproteins that reflect prostate health status. Kawahara et al. analysed urinary glycoproteins using a single urine sample from a healthy man to describe N-linked glycosylation sites and glycopeptides of low molecular weight [45]. Briefly, urine samples collected on two different days were centrifuged, and 20 mL of supernatant were concentrated on filters with a 10 kDa cutoff (Millipore, Billerica, MA). The flow-through was subjected to hydrophilic-lipophilic-balanced (HLB) SPE to characterize low-molecular-weight endogenous glycopeptides, while the proteins captured on the filter were digested with trypsin. Glycopeptide enrichment was performed by a HILIC protocol; an aliquot of the eluate was deglycosylated by PNGase F. The spectral analysis of intact glycopeptides was conducted with the Byonic software, while the identification of proteins and glycosylation sites was performed with MaxQuant.
In total, 256 glycoproteins with 472 unique N-glycosylation sites were characterized, and 90 glycoproteins with 202 unique N-glycosylation sites were identified by analysing the flow-through. This complex workflow led to a thorough characterization of the urinary glycoproteome, though it has to be considered a proof-of-principle study, since it was applied to a single sample from a healthy donor. In order to evaluate whether urine is a reliable source of prostate-specific proteins and, in particular, of PCa-associated glycoproteins, a glycoproteomic analysis of urine and serum samples from AG and NAG PCa patients was carried out by Jia et al. [46]. The primum movens of this study was to determine which of these biological samples could be a better source of information about the prostatic patho-physiological state. For this study, 40 urine samples from PCa patients and 167 serum samples (n = 119 PCa, n = 48 without PCa) were collected. N-linked glycosite-containing peptides were isolated by SPEG and released from the hydrazide beads by PNGase F. After enrichment and de-glycosylation, peptides from sample pools of either urine or serum were separated into 24 fractions and subjected to LC-MS/MS analysis. In total, 2923 and 2472 formerly N-glycosylated peptides were identified in serum and urine, respectively. The authors showed that 40% of the tissue glycopeptides identified in a previous study [40] were also detectable in urine, whereas only 13% were detectable in serum, suggesting that urine, in view of its proximity to the prostate, may be a more favourable source of prostate-derived proteins than serum. In the same work, the authors performed a label-free quantitative glycoproteomic analysis on an additional sample set consisting of 20 urine samples from patients suffering from either AG (n = 10) or NAG (n = 10) disease. The relative amounts of five glycoproteins, namely PTK7, ICOSLG, AZGP1, FBN1 and GLG1, were significantly decreased in the urine samples from patients with AG disease. In a valuable study by Kawahara et al., glycoproteins in urine samples collected from BPH and PCa patients were compared [47]. In brief, after protein digestion, peptides were TMT-labelled before enrichment of glycopeptides by TiO2 and HILIC protocols. The analysis was performed by MS both on intact N- and O-glycopeptides and on formerly N-glycosylated peptides and desialo-O-glycopeptides obtained by treatment with PNGase F and sialidase A, respectively. The results showed 56 distinct intact N-glycopeptides able to fully discriminate between the BPH and PCa groups. Interestingly, the abundances of both the formerly N-glycosylated peptides and the corresponding proteins were not able to separate the two groups, highlighting that the use of specific glycoforms provided a more effective PCa-specific signature. However, the reliability of the conclusions drawn from this study is undermined by the very limited extent of the sample set (5 PCa and 4 BPH). Urinary expressed prostatic secretions, or EPS-urine, is a sample collected after DRE. Compared to urine, EPS-urine is enriched in prostate-specific proteins. For example, Vermassen et al. showed that the average total urinary PSA concentration (tPSA) before and after DRE was, respectively, 75.5 µg/L and 13,030 µg/L [48]. Dong et al. analysed EPS-urines from 74 and 68 patients with AG and NAG PCa, respectively, aiming to detect differentially expressed glycoproteins between the two groups by data-independent analysis (DIA) [49].
Briefly, 500 µL of EPS-urine were digested by automated tryptic digestion on C4 tips (Lys-C for 1 h and trypsin for 6 h) [50]; after digestion, N-glycopeptides were isolated by an automated C18/MAX-tip method, followed by de-glycosylation with PNGase F and peptide purification by C18 StageTips [51]. To build the spectral library for the DIA analysis, 142 EPS-urine samples were pooled, and the N-glycopeptides were separated into 8 distinct fractions by basic reversed-phase liquid chromatography before LC-MS/MS analysis in data-dependent mode. The DIA data analysis revealed that, out of a total of 1289 unique glycopeptides detected, belonging to 594 glycoproteins, 79 glycopeptides showed differential abundance between the NAG and AG groups. Among these 79 glycopeptides, 54 showed a fold change of at least 1.5 with an estimated FDR of 0.25. The results include glycopeptides belonging to proteins already known to be related to PCa, such as ACCP and CD63 (decreased in AG disease) and DSC2, LOX, LRG1, CLU, SERPINA1 and ORM1 (increased in AG disease). In addition, using the glycopeptides cited above, the discrimination power of different biomarker panels was evaluated: ACCP in combination with serum PSA alone, or ACCP in combination with serum PSA together with each one of the following candidates: CLU, LOX, SERPINA1, ORM1. The best model was the combination of ACCP, CLU and serum PSA, which provided an AUC of 0.86, with a specificity of 50% and a sensitivity of 95%. These results were tested in two validation cohorts. In comparison with the discovery phase, some candidates, such as DSC2 and LRG1, showed a lower discrimination power, while others confirmed their capacity to discriminate between AG and NAG (Table 2), allowing a panel with the best candidates to be delineated. The obtained results seem promising but need further validation in a larger cohort. However, the experimental design of this study has several strengths: (i) the choice of a sample, EPS-urine, enriched in prostate-derived proteins; (ii) the use of an automated system for sample processing, increasing the protocol's reproducibility; (iii) the implementation of DIA, a sensitive analytical method allowing the detection of low-abundance proteins. Some of the most relevant findings obtained in the comparative analyses described in this paragraph are summarized in Table 3.
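To make the panel evaluation above concrete, the following minimal sketch combines candidate markers with serum PSA in a logistic-regression model and scores it by ROC AUC. The values are simulated, not the study's data, and the feature columns merely stand in for markers such as ACCP, CLU and serum PSA.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 142                                     # cohort size, illustrative
y = rng.integers(0, 2, size=n)              # 0 = NAG, 1 = AG
X = rng.normal(size=(n, 3))                 # columns: marker 1, marker 2, PSA
X[y == 1] += [0.8, 0.6, 0.5]                # class shift to mimic a real signal

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"panel AUC (training data, optimistic) = {auc:.2f}")

Note that scoring on the training data inflates the AUC; in practice cross-validation or held-out cohorts are required, which is exactly why the study above tested its panel in two independent validation cohorts.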
Serum

Blood is a potential gold mine for the discovery of novel candidate biomarkers, as it keeps information about all body systems. In its route through the body, blood comes into close contact with all tissues and organs, acquiring valuable information about the individual's health. Tissue-derived proteins are released into the blood circulation by secretion or leakage and are ultimately diluted to low concentrations, which typically lie in the ng/mL range. Consequently, tissue proteins are only a negligible fraction of the whole blood protein content; their detection by proteomic techniques is therefore very challenging, ultimately complicating the use of serum as the sample of choice for the discovery of new cancer biomarkers. A valuable attempt to overcome these issues for PCa biomarker discovery was made by Cima et al. in 2011 [41]. They developed a two-stage strategy, articulated in an initial discovery phase applied to a mouse model of PCa progression and a second validation phase on human serum and tissues. Briefly, in the first stage, PCa candidate biomarkers were discovered by enriching N-linked glycopeptides through SPEG to detect and quantify differentially expressed glycoproteins in the prostate tissue (n = 8) and sera (n = 8) of Pten cKO mice compared to control animals. In the second stage, selected candidates were measured in serum by targeted proteomics and ELISA to evaluate the association of PTEN inactivation in human PCa with a specific serum signature (n = 143; 66 BPH and 77 PCa). As a result, they identified a five-protein signature comprising GALNTL4, FN, AZGP1, BGN and ECM1 that predicted patients having tumours with a Gleason score <7 or ≥7 with an AUC of 0.788, and a four-protein signature comprising ASPN, CTSD, HYOU1 and OLFM4 discriminating between the BPH and PCa groups with an AUC of 0.726. Intriguingly, the combination of the four-protein signature and PSA resulted in an AUC of 0.840.

Subsequently, Kalin et al. assessed whether the quantitative proteome alterations following PTEN loss could harbour prognostic biomarkers in mCRPC [52]. In particular, this study combined the candidates derived from the cancer-genetics-guided model proposed by Cima et al. with previously validated biomarkers used in prognostic nomograms. Formerly N-glycosylated proteins were quantified by either mass-spectrometry-based targeted proteomics or ELISA in the sera of 57 mCRPC patients, processed as described in the aforementioned study. The results yielded a five-factor predictor comprising THBS1, CRP, PVLR1, EFNA5 and MME, with an accuracy of 96% and 94% in predicting 12- and 24-month survival, respectively.

In 2015, Thomas et al., in an attempt to discover a biomarker signature able to distinguish AG PCa from indolent, NAG PCa, developed "tier 2" multiplexed targeted MS assays for the quantification of N-glycopeptides in serum by parallel reaction monitoring (PRM) [53]. PRM assays, being usually based on the use of isotopically labelled internal standards, are characterized by high precision. The assay relies on parallel precursor isolation and fragmentation of target ions followed by a single MS/MS scan at high resolution. This scan mode is suitable for the targeted verification of selected candidates found in discovery experiments. In this case, the starting point was a list of 377 glycopeptides considered potential targets of interest in two previous works on PCa tissue published by the same authors [38,39]. N-linked glycopeptides were enriched by SPEG, followed by the specific release of formerly N-linked glycosylated peptides by PNGase. Forty-three N-linked glycopeptides were selected for PRM quantification based on their detectability in serum, and 41 could be reproducibly quantified in 75 serum samples (n = 25 AG, n = 25 NAG, n = 25 without PCa). Among the 41 assay targets, only 4 N-linked glycosite-containing peptides showed significantly higher levels (p < 0.05) in serum from the NAG vs. AG patient groups: AFNSTLPTMAQMEK (CD44); EEQFNSTFR (IGHG2); GAFISNFSMTVDGK (ITIH2); and INNTHALVSLLQNLNK (CDH13). Despite the appreciable investigation strategy, this study showed no significant differences between the PCa (AG, NAG) and non-PCa groups.
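Per-peptide group comparisons like those above amount to testing each quantified glycopeptide's abundances between cohorts and flagging those with p < 0.05. A minimal sketch with a nonparametric Mann-Whitney test; the abundance values are synthetic and the choice of test is an assumption, since the source does not state which test was applied.

# Sketch: comparing one PRM-quantified glycopeptide between NAG and AG sera.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
nag = rng.lognormal(mean=1.3, sigma=0.3, size=25)  # e.g. EEQFNSTFR (IGHG2), NAG
ag = rng.lognormal(mean=1.0, sigma=0.3, size=25)   # same peptide, AG

stat, p = mannwhitneyu(nag, ag, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")  # report the peptide if p < 0.05

In a real 41-target assay this test would run once per peptide, ideally with a multiple-testing correction.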
In 2018, Totten et al. performed a quantitative glycoproteomic analysis using multi-lectin affinity chromatography (M-LAC) to compare the circulating levels of proteins and their glycoforms in sera of BPH and PCa patients (n = 10 PCa, n = 7 BPH) [54]. This study was focused on glycosylation alterations previously reported to be aberrant in PCa: the increase in glycan branching, as well as the increase of fucosylation and sialylation. To this end, Aleuria aurantia lectin (AAL) was used to capture intact core-fucosylated proteins, and Phaseolus vulgaris leucoagglutinin/erythroagglutinin (PHA-L/E) were used to capture glycoproteins containing highly branched glycans. Serum samples (BPH and PCa) were subjected to immunodepletion of the 14 most abundant proteins; the non-depleted proteins were alkylated using heavy 13C-acrylamide. Proteins from a sample pool were derivatized with light 12C-acrylamide in order to provide a reference sample for relative quantification. All heavy/light mixes were subjected to affinity capture by either AAL or PHA-L/E lectins. Bound (AAL, PHA-L/E) and unbound (UNB) fractions were subjected to reversed-phase chromatography at the intact protein level (n = 3 × 13 fractions per sample); finally, protein fractions were digested with trypsin, and the resulting peptides were analysed by LC-MS/MS. Relative quantification was based solely on acrylamide-labelled peptides. As a result, the authors observed alterations in circulating protein levels both at the global level and in specific glycoforms (i.e., specifically enriched in either the AAL or the PHA-L/E fraction). The PCa and BPH groups showed quite similar protein expression patterns, confirming the difficulty of identifying glycoproteins able to efficiently differentiate between the two groups. Despite this high level of similarity in glycoprotein content, some significant differences were observed. Global levels of the CD5L, CFP, C8A, BST1, and C7 proteins were significantly increased in the PCa samples. Besides, glycoform-specific alterations between BPH and PCa were identified among proteins such as CD163, C4A and ATRN in the PHA-L/E fraction, and C4BPB and AZGP1 glycoforms in the AAL fraction, all of which were over-expressed in the PCa group.

In 2018, Sajic et al. employed a cross-tumour comparison to determine the molecular similarities and differences within the plasma/serum proteome in different types of localized-stage carcinoma (colorectal, pancreatic, lung, prostate, and ovarian) [55]. Blood samples (n = 284) were analysed with a proteomic workflow combining N-glycosite enrichment and SWATH MS. In particular, to increase the coverage of low-abundance tissue-derived proteins, N-glycosylated peptides were selectively enriched by SPEG and analysed after de-glycosylation by PNGase F. The results of this multi-cancer comparison revealed that localized tumours display "specific biomarkers" for each individual cancer type as well as "common biomarkers" which derive from a systemic response to cancer. This study showed that FBN1, THBS1 and ITIH3, involved in platelet activation, signalling and aggregation, change across different tumour types and appear to be sensitive to general blood cancer biology, while pregnancy zone protein (PZP) was confirmed to be significantly changed in the serum of PCa patients, as previously reported in the literature, thus emerging as a potential specific candidate biomarker of PCa [42].

In 2018, Gabriele et al. developed a high-throughput protocol which allowed the consistent detection of low-abundance proteins in serum via enrichment of sialylated peptides with TiO2 beads, de-glycosylation by PNGase F and nLC-MS/MS analysis in either data-dependent or targeted mode [56].
In a first step, a peptide library of over 700 formerly N-glycosylated peptides was generated by data-dependent LC-MS/MS analysis of glycopeptides from serum pools of both PCa patients and healthy controls (6 pools in total). Then, 16 medium- to low-abundance proteins (DSG2, IL6ST, LAMP2, PLXNB2, GOLM1, CTSD, TIMP1, PTPRF, IGFBP3, ASPN, POSTN, APMAP, LAMP1, LCN2, PIGR, PEDF) were selected and quantified in duplicate by SRM in a cohort of 54 patients (PCa = 24 and BPH = 31). Four formerly N-glycosylated peptides, belonging to the four proteins APMAP, POSTN, CTSD and LAMP2, were found significantly increased in PCa sera compared to the control group.

Results of these studies are summarized in Table 4. Though Table 4 only contains a selection of the DAPs found in each work, it is evident that there is limited overlap between the different studies. For example, only two proteins (AZGP1 and POSTN) were identified as DAPs in three different studies. The low incidence of common discoveries is possibly due to multiple reasons. Probably the most important factor is that the focus of the studies differed, varying from comparing AG to NAG disease, to discriminating between PCa and BPH, to comparing the glycoproteomic profiles of several cancer types. Besides, the sample preparation and MS instrumentation employed were different, resulting in large variations in terms of proteome coverage and in the type of analytes detected (de-glycopeptides, intact glycopeptides, glycoproteins). Overall, we summarize the principal advantages and disadvantages of each sample type mentioned above (Figure 3).

Prostate Specific Antigen

Since being introduced in the clinic in the mid-1980s, the PSA blood test has been widely used for PCa early detection in association with DRE. PSA, an androgen-regulated serine protease, is a 28.7 kDa glycoprotein exclusively produced by the prostatic gland. It belongs to the kallikrein family and is encoded by the KLK3 gene. PSA is synthesized by the columnar epithelial cells as a 261-amino acid (aa) prepro-protein having a 17-aa signal sequence.
After cleavage of the signal sequence, the protein is released into the prostatic ducts as an inactive 244-amino acid precursor (proPSA). In the prostatic ducts, proPSA is activated by enzymatic cleavage (of 7 N-terminal aa) into PSA: a 237-aa protein with a single glycosylation site at Asn-45 (Asn-69 in preproPSA). PSA is a major protein in semen, where its function is to cleave semenogelins in the seminal coagulum. However, small amounts of this glycoprotein are also found in the blood circulation of healthy individuals as a consequence of its passage through some anatomic barriers such as the basement membrane, the stromal layer, and the walls of blood and lymphatic capillaries [57,58]. High PSA levels in serum, resulting from the loss of the architecture of the prostate gland and the disruption of basal cells and basement membrane by tumour cells, are found in PCa patients [59] (Figure 4). This occurrence represents the rationale of PSA blood testing.

However, significant limitations plague PSA blood screening. In fact, increased levels of PSA in blood are not cancer-specific; other conditions can raise PSA blood levels, such as benign prostate hyperplasia (BPH), prostatitis, or manipulations of the prostate (e.g., bicycling or catheterization). Moreover, PSA testing lacks sufficient sensitivity, because serum levels of PSA do not necessarily increase in the presence of advanced PCa. Another issue with the PSA blood test is "the diagnostic grey zone" (men with blood concentrations of PSA between 4 and 10 ng/mL), in which only 25% of patients have PCa, leading to the execution of many unnecessary biopsies [60,61]. Importantly, this test does not differentiate between indolent and aggressive forms of PCa. As a consequence, the clinical implementation of this screening has led to a reduced incidence of advanced disease and mortality, but also to overdiagnosis and overtreatment [62]. Thus, several efforts have been made to improve the diagnostic and prognostic power of PSA, such as: (i) normalizing PSA on the basis of the prostate gland volume (PSA density); (ii) monitoring the kinetics of PSA in serum (PSA velocity, PSA doubling time); (iii) measuring multiple molecular traits (e.g., free and complexed PSA). In particular, the ratio of free to total PSA (%-free PSA) has been shown to decrease in PCa compared to BPH and has been approved by the FDA for use in patients who fall in the diagnostic grey zone. Further improvements have been made by the introduction of tests measuring different isoforms of PSA, namely the Prostate Health Index (PHI) and the 4Kscore. PHI combines measurements of free, total, and (-2)proPSA (an isoform preferentially produced by cancer cells) into a single score, while the 4Kscore measures a panel of four kallikreins: free PSA, total PSA, intact PSA, and kallikrein-like peptidase 2 (hK2). Both tests have outperformed %-free PSA in detecting PCa [60].
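These derived indices are simple functions of the measured PSA forms. The sketch below computes %-free PSA and PHI; the PHI formula, ((-2)proPSA / free PSA) × √(total PSA), with proPSA in pg/mL and free/total PSA in ng/mL, is the commonly cited Beckman Coulter definition, and the input values are illustrative only.

from math import sqrt

def percent_free_psa(fpsa_ng_ml, tpsa_ng_ml):
    # %-free PSA: lower values point toward PCa rather than BPH.
    return 100.0 * fpsa_ng_ml / tpsa_ng_ml

def phi(p2psa_pg_ml, fpsa_ng_ml, tpsa_ng_ml):
    # Prostate Health Index; mixed units (pg/mL over ng/mL) by convention.
    return (p2psa_pg_ml / fpsa_ng_ml) * sqrt(tpsa_ng_ml)

# A hypothetical patient in the diagnostic grey zone:
print(percent_free_psa(1.0, 6.0))  # 16.7 %-free PSA
print(phi(15.0, 1.0, 6.0))         # ~36.7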
Thus, a viable way to increase the specificity of this test, and a possible solution for all these issues, could be the characterization and quantification of specific PSA isoforms. In view of what was mentioned above, and on the basis of new evidence linking cancer development to protein glycosylation alterations, many studies have focused on PSA glycoisomers. In the PCa glycoproteomics scenario, MS has emerged as a powerful tool for glycoprotein characterization and quantification. The great potential of this technique derives from its ability to provide both quantitative and structural information, such as glycosylation position, glycan composition and, importantly, quantitative differences between conditions. However, to improve the characterization of glycoproteins and, in particular, of PSA glycoforms, there is still a need to couple MS analysis with upstream separation techniques, such as liquid chromatography or capillary electrophoresis. This strategy greatly improves the resolution and sensitivity of the analysis but also impacts sample throughput [63]. Given the above, the use of matrix-assisted laser desorption/ionization (MALDI-MS) represents an attractive "separation-free" alternative. A recent study on male infertility allowed the quantification of 44 PSA glycopeptides through a high-throughput and easy-to-use MALDI-MS platform. PSA was captured from seminal plasma by bead-based affinity purification and subjected to tryptic digestion. The stabilization of α2,6- and α2,3-sialylated isomers was achieved by a two-step amidation reaction; glycopeptides were then enriched by HILIC SPE and analysed by MALDI-MS. This quick workflow could represent a good starting point for future characterization of PSA glycoforms in PCa [64].
The principal mass spectrometric studies of PSA have mostly focused on the characterization of its glycan structures using commercially available PSA or cell lines, while only a few studies have explored the differences in PSA glycoforms between healthy and cancer states in clinical samples. PSA has different degrees of sialylation, and this results in heterogeneity of its charge. In the literature, two isoforms of PSA are known: the major isoform, accounting for over 90% of PSA, with an isoelectric point of 6.9, and the minor one, with an isoelectric point of 7.2 (PSA high isoform, PSAH). The principal characterization of the glycan structures associated with the two isoforms of PSA was made as part of the 2012 ABRF Glycoprotein Research Group study [65]. This interlaboratory study was focused on the characterization of the two commercially available glycoforms of PSA (PSA and PSAH) by MS. The aim of this study was to evaluate state-of-the-art mass spectrometry-based methods for the determination of differences in glycoprotein structures between the two isoforms. The choice of PSA, a low-molecular-weight protein bearing a single glycosylation site, allowed the implementation of multiple analytical strategies, including intact protein analysis (top-down approach), analysis of glycopeptides (bottom-up approach) and analysis of glycans released after PNGase treatment. Each participating laboratory (22 in total) produced a list of N-glycan compositions with an indication of their corresponding relative intensities for both the PSA and PSAH isoforms. Statistical and comparative analyses were used to identify a consensus from the resulting data. A consensus cluster, representing 17 of the 22 participating laboratories, was identified, producing consistent results for the differential N-glycan composition between the two PSA isoforms. Notably, 61 N-glycans were identified, 8 of which differed significantly in abundance between the two PSA samples. The results demonstrated a greater presence of disialylated and fucosylated structures in PSAH than in PSA. As part of this study, Behnken et al. characterized the PSA and PSAH isoforms by a top-down approach, coupling high-resolution LC-MS and bioinformatics (mathematical deconvolution of isotopically resolved ion patterns and database analysis) [66]. This "plug and play" strategy allowed the identification of 38 glycoforms. A further characterization of the two isoforms of PSA was made by Song et al., who previously took part in the ABRF study. Tryptic digestion of the two separate PSA isoforms produced 3 peptide backbones: NKSVILLGR, AVCGGVLVHPQWVLTAAHCIRNK, and AVCGGVLVHPQWVLTAAHCIRNKSVILLGR. Fifty-six N-glycans were associated with PSA, whereas 57 N-glycans were observed in the case of PSAH; the majority of the observed glycans were identified on the NKSVILLGR backbone. Interestingly, 3 sulphated/phosphorylated glycopeptides were identified. The glycan structures were quantified by spectral counting, and the results demonstrated a good correlation with the 2012 ABRF study [67]. Another PSA isoform which has attracted interest as potentially PCa-specific is the one bearing α2,3-linked sialic acid. Below, some examples of promising studies focusing on the α2,3-sialylated isoform of PSA are reported; the following studies are not based on mass spectrometric analysis.
In 2014, Yoneyama et al. developed a magnetic microbead-based immunoassay measuring the amount of α2,3-sialic acid-linked PSA in serum, first in a training set of 100 samples (non-PCa = 50 and PCa = 50) and then in a validation set of 314 samples (non-PCa = 176 and PCa = 138) [68]. The diagnostic accuracy of this assay was compared with that of conventional PSA and %fPSA. The results showed a sensitivity of 95.0% and 90.6% and a specificity of 72.0% and 64.2% for the diagnosis of PCa in the training and validation sets, respectively. Moreover, in the validation study, the AUC for the detection of PCa obtained with α2,3-sialic acid-linked PSA was significantly higher than that obtained with PSA or %fPSA (α2,3-sialic acid = 0.84, %fPSA = 0.60, PSA = 0.61). Another paradigmatic example of the importance of PSA glycoforms in support of future clinical decisions is the study of Ferrer-Batallé et al. They compared the performance of PHI with a glycoform assay measuring the α2,3-sialic acid percentage of PSA in serum to discriminate between BPH and PCa patients (BPH = 29; low-risk = 7, intermediate-risk = 21, high-risk = 22 PCa) [69]. Briefly, total PSA was immunopurified and then applied to lectin chromatography using Sambucus nigra (SNA)-agarose lectin, which binds to α2,6-sialylated glycoconjugates, allowing the separation of α2,3-sialylated from α2,6-sialylated PSA glycoforms. The unbound and bound chromatographic fractions were collected, and PSA in these fractions was quantified using electrochemiluminescence technology (the bound fraction corresponded to α2,6-sialylated PSA, while the unbound fraction corresponded to α2,3-sialylated PSA). As a result, the % of α2,3-sialic acid outperformed PHI in separating high-risk PCa from the other groups (AUC of 0.971 vs. 0.840). The combination of both markers increased the AUC up to 0.985, resulting in 100% sensitivity and 94.7% specificity in differentiating high-risk PCa from low- and intermediate-risk PCa and BPH patients. These results are promising but require further validation studies. Kammeijer et al. developed a method based on capillary electrophoresis-mass spectrometry (CE-MS) allowing the selective analysis of α2,3- and α2,6-sialylated glycopeptides [70]. This separation platform was applied to the analysis of tryptic glycopeptides of commercially available PSA after glycopeptide enrichment by cotton HILIC SPE. Seventy-five glycopeptides, all attached to the tryptic dipeptide "N69K", were detected, many of which were biantennary structures harbouring 2 terminal sialic acids. The results showed a good separation of α2,3- and α2,6-linked isomers. However, the principal drawback of this setup was the high amount of PSA required for the analysis (1 ng of PSA was injected and analysed), a quantity more readily available in urine samples than in serum. Indeed, this analytical strategy was used to develop a PSA Glycomics Assay (PGA) for the differentiation of α2,6- and α2,3-sialylated isomers of PSA in urine [71]. After affinity purification and tryptic digestion of PSA, samples were analysed by CE-ESI-MS (capillary electrophoresis-electrospray ionization coupled to mass spectrometry). This strategy was applied to 23 urine samples (PCa = 13, non-PCa = 10) from patients suspected of PCa. After tryptic digestion, 0.5 µL of urine from each patient was used to create a sample pool, subsequently used to characterize PSA glycopeptides, as previously done with commercially available PSA.
A total of 67 N-glycopeptides, all attached to the dipeptide "N69K", were identified. This assay demonstrated good intra-day and inter-day variability, below 3% and 7% (RSD), respectively: well below the assessed biological variation (RSD of 50%). Despite the low technical variability of the PGA, no significant differences were detected between the two groups, probably due to the limited sample size. Van der Burgt et al., in a proof-of-principle study, showed a partial separation of sialic acid linkage-specific isoforms of PSA [72]. This exploratory study coupled the use of hydrophilic interaction liquid chromatography (HILIC) with targeted quantitative MS for the analysis of PSA glycopeptides obtained after tryptic or ArgC digestion of commercially available human PSA. However, this strategy resulted in only a partial separation of α2,6- and α2,3-sialylated PSA isomers. Besides, the real potential of this approach should be evaluated by the analysis of clinical samples. Altered sialylation and glycosylation of PSA were also investigated in prostate cancer and patient-matched non-cancer tissues by Li et al., who developed an SRM assay to quantify formerly glycosylated and sialylated PSA [73]. In this approach, glycopeptides were isolated from tryptic digests of tissue samples (PCa = 9 and non-PCa = 9, obtained from the same 9 subjects) by either SPEG or a modified SPEG protocol able to capture sialylated glycopeptides using mild oxidation conditions. Quantification of the formerly N-glycosylated peptides obtained from the two parallel enrichment protocols, carried out by SRM, was performed using a heavy-isotope-labelled PSA peptide. The results showed no correlation between total PSA (measured by immunoassay) and the abundance of either glycosylated or sialylated PSA. Besides, no significant differences were observed in either total PSA glycosylation or sialylation between the PCa and non-PCa groups. Chun-Jen Hsiao et al. quantified the relative abundance of urinary PSA glycoforms in BPH and PCa patients [74]. In particular, PSA was captured from 50 mL of urine using anti-PSA beads, denatured, separated by SDS-PAGE and finally digested with chymotrypsin and analysed by LC-MS/MS. PSA glycoforms were quantified by a label-free method; the abundance of each PSA glycopeptide was normalized using the following equation: level of PSA glycopeptide = (glycopeptide ion abundance)/(PSA internal reference peptide ion abundance) × 100. The authors found that the two most frequently observed glycoforms in BPH samples were H5N4S1F1 (71%; H: hexose; N: N-acetylhexosamine; S: sialic acid; F: fucose) and H6N3S1 (69%), while in PCa they were H5N4S2F1 and H6N3S1 (55%). Though this work represents the first attempt to exploit the detection of specific PSA glycoforms for clinical purposes, it shows several important limitations regarding sample collection and analysis. Notably, no internal reference peptide or PSA glycopeptide could be detected in almost one third of the sample set (originally comprising 61 BPH and 38 PCa samples), leading to the exclusion of these samples from subsequent data analysis. By considering either the relative levels of total glycopeptides or of unfucosylated glycoforms, the authors achieved a sensitivity of 88% and a specificity of 60% in classifying the remaining samples (43 BPH and 20 PCa).
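The label-free normalization used by Chun-Jen Hsiao et al. follows directly from the equation above; in this sketch the ion abundances are hypothetical placeholders.

# Each PSA glycopeptide is expressed relative to a PSA internal
# reference peptide, as in [74].
def glycopeptide_level(glyco_abundance, reference_abundance):
    # (glycopeptide ion abundance / reference peptide ion abundance) x 100
    return 100.0 * glyco_abundance / reference_abundance

reference = 4.5e6  # ion abundance of the PSA internal reference peptide
abundances = {"H5N4S1F1": 3.2e6, "H6N3S1": 2.9e6, "H5N4S2F1": 1.1e6}
for glycoform, a in abundances.items():
    print(glycoform, round(glycopeptide_level(a, reference), 1))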
In an attempt to improve the efficiency of PSA in discriminating PCa from BPH and AG PCa from NAG PCa, Lang et al. investigated PSA core-fucosylation [61]. A total of 150 patients were enrolled in the study (BPH = 50, AG-PCa = 50 and NAG-PCa = 50), all within the PSA diagnostic grey zone. The quantification of total and core-fucosylated PSA was obtained by immunoaffinity enrichment of PSA from serum and partial de-glycosylation by Endo F3 (which cleaves specifically within the chitobiose core of N-linked fucosylated biantennary and triantennary oligosaccharides), followed by tryptic digestion and then LC-MS/MS analysis. Finally, total PSA and fuc-PSA were analysed by measuring the peptide LSEPAELTDAVK and the glycopeptide NK + GlcNAc + Fuc, respectively. The data showed that the total PSA concentrations measured by LC-MS/MS and by an electrochemiluminescence immunoassay analyser (ECLIA) exhibited good agreement. However, %-free PSA (AUC of 0.74) outperformed fuc-PSA (AUC of 0.58) in correctly classifying BPH and PCa samples. Moreover, fuc-PSA was not able to stratify aggressive and non-aggressive PCa better than standard methods (%-free PSA and total PSA). Glycans of PSA were also investigated in PCa tissue-originated spheroids (CTOS) [75]. The cardinal principle of this study was that CTOS-derived PSA is considered to reflect the glycan structures of the patient's tumour. The PSA glycan profile obtained from a single patient's CTOS was compared with that of PSA from normal seminal plasma and two cancer cell lines (LNCaP and 22Rv1) using lectin chromatography and mass spectrometry. The results indicated a higher fraction of Concanavalin A-unbound PSA in the three cancer cell types with respect to seminal plasma. Interestingly, in the Con A-unbound fraction, 2 novel forms of PSA were identified: a high-molecular-weight PSA with highly branched N-glycans and a low-molecular-weight PSA which was either a truncated or an unglycosylated isoform. Being common to all samples of cancer origin and not to seminal PSA, the two novel glycoforms are proposed as potential new cancer markers and will be the object of follow-up studies. The most promising MS-based study on PSA glycosylation is perhaps that of Haga et al., which established a novel PCa-specific diagnostic model (PSA G-index) integrating immunoaffinity enrichment of PSA from serum and mass spectrometric oxonium ion monitoring of tryptic peptides [76]. PSA glycoforms were quantitatively evaluated in sera from 15 PCa and 15 BPH patients having PSA levels in the diagnostic grey zone. The data showed that the abundances of di/trisialylated LacdiNAc (GalNAcβ1-4GlcNAc)-containing structures were significantly increased in the PCa group compared to the BPH group. In total, 52 glycan structures were quantified using 100 µL of serum as starting material. The PSA G-index was then devised using 2 of these glycoforms and was tested in an independent sample cohort (15 PCa vs. 15 BPH falling in the diagnostic grey zone). In this limited sample set, the AUC of the PSA G-index was 1, while that of total PSA or %-free PSA was 0.50 or 0.60, respectively. Moreover, both PSA glycoforms showed a significant correlation with Gleason scores. Thus, the PSA G-index could drastically improve the specificity of PCa diagnosis compared to traditional blood testing, but large-scale experiments are needed to confirm these preliminary results. Some of the most relevant findings obtained in the comparative analyses described in this paragraph are summarized in Table 5.
Table 5. Main differentially abundant glycoforms of PSA found increased in PCa. * indicates non-MS-based workflows. Plain text = discovery level; bold character = validation level. Bead capture = PSA immunopurification by anti-PSA microbeads.

Ref. | Sample | Sample Groups | Workflow | PSA Glycoforms
[68] | Serum | non-PCa = 176 and PCa = 138 | Bead capture * | α2,3-sialic acid-linked PSA
[74] | Urine | BPH = 43 and PCa = 20 | Bead capture | H5N4S2F1 and H6N3S1 glycoforms

Conclusions

The results reported in this review demonstrate that glycoproteomic analysis of cell lines or tissues is helping to further clarify the molecular dynamics around PCa onset and development. For example, several studies mentioned above have reported altered fucosylation in cancer cells and have linked such alterations to the differential expression of fucosyltransferases like FUT8; this was the starting point for functional validation experiments demonstrating the involvement of FUT8 in integrin-mediated cell migration and signalling and in stabilizing EGFR in cancer cells. Concerning biofluid analysis, despite the identification of several novel candidate biomarkers, glycoproteomics has not yet identified an analyte or a protein panel able to replace PSA in clinical practice.
However, the analysis of PSA glycoforms is promising, though it requires further validation studies in order to assess its real advantages over the mere dosage of free and total PSA. Although, to date, glycoproteomic analysis of urine has been performed only at an initial discovery level, since this sample is a prostate-proximal fluid, it holds promise for future success in PCa early detection. Although the ultimate goal of many of the studies discussed in this review is the introduction into clinical practice of a glycoprotein or a panel of glycoproteins able to improve PCa diagnosis, the achievement of this aim is still a long way off. In general, in fact, the best candidates detected in the discovery phase must be validated through the recruitment of large cohorts of patients, involving several laboratories. However, the realization of such multicentric studies is hindered by several factors, such as the elevated costs of the experiments, the complexity and laboriousness of the extraction and analysis protocols currently in use, and the absence of standards for the absolute quantitation of glycan-bearing peptides. Moreover, the commonly used label-free approach holds some intrinsic limitations, mainly represented by its limited robustness and precision. Thus, the improvement of these methodological aspects will be of pivotal importance to allow the implementation of large-scale studies and, consequently, the introduction into clinical routine of the validated biomarkers. Looking to the future, we could hypothesize that the achievement of more standardized protocols (for sample collection, processing and analysis) will allow a more prominent contribution of MS to clinical practice.
Author Contributions: C.G., L.E.P. and M.G. wrote the manuscript; G.C. and M.G. revised the manuscript.
Temporal variability of spawning site selection in the frog Rana dalmatina: consequences for habitat management

We evaluated whether R. dalmatina females laid their eggs randomly within a pond or preferred particular microhabitats. The same measures were performed in the same area in two consecutive years to determine whether the pattern remained constant over time. In 2003, we observed a significant selection for areas with more submerged deadwood and vegetation, presence of emergent ground and low water depth. However, these results were not confirmed in the subsequent year, when none of the microhabitat features measured had a significant effect. Although microhabitat features can strongly influence tadpoles, the temporal variability of habitat at this spatial scale suggests that habitat management could be more effective if focused on a wider spatial scale.

Introduction

Oviposition habitat selection is a key determinant of reproductive success for many oviparous animals, since it can affect important traits such as survival, development and growth rate of the offspring (Mousseau & Fox, 1998). In pond-breeding amphibians, oviposition habitat selection is a process that can occur at several spatial scales (Resetarits, 2005). At the largest spatial scale, females select the ponds that are in the most favourable landscape, not only because the features of the terrestrial habitat are critical for the survival of post-metamorphic stages, but also because landscape features can influence the characteristics of ponds (Skelly et al., 1999; Halverson, 2003; Semlitsch & Bodie, 2003; Porej et al., 2004; Marsh et al., 2005). At a smaller spatial scale, within a suitable landscape, frogs do not usually select breeding waterbodies randomly. Both field observations and experimental studies have shown that females attempt to lay eggs in ponds with fewer predators, with greater food availability, with lower desiccation risk or with optimal thermal and chemical features, thus increasing the survival or growth rate of tadpoles (e.g., Petranka et al., 1994; Viertel, 1999; Binkley & Resetarits, 2003; Ficetola & De Bernardi, 2004; Resetarits, 2005; Rudolf & Rödel, 2005). However, ponds are not homogeneous environments. Within each wetland, many microhabitats can be recognised, with differences in important features such as water temperature and depth, the distribution of animals and plants, and sun exposure. These differences may affect survival and/or growth not only of embryos before hatching but also of tadpoles after hatching. Data on the movements of tadpoles in nature are scarce. However, in a given wetland, tadpoles that hatch close to the more suitable microhabitats could be advantaged when compared with tadpoles that hatched far from suitable areas. This suggests a third spatial scale at which the selection of the laying site can occur, that is, the microhabitat within a given pond (Tarano, 1998). Knowledge of a selection pattern for a given microhabitat within wetlands could have important consequences for the management of amphibian populations. However, only a limited number of studies have examined whether amphibians lay their eggs randomly within a pond and evaluated the possible consequences of site selection (Jacob et al., 1998; Tarano, 1998).
In this study, we investigated whether, within a pond, the Agile Frog Rana dalmatina lays eggs in microhabitats with selected features. Rana dalmatina could be an excellent species for studying within-pond spawning selection, since its egg masses are easily identifiable and are usually fixed to the substrate, thus minimizing the risk of movement after laying. Moreover, as R. dalmatina is an explosive breeder, temporal differences between the laying dates of females are minimal, reducing the risk that differences in selection are caused by temporal variation. Finally, R. dalmatina is a species that is rigorously protected in the European Union (Council Directive 92/43/EEC of 21 May 1992 on the conservation of natural habitats and of wild fauna and flora), and wetland management is often performed to improve the survival of its populations.

Methods

Rana dalmatina is a brown frog that is widely distributed in Central and Southern Europe. It inhabits deciduous forests from sea level to an altitude of about 800 m (Grossenbacher, 1997). Rana dalmatina breeds in late winter-early spring in wetlands with stagnant water; each female lays a single egg mass that is usually fixed to the substrate (Nollert & Nollert, 1992). We studied a single R. dalmatina population breeding in a pond (diameter: about 50 m) within the "Ca' del Re" moor (Parco Regionale delle Groane, Lombardy, Northern Italy). The pond is generally permanent but can exceptionally be dry. A potential issue in studies analysing the relationship between species and habitat is its temporal stability. For the applicability of management studies, data need to be validated during subsequent intervals (Vaughan & Ormerod, 2005). We therefore collected the data in two subsequent breeding seasons (2003 and 2004) to evaluate whether the results obtained during one season can be generalized. The number of R. dalmatina females breeding in this pond, estimated on the basis of egg masses, was 63 in 2003 and 72 in 2004. To improve its suitability for R. dalmatina and the Smooth Newt Triturus vulgaris, in 1998-2001 this wetland was subjected to habitat management (eradication of allochthonous plants; increase of wetland surface and depth) (Ferri et al., 2004). Two other species of amphibians are also present in this area, the Italian Tree Frog Hyla intermedia and the Pool Frogs belonging to the Rana esculenta complex.

In early spring 2003, we haphazardly selected 36 R. dalmatina clutches laid within this pond. To reduce spatial autocorrelation we allowed a minimum distance of 1 m between two selected clutches. We also randomly selected 29 further points. The minimum distance allowed between two random points, or between a random point and a clutch, was 1 m. Random points were selected along the pond banks, since all egg masses were laid close to the banks. For each egg mass and for each random point, we measured eight environmental variables (table 1). A square frame (1 m²) divided by a 0.1 x 0.1 m grid was overlaid on each clutch and on each random point to improve the measurement of environmental features. The same protocol was repeated in spring 2004, when we measured the microhabitat features of 20 clutches and 18 random points.
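The 1 m minimum-spacing constraint on clutches and random points can be implemented as simple rejection sampling. The sketch below treats positions as one-dimensional coordinates along the bank; the paper does not describe its randomization procedure, so this is only one plausible implementation with invented dimensions.

import random

def sample_random_points(n, bank_length, existing, min_dist=1.0, max_tries=10000):
    # Draw n positions along the bank, each at least min_dist from every
    # already-accepted position (clutches and previously drawn points).
    accepted = list(existing)
    new_points = []
    for _ in range(max_tries):
        if len(new_points) == n:
            break
        x = random.uniform(0.0, bank_length)
        if all(abs(x - p) >= min_dist for p in accepted):
            accepted.append(x)
            new_points.append(x)
    return new_points

clutches = [random.uniform(0.0, 150.0) for _ in range(36)]  # mapped clutch positions
points = sample_random_points(29, bank_length=150.0, existing=clutches)
print(len(points), "random points placed")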
Data analysis

We used logistic regression to analyse clutch distribution, using the likelihood ratio (i.e., the change in deviance if a variable is added to the model) to calculate the significance (Menard, 1995). We built all possible models including only significant variables, and then ranked the models according to their AIC values (Burnham & Anderson, 2002). The model with the lowest AIC value accounted for the greatest deviance on the basis of the smallest number of parameters. AIC was thus used to rank the models according to their performance (Rushton et al., 2004). Models differing by less than 2 AIC units from the best model are usually considered good candidates (Burnham & Anderson, 2002). However, as all other models differed by > 3.6 AIC units from the best model, only the best model was considered and is shown in the results. The logistic regression model was built using the data collected in 2003 and was validated using the data collected in 2004. It was not possible to perform the inverse procedure, since no significant models could be built using the data collected in 2004.

To avoid multicollinearity, we calculated the pairwise correlations between variables in the two years, considering that the risk of multicollinearity arises if the pairwise correlation among variables is > 0.7 (Berry & Feldman, 1985). For the environmental data collected in 2003, the model was not biased by multicollinearity, as all |r| were ≤ 0.6. In 2004, we observed a strong, negative correlation between the percentage of submerged vegetation and the percentage of emergent vegetation (r = -0.788). However, as none of the variables were significant, multicollinearity could not be a source of bias.

We also used a t-test to determine whether pond features changed between 2003 and 2004. Only the features of the random points were considered for this analysis. Approximated degrees of freedom were used if variances were not homogeneous between groups. To meet the assumptions of parametric tests, where necessary, data were transformed using the arcsine-square root (percentage data) or natural logarithms (distance from the nearest woodland, density of submerged deadwood).
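The model-selection procedure just described (fit logistic regressions on subsets of the significant predictors and rank them by AIC) can be reproduced with standard statistical libraries. A sketch with synthetic stand-in data; variable names loosely follow table 1, and the original analysis was presumably run in a dedicated statistics package.

# Rank all-subset logistic regressions by AIC (clutch = 1, random point = 0).
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "deadwood": rng.poisson(2, 65),          # submerged deadwood count
    "water_depth": rng.uniform(5, 60, 65),   # cm
    "subm_veg": rng.uniform(0, 100, 65),     # % submerged vegetation
    "emerg_ground": rng.integers(0, 2, 65),  # emergent ground present
})
y = np.r_[np.ones(36), np.zeros(29)].astype(int)  # 36 clutches, 29 random points

ranked = []
for k in range(1, df.shape[1] + 1):
    for combo in itertools.combinations(df.columns, k):
        fit = sm.Logit(y, sm.add_constant(df[list(combo)])).fit(disp=0)
        ranked.append((fit.aic, combo))
for aic, combo in sorted(ranked)[:3]:   # three best models by AIC
    print(round(aic, 1), combo)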
Results

We counted 63 egg masses in 2003 and 72 egg masses in 2004. In 2003, 35% of egg masses were isolated (no other egg masses at a distance < 1 m), while 65% of egg masses were aggregated in groups of 2-6 clutches. A similar pattern of aggregation was observed for a subset of 36 egg masses, for which we recorded the location in 2004 (table 2). The frequency distribution of aggregations was almost identical between the two years (Kolmogorov-Smirnov test, Z = -0.120, P > 0.99). Our best model shows that, in 2003, clutch presence was positively associated with the number of submerged deadwoods, the percentage of submerged vegetation and the presence of emergent ground, and negatively associated with water depth (table 3). The model explained 28.1% of the null deviance and strongly suggested that R. dalmatina females do not lay eggs randomly (χ² = 25.153, d.f. = 4, P < 0.0001). In 2004, we did not detect the presence of deadwood or emergent ground in the proximity of egg masses or at the random points (table 1); since these features showed no variation, we could not include them in the analysis. The model built in 2003 was not significant in 2004 (χ² = 2.516, d.f. = 2, P = 0.284) and explained only 5% of the null deviance. Moreover, we failed to find any significant relationship between the distribution of egg masses in 2004 and the environmental features. The percentage of submerged vegetation was the variable showing the strongest relationship with the distribution of egg masses, but this relationship was far from significance (χ² = 1.793, d.f. = 1, P = 0.240).

Most pond features changed little between the years (table 4). In 2004 the pond tended to be shallower, but the random points did not differ significantly in water depth between the years. Moreover, pond banks were significantly less steep in 2003 (table 4). The complete lack of submerged wood and of areas with emergent ground in 2004 (table 1) suggests that substantial variation in these two features occurred. However, the difference in the model between 2003 and 2004 was not entirely due to the lack of submerged deadwood and of emergent ground in the pond during the 2004 breeding season. To show this, we built a logistic regression model for the data collected in 2003 without including the variables submerged deadwood and emergent ground. After the exclusion of these two variables, both water depth (χ² = 6.698, d.f. = 1, P = 0.010) and the % of submerged vegetation (χ² = 6.061, d.f. = 1, P = 0.014) had a significant effect on the distribution of egg masses. The model including only these two variables still explained 11% of the null deviance.

Discussion

Our study showed a different pattern in the two years. In 2003, a strong relationship was observed between microhabitat features and the distribution of the egg masses of R. dalmatina. This relationship could suggest that R. dalmatina selects the area where eggs are laid, and allows speculation about the potential importance of this process for the offspring. However, in the same area the relationship was not confirmed during the following year. The lack of validation with the dataset collected in 2004 makes it more complex to interpret the significant pattern observed in 2003 and to test its applicability in management. The pattern of laying site selection observed in 2003 can be interpreted in light of the influence that environmental conditions can have on the development of embryos and tadpoles immediately after hatching (see below). The preference for areas with abundant submerged deadwood is easily explainable, since R. dalmatina and several other brown frogs frequently fix their eggs to submerged wood. Fixing eggs could reduce the risk of drifting and, at the same time, fixing eggs under the water surface could reduce the risk of freezing on cold nights and of predation by ducks (Pozzi, 1980). As deadwood was absent from the study pond in 2004, it was not possible to validate this relationship. The association with shallow water might be explained by the different thermal conditions of these areas. In areas with lower water depth, the temperature rises more quickly on sunny days: a warm temperature increases the growth and development rate of both embryos and tadpoles (Bachman, 1968; Skelly et al., 2002); in turn, fast growth and development are believed to be important measures of the performance of embryos and larvae and frequently correlate well with their survival (Semlitsch, 2002 and references therein). The thermal conditions of the water have previously proven to be a major force influencing breeding site selection at both the landscape and the pond scale (Skelly et al., 1999, 2002; Ficetola & De Bernardi, 2004, 2005a). The association with areas of the pond with emergent ground could be explained on a similar basis. Finally, in areas with more submerged vegetation, tadpoles could find more food and greater shelter from large predators, such as fish.
The association of R. dalmatina clutches with abundant vegetation has also been shown by Kescés & Puky (1992). However, an association with areas with abundant vegetation is not always favourable, since invertebrate predators (such as Odonata) can be more abundant in such environments (Gunzburger & Travis, 2004). It should be noted that we measured only the distribution of egg masses, and not the survival pattern or tadpole growth. For a complete picture of the effect of egg mass distribution on fitness, it would be necessary to measure the survival of eggs and tadpoles, and even their growth rate.

Behavioural interactions can also have important consequences for the distribution of egg masses. For example, Vieites et al. (2004) showed that mating pairs of the frog Rana temporaria are often followed by clutch pirates, which try to fertilize eggs in the deposited clutches after deposition. On the one hand, females may spawn only when relatively undisturbed by pirates; on the other, they may gain benefits from pirates, as such behaviour may increase the rate of fertilization of the eggs. This trade-off of interests may well influence the distribution of egg deposition, and it is also likely to occur in Rana dalmatina (see K. Grossenbacher, unpublished video recording, cited in Hettyey & Pearman, 2003). Furthermore, at the peak of the breeding season Rana dalmatina males can form aggregations and choruses, which may increase the likelihood of attracting females, and then scramble-compete over approaching females; later in the breeding season fewer males may be present and they may be distributed more randomly over the ponds, forming territories (Picariello et al., 2006). The distribution of males across the pond is strongly affected by these intraspecific interactions and probably plays an important role in the distribution of egg masses. As Rana dalmatina is the only species of brown frog breeding in this pond, interspecific interactions (see discussions by Petranka et al., 1994; Hettyey & Pearman, 2003, 2006; Ficetola & De Bernardi, 2005b, 2006) are not possible.

Surprisingly, the relationship observed in 2003 was not confirmed in the subsequent year, even though the same sampling protocol was applied; this is difficult to explain, although during 2004 we could not assess two of the main features, as they showed no variation. Microhabitat features can be difficult to study, and at this spatial scale departures from the expected patterns are often seen (but see Rudolf & Rödel (2005) for an example of a model transferable in time). For example, Halverson et al. (2006) studied the distribution of tadpoles of the Wood Frog Rana sylvatica in two ponds less than 50 m apart. From the outcome of laboratory studies, it would be expected that tadpoles would be aggregated in kin groups (Blaustein & Waldman, 1992).
However, Halverson et al. (2006) observed an aggregated distribution of kin groups in only one of the two ponds, and found an opposite pattern in the second pond, with kin tadpoles more distant than would be expected if they were randomly distributed. This suggested that the optimal distribution of tadpoles can be context dependent and strongly modified by microhabitat variations. In our study, the absence of relationships might be caused by the change in pond features over the two years. In 2004 the pond was shallower and slightly smaller, and no deadwood was present. Nevertheless, the number of egg masses laid did not decrease between the years, suggesting that these changes in microhabitat did not have a major effect on the reproductive output of R. dalmatina. Differences between years in tadpole performance are possible, but these were not investigated in the present study. The contradictory results between the two years suggest that a larger sample is needed (a higher number of oviposition sites collected over more years), as pond microhabitat features show wide variation. Moreover, sampling more ponds would be necessary to evaluate whether the results are consistent across space.

Non-random choice of egg deposition site within breeding ponds has been demonstrated for several amphibians, including the newt Triturus marmoratus and the anurans R. dalmatina, R. temporaria and Physalaemus pustulosus (Ancona & Capietti, 1996; Jacob et al., 1998; Tarano, 1998). However, interpretation of relationships at this spatial scale can be difficult, as patterns are not always confirmed in successive periods. Small environmental variations can partially explain the difficulty in finding a general pattern. The lack of a clear pattern and the fast variation of microhabitat features with time can hamper the use of this information for habitat management (Wittingham et al., 2003). Indeed, actions performed at a microhabitat level can be quickly neutralized by natural events such as changes in precipitation or in the growth of vegetation. We therefore suggest concentrating the management effort at the largest spatial scales (pond and landscape), as these suffer less temporal instability. Features at the largest spatial scale can influence those at the smaller scale; the presence of surrounding woodlands, for example, can influence the presence of deadwood but also the chemical and physical features of the waterbodies (Kiffney et al., 2003; Ficetola et al., 2004). Analogously, the introduction of fish can modify other features such as turbidity and the distribution of vegetation (Sheffer et al., 1993). Acting at the largest spatial scales could therefore provide more effective results for the management of amphibian populations.

Table 2. Frequency distributions of aggregations of egg masses during 2003 and 2004: Ne. Number of egg masses per aggregation.

Table 4. Comparison of features of random points between 2003 and 2004: results of t-tests. Degrees of freedom are not always integer, since in some cases they were corrected to account for the non-homogeneity of variance: Wd. Water depth; Bs. Bank slope.
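The note on non-integer degrees of freedom in the Table 4 caption corresponds to a Welch-type correction for unequal variances. A minimal sketch of such a comparison, using hypothetical depth values rather than the study's measurements:

```python
# Welch's t-test (equal_var=False) adjusts the degrees of freedom when the
# two years' variances differ, which is why d.f. need not be integers.
import numpy as np
from scipy import stats

depth_2003 = np.array([31.0, 28.5, 35.2, 40.1, 26.7, 33.3])  # hypothetical cm
depth_2004 = np.array([22.4, 19.8, 25.0, 21.1, 18.6])        # hypothetical cm

t, p = stats.ttest_ind(depth_2003, depth_2004, equal_var=False)

# Welch-Satterthwaite degrees of freedom (generally non-integer):
v1, v2 = depth_2003.var(ddof=1), depth_2004.var(ddof=1)
n1, n2 = len(depth_2003), len(depth_2004)
df = (v1/n1 + v2/n2) ** 2 / ((v1/n1) ** 2 / (n1 - 1) + (v2/n2) ** 2 / (n2 - 1))
print(f"t = {t:.3f}, d.f. = {df:.2f}, P = {p:.3f}")
```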
Sublingual Microcirculation Specificity of Sickle Cell Patients: Morphology of the Microvascular Bed, Blood Rheology, and Local Hemodynamics

Patients with sickle cell disease (SCD) have poorly deformable red blood cells (RBC) that may impede blood flow into microcirculation. Very few studies have been able to directly visualize microcirculation in humans with SCD. Sublingual video microscopy was performed in eight healthy (HbAA genotype) and four sickle cell individuals (HbSS genotype). Their hematocrit, blood viscosity, red blood cell deformability, and aggregation were individually determined through blood sample collections. Their microcirculation morphology (vessel density and diameter) and microcirculation hemodynamics (local velocity, local viscosity, and local red blood cell deformability) were investigated. The De Backer score was higher (15.9 mm−1) in HbSS individuals compared to HbAA individuals (11.1 mm−1). RBC deformability, derived from their local hemodynamic condition, was lower in HbSS individuals compared to HbAA individuals for vessels < 20 μm. Despite the presence of more rigid RBCs in HbSS individuals, their lower hematocrit caused their viscosity to be lower in microcirculation compared to that of HbAA individuals. The shear stress for all the vessel diameters was not different between HbSS and HbAA individuals. The local velocity and shear rates tended to be higher in HbSS individuals than in HbAA individuals, notably so in the smallest vessels, which could limit RBC entrapment into microcirculation. Our study offered a novel approach to studying the pathophysiological mechanisms of SCD with new biological/physiological markers that could be useful for characterizing the disease activity.

Introduction

Sickle cell disease (SCD) is an inherited red blood cell disorder caused by a single mutation in the β-globin gene. Due to this mutation, abnormal hemoglobin (called HbSS) may polymerize when deoxygenated, leading the biconcave disc red blood cell (RBC) shape to change into that of a crescent moon. RBCs in individuals suffering from SCD are known to be more rigid [1] and sticky [2], hence promoting vaso-occlusion. As a result of these severe hemorheological alterations, functional blood flow to vital organs and other bodily structures is compromised. Individuals suffering from vascular obstruction may experience side effects varying from mild pain to organ failure. Although gene therapy and hematopoietic stem cell transplants are currently under trial for SCD [3-5], the main treatments available are those that limit the complications and pain associated with the disease [1]. Because the RBCs in SCD patients are more fragile than those in healthy individuals, patients also develop chronic hemolytic anemia with a hemoglobin level of around 7-9 g/dL. The improvement and advancement of therapies aimed at tackling SCD start with understanding the behavior of the disease within an individual.

SCD is also characterized by chronic vascular dysfunction due to the effects of free heme and hemoglobin on endothelial cells and inflammation [6]. It has been demonstrated that free hemoglobin may scavenge nitric oxide, hence decreasing the bioavailability of this compound and decreasing the vasomotor reserve [6]. In addition, the release of arginase from RBCs during hemolysis results in the consumption of L-arginine (i.e., the precursor to nitric oxide), further decreasing nitric oxide bioavailability [6].
Moreover, free heme may promote inflammation through the activation of several pathways, such as the NF-κB and NLRP3 inflammasome pathways [6]; indeed, free heme is now considered to be an erythroid damage-associated molecular pattern (eDAMP) [6]. The activation of endothelial cells also results in the overexpression of adhesion molecules from the selectin and super immunoglobulin families, leading to an increase in the adhesion of all circulating cells (white blood cells, platelets, and RBCs) to the endothelial wall [6]. Increased cell adhesion to the vascular wall may slow down the velocity of RBCs, which would eventually spend more time in deoxygenated areas, thus increasing the risk of sickling and complete vascular occlusion occurring if the vascular system is not able to adapt [6].

The overall function of microcirculation is critical for supporting organ activity and proper functioning [7]. Microcirculation is the main site of oxygen exchange from the RBCs to the tissues in the vascular system and is additionally responsible for the exchange of solutes, hormones, and nutrients; thus, it is critical to understand the flow behavior within these sites [7]. Tissue perfusion under changing conditions can be investigated using intravital microscopy. Typically, studies seek to document the spatial and temporal variability of microcirculation, as it affects perfusion [8]. Microcirculation in the nail, cornea, or sublingual area can be accessed noninvasively, while microcirculation in other locations, such as the brain, can be studied using endoscopies or during surgery. The main parameters extracted from these analyses are geometrical parameters (vessel diameters and density) and local velocities. Sublingual microcirculation has been studied in a few diseases, such as pulmonary arterial hypertension and systemic sclerosis [9,10]. Van Beers et al. previously investigated the blood velocity in sublingual microcirculation during a painful crisis in SCD patients [11].

The role of hemorheological abnormalities in the causation of several SCD complications is the focus of recent studies [12]. It has been demonstrated that the blood viscosity and RBC deformability are lower in SCD patients compared to controls [13-15]. The reduction in RBC deformability in SCD patients is due to the presence of the less soluble HbSS polymers, rather than normal hemoglobin, in the cytosol; the high internal viscosity due to cell dehydration; and the loss of membrane elasticity due to the accumulation of membrane damage [12]. Despite the reduction in RBC deformability, the blood viscosity of SCD patients is lower than that of healthy individuals because the hematocrit is very low in SCD patients (20-25% versus 40-50% in SCD and healthy individuals, respectively). Nevertheless, any rise in the blood viscosity in SCD patients may precipitate the onset of a vaso-occlusive crisis [12]. However, the blood rheological properties are always measured ex vivo in blood samples, and no current technique is able to directly measure both the microcirculation dynamics and blood rheology in humans in vivo. Blood is a shear-thinning fluid, and RBC deformability may vary with the geometrical and flow conditions of the vessels, which may be different from one individual to another [16,17].
Studies exploring the density and morphology of microcirculation through muscle biopsies in SCD patients showed a lower capillary density, a greater tortuosity, and larger capillaries in this population compared to healthy controls [18], but the impact of such differences on the blood flow dynamics and on the local blood rheology is unknown. The present study aimed to investigate the systemic vascular changes (vessel density) associated with the local microhemodynamic (velocity) and local hemorheological properties (viscosity and RBC deformability) in individuals with SCD. The vessel diameters, local velocities, and blood rheological parameters were integrated to further characterize the microhemodynamics and local hemorheological properties in individuals with SCD. To support the analysis of the sublingual microcirculation, a blood rheological analysis was performed individually on blood samples [19]. Ex vivo subject-specific rheological properties were then used to derive the local in vivo flow properties to reflect the physiological flow conditions within the vessels.

Results

Blood Rheology Measured Ex Vivo

There were significant differences between the patients with sickle cell anemia (HbSS) and the healthy individuals (HbAA) with respect to the hematocrit (p < 0.001), the viscosity flow consistence index k (p < 0.05), and RBC deformability (p < 0.001), which were found to be higher in the HbAA individuals (Table 1).

Table 1. Blood rheological parameters for HbAA and HbSS. Significant difference between the two groups: * represents p < 0.05, and ** represents p < 0.01.

Sublingual Microcirculation Profile

The De Backer score was higher (p < 0.01) in the HbSS subjects compared to the HbAA subjects (Figure 1a). The analysis of variance indicated that the distribution of the vessel diameters was different between the HbAA and HbSS groups (Figure 1b; p < 0.05). The HbSS volunteers had a higher number of vessels with diameters < 6 µm than the control volunteers (p < 0.05). In contrast, the HbAA group exhibited a significantly higher proportion of vessels > 16 µm (p < 0.05). The greatest proportion of microvessels in the HbSS group was in the 6-8 µm range, while it was in the 8-10 µm range for the HbAA group. Then, we analyzed the blood flow velocity (Figure 1c) and shear rate (Figure 1d) values in the two groups. Because the velocities were sporadically detectable in the smallest vessels (2-4 µm), this vessel group was removed from the analysis due to a lack of available data. Notably, in 93 ± 11% of the analyzed sites, no 2-4 µm vessels were flowing in the HbSS group compared to 70 ± 23% in the HbAA group (p = 0.22). The velocities of the HbSS and HbAA individuals were not significantly different (p = 0.104). The shear rate of the two groups decreased as the vessel diameter increased (p < 0.001) and tended to be slightly higher in the HbSS group than in the HbAA group (p = 0.054), notably so in the smallest vessel sizes ranging from 6 to 14 µm.

Figure 1 (caption, partially recovered): De Backer score (a, per the text), diameter distribution in sublingual microcirculation (b), velocity distribution within microcirculation (c), and shear rate distribution within microcirculation (d) for HbAA and HbSS groups. Significant differences between HbSS and HbAA groups: * represents p < 0.05, and ** represents p < 0.01. Significant decrease across the vessel diameters: ### represents p < 0.001. No significant difference in shear rate was found between the HbSS and HbAA groups, but a trend was observed of the HbSS group having slightly higher shear rates (p = 0.054).
Local Blood Rheology

The viscosity in the HbSS volunteers was determined to be significantly lower than that in their HbAA counterparts (p < 0.001) across almost all the vessel diameters, with a detailed distribution outlined in Figure 2a. Because the shear rate decreased when the vessel diameters increased, the blood viscosity increased from the smallest to the biggest vessels in the two groups (p < 0.001). No significant difference was found when comparing the shear stress across the vessel diameters of the HbAA group with that of the HbSS group (Figure 2b; p = 0.118). The magnitude of the changes in the shear stress between the smallest and the biggest vessels was comparable between the HbAA group (74.3% decrease in the 4-6 µm and 26-28 µm categories) and the HbSS group (71.3% decrease in the 4-6 µm and 26-28 µm categories). Figure 2c shows the changes in local RBC deformability across all the vessel diameters in the two groups. RBC deformability was higher in the healthy volunteers compared to the HbSS individuals for the vessel diameters ranging from 4 to 22 µm. Moreover, the increase in RBC deformability between the biggest and the smallest vessels was higher in the HbAA group than in the HbSS group, where RBC deformability remained very low (less than 0.1 a.u.).

Figure 2 (caption, partially recovered; per the text, panels show local viscosity (a), shear stress (b), and RBC deformability (c) across vessel diameters in HbAA and HbSS individuals). Significant differences between HbSS and HbAA groups: * represents p < 0.05, ** represents p < 0.01, and *** represents p < 0.001. Significant changes in microcirculation (over vessel diameters): ## represents p < 0.01, and ### represents p < 0.001.
Discussion

Our study demonstrated a greater De Backer score in the HbSS group than in the HbAA group, suggesting a greater capillary density in the former group. The De Backer score encompasses the arterioles, capillaries, and venules. While it does not provide the same information as the functional capillary density, which specifically reflects the metabolic exchange potential and which includes arterioles, it does provide an indication of the density and perfusion of the microvascular bed [20]. This result contrasted with the previous findings obtained through muscle biopsies in SCD patients, which showed a rarefaction in the capillaries [18].
The prevalence of no-flow vessels in the HbSS population could reflect pathophysiologic information, as this may impact perfusion/tissue oxygenation; however, there was no statistical difference between the two groups, probably because of the limited sample size. Studies including more patients are clearly needed. In addition, we observed a higher number of small vessels and a lower proportion of large vessels in the HbSS group compared to the HbAA group, which also contrasted with the muscle biopsy findings, showing a higher number of large capillaries in the SCD patients than in the healthy controls [18]. One of the differences between our study and the study of Ravelojaona et al. [18] was that the analyses of microcirculation were not done in the same tissue: sublingual vs. muscle. Moreover, Merlet et al. [21] recently reported that eight weeks of an exercise training program increased the capillary density of SCD patients. Indeed, high interindividual variability may exist in the SCD population depending on the severity of the disease and on a more or less sedentary lifestyle. Nevertheless, other diseases causing inefficient oxygen transport, such as Moyamoya disease, have been found to have increased vascularization with an increased vessel density compared to control populations [22]. The denser microcirculation network and the increase in the smaller vessels observed in the HbSS population may be credited to the fact that SCD individuals suffer from repeated ischemic episodes due to anemia and repeated microcirculatory blockages due to their poorly deformable RBCs [23]. New blood vessels usually develop in places where they are most needed [24], and the serum levels of the angiogenic factors have been found to be elevated in SCD patients, which indicates a proangiogenic state in this disease [25]. In response to hypoxia, the body tries to counteract this through an adaptive response by developing more transport methods for the RBCs to reach optimum oxygen levels, hence causing an increase in vessel density, a phenomenon called hypoxia-induced angiogenesis [25]. This may occur to maintain organ integrity and an adequate vascular system in cases where blood transport is compromised. Park et al. observed pathologic angiogenesis in the bone marrow of SCD mice associated with highly tortuous arterioles and increased HIF-1α levels [26]. This phenomenon was also observed in other studies that examined an increased vessel density and its impact on blood oxygen transfer in hypoxic environments [27-30].

A previous study on sublingual microcirculation in SCD individuals showed that the sublingual microcirculatory blood flow velocity was not impaired in SCD patients during a painful crisis [11]. Our study also showed that the sublingual blood flow velocity in the HbSS patients that were in a steady state (i.e., not in crisis) was not different from that of the healthy controls, despite the fact that the hematocrit and blood viscosity were lower in the HbSS individuals. Nevertheless, the sample size of our HbSS group was very small, and it seems that the blood flow velocity tended to be slightly higher in the HbSS group than in the healthy individuals, notably so in the smallest vessels, which could limit RBC entrapment in microcirculation. Veluswamy et al. reported that fast-moving RBCs are better able to escape microcirculation before the polymerization of abnormal hemoglobin occurs [31], hence preventing vaso-occlusive crises.
The shear stress of all the vessel diameters was not different between the HbSS patients and the healthy individuals, because the shear rate tended to be higher and the blood viscosity was lower in the HbSS patients compared to the healthy controls. However, RBC deformability was very low in the HbSS individuals compared to the control group, notably so in the smallest vessels. Indeed, in the case of a further loss of vascular reactivity, the accumulation of poorly deformable RBCs would precipitate the onset of vaso-occlusive-like complications. In vessels larger than 22 µm, RBC deformability was not different between the two populations because the shear stress was very low and very similar between the two groups. The low RBC deformability in these vessels was less problematic because the vessel diameter was almost three-fold the diameter of the RBCs, which would allow them to flow more easily than they would in the smallest vessels.

Given the variability of the HbAA and HbSS data in the study and the small population size, the results may not be definitive, and several points deserve closer attention in future studies. The clinical relevancy of the qualitative flow in the microvessels (continuous, sluggish, stop and go, or no flow) could be further tested, as it can reflect pathophysiological information. The correlation between the estimated local shear stress and local autoregulation with endogenous nitric oxide and other endothelial autocoids would also be of great interest. Such correlations could open the door to personalized therapeutic management using a point-of-care sublingual microcirculation assessment. Several other diseases where the RBC rheology and microcirculation are impaired, such as thalassemia, hereditary spherocytosis, channelopathies, metabolic diseases, etc., could benefit from such an integrative approach, which would improve clinical management and identify novel therapies.

Materials and Methods

Subjects

As part of a wider study presented in [32], sublingual video microscopy recordings were obtained for 8 HbAA subjects (27-42 yrs, 4 males/4 females) and 4 HbSS patients (15-50 yrs, 2 males/2 females) from the University Hospital of Lyon (Hospices Civils de Lyon, Lyon, France). Patients accepted into the trial were first analyzed to ensure that participation requirements were met, which included them being in a clinical steady state. This included no acute vaso-occlusive crises, acute chest syndrome, history of hospitalization in the previous 2 months, or history of blood transfusion within 3 months prior to the study. Furthermore, this study was performed as per the Declaration of Helsinki guidelines and was approved by the French Ethics Committee (CPP Est IV, Strasbourg, France; clinical trial number: NCT03243812). Written informed consent was collected from all volunteers in the study, with parent consent obtained for those under 18 years old. Blood was sampled in EDTA tubes for the measurement of blood rheological parameters. RBC deformability and blood viscosity measurements were performed in the hour following the sampling. The EDTA blood samples were oxygenated at room air temperature for 15 minutes right before ex vivo analysis.

RBC Deformability

RBC deformability was determined at 37 °C and at shear stress ranging from 0.3 to 30 Pa through ektacytometry (Laser-Assisted Optical Rotational Cell Analyzer, LORRCA MaxSis, RR Mechatronics, Hoorn, Netherlands) [33].
RBCs were suspended in polyvinylpyrrolidone (PVP, viscosity = 30 cP) and were placed between two concentric cylinders (Couette system) as shear stress was applied to generate a thin sheared layer. The RBCs were deformed, and the shape of the diffraction pattern obtained using laser beam diffraction then changed. The diffraction patterns were circular at low shear and became elliptical with increased shear stress. The elliptical shape could be obtained because the external viscosity (of the PVP) was greater than the internal viscosity of the cells. The elongation index (EI) could, thus, be calculated based on the geometry of the ellipse, using the formula EI = (length − width)/(length + width). A higher EI was reflective of a higher ability of the RBCs to deform. The elongation index-shear stress curves were subsequently parameterized with a Lineweaver-Burke relation. The fitted function thus obtained gave the personalized relation presented by Equation (1):

1/EI = 1/EImax + A × (1/SS), (1)

where EImax is the maximum theoretical deformation, SS is shear stress, and A is a constant parameter equal to the slope of the 1/EI vs. 1/SS plot. A sample curve fit for a volunteer with sickle cell anemia (HbSS) is presented in Figure 3a. In vivo, the local RBC deformability in the microcirculation network was then estimated by applying Equation (1) at the corresponding local shear stress.
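As an illustration of this parameterization, the sketch below fits the linearized relation by regressing 1/EI on 1/SS, recovering EImax from the intercept and A from the slope, and then predicts the local EI at an arbitrary shear stress. The shear-stress levels and EI values are hypothetical, not patient data, and the helper name ei_at is ours.

```python
# Lineweaver-Burke parameterization of an EI-SS curve:
# regress 1/EI on 1/SS, so the intercept gives 1/EImax and the slope gives A.
import numpy as np

ss = np.array([0.3, 0.53, 0.95, 1.69, 3.0, 5.33, 9.49, 16.87, 30.0])   # Pa
ei = np.array([0.05, 0.09, 0.14, 0.20, 0.27, 0.33, 0.38, 0.42, 0.45])  # toy EI

slope_A, intercept = np.polyfit(1.0 / ss, 1.0 / ei, 1)  # 1/EI = A/SS + 1/EImax
ei_max = 1.0 / intercept

def ei_at(stress_pa: float) -> float:
    """Equation (1): local EI predicted at a given shear stress (Pa)."""
    return 1.0 / (intercept + slope_A / stress_pa)

print(f"EImax = {ei_max:.3f}, A = {slope_A:.3f}, EI at 1 Pa = {ei_at(1.0):.3f}")
```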
Blood Viscosity

Blood viscosity was measured for native Hct using a cone/plate viscometer (Brookfield DVII+ with CPE40 spindle; Brookfield Engineering Labs, Natick, MA, USA) at 2.25, 11.5, 22.5, 45, 90, and 225 s−1 and at room air conditions. Using these data, a shear rate-viscosity plot was developed to derive the rheological law for each volunteer. A personalized power law was derived by plotting the viscosity at various shear rates measured using the viscometer. The power law of non-Newtonian fluids, such as blood [34], takes the form outlined in Equation (2) [35]:

µ = k × γ^(n−1), (2)

where µ is viscosity, k is the flow consistence index, γ is the shear rate, and n is the power law index. A sample curve fit for a volunteer with sickle cell anemia (HbSS) is presented in Figure 3b. The derived power law was then used to conduct local parameter estimates for the local in vivo shear rates.
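A minimal sketch of deriving such a personalized power law, assuming a simple log-log linear fit (the measured viscosities below are illustrative, not a volunteer's data):

```python
# Fit Equation (2), mu = k * gamma**(n - 1), by linear regression in
# log-log space: log(mu) = log(k) + (n - 1) * log(gamma).
import numpy as np

shear_rate = np.array([2.25, 11.5, 22.5, 45.0, 90.0, 225.0])  # s^-1
viscosity = np.array([9.0, 6.2, 5.4, 4.8, 4.3, 3.9])          # mPa.s, toy

slope, log_k = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
k, n = np.exp(log_k), slope + 1.0

def local_viscosity(gamma: float) -> float:
    """Equation (2): apparent viscosity at a local shear rate (s^-1)."""
    return k * gamma ** (n - 1.0)

print(f"k = {k:.2f}, n = {n:.3f}, mu at 500 s^-1 = {local_viscosity(500.0):.2f}")
```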
Live Imaging of Microcirculation: Intravital Microscopy and Postprocessing

Sublingual microcirculation was analyzed using sidestream dark field (SDF) imaging (Microscan, MicroVision, Amsterdam, The Netherlands). In brief, imaging of sublingual microcirculation was performed at the site of interest. SDF imaging utilizes a concentric arrangement of light-emitting diodes (LEDs) to enhance contrast and to reduce blurry imaging [36]. The resulting image is a gray and white video with dark moving speckles representing RBC flow. The light from the device is absorbed within the RBCs, causing the dark and high-contrast spheres [37]. The De Backer score, the diameter distribution, and the velocities were determined using the Automated Vascular Analysis software (AVA, Microscan, MicroVision, Amsterdam, The Netherlands). For each volunteer, videomicroscopy recordings of three to six sublingual sites were analyzed. The sites analyzed were determined based on the video quality obtained, as patients found it difficult to hold their tongue still. Furthermore, to avoid data inaccuracies, the operator maintained constant pressure on the Microscan handheld device, which requires dexterity and practice. The video of each site was approximately 10 s long, and the most stable sequence of frames was extracted for analyses.

The De Backer score is a measurement of the vessel density and is determined by counting the number of vessels crossing arbitrary horizontal and vertical lines [20], as shown in Figure 4. It is calculated by dividing the number of vessels crossing the line by the length of the line.

The microvessels were classified by size and were placed in 14 groups from 2-4 µm to 28-30 µm in diameter. The proportion of vessels in each group size was characterized by the proportion of cumulative length. The proportion of cumulative length is defined by the AVA software used as the sum of the lengths of the vessels in the given group size divided by the total length of all the vessels of all sizes included. For each individual, the data from the different recordings were cumulated. The local velocity in each vessel was determined using AVA software based on kymographs. Kymographs are plots representing spatial position as a function of time, used to quantify velocity along a determined path [38]. Kymographs were plotted for each detected vessel along the centerline [8]. The resulting velocity vectors were individually visually validated by the operator. For each subject, the velocities were averaged for each vessel group size.

Estimation of Local Blood Viscosity, Shear Rate, Shear Stress, and RBC Deformability

In each group size, the diameters and velocities obtained as described previously were used to further estimate the local shear rate γ using the Hagen-Poiseuille relation in circular channels [35], as presented in Equation (3):

γ = 8V/D, (3)

where V is the velocity and D is the vessel diameter. Then, the personalized law of viscosity from Equation (2) was used in combination with the local shear rate (Equation (3)) to determine the local shear stress (SS) [35]:

SS = µ × γ,

where µ is viscosity and γ the shear rate. Finally, the RBC deformability (i.e., EI) exhibited by each subject at the different vessel sizes was estimated using the personalized elongation index relationship in Equation (1) at the shear stress of the given vessel size. Figure 5 outlines the process undertaken to derive the local parameter estimations for blood rheology characterization of healthy and SCD individuals.
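Putting the chain together, the sketch below walks one hypothetical vessel through the estimation steps of Figure 5: shear rate from Equation (3), viscosity from Equation (2), shear stress as their product, and local EI from Equation (1). All numerical constants are illustrative stand-ins for a subject's personalized fits, not measured values.

```python
# Minimal end-to-end sketch of the local estimation chain.
def local_hemodynamics(velocity_um_s: float, diameter_um: float,
                       k: float, n: float, A: float, inv_ei_max: float):
    gamma = 8.0 * velocity_um_s / diameter_um   # Eq. (3): shear rate, s^-1
    mu = k * gamma ** (n - 1.0)                 # Eq. (2): viscosity, mPa.s
    ss = mu * gamma * 1e-3                      # SS = mu * gamma, converted to Pa
    ei = 1.0 / (inv_ei_max + A / ss)            # Eq. (1): local elongation index
    return gamma, mu, ss, ei

# A capillary-sized vessel: 500 um/s velocity, 8 um diameter (hypothetical).
gamma, mu, ss, ei = local_hemodynamics(500.0, 8.0, k=9.8, n=0.8,
                                       A=2.0, inv_ei_max=2.0)
print(f"shear rate {gamma:.0f} s^-1, viscosity {mu:.2f} mPa.s, "
      f"stress {ss:.2f} Pa, EI {ei:.2f}")
```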
Statistical Analyses

An unpaired Student's t test was used for comparisons between the two groups. A one-way analysis of variance (ANOVA) was used to test the changes in several parameters across all vessel diameters. To perform the statistical tests, Minitab data analysis software was used, and GraphPrism 9 was used for the graphical representations. A p value less than 0.05 was considered to be statistically significant.

Conclusions

This study was the first to integrate local in vivo microcirculatory hemodynamics and ex vivo blood rheological data to generate local microcirculatory blood rheological information in humans, and, more particularly, in SCD, a disease characterized by vascular dysfunction and severe rheological blood alterations. Although the sample sizes of the groups were limited, our study offered a novel approach to studying the pathophysiological mechanisms of SCD, and it would have to be tested in other situations (steady state vs. vaso-occlusive crisis; effects of sickle cell genotypes; effects of treatments, such as hydroxyurea or voxelotor treatments; effects of simple transfusion or exchanged transfusion; associations with some specific complications involving strong vascular components, such as cerebral vasculopathy, leg ulcers, etc.) to see if it can offer relevant biological and physiological markers of disease progression to help physicians in managing this disease.
Informed Consent Statement: Written informed consent was obtained from all the subjects involved in the study, and parent consent was obtained for those under 18 years old.
Parents' Experiences of Childhood Cancer During the COVID-19 Pandemic: An Australian Perspective

Abstract

Introduction: COVID-19 has had far-reaching impacts including changes in work, travel, social structures, education, and healthcare.

Objective: This study aimed to explore the experiences of parents of children receiving treatment for cancer during the COVID-19 pandemic.

Methods: Parents whose children were currently in treatment for childhood cancer, or had completed treatment in the previous 12 months, participated in semi-structured interviews, face-to-face or via teleconferencing. Thematic analysis was used to analyze the data.

Results: The sample consisted of 34 participants (17 fathers and 17 mothers) from all states across Australia. Median age was 37.5 years (range 29–51 years, SD = 6.3). Five main themes were identified: "Welcome to the Club"; "Remote Work and Study"; "Silver Linings"; "The Loneliest Experience" with three sub-themes "Immediate Family", "Friends", and "Overseas Family"; and "Lack of Support" with two sub-themes: "Community Support" and "Organized Support."

Conclusion: These findings revealed contrasting experiences of the impact of the COVID-19 pandemic. For parents whose children were neutropenic, the pandemic provided benefits in increased community understanding of infection control. Parents also reflected that the movement to remote work made it easier to earn an income. In contrast, some parents observed that restrictions on visitors and family intensified feelings of isolation. Parents also described how COVID-19 reduced access to support services. These findings contribute to an understanding of the multifaceted impacts of the COVID-19 pandemic on families of children with cancer.

Introduction

In January 2020, the World Health Organization (WHO) declared COVID-19 a pandemic (WHO, 2021). The impact of COVID-19 has been significant in terms of loss of life, with countries such as the USA, Brazil, India, Italy, and the UK experiencing a large number of cases and high mortality rates (WHO, 2021). Research has demonstrated that COVID-19, and the resultant loss of friends and family, health impacts, hospitalizations, restrictions, and economic and societal changes, have led to psychosocial effects including increasing rates of distress, depression, and anxiety (Torales et al., 2020). The impact of COVID-19 has, however, varied considerably internationally, with many countries having comparably low case rates and mortality per head of population (WHO, 2021). Australia's caseload has been relatively low, with 30,610 cases and 910 deaths between April 5, 2020 and June 30, 2021 (Commonwealth Department of Health, 2021). Governmental responses to the pandemic have also varied significantly. In the Australian context, strict measures were implemented, both at a state and nationwide level, to reduce the potential for outbreaks (Murphy & Karp, 2020). These measures included border closures, meaning that Australians, with few exceptions, could not leave Australia, overseas tourists were prohibited from entering, and interstate travel was limited (Murphy & Karp, 2020). Australian states went into periods of lockdowns which included homeschooling, remote online work, closure of entertainment venues, social distancing rules, and mask mandates. Returning overseas Australians were also required to pay for 2 weeks in government-managed hotel quarantine.
This system had some failures, and some states experienced longer lockdowns due to community transmission from returned overseas Australians. Changes occurred within the healthcare system as hospitals were restructured to ensure that they were prepared for any urgent COVID-19 medical needs. Hospitals also introduced measures to reduce risks to vulnerable patients, including restricting visitor/volunteer access.

Background

Every year in Australia 1,000 children aged 0-18 years will be diagnosed with cancer, and it remains the leading cause of death by disease in children (Australian Institute of Health and Welfare (AIHW), 2021). A diagnosis of childhood cancer is a difficult experience for both the child and their parents, and studies have found that parents exhibited moderate-severe post-traumatic stress symptoms (PTSS) and anxiety and depression after a childhood cancer diagnosis (Kazak et al., 2005; Sulkers et al., 2015; Vrijmoet-Wiersma et al., 2008). Children undergoing cancer treatment often require complex treatment protocols including lengthy hospitalizations, outpatient appointments, surgeries, and therapies. This can create practical challenges for parents, particularly mothers, who often have to reduce working hours or stop work altogether (Wakefield et al., 2014). Fathers have also been shown to experience stress in balancing the demands of work and caring for their child with cancer and their siblings (Brody & Simmons, 2007). Siblings are also impacted by the experience, and research has shown that they often meet the criteria for post-traumatic stress (PTS) or post-traumatic stress disorder (PTSD) and have a poorer quality of life (QoL) (Kaplan et al., 2013; Long et al., 2018). Children receiving cancer treatment often become neutropenic, making them vulnerable to infectious diseases. This fear of infection impacts all family members, and siblings have to miss social activities and school due to concerns about transmitting infections to the child with cancer (Long et al., 2018). Due to the unknown risk of COVID-19 to children with cancer, and their neutropenic status, pediatric oncology wards across Australia restricted ward access for visitors and families (Kotecha, 2020; Sullivan et al., 2020).

Given the novel nature of COVID-19, there is still a paucity of data on the psychosocial impacts of the pandemic on families of children diagnosed with cancer. A Dutch study by van Gorp et al. (2021) examined data from childhood oncology outpatient clinics and found no difference in quality of life before the COVID-19 pandemic compared to during the early pandemic. Interestingly, they also found that fewer caregivers were distressed during early COVID-19 compared with pre-COVID-19. In contrast, a study by Darlington et al. (2020) conducted in England during lockdown found that 85% of parents/caregivers of childhood cancer patients were worried about the virus and 69.6% felt that the hospital was not a safe place. The study concluded that COVID-19 had increased parents'/caregivers' anxiety and concerns about their child's care. This study found very few positives, except for families feeling closer and "feeling safe at home." Similarly, researchers from Italy have concluded that restrictions in hospital access increased parents' psychosocial distress (Zucchetti et al., 2020).

Rationale

This study is part of a larger study which uses the ecological systems theory as a lens to explore the experiences of those affected by childhood cancer (Bronfenbrenner, 2009).
Ecological systems theory is a useful heuristic for understanding the wide-reaching impact of childhood cancer, as it seeks to account for both the context and complexity of individual experiences (Bronfenbrenner, 2009). This theory posits that a person's well-being is dependent on interrelated and complex factors within the social system within which they sit, and necessitates an examination of the social supports, both formal and informal (Bronfenbrenner, 2009). The ecological systems theory also provides a conceptual framework for articulating the necessity to look at the context surrounding the family of a child diagnosed with cancer, and highlights the need for any analysis of childhood cancer to include an examination of the larger social context, including major social/health crises such as the COVID-19 pandemic (Darling, 2007).

These are unprecedented times, and there is little understanding of the impact of the various societal changes that have occurred due to COVID-19. Much of the research to date regarding COVID-19 and pediatric oncology has understandably occurred in countries with high rates of COVID-19, and there has been limited research that explores the experiences of those living in countries with low infection and mortality rates (Casanova et al., 2020). Although the restrictions that were introduced in countries with low rates were necessary to maintain low levels of COVID-19, it remains unknown how the societal changes and restrictions have impacted families of children diagnosed with cancer (Sullivan et al., 2020). It is important to understand how COVID-19 has impacted families whose children have been diagnosed with cancer so that appropriate supports can be put in place to minimize potential negative effects.

Objective

This study aimed to explore the experiences of Australian parents of children receiving treatment for cancer during the COVID-19 pandemic.

Methods

The research was approved by a University Human Research Ethics Committee in March 2021 (HRE2021-0119). All participants were provided with study information and completed written consent and demographic forms. This study employed a phenomenological approach using a qualitative design (Forrester, 2010; van Manen, 2017). The rationale for using a phenomenological approach is that it allows for an exploration of hidden meanings and enables an understanding of participants' lived experiences (Forrester, 2010). A qualitative methodology was chosen for this study as this approach recognizes the importance of individuals' points of view and allows people to describe and relate their feelings and responses. This provides a rich, in-depth understanding of people's experiences and enables the development of practices and policies that meet the needs of the community (Leavy, 2017). The study collected data via semi-structured face-to-face interviews. This method of gathering data is more flexible and allows for a natural way of interacting, assisting participants to clearly discuss issues as they understand them.

Participants were parents of children aged 17 years or under who were currently in treatment or had completed treatment for childhood cancer in the previous 12 months. Participants were recruited via notices placed between March and June 2020 on Facebook sites and distribution of flyers via multiple organizations and groups that support families of children diagnosed with cancer. Purposeful and snowball sampling were also used to ensure different perspectives were gained.
Participants were recruited from across Australia to ensure that broad perspectives were explored, including from states that experienced longer COVID-19 lockdowns. Semi-structured interviews, using an interview guide as a framework, were conducted by the first author, and interviews were digitally recorded. The interview guide was a flexible document, and questions were developed in response to early analysis; examples include "Could you please describe how COVID-19 impacted your experiences of childhood cancer treatment?" and "Did hospital restrictions impact your experience and, if so, how?" On average, interviews lasted 66.2 min (range 41-93 min, SD = 14). At the completion of the interview, participants were provided with information on support services and a $20 gift card. Sample size was not predetermined, but based on previous qualitative research it was envisaged that the sample size would range between 20 and 30 interviews (Mason, 2010). Saturation was achieved after approximately the 26th interview. Additional prescheduled interviews were conducted to ensure that no new themes were emerging. We also considered the depth and richness of the data when deciding to cease interviewing.

Transcription was completed either online via computer rev.com software or manually by the first author, as soon as possible after the interview. Interviews completed via online software were reviewed and necessary changes made. Participants were given pseudonyms, and individually identifiable factors were removed. This study incorporated multiple measures in order to ensure rigor, including the completion of a reflexive journal during data collection and analysis to record personal observations and responses to the data, enabling awareness of any personal reactions/bias (Berger, 2015).

Data Analysis

Data were thematically analyzed using Braun and Clarke's six-phase process (Braun & Clarke, 2006). Thematic analysis is a method of analyzing qualitative data which allows for the identification of common themes and patterns across data. Initially, all interviews were listened to by the first author to develop familiarization, and this also provided an opportunity to review transcripts for accuracy and make any necessary amendments. All transcripts were then read, and initial reflections were recorded. Initial analysis was used to shape ongoing data collection and the refining of questions (Pope et al., 2000). The transcripts were then reviewed to look for common patterns. After several readings of transcripts, the main coder developed initial codes and a codebook was developed. This process relied on a paper-based system of coding. The manual process is considered to provide a thorough and comprehensive understanding of the data (Pope et al., 2000). A selection of transcripts was then reviewed by the co-coder. Codes were discussed by all members of the research team, and any disagreements were discussed until consensus was reached. Transcripts were then reread to generate, name, and define themes (Braun & Clarke, 2006). At this stage, a thematic map was developed to graphically represent findings (Pope et al., 2000). All authors reviewed the thematic map and provided feedback on the final themes and subthemes.

Participants

A total of 34 parents, 17 fathers and 17 mothers, with a median age of 37.5 years (range 29-51 years, SD = 6.3 years), were interviewed. Twenty-six children were currently receiving treatment, whereas 8 children had completed treatment. The average time since completion of treatment was 56 days (range 2-168 days).
All children were still being monitored by hospital-based oncology, including scans and blood tests. The age of the child at diagnosis ranged from 4 hours to 15 years. Participants were recruited from all states in Australia, with 76% (N = 26) living in the metropolitan area and 24% (N = 8) living in outer metropolitan/rural regions. Demographic information including race and ethnicity was self-reported by participants. Participants were asked to indicate which ethnicity they identified with, and categories were based on previous studies conducted in Australia and informed by broad categories from the Australian Bureau of Statistics. Table I demonstrates participant demographics.

Findings

From the interviews, five themes were identified: "Welcome to the Club"; "Remote Work and Study"; "Silver Linings"; "The Loneliest Experience" with three sub-themes "Immediate Family", "Friends", and "Overseas Family"; and "Lack of Support" with two sub-themes: "Community Support" and "Organized Support." A thematic map graphically represents the findings (Figure 1).

Figure 1. Thematic map.

Welcome to the Club

This theme describes the sentiment amongst participants that the widespread changes that occurred because of COVID-19 mirrored their existing experiences of childhood cancer. Many participants expressed that the fear of infection that the general community was now experiencing was a normal way of life for parents of children with cancer. One father joked: "It's part of a running joke. . .welcome to our world" and another commented, "welcome to our club. This is how we have been living before COVID-19." Childhood cancer treatment and immune suppression had meant living in isolation, with one mother commenting: "it didn't impact us so much because we didn't actually see people that often anyway. . .because you always try to keep in a bit of a bubble because he's so unwell. . .and social outings were. . .non-existent." One dad commented "COVID lockdowns feel like what life is like in treatment."

Participants commented that the awareness of the need for infection control by the general community reduced the risk to their children, with one father stating "when you have an. . .immunosuppressed child. . .you have to be very cautious in terms of contacting other people. . .We were struggling because people wouldn't care. . .When COVID happened, they started taking care. . .social distancing, wiping everything, covering the mouths." Another father commented: "It's actually positively impacted us. We found that. . .she wasn't getting colds and flu. . .she would normally get." COVID-19 also meant that some parents were able to stay home during lockdowns, which reduced the risk of infections to their children and simplified conversations: "We actually. . .enjoyed it. . .he was so compromised with his immune system. It was easier rather than having someone turn up with a sniffle. . .and having to say 'Sorry, you cannot come in'. . .we did not have to have any difficult conversations."

Remote Work and Study

This theme captures participants' perceptions that the move towards remote work that occurred in response to COVID-19 was beneficial. Working remotely, in several cases, reduced the financial burden. One father observed: "One of the good things is that COVID allows me to work remotely. . .It's a big weight off my shoulders. . .allows for income to keep coming in. . .if it had happened in 2019 it would have been a different approach."
The introduction of online schooling also made caring for siblings less complicated: "Yeah, it was easier. . .Because of COVID-19, we were homeschooling the kids as well," with one parent describing this process as "less hassle with getting them to school." Being able to home-school siblings meant that many of the demands and complexities of juggling siblings' schooling and extracurricular activities were reduced. Silver Linings This theme explores participants' perception that COVID-19 had silver linings that benefited them and their children. One example was that many participants perceived that COVID-19 made it easier for their children to miss out on school and events: "She couldn't go to school, that was hard, but then the silver lining that COVID hit, everything got canceled. Either way, she didn't miss out on anything because everybody missed out." Participants also expressed that the widespread societal changes reduced their sense of being anomalous, with one father commenting: "everyone is wearing masks. . .We do not feel like we are the odd one out." When discussing the restrictions on visitors, some participants also noted that they increased the bond with healthcare professionals. One mother commented that it was "okay because we had formed a fantastic relationship with the staff." Several participants also highlighted what they described as "small benefits," such as ease of parking and reduced travel time due to roads being less congested because of remote work. The Loneliest Experience This theme investigates parents' feelings regarding COVID-19 restrictions and includes three sub-themes: immediate family, friends, and overseas family members. Immediate Family This sub-theme identifies participants' reflections that restrictions meant that partners and siblings could not visit during lockdowns. One father commented that only being allowed one parent on the ward was "one of the worst parts of the cancer experience," explaining ". . .I couldn't see my partner for three months . . . five minutes at the door of the hospital . . .a little kiss and good night, that was horrible." Participants also reflected that being away from siblings was difficult, with one mother describing missing the siblings: "My little one. . .I couldn't help him with his online learning. . . his homework. . . I had a Year 12 and a Year 11. They had to fend for themselves . . . The guilt . . . I can't begin to tell you." This loneliness was particularly pronounced for those who received a diagnosis under strict lockdown rules, as seen in one mother's reflection: The main impact of COVID. . . was firstly only one parent being able to accompany a child in ED at a time. This meant the very first moment we discovered (child's diagnosis) I was sitting alone and (husband) was in the ED waiting room. I then stayed with (child diagnosed with cancer) it meant we were left to process this news solo and not together. . .When I heard (child's diagnosis) the last thing I wanted was to sit with my own thoughts. Hearing that and having to phone (husband) in the waiting room to tell him. . . just made the situation even more stressful. Friends This sub-theme explores the impact of ward visitor restrictions. Some participants reflected that not having visitors made it a difficult experience: "I spent every day in the hospital. . . for the whole year I was on my own. . .It was the loneliest year."
Participants described how COVID-19 meant that normal social interactions became impossible, with one participant describing how they "really needed that extra bit of just sitting down, and having a coffee, and just sharing." This isolation was not confined to parents, and some participants reflected that the isolation also impacted their children, with one mother commenting: "I think the impact on (child diagnosed with cancer) was that he very much lived in an adult world for 12 months because there weren't any siblings or peers." Overseas Family This sub-theme outlines the impact of having borders closed. Several participants noted that one of the challenges of navigating childhood cancer during COVID-19 was that it prevented overseas family members from visiting Australia to support them: "we cannot get my family over here. It is a pain in the arse. . .My mum would love to come out and help out." For some, this lack of family support seemed to have made them feel they were alone in the experience: "We definitely felt like we're kind of in the trenches, just the three of us. That was because COVID, just because of travel restrictions." Lack of Support This theme discusses the impact of COVID-19 on the provision of support and has two sub-themes: "Community Support" and "Organized Support." Community Support This sub-theme outlines some participants' reflections that COVID-19 meant that support from friends and the local community was limited, with one participant commenting: "People come out and mowed lawns and did all sorts of stuff for us. And then, COVID-19 made that nearly impossible with lockdowns." Another commented: "I did find that it dried up as COVID went on. . .because nobody could see each other, and everybody was busy at home trying to work or school their own kids." Organized Support This sub-theme describes the lack of access to normal support services, which impacted the whole family: "Because it was last year, it was COVID, so we could not do any of the. . .support groups." COVID-19 restrictions also resulted in less access to ward services. When discussing support services, one parent commented: "a lot of the stuff that I think they do to keep the kids' spirits up. . .All that stopped completely." Several participants also commented that there were indirect impacts on support services, such as lack of access to community-based psychological health services due to increased demand, and lack of access to allied health services due to restrictions. Discussion Here, we report the experiences of Australian parents whose children were receiving cancer treatment during the COVID-19 pandemic. The study demonstrates that the impact of COVID-19 was multifaceted, with parents describing both positive and negative impacts. This contrasts with previous COVID-19 research within the pediatric oncology setting in countries with high rates of COVID-19, which reported minimal positive effects (Alshahrani et al., 2020; Darlington et al., 2020). Consistent with previous childhood cancer research, parents in this study described how fear of normal infections, such as colds, was a significant cause of anxiety whilst their children were undergoing treatment (Yildirim Sari et al., 2013; Young et al., 2002). Participants highlighted that in many ways COVID-19 mitigated this fear, and they reflected that COVID-19 had increased people's awareness of the need for infection control measures, thus reducing the risk to their children.
For some, particularly those in long-term treatment involving chemotherapy, the pandemic provided a simplified life, as the community modified their behavior, which meant that participants did not need to educate people about infection control. Although the need for infection control and the change to lifestyle for the general population has undeniably been a negative experience, for families whose children are receiving treatment for cancer it has made them feel less "different." Parents also discussed that they felt the community could now relate to their anxiety regarding infections. Our report highlights the toll that standard immune suppression and infection risk measures have on families of children diagnosed with cancer, including the loss of social life, fear of infections, limitations on community interactions, and the need for hygiene measures. The COVID-19 pandemic has made such measures normal for the general population and, in doing so, has shed light on childhood cancer families' experiences during treatment. Childhood cancer has been shown to have a detrimental impact on parents' income (Kelada et al., 2020). Often one parent must cease working to care for a child with cancer while the other parent, usually the father, continues to work to provide income, meaning they are unable to be with their child in hospital. Research has also shown that the balancing of work and family can create a sense of conflict for fathers (Brody & Simmons, 2007). In this study, parents observed that the change to remote online work, which came about due to COVID-19, meant that parents could work and earn an income while providing care for their children. Previous research has shown that mothers provide most of the care in hospital (Al-Gamal et al., 2019; Wilford et al., 2019). Although this seemed to be the case in this study, it also appears that COVID-19 has increased the ability of fathers to share the care of the child in hospital whilst working remotely. There are multiple stressors for families of children diagnosed with cancer, including the loss of normal life, challenges in balancing the needs of siblings, and maintaining normal parental relationships (Cox, 2018; Van Schoors et al., 2018). Previous reports have shown that siblings of children with cancer experience poor quality of life (QoL) (Long et al., 2018). Parents in this study reported that COVID-19 exacerbated these challenges and increased the burden on siblings, as it reduced normal family interactions, including the ability for siblings to visit the ward, thus making an already difficult situation more challenging. In contrast to the study in England by Darlington et al. (2020), which indicated that parents felt very fearful of COVID-19, in our study parents did not express significant concerns regarding the threat of COVID-19 to their children or themselves. These differing results possibly reflect that the Darlington study was conducted during a lockdown when England was experiencing high caseloads/mortality rates. There also appear to be differences in fear of COVID-19 between the adult and pediatric cancer settings. Australian research within the adult oncology setting found that 53% of cancer patients/carers reported significant psychological distress associated with fear of COVID-19 (Edge et al., 2021).
Fear of COVID-19 may have been lower among our sample because carers of children with cancer often live restricted lifestyles due to fear of infection and are thus isolated from the outside world, which may have provided parents with a sense of safety from COVID-19. One area of concern regarding COVID-19 and the restrictions on ward visitors relates to those families whose children were diagnosed during periods of reduced access. Clarke and Fletcher (2003) contended that the manner of disclosure of the diagnosis is profoundly important to parents. Several parents in this study described the experience of learning their child had cancer without the support of their partner. For these families, the COVID-19 restrictions increased distress. An important finding of this study relates to COVID-19 increasing parents' sense of isolation and loneliness. Previous research has found that parents of children with cancer experience psychological distress as a result of their child's treatment (Al-Gamal et al., 2019; Compas et al., 2015). Studies have found that parents exhibited post-traumatic stress symptoms (PTSS), anxiety, and depression after a childhood cancer diagnosis (Kazak et al., 2005; Sulkers et al., 2015; Vrijmoet-Wiersma et al., 2008). Research has highlighted that family support systems can help mitigate negative psychological experiences among parents of children with cancer (Fuemmeler et al., 2003). Our study shows that COVID-19 prevented many parents from receiving support from family and friends. This raises concerns regarding the lack of support provided to these families, which may have increased parents' and siblings' psychological distress, predisposing them to an increased risk of PTSS. Study Limitations and Future Research There are a number of limitations to this study. First, the majority of participants were married/coupled. This may reflect the additional time constraints of single parents and thus their reduced availability for interviews. In addition, the majority of participants were of Australian or European ancestry, reducing the generalizability of the study. Reports have revealed that people from culturally and linguistically diverse (CALD) communities have been disproportionately impacted by COVID-19 (Mamluck & Jones, 2020). It would therefore be valuable to further explore the effects of COVID-19 within Australian CALD communities. Another potential limitation of this study is that the low rates of infection and mortality in Australia compared with other parts of the world may mean that these findings are not relevant or applicable to countries that have experienced higher rates of infection. One area of interest for future research would be to assess the well-being of parents of children diagnosed with cancer during the COVID-19 pandemic to determine whether restrictions affected their long-term psychological well-being. Clinical Implications This study has revealed measures that can be introduced to assist families of children receiving treatment for cancer, both during the current pandemic and in the long term. This study suggests that additional measures need to be taken to support families of children with cancer when restrictions prevent both parents from being on the ward. This may include the integration of teleconferencing into clinical care so that information can be disseminated to both parents simultaneously, reducing the burden on one parent to relay distressing news.
It is also important that those who received a diagnosis during lockdown are identified and provided with assessment and psychological support to manage the additional stress associated with the timing of the diagnosis. In future, the understanding that COVID-19 restrictions are similar to the restrictions faced by families of immunocompromised children with cancer may facilitate communication between these families and the wider community about expectations and the provision of support. Public health and education measures regarding COVID-19 have increased Australians' understanding of the need for basic infection control and hygiene measures. In the long term, this understanding can be used to educate the population regarding immune suppression in children receiving cancer treatment. This understanding may be particularly useful in an educational setting, where schools can be encouraged to adopt measures similar to those used during the COVID-19 pandemic to assist families of children who are neutropenic. COVID-19 has necessitated that society become adept at the use of technology for work and social interaction. Our findings underscore that this has been a positive aspect of COVID-19 for many families of children with cancer, allowing them to continue work and education without disruption. The online infrastructure that COVID-19 has created could be used to assist families of children with cancer. Conclusion This is one of few Australia-wide studies to examine the impact of COVID-19 on families of children diagnosed with cancer. This study provides an understanding of the impact of COVID-19 restrictions and societal changes on families whose children were receiving cancer treatment. It shows that COVID-19 has had substantial impacts, both positive and negative. Interestingly, there did not appear to be any difference in experiences or concerns across the different states of Australia. This study underscores the significant and life-changing impact of a child's cancer diagnosis on families. Although the general population has found the changes and social isolation implemented because of COVID-19 difficult to adjust to, families in this study found that they were similar to the changes they experienced while their children were in treatment. This provides insight into the experiences of families whose children are receiving treatment for cancer, which will assist in improved understanding and, ultimately, enhanced delivery of supports to families. Our findings also provide vital information for understanding the negative consequences of COVID-19 restrictions and societal changes, which will enable appropriate supports to be provided to families of children who are being treated for cancer. It is hoped that the understandings developed can be used to ensure that, in the long term, families of children who have been diagnosed with cancer are provided with appropriate support to manage both the routine aspects of childhood cancer and the added burdens arising from the restrictions imposed due to COVID-19. Acknowledgments The authors thank the fathers and mothers who so kindly contributed their time and shared their experiences to help develop a better understanding of childhood cancer. Conflicts of interest: None declared.
CARDIAC DISORDER IN HEMODIALYSIS: BENEFITS OF CHINESE HERBS Background: A major cause of mortality in hemodialysis patients is cardiac disease. Many complementary and alternative therapies, including Chinese herbal medicine, have been useful in the treatment of cardiac disorders. Materials and Methods: A 46-year-old Asian woman with chronic renal failure was admitted to the clinic for hemodialysis. In the course of the fifth session of standard dialysis, she developed shock followed by ventricular tachycardia, which rapidly degenerated into cardiac arrest, from which she was resuscitated through cardio-pulmonary resuscitation. The following therapeutic strategies were applied: low discharge oxygen inhalation; stricter water and salt restriction; dialysate temperature set at 36.0 °C; rhEPO, 3000 u per week; low molecular weight iron dextran, 200 mg/day, intravenously for five days; and a Chinese herbal concoction administered orally. Results: The patient obtained efficient standard dialysis without any cardiac syndrome. Conclusion: Chinese herbs are useful in the management of cardiac disorders in hemodialysis. Chinese herbs may provide more benefits when combined with adjusted dialysis strategies. Introduction A major cause of mortality in hemodialysis patients is cardiac disease. According to reports of international registries (Van et al., 2001; Cheung et al., 2004), sudden cardiac arrest accounts for about 10%-30% of deaths from all causes. Many complementary and alternative therapies, including Chinese herbal medicine, have proved useful for the management of cardiac disorders (Amy and Steven, 2002; Fu et al., 2010). The case we report shows the positive role of Chinese herbal medicine for cardiac disorders during dialysis. For specific dialysis patients, an individual therapeutic program including Chinese herbal therapies may provide benefits. Case report The patient was a 46-year-old Asian woman with a history of chronic renal failure. Her medication included ferrous sulphate, 300 mg three times daily; folacin, 10 mg three times daily; rhEPO, 3000 u twice weekly; hydrochlorothiazide, 15 mg three times daily; nifedipine, 10 mg three times daily; and salt restriction. She was admitted to the hospital for consideration of hemodialysis, with IgA nephropathy of 4 years' duration as the primary disease. No history of heart disease, smoking, or drinking was elicited. On physical examination, the pulse was regular at 82 beats/min, blood pressure was 140/90 mmHg, and the jugular venous pressure was normal. Body mass index (BMI, weight/height²) was 20.7. There were no pulmonary abnormalities. The heart beats were normal; there was a soft ejection systolic murmur over the second right intercostal space. Her serum potassium and sodium were 5.3 mEq/L and 140 mEq/L, and the hemoglobin level was 7.7 g/dl. Creatinine and BUN levels were 875.3 μmol/L and 32.3 mmol/L. The serum PO4 level was 6.2 mg/dl, the serum calcium level was 9.4 mg/dl, and the Ca×PO4 product was 58.28 mg²/dl². The ECG showed sinus rhythm and left ventricular hypertrophy. On chest x-ray, there was moderate cardiomegaly. Echocardiographic results: IVS 9.6 mm, LVDd 58.5 mm, FS 24%, EF 49%. There were no other significant laboratory findings. A day after admission, an arteriovenous fistula for hemodialysis was created at the wrist in an end-to-side fashion. Fourteen days after the operation, three sessions of profiled hemodialysis were performed. A calibrated roller pump on the arterial tubing was set to provide 200 ml/min through the dialyzer.
On the dialysate side, a calibrated roller pump pumped dialysate from a reservoir at 500 ml/min, with the dialysate temperature at 37.0 °C. Euvolemic weight was determined clinically to be 69.5 kg. The net ultrafiltration was 2 L and was held constant to maintain a constant weight of 69.5 kg. All treatments were performed on a Fresenius 4008B dialysis machine (Fresenius Inc., Germany). After profiled dialysis, the patient crossed over to 4-h standard dialysis (two sessions/week). She had felt well in the days prior to the fifth standard dialysis session. In the course of the fifth standard dialysis session, she developed shock followed by ventricular tachycardia, which rapidly degenerated into cardiac arrest, from which she was resuscitated by cardio-pulmonary resuscitation. After this cardiac arrest, the speed of the arterial roller pump could not go beyond 150 ml/min without provoking an attack of cardiac syndrome: mild dyspnea, palpitation, and hyperhidrosis, and a few bilateral basal rales were heard. Clinically, the cardiac syndrome and pump speed were in very close and positive correlation. As a result of inadequate dialysis, her serum creatinine and potassium levels before the eighth dialysis session remained elevated at 806.0 μmol/L and 5.7 mEq/L. The hemoglobin level was 7.2 g/dl. Echocardiographic results: LVDd 59.6 mm, FS 22%, EF 45%. Kt/V was 1.0, indicating inadequate dialysis. For the purpose of efficient dialysis, the following therapeutic strategies were applied: low discharge oxygen inhalation just before and during dialysis; stricter water and salt restriction to decrease the ultrafiltration volume; dialysate temperature set at 36.0 °C; rhEPO, 3000 u per week; low molecular weight iron dextran, 200 mg/day, intravenously for five days; and oral administration of a Chinese herbal concoction. 24-hour Holter recording was adopted during dialysis. The arterial pump speed was increased by 10 ml/min every session without cardiac syndrome and with normal Holter recordings. After eight sessions of dialysis, the arterial pump speed had been increased to 200 ml/min. During this period, the net ultrafiltration volume was kept within 1 L at a constant weight. The serum creatinine and potassium levels before the sixteenth dialysis session were 326.5 μmol/L and 4.6 mEq/L. The hemoglobin level was 11.2 g/dl. Echocardiographic results: LVDd 54.6 mm, FS 27%, EF 60%. Kt/V was 1.62. The patient obtained efficient standard dialysis without cardiac syndrome. Discussion Cardiovascular risk in uremic patients is very high. Left ventricular hypertrophy (LVH) is extremely frequent in uremic and dialysis patients. LVH and/or LV dysfunction, with volume and pressure load as crucial determinants, is the strongest predictor of mortality in the dialysis population (Switalski et al., 2000; Ansari et al., 2001). Withdrawal of excess fluid by ultrafiltration, the main goal of dialysis therapy, can result in hemodynamic instability with symptomatic hypotension (Galetta et al., 2001). The incidence of symptomatic hypotension during dialysis is 0.3% per session (Daugirdas, 2001). The main cause is severe hypovolemia with an inadequate compensatory cardiovascular response (Zucchelli and Santoro, 1993). In this case, through water and salt restriction, the net ultrafiltration volume was kept within 1 L, compared with 2 L previously. There was evidence that myocardial contractility improved and the inflammatory response was reduced during hemodialysis. Moss et al. (1992) reported on the cardiac status of dialysis patients (not exclusively during dialysis): 34 percent (seventy-four patients) experienced cardio-pulmonary resuscitation; 8 percent (6 of 64) survived until discharge, and only two (3%) were alive 180 days later.
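The report does not state how the Kt/V values quoted above (1.0, later 1.62) were calculated. Purely as a point of reference, a widely used single-pool estimate is the second-generation Daugirdas formula; the inputs in the sketch below are illustrative assumptions, since the pre- and post-dialysis BUN values needed for the calculation are not given in the case report:

\[
\mathrm{spKt/V} = -\ln\left(R - 0.008\,t\right) + \left(4 - 3.5\,R\right)\frac{UF}{W}
\]

where \(R\) is the ratio of post- to pre-dialysis BUN, \(t\) is the session length in hours, \(UF\) is the ultrafiltration volume in litres, and \(W\) is the post-dialysis weight in kilograms. For illustration, assuming \(R = 0.45\) for a 4-h session with \(UF = 1\) L and \(W = 69.5\) kg (the reported euvolemic weight):

\[
-\ln(0.45 - 0.008 \times 4) + (4 - 3.5 \times 0.45) \times \frac{1}{69.5} \approx 0.87 + 0.03 \approx 0.9,
\]

a value in the range of the inadequate Kt/V reported before the dialysis prescription was adjusted.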
In another study, from Taiwan, 24 cases of cardiac arrest during hemodialysis were reported (Lai et al., 1999). All cases underwent cardio-pulmonary resuscitation; 29.2 percent (seven) of patients survived the resuscitation but died within 24 hours. These data mean that sudden cardiac death still poses a major challenge during the process of dialysis. Some preventative strategies, such as cardioverter-defibrillator device implantation, have not been fully studied in dialysis patients (Green et al., 2011). There is a need to direct more attention towards testing alternative interventions, together with conventional therapeutic strategies, that prevent cardiac arrest or reduce its lethality. In Traditional Chinese Medicine, deficiency of heart Qi or abnormal heart Qi metabolism remains a major cause of the initiation and development of cardiac disorders. With Heart Qi Deficiency, the circulatory system suffers greatly. It is said that blood nourishes the Qi, and Qi leads the blood. In short, if the Heart Qi is deficient, the blood is not properly directed. The Chinese herbs used in this case can strengthen the heart Qi and regulate heart Qi disorders. We observed that adjustment of the dialysis strategy, together with Chinese herb administration, proved to be beneficial. Limitations of this study include, among others, the following: first, some laboratory data, including potassium, calcium, and bicarbonate concentrations, were the "most recent available" and cannot reflect the serum values immediately pre-shock; misclassification of hyperkalemia or hypokalemia, for example, would likely bias the association between the pre-shock serum potassium level (as determined here) and arrest. Second, this study is only a case report, and the conclusion needs to be confirmed in a large number of patients. Notably, no similar cases have been reported in the current literature. The case supports the notion that these Chinese herbs are useful in the management of cardiac disorders in hemodialysis. While adjusting the dialysis strategies, Chinese herbs may provide more benefits.
Exploring Discomfort and Care in the Experience of a National Academic Staff Development Programme This paper explores the use of the pedagogy of discomfort and care in the Teaching Advancement at University (TAU) Fellowships programme, an innovative staff development programme in South African higher education. Our analysis of participant experience of the programme through the lenses of the pedagogy of discomfort and care draws on reflective commentaries submitted by the participants. We found that the initial experience of discomfort was widespread despite the relative seniority of participants. Elements of care built into the programme provided important support, activated agency, and formed the basis for a network of caring relationships among participants. Participants acknowledged these relations as key to their personal and academic growth during the programme and were seeking to extend these beyond the end of the programme. Introduction This paper explores the use of the pedagogy of discomfort and care in the Teaching Advancement at University Fellowships programme, hereafter referred to as the TAU programme, an innovative staff development programme in South African higher education. The significance of an analysis through the lenses of discomfort and care for staff development in South African higher education institutions was first demonstrated in the Community, Self and Identity project (Leibowitz et al., 2012). Members of the project subsequently used the political ethics of care to evaluate professional development for teaching and learning at a South African university (Bozalek et al., 2013). The two lead authors in the above studies were members of the TAU development team, and, as such, the learning from these two earlier projects formed points of reflection around TAU, both during the conception of the programme and during subsequent evaluation and analysis. Although the TAU programme was not explicitly conceptualised in terms of discomfort and care, these concepts provided valuable lenses through which to understand the decisions taken by the organisers, the interactions during TAU, and the development achieved by participants. The TAU programme arose out of the experience of organising and supporting the educational development community in higher education in South Africa over several decades through the Higher Education Learning and Teaching Association of Southern Africa (HELTASA) and its predecessors. As part of this process, in 2009 National Teaching Excellence awards were established as a joint venture between HELTASA and the Council on Higher Education. However, after several years, the awards team acknowledged that, given the disparities between institutions in South Africa, the awards were being made almost exclusively to applicants from well-resourced institutions, and hence were not addressing the inequities in the South African higher education system. A different approach was clearly required, and the TAU programme emerged out of these considerations.
The TAU programme The TAU programme was conceptualised and then piloted in 2015-2016 to support the development of a cadre of academics across the South African public higher education sector as scholars, leaders, mentors, and change agents in teaching and learning in their institutions or disciplinary fields. To achieve this goal, TAU works with experienced academics who have been acknowledged for their teaching excellence, and is designed to assist them, within a supportive and collegial environment, to extend their knowledge of and ability to play an active role in educational development. The programme took place against the backdrop of a South African higher education sector that is struggling to develop a unified integrated system (Webbstock, 2016). A three-tier hierarchy of traditional (i.e. research-intensive) universities, comprehensive universities, and universities of technology dominates the landscape, along with the valuing of research over teaching. The sector is being further challenged by recent student protests demanding free higher education and a decolonisation of the curriculum (Heleta, 2016). The TAU programme aligns itself with efforts to increase the levels of awareness about the inequities in the sector, and to promote collaboration across institutions - in short, to contribute to social justice in South African higher education. TAU was designed as a 13-month programme (from January 2015 to January 2016) of three five-day residential Contact Sessions (Units) held at six-monthly intervals, with distance engagement continuing in between. The programme was structured around three key themes: being and becoming a change agent in higher education; the Scholarship of Teaching and Learning (SOTL); and expanding understandings of teaching excellence. Each participant was required to design, develop, and implement an individual project within an enquiry group. They were required to submit a report on this project, along with a reflective piece on the participant's experience of TAU and a (joint) enquiry group poster, at the end of the programme. Of the 52 participants selected from 22 South African public universities, 50 completed all programme requirements for recognition as TAU Fellows. Theoretical Framework The notion of a pedagogy of discomfort (POD) for use in social justice education was initially conceptualised by Megan Boler (1999, 2003). Boler's initial question, 'What do we - educators and students - stand to gain by engaging in the discomforting process of questioning cherished beliefs and assumptions?'
(Boler, 1999: 176) was developed further in work with Zembylas (Boler & Zembylas, 2003), and subsequently taken up more broadly (Bozalek et al., 2014; Engelmann, 2009; Leibowitz et al., 2012; Tronto, 2010; Zembylas, 2015; Zembylas, 2018; Zembylas et al., 2014). The POD has emerged as an approach in social justice education and was recently defined as 'a critical pedagogical approach that aims to disrupt hegemonic taken-for-granted assumptions about social structures and relations. This approach encourages individuals to engage in critical thinking that explores the relations of power inherent in habits, practices and knowledge' (Leibowitz et al., 2012: 37). In implementing this pedagogy, Boler (1999: 176) notes as an important component 'the emotions that arise in the process'. The POD aims to encourage 'critical inquiry at a cognitive as well as at an emotional level' and requires 'positive emotional labour' (Leibowitz et al., 2012: 38-39; see also Boler & Zembylas, 2003: 108). Instances of the POD typically bring together members of both dominant and marginalised groups with their differing hegemonic ideas, both of whom are likely to experience discomfort (Boler & Zembylas, 2003: 115; Leibowitz et al., 2012: 3). Importantly, the POD has been defined as a relational practice, which can allow difference to be explored as 'creative energy' (Boler & Zembylas, 2003: 128) and enable participants to 'gain a new sense of interconnection with others' (Boler & Zembylas, 2003: 127; Bozalek et al., 2014: 4). In early use of the POD, little differentiation was made between discomfort on the one hand, and pain and suffering on the other. However, Zembylas (2015: 11 - Endnote) has recently noted that '[p]ain and suffering are not the same feelings as discomfort; they are much stronger and they are linked to injury or harm', and has further emphasised the need for a distinction by highlighting the link between 'discomfort' and the term 'comfort zones': he suggests that discomfort be understood as 'the feeling of uneasiness that is disturbing someone's comfort' (Zembylas, 2015: 11 - Endnote) by 'challenging cherished beliefs and assumptions about the world' (Zembylas, 2017: 9). There is now widespread agreement as to the 'proactive and transformative potential' of discomfort, as defined in these terms (Zembylas, 2017: 7), and hence as to the value of discomfort for social justice education. At the same time, the care of educators for their students is the 'very bedrock of all successful education' (Noddings, 1992: 27), and this raises concerns about 'how far educators can "push" their students and use discomfort as a caring pedagogical practice ... how far can one go with pedagogies of discomfort until we stop calling them caring teaching practices' (Zembylas, 2017: 9-10). Increasingly, researchers are drawing attention to the 'tensions and ambivalences' (ibid.: 19) in the act of caring teaching, which cannot avoid being 'entangled in some form of ethical violence' (ibid.: 16). In his recent work, Zembylas (2017: 14) has sought to reconceptualise caring teaching in terms of minimising ethical violence and expanding relationalities with vulnerable others. The active and productive empathy with others that can thus be promoted contributes to enabling change and transformation.
Care for the well-being of those experiencing the pedagogy of discomfort can minimise ethical violence, and this is all the more essential in that deep-seated emotions are likely to be involved (Boler, 1999: 176). In theorising care, therefore, we draw on Joan Tronto's work, which has moved discussion beyond families and dyadic relationships and rather 'portray(s) care as holistic and as a broad, public and political activity' (Bozalek et al., 2014: 3). Fisher and Tronto (1990: 40) have defined care as follows: '[o]n the most general level, we suggest that caring be viewed as a species activity that includes everything that we do to maintain, continue, and repair our "world" so that we can live in it as well as possible'. Furthermore, as Bozalek et al. (2014: 4) note, '[c]are, as a theoretical framework, foregrounds relational and connection-based aspects of human beings rather than seeing humans as atomised individuals'. Care is undoubtedly a 'relational practice' (Tronto, 2010: 161). The 'safe space' metaphor has often underpinned discussion around 'caring teaching' (Boostrom, 1998; Davis & Steyn, 2012; Roestone Collective, 2014), especially as regards engagement with the pedagogy of discomfort; however, this metaphor is increasingly being critiqued. Boostrom (1998: 407) has noted the danger of 'safe spaces' as involving the 'mere expression of diverse individuality', and the power of the 'safe space' metaphor to 'censor critical thinking' (ibid.: 406), rather than accommodating and promoting 'intellectual challenge and personal growth' (ibid.: 407). Davis and Steyn (2012: 35) speak of the 'false dichotomy that sees challenging students and supporting them as mutually exclusive'. Safety, they argue, cannot be construed as the 'absence of conflict', and is too often 'mistaken for comfort' (ibid.: 33). What is required is a caring environment which is able to respect emotions and comments, while challenging problematic ones (ibid.: 35). As noted above, care is a relational practice, and hence creating such safe spaces will involve the 'relational work of cultivating them', rather than 'static and acontextual notions of "safe" or "unsafe"' (Roestone Collective, 2014: 1346). TAU was not set up as an example of the POD and focused on social justice implicitly rather than explicitly. At the same time, the TAU programme was developed by a team with considerable experience in both the POD and social justice education. It was in hindsight that the roles of both discomfort and care in TAU became explicit and were considered to provide a useful lens to understand the responses of participants and the substantial developmental impetus of the programme. In a context such as TAU, which targeted the professional and personal development of senior academics, discomfort is likely to be experienced when hegemonic 'ways of thinking', including ways of behaving, emotional responses, beliefs and assumptions, are challenged. Many of these 'ways of thinking' will relate to participants' academic disciplines.
Despite ongoing change in the nature of higher education, 'academic staff continue to attest to the power of the discipline as a unifying force in shaping academic identity' (Krause, 2012: 188). Other 'ways of thinking' will relate to participants' institutional 'home', and specifically, given the South African hierarchy of traditional (i.e. research-intensive) universities, comprehensive universities and universities of technology, to the institutional type. One participant commented that when she moved from one institutional type to another: 'It felt like I had landed on the moon'. At the same time, discomfort arising from individual personality traits should not be overlooked. It will be necessary to examine the ways in which challenges to all these 'comfort zones' of academics were occasioned during TAU, and, importantly, to acknowledge that discomfort and responses to discomfort emerged not only at a cognitive but also at an emotional level - the latter a factor which may in itself occasion additional discomfort, given that emotions generally do not form part of academic discourse. These many-faceted experiences of discomfort can be envisaged as moving participants towards critical reflection, expanding the borders of their comfort zones, and ideally allowing them to achieve a 'new sense of interconnection with others' (Boler & Zembylas, 2003: 127). However, to avoid ethical violence, discomfort should be balanced by caring. Caring will be broadly understood as 'including everything that we do to maintain, continue, and repair our "world" so that we can live in it as well as possible' (Fisher & Tronto, 1990: 40). On the one hand, it may be useful to understand care in terms of the strategies adopted by the conveners in developing and running the programme - this would involve a top-down approach to the analysis. On the other, understanding care as a relational practice appears to be particularly significant for an analysis of TAU, and this suggests that close attention needs to be paid to the relationships that were established during TAU, and to the impact of these relationships in alleviating any discomfort experienced. Caring relationships are likely to have involved both cognitive and emotional responses to the programme. Finally, it will be important to consider whether the care that was invested in, and emerged from, TAU was simply sanitising, seeking to create 'safe spaces', or whether this also allowed space for critical thought and development. In this research, therefore, we endeavour to answer the following: What elements of discomfort and care surfaced in the final written pieces submitted by participants reflecting on their experience of the TAU programme, and what was their significance in the self-reported growth of these participants? To what extent did the combination of discomfort and care in TAU succeed in creating a context within which recipients did indeed experience significant academic and personal growth?
Research Process Our primary source of data for the research was the four-page reflective piece submitted by each of the 50 participants at the end of the programme. We also drew on the final evaluation questionnaires submitted by participants, as well as on programme documentation more broadly. We adopted a grounded theory approach (Strauss & Corbin, 1994) to analysing the data within a framework of discomfort and caring. We coded the 50 reflective pieces independently using systematic thematic analysis and then met to establish agreement. After several cycles of coding in this way, we mapped out the elements of discomfort and those of caring which became evident, including caring which was provided by both the programme organisers and the participants themselves. The authors themselves were positioned as participant observers and drew on their experiences as members of the management committee and as enquiry group advisors on the programme as additional sources of data. Analysis of the data The Initial Experience of Discomfort The extent to which TAU participants experienced discomfort became abundantly clear from the reflective reports, where many, at the start, expressed feelings of initial discomfort in strong terms: they wrote of feeling disorientated, intimidated, overwhelmed, terrified, unnerved, daunted, apprehensive, uncomfortable, isolated, vulnerable, incompetent and inferior. Closer analysis revealed six main sources of this discomfort:
• an initial lack of clarity as to what TAU was actually about, and what would be expected of participants;
• lack of familiarity with educational discourse;
• being required to work in groups;
• coping with diversity in experience and authority of participants and institutions;
• the pressure and work 'overload' in Contact Session One; and
• the proposed use of digital communication and collaboration tools.
An initial lack of clarity about the nature and expectations of the TAU programme was perhaps to be expected, given that it sought to introduce an innovative approach to staff development, and that this was the first time the programme had run. Participants had been provided in advance with information about the ethos and goals of the programme and key expected outcomes, but at the outset levels of uncertainty and apprehension were clearly high. This was in part because of a widespread (though mistaken) initial expectation that the programme would focus on teaching and learning expertise: I thought that, at last, we will be given an opportunity to focus on methods and strategies of teaching and learning at tertiary level, as opposed to the focus on research and research outputs. Unit One took me by surprise from the time we were sent the programme for the unit. I had expected a greater emphasis on teaching and sharing our experiences and contributions that had led to us being recognised by our teaching excellence awards. Many participants were clearly disconcerted to find that TAU was much more strongly focused on the Scholarship of Teaching and Learning and on sectoral issues in Higher Education, such as low student success rates, the differential between success rates of white and black students, and the inequities between institutions. Much discomfort also focused on the individual project, and on the group poster, and what exactly these would involve: Initially I was very confused (and hence, being me, very anxious) about what was required in terms of the group project especially, but also about the scope of the individual project.
A second area of discomfort for some was the unfamiliar educational discourse introduced during Contact Session One. Facilitators sought to introduce participants to the Scholarship of Teaching and Learning (SOTL), in terms of an educational research discourse with which few were familiar, and the theoretical framework chosen as an example turned out to be out of the reach of many. Hence the SOTL theme undoubtedly took many participants out of their disciplinary 'comfort zone', and the response was at times couched in emotional terms: I was terrified at the idea that I will have to do social science research in the project. I could not shake off the feeling that I had been thrown into the ocean of unfathomable depth. I felt as if I knew nothing about Teaching and Learning (…) I was so overwhelmed by all the information given that I did not know what to take on board at that stage and what to ignore. Coming in I also experienced a sense of arriving in another country. A further element of discomfort for many participants was the considerable extent to which group work was incorporated into the programme. Particularly during the opening stages of the first contact session, participants were placed in regularly changing groups, specifically to promote networking and collaboration across disciplines, institutions and levels of seniority. The groups established ranged from informal and short-term groups to the long-term enquiry groups (see below). In the reflective reports, several participants spoke of themselves as 'introverts' or as 'loners' who had been seriously challenged by these interactions: I am inherently introverted, thus ... where I am expected to share my own personal reflections was intimidating. I believe in group work and having conversations with people, but because of my own introverted personality I sometimes struggle to interact in a group task (…) I sometimes found it challenging. Initially I thought the arrangement of the TAU project contact sessions did not make sense and I had a sense of discomfort, probably due to that I particularly do not like group work. The only thing that really hindered me from learning more during the contact weeks was my own shyness and introvertedness. I am at times a rather impatient person, thus not always diplomatic enough to wait out a result or contribution from one of the members of the ad-hoc groups (…) The group work outside our own fixed enquiry groups was trying, to say the least (I suppose I am just like a student in the respect that I do not like enforced group work); especially in Unit One the constantly changing group composition (no doubt arranged in such a way to facilitate meeting everyone) was exhausting and unsatisfactory. In addition, coping with diversity in experience and authority of participants and institutions occasioned considerable discomfort for some participants. At times substantial differentials in levels of institutional prestige or professional seniority were involved within group work situations. This challenge emerged partly from the unexpected diversity in the group of participants, who ranged from well-published professors with many years of experience to relatively recent appointees still busy with their doctorates: Almost 90% of the TAU participants were either Professors or Doctors. And me being young and "black", I felt that I didn't have much to offer.
When I first looked at the list of attendees of the TAU first session, I saw all the names started with either Prof or Dr and mine only started with Mr. Initially this made me feel like I am going to be playing in the field where I might be the smallest player. I am a young academic with only 10 years' experience needing to discuss and debate with seasoned academics. At the other end of the scale, one participant commented on the difficulty of 'being a senior academic in a group with people who were mainly junior', and a second claimed that most of the participants 'were junior staff members in their institutions'. Many participants experienced considerable discomfort through work 'overload' in Contact Session One. Developing the programme had presented considerable challenges to the Programme Committee. Given that participants, while acknowledged as excellent teachers, were primarily disciplinary specialists with research expertise in their discipline, it was agreed that we would need to introduce them to the discipline of the Scholarship of Teaching and Learning, and to broader perspectives on higher education in South Africa. Unit One attempted to do that in a programme which ran from morning to around 9pm on most evenings. Feedback through the regular evaluations revealed how immensely challenging, and exhausting, some participants experienced this to be: 'The sessions are too long and one can hardly keep awake after supper'; 'very exhausting'; 'I am not firing on all cylinders, not able to comment or make valuable input'; 'I think there are some very tired participants … this has pushed a few people way over the limit'. Finally, some participants clearly experienced discomfort when expected to use digital communication and collaboration tools to maintain contact with the enquiry group in between the Contact Sessions. Our initial decision had been to encourage the use of Google Docs, with which many participants were not familiar. The workshop on Google Docs in Unit One was not successful, and attempts to use Google Docs were aggravated by the unsatisfactory levels of Wi-Fi reception available at the first workshop venue, despite our attempts to ensure adequate provision. This meant that many participants who were not successful in using Google Docs might well have put this down to their own inadequacy, when issues of connectivity might have been to blame. The extent and intensity of the discomfort initially experienced, and revealed primarily through the analysis of the reflective reports, came as something of a surprise to the programme developers, given that the participants were, in most cases, senior colleagues with considerable experience and expertise. Overcoming Discomfort In contrast to the discomfort experienced at the start of the programme, evaluations at the end of the TAU programme were strongly positive, with participants having moved far from discomfort towards fulfilment and enthusiasm for the programme and the new understandings and roles that had emerged. Indeed, many participants thematised their reflective report in terms of a journey, in which they moved from initial uncertainty and apprehension to enthusiasm and acknowledgement of their growth as academics and individuals. Not only was TAU enjoyable - 'I have had an amazing journey with TAU'; 'Wow! What a great experience!'
- TAU was also experienced in terms of significant personal and academic growth. To quote three participants: TAU was 'life-changing'; 'TAU transformed my mind'; and 'I have a sense of having stepped out of a building into the South African sunshine.' Discomfort had clearly been mediated by a variety of elements of care. The following analysis of this move beyond discomfort will begin with a discussion of the 'caring' strategies adopted by the programme conveners and include the responses of participants to these strategies. We then turn to considering care as a relational practice and analyse the significant caring relationships that developed within TAU. It was these caring relationships (which largely emerged within the strategies of care offered by the organisers) which, as many participants reported, were crucial to their personal and professional wellbeing and development during TAU. These relationships appeared both to offer safety and to provoke critical reflection. In many cases, what was at the outset experienced as a source of discomfort evolved, during the year, into a source of care. Providing a Caring Environment An initial element of care built into the programme was provided by the environmental elements of the three contact sessions, which ran at three different hotels or conference centres. Venues with a 'retreat-like' quality (and deliberately not within a city context) were chosen, where participants would feel 'special' and cared for, and which would support the envisaged engagement required. The venues sought to remove participants from their home institution, in order to free up time and space for focus and reflection, which was also acknowledged and appreciated: The break up into units with a week spent away from work was tough - there's always work to catch up with when you get back, but actually there's no other way to do it. It granted me the leisure of focusing just on TAU, thinking and being at one with the scholarship of teaching and learning and being able to focus. Furthermore, TAU was extended over a period of 13 months, on the assumption that learning and growth require time, which was also well received: I enjoyed the process of TAU being run over a year because the amount of learning that happened not only to me, but to others too, needed that incubation time. TAU is a process that could not be shortened or hurried. Developing a Responsive Programme Using a developmental evaluation approach, regular feedback was collected from participants, especially during Contact Session One, and where possible responded to immediately. The TAU Programme Committee monitored the feedback and adjusted the programme appropriately throughout the 13 months, seeking to respond to the diversity among the participants. For instance, in response to the feedback as to 'programme overload' in Contact Session One, we redesigned Contact Session Two to engage more explicitly with the care element, and participants appreciated having more space for individual and group work, for access to advisors, and simply time out for what were termed 'creative activities' - yoga, singing, artwork of various types, beach soccer, etc. These aspects of broader well-being were then carried through into Contact Session Three.
Support for the Individual Projects In spite of the initial high levels of discomfort, many participants reported finding the individual TAU projects of great value to their development. In the final evaluation, the individual project was mentioned 49 times, by 28 participants, as being of 'great value'. Several care elements contributed to overcoming the initial discomfort: elective sessions focusing on different aspects of education research methodologies; the availability of the advisors to give supportive feedback and advice; and feedback and support given by members of each enquiry group, and by participants more broadly. Design Elements of Care Three design elements were subsequently identified by participants as having played a central role in supporting participants and enabling a caring environment. Firstly, we explicitly attempted to undermine established academic hierarchies by avoiding the use of professional titles, and by valuing every participant's ability to contribute to discussions. Positive responses to this design element emerged strongly in the participants' reflective pieces. Participants indicated that, during TAU, they had felt able to move beyond differences of discipline, institution, and hierarchy and been free to engage, from their individual position and perspective, in a commonality of purpose. They indicated that this had been achieved by the non-use of titles, by the respect with which all were treated, and through the regular engagements in discussion groups diversified in terms of institution, race and seniority. Importantly, participants felt that they had been able to move beyond seeing themselves as a 'lone fish swimming against the stream' and become part of a community which had broadened beyond a single discipline: I felt I was really part of the team (…) (where) everyone's opinion is valued. I liked the idea of the entire TAU group of participants operating at the same level without considering office titles or ranks. This embraced the idea that all of us were there to learn from one another. It was also humbling that the project members treated each other equally with respect and those who are perhaps of high standing within their institutions did not use such status to impose their position. The sense of community that surprisingly quickly emerged from within the TAU group was in my view also facilitated by the very structure of TAU, which brought together a group of lecturers in a space outside of the normal academic environment. The space was not defined as yet and there were no real power structures, hierarchies or relationships that framed the space. In my view this allowed us to really escape the very limiting context within which we normally operate in our institutional structural settings and within our disciplinary silos. In traditional academic environments, there tends to be very little interaction across disciplines. Participants repeatedly noted that close interaction with colleagues from other disciplines had resulted in new exciting relationships and a broad sense of community, which had in turn provoked new insights: My mind-set was completely transformed when I had to interact with colleagues from law, medicine, performing arts, marketing, accounting, etc.
Despite a wide experience of HE training in South Africa, I have not been part of such a multidisciplinary… process and the merits of this approach to capacity development are greatly underestimated. Many endeavours in this country bring like-minded individuals together, and I now realise how significantly this limits the learning experience for all. The TAU programme is unique in this sense, and ... it is the greatest strength of the programme.

The single biggest facilitator of my learning experience in the TAU programme has been the diversity of the group in terms of institutions, contexts, disciplines, but at the same time the universality of the objective of teaching advancement. I was constantly amazed at how easily I could relate to challenges posed by colleagues from ostensibly very different contexts. This gave me a real sense of a community of practice to which I felt that I could actively contribute and from which I could receive input that greatly broadened my own perspectives.

Finally, support was provided for the growth of relationships across institutions by problematising the institutional hierarchy dominant in South African higher education. Information about the broad South African higher education sector was received with considerable interest and new understanding, given that most participants were at best only informed about their own institution. This impacted positively on participants from institutions at all levels of the current hierarchy: in some cases, it awakened the realisation of privilege; in others it created an awareness that institutions lower in the hierarchy might well have greater experience in dealing with less well-prepared students:

I realised that I needed to appreciate the work that is happening at our historically black universities, because these colleagues had a greater challenge than I did in taking underprepared students forward and therefore I could learn from them. (This reflection came from a member of an elite institution.)

One of my greatest fears was that I came from a university that was previously disadvantaged and I was a part of a team that consisted of individuals that come from elite universities (…). However, to my surprise, after my interaction with the team, I realised that the issues of concern for my university were similar if not common across all universities and I became conscious and aware of the fact that we all aim to accomplish a common purpose irrespective of which university we come from.

It was realisations such as this which made possible the emergence of relationships between participants from very diverse universities and reduced some of the initial discomfort being experienced. Several participants reflected on the substantial impact of a TED talk offered by Chimamanda Ngozi Adichie, 'The Danger of a Single Story', which was included in a discussion of change and transformation and which problematised the use of stereotypes. As one participant commented: 'I believe it was also useful for the interactions among the participants at TAU as we all too come from different backgrounds and have had different experiences'.
A second design element which played a key role in situating TAU as a site of care was the enquiry groups. Enquiry groups were constituted during Contact Session One, taking into account participants' interests and intended individual projects but also levels of expertise and diversity, and each was allocated an advisor. Enquiry groups were intended as key sites for intense discussion that would require a level of intimacy over the three sessions, as well as sites of support for individual projects and for learning and sharing of experience. Each enquiry group was required to produce a group poster that reflected the collective outcome of the individual projects. This challenged participants to make their enquiry group function well. In the reflective pieces, the enquiry groups were largely experienced as supportive. Here, too, what was initially experienced as discomforting evolved into a source of care. In the reflective reports, enquiry groups were mentioned 53 times, by 31 participants, as 'playing a key role'. The enquiry group was credited with 'opening up a new perspective for me'; they appear to have functioned as 'safe spaces' within which critical thinking could take place.

The third design element was the appointment of advisors to guide the enquiry groups. These were experienced academics, in most cases from an academic development background and with experience in staff development. At the outset of the programme, we were not clear just how important the role of advisor would become, but the participants' feedback confirmed that in almost all cases, very valuable inputs had been given by the advisors, both as regards the individual projects and the group posters, and on a more personal level, and they had contributed substantially to the success of the enquiry groups. In the reflective reports, advisors, at times referred to as "mentors", were mentioned 34 times, by 21 participants, as 'playing a key role':

XXX, as a mentor, has been inspiring and encouraging throughout the whole process.

"Hats off" to our mentor, YYY. He is a thoughtful leader who inspired us to achieve more from the TAU programme. He was able to get six diverse people to work together on a project through deliberate and disruptive guidance.

The group was intimate, and the group advisor was very active and excellent with focusing the ideas for our small projects.

The TAU advisor system was the best experience of the programme.

The Agency of the Participants in Generating Care

This, then, was the context created by the programme developers, in terms of the various strategies adopted, which - it was envisaged - would allow new behaviours and ways of thinking to emerge. However, care in TAU was not limited to these elements. They in turn provided the context for caring relationships to emerge - relationships which, according to the participants, were crucial to their growth during the programme. The point made in the literature, that care is a relational process (Tronto, 2010: 161), was aptly confirmed by the TAU programme.

A key goal of TAU was to promote collaboration - collaboration across disciplines, seniority and institutions. Successful collaboration, however, presupposes a relationship, and a relationship which involves trust:

As a small team of four individuals, we became very close, and we supported each other despite some challenges that we experienced as individuals (…). We began to trust each other.
Collaboration, together with cognates such as collegiality, team and community, was mentioned regularly in the reflective reports; some comments made it clear that this also included friendship at a more personal level. According to the participants, these underlying relationships were crucial to the personal and academic development ('learning') that took place; it was these which allowed the initial discomfort to be overcome. In setting up the programme, we had not anticipated the strength of these relationships nor how significant they would become.

Such coming together as a team required time, and face-to-face meetings were an essential part of the process. Many entered their first meeting with their enquiry group with 'initial misgivings … we could not have been stranger bedfellows…'; 'We were a "chance" grouping of five participants… we had, somehow, to work our somewhat diverse projects into a single group project.' 'Initially we embraced each other grudgingly and with uncertainty but now we warmly embrace each other and celebrate our ability to work together as a team.' One participant used the metaphor of a fruit salad to describe the enquiry group as it finally functioned: initially, 'I saw us as so many tropical fruits in a basket; mangoes, a pineapple, an orange, bananas etc…. The beauty of the fruit salad metaphor is that each of us could remain ourselves while at the same time creating a synergy with the rest.' In short, these became 'extremely valuable and enriching relationships'.

The close relationships developed within many of the enquiry groups provided a 'lifeline in terms of friendships, support and common interests'. At the same time, relationships also developed across the whole body of TAU participants and were regularly noted as being crucial to the achievement of the TAU outcomes: 'The actual meeting of TAU participants was vital to my learning (…) (through) the opportunity to meet and discuss matters with like-minded, equally passionate people'. One participant defined the term 'Fellowship' as 'a friendly association, especially with people who share one's interests', and continued: 'I had a sense of belonging... During this time, I became aware of TAU as a community of scholars and how to be a critical friend'.

One participant summed up her experience of how these caring relationships emerged, and of their very considerable value, as follows:

I have learned so much from TAU colleagues (…) The leisure time we had in the evenings to build relationships with colleagues was as valuable, if not more valuable, than the group discussions and brainstorming sessions. In that time we could get to know colleagues as people, have in-depth discussions about our personal experiences at our various institutions, and come to know one another's concerns and interests as academics and educators, as well as sharing laughs about mutual difficulties. These are the experiences that allow us to develop fellowship feelings and empathy with one another, which is really what facilitates working together, and seeing our similarities rather than our differences as people.
Clearly, the types of relationships that emerged will have varied in depth and nature; but there was a general consensus about the importance of this collaboration, collegiality and in many cases friendship for the achievement of the TAU outcomes. What was clear, too, was that the initial discomfort which had been so clearly expressed at the start of the programme had disappeared completely. Participants had moved 'from the somewhat tentative "wrestling" of the first unit to the unashamed enthusiasm of the final unit; from the strong institutional identifications of the first, to the overwhelming sense of "the collective" of the final'. This move beyond discomfort was supported by the elements of care described above, including the enquiry groups together with their advisors, and the programme slots introduced for relaxation and community building. However, most significant in overcoming initial discomfort was the emergence of meaningful relationships among the participants - relationships which transcended discipline and institution. These relationships became productive 'safe spaces', sites of mutual caring, offering support but also encouraging and enabling critical thinking. Participants did not simply feel 'safe'; rather, this experience of safety generated critical engagement. There was evidence of increasing growth of community during the thirteen months, and particularly so from the second Contact Session onwards. These communities were frequently associated with the enquiry groups; but significant partnering and caring also took place outside of the enquiry groups, and among the TAU participants as a full cohort. Participants no longer experienced themselves as 'out there on their own', often waging a battle against colleagues and institutions unsympathetic to the significance of teaching and learning. Through the programme they became aware of others in similar situations, and with similar passions, from whom they might learn and who would in turn learn from them.

It can be assumed that the characteristics of the participants, as adults and mature professionals, were a factor in allowing these relationships to develop successfully. So, too, the fact that, from the outset, they were encouraged to reflect on their experience - not least as a means of becoming aware of their own agency. According to subsequent feedback, some of this agency has carried over into their own institutions, in the form of care for others. Several participants also undertook to support the ongoing maintenance of these relationships beyond the TAU programme, through establishing a TAU Alumni group.

The care that organisers of such a programme can implement will necessarily have its limitations, and participants may well not move beyond accepting such care during the life of the programme. What becomes much more significant, in terms of potential for long-term impact, is discomfort which offers participants possibilities for discovering their own agency as providers of care, both to others and thereby also to themselves. Hence the crucial role of the advisors to the success of TAU, in nurturing the emergence of such agency in the enquiry groups. Admittedly not all the enquiry groups achieved this level of self-sufficiency, but almost all TAU participants commented on partnerships of different kinds which had developed and committed themselves to continuing these. A number of participants had also started mentoring relationships in their home institution and had begun passing on in their own environment the care they had themselves experienced.
Social Media Exposure, Psychological Distress, Emotion Regulation, and Depression During the COVID-19 Outbreak in Community Samples in China

The outbreak of coronavirus disease 2019 (COVID-19) has been a global emergency, affecting millions of individuals both physically and psychologically. The present research investigated the associations between social media exposure and depression during the COVID-19 outbreak by examining the mediating role of psychological distress and the moderating role of emotion regulation among members of the general public in China. Participants (N = 485) completed a set of questionnaires online, including demographic information, self-rated physical health, and social media exposure to topics related to COVID-19. The Impact of Event Scale-Revised (IES-R), the Beck Depression Inventory-II (BDI-II), and the Emotion Regulation Questionnaire (ERQ) were utilized to measure psychological distress about COVID-19, depression, and emotion regulation strategies, respectively. Results found that older age and greater levels of social media exposure were associated with more psychological distress about the virus (r = 0.14, p = 0.003; r = 0.22, p < 0.001). Results of the moderated mediation model suggest that psychological distress mediated the relationship between social media exposure and depression (β = 0.10; Boot 95% CI = 0.07, 0.15). Furthermore, expressive suppression moderated the relationship between psychological distress and depression (β = 0.10, p = 0.017). The findings are discussed in terms of the need for mental health assistance for individuals at high risk of depression, including the elderly, individuals who reported greater psychological distress, and those who showed a preference for using suppression, during the COVID-19 crisis.

INTRODUCTION

The outbreak of coronavirus disease 2019 (COVID-19), a severe acute respiratory syndrome (SARS), was reported on December 31, 2019, in Wuhan, China. Within several weeks, the disease had rapidly spread throughout the world, and on March 9, 2020, the World Health Organization (WHO) declared that COVID-19 had turned into a worldwide pandemic (1). By May 11, 2020, more than 4 million individuals worldwide had been diagnosed with COVID-19 (2), and the number of cases is still on the rise.

Previous research has demonstrated noticeable psychological problems in individuals diagnosed with COVID-19 (3,4) as well as the general public (5-7). In a study conducted in hospitalized patients diagnosed with COVID-19, it was estimated that approximately one third of patients with COVID-19 experience symptoms of anxiety and depression, with symptom severity being associated with lower social support (4). In another study, more than half of health care workers reported symptoms of depression, with greater severity among frontline health care workers who worked directly with patients (8). Moreover, due to the highly contagious nature of the disease, strict lockdowns were imposed all over China. The COVID-19 crisis has also had a significant impact on the mental health of members of the general public; people who have not become ill because of the virus may nevertheless experience psychological distress related to the illness. In a nationwide survey of 52,730 non-patients in China at the end of January 2020, about 35% of individuals reported experiencing moderate to severe psychological stress related to COVID-19 (9). More specifically, the prevalence rates of depression were 20.1% in Huang and Zhao (10) and 53.5% in Liu et al.
(11), estimated with the Center for Epidemiologic Studies Depression Scale [CES-D; (12)] and the Patient Health Questionnaire-9 [PHQ-9; (13)], respectively. Approximately 4.6% of participants suffered from posttraumatic stress symptoms 1 month after the COVID-19 outbreak (14). Beyond establishing prevalence, it is important to identify factors associated with higher and lower risk of depression among the general population during the COVID-19 pandemic.

Massive social media use has been found to be associated with poor sleep quality, elevated depressive symptoms, and behavior issues in adolescents, such as cyberbullying (15-17). Previous research demonstrated that greater exposure to trauma-related media information was associated with an increased risk of developing mental health problems over time. Holman et al. (18) compared the impact of media-based indirect exposure and direct exposure on acute stress responses after the 2013 Boston Marathon bombing and found that bombing-related media exposure was more strongly related to acute stress than direct exposure to the bombings (18); these associations may accumulate over time, generating a vicious cycle of media use and distress (19). According to the emotional contagion theory (20), emotional states can be transferred from one person to another through automatic mimicry, such as facial expressions and postures. For example, happiness can be spread from person to person through social interactions (21). Moreover, emotional contagion can also occur online, in the absence of typical in-person interaction cues (22,23), especially for negative emotions. Negative posts were followed by more negative responses than positive posts on Twitter, which then increased the amount of negative posts the following week and thus provided greater opportunity for emotional contagion (24). Media effect theory has been developed to explain how media use brings about changes in people's cognition, emotion, and behavior (25).

A great deal of information rushed onto the Internet after the outbreak of COVID-19. Internet posts concerning COVID-19 showed a sharp increase after human-to-human transmission was confirmed on January 20, 2020, and the number of posts was associated with the number of diagnosed patients (26), indicating great concern about the spread of COVID-19. Though health information could help relieve stress (27), misinformation was also disseminated, and it may have caused fear and stress among the public (28). According to the emotional contagion theory and media effect theory, those who were not infected with the virus may also suffer from emotional distress and depression after browsing social media posts related to COVID-19. Consistently, several studies have demonstrated that massive social media exposure to information related to COVID-19 was positively associated with more severe mental health problems, such as anxiety and depression (29,30). Nevertheless, only a few studies have examined the underlying mechanisms that might mediate or moderate this association. Liu and Liu (31) found that exposure to social media was related to higher levels of anxiety, and the association was mediated by vicarious traumatization. Given the close relationship between social media exposure and perceived distress (18,19,31), the present study assumed that psychological distress may play a mediating role between social media exposure and depression.
People use multiple emotion regulation strategies to regulate their emotional responses to crises. Cognitive reappraisal involves the cognitive reevaluation of emotion-inducing situations. The use of cognitive reappraisal can reduce negative affect and its physiological correlates; thus, it is considered to be an adaptive emotion regulation strategy (32). In addition, the use of cognitive reappraisal has been associated with higher levels of positive affect and greater satisfaction with life (33-35) and with better psychological consequences such as decreased anxiety and depression [e.g., (36)]. Expressive suppression is a response-focused form of emotion regulation in which a person tries to inhibit his or her emotion-expressive behavior after the emotional response has already been generated (32). Expressive suppression is considered a maladaptive emotion regulation strategy, which has been shown to increase negative emotional feelings and result in poor social consequences (37). Generally, expressive suppression has been associated with higher, and cognitive reappraisal with lower, posttraumatic symptoms in response to crisis (38,39), while another study reported a non-significant correlation between cognitive reappraisal and severity of posttraumatic symptoms in a clinical sample of trauma-exposed women (40).

Only a few studies have examined the interaction between stress and emotion regulation on psychological well-being, and mixed results have been reported. Roos et al. (41) found that suppression, rather than reappraisal, moderated the relationship between stressful life events and physiological responses to acute stressors, while another study suggested a moderating role of cognitive reappraisal between stress and depression (42). Nevertheless, a recent study using a daily diary method found that both cognitive reappraisal and expressive suppression moderated the associations between stress and suicidal thoughts, and the associations were weaker among individuals who reported habitual use of either strategy (43). While previous studies have investigated psychological distress and depression severity related to COVID-19 separately, to the best of our knowledge, no study has examined the extent to which emotion regulation strategies may predict or moderate relations between psychological distress and depression during the COVID-19 outbreak. Given the high prevalence rate of depression among the general public during the COVID-19 pandemic (11), assessing the moderating role of emotion regulation between psychological distress and depression may help uncover the mechanisms by which mental illness is generated and develops during the pandemic and provide evidence for the effectiveness of certain emotion regulation strategies in reducing the mental health burden among the general population.

The present study was conducted in mid-February 2020, at which time the number of COVID-19 cases in China had reached 66,576 (44), and the number was still rising. The sample was made up of members of the general population who were not patients with COVID-19. The goals of the study were to estimate the prevalence of depression and to explore the relationships among social media exposure, psychological distress about COVID-19, emotion regulation strategies, and symptoms of depression. Social isolation is helpful in preventing virus spread but could also be a public health concern for the elderly (45) and has been identified as a risk factor for depression and anxiety (46).
Therefore, it was hypothesized that (1) the elderly would report more severe mental health problems and (2) social media exposure may exacerbate psychological distress and depression during the COVID-19 outbreak. Considering that adaptive and non-adaptive emotion regulation strategies can be utilized in responding to the stress elicited by COVID-19 and are closely related to the severity of depressive symptoms, moderation analyses were conducted to examine whether the use of emotion regulation moderated the predictive relationship between psychological distress and depressive symptoms. As there is still much controversy regarding the moderating effect of specific emotion regulation strategies on the relations between psychological distress and depression (38,41,42), no specific hypothesis was made regarding the moderating roles of suppression and reappraisal; the two strategies were examined separately.

Participants

Potential participants among Chinese citizens were invited to complete questionnaires via the Internet, using links sent via Social Networking Services (SNSs; such as WeChat) from February 16 to February 19, 2020, using a snowball sampling technique. Of the 576 participants who filled out the questionnaires, 87 were excluded from the final data analysis because the completion time was <180 s or the same answer was given to more than 80% of the items. Four participants were diagnosed patients or frontline medical workers and were also excluded from analysis. There were 485 participants in the final sample (193 males, 39.8%; 292 females, 60.2%). Participants' ages ranged from 12 to 75, with most (76.1%) aged between 18 and 50. Nearly half of the participants (45.8%) were currently enrolled students. About half lived in urban areas (212; 43.7%) and about half in rural areas (273; 56.3%). About half were married, divorced, or widowed (226; 46.6%) and about half were single (259; 53.4%). Among the participants, 55 (11.3%) were from Hubei province. This study was approved by the local ethics committee. All participants provided informed consent to having their anonymous data used for research. In addition, informed consent was obtained from teachers of middle school students before data collection.

Demographic Information

Demographic variables included age, gender (male, female), marital status (single, married, divorced, widowed), education level (middle school, high school, college or higher), and region (urban, rural). In addition, participants were asked to provide a self-rating of physical health on a 5-point Likert scale from 1 ("very bad") to 5 ("very good").

Coronavirus Disease 2019-Related Information

Social media exposure was measured by one item, consistent with a previous study (29). Participants rated how much they focused on information related to COVID-19 on social media (e.g., Weibo, WeChat) each day using a 5-point Likert scale from 1 ("almost never") to 5 ("almost always").

Psychological Distress

The Impact of Event Scale-Revised [IES-R; (47); Chinese version by (48)] is a frequently used self-report scale to measure psychological distress following a traumatic event (49). The IES-R contains 22 items, and participants are asked to rate each item on a 5-point Likert scale ranging from 0 ("not at all") to 4 ("extremely"), resulting in a total possible score ranging from 0 to 88. The items were adapted to refer in particular to distress elicited by COVID-19.
For example, the original item "Any reminder brought back feelings about it" was changed to "Any reminder brought back feelings about COVID-19." The Cronbach α coefficient in the present study was 0.92.

Depression Severity

The Beck Depression Inventory-II [BDI-II; (50)] was used to measure depressive symptoms. The BDI-II contains 21 items. On each item, participants are asked to choose one of four statements that best describes their feelings, with scores ranging from 0 to 3 for each item. For example, one item provides the following four options: "I do not feel sad" (0), "I feel sad" (1), "I am sad all the time and I can't snap out of it" (2), and "I am so sad and unhappy that I can't stand it" (3). The total possible score ranges from 0 to 63, and participants can be categorized as being at one of four levels of depression severity according to their total score: no or minimal depression (0-13), mild depression (14-19), moderate depression (20-28), and severe depression (≥29). The Chinese version of the BDI-II has been shown to be reliable for assessing depressive symptoms (51). The Cronbach α coefficient in the present study was 0.92.

Emotion Regulation

Participants' use of various emotion regulation strategies was measured using the Emotion Regulation Questionnaire [ERQ; (32)]. The ERQ includes 10 items, and participants are asked to rate each item on a 7-point Likert scale ranging from 1 ("strongly disagree") to 7 ("strongly agree"). The ERQ has two subscales: cognitive reappraisal (six items) and expressive suppression (four items). A higher subscale score indicates more frequent use of that emotion regulation strategy. The Chinese version of the ERQ has shown good reliability and validity (52). In the present study, the Cronbach α coefficients were 0.88 and 0.76 for the cognitive reappraisal subscale and expressive suppression subscale, respectively.

Data Analysis

Data analyses were conducted using SPSS 25.0, and the p-value threshold for statistical significance was set at 0.05 (two-tailed). First, to establish the validity of the data, common method bias was assessed using Harman's single-factor test. Principal component analysis extracted 10 factors whose eigenvalues were larger than 1, and the first factor explained 23.36% of the total variance. The results did not reveal severe common method bias in the present study. Then, descriptive analyses were conducted, including correlations among all variables. Independent-samples t-tests and one-way analyses of variance (ANOVAs) were conducted to determine whether scores for depression and for psychological distress about COVID-19 varied depending on demographic variables, physical health, and social media exposure. The prevalence of depression was also estimated. Secondly, a moderated mediation model was fitted using Model 14 of the PROCESS macro (53) to further explore the relationships among social media exposure, psychological distress, emotion regulation strategies, and depression (Figure 1). A significant direct regression of the independent variable on the dependent variable is not a prerequisite for mediation analysis (54); thus, the full model was tested directly. Additionally, conditional direct and indirect effects were calculated with a nonparametric bootstrapping method with 5,000 resamples. Finally, simple slope analysis was conducted to explore the patterns of any significant moderation effect.
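Because the moderated mediation itself was run in SPSS, the underlying computation may not be obvious. The sketch below is a minimal Python illustration, not the authors' code, of a Model-14-style analysis: the data are simulated, the variable names and coefficients are assumptions, and statsmodels stands in for the PROCESS macro. It estimates the a path (exposure to distress), a b path moderated by suppression, and a percentile bootstrap CI for the conditional indirect effect.

```python
# Minimal sketch of a PROCESS-Model-14-style moderated mediation (illustrative
# only: simulated data, assumed variable names, statsmodels instead of SPSS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 485
exposure = rng.normal(size=n)                          # X: social media exposure
distress = 0.24 * exposure + rng.normal(size=n)        # M: psychological distress
suppress = rng.normal(size=n)                          # W: expressive suppression
depress = (0.4 * distress + 0.1 * distress * suppress  # Y: depression severity
           + rng.normal(size=n))

def conditional_indirect(idx, w_value=0.0):
    """a * (b1 + b3 * w): indirect effect of X on Y via M at moderator value w."""
    x, m, w, y = exposure[idx], distress[idx], suppress[idx], depress[idx]
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]          # X -> M
    exog = sm.add_constant(np.column_stack([m, x, w, m * w]))  # M, X, W, M*W -> Y
    b = sm.OLS(y, exog).fit().params
    return a * (b[1] + b[4] * w_value)

# Nonparametric bootstrap with 5,000 resamples for a percentile CI, as above.
boot = np.array([conditional_indirect(rng.integers(0, n, n)) for _ in range(5000)])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
point = conditional_indirect(np.arange(n))
print(f"indirect effect at mean W: {point:.3f}, boot 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

Evaluating the conditional indirect effect at several moderator values (e.g., ±1 SD of W) would mirror the simple slope analysis described above.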
Descriptive Information

The ANOVA results showed that individuals at an older age and those with a higher education level experienced more severe psychological distress than individuals at a younger age or with a lower level of education (see Table 1 for descriptive and test statistics). Additionally, there was a significant positive correlation between age and psychological distress, r = 0.14, p = 0.003. Self-rated health was associated with depression and psychological distress; individuals with worse physical health status suffered more severe depression and psychological distress about the virus. Descriptive statistics and correlations among social media exposure, psychological distress, emotion regulation, and depression are presented in Table 2. Social media exposure was positively related to psychological distress and depression, r = 0.22, p < 0.001; r = 0.09, p = 0.042. Psychological distress was positively correlated with depression, r = 0.45, p < 0.001. Significant correlations were also found between the use of the expressive suppression emotion regulation strategy and psychological distress, r = 0.22, p < 0.001, and depression severity, r = 0.16, p < 0.001. The correlations between cognitive reappraisal and depression or psychological distress were not significant, ps > 0.05.

Prevalence of Depression

The prevalence of depression was estimated based on the BDI-II categorical system (50). In the current sample, 413 participants (85.1%) were classified as showing no to minimal depression (BDI-II scores from 0 to 13); 39 participants (8.0%) showed mild depression (BDI-II scores 14-19); 24 participants (5.0%) showed moderate depression (BDI-II scores 20-28), and nine participants (1.9%) showed severe depression (BDI-II scores 29 and above). Thus, 15.9% of the sample showed at least mild depression according to the BDI-II system of classifying respondents according to the severity of depression.

The Moderated Mediation Model

To examine the relationships among social media exposure, psychological distress, emotion regulation, and depression, a moderated mediation model was tested. Results showed that social media exposure positively predicted psychological distress (β = 0.24, p < 0.001), and psychological distress positively predicted depression severity (β = 0.043, p < 0.001; Table 3). The conditional indirect effect was significant (β = 0.10; Boot 95% CI = 0.07, 0.15), while the conditional direct effect was nonsignificant (β = −0.04; Boot 95% CI = −0.12, 0.05). Thus, these results indicated that psychological distress fully mediated the relationship between social media exposure and depression. In addition, the interaction of psychological distress and expressive suppression in predicting depressive symptoms was significant (β = 0.10, p = 0.017). Simple slope analysis showed that among individuals who reported more frequent use of expressive suppression, psychological distress was significantly associated with more severe depression symptoms (β = 0.52, p < 0.001; Figure 2). Among individuals who reported a lower level of expressive suppression, a significant correlation was also found between psychological distress and depression (β = 0.33, p < 0.001). Thus, psychological distress related to COVID-19 was associated with more severe symptoms of depression among participants with both high and low habitual use of the expressive suppression strategy, but with a greater predictive value among those who reported higher levels of suppression.
Nevertheless, the interaction effect of cognitive reappraisal and psychological distress on depression was not significant (β = −0.02, p = 0.696); thus, the associations between psychological distress and depression severity were not influenced by cognitive reappraisal.

DISCUSSION

In this study, we investigated the mediating role of psychological distress and the moderating role of emotion regulation in the relationship between social media exposure and symptoms of depression among the general public during the COVID-19 pandemic in China. The prevalence of depression was 15.9%, and depression severity was correlated with worse physical health. Older age and more frequent exposure to social media posts about COVID-19 were associated with a higher level of psychological distress. Moreover, psychological distress played a mediating role in the relationship between social media exposure and depression, and the associations between psychological distress and depressive symptom severity were moderated by expressive suppression. The results demonstrate the psychological impact of the COVID-19 outbreak on non-patients and suggest targets for possible intervention programs for the general population.

In the current study, nearly one in six members of the general public reported at least mild depression. The prevalence rate in our sample was lower than in previous studies, in which 20.1% (10) and 53.5% (11) of participants reported depressive symptoms. Those studies were conducted from January 30 to February 13, when new confirmed cases of COVID-19 were at their peak, whereas the present study was conducted from February 16 to 19, by which time the number of recovered COVID-19 patients had exceeded that of new cases for the first time (55). Moreover, this discrepancy might be related to the different measures of depressive symptoms used in the three studies. The present study applied the BDI-II, which was constructed based on the cognitive-behavioral model and emphasizes the cognitive symptoms of depression (56). Huang and Zhao (10) applied the CES-D, which emphasizes negative emotions (12), and Liu et al. (11) applied the PHQ-9, which incorporates the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnostic criteria for major depressive disorder (13). Lambert et al. (57) found that the PHQ-9 cutoff is easier to reach than the CES-D cutoff, and the CES-D cutoff score is easier to reach than the BDI-II cutoff. The present study was administered during the COVID-19 outbreak; it would have been more convincing to compare the severity of depressive symptoms before and during the pandemic. A nationwide epidemiological study, however, demonstrates a lifetime prevalence rate of 6.8% for depressive disorders in China (58); thus, the prevalence of depressive symptoms during the COVID-19 pandemic was more than two-fold higher than before the pandemic.

In the present study, individuals with worse self-reported physical health also reported more elevated levels of depression and psychological distress about COVID-19. Although our participants were not infected by COVID-19, the rapid spread and high infectiousness of the virus (59) can cause changes in the lifestyles of non-patients, such as isolation to avoid exposure. Moreover, the practice of social distancing may result in more loneliness, which might contribute to elevated depressive symptoms (60).
These lifestyle changes have been shown to have negative psychological effects, including generalized anxiety disorder, symptoms of depression, disrupted sleep (10), and symptoms of acute posttraumatic stress disorder (PTSD) (14). People at an older age reported higher levels of psychological distress, which was consistent with Qiu et al. (9). The elderly and people with underlying health conditions have been shown to be more vulnerable to COVID-19 (61,62). Perceived ageism and social isolation also contributed significantly to the relationship between age and psychological distress (63). Therefore, psychological interventions and physical health care services for the elderly are urgently needed to address potential emotional distress in response to the COVID-19 crisis (64).

Informed by the emotional contagion theory and media effect theory, the study examined the association between social media exposure and psychological distress, and we found that exposure to social media content concerning COVID-19 was associated with greater psychological distress. Indirect exposure to traumatic events via electronic media can lead to increased levels of PTSD and vicarious trauma (65,66), especially exposure to the widely disseminated misleading information related to the COVID-19 outbreak on social media platforms (67). Additionally, the significant associations between social media exposure and depression severity were consistent with findings from a recent study, in which time spent on COVID-19 news via social media was used as the measure of social media exposure and was related to elevated depressive symptoms (68). Moreover, the mediation effect suggested that social media exposure contributed to elevated depressive symptoms through psychological distress. Media exposure to COVID-19 has been found to be positively related to acute stress (69). There is considerable evidence that greater social media exposure is a risk factor contributing to depression and psychological distress in adolescents (70); further investigations are needed to clarify the potential moderators of the relationship between social media exposure and depression severity related to COVID-19 in people of different ages.

Greater psychological distress related to COVID-19 was positively correlated with more severe depression symptoms. Psychological distress has been shown to be a common response to traumatic events such as traffic accidents and natural disasters (71,72). Psychological distress has also been shown to be present nearly 4 years after receiving a diagnosis of SARS, an infectious disease that affects the respiratory system similar to COVID-19 (73), suggesting a persistent impact of this kind of infectious disease on mental health. The results in the current study suggest that psychological distress related to the COVID-19 pandemic may predict the development of more severe chronic psychiatric illnesses, such as depression. Results showed that the interaction between expressive suppression and psychological distress positively predicted depression severity, suggesting that habitual use of the suppression strategy together with higher levels of psychological distress in response to the COVID-19 outbreak contributes to the development of depression symptoms.
The result was consistent with that of a recent study (41), which found that individuals who reported a higher level of expressive suppression exhibited an enhanced physiological response in reaction to stressful life events. A large amount of research has shown that expressive suppression is closely related to the development and maintenance of depressive episodes (32, 74-77). Specifically, the usage of expressive suppression is associated with increased negative affect and decreased positive affect in daily life (78) and is not conducive to the maintenance of good interpersonal relationships, thus aggravating depressive symptoms (79). On the other hand, the associations between depression and cognitive reappraisal, an adaptive emotion regulation strategy, did not reach the significance level. The result was consistent with those of previous research (80,81), in which insignificant correlations between cognitive reappraisal and depression were reported. In contrast to expressive suppression, a response-focused form of emotion regulation, cognitive reappraisal is an antecedent-focused strategy, which requires individuals to make adjustments before behavior and psychological well-being are affected (32). COVID-19 was a public health emergency of international concern; thus, it was difficult for individuals to pre-evaluate the psychological impact and to regulate their emotions ahead of its sudden outbreak. In addition, it has been shown that expressive suppression was associated with higher stress-related symptoms in trauma-exposed community samples, while cognitive reappraisal was not (40). A meta-analysis indicated a medium effect size for the association between suppression and posttraumatic stress symptoms, but no significant effect was found for reappraisal and post-trauma symptoms (82). These findings indicate that for stress-related symptoms, expressive suppression may play a more important role than cognitive reappraisal. However, further studies are needed to test the potential role of other emotion regulation strategies (such as distraction and social sharing) as well as consider other relevant outcome variables, such as anxiety.

The current study has several limitations. Firstly, the sample size was not large enough to be representative of non-patients affected by COVID-19 in China. Secondly, due to lockdown measures, data were collected via SNSs with self-reported questionnaires; thus, the results might be susceptible to memory bias and response tendencies such as social desirability. Recruitment via SNSs might bias samples and result in underrepresentation of older individuals (83). There were only a few participants over the age of 60 in the present study; the geriatric age-group, however, has a higher risk of contracting the disease and a greater prevalence of psychological distress related to COVID-19 (46). Thirdly, this was cross-sectional survey research that reveals only correlational effects. Causal relationships between social media exposure and depression cannot be determined. Longitudinal research is warranted to explore the dynamic change in mental health during different stages of the COVID-19 pandemic and to uncover the underlying mechanisms of the development and maintenance of mental disorders.

CONCLUSIONS

The present study contributes to a better understanding of the role of social media exposure to COVID-19 in amplifying psychological distress and mental health consequences.
Older age, poor self-reported physical health, and higher exposure to social media content about the pandemic were risk factors for mental health problems. Psychological distress fully mediated the relationship between social media exposure and depression. Additionally, habitual use of expressive suppression interacting with levels of psychological distress about COVID-19 contributed to a higher level of depression. The results highlight the necessity of providing psychological assistance for the elderly and for individuals who reported greater psychological distress and habitual use of suppression during the COVID-19 pandemic. The current study helps to inform evidence-based guidelines for minimizing psychological distress and promoting mental well-being during the global pandemic emergency.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

This study was reviewed and approved by Central China Normal University. All participants provided informed consent to having their anonymous data used for research. In addition, informed consent was obtained from teachers of middle school students before data collection.

AUTHOR CONTRIBUTIONS

Y-tZ and R-tL collected and analyzed the data and wrote the first draft of the paper. X-jS and MP provided substantial comments on the draft of the paper. XL generated the idea, designed and supervised the study, and wrote the first draft of the paper. All authors have contributed to and have approved the final text.
Impacts of alpine wetland degradation on the composition, diversity and trophic structure of soil nematodes on the Qinghai-Tibetan Plateau

Alpine wetlands on the Qinghai-Tibetan Plateau are undergoing degradation. However, little is known regarding the response of soil nematodes to this degradation. We conducted investigations in a wet meadow (WM), a grassland meadow (GM), a moderately degraded meadow (MDM) and a severely degraded meadow (SDM) from April to October 2011. The nematode community taxonomic composition was similar in the WM, GM and MDM and differed from that in the SDM. The abundance declined significantly from the WM to the SDM. The taxonomic richness and Shannon index were comparable between the WM and MDM but were significantly lower in the SDM, and the Pielou evenness showed the opposite pattern. The composition, abundance and diversity in the WM and SDM were relatively stable over time compared with other habitats. The abundances of all trophic groups, aside from predators, decreased with degradation. The relative abundances of herbivores, bacterivores, predators and fungivores were stable, while those of omnivores and algivores responded negatively to degradation. Changes in the nematode community were mainly driven by plant species richness and soil available N. Our results demonstrate that alpine wetland degradation significantly affects the soil nematode communities, suppressing but not shifting the main energy pathways through the soil nematode communities.

Experimental design. Six plots (20 m × 20 m) one kilometer apart were marked with permanent signs and established in each of the four habitats. During April (winter), May (spring), July (summer) and October (autumn) in 2011, three soil samples (each approximately 500 g) were randomly collected from the 0-15 cm layer at 5 m intervals within each plot. Thus, a total of 288 soil samples (4 habitats × 6 plots × 3 samples × 4 seasons) were examined during the study period.

Soil nematode extraction. In the laboratory, soil nematodes were extracted from 50 g of fresh soil from each sample using Baermann funnels. The gravimetric moisture content of the soil was determined so that response variables could be expressed on a dry weight basis. The extracted nematodes were preserved using 5% formalin and were then killed and fixed by the addition of boiling double-strength F.A. 4:1 (100 ml of 40% formaldehyde, 10 ml of glacial acetic acid, and 390 ml of distilled water)26. All nematodes in each sample were counted under 40× magnification using a stereomicroscope. The first 100 nematodes encountered in each sample were identified to the genus level under 400× magnification using a compound microscope (Leica DM4000B) according to the reference "Pictorial keys to soil animals of China"38. The nematodes were classified into six trophic groups, herbivores, bacterivores, predators, omnivores, fungivores and algivores, according to references 39-41.

Estimation of plant and soil parameters. The environmental characteristics of the four habitats were investigated in April, May, July and October 2011. The species richness and coverage of the plant communities were measured within 1 m × 1 m sampling areas. Three replicate samples were established in each plot. The vegetation height was measured using a ruler with units of 1 cm. The vegetation coverage was measured using visual estimations in the field.
The above- and below-ground biomass of each sample was harvested and dried to a constant weight at 80 °C in the laboratory. Soil samples (consisting of three replicates) were collected in each plot at a depth of 0-15 cm using a flat shovel. The soil samples were air-dried and passed through 2.00 and 0.25 mm sieves for chemical analyses. The soil chemical properties were determined according to well-established methods42. Specifically, the soil organic matter (SOM) content was determined using the Walkley-Black method. Total N was measured using the semi-micro Kjeldahl method, and plant-available N was determined using a micro-diffusion technique following alkaline hydrolysis. Total P was determined colorimetrically after wet digestion with sulfuric and perchloric acid, and available P was determined using the Olsen method. Total K was determined using a flame photometer, and available K was measured in 1 mol L−1 NH4OAc extracts using flame photometry. The soil bulk density in the 0-15 cm layer was investigated using 200 cm3 soil cores (height: 52 mm; radius: 35 mm). The gravimetric soil moisture content was measured for each season as the ratio of the mass loss to the total dry mass of the soil samples after heating to a constant weight at 105 °C.

Data analysis. First, the nematodes from the three soil cores obtained from the same plot and sampling month were pooled as one sample. The abundance (the number of individuals per 100 g dry soil) and generic richness (mean number of genera per sample) were used to measure the response of the soil community to changes in habitats and seasons. Relative diversity indices, i.e., the Shannon index (H′ = −Σ p_i ln p_i, where p_i is the proportion of individuals belonging to the ith genus) and the Pielou evenness index (J = H′/ln S, where S is the number of genera), were calculated at the genus level to evaluate the responses of diversity and evenness to changes in habitats and seasons43,44. To evaluate the changes in the trophic structure of the soil nematode communities, the abundances (individuals per 100 g dry soil) of the six trophic groups in the same sampling plot were calculated. The relative abundance (individual percentages) of each trophic group was also used to reveal the changes in trophic structure, given that the relative abundances, rather than the abundances, of different trophic groups can directly reflect their relative importance in communities in some cases. Repeated-measures ANOVAs were performed using IBM SPSS 22.0 for Windows to evaluate the effects of the habitats (WM, GM, MDM and SDM), the sampling months (April, May, July and October) and their interactions on the diversity indices and abundances of the nematode communities and on the abundances and relative abundances of the trophic groups of soil nematodes. Principal components analysis (PCA) was performed using Canoco for Windows 4.5 to evaluate the effects of habitats and sampling months on the composition of the soil nematode communities45. The PCA was run separately for each season, as well as for each habitat, to simplify data presentation. To reduce the number of variables and the figure complexity, these analyses were performed at the family level. The abundance data (ind. 100 g−1 dry soil) of each plot were log transformed before they were subjected to PCA. One-way ANOVA was used to evaluate the significant differences in the sample scores of the first two canonical axes (PC1 and PC2) among habitats (IBM SPSS 22.0 for Windows).
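To make the index definitions concrete, the following Python sketch computes the Shannon and Pielou indices for a hypothetical genus-abundance matrix and then applies a PCA to log-transformed abundances. It is an illustration only: the data are invented, the log(x + 1) transform is an assumption about the exact transformation used, and scikit-learn stands in for the Canoco/SPSS workflow.

```python
# Illustrative sketch (not the authors' Canoco/SPSS workflow): Shannon and
# Pielou indices per sample, then PCA on log-transformed abundances.
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix: rows = plot samples, columns = nematode taxa,
# values = individuals per 100 g dry soil.
abundance = np.array([
    [120.0, 45.0,  0.0, 310.0, 22.0],   # e.g., a WM sample
    [ 80.0, 60.0,  5.0, 150.0, 40.0],   # e.g., a GM sample
    [ 10.0,  2.0,  0.0,  30.0,  1.0],   # e.g., an SDM sample
])

def shannon_pielou(counts):
    """H' = -sum(p_i ln p_i) over taxa present; J = H'/ln S for S taxa present."""
    present = counts[counts > 0]
    p = present / present.sum()
    h = -np.sum(p * np.log(p))
    s = present.size
    return h, (h / np.log(s) if s > 1 else 0.0)

for i, sample in enumerate(abundance):
    h, j = shannon_pielou(sample)
    print(f"sample {i}: richness={np.count_nonzero(sample)}, H'={h:.3f}, J={j:.3f}")

# Log-transform before ordination (log(x + 1) guards against zero counts),
# then extract sample scores on the first two principal components.
scores = PCA(n_components=2).fit_transform(np.log(abundance + 1.0))
print("PC1/PC2 sample scores:\n", scores)
```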
Additionally, the sample scores of the first two canonical axes (PC1 and PC2) of the communities, determined during April, May, July and October, were averaged across each of the six plots within each habitat. The same calculations were conducted on the abundance, generic richness, Shannon index and Pielou index of each community and on the abundances and relative abundances of the six trophic groups. Finally, stepwise multiple regression analysis was conducted to test the relationships between the soil nematodes and environmental parameters (IBM SPSS 22.0 for Windows).

Results

Plant communities and soil properties. Plant species richness and coverage were significantly lower in the SDM than in the WM, and the vegetation height and above-ground, below-ground and total biomass varied significantly among the four habitats (Table 2). The plant variables in the SDM were significantly smaller than those in other habitats (Table 2). The soil bulk density and pH increased, while the water content decreased significantly from the WM to SDM (Table 2). The contents of SOM and of total and available soil N, P and K varied significantly among the habitats, with the lowest values occurring in the SDM. In addition, the soil texture differed among the four habitats. For example, peat soil was found in the WM, sandy loam was found in the GM and MDM, and sandy soil in the SDM (Table 1).

Soil nematode composition. A total of 78 nematode genera were identified across all samples, belonging to 38 families and 8 orders (Supplementary materials Table S1). Among the four habitats, the number of genera ranged from 51 to 65, and abundance ranged from 687.43 to 6826.94 ind. 100 g−1 dry soil. Overall, Acrobeloides and Aphelenchus were the dominant genera, accounting for 10.80% and 10.06%, respectively, of the total individuals collected. Tylenchida, Rhabditida and Dorylaimida were the three most abundant orders and represented 34.30%, 24.80% and 21.76%, respectively, of the total soil nematodes collected. Regarding the trophic groups, the percentages of bacterivores, herbivores, omnivores, fungivores and predators were, respectively, 32.10%, 20.90%, 17.60%, 15.73% and 12.62%, with algivores (1.05%) constituting the least abundant group (Table S1).

Nematode community structure. The PCA results showed that the composition of the soil nematode communities varied among the four habitats (Fig. 2a-d). The nematode communities from the SDM separated clearly from the other habitats according to PC1 and PC2 in April, May and October (Fig. 2a,b and d); however, the SDM nematode communities overlapped with those of the WM and MDM in July (Fig. 2c). The one-way ANOVA results showed that only the PC2 factor scores differed significantly among habitats in each month (April: F = 15.34, P < 0.001; May: F = 8.02, P < 0.001; July: F = 15.02, P < 0.01; October: F = 10.44, P < 0.001). On the whole, the main taxonomic groups associated with the separation of PC1 and PC2 across the sampling months were Cephalobidae, Tylencholaimidae, Aphelenchinae, Dorylaimidae, Tripilidae and Plectidae, but the pattern varied with sampling month (Fig. 2a-d). The composition of the soil nematode communities also varied between sampling months, but the patterns differed among the habitats (Fig. 3a-d). For the GM, the communities in April were separated clearly from those in May, July and October by PC1 and PC2 (Fig. 3b), and the nematode communities in April and October were separated from those in May and July for the MDM (Fig. 3c).
In contrast, the nematode communities differed little among sampling months in the WM and SDM (Fig. 3a and d). Significant differences among sampling months were observed only in the second-axis (PC2) factor scores for the WM (F = 6.81, P < 0.01), GM (F = 16.47, P < 0.001) and MDM (F = 9.24, P < 0.001). Additionally, the taxonomic groups determining the temporal differences of the communities varied among habitats (Fig. 3a-d). Nematode community abundance and diversity. The abundance of nematodes decreased significantly from the WM to the SDM (P < 0.001) and varied significantly among sampling months (P < 0.001) (Fig. 4a, Table 3). Nematode abundance also responded significantly to the interaction effects of habitat and sampling month (P < 0.01) (Table 3). The taxonomic richness, Shannon index and Pielou index differed between the WM and MDM, and the SDM showed significantly lower values for taxonomic richness (P < 0.001) and the Shannon index (P < 0.001) and a higher value for the Pielou index (P < 0.05) (Fig. 4b-d, Table 3). The taxonomic richness also responded significantly to sampling month (P < 0.01) and the interaction effects of sampling month and habitat (P < 0.05), with the Pielou index showing significant differences by sampling month (P < 0.01) (Table 3). However, the temporal patterns varied among the habitats, and significant temporal dynamics in diversity were only recorded for the GM and MDM (P < 0.05) (Fig. 4b-d). Nematode community trophic structure. With the exception of predators, the abundances of all trophic groups decreased significantly with increasing degradation (P < 0.001 or 0.01), with algivores disappearing from the SDM (Fig. 5a-f, Table 4). The abundances of all trophic groups, except bacterivores, varied significantly among sampling months (P < 0.001 or 0.01) (Table 4). The abundances of the herbivores, bacterivores and algivores were also sensitive to the interaction effects of habitat and sampling month (P < 0.001 or 0.05) (Table 4). The temporal patterns of the individual trophic groups also differed among habitats (Fig. 5a-f). The relative abundances (individual percentages) of the omnivores and algivores declined significantly with habitat degradation (P < 0.001 or 0.05) (Fig. 6a-f, Table 4). Additionally, the temporal effects on the relative abundances were significant for the bacterivores, predators, omnivores and algivores (P < 0.001 or 0.01) (Fig. 6a-f, Table 4), and the temporal pattern of each trophic group differed among habitats (Fig. 6a-f). Impacts of environmental factors on soil nematodes. The results of the multiple regression analyses (Table 5) show that PC1 and PC2 were significantly correlated with average plant height (P < 0.05) and plant species richness (P < 0.001), respectively. Nematode abundance was correlated with plant species richness (P < 0.05) and soil available N (P < 0.01), while the Shannon index was negatively correlated with above-ground biomass (P < 0.05) and pH (P < 0.001). The taxonomic richness was found to correlate with plant species richness (P < 0.001) and the Pielou index with coverage (P < 0.001). The abundances of the bacterivores and omnivores were significantly correlated with plant species richness (P < 0.001), as were those of the fungivores and algivores with the soil water content (P < 0.001). In addition, the herbivore and predator abundances were significantly positively correlated with soil available N and total N contents, respectively (P < 0.001 or 0.01) (Table 5).
Regarding the relative abundances of the six trophic groups, only those of the omnivores and algivores were correlated with plant coverage and available N, respectively (P < 0.001 or 0.05) (Table 5). Discussion Changes in soil nematode community composition and diversity. The soil nematode communities in the WM, GM and MDM were relatively similar, but they differed remarkably from that in the SDM. Soil nematode community patterns among habitats also varied substantially with sampling month. These results indicated that the composition of soil nematode communities changes in response to alpine wetland degradation, while further demonstrating that the impacts of wetland degradation are temporally variable. The observed shifts in the nematode communities may reflect differences in plant communities among habitats. Our analysis also shows that the community structure of the soil nematodes was influenced by plant species richness, which changed markedly in the MDM and SDM. Nematode communities differ significantly among vegetation types 46 , and plant species composition is one of the principal factors structuring soil nematode communities 24,47 . This relationship may result from the fact that increased plant diversity generally provides a greater variety of foods and habitats for soil invertebrates 48 . Apart from the effects of the plants on the soil nematode communities, the composition of soil invertebrate communities can also be affected by soil properties 49,50 . In the Zoigê Wetland, the soil parameters measured, including soil texture and moisture, differed significantly among the four habitats, and these differences were most distinct between the MDM and SDM. Therefore, the degradation of soil properties among the habitats might also be an important determinant of the taxonomic composition of soil nematodes in the alpine wetland ecosystem. Our results suggest that the patterns in nematode abundance and diversity among habitats may be related to plant community and soil traits. Soil nematodes can be affected by changes in soil P, N and organic matter contents 25,30,32 . In our research, the decrease in plant species richness and the increase in soil pH during the degradation process would negatively affect soil nematode diversity, according to the relationships between these variables. Plant community simplification can lead to the disappearance of some nematode species 51 . The Pielou index increased gradually with the degradation and was negatively correlated with plant coverage. Other researchers have also reported that the evenness of soil nematode communities is affected by shifts in plant community traits 15 . The increase in the evenness index suggests that competitive exclusion among different nematode taxa may decrease with alpine wetland degradation. Overall, the effects of plant communities and soil properties on the abundance and diversity of the soil nematodes indicate that the abundance and diversity of soil nematodes are more strongly influenced by variations in plant communities than by soil properties in an alpine wetland. However, compared with the GM, the taxonomic richness and Shannon index decreased only slightly in the MDM and decreased significantly in the SDM. This may be because the plant communities and soil properties changed only slightly and did not deteriorate before moderate degradation occurred, with the result that the habitat remained suitable for almost all soil nematode species.
However, when the habitats severely deteriorated, the soil nematode diversity declined sharply because the physiology and activity of most invertebrates are adversely affected when certain environmental factors exceed their tolerance levels 52 . The list of soil nematode genera in the four habitats (Table S1) also shows that many scarce genera that were present in the other habitats disappeared from the SDM. The dynamics of abundance and diversity were only partially consistent with our first hypothesis that the abundance and diversity would decrease in response to alpine wetland degradation. Changes in the soil nematode trophic structure. The abundances of all trophic groups (except for predators) were significantly lower in the SDM than in the WM, and a similar phenomenon has previously been observed in a forest ecosystem 46 . Our regression analyses showed that the abundances of the five main trophic groups were positively correlated with plant species richness, soil N and water content. The decline of these environmental factors during degradation might explain the observed reduction in the abundances of the trophic groups. Table 3. Repeated-measures ANOVA results for the effects of habitats, sampling months and their interactions on the abundance and diversity of the soil nematode community. Statistically significant (P < 0.05) results are shown in boldface (n = 24). Previous studies have found that the effects of plant community and soil characteristics on soil nematodes are trophic group-specific 16,26,28,29,35 . The abundance of predators, although correlated with total N, did not change significantly among habitats, suggesting that the impacts of alpine wetland degradation do not extend to the higher levels of the soil food web. Similarly, organisms in the higher levels of the soil food web have been reported not to respond to changes in soil C 27 . This might result from the fact that predator species have diverse prey preferences and are thus not consistently limited by a single environmental factor. Regarding the changes in the relative abundance of each trophic group, significant differences between habitats were only recorded for the omnivores and algivores. The regression analysis results also showed that only the relative abundances of the omnivores and algivores were significantly affected by plant coverage and available N, respectively. However, these two groups formed only a small percentage of the nematode community and thus contributed little to the overall pathway of energy flow through the nematode communities. These results indicate that while nematode abundances declined remarkably in response to wetland degradation, the relative abundances of most trophic groups remained stable. Therefore, we can infer that the main energy flow pathways through the nematode communities were only suppressed and not shifted during the process of wetland degradation. Consequently, our second hypothesis was not supported by our findings. In contrast, studies from other ecosystems have found that changes in soil properties 31,33 and plant community 4,15 alter the trophic structure of soil nematode communities. This may result from differences among ecosystem types. Seasonal dynamics and differences between habitats. The abundance, taxonomic richness and Pielou index varied significantly with sampling month, and the abundance and taxonomic richness were significantly affected by the interactions between habitat and sampling month.
This may be attributable to seasonal changes in precipitation or temperature that occurred within our study area (Fig. 1). Previous studies have found that precipitation can increase nematode abundance 53 , and the taxonomic richness and Shannon index of soil nematodes depend on seasonal as well as short-term variations in temperature 34 . Additionally, the six trophic groups also responded differently to sampling month according to the temporal dynamics in their abundances and relative abundances. The predators, omnivores and algivores were more sensitive to sampling month than the herbivores, bacterivores and fungivores. This may result from the different influences of the plant community and soil property variables, which differed among the sampling months. A previous study showed that soil nematodes are affected by seasonal fluctuations in soil conditions 34 . Table 4. Repeated-measures ANOVA results for the effects of habitats, sampling months and their interactions on the abundances and relative abundances of trophic groups of the soil nematode communities. Statistically significant (P < 0.05) results are shown in boldface (n = 24). The seasonal dynamics in the community structure and diversity showed that the soil nematodes in the GM and MDM were more sensitive to sampling month than those in the WM and SDM. These findings suggest a close interaction between wetland degradation and seasonal fluctuations in plant community and soil properties in shaping soil nematode communities in alpine ecosystems. The reason behind this interaction may be that some environmental factors, e.g., the plant communities and soil properties, fluctuated more with season in the GM and MDM than in the other habitats. Other studies have shown that seasonal variations in climatic and soil factors can lead to changes in soil nematode communities 50,54 . In the WM, the dominant plant species are perennial and hygrocolous, and the soil type is peat, which is less sensitive to temperature changes than the soils in the GM and MDM. Such differences indicate greater habitat stability in the WM than in the GM and MDM. At the other extreme, the SDM, with sandy soils and lower plant coverage, has fewer water-filled pore spaces. Compared with more aggregated soils, the soils in the SDM likely provide limited food resources for soil nematodes across all sampling months. This may partly explain the minimal temporal variation observed across the sampling months for the soil nematodes in the SDM. Conclusions Our results show that the composition, abundance and diversity of the soil nematode communities in alpine wetlands have been significantly affected by climate- and land-use-driven degradation. The decreases in the abundances of most nematode trophic groups showed that the main energy pathways through the soil nematode communities were suppressed by degradation; meanwhile, the changes in relative abundances showed that wetland degradation effects were fairly consistent across the most abundant nematode trophic groups, indicating no obvious shifts in the patterns of energy pathways through the soil nematode communities. The soil nematode communities in the original and severely degraded habitats were more stable across the seasons than were those in the intermediately degraded habitats, indicating that the stability of the soil nematode communities was closely related to habitat stability.
The relationships between the soil nematode communities and the measured parameters of the plant community and soil properties suggest that changes in the plant community and soil properties will have important effects on soil nematodes during alpine wetland degradation. Table 5. The partial correlation coefficients of multiple regression analyses (stepwise procedure) between soil nematode communities and environmental factors (n = 24). The superscript stars ***, ** and * indicate significant correlations at the 0.001, 0.01 and 0.05 levels, respectively.
2018-04-03T00:22:50.719Z
2017-04-12T00:00:00.000
{ "year": 2017, "sha1": "235f7af8f266ff4c5f56464462ca9b2f89bc71bf", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-00805-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "002150b4ee25305df5bf77db2d8c10202885d07b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
259864777
pes2o/s2orc
v3-fos-license
Emodin, an Emerging Mycotoxin, Induces Endoplasmic Reticulum Stress-Related Hepatotoxicity through IRE1α–XBP1 Axis in HepG2 Cells Emodin, an emerging mycotoxin, is known to be hepatotoxic, but its mechanism remains unclear. We hypothesized that emodin could induce endoplasmic reticulum (ER) stress through the inositol-requiring enzyme 1 alpha (IRE1α)–X-box-binding protein 1 (XBP1) pathway and apoptosis, which are closely correlated and contribute to hepatotoxicity. To test this hypothesis, a novel IRE1α inhibitor, STF-083010, was used. An MTT assay was used to evaluate metabolic activity, and quantitative PCR and western blotting were used to investigate the gene and protein expression of ER stress or apoptosis-related markers. Apoptosis was evaluated with flow cytometry. Results showed that emodin induced cytotoxicity in a dose-dependent manner in HepG2 cells and upregulated the expression of binding immunoglobulin protein (BiP), C/EBP homologous protein (CHOP), IRE1α, spliced XBP1, the B-cell lymphoma 2 (Bcl-2)-associated X protein (Bax)/Bcl-2 ratio, and cleaved caspase-3. Cotreatment with emodin and STF-083010 led to the downregulation of BiP and upregulation of CHOP, the Bax/Bcl-2 ratio, and cleaved caspase-3 compared with single treatment with emodin. Furthermore, the apoptosis rate was increased in a dose-dependent manner with emodin treatment. Thus, emodin induced ER stress in HepG2 cells by activating the IRE1α–XBP1 axis and induced apoptosis, indicating that emodin can cause hepatotoxicity. Introduction Mycotoxins are secondary metabolites produced by fungi and are commonly found in food and feed [1]. Emodin (1,3,8-trihydroxy-6-methylanthracene-9,10-dione) is an emerging mycotoxin produced by fungi from the Aspergillus, Penicillium, and Talaromyces genera (Figure 1A) [2,3]. By definition, an emerging mycotoxin is neither routinely determined nor legislatively regulated but whose contamination reports are increasing [3]. Emodin contamination has been reported in a wide range of foods, such as bread, fruits, vegetables, and nuts [4]. Recently, a high prevalence of emodin was reported in global finished pig feed samples (90% of 524 samples) and corn silage samples (>50% of 33 samples), respectively [5,6]. Uniquely, emodin is a compound found in plant natural products such as the dried root tuber of Fallopia multiflora, known as Radix Polygoni Multiflori (PMR), and is also known as one of its major components [7]. Emodin is an attractive therapeutic agent because of its anticancer, antiviral, anti-inflammatory, antibacterial, anti-allergic, antidiabetic, and other pharmacological effects [8,9]. PMR is a traditional Chinese medicine but has been demonstrated to induce toxicity, especially hepatotoxicity, under long-term use or high-dose conditions [10], yet the PMR components causing hepatotoxicity remain unclear. As a major component of PMR, emodin has been implicated in its hepatotoxicity [11].
Despite studies on hepatotoxicity induced with emodin, the underlying mechanism has not been fully elucidated [8,12,13,14]. One metabolomic study demonstrated the potential of emodin to disturb glutathione and fatty acid metabolism, supporting that emodin can induce hepatotoxicity [12]. Another study [8] reported that emodin induces apoptosis through the mitochondrial apoptosis pathway and generates reactive oxygen species (ROS) in HepaRG cells. Given that the generation of ROS is linked to endoplasmic reticulum (ER) stress [13,14], we hypothesized that emodin could induce hepatotoxicity through ER stress. The ER is the major organelle responsible for protein/lipid synthesis, calcium storage, and signal transduction. The process that disturbs ER function is called ER stress, and the response of cells to ER stress is the unfolded protein response (UPR), which regulates the expression of target genes to maintain cell homeostasis. ER stress is a hallmark of several diseases that can occur when the stress leads to cell death or when pathological conditions impair the ability to overcome ER stress. ER stress is associated with various responses and the pathogenesis of several diseases, including inflammation, viral infection, cancer, and metabolic disease [15]. The UPR restores ER function under ER stress conditions [16]. Pro-survival signaling is converted to pro-apoptotic signaling [17]; thus, ER stress and the induction of apoptosis are closely related. Apoptosis, also known as "programmed cell death," is characterized by several morphological features, such as cell shrinkage and chromatin fragmentation [18]. The biochemical changes in apoptosis include protein cleavage or crosslinking and DNA degradation [19]. The three main phases of ER stress-induced apoptosis are initiation, commitment, and execution. Mediators of the initiation phase are protein kinase RNA-like endoplasmic reticulum kinase (PERK), activating transcription factor 6 (ATF6), and inositol-requiring enzyme 1 (IRE1); mediators of the commitment phase are C/EBP homologous protein (CHOP), growth arrest and DNA damage-inducible protein 34 (GADD34), tribbles pseudokinase 3 (TRIB3), and the B-cell lymphoma 2 family (Bcl-2); and mediators of the execution phase are mainly caspases [17]. According to a previous study, emodin induced ER stress through the binding immunoglobulin protein (BiP)/IRE1/CHOP signaling pathway and caused ER-related apoptosis in LO2 cells [20]. However, compounds that can be used to investigate whether emodin affects the IRE1–X-box-binding protein 1 (XBP1) signaling pathway, such as an IRE1 signaling pathway inhibitor, were not used. STF-083010 (Figure 1B), a novel IRE1–XBP1 inhibitor, inhibits XBP1 splicing by affecting only the endonuclease activity of IRE1α, not its kinase activity [21]. In addition, in a recent study, LO2 cells were identified as a derivative of the cervical cancer line HeLa, similar to Chang liver cells; therefore, caution should be exercised when interpreting the data obtained using LO2 cells [22].
In this study, the gene and protein expression levels of ER stress markers, such as BiP, IRE1α, and CHOP, and apoptosis markers, such as Bcl-2-associated X protein (Bax), Bcl-2, and cleaved caspase-3, were evaluated following treatment with different doses of emodin. Furthermore, the gene and protein expression levels of ER stress and apoptosis markers were evaluated after the cotreatment of HepG2 hepatocytes with emodin and STF-083010. Therefore, we endeavored to determine whether emodin contributes to hepatotoxicity, at least in part, through the induction of apoptosis by affecting the IRE1α–XBP1 signaling pathway.
Changes in Morphological Properties of HepG2 Cells Induced with Emodin A high-content screening (HCS) assay was used to investigate the effects of emodin on the morphological changes of HepG2 cells. The size of nuclear segmentation was measured using Hoechst channels, and the distribution and amounts of mitochondria were evaluated using MitoTracker™ Deep Red (Figure 3). To confirm the excessive morphological changes in HepG2 cells induced with emodin, cells were treated with relatively high concentrations of 25, 50, 100, and 200 µM. The changes in morphological properties observed included significant dose-dependent increases in cell area and cytoplasmic area values and, conversely, significant reductions in nuclear roundness and cytoplasmic intensity values (p < 0.05). In particular, the migration of mitochondria to the outer and inner nuclear regions after emodin treatment was observed by measuring the symmetry, threshold compactness, axial, or radial (STAR) descriptors, which can confirm the mitochondrial distribution. In addition, among the spots, edges, and ridges (SER) descriptors representing the mitochondrial texture, the values of seven descriptors excluding the SER edge were significantly increased, while the value of the SER edge was significantly decreased in a dose-dependent manner (Table 1). Thus, emodin treatment affected the viability of HepG2 cells and also induced morphological changes. Table 1 note: Data are expressed as the mean ± SD from three independent experiments. a-d Means in the same row without a common superscript letter differ (p < 0.05) as analyzed with one-way ANOVA, followed by Scheffe's multiple range test. STAR: symmetry, threshold compactness, axial, or radial; SER: spots, edges, and ridges; VCON: vehicle control. The statistical power was ≥0.8 (except compactness; 0.4), which means that a non-significant result is likely to be not really significant. Changes in the Gene and Protein Expression of ER Stress and Apoptosis Markers Induced with Emodin To investigate whether emodin induces ER stress and apoptosis, changes in the mRNA and protein levels of ER stress and apoptosis markers were determined with quantitative PCR (qPCR) and a Western blot analysis (Figures 4 and 5), respectively. Tunicamycin (TM; 2 µg/mL) served as the positive control, and the mRNA levels of BiP, CHOP, IRE1α, and spliced XBP1 (sXBP1) were found to be affected by emodin treatment (Figure 4). Emodin tended to increase the relative mRNA expression level of BiP compared to the vehicle control, but the difference was not significant (Figure 4). The relative mRNA expression levels of CHOP, IRE1α, and sXBP1 were significantly increased with emodin treatment in a dose-dependent manner (Figure 4B-D). The relative protein expression of ER stress markers, such as BiP, IRE1α, and CHOP, was increased upon emodin treatment (Figure 5B-D); in particular, emodin increased IRE1α and CHOP relative protein expression in a dose-dependent manner (Figure 5C,D). Treating HepG2 cells with emodin also dose-dependently increased the Bax/Bcl-2 ratio and the relative protein expression of cleaved caspase-3, which are related to apoptosis (Figure 5E,F).
To determine whether emodin induced the formation of sXBP1, XBP1 PCR products were digested with the PstI restriction enzyme, and the digested fragments were evaluated to confirm whether emodin induces ER stress. After treating HepG2 cells with emodin, the expression of sXBP1 was dose-dependently increased and, remarkably, XBP1 digestion with PstI was decreased (Figure 5G). Collectively, these data indicate that emodin can induce ER stress and apoptosis in HepG2 cells. Evaluation of Apoptosis Induction Induced with Emodin Using Flow Cytometry Annexin V-fluorescein isothiocyanate (FITC) and propidium iodide (PI) were used to evaluate cell death and apoptosis, respectively. Cells were classified as necrotic (top left quadrant, Q1 in Figure 6A), late apoptosis (top right quadrant, Q2 in Figure 6A), live cells (bottom left quadrant, Q3 in Figure 6A), and early apoptosis (bottom right quadrant, Q4 in Figure 6A). As a result, emodin induced apoptosis in HepG2 cells in a dose-dependent manner (Figure 6B). These results indicate that emodin can induce apoptosis in HepG2 cells. Changes in the Gene and Protein Expression of ER Stress and Apoptosis Markers Induced Using Cotreatment with Emodin and STF-083010 To elucidate whether emodin induces ER stress through the IRE1α-XBP1 pathway, cells were cotreated with STF-083010 (100 µM), a novel IRE1-XBP1 inhibitor, and emodin (30 µM). A Western blot analysis was conducted to evaluate the protein expression levels (Figure 7A), and TM (2 µg/mL) served as the positive control. Cells were also cotreated with TM and STF-083010 for comparison with emodin and STF-083010 cotreatment. The relative protein expression level of BiP was decreased upon cotreatment with STF-083010 and TM or emodin compared with the expression after single-agent treatments (Figure 7B). Conversely, the relative protein expression level of CHOP was increased after cotreatment with STF-083010 and TM or emodin compared with the expression after treatment with the agents alone (Figure 7C).
The relative protein expression of apoptosis-related markers was also increased using cotreatment with STF-083010 and TM or emodin. Cotreatment of HepG2 cells with emodin and STF-083010 increased the Bax/Bcl-2 ratio and the relative protein expression level of cleaved caspase-3 compared with the expression level after treatment with emodin alone, similar to the effects after cotreatment with TM and STF-083010 (Figure 7D,E). The formation of sXBP1 was also examined. Fragments digested with the PstI restriction enzyme were evaluated, and the results confirmed that cotreatment with STF-083010 and TM or emodin reduced the formation of sXBP1 compared with sXBP1 formation induced with emodin or TM alone (Figure 7F). Discussion The present study confirmed that emodin causes ER stress and apoptosis in human hepatocyte HepG2 cells and particularly causes ER stress through the IRE1α-XBP1 axis (Figure 8). Emodin decreased the metabolic activity of HepG2 cells and increased the relative protein expression of ER stress and apoptosis markers. Furthermore, emodin caused apoptosis in a dose-dependent manner, as confirmed with flow cytometry. To determine whether the IRE1α-XBP1 pathway is involved in emodin-induced ER stress, cells were cotreated with emodin and STF-083010, followed by measurement of the protein expression of ER stress and apoptosis markers. Compared with the relative protein expression levels induced by the single treatment with emodin, BiP was downregulated, and CHOP, the Bax/Bcl-2 ratio, and cleaved caspase-3 were upregulated.
Emodin is known as an emerging mycotoxin [2] and is a major component of some natural products, such as PMR [7]. Mycotoxins generated by the contamination of natural products with fungi can cause the total amount of emodin to increase. Thus, the toxicity of emodin should be evaluated. Exposure to emodin for 24 h decreased the metabolic activity of HepG2 cells (Figure 2). According to previous studies, emodin induces cytotoxic effects on HepG2 cells with IC50 values of 32.1 µM (MTT assay, after 24 h of treatment) [23] and 19.12 µM (Cell Counting Kit-8 assay, after 72 h of treatment) [24]. The IC50 value calculated in this study (20.93 µM) is generally consistent with previous studies. Regarding the morphological changes induced with emodin, the cell area, cytoplasm area, compactness, and outer/inner membrane values were dose-dependently increased. In addition, the nucleus area, nucleus roundness, nucleus intensity, and cytoplasm intensity values were dose-dependently decreased. Distinct morphological changes appeared as the concentration increased, but most of the properties were not statistically significant at the 25 µM concentration of emodin (Table 1). Moreover, the number of cells at the same magnification decreased following treatment with emodin, and nuclear condensation and changes in the mitochondrial distribution occurred (Figure 3). The HCS data numerically show that, as apoptosis occurs, the nuclei of apoptotic cells appear somewhat smaller than nuclei in a normal state [25], and condensed and aggregated chromatin is observed as bright fluorescence due to DNA condensation [25].
A previous study reported that emodin could cause ER stress-related apoptosis through the activation of the BiP/IRE1α/CHOP signaling pathway in LO2 cells [20]. However, as LO2 cells are derivatives of the cervical cancer line HeLa, similar to Chang liver cells [22], it was necessary to investigate the hepatotoxicity of emodin using other hepatocytes. In this study, HepG2 cells were used as an in vitro liver model. HepG2 cells are used worldwide in pharmacological and toxicological research and are also known to be nontumorigenic cells with an epithelial-like morphology, a high proliferation rate, and the capability of performing liver functions [26]. Conversely, their expression of drug-metabolizing enzymes and transporters is restrained [26]. Thus, they were appropriately selected for the evaluation of hepatotoxicity induced with emodin, but it is possible that the data reported in this study may have been partially underestimated.
Treatment of HepG2 cells with emodin increased the relative mRNA expression (BiP, CHOP, IRE1α, and sXBP1) and relative protein expression (BiP, CHOP, IRE1α, Bax/Bcl-2 ratio, and cleaved caspase-3) of markers related to ER stress and apoptosis. For relative mRNA expression levels, only one reference gene (β-actin) was used for qPCR in this study. Thus, although it was recently reported that β-actin is one of the reference genes that can be used to normalize gene expression in HepG2 cells, the use of single reference genes has limitations in terms of qPCR accuracy [27]. Therefore, it is necessary to use appropriate combinations of three or more reference genes in future studies to improve qPCR accuracy. BiP, a central regulator of ER stress, controls the stasis between cell survival and apoptosis in ER stress-stimulated cells, especially through its interaction with caspases [28]. That emodin increased the relative protein expression of BiP in our study confirmed that emodin causes ER stress (Figure 5B). Among the three initiation phase mediators (PERK, IRE1, and ATF6), IRE1 is an ER transmembrane sensor that maintains ER and cell functions by activating the UPR. In particular, mammalian IRE1 promotes cell survival but causes apoptosis through the degradation of anti-apoptotic miRNA [13]. The treatment of HepG2 cells with emodin dose-dependently increased the relative protein expression level of IRE1α, showing that emodin induces ER stress through IRE1α activation (Figure 5C). In addition, activated IRE1α cleaves XBP1 mRNA, a substrate of the IRE1 RNase, subsequently forming sXBP1 [29]. A specific region within unspliced XBP1 can be digested by the PstI enzyme, which can be used to identify XBP1 that has not been cleaved by activated IRE1 [30]. In the current study, PstI enzyme digestion was used to confirm that sXBP1 was increased and XBP1 was decreased after HepG2 cells were treated with emodin (Figure 5G). CHOP plays a pivotal role in apoptosis and mediates ER stress-induced apoptosis [31]. CHOP expression does not occur under non-ER stress conditions, but when ER stress occurs, its expression increases in IRE1-, PERK-, and ATF6-dependent manners [32]. That treatment with emodin dose-dependently increased the relative protein expression of CHOP in the current study confirmed that emodin could cause ER stress-induced apoptosis (Figure 5D). In this study, the relative protein expression levels of Bcl-2, Bax, and cleaved caspase-3, all closely associated with apoptosis regulation, were measured after treatment with emodin. Apoptosis is regulated by the Bcl-2 family of genes. Bax promotes apoptosis, whereas Bcl-2 inhibits apoptosis [33]. Caspases are also important mediators of apoptosis, and caspase-3 activates the protease cascade that induces apoptosis [34]. Therefore, caspase-3 is a major marker related to apoptosis and is activated upon its cleavage [35]. Data in the present study showed that the Bax/Bcl-2 ratio and the relative protein expression level of cleaved caspase-3 were increased after treatment with emodin (Figure 5E,F). Thus, emodin can induce apoptosis by activating ER stress, especially the IRE1α-XBP1 signaling pathway.
To investigate whether emodin induces ER stress through the IRE1α-XBP1 signaling pathway, cells were cotreated with STF-083010 and emodin, followed by the detection of the relative protein expression levels of ER stress and apoptosis markers. STF-083010 is an IRE1 inhibitor that inhibits the generation of sXBP1 by suppressing IRE1 RNase activity [21]. Therefore, ER stress and apoptosis-related markers were measured to investigate how HepG2 cells were affected by cotreatment with emodin and STF-083010, which inhibited the IRE1-XBP1 pathway. The results showed that the relative protein expression level of BiP was decreased compared with that upon single emodin treatment, and the relative protein expression levels of CHOP, the Bax/Bcl-2 ratio, and cleaved caspase-3 were increased compared with those upon single emodin treatment (Figure 7B-E). These results showed the same tendency as the changes in relative protein expression with single treatment of TM (positive control; induces ER stress by inhibiting N-glycosylation of proteins, thus accumulating misfolded protein [36,37]) and cotreatment with TM and STF-083010. The same tendency was also verified in a previous study using OVCAR-3 and SKOV-3 ovarian cancer cells regarding the protein expression of ER stress and apoptosis markers following cotreatment with TM and STF-083010 [38]. Interestingly, the relative protein expression levels of CHOP, the Bax/Bcl-2 ratio, and cleaved caspase-3 were increased using cotreatment with STF-083010 and TM. A previous study [38] suggested that the activation of PERK/ATF4 might upregulate CHOP because the PERK-ATF4 axis is the major pathway that activates CHOP. It has been shown that the activity of ATF4 is increased using cotreatment with STF-083010 and TM [38], which may explain the increased CHOP expression observed in our study. Furthermore, the Bax/Bcl-2 ratio and cleaved caspase-3, which are downstream proteins, were likely upregulated by the activation of CHOP. Apoptosis induced with emodin treatment was measured with flow cytometry using annexin V-FITC and PI double staining. Emodin treatment dose-dependently increased apoptosis, indicating that emodin induces apoptosis (Figure 6B). These results support the increased relative protein expression levels of apoptosis-related markers induced with emodin treatment.
Collectively, this study investigated the effects of emodin on ER stress and apoptosis in HepG2 liver cells. Emodin led to cytotoxicity in a dose-dependent manner in HepG2 cells, suggesting that emodin may cause hepatotoxicity. The gene and relative protein expression levels of ER stress and apoptosis-related markers were upregulated with emodin treatment in a dose-dependent manner (except BiP); thus, we propose that emodin may cause ER stress and induce apoptosis. Additionally, changes in the gene and relative protein expression levels of ER stress and apoptosis-related markers after cotreatment of cells with STF-083010 and emodin demonstrated that emodin could cause ER stress through the IRE1α-XBP1 axis. Although the apoptosis-related marker expression was not statistically significantly reversed with inhibitor treatment, the relative protein expression levels (CHOP, the Bax/Bcl-2 ratio, and cleaved caspase-3) tended to increase compared to single treatment with emodin. Thus, it can be suggested that apoptosis can be regulated through the IRE1α-XBP1 axis. To the best of our knowledge, this is the only research to use an IRE1α inhibitor to show that emodin causes ER stress by activating the IRE1α-XBP1 pathway. Therefore, our findings demonstrate that emodin can cause hepatotoxicity by inducing ER stress and apoptosis. However, this study was based on an in vitro cell model; thus, additional studies are needed in another in vitro or in vivo liver model. Furthermore, because emodin is an emerging mycotoxin, additional monitoring of food and feed is likely needed. Cells and Cell Culture The HepG2 human hepatic cell line was obtained from the Korea Research Institute of Chemical Technology (Daejeon, South Korea). The cells were maintained in DMEM containing 10% (v/v) heat-inactivated fetal bovine serum and 1% (w/v) penicillin-streptomycin and incubated at 37 °C under a humidified atmosphere of 5% CO₂. Measurement of Cell Metabolic Activity HepG2 cells were seeded in 96-well culture plates at a density of 2.0 × 10⁴ cells/well and incubated for 24 h. Subsequently, cells were treated with 0, 1, 2.5, 5, 10, 15, 20, 30, 40, and 80 µM of emodin for 24 h. Emodin was dissolved in DMSO, and the final concentration of DMSO in the medium was maintained at 0.5% (v/v). Then, 20 µL of a 5 mg/mL MTT solution dissolved in PBS was added to each well, and the plate was incubated at 37 °C for 4 h. The supernatant was removed after the incubation, and the insoluble formazan crystals were dissolved in DMSO. A ThermoMax microplate reader (Molecular Devices, San Jose, CA, USA) was used to measure the absorbance at 540 nm; 0.5% DMSO was used as a vehicle control, and the data are expressed relative to the control. The IC50 value was calculated using GraphPad Prism software (version 8.0.2; San Diego, CA, USA).
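The IC50 was obtained here with GraphPad Prism; as an illustration of the underlying calculation, the Python sketch below fits a four-parameter logistic dose-response curve with SciPy. This is a minimal sketch, not the study's analysis: the viability values are made-up placeholders at the doses listed above, and the parameterization is a common convention rather than Prism's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response falls from `top` to `bottom` around ic50."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (% of vehicle control) at the emodin doses used.
conc = np.array([1, 2.5, 5, 10, 15, 20, 30, 40, 80], dtype=float)  # µM
resp = np.array([98, 95, 90, 75, 60, 52, 35, 25, 10], dtype=float)

# Initial guesses: full 0-100% range, IC50 near the midpoint dose, Hill slope 1.
popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 20.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"estimated IC50 = {ic50:.1f} µM (Hill slope {hill:.2f})")
```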
Detection of Morphological Properties (HCS Assay) HepG2 cells were seeded in collagen-coated CellCarrier Ultra microplates (PerkinElmer, Waltham, MA, USA) at a density of 2.0 × 10⁴ cells/well and incubated for 24 h. Subsequently, cells were treated with 25, 50, 100, and 200 µM of emodin for 24 h. Emodin was dissolved in DMSO, and the final concentration of DMSO in the medium was maintained at 0.5% (v/v). PBS was used to rinse the cells, and 4% paraformaldehyde (FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan) was used for 20 min to fix the cells. Then, the cells were washed twice with PBS and stained with the DNA-specific fluorescent dye Hoechst 33342 (1.1 µM) and MitoTracker™ Deep Red (100 nM; Thermo Fisher Scientific, Waltham, MA, USA) for 30 min. An Operetta High-Content Imaging System (PerkinElmer) was used to observe the cells, and images were analyzed using Harmony software (PerkinElmer). qPCR Analysis HepG2 cells (80 × 10⁴ cells) were seeded in 60 mm culture plates for 24 h and treated with emodin (10, 20, and 30 µM) for another 24 h. The total RNA of the cells was harvested using an RNeasy Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. The RNA quality was verified with the 28S/18S ratio using agarose gel electrophoresis (Figure S3) as well as the 260/230 and 260/280 nm absorbance ratios. Subsequently, cDNA was synthesized by reverse-transcribing the RNA (1 µg) using a QuantiTect Reverse Transcription Kit (Qiagen). The final PCR volume of 20 µL contained 50 ng of cDNA, primers for the target genes, and a Thunderbird SYBR qPCR Mix. Table S1 presents the sequences of the primers used. Gene expression was determined using a CFX96 Real-Time PCR System (Bio-Rad). The PCR conditions were as follows: initial denaturation at 95 °C for 3 min, followed by 40 cycles of denaturation at 95 °C for 15 s, annealing at 60 °C for 10 s, and extension at 72 °C for 30 s. The expression level of mRNA was analyzed using Bio-Rad CFX Manager software (Bio-Rad), and β-actin was used as the housekeeping reference gene for normalization. The SD and coefficient of variation (CV) of the untransformed quantification cycle (Cq) values for β-actin in three independent experiments ranged from 0.4 to 0.8 and from 2.1 to 3.9%, respectively. Melting curve analyses were performed to identify nonspecific PCR amplification (Figure S4). The ΔΔCT method was used to calculate gene expression, and the data are expressed relative to the vehicle control (0.5% DMSO). XBP1 Splicing PCR products derived from the XBP1 cDNA were digested with the PstI restriction enzyme at 37 °C for 2 h. PstI was inactivated by heating the mixture at 80 °C for 20 min after a 5 min cooling step. Mixtures of digested or nondigested PCR products and a STAR loading solution (Dyne Bio, Seongnam, Republic of Korea) were loaded onto 2.5% agarose gels and electrophoresed at 100 V for 1 h. The DNA fragments were visualized using the Gel Doc EZ Imager (Bio-Rad). Cell Apoptosis Analysis An Annexin V-FITC/PI Apoptosis Detection Kit (BD Biosciences, Franklin Lakes, NJ, USA) was used to measure cell apoptosis according to the manufacturer's protocol. Briefly, HepG2 cells (300 × 10⁴ cells) were seeded in a 100 mm culture plate for 24 h and treated with emodin (10, 20, and 30 µM) for another 24 h. After incubation, all cells were treated with trypsin, washed twice with cold PBS, and resuspended in a binding buffer. Then, the cells (10 × 10⁴) were stained with annexin V-FITC and PI for 15 min at 25 °C.
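To make the ΔΔCT normalization concrete, the short Python sketch below computes a 2^(−ΔΔCt) fold change for a target gene normalized to β-actin, as described above. The Cq values are hypothetical examples for illustration only; they are not taken from the study's data.

```python
def fold_change_ddct(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    """Relative expression by the 2^(-ddCt) method, normalized to a reference gene."""
    dct_treated = cq_target_treated - cq_ref_treated   # dCt for the treated sample
    dct_control = cq_target_control - cq_ref_control   # dCt for the vehicle control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical mean Cq values (e.g., CHOP vs. beta-actin, 30 uM emodin vs. 0.5% DMSO).
print(fold_change_ddct(24.1, 17.9, 27.0, 18.0))  # ~7-fold induction in this toy case
```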
The samples were analyzed using a FACS Aria II Cell Sorter (BD Biosciences), and 5000 events were used for each sample. Statistical Analysis The data from three independent experiments are expressed as the mean ± SD. Statistical analyses were performed using GraphPad Prism software (version 8.0.2) or SPSS statistical software (version 26.0) (Armonk, NY, USA). The data were analyzed using a Student's t-test or one-way analysis of variance (ANOVA), followed by Scheffe's multiple range test. Statistical significance was set at p < 0.05. An acceptable statistical power was considered to be 0.8. Figure 2. Effect of emodin on the metabolic activity of HepG2 cells. HepG2 cells were treated with emodin for 24 h, and metabolic activity was measured with an MTT assay. The IC50 value was determined to be 20.93 µM. Data are shown as the mean ± standard deviation (SD) from three independent experiments. * p < 0.05, ** p < 0.01 (t-test) compared with the vehicle control group. The statistical power was ≥0.8, which means that a non-significant result is likely to be not really significant. Figure 3. Morphological changes in HepG2 cells induced with emodin. Apoptotic nuclei appeared slightly smaller than nuclei in a normal state, and emodin treatment increased the mitochondrial distribution. Hoechst 33342 and MitoTracker™ Deep Red were used to stain the nuclei and mitochondria, respectively. Figure 4. Relative mRNA expression levels of (A) BiP, (B) IRE1α, (C) CHOP, and (D) sXBP1 after treatment with emodin (10, 20, and 30 µM) for 24 h. The relative mRNA expression levels (A-D) increased after treatment with emodin. TM (2 µg/mL) served as the positive control. VCON, vehicle control; TM, tunicamycin. The data are shown as the mean ± SD from three independent experiments. * p < 0.05 and ** p < 0.01 (t-test) compared with the VCON group. The statistical power was ≥0.8, which means that a non-significant result is likely to be not really significant.
Figure 5. Relative protein expression levels of (B) BiP, (C) IRE1α, (D) CHOP, (E) Bax/Bcl-2 ratio, and (F) cleaved caspase-3 after treatment with emodin (10, 20, and 30 µM) for 24 h. (A) Relative protein expression levels of ER stress and apoptosis-related markers measured with a Western blot analysis. (B-F) The relative protein expression levels were increased with emodin treatment. (G) The effects of emodin on the expression of sXBP1 cut with PstI. TM (2 µg/mL) served as the positive control. VCON, vehicle control; TM, tunicamycin. The data are shown as the mean ± SD from three independent experiments. * p < 0.05 and ** p < 0.01 (t-test) compared with the VCON group. The statistical power was ≥0.8, which means that a non-significant result is likely to be not really significant. Figure 6. (A) Flow cytometry was used to measure the apoptosis rate using annexin V-FITC and PI double staining. (B) Treatment of HepG2 cells with emodin (10, 20, and 30 µM) increased the apoptosis rate in a dose-dependent manner. TM (2 µg/mL) served as the positive control. A total of 5000 events were collected per sample. VCON, vehicle control; TM, tunicamycin. The data are shown as the mean ± SD from three independent experiments. * p < 0.05 and ** p < 0.01 (t-test) compared with the VCON group. The statistical power was ≥0.8, which means that a non-significant result is likely to be not really significant. Figure 7. Relative protein expression levels of (B) BiP, (C) CHOP, (D) Bax/Bcl-2 ratio, and (E) cleaved caspase-3 after cotreatment with EMO (30 µM) and STF-083010 (100 µM) for 24 h. (A) Relative protein expression levels of ER stress and apoptosis-related markers measured with a Western blot analysis. The relative protein expression levels of (B) BiP were decreased and (C) CHOP, (D) Bax/Bcl-2 ratio, and (E) cleaved caspase-3 were increased compared with treatment with EMO alone. (F) The effects of EMO and STF-083010 on the expression of sXBP1 cut with PstI. TM (2 µg/mL) served as the positive control. VCON, vehicle control; TM, tunicamycin; TM + STF, tunicamycin and STF-083010; EMO, emodin; EMO + STF, emodin and STF-083010. * p < 0.05 and ** p < 0.01 (t-test) compared with the VCON group. # p < 0.05 and ## p < 0.01 (t-test) compared with the single treatment of TM or EMO. The statistical power was ≥0.8 (except BiP; 0.6), which means that a non-significant result is likely to be not really significant.
Figure 8. Overview of the possible molecular mechanism of ER stress and apoptosis induced with emodin in HepG2 cells. Created with BioRender.com.

Table 1. Alteration of phenotypic marker expression induced with emodin treatment.
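As a minimal sketch of the statistical comparisons described in the Statistical Analysis section, the following Python snippet applies Student's t-test against the vehicle control and a one-way ANOVA across doses. The triplicate values are simulated for illustration and are not the study's measurements; the actual analyses were run in GraphPad Prism and SPSS.

```python
# Simulated triplicates; Student's t-test vs. vehicle control and one-way
# ANOVA across emodin doses, with significance set at p < 0.05 as above.
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(0)
vcon = rng.normal(100.0, 5.0, size=3)            # vehicle control viability (%)
doses = {10: rng.normal(85.0, 6.0, size=3),      # 10 uM emodin (illustrative)
         20: rng.normal(55.0, 6.0, size=3),      # 20 uM (near the reported IC50)
         30: rng.normal(30.0, 6.0, size=3)}      # 30 uM

for dose, vals in doses.items():
    t_stat, p = ttest_ind(vcon, vals)            # two-sample Student's t-test
    mark = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    print(f"{dose} uM: mean={vals.mean():5.1f}  SD={vals.std(ddof=1):4.1f}  p={p:.3g} {mark}")

f_stat, p_all = f_oneway(vcon, *doses.values())  # one-way ANOVA across all groups
print(f"ANOVA: F={f_stat:.2f}, p={p_all:.3g}")
```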
2023-07-15T15:08:44.232Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "aeaaafdfac250d84acedc57190c469ba8d64c0d5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6651/15/7/455/pdf?version=1689154462", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a90fefdaa2d9145047d39b21eec5fd4115e6763", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
25216687
pes2o/s2orc
v3-fos-license
Prevalence and predictors of smoking in Pakistan: results of the National Health Survey of Pakistan

OBJECTIVE We analysed data collected during a nationwide cross-sectional household survey to estimate the prevalence of and identify factors associated with smoking in Pakistan. DESIGN Population-based, cross-sectional survey [National Health Survey of Pakistan (NHSP) 1990-1994]. METHODS A population-based survey was carried out in Pakistan during 1990-1994. A nationally representative sample of 18,135 individuals aged 6 months and older was surveyed. We restricted this analysis to individuals aged 15 years or older (n=9442). The main outcome measure was self-reported smoking. Smokers were defined as individuals who reported current smoking and having smoked at least 100 cigarettes or 'beddies' during their lifetime. RESULTS Overall prevalence of smoking was 15.2% [95% confidence interval (CI), 14.5-15.9%]. It was 28.6% (27.3-29.9%) among men and 3.4% (2.9-3.9%) among women. The highest prevalence was reported in men aged 40-49 years (40.9%). The independent predictors of smoking identified in the multivariate logistic regression analysis included age, male gender, ethnicity and illiteracy. CONCLUSIONS One out of every two to three middle-aged men in Pakistan smokes cigarettes. Our findings suggest that ethnically sensitive smoking control programmes that include measures for improving literacy rates are needed in Pakistan.

Introduction

It is estimated that in the year 2000 alone, nearly half of the 4.83 million premature deaths attributable to smoking in the world occurred in developing countries, mainly among men aged 30-69 years [1]. According to the World Health Organization, if appropriate preventive measures are not taken, the number of these deaths will increase to 10 million per year by 2030, with 70% of them taking place in the developing world [2]. South Asians, irrespective of where they live, have an increased risk of cardiovascular disease (CVD), in part because they have excess exposure to established CVD risk factors including diabetes, lack of aerobic exercise and low levels of high-density lipoprotein cholesterol [3][4][5][6][7]. Smoking further increases their risk of developing CVD. In many South Asian populations, smoking has become widespread, particularly among men, less educated individuals and the poor [8][9][10]. A lack of nationally representative data in Pakistan on the burden and determinants of smoking hampers the development of a relevant and evidence-based smoking control programme. Those few studies that have looked at the prevalence of smoking have been conducted in localized geographical areas [12][13][14][15][16][17]. The National Health Survey of Pakistan (NHSP) 1990-1994 was the first nationally representative study to collect data on smoking in Pakistan [18]. Although the prevalence of smoking among the Pakistani population has been reported previously [19], independent determinants of smoking have not been studied. We analysed the NHSP data to estimate the prevalence of and identify factors associated with smoking among individuals aged 15 years or older.

Methods

The present analysis is based on data collected during a cross-sectional nationwide household survey. The NHSP was commissioned by the Pakistan Medical Research Council (PMRC) between 1990 and 1994, and designed and conducted under the technical assistance of the United States National Centre for Health Statistics.
The survey was administered to a nationally representative sample (n = 18 135 individuals aged 6 months and older) belonging to 2400 households. This analysis was restricted to individuals aged 15 years or older (n = 9442). The design of the survey was a modification of the United States' Third National Health and Nutrition Examination Survey (NHANES III). The details of the methodology used in the NHSP have been reported previously [19][20][21]. In summary, the survey used a two-stage stratified design, taking urban and rural areas of each of the four provinces of Pakistan as strata. The country was divided into 80 urban and rural primary sampling units. Thirty households were selected randomly from each unit, and all members of the household aged at least 6 months were surveyed. The overall individual response rate was 92.6%. Data collection involved the use of a questionnaire, which had been validated in local languages. Data were collected on variables including demographic, socioeconomic, and lifestyle factors. All respondents underwent a standardized physical examination by physicians. Technicians performed anthropometric examinations. Respondents were asked, 'Have you smoked at least 100 cigarettes or 'beddies' during your entire life?' Those who replied 'yes' were asked, 'Do you smoke now?' Current smokers were persons who reported current smoking and having smoked at least 100 cigarettes or 'beddies' during their lifetime. Respondents were also asked, 'Have you chewed tobacco or used snuff at least 100 times during your entire life?' Those who said 'yes' were asked, 'Do you chew tobacco or use snuff now?' Those respondents who replied that they did were recorded as current tobacco chewers/snuff users. Our definition of ethnicity was based on the mother tongue of the respondents, which is specific for each of the five major ethnic groups in Pakistan: Punjabi, Pashtun, Muhajir, Sindhi, and Baluchi. As reported previously [20,22], these groups have distinct places of origin, cultural practices and values, health beliefs, and behaviours. Furthermore, almost two-thirds of marriages in Pakistan are consanguineous, resulting in relative homogeneity within ethnic groups. Respondents' socio-economic status (SES) was defined through a count of the number of items they owned. This measure has been validated previously [23]. Literacy was defined as the ability to read.

Statistical analysis

We used SAS version 8.0 (SAS Institute Inc., Cary, North Carolina, USA) for analysis. Means and standard deviations were computed for continuous study variables, while proportions and 95% confidence intervals (CI) were computed for categorical variables. The final analysis was restricted to individuals aged 15 and above belonging to one of the five major ethnic groups. Thus, the final sample size for the multivariable model was 8328. The association between smoking and potential predictors such as age, sex, literacy status and SES was investigated using univariate logistic regression. Variables that were associated with smoking at a P-value < 0.2 in the univariate analysis or were biologically important were considered for multivariate logistic regression analysis. Factors included in the final model were age, ethnicity, sex, literacy, and urban/rural dwelling. Although socio-economic status was not significantly associated with smoking, it was retained in the final model.

Discussion

Our analysis provides the first nationally representative estimate of the prevalence of smoking amongst people aged 15 years or older in Pakistan.
The overall prevalence of smoking among individuals aged 15 years or older was 15.2%, which is comparable to that (15.8%) observed in a similar study in neighbouring India [10]. In our study, gender was the strongest predictor of smoking: the prevalence of smoking was 28.6% in men versus 3.4% in women (P < 0.001). This finding is consistent with those of several regional studies. In a study conducted in the Ghizar district in northern Pakistan, the prevalence of current smoking was 43.7% among men and 5.5% among women (aged 18 years and older) [16]. These disparities in smoking prevalence by gender are also consistent with data from India and Indonesia [24,25]. For example, in urban Delhi, 45% of men versus 7% of women smoked [25]. There are several factors that may explain the gender-specific differences in our study. First, many women might have concealed their smoking status from the interviewers for socio-cultural reasons. Second, smoking by women is not perceived as socially acceptable in Pakistan, and as a result far fewer women smoke [26]. Because of their historically lower smoking rates, women in developing countries remain a key target of the tobacco industry. Efforts to target women in developed countries have already been rewarded, because there is more smoking among teenage girls than boys in several developed countries [27]. It is feared that girls in the developing countries may follow the same trends [27]. Therefore, concrete efforts are needed to protect them from the effects of the aggressive marketing policies adopted by multinational and national tobacco companies. Our analysis also showed that smoking rates were very high among men of the most productive age group (20-59 years). Of Pakistan's 133.6 million population, 54.3 million people (27.4 million men and 26.9 million women) are in this age group. We estimated that 10.6 million people in this age group (9.6 million men and 1 million women) were current smokers. A very high prevalence in this age group, particularly among men, could result in enormous productivity losses in the country, and calls for immediate targeted smoking cessation programmes. The prevalence of smoking increased abruptly at younger ages, with rates in men jumping from 8.3% (6.4-10.2%) in the youngest group to 30.1% (27.3-32.9%) in those aged 20-29 years. It is quite possible that the relatively lower rate of smoking in teenagers is due to under-reporting. Data from the United States indicate that a significant increase in smoking rates has occurred among persons aged 18-24 years over the past few years [28]. Data from developing countries also indicate that teenage smoking is a growing problem [10,[29][30][31]. Thus, it is extremely important to target this vulnerable segment of the population in smoking cessation programmes. An important finding of our analysis was the lower smoking rates among men aged 50 years or older. Although it is possible that men in this group are giving up smoking or never taking it up, one possible explanation is that many smokers could be dying of smoking-related illnesses. Similar lower rates have also been reported in India [10]. A large study recently indicated that among males in India, smoking is associated with a quarter of all deaths in middle age (with a loss of 20 years of life expectancy). Further, a third of the deaths caused by smoking are from vascular disease [11]. The relatively early age of CVD and CVD-related deaths among South Asians compared with other ethnic groups is well documented [32,33].
For example, in India nearly half of CVD-related deaths occur below the age of 70 years, compared with developed countries, where only one in five of these deaths occurs below that age [34]. In a case-control study, we recently found that current smoking was associated with a very high risk of acute myocardial infarction (AMI) in young South Asians [35]. Further research is needed to investigate the causes of the relatively lower rates of smoking in individuals aged 50 years and above. The association between ethnicity and smoking has been reported widely [36,37]. For example, in the UK, men of South Asian origin had a higher prevalence of smoking than men in the general population, whereas South Asian women had a lower prevalence of smoking than women in the general population [37]. Our analysis showed that the prevalence of smoking varied widely among the five major ethnic groups in Pakistan. Smoking rates were higher among the Sindhis than the Punjabis. As compared with the Punjabis, the odds of smoking were lower among the Muhajirs and the Pashtuns. However, both these groups, particularly the Pashtuns, had a very high prevalence of snuff/chewable tobacco use. Our analysis showed that 20.5% of Pashtuns chewed tobacco or used snuff (data not shown). The use of snuff (naswar) is particularly common in Pakistan's Pashtun-populated areas because they are the leading growers of tobacco. Like chewing tobacco, naswar is made locally in small shops. It is far cheaper than cigarettes and readily available even in remote villages. Similarly, tobacco chewing is particularly common among Pakistan's Muhajirs (11.8%), who may well have brought this habit with them from India, where tobacco chewing is a public health problem: around 20% of people in India currently chew tobacco [10]. A sound understanding of ethnic differences in smoking rates is necessary because it provides the information needed to develop appropriate tobacco control programmes. Based on our findings, we recommend ethnically sensitive tobacco control policies in Pakistan, focusing more on chewable tobacco among the Pashtuns. Our results also indicated an independent association between place of dwelling and current smoking. Individuals living in urban areas were more likely to be smokers than rural dwellers (adjusted OR = 1.43; 95% CI: 1.23-1.67). It is interesting to note that in the univariate analysis literacy was associated with smoking (OR = 1.23, 95% CI: 1.09-1.39). However, this relationship reversed in the bivariate analysis after adjusting for male gender (OR = 0.64; 95% CI: 0.56-0.73), suggesting that the apparently higher rates of smoking in literate versus illiterate respondents were confounded by gender, the literate group containing a greater proportion of men than the illiterate group. The multivariate analyses confirmed that literate respondents were less likely (adjusted OR = 0.69; 95% CI: 0.59-0.80) to be current smokers than their illiterate counterparts. It is likely that literate people were more aware of the health hazards of smoking than their illiterate counterparts. Various studies have suggested that uneducated people are more likely to smoke than their educated counterparts [10,25]. Although the dichotomy of literacy status into literate and illiterate limits its comparability with other studies, our study has major implications for smoking control policy in Pakistan because the majority of adults aged 15 and over (43.6 million of 76.5 million) in the country are illiterate (unable to read).
Therefore, public policy for the prevention of smoking must also prioritize measures for improving literacy rates in Pakistan, and ensure that messages are accessible to illiterate people. Our analysis had the following limitation: the definition of smoking is based on self-reported smoking and not on serum cotinine levels, which are a better measure of smoking status. However, good agreement has been reported between self-reported smoking status and high serum cotinine levels [37].

Conclusions

Our analysis of large, nationally representative data provides important information on the prevalence and determinants of smoking in Pakistan and could pave the way for the development of a smoking control programme in the country. One out of every two to three middle-aged men smokes cigarettes in Pakistan. Our findings suggest that ethnically sensitive smoking control programmes that include measures for improving literacy rates are needed in Pakistan.
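As a minimal sketch of the modeling strategy described in the Methods (univariate screening at P < 0.2 followed by multivariable logistic regression reporting adjusted odds ratios), the following Python snippet illustrates the workflow on simulated data. The variable names, coefficients and data are illustrative, not the NHSP's actual fields or estimates, and the original analysis was carried out in SAS 8.0.

```python
# Simulated survey records; univariate screening then a multivariable
# logistic regression, mirroring the selection strategy described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(15, 80, n),
    "male": rng.integers(0, 2, n),
    "literate": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
})
# Simulated outcome: men and urban dwellers smoke more, literate people less.
lin = -3.0 + 2.2 * df["male"] + 0.36 * df["urban"] - 0.37 * df["literate"]
df["smoker"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

# Univariate screening: retain predictors with P < 0.2.
kept = []
for var in ["age", "male", "literate", "urban"]:
    fit = smf.logit(f"smoker ~ {var}", data=df).fit(disp=False)
    if fit.pvalues[var] < 0.2:
        kept.append(var)

# Multivariable model on the retained predictors; report adjusted ORs.
final = smf.logit("smoker ~ " + " + ".join(kept), data=df).fit(disp=False)
print(np.exp(final.params))      # adjusted odds ratios
print(np.exp(final.conf_int()))  # 95% confidence intervals for the ORs
```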
2018-04-03T02:15:51.794Z
2005-01-01T00:00:00.000
{ "year": 2005, "sha1": "eaf44d8ade2eebc443e995939db08a969190e943", "oa_license": "CCBYNC", "oa_url": "https://ecommons.aku.edu/cgi/viewcontent.cgi?article=1412&context=pakistan_fhs_mc_surg_surg", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "3a51318018b589df934a9adbb22955a561455f23", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
237855647
pes2o/s2orc
v3-fos-license
A New Approach of Soft Joint Based on a Cable-Driven Parallel Mechanism for Robotic Applications

A soft joint has been designed and modeled to perform as a robotic joint with 2 Degrees of Freedom (DOF) (inclination and orientation). The joint actuation is based on a Cable-Driven Parallel Mechanism (CDPM). To study its performance in more detail, a test platform has been developed using components that can be manufactured in a 3D printer using a flexible polymer. The mathematical model of the kinematics of the soft joint is developed, which includes a blocking mechanism and the morphology workspace. The model is validated using Finite Element Analysis (FEA) (CAD software). Experimental tests are performed to validate the inverse kinematic model and to show the potential use of the prototype in robotic platforms such as manipulators and humanoid robots.

Introduction

Soft robotics is a growing research area that has shown advantages over conventional robotics. In this area, highly adaptive robots have been developed for soft interactions, providing greater safety, for example in human-machine interaction. The compliance and adaptability of soft structures are used for better efficiency and a greater ability to interact with the environment [1]. Soft robotics is a new solution that covers the unmet need to perform tasks in unstructured and poorly defined environments, where conventional rigid robotics mainly seeks to be fast and accurate. The advantages of soft robots allow for a wide variety of applications. However, this requires a paradigm shift in the methods of modeling, operation, control, materials and new designs to develop soft robots. The deformation property of soft robots is a restrictive element when using many of the most common conventional rigid sensors or other conventional control techniques [2]. Soft robotics is a subdomain of what is known as continuum robotics, defined by [3] as those robots with an elastic, continuously flexing structure and infinitely many degrees of freedom (DOF), which are related to (but distinct from) hyper-redundant robots, consisting of a large but finite number of short, rigid links [4,5]. These models are usually more complex than traditional robot models, which have a small number of rigid links. The incorporation of soft robotics into robotic systems comes mainly with two types of approaches [6]. One approach involves the use of compliant joints between the different rigid links of the robot, while in the other approach continuous soft robots are used, such as those mentioned above. This article explores this last type of design. Continuum soft robotic arms show features of soft robotics such as adaptability, high dexterity, and conformability to the external environment. However, they often cannot achieve the high rigidity and robustness required to handle objects or higher loads. Therefore, it is necessary to find a solution capable of providing the robustness of rigid arms and the versatility of soft ones when handling loads throughout the positioning range in 3D space, while maintaining the advantages of a soft nature. Furthermore, the proposed joint is scalable and adaptable to operational requirements in a modular and simple way. Therefore, joint properties, such as the maximum bending angle or the blocking behavior, can be configured by modifying the morphological design and number of the links in the joint, or the distance between them, as well as by increasing the number of DOF by concatenating joints.
Finally, this proposal is a low-cost construction, primarily produced by 3D printing and actuated by three motors that vary the length of tendons. The tendons are integrated within the morphology itself, which favors constant curvature and simplification of the model. Electromechanical actuation is proposed for the joint, as opposed to other energy sources such as pneumatics or hydraulics. This feature allows portability of the prototype and greater integrability into any system (a robot, a humanoid, etc.), as well as more precise control and easier maintenance. The rest of the paper is organized as follows: Section 2 introduces the soft joint design and prototype. It also shows its geometric design and includes the analysis of its characteristics and configurations. The section also shows the performance and assembly of the prototype and examines the properties of the material chosen for the joint morphology. Section 3 introduces the description of the mathematical model developed for the soft link, considering its workspace and the tendon length ratio that enables performance. The experimental tests carried out with the platform are described in Section 4, where the behavior of the soft joint is analyzed against different inputs and movements using two different tests. The discussion of the experimental results is presented in Section 5, and Section 6 concludes by highlighting the main achievements. This work is under a licensing process and the patent details are given in Section 7.

Design and Prototype of the Soft Joint

This section presents in detail the design and prototype of the soft joint.

Geometry

The soft joint has an asymmetrical morphology that allows its end tip to be positioned in the three-dimensional environment, robustly supporting high loads during its performance. Its design provides greater flexibility and a wider range of movement than a rigid joint. It consists of a series of links with asymmetrical prism morphology and circular section pitch. A triangular morphology is represented in Figure 1. The small section and soft nature of the central axis allow a greater bending capacity in all directions. The asymmetrical prismatic section provides the blocking property and a natural protection, as well as the routing of the tendons for their actuation. The design performance is achieved by tendons that are routed through the asymmetric prismatic sections, as shown in Figure 2. It is possible to change the morphology of the prism and route the tendons through different points of these sections. This change would vary the forces and moments the joint is subjected to, thereby producing different kinematics and dynamics. By acting on the tendons, the joint can flex and orientate with two DOF. One of the novel characteristics of this design is the natural morphological protection of the joint against large loads provided by the proposed asymmetrical morphology. For the triangular morphology, there are two different configurations of extreme load:

• Configuration 1: Flexion towards one of the vertices of the triangle.
• Configuration 2: Flexion towards one of the edges of the triangle.

In configuration 1, protection when turning in the direction of one of the vertices is the most restrictive, as shown in Figure 3a. In the case of excessive bending, caused by high loads at the end of the joint or by control failures, the vertices contact each other.
This produces a blocking curve of the structure that protects the joint from possible breakage due to wear or to exceeding its elastic limit. This protection allows the joint to act with robustness and safety, especially in the regions of maximum flexion. In this configuration, the actuation is achieved by a single tendon, which is routed through the vertices that form the bending curve. Configuration 2 allows larger flexion of the joint, compared to Configuration 1, while also maintaining the natural protection of the joint. When the flexion is towards one of the edges of the triangle, the blocking curve has a smaller radius, as shown in Figure 3b. This is because the edges are closer to the central axis of rotation, as can be seen from the distance ratio d1 < d2 in Figure 1c. A larger bending occurs because a larger bending angle is necessary before these edges contact each other and lock the joint structure. In this configuration, the performance is achieved by the action of the two tendons that form the edge of the triangle where bending occurs.

Actuation

As mentioned above, there are several ways to operate soft robots. This paper focuses on operation by tendons of variable length using a winch coupled to a motor shaft. Tendon lengths must be translated into motor angular positions. Lo = 0.2 m is the length of the tendons when the joint is at the rest position, and Li is the target tendon length. The linear displacement is transformed into an angular displacement through the length of the arc formed by the circumference of the winch for a certain angle (Figure 4), following the equation below:

Ω = (Lo − Li) / R    (1)

where R is the radius of the winch on which the tendon is wound or unwound, in this case 9.3 mm, and Ω is the angle that provides that displacement.

Prototype

To test the soft joint operation, a test platform was designed. The goal is that the rest position of the joint is horizontal. Three motors operate the joint by tendons, each motor winding one of the three tendons (Figure 5). The fixing base is made up of two 3 mm thick metal plates, strong enough to support the test loads. The motors used for the drive are Maxon EC-max 22, controlled by Technosoft iPOS 4808 MX intelligent drives, which communicate with the PC via CAN bus. Connecting elements were printed in PLA (polylactic acid) on Creatbot600 pro and Zmorph 3D printers. They comprise two bases fastening the soft joint to the metal base, a platform holding the electronic elements, three motor mounts for the metal platform, and three winches attached to the motor shafts; the tendons, made of polyester thread, actuate the joint. The designed soft joint was built by 3D printing from NinjaFlex using a Creatbot600 pro printer (Figure 6).

Material Properties and Tests

One of the most important features when prototyping a soft robotic joint is the choice of material. This design uses NinjaFlex® 3D Printing Filament, a flexible polyurethane material for Fused Deposition Modeling (FDM) printers. This 3D printing manufacturing method and this material were chosen for their ease of use and for allowing variations in the infill percentage or patterns of the soft joint body. The mechanical properties of this material make it a good choice for the purpose of the prototype (Table 1). Its flexibility allows the joint to bend but, at the same time, it is rigid enough to prevent large deformations and resist loads.
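As a minimal worked check of Equation (1), using the stated rest length Lo = 0.2 m and winch radius R = 9.3 mm; the sign convention (positive Ω winds the tendon in) is an assumption.

```python
import math

L_O = 0.200   # tendon length at rest (m)
R = 0.0093    # winch radius (m)

def tendon_to_omega(l_i: float) -> float:
    """Winch angle (rad) that sets a tendon to length l_i, per Equation (1)."""
    return (L_O - l_i) / R

# Example: shortening one tendon by 10 mm requires about 1.075 rad (61.6 deg).
omega = tendon_to_omega(0.190)
print(f"Omega = {omega:.3f} rad = {math.degrees(omega):.1f} deg")
```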
The soft joint design was analyzed in SolidWorks software, applying a non-linear finite element study of the material. The prototype was modeled as a simple cantilever beam (one of its ends is fixed and a force is applied to its free end). This allows efficient testing of the design under stresses and strains. To simplify the simulation, the joint was assumed to be a completely filled solid except for the inner channel, and to simulate the assembly of the real prototype, the soft joint model was assembled including its two support pieces, one at each end. After the design phase, the prototype was 3D printed using NinjaFlex material with 30% infill. The experiments were performed with this specific prototype. The model in SolidWorks was tested under different conditions. First, a no-load test was performed on the soft joint, simulating only gravity and fixing one of the ends, as shown in Figure 7, with the red arrow representing the orientation of the gravity action in the simulation. One intended use of this soft joint is as a manipulator able to support different loads. Therefore, a second simulation was carried out with a rectangular prism with a fixed mass of 500 g, homogeneously distributed. This prism represents the weight of the robot gripper in the simulation (Figure 8). In addition, a 10 N downward force is applied to the end effector, simulating an external weight of 1 kg and causing a higher end torque. The simulation shows a deflection of 7.38° and a maximum stress of 0.75 MPa. Additionally, another stress study was carried out to check that the yield strength of NinjaFlex is not exceeded. When a 60 N force was applied at the end of the soft joint, as shown in Figure 9, a bending angle of 60° was reached and the maximum stress was 2.9 MPa. Therefore, no permanent deformation occurs when the soft link reaches an inclination angle of 60°.

Mathematical Model of the Soft Link

The position of the soft joint is defined as the combination of orientation and inclination, where inclination is the curvature angle of the joint, and orientation is the angle of the plane perpendicular to the base that contains that curvature. The joint achieves two DOF of flexion from the three tendons; thus the position depends on the tendon lengths and their combination. Therefore, a mathematical model of the joint has been created to obtain the theoretical tendon lengths required for a specific position of the end of the joint. The orientation angle is assumed to be zero when it coincides with the Y axis, and the actuators are numbered counterclockwise as this angle increases (Figure 10a).

Calculation of Tendon Lengths

The robot inputs are one inclination value, θ, and one orientation value, ψ, and the outputs are the tendon lengths. Inverse kinematics was used to calculate the tendon lengths for the target end position. It is important to point out that, unlike works such as [27] or [25], this design does not have the tendons in the open air; the tendons are embedded within the morphology of the soft joint itself. As a result, the tendons are not straight but follow the curvature of the soft joint. Therefore, the tendon lengths Li form arcs between both ends of the joint (Figure 10b).
Thus, the tendons and joint are considered robots shaped by continuously bending actuators, such as those described in [30,31], where pneumatic actuation is usually used, treating the joint curvature and tendon curvature as a continuous, constant curvature. The equations shown in [3] are adapted to this specific morphology. An angular-curved approach is used, with the inclination and orientation parameters. The lengths of the tendons Li depend on both the inclination and orientation angles. The length of the joint, L, remains constant in its central fiber at all times, regardless of the curvature; and the distance, a, of the tendons from the center of the joint section remains constant too (Figure 10b). For this morphology, a measures 0.035 m and L measures 0.2 m. The actuator for tendon 1 is placed at ν1 = π/2 radians, tendon 2 at ν2 = 7π/6 radians and tendon 3 at ν3 = 11π/6 radians. As previously discussed, L, the central fiber length of the soft joint, is constant independently of the inclination angle. Tendon lengths are calculated through the arc equations, due to the assumption of constant curvature. The radius r of the curvature L is determined as L = r · θ, where θ has a value in radians. As the central fiber and tendons move, they move in the direction given by the angle of orientation, and by projecting the arcs and radii, the representation in Figure 11 is obtained. Therefore, Li can be determined as Li = ri · θ, where ri = r − a · cos(νi − ψ), resulting in the following equations:

Li = (r − a · cos(νi − ψ)) · θ = L − a · θ · cos(νi − ψ)

Hence, φi is the angle between the orientation, which is the plane containing the curvature, and the plane of tendon location i. This angle φi depends on the configuration of the orientation and the number of actuators. The relationship of each tendon with the orientation is as follows:

φi = νi − ψ

A generic equation is obtained for the lengths:

Li = L − a · θ · cos(φi)

Figure 11. Representation in the perpendicular view of the orientation plane formed by the orientation angle ψ and an inclination angle θ, showing the projection of the radii of the constant curvature of the soft joint. The central fiber curvature L and its corresponding radius r are represented in blue. The arcs of tendons Li are represented by dashed black lines, and their corresponding radii ri by continuous black lines. The difference between r and ri is represented by a red line whose distance for each tendon is given by a · cos(νi − ψ).

Calculation of the Blocking Angle

The proposed morphology is designed with a blocking mechanism that protects or strengthens it at certain angles of inclination and orientation, and this must be parameterized in the kinematics. The angle of inclination at which blocking occurs depends on the space between the triangular sections, where Hs is the height of the contact point above the bending center of the link, and Ds is the distance of the contact point from the bending center of the link, as shown in Figure 12. However, this distance Ds is not a constant parameter, as it would be if the sections were circular. In this asymmetric triangular design, the blocking angle depends on the distance Ds, which varies according to the orientation, taking a maximum value when the contact points are the vertices of the triangle and a minimum value when the contact points are the centers of the edges of the triangle. From the values Hs and Ds, the angle α is obtained as:

tan(α) = Hs / Ds

This angle is formed as the bisector of the blocking angle.
The blocking angle of a link, β, is given as double the angle α; it is obtained from the following equation:

β = 2 · α

Hs has a fixed value (in our case, 8 mm) while Ds varies according to the orientation. To calculate Ds, we estimated the maximum, Ds,max, and minimum, Ds,min, possible distances with this morphology (40 mm and 25 mm, respectively), and the angle between them, ψdif = 60°. Knowing the orientation angles where the maximum and minimum occur, Ds can be parameterized according to a factor such that:

f = (Ds,max − Ds,min) / ψdif

Based on this factor, we know how the distance varies between the minimum and the maximum for each degree of orientation. Once the theoretical blocking angle, β, is estimated for each link according to the orientation, we can calculate the final joint angle, Γ, when blocking occurs. The final angle depends on the number of links within the joint, N, such that:

Γ = N · β

Representation of the Workspace

The joint kinematics will block angles greater than the total blocking joint angle, creating an asymmetric workspace. The X, Y and Z axes represent the soft joint final position in meters. The soft joint fixed base is at position [0, 0, 0]. The maximum Z value is 0.2 m when the joint is at rest. As the soft joint flexes, the Z value decreases. The X and Y values are the projection of the joint end position on the base plane. They are zero at the resting position, and change with flexion. Therefore, the designed soft joint does not reach the same maximum bending angle in every orientation plane; plotting the end positions for the bending angles within one plane gives a curve of * marks, and doing this for different planes produces a 3D mesh of * marks. The surface of a non-complete sphere is obtained, as seen in Figures 13 and 14. This shows where the end will be and how the soft joint will move with respect to the fixed base.

Representation of Variations in Tendon Lengths

Once the tendon distances are adjusted to the joint kinematics, with the blocking angle restrictions, the distance changes for each tendon can be represented as the inclination and orientation input angles vary. Figure 15 shows the tendon lengths according to inclination and orientation variations, under the restrictions imposed by the design morphology, for 0 to 359° of orientation and 0 to 170° of inclination, with the final length in meters. These graphs show how each tendon Li varies according to inclination and orientation. The higher the inclination, the higher the variation of tendon length with changes of orientation. For a fixed inclination, when the orientation changes, as in a rotational movement, the tendon length increases and decreases in a sinusoidal shape, with the orientation corresponding to a maximum, a minimum or the initial length value. Due to the soft joint blockages, from certain degrees of inclination the variation of tendon lengths is not sinusoidal, and, for certain orientation angle ranges, the length remains fixed.

Direct Kinematics

A direct kinematics is also provided through the works collected in [3]. This kinematics allows us to know the inclination and orientation for the input values L1, L2 and L3. These equations assume that the curvature is constant throughout the flexible body.

Simulation of the Model

Using the above equations, the mathematical model can be represented by simulation. From the inclination and orientation inputs, the inverse kinematics is computed, and the linear displacement of the tendons is calculated. Those values are turned into an angular displacement for each motor. The motor encoders can be used as sensors to measure the real angular motor position and close the control loop.
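A minimal sketch of the chain from the (θ, ψ) inputs to the motor targets, combining the generic length equation, the blocking clamp, and Equation (1). Because the printed equations were lost in extraction, two elements here are assumptions: the relation tan(α) = Hs/Ds for the bisector of the per-link blocking angle, and N = 5 links; the vertex directions are taken at 90°, 210° and 330°, consistent with the observations reported for Test 2 below.

```python
import math

L, A, R = 0.200, 0.035, 0.0093                         # fiber length, tendon offset, winch radius (m)
NU = (math.pi / 2, 7 * math.pi / 6, 11 * math.pi / 6)  # tendon plane angles (rad)
H_S, D_MIN, D_MAX = 0.008, 0.025, 0.040                # contact geometry (m)
N_LINKS = 5                                            # assumed number of links

def d_s(psi: float) -> float:
    """Contact distance Ds(psi): maximal at the vertices (90, 210, 330 deg)."""
    deg = math.degrees(psi) - 90.0              # vertices assumed at 90 deg + k*120
    frac = abs((deg % 120.0) - 60.0) / 60.0     # 1 at a vertex, 0 at an edge
    return D_MIN + (D_MAX - D_MIN) * frac

def inverse_kinematics(theta: float, psi: float) -> tuple:
    """Tendon lengths (m), with the inclination clamped at blocking (Gamma)."""
    gamma = N_LINKS * 2.0 * math.atan(H_S / d_s(psi))  # total blocking angle
    theta = min(theta, gamma)                          # Algorithm-1-style clamp
    return tuple(L - A * theta * math.cos(nu - psi) for nu in NU)

def to_omega(lengths: tuple) -> tuple:
    """Target motor angles (rad) from tendon lengths, via Equation (1)."""
    return tuple((L - l_i) / R for l_i in lengths)

lengths = inverse_kinematics(math.radians(60), math.radians(90))
print(["%.4f" % l for l in lengths], ["%.3f" % w for w in to_omega(lengths)])
```

With these assumed values, bending towards an edge blocks near 177° of total inclination and bending towards a vertex near 113°, which is at least consistent with the 0 to 170° inclination range quoted above.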
The motor models are represented as a transfer function using the values from the motor datasheet. Following a general control diagram, where K is the motor speed constant in rpm/V and τ is the mechanical time constant in seconds, we obtain the transfer function G(s) [32], such that:

G(s) = K / (τ · s + 1)

For the simulation, a control loop is created in Simulink (Matlab), in which the input values are entered interactively (Figure 16). The tendon lengths for these inputs are obtained through a Matlab function designed from Equation (9), called "Inverse Kinematics" (Algorithm 1), which clamps the requested inclination whenever the per-link share exceeds the blocking angle (i.e., when β < θ/N). The three values of Li returned by the inverse kinematics block are used to obtain the target Ω (the target angular position of the motors), using the "Li to Omega" function block described by Equation (1) (Algorithm 2). From these target Ω values, the motor control loops return the current Ω values. The direct kinematics is performed using the "Direct Kinematics and 3D representation" function block defined by Equations (14) and (15) (Algorithm 3). The current inclination and orientation of the free end are thereby obtained through the simulation. This function block also provides the position of the simulated soft joint represented in 3D space (Figure 17).

Experimental Tests

The soft joint assessment is performed through two types of experimental tests. These tests allow us to evaluate the motion performance and the accuracy of the kinematics model, based on the error between the target end position and the real end position of the soft joint. A video showing the performance of these tests can be viewed at https://vimeo.com/537605947 (accessed on 10 May 2021). Data were collected from the tests in two ways. Position data from the motor encoders provided information on inclination and orientation through the direct kinematics. Yaw, roll and pitch data from the 3DM-GX5-10 inertial sensor (IMU) were transformed into inclination and orientation data for comparison with the references.

Test 1

Test 1 consists of a bending movement towards a fixed inclination angle, in each of the four orientations: 0°, 90°, 180° and 270°. This test shows how the joint starts in a resting position, performs the action and then returns to the resting position before it bends at the next orientation. The resting position is 0 degrees of inclination and orientation. Tests were performed for 30°, 45° and 60° inclination and the results are shown in Figure 18 for the encoder data and Figure 19 for the sensor data.

Test 2

Test 2 consists of a 360° rotation at a given inclination. This rotation starts in a resting position and is performed by increasing the orientation value by one degree every 0.1 s, starting from 0°. When the rotation is complete, the joint returns to the resting position. The test was performed for 30°, 45° and 60° inclination and the results are shown in Figure 20 for the encoder data and Figure 21 for the sensor data.

Discussion

Simulations and experimental tests have been performed to analyze and validate both the design and the proposed model for the cable-driven soft joint. The simulation results allow the validation of the soft joint through a finite element study. The soft joint was simulated by applying a load of 60 N, which would be the maximum force expected for this prototype.
This made it possible to validate the joint structure, ensuring that when maximum loads are applied, the structure does not exceed its elastic limit and does not lose its elasticity. The experimental tests performed show the behavior of the soft joint system in different situations. Test 1 explores the behavior when reaching a target position from the resting position and when returning to the home position. It is a movement in which the inclination changes while the orientation remains fixed. Test 2 explores the ability to maintain a fixed inclination while gradually varying the orientation.

Results Using the Encoder Sensor

The inclination results obtained from the encoder during Test 1 show that the experimental inclination reaches the reference inclination, and this is repeated for each of the four requested orientations. We also observed that the higher the requested reference, the longer it takes to reach it. For the orientation results, the orientation reference is a set of four steps of different sizes. The first is a step of zero amplitude, and the experimental orientation is quickly reached. This is because, from the zero-degree inclination position (fully extended joint), reaching any orientation is almost immediate. When the joint is requested to return to the resting position, the experimental orientation remains constant. Meanwhile, the inclination decreases, and when it reaches zero, the orientation reaches zero too. This is why, in this test, the orientation values change so quickly back to zero degrees and the time between the reference orientation and the experimental orientation reaching zero is longer.

Results Using the Inertial Sensor

The data obtained from the inertial sensor show more accurately the real behavior of the end of the soft joint. The inclination results in Test 1 show that the position of the joint does not reach the reference inclination. The 90° orientation test (downwards, in the direction of gravity) is the one that presents the lowest errors when tracking the reference. The tests for 0° and 180° orientation angles show higher tracking errors, as shown in the attached video. The kinematics designed for these positions assumes that the length of tendon 1 (lower motor) should not change. These theoretical results are not fulfilled experimentally because the tendons are not perfectly tensioned, and the two upper tendons cause the position to rise. This rise is reflected in the orientation, which has a negative phase shift when the reference is 0° and a positive phase shift when the reference is 180°. We also observed that the orientation results do not reach the zero position when the reference is zero. This is because it is difficult to move the orientation to zero when the inclination is not exactly zero on returning to the resting position, as the inclination graphs show. This causes a slight inclination while maintaining the same orientation. As discussed above for the encoder data, the orientation is very sensitive to the inclination. For the sensor results in Test 2, the inclination graphs show that the experimental inclination does not reach the reference value. However, it should be noted that it has a sinusoidal behavior over time.
As in the previous test, the reason for both is that the theoretical behavior of the joint is not the same as the real behavior, because the model assumes aspects such as a continuous curvature, and because there are also other influential mechanical aspects, such as the precision of the tendon lengths or the tendon winding in the winches. This undulatory behavior is observed again in the orientation graphs. However, it can be seen that for the angles 90°, 210° and 330° the orientation does not vary, which coincides with the vertices of the soft joint morphology. For these angles, the inclination is maximum. Moreover, when one of the vertices is passed, the opposite tendons cause the variation of orientation, and it takes a little time to change from unwinding to rewinding. This can be seen in the attached videos for this test.

Conclusions

This work presents a novel approach to soft robotics with the design of a flexible and compact soft joint. It is not only a low-cost prototype, assembled by 3D printing; it also has a morphology that allows better handling of external loads and gravity thanks to its blocking configuration. Actuated by tendons, the proposed design has a morphology with two main configurations of flexion, which provides more versatility and a flexion limit, unlike previous designs. These characteristics and configurations can be modified through the parameters of the joint morphology, to achieve different workspaces and functionality. A mathematical model of the inverse kinematics of the soft joint is also presented to obtain the length of the tendons as a function of the morphology and the position (orientation and inclination) of the end of the joint. The modeling of the soft morphology is a complex task, but a simplified and sufficiently accurate kinematic model has been shown. For its validation, the soft link prototype has been built and simulation and experimental studies have been carried out. According to the capabilities of the solution described and demonstrated throughout the paper, the soft joint proposed in this work shows an improvement over other designs and could be used for many different applications requiring the manipulation of loads. Our main application will be the use of this joint as an arm for the humanoid robot TEO, so that the robot can perform manipulation tasks with the use of a gripper connected to the arm tip. There are several uncertainties and mismatches that affect the model of the prototype, especially since this is a low-cost 3D printed solution. For instance, the curvature of the real model is not constant, the tension and length of the tendons are not exact, and small variations in the radius of the winches occur after several turns. Despite these facts, the proposed model is accurate enough to represent the kinematics of the system and will allow later closed-loop control of the soft joint. Further research will aim at reducing these inaccuracies and prototyping effects and at closing the control loop and testing the platform with different loads during manipulation interactions.

Patents

The technology presented in this paper is under a patent licensing process. A patent entitled "Eslabón para articulación blanda y articulación blanda que comprende dicho eslabón" ("Link for soft articulation and soft articulation comprising such link") with reference number P202030726 (register number 5349) has been presented to the Oficina Española de Patentes y Marcas-OEPM (Spanish Patents Office) (5 July 2020).
Funding: The research leading to these results has received funding from the project Desarrollo de articulaciones blandas para aplicaciones robóticas, with reference IND2020/IND-1739, funded by the Comunidad Autónoma de Madrid (CAM) (Department of Education and Research), and from RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, Fase IV; S2018/NMT-4331), funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
2021-09-01T15:11:12.052Z
2021-06-23T00:00:00.000
{ "year": 2021, "sha1": "806bc38cef8bca18c0823b0c2e5fa4418e700cd2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7390/9/13/1468/pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "f7566544d0153025ec10352adc055550091889a1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
2491077
pes2o/s2orc
v3-fos-license
Characterization of the catalase-peroxidase KatG from Burkholderia pseudomallei by mass spectrometry

The electron density maps of the catalase-peroxidase from Burkholderia pseudomallei (BpKatG) presented two unusual covalent modifications. A covalent structure linked the active site Trp111 with Tyr238 and Tyr238 with Met264, and the heme was modified, likely by a perhydroxy group added to the vinyl group on ring I. Mass spectrometry analysis of tryptic digests of BpKatG revealed a cluster of ions at m/z 6585, consistent with the fusion of three peptides through Trp111, Tyr238, and Met264, and a cluster at m/z approximately 4525, consistent with the fusion of two peptides linked through Trp111 and Tyr238. MS/MS analysis of the major ions at m/z 4524 and 4540 confirmed the expected sequence and suggested that the multiple ions in the cluster were the result of multiple oxidation events and transfer of CH3-S to the tyrosine. Neither cluster of ions at m/z 4525 or 6585 was present in the spectrum of a tryptic digest of the W111F variant of BpKatG. The spectrum of the tryptic digest of native BpKatG also contained a major ion for a peptide in which Met264 had been converted to homoserine, consistent with the covalent bond between Tyr238 and Met264 being susceptible to hydrolysis, including the loss of the CH3-S from the methionine. Analysis of the tryptic digests of hydroperoxidase I (KatG) from Escherichia coli provided direct evidence for the covalent linkage between Trp105 and Tyr226 and indirect evidence for a covalent linkage between Tyr226 and Met252. Tryptic peptide analysis and N-terminal sequencing revealed that the N-terminal residue of BpKatG is Ser22.

INTRODUCTION

The heme-containing catalase-peroxidases are bifunctional enzymes that degrade hydrogen peroxide either as a catalase (2H2O2 → 2H2O + O2) or as a peroxidase (H2O2 + 2AH → 2H2O + 2A•). The catalatic reaction, with a more rapid turnover rate, dominates over the peroxidatic reaction, and the in vivo peroxidatic substrate remains unidentified, suggesting that the main role of the enzyme is the removal of H2O2, preventing the formation of highly reactive and damaging breakdown products of H2O2. However, the enzyme has a close sequence resemblance to plant peroxidases (1,2), and it remains a possibility that the peroxidatic reaction has a metabolic significance outside of degrading H2O2. Indeed, it is clear that the catalatic function evolved as an adaptation of the peroxidatic function, because the simple change of a tryptophan to a phenylalanine in the distal heme pocket reduces the catalatic activity (of E. coli HPI) 1000-fold and increases the peroxidatic activity 3-fold (3,4,5). Furthermore, the core structure of both the N- and C-terminal domains of the catalase-peroxidases from Haloarcula marismortui and Burkholderia pseudomallei closely resembles the structure of plant peroxidases (6,7). Finally, the conversion of isoniazid (INH) into its active antitubercular form by KatG of Mycobacterium tuberculosis is clearly a result of the peroxidatic reaction, using INH as a substrate that must mimic the actual in vivo substrate. The structures of the catalase-peroxidases from H. marismortui and B. pseudomallei have been reported (6,7) and have revealed several features that are, so far, unique to this class of enzyme. Present in both structures is an unusual adduct or covalent linkage among the side chains of a tryptophan, a tyrosine and a methionine (Fig 1).
The likely mechanistic significance of these features is considered in the Discussion.

The oligonucleotides CTGTTCATCAAAATGGCATGG (AAA encoding Lys in place of Arg 108), CGCATGGCATTTCACAGCGCG (TTT encoding Phe in place of Trp 111), and TTCGCGCGCCTGGCGATGAAC (CTG encoding Leu in place of Met 264) were purchased from Invitrogen. They were used to mutagenize a 600 bp fragment from pBG306 generated by KpnI-ClaI restriction, following the Kunkel procedure (11), and the fragment was subsequently reincorporated into pBG306 to generate the mutagenized katG gene. All sequences were confirmed by the Sanger method (12) on double-stranded plasmid DNA generated in JM109. Subsequent expression and purification were carried out as described previously (3,8). The catalase and peroxidase specific activities of the variants compared to the native BpKatG are summarized in Table 1.

Catalase, peroxidase, protein and spectral determination. Catalase activity was determined by the method of Rørth and Jensen (13) in a Gilson oxygraph equipped with a Clark electrode. One unit of catalase is defined as the amount that decomposes 1 µmol of H2O2 in 1 min in a 60 mM H2O2 solution at pH 7.0 at 37 °C. Protein was estimated according to the methods outlined by Layne (14). Peroxidase activity was determined by the method of Smith et al (15). One unit of peroxidase is defined as the amount that decomposes 1 µmol of ABTS (3-ethylbenzothiazolinesulfonic acid) in one minute at 20 °C. Absorption spectra were obtained using a Milton Roy MR3000 spectrophotometer. Samples were dissolved in 50 mM potassium phosphate, pH 7.0. The N-terminal sequence was determined by the Proteomics facility at the Institut de Biotecnologia i Biomedicina (UAB).

Mass spectrometry analysis

For mass spectrometry, protein was dialysed into 5 mM ammonium acetate. The intact proteins were analysed by electrospray ionization in an orthogonal time-of-flight mass spectrometer (16,17). The declustering voltage was varied in order to assess the stability of the protein-heme complex. Digests of the proteins were prepared using TPCK-treated trypsin, and these were analysed on the Manitoba/Sciex prototype MALDI QqTOF instrument (18). Initial analysis was done with equal volumes (0.5 mL) of digested protein and 2,5-dihydroxybenzoic acid (DHB, 160 mg/mL in water:acetonitrile 3:1, 2% formic acid) spotted onto a custom target. Digests were also fractionated by liquid chromatography and eluted with a linear gradient of 1-80% acetonitrile (0.1% TFA). Column effluent (4 mL/min) was collected at 1 min intervals by hand. Under the conditions used, the vast majority of tryptic fragments were eluted in 40 min. These were spotted onto the target as above.

MS characterization of the Trp-Tyr-Met adduct in BpKatG

The existence of a covalent structure linking the side chains of Trp 111, Tyr 238 and Met 264 in BpKatG (Fig. 1) was originally deduced from the electron density maps derived from crystals of both HmCPx (6) and BpKatG (7). In order to confirm the existence of such an unusual structure, the peptide mixture generated by trypsin digestion of BpKatG was analyzed by mass spectrometry. Each of the key residues in the structure is located on a separate tryptic peptide fragment, and the absence of these fragments combined with the presence of larger fragments of appropriate mass would confirm the presence of the adduct (Fig. 2). Some of the peptides identified by MALDI mass spectrometry from both BpKatG and its W111F variant are listed in Table 2.
Significantly, the expected fragment at m/z 1179 (containing Trp 111) is completely absent from the BpKatG spectrum, but the equivalent ion is present in the spectrum of the W111F variant. The spectrum of native BpKatG also contained ions at m/z 2062 and m/z 2092 for a peptide in which a residue X replaces Met 264 (Table 4). Assigning X as homoserine, which would arise from hydrolysis of the Tyr-Met covalent bond (Fig 4) and which has a mass 30 Da less than that of Met, explains the mass differential between the ions at m/z 2062 and m/z 2092. In addition, this represents further indirect evidence for the covalent link between Tyr 238 and Met 264.

MS characterization of the Trp-Tyr-Met adduct in HPI

The presence of the covalent adduct in catalase-peroxidases from two such disparate sources as the archaebacterium H. marismortui and the Gram negative bacterium B. pseudomallei suggested that it may be a feature common to all catalase-peroxidases. This was explored in an analysis of the tryptic digest of HPI (KatG) from E. coli and its W105F variant (Table 5). All three of the fragments containing, respectively, Trp 105 (m/z 1149), Met 252 (m/z 2532) and Tyr 226 (m/z 3206) are present, their identities confirmed by MS/MS analysis. While this suggests that the adduct may not be present in HPI, a cluster of ions, also separated by approximately 16 Da, is evident near m/z 4350 (Fig 6A) in the digest of native HPI but not in the digest of the W105F variant (Fig 6B). Analysis of the predominant ion by MS/MS reveals a fragmentation pattern consistent with the presence of the Trp-Tyr covalent structure (Fig 6C). The location of the modifications, either from the addition of CH3-S- (+47, less one proton, for +46) or oxidation (+48 for three oxygen atoms, less 2 protons, for +46), could be localized to the hybrid fragment bounded by ions y25 and y16 (Fig. 6C and Table 6).

Identification of the N-terminus of BpKatG

The N-terminal 34 residues predicted by the DNA sequence were not evident in the electron density maps of BpKatG (7), raising the question of whether they were present and disordered or absent as a result of N-terminal processing. The tryptic fragments corresponding to residues 1 to 9 and 19 to 26, including possible partial digest fragments, are absent from the spectrum, whereas the fragment corresponding to residues 27 to 40 is present and its sequence corroborated by MS/MS analysis (Table 2). From these data, together with N-terminal sequencing, it can be concluded that the N-terminal residue of mature BpKatG is Ser22.
The covalently linked residues would form a very rigid structure that would fix the position of the indole nitrogen of the essential Trp relative to the heme iron and imidazole ring of the essential His. Such precise positioning, with no possibility of movement, may be necessary to generate optimal interaction distances with the H2O2 for the reduction of compound I. Indeed, mutation of Met264, which would prevent formation of at least part, and possibly all, of the covalent structure, significantly reduces catalatic activity with little effect on peroxidatic activity (Table 1), and mutation of the equivalent of Tyr238 in Synechocystis KatG has a similar effect (20). The covalent linkages may also affect the electronic environment of the indole, enhancing its ability to bind H2O2 for compound I reduction. In addition, the adduct creates an obvious route for delocalization of the radical from the heme of compound I, a process recently demonstrated in M. tuberculosis KatG (21). From the standpoint of the peroxidatic reaction, electron tunneling from a peroxidatic substrate on the surface (7) to the heme for reduction of compound I or compound II may also be facilitated by the adduct. It is tempting to speculate about the reaction mechanism responsible for the Trp-Tyr-Met covalent structure, and both free radical and ionic mechanisms can be proposed that may be initiated by oxidation of the most reactive group, Met264. However, structural analysis of variants lacking the three involved residues, and other nearby residues, is required to determine which residues are necessary and whether partial adducts can be formed; such analysis will provide a much firmer basis for speculation. [Figure legend fragment: the y18 and all b-series ions are 30-31 Da larger than expected from the sequence shown, whereas the y6 to y15 ions agree well with the expected sizes; fragment sizes are summarized in Table 3. The ion at m/z 1792 was not identified.]
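The fragment-ion bookkeeping used throughout this analysis (for example, bracketing a +46 Da modification between the y16 and y25 ions) can be illustrated with a minimal calculation. The sequence and modification mass below are invented for illustration; they are not the HPI peptide.

```python
# Minimal sketch of singly-charged y-ion masses for a peptide, used to
# bracket where a modification falls. Sequence and +46 Da shift are hypothetical.
RESIDUE = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'V': 99.06841,
           'L': 113.08406, 'K': 128.09496, 'M': 131.04049, 'F': 147.06841,
           'Y': 163.06333, 'W': 186.07931}
WATER, PROTON = 18.01056, 1.00728

def y_ions(seq):
    """[M+H]+ masses of y1..yn (C-terminal fragments)."""
    masses = []
    total = WATER + PROTON
    for aa in reversed(seq):
        total += RESIDUE[aa]
        masses.append(total)
    return masses

seq = "AWKYGSVLMF"                      # hypothetical peptide
plain = y_ions(seq)
shifted = [m + 46.0 for m in plain]      # assumed +46 Da shift on N-terminal residues
# If observed ions match `plain` up to some y_k and `shifted` beyond it, the
# modification is bracketed between those two positions.
print([round(m, 2) for m in plain])
```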
Crude glycerin reduces fermentative losses and improves the nutritional value of marandu grass silage in a semiarid region. The ensiling of marandu grass at the recommended time of management results in low dry matter (DM) content and nutritional value, but the addition of crude glycerin during ensiling can compensate for these deficits. Thus, the aim of this study was to evaluate the best level of inclusion of crude glycerin to improve fermentation and the nutritional value of silages prepared with Urochloa brizantha cv. Marandu. The treatments consisted of five levels of inclusion of crude glycerin (0, 7.5, 15, 22.5, 30% of fresh forage) during ensiling of marandu grass, with eight replications, following a completely randomized design. For the evaluation of ruminal kinetics, four crossbred steers, cannulated in the rumen, were used, following a randomized block design in a split-plot scheme. For each percentage unit of glycerin included, there was a linear reduction of 0.34% in gas losses and increases of 0.45% and 0.55% in DM recovery (P < 0.01) and DM content (P < 0.01), respectively. The inclusion of up to 22.5% of crude glycerin in marandu grass silage is recommended to reduce losses during fermentation and improve the recovery of dry matter and the nutritional value of the silage.

INTRODUCTION. In Brazil, the production of ruminants depends on native or cultivated forage plants as the main source of nutrients, due to the low feed cost relative to intensive production systems (MONÇÃO et al., 2019). In the country, it is estimated that 167 million hectares are cultivated with tropical and subtropical forages for animal feed, 80% of which belong to the Urochloa genus (FERRAZ and FELÍCIO, 2010). However, due to the effects of forage seasonality caused by climatic variations, animal production can be compromised throughout the year, especially in the semi-arid region, where the drought lasts longer than in other regions of Central Brazil. In the semi-arid region of Northern Minas Gerais, forage production is limited by low soil moisture, since temperature and solar radiation fluctuate little during the winter period (ALVALÁ et al., 2019; MONÇÃO et al., 2020). Tropical grasses belonging to the Urochloa genus, also called Brachiaria, present, when well managed, green mass productivity between 100-120 t/hectare per year, 85-100% of which is produced in the rainy season (summer), generating excess mass because the animals cannot consume all the forage produced. Therefore, conserving, in the form of silage, the excess forage produced during the summer season is essential to maintain or increase the production of ruminants throughout the year. Currently, there are machines on the Brazilian market that are efficient in harvesting grasses, which has favored the production of low-cost silage. However, at the recommended cutting management height (40 cm) of Brachiaria brizantha, the DM content below 250 g/kg, the low content of soluble carbohydrates, and the high buffering power are limiting factors for ensiling, making it important to insert additives in order to adjust the fermentative capacity of the ensiled mass (RIGUEIRA et al., 2018; KUNG JR et al., 2018; MUCK et al., 2018).
Among the additives with the potential to be used as a moisture scavenger in grass silage, crude glycerin stands out. It is a by-product obtained from oil processing in the biodiesel industry and contains about 900 g/kg of DM, 108 g/kg DM of crude fat, and 800-880 g of glycerol/kg DM (ORRICO JR et al., 2017; RIGUEIRA et al., 2017; RIGUEIRA et al., 2018). According to the National Petroleum Agency (ANP, 2019), in 2018, 440,600 m³ of glycerin were generated as a by-product of biodiesel (B100) production, 17.6% more than in 2017. According to the ANP, the largest generation of glycerin occurred in the South Region (40.7% of the total), followed by the Midwest (39.7%), Southeast (9%), Northeast (7.7%) and North (2.9%). The excess of crude glycerin in the biodiesel industries can pose an environmental risk if handled incorrectly, due to its pollutant content (considerable levels of glycerol and residual lipids). Thus, the alternative use of this by-product as an additive during grass ensiling has the potential to improve the fermentation of the ensiled mass, as it contains glycerol in its composition. In addition, glycerol is a rich source of energy for anaerobic microorganisms (Carvalho et al., 2017), which can favor microbial growth and improve the quality of fermentation and forage conservation. Several studies have been conducted using crude glycerin in the silage of tropical grasses (ORRICO JR et al., 2017; RIGUEIRA et al., 2017; RIGUEIRA et al., 2018). However, the inclusion of crude glycerin had not been tested in the ensilage of Urochloa brizantha cv. Marandu. Given the variations in the management and nutritional value of forages, it is necessary to know the best level of inclusion of glycerin during the ensiling of marandu grass. Based on the above, the aim of this study was to evaluate the best level of inclusion of crude glycerin to improve fermentation and the nutritional value of silages prepared with Urochloa brizantha cv. Marandu.

ANIMAL CARE. The study was approved by the Ethics and Welfare Committee of the Montes Claros State University (Protocol No. 167/2018).

SITE. The experiment was carried out at the Experimental Farm of the State University of Montes Claros, Campus Janaúba, MG, Brazil (15°52′38″ S, 43°20′05″ W). The average annual precipitation is 800 mm, the average annual temperature is 28 °C, and the relative humidity is around 65%; according to the climatic classification of Köppen and Geiger (1948), the predominant climate type in the region is Aw.

TREATMENTS AND PROCESSING. The treatments consisted of the inclusion of crude glycerin during the ensiling of marandu grass at five levels (0, 7.5, 15, 22.5, 30% of fresh forage), with eight replications. The levels were defined according to Rigueira et al. (2018). The forage was collected in a pre-installed area at the Unimontes Experimental Farm 60 days after a uniformity cut. The grass was cut manually and then chopped in a shredder-chopper (JF, Model Z-6; Itapira, SP, Brazil) coupled to a TL75 4x4 tractor (New Holland, Curitiba, PR, Brazil). The machine's knives were adjusted to chop the forage to a particle size of 2 cm. Five heaps were made with the chopped forage; the additive was added in the respective proportions and homogenized before ensiling. For ensiling, experimental polyvinyl chloride (PVC) silos of known weights, 50 cm long and 10 cm in diameter, were used. The bottom of each silo contained 10 cm of dry sand (300 g), separated from the forage by foam, to quantify the effluent produced.
After the complete homogenization of the forage with the additives, the mass was deposited in the silos and compacted with the aid of a wooden plunger. For each treatment, the silage density was quantified, and approximately 3 kg of the chopped material from each fresh forage was ensiled. After filling, the silos were closed with PVC caps fitted with a Bunsen-type valve, sealed with adhesive tape, and then weighed. The silos were stored in a covered place, kept at room temperature, and opened 60 days after ensiling, according to Orrico Junior et al. (2017). After opening, samples were collected from the middle of each silo.

pH and dry matter losses during fermentation. To determine the pH of the silages, a digital pH meter (model MA522, Marconi Laboratory Equipment, Piracicaba, SP, Brazil) was used according to the methodology described by Silva and Queiroz (2006). Losses of dry matter from the silages in the form of gases and effluents were quantified by weight difference. The loss of dry matter in the form of gases was calculated as the difference between the gross weights of the initial and final ensiled dry matter, relative to the amount of ensiled dry matter, discounting the weight of the silo and dry sand set. The dry matter recovery was calculated from the difference between the initial and final dry matter content of the silage. All formulas can be found in the methodology described by Jobim et al. (2007).

Ruminal kinetics. For the evaluation of ruminal degradation kinetics, four crossbred steers, cannulated in the rumen, with an average weight of 530 ± 50 kg, were used. The animals received 3.0 kg of concentrate mixed with 200 mL of crude glycerin, divided into two equal portions, morning and afternoon. In addition to the concentrate, the animals received roughage, in the same proportion, based on marandu grass and elephant grass (Pennisetum purpureum Schum.) silage for 14 days before the experiment. The in situ degradability technique was used, with bags of non-woven synthetic fiber (NWS, weight 100) with a porosity of 50 μm, according to Casali et al. (2009), and a sample quantity following a ratio of 20 mg of sample per cm² of bag surface area (NOCEK, 1988). The bags were placed inside mesh (filó) bags, along with 100 grams of lead. The bags were tied with nylon thread, leaving a free length of 1 m so that they could move freely in the solid and liquid phases of the rumen. The bags were deposited in the ventral sac region of the rumen for 0, 3, 6, 12, 24, 48, 72, 96, 120 and 144 hours, with the end of the nylon thread tied to the cannula. The bags were placed in reverse order, starting with 144 hours. The samples referring to time zero were washed in running water (20 °C) together with the other samples. Subsequently, the samples were placed in a forced-ventilation oven at 55 °C until reaching constant weight. The residues remaining in the NWS bags collected from the rumen were analyzed for DM, NDF and ADF contents. The data obtained were fitted by non-linear regression using the Gauss-Newton method (NETER et al., 1985), as illustrated in the sketch below.
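As a hedged illustration of this fitting step: the fraction "a"/"b"/rate "c" terminology used in the Results suggests the classical exponential degradation model p(t) = a + b(1 − e^(−ct)); the sketch below fits that assumed form by non-linear least squares (scipy's curve_fit refines the same idea as Gauss-Newton) and computes effective degradability for the passage rates reported later. The incubation data are invented placeholders, not the study's measurements.

```python
# Hedged sketch: fitting the exponential ruminal degradation model
# p(t) = a + b * (1 - exp(-c * t)) to in situ disappearance data.
import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b, c):
    return a + b * (1.0 - np.exp(-c * t))

t = np.array([0, 3, 6, 12, 24, 48, 72, 96, 120, 144], dtype=float)  # h
p = np.array([18, 22, 26, 32, 41, 52, 58, 61, 63, 64], dtype=float)  # % DM disappeared (invented)

(a, b, c), _ = curve_fit(degradation, t, p, p0=(15, 50, 0.02))
print(f"a = {a:.1f}%, b = {b:.1f}%, c = {c:.3f}/h")

# Effective degradability (ED) for a ruminal passage rate k (2, 5, 8 %/h):
for k in (0.02, 0.05, 0.08):
    ed = a + b * c / (c + k)
    print(f"ED at k = {k:.0%}/h: {ed:.1f}%")
```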
Experimental designs and statistical analyses. For the evaluations of pH, losses by gases and effluents, recovery of dry matter and chemical composition, a completely randomized design was used, with five levels of inclusion of crude glycerin and eight replications (experimental unit). The variables were analyzed according to the mathematical model Yij = μ + Trati + eij, where Yij is the observation for glycerin level i in repetition j; μ is a constant associated with all observations; Trati is the effect of glycerin level i, with i = 1, 2, 3, 4 and 5; and eij is the experimental error associated with all observations, which by definition has a normal distribution with zero mean and variance σ². The ruminal degradability test was carried out in a randomized block design in a split-plot scheme, with the treatments as the plots and the incubation times as the subplots; the variation in cattle weight was the blocking factor, and the corresponding split-plot model included block, glycerin-level, incubation-time, and interaction effects, plus the respective error terms. The collected data were subjected to analysis of variance and, when the F test was significant, the inclusion levels of crude glycerin were analyzed using orthogonal polynomials, testing linear and quadratic regression models. For all statistical procedures, α = 0.05 was adopted as the maximum tolerable limit for type I error.

RESULTS. The inclusion of crude glycerin in marandu grass silage modified the pH values (P < 0.01) of the silage, and the means fitted a quadratic regression model with a minimum point at the 25% level. Gas losses (P = 0.04) decreased from 17.6% in the control silage (without glycerin) to 4.10% at the 30% level, and for each percentage unit of glycerin included there was a linear reduction of 0.34% in gas losses (Table 2). The DM content (P < 0.01) of the silage increased 39.54% with the inclusion of crude glycerin during ensiling, varying from 23.06% in the control silage to 39.63% at the maximum level of inclusion (a 0.55% increase for each percentage unit of crude glycerin included) (Table 3). The ash content (P < 0.01) and the crude protein content decreased with the inclusion of crude glycerin. For the standardized insoluble fraction (Bp fraction; P < 0.01) of the neutral detergent fiber (NDF) of the silage, the means fitted a quadratic regression model (Table 5). The rate of degradation of the Bp fraction was not modified by the inclusion of crude glycerin (mean 1.4%/hour). The ED of the NDF of the marandu grass silage was modified by the inclusion of crude glycerin, with a quadratic behavior of the means at the passage rates of 2, 5 and 8%/hour. The maximum point of the standardized undegradable fraction (Ip fraction; P < 0.04) of the NDF was at the level of 12.61% inclusion of crude glycerin (Table 6). There was no difference in the rate of degradation (P = 0.84) of the Bp fraction of the acid detergent fiber (ADF) with the inclusion of crude glycerin (mean 1.63%/hour). The Bp fraction of the ADF showed its maximum value at the inclusion level of 11.20% crude glycerin. The means verified for the ED of the ADF fitted a quadratic regression model.

DISCUSSION. The chemical-bromatological characteristics of marandu grass before ensiling are shown in Table 1. Other studies with silage of piatã grass (Brachiaria brizantha) likewise did not find a reduction in pH. The acid character of the crude glycerin used in this research, and its effect on pH values, was a novelty. In this research, it was found that the methanol content in the glycerin was less than 1 g/kg of DM, indicating no possibility of animal poisoning. With 12% inclusion in fresh sugar cane forage, Carvalho et al. (2017) found a concentration of 1.9 g/kg of DM and pointed out that it does not harm the health of animals.
The use of 30% crude glycerin (fresh matter basis) in marandu grass silage reduced gas losses by 76.70% and effluent losses by 20.75%. This is explained by the high DM content of the crude glycerin (895 g/kg), which favors the recovery of dry matter. Thus, the relevance of crude glycerin as an additive for more efficient fermentation during the ensiling of marandu grass is highlighted. Carvalho et al. (2017) observed that the crude glycerin used for ensiling is not consumed by microorganisms during fermentation and thereby reduces DM losses and improves the recovery of dry matter, which explains the results obtained in this research. According to Orrico Jr et al. (2017), gas losses during fermentation are related to the type of fermentation during ensiling, and tend to decrease when homofermentative bacteria predominate. In this case, the bacterial metabolism of soluble carbohydrates leads to the production of lactic acid, which is important to decrease the pH of the silage, explaining the behavior of this variable in this research. According to Pasteris and Strasser de Saad (2009), strains of Lactobacillus break down the glycerol from crude glycerin and use it as an energy source for the synthesis of organic acids. Therefore, fermentation efficiency was better at higher levels of inclusion of crude glycerin, due to the lower pH value (Orrico Jr. et al., 2017). However, the final pH of the silage is related to fermentation quality, but it does not necessarily explain the speed at which the pH drops, nor the type of fermentation (populations of microorganisms), nor the nutritional value of the silage (Orrico Jr. et al., 2017). Even so, the lower gas losses and better dry matter recovery observed in this research are suggestive of a rapid decline in pH during fermentation as a result of the inclusion of crude glycerin. The high DM content of the glycerin increased the silage DM content from 23.96% (control silage) to 39.63% (silage with 30% inclusion). As the glycerin was possibly not consumed by the microorganisms inside the silo, there was a dilution effect on the contents of ash, crude protein and the fibrous fraction. Although crude glycerin contains minerals and crude protein in its composition (Table 1), these concentrations were not sufficient to offset the dilution effect. This dilution behavior was also verified in other studies with tropical forages (CARVALHO et al., 2017; ORRICO JR et al., 2017; RIGUEIRA et al., 2017; RIGUEIRA et al., 2018; SILVA et al., 2020). In contrast, at the maximum level of inclusion of crude glycerin, there was an increase of 78.36% in the crude fat content, explained by the concentration of lipids in this by-product (108 g/kg). This is important because it increased the TDN content of the silage by 46.19% compared to the control silage (42.39%). Marandu grass is one of the most cultivated forages in Brazil, and its low TDN content for animal feed is one of the factors that most limits the ensiling of surplus production during the rainy season. Compared to corn or sorghum silages, the energy content of marandu grass silage can be considered low (CARVALHO et al., 2016), which can be compensated by offering higher proportions of concentrate to guarantee the weight gain of the animals. However, this can lead to considerable increases in production costs.
Therefore, the inclusion of crude glycerin contributes to increasing the energy level of the silage produced, due to the increase in the contents of ether extract and non-fibrous carbohydrates. It is worth mentioning that energy additives (e.g., ground corn, molasses and pure glycerol) would probably allow superior results in terms of energy levels. However, the low acquisition cost of the crude glycerin by-product seems to be the differential in making the inclusion of this additive in grass silage economically viable. Regarding the ruminal kinetics of DM, the inclusion of 16.7% of crude glycerin in marandu grass silage maximized the fraction "a". The fraction "a" of DM includes carbohydrates from feeds that are readily soluble when in contact with water and available to microorganisms in the rumen; when solubilized and fermented, it produces short-chain fatty acids, ammonia and microbial proteins, which are sources of energy and amino acids for ruminants (RIGUEIRA et al., 2018). The use of crude glycerin modified the degradation rate of fraction "b", maximizing the time of microbial colonization up to the level of 17.50%, but the PD of DM was greatest when using a maximum of 21.07% of crude glycerin. The ED of DM was highest when 30% of crude glycerin was included in the marandu grass silage. This increase in PD and ED is associated with the effects of glycerin on fractions "a" and "b" and the degradation rate "c" of DM. Specifically, ED was higher at the 30% inclusion level due to the high degradation rate "c" of DM (average of 2.6%/hour) in relation to the other inclusion levels. For the ensiling of Napier grass and BRS capiaçu grass (Pennisetum purpureum Schum.), Rigueira et al. (2018) and Silva et al. (2019) recommended 15% inclusion of crude glycerin in fresh forage. Jenkins and Palmquist (1984), Firkins et al. (2007) and Abubakr et al. (2013) observed that, in some studies, the addition of saturated lipids to the diet reduces the degradability of the fibrous fraction of feeds, depending on the level of inclusion. In this research, the addition of up to 20.74% crude glycerin increased the Bp fraction of the ADF of the silage compared to the control silage. Levels above 12.62% inclusion of crude glycerin reduced the Bp fraction of the NDF. At this level of inclusion, it is estimated that 85.93% of the Bp fraction of the NDF has potential for degradation in the rumen, a valuable contribution of crude glycerin as a source of glycerol for cellulolytic bacteria, since there was an increase of 16.47% in the Bp fraction compared to the control silage (71.78%). Regarding the use of silage containing crude glycerin to feed ruminants, there is still little information on intake, digestibility, ingestive behavior and animal performance. Most studies with crude glycerin for ruminants deal with the direct use of this by-product in the diet, not in silage. Therefore, consistent evidence on animal performance in livestock is still incipient. However, the results reported in this research are convincing as to the potential of including crude glycerin, reducing losses during fermentation and producing marandu grass silage with superior nutritional value.

CONCLUSION. The inclusion of up to 22.5% of crude glycerin in marandu grass silage is recommended to reduce losses during fermentation and improve the recovery of dry matter and the nutritional value of the silage.
Chlorella vulgaris growth in different biodigested vinasse concentrations: biomass, pigments and final composition. Vinasse, an effluent generated during sugar and alcohol production, has great potential for soil and water pollution; however, it can be treated, used in biomass production and reused in sugarcane plantations. Thus, this work used different concentrations of biodigested vinasse to produce more biomass. The observed effect was the rapid removal of ammoniacal nitrogen and the end of the exponential growth phase of the microalgae, at different levels, from the sixth day of cultivation. Among the concentrations used, 50% biodigested vinasse showed the highest biomass concentration (255 mg L⁻¹) after 10 days of growth, coinciding with the end of ammoniacal nitrogen availability and the stabilization of effluent color removal. The addition of biodigested vinasse also provided an increase in Chlorophyll a (5.33 mg L⁻¹) and b (4.66 mg L⁻¹) levels, obtained on the sixth day with 40% of vinasse, as well as in protein (40.50%) with 50% effluent. Therefore, the results obtained show variation of the biomass composition according to the vinasse concentration and an increase in pigment concentration in the presence of the effluent with higher nutrient concentration. Thus, the higher concentrations of vinasse were more productive for the cultivation of Chlorella vulgaris.

INTRODUCTION. Brazil is one of the world's biggest ethanol producers and, in the 2018/2019 crop alone, produced over 33 million m³ of it (UNICA). The ethanol industries generate, depending on the distillery equipment, 10-15 L of vinasse per liter of ethanol produced (Christofoletti et al.). Vinasse is a dark brown wastewater with high turbidity, high levels of organic matter and mineral nutrients, low pH and high biochemical oxygen demand (Brasil et al.). The disposal of this residue in aquatic environments is harmful to microorganisms and wildlife, and its use as fertilizer in sugar cane crops can result in soil salinization due to its high content of potassium (Brasil et al.). Thus, vinasse treatment and disposal are relevant environmental issues for refineries. Microalgae can be used to treat some industrial effluents and sewage; therefore, several authors have investigated their potential to treat vinasses (Francisca Kalavathi et al.; Valderrama et al.; Ramirez et al.; de Mattos & Bastos).
Although this use represents an ecological approach to the vinasse problem, microalgae production nowadays also represents an alternative source of several bioproducts, such as biofuels, special oils, pigments, polymers and others (Perez-Garcia et al.). In fact, microalgal lipids have been identified as the main sustainable source for biodiesel production (Wijffels & Barbosa); however, the production of microalgal biomass is still not economically viable due to the high costs of cultivation, harvesting and processing (Quinn & Davis). In order to improve this scenario, the use of organic wastewater treatment as a low-cost source of nutrients for algal culturing has been evaluated, and today it is recognized as the most likely solution for economically supporting this industry (Brasil et al.). Many authors have tried to produce microalgae with diluted non-treated vinasse (Altenhofen da Silva et al.; Santana et al.); however, Barrocal et al. and España-Gamboa et al. showed that vinasse pre-treated by an anaerobic process can provide better conditions for microalgal growth. In this context, Marques et al. evaluated the feasibility of using the effluent of an anaerobic digester treating vinasse for growing Chlorella vulgaris, an oil-producing alga. The authors observed that the non-treated vinasse obtained from an ethanol plant showed high toxicity to C. vulgaris at concentrations higher than 4%, and that anaerobic digestion contributed significantly to reducing vinasse toxicity. In fact, the algal specific growth rate observed on non-diluted treated vinasse (0.76 day⁻¹) was higher than the maximum observed with C. vulgaris on nutrient-sufficient medium (0.53 day⁻¹). On the other hand, biomass and oil productivity were about tenfold lower compared to nutrient-sufficient medium. Thus, Marques et al. concluded that microalgal growth on treated vinasse mixtures may be improved by nutrient addition and inoculum adaptation. Besides that, the authors highlighted the necessity of further investigation of the effect of potassium concentration in this culture because, although it is known that potassium is used for several functions related to the photosynthetic machinery, high concentrations are toxic, and the potassium concentrations remained statistically unchanged during vinasse treatment with C. vulgaris in that study. Santana et al. studied 40 microalgae strains for growth in sugarcane vinasse at different concentrations. Although two microalgae strains (Micractinium sp. Embrapa|LBA32 and C. biconvexa Embrapa|LBA40) presented vigorous growth, the authors pointed out issues relating to the low light transmittance of vinasse, which is directly related to its turbidity. On the other hand, Candido & Lombardi evaluated the growth and biomass yield of C. vulgaris, the same alga as Marques et al., in conventional and in biodigested vinasses that had been treated by filtration or centrifugation before their use as microalgal culture medium. Specific growth rate results showed that in 60% filtered conventional and 80% biodigested vinasses, C. vulgaris performed as well as the controls in nutrient-rich synthetic culture media, with growth rates of up to 1.2 day⁻¹. Moreover, the authors also found that a potassium reduction of up to 50% was promoted by the filtering process, a result that could solve the soil salinization problem of vinasse application on sugar cane crops.
Turbidity reduction was also significantly achieved by filtration. Thus, the use of treated vinasse as culture medium can lower the costs of microalgae production, with the advantage of increasing the value of the residue. Since the use of biodigested vinasse to produce microalgae presents several promising applications, this study aims to evaluate the growth of Chlorella vulgaris using different concentrations of vinasse pre-treated by anaerobic digestion, focusing on the analyses of nutrients, the removal of color from the biodigested vinasse, and the composition of the biomass produced, since those parameters appear to be crucial in this process.

Wastewater samples. Biodigested vinasse samples were obtained in a power generation unit from the digestion of residues of sugar and alcohol production (−23.26589, −52.49256). Located approximately 450 m above sea level in the state of Paraná, Brazil, the region has a climate classified as Aw/As according to the Köppen-Geiger classification. After the digestion process, 50 L of biodigested vinasse were collected. Samples were evaluated for characterization (Table 1) according to the standard methods for the examination of water and wastewater of the American Public Health Association (APHA). The total dissolved solids content was determined by drying at 180 °C (Method 2540 C) and the total suspended solids by drying at 103-105 °C (Method 2540 D) (APHA). The color was determined on the platinum-cobalt scale according to Hach's DR/2010 Spectrophotometer Method 8025 (Hach). After characterization, the biodigested vinasse was sedimented and then filtered with quantitative filter paper (blue strip, D = 125 mm, Unifil) to remove solids. Biodigested vinasse was added to the cultures at 10, 20, 30, 40 and 50% to supply nutrients (Table 2).

Microalgae strain and growing environment. The microalgae strain C. vulgaris was kindly provided by Dr. Armando Augusto H. Vieira, of the Botany Department of the Federal University of São Carlos. The experiments were conducted in the Laboratory of Heterogeneous Catalysis and Biodiesel (LCHBio) of the State University of Maringá, Brazil. For strain maintenance and inoculum production, the culture medium for green algae proposed by Watanabe (Detmer's medium) was used. For the experiment, microalgae were cultivated in 2 L Erlenmeyer flasks containing approximately 19 mg of inoculum and 1,990 mL of culture medium composed of a solution containing 10, 20, 30, 40 or 50% biodigested vinasse (n = 15). The conical flasks were kept under constant illumination of 5,000 lux (24 h photoperiod), provided by fluorescent lamps, for 16 days. The culture room was maintained at 25 ± 2.0 °C, measured by a thermohygrometer (Thermo-hygrometer 7666, Incoterm, Porto Alegre, Brazil). For microalgae recovery, 50 mg L⁻¹ of a natural tannin-based flocculant (Tanfloc SL, Tanac SA, Montenegro, RS, Brazil) was used, with agitation at 700 rpm for 30 s (mechanical shaker, IKA RW 20 digital). After 25 min of sedimentation, the suspension was filtered at 20 mesh. The biomass was oven dried (MA033/1, Marconi, Piracicaba, Brazil) at 60 °C for 24 h, homogenized at 20 mesh, and stored at −8 °C for further characterization.

Microalgae composition. Total lipids were determined according to Hosseini et al.'s adaptation of the methodology proposed by Folch et al. Test tubes with a known mass of microalgae and 1.2 mL of a chloroform:methanol solution in the volumetric proportion of 2:1 (v/v) were kept in an ultrasonic bath at 42 Hz (Cristófoli) for 15 min.
The samples were centrifuged for 20 min at 4,500 rpm and the supernatant transferred to a tube of known mass; the procedure was repeated three times. The solvent was evaporated at 60 °C to constant mass, and the lipid content was determined by gravimetry. The protein was extracted by alkaline hydrolysis (Meijer & Wijffels). Test tubes with 5 mg of microalgae and 0.8 mL of NaOH (0.1 M) solution were heated in a water bath for 30 min at 95 °C. The tubes were then cooled in an ice bath and neutralized with 0.2 mL of HCl (0.4 M). For quantification, bovine serum albumin (BSA) was used as a standard to estimate protein content (Bradford). The extracted protein (0.5 mL) was added to Bradford reagent solution (2.5 mL) and then incubated for 10 min. The absorbance intensity was measured at 595 nm by spectrophotometry (DR/2800, Hach Company, Loveland, CO, USA).

Statistical analysis. All the experiments were performed in three independent replicates (n = 3). The graphs present the means with ± error bars. Box-plot graphs were also plotted to verify the data variation. Analysis of variance (ANOVA) and Tukey tests were used to determine statistically significant differences (p < 0.05) among averages, after the Shapiro-Wilk normality test. All the statistical tests were conducted using Software R.

RESULTS AND DISCUSSION. Growth curve and biomass concentration. Figure 1 shows the box-plot graph (Figure 1(a)) and the Gompertz growth curve model (Figure 1(b)-(f)) applied to the results of dry biomass concentration over 16 days of cultivation under different vinasse dilutions. The highest value seen during cultivation was with the 50% vinasse dilution on the 10th day, with approximately 300 mg/L. This can probably be associated with the higher nutrient concentrations and availability at this vinasse concentration for these photosynthetic organisms compared to the other dilutions. In addition, as the ammoniacal nitrogen was only totally consumed on this day, there was a source of nitrogen for the enzymatic processes of these organisms during the previous days of cultivation. It is also seen that at the dilutions of 20% and 30%, the highest biomass concentration was lower than the maximum found at the dilution of 10%. This situation could be associated with stress generated in these cultivations due to the lack of nitrogen (ammoniacal nitrogen was consumed by the second day of cultivation for 10%). However, the cultivations with 40 and 50% presented high ammoniacal nitrogen concentrations, which contributed to growth. On the first day of cultivation, all samples and their respective replicates (n = 3) started with approximately 9.3 mg/L of dry biomass. On the last day of cultivation, the mean concentrations were 171.7 mg/L, 160.9 mg/L, 149.8 mg/L, 197.5 mg/L and 200.9 mg/L for 10%, 20%, 30%, 40% and 50% vinasse concentration, respectively. Candido & Lombardi also found the highest final biomass yield (in cell numbers) with higher concentrations of biodigested vinasse compared to lower dilutions. The authors suggested that the best growth conditions for Chlorella vulgaris were at 80-100%. According to these results, higher vinasse concentrations yield higher dry biomass concentrations for C. vulgaris. However, the vinasse characteristics could affect the growth of other species. Ramirez et al. verified the growth of Scenedesmus sp. using different vinasse concentrations; the highest biomass concentration was obtained with 10% of vinasse added to Guillard Modified Medium.
The Gompertz growth curve model was applied to the experimental data to verify the R² value for these cultivations (Figure 1(b)-1(f)). The best value was found for the dilution of 30% (R² = 0.983). In addition, it is notable that the box plot at this dilution did not show high variation compared to the other treatments. For the dilutions of 10, 30 and 50%, the highest mean biomass concentration was found on the tenth day, while for the dilutions of 20 and 40% it was found on the 6th and 8th days, respectively. In other words, the dry biomass concentration started to stabilize after the 10th day, when the ammoniacal nitrogen was totally consumed. Lavín & Lourenço identified ammoniacal nitrogen as the most important source of nitrogen for some microalgae species. Thus, this nutrient can affect the growth and, consequently, the dry biomass concentration at lower dilutions. Figure 2 shows the box-plot graph and the scatter plot for color removal in the cultivations with different dilutions of vinasse. Figure 2(a) shows the variation of color concentration over the days of cultivation for each treatment. The results show outliers at 20% (1), 30% (1), 40% (3) and 50% (3). In fact, vinasse effluent presents a high color concentration; therefore, the lowest dilution (10%) presented low variation over the cultivation. The lower color removal at the lowest dilution could also be associated with the lower concentration of nutrients such as ammoniacal nitrogen. In addition, the presence of outliers could indicate values well above the other values along the cultivation days for the other dilution percentages. A possible explanation is that color removal was higher during the first days, while in the final days the removal was less intensive due to the growth phase of the microorganisms and the lower availability of nutrients. Figure 2(b) shows the color removal in the microalgae cultivations from day 0 until the last day of cultivation (day 16). The percentage of color removal for each treatment was 47.89% (10% of vinasse), 69.57% (20% of vinasse), 65.86% (30% of vinasse), 66.57% (40% of vinasse) and 69.04% (50% of vinasse). The removal of color was most intensive during the first five days, although removal kept happening until the last day of cultivation.
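As a hedged illustration of the Gompertz fitting reported above: the parameterization below (Zwietering's form, with asymptote A, maximum rate μ, and lag λ) is an assumption, since the exact form used was not stated, and the data points are invented placeholders.

```python
# Hedged sketch: fitting a common Gompertz parameterization
# N(t) = A * exp(-exp((mu * e / A) * (lam - t) + 1)) to dry-biomass data.
import numpy as np
from scipy.optimize import curve_fit

E = np.e

def gompertz(t, A, mu, lam):
    """A: asymptote (mg/L), mu: max growth rate (mg/L/day), lam: lag (days)."""
    return A * np.exp(-np.exp((mu * E / A) * (lam - t) + 1.0))

t = np.arange(0, 17, 2, dtype=float)                       # days 0, 2, ..., 16
N = np.array([9.3, 25, 70, 130, 180, 230, 250, 255, 256])  # mg/L (invented)

(A, mu, lam), _ = curve_fit(gompertz, t, N, p0=(260, 40, 1), maxfev=10000)
pred = gompertz(t, A, mu, lam)
r2 = 1 - np.sum((N - pred) ** 2) / np.sum((N - N.mean()) ** 2)
print(f"A = {A:.0f} mg/L, mu = {mu:.1f} mg/L/day, lambda = {lam:.1f} d, R^2 = {r2:.3f}")
```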
According to Kim et al. (), each nitrogen source is first reduced to ammonium form for some microalgae species growth, where this form will be assimilated into amino acids. Thus, decrease of ammoniacal nitrogen concentration could have affected the production of dry biomass. For pH results, the values stayed between 6.9 and 9.67. The results were a little bit similar with Candido & Lombardi (), results for filtered biodigested vinasse and centrifuged biodigested vinasse (until day 6). In addition, it is notable that pH means were higher with higher dilution of vinasse (50%). Potassium mean concentrations presented an increase from the day 0 to the second day of cultivation for all treatment. Despite that, at the end of cultivation there were a decrease in the concentration of potassium for almost all treatment. For treatment with 20% of vinasse potassium, mean concentration varied from 168 mg/L at day 1 to 202 mg/L at the last day of cultivation. Pigments concentration variation and final biomass composition Figure 4(a) shows the variation of Chlorophyll a concentration for all treatments. The highest value was found at the 6th day at treatment with 40% of vinasse (5.33 mg/L). This result was also seen by He et al. (), when the authors found the highest concentration of Chlorophyll a for Chlorella sp. at 6th day of cultivation with the light condition of 200 μmol photon m 2 /s. In the end of cultivation, Chlorophyll a mean concentrations varied from 1.31 mg/L (10%) to 3.05 mg/L (40%). Although the highest value was found with 40% of vinasse, the value is next to 50% according to the error bars. Thus, with increase of vinasse there was increase of production of Chlorophyll a. Figure 4(b) shows the variation of Chlorophyll b concentration for all treatments. The highest value was found at the 6th day at treatment with 40% of vinasse (4.66 mg/L). In addition, at the end of cultivation Chlorophyll b mean concentrations varied from 0.99 mg/L (10%) to 2.13 mg/L (40%). The values followed the same pattern from Chlorophyll a. The results of carotenoids were not too different from Chlorophyll a and Chlorophyll b. The highest value was found on the 6th day at treatment with 50% of vinasse (1.16 mg/L). In addition, at the end of cultivation carotenoids mean concentrations varied from 0.015 mg/L (10%) to 0.45 mg/L (40%). Thus, with increase of vinasse there was increase of production of carotenoids. The carotenoid curve showed similarity with results of He et al. () with low luminosity (40 μmol photon m 2 /s). Figure 4(d) shows the biomass characterization after 16 days of cultivation. It is visible that increasing concentration of vinasse increases the percentage of proteins and decrease of carbohydrates. Santana et al. () also found this trend; however, in their experiment they compared 50% of vinasse to 100% of vinasse. This fact could be explained in that higher concentration of vinasse increased concentration of nitrogen, which is an element essential for amino acids constitution and consequently protein formation (Lavín & Lourenço ). Proteins in the biomass composition varied from 6.74% at treatment with 10% to 40.50% at treatment with 50% of vinasse. Total lipids presented highest value (around 10.90%) with treatment of 30% of vinasse. Carbohydrates followed the opposite results from proteins. At treatment with 10% of vinasse, the composition of biomass was 76.40% of carbohydrates and with 50% of vinasse this value was 40.10%. 
Ash contents did not differ much among treatments: the lowest value (6%) was found in the treatment with 20% of vinasse, and the highest (11.20%) in the treatment with 50%.

Statistical analysis. The normality of the sampled values of biomass, Chlorophyll a and Chlorophyll b was attested by the Shapiro-Wilk test with a 95% confidence interval. When submitted to analysis of variance, a significant difference (p < 0.05) between the vinasse concentrations was identified for the three parameters analyzed (Table 3). Tukey's test showed that differences appear, in short, with an increase of 20% in the concentration of vinasse. For example, in the ranges of 10-30%, or 30-50%, there is a difference between the sampled parameters for the treatments employed. The analysis of variance confirms and accepts the a priori hypothesis that, with increasing concentration of vinasse in the culture medium, the growth of the algal population and the photosynthetic activity would increase proportionally until equilibrium was reached.

CONCLUSION. The results demonstrated that the cultivation of microalgae with vinasse is feasible; however, working with different concentrations can affect the biomass composition and, consequently, change its use. Biomass with a higher percentage of proteins requires a higher concentration of vinasse in the cultivation medium, while a higher carbohydrate content requires a lower concentration of vinasse. In addition, an advantage of using vinasse effluent to cultivate these photosynthetic microorganisms is the removal of color (approximately 70% in this experiment), phosphate, ammoniacal nitrogen and potassium. Increasing the vinasse concentration in the cultivation medium can increase the concentration of pigments; in this study, the most interesting vinasse percentage for pigments was 40%.
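For concreteness, the statistical workflow described in the Statistical analysis subsection above (normality check, one-way ANOVA, Tukey comparisons at α = 0.05) can be sketched as follows. The authors used Software R; this Python equivalent with invented data is only an illustration.

```python
# Hedged sketch of the normality -> ANOVA -> Tukey workflow described above.
# Data are invented placeholders, not the study's measurements.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
levels = [10, 20, 30, 40, 50]                                   # % vinasse
biomass = {v: rng.normal(150 + 1.2 * v, 12, size=3) for v in levels}  # n = 3

# Shapiro-Wilk on residuals (normality), then one-way ANOVA:
resid = np.concatenate([biomass[v] - biomass[v].mean() for v in levels])
print("Shapiro-Wilk p =", round(stats.shapiro(resid).pvalue, 3))
print("ANOVA p =", round(stats.f_oneway(*biomass.values()).pvalue, 4))

# Tukey HSD pairwise comparisons at alpha = 0.05:
values = np.concatenate(list(biomass.values()))
groups = np.repeat(levels, 3)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```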
Hyperspectral Estimation Model of Forest Soil Organic Matter in Northwest Yunnan Province, China. Soil organic matter (SOM) is an important index to evaluate soil fertility and soil quality, and it plays an important role in the terrestrial carbon cycle. Hyperspectral remote sensing technology is an important method to estimate SOM content efficiently and accurately. This study investigated the best hyperspectral estimation model for SOM content in Shangri-La forest soil. The spectral reflectance of soils with particle sizes of 2 mm, 1 mm, 0.50 mm, and 0.25 mm was measured indoors. After smoothing and de-noising, the reciprocal reflectance (RR), logarithmic reflectance (LR), first-derivative reflectance (FR), reciprocal first-derivative reflectance (RFR), and logarithmic first-derivative reflectance (LFR) mathematical transformations of the original spectral reflectance (REF) were carried out to analyze the relevance of spectral reflectance to SOM content and to extract the characteristic bands. Finally, the simple linear regression (SLR), stepwise multiple linear regression (SMLR), and partial least squares regression (PLSR) models for SOM content estimation were established. The results showed that: (1) With the decrease of soil particle size, the spectral reflectance increased; the smaller the soil particle size, the more obvious the increase in spectral reflectance. (2) The sensitive bands of SOM were mainly in the 580-690 nm range (correlation coefficient (R) > 0.6, p-value (p) < 0.01), and the spectral information of SOM could be significantly enhanced by first-order differential transformation. (3) Comparing the three models, PLSR had better estimation ability than SMLR and SLR. The precision of the PLSR estimation model of SOM content with the 0.25 mm soil particle size and the LFR index was the best (coefficient of determination of validation (R²v) = 0.91, root mean square error of validation (RMSEv) = 13.41, ratio of percent deviation (RPD) = 3.33). The results provide a basis for rapidly monitoring SOM content in the forests of Northwest Yunnan, and a reference for forest SOM estimation in other areas.

Introduction. Soil organic matter (SOM) is not only an important basis for measuring soil fertility and quality [1], but also an important part of the terrestrial ecosystem carbon pool [2]. Traditional SOM monitoring, from field sampling to indoor chemical analysis, has high precision, but it is time-consuming, laborious, has a long cycle and high cost, and is unable to meet requirements for highly efficient, fast and immediate detection [3]. How to quickly and efficiently obtain SOM content information and monitor the soil environment has been an urgent problem in precision management. With the development of technology, non-destructive, fast, accurate and large-scale remote sensing monitoring makes up for the shortcomings of traditional monitoring methods [4,5], and hyperspectral technology is widely used in soil quality monitoring [6].
Previous studies have shown that soil spectral reflectance is significantly correlated with soil properties, such as soil moisture, iron oxides, clay minerals, and organic matter, and that the absorption in the range of 400-1000 nm is mainly caused by iron oxide and organic matter [7]. The reflectance is significantly negatively correlated with organic matter [8-10]; it has a high correlation with SOM in the visible bands, especially in the red bands [11,12], and the most sensitive bands are mainly in the 550-710 nm range [13-17]. Studies on the spectral estimation of SOM content in different regions have shown that the spectral information can be highlighted, the number of characteristic bands can be increased, and the correlation between reflectance and organic matter can be enhanced by mathematically transforming the original spectrum [18-22]. Researchers have also established a variety of SOM content estimation models, including simple linear regression (SLR) [23], multivariate linear regression (MLR) [24,25], stepwise multiple linear regression (SMLR) [9,11,26], partial least squares regression (PLSR) [27-30], principal component regression (PCR) [31-33], boosted regression trees (BRT) [34], support vector machines (SVM) [6], artificial neural networks (ANN) [35], etc. The best estimation models differ among soil types, but the results of SMLR and PLSR are better and more stable [36]. Furthermore, soil particle size is also one of the factors impacting soil spectral reflectance and the accuracy of SOM content estimation [37,38]. Ma et al. [39] considered that there was a significant negative correlation between soil particle size and soil spectral reflectance that would affect the modeling accuracy [10]. Bao et al. [40] believed that the accuracy of a soil nitrogen content estimation model would be affected when the soil particle size was less than 0.25 mm or more than 5 mm. Si et al. [41] thought that the improvement in the accuracy of the SOM estimation model was not obvious when the soil particle size was less than 0.25 mm, because the soil particles were so fine that they changed the soil physical properties and the spectral characteristic information of SOM was masked. Li et al. [42], taking the Huangshui River basin as an example, used soil spectral information to estimate SOM content and concluded that the SVM model based on the 1 mm size had the best estimation accuracy (coefficient of determination (R²) = 0.96, ratio of percent deviation (RPD) = 3.3).
Currently, research on estimating SOM content based on hyperspectral technology has produced abundant achievements, but there are differences in soil types, causes of formation, and soil physical and chemical properties among regions. There is no unified standard for selecting soil particle sizes and models in existing studies and, furthermore, model estimation accuracy varies greatly. So, it is difficult to apply the estimation model established in one region to other regions, and the results are also not comparable. Most studies on the spectral estimation of SOM focus on areas with low organic matter content (<10%). There are few reports on the spectral estimation of SOM content in forests, especially the forests in Northwest Yunnan, China, which are rich in organic matter in the topsoil, where the SOM content can even exceed 20%. However, it is also very difficult to carry out remote sensing monitoring of the topsoil under natural conditions, especially in Northwest Yunnan, where vegetation coverage is high: because of the noise caused by the vegetation and the litter horizon, and because the sunlight necessary for hyperspectral measurements in the field is blocked by trees, the spectral information of the topsoil is inaccurate. Therefore, this study took the forest soil of Shangri-La, a typical area in Northwest Yunnan, China, as the research object, carried out laboratory experiments, and established hyperspectral estimation models of forest SOM content for different particle sizes, in order to find a method for rapidly estimating SOM on the large numbers of field-collected samples in the laboratory. The results would allow changes in SOM over larger areas to be frequently monitored and analyzed.

Study Area. Shangri-La city is the capital of the Diqing Tibetan Autonomous Prefecture, Yunnan Province, China. It is located in the northwest of Yunnan Province, in the eastern part of Diqing, between latitudes 26°52′ N-28°52′ N and longitudes 99°20′ E-100°19′ E (Figure 1). The study area, where the average elevation is 3280 m and mountainous terrain accounts for over 90%, is an important area for biodiversity conservation and water conservation for ecological protection in the alpine valleys of northwest Yunnan. The climate is alternately controlled by the southwest monsoon and the south branch of the westerly jet. The vertical zoning of the climate is obvious, and the wet season (June to October) and dry season (November to the following May) are distinct. The annual average precipitation is about 620 mm, and the annual average temperature is about 6 °C. According to research by the Soil and Fertility Station of Yunnan Province (SFSY) and the Office of Soil Survey of Yunnan Province (OSSY), the soil parent materials in Shangri-La include plateau lake sediments, river sediments, flood deposits, residual deposits, etc., and the soil types include alpine frost desert soil, alpine shrubby meadow soil, brown coniferous forest soil, dark brown soil, subalpine meadow soil, brown soil, red soil, etc. [43]. The city's forest coverage rate is more than 70%, and the total area of four constructive species, Quercus aquifolioides (Quercus aquifolioides Rehd. et Wils.), Abies georgei (Abies georgei Orr), alpine pine (Pinus densata Mast.), and Yunnan pine (Pinus yunnanensis Franch.), accounts for over 80% of the city's arbor forests [44,45]. So, the above four kinds of forest soils were selected for research.
Soil Sample Collection and Experiment. Soil samples of forests were collected in the study area from 17 to 26 July and 24 September to 1 October 2017 (Figure 1). Sampling was based on the soil genetic layers, and soil profiles were made at relatively primitive sites that were less affected by human activities. Soil samples of each occurrence layer were collected from bottom to top. Samples were prepared after indoor air-drying and after roots and stones were removed. The content of SOM was estimated by multiplying by 1.724 the soil organic carbon (SOC) content, which was measured by potassium dichromate oxidation with external heating [46]. The remaining dichromate was measured by volumetric titration [46].
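As a hedged sketch of the bookkeeping implied by this protocol — replicate reflectance curves averaged into one spectrum per sample, and SOM obtained as 1.724 × SOC — the snippet below uses invented arrays and is not the authors' processing code.

```python
# Hedged sketch: average replicate spectra per sample and convert SOC to SOM.
# Arrays are invented placeholders, not the study's measurements.
import numpy as np

VAN_BEMMELEN = 1.724  # conventional SOC-to-SOM factor used in the paper

rng = np.random.default_rng(0)
wavelengths = np.arange(350, 2501)                                # nm, SVC HR-1024i range
replicates = rng.uniform(0.1, 0.4, size=(10, wavelengths.size))   # 10 curves per sample

mean_spectrum = replicates.mean(axis=0)   # one representative spectrum per soil sample

soc_g_kg = 120.0                          # hypothetical SOC content
som_g_kg = VAN_BEMMELEN * soc_g_kg
print(f"SOM = {som_g_kg:.1f} g/kg from SOC = {soc_g_kg} g/kg")
```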
Considering the influence of particle size on the accuracy of SOM content estimation models [40,41], as well as the difficulty of preparing soil samples, the air-dried soil samples were sieved into four particle-size groups, <2 mm, <1 mm, <0.50 mm, and <0.25 mm, referred to below as 2 mm, 1 mm, 0.50 mm, and 0.25 mm. Spectral reflectance was measured with an SVC HR-1024i field spectrometer (Spectra Vista Co., Poughkeepsie, NY, USA) with a spectral range of 350-2500 nm and a field-of-view angle of 25 degrees. Spectral measurements were carried out in a dark room. The light source, at a zenith angle of 45 degrees and a vertical height of 65 cm, was the standard lamp matched to the spectrometer. The spectrometer probe was placed 10 cm from the center of the surface of the soil sample, which filled a black container 10 cm in diameter and 1 cm in height. The surface was scraped flat, the working table was covered with black paper, and the white reference panel was measured before each spectral measurement. Each soil sample was measured 10 times, with the sample rotated 72 degrees after every two measurements, so that 10 spectral reflectance curves were obtained per sample. Finally, 64 valid samples with both SOM content and spectral data were retained (Table 1).

Data Pre-processing

During spectral data acquisition, whiteboard error, instrument error, the test environment, sample impurities, and other factors inevitably introduce considerable noise into the spectral data. It was therefore necessary to perform whiteboard correction, smoothing de-noising, and band merging to improve the signal-to-noise ratio. These steps were carried out in the SVC HR-1024i software (Spectra Vista Co., Poughkeepsie, NY, USA). To enhance the spectral information and highlight the information sensitive to SOM, the original spectral reflectance (REF) was transformed into the reciprocal reflectance (RR), logarithmic reflectance (LR), first-derivative reflectance (FR), reciprocal first-derivative reflectance (RFR), and logarithmic first-derivative reflectance (LFR) in Microsoft Office Excel 2010 (Microsoft Corp., Redmond, WA, USA).

Modeling and Validation

Hyperspectral data contain a large amount of spectral reflection information, and estimation models are needed to extract the information related to SOM content. Considering operability, estimation ability, and stability, three commonly used hyperspectral estimation models, simple linear regression (SLR), stepwise multiple linear regression (SMLR), and partial least squares regression (PLSR), were established in Matlab R2017a (MathWorks Inc., Natick, MA, USA).
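As a rough illustration of the pre-processing step described above, the sketch below derives the five transformed indices from a measured REF spectrum. It is a minimal sketch in Python rather than the Excel workflow the authors used, and it assumes a simple first-difference derivative over the band spacing and a base-10 logarithm; the paper states neither convention, so both are assumptions.

```python
import numpy as np

def spectral_transforms(wavelengths, refl):
    """Derive RR, LR, FR, RFR and LFR from an original reflectance (REF) spectrum.

    wavelengths : 1-D array of band centres in nm
    refl        : 1-D array of reflectance values, same length
    Derivative arrays are one element shorter because they are first differences.
    """
    rr = 1.0 / refl                # reciprocal reflectance (RR)
    lr = np.log10(refl)            # logarithmic reflectance (LR); log base assumed
    dw = np.diff(wavelengths)      # band spacing in nm
    fr = np.diff(refl) / dw        # first-derivative reflectance (FR)
    rfr = np.diff(rr) / dw         # reciprocal first-derivative reflectance (RFR)
    lfr = np.diff(lr) / dw         # logarithmic first-derivative reflectance (LFR)
    return {"REF": refl, "RR": rr, "LR": lr, "FR": fr, "RFR": rfr, "LFR": lfr}
```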
The SLR model uses only the band with the highest correlation between SOM content and reflectance to construct a linear model for SOM content estimation. The model is simple, convenient, and easy to interpret physically, but its estimation accuracy suffers in the multi-variable case [47]. The SMLR model uses multiple bands with high correlation between SOM content and reflectance to construct a linear model [47]. It was developed from the multiple regression model to solve the collinearity problem among independent variables [48]. SMLR usually sets the significance level at 0.05 and filters variables one by one, forward or backward, to finally obtain the optimal model [36]. The PLSR model combines principal component analysis, multivariate regression, and correlation analysis; it takes the dependent variable into account while revealing the principal components that drive the change in SOM content [49,50]. Even when the number of samples is small and the data have strong collinearity and noise, the modeling analysis can still perform well and retain all the variable information [36,51].

The whole dataset (n = 64) was split randomly into 45 samples (about 70%) for calibration and 19 samples (about 30%) for validation. Pearson correlation analysis between SOM content and reflectance was performed on the calibration set with IBM SPSS Statistics 22.0 (IBM Corp., Armonk, NY, USA). Representative bands with significance (p < 0.01) and high correlation coefficients were selected as the characteristic bands of SOM, and SLR, SMLR, and PLSR models were then established. Model quality was evaluated with R² (Equation (1)), the root mean square error (RMSE; Equation (2)), and the RPD (Equation (3)) [36,52]. The bigger the R² and the smaller the RMSE, the better the fitting effect of the model [35,42]. The RPD, the ratio of the standard deviation of the measured values to the RMSE of the validation set, was used to test the estimation ability of the model: if RPD > 2, the model estimation ability is very good, and if RPD < 1.0, the estimation ability is very poor and the model cannot be used to estimate organic matter content [52]:

R² = 1 − Σᵢ(yᵢ − ŷᵢ)² / Σᵢ(yᵢ − ȳ)² (1)

RMSE = √[Σᵢ(yᵢ − ŷᵢ)² / n] (2)

RPD = SD / RMSE_v (3)

where yᵢ is the measured value of SOM content, ȳ is the average measured SOM content, ŷᵢ is the predicted value of SOM content, n is the total sample number (i = 1, 2, 3, . . ., n), and SD is the standard deviation of the measured values.

Spectral Characteristics of Soils with Different Particle Sizes

The average spectral reflectance of the four particle sizes was calculated, and the spectral reflectance curves were obtained (Figure 2). In general, the spectral reflectance increased rapidly in the 400-800 nm range, while the change over 800-2400 nm was relatively gentle. There were absorption valleys around 1400 nm, 1900 nm, and 2200 nm, which are generally considered to be caused by moisture [53]. The effect of particle size on soil spectral reflectance was obvious: the smaller the particle size, the higher the reflectance and the more pronounced the spectral changes. The main reason is that as particle size decreases, the voids between soil particles shrink, the soil surface appears smoother, and the spectral reflection is enhanced. The spectral reflectance of the 0.25 mm sample began to change significantly at around 580 nm, whereas the reflectance of the 2 mm, 1 mm, and 0.5 mm samples changed significantly around 800 nm.
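Returning to the modeling setup above, a minimal sketch of the calibration/validation split and the three accuracy measures follows, assuming Python with numpy and scikit-learn in place of the SPSS/Matlab tools named in the text; the sample standard deviation (ddof = 1) in the RPD is an assumption, since the paper does not state which estimator it uses.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def rmse(y_true, y_pred):
    # Equation (2): root mean square error
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    # Equation (1): coefficient of determination
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

def rpd(y_true, y_pred):
    # Equation (3): SD of the measured values over the validation RMSE
    return np.std(np.asarray(y_true), ddof=1) / rmse(y_true, y_pred)

# Random 45/19 calibration-validation split of the 64 samples, mirroring the
# roughly 70/30 partition described in the text. X is an (64, n_bands) matrix
# of spectra and y the (64,) SOM contents -- placeholders here:
# X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=19, random_state=0)
```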
Feature Band Extraction and Analysis

The correlation coefficient curves (Figure 3, taking the original spectral reflectance as an example) and the characteristic bands (Table 2) were obtained by analyzing the correlation between the spectral reflectance information and SOM content. REF was strongly negatively correlated with organic matter content (correlation coefficient |R| > 0.6, p-value (p) < 0.01) in the wavelength range of 580-690 nm, meaning this range is sensitive to organic matter. The 0.25 mm particle size had the greatest correlation with SOM at 628 nm (R = 0.67, p < 0.01). After the mathematical transformations, the correlation coefficients improved, and the first-order differential transformations increased the number of characteristic bands. However, the characteristic bands of the different indices differed, being mainly concentrated in the ranges 564-630 nm, 755-861 nm, 1020-1049 nm, 1365-1428 nm, 1520-1554 nm, 1600-1620 nm, 1738 nm, 1798 nm, 1924-1970 nm, 2160-2191 nm, 2239-2256 nm, and 2312-2320 nm (Table 2).
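The band screening described here could be reproduced along the following lines. This is a hedged sketch assuming the calibration spectra are held in a samples-by-bands matrix; it is not the SPSS procedure the authors used.

```python
import numpy as np
from scipy import stats

def characteristic_bands(X, y, wavelengths, p_max=0.01):
    """Return the bands whose index values correlate with SOM at p < p_max.

    X           : (n_samples, n_bands) matrix of one spectral index (e.g. REF or LFR)
    y           : (n_samples,) SOM contents of the calibration set
    wavelengths : (n_bands,) band centres in nm
    Bands are returned sorted by decreasing absolute correlation.
    """
    n_bands = X.shape[1]
    r = np.empty(n_bands)
    p = np.empty(n_bands)
    for j in range(n_bands):
        r[j], p[j] = stats.pearsonr(X[:, j], y)  # Pearson r and two-sided p-value
    keep = np.where(p < p_max)[0]                # significance filter
    order = keep[np.argsort(-np.abs(r[keep]))]   # strongest correlations first
    return wavelengths[order], r[order], p[order]
```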
Simple Linear Regression (SLR)

The SLR models were established by taking, for each of the six spectral transformation indices of the four particle sizes, the characteristic band with the highest correlation coefficient as the independent variable and the SOM content as the dependent variable (Table 3). The modeling results showed that the coefficients of determination of calibration (Rc²) of the REF, RR, and LR indices were all less than 0.6, and the root mean square error of calibration (RMSEc) was greater than 41. After the first-order differential transformations, the model fit improved: Rc² increased by up to 0.39, and RMSEc decreased by up to 21.

Among the four particle sizes, the 2 mm, 1 mm, and 0.50 mm particle sizes performed best with the RFR conversion index, the 0.25 mm particle size was fitted best with the LFR transformation index, and the Rv² of all of them was greater than 0.7. Of the four best models, all showed good estimation ability (RPD > 2) except the 2 mm-RFR model, whose estimation ability was low (RPD < 2). The best model is expressed as Equation (5):

Y_SOM = 1 × 10⁶ R_807 + 21.494 (5)

where Y_SOM is the predicted value of SOM and R_j is the spectral index value of the soil at wavelength j in nm.
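Fitting and applying an SLR model of this form is a one-variable least-squares problem. The sketch below shows it for Equation (5), where the 1 × 10⁶ slope reflects the very small magnitude of the derivative-index values; the fitting helper is illustrative, not the authors' Matlab code.

```python
import numpy as np

def fit_slr(x_band, y_som):
    """Least-squares fit of Y = a*x + b on a single characteristic band."""
    a, b = np.polyfit(x_band, y_som, deg=1)
    return a, b

def som_from_eq5(r_807):
    """Equation (5): predicted SOM from the spectral index value at 807 nm."""
    return 1e6 * r_807 + 21.494
```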
Stepwise Multiple Linear Regression (SMLR)

Since the REF, RR, and LR indices each yield only one characteristic band, only the FR, RFR, and LFR transformation indices were analyzed in the SMLR and PLSR models. The SMLR models were established with the characteristic bands extracted from the three spectral transformation indices of the four particle sizes as the independent variables and SOM content as the dependent variable (Table 4). Rc² varied from 0.78 to 0.90 with an average of 0.85; the maximum RMSEc was 30.47, the minimum 20.68, and the average 24.83. The best calibration result was the model of the 0.25 mm particle size with the RFR transformation index. The validation results showed that Rv² ranged from 0.29 to 0.91 with an average of 0.70, RMSEv ranged from 42.38 down to 13.60 with an average of 26.35, and RPD varied from 1.18 to 3.28 with an average of 2.04. Overall, the SMLR model fitted well, estimated SOM content strongly, and was superior to the SLR model: the average Rv² increased by 0.19, the average RMSEv decreased by 4, and the estimation ability (RPD) improved by 0.49.

For the four particle sizes, the models of the 2 mm and 1 mm particle sizes with the RFR conversion index were better, while the 0.50 mm and 0.25 mm particle sizes were fitted better with LFR. The best model combination was the 0.25 mm-LFR (Rv² = 0.91, RMSEv = 13.60, RPD = 3.28), followed by the 0.50 mm-LFR and 2 mm-RFR models; the worst was the 1 mm-RFR (Rv² = 0.79, RMSEv = 23.21, RPD = 2.16). Figure 5 compares the values estimated by SMLR with the measured values of the validation samples; the expression is shown as Equation (6), where Y_SOM is the predicted value of SOM and R_j is the spectral index value of the soil at wavelength j in nm.

Partial Least Squares Regression (PLSR)

The PLSR models were constructed from the reflectance (including its mathematical transformations) and SOM content. The calibration results (Table 5) showed that the fitting accuracy of the PLSR models was good: Rc² was above 0.79 for all models (the maximum was 0.90, for the 2 mm-RFR model), with a mean of 0.85; RMSEc varied from 21.00 to 29.93, with an average of 25.23.
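A forward-only sketch of the stepwise selection described above follows (a full stepwise procedure can also drop variables once entered); it assumes statsmodels and uses the 0.05 entry threshold named in the text.

```python
import statsmodels.api as sm

def forward_stepwise(X, y, alpha=0.05):
    """Add, one at a time, the band whose coefficient is most significant;
    stop when no remaining band would enter at p < alpha.

    X : (n_samples, n_bands) characteristic-band values; y : SOM contents.
    Returns the indices of the selected columns and the fitted OLS model.
    """
    selected, remaining, model = [], list(range(X.shape[1])), None
    while remaining:
        trials = []
        for j in remaining:
            m = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
            trials.append((m.pvalues[-1], j, m))  # p-value of the candidate band
        p_best, j_best, m_best = min(trials, key=lambda t: t[0])
        if p_best >= alpha:
            break
        selected.append(j_best)
        remaining.remove(j_best)
        model = m_best
    return selected, model
```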
The validation results showed that Rv² ranged from 0.65 to 0.91, with an average of 0.77; RMSEv ranged from 13.41 to 26.27, with an average of 20.96; and RPD ranged from 1.70 to 3.33, with an average of 2.20. Overall, the PLSR model exceeded the SLR and SMLR models in modeling accuracy: its average Rv² was 0.07 higher than that of the SMLR model, its average RMSEv was 5.39 lower, and its RPD improved by 0.16.

Comparing the best models of the four particle sizes, the results were similar to those of SMLR: the models of the 2 mm and 1 mm particle sizes with the RFR conversion index were better, while the 0.50 mm and 0.25 mm particle sizes were fitted better with LFR. The optimum model was the 0.25 mm-LFR; in its expression, Y_SOM is the predicted value of SOM and R_j is the spectral index value of the soil at wavelength j in nm.

Summarizing the three models, SLR, SMLR, and PLSR, the estimation ability of the SLR model was the worst: most SLR models gave R² below 0.6 and a high RMSE, while the fitting effect and estimation ability of the PLSR and SMLR models were close to each other and better than those of the SLR model. The accuracy of the PLSR model was, however, higher than that of the SMLR model. Comparing the optimal models of the three modeling methods, the Rv² and RPD of the 0.25 mm-LFR-PLSR model improved by 0.08 and 0.93, respectively, compared to the SLR model, and RMSEv was reduced by 5.20; in addition, its RPD was 0.05 higher and its RMSEv 0.19 lower than those of the SMLR model. Therefore, the hyperspectral estimation model based on 0.25 mm-LFR-PLSR was the best estimation model for forest SOM content in northwest Yunnan.
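For PLSR, scikit-learn offers an equivalent of the Matlab routine used here. The sketch below also chooses the number of latent variables by leave-one-out cross-validation, a selection step the paper does not describe and which is therefore an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def fit_plsr(X_cal, y_cal, max_components=10):
    """Pick the component count with the lowest leave-one-out RMSE,
    then refit on the full calibration set."""
    best_n, best_rmse = 1, np.inf
    for n in range(1, max_components + 1):
        # cv = n_samples gives leave-one-out predictions
        y_cv = cross_val_predict(PLSRegression(n_components=n),
                                 X_cal, y_cal, cv=len(y_cal))
        e = np.sqrt(np.mean((np.asarray(y_cal) - y_cv.ravel()) ** 2))
        if e < best_rmse:
            best_n, best_rmse = n, e
    return PLSRegression(n_components=best_n).fit(X_cal, y_cal)
```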
Discussion

This study measured the spectral data of 2 mm, 1 mm, 0.50 mm, and 0.25 mm forest soil samples and analyzed the relationship between spectral reflectance (and its mathematical transformations) and SOM content to obtain the characteristic bands. The results showed that as particle size decreased, the spectral reflectance of the soil increased. At wavelengths greater than 580 nm the spectral reflectance changed significantly, and the smaller the particle size, the greater the increase in reflectance. This agrees with other studies, which found that spectral reflectance varies significantly with particle size at wavelengths greater than 600 nm [42] and that spectral reflectance increases exponentially with decreasing particle size [39]. The response bands of SOM in the original spectral reflectance curve were mainly at wavelengths of 580-690 nm (|R| > 0.6, p < 0.01); this result is basically consistent with the 620-660 nm proposed by Xu [16], the 550-700 nm proposed by Galvão et al. [14,15], the 570-630 nm proposed by Peng et al. [13], and the 560-710 nm proposed by Fang et al. [17]. Although there is no uniform standard for the pre-processing of spectral reflectance [19], it is undeniable that the first-order differential transformation of spectral reflectance can significantly enhance the characteristic spectral information of SOM, reduce the effect of noise, improve the correlation between SOM and reflectance, and increase the number of characteristic bands. In addition, the spectral information of soil is complex, being affected by the parent material, water, soil nutrients, and texture [22,54,55]. The focus of SOM content spectral estimation research is therefore to select the best bands for the particular study area and soil types.

SOM spectral reflectance is affected by soil formation factors and soil properties, so it is difficult to find a general SOM content estimation model for different soil types [33,56]. Hou et al. [23] established a hyperspectral estimation model of SOM content in the desert and showed that the PLSR model fitted better than the SMLR and SLR models. Si et al. [41] built SOM estimation models based on PLSR and considered the model with the 0.25 mm particle size the best, with R² and RMSE of 0.816 and 4.26, respectively. Zhou et al. [20] and Hu et al. [21] considered the logarithmic first-order differential model of spectral reflectance the best for estimating SOM content in soil plowing layers. The 0.25 mm-LFR-PLSR model obtained in this study basically conforms with the results of the above studies, and the particle size also matches that used for the chemical analysis of SOM. At the same time, the results of estimating SOM content in soils with small particle sizes (0.50 mm, 0.25 mm) were more accurate, and the optimum particle size was 0.25 mm. This differs from the research by Li et al.
[42], which was based on particle sizes of 2 mm, 1 mm, 0.25 mm, and 0.15 mm and concluded that the 1 mm particle size was best. There are two reasons for this: (1) the modeling accuracy of small samples tends to increase as particle size decreases [21]; (2) there may be impurities such as fine grains and root debris, which affect the spectral information and the modeling accuracy of SOM. For the modeling results, the R² and RPD of the 0.25 mm-LFR-PLSR model are better than those of the PLSR models in the studies by Fidêncio et al. [27,28], Dunn et al. [29], Kooistra et al. [30], and Hou et al. [23], but the RMSE is slightly worse than in those previous studies [23,27-30]. The reason is that the SOM content of the soil samples varies over a wide range (3.28-257.34 g/kg), the differences in SOM content are large, and the coefficient of variation is high (0.95), which affects the accuracy of the SOM estimation model to some extent. On the whole, however, the model has good estimation ability.

Conclusions

Overall, as soil particle size decreases, the spectral reflectance increases, and the spectral reflectance of the different particle sizes changes markedly near the 580 nm wavelength; the smaller the particle size, the larger the increase in reflectance. In the original spectral reflectance curve, the sensitive bands of SOM were mainly in the range of 580-690 nm (|R| > 0.6, p < 0.01), negatively correlated with forest SOM, and the spectral information of SOM can be significantly enhanced by first-order differential transformation. The PLSR method outperformed SLR and SMLR according to Rv², RMSEv, and RPD; the predictive ability decreased in the order PLSR > SMLR > SLR. The PLSR method based on the 0.25 mm-LFR index is therefore best for estimating forest soil organic matter by hyperspectral technology in northwest Yunnan.

The results provide a basis for the rapid monitoring of SOM content and for ecological environment management in the forests of northwest Yunnan, and also provide a reference for the hyperspectral estimation of SOM content in other areas. However, with the increasing demand for precise soil management, further studies should collect and analyze more data for estimating SOM content in other forest areas to improve the applicability of the estimation methods. We hope that, in further study, satellite-borne or airborne hyperspectral remote sensing can be used to estimate SOM in such areas with satisfactory results.
Figure 1. Position map of Shangri-La and distribution of sampling points in different stands.

Figure 2. Spectral reflectance curves of soil: (a) 60 spectral reflectance curves of the total samples; (b) mean spectral reflectance curves of the four particle sizes.

Figure 3. Correlation between original spectral reflectance (REF) and organic matter content of the four particle sizes.

Figure 4. Comparison of estimated and measured values of soil organic matter (SOM) content by simple linear regression (SLR). R² indicates the coefficient of determination, RMSE the root mean square error, and RPD the ratio of percent deviation.

Figure 5. Comparison of estimated and measured values of SOM content by stepwise multiple linear regression (SMLR).

Figure 6. Comparison of estimated and measured values of SOM content by partial least squares regression (PLSR).

Table 1. Statistical characteristics of soil organic matter (SOM) content.

Table 2. Characteristic bands and correlation coefficients of SOM with different particle sizes.

Table 3. Modeling results of simple linear regression (SLR) for SOM content. 1 Rc²: coefficient of determination of calibration; 2 RMSEc: root mean square error of calibration; 3 Rv²: coefficient of determination of validation; 4 RMSEv: root mean square error of validation; 5 RPD: the ratio of percent deviation.

Table 4. Modeling results of stepwise multiple linear regression (SMLR) for SOM content.

Table 5. Modeling results of partial least squares regression (PLSR) for SOM content.
Empire, Tianxia and Great Unity: A historical examination and future vision of China's international communication

A theoretical imagination of the world order and global landscape is necessary for China's international communication. 'Empire – nation state', the dominant structure in the contemporary world, entails the logic of imperialism. However, the perspective of 'the world' (Tianxia; 天下) in ancient China introduced alternative theoretical challenges. Chinese scholars have been devoted to developing a vision of a society of 'Great Unity' (Datong; 大同) over the past century. Based on historical examination, this article aims to explore new approaches to China's international communication.

... distinguish itself from the Western colonisers. Wang Hui (2012) mentioned that China had done a lot in Africa. However, people in Africa had absolutely no idea what sort of order China hoped to achieve in Africa and in the world, and what exactly it was that China wanted. Second, when the emphasis is placed solely on 'going out' and the exploration of thinking and logics is overlooked, it is likely to lead to power struggles among nations and yet another round of the usual international competition. The authors believe the 'going out' of Chinese culture and media has breathed new life into external communication; it will continue to face new challenges and must offer a theoretical vision of world order and a global picture. Going through and examining the history of the key logic of China's external communication, this paper reflects on the 'empire' or old imperialist logic that continues to exist in contemporary national struggles before reviewing China's historical resources, observed from the perspective of Tianxia. Finally, the reliability and explanatory power of the ideology of the 'Great Unity' in modern times are explored to offer ideas and inspiration.

'Empire' logic and imperialism

Empire may be defined as a monarchical system of government. From the perspectives of historical context and academic history, however, the connotation of the word has far surpassed this definition. History has seen no shortage of empires: ancient Rome, the Byzantine Empire, the Ottoman Empire and the Golden Horde were all once prosperous. In China, even before Qin Shi Huang proclaimed himself emperor, the governmental system of 'all the lands under heaven are the King's land' was long in practice, and it continued until the replacement of the Qing dynasty by the Republic of China. These 'pre-modern' empires had two key characteristics: first, power remained in the hands of the emperor or his equal; second, the desire for territorial expansion remained insatiable. Punitive expeditions were basically how empires communicated with the outside world, while geographical positions afforded natural defences and led to the formation of different civilisations, which laid the epistemological foundations of the East and the West. Meanwhile, expansive lands and their populations in Africa, Latin America and Oceania remained forgotten on the edge of the world. Worldwide, empire has become a modern concept and has led to the logic of an international order that is interconnected with the rise of capitalism, colonialism and nationalism. As capitalists desperately seek to expand, the occupation of new land and labour goods means colonialism becomes inevitable.
The British Empire of the 18th and 19th centuries serves as a classic example: 'the empire on which the sun never sets' stretched across Australia, Hong Kong, India, Africa, the British Isles and America. It gave rise to the modern concept of 'empire', which possesses three features: power remains in the hands of a small minority and certain social classes, the territory goes beyond ethnic boundaries, and inequality, such as colonialism, prevails. Modern empires seem to be an extension of the ancient ones but are in fact products of capitalism's global colonialism. To cross geographical barriers, communication broadly defined, covering both transport and dissemination, continually produced new tools and systems that broke free of the limitations of time and space and built the infrastructure of modern empires. For instance, the British government decreed in 1845 that the standard gauge be 4 feet 8½ inches, which went on to become an international standard; Greenwich Mean Time became the international time standard; and English has become the lingua franca of international business and politics. That 'law may be enforced' only with a 'standardised gauge and writing system' is a point shared by modern colonists and Qin Shi Huang. Zhao Yuezhi (2011) remarked that a British colonial governor in Australia referred to the telegraph as 'a great imperial binding force' (p. 144), while the forerunner of the BBC World Service was known in 1932 as the BBC Empire Service. James Carey (1992) elaborated on how the telegraph changed the trading system and became the capitalists' new tool for speculation and moneymaking. A universal pricing system was thus built, whose dissemination was part of the endeavour of colonialism. The telegraph turned capital's point of profit from time to space and separated transport from communication, which enabled capital to control a wide territory. It was exactly this powerful weapon of colonialism that brought about its strong reactant: nationalism and the rise of nation states. Inequality between colonies and suzerains gave rise to calls for independence in North and Latin America between the end of the 18th century and the early 19th century, during which time a series of non-imperial republican countries were created. According to Anderson (1983/2006), that was the genesis of modern nation states and nationalism. All the dynasties and empires basically came to an end or disappeared from the scene after the First World War, while nation states became the units of modern international politics. Anderson believed nations were 'imagined communities' whose existence as social organisms required, first, a change in epistemology; that is, the perception of time turned from oracle time to horizontal history. The combination of capitalism, printing technology and the fatality of human linguistic diversity then prompted the formation of communities based on particular vernacular languages. Although nation states replaced empires as the main actors in international politics in the early 20th century, the logic of empire has never died down. Some newly founded nation states continued the 'imperialism' of the ancient empires, while others, newly liberated from colonialism, became dependent on imperialism or rose up against it. A few European countries acquired hegemony in the world, and such a global system entailed inequality and marginalisation.
Despite the rise and fall of national power, phenomena such as competition and rivalry, interference or assimilation, and territorial occupation or economic control remained commonplace. Such logic is a mutation, within the Tianxia system, of the 'free competition' of capitalism and the 'survival of the fittest' of Darwinism. It is also a product of global civilisations being run over by 'homogeneous, empty time'. Lenin (1917) regarded 'imperialism' as the phase in which capitalism becomes monopolistic, with global monopolisation being the core characteristic of financial capital. In fact, imperialism still haunts Tianxia. Christian Fuchs, a pioneer of critical communication studies in Europe, believes Lenin's theory to be valid even today. The global financial crisis triggered by the US subprime mortgage crisis in 2008 showcased the contemporary face of capital: global infiltration, extreme monopolisation and unequal financial dependency. Harvey (2003) quoted Giovanni Arrighi in his analysis; he believed the power of an empire might be divided into territorial and capitalist logics, one of which occupies the leading position in a given historical phase. The Second World War led to the collapse of territorial occupation but rebuilt dependency on the capitalist economy, which gave rise to a 'new imperialism', a mutation brought about by neoliberalism. A decentralised global imperial system was proposed in Empire (2000) by Michael Hardt and Antonio Negri, whereby a world ruling system is built with the United States, some supranational unions and monopoly-finance capital at the top. The order of such an empire coincides with globalisation, which confirms the criticism that globalisation is in essence a version of 'Americanisation'. Since 9/11 in 2001, the United States has once again confirmed its global hegemony by taking advantage of the call for anti-terrorism: on the one hand, it has resorted immediately to military power to defend a 'global order' centred around itself, while, on the other hand, directing international opinion. However, contemporary US hegemony is no longer simply a military expedition of the imperial era but rather a cultural vanguard. Hollywood films, multinational media corporations, Internet speculators and popular merchandise all directly or indirectly serve as the vanguard of capitalist expansion and the main force exporting values. 'Soft power', as coined by Joseph Samuel Nye Jr., still follows the logic of imperialism. This is exactly the 'cultural imperialism' or 'media imperialism' fiercely criticised by many leftist scholars. As Schiller pointed out in Mass Communications and American Empire (1969), mass communication had by then emerged as a pillar of American imperialism: information 'made in America' was disseminated across the globe, playing the role of the nerve centre of America's national power and expansionism (Schiller, 2006, p. 142). In short, the logic of empire, or old imperialism, remains the leading content of the current world system, with communication, media and culture as the new battlefields. As Zhao (2011) put it,

The emergence of the modern form of world communication might have brought about the utopian vision of the great world unity; the process, however, remained a basic component in the global expansion of Western colonialism and capitalism as a socioeconomic system (Mattelart, 2000).
Today, following national liberation movements and the rise of post-colonial nation states, the world order might have been incorporated into a new empire in which no 'outside' exists any longer. This is because all regions have been drawn into the logic of empire and borders have fallen away, while the world's political and economic powers can no longer treat the incorporation of the 'outside' into their colonies as a goal, as in colonial times. (pp. 143-144)

The Tianxia system of ancient China

Nearly three centuries of Western history have formed a comparatively clear lineage, that is, the direct succession of the logic of empire as embedded in the international political system of 'empire – nation state'. Chinese history, however, poses challenges to such an analysis of empires: Was ancient China a classical representation of an empire with an insatiable appetite for both internal and external colonies? In what sense did China become (or not become) a 'nation state'? Has China built national traditions that are distinctly different from the West's? On this basis, how did the external communication of ancient China differ in concept and practice from modern international communication? These are no easy questions. 'China' is not a fixed concept; it comprises complex regional and cultural communities that continuously grew and contracted throughout history. The face of its civilisation changes, accumulates and hides among different narratives, and the review of 'history' from any point in time is actually a reconstruction of history. In The Elaboration of Chinese History, Liang Qichao divided Chinese history into three phases: last-generation history, middle-age history and early modern history. The first ran 'from the imperial times to Qin's unification of the states', the second 'from Qin's unification to the Qianlong Emperor of the Qing dynasty', and the last 'from the last year of the reign of the Qianlong Emperor to today'. Liang's criterion of division was China's relations with Tianxia, and the three phases reflect China's China, Asia's China and the world's China. To quote Ge (2011), the first two phases were 'a self-centred visionary era', while the last was 'a reflective era', with the West serving as a reference. During the last-generation historical period, only China existed and there was no world. The concept of 'China', however, did not refer to an independent country but rather to a region. Whether in archaeological studies of ancient ware or in linguistic analysis of the origin of the word, 'China' seemed to contain multiple meanings and might refer to the Central Plains, the capital or the imperial government (Hong, 2006; Yang, 2009). It often referred to the Nine Provinces and was a synonym for Huaxia, Zhonghua and the Divine Land, and an antonym for 'the four seas' or 'Siyi'. The chapter 'Land Explanation' of the Erya defines the four seas as the wild tribes: the Jiuyi, Badi, Qirong and Liuman. Tribes outside the Central Plains were referred to as the Dongyi, Xirong, Nanman and Beidi; they shared the distinct characteristic of being 'uncivilised' and were commonly described as 'people beyond the pale of civilisation'. This did not mean, however, that the paths of these tribes and the Huaxia never crossed or that the Nine Provinces were monolithic. The people of Huaxia lacked the idea of 'Tianxia' and could only vaguely differentiate between 'self' and 'others', not to mention possess a clear awareness of nationality. The chapter 'Yan Hui' in the Analects states, 'within the four seas all men are brothers'.
The chapter 'Great Learning' in the Book of Rites explains that those of the ancient times wishing to extend the Way to the whole world had first to govern their states well; to govern a state well, they had to rationally manage their family clans; to manage family clans rationally required personal probity; […] personal probity directs the rational management of family clans; the ability to manage family clans rationally may then be developed to facilitate proper state governance; and well-governed states ultimately lead to world peace. The 'differential mode of association' put forward by Fei Xiaotong offers the most suitable explanation: 'person – family clan – state – world' constitute a sequence, arranging different ethnic groups by cultural closeness, so that a structure of concentric circles is formed. Such a structure is political, but even more so cultural. The eventual ideal is what Confucius referred to as the 'Great Unity of Tianxia'. Since the period was marked by the lack of an 'outside', 'external communication' or 'international communication' in the modern sense has no place here. During the Spring and Autumn period and the Warring States period, the ancient Chinese states communicated, promoted and persuaded one another, which required diplomatic language and public communication. However, the competitiveness and differences were not so glaring; it was simply information communication within the same culture. Li Si's Petition against the Expulsion of Guest Officers, as found in the Records of the Grand Historian, offers a glimpse of the situation then. Qin recruited men from different states, including the neighbouring states of Song, Jin and Wei, and from outside the Central Plains, such as the Xirong. Referring to treasure, beauties and music, Li remarked on the popularity of the music of Zheng and Wei in Qin, which illustrated how common cultural communication was. He finished with the point: 'As he embraces the people, the virtue of an aspiring individual seeking to build an empire is thus highlighted'. The land is thereby united and the people unified, which will bring about year-round prosperity and blessings from both heaven and earth; that is how the Three Sovereigns and Five Emperors remained invincible. In other words, the distinction between states, and between the Central Plains and the Siyi, was not clear-cut; the key was the ability to display 'illustrious virtue' before one could undertake what was necessary to become an emperor. All in all, communication was frequent among states during the Spring and Autumn and Warring States periods; an integrating trend was even emerging despite the political struggles within the cultural community. After Qin united the six states, in the middle-age historical period, 'Tianxia' expanded and changed. Observed through the structure of concentric circles from the inside out, the communal concept of 'China' was showing clearer borders. A symbolic border was built during the Qin dynasty, when the Great Wall was constructed to keep out the nomads in the north. Borders were established among the Song dynasty, the Liao, the Jin and the Western Xia. The Great Wall was consolidated and geographical advantages exploited during the Ming dynasty. Political powers all around rose and fell along with the dynasties in the Central Plains, whether through control, tribute, expedition, trade or even invasions of the Central Plains by foreign tribes.
Polities such as the Korean Peninsula, Japan, Siam and the Ryukyu Kingdom remained on the outskirts of Chinese civilisation. The relationship between the Kingdom of Joseon and the Ming dynasty during the 14th and 15th centuries is considered a classic case of tribute, which led to the creation of a community of East Asian civilisation. Further afield, India, Persia and even Europe were included in China's perception of Tianxia through the dual channels of intermittent communication and continued imagination (Jung, 2006). 'Imagination' is critical to understanding the 'Tianxia' concept of China over its long and profound history. China believed itself to be 'the centre of Tianxia' and to possess the best etiquette and culture. 'Roping in countries afield through kindness' and mollification were adopted as the basic strategies, whereby foreign countries only needed to pay tribute and required no direct domination. Through his analysis of the three resources on which ancient China based its picture of foreign lands (legends, the Portraits of Periodical Offering and travel logs), Ge (2011) dissected the formation and essence of such imagination. He then elaborated this world concept through his analysis of maps and pointed out that the imagination began to be challenged after the middle period of the Tang dynasty, followed by a drastic change during the Song dynasty. As Ge (2011) notes,

The change was of great significance. In the history of ideas, the Sino-barbarian dichotomy and the tributary system of ancient China were turned from actual strategies into an imagined order; in an imagined world, from a commanding position in a real system into self-consolation; in political history, from a grand and arrogant imperial nation into an equal diplomatic strategy; in thinking history, from the mainstream ideology of scholar-officials on Tianxia, China and the Siyi, and from the universalism of 'all under Heaven is the king's land', into a nationalism of self-imagination. (p. 47)

Imagination is constructed by communication. External communication was more open before the Song dynasty, as in Zhang Qian's mission to the Western Regions during the Han dynasty, and Xuanzang's journey to the West and Jianzhen's missionary trips to the East during the Tang dynasty. These were communication events tinged with political colour as well as meaningful public cultural exchanges, involving diplomacy while also focusing on cultural and religious activities. Such communication was also facilitated by trade, immigration, marriage alliances and expeditions, just as Li Qi wrote in his poem: 'Ever more remains are buried in the wilds year after year, vainly in exchange for grapes'. Rivalry marked the relations between the imperial government in the Central Plains and the surrounding countries during the Song dynasty, greatly limiting cultural communication. Emperors of the Song dynasty repeatedly prohibited the sale of books, apart from texts related to the Nine Classics, to the market of the Liao dynasty, particularly books on current affairs. As Liu (2006) remarked:

Emperor Shenzong of Song decreed in the first year of Yuanfeng (1078 AD) that, apart from texts related to the Nine Classics, individuals found selling any books to northerners or foreigners would be punishable by 3 years behind bars, while the inducer would receive a reduced sentence. All parties would be banished to the next state or by a thousand li, should the circumstances be grave. Informants would be rewarded.
The same treatment was imposed on the people of Jiaozhi and Goguryeo. Nevertheless, cultural communication remained unbroken; Lv (2012) stated that cultures were communicative, but the routes tended to be winding. The Qing dynasty was another drastic period of transition. After the Manchu conquest of China, the introduction of new clothing symbolised the makeover of the traditional culture of the Central Plains. This led to the collapse of the community of East Asian culture and of the tributary system. Western power completely shattered the 'world imagination' of China, pulling the ancient tradition out of its existing concepts of time and space and throwing it into the international system of modern capitalism. Thus began the phase of 'the world's China' in early modern history. East and West, tradition and modernity constitute the fundamental issues of modern China. In short, Chinese tradition and history constructed an international order, or a vision of international order, that differs from the logic of modern empires. We shall attempt to answer the questions posed at the beginning of this section. First, ancient 'China' did adopt imperialism, but it was not an expansive empire in the modern sense. As stated earlier, China had a long imperial tradition and the urge for foreign expansion and colonies, as Wu (2012) explained. Nevertheless, an essential difference distinguished it from modern empires driven by capital. Second, if the nation states that arose in Europe and the colonies that became independent after the First World War were indeed products of capitalism, then China had established a clear national consciousness as early as the Song dynasty. If the Western nation states were 'imagined communities', then the imagination of China seems to have a far longer history and a more solid foundation; if the former were political communities, then China is a cultural community. Third, this has led to a fundamental difference between China's vision of 'international relations' (we settle for the term for the time being) and that of the West, the most critical elements of which are the 'Tianxia view' and the 'tributary system'. The civilisation community that China built with its vassal states was neither equal diplomacy nor pure imperial rule or colonialism (Ru & Gong, 2009). Jung (2006) explained that the tributary system might have been an international order centred around China, but its maintenance did not depend solely on unilateral compulsion or favour from China; it was partaken of by individual stakeholders and relied on the joint effort of the surrounding countries. Fourth, the external communication of ancient China was neither the mind control of cultural imperialism nor the pull of cultural soft power. It was more like a cultural ripple effect, involving interaction, assimilation, resistance and integration; its effect on surrounding areas was like that of 'overflowing' water. The Western great powers arrived in East Asia in the 19th century and pulled China from being 'East Asia's China' into 'Tianxia', which brought about a new round of national construction in China and the adjustment of its behaviour to partake in the international order. Nevertheless, ancient China left behind rich intellectual resources which, when combined with modern thinking and actual practice, may introduce a more imaginative ideology of international order.

The Great Unity of Tianxia: re-visioning Tianxia

'Saving the nation' has been a critical theme in modern China.
From a grand imperial nation to a backward country under attack, from the centre of the world to being unable to find a footing in it, the stark contrast propelled modern Chinese thinkers to continuously explore new paths. From the Hundred Days' Reform and the Self-Strengthening Movement to the Xinhai Revolution and the Socialist Revolution, a prominent characteristic was the use of the West as a reference to set up a new 'other' for China. However, China's vision of world order never deviated from its historical traditions, which were in fact integrated with modern thinking for theoretical deduction. Notable figures of the Hundred Days' Reform such as Kang Youwei and Liang Qichao were examples of modernised traditional Chinese intellectuals. Kang wrote A Book of Great Harmony during his exile between 1901 and 1902, drawing on the Confucian thinking of the 'great unity' of 'Tianxia as one'. He created a utopia based on the theoretical framework of the three phases of historical development (troubled times, peaceful times, great peace). He even proposed a world of great unity in which borders were removed: 'Tianxia as one without nations … whereupon states do not exist, emperors do not exist, everyone loves one another, everyone is equal, Tianxia is one, and that is great unity. Such unity is the system of a world with great peace'. Kang's argument may have been well made, covering 5000 years of Chinese history, but his thinking was built upon the Confucian idea of the sound practice of the Tao in every aspect and was influenced by Social Darwinism. Consequently, he put forward the principle of 'survival of the fittest', whereby a 'civilised country' would take out a 'barbaric country', which rendered his theory a 'vulgar theory of historical evolution' (Li, 1955). As Mao Zedong explained, 'Kang wrote A Book of Great Harmony. He did not and could not have found a path to great unity' (Mao, 1991). Similarly, Liang mentioned 'a nation of cosmopolitanism' and advocated that we may not be aware only of a nation and not of the world; we should exploit the talent and gifts of every individual in the nation to the fullest, with the help of the nation, to contribute greatly to the civilisation of mankind in the world, and this will be the trend in every country. Unfortunately, Liang's argument held no substance. Sun Yat-sen, a revolutionary, might have chosen a path that differed from that of the Hundred Days' Reform, but his preference for Confucius's 'Tianxia as one' was no different. He repeatedly referred to the concept in his speeches and writing, and it came to be one of his core ideas for the founding of a nation. For instance, in his Three Principles of the People speech in 1924, Sun stated, 'Unite a world based on existing morality and peace to create the governance of great unity'. His idea of a world of great unity was closely interrelated with the three principles: nationalism, democracy and people's welfare. The achievement of national equality, human equality and equality of wealth was the achievement of a society of great unity. His thinking melded the bourgeois view of republican democracy, Confucian morality and even socialism, reflecting a certain harmony but also nationalism. In his speech to the troops in December 1921, he remarked,

Upon the success of the revolution, the treasure left behind by our forefathers throughout history shall be exploited.
The nation shall endeavour to provide for the four major needs of the people: food, clothing, shelter and transport, so as to strive for the happiness of the public. Meanwhile, the young will be taught, the strong will be employed and the old will be cared for. The Confucian ideal of Tianxia as one may really be achieved, creating a new republic of China that is solemn and grand and rises above Europe and America. (Quoted from Huang, 2006)

If 'the great unity of Tianxia' is a utopia, then it has much in common with another 'utopia': communism. Mao published On the People's Democratic Dictatorship on 30 June 1949 and also mentioned 'the great unity':

For the working class, the working people and the Communist Party, it is not about what is being toppled. It is actually about hard work that creates the conditions for the natural annihilation of social classes, national power and political parties, thus advancing mankind into the realm of great unity.

He believed Kang had failed to find the real path to great unity, which was only possible 'through the people's republic that reaches socialism and communism to achieve the annihilation of social classes and the great unity of Tianxia'. This demanded the implementation of socialist revolution and reform at home, and, abroad, 'uniting the nations and peoples in Tianxia that treat us equally in our joint struggle'. Following the founding of the nation, the slogans on the Tian'anmen gate tower were finalised as 'Long Live the People's Republic of China' and 'Long Live the Great Unity of the People of Tianxia', which reflect the international-order view of a socialist country. Such a view has infiltrated the core policies and diplomatic strategies of the country. The ideal of the great unity of socialism is embedded in concepts from 'communist ideals' to the 'harmonious society' and the 'Chinese dream'. The unique 'one country, two systems' arrangement China adopted for Hong Kong and Macau suggests an extension of the 'Tianxia ideal' that surpasses the basic framework of the nation state and inspires solutions to the conflict on the Korean Peninsula (Zheng, 2006). Of course, the idea of 'Tianxia as one' is not unique to China; similar arguments are found in thinkers such as Marcus Aurelius, Immanuel Kant and Ulrich Beck, and in traditions including the 'utopia' of ancient Greece and the 'one world' of India. To a certain extent, 'the great unity' and 'cosmopolitanism' are two of a kind. However, the unique historical traditions of China may offer international politics unique intellectual resources and an alternative practice. Contemporary Chinese thinking reflects deeply on these theories. The 'Tianxia system' theory of T. Zhao (2003) explains that the most critical meaning of 'Tianxia' is ethical and political: it is an ideal of one world, imagining a political unit that surpasses the nation and offering a value gauge that differs from that of a nation or state. Wang (2012) goes back to the discourse of Wu Wenzao and Fei Xiaotong on the Chinese nation and proposes the theory of 'surpassing the new warring states' from the point of view of the uniqueness of the Chinese nation. Fei's (2000) expression 'appreciate the culture/values of others as one's own, and Tianxia will become a harmonious whole' shares the basic principle of 'harmony without sameness', which may be considered a programmatic principle of a new international order. A new vision is thus provided for China's external communication. For example, C.
Li (2011), former president of Xinhua News Agency, published an article titled Toward a New World Media Order in the Wall Street Journal, in which he emphasised that 'In our interdependent world, the human community needs a set of more civilized rules to govern international mass communication'. Li also proposed the construction of a mechanism for media communication and negotiation known as a 'media U.N.', which surpasses the design of the nation state.
Conclusion
The external communication of contemporary China is usually framed in terms of strategic demands, and an immense number of books on strategy, means, technique and effect are already on offer. However, a more fundamental vision of Tianxia order may offer a sounder foundation. The basic structure of 'empire - nation state' requires reflection, for the logic of imperialist struggle embedded in it is largely responsible for world war and conflict. The 'great unity of Tianxia' of ancient China has the potential to offer an alternative world vision. In popular vocabulary, the 'Chinese Century' is often compared with the prosperous periods of the Han and Tang dynasties. Times have changed, however, and contemporary political, economic and cultural systems differ greatly from those of a thousand years ago. The intellectual resources of the Chinese Century are far more complex than those revealed by the logic of an ancient society. How to achieve the 'Three Shared Unities' (Gan, 2007) while searching for the core logic that enables China to stand among the world's nations, based on the integration of these intellectual resources, should be the substance of China's search for 'confidence in our chosen path'. China's external communication may thus be inspired.
TiO2/Halloysite Composites Codoped with Carbon and Nitrogen from Melamine and Their Enhanced Solar-Light-Driven Photocatalytic Performance

Carbon (C) and nitrogen (N) codoped anatase TiO2/amorphous halloysite nanotubes (C+N-TiO2/HNTs) were fabricated using melamine as the C and N source. The samples, prepared with different weight ratios of melamine to TiO2, were investigated by X-ray diffraction (XRD) and UV-vis diffuse reflectance spectrometry. It is shown that the doping amounts of C and N influence the photocatalytic performance of the as-prepared composites. When the weight ratio of melamine/TiO2 is 4.5, the C+N-TiO2/HNTs exhibited the best photocatalytic degradation efficiency of methylene blue (MB) under solar light irradiation. The obtained C+N-TiO2/HNTs were characterized by transmission electron microscopy (TEM), N2 adsorption-desorption isotherms (BET), X-ray photoelectron spectroscopy (XPS), and Fourier transform infrared spectroscopy (FT-IR). The results showed that aggregation was effectively reduced and that TiO2 nanoparticles could be uniformly deposited on the surface of the HNTs, which leads to an increase of their specific surface area. XPS and FT-IR analyses indicated that the TiO2 particles were doped successfully with C and N via Ti-O-N, O-Ti-N, and Ti-O-C linkages. Photocatalytic experiments showed that C+N-TiO2/HNTs had a higher degradation efficiency for MB than TiO2/HNTs. This makes the composite a potential candidate for photocatalytic wastewater treatment.

Introduction
Industrial dyes are one of the main sources of water contamination and are enormously harmful to the ecological environment and human beings [1,2]. Numerous representative methods, including Fenton oxidation, biological treatment, photocatalytic degradation, membrane filtration, and adsorption [3-7], have been employed to remove organic dyes from polluted wastewater. Photocatalysis has been commonly deemed a mature and reliable technique for wastewater treatment. In the past decades, enormous efforts have been devoted to oxide semiconductor photocatalysts with high activities for environmental protection [8-10]. As a promising solar-driven photocatalyst, anatase titania (TiO2) has attracted tremendous attention for water cleaning. However, some drawbacks of anatase TiO2 remain of concern: (a) agglomerates composed of the primary small particles increase the effective size of TiO2; (b) its visible-light response is not satisfactory. Therefore, much effort has been made to improve the visible-light photocatalytic capability of anatase by controlling its microstructure (morphology, size, crystallinity, and facets) and by tuning its band structure near the valence band maximum and conduction band minimum (with element doping, oxygen vacancies, etc.) [11,12]. Among these approaches, doping with impurity elements may be preferable for extending the photocatalytic activity of TiO2 into the visible region, because the doping element states lie near the valence band edge. Several nonmetal codoped TiO2 materials, such as nitrogen/fluorine [9], sulfur/nitrogen [13], and carbon/nitrogen [14,15], mainly based on the nitrogen doping effect, yield higher visible-light responses compared with TiO2 doped with a single element.
As a sort of available aluminosilicate clay, halloysite nanotubes (HNTs) have been intensively investigated for the treatment of dye wastewater [16-18] due to their well-defined hollow tubular structure, with a lumen diameter of ca. 15 nm and an average length of 600-1500 nm [19], which provides a large specific surface area and a complex pore distribution. Furthermore, the clay nanotubes possess the advantages of large surface area, high porosity, and tunable surface chemistry, which enable this nanomaterial to be utilized as an attractive support for the assembly of small metal and metal oxide particles. Thus, depositing TiO2 nanoparticles onto HNTs is a promising method to block their aggregation, and HNTs can be directly used to support the TiO2 nanoparticles because of their surface hydroxyl groups. The combination of TiO2 and HNTs promises to simultaneously provide excellent photocatalytic activity and absorptivity, which could deliver exceptional performance in the photocatalytic degradation of organics.

In this work, TiO2/amorphous halloysite composites were facilely fabricated by a "precipitation-dissolution-recrystallization" route at low temperature, and then C+N codoped TiO2/amorphous halloysite photocatalysts were obtained using melamine as the C+N source at high temperature. The performance of the TiO2/amorphous halloysite composites and the C+N codoped TiO2/amorphous halloysite photocatalysts in the photocatalytic degradation of methylene blue (MB) under solar light is studied.

Preparation of C+N Codoped TiO2/Amorphous Halloysite Composite Catalysts (C+N-TiO2/HNTs). TiO2/amorphous halloysite composites were fabricated by a "precipitation-dissolution-recrystallization" route. The concrete procedure was as follows: titanium tetrachloride (TiCl4) solution (2.3 M, 60 mL) was placed in a four-necked flask, and then NaOH solution (2.5 M, 165 mL) was added dropwise into the TiCl4 solution with vigorous agitation. The reaction temperature was controlled at 10 °C, and a turbid solution was obtained at the end of the reaction. Subsequently, the system temperature was raised to 80 °C, a halloysite dispersion (0.068 g/mL, 320 mL) was added rapidly when the turbid solution began to clarify, and the whole mixture was kept at 80 °C for 4 h. The obtained TiO2/amorphous halloysite dispersion was filtered and washed with deionized water; the as-prepared TiO2/amorphous halloysite composites were then mixed with melamine (MA), and the resulting uniform slurry was placed in a muffle furnace at 550 °C for 4 h, whereupon the yellow C+N-TiO2/HNTs powders were obtained. For comparison, C+N-TiO2 and TiO2/HNTs catalysts were prepared under similar conditions.

Characterizations.
X-ray diffraction (XRD) patterns were recorded on a D/Max 2500 PC X-ray diffractometer (Rigaku Corporation, Japan) with Cu Kα radiation (wavelength 0.15418 nm) over a 2θ range of 5-80°. The N2 adsorption-desorption isotherms and pore distribution (Brunauer-Emmett-Teller, BET, method) were determined with a Micromeritics ASAP2010C surface area and porosimetry system. The morphologies of the as-obtained samples were observed with a JEM-2100 transmission electron microscope (TEM; JEOL Corporation, Japan). X-ray photoelectron spectroscopy (XPS) measurements were carried out on an ESCALAB MKII (VG Corporation, UK) with an Al Kα X-ray source. Fourier transform infrared (FT-IR) spectra were recorded on a Nicolet Avatar 370 (Thermo Corporation, USA) from 4000 to 400 cm⁻¹. The UV-vis absorption spectra were measured in diffuse reflection mode using an integrating sphere (UV2401/2, Shimadzu, Japan) attached to a Shimadzu 2550 UV-vis spectrometer.

Evaluation of Photocatalytic Activity. The photocatalytic activities of the obtained photocatalysts were tested by the degradation of methylene blue (MB) under simulated solar light irradiation in a vessel with a xenon lamp (300 W). Catalyst powder (0.03 g) was dispersed in an aqueous solution of MB (500 mL, 20 mg/L) by ultrasonication and oscillation, and the obtained mixture was sonicated. Prior to irradiation, the dispersions were magnetically stirred in the dark for 30 min. During the MB photodecomposition, samples were withdrawn at regular intervals (10 min) and centrifuged to separate solid particles for analysis. The concentration of aqueous MB was determined using a 722 visible spectrophotometer by measuring its absorbance at 664 nm. The MB degradation was calculated by the Lambert-Beer-based equation (1):

η = (A₀ − A)/A₀ × 100%,  (1)

where A₀ is the initial absorbance of the MB solution (A₀ = 1.525), A is the absorbance of the MB solution after irradiation, and η is the photodegradation yield (a short numerical sketch of this calculation is given after the XRD discussion below). Photoactivities for MB in the dark in the presence of the photocatalyst and under solar light irradiation in the absence of the photocatalyst were also evaluated.

XRD Analysis. To ascertain the structures of the products, XRD patterns of HNTs and TiO2/HNTs with different mass ratios of MA and TiO2 are shown in Figure 1. The XRD reflections of HNTs at 2θ = 12.1°, 19.9°, and 24.8°, corresponding to the (001), (020)/(110), and (002) reflection planes [20], disappear after calcination, suggesting that the crystal structure of the HNTs has been destroyed. The result is in agreement with a previous report [21]. However, the tube-like morphology of the HNTs is still maintained, as the TEM images in Figure 3 show. Besides, it can be seen that the absorption edges of the calcined C+N-TiO2/HNTs samples are red-shifted compared with those of the original TiO2/HNTs, indicating that the doping of C and N modifies the bandgap energy of TiO2. In addition, the red shift of C+N-TiO2/HNTs varies slightly with the increasing dosage of MA, demonstrating that the MA dosage has an important impact on the properties of TiO2. The C+N-TiO2/HNTs samples show the largest red shift when the mass ratio of MA to TiO2 is 4.5, for which the absorption edge is extended to approximately 470 nm, well into the visible region. This result is in accordance with theoretical electronic structure calculations showing that C+N codoped TiO2 presents strong visible-light absorption in the range of 400-600 nm [22]. The observation implies that C+N-TiO2/HNTs with a MA-to-TiO2 mass ratio of 4.5 (C+N-TiO2/HNTs (4.5)) may have preferable photocatalytic performance.
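As a minimal numerical illustration of equation (1), the following sketch (in Python) converts a series of absorbance readings into photodegradation yields. Only A₀ = 1.525 is taken from the text; the remaining absorbance values are hypothetical placeholders standing in for the 10 min sampling interval described above.

# Sketch of the photodegradation-yield calculation from equation (1):
# eta = (A0 - A) / A0 * 100, with A0 the initial absorbance of the MB solution.
# Except for A0 = 1.525 (quoted in the text), the readings are hypothetical.

A0 = 1.525  # initial absorbance of the MB solution at 664 nm

# absorbance after 0, 10, 20, ... min of irradiation (hypothetical values)
readings = [1.525, 1.30, 1.05, 0.80, 0.55, 0.31, 0.12]

for minutes, A in zip(range(0, 10 * len(readings), 10), readings):
    eta = (A0 - A) / A0 * 100.0  # photodegradation yield in percent
    print(f"t = {minutes:3d} min  A = {A:.3f}  eta = {eta:5.1f} %")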
TEM Analysis. The morphologies of HNTs, TiO2, TiO2/HNTs, and C+N-TiO2/HNTs (4.5) are analyzed by TEM, as shown in Figure 3. Pristine halloysite is a cylindrical tube with multilayer walls. Generally, the HNTs form agglomerates of nanotubes with some irregularities in diameter, wall thickness, and morphology [23] (Figure 3(a)). Serious aggregation of the oblate C+N-TiO2 particles obtained under similar conditions can be seen (Figure 3(b)); the particles are ∼100 nm long and ∼30 nm wide. However, TiO2 particles are uniformly deposited on the surface of the HNTs, indicating that aggregation is effectively prevented by introducing HNTs (Figure 3(c)). Consistent with this, the BET analysis shows that the composites have a larger specific surface area, which could be attributed to the presence of more interparticle pores in the composites and a better TiO2 distribution on the HNTs [24], and which may be beneficial for the dark adsorption of dye. These results are in good agreement with those of TEM.

XPS and FT-IR Analysis. The chemical states of the incorporated dopants in the as-prepared materials are determined by XPS (Figure 5). The binding energy (BE) distribution of Ti 2p for calcined TiO2/HNTs shifts to higher binding energy compared with that of pristine TiO2, suggesting the formation of Ti-O-Si bonds (Figure 5(a)). The electronegativity of Ti is less than that of Si, which makes the BE of Ti 2p in Ti-O-Ti lower than that in Ti-O-Si. For C+N-TiO2/HNTs, the lattice incorporation of N generates Ti-N bonds by the partial replacement of O²⁻ with N⁻ (Figure 5(a)). This gives rise to an increase in the electron density on Ti, because the electronegativity of the N atom is smaller than that of the O atom, and partial reduction of Ti⁴⁺ to Ti³⁺ occurs, which manifests as a slight decrease in the Ti 2p binding energy [25]. The N 1s core level of C+N-TiO2/HNTs can be fitted with two peaks, at 398.5 eV and 400.1 eV (Figure 5(b)). The major peak at 398.5 eV is assigned to substitutional N in the form of O-Ti-N [26,27], indicating that part of the O atoms in the TiO2 lattice are substituted by N⁻ anions; this analysis is in agreement with the Ti 2p result. The minor peak is attributed to interstitial N-doping or the formation of Ti-O-N species [28]. Furthermore, the O 1s XPS spectra of calcined TiO2/HNTs and C+N-TiO2/HNTs are analyzed. The O 1s peak of calcined TiO2/HNTs (Figure 5(c)) can be fitted with two components, centered at 530.1 eV and 532.0 eV. The first component is attributed to lattice oxygen in TiO2, while the second can be assigned to oxygen atoms in Si-O/Al-O bonds. For C+N-TiO2/HNTs, the O 1s spectrum can be fitted with three components. The first two, at 530.0 eV and 532.2 eV, are similar to those of calcined TiO2/HNTs. The newly emerged component is centered at 532.8 eV and may be ascribed to O 1s in Ti-O-C (Ti-O=C) or Ti-O-N groups arising from the substitution of carbon for some of the lattice titanium atoms [12]. These findings agree well with the results of the UV-vis analyses and are responsible for the high photocatalytic activity.
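The component fitting described above can be illustrated with a generic least-squares deconvolution of an O 1s spectrum into Gaussian peaks. The sketch below is not the authors' fitting program: the synthetic spectrum, peak widths, and initial guesses are assumptions, and background subtraction (e.g., a Shirley background) is omitted for brevity.

# Sketch of XPS O 1s peak deconvolution by least-squares fitting of Gaussian
# components (a generic illustration, not the authors' fitting program).
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, center, width):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def two_peaks(x, a1, c1, w1, a2, c2, w2):
    # calcined TiO2/HNTs: lattice O in TiO2 (~530.1 eV) + Si-O/Al-O (~532.0 eV)
    return gauss(x, a1, c1, w1) + gauss(x, a2, c2, w2)

# synthetic O 1s spectrum (hypothetical counts; real data would come from XPS)
be = np.linspace(526, 536, 300)                        # binding energy, eV
spectrum = two_peaks(be, 900, 530.1, 0.7, 500, 532.0, 0.9)
spectrum += np.random.default_rng(0).normal(0, 10, be.size)

# initial guesses near the component positions quoted in the text
p0 = [800, 530.0, 0.8, 400, 532.0, 0.8]
popt, _ = curve_fit(two_peaks, be, spectrum, p0=p0)

a1, c1, w1, a2, c2, w2 = popt
print(f"fitted components: {c1:.2f} eV and {c2:.2f} eV")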
In order to provide additional evidence of the C and N codoping, FT-IR spectra were recorded (Figure 6). The broad band at 3424 cm⁻¹ can be ascribed to the stretching vibration of O-H, whereas the peaks at 1634 cm⁻¹ and 1401 cm⁻¹ can be assigned to the bending vibrations of O-H from adsorbed water molecules and of N-H groups [29]. It has been reported that codoping with C and N increases the amount of surface-adsorbed water and hydroxyl groups [30]. Clearly, the intensity of the hydroxyl bands of C+N-TiO2/HNTs is noticeably higher than that of pure TiO2/HNTs, and a new peak at 2050 cm⁻¹ appears for C+N-TiO2/HNTs, which indicates that TiO2 may indeed be codoped with C and N after calcination. The dark adsorption of MB by the two catalysts is shown in the inset of Figure 7(a). The two catalysts exhibit satisfactory photodegradation of MB under solar light irradiation: when MB is exposed for 1 h, approximately 85% and 95% of the MB is removed by TiO2/HNTs and C+N-TiO2/HNTs, respectively, with C+N-TiO2/HNTs displaying the better performance (Figure 7(a)). The better adsorption and photodegradation of MB by C+N-TiO2/HNTs are ascribed to the larger BET surface area and the better doping of TiO2. To evaluate their practical usefulness, the two catalysts were reused five times for photodegradation (Figure 7(b)). After five cycles, the degradation effectiveness decreases rapidly for TiO2/HNTs, whereas photodegradation by C+N-TiO2/HNTs remains steady and more effective under successive solar light irradiation.

Conclusions
C and N codoped anatase TiO2/HNT photocatalysts with a series of mass ratios of melamine to TiO2 have been successfully synthesized. It is found that the photoactivity of C+N-TiO2/HNTs is clearly improved when the mass ratio of melamine to TiO2 is 4.5. The anatase TiO2 nanoparticles can be uniformly deposited on the surface of HNTs, and their particle size decreases compared with the analogous bare TiO2. Consequently, the C+N-TiO2/HNTs exhibit steadier and more effective adsorption and photodegradation than TiO2/HNTs, owing to their larger BET surface area and better doping. The experiments show that C+N-TiO2/HNTs can be employed repeatedly, making them a promising candidate for the treatment of dye wastewater.
Fe-doped chrysotile nanotubes containing siRNAs to silence SPAG5 to treat bladder cancer

Background: For certain human cancers, sperm associated antigen 5 (SPAG5) exerts important functions in their development and progression. However, whether RNA interference (RNAi) targeting SPAG5 has antitumor effects has not been determined clinically. Results: The results indicated that Fe-doped chrysotile nanotubes (FeSiNTs), with a relatively uniform outer diameter (15-25 nm), inner diameter (7-8 nm), and a length of several hundred nanometers, delivered an siRNA against the SPAG5 oncogene (siSPAG5) efficiently. The nanomaterials were designed to prolong the half-life of siSPAG5 in blood, increase tumor cell-specific uptake, and maximize the efficiency of SPAG5 silencing. In vitro, FeSiNTs carrying siSPAG5 inhibited the growth, migration, and invasion of bladder cancer cells. In vivo, the FeSiNTs inhibited growth and metastasis in three models of bladder tumors (a tail vein injection lung metastatic model, an in-situ bladder cancer model, and a subcutaneous model) with no obvious toxicities. Mechanistically, we showed that FeSiNTs/siSPAG5 repressed PI3K/AKT/mTOR signaling, which suppressed the growth and progression of tumor cells. Conclusions: The results highlight that FeSiNTs/siSPAG5 caused no activation of the innate immune response nor any systemic toxicity, indicating the possible therapeutic utility of FeSiNTs/siSPAG5 to deliver siSPAG5 to treat bladder cancer. Supplementary Information: The online version contains supplementary material available at 10.1186/s12951-021-00935-z.

Background
Currently, disrupted regulatory networks and their associated genome aberrations are being investigated by large-scale genomics projects, such as The Cancer Genome Atlas project and the International Cancer Genome Consortium, for their effects in promoting cancer progression [1,2]. A wide range of therapeutic agents have been developed, guided by these large-scale research efforts, that are designed to inhibit cancer-dependent genes and pathways [3-5]. Unfortunately, a large proportion of these therapeutic targets cannot currently be targeted by, or do not respond to, antibodies or small molecule inhibitors [6]. Small interfering RNAs (siRNAs) induce mRNA degradation in a sequence-specific manner and are used widely to ablate the expression of genes with important functions in cancer cell survival and progression. For instance, siRNA silencing of p53 [7], BCL2 (encoding BCL2 apoptosis regulator) [8], and SPAG5 (encoding sperm associated antigen 5) [9] can regulate the growth and progression of human bladder cancer and other cancer cells. However, in the clinical setting, the use of siRNAs as therapeutics presents several challenges [10]: naked siRNAs administered into the human or animal blood stream are degraded by blood-borne nucleases, and the negative charge on siRNAs prevents them from crossing the plasma membranes of target cells to achieve gene knockdown. Therefore, an effective siRNA carrier must be developed to establish siRNA therapy. Natural aluminosilicate minerals with nanosized structures (e.g., halloysite, kaolinite, and montmorillonite) have been engineered to deliver siRNAs efficiently because of their stable chemical composition, micro-morphological structures, and unique physicochemical properties.
Precise control over the main physicochemical features, such as size, shape, and charge, is essential for natural aluminosilicate minerals used in the field of biomedical engineering. Chrysotile (Mg3Si2O5(OH)4) is a 1:1 clay mineral with a hollow tubular morphology, consisting of an octahedral brucite-like Mg(OH)2 sheet on the outer surface and a tetrahedral SiO4 sheet on the inner surface. Although long-term asbestos exposure can cause chronic pleural diseases, pulmonary fibrosis, and lung cancers, these effects depend strongly on the fiber length-to-diameter ratio and possibly on the metal content; synthetic hydrosilicate chrysotile with a length of several hundred nanometers possesses little toxic potential compared with natural chrysotile fibers longer than 10 μm. According to the previous literature, lung cancer and pulmonary fibrosis are associated with long-term exposure to asbestos fibers longer than ~15 µm and thicker than ~0.1 µm, whereas mesotheliomas and pleural plaques are associated with long-term exposure to fibers longer than ~4-5 µm and thinner than ~0.1 µm. Geoinspired synthetic chrysotile nanotubes have been prepared by hydrothermal synthesis with the composition (Mg, Fe, Co, Ni)3Si2O5(OH)4, achieving specific optical, electronic, and magnetic properties. Previous studies suggest that natural minerals can be synthesized and tailored for cancer therapeutic applications [11-13]. Consequently, the present study aimed to use Fe-doped chrysotile nanotubes (FeSiNTs) to deliver an anti-cancer siRNA targeting the SPAG5 oncogene. SPAG5 is located at 17q11, a frequently amplified region, and encodes a protein involved in mitotic spindle assembly. Recently, SPAG5 has been suggested to be a novel oncogene in various cancers, and studies indicate that it is involved in tumorigenesis and cancer progression. The expression of SPAG5 is increased in various tumor tissues, such as breast cancer, prostate cancer, lung cancer, hepatocellular carcinoma, gastric cancer, and cervical cancer, and upregulated SPAG5 is associated with poor prognosis in cancer patients [14-18]. Accordingly, our previous data showed that SPAG5 upregulation can be detected frequently in primary bladder cancer tissues and that high SPAG5 expression is a novel independent prognostic marker for patient survival [9]. SPAG5 is associated with tumorigenesis, apoptosis, and the tumor cell cycle in vitro and in vivo [19-21], and SPAG5 knockdown has marked anti-tumor effects in a number of human malignancies [19-21]. In this study, we created geoinspired synthetic FeSiNTs, whose special nanostructure combines Fe cations occupying octahedral sites with a tubular morphology, and which can be used for the encapsulation, sustained release, and intracellular delivery of siRNAs. We used these siRNA-delivering nanoparticles to treat bladder cancer. FeSiNTs could efficiently and safely deliver siSPAG5 into bladder cancer cells, where it escaped from endosomes into the cytosol. We showed that siSPAG5 delivered by FeSiNTs nanoparticles could effectively inhibit the migration, invasion, and proliferation of bladder cancer cells.

Synthesis and characterization of FeSiNTs
The fixed size, morphology, and chemical composition of chrysotile ensure that the toxicity evaluation is objective, fair, and accurate. FeSiNTs were prepared with an Fe content of up to 1.37 wt.% under different hydrothermal conditions.
The extent of Fe doping of chrysotile nanotubes synthesized by the hydrothermal method reported in the literature ranges from 0.29 to 1.37% [22,23]. The TEM images showed that the FeSiNTs had a hollow tubular morphology similar to those reported previously [24], with a relatively uniform outer diameter (7-8 nm), inner diameter (3-5 nm), and a length of several hundred nanometers (Fig. 1A; Additional file 1: Figure S1, Additional file 2: Figure S2 and Additional file 3: Figure S3A). Fe was homogeneously distributed in the layer structure, as shown by the energy-dispersive spectroscopy (EDS) maps (Fig. 1B), indicating that Fe was substituted into octahedral sites at an appropriate mass ratio. Comparison of the XRD patterns (Additional file 3: Figure S3B) showed peaks corresponding to magnesite (JCPDS Card No. 08-0479) and lizardite (JCPDS Card No. 50-1625). The N2 adsorption isotherms (Additional file 3: Figure S3C) showed the features expected for microporous systems with some mesoporosity, with a Barrett-Joyner-Halenda (BJH) pore diameter of about 7-8 nm, a Brunauer-Emmett-Teller (BET) specific surface area ranging from 80 to 150 m²/g, and a pore volume of approximately 0.45 mL/g. The isoelectric point was approximately 3.5 (Additional file 3: Figure S3D), and the zeta potentials ranged from 45.7 mV (pH 2.0) to −8.42 mV (pH 10.0). Peak fitting of the XPS spectra (Fig. 1C) showed that the binding energy (BE) values of O 1s, Mg 1s, and Si 2p in silanol (Si-OH) and silicon-oxygen (Si-O) were located at their characteristic positions.

FeSiNTs siRNA-binding efficiency and cytotoxicity
The RNA-binding ability of the FeSiNTs complexes was tested using agarose gels. FeSiNTs bound siRNAs in a dose-dependent manner (Additional file 4: Figure S4A), as determined by the level of un-complexed (free) siRNA in the agarose gel. At mass ratios of 30:1 or higher in the mixture of FeSiNTs with siRNA, the siRNA band was almost absent compared with the free siRNA lane, which implied that the FeSiNTs complexes had a high binding ability for siRNA. FeSiNTs cytotoxicity was assessed using the Cell Counting Kit-8 (CCK-8) assay, using FeSiNTs delivering the NC-siRNA in T24 cells at different concentrations. No obvious toxicity was observed even at the highest concentration of FeSiNTs complexes (200 μg/mL), a level 20 times higher than the concentration required for effective transfection (Additional file 4: Figure S4B; Additional file 5: Figure S5). By contrast, Lipofectamine 3000 demonstrated some cytotoxicity at the effective transfection dose. The lack of cytotoxicity suggested that FeSiNTs could be suitable as an siRNA transfer vehicle.

Release analysis of FeSiNTs/siRNA in medium and serum
The release of FeSiNTs/siRNA in medium and serum was tested by gel electrophoresis. As shown in Additional file 6: Figure S6, free siRNA was unstable in medium and serum: the free siRNA bands disappeared after incubation in medium for 16 h and in serum for 8 h. However, when FeSiNTs/siRNA was incubated with serum and medium for 32 h, the integrity of the FeSiNTs/siRNA band was still evident (Additional file 6: Figure S6). To determine how the formulation's composition influenced siRNA internalization, FeSiNTs/FAM-siRNAs at various concentrations were incubated with T24 cells for 4 h, and flow cytometry was then performed after quenching the extracellular fluorescence.
The percentage of fluorescent cells (containing FeSiNTs/FAM-siRNA complexes) increased in an FeSiNTs/FAM-siRNA dose-dependent manner (Fig. 2B). Moreover, at an FeSiNTs/FAM-siRNA dose of 150 nM, the cells showed approximately the same transfection efficiency as with Lipofectamine 3000/FAM-siRNA complexes, and the transfection efficiency changed little at siRNA doses up to 200 nM. Thus, the optimal dose of siRNA delivered by FeSiNTs transfection of T24 cells was determined to be 150 nM. Importantly, at a dose giving a transfection efficiency similar to that of Lipofectamine 3000, no obvious cytotoxicity of FeSiNTs was observed.

Cellular uptake and endosomal escape of FeSiNTs-siRNA
To achieve efficient gene silencing with siRNAs, high levels of cellular uptake and successful release of the siRNA into the cytoplasm are required [25]. Thus, T24 cells incubated with FeSiNTs/FAM-siRNA for various times were examined for the intracellular locations of the nanoparticles, with nuclei counterstained with 4′,6-diamidino-2-phenylindole (DAPI). Initially, the FAM-siRNA fluorescence was distributed in a punctate manner in the cytoplasm and at the nuclear periphery; at 4 and 8 h, the green fluorescence was significantly increased (Fig. 2C). To confirm the lysosomal escape of FeSiNTs-siRNA, T24 cells were stained with LysoTracker and examined by confocal fluorescence microscopy (Additional file 7: Figure S7). After incubation for 1 h, the FeSiNTs-siRNA (green) and LysoTracker (red) fluorescence were co-localized in the transfected cells, demonstrating the lysosomal location of the FeSiNTs-siRNA. At 4 h, the green fluorescence (FeSiNTs-siRNA) had separated from the red fluorescence (LysoTracker; Additional file 7: Figure S7), indicating lysosomal escape of the FeSiNTs-siRNA into the cytoplasm. We were therefore confident that the siRNA escaped from the lysosomes as FeSiNTs-siRNA into the cytoplasm, where it could perform siRNA-mediated gene silencing.

SPAG5 knockdown by FeSiNTs-siRNA complexes in bladder cancer cells
Next, we investigated the SPAG5 gene silencing effect of the FeSiNTs-siRNA complexes in T24 cells. Treatment with FeSiNTs-siRNA, at an siRNA equivalent of 150 nM, resulted in a 95% reduction of SPAG5 mRNA expression compared with the untreated control (Fig. 3A). Correspondingly, in T24 cells treated with FeSiNTs-siRNA, the level of SPAG5 protein was markedly reduced compared with the untreated control (Fig. 3A). Thus, FeSiNTs-siRNA silenced SPAG5 expression in a sequence-specific manner. The FeSiNTs protected the siRNAs from degradation and promoted their uptake by cells, and the FeSiNTs-siRNA escaped from lysosomes to exert siRNA-mediated gene silencing.

FeSiNTs/siSPAG5 regulates the cell cycle, apoptosis, colony formation, and proliferation of bladder cancer cells
We tested the effect of FeSiNTs/siSPAG5 complex-induced SPAG5 silencing on T24 cell growth using the CCK-8 and EdU assays. T24 cell proliferation gradually decreased after treatment with FeSiNTs/siSPAG5 compared with cells treated with PBS, FeSiNTs, or FeSiNTs/siNC (Fig. 3B; Additional file 8: Figure S8). Among the four treatments, the colony-forming ability of the FeSiNTs/siSPAG5-treated group was the lowest (Additional file 9: Figure S9).
Thus, silencing of SPAG5 via treatment with FeSiNTs/siSPAG5 specifically reduced T24 cell proliferation in vitro. To confirm that apoptosis was involved in the response of T24 cells to FeSiNTs/siSPAG5, the cells were analyzed using flow cytometry with annexin V and PI staining. At 48 h after FeSiNTs/siSPAG5 transfection, the rate of cell apoptosis (annexin-V+/PI− and annexin-V+/PI+) was higher than in cells treated with PBS, FeSiNTs, or FeSiNTs/siNC: the inhibition of SPAG5 by FeSiNTs/siSPAG5 increased the proportion of apoptotic cells significantly (32.91%) compared with the other treatments (Additional file 10: Figure S10). Flow cytometry assessment of the T24 cell cycle under the various treatments was also performed. At 2 days post-transfection, the proportion of S-phase cells was reduced in FeSiNTs/siSPAG5-treated T24 cells compared with the controls (Additional file 11: Figure S11). At the same time, the proliferation index (PI) values for the PBS-, FeSiNTs-, FeSiNTs/siNC-, and FeSiNTs/siSPAG5-treated cells were 0.52, 0.51, 0.50, and 0.39, respectively (Additional file 11: Figure S11). Collectively, the results suggested that FeSiNTs/siSPAG5-mediated SPAG5 silencing had a marked anti-tumor effect in vitro, decreasing proliferation and inducing cancer cell apoptosis and cell cycle arrest.

FeSiNTs/siSPAG5 regulates migration and invasion of bladder cancer cells
To further evaluate the effect of FeSiNTs/siSPAG5-mediated silencing of SPAG5 on T24 cell behavior, we performed a wound-healing assay in transfected T24 cells. The migration of FeSiNTs/siSPAG5-transfected T24 cells was significantly reduced in a time-dependent manner, whereas the controls showed no such reduction (Fig. 3C). Next, Matrigel invasion assays were performed. T24 cells transfected with FeSiNTs/siSPAG5 showed reduced invasive behavior compared with the control groups (Fig. 3D). Thus, FeSiNTs/siSPAG5-mediated SPAG5 silencing exerted its anti-tumor effect by reducing the migration and invasiveness of bladder cancer cells in vitro.

The biodistribution of FeSiNTs/FAM-siSPAG5 in mice when delivered via intratumoral injection
Systemic delivery of siRNAs is associated with many adverse effects, which can be reduced by localized delivery (i.e., direct intratumoral delivery) of the siRNA. Local administration via direct intratumoral injection of siRNA complexes has been achieved in mouse xenograft models; for example, polyethyleneimine (PEI)-siRNAs inhibited glioblastoma xenograft tumor growth after intratumoral injection [26]. Intravesical instillation of chemotherapeutic drugs (e.g., Bacillus Calmette-Guérin, pirarubicin, and gemcitabine) is widely accepted as a treatment strategy to prevent the postsurgical recurrence of superficial bladder cancer, as it avoids the serious adverse events associated with systemic administration. Therefore, we investigated whether direct intratumoral injection into a mouse bladder cancer model would result in even siRNA distribution throughout the tumor. FeSiNTs-siSPAG5 was injected intratumorally into mouse subcutaneous xenografts, and in vivo imaging was used to measure the distribution of FAM-siSPAG5, with FAM-siSPAG5, FeSiNTs, and PBS as controls. At 0.5 h after injection, a stronger fluorescence intensity of FeSiNTs/FAM-siSPAG5 was observed in the tumor tissue compared with tissue injected with FAM-siSPAG5 (Fig. 4A).
In addition, in the FeSiNTs/FAM-siSPAG5-treated group, the fluorescence distribution area was significantly larger than that for the naked FAM-siSPAG5 treatment, demonstrating the excellent tumor tissue penetration of FeSiNTs carrying FAM-siSPAG5 and its subsequent distribution over a large tumor area. At 16 h post-injection, the tumor site still showed FeSiNTs/FAM-siSPAG5-related fluorescence, whereas tumors injected with naked FAM-siSPAG5, FeSiNTs, or PBS showed no visible fluorescence. We speculated that the packaging of FAM-siSPAG5 into FeSiNTs inhibited nonspecific protein adsorption and aggregation of the nanoparticles in tumor tissues; therefore, compared with naked FAM-siSPAG5, FeSiNTs/FAM-siSPAG5 accumulated at the tumor site for a longer period.

FeSiNTs/FAM-siSPAG5 biodistribution in mice via tail vein injection
We next determined whether FeSiNTs nanoparticle-coated siSPAG5 could reach the tumor site through the blood stream and whether it was stable in circulation. PBS, free FAM-siSPAG5, FeSiNTs, or FeSiNTs/FAM-siSPAG5 were injected into the tail veins of mice. The biodistribution of the complexes was determined using the Xenogen IVIS Lumina system at 0.5, 4, 8, and 16 h after injection (Fig. 4B-E). The fluorescence intensities in bladder tumor tissues collected from the FeSiNTs/FAM-siSPAG5-treated mice were significantly higher than those from the FAM-siSPAG5-treated mice after intravenous injection (Fig. 4B-E). This result suggested that, in the absence of the nanoparticle coating, the siRNA was cleared rapidly in vivo via the blood stream. The data implied that FeSiNTs/FAM-siSPAG5 were stable in circulation and were delivered efficiently to tumors after tail vein injection.

Antitumour efficiency
Next, we used three bladder tumor models to investigate the gene knockdown effects and antitumor activities of the FeSiNTs-siSPAG5 complexes.

Gene knockdown and antitumor effect in the subcutaneous model
Subcutaneous tumors were formed by injecting T24 cells into the right flank of Balb/c nude mice. FeSiNTs-siSPAG5 complexes, at an siRNA dose of 20 μg per injection, were injected intratumorally once a week for a total of five injections, and the various controls were injected into separate groups of mice (Fig. 5A). Injection of FeSiNTs-siSPAG5 resulted in the most significant tumor suppression, whereas injection of free siSPAG5 caused limited tumor inhibition (Fig. 5E). Compared with the PBS control, the tumor volume of mice treated with FeSiNTs-siSPAG5 was reduced by 69%, versus a 19% reduction after free siRNA injection (Fig. 5B). By contrast, tumor growth was not affected by injection of FeSiNTs alone (Fig. 5B, E). Among all the groups, the FeSiNTs-siSPAG5 group had the smallest tumor sizes and the lowest tumor weights (Fig. 5B, C). The body weights of the mice were monitored every 4 days, revealing no gross toxicity in any group (Fig. 5D). The mice in the FeSiNTs-siSPAG5 group displayed the longest survival according to Kaplan-Meier survival analysis (Fig. 5F). We next evaluated the depletion of SPAG5 protein in the tumors induced by the FeSiNTs-siSPAG5 complexes. FeSiNTs-siSPAG5 showed a greater inhibitory effect on SPAG5 protein levels than the PBS control (Additional file 12: Figure S12). Moreover, immunohistochemistry (IHC) analysis showed that only the FeSiNTs-siSPAG5-treated xenografts had decreased SPAG5 protein levels (Additional file 13: Figure S13).
Subsequently, to determine whether depletion of SPAG5 inhibited tumor cell proliferation, the levels of the proliferation-related protein Ki67 were determined using IHC: injection of FeSiNTs-siSPAG5 decreased the Ki67 levels (Additional file 13: Figure S13). A TUNEL assay was then used to determine tumor cell apoptosis, and significant numbers of apoptotic cells were observed in tumors injected with FeSiNTs/siSPAG5 (Additional file 13: Figure S13). Thus, SPAG5 silencing by FeSiNTs-siSPAG5 resulted from increased cellular uptake of the FeSiNTs-siSPAG5 complexes compared with the free siRNA, and the nanoparticle-delivered siSPAG5 caused RNA interference (RNAi)-mediated gene silencing [27].

Antitumour effect in the in-situ bladder cancer model
One of the main therapies used to treat bladder cancer is urinary bladder instillation chemotherapy. Therefore, we further investigated the clinical significance of the FeSiNTs-siSPAG5 complexes by determining whether they have a tumor-suppressive effect in an in-situ model of bladder cancer. The number of bladder lesions per rat differed among the treatment groups (Additional file 14: Table S1), and significant differences in the histopathological changes were observed among the FeSiNTs-siSPAG5, FeSiNTs, and free siSPAG5 treatment groups (P < 0.05). Furthermore, treatment with FeSiNTs-siSPAG5 resulted in a lower tumor stage in most of the bladder cancers examined (stage pTa/T1), in contrast with the other groups, in which most tumors were at a stage higher than pT2. This suggested that the FeSiNTs-siSPAG5 treatment had a markedly better therapeutic effect than PBS treatment, a conclusion confirmed by histological examination of the excised bladders and H&E staining of tissue slices (Fig. 6).

Antitumour effect in the tail vein injection lung metastasis model
Having demonstrated highly efficient accumulation of intact FeSiNTs-siSPAG5 in tumors, we used the tail vein injection lung metastasis model to evaluate the therapeutic efficacy of FeSiNTs-siSPAG5. Pulmonary metastatic nodules were enumerated in the excised lungs from each group of mice. In contrast to the PBS and other control groups, almost no macroscopic tumor metastases were seen in the lungs of the FeSiNTs-siSPAG5 group (Fig. 7A, B). The number of pulmonary metastatic nodules was reduced by 81.8% in the FeSiNTs-siSPAG5 group relative to the PBS group (Fig. 7C). Notably, the degree of metastasis inhibition in the FeSiNTs-siSPAG5 group was significantly larger than in the other groups (P < 0.01). The rapid proliferation and growth of tumor cells can promote tumor metastasis and invasion [28]; thus, the substantial antitumor effect of FeSiNTs-siSPAG5 probably accounts for its demonstrable anti-metastasis efficacy. Taken together, the in vivo findings showed that the FeSiNTs-siSPAG5 complexes displayed markedly enhanced antitumor efficiency and a substantial antimetastatic effect compared with the other treatments.

In vivo toxicity of FeSiNTs-siSPAG5
We next investigated the in vivo toxicity of the FeSiNTs-siSPAG5 complexes. H&E staining of kidney, lung, spleen, liver, and heart sections revealed no obvious histological differences between the FeSiNTs-siSPAG5-treated and control-treated mice (Additional file 15: Figure S14). These results indicated that treatment with the FeSiNTs-siSPAG5 complexes caused minimal organ toxicity.
In the FeSiNTs-siSPAG5 group, the kidney and liver functional parameters were within the normal range (Additional file 16: Table S2). By contrast, in the free siRNA group, the alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels were relatively high (Additional file 16: Table S2), suggesting that the free siRNA caused hepatic dysfunction [29]. In addition, treatment with FeSiNTs-siSPAG5 at therapeutic doses elicited no obvious immune responses, as indicated by the near-control levels of cytokines such as interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) in the FeSiNTs-siSPAG5 group (Additional file 17: Figure S15). By contrast, in the free siRNA group, increased production of IL-6 and TNF-α was observed (Additional file 17: Figure S15) [30]. Taken together, the results demonstrated the high in vivo safety and low toxicity of the FeSiNTs-siSPAG5 complexes.

Expression of PI3K/AKT/mTOR signaling pathway members in tumor tissues
Next, we performed a preliminary analysis of the downstream signaling pathway by which SPAG5 regulates bladder cancer cell growth and progression. SPAG5 acts as a prognostic indicator in hepatocellular carcinoma and as an oncogene, mediated by the PI3K/AKT pathway [21]. Consistently, during taxol treatment of cervical cancer, SPAG5 was observed to regulate mTOR activity [18]. In addition, in response to oxidative stress, high SPAG5 production is associated with mTOR signaling, which protects cells from apoptosis [31]. Previously, we noted that SPAG5 is involved in the AKT/mTOR pathway in bladder cancer [9]. Consequently, after the animals were sacrificed, the tumors were examined using western blotting and IHC. Consistent with the reduced SPAG5 protein levels induced by FeSiNTs-siSPAG5, western blotting showed significantly decreased expression of PI3K/AKT/mTOR signaling pathway members in the tumor mass (Additional file 12: Figure S12). However, the PI3K, AKT, and mTOR protein levels did not change significantly after treatment with FeSiNTs or free siSPAG5 compared with the PBS control. IHC staining of subcutaneous tumor tissue was consistent with the western blotting results (Additional file 18: Figure S16). Thus, SPAG5 might regulate bladder cancer proliferation and progression via the downstream PI3K/AKT/mTOR signaling pathway.

Discussion
The present study reports the synthesis of an FeSiNTs/SPAG5 siRNA nanoformulation and demonstrates that this carrier system is highly effective at delivering siRNAs into tumor cells. In vivo, it targeted the proto-oncogenic SPAG5 gene and effectively inhibited the progression and growth of bladder cancer by reducing SPAG5 levels (Fig. 8). In addition, our data suggest that SPAG5 has an important function in the growth and progression of bladder tumors and that its inhibition by FeSiNTs/SPAG5 siRNA nanotherapeutics could be used successfully to target SPAG5 or other genes in vivo. FeSiNTs-based oligonucleotide therapeutics could form the basis for developing other targeted therapies. The application of nanoparticle-delivered siRNAs has significant advantages over traditional genetic approaches, such as the capacity to alter the expression levels of several genes without recourse to animal breeding, flexible experimental design, and translational potential. Although siRNA therapies have shown promise, significant barriers to their delivery prevent their clinical translation.
The FeSiNTs nanoparticles developed in the present study were designed to incorporate tumor-specific accumulation and to address the main challenges of siRNA delivery: protection of the siRNA, residence time in circulation, cell uptake, site-specific biodistribution, and escape from the endosomal pathway. SPAG5 promotes cell growth and progression in culture; we therefore hypothesized that silencing SPAG5 would affect the survival of cancer cells [9,16,19-21], and we demonstrated that FeSiNTs-siSPAG5, which integrates elemental iron with siSPAG5, could increase the efficacy of bladder cancer treatment both in vitro and in vivo. FeSiNTs-siSPAG5 suppressed tumors effectively as follows. The Fe-doped chrysotile nanotubes have a hollow tubular morphology, similar to chrysotile nanotubes consisting of a tetrahedral SiO4 sheet on the inner surface and an octahedral brucite-like Mg(OH)2 sheet on the outer surface. The negatively charged siSPAG5 could be effectively anchored on the positively charged external surface through electrostatic interaction, but it was difficult to package into the negatively charged internal lumen because of surface tension and the repulsion of like charges; as reported in the literature, a vacuum impregnation process can be used to increase encapsulation within Fe-doped chrysotile nanotubes. In summary, the Fe-doped chrysotile nanotubes are mixed-charge nanoparticles, with a positively charged external surface and a negatively charged internal surface, that could passively transport siSPAG5 into bladder cancer cells with no cytotoxicity towards normal cells. In addition, the siRNA and its nanocarriers were cleared by phagocytes initially and then accumulated in the kidneys and liver before being cleared from the body. Moreover, siSPAG5 induced markedly increased in vivo antitumor activity, as evidenced by the attenuation of PI3K/AKT/mTOR signaling. The PI3K/AKT/mTOR pathway is a major oncogenic driver that exerts vital functions in cancer growth, survival, and progression, and is frequently activated during carcinogenesis. Our findings suggest that the components of FeSiNTs-siSPAG5 each have specific functions, resulting in multiple regulatory mechanisms that act at different stages of delivery in the treatment of bladder cancer. Conventional nontargeted small-RNA therapeutics cause side effects. To minimize these side effects, we tested the hypothesis that selective delivery to specific cells could be effected by the material's physicochemical properties alone. The developed FeSiNTs nanoparticles could transfect bladder cancer cells efficiently in vitro and in vivo, but had no effect on matched normal cells. Immune activation induced by siRNAs causes a transient increase in cytokines, whose levels rise, peak, and decline at various times from 1 to 24 h after the first siRNA treatment [32]. Other nanomaterials delivering siRNAs have been observed to increase IL-6 and IFN-α levels at 6 h after injection [33], whereas in the present study these cytokines showed no significant change after FeSiNTs nanoparticle treatment. This strategy is thus suitable for the discovery of siRNA carriers specific for cancer cells and is feasible for increasing the on-target activity of the siRNA.

Conclusions
In summary, we have developed FeSiNTs nanoparticles for targeted siRNA delivery to treat bladder cancer cells that overexpress SPAG5.
These nanoparticles can efficiently avoid endosomal degradation and deliver the therapeutic siRNA into the cytoplasm rapidly, resulting in significantly inhibited SPAG5 expression and reduced bladder cancer progression and growth. The developed nanoparticles could potentially be used as an siRNA delivery method for robust bladder-directed therapy. Moreover, FeSiNTs-siSPAG5 was well tolerated, causing no marked dose-dependent or treatment-related toxicity. The significance of the present study lies in its demonstration of selective delivery of siSPAG5 to bladder tumors. Our previous study also showed that > 50% of cases of bladder cancer express SPAG5 [9], suggesting the potential therapeutic utility of targeting SPAG5. Our findings support the use of siRNAs as a therapy for patients with bladder cancer.

Preparation of FeSiNTs-siRNA and characterization
A gel mixture of SiO2, 4MgCO3·Mg(OH)2·5H2O, and FeCl2·4H2O with a (Mg + Fe)/Si molar ratio of 1.50 was used to synthesize the Fe-doped chrysotile nanotubes. The SiO2, 4MgCO3·Mg(OH)2·5H2O, and FeCl2·4H2O concentrations were 10 mM, 14.8 mM to 13.9 mM, and 0.148 mM to 1.0 mM, respectively. Aqueous NaOH (0.4 M) was used to increase the pH of the mixture to 12-14. Hydrothermal treatment was applied at 210-220 °C, the reaction was allowed to proceed for 1 to 3 days, and the reaction products were then dried at 150 °C for 2 h. Fourier transform infrared spectroscopy (FTIR, Nexus-670), X-ray diffraction (XRD, DX-2700), and transmission electron microscopy (TEM, JEOL JEM-200CX) were used to characterize the microstructure of the composite material. siRNA loading: thirty microliters of a 3 mg/mL FeSiNTs aqueous solution was mixed with the siRNA solution (12 μL; 1 OD unit of siRNA dissolved in 125 μL of DEPC water), subjected to vacuum impregnation for 3 h, and then centrifuged at 1000 rpm for 5 min. Gel electrophoresis of the supernatant was used to evaluate the siRNA loading efficiency.

Gel electrophoresis
The FeSiNTs NPs and siRNA were mixed at mass ratios of 1:1, 10:1, 20:1, 30:1, 50:1, and 100:1 and incubated for 30 min at room temperature. Centrifugation for 5 min at 5000 rpm was used to remove insoluble particles. The free siRNA and the complexes in the supernatant were subjected to electrophoresis through a 3% agarose gel containing ethidium bromide (EtBr, 0.1 μg/mL), and a UV transilluminator was used to visualize the bands.

Release analysis of FeSiNTs/siRNA in medium and serum
FeSiNTs/siRNA and free siRNA were incubated in medium and serum. Centrifugation for 5 min at 5000 rpm was used to remove insoluble particles. At designated time points, samples were stored at − 20 °C until all samples had been collected; samples were then thawed on ice for electrophoresis.

In vitro siRNA delivery and FeSiNTs/FAM-siSPAG5 distribution
In three 24-well plates, T24 cells were seeded at 5 × 10⁴ cells/well and incubated overnight at 37 °C and 5% CO2 until they reached 60-80% confluence. The cells were then transfected using the various predetermined formulations. At 6 h after transfection, FAM-siSPAG5 uptake by the cells was assessed using an inverted fluorescence microscope (Olympus IX71, Tokyo, Japan). T24 cells (5 × 10⁴) were placed into a 35 mm glass-bottom culture dish (MatTek Corporation, Ashland, MA, USA) and cultured at 37 °C in 5% CO2 for 24 h, after which the culture medium was replaced by 500 μL of serum-free DMEM containing FeSiNTs/FAM-siSPAG5 complexes at a 30:1 mass ratio.
At 6 h after transfection, the cells were washed, fixed, and stained with DAPI. Images were obtained using an Olympus FluoView confocal laser scanning microscope (CLSM) and analyzed using the FV10-ASW viewer software (Olympus).

siRNA lysosomal escape
To determine the lysosomal escape of the siRNAs, T24 cells were seeded in a confocal dish at 1 × 10⁵ cells per well for 24 h. Thereafter, LysoTracker Red (75 nM; Invitrogen, Waltham, MA, USA) was used to stain the cells for 1 h. The stained cells were then cultured with free FAM-siRNA (150 nM FAM-siRNA equivalent) or FeSiNTs-siRNA for 1 h at 37 °C. The treated cells were rinsed and cultured in fresh medium for a further 1-6 h. At predetermined time points, aliquots of cells were removed, fixed using 4% PFA, and observed under a confocal microscope. The LysoTracker Red and FAM-siRNA excitation wavelengths were 550 and 480 nm, respectively.

Quantitative real-time reverse transcription PCR and western blotting
The TRIzol reagent (Invitrogen) was used to extract total cellular RNA, which was converted to cDNA and subjected to quantitative real-time PCR (qPCR) using SPAG5-specific primers and the 7900 Fast RT-PCR system (Applied Biosystems, Foster City, CA, USA) to assess the mRNA expression of SPAG5. The endogenous control gene ACTB (encoding beta actin) was used to normalize the expression data (a sketch of one standard normalization calculation appears below, after the apoptosis assay description).

Cell proliferation analysis
Cell viability was tested with an MTT kit (Sigma) according to the manufacturer's instructions. For the colony formation assay, a defined number of transfected cells was placed in each well of 6-well plates and maintained in the appropriate medium containing 10% FBS for 2 weeks, during which the medium was replaced every 4 days. Colonies were then fixed with methanol and stained with 0.1% crystal violet (Sigma) in PBS for 15 min, and colony formation was determined by counting the number of stained colonies. EdU experiments were performed using an EdU Cell Proliferation Assay Kit (Cat. C10310-1, Ruibo, Guangzhou, China) according to the manufacturer's instructions.

Cell cycle determination
Flow cytometry with propidium iodide (PI) staining was used for cell cycle analysis. Briefly, at 72 h after transfection, T24 cells were pelleted and washed by centrifugation in ice-cold PBS at 125×g for 5 min, and then fixed overnight at − 20 °C in 75% ethanol. The cells were then treated with RNase for 30 min at 37 °C prior to PI staining (Bestbio, Shanghai, China) in the dark at 4 °C for 60 min. Flow cytometry (Beckman Coulter, Fullerton, CA, USA) was used to determine the cell cycle distribution, following the manufacturer's protocol.

Assay for apoptosis
A PI kit and an Annexin-V-fluorescein isothiocyanate (FITC) kit (BD Biosciences, San Jose, CA, USA) were used following the manufacturer's instructions. Pre-chilled PBS was used to wash the cells (2-3 × 10⁵) twice; the cells were then suspended in binding buffer (100 µL) with 5 μL of FITC-conjugated annexin-V and incubated in the dark at room temperature for 30 min. Thereafter, PI (100 μL) was added and incubation continued for 5 min before the addition of binding buffer (400 µL). A CANTO™ II flow cytometer (BD Biosciences) was then used to examine the cells, and the data were analyzed with FlowJo software (Becton Dickinson, Franklin Lakes, NJ, USA).
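The normalization method is not named in the text; assuming the widely used Livak 2^-ΔΔCt approach with ACTB as the endogenous control, relative SPAG5 expression could be computed as in the following sketch. All Ct values shown are hypothetical.

# Sketch of relative SPAG5 quantification by the Livak 2^-ΔΔCt method,
# normalized to the ACTB endogenous control. The method is an assumption
# (the text does not name it), and all Ct values below are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # ΔCt of treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt of untreated control
    dd_ct = d_ct_sample - d_ct_control            # ΔΔCt
    return 2.0 ** (-dd_ct)                        # fold change vs control

# hypothetical Ct values: FeSiNTs-siSPAG5-treated vs untreated T24 cells
fold = relative_expression(ct_target=28.6, ct_ref=17.2,
                           ct_target_ctrl=24.1, ct_ref_ctrl=17.0)
print(f"SPAG5 expression relative to control: {fold:.2f}")
# a value near 0.05 would correspond to the ~95% knockdown reported above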
In vitro invasion and migration assays
Wound-healing, Transwell migration, and Transwell invasion assays were performed to assess the migration and invasion abilities of cells in vitro, according to previously published methods [34]. The Transwell invasion assay was carried out similarly to the migration assay, except for the addition of a Matrigel (BD Biosciences) coating. The filters with cells were incubated for 48 h at 37 °C and then removed. The adherent cells on the lower surface were fixed and stained using crystal violet (0.1%; Beyotime). Five randomly selected fields in each well were photographed, and the invaded or migrated cells were enumerated under an inverted microscope (Olympus) at a magnification of 200×. These experiments were carried out in triplicate.

In vivo fluorescence imaging
To generate the xenograft tumor model, T24 cells (0.1 mL of a suspension of 1 × 10⁷ cells) were injected into the right flank of male BALB/c nude mice. The mice were divided randomly into four groups when the tumor volume reached 200 to 300 mm³. For the in vivo imaging study, 400 μL of FeSiNTs/FAM-siSPAG5, an equivalent amount of free FAM-siSPAG5, FeSiNTs only, or PBS was injected intratumorally. Then, at 0.5, 4, 8, and 16 h post-injection, the Xenogen IVIS Lumina system (Caliper Life Sciences, Hopkinton, MA, USA) was used to scan the mice, with an exposure time of 1 s per image. Living Image software (Bio-Real Sciences, Salzburg, Austria) was then used to analyze the images. Mice were sacrificed at 0.5, 4, 8, and 16 h post-intravenous tail vein injection to assess the in vivo distribution of the complexes. Tissues including the heart, liver, lung, kidney, spleen, and tumor were removed and examined using the Xenogen IVIS Lumina system with Living Image software.

Antitumour effects in vivo
To investigate SPAG5 gene silencing and the antitumor effects of the FeSiNTs/siSPAG5 complexes, we constructed three bladder tumor models: a subcutaneous tumor model, an in-situ bladder cancer model, and a tail vein injection lung metastatic model.

Tumor suppression effect of FeSiNTs/siSPAG5 in a model of subcutaneous bladder cancer
To study the tumor-suppressive effects of FeSiNTs/siSPAG5, a subcutaneous bladder cancer model comprising BALB/c nude mice bearing T24 tumors was constructed as described previously. When the tumor volume reached about 100 mm³, the mice were placed randomly into four treatment groups of five mice each. The groups were injected subcutaneously with FeSiNTs/siSPAG5 (20 μg of siSPAG5 per injection, 30:1 mass ratio), an equivalent amount of unloaded FeSiNTs, siSPAG5, or PBS solution, respectively. Injections were repeated once per week for 5 weeks. Meanwhile, every 4 d, the short and long axial lengths of the tumors were measured and the mice were weighed (tumor volumes were estimated from these caliper readings; a volume-calculation sketch follows this subsection). Kaplan-Meier survival curves were constructed to evaluate mouse survival. The tumor xenografts were excised, weighed, and snap-frozen for cryosectioning or formalin-fixed for paraffin sectioning. Hematoxylin and eosin (H&E) staining of tumor histological sections was performed. A TdT-mediated dUTP nick-end labeling (TUNEL) assay (Colorimetric TUNEL Apoptosis Assay Kit; Beyotime, Haimen, China) of primary tumor sections was used to assess tumor apoptosis following the supplier's protocol. A Dako Real EnVision Kit (K5007, Dako) was used to perform IHC staining, with primary antibodies to detect protein expression.
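The text does not state the volume formula used for the caliper measurements; a common choice is the ellipsoid approximation V = (long axis × short axis²)/2, sketched below with hypothetical readings.

# Sketch of tumor volume estimation from caliper measurements using the
# common ellipsoid approximation V = (L * W^2) / 2. The formula choice is an
# assumption (the text does not state one), and the readings are hypothetical.

def tumor_volume_mm3(long_axis_mm, short_axis_mm):
    return long_axis_mm * short_axis_mm ** 2 / 2.0

# hypothetical caliper readings (long axis, short axis) in mm, every 4 days
readings = [(6.2, 5.0), (7.5, 5.8), (9.1, 6.6), (10.8, 7.4)]

for i, (long_ax, short_ax) in enumerate(readings):
    v = tumor_volume_mm3(long_ax, short_ax)
    print(f"day {4 * i:2d}: {v:7.1f} mm^3")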
Tumor suppression effect of FeSiNTs/siSPAG5 instillation in an in-situ bladder cancer model

Female SD rats were anesthetized using ether inhalation, and their bladders were infused with 0.2 mL of N-methyl-N-nitrosourea (MNU; 10 mg/mL; Sigma, St. Louis, MO, USA) via a 22-gauge angiocatheter once every 14 days, for a total of five instillations. After catheterization, the rats remained anesthetized for approximately 45 min to avoid spontaneous micturition [35,36]. After successful tumor induction (approximately 16 weeks), 60 rats were assigned to 4 groups of 15 rats. Then, the rats were anesthetized, and 500 μL of FeSiNTs/siSPAG5, siSPAG5 only, an equal dose of free FeSiNTs, or PBS was instilled into the rats' bladders. The rats stayed sedated for approximately 45 min after instillation to minimize spontaneous micturition. The treatments were repeated once per week for 5 weeks. At 48 h after therapy termination, the rats were killed humanely. Their bladders were excised, fixed in 4% paraformaldehyde for 24 h, paraffin-embedded, and subjected to histopathological examination. At the midportion of the bladder, transverse sections were cut, followed by H&E staining.

Tumor suppression effect of FeSiNTs/siSPAG5 instillation in a tail vein injection lung metastatic model

T24 cells (2.5 × 10^6 cells in 100 µL of cold PBS) were injected into the tail veins of BALB/c nude mice. At 4 weeks after injection, siRNA therapy was initiated. The mice were divided randomly into four groups of 5 and treated with 0.3 mg/kg of FeSiNTs/siSPAG5, an equivalent dose of siSPAG5 only, an equivalent dose of free FeSiNTs, or PBS.
Stochastic inflation on the brane

Chaotic inflation on the brane is considered in the context of stochastic inflation. It is found that there is a regime in which eternal inflation on the brane takes place. The corresponding probability distributions are found in certain cases. The stationary probability distribution over a comoving volume and the creation probability of a de Sitter braneworld yield the same exponential behaviour. Finally, nonperturbative effects are briefly discussed.

Introduction

The global structure of many four-dimensional inflationary universes is very rich [1,2]. Self-reproduction of inflationary domains leads to a universe consisting of many large domains. In each of these domains there can be different realizations of the inflationary scenario. This property of standard inflation alleviates the problem of fine tuning in inflation. The process of self-reproduction of inflationary domains leads globally to eternal inflation; namely, there is at least one inflating region in the universe. Self-reproduction of inflationary domains can be understood as a branching diffusion process in the space of field values of the inflaton. The probability distributions found in this way give the probabilities of finding a certain field value at a certain point in space-time. There is an interesting connection between the stochastic approach to inflation and quantum cosmology. The probability of finding the universe in a state characterized by certain parameters, assuming the Hartle-Hawking no-boundary condition, yields the same exponential behaviour as the stationary probability distribution in a comoving volume derived from stochastic inflation. This is the case for standard chaotic inflation. As it turns out, this also holds for chaotic inflation on the brane. The idea that the universe is confined to a brane embedded in a higher dimensional bulk space-time has received much attention in recent years. In the one-brane Randall-Sundrum scenario, the brane is embedded in a five-dimensional bulk space-time with negative cosmological constant Λ_5 [3,4]. Chaotic inflation on the brane in this setting has been investigated in [5,6,7]. The form of the spectrum of perturbations is modified due to the modified dynamics at high energies. Therefore, it seems interesting to investigate stochastic inflation on the brane.

Inflation on the brane

In a braneworld scenario, 4D Einstein gravity is recovered on the brane with some corrections at high energies. Furthermore, there are corrections due to the gravitational interaction with the bulk space-time. Assuming a fine tuning between the brane tension and the bulk cosmological constant Λ_5 leads to the vanishing of the 4D cosmological constant on the brane. Neglecting contributions from the dark radiation term, this leads to the Friedmann equation on the brane [5,6], collected in the sketch below, where ρ is the energy density on the brane, λ the (positive) brane tension and M_4 the four-dimensional Planck mass. The latter is related to the five-dimensional Planck mass M_5 as also given below. Assuming that matter on the brane is dominated by a scalar field φ, confined to the brane, with potential V(φ), its equation of motion is the Klein-Gordon equation φ̈ + 3Hφ̇ + V′(φ) = 0. In the slow-roll approximation, 3Hφ̇ ≃ −V′(φ). For λ → ∞ the usual Friedmann equation is recovered. For V ≫ λ brane effects dominate. Inflation takes place if the Hubble parameter satisfies |Ḣ| < H².
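The display equations of this section did not survive extraction. For reference, the forms commonly cited in the braneworld-inflation literature the paper relies on ([5,6]) are collected below. The labels (2.4) and (2.6) follow the paper's own numbering; the explicit form of (2.6) is our reconstruction, chosen so that its low- and high-energy limits match the conditions φ̇² = V and 5φ̇² = 2V quoted later in the text.

```latex
% Brane Friedmann equation (2.4) and the relation between Planck masses:
\[
  H^2 = \frac{8\pi}{3M_4^2}\,\rho\left(1 + \frac{\rho}{2\lambda}\right),
  \qquad
  M_4 = \sqrt{\frac{3}{4\pi}}\,\frac{M_5^3}{\sqrt{\lambda}} .
\]
% Scalar-field dynamics on the brane and the slow-roll limit:
\[
  \ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0,
  \qquad
  3H\dot{\phi} \simeq -V'(\phi).
\]
% Condition for inflation (2.6), reconstructed so that the first two terms give
% \dot{\phi}^2 = V at low energies and the last two give 5\dot{\phi}^2 = 2V at high energies:
\[
  \dot{\phi}^2 - V
  + \frac{\rho}{2\lambda}\left(\tfrac{5}{2}\dot{\phi}^2 - V\right) < 0,
  \qquad
  \rho = \tfrac{1}{2}\dot{\phi}^2 + V .
\]
```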
In a braneworld with matter given by a scalar field, the condition for inflation yields [5] equation (2.6), whose low- and high-energy limits are used below. In the following, the potential of the inflaton will be taken to be of the chaotic form V(φ) = m²φ²/2. Furthermore, for convenience, everything will be expressed in four-dimensional Planck units, hence M_4 ≡ 1.

Eternal inflation on the brane

The evolution of a scalar field in an inflationary universe is determined by two contributions. On the one hand, there is the classical rolling of the scalar field down its potential. On the other hand, there are quantum fluctuations of the inflaton which become classical outside the horizon. This latter contribution can have either positive or negative sign. The classical rolling down is given by Δφ ≃ φ̇Δt, where in the slow-roll approximation φ̇ ≃ −V′/(3H). The amplitude of a quantum fluctuation is given by δφ = H/(2π). In a typical time interval H⁻¹, e³ new domains appear, each containing an almost homogeneous field φ − Δφ + δφ [2]. There is a critical value φ_s such that for all φ ≥ φ_s quantum fluctuations dominate over the classical evolution towards smaller field values. This is the regime of self-reproduction of inflationary domains. φ_s is determined by [8] the balance δφ = Δφ per Hubble time, i.e. H³(φ_s) = (2π/3) V′(φ_s). For field values φ ≥ φ_s there will be domains in which quantum jumps lead to an increase in the field value of the inflaton. In a small percentage of domains this will lead to the maximal field value at which inflation takes place. The upper boundary φ_5D in this braneworld model is determined by the five-dimensional Planck boundary. For energies higher than V(φ_5D) = M_5⁴ the scalar field becomes deconfined and flows off the brane into the bulk [6]. As in the four-dimensional case, inflation will stop at this boundary. Here one might argue that inflation on the brane stops since the scalar field is flowing off the brane and thus four-dimensional inflation can no longer be sustained on the brane. Thus the upper boundary is given by φ_5D = √2 M_5²/m.

In the low energy regime, V ≪ λ, the Friedmann equation on the brane reduces to the standard one, H² ≃ (8π/3)V. It will be assumed that the whole period of inflation takes place in the low energy regime. The end of inflation φ_e is determined by the first two terms in (2.6), φ̇² = V, which yields φ_e = 1/√(6π). Furthermore, φ_e < φ_5D implies λ > (12π)^(−3/2)(3/4π)m³. The lower boundary for self-reproduction is given by φ_s = (1/2)(3/π)^(1/4) m^(−1/2). The requirement V/2λ < 1 at φ_5D implies λ > 2π²/9. Eternal inflation takes place if φ_s < φ_5D, which yields the condition λ > (1/8)(3/4π)^(7/4) m^(3/2). In the standard inflationary scenario, observations require that m ≃ 10^13 GeV = 10^(−6) M_4. Thus for realistic values of m, eternal inflation takes place on the brane in the low energy regime as long as λ > 2π²/9. However, note that inflation as well as self-reproduction then takes place at field values larger than the four-dimensional Planck scale.

In the high energy limit, V ≫ λ, the quadratic term in the Friedmann equation dominates, H² ≃ (4π/3λ)V². Inflation is now assumed to take place only in the high energy regime. The end of inflation is determined by the last two terms in the condition (2.6), 5φ̇² = 2V, implying φ_e = (5/3π)^(1/4) λ^(1/4) m^(−1/2). The lower boundary for eternal inflation is given by φ_s = (2π/3)^(1/5)(3λ/π)^(3/10) m^(−4/5). Out of the two inequalities φ_e < φ_5D and φ_s < φ_5D, it is found that the first one provides the stronger bound on the brane tension λ, namely λ > 8 × 10^(−6) m⁶. An upper bound on λ is found by requiring that V(φ_e)/2λ > 1, implying λ < 3 × 10^(−3) m².
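A quick way to sanity-check these boundaries is to locate φ_s numerically from the defining balance δφ = Δφ, using the full brane Friedmann equation rather than its limiting forms. A minimal sketch follows; the parameter values are illustrative only, and everything is in 4D Planck units as in the text.

```python
import numpy as np
from scipy.optimize import brentq

# 4D Planck units (M_4 = 1); V(phi) = m^2 phi^2 / 2.
m, lam = 1e-6, 3.0        # illustrative values; lam > 2*pi**2/9 ~ 2.19 puts phi_5D in the low-energy regime

V  = lambda p: 0.5 * m**2 * p**2
dV = lambda p: m**2 * p
H  = lambda p: np.sqrt(8.0 * np.pi / 3.0 * V(p) * (1.0 + V(p) / (2.0 * lam)))  # brane Friedmann eq.

# Self-reproduction threshold: quantum jump H/2pi equals classical roll V'/(3H^2) per Hubble time.
excess = lambda p: H(p) / (2.0 * np.pi) - dV(p) / (3.0 * H(p)**2)
phi_s = brentq(excess, 1e-3, 1e9)

M5     = (4.0 * np.pi * lam / 3.0)**(1.0 / 6.0)  # from lam = (3/4pi) M5^6
phi_5d = np.sqrt(2.0) * M5**2 / m                # from V(phi_5D) = M5^4
print(f"phi_s = {phi_s:.3e}, phi_5D = {phi_5d:.3e}, eternal inflation: {phi_s < phi_5d}")
```

For these values the root agrees with the closed-form low-energy estimate φ_s = (1/2)(3/π)^(1/4) m^(−1/2) ≈ 0.49 m^(−1/2).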
As shown in [5], in the limit of strong brane corrections the COBE normalization of the curvature perturbations puts a bound on m, which can be written in units of M_4 as m ∼ 6 × 10^(−5) λ^(1/6). This implies φ_e ∼ 10² λ^(1/6), φ_s ∼ 3 × 10³ λ^(1/6), and φ_5D ∼ 4 × 10⁴ λ^(1/6). Thus φ_e < φ_s < φ_5D, and eternal inflation takes place for values of m derived from observations. As already pointed out in [5], the 5D Planck boundary φ_5D is below the 4D Planck scale, M_4. Thus eternal inflation takes place at field values below M_4. For a certain range of parameters, eternal inflation takes place both inside the low energy and inside the high energy regime on the brane. In this case domains reproduce themselves. Since it was assumed that the inflationary period is either in the low or in the high energy regime, the dynamics of each domain will be determined by the characteristics of the regime that it originates from. However, it could also happen that a domain starts in the low energy regime and then, through the process of stochastic inflation, field values in successive domains reach such high values that strong brane corrections become important. In this case the Friedmann equation on the brane changes from equation (3.10) to equation (3.11). In order for this to happen, one has to require that eternal inflation takes place in the low energy regime. Furthermore, the 5D Planck boundary has to lie in the high energy regime. This implies that the brane tension has to satisfy λ < 2π²/9. In this case, out of a low energy domain there emerge domains with the low energy characteristics as well as domains with the high energy dynamics. This picture is similar to the model proposed in [9], where the space-time dimension can change locally in chaotic eternal inflation. Here, on the brane, there are regions in which the dynamics are determined by the high energy Friedmann equation (3.11) and there are domains in which the Friedmann equation is the low energy one (3.10).

Stochastic description

The stochastic nature of the competition between the classical rolling down and the quantum perturbations is captured by a Fokker-Planck equation [10]. The (classical) field φ performs a Brownian motion described by a Langevin equation [1,2], where ξ(t) describes the white noise due to the quantum fluctuations, which causes the Brownian motion of the classical field φ. The probability distribution P_c(φ, t) determines the probability to find a given value of the field φ at a given time at a given point. This is the probability distribution over a comoving coordinate volume, i.e. over a physical volume at some initial moment of inflation. P_c(φ, t) is determined by the Fokker-Planck equation [2] reproduced in the sketch below. The parameter β encodes an ambiguity in the derivation of this equation for systems in which the diffusion coefficient depends on φ: β = 1 corresponds to the Itô version of stochastic analysis and β = 1/2 to the Stratonovich version [10]. An exact stationary solution, for which ∂P_c/∂t = 0, can be found for the Hubble parameter given by equation (2.4); this is equation (4.14). In the low energy regime, V ≪ λ, it reduces to the well-known four-dimensional expression [2]. In the high energy regime, V ≫ λ, the stationary distribution P_c(φ) approaches a corresponding limiting form. It is interesting to compare the expression for the stationary probability distribution (4.14) with the probability for creation of a braneworld from nothing. This is described by the de Sitter brane instanton [11].
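The Langevin and Fokker-Planck equations referenced above were lost in extraction. In the stochastic-inflation literature the paper cites ([1,2,10]), they are usually written in the β-parametrized form below; we reproduce these standard forms rather than the paper's exact displays.

```latex
% Langevin equation for the coarse-grained field, with unit-normalized white noise:
\[
  \dot{\phi} = -\frac{V'(\phi)}{3H(\phi)} + \frac{H^{3/2}(\phi)}{2\pi}\,\xi(t),
  \qquad
  \langle \xi(t)\,\xi(t') \rangle = \delta(t - t').
\]
% Fokker-Planck equation for the comoving distribution P_c; beta = 1 is Ito, beta = 1/2 Stratonovich:
\[
  \frac{\partial P_c}{\partial t}
  = \frac{\partial}{\partial\phi}\left[
      \frac{H^{3(1-\beta)}}{8\pi^2}\,
      \frac{\partial}{\partial\phi}\!\left( H^{3\beta} P_c \right)
      + \frac{V'}{3H}\,P_c
    \right].
\]
```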
The probability for creation of a universe in the Hartle-Hawking no-boundary proposal [12] is given by P ∼ exp(−S_E), with S_E the Euclidean action. In the case of the creation of a braneworld containing just one de Sitter brane in an AdS bulk, the Euclidean action, in the notation used here, is given by [11] equation (4.17). Using the expression for the Hubble parameter on the brane (2.4), the nucleation probability of a de Sitter braneworld is given by equation (4.18). Comparing the exponentials in the stationary probability distribution P_c (4.14) and in the probability distribution P (4.18), it is found that they are exactly the same. Therefore the same coincidence between these two probability distributions appears as in standard four-dimensional inflation [2]. P_c(φ, t) is the probability distribution in a comoving volume, neglecting the expansion of the universe. The probability distribution P_p(φ, t) in a proper volume takes into account that during a small time interval dt the total number of points associated with the field φ is additionally increased by a factor 3H(φ)dt. This leads to the volume-weighted equation (4.19) [8,1], which differs from the comoving equation by the additional term 3H(φ)P_p. In order to solve this equation it is convenient to make the ansatz [2]

P_p(φ, t) = Σ_{s=1}^{∞} e^{λ_s t} π_s(φ) ∼ e^{λ_1 t} π_1(φ) for t → ∞. (4.20)

For large times t → ∞ only the largest eigenvalue is kept. In the high energy regime, brane effects are dominant and the deviations from standard 4D inflation are the largest. Therefore, in the following, the probability distribution P_p will be discussed for an inflationary period entirely in the high energy regime. Thus the Hubble parameter is given by (3.11). Together with this and (4.20), equation (4.19) yields an eigenvalue equation for π_1(φ), where β = 1/2 is chosen. The boundary conditions on P_p are equivalent to those in the standard four-dimensional case [2]. There is no diffusion below the end of inflation, which implies ∂/∂φ[H^{3/2}(φ)P_p]|_{φ_e} = 0, and inflation stops at the (5D) Planck boundary, thus P_p(φ_5D) = 0. This imposes the corresponding conditions on π_1(φ). Numerical solutions have been plotted in figure 1. There, the probability distribution shows a maximum in all cases. For larger values of m at constant brane tension λ, the maximum is shifted towards smaller field values φ. For larger values of λ at constant m, it is shifted towards higher field values φ, concentrated very close to the 5D Planck boundary. The eigenvalue λ_1 can be related to the fractal dimension of the universe, d_fr, defined as d_fr = λ_1/H_max [2,13], where H_max is the maximal Hubble parameter at the five-dimensional Planck boundary. The fractal dimension is motivated by the observation that at the Planck boundary the total volume of inflationary domains does not grow as e³ during a time interval H_max⁻¹ but only as exp(λ_1/H_max). Some domains will reach energies beyond the Planck scale and thus drop out of the total volume. On the brane in the high energy regime, the maximal Hubble parameter at the 5D Planck boundary is given by H_max = (4π/3)M_5. For scalar field values φ > φ_s self-reproduction takes place. This phenomenon is related to the quantum jumps that cause the field values to increase. However, as pointed out in [8], these quantum jumps could also coherently add up to produce a larger than usual jump down the potential. This might lead to nonperturbative amplification of inhomogeneities. Domains in which quantum jumps lead to an increase of the field value are pushed up to the five-dimensional Planck boundary. At this point the Hubble parameter reaches its maximum, H_max.
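The eigenvalue problem behind figure 1 can be reproduced with a crude finite-difference discretization. The sketch below assumes the β = 1/2 volume-weighted equation, the high-energy Hubble parameter H ∝ φ², the boundary conditions quoted above, and illustrative values of m and λ (not the paper's); the treatment of the reflecting boundary is deliberately simple.

```python
import numpy as np

# Illustrative parameters in 4D Planck units (not taken from the paper).
m, lam = 5e-4, 3e-10

dV = lambda p: m**2 * p
H  = lambda p: np.sqrt(np.pi / (3.0 * lam)) * m**2 * p**2          # high-energy regime (3.11)

phi_e  = (5.0 / (3.0 * np.pi))**0.25 * lam**0.25 / np.sqrt(m)      # end of inflation
phi_5d = np.sqrt(2.0) * (4.0 * np.pi * lam / 3.0)**(1.0 / 3.0) / m # 5D Planck boundary

N = 400
phi = np.linspace(phi_e, phi_5d, N)
h = phi[1] - phi[0]
f, g, drift = H(phi)**1.5 / (8.0 * np.pi**2), H(phi)**1.5, dV(phi) / (3.0 * H(phi))

def apply_L(pi_vec):
    """Apply L[pi] = d/dphi[f d/dphi(g pi) + drift pi] + 3H pi with a reflecting
    lower boundary (ghost value w[-1] = w[1]) and one-sided extrapolation at the top."""
    w = g * pi_vec
    wm = np.concatenate(([w[1]], w[:-1]))
    wp = np.concatenate((w[1:], [2 * w[-1] - w[-2]]))
    J = f * (wp - wm) / (2.0 * h) + drift * pi_vec
    Jm = np.concatenate(([J[1]], J[:-1]))
    Jp = np.concatenate((J[1:], [2 * J[-1] - J[-2]]))
    return (Jp - Jm) / (2.0 * h) + 3.0 * H(phi) * pi_vec

# Build the operator matrix column by column; dropping the last node imposes pi(phi_5D) = 0.
A = np.column_stack([apply_L(e) for e in np.eye(N)])[:-1, :-1]
lam1 = np.linalg.eigvals(A).real.max()       # largest eigenvalue dominates P_p ~ e^{lam1 t}
print(f"lambda_1 = {lam1:.4e}, d_fr = {lam1 / H(phi_5d):.3f}")     # d_fr = lambda_1 / H_max
```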
Thus these domains give the greatest contribution to the volume of the universe. The domains will stay as long as possible at the Planck boundary and then "rush down" the potential with an amplitude larger than H/2π [8]. Following [8], the extra time Δt̃ spent at the Planck boundary can be estimated by Δt̃(φ) = δφ̃/φ̇, where δφ̃ is the amplified amplitude of the jump down the potential, δφ̃ = n(φ)H(φ)/(2π), with n(φ) an amplification factor. In the regime where brane effects are dominant, V ≫ λ, this leads to a corresponding expression for Δt̃. The volume is increased by a factor exp(d_fr H_max Δt̃). Hence the volume-weighted probability (4.20) acquires an extra factor [8], where it is assumed that an amplification of the standard jump is suppressed by a factor exp[−n²(φ)/2]. Maximizing the resulting probability with respect to n(φ) gives the optimal amplification factor. Expressing this in terms of the ratio of the amplitudes of scalar to tensor perturbations, A_S/A_T [14], shows the following. In the standard four-dimensional case, the dependence on the inflaton field φ in the amplifying factor n(φ) can be expressed entirely in terms of the ratio of the amplitudes of the scalar to tensor perturbations. As it turns out, in the high energy regime of chaotic inflation on the brane this is no longer the case: there is an additional amplifying factor V/λ. Since A_S/A_T is an observable quantity, this means that the amplitudes of the jumps down the potential are amplified with respect to the case of standard four-dimensional inflation. Domains which jump down with these amplified amplitudes end up as regions with smaller energy density compared to the background. In a braneworld these wells, or infloids [8], are deeper than in standard four-dimensional inflation.

Conclusions

The stochastic approach to standard 4D inflation and its variations opened the way to a rich global structure of an inflating universe. Here the stochastic approach to inflation has been applied to a braneworld model, namely to chaotic inflation on the brane. It has been shown that eternal inflation takes place for a certain range of parameters and, in particular, for those satisfying observational bounds. The competition between the evolution towards smaller field values due to classical dynamics and the evolution towards either even smaller or higher field values can be described as a Brownian motion. There exists a well-defined procedure to obtain a Fokker-Planck equation determining the probability distribution to find a certain value of the scalar field at a given point in space-time. Furthermore, there are two types of probability distributions: firstly, the probability distribution P_c in a given comoving volume; secondly, the probability distribution P_p in a given physical volume, which takes into account the expansion of the universe. In standard 4D chaotic inflation, apart from some pre-factors, the dominant behaviour is determined by an exponential function, which is exactly the square of the Hartle-Hawking no-boundary wavefunction of the universe. In the braneworld scenario discussed here, a similar result was found. Comparing the expression for P_c found in the stochastic approach to chaotic inflation on the brane with the de Sitter brane instanton for a one-brane system as calculated in [11], apart from some prefactors, the same exponential function was found in the two cases. The probability distribution in a given physical volume, P_p, was calculated numerically in the high energy regime where brane effects dominate.
The results are similar to the ones in standard 4D inflation, with the distribution concentrated near the 5D Planck boundary. Finally, the process of a scalar field close to the 5D Planck boundary rolling down with amplitudes larger than the usual H/2π, due to quantum fluctuations, was briefly discussed. It was found that the amplification factor is enhanced by a factor V/λ in the high energy regime on the brane. Thus the infloids, or wells in the energy distribution, are deeper than in standard four-dimensional inflation.

Acknowledgments

It is a pleasure to thank J. Garriga and M.A. Vázquez-Mozo for enlightening discussions. I would like to thank the University of Geneva for hospitality, where part of this work was done. This work has been supported in part by Spanish Science Ministry Grant FPA 2002-02037.
Fast Obstacle Avoidance Motion in Small Quadcopter Operation in a Cluttered Environment

The autonomous operation of small quadcopters moving at high speed in an unknown cluttered environment is a challenging task. Current works in the literature formulate it as a Sense-And-Avoid (SAA) problem and address it by either developing new sensing capabilities or small form-factor processors. However, SAA with high-speed operation remains an open problem. The significant complexity arises due to the computational latency, which is critical for fast-moving quadcopters. In this paper, a novel Fast Obstacle Avoidance Motion (FOAM) algorithm is proposed to perform SAA operations. FOAM is a low-latency perception-based algorithm that uses multi-sensor fusion of a monocular camera and a 2-D LIDAR. A 2-D probabilistic occupancy map of the sensing region is generated to estimate a free space for avoiding obstacles. Also, a local planner is used to navigate the high-speed quadcopter towards a given target location while avoiding obstacles. The performance of FOAM is evaluated in simulated environments in Gazebo and AirSim. Real-time implementation of the same has been presented in outdoor environments using a custom-designed quadcopter operating at a speed of 4.5 m/s. The FOAM algorithm is implemented on a low-cost computing device to demonstrate its efficacy. The results indicate that FOAM enables a small quadcopter to operate at high speed in a cluttered environment efficiently.

Chaitanyavishnu S. Gadde, Mohitvishnu S. Gadde, Nishant Mohanty, and Suresh Sundaram are with the Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India (e-mail: n140993@rguktn.ac.in, (mohitvishnug, nishantm, vssuresh)@iisc.ac.in).

I. INTRODUCTION

Recent advancements in Unmanned Aerial Vehicle (UAV) technologies have accelerated the deployment of UAVs for various applications like visual inspection [1], reconnaissance missions [2], and agriculture [3]. The current increase in humans' reliance on UAVs is due to their mechanical simplicity and ease of control. Therefore, there is a need to develop robust automation algorithms to enhance the capabilities of a UAV to operate in cluttered environments. In the literature, the problem of obstacle avoidance has been extensively studied. Studies like [4] have focused on coordinated flights to prevent UAVs from crashing into each other. Similarly, [5] uses proportional navigation for the same purpose. Many works in UAV navigation, like [6], use sensor data streamed to the ground station for processing, thus leading to large latency issues. Also, in these works a known map of the environment is used; in an uncertain environment this will not yield successful results due to changes in the map. Therefore, a sensor-based algorithm enables UAVs to act dynamically based on their observed environment. In the literature on sensor-based obstacle avoidance, multi-sensor fusion techniques have been used to enhance the UAV's capabilities. For example, in [7] one radar, four cameras, and two onboard computers have been used. However, this is not a viable option for a low-cost, fast-moving drone operating in a cluttered environment. To overcome this problem, many algorithms using complex local [8]-[11] and global [12] planners have been designed for obstacle avoidance. However, these algorithms have not shown promising results because their planning modules were constrained by the sensor's field of vision.
Recently, a state-of-the-art approach for UAV control with aggressive maneuvers was presented by [13], which could be used for real-time applications. However, such methods require accurate instantaneous state feedback, which can only be delivered by motion capture systems, and they perform well in indoor environments only. Thus, there is a need to develop a cheap multi-sensor approach to efficiently use the available sensor data for instantaneous decision-making and control in outdoor environments. Recently in the literature, the fusion of laser scanners and RGB-D cameras has been used for obtaining accurate estimates of the environment. In [14], RGB-D sensors are used to obtain point-cloud data of the environment, but these sensors are restricted to a very short range in outdoor conditions. Accurate estimation of the obstacles can be obtained either by using stereo cameras [15] or monocular cameras by creating depth maps using various Simultaneous Localization and Mapping (SLAM) algorithms [16]. However, these approaches are computationally intensive for onboard computers and yield lower control frequencies in position and velocity feedback. In the absence of data covering the entire environment, learning-based motion planners have commonly been used. Approaches for learning motion policies and depth from input data were demonstrated by [17], which used the orthogonal structure of indoor scenes to estimate the vanishing point and navigate the MAV in corridors by maneuvering towards dominant vanishing points. Other works [18]-[20] have applied various learning techniques, like imitation learning, to navigate a UAV. In [21], autonomous flight is achieved over a stretch of 3 km, at a speed of 1.5 m/s, in a cluttered forest environment using the imitation learning method. However, in most cases, due to high computational complexity, the UAV streams data to an external computer and then receives higher-level control commands. Hence, there is still a need to develop a cost-effective, less computationally intensive obstacle avoidance algorithm for high-speed UAVs in a cluttered environment.

In this paper, we propose a novel Fast Obstacle Avoidance Motion (FOAM) algorithm for sense-and-avoid operations in a cluttered environment. A low-cost monocular camera is used to capture images, which are then processed to obtain a probabilistic occupancy map (POM). A 2-D lidar is also used to generate a POM in the horizontal plane, which is fused with that of the camera images. The resultant POM is used to determine the free space by formulating an optimization problem. A local planner then uses this directional information to generate high-level commands like the desired yaw and linear velocity. This enables the quadcopter to traverse the least occupied path to the desired goal. The performance of the proposed algorithm is evaluated in both simulated and real environments. First, Gazebo simulations were carried out to verify the performance of the algorithm in a cluttered bugtrap environment. Later, simulations in the AirSim simulator were conducted in a forest environment to realize near outdoor-like conditions. Finally, the proposed algorithm was tested on a custom-built quadcopter that uses a Pixhawk autopilot in an outdoor environment. The results clearly indicate that FOAM is computationally inexpensive and can handle obstacles at a speed of 4.5 m/s efficiently.
The remainder of the paper is organised as follows: Section II presents the formulation of the problem, followed by the description of the proposed algorithm. Section III describes the simulations carried out along with the experimental setup, followed by the conclusion.

II. FAST OBSTACLE AVOIDANCE MOTION (FOAM) ALGORITHM

In this section, the problem definition of sense-and-avoid using a high-speed quadcopter is provided. The following subsection describes the generation of the probabilistic occupancy map (POM) based on the monocular camera, followed by the POM based on the 2-D lidar. Later, the fusion of the occupancy maps is described along with information about the flight control. Finally, the FOAM algorithm is described, after which a formal description of the same is provided.

A. Problem Definition

In this section the sense-and-avoid (SAA) problem is formulated for a quadcopter Q with m obstacles (O_1, O_2, ..., O_m) within a region A ⊂ R². A typical SAA problem with a quadcopter and obstacles is shown in Fig. 1. The initial launch position (X_o ∈ R³) and the goal position (X_g ∈ R³) are user defined and can be anywhere within A. The objective of Q is to move from the initial launch position x_o to the goal position x_g at a high speed v (i.e., ≥ 4 m/s). Due to the presence of obstacles in the path, it has to avoid them to achieve a collision-free maneuver. The mission is deemed a success if Q manages to reach the goal position without any collision. However, the mission is terminated if the quadcopter fails to reach the goal position within t_max, or in case there is a collision. Before the start of the mission, the quadcopter is initialized at x_o with velocity v = 0 and a random heading angle ψ. The quadcopter is also made to maintain a fixed height h along the z axis at all times. During the mission, the kinematic equations for the control of the quadcopter are given as ẋ = v cos(ψ), ẏ = v sin(ψ), where ẋ and ẏ represent the velocity of Q along the Cartesian x and y axes. The quadcopter is equipped with a monocular camera and a 2-D lidar. The 2-D lidar provides point-cloud data within a radius r. Every point in the point cloud is associated with two values, i.e., the range measurement (z_i ≤ r) and the angle in the vehicle's frame of reference. The monocular camera provides visual sensor information within a 90° field of view. The orientation of the camera is kept similar to that of the quadcopter. In addition to the LIDAR and monocular vision, the quadcopter also has a flight controller that provides inertial measurements and position information of the vehicle. Based on the sensing information, the quadcopter is expected to identify the boundaries of obstacles in the sensing region and follow obstacle-free paths. For example, in Fig. 1 the vehicle senses the obstacle (O_1) partially and generates a feasible path to reach the goal point, shown as the yellow-coloured path. In order to ensure that the quadcopter reaches the goal position optimally, an additional constraint on its motion is added: the quadcopter is required to minimize its deviation from the ideal straight-line path joining the initial position and the goal while performing the mission. This is implemented in the local planner by minimizing the distance between the quadcopter's position (x, y) and the line joining (x_o, y_o) and (x_g, y_g), as illustrated in the sketch below.
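To make the problem setup concrete, here is a minimal sketch of the planar kinematic model ẋ = v cos ψ, ẏ = v sin ψ together with the straight-line deviation metric the local planner minimizes. The heading-loop gain, time step, and waypoint values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def cross_track_error(p, start, goal):
    """Perpendicular distance of position p from the straight line joining start and goal."""
    d, r = goal - start, p - start
    return abs(d[0] * r[1] - d[1] * r[0]) / np.linalg.norm(d)

def step(x, y, psi, v, psi_cmd, dt=0.05, k_psi=2.0):
    """One step of the planar kinematic model x' = v cos(psi), y' = v sin(psi),
    with an assumed first-order heading loop toward the commanded yaw."""
    psi += k_psi * np.arctan2(np.sin(psi_cmd - psi), np.cos(psi_cmd - psi)) * dt
    return x + v * np.cos(psi) * dt, y + v * np.sin(psi) * dt, psi

start, goal = np.array([0.0, 0.0]), np.array([60.0, 0.0])
x, y, psi, v = 0.0, 0.0, 0.3, 4.5          # 4.5 m/s forward speed, slight initial heading error
for _ in range(200):
    psi_goal = np.arctan2(goal[1] - y, goal[0] - x)   # point at the goal (no obstacle in this toy run)
    x, y, psi = step(x, y, psi, v, psi_goal)
print(f"final deviation from straight-line path: {cross_track_error(np.array([x, y]), start, goal):.2f} m")
```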
Hence, the final mission of the quadcopter is to reach the target location while avoiding collision and maintaining minimum deviation from the optimal path.

B. Probabilistic Occupancy Map for FOAM

The problem of detecting obstacles from the current frame f at time t is converted to a free-space determination problem. In this case, we use a monocular camera and a 2-D lidar, as shown in Fig. 2. Based on the field of view (FOV) of the monocular camera, the width of the image is divided into M sectors, s_1, s_2, ..., s_M (where M is always an odd number), as shown in Fig. 2a. Similarly, the point-cloud data obtained from the 2-D lidar are used. Even though a 2-D lidar can provide a 360° field of view, the algorithm selects the region of interest matching the camera FOV. Hence, the point-cloud data are also mapped into the same M sectors as the camera, as shown in Fig. 2b. The objective of doing so is to find the sector free from obstacles that has the minimum deviation from the median sector. In order to determine the free space available for the quadcopter to navigate towards the goal, we determine the probability of a sector being occupied by an obstacle. A monocular camera is used to capture RGB images (30 fps). Each RGB image (640 × 480 resolution) is converted to a gray-scale image. From the gray-scale image, robust corners are extracted using the Shi-Tomasi corner detection algorithm [22]. It is shown in the literature that these features are robust to track in an image where the changes are not abrupt. One should note that in nominal wind conditions, the objects present in the cluttered environment do not move abruptly. Hence, the corner features (x_i, i = 1, 2, ..., N) detected will be useful to identify the free space present in the current frame. Further, the optical flow values of this sparse feature set are computed using the Lucas-Kanade method of iterative pyramids [23]. For this purpose, we use the past three frames to track these corner features. Let n_i be the number of features present in the ith sector, v_i the flow for a given feature x_i, and N = Σ_{i=1}^{M} n_i the total number of features with reliable optical flow values in the image frame. Using the flow values of these features, we compute the probabilistic occupancy map (POM) value for the ith sector as given in equation (1), where s_i is the ith sector and the value of ε is close to zero. Note that the flow values for features detected very far away from the camera will be low, and flow values for features near the camera will be high. Hence, the occupancy value of a sector will be small if the obstacle is far away from the quadcopter, and vice versa. The computational requirement of corner features and optical flow is small, and one achieves a 30 frames per second processing speed to detect the free space. Similarly, a long-range 2-D lidar is used for depth estimation of the obstacles in the plane of the quadcopter. In order to estimate the occupancy in the camera plane, the field of view (FOV) of the lidar is restricted to the camera's FOV, as shown in Fig. 2b. The effective depth of the 2-D lidar is restricted to d_max. Based on the resolution of the LIDAR, one will always have a finite number of points in each sector. Note that the depth of points beyond the desired sensing radius is set to the effective depth, which is 10 m in the simulation. Thus, the probabilistic occupancy map value computed from the 2-D point-cloud data for the ith sector is given by equation (2). From that equation, we can conclude that the probability will be zero if the sector does not contain any obstacles. A sketch of both computations follows.
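The exact expressions of Eqs. (1) and (2) did not survive extraction, so the sketch below uses assumed forms with the same qualitative behaviour described in the text: large optical flow (near obstacles) drives the camera occupancy toward 1, close lidar returns drive the lidar occupancy toward 1, and an empty lidar sector gives exactly zero. All thresholds and parameter values are illustrative.

```python
import numpy as np

def camera_pom(flow_mags, sector_ids, M, eps=1e-6):
    """Illustrative per-sector occupancy from sparse optical flow: mean flow magnitude
    in each sector, normalised by the largest sector value. This is an assumed stand-in
    for Eq. (1); eps keeps the normalisation finite when all flows are tiny."""
    p = np.zeros(M)
    for i in range(M):
        mags = flow_mags[sector_ids == i]
        p[i] = mags.mean() if mags.size else 0.0
    return p / (p.max() + eps)

def lidar_pom(ranges, angles, fov=np.pi / 2, M=9, d_max=10.0):
    """Illustrative per-sector occupancy from a 2-D scan restricted to the camera FOV:
    the closest return sets the occupancy, and a sector with no returns inside d_max
    gets exactly zero, matching the behaviour stated for Eq. (2)."""
    p = np.zeros(M)
    keep = np.abs(angles) <= fov / 2
    sector = ((angles[keep] + fov / 2) / fov * M).astype(int).clip(0, M - 1)
    for i in range(M):
        r = ranges[keep][sector == i]
        r = r[r < d_max]                      # points beyond d_max are treated as free space
        p[i] = 1.0 - r.min() / d_max if r.size else 0.0
    return p
```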
The probabilistic occupancy maps obtained from the camera and the LIDAR point-cloud data are fused as

P_i = w_C P_i^C + w_L P_i^L, i = 1, 2, ..., M, (3)

where w_C and w_L are the weights for the camera and LIDAR POMs, respectively, and w_C + w_L = 1.

C. Flight Control

The algorithm is designed to send high-level commands to the quadrotor: (1) forward translational speed, (2) constant height, and (3) rotational velocity (heading/desired yaw). The forward velocity of the complete system is constant throughout the whole mission and can be modified by the user based on the application of the vehicle. Fig. 3 shows the schematic diagram of the flight control used for this algorithm. (Fragments of Algorithm 1 recovered from the extraction: the camera routine calculates P_i^C using Equation (1) and returns P_i^C; the function POM_LIDAR gets the scan L from the lidar, takes D_j ∈ L as the lidar data in sector s_i in frame f_k, and calculates P_i^L using Equation (2); the heading sector is chosen as H ← min |h − (M+1)/2| over free sectors h.) As mentioned, the required high-level commands like the desired yaw and the height of the vehicle are sent to the velocity controller present in the Pixhawk autopilot. These velocity commands are sent to the autopilot via the MAVLink protocol using the MAVROS package. The Pixhawk is connected to the onboard computer via a serial connection. The outer loop uses a PID controller for velocity control. It computes the desired thrust, roll, pitch, and yaw and sends them to the attitude controller. The PID control of the attitude controller then realises the required motor speeds on the quadrotor for the optimal motion. The above-mentioned process is realised completely inside the Pixhawk autopilot of the quadcopter.

D. FOAM Algorithm

The FOAM algorithm is designed to be computationally light. The system architecture is shown in Fig. 4. The figure illustrates the various components of FOAM and their integration into the small quadcopter. First, FOAM generates a POM based on the data from the monocular camera for the xz-plane. Then it generates a POM using the point-cloud data from the 2-D LIDAR for the xy-plane. Next, the free space is determined by formulating an optimization problem that fuses the POMs; solving it determines the free space at time t. Finally, the local planner generates the necessary commands to the Pixhawk flight controller such that the quadcopter avoids the obstacle and moves towards the goal position. The above-mentioned algorithm is summarized in Algorithm 1, and a fusion-and-selection sketch is given below.

III. PERFORMANCE EVALUATION

In order to evaluate the performance of FOAM, different test scenarios are considered. Custom environments were created and tested using the Gazebo simulator with the PX4 flight stack in the back-end for the 3DR IRIS quadcopter. Simulations were also conducted in AirSim to verify the performance of the algorithm. Finally, to evaluate the entire proposed system, experiments were conducted in an outdoor environment using a custom-built quadrotor vehicle that autonomously navigates based on the high-level commands sent from the onboard computer. The videos related to the evaluation can be found at the following link: https://bit.ly/3sFtVyP.

A. PX4 ROS Gazebo Simulations

The proposed algorithm was tested in various environments. A custom Python program was designed to compute and provide the required control commands to the 3DR IRIS quadrotor. The 3DR IRIS model has a monocular camera and 2-D lidar plugins integrated into it. The monocular camera uses a horizontal field of view of 90°, and the 2-D lidar a sensing range of 10 m. A sample Gazebo environment is shown in Fig. 5.
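Equation (3) and the sector-selection rule combine into a few lines. The fusion below follows Eq. (3) directly; the free-space threshold, the fallback to the least-occupied sector, and the example occupancy values are assumptions for illustration.

```python
import numpy as np

def fuse_and_select(p_cam, p_lidar, w_c=0.5, w_l=0.5, free_thresh=0.2):
    """Fuse the two occupancy maps as P_i = w_C * P^C_i + w_L * P^L_i (Eq. (3), w_C + w_L = 1)
    and pick the free sector closest to the median sector, mirroring the
    'H <- min |h - (M+1)/2|' selection recovered from the Algorithm 1 fragments."""
    p = w_c * np.asarray(p_cam) + w_l * np.asarray(p_lidar)
    free = np.flatnonzero(p < free_thresh)
    if free.size == 0:                       # no free sector: fall back to the least occupied one
        return int(np.argmin(p)), p
    mid = (p.size - 1) / 2.0                 # 0-based index of the median sector
    return int(free[np.argmin(np.abs(free - mid))]), p

# Example with M = 9: the central sector is blocked, so the planner yaws to sector 5.
p_cam   = [0.1, 0.1, 0.1, 0.3, 0.9, 0.1, 0.1, 0.1, 0.1]
p_lidar = [0.0, 0.0, 0.0, 0.4, 0.8, 0.1, 0.0, 0.0, 0.0]
best, fused = fuse_and_select(p_cam, p_lidar)
print("chosen sector:", best)   # -> 5
```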
B. AirSim Simulation

Next, the algorithm was tested in AirSim, an open-source simulator, to realize near outdoor-like conditions. The quadcopter model uses a built-in monocular camera along with the 2-D LIDAR for perception. The different instances in Fig. 6 show a typical forest environment in the AirSim simulator. The algorithm was also tested in a custom-designed cluttered environment in AirSim to test its robustness. The simulation was carried out using an Intel i7 8750H 3.0 GHz processor with 16 GB of DDR4 memory and an NVIDIA RTX 2070 Max-Q GPU.

C. Outdoor Experiments

Finally, to evaluate the proposed system at high speeds, a custom-built quadcopter, shown in Fig. 7, was tested outdoors. The quadrotor is built on a carbon fiber frame and houses an off-the-shelf ARM-based computer, the NVIDIA Jetson Nano, for onboard computation of perception, control, and planning. A Pixhawk 4 is the flight controller, which uses the PX4 flight stack. The vehicle has a 35 percent hover throttle on a 5000 mAh 3S lithium polymer (LiPo) battery. The quadcopter is equipped with a Logitech C930e web camera. The camera module offers 1080p resolution, H.264 video compression with scalable video coding and UVC 1.5 encoding to minimize its dependence on computer and network resources, and a wide 90-degree field of view. A Slamtec RPLIDAR S1 is used for depth estimation. It provides a 360-degree scan field and a 5.5 Hz/10 Hz rotating frequency, with 40-m range measurement and more than 9.2K samples per second. A pole was erected in the middle of a field, the quadcopter was placed at the initial launch position, the destination position was given as an input, and the program was run. During the mission, the quadcopter was able to avoid the pole in between while operating at a desired velocity of 4.5 m/s. The height along the z-direction was also maintained at 2 m. An image instance of the same is given in Fig. 9, and Fig. 8 shows the commanded and estimated velocities. The experiment shows that FOAM can perform sense-and-avoid in outdoor environments at a very high speed. It also demonstrates the low latency of the proposed FOAM architecture by running it on a low-cost computing device.

IV. CONCLUSION

In this paper, the novel Fast Obstacle Avoidance Motion (FOAM) algorithm is proposed to perform sense-and-avoid (SAA) operations. FOAM is a low-latency perception-based algorithm that uses multi-sensor fusion of a monocular camera and a 2-D LIDAR. It has been shown to perform obstacle avoidance at a speed of 4.5 m/s in outdoor environments. It has also been shown to perform high-speed maneuvers in cluttered environments in both Gazebo and AirSim.
Progesterone receptor membrane component 1 (PGRMC1) binds and stabilizes cytochromes P450 through a heme-independent mechanism

Progesterone receptor membrane component 1 (PGRMC1) is a heme-binding protein implicated in a wide range of cellular functions. We previously showed that PGRMC1 binds to cytochromes P450 in yeast and mammalian cells and supports their activity. Recently, the paralog PGRMC2 was shown to function as a heme chaperone. The extent of PGRMC1 function in cytochrome P450 biology and whether PGRMC1 is also a heme chaperone are unknown. Here, we examined the function of Pgrmc1 in mouse liver using a knockout model and found that Pgrmc1 binds and stabilizes a broad range of cytochromes P450 in a heme-independent manner. Proteomic and transcriptomic studies demonstrated that Pgrmc1 binds more than 13 cytochromes P450 and supports maintenance of cytochrome P450 protein levels posttranscriptionally. In vitro assays confirmed that Pgrmc1 KO livers exhibit reduced cytochrome P450 activity consistent with reduced enzyme levels. Mechanistic studies in cultured cells demonstrated that PGRMC1 stabilizes cytochromes P450 and that binding and stabilization do not require PGRMC1 binding to heme. Importantly, Pgrmc1-dependent stabilization of cytochromes P450 is physiologically relevant, as Pgrmc1 deletion protected mice from acetaminophen-induced liver injury. Finally, evaluation of Y113F mutant Pgrmc1, which lacks the axial heme iron-coordinating hydroxyl group, revealed that proper iron coordination is not required for heme binding, but is required for binding to ferrochelatase, the final enzyme in heme biosynthesis. PGRMC1 was recently identified as the causative mutation in X-linked isolated pediatric cataract formation. Together, these results demonstrate a heme-independent function for PGRMC1 in cytochrome P450 stability that may underlie clinical phenotypes.
Cytochromes P450 are an essential superfamily of heme-containing monooxygenase enzymes that catalyze biosynthetic reactions, detoxify xenobiotic compounds, and metabolize pharmaceutical drugs (1). These enzymes have a characteristic catalytic cycle that allows for the safe activation of molecular oxygen to react with substrates (1). Cytochromes P450 are found in all kingdoms of life (1). In mammals, the liver is the primary site of expression (1,2). Cytochromes P450 rely on protein-protein interactions between a cytochrome P450 and cytochrome P450 oxidoreductase and, in some cases, cytochrome b5 (CYB5) to transfer electrons to the cytochromes P450 to complete catalysis (1). Regulation of cytochrome P450 activity is crucial to maintain homeostasis and metabolize xenobiotics (1). Although the transcriptional regulation of cytochrome P450 activity is well studied, the posttranslational control of cytochrome P450 activity is yet to be fully understood (1).

PGRMC1 is a heme-binding membrane protein, and despite its name, PGRMC1 binding to heme is better understood than its binding to progesterone (15). As a type I transmembrane protein, PGRMC1 binds heme in a cytoplasmic CYB5-like domain (16). While this domain is structurally similar to CYB5, the characteristics of heme binding are distinctly different. CYB5 is an electron carrier that binds heme in a hexacoordinate fashion, with histidine residues coordinating the heme iron (17). PGRMC1 binds heme in a pentacoordinate fashion, with the hydroxyl group of a tyrosine residue coordinating the iron in the heme molecule, making it unlikely that PGRMC1 is an electron carrier (18). The significance of PGRMC1 heme binding remains to be elucidated. PGRMC1 has a paralog in mammals, named progesterone receptor membrane component 2 (PGRMC2), with which it shares 60% amino acid identity. In a landmark study, the Saez lab demonstrated that PGRMC2 functions as a heme chaperone and plays a critical role in mitochondrial homeostasis in mouse brown adipose tissue (19). Heme chaperones are necessary because of the reactive nature of free heme; they sequester heme when it is taken up from the environment or synthesized in the mitochondria. PGRMC2 is required for delivery of newly synthesized heme from the mitochondria to the nucleus (19). These observations suggest that, in addition to its demonstrated role in cytochrome P450 biology, PGRMC1 may also function as a heme chaperone. PGRMC1 and PGRMC2 have overlapping but different subcellular localizations. Both proteins are found in the endoplasmic reticulum (16,20), where cytochromes P450 reside. PGRMC2 is found in the nucleus, where it delivers heme to the nuclear receptor Rev-Erb (19). PGRMC1 has also been reported to localize to the nucleus, mitochondria, and plasma membrane (5,19,21). Here, we examined the function of PGRMC1 in mouse liver using a knockout model and found that PGRMC1 binds and stabilizes a broad range of cytochromes P450 in a heme-independent manner, defining a non-heme-chaperone function for this family of proteins.

Generation of a Pgrmc1 KO mouse and liver characterization

To study the function of PGRMC1 in vivo, we generated mice with a conditionally targeted Pgrmc1 allele. The conditional allele contains loxP sites flanking exons 1 and 2 of Pgrmc1 with a neomycin resistance cassette (Fig. S1A).
Mice carrying the conditionally targeted allele were crossed to mice expressing Sox2-Cre recombinase to produce whole-body Pgrmc1 knockout (KO) mice. Male and female whole-body Pgrmc1 KO mice are viable. Because Pgrmc1 is X-linked (22) and cytochrome P450 expression is known to be sexually dimorphic (23), only male Pgrmc1 KO mice were examined in this study. A complete blood count and full clinical chemistry were performed on Pgrmc1 KO mice (Tables S1.1 and S1.2). While some hematology parameters (red blood cell count, hematocrit, and hemoglobin) were 10% lower in the KO mice, the mice were not anemic, as the values were within the reference range (24,25). Similarly, the KO mice had 8% less plasma cholesterol than wild-type (WT) mice. This value indicates that the KO mice had a greater challenge maintaining systemic cholesterol homeostasis, but this was a subclinical phenotype (24). In summary, Pgrmc1 KO mice were in good health both clinically and physically. Both PGRMC1 and cytochromes P450 are highly expressed in human and murine liver, so we focused our study on this tissue. Knockout of Pgrmc1 protein was confirmed in Pgrmc1 KO mouse liver by western blotting using an anti-PGRMC1 antibody raised against human PGRMC1 (amino acids 43-195) (Fig. 1, A and B) (1,3). Liver size and liver tissue histology were both normal (Fig. S1, B and C) (26), and the plasma alanine aminotransferase (ALT) level, a marker of liver injury, was not elevated (Table S1.2). Liver metabolite amounts assayed were similar to control values (Table S1.3), except for a small decrease in glutamine (9%). Overall, the livers of Pgrmc1 KO mice were healthy.

Pgrmc1 binds cytochromes P450 in the liver

Our previous studies demonstrated that PGRMC1 binds cytochromes P450 in yeast and human cells (3). To test whether Pgrmc1 binds cytochromes P450 in mouse liver, Pgrmc1 KO mice were infected with AAV8 GFP or AAV8 Flag-Pgrmc1 to express the protein in the liver for 8 days (Fig. S2, A and B). A Flag affinity purification was performed on detergent-solubilized liver membrane fractions from these mice (Fig. S2C), and bound proteins were identified by mass spectrometry. Flag-Pgrmc1 binding partners (Tables S2.1 and S2.2) included 32 proteins, of which 13 (41%) were cytochromes P450. In fact, the most enriched gene ontology (GO) term among the candidate binding partners was "exogenous drug catabolic process" (p = 4.35 × 10^−14), which reflects the cytochromes P450 (Fig. 1C). The second most enriched GO term was "oxidation-reduction process" (p = 6.70 × 10^−13), which reflects the cytochromes P450 and additional proteins involved in electron transfer reactions, such as lathosterol oxidase and retinol dehydrogenase. Binding of Flag-Pgrmc1 to Cyp1a2, Cyp2e1, Cyp3a, and Cyp51a in the liver membrane fraction was validated by western blotting; binding to Cyp2f2 was not confirmed (Fig. 1D). In each confirmed instance, Flag-Pgrmc1 bound to 0.2% to 2% of the cytochrome P450 protein in the liver membrane fraction (Fig. S2D). This coimmunoprecipitation experiment demonstrated that Flag-Pgrmc1 binds cytochromes P450 and, more broadly, may bind enzymes involved in redox processes in mouse liver. A sketch of the enrichment statistic behind this kind of GO analysis follows.
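The GO analyses in this study use a Fisher's exact test with Bonferroni correction (per the figure legends). The sketch below reproduces that statistic for a single term; all counts and the number of terms tested are hypothetical placeholders, not values from the paper.

```python
from scipy.stats import fisher_exact

def go_enrichment(hits_in_term, hits_total, bg_in_term, bg_total, n_terms_tested):
    """One-sided Fisher's exact test for GO-term enrichment, Bonferroni-corrected,
    mirroring the PANTHER-style analysis described in the figure legends.
    The 2x2 table partitions proteins by (in candidate list) x (annotated to term)."""
    table = [[hits_in_term, hits_total - hits_in_term],
             [bg_in_term - hits_in_term,
              (bg_total - bg_in_term) - (hits_total - hits_in_term)]]
    _, p = fisher_exact(table, alternative="greater")
    return p, min(1.0, p * n_terms_tested)   # raw and Bonferroni-corrected p-values

# e.g. 13 of 32 candidate partners annotated to a term covering 120 of 22,000 mouse proteins,
# with (say) 5,000 GO terms tested:
print(go_enrichment(13, 32, 120, 22000, 5000))
```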
Pgrmc1 functions to maintain cytochrome P450 protein levels posttranscriptionally

We consistently noted an increase in cytochrome P450 expression in Flag-Pgrmc1-expressing KO liver as compared with GFP-expressing KO liver (Figs. 1D and S2E). To determine if Pgrmc1 affects cytochrome P450 protein levels globally, we performed quantitative mass-spectrometry proteomics with isobaric tagging on liver membrane samples from WT and Pgrmc1 KO mice and detected a total of 936 proteins in both of two biological replicates (Tables S3.1-S3.6). We detected 23 cytochromes P450 (Fig. 2A, Table S3.1), a number comparable to a previous proteomic study that surveyed cytochromes P450 in mouse liver (27). Differentially expressed proteins were defined as those increased or decreased by at least 20% with a signal-to-noise ratio of at least 2 (Tables S3.2 and S3.3). Five core proteasomal subunits were more abundant (20-26% increase) in Pgrmc1 KO samples. Pgrmc1 was the least abundant protein (70% decrease), with its detection in the Pgrmc1 KO samples likely an artifact of coisolation interference during the mass spectrometry run (28). Among the other four decreased proteins (21-38%) in Pgrmc1 KO samples were three cytochromes P450: Cyp2f2, Cyp7b1, and Cyp3a13. While not all detected cytochromes P450 met the cutoff criteria for significant change and signal-to-noise ratio, the P450 family tended to be decreased in Pgrmc1 KO liver (Fig. 2A). Protein expression of Cyp1a2, Cyp2f2, Cyp7b1, Cyp51a, Cyp2e1, and Cyp3a was confirmed to be decreased in Pgrmc1 KO mouse liver membranes by western blotting (Fig. 2, B and C). Quantitative proteomics revealed a 14 to 38% reduction in protein across these specific cytochromes P450, and western blotting revealed a 22 to 70% reduction. Together, these complementary methods show that Pgrmc1 functions to maintain protein levels of these cytochromes P450. Although Pgrmc1 binds to cytochromes P450, Pgrmc1 might also affect protein levels indirectly by reducing cytochrome P450 transcript levels in the liver. To investigate this, we performed RNA-seq on WT and Pgrmc1 KO liver. Expression of 16,318 genes was measured, including 83 cytochromes P450 (Fig. 2D, Tables S4.1-S4.3). mRNAs more abundant in Pgrmc1 KO livers were also enriched for the GO terms "exogenous drug catabolic process" (p = 2.46 × 10^−3) and "oxidation-reduction process" (p = 2.07 × 10^−3) (Fig. 2E). Notably, Pgrmc1 binding partners were enriched for these same GO terms (Fig. 1C). For the subset of the 83 cytochromes P450 detected in the RNA-seq data (Table S4.4), 15% were more abundant in Pgrmc1 KO samples compared with wild-type, and 83% did not change. These data indicate that loss of Pgrmc1 does not reduce cytochrome P450 transcript amounts. In fact, loss of Pgrmc1 leads to the upregulation of transcripts involved in drug metabolism and redox processes, indicating that Pgrmc1 does not reduce cytochrome P450 levels through transcriptional regulation.

Figure 1 legend (partial). [The domain structure of PGRMC1] consists of a single-pass transmembrane domain (TM) and a cytochrome b5-like domain, which shares 30% identity with the human cytochrome b5 protein (NP_683725.1). A rabbit polyclonal antibody (5944) was raised to a bacterially expressed recombinant protein consisting of amino acids 43 to 195 of human PGRMC1. B, Pgrmc1 protein expression in Pgrmc1 KO mouse liver. Pgrmc1 was knocked out in the whole animal by crossing mice with a conditionally targeted Pgrmc1 allele containing loxP sites flanking exons 1 and 2 of the gene to Sox2-Cre mice. Knockout in the liver was confirmed by western blotting liver lysate (+β-mercaptoethanol) with an anti-PGRMC1 antibody (5944). Actin is a loading control. Each lane is a biological replicate (WT n = 3, KO n = 3).
C, biological process gene ontology (GO) term analysis of candidate binding partners of Flag-Pgrmc1 from liver. Pgrmc1 KO mice were infected with 5 × 10^11 particles of AAV8 GFP or AAV8 Flag-Pgrmc1 by tail vein injection and sacrificed after 8 days. Liver membrane fractions were subjected to Flag coimmunoprecipitation. Eluates from technical triplicates were pooled for each of three biological replicates and tagged with isobaric labels. Flag-Pgrmc1-binding proteins were identified by mass spectrometry; 33 proteins have a fold-change ≥20% compared with the GFP control. The GO terms enriched relative to the complete Mus musculus proteome were identified using PANTHER. A Fisher's exact test and Bonferroni correction were used to determine enriched GO terms with a p-value ≤0.05. D, input (1×) and bound (20×) fractions from Flag coimmunoprecipitation samples were subjected to western blotting for cytochromes P450 detected by mass spectrometry. Each panel is a montage from a single membrane with dashed lines denoting removed lanes. (* denotes IgG; ** ladder overflow into lane 1.)

Figure 2 legend. Pgrmc1 regulates cytochrome P450 protein levels. A, membrane-enriched proteome of Pgrmc1 KO livers. Steady-state protein levels from WT and Pgrmc1 KO liver membrane fractions were quantified by mass spectrometry with isobaric tagging. Membrane proteins from the livers of four to five male mice of each genotype were pooled. Two biological replicates were conducted, with 936 proteins measured in both replicates and further analyzed. The log2 fold-change [log2(KO/WT)] in expression was plotted against the absolute value of the signal-to-noise ratio. The signal-to-noise ratio is a moderated test statistic and reflects how unusual a given value of the log2 fold-change is when considering the whole data set. Proteins with an absolute value of the signal-to-noise ratio ≥2 were considered significant. Blue dashed lines indicate a 20% fold change, cytochromes P450 are colored red, and Pgrmc1 is colored yellow. B, western blots of liver membrane fractions for cytochromes P450. Liver membrane-enriched protein (15 μg/lane for Cyp2f2, Cyp51a, and Cyp7b1; 10 μg/lane for all others; +β-mercaptoethanol; calnexin panels are loading controls for the panels above them) was analyzed by western blotting using the indicated antibodies. Each lane is a biological replicate (WT n = 8, KO n = 8). C, fold change in liver membrane protein expression of cytochromes P450 in Pgrmc1 KO compared with WT mice for (B). Cytochrome P450 signal intensities for each lane in (B) were first normalized to calnexin. Error bars are 1 SEM (WT n = 8, KO n = 8; Welch's t test, one-tailed; * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001, **** p ≤ 0.0001). D, transcriptome of Pgrmc1 KO livers. RNA-seq was performed on total RNA from Pgrmc1 KO mice and WT controls. RNAs from five mice per genotype were pooled to produce one sample per genotype for analysis. In total, 16,318 genes were measured and plotted. The log2 fold change [log2(KO/WT)] in expression was plotted against the probability of differential expression (PDE), which is the Bayesian posterior probability that a difference in expression exists. Genes with a PDE ≥0.95 were considered significant. Blue dashed lines indicate a 40% fold change, cytochromes P450 are colored red, and Pgrmc1 is colored yellow. E, biological process GO term analysis of transcripts more abundant in Pgrmc1 KO liver.
Enriched GO terms among the 70 genes with a fold change ≥40% and PDE ≥0.95, as compared with all transcripts measured, were identified using PANTHER. A Fisher's exact test and Bonferroni correction were used to determine enriched GO terms with a p-value ≤ 0.05. F, Flag-PGRMC1 stabilizes CYP1A2 in human SV589 cells. PGRMC1 KO cells were cotransfected with 5 μg CYP1A2-1XMyc and 10 μg of empty vector (pcDNA3.1) or Flag-PGRMC1 in a 10-cm plate. At 24 h posttransfection, cells were split 1:6 into a 6-well plate. At 48 h posttransfection, cells were treated with 100 μg/ml emetine and harvested every 2 h. Cell lysates were analyzed by western blotting. Actin is a loading control. Panels are representative of five independent experiments. Each panel is a montage from a single membrane with dashed lines denoting removed lanes. (* denotes background band). G, the half-life of CYP1A2 in the presence and absence of Flag-PGRMC1 was determined from (F). CYP1A2 signal was normalized to the actin loading control signal. Within each replicate, expression was normalized to the t = 0 value for each condition and then averaged. The data were fit to a second-order polynomial model (R² WT = 1, KO = 1). The half-life is calculated as the x-coordinate of the curve when y = 0.5. Error bars are 1 SEM (No PGRMC1 n = 5, Flag-PGRMC1 n = 5).

Our in vivo data indicate that Pgrmc1 regulates cytochrome P450 protein levels through some posttranscriptional mechanism. We next sought to establish whether Pgrmc1 regulates cytochrome P450 protein levels by altering protein stability. We chose CYP1A2 because it bound Flag-Pgrmc1 in vivo and is a constitutively expressed cytochrome P450 that metabolizes pharmaceutical drugs and other xenobiotics (29). We generated human PGRMC1 KO cells by CRISPR-editing SV589 fibroblasts (30). Knockout of PGRMC1 was validated by sequencing and western blotting (Fig. S3A). When the PGRMC1 KO cells were transfected with Flag-PGRMC1, overexpressed PGRMC1 was twofold higher than endogenous PGRMC1 (Fig. S3B). Next, we sought to validate the PGRMC1-CYP1A2 binding observed in mouse liver using human cells. Specific binding between Flag-PGRMC1 and CYP1A2-1XMyc was confirmed by Flag coimmunoprecipitation in PGRMC1 KO cells (Fig. S3C). To test the effect of PGRMC1 on CYP1A2 stability, PGRMC1 KO cells were transfected with CYP1A2-1XMyc with or without Flag-PGRMC1. At 48 h posttransfection, cells were exposed to the irreversible translation inhibitor emetine, and protein levels were measured over time. PGRMC1 stabilized CYP1A2-1XMyc protein, increasing its half-life by 67% (Fig. 2, F and G). This experiment demonstrated that PGRMC1 stabilizes CYP1A2 posttranslationally.

Pgrmc1 is required for maximal Cyp1a2 and Cyp2e1 activity in the liver

As Pgrmc1 supports protein levels of cytochromes P450, we wondered whether cytochrome P450 activity was reduced in Pgrmc1 KO mice. Our previous work showed that PGRMC1 binds CYP51A1 and supports its activity in cholesterol synthesis in HEK293 cells (3). To directly measure cytochrome P450 enzyme activity in Pgrmc1 KO liver, we assayed 7-ethoxycoumarin O-deethylation (ECOD) using membrane samples from WT and Pgrmc1 KO mice. ECOD is mediated by CYP1A1 and CYP1A2 in humans and rodents, with additional cytochromes P450 contributing to metabolism in humans (31).
CYP1A1 expression is induced in response to the presence of its substrates, which include highly toxic polyaromatic hydrocarbons that laboratory animals are unlikely to encounter (32). Thus, effects of Pgrmc1 on ECOD activity in mouse liver likely report on Cyp1a2 activity. ECOD product formation (7-hydroxycoumarin) was monitored over a range of substrate (7-ethoxycoumarin) concentrations, and the data exhibited Michaelis-Menten kinetics (WT R² = 0.951, KO R² = 0.934) (Fig. 3A). Product formation was reduced at all substrate concentrations tested in the Pgrmc1 KO samples, and the difference was statistically significant between 0.125 mM and 2 mM. The Km of the reaction did not differ in the KO (Fig. 3B). However, the Vmax of the reaction was 42% lower in the KO (Fig. 3B), consistent with reduced levels of Cyp1a2 protein in Pgrmc1 KO livers (Fig. 2, B and C). Caffeine is another well-established Cyp1a2 substrate (29). We assayed caffeine N3-demethylation by monitoring paraxanthine formation in liver membrane samples from WT and Pgrmc1 KO mice (Fig. 3C). In the KO samples, paraxanthine formation was reduced by 30% at 50 μM caffeine. Taken together, these two enzyme assays demonstrate that loss of Pgrmc1 reduces Cyp1a2 activity, consistent with measured decreases in Cyp1a2 protein (Fig. 2, B and C).

Figure 3. A, ECOD reaction in WT and Pgrmc1 KO liver membrane fractions. Error bars are 1 SD (WT n = 3, KO n = 3 technical replicates; Student's t test for each substrate concentration; **** p < 0.0001; nonlinear regression R² WT = 0.951, KO = 0.934). B, apparent Km and Vmax of the ECOD reaction were calculated from the fitted Michaelis-Menten curves in (A). Values are mean ± SEM. C, caffeine N3-demethylation reaction in Pgrmc1 KO membrane fraction. Liver membrane protein (100 μg) from WT or Pgrmc1 KO mice prepared as in A was combined with 50 μM caffeine and an NADPH regeneration buffer system for 60 min at 37 °C. Paraxanthine formed was detected by mass spectrometry. Error bars are 1 SD (WT n = 9, KO n = 9 technical replicates; Student's t test; **** p < 0.0001). D, p-Nitrophenol hydroxylation reaction in Pgrmc1 KO membrane fraction. Samples were pooled liver membrane protein from four male mice of each genotype. Liver membrane protein (125 μg) from WT or Pgrmc1 KO mice was combined with 100 μM p-nitrophenol and an NADPH regeneration buffer system for 60 min at 37 °C. Control samples contained no liver membrane protein. p-Nitrocatechol formed was detected spectrophotometrically, and absorbances were corrected for background signal. Error bars are 1 SD (Control n = 6, WT n = 9, KO n = 9 replicates; Welch's t test; **** p < 0.0001).

We also tested the effect of loss of Pgrmc1 on Cyp2e1 activity. Cyp2e1 is a cytochrome P450 involved in the metabolism of ethanol, pharmaceutical drugs, and low-molecular-weight carcinogens, and p-nitrophenol is a known CYP2E1 substrate (1,33). We assayed p-nitrophenol hydroxylation in liver membrane samples from WT and Pgrmc1 KO mice (Fig. 3D). In the KO samples, p-nitrocatechol formation was reduced by 62% at 100 μM p-nitrophenol, consistent with reduced levels of Cyp2e1 protein in Pgrmc1 KO livers (Fig. 2, B and C). These activity assays show that loss of Pgrmc1 reduces cytochrome P450 activity, consistent with measured decreases in cytochrome P450 protein.

Pgrmc1 protects against acetaminophen-induced liver injury

Since Pgrmc1 was required for maximal Cyp2e1 activity in liver membranes in vitro, we asked whether Pgrmc1 supports liver Cyp2e1 cytochrome P450 activity in vivo.
Overdosing on the common over-the-counter analgesic acetaminophen (APAP) is the leading cause of acute liver failure in patients (34,35). In 29% of APAP overdose cases, liver damage is so great that a liver transplant is required (35). Cyp2e1 metabolizes APAP, converting it to the reactive product N-acetyl-p-benzoquinone imine (NAPQI), which is subsequently glutathionylated for excretion (34). When normal doses of APAP are consumed, NAPQI is readily glutathionylated and no longer reactive (34). However, when overdoses of APAP are consumed, the Cyp2e1-dependent production of NAPQI overwhelms the glutathione pool and causes liver damage (34). Cyp2e1 KO mice are protected against APAP-induced liver injury, which highlights the important role of Cyp2e1 in APAP toxicity (36,37). Flag-Pgrmc1 bound Cyp2e1 (Fig. 1D, Tables S2.1 and S2.2), and Cyp2e1 protein levels were 20% higher in WT liver than in Pgrmc1 KO liver (Fig. 2, B and C). To investigate a functional role for Pgrmc1 in Cyp2e1 activity, we tested whether Pgrmc1 KO protected against APAP-induced liver injury. Mice were injected with 600 mg APAP per kg body weight and euthanized 24 h later. All mice survived the study. Liver damage was surveyed by measuring serum ALT and AST and by histologic analysis of hematoxylin and eosin-stained liver sections. In WT animals treated with APAP, liver damage was apparent. ALT was elevated 37-fold, and AST was elevated 20-fold compared with vehicle-treated controls (Fig. 4, A and B). APAP-treated WT mice had centrilobular hepatocellular necrosis characteristic of APAP hepatotoxicity and cytoplasmic microvesicular vacuolation (Fig. 4C). Notably, Pgrmc1 KO mice treated with APAP had the same ALT and AST levels as vehicle-treated mice (Fig. 4, A and B). While the APAP-treated KO mice exhibited cytoplasmic microvesicular vacuolation, they did not have the extensive centrilobular necrosis observed in APAP-treated WT mice (Fig. 4C). Thus, lack of Pgrmc1 expression was protective against APAP-induced liver injury, which is mediated by Cyp2e1. These data show that Pgrmc1-dependent stabilization of cytochromes P450 can have a clinically significant impact in a model of drug-induced liver injury.

Y113F Pgrmc1 rescues cytochrome P450 levels in the liver

The most notable biochemical property of PGRMC1 is its ability to bind heme. Residue Y113 was identified in the PGRMC1 crystal structure (PDB 4X8Y) as the axial iron-coordinating residue in the heme-binding pocket (Fig. 5A) (18). Kabe et al. (18) reported that, in in vitro binding assays with recombinant protein, CYP1A2 does not bind Y113F PGRMC1 lacking the N-terminal transmembrane domain. We tested whether this axial iron-coordinating residue was required for full-length Pgrmc1 to stabilize cytochromes P450. We infected Pgrmc1 KO mice with AAV8 Y113F Flag-Pgrmc1, AAV8 GFP, or AAV8 Flag-Pgrmc1 as described above. We performed quantitative mass-spectrometry proteomics with isobaric tagging on membrane samples from these mice with three biological replicates per condition. A total of 624 proteins were detected in all three mice per condition, including 19 cytochromes P450 (Fig. S4, A and B, Tables S5.1-S5.6). Among the proteins with reduced abundance in the GFP samples compared with the Flag-Pgrmc1 samples were Cyp1a2, Cyp2f2, Cyp51a, and Cyp2e1, which were also less abundant in Pgrmc1 KO compared with WT mice (Fig. 2, B and C, Tables S5.1 and S5.2).
These cytochromes P450 were also less abundant in the GFP samples compared with Y113F Flag-Pgrmc1 (Tables S5.3 and S5.4). Similar to the trend observed for Pgrmc1 KO mice (Fig. 2), cytochromes P450 tended to be less abundant in AAV8 GFP samples and more abundant in Flag-Pgrmc1 and Y113F Flag-Pgrmc1 samples (Fig. S4, A and B). Cytochromes P450 showed no significant differential expression between Flag-Pgrmc1 and Y113F Flag-Pgrmc1 samples in the proteomics dataset (Fig. S4C, Tables S5.5 and S5.6). Protein expression of Cyp1a2, Cyp2e1, Cyp2f2, Cyp3a, and Cyp51a was confirmed to be lower in GFP samples compared with Flag-Pgrmc1 and Y113F Flag-Pgrmc1 samples by western blotting (Figs. 5B and S5A). Expression of Cyp1a2, Cyp2e1, Cyp3a, and Cyp51a was the same in the Y113F Flag-Pgrmc1 sample as in the Flag-Pgrmc1 sample (Figs. 5B and S5A). Thus, the axial iron-coordinating residue is not required in mouse liver for Pgrmc1 to maintain cytochrome P450 levels.

We next tested whether Y113F Flag-Pgrmc1 binds cytochromes P450 in mouse liver by Flag coimmunoprecipitation coupled to mass spectrometry. Y113F Flag-Pgrmc1 bound 75 proteins (Table S2.2), which was twice the number of binding partners for Flag-Pgrmc1. Among these 75 binding partners were 18 cytochromes P450. Binding of Y113F Flag-Pgrmc1 to Cyp1a2, Cyp2e1, Cyp3a, and Cyp51a was validated by western blotting (Figs. 5B and S5B). Consistent with the result for Flag-Pgrmc1, binding of Y113F Flag-Pgrmc1 to Cyp2f2 was not confirmed (Fig. 5B). The top enriched GO terms among the candidate binding partners were "exogenous drug catabolic process" (p = 9.96 × 10⁻¹⁸) and "oxidation-reduction process" (p = 5.45 × 10⁻¹⁶) and reflect the cytochromes P450 and additional proteins that are involved in electron transfer reactions (Fig. 5C). Flag-Pgrmc1 candidate binding partners were also enriched for these GO terms (Fig. 5C). Like Flag-Pgrmc1, Y113F Flag-Pgrmc1 binds cytochromes P450 and, more broadly, may bind enzymes involved in redox processes in the liver.

Y113F PGRMC1 stabilizes CYP1A2 posttranslationally and supports Cyp1a2 activity in the liver

Since Y113F Flag-Pgrmc1 bound and affected steady-state protein levels of cytochromes P450 in the liver similarly to Flag-Pgrmc1, we asked if Y113F Flag-PGRMC1 stabilized CYP1A2 in cell culture. We confirmed that CYP1A2-1XMyc bound Y113F Flag-PGRMC1 (Fig. S5C). Y113F Flag-PGRMC1 also stabilized CYP1A2-1XMyc to the same extent as Flag-PGRMC1 (Fig. 5, D and E). This experiment showed that the axial iron-coordinating residue of PGRMC1 is not required to stabilize CYP1A2. Because Y113F Flag-PGRMC1 stabilized CYP1A2-1XMyc in human cells, and Y113F Flag-Pgrmc1 stabilized and bound Cyp1a2 in liver, we tested whether Y113F Pgrmc1 can also support Cyp1a2 activity in the liver. Using liver membrane fractions from Pgrmc1 KO mice infected with AAV8 GFP, AAV8 Flag-Pgrmc1, or AAV8 Y113F Flag-Pgrmc1, we performed the ECOD metabolism assay at a 2 mM saturating substrate concentration (Fig. 5F). As expected, the AAV8 GFP sample generated the same amount of product as an uninfected Pgrmc1 KO sample, and Flag-Pgrmc1 restored product formation to the level of the uninfected WT control (Fig. 5F). Notably, Y113F Flag-Pgrmc1 also restored product formation to the level of the uninfected WT control. Thus, the axial iron-coordinating residue Y113 of Pgrmc1 is not required for Pgrmc1 to support Cyp1a2 activity.
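The rescue comparison in Figure 5F is reported as a one-way ANOVA followed by Tukey's HSD. A minimal sketch of that analysis in Python is shown below; the group names and product-formation values are placeholders for illustration, not the measured data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder ECOD product formation values (three technical replicates per
# condition); these numbers are illustrative, not the measured data.
groups = {
    "WT":             [10.1, 9.8, 10.4],
    "KO":             [5.9, 6.2, 5.7],
    "KO+GFP":         [6.0, 5.8, 6.3],
    "KO+Flag-Pgrmc1": [9.9, 10.2, 9.6],
    "KO+Y113F":       [10.0, 9.7, 10.3],
}

# One-way ANOVA across all five conditions.
f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.2g}")

# Tukey HSD for all pairwise comparisons, as in the Figure 5F legend.
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))
```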
Altogether, Y113F PGRMC1 behaved like wild-type PGRMC1 in all mouse liver and cell culture assays.

Figure 5. B, lanes shown are the same images as Figure 1D Lanes 1 to 6 and 7 to 12. (* denotes IgG; ** denotes ladder overflow into Lane 1). C, biological process GO Term analysis on candidate binding partners of Flag-Pgrmc1 and Y113F Flag-Pgrmc1 from liver. Pgrmc1 KO mice were infected with AAV8 Y113F Flag-Pgrmc1 and processed as in Figure 1C. Y113F Flag-Pgrmc1-binding proteins were identified by mass spectrometry; 75 proteins had a fold change ≥20% compared with GFP. GO terms enriched in relation to the complete M. musculus proteome were identified using PANTHER. A Fisher's exact test and Bonferroni correction were used to determine enriched GO terms with a p-value ≤ 0.05. D, PGRMC1 KO cells were cotransfected with 5 μg CYP1A2-1XMyc and 10 μg of empty vector, Flag-PGRMC1, or Y113F Flag-PGRMC1 in a 10-cm plate. At 24 h posttransfection, cells were split 1:6 into a 6-well plate. At 48 h posttransfection, cells were treated with 100 μg/ml emetine and harvested every 2 h. Cell lysates were analyzed by western blotting. Actin is a loading control. Panels are representative of five independent experiments. Each panel is a montage from a single membrane with dashed lines denoting removed lanes. Lanes 1 to 7 are the same images as Figure 2F Lanes 1 to 7. (* denotes background band). E, the half-life of CYP1A2 in the presence and absence of Flag-PGRMC1 or Y113F Flag-PGRMC1 was determined from (D). Half-lives were calculated as in Figure 2G. Error bars are 1 SEM (No PGRMC1 n = 5, Flag-PGRMC1 n = 5, Y113F Flag-PGRMC1 n = 5). F, ECOD reaction in liver membranes of Pgrmc1 KO mice infected with AAV8 Y113F Flag-Pgrmc1. The reaction was conducted as in Figure 3A with 2 mM 7-ethoxycoumarin. Uninfected WT and KO samples were prepared as in Figure 3A. The infected KO samples were pooled samples from three male mice of each treatment group (AAV8 GFP, AAV8 Flag-Pgrmc1, AAV8 Y113F Flag-Pgrmc1) infected as in Figure 1C. Error bars are 1 SD (n = 3 technical replicates per condition; one-way ANOVA and Tukey HSD; "n.s." is not significant, *** p ≤ 0.001).

Y113F Pgrmc1 is a heme-binding protein that does not bind ferrochelatase

Since Y113F Flag-Pgrmc1 binds and stabilizes cytochromes P450 like Flag-Pgrmc1, we investigated the ability of Y113F Pgrmc1 to bind heme in vitro. The crystal structure of human PGRMC1 revealed that four residues (Y107, Y113, K163, Y164) coordinate heme in the binding pocket (Fig. 6A) (18). From Escherichia coli, we purified recombinant truncated 6X His-tagged human PGRMC1 lacking the N-terminal transmembrane domain, as well as the mutants Y113F PGRMC1 and Y113F, K163A, Y164F (3X MUT) PGRMC1, and the affinity tag was removed by thrombin cleavage (Fig. S6A). After incubation with a 100-fold molar excess of hemin, recombinant PGRMC1 (rPGRMC1) had the deep reddish-brown color characteristic of a heme-binding protein (Fig. S6B). Y113F rPGRMC1 was a similar reddish-brown color (Fig. S6B), suggesting that it retained the ability to bind heme. Unlike rPGRMC1 and Y113F rPGRMC1, 3X MUT rPGRMC1 was much lighter in color, suggesting it has a lower binding affinity for heme than either rPGRMC1 or Y113F rPGRMC1 (Fig. S6B). To measure the heme-binding affinity of each rPGRMC1 protein, 10 μM of each protein was incubated with 0 to 30 μM hemin for 16 h. The amount of hemin bound was measured spectrophotometrically at A394. The data were fit to the Hill equation or a linear model as appropriate. rPGRMC1 bound heme with a Kd of 4.9 ± 0.33 μM (Fig. 6B), which is similar to the value reported for PGRMC2 (1.4 μM) (19). Y113F rPGRMC1 bound heme with the same affinity as rPGRMC1 (Kd = 4.7 ± 0.42 μM) (Fig. 6B). In contrast, 3X MUT rPGRMC1 bound heme nonspecifically (Fig. 6B). Taken together, these results indicate that the Y113F mutation of the iron-coordinating residue does not prevent heme binding and that additional residues in the heme-binding pocket of PGRMC1 must be mutated to prevent heme binding.

Having identified a heme-binding mutant of PGRMC1, we next tested whether heme binding by PGRMC1 is required for PGRMC1 to bind and stabilize CYP1A2. Using PGRMC1 KO cells, we found that 3X MUT Flag-PGRMC1 binds CYP1A2-1XMyc (Fig. S6C). Additionally, 3X MUT Flag-PGRMC1 stabilized CYP1A2-1XMyc in an emetine chase more effectively than Flag-PGRMC1 or Y113F Flag-PGRMC1 (Fig. 6, C and D). These results indicate that PGRMC1 binds and stabilizes CYP1A2 in a heme-independent fashion.

Figure 6. Pgrmc1 binds cytochromes P450 in a heme-independent manner, while binding to ferrochelatase is sensitive to the Y113F mutation in PGRMC1. A, structure of truncated human PGRMC1 with heme ligands Y107, Y113, K163, and Y164 highlighted (PDB 4X8Y). B, heme-binding affinity of rPGRMC1, Y113F rPGRMC1, and 3X MUT rPGRMC1 protein. Each protein (10 μM) was incubated with 0 to 30 μM hemin for 16 h at room temperature. The amount of hemin bound was measured spectrophotometrically at A394. Data were fit to the Hill equation or a linear model as appropriate. C, 3X MUT Flag-PGRMC1 stabilizes CYP1A2 in human cells. PGRMC1 KO cells were cotransfected with 5 μg CYP1A2-1XMyc and 10 μg of either empty vector, Flag-PGRMC1, Y113F Flag-PGRMC1, or 3X MUT Flag-PGRMC1 in a 10-cm plate. At 24 h posttransfection, cells were split 1:6 into a 6-well plate. At 48 h posttransfection, cells were treated with 100 μg/ml emetine and harvested every 2 h. Cell lysates were analyzed by western blotting. Actin is a loading control. Panels are representative of three independent experiments. D, the half-life of CYP1A2 in the presence and absence of Flag-PGRMC1, Y113F Flag-PGRMC1, or 3X MUT Flag-PGRMC1 was determined from (C) and Figure 5D. Half-lives were calculated as in Figure 2G. Error bars are 1 SEM (No PGRMC1 n = 8, Flag-PGRMC1 n = 8, Y113F Flag-PGRMC1 n = 8, 3X MUT Flag-PGRMC1 n = 3). E, input (1×) and bound (20×) fractions from Flag coimmunoprecipitation samples were subjected to western blotting for ferrochelatase, which was detected by mass spectrometry to bind Flag-Pgrmc1 in Pgrmc1 KO liver membranes. F, quantification of ferrochelatase in the bound fraction of the Flag coimmunoprecipitation for (E). Within each biological replicate, the ratio of ferrochelatase expression in the bound fraction to the input fraction was quantified. Error is 1 SD (GFP n = 3, Flag-Pgrmc1 n = 3, Y113F Flag-Pgrmc1 n = 3; one-way ANOVA and Tukey HSD; n.s. denotes not significant; **** p ≤ 0.0001).

Despite the shared ability of PGRMC1 and Y113F PGRMC1 to bind and stabilize cytochromes P450 and their affinity for heme, we discovered one notable difference between Flag-Pgrmc1 and Y113F Flag-Pgrmc1. We performed a stringent quantitative analysis of the proteins that bound Flag-Pgrmc1 and Y113F Flag-Pgrmc1 in the Flag pull-down from liver membrane fractions (Tables S6.1-S6.9). As expected, Flag-Pgrmc1 and Y113F Flag-Pgrmc1 both bound cytochromes P450.
Interestingly, ferrochelatase (Fech) bound Flag-Pgrmc1, but not Y113F Flag-Pgrmc1 (Fig. 6, E and F). Fech is responsible for the final step in heme synthesis (38), and binding between PGRMC1 and FECH has been reported previously in human cells (21), although binding had not previously been examined in mammalian liver. Binding of Fech to Flag-Pgrmc1 was robust, but Y113F Flag-Pgrmc1 failed to bind Fech despite equal expression (Fig. S6, E and F). Taken together, these results suggest that Y113F PGRMC1 is capable of binding heme with the affinity of the wild-type protein, but the axial iron-coordinating residue of Pgrmc1 is critical for Pgrmc1 binding to the heme biosynthetic enzyme ferrochelatase.

Discussion

PGRMC1 is a membrane-bound, heme-binding protein implicated in a plethora of biological processes (16). Previous work demonstrated that PGRMC1 is a cytochrome P450-binding protein in mammalian cells, and PGRMC1 supports the enzymatic activity of CYP51A1 in cholesterol synthesis in this system (3). Here, we extend these findings to mammalian liver and show that PGRMC1 is broadly required for cytochrome P450 function in vivo. Using a whole-body Pgrmc1 KO mouse, we observed that (1) PGRMC1 binds many cytochromes P450 from diverse families in the liver; (2) PGRMC1 stabilizes cytochromes P450 posttranslationally in a heme-independent manner; (3) PGRMC1 is required for maximal activity in the liver of cytochrome P450 enzymes that it binds; (4) PGRMC1 alters cytochrome P450 activity at a physiologically significant level that may have a clinical impact; and (5) PGRMC1 binds the terminal heme synthesis enzyme ferrochelatase (FECH) in the liver. Our observations expand knowledge of in vivo cytochrome P450 biology and implicate PGRMC1 in heme metabolism.

PGRMC1 binds many cytochromes P450 from diverse families in the liver. Pgrmc1 bound 56% of the 23 cytochromes P450 assayed by mass spectrometry. We validated the binding interaction with Pgrmc1 for five cytochromes P450 (Cyp1a2, Cyp2e1, Cyp3a, Cyp51a1, Cyp2f2) by western blotting, and all interactions were confirmed except Cyp2f2, for reasons that are unclear. PGRMC1 bound cytochromes P450 from families involved in different functions, including xenobiotic, pharmaceutical drug, cholesterol, and arachidonic acid metabolism (1). PGRMC1 may yet bind more cytochromes P450 than the 23 we report here. If the interaction between Pgrmc1 and cytochromes P450 is transient, which is likely since we measured that only 0.2 to 2% of the total amount of a cytochrome P450 bound Pgrmc1, then the full complement of cytochromes P450 that bind Pgrmc1 may not have been identified. Additionally, it is unknown whether PGRMC1 preferentially binds apo-cytochromes P450 or the heme-loaded forms; as such, the protein synthesis rate and heme-loading rate of a cytochrome P450 may affect the duration of PGRMC1 binding and the ability to detect the interaction at steady state. Also, healthy laboratory mice come in contact with very few chemical stressors that would induce expression of cytochromes P450 that are not constitutively expressed; therefore, the full complement of mouse P450 enzymes was not tested for Pgrmc1 binding. A structure of PGRMC1 bound to cytochromes P450 or chemical cross-linking studies would advance the field and potentially identify characteristics of the PGRMC1-cytochrome P450 binding interface that could allow cytochrome P450-binding partners of PGRMC1 to be predicted.

PGRMC1 stabilizes cytochromes P450 posttranslationally.
Cytochrome P450 proteins that bound PGRMC1 tended to be less abundant in Pgrmc1 KO liver. Of the cytochromes P450 that bound Pgrmc1 for which we have complementary steady-state proteomic data, 64% (7/11) were less abundant in Pgrmc1 KO liver. The decrease in protein expression cannot be attributed to a decrease in mRNA level, as gene expression was either unchanged (Cyp2j5, Cyp2c29, Cyp1a2, Cyp2e1, Cyp2f2, Cyp51a1) or elevated (1.4-fold, Cyp3a11) in Pgrmc1 KO liver. The elevation of cytochrome P450 transcripts in Pgrmc1 KO liver, like Cyp3a11, may be a compensation mechanism to restore cytochrome P450 protein levels. Indeed, such a mechanism may mask the effect that loss of Pgrmc1 has on cytochrome P450 protein levels. The three cytochromes P450 that were among the least abundant proteins in Pgrmc1 KO mice (Cyp2f2, Cyp7b1, Cyp3a13) were all shown to bind Pgrmc1 by at least one method (mass spectrometry or western blotting), and their mRNAs were likewise either unchanged (Cyp2f2, Cyp3a13) or elevated (twofold, Cyp7b1). Cyp1a2 bound Pgrmc1 and was less abundant in Pgrmc1 KO liver. Using cultured cells, we confirmed that Flag-PGRMC1 bound CYP1A2 and stabilized CYP1A2 in a chase assay with a translation inhibitor. Collectively, these in vivo and in vitro data demonstrate that PGRMC1 binds and posttranslationally stabilizes cytochrome P450 enzymes in mouse liver.

PGRMC1 supports the activity of cytochrome P450 enzymes that it binds in mouse liver, including Cyp1a2 and Cyp2e1. The apparent maximal reaction rate (Vmax) in Pgrmc1 KO microsomes for the Cyp1a2-dependent ECOD reaction was reduced by approximately 40%, but there was no change in enzyme affinity (Km) (Fig. 3). Likewise, Cyp2e1 product formation was reduced by 62% in Pgrmc1 KO mice, and the KO mice were protected from APAP-induced liver injury (Figs. 3 and 4) (36,37). Since the Vmax of an enzyme is proportional to the amount of enzyme present and the turnover of the enzyme (kcat), Pgrmc1 may affect either of these parameters to increase cytochrome P450 activity. Our data demonstrate that a decrease in Cyp1a2 and Cyp2e1 protein underlies the decrease in cytochrome P450 activity in Pgrmc1 KO liver. Both Cyp1a2 and Cyp2e1 bind Pgrmc1, and both proteins are expressed at lower levels in Pgrmc1 KO liver. PGRMC1 most likely controls enzyme activity by stabilizing cytochromes P450 rather than by affecting the catalytic cycle of the enzyme, but a concomitant effect on the kcat of cytochromes P450 upon PGRMC1 binding cannot be ruled out, as protein-protein interactions are also known to alter cytochrome P450 activity (39-42). Evidence exists that PGRMC1 binds cytochromes P450 directly. PGRMC1/Dap1p bound the cytochromes P450 Erg11p and Erg5p directly in S. pombe (3), and recombinant PGRMC1 bound CYP1A2 and CYP3A4 in vitro (18). No obvious adaptor protein was discovered among the candidate binding partners of PGRMC1 in this study. However, PGRMC2 cannot be ruled out as an adaptor, as binding of PGRMC1 and PGRMC2 has previously been reported and was observed by mass spectrometry in this study (19,43). Given that Pgrmc1 supports the activity of Cyp1a2 and Cyp2e1, Pgrmc1 may increase the activity of other cytochromes P450 that it binds and stabilizes. It is unknown if binding of PGRMC1 to cytochromes P450 is regulated. If true, regulated binding would represent a previously unappreciated mechanism for regulation of cytochrome P450 protein levels and activity.
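To spell out the kinetic reasoning, the Michaelis-Menten relations connect the fitted parameters to enzyme amount:

$$ v = \frac{V_{\max}\,[S]}{K_m + [S]}, \qquad V_{\max} = k_{\mathrm{cat}}\,[E]_{\mathrm{total}} $$

Km is a property of the enzyme-substrate pair, whereas Vmax scales linearly with the amount of active enzyme. A roughly 40% loss of Cyp1a2 protein therefore predicts a roughly 40% drop in Vmax with an unchanged Km, matching the pattern observed in Figure 3B.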
While the mechanism by which PGRMC1 maintains cytochrome P450 protein levels is unknown, this interaction alters cytochrome P450 activity in a physiologically significant way that can have a clinical impact. In Pgrmc1 KO mice, systemic cholesterol is 8% lower than in controls, which is likely due to a Pgrmc1-dependent effect on Cyp51a1 in the cholesterol biosynthetic pathway. Hughes et al. (3) demonstrated that PGRMC1 binds CYP51A1 and supports cholesterol synthesis in HEK293 cells. Recently, deletion of PGRMC1 was shown to cause X-linked isolated pediatric cataract in humans, and the authors proposed that this is due to disruptions in CYP51A1-dependent cholesterol synthesis in the lens (44). The fact that Pgrmc1 KO mice are protected from APAP-induced toxicity demonstrates that Pgrmc1 has an in vivo effect on cytochrome P450 activity. PGRMC1 also affects the stability of CYP3A and other cytochromes P450 involved in the metabolism of pharmaceutical drugs, which suggests that PGRMC1 may play a clinically significant role in drug metabolism in patients. Individuals with mutations or polymorphisms of PGRMC1 should be studied for altered drug metabolism or other disease phenotypes associated with cytochrome P450-dependent reactions. Consistent with the presence of a subclinical phenotype in Pgrmc1 KO mice, KO livers had increased levels of core proteasome subunits associated with membranes (Table S3.3) and increased mRNA expression of the acute-phase response proteins SAA1 and SAA2 (Table S4.3). The full scope of PGRMC1 function in P450 biology is unknown, since pathways may need to be stressed in order to observe a phenotype, as seen for Cyp2e1 and APAP overdose.

The axial heme-coordinating residue Y113 of PGRMC1 is not required for PGRMC1 to bind heme or to bind and stabilize cytochromes P450 (Figs. 5 and 6). This axial residue was not definitively identified until 2016, when Kabe et al. (18) solved the X-ray crystal structure of truncated, recombinant PGRMC1 and identified Y113 as the elusive residue. While this previous study showed that Y113 is required for PGRMC1 to form a homodimer in vitro by mutating the tyrosine to the structurally similar phenylalanine residue, the authors did not report the heme-binding affinity of the Y113F mutant. Our work shows that Y113F PGRMC1 binds heme specifically and with similar affinity as PGRMC1. Kabe et al. (18) reported that the Y113F PGRMC1 mutant does not bind CYP1A2 in an in vitro binding experiment that used a truncated form of PGRMC1 lacking the N-terminal transmembrane domain. Contrary to this result, our work shows that full-length Y113F PGRMC1 binds CYP1A2 in both cultured cells and mouse liver and that Y113F PGRMC1 promotes CYP1A2 stability as well as wild-type PGRMC1 does. In fact, Y113F Pgrmc1 binds all 13 cytochromes P450 that Pgrmc1 binds in mouse liver plus an additional five cytochromes P450. These data suggest that the N-terminal transmembrane domain of PGRMC1 may contribute to cytochrome P450 binding. To render PGRMC1 unable to bind heme, Y113 and two additional residues that coordinate the protoporphyrin ring (K163, Y164) must be mutated. This 3X MUT PGRMC1 bound CYP1A2 and stabilized it in cultured cells (Figs. 6 and S6), demonstrating that PGRMC1 can bind and stabilize cytochromes P450 in a heme-independent fashion. Mechanisms of cytochrome P450 turnover are relatively understudied compared with cytochrome P450 enzymology, even though turnover influences cytochrome P450 activity (45,46).
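As a concrete illustration of how chase data translate into the half-lives discussed here, the calculation described in the Figure 2G legend can be sketched in a few lines of Python; the time points and signal values below are placeholders, not the measured data.

```python
import numpy as np

# Placeholder emetine-chase signals, normalized to actin and to t = 0
# (illustrative values only, not the measured data).
t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)   # hours after emetine
y = np.array([1.00, 0.90, 0.78, 0.64, 0.52, 0.42, 0.33])

# Fit a second-order polynomial, as described in the Figure 2G legend.
a, b, c = np.polyfit(t, y, 2)

# The half-life is the x-coordinate where the fitted curve crosses y = 0.5.
roots = np.roots([a, b, c - 0.5])
half_life = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(f"estimated half-life: {half_life:.1f} h")
```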
One intriguing hypothesis is that binding of PGRMC1 blocks ubiquitination sites on cytochromes P450, delaying their turnover. If this is the case, the binding between cytochromes P450 and PGRMC1 would not require PGRMC1 to bind heme. Interestingly, Pgrmc1 and Y113F Pgrmc1 also bound other redox proteins in the liver that are not heme-binding proteins (Tables S2.1 and S2.2). This opens the possibility that Pgrmc1 may stabilize these redox proteins as well. The mechanism of the heme-independent stabilization of cytochromes P450 by PGRMC1 and any heme-dependent function of PGRMC1 remain to be elucidated.

PGRMC1 binding to heme is not required for cytochrome P450 binding or stability. However, the Y113F mutation, which removes the iron-coordinating hydroxyl group, disrupts binding to ferrochelatase (FECH) in mouse liver. As FECH is the terminal enzyme in mitochondrial heme synthesis (38), this observation opens questions about the role of PGRMC1 in heme metabolism. Piel et al. previously reported that PGRMC1 binds to FECH in human embryonic kidney and leukemia cells (21). Here, we extended this observation to mouse liver. Roughly 20% of the liver Fech present bound to Pgrmc1, suggesting a potentially direct interaction. The precise mechanism by which heme is transferred safely from FECH to hemoproteins throughout the cell remains to be elucidated (38). In the cellular environment, heme is a highly reactive molecule that must be sequestered to prevent indiscriminate cellular damage and must be delivered safely to heme-binding proteins (38). The Y113 residue is critical for PGRMC1 to bind FECH. Although we cannot rule out that small differences in heme-binding affinity underlie the dramatic change in Pgrmc1-Fech binding (Fig. 6), a more likely interpretation is that the Y113 residue is structurally important for the interaction of the two proteins. Others have hypothesized that FECH transfers heme to PGRMC1 (19,21). To this hypothesis, we add that heme transfer is likely Y113-dependent. In this regard, detailed structural information on the PGRMC1-FECH complex would advance the field. Interestingly, the NCBI dbSNP database contains 1751 SNPs for PGRMC1, of which 96 (5.5%) are missense mutations. No missense mutations occur in the Y113 codon or those of the other three residues that compose the heme-binding site (Fig. 6), consistent with these PGRMC1 residues having an important function in vivo.

The binding of PGRMC1 to FECH, which is associated with the inner membrane in the mitochondrial matrix, and to ER-localized cytochrome P450 enzymes raises the question of how PGRMC1 localizes to these different subcellular compartments and whether its localization is dynamic. The binding of PGRMC1 and FECH places PGRMC1 in a key position to transfer heme throughout the cell. It is difficult to explain how an ER-resident type I membrane protein like PGRMC1 can interact with FECH, which resides in the mitochondrial matrix associated with the inner membrane, such that heme from FECH could be transferred to the cytochrome b5-like domain of PGRMC1. While studies have described PGRMC1 as localized to many different subcellular compartments, no model of the PGRMC1-FECH interaction has yet provided a satisfactory explanation that respects the rules of protein trafficking and membrane topology (16,19,21,47). Additional studies on the subcellular localization of PGRMC1 are necessary: two populations of PGRMC1 may exist (ER and mitochondria), or PGRMC1 may reside in the ER at ER-mitochondria contact sites.
PGRMC1 may deliver FECH-derived heme to apo-hemoproteins directly and/or to a heme chaperone, one of which may be PGRMC2. PGRMC2 or a yet unidentified protein may be the intermediary heme chaperone receiving the FECH-derived heme from PGRMC1 and conveying it to apo-hemoproteins, including cytochromes P450, in the cytosol and ER. Galmozzi et al. recently showed that the PGRMC1 paralog PGRMC2 is a heme chaperone, which suggests that PGRMC1 may also be a heme chaperone (19). In support of this hypothesis, Piel et al. demonstrated that recombinant PGRMC1 can donate heme to apo-cytochrome b5 (21). Further, cells depleted of PGRMC1 by shRNA had reduced levels of labile heme in mitochondria, ER, nucleus, and cytosol (19). Depletion of PGRMC2 by shRNA reduced labile heme only in the mitochondria and nucleus (19). Notably, PGRMC1 was epistatic to PGRMC2 in this assay, indicating that PGRMC1 acts upstream of PGRMC2. Based on these observations and PGRMC1 binding to FECH (21), Galmozzi et al. propose that PGRMC1 accepts heme from FECH, passing it off to PGRMC2 for delivery to the nucleus and the nuclear hormone receptor Rev-Erbα (19). Our data support a role for PGRMC1 in accepting heme from FECH, given that loss of heme iron coordination completely disrupts FECH binding. However, our mouse KO studies indicate that if Pgrmc1 is a heme chaperone in vivo, it cannot be the only mechanism for newly synthesized heme transport, as Pgrmc1 KO mice are viable. In addition to the model above, it is possible that PGRMC1 chaperones heme to cytochromes P450. Yet based on our data, the inability of PGRMC1 to transfer heme would not impact cytochrome P450 stability. Our findings that PGRMC1 stabilizes cytochrome P450 enzymes in a heme-independent manner demonstrate that PGRMC1 has at least two functions in the cell.

Altogether, this study highlights that PGRMC1 has a significant impact on cytochrome P450 activity and physiology in vivo and has a second function in heme metabolism. As noted in the introduction, PGRMC1 has been implicated in a wide range of biological activities. Given the large number of cytochromes P450 bound by PGRMC1, investigators should examine whether defects in cytochrome P450 activity underlie these observations. Here, we examined the role of PGRMC1 in cytochrome P450 activity in mouse liver and present the first mouse tissue interactome of Pgrmc1-binding partners. Pgrmc1 represents 2% of mouse liver protein (48), and we identified many non-cytochrome P450 binding partners, such as redox proteins, atlastin, and BAP31 (49). These proteins do not bind heme, suggesting that additional PGRMC1 functions remain to be discovered. Finally, mutation of PGRMC1 was recently identified as causative in X-linked isolated pediatric cataract (44). In addition to confirming that defects in CYP51A1 underlie this disease, careful examination of individuals with PGRMC1 mutations may reveal additional phenotypes associated with other cytochromes P450.

Materials

Unless otherwise stated, common reagents were obtained from Thermo Fisher and chemicals from Sigma. The mammalian expression vector CYP1A2-1XMyc, encoding human CYP1A2, was generated by subcloning bases 63 to 1608 of the CYP1A2 CDS (NM_000761.5) from Invitrogen Ultimate ORF IOH52560 (Johns Hopkins University High Throughput Biology HiT Center) into pcDNA3.1. The sequence contains a point mutation (C1200T) that is nonsynonymous (NP_000752.2, S380P). CYP1A2 is tagged at the C-terminus with a single Myc tag.
The GP78-5X Myc plasmid was a gift of Dr. Russell DeBose-Boyd (50). The plasmids for the production of AAV8 viral particles encoding murine Pgrmc1 were generated by replacing the EGFP cassette in an AAV8 EGFP plasmid (AV-8-0101, University of Pennsylvania Vector Core) with Flag-Pgrmc1 or Y113F Flag-Pgrmc1 subcloned from the mammalian expression vectors described here. The mammalian expression vector Flag-Pgrmc1, encoding murine Pgrmc1, was generated by subcloning the Pgrmc1 CDS (NM_016783.4, nucleotides 95-682) into pcDNA3.1. Pgrmc1 was tagged at the N-terminus with a single Flag tag. The mammalian expression vector Y113F Flag-Pgrmc1 was generated from the Flag-Pgrmc1 mammalian expression vector by site-directed mutagenesis (NM_016783.4, nucleotide 432 A→T). AAV8 EGFP, AAV8 Flag-Pgrmc1, and AAV8 Y113F Flag-Pgrmc1 viral particles were produced by the University of Pennsylvania Vector Core.

Animal husbandry

The Johns Hopkins animal care and use program is accredited by AAALAC International, and the Johns Hopkins Institutional Animal Care and Use Committee (IACUC) reviewed and approved all procedures. Routine health surveillance using dirty bedding sentinel serology indicated that the mice were free of the following organisms: mouse hepatitis virus, minute virus of mice, mouse parvovirus, epizootic diarrhea of infant mice (rotavirus), Theiler's murine encephalomyelitis virus, murine norovirus, Sendai virus, pneumonia virus of mice, reovirus, lymphocytic choriomeningitis virus, ectromelia virus, mouse adenovirus (FL & K87), mouse cytomegalovirus, Mycoplasma pulmonis, fur mites, and pinworms. Mice were housed in social groups (2-5 mice) of the same sex in individually ventilated cages (Allentown Caging Inc) with autoclaved corncob bedding (Teklad, Envigo) and nesting material (Animal Specialties and Provisions). Cages were changed every 14 days. Autoclaved feed (Teklad Global 2018S) was provided ad libitum, and water was provided via in-cage automated watering systems (Systems Engineering, Inc). The room was maintained at 22 ± 1 °C on a 14:10 light:dark cycle at 40 to 70% humidity. Euthanasia was performed under isoflurane anesthesia by cervical dislocation. Whole blood was collected via cardiocentesis and placed in a heparin-coated green-top tube for plasma separation or a yellow-top tube with serum separator gel for serum separation (BD Biosciences). Clinical chemistry was performed with a Vet Ace analyzer (Alfa Wassermann). Complete blood counts (CBC) were performed with a ProCyte Dx (Idexx Laboratories Inc). After harvest, liver tissue was flash frozen in liquid nitrogen for molecular analysis or stored in 10% neutral buffered formalin (Sigma) for histology.

Generating PGRMC1 knockout mice

Pgrmc1 floxed mice (C57BL/6N) were generated by inGenious Targeting Laboratory (iTL). The long homology arm of the targeting construct was 889 to 6921 bp upstream of exon 1 (Ensembl GRCm38 X chromosome), and the short homology arm was 296 to 2102 bp downstream of exon 2. The targeting construct inserted loxP sites 568 bp upstream of exon 1 and 1906 bp downstream of exon 2. The targeting construct also contained a neomycin cassette flanked by FRT sites within the loxP sites. Targeted iTL IC1 (C57BL/6N) male embryonic stem cells were microinjected into Balb/c blastocysts. Resulting chimeras with a high percentage black coat color were mated to C57BL/6N mice to generate F1 heterozygous offspring.
F1 female heterozygotes were crossed to male Sox2-Cre mice (B6.Cg-Tg(Sox2-cre)1Amc/J, Jackson Laboratories) at Johns Hopkins animal facilities. The resulting whole-body Pgrmc1 knockout (KO) mice lacking exons 1 and 2 were bred for this study. Male mice aged 8 to 12 weeks were used for this study, except for the acetaminophen-induced liver injury study, where male mice aged 13 to 15 weeks were used.

Liver and cell lysate preparation

To make liver lysates, liver tissue was disrupted with a Tissue Lyser (Qiagen) and steel beads in the presence of RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% (v/v) Igepal, 0.5% (w/v) sodium deoxycholate, 0.1% (w/v) SDS) plus 1X protease inhibitors (Protease Complete, Roche) at 4 °C. Cell lysates were made by resuspending cell pellets in RIPA buffer (approximately five pellet volumes) with protease inhibitors (0.5 μM phenylmethylsulfonyl fluoride, 5 μg/ml pepstatin A, 10 μg/ml leupeptin) on ice. Lysates were cleared by spinning in a microfuge at 21,130g for 10 to 15 min at 4 °C, and the supernatant was processed as the sample.

Liver membrane preparation

Liver membrane fractions were prepared as described previously (52) with some minor modifications. Frozen tissue (1 g) was thawed on ice in five tissue volumes of Homogenization Buffer (100 mM Tris-HCl, pH 7.4, 100 mM KCl, 100 mM EDTA) and homogenized with an immersion blender (Bamix) until homogenization was complete by visual inspection of the sample. The homogenate was cleared by spinning at 10,000g for 30 min at 4 °C. The supernatant was spun at 100,000g for 90 min at 4 °C. The pellet was washed with Resuspension Buffer (100 mM sodium pyrophosphate, pH 7.4, 1 mM EDTA) and spun at 100,000g for 60 min at 4 °C. The pellet was defined as the enriched membrane fraction. It was resuspended in Storage Buffer (50 mM potassium phosphate, pH 7.4, 0.1 mM EDTA, 20% (v/v) glycerol) with 0.1 mM DTT to 10 to 20 mg protein/ml. DTT was omitted for samples intended for coimmunoprecipitation. All buffers contained protease inhibitors (Protease Complete, Roche) at 1X, or at 2X for samples intended for coimmunoprecipitation.

Immunoblotting

Protein concentration in lysates and membrane fractions was measured using the BCA kit (Pierce), and samples were mixed with 5X SDS loading buffer (150 mM Tris-HCl, pH 6.8, 15% (w/v) SDS, 25% (v/v) glycerol, 0.2% (w/v) bromophenol blue) with or without 12.5% (v/v) β-mercaptoethanol (BME). After heating at 65 °C for 10 to 15 min, proteins were subjected to SDS-PAGE and transferred to nitrocellulose membranes (BioRad). The membranes were blocked with a 5% (w/v) blocking solution. Incubations with primary antibody were performed overnight at 4 °C unless otherwise noted. Bound antibodies were visualized with IRDye800CW or IRDye680RD mouse or rabbit IgG detection reagent (LI-COR, 1:20,000), with one exception: for western blots of samples from Flag coimmunoprecipitation probed with the anti-Cyp1A2 antibody, a Quick Western Kit was used according to the manufacturer's instructions (LI-COR). Quantification of western blot signals was performed using Image Studio software (LI-COR). Images exported from Image Studio were adjusted for image rotation and for brightness and contrast across the whole image using Adobe Photoshop.

Flag coimmunoprecipitation

Flag pull-downs from mouse liver were performed on samples from male mice sacrificed 8 days after tail vein infection with 5 × 10¹¹ particles of AAV8 GFP, Flag-Pgrmc1, or Y113F Flag-Pgrmc1. All buffers contained protease inhibitors (2X Protease Complete, Roche).
One milligram of liver membrane protein in 150 μl microsome Storage Buffer (6.67 mg/ml) was diluted to a final concentration of 2 mg/ml in 1 mM MgCl2. The sample was treated with ten units of benzonase (EMD Millipore) on ice for 30 min. Next, an equal volume of 2X TAP lysis buffer [12 mM Na2HPO4, 8 mM NaH2PO4, pH 7.5, 150 mM NaCl, 4 mM EDTA, 2% (w/v) n-dodecyl-β-D-maltoside (DDM, ACROS Organics)] was added to dilute the sample to 1 mg/ml. Insoluble material was removed by centrifugation at 20,000g for 10 to 15 min at 4 °C. Flag-M2 agarose (Sigma) was washed once in three bead volumes of 1X TAP lysis buffer containing 0.715 mM MgCl2. Then, beads were blocked in 20 bead volumes of wild-type liver lysate at 1 mg/ml, prepared from a 12-week-old, wild-type male mouse, in 1X TAP lysis buffer. The beads were blocked for 1 h at 4 °C and then washed three times in ten bead volumes of 1X TAP lysis buffer containing 0.715 mM MgCl2. Each sample was incubated with 25 μl Flag-M2 beads for 1 h at 4 °C while rotating. The beads were washed three times in 20 bead volumes of 1X TAP lysis buffer. The fourth and final wash was done in 20 bead volumes of 1X TAP lysis buffer containing 0.1% (w/v) DDM, and the samples were transferred to a fresh tube. The bound fraction was eluted by incubating the beads at 65 °C for 10 min in 50 μl of elution buffer (30 mM Tris-HCl, pH 7.5, 0.125% (w/v) SDS). For mass spectrometry analysis of Pgrmc1-binding proteins, the Flag coimmunoprecipitation was performed in technical triplicate for each biological replicate (three each for GFP, Flag-Pgrmc1, and Y113F Flag-Pgrmc1). Equal volumes of eluate from each technical replicate were combined to form a pooled sample for each biological replicate that was subsequently analyzed by mass spectrometry. For mass spectrometry of the membrane proteome of AAV8 GFP-, Flag-Pgrmc1-, or Y113F Flag-Pgrmc1-infected Pgrmc1 KO liver, the samples were prepared as described above with benzonase treatment and diluted to 1 mg/ml before sonication.

For Flag pull-downs from transfected cultured cells, cells were lysed in 1X TAP lysis buffer with protease inhibitors (0.5 μM phenylmethylsulfonyl fluoride, 5 μg/ml pepstatin A, 10 μg/ml leupeptin). Lysates were solubilized by rotating at 4 °C for 1 h and cleared by spinning in a microfuge at 20,000g for 10 to 15 min at 4 °C. Flag-M2 agarose (Sigma) was washed once in six bead volumes of 1X TAP lysis buffer. Then, the beads were blocked in six bead volumes of 3% (w/v) BSA (Sigma) in 1X TAP lysis buffer. The beads were blocked for 45 to 60 min at room temperature and then washed twice in six bead volumes of 1X TAP lysis buffer. Each sample (300 μg protein) was incubated with 10 μl Flag-M2 agarose (Sigma) at a protein concentration of 1 mg/ml for 1 h at 4 °C while rotating. The beads were washed three times in 30 to 50 bead volumes of 1X TAP lysis buffer. After the second wash, the samples were transferred to a fresh tube. The bound fraction was eluted by incubating the beads at 65 °C for 10 min in 30 μl of 1X SDS loading buffer without BME.

Histology

Tissues fixed in 10% neutral buffered formalin (Sigma) were processed, paraffin embedded, sectioned (4 μm), and stained with hematoxylin and eosin according to standard protocols by Oncology Tissue Services (Johns Hopkins University). Slides were viewed on a Nikon Eclipse Ci microscope, and images were captured using a Nikon DS-Fi2 camera. Images were edited for white balance across the whole image using Adobe Photoshop.
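As a cross-check on the dilution scheme in the liver Flag coimmunoprecipitation above, the volumes implied by the stated concentrations work out as follows; this is a simple restatement of the protocol's numbers, not an additional step.

```python
# Dilution arithmetic for the liver Flag coimmunoprecipitation above.
protein_mg = 1.0       # liver membrane protein per pull-down
start_vol_ml = 0.150   # volume of Storage Buffer, i.e. ~6.67 mg/ml

# Step 1: dilute to 2 mg/ml with 1 mM MgCl2.
vol_at_2_mg_per_ml = protein_mg / 2.0               # 0.5 ml total
mgcl2_added_ml = vol_at_2_mg_per_ml - start_vol_ml  # 0.35 ml of 1 mM MgCl2

# Step 2: add an equal volume of 2X TAP lysis buffer,
# giving 1X buffer at 1 mg/ml.
final_vol_ml = 2 * vol_at_2_mg_per_ml               # 1.0 ml
final_conc = protein_mg / final_vol_ml              # 1.0 mg/ml

print(mgcl2_added_ml, final_vol_ml, final_conc)     # 0.35 1.0 1.0
```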
Metabolite analysis with ¹H-nuclear magnetic resonance spectroscopy

Mice were fasted for 4 h before sacrifice around 1:00 PM (ZT-7). Liver tissue (approximately 300 mg) was snap frozen in liquid nitrogen and processed the same day. Briefly, the tissue was homogenized in two tissue volumes of 20 mM phosphate buffer, pH 7.4. To the supernatant, four tissue volumes of methanol were added, and the samples were vortexed before incubation at −20 °C for 30 min. The samples were spun at 13,000g at 4 °C for 15 min. The supernatant was dried in a speed vac overnight, and pellets were saved for protein quantification. Dried samples were reconstituted with 20 mM phosphate buffer containing 0.1 mM trimethylsilylpropionic acid (TMSP) and 0.1 mM NaN3. Proton NMR spectra were acquired, analyzed, and quantified as previously described (56).

Mass spectrometry and proteomic analysis

To quantify protein levels in Pgrmc1 KO livers, quantitative proteomics was employed. Equal masses of liver membrane protein from four WT and four KO mice (Replicate #1) or five WT and five KO mice (Replicate #2) were pooled by genotype to produce two biological replicates per genotype. In each MS run, a WT and a KO pool were labeled with three different isobaric tags, resulting in three technical replicates per genotype per experiment. The samples were digested with the protease trypsin, which specifically cleaves at the carboxyl side of the amino acids lysine and arginine. Next, the samples were labeled with unique Tandem Mass Tag (TMT) 10-plex reagent (Thermo Fisher) according to the manufacturer's protocol. The combined labeled sample was cleaned up from excess TMT tag with a detergent removal spin column (Pierce). The labeled sample was fractionated into five or four basic reversed-phase fractions of 5%, 15%, 20%, 30%, and 75% or 5%, 15%, 25%, and 75% (v/v) acetonitrile in 10 mM triethylammonium bicarbonate buffer (TEAB). Each fraction was analyzed by liquid chromatography interfaced with tandem mass spectrometry (MS/MS) using a NanoAcquity HPLC system interfaced with a QExactive HF (Thermo Fisher). A stepped collision energy (32/30) was used for fragmentation. Precursor and fragment ions were analyzed at resolutions of 60,000 and 120,000, respectively, with automatic gain control.

For statistical analysis, only spectra with a false discovery rate (FDR) less than 1% (based on a concatenated decoy database) in which all reporter ions were detected were included for downstream analyses. Spectra with isolation interference greater than 30% were excluded as well. Within each TMT experiment, relative protein abundances were quantified by a robust median sweep algorithm (57,58). Briefly, reporter ion intensities were log2-transformed, spectrum medians of the log2-transformed reporter ion intensities were subtracted (median-polishing), and the median of all reporter ion intensities belonging to the same protein was used as the measure of that protein's abundance in the sample. In a final step, the channel medians across all proteins were subtracted to correct for potential loading differences. Statistical inference between two groups of interest was assessed by moderated t test statistics (58,59). For multiple comparison correction, q-values (60) were calculated from the observed p-values to control the FDR. That is, if a protein has a q-value of 0.05, we expect 5% of the proteins with smaller p-values to be false positives.
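A minimal sketch of the median-sweep quantification described above is shown below, assuming a spectra-by-channel table of reporter ion intensities; the function and column layout are illustrative rather than the study's actual pipeline code.

```python
import numpy as np
import pandas as pd

def median_sweep(reporter: pd.DataFrame, protein_ids: pd.Series) -> pd.DataFrame:
    """Median-sweep summarization of TMT reporter ion intensities.

    reporter:    spectra x channels matrix of raw reporter ion intensities.
    protein_ids: protein accession for each spectrum (same index as reporter).
    """
    log2 = np.log2(reporter)
    # Median-polish each spectrum: subtract its median across channels.
    polished = log2.sub(log2.median(axis=1), axis=0)
    # Protein abundance per channel = median over that protein's spectra.
    protein = polished.groupby(protein_ids.values).median()
    # Subtract each channel's median across proteins to correct for loading.
    return protein.sub(protein.median(axis=0), axis=1)
```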
Proteins with calculated q-values smaller than 0.05 between different groups can be declared statistically significant. Only proteins quantified by reporter ion spectra in both biological replicates were included for statistical downstream analyses. Normalized protein abundance values for technical replicates were averaged prior to statistical testing.

For proteomics on AAV-infected liver, the samples were digested with the protease trypsin, which has specificity for the amino acids lysine and arginine at the C-terminus of a cleaved peptide. Next, the samples were labeled with unique TMT 10-plex reagent (Thermo Fisher) according to the manufacturer's protocol. The combined labeled samples were cleaned up from excess TMT tag with a detergent removal spin column (Pierce). The labeled samples were resuspended in an aqueous 5% (v/v) methanol and 0.5% (v/v) formic acid solution and then desalted via Oasis HLB cartridges (Waters). Briefly, cartridges were conditioned with 100% (v/v) methanol and equilibrated with water. Samples were loaded and then washed twice with 5% (v/v) methanol in water. Peptides were then eluted with 100% (v/v) methanol and dried via vacufuge. Peptides were then fractionated via an Agilent 3100 OFFGEL isoelectric focusing system in a 12-well setup according to the manufacturer's protocol. Upon completion, fractions were separately desalted with Pierce C18 spin columns (Thermo Fisher) following the provided protocol. Column resin was activated with 50% (v/v) acetonitrile and equilibrated with 5% (v/v) acetonitrile + 0.5% (v/v) trifluoroacetic acid. Fractions were loaded onto columns and washed with 5% (v/v) acetonitrile + 0.5% (v/v) trifluoroacetic acid. Samples were eluted via two cycles of addition of 70% (v/v) acetonitrile to the columns. All sample fractions were dried separately via vacufuge.

Each fraction was then analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using an nLC-1200 nano-flow liquid chromatography system (Thermo Fisher) interfaced with a Q-Exactive mass spectrometer (Thermo Fisher). Precursor and fragment ions were analyzed via full scan (resolution of 70,000, automatic gain control target of 3e6, and 40 ms maximum injection time) and a data-dependent MS2 top-10 scan (resolution of 17,500, automatic gain control target of 5e4, 150 ms maximum injection time, 0.8 m/z isolation window, 10 s dynamic exclusion period, and normalized collision energy of 27), respectively. Data from both LC-MS/MS runs for the quantification of Flag-Pgrmc1 and Y113F Flag-Pgrmc1 binding partners and the quantification of the membrane proteome of AAV8-infected Pgrmc1 KO liver were searched together against the M. musculus 10090 UniProt reference proteome (download date: August 26, 2015) using the Sequest HT search engine running through Proteome Discoverer v.2.1 (Thermo Fisher), with one or two missed cleavages allowed, a precursor mass tolerance of 8 ppm, and a fragment mass tolerance of 0.02 Da. Trypsin cleavage was permitted only on the C-terminal side of lysine and arginine. Dynamic modifications of methionine oxidation and deamidation of asparagine and glutamine, along with static modification of carbamidomethylation of cysteine, were permitted. Peptide assignments were validated using the Target Decoy Peptide Spectral Match Validator node with a relaxed FDR of <0.05 and a strict FDR of <0.01.
Peptides from the LC-MS/MS run for the quantification of Flag-Pgrmc1 and Y113F Flag-Pgrmc1 binding partners had a Spectrum File beginning with "03," while peptides from the LC-MS/MS run for the quantification of the membrane proteome of AAV8-infected Pgrmc1 KO liver had a Spectrum File beginning with "04." Included peptides (1) had a Master Accession number, (2) were marked "Unique," (3) had an isolation interference ≤30%, and (4) were flagged for use in quantification (Peptide Quan Info). For candidate Flag-Pgrmc1 and Y113F Flag-Pgrmc1 binding partner identification, reporter ion abundances for each spectrum of a protein were summed, and then the protein abundance medians for each condition were taken. For Flag-Pgrmc1 and Y113F Flag-Pgrmc1, the ratio of protein abundance to that in the GFP control was used to assess enrichment. Candidate Flag-Pgrmc1 or Y113F Flag-Pgrmc1 binding partners were those proteins with a fold change (Flag-Pgrmc1/GFP or Y113F Flag-Pgrmc1/GFP) ≥20%. For stringent quantitative analysis of Flag-Pgrmc1 and Y113F Flag-Pgrmc1 binding partners, only peptides present in at least two biological replicates were used to calculate the protein abundance. Peptides present only in Y113F Flag-Pgrmc1 samples were also eliminated. For quantitative analysis of the membrane proteome of AAV8-infected Pgrmc1 KO liver, all reporter ion intensities belonging to the same protein were summed as the measure of that protein's abundance in the sample. Relative protein abundances were quantified by the robust median sweep algorithm described above, except that the measure of a protein's abundance in a sample was the sum of its peptide intensities. For statistical analysis, conditions were analyzed in R by linear modeling with limma, and p-values were adjusted by the Benjamini-Hochberg FDR method (61).

GO Term analysis

GO Term analysis was conducted using PANTHER version 14 (Protein Analysis through Evolutionary Relationships, http://pantherdb.org) (62). A Fisher's exact test followed by Bonferroni correction was used to identify enriched GO terms with statistical significance (p-value ≤ 0.05). For RNA, up- and downregulated genes were considered to be those genes with a fold change ≥40% and a probability of differential expression ≥0.95. Genes were searched by RefSeq gene name. For protein, proteins were searched by UniProt ID. In the experiment comparing protein expression in WT and Pgrmc1 KO liver, differentially expressed proteins were considered to be those proteins with a fold change ≥20% and an absolute value of the signal-to-noise ratio ≥2. The differentially expressed proteins were compared with the reference list of all proteins measured in the experiment. In the experiment comparing protein expression in AAV8 GFP-, Flag-Pgrmc1-, or Y113F Flag-Pgrmc1-infected Pgrmc1 KO livers, differentially expressed proteins were those with unadjusted p-values ≤ 0.05. The differentially expressed proteins were compared with the reference list of all proteins measured in the experiment.

Total RNA preparation and RNA-seq

Total RNA was prepared from 30 mg of snap frozen liver with RNA STAT-60 (Amsbio) reagent according to the manufacturer's instructions. DNase digestion was performed on-column with an RNeasy Kit (Qiagen) according to the manufacturer's instructions. RNA concentration was estimated with a NanoDrop (Thermo Fisher). For RNA-seq, equal masses of RNA (2.5 μg) from mice of the same genotype were pooled to create one sample per genotype. Library preparation was completed with an Illumina TruSeq Stranded Total RNA kit.
Samples were analyzed on a HiSeq 2500 (Illumina) machine in Rapid Run mode with paired-end 100 bp × 100 bp sequencing. CASAVA 1.8.2 (Illumina) was used to convert BCL files to FASTQ files. Default parameters were used. Rsem-1.2.09 was used for running the alignments as well as generating gene and transcript expression levels. The "rsem-calculate-expression" module was used with the following options: "--bowtie-chunkmbs 200," "--calc-ci," "--output-genome-bam," "--paired-end," and "--forward-prob." The data were aligned to the M. musculus mm10 reference genome. The "rsem-run-ebseq" and "rsem-control-fdr" scripts provided by Rsem were used to run EBSeq to perform differential expression analysis. All default parameters were used, except "FDR_rate" was set to 0.05. 7-Ethoxycoumarin O-deethylation assay The 7-ethoxycoumarin O-deethylation (ECOD) assay was conducted as described previously (63). For each genotype, equal amounts of membrane protein from five mice were pooled. Each reaction contained 0.2 mg/ml protein. The substrate 7-ethoxycoumarin was from Sigma. Samples were preincubated at 37 °C for 5 to 10 min. Each reaction was initiated by the addition of NADPH tetrasodium salt (Sigma) to 1 mM and incubated for 30 min at 37 °C while shaking. Control reactions without addition of NADPH were run in parallel. Reactions were quenched upon addition of ice-cold HCl to 0.2 M. After extraction of 7-hydroxycoumarin, 100 μl per sample was transferred to a 96-well plate in triplicate for fluorescence measurement on a FLUOStar Omega plate reader (BMG Labtech) with a 355/460 filter. The fluorescence of each sample was compared with a 7-hydroxycoumarin (Sigma) standard curve including 0, 0.1, 0.5, 1, and 2 pmol standards. The amount of NADPH-dependent product formed was calculated by subtracting the value of the reaction without NADPH from the value of the reaction with NADPH. The NADPH-dependent product formation was used to calculate the reaction velocity normalized to protein amount. Three technical replicates were conducted. Nonlinear regression was carried out on these averaged data, and the Km and Vmax were determined using GraphPad Prism 7.05. Caffeine metabolism assay Pooled membrane protein, prepared as described for the ECOD assay, was preincubated at 2 mg/ml in 100 mM potassium phosphate, pH 7.4, with an NADPH regenerating system (Corning) at 37 °C for 5 min. Reactions were initiated via the addition of caffeine (Sigma) to 50 μM and incubated for 60 min at 37 °C while shaking. Reactions were quenched via protein precipitation by direct addition of 50 μl of ice-cold acetonitrile (to 50% v/v) and incubated on ice for 10 min. Precipitate was pelleted by centrifugation for 10 min at 10,000g at 4 °C. The supernatant was transferred and dried in a vacuum centrifuge. Samples were reconstituted in 25 μl water prior to mass spectrometry. Reconstituted samples were resolved using a Dionex Ultimate 3000 uHPLC system, and analytes were detected using a coupled Q-Exactive benchtop Orbitrap mass spectrometer (Thermo Fisher). Separation of analytes on an Agilent Polaris C18 column (50 × 2.1 mm, 5 μm) was performed using a mobile-phase system of water with 0.1% (v/v) formic acid (mobile phase A) and acetonitrile with 0.1% (v/v) formic acid (mobile phase B) at a flow rate of 400 μl/min. The gradient used was as follows: 0% B from 0 to 4 min, 0 to 5% B from 4 to 15 min, 5 to 100% B from 15 to 16 min, 100% B from 16 to 21 min, 100 to 0% B from 21 to 22 min, and 0% B from 22 to 25 min.
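The Km and Vmax values above were obtained by nonlinear regression in GraphPad Prism; an equivalent Michaelis-Menten fit can be sketched with SciPy, using hypothetical substrate concentrations and velocities rather than the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Reaction velocity as a function of substrate concentration."""
    return vmax * s / (km + s)

# Hypothetical averaged data: 7-ethoxycoumarin (uM) vs velocity (pmol/min/mg)
substrate = np.array([5, 10, 25, 50, 100, 200], dtype=float)
velocity = np.array([2.1, 3.8, 6.9, 9.0, 10.8, 11.9])

(vmax, km), cov = curve_fit(michaelis_menten, substrate, velocity, p0=[12, 30])
vmax_se, km_se = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.2f} +/- {vmax_se:.2f}, Km = {km:.1f} +/- {km_se:.1f}")
```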
Paraxanthine detection was performed in positive ion mode using a transition of m/z 181.0720 > 124.0509. Comparisons of relative amounts of paraxanthine formed were performed using the integrated peak area from the paraxanthine chromatograms. p-Nitrophenol hydroxylation assay The Cyp2e1 activity assay was performed according to Chang et al. (33). For each genotype, equal amounts of membrane protein from four mice were pooled. Briefly, 125 μg of WT or KO liver microsomes was added to ice-cold reaction mix containing 100 μM p-nitrophenol (Sigma, 241326), 1.3 mM NADP+ (Sigma/Roche, 10128031001), 3.3 mM D-glucose-6-phosphate (Sigma, 10127647001), 3.3 mM magnesium chloride (Sigma, M9272), and 0.4 U/ml glucose-6-phosphate dehydrogenase (Worthington, LS003981) in a final volume of 500 μl of 50 mM potassium phosphate buffer, pH 7.4. The reaction was carried out at 37 °C for 1 h, after which the tubes were immediately transferred to ice and 100 μl of TCA was added to stop the reaction. After 5 min incubation on ice, the tubes were centrifuged at 10,000g for 5 min at room temperature. In total, 500 μl of the supernatant was added to 300 μl of 2 N NaOH in a fresh tube, mixed, and absorption spectra between 480 nm and 650 nm were recorded using a Genesys 30 (Thermo Fisher) visible spectrophotometer. Peak absorption at 515 nm was used to determine the Cyp2e1 activity using a p-nitrocatechol (Sigma, N15553) standard curve as described in Chang et al. (33). Three technical replicates were run; in each of these three runs, three WT and three KO sample replicates were assayed with two no-microsome controls. The mean absorbance of the two control samples was subtracted from the absorbances of the three WT and three KO sample replicates within each run. Acetaminophen-induced liver injury Wild-type and Pgrmc1 KO mice were fasted overnight for 16 h. Mice were then given a single intraperitoneal injection of 600 mg acetaminophen (Sigma) per kg body weight in 50% saline/DMSO (v/v). All mice were given DietGel Recovery (Clear H2O) in the cage. Mice were then euthanized 24 h postinjection. Whole blood was collected via cardiocentesis, and serum was separated as described above. The liver was harvested, and the median and left lateral lobes were individually separated. A horizontal, transverse section (including gallbladder) was taken from the median lobe. A diagonal, transverse section to include the hilus was taken from the left lateral lobe. All sections were fixed in 10% neutral buffered formalin (Sigma). Liver sections were processed for histologic analysis as described above. Recombinant protein purification and heme loading Recombinant proteins were expressed in BL21-CodonPlus (DE3)-RIPL cells (Agilent) using 200 μM IPTG and overnight shaking at 20 °C. Cells were lysed in B-PER (Thermo Fisher) reagent supplemented with 25 U/ml Benzonase (EMD Biosciences), 2 mM MgCl2, 2 mM ATP, 20 mM imidazole, and 1x protease inhibitors (Protease Complete, Roche) at room temperature for 15 min. The cell lysates were clarified by centrifugation at 80,000g at 4 °C for 45 min and purified over a 1 ml HisTrap HP column (GE Healthcare) using a 25 mM-500 mM imidazole gradient in 20 mM HEPES-KOH, pH 7.4; 150 mM NaCl buffer. The purified proteins were desalted using PD10 columns (GE Healthcare) equilibrated with 50 mM HEPES-KOH, pH 7.4; 150 mM NaCl; 2 mM MgCl2; 5% (v/v) glycerol and concentrated using 10,000 MW cutoff filters (Amicon).
The N-terminal histidine tag was removed by incubating the concentrated proteins with 50 U/ml thrombin (GE, 27-0846-01) at room temperature for 16 h, followed by desalting on a PD10 column. The recombinant proteins had a residual tetrapeptide (Gly-Ser-His-Ser) tag at the N-terminus after thrombin cleavage. The majority of the bacterially expressed, tag-removed PGRMC1 lacked heme (apo PGRMC1). To test the heme-binding ability of the recombinant PGRMC1 (rPGRMC1), Y113F rPGRMC1, and 3X MUT rPGRMC1, proteins (1 μM) were incubated with 100 μM hemin (bovine, Sigma) in PBS for 15 min at 30 °C. Hemin was diluted from a 10 mM stock in DMSO. Thereafter, the proteins were desalted using PBS-equilibrated PD10 columns and concentrated using 10,000 MW filters (Amicon). Hemin without any added protein was subjected to the same procedure as a negative control. Hemin-binding assay Hemin was dissolved in DMSO, and the concentration was determined using the extinction coefficient ε403 = 170 mM−1 cm−1 in DMSO (64). During the hemin-binding assay, precaution was taken such that the DMSO concentration did not exceed 3% (v/v) in the sample. The recombinant PGRMC1 protein (rPGRMC1, Y113F rPGRMC1, or 3X MUT rPGRMC1) (10 μM in 400 μl) was incubated with 0 to 30 μM hemin (2 μM intervals) in PBS at room temperature for 16 h, along with the same dilutions of hemin without any added protein. The protein-bound hemin absorbance was measured at 394 nm using the same hemin concentration without any protein as the baseline. The results were fit with a four-parameter log-logistic model (Hill equation) or a linear model, using the least-squares method in R (65). From the fitted curves, the Kd and Bmax of hemin binding by PGRMC1 were determined. Data analysis Sample sizes and numbers of biological and technical replicates are noted for each experiment. Statistical tests (t-tests and ANOVAs followed by Tukey HSD post-hoc tests) were performed in GraphPad Prism versions 7.05 to 8.2.0. Analysis of large datasets was performed using R. Data availability RNA-Seq data are deposited at NCBI GEO (https://www.ncbi.nlm.nih.gov/geo/), accession GSE174375. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (66) partner repository (http://www.ebi.ac.uk/pride) with the dataset identifiers PXD028238, PXD028284, and PXD028288. All remaining data are contained within the article and supporting information. Supporting information-This article contains supporting information.
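For illustration, the four-parameter log-logistic (Hill) fit used in the hemin-binding assay, performed in R in the study, could be reproduced along the following lines; the absorbance data and starting values below are hypothetical, not the reported measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, bmax, kd, n):
    """Four-parameter log-logistic (Hill) binding curve."""
    return bottom + (bmax - bottom) * x**n / (kd**n + x**n)

# Hypothetical data: hemin (uM, 2 uM steps) vs baseline-corrected A394
hemin = np.arange(2, 32, 2, dtype=float)
a394 = np.array([0.02, 0.05, 0.09, 0.14, 0.18, 0.21, 0.23, 0.245,
                 0.255, 0.26, 0.263, 0.265, 0.266, 0.267, 0.268])

(bottom, bmax, kd, n), _ = curve_fit(hill, hemin, a394, p0=[0.0, 0.27, 8.0, 1.5])
print(f"Kd = {kd:.1f} uM, Bmax = {bmax:.3f} AU")
```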
2021-10-23T15:13:31.429Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "b53278494dca603b1bc8f97f1b80c5223e078ee3", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/article/S0021925821011224/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "89f00bce086896403ab55db19c37298bc15acfad", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
10920689
pes2o/s2orc
v3-fos-license
Hemodynamic Performance of a Novel Right Ventricular Assist Device (PERKAT) Acute right ventricular failure (RVF) is an increasing clinical problem and a life-threatening condition. Right ventricular assist devices represent a reasonable treatment option for patients with refractory RVF. We here present a novel percutaneously implantable device for right ventricular support. The PERKAT device is based on a nitinol stent cage, which is covered with valve-carrying foils. A flexible outlet trunk with a pigtail tip is connected to the distal part. The device is driven by an intra-aortic balloon pump (IABP) drive unit, which inflates/deflates a standard IABP balloon placed within the stent cage. In-vitro evaluation was done in a liquid bath containing water or blood analog. The PERKAT device was tested in different afterload settings using two different IABP balloons and varying inflation/deflation rates. We detected flow rates ranging from 1.97 to 3.93 L/min depending on the afterload setting, inflation/deflation rate, balloon size, and the medium used. Flow rates between water and blood analog were nearly comparable, and in the higher inflation/deflation rate settings slightly higher with water. Based on this promising in vitro data, the innovative percutaneously implantable PERKAT device has the potential to become a therapeutic option for patients with RVF refractory to medical treatment. The clinical relevance of right ventricular (RV) dysfunction in acute heart failure has gained increasing interest over the past decades. Right ventricular failure (RVF) occurs as a consequence of right ventricular myocyte damage, e.g., after myocardial infarction, because of volume overload, e.g., post-left ventricular assist device implantation, or because of pressure overload, e.g., after pulmonary embolism. 1 Right ventricular myocardial infarction-associated cardiogenic shock results in a 1-month mortality rate of up to 50%. 2 Therapeutic management of RV failure includes reversal of the primary cause, inotropic support to enhance cardiac contractility, volume resuscitation to maintain RV preload, and pulmonary vasodilatation to reduce RV afterload. 3 In refractory RV failure, RV assist devices (RVADs) represent a reasonable treatment option. Besides surgical RVADs, percutaneously implantable RV support devices represent an emerging field. Clinically established strategies such as venoarterial extracorporeal membrane oxygenation, the TandemHeart (CardiacAssist Inc., Pittsburgh, PA), 4 or the Impella RP (Abiomed Inc., Danvers, MA) 5 show specific limitations such as size, complexity, and the absence of a simple percutaneous option. RVADs are typically needed only temporarily and require rapid deployment. We recently published the technical concept and first in vitro data of the PERKAT (PERcutaneous KATheterpump) system 6 offering minimally invasive, effective RV support. This system is designed for implantation in the inferior vena cava (IVC). As a self-expandable nitinol stent, it only requires 18 French sheath luminal access, and it is driven by a standard intra-aortic balloon pump (IABP) console. The aim of the current study was to evaluate the performance of PERKAT under different hemodynamic conditions regarding different viscosities of circulating fluid, various afterload settings, and inflation rates using two different standard IABP balloon sizes in a standardized in vitro model. Methods We recently published the concept, technical dossier, and implantation technique of the PERKAT system.
6 In brief, PERKAT consists of a 220 mm nitinol stent cage encased by flexible membranes containing foil valves. A flexible outlet trunk with a pigtail-shaped tip containing outflow valves is attached to the distal part of the stent cage (Figure 1). A standard IABP balloon is placed inside the nitinol stent cage and connected to a standard IABP console, which drives the PERKAT via the helium driveline. Deflation of the IABP generates blood flow into the nitinol stent cage through the foil valves. During balloon inflation, the foil valves are closed by the pressure. This results in a pulsatile blood flow through the flexible outlet tube towards the pigtail ending. The innovative foil valve concept enforces a defined direction of blood flow (Figure 2). The PERKAT 18 French is designed for percutaneous implantation by Seldinger's technique. The nitinol stent body is deployed by pulling back the sheath after positioning in the IVC, whereas the outlet trunk bypasses the right atrium and ventricle, and the pigtail tip lies in the pulmonary trunk (Figure 3). The composition and working principle of PERKAT are different from those of an IABP. The main difference is the directed flow of blood, via the outlet tube with its pigtail ending, from the IVC straight into the pulmonary arteries without any backflow, whereas the IABP creates an undirected flow with blood also going into the superior vena cava and into the iliac/femoral veins. In Vitro Testing The lower reservoir with predefined fluid levels simulating the preload contained the PERKAT device (Figure 4). The flexible outflow tube of the PERKAT system was attached to the upper reservoir, which was positioned at three specified heights above the lower fluid reservoir to simulate several afterload levels (30 cm = 22 mm Hg; 60 cm = 44 mm Hg; 90 cm = 66 mm Hg). 7 The PERKAT device was then combined with a standard IABP console (e.g., Arrow KAAT II Plus, Teleflex Inc., Morrisville, NC) through the helium drive line. The fluid in the lower reservoir was heated up to 37 °C. Then the IABP drive unit was launched, and after the initiation process, the experimental setup was ready. The aim of the setting was to quantify the flow rate of the PERKAT system via a magneto-inductive flowmeter (ifm electronics; Type: SM8000), which measured the flow from the upper reservoir back to the lower basin. To obtain the true flow rate during a specific bpm (beats per minute) setting, we measured the flow of PERKAT after the initiation process of the IABP drive unit to avoid start and stop phases. The flow rate was obtained by measuring the mean flow for 60 sec, three times consecutively. The indicated values represent the mean of those three measurements. Production of Blood Analog The viscosity of blood depends on different parameters such as shear rate, hematocrit, and the vessel diameter. For our experimental setup, we aimed to achieve a blood viscosity of 4.07 mPa·s (at 25 °C) and 3.05 mPa·s (at 37 °C), which represents a standard value of blood with a hematocrit of 45%. 8 The blood analog was produced according to Anastasiou et al. 8 and Brookshier and Tarbell. 9 It comprises 74.7% (v/v) distilled water, 25.3% (v/v) glycerol, and 0.032% (w/v) xanthan gum. Xanthan gum is a polysaccharide, which acts as a rheology modifier. The viscosity was determined with the Ubbelohde viscometer according to the operating instructions (Schott).
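The afterload levels quoted above correspond to the hydrostatic pressure of the fluid columns; a short sketch verifies the conversion, assuming a fluid density of 1,000 kg/m3 (water; the blood analog is similar):

```python
RHO = 1000.0        # fluid density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
PA_PER_MMHG = 133.322

for height_cm in (30, 60, 90):
    pressure_pa = RHO * G * height_cm / 100.0   # hydrostatic pressure, Pa
    print(f"{height_cm} cm -> {pressure_pa / PA_PER_MMHG:.0f} mm Hg")
# 30 cm -> 22 mm Hg, 60 cm -> 44 mm Hg, 90 cm -> 66 mm Hg
```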
For the production of 8 l of blood-mimicking fluid according to the mentioned composition, we used 5.98 l distilled water, 2.12 l glycerol, and 2.72 g xanthan gum. We measured a viscosity of η = 3.05 mPa·s at 37.1 °C; according to the literature, this represents the viscosity of whole blood with a hematocrit of 45%. The viscosity of water at 37 °C was η = 0.72 mPa·s. In Vitro Testing with Water The detected flow rates are listed in Table 1. In all settings, we measured a decrease in flow rates with an increase of the afterload. In the 22 mm Hg afterload setting, the lowest flow rate of 2.13 l/min was seen with the 34 ml balloon at 60 bpm. The highest flow rate of 3.93 l/min was detected at 120 bpm with the 40 ml balloon. Although flow rates were continuously increasing with the 40 ml balloon, we measured the same flow rate with the 34 ml balloon at 110 and 120 bpm. In the 44 mm Hg setting, the lowest flow rate was measured with the 34 ml balloon at 60 bpm, and the highest rate was found at 120 bpm with the 40 ml balloon. Flow rates increased continuously with both balloons. In the 66 mm Hg setting, the lowest flow of 1.97 l/min was seen at 60 bpm with the 34 ml balloon; the highest flow was measured with the larger balloon at 120 bpm. Flow rates were constantly increasing with the increase of the inflation/deflation rate in the 34 ml balloon setting. The flow rates with the 40 ml balloon increased from 60 to 110 bpm, while flow rates at 110 and 120 bpm were comparable. In an additional experiment, we could detect flow rates of up to 3.5 l/min (with each balloon) at an afterload of 88 mm Hg (data not shown). In all afterload and bpm settings, flow rates with the 40 ml balloon were higher in comparison to the 34 ml balloon. With increasing afterload, the flow rates decreased. This was more pronounced from 22 to 44 mm Hg in comparison with 44 to 66 mm Hg, where flow rates were nearly comparable. In Vitro Testing with Blood Analog All measured flow rates are presented in Table 2. With an afterload of 22 mm Hg, the lowest flow rate of 2.13 l/min was seen at 60 bpm with the 34 ml balloon; the highest flow rate was 3.67 l/min at 120 bpm with the 40 ml balloon. With both balloons, we measured continuously rising flow rates with the increase of the inflation/deflation rate. The performance of the 40 ml balloon was higher than that of the 34 ml balloon in all settings. In the 44 mm Hg afterload setting, the lowest flow rate of 2.03 l/min again was seen at 60 bpm with the 34 ml balloon; the highest flow rate was 3.50 l/min at 120 bpm with the 40 ml balloon. Flow rates increased from 60 to 120 bpm in both balloon measurement rows, with higher flow rates for the 40 ml balloon. With 66 mm Hg afterload, the lowest flow rate of 1.97 l/min was detected at 60 bpm with the 34 ml balloon; the highest flow rate was 3.47 l/min at 120 bpm with the 40 ml balloon setting. Again, the performance of the 40 ml balloon was higher than that of the 34 ml balloon in all settings, with continuously increasing flow rates of the 34 ml balloon with an increase of the inflation/deflation rate, whereas the flow rates of the 40 ml balloon were comparable between 110 and 120 bpm. In summary, hemodynamic performance decreased with increasing afterload settings. The decline in flow rates from 44 to 66 mm Hg was only marginal. Comparison of Water and Blood Analog Flow rates of water and blood analog are shown in Figure 5.
The flow rates of the 34 ml balloon were comparable in both settings between 60 and 90 bpm, respectively 100 bpm at an afterload setting of 44 mm Hg; at higher bpm rates, PERKAT performed better in the medium water in comparison to blood analog. In the 40 ml balloon setting, flow rates were comparable between 60 and 100 bpm. The performance of PERKAT was lower with blood analog at 110 and 120 bpm in comparison to water. Discussion We demonstrated in the current study that PERKAT is able to generate a flow rate of up to 3.9 l/min in a standardized in vitro model depending on the size of the IABP balloon, the afterload setting, and the inflation/deflation frequency. Flow rates obtained with a blood analog were comparable or, with increasing inflation/deflation rates, slightly lower than the flow rates obtained with water. We detected the highest flow rates in the 22 mm Hg afterload setting. With increasing afterload, the flow rates were lowered with both fluid media. The decrease was more pronounced with an afterload augmentation between 22 and 44 mm Hg than from 44 to 66 mm Hg in both media. To some extent, the flow rates measured in the 44 mm Hg setting were comparable with the rates detected in the 66 mm Hg setting. In both media, we found that the hemodynamic performance of the 40 ml balloon is better than that of the 34 ml balloon. With the 34 and 40 ml balloons, we could detect a positive correlation between increasing inflation/deflation rates and the resulting flow rate. In the lower range of inflation/deflation frequency (60-100 bpm), the obtained flow rates were comparable between water and the blood analog, irrespective of the balloon used and the afterload setting. As expected, in the upper range (100-120 bpm), flow rates measured in blood analog were slightly smaller than in the medium water. We attribute this to the higher viscosity of the blood analog in comparison to water. Although we could detect only a slight difference in the hemodynamic performance of PERKAT between water and blood analog, we further recommend the use of a blood analog for device testing because this is stated in the AAMI/ISO standards. In comparison to our previously published in vitro data, 6 we observed significantly higher flow rates. We attribute this to the use of a PERKAT prototype in the former study, which has been replaced by an improved model system. Nevertheless, we have to mention the limited comparability between both studies because of the usage of different balloons (30/40 ml balloons in the previous study vs. 34/40 ml balloons in the current report). In addition, a magneto-inductive flowmeter was now used to measure the flow rates, which led to a higher accuracy in comparison to the former method of weighing the medium. Limitations Because blood viscosity depends on several factors such as hematocrit, a viscosity above 3.05 mPa·s could influence the flow rate. The data received in this optimized model, using large water reservoirs and continuous backflow resulting in low-resistance aspiration and ejection by PERKAT, need to be further evaluated in an in vivo (animal) model. In summary, we here provide data on the hemodynamic performance of PERKAT under standardized in vitro conditions. The device offers hemodynamic support of up to 3.9 l/min. The flow rates obtained with the blood analog were comparable or, with increasing inflation/deflation rates, slightly lower than the flow rates with water.
These data provide first evidence that hemodynamic support is possible through the use of the PERKAT system, under consideration of the afterload setting, the balloon used, and the inflation/deflation rate. Further data will have to be collected and confirmed in vivo.
2018-04-03T00:28:18.901Z
2017-02-23T00:00:00.000
{ "year": 2017, "sha1": "9243034cda9d83d339d2f3100dac5c83743b7566", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc5327860?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "9243034cda9d83d339d2f3100dac5c83743b7566", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252845400
pes2o/s2orc
v3-fos-license
Cell wall ester modifications and volatile emission signatures of plant response to abiotic stress Abstract Growth suppression and defence signalling are simultaneous strategies that plants invoke to respond to abiotic stress. Here, we show that the drought stress response of poplar trees (Populus trichocarpa) is initiated by a suppression in cell wall derived methanol (MeOH) emissions and activation of acetic acid (AA) fermentation defences. Temperature sensitive emissions dominated by MeOH (AA/MeOH <30%) were observed from physiologically active leaves, branches, detached stems, leaf cell wall isolations and whole ecosystems. In contrast, drought treatment resulted in a suppression of MeOH emissions and strong enhancement in AA emissions together with the volatiles acetaldehyde, ethanol, and acetone. These drought-induced changes coincided with a reduction in stomatal conductance, photosynthesis, transpiration, and leaf water potential. The strong enhancement in AA/MeOH emission ratios during drought (400%-3500%) was associated with an increase in acetate content of whole leaf cell walls, which became significantly 13C2-labelled following the delivery of 13C2-acetate via the transpiration stream. The results are consistent with both enzymatic and nonenzymatic MeOH and AA production at high temperature in hydrated tissues associated with accelerated primary cell wall growth processes, which are downregulated during drought. While the metabolic source(s) require further investigation, the observations are consistent with drought-induced activation of aerobic fermentation driving high rates of foliar AA emissions and enhancements in leaf cell wall O-acetylation. We suggest that atmospheric AA/MeOH emission ratios could be useful as a highly sensitive signal in studies investigating environmental and biological factors influencing growth-defence trade-offs in plants and ecosystems. Supplementary Results • Supplementary Discussion • Metabolic origin of Acetic Acid (AA) Supplementary Figures • Figure S1: Example PTR-MS calibration to a primary MeOH and AA gas phase standard • Figure S9: Average diurnal MeOH (blue) and AA (green) ambient concentrations with air temperature, together with AA/MeOH concentration ratios, above (a) a mixed hardwood forest in Alabama, USA (Su et al., 2016) and (b) a citrus grove in California, USA (Park et al., 2013) during the growing season • Figure S10: Leaf 13C2-acetic acid emissions during branch 13C2-acetate labeling via the transpiration stream Temperature sensitivities of MeOH and AA emissions and AA/MeOH emission ratios from whole ecosystems We extended our analysis of the temperature sensitivities of AA and MeOH leaf emissions to the ecosystem scale at the Lochristi poplar plantation in Belgium and a mixed hardwood forest in Alabama. Likely due to the very low ambient temperatures at night in Belgium during the growing season (averaging around 15 °C), no significant night-time AA concentration or vertical flux was observed (supplementary Figure S8). In contrast, night-time MeOH emissions of 10 µmol ha−1 day−1 and ambient concentrations of 0.5 ppb were detected. Similar to emissions from physiologically active poplar branches (Figure 5), average ecosystem emission rates of AA and MeOH during the 2015 growing season showed a clear diurnal pattern, reaching maximum values in the afternoon (16:30) together with air temperature. Ecosystem AA/MeOH emission ratios increased as a function of air temperature from a low of 28% in the early morning to 158% in the afternoon (supplementary Figure S8b).
Likewise, AA/MeOH concentration ratios also increased as a function of air temperature from a low value of 41% in the early morning to 188% in the afternoon (supplementary Figure S8a). We also analyzed ambient concentration time series of AA and MeOH as a function of ambient temperature above a mixed forest canopy in Alabama, USA and a citrus grove in California. Similar to the poplar plantation in Belgium, ambient concentrations of AA and MeOH above the mixed forest canopy in Alabama and the citrus grove in California followed strong diurnal patterns tightly coupled with air temperature, with air temperature having a positive impact on AA/MeOH ambient concentration ratios (supplementary Figure S9). While ambient concentrations increased during the day relative to the night at the Alabama site (supplementary Figure S9a), in a similar way to the Belgium site, ambient concentrations decreased during the day at the California site (supplementary Figure S9b). This may be related to high ambient background concentrations of AA and MeOH in the California central valley and a dilution effect due to turbulent mixing during the day. Nonetheless, the AA/MeOH concentration ratio at the California grove still increased linearly with air temperature from a low of 19% at night to 29% in the afternoon. This is comparable to the pattern at the Alabama mixed hardwood forest site, where AA/MeOH concentration ratios also increased linearly with air temperature from <5% during early mornings below 20 °C to high values of 25% during afternoon air temperatures up to 32 °C. Given that AA/MeOH concentration ratios increased with temperature but remained below 30%, these ecosystem values are comparable to the AA/MeOH emission temperature sensitivities observed in the hydrated AIR, leaf, and branch poplar studies. Proton Transfer Reaction-Mass Spectrometry The high sensitivity quadrupole proton transfer reaction-mass spectrometer (PTR-MS, Ionicon, Innsbruck, Austria, with a QMZ 422 quadrupole, Balzers, Switzerland) was operated with a drift tube voltage of 600 V and a pressure of 1.9 mbar at 40 °C. The following mass-to-charge ratios were monitored. The PTR-MS was calibrated to VOCs using a custom gas standard (1.0 ppm in nitrogen, Restek Corporation, Bellefonte, PA, USA) diluted using hydrocarbon free air to 0, 2, 4, 6, 7.9, and 9.9 ppb of the primary gas standard. Real-time leaf and branch VOC emission rates were determined using the flow rate through the chamber (leaf: 0.32 L min−1, branch: 2.0 L min−1), the VOC concentration difference with and without vegetation inside the chamber, and the total leaf area or dry weight in the chamber (Ortega and Helmig, 2008). Dynamic Branch Gas Exchange Measurements Real-time dynamics of methanol (MeOH), formaldehyde, formic acid + ethanol, acetic acid (AA), acetaldehyde, acetone, and isoprene emissions together with photosynthesis and transpiration were characterized from intact poplar branches under constant daytime lighting of 800-1,200 µmol m−2 s−1 PAR at branch height (6:00-22:00). In the first set of experiments, the temperature sensitivity of branch AA and MeOH emissions was evaluated by controlling the air temperature of a potted tree inside a growth chamber programmed with constant daytime lighting (6:00-20:00) and air temperature changing linearly between a minimum at 5:00 (20 °C) and a maximum at 14:00 (30 °C). Potted poplar trees (N = 3 individuals not used in the drought experiment) were transported from the greenhouse to the laboratory and placed in the growth chamber.
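The chamber emission-rate calculation cited above (flow rate times background-subtracted concentration, normalized to leaf area or dry weight; Ortega and Helmig, 2008) is a simple mass balance; the sketch below illustrates it, with the function, temperature/pressure defaults, and example concentrations all being illustrative assumptions rather than the authors' code:

```python
def emission_rate(flow_l_min, c_out_ppb, c_bg_ppb, leaf_area_m2,
                  t_k=298.15, p_kpa=101.325):
    """Chamber VOC emission rate in nmol m^-2 s^-1.

    flow_l_min : air flow through the chamber (L min^-1)
    c_out_ppb, c_bg_ppb : concentration with and without vegetation (ppb)
    """
    R = 8.314                                         # J mol^-1 K^-1
    mol_per_l = p_kpa * 1000.0 / (R * t_k) / 1000.0   # ideal-gas mol per litre
    delta_nmol_per_l = (c_out_ppb - c_bg_ppb) * 1e-9 * mol_per_l * 1e9
    return flow_l_min / 60.0 * delta_nmol_per_l / leaf_area_m2

# Hypothetical leaf-chamber numbers: 0.32 L/min flow, 6 cm^2 leaf area,
# 25 ppb MeOH enrichment over background
print(emission_rate(0.32, 30.0, 5.0, 0.0006))   # nmol m^-2 s^-1
```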
A canopy branch at around 2 m height (3-4 g dry weight) was placed inside a dynamic 5.0 L Tedlar enclosure with 2.0 L min−1 of hydrocarbon free air passing through. A small fraction of the air exiting the chamber was diverted to a proton transfer reaction mass spectrometer (PTR-MS, 75 ml min−1) for volatile concentration determination and to an infrared gas analyzer (Li7000, Licor Biosciences, USA, 100 ml min−1) for CO2 and H2O concentration measurements. Air temperature inside the branch enclosure was recorded using a type-T thermocouple with the sensor shaded from direct light using white Teflon tape. Following the collection of background data with no branch in the enclosure, a canopy branch was carefully inserted into the enclosure, and diurnal gas exchange data were collected and analyzed as a function of air temperature. A similar experimental setup was used for short term (1 hour) 'snap-shot' branch gas exchange studies of CO2, H2O, and volatiles on a subset of drought (7 individuals) and control (5 individuals) trees during the greenhouse drought experiment. Following transport to the laboratory, the potted tree was placed under an LED grow light with the canopy branch to be studied for gas exchange exposed to a photosynthetic photon flux density of 800-1,200 µmol m−2 s−1. Following 1 hour of background CO2, H2O, MeOH, and AA concentration measurements of the empty illuminated branch enclosure, a canopy branch was carefully installed, and continuous daytime CO2, H2O, MeOH, and AA concentration measurements were collected for 1 hour. A thermal image (E6390 thermal imager, Teledyne FLIR LLC) of the intact branch inside the dynamic gas exchange chamber was taken to record daytime leaf temperature. Finally, real-time branch gas-exchange responses to drought were conducted on potted poplar trees transferred to the laboratory (5 individuals) and placed under the LED grow light. Daily soil moisture additions in the morning and evening, which all plants received in the greenhouse, were withheld, and continuous branch gas exchange measurements of CO2, H2O, MeOH, and AA were monitored continuously for the duration of the experiment (2-10 days). Individuals that received watering on the morning before being transported to the analytical laboratory generally required several days before showing physiological signs of drought stress, while others that were collected before watering were impacted by the drought on the first day. Dynamic Leaf Gas Exchange Responses to Environmental Variables Real-time leaf fluxes of MeOH and AA were quantified from well hydrated detached branches as a function of environmental conditions. Individual poplar branches were detached from one of 9 trees growing in the greenhouse in the morning, re-cut under tap water, and transported to the adjacent greenhouse laboratory, where one mature leaf was inserted into the Li6800 6 cm2 leaf chamber mounted on the lab bench using a small tripod. In order to facilitate branch hydration, the detached branch in tap water and the leaf chamber were covered with a reflective mylar sheet, and wet paper towels were placed near the base of the tripod. For all environmental response curves, a flow rate of 320 ml min−1 of VOC free air continuously passed through the leaf chamber, with 75 ml min−1 diverted for MeOH and AA analysis by the PTR-MS via the Li6800 subsampling port.
Humidity in the Li6800 reference air stream was kept low (0-4 mmol mol−1) in order to avoid condensation and loss of AA in the chamber, and the light spectrum was fixed at 90% red/10% blue. For CO2 response curves, photosynthetically active radiation (PAR) and leaf temperature were held constant at 1,000 µmol m−2 s−1 and 32.0 °C, respectively, while the leaf enclosure CO2 concentration was decreased and then increased according to the following sequence: 400, 350, 300, 250, 200, 150, 125, 100, 75, 50, 25, 0, 50, 100, 150, 200, 250, 300, 400, 500, 600, 700 µmol mol−1. Stomatal conductance (gs, mmol m−2 s−1), transpiration (E, mmol m−2 s−1), net photosynthesis (Anet, µmol m−2 s−1), MeOH and AA emissions (nmol m−2 s−1), and AA/MeOH emission ratios were determined. Three replicate CO2, light, and temperature gas exchange response curves were acquired (1 leaf/branch, 1 branch/tree, 3 trees/treatment, 9 total trees). Temperature sensitivities of VOC emissions from detached leaves, stems, and hydrated AIR The temperature sensitivity of VOC emissions from detached leaves and stems was determined in a temperature-controlled chamber with hydrocarbon free air passing through and 75 ml min−1 diverted to the PTR-MS for VOC analysis. Following the collection of background VOC signals, the empty porous PTFE tube was removed from the chamber and AIR (90-100 mg) was added to the tube, which was then quickly resealed with PTFE Teflon film tape. 1.5 ml of distilled water was then slowly injected over 1 min into the center of the tubing using a syringe by piercing the needle through one of the Teflon tape covers. The tube with hydrated AIR was returned to the temperature-controlled chamber, which was ramped to the following temperatures: 30, 35, 40, 45, and 50 °C, with the establishment of steady state VOC emissions for a minimum of 5-10 min at each temperature. This experiment was performed four times with different leaf AIR samples. Background measurements with an empty chamber were first collected at 30 °C for one hour before the sample was introduced into the chamber. VOC emissions at each temperature were calculated using the flow rate through the chamber, the background-subtracted VOC concentration, and the biomass dry weight (leaf, stem, AIR) (Ortega and Helmig, 2008). Temperature sensitivities of VOC emissions from woody crops and forested ecosystems One-dimensional 1H spectra were acquired using a standard 90° pulse and acquire sequence ('zgpr') with a 20 ppm spectral width (16.13 kHz), 4 s acquisition time (128k points), and a 46 s relaxation delay with low-power continuous wave pre-saturation of the residual water resonance for 4 s just prior to the 90° pulse. A total of 512 transients were coadded for each spectrum. The recycle delay (acquisition time plus relaxation delay) was chosen to be at least seven times the measured T1 relaxation time of acetate (6.51 s), which was determined using the inversion-recovery ('t1ir') pulse program. Post-acquisition processing included zero-filling to 256k points and multiplication by a decaying exponential (line-broadening of 0.3 Hz) prior to Fourier transform. Spectra were imported into MNova 14.0.1 (Mestrelab Research S. L.), subjected to a multipoint baseline correction (smooth segments algorithm using a combination of automatically and manually selected points), and analyte signals were fit and their peak areas integrated using MNova's line-fitting utility.
The experimentally determined peak areas corresponding to each of the four isotopologues were subsequently divided by their respective expectation values, assuming the natural abundance of 13C to be equal to 0.011, or 1.1%, and 14C to be negligible. As such, the expected fractional abundances of the 13C-1-acetate and 13C-2-acetate isotopologues were each taken as 0.011, that of the 13C2-acetate isotopologue was 0.000121, and 12C2-acetate comprises the remainder at 0.977879. Metabolic origin of Acetic Acid (AA) The upregulation of aerobic fermentation in plants is now recognized as an evolutionarily conserved drought survival strategy, with the amount of acetate produced directly correlating to survival (Kim et al., 2017). Drought-induced acetate accumulation promotes de novo synthesis of the potent phytohormone jasmonic acid (JA) and the acetylation of histone H4, which influences the priming of the JA signaling pathway for plant drought tolerance (Kim et al., 2017). Thus, acetate regulates an epigenetic switch of metabolic flux conversion and hormone signaling by which plants adapt to drought. However, destructive measurements are required to evaluate acetate-linked drought responses, limiting the temporal and spatial scales that can be studied. As a consequence, few studies have reported aerobic fermentation rates in plants during drought due to the current method requirements of destructive sampling followed by offline tissue analysis of acetate content (Dewhirst et al., 2021b). In this study, by directly quantifying real-time leaf emission rates of MeOH together with volatile intermediates of aerobic fermentation (acetaldehyde, AA, ethanol, acetone), we suggest that growth and aerobic fermentation responses to drought can be studied in real-time from individual leaves to whole ecosystems. At the onset of drought in poplar, large increases in the fermentation volatiles acetaldehyde, acetic acid, ethanol, and acetone were consistently emitted from poplar branches despite reduced stomatal conductance. This suggests that drought-activation of the aerobic fermentation pathway occurred (Kim et al., 2017; Rasheed et al., 2018), with foliar emissions of methyl acetate (Dewhirst et al., 2021b) and acetone (Fall, 2003; Jardine et al., 2010) associated with acetate activation to acetyl-CoA (Millerd et al., 1954). However, the metabolic source of AA emissions during stress requires additional studies, as non-fermentative sources of acetaldehyde may be possible, such as the peroxidation of membranes associated with irreversible damage (Jardine et al., 2009). During aerobic fermentation, acetate formed from the oxidation of acetaldehyde does not lead to nicotinamide adenine dinucleotide (NAD+) regeneration, as in the case of ethanol production in anoxic tissues like flooded roots (Kreuzwieser et al., 1999). However, while NAD+ regeneration is considered a critical aspect of fermentation under anoxia, it may be less important during aerobic fermentation, where acetate may be a key respiratory substrate, effectively coupling aerobic fermentation with mitochondrial respiration to help meet the high energy demands of the cell (Tadege 1997). Figure S1: Example PTR-MS calibration to a primary MeOH and AA gas phase standard. Linear calibration of the normalized PTR-MS signals at (a) ncps m/z 33 (methanol) and (b) ncps m/z 61 (acetic acid) during the dynamic dilution of a primary gas standard.
Figure S2: Biological replicate #2 of real-time branch gas exchange dynamics of VOCs, CO2, and H2O during a 5-day drought experiment on a potted poplar tree, where the activation of acetate fermentation, stomatal closure, and a strong reduction of net photosynthesis occurred during the 3rd day after watering cessation. A branch enclosure was installed on a potted poplar tree and water withheld for the duration of the experiment. Daily branch flux patterns of (a) methanol (MeOH), acetic acid (AA), and the AA/MeOH emission ratio, (b) acetate fermentation intermediates (acetaldehyde, ethanol, acetone), and (c) CO2 and H2O and the photosynthetic product isoprene. Shaded areas represent the nighttime, when the grow light was switched off. Figure S3: Biological replicate #3 of real-time branch gas exchange dynamics of VOCs, CO2, and H2O during a 3-day drought experiment on a potted poplar tree, where the activation of acetate fermentation, stomatal closure, and a strong reduction of net photosynthesis occurred during the 2nd day after watering cessation (however, note the activation of acetate fermentation by the second night). A branch enclosure was installed on a potted poplar tree and water withheld for the duration of the experiment. Daily branch flux patterns of (a) methanol (MeOH), acetic acid (AA), and the AA/MeOH emission ratio, (b) acetate fermentation intermediates (acetaldehyde, ethanol, acetone), and (c) CO2 and H2O and the photosynthetic product isoprene. Shaded areas represent the nighttime, when the grow light was switched off. Figure S4: Biological replicate #4 of real-time branch gas exchange dynamics of VOCs, CO2, and H2O during a 2-day drought experiment on a potted poplar tree, where the activation of acetate fermentation, stomatal closure, and a strong reduction of net photosynthesis occurred during the night of the 1st day after watering cessation. A branch enclosure was installed on a potted poplar tree and water withheld for the duration of the experiment. Daily branch flux patterns of (a) methanol (MeOH), acetic acid (AA), and the AA/MeOH emission ratio, (b) acetate fermentation intermediates (acetaldehyde, ethanol, acetone), and (c) CO2 and H2O and the photosynthetic product isoprene. The shaded area represents the nighttime, when the grow light was switched off. Figure S5: Biological replicate #5 of real-time branch gas exchange dynamics of VOCs, CO2, and H2O during a 3-day drought experiment on a potted poplar tree, where the activation of acetate fermentation, stomatal closure, and a strong reduction of net photosynthesis occurred during the 2nd night after watering cessation. A branch enclosure was installed on a potted poplar tree and water withheld for the duration of the experiment. Daily branch flux patterns of (a) methanol (MeOH), acetic acid (AA), and the AA/MeOH emission ratio, (b) acetate fermentation intermediates (acetaldehyde, ethanol, acetone), and (c) CO2 and H2O and the photosynthetic product isoprene. The shaded area represents the nighttime, when the grow light was switched off. Note a gap in the data due to an issue with the data logging computer. Figure S6: Recovery of drought-suppressed branch MeOH emissions by 100 mL soil moisture additions. Regular soil water additions were withheld for the duration of the experiment, with 100 mL of soil water added at distinct times (red arrows) on the first day of suppressed MeOH emissions. The shaded area represents the nighttime, when the grow light was switched off.
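For reference, the isotopologue expectation values used in the 13C2-acetate labeling analysis follow from binomial statistics at natural 13C abundance; the short sketch below reproduces them, with the measured peak areas being hypothetical values, not the study's data:

```python
p13 = 0.011                       # natural abundance of 13C
expected = {
    "12C2":  (1 - p13) ** 2,      # ~0.978; the text uses the remainder, 0.977879
    "13C-1": p13 * (1 - p13),     # ~0.0109; the text approximates as 0.011
    "13C-2": p13 * (1 - p13),
    "13C2":  p13 ** 2,            # 0.000121, as in the text
}

# Hypothetical NMR peak areas, normalized to total acetate signal
measured = {"12C2": 0.930, "13C-1": 0.012, "13C-2": 0.013, "13C2": 0.045}

for iso, exp_frac in expected.items():
    print(f"{iso}: enrichment = {measured[iso] / exp_frac:.1f}x expected")
```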
2022-10-13T06:18:09.638Z
2022-10-12T00:00:00.000
{ "year": 2022, "sha1": "574a94b77b75e6f381f5dfcaf3d266e0e56fc687", "oa_license": "CCBYNCND", "oa_url": "https://repository.uantwerpen.be/docstore/d:irua:14753", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "6417f15a270a7d5fbaa0bc71c327e8f31c28c156", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
32214043
pes2o/s2orc
v3-fos-license
Effects of Variable Diffuser Vanes on Performance of a Centrifugal Compressor with Pressure Ratio of 8.0 In numerous applications, centrifugal compressors are required to provide a high pressure ratio with good efficiency while also working in a wide operating range. This is a challenge because as pressure ratio increases, efficiency and operating range inevitably decline. This paper studies the effects of a variable geometry diffuser on the performance and operating range of a centrifugal compressor with high pressure ratios of up to 8.0. The numerical method employed three-dimensional Reynolds-averaged Navier-Stokes simulations. An analysis of the matching of the vaned diffuser with the impeller for different working conditions and diffuser vane angles is presented. The results show that improved matching of the adjusted diffuser increased efficiency by 4.5%. The range extension mechanism of the variable diffuser is explained, and it is shown that adjusting the vane angle by +6° to −6° extended the operating range of the compressor by up to 30.0% for pressure ratios between 5.0 and 6.0. The interaction between diffuser and impeller was examined, and the independent characteristic of the impeller is illustrated. The connection between the incidence angle at the leading edge of the impeller and flow separation near the tip of the impeller is discussed. Introduction Centrifugal compressors are extensively used in industry [1]. Besides gas turbines and turboshaft engines, unmanned air vehicles (UAVs) are also one of the important applications of centrifugal compressors. Compression ignition engines have been suggested as a suitable choice for the propulsion system of high altitude unmanned air vehicles [2][3][4]. Because of their lower air consumption compared to turbojets, turbocharged compression ignition engines can deliver constant power at different levels of altitude. However, to maintain constant inlet manifold pressure at high altitudes, a turbocharger with a high-pressure ratio centrifugal compressor is needed. Furthermore, high-altitude aircraft need to have long endurance, which requires using propulsion systems with high efficiency [4]. Designing a highly efficient centrifugal compressor with a high pressure ratio requires the use of a vaned diffuser. However, the use of vaned diffusers narrows the stable operating range of the compressor. The operation of a high pressure ratio centrifugal compressor with a wide operating range involves the impeller and/or the diffuser working close to or above their stability margin [5]. As a result, flow instability is a bottleneck for developing high pressure ratio centrifugal compressors with a wide flow range. At high pressure ratio, the considerable adverse pressure gradient in the flow path of the compressor inevitably causes flow instability, which restricts the operating range of a compressor. In addition, the matching between the impeller and diffuser plays a crucial role in designing a high-performance centrifugal compressor. According to Cumpsty [6], poor matching between the impeller and diffuser is a common reason for low performance of high pressure ratio compressors.
Various methods have been proposed and used for extending the operating range and improving the performance of compressors. Examples of these methods are the application of backswept blades [7], casing treatment [8], tandem diffusers [9], and shell cooling [10]. The variable geometry method is an alternative way to improve the performance and operating range of the compressor by adjusting the geometry of the compressor under different working conditions [11]. Potential locations for the application of the variable geometry method are upstream and downstream of the impeller. The variable vaned diffuser is one of the applications of the variable geometry method. In this approach, the geometrical parameters of the diffuser's blades, such as blade stagger angle, distance from the impeller blade's trailing edge, and blade solidity, are adjustable. In each scenario, the geometry is changed in such a way that the performance of the compressor adapts to the operational requirements. Simon et al. [12] used a vaned diffuser with a different angle in conjunction with variable inlet guide vanes to improve both the operating range and the efficiency of the compressor. Salvage [13] employed a variable geometry split-ring pipe diffuser to improve the surge margin of a compressor with an excess impeller-diffuser gap. Ziegler et al. [14] used a vaned diffuser with adjustable vane angle and radial gap between the diffuser vanes and the impeller to study the interaction between the diffuser and the impeller. They adjusted the radial gap ratio between 1.04 and 1.18 and found that the total pressure ratio of the compressor rises with a decrease of the radial gap. In a recent work, Huang et al. [15] found that the variable diffuser method can extend the stable operating range of a centrifugal compressor from 23.5% to 54.9% at a pressure ratio of 4.8 by changing the diffuser vane angles by 10°. In recent years, with the rising demand for UAVs, turbocharged engines, and gas turbines with higher performance, research on high-pressure ratio centrifugal compressors with wide operating ranges is gaining popularity. Zheng et al. [16] showed the potential benefit of operating range extension of the centrifugal compressor for a turbocharged engine and its operating line. In order to meet the requirements of future engine generations, compressors with pressure ratios significantly higher than the current standard are required [17,18]. The application of variable diffuser vanes on high pressure ratio compressors has not been reported in the literature, and the validity of the effects on the performance of such compressors has not been verified. In this paper, the effects of variable diffuser vanes on the performance of a centrifugal compressor with a high pressure ratio of 8.0 are investigated. The compressor is of a new design that is under development for application in gas turbines and the future generation of single-stage high pressure ratio turbochargers. The main body of this work consists of three parts. Firstly, the research methodology is stated. Secondly, compressor maps with different setups of diffuser vane angles are shown, and the effects of variable diffuser vanes on range extension and performance are discussed. Thirdly, component performances, especially the impeller's characteristics, are analyzed to reveal the flow physics, with the intention of providing new insights for compressor design.
Methodology In this study, CFD simulations were conducted based on a three-dimensional, steady-state, compressible, finite volume layout. The Numeca FINE/Turbo 10.1 EURANUS solver was used for the computations to solve the Reynolds-averaged Navier-Stokes equations. The central scheme with Jameson type dissipation [19] and the fourth-order Runge-Kutta scheme were used for spatial and temporal discretization, respectively. The Spalart-Allmaras one-equation model [20] was selected as the turbulence model. We used a high pressure ratio centrifugal compressor. The compressor has 24 impeller blades and 19 diffuser vanes and can achieve a peak pressure ratio of 8.0 and a peak isentropic efficiency of 79.9%; other details are presented in Table 1. We meshed the passage of the impeller and diffuser together and applied periodic matching on the side faces of the passage to make the computational domain. We employed a multi-block structured mesh with an O4H topology scheme. The final grid of a single passage consisted of 1 million nodes, of which 56% were allocated to the impeller. The tip clearance was set to a constant 5.6% of the exit blade height. The minimum skewness angles in the impeller and diffuser were 16° and 37°, respectively, and maximum expansion ratios were below 3. The average y+ of the mesh was around 1.6, with the highest value lower than 10, which is suitable for the Spalart-Allmaras turbulence model to appropriately resolve the viscous sublayer. A schematic of the impeller blade and diffuser vane passages is shown in Figure 1. Boundary conditions at the inlet consisted of the absolute total temperature of 288.15 K, the absolute total pressure of 101.325 kPa, and the normal velocity. Likewise, an averaged static pressure with backflow control was imposed at the outlet. The casing surface and blades were defined as static non-slip and rotational non-slip solid boundaries, respectively. The non-reflecting 2D method was used to model the rotor-stator interface. The same compressor prototype with the same numerical setup was used in a previous study by the present co-authors, and the mesh quality, grid independency, and reliability of the turbulence model were validated [21]. The simulations were performed at 1.0Nmax, 0.9Nmax, 0.8Nmax, and 0.6Nmax rotational speeds. The peak of each pressure ratio speed line was determined to be the surge point, as it provides a good approximation for the flow instability [6]. Diffuser vanes were rotated in the range of [−6°, 6°] to investigate the effects of diffuser vane angles on the performance of the compressor. The leading edge of the diffuser vanes was used as a pivot point for the rotation, to maintain the width of the vaneless region and the gap ratio between the impeller blade's trailing edge and the diffuser vane's leading edge. A schematic of the impeller and diffuser vanes with different stagger angles is shown in Figure 2. The angles are relative to the stagger angle of the datum diffuser. For the closed diffuser, the diffuser vanes were rotated clockwise and the diffuser throat area decreased, while for the open diffuser, the vanes were rotated counter-clockwise and the diffuser throat area increased.
The datum compressor reaches a pressure ratio of 8.0 at the maximum rotational speed. The extended part of the operating range is highlighted and can be compared with the operating range of the datum compressor. It can be seen that the compressor with variable diffuser vanes has a significantly wider range than the datum compressor. In order to evaluate the range extension potential of the variable diffuser, the stable operating range (SOR) of the compressor is defined by Equation (1), in which ṁ_surge and ṁ_choke represent the lowest and highest mass flow rates within the operating range at each pressure ratio, respectively.
SOR = (ṁ_choke − ṁ_surge) / ṁ_choke × 100%   (1)

The stable operating ranges of the datum compressor and the compressor with variable diffuser vane angles are shown in Figure 4. The difference between the two is the extended range achieved by employing the variable geometry method. The highest stable operating range of the datum compressor is 40.5%, at a low pressure ratio of 3.0. By using a variable-angle vaned diffuser, the operating range increases greatly and reaches a maximum of 63.3% at a medium pressure ratio of 4.9.

Because the variable diffuser affects compressor performance through different mechanisms at different speeds and pressure ratios, the amount of range extension is not constant but varies with rotational speed and pressure ratio level. When the diffuser is closed, the operating range of the compressor shifts to the left on the map because the surge mass flow rate decreases. Conversely, when the diffuser is opened, the choke mass flow rate increases and the operating range shifts to the right. At the maximum pressure ratio and rotational speed (Nmax), the shift of the surge line to lower mass flow rates caused by closing the diffuser is the main contributor to the range extension. Although opening the diffuser at this speed increases the diffuser throat area, it has no effect on the choke mass flow rate of the compressor. This indicates that at Nmax choking does not occur in the diffuser; the choke mass flow rate of the compressor is dominated by choking in the impeller. Since at pressure ratios above 7.0 the range extension is due only to the shift of the surge line, the amount of range extension there is below 20%.
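To make the definition concrete, the short sketch below computes the SOR of Equation (1) for a single speed line and for the variable-geometry envelope formed by taking the lowest surge flow among the closed settings and the highest choke flow among the open settings. The mass flow values are hypothetical, chosen only to illustrate the mechanism described above, not data from this study.

def sor(m_surge, m_choke):
    """Stable operating range, Equation (1): (m_choke - m_surge) / m_choke * 100."""
    return (m_choke - m_surge) / m_choke * 100.0

# Hypothetical (surge, choke) mass flows in kg/s at one speed line.
speedline = {
    "closed 6": (1.60, 2.30),  # closing lowers the surge mass flow
    "datum":    (1.85, 2.45),
    "open 6":   (2.00, 2.60),  # opening raises the choke mass flow
}

datum_sor = sor(*speedline["datum"])
# Variable-geometry envelope: lowest surge flow and highest choke flow
# over all vane settings on this speed line.
envelope_sor = sor(min(s for s, _ in speedline.values()),
                   max(c for _, c in speedline.values()))
print(f"datum SOR: {datum_sor:.1f}%, variable-geometry SOR: {envelope_sor:.1f}%")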
At medium pressure ratios (4 to 6) and rotational speeds (0.9Nmax, 0.8Nmax), closing the diffuser shifts the surge line significantly to the left on the map and widens the stable operating range, while opening the diffuser moves the choke line to the right, toward higher mass flow rates. At these rotational speeds, choking of the datum compressor no longer occurs in the impeller, and the choke mass flow rate of the compressor is determined by the diffuser. Because the diffuser chokes at its throat, the larger throat area of the open diffuser increases the choke mass flow of the diffuser, which in turn increases the choke mass flow rate of the compressor. Since at medium pressure ratios both the surge and choke lines shift when the diffuser vane angles are changed, the amount of range extension increases and reaches a peak of 30% for pressure ratios between 5.0 and 6.0.

At low pressure ratios and rotational speed (0.6Nmax), opening the diffuser increases the choke mass flow rate of the compressor and expands the operating range by shifting the choke line; however, the surge mass flow rate changes very little. At low mass flow rates, the tip region of the impeller leading edge is highly unstable, and this is the main cause of instability and stall in the compressor at low rotational speeds and pressure ratios. Consequently, closing the diffuser vanes does not improve the stability of the compressor, and the surge mass flow rate does not change. The amount of range extension therefore decreases to about 15% at a low pressure ratio of 2.5.

Discussion on Efficiency Performance

The compressor efficiency for different diffuser vane angles and rotational speeds is shown in Figure 5. As shown in Figure 5a, at high rotational speed and pressure ratio the efficiency of the compressor decreases when the diffuser vanes are closed, despite the significant range extension. At the maximum rotational speed (Nmax), the datum compressor reaches a peak efficiency of 79.9%, while the "closed 6°" case reaches a peak efficiency of only 69.1%. The same trend for closing the diffuser is seen at 0.9Nmax. By contrast, at this speed the efficiency increases when the diffuser vanes are opened, and the "open 4°" case has the highest efficiency, 81.7%. This is because the impeller and diffuser of the datum compressor were designed and matched for best efficiency at Nmax, the design speed. As the rotational speed of the compressor decreases, the impeller and diffuser become mismatched, since the diffuser throat area required for appropriate matching increases [22]. As a result, at the lower rotational speeds of 0.8Nmax and 0.6Nmax the compressor with the largest diffuser throat area, the "open 6°" case, has the highest efficiency.
For further discussion of the efficiency performance of the compressor at different speeds and diffuser vane angles, it is necessary to evaluate the matching between the impeller and the diffuser.
As shown by Tamaki et al. [23], if the flow capacity of the impeller and that of the diffuser match closely, the compressor will have the best performance at the design speed; the same reasoning applies at speeds above or below the design speed. As demonstrated by Dixon and Hall [24], the choke mass flows of the impeller and the vaned diffuser depend, in addition to their respective throat areas, on the impeller blade speed and on the stagnation conditions at the diffuser inlet, respectively. These conditions vary with rotational speed, so the component that chokes and thereby sets the choke mass flow rate of the compressor may differ from one speed line to another. To analyze the choke mass flows of the impeller and diffuser and the matching between them, we study the component performance of the impeller and diffuser.

The diffuser loss coefficient and the adiabatic efficiency of the impeller for the different cases are shown in Figure 6. The impeller efficiency is calculated from the gas-state parameters at the inlet, the rotor-stator interface and the outlet of the compressor using Equation (2), and the diffuser loss coefficient is computed using Equation (3).
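In a standard formulation, with stations 1, 2 and 3 taken at the compressor inlet, the rotor-stator interface and the diffuser exit (this notation is an assumption, following common turbomachinery practice), these two quantities can be written as

\eta_{imp} = \frac{(p_{02}/p_{01})^{(\gamma - 1)/\gamma} - 1}{T_{02}/T_{01} - 1} \quad (2)

\omega = \frac{p_{02} - p_{03}}{p_{02} - p_{2}} \quad (3)

where p_0 and T_0 denote stagnation pressure and temperature, p_2 the static pressure at the diffuser inlet and γ the ratio of specific heats. Equation (2) is the total-to-total adiabatic efficiency of the impeller, and Equation (3) normalizes the stagnation pressure loss across the diffuser by the dynamic head at its inlet.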
As stated before, at the maximum rotational speed the datum compressor has the highest efficiency. The reason, as seen in Figure 6a, is that the choke mass flow rate of the impeller (2.69 kg/s) is in line with the choke mass flow rate of the diffuser. This close matching of the choke mass flow rates lets the impeller and diffuser operate at their highest performance at a mass flow rate of 2.57 kg/s, which is the peak efficiency point of the stage. In the other diffuser vane configurations, the altered diffuser throat area changes the choke mass flow rate of the diffuser and mismatches it with the impeller. This mismatch results in a significant drop in stage efficiency, and as the mismatch grows with further closing of the diffuser vanes, the stage efficiency decreases further.

The choke mass flow rates of both the impeller and the diffuser decrease as the rotational speed of the impeller decreases [24], but the choke mass flow rate of the diffuser declines faster than that of the impeller. This difference leads to an off-design matching problem between the impeller and diffuser at lower speeds, which results in lower efficiency of the datum compressor compared to the cases with open diffuser vanes. As shown in Figure 6b, at 0.9Nmax opening the diffuser vanes increases the choke mass flow rate of the diffuser and brings it closer to the impeller choke mass flow rate. The improvement in matching between the impeller and diffuser increases the efficiency of the compressor by 4.5% in the "open 6°" case at 0.8Nmax. At this speed, the peak stage efficiency of the open cases is even higher than the peak stage efficiency of the datum compressor at Nmax because, in addition to the higher impeller efficiency, the open diffusers are themselves more efficient.

As shown in Figures 6 and 7 of Casey and Rusch [22], the off-design matching problem between the impeller and diffuser is more severe in high pressure ratio compressors, which have high design tip speeds. This effect aggravates the efficiency deterioration due to mismatching at speeds other than the design speed. Implementing the variable geometry method to improve the matching by changing the diffuser vane angle at different speeds is therefore a feasible way to improve the performance of modern high pressure ratio compressors; this paper demonstrates the potential benefits of the method for a super-high pressure ratio compressor.
Figure 7 illustrates the degree of reaction for different diffuser vane angles at a rotational speed of 0.9Nmax. The degree of reaction of a centrifugal compressor is the ratio of the rotor static enthalpy rise to the stage stagnation enthalpy rise. For a stage without inlet swirl it can be calculated as

R = 1 − (c₂² − c₁²) / (2 u₂ c_θ2)   (4)

where c₁ and c₂ are the absolute velocities at the impeller inlet and exit, u₂ is the impeller tip speed and c_θ2 is the tangential component of the exit velocity (so that the Euler work is u₂ c_θ2 when there is no inlet swirl). As the diffuser vane angle decreases, the mass flow rate of air passing through the impeller, and hence the flow velocity at the impeller outlet, decrease. In addition, because of the backswept blades, the work input of the impeller increases as the mass flow rate decreases. As a result, the degree of reaction of the compressor increases as the variable diffuser vane angle decreases.

Discussion on Effects of the Variable Diffuser on the Impeller

Tamaki et al. [23] assumed, in their work on matching between the impeller and diffuser, that using different diffuser types does not affect the performance of the impeller, which keeps working with the same performance characteristic. Ziegler et al.
[14] reported that the impact of vaned diffusers with different radial gaps and diffuser angles on the performance of the impeller is insignificant. These previous studies were conducted on compressors with medium pressure ratios; we are not aware of any study investigating the effect of a variable diffuser on impeller performance in high pressure ratio compressors.

As can be seen in Figure 6, at every speed the performance characteristic curves of the impeller for different diffuser vane angles form a single uniform, continuous curve over the broad operating range. This uniform trend implies that the performance of the impeller is independent of the downstream vaned diffuser system. To evaluate this, the average static pressure at the impeller exit is studied for different diffuser vane angles.
Figure 8 shows the average static pressure at the rotor-stator interface at 0.9Nmax and indicates that the impeller performance is the same over the operating range for different diffuser vane angles. To investigate the flow condition in the impeller, various flow parameters at the impeller exit are shown in Figures 9-11 for cases with different diffuser angles but the same mass flow rate. At this mass flow rate the stage has different performance and operating conditions under the various diffuser settings, yet the flow conditions at the impeller exit are the same. This confirms that the impeller performance is independent of the diffuser vane configuration.

Applying the variable geometry diffuser technique significantly increases the operating range of the compressor, allowing the impeller to work stably over a much wider mass flow range than in a conventional fixed-geometry compressor with the same impeller. Since the performance of the impeller is independent of the diffuser, this technique can also be used during the design process to study the performance of a new impeller and to improve the accuracy of the matching between the impeller and diffuser under the operating conditions in question.
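In the discussion below, incidence is taken in its usual sense (a standard definition assumed here, as the paper does not state one): i = β₁ − β₁b, the difference between the relative flow angle and the blade metal angle at the leading edge. With angles measured from the meridional direction, tan β₁ = u₁/c_m1, so at a fixed wheel speed a lower mass flow reduces c_m1, increases β₁ and hence increases the incidence.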
Although closing the diffuser significantly extends the stable operating range of the compressor by shifting the surge line to lower mass flow rates, working at low mass flow rates increases the incidence angle at the impeller leading edge. Figure 12 shows the pitch-averaged spanwise distribution of the incidence angle at the impeller leading edge for six different operating points at 0.9Nmax. As listed in Table 2, operating points 1 to 5 belong to the "Closed 2°" case, from choke to surge, and operating point 6 is the choke point of the "Closed 4°" case. As the mass flow decreases, the mid-span incidence angle increases. This rise in incidence angle reduces the stability of the tip region of the impeller leading edge to the point that flow separation occurs there, and a sudden drop of the incidence angle appears at spans above 90% at operating point 2, at a mass flow rate of 1.82 kg/s.

Figure 13 shows the pitch-averaged spanwise distribution of the relative Mach number in the tip region of the impeller leading edge, superimposed with streamlines, for the operating points mentioned above. As the incidence angle increases with declining mass flow, the tip region becomes significantly unstable and a flow recirculation vortex emerges. As the mass flow decreases further, the vortex grows and impinges on the lower spans.

As the performance of the impeller is independent of the diffuser, at each speed there is a critical mass flow rate below which the impeller tip region becomes unstable and flow separation emerges,
regardless of the diffuser setting or the stage-wide working condition (such as choke or surge). Although at operating points below the critical mass flow rate the stage may still operate stably with the diffuser closed, the flow separation in the impeller tip region may damage the impeller through increased blade fatigue.

Conclusions

In this work, a variable diffuser vane was employed to improve the operating range and performance of a high pressure ratio centrifugal compressor. The study employed steady-state numerical simulations of the Reynolds-averaged Navier-Stokes equations. Diffuser vane rotation angles ranging from −6° to +6° were used to examine the effects on the performance of the compressor. The main conclusions can be summarized as follows.

1. A variable vaned diffuser can significantly improve the operating range of centrifugal compressors. Its effects on high pressure ratio centrifugal compressors are verified here: the stable operating range is extended at different pressure ratio levels, with an increase of 30.0% in the operating range of the current case for pressure ratios between 5.0 and 6.0. At higher rotational speeds, the main contributor to range extension is the shift of the surge line to lower mass flow rates, achieved by closing the diffuser. At lower rotational speeds, changing the angle of the diffuser vanes has minimal impact on the surge mass flow rate but significantly shifts the choke line. At medium rotational speeds, both the surge and choke lines shift with the diffuser vane angle, and both contribute to extending the operating range.

2. A variable vaned diffuser has a significant impact on the compressor's efficiency. At higher rotational speeds, the choke mass flow of the diffuser is matched by that of the impeller, so opening the diffuser at these speeds neither shifts the choke line nor improves the efficiency. At lower rotational speeds, however, the impeller and diffuser are not matched and choking occurs in the diffuser. Opening the diffuser then extends the operating range by shifting the choke line to higher mass flows and, by improving the matching between impeller and diffuser, increases the stage efficiency by up to 4.5% at 0.8Nmax.
3. The impeller performance is independent of modifications of the diffuser vane angle. For every rotational speed, the component performance of the impeller follows a single consistent curve even across diffusers with different vane angles, and a change in the diffuser settings did not change the flow conditions at the impeller exit.

4. Centrifugal impellers coupled with variable diffusers are able to operate over a wide range of mass flow rates. This is proposed as a general method to study the behaviour of an impeller over a wide range of mass flow rates; applying this approach revealed a critical point for stability in the operating range of the impeller.

5. At each rotational speed there is a certain mass flow rate at which the incidence angle at the impeller leading edge reaches a critical point. Because of the high incidence angle, instability and separation vortices arise in the near-tip region of the impeller as the mass flow rate decreases. This critical point is independent of the diffuser settings and of stage-wide working conditions such as choke or surge, which means that even when the stage operates stably there may be large separation vortices in the near-tip region of the impeller. The instability in the impeller grows with a further decrease in the mass flow rate.

Figure 2. The compressor impeller together with the diffuser vanes at different angles.
Figure 3. Compressor map for different diffuser vane angles and rotational speeds. The extended range of the compressor is highlighted.
Figure 4. Comparison between the SOR of the datum compressor and the variable diffuser compressor for different pressure ratios.
Figure 7. Degree of reaction for different diffuser vane angles at 0.9Nmax.
Figure 8. Average static pressure at the rotor-stator interface for different diffuser vane angles.
Figure 9. Static pressure contours at the rotor-stator interface for different diffuser vane angles at 2.25 kg/s mass flow rate.
Figure 10. Absolute Mach number contours at the rotor-stator interface for different diffuser vane angles at 2.25 kg/s mass flow rate.
Figure 11. Swirl angle contours at the rotor-stator interface for different diffuser vane angles at 2.25 kg/s mass flow rate.
Figure 12. Pitch-averaged spanwise distribution of incidence angle at the leading edge of the impeller.
Figure 13. Pitch-averaged spanwise contour of the relative Mach number at the tip of the impeller leading edge, superimposed with streamlines.
Table 2. Different operating points shown in Figures 12 and 13.
2017-08-14T16:57:37.349Z
2017-05-12T00:00:00.000
{ "year": 2017, "sha1": "3292e20736abb995cdec7a0b79f371f96a9c5553", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/10/5/682/pdf?version=1494594802", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9245919e3b25941662e2d4290aa3167835ecfa0a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
11307700
pes2o/s2orc
v3-fos-license
Density of Key-Species Determines Efficiency of Macroalgae Detritus Uptake by Intertidal Benthic Communities

Accumulating evidence shows that increased biodiversity has a positive effect on ecosystem functioning, but the mechanisms that underpin this positive relationship are contentious. Complete extinctions of regional species pools are comparatively rare, whereas compositional changes and reductions in abundance and biomass are common, although seldom the focus of biodiversity-ecosystem functioning studies. We use natural, small-scale patchiness in the density of two species of large bivalves with contrasting feeding modes (the suspension-feeding Austrovenus stutchburyi and the deposit-feeding Macomona liliana) to examine their influence on the uptake of nitrogen from macroalgae detritus (a measure of ecosystem function and food web efficiency) by other infauna in a 10-d laboratory isotope-tracer experiment. We predicted that the densities of these key bivalve species and the functional group diversity (calculated as Shannon's H', a density-independent measure of community composition) of the intact infaunal community would be critical factors explaining variance in per capita macroalgal uptake rates by community members, and would hence determine total uptake by the community. Results show that only two species, M. liliana and a large orbiniid polychaete (Scoloplos cylindrifer), dominated the macroalgal nitrogen taken up by the whole community, owing to their large biomass. However, their densities were mostly unimportant or negatively influenced per capita uptake by other species. Instead, the density of a head-down deposit-feeder (the capitellid Heteromastus filiformis), the density of scavengers (mainly nemertines and nereids), and species and functional group diversity best explained per capita uptake rates in community members. Our results demonstrate the importance of species identity, density and large body size for ecosystem functioning and highlight the complex interactions underlying the loss of ecological functions with declining biodiversity and compositional changes.

Introduction

Considering the major community changes, including species losses, documented worldwide in recent years, there is an urgent need to gain a mechanistic understanding of the relationship between biodiversity and ecosystem functioning, which ultimately affects the ecological services provided to humanity [1,2]. Accumulating evidence shows that increased biodiversity has a positive effect on ecosystem functions, such as primary production, decomposition of organic matter and nutrient regeneration, but the pattern of response varies depending on the ecosystem and species investigated [1-4]. Much of what we know about the role of biodiversity in mediating ecosystem functioning stems from manipulative laboratory experiments. Although these have helped articulate hypotheses and provided mechanistic explanations for observed patterns, they do not incorporate habitat complexity or allow long-term community dynamics and feedback processes to develop [5-7]. A key challenge in the field of biodiversity-ecosystem function research is to demonstrate whether the importance of biodiversity observed in controlled experimental assemblages also persists in natural systems [4,8-10]. Biodiversity explains variation from the level of genes to ecosystems, with species richness (number of species) being the most commonly used measure in studies examining biodiversity and ecosystem function relationships.
Species richness is representative of an environment, as it is determined by the prevailing biotic and abiotic conditions, and it is a logistically achievable measure; it is therefore appropriate for such studies. Three main hypotheses have been proposed that relate the responses of ecosystem functioning to species richness. First, the linear or "rivet" hypothesis suggests that all species contribute critically and approximately equally to ecosystem function (e.g. Lawton [11]). Second, the "redundancy" hypothesis suggests that ecosystems can lose many species with no consequences for ecosystem performance, as long as the major functional groups are still present, i.e. it is not the number of species per se which is important but the functional traits of the species [11,12]. Redundant species are considered necessary only to ensure ecosystem resilience to perturbation [12]. Third, the "idiosyncratic" hypothesis states that species diversity affects ecosystem functioning, but not in a predictable direction, because the roles of individual species are complex and context-dependent [11,13]. Biodiversity-ecosystem function relationships within any system may be determined by any combination of these three hypotheses. However, there are further important components of biodiversity that may affect these relationships, including the density of a species [14,15]. In many cases it has been shown that certain key-species, rather than species richness, can have a disproportionate effect on ecosystem functioning, such as nutrient cycling and productivity (e.g. [16-19]). Loss of a key-species would result in a rapid decline in ecosystem functioning [20] (cf. the rivet hypothesis), as such a species is unique and cannot be replaced by another species with similar functional traits (cf. the redundancy hypothesis). Changes in species abundance patterns may have important consequences for ecosystems long before a species is threatened by extinction [14]. At local scales, variations in the absolute density and relative abundance of species can modify biodiversity-ecosystem functioning relationships. For example, the per capita performance of individual species may increase as their density declines, reflecting reduced intraspecific competition [21]. Also, a decreased relative abundance of one species is likely to alter complementary resource use or facilitation [22]. The hypotheses listed above would have difficulty accounting for these common shifts in biodiversity. Marine soft sediments cover more than 70% of Earth's surface and play a critical role in the global storage and cycling of nutrients and energy [7,23,24]. Benthic invertebrate species often contribute idiosyncratically to ecosystem functioning, with their impact strongly dependent on species identity and functional role (e.g. [25-27]). There is also some support for the redundancy hypothesis. Raffaelli et al. [28] grouped species by functional group according to their mode of bioturbation and found that increased species richness of benthic macrofauna belonging to different functional groups had a significant effect on nutrient fluxes from sediments, while increased species richness within the same functional group had no effect. Complete extinctions of regional species pools are, however, comparatively rare in the marine benthos, whereas compositional changes and reductions in abundance and biomass are common [29,30].
These changes in benthic abundance and biomass can be important drivers of ecosystem functioning, as they direct species dominance patterns and functioning (e.g. infaunal community structure and diversity [31]; bioturbation potential and degradation patterns [18,32,33]). New Zealand sandflats provide an ideal system in which to investigate the contribution of species composition and abundance to ecosystem functioning, because the macrofaunal community is species rich and contains diverse functional groups. Using small-scale patchiness (0.01 m²) in the density of key-species, we compared the uptake of macroalgal detritus by the benthic infaunal community. This process is a fundamental ecosystem function in which benthic infauna convert dead organic material to secondary production, which is then available to higher trophic levels such as fish. The isotope tracing technique enables quantifiable measurement of detrital uptake by all species in the community, resolving trophic relationships and the outcomes of species interactions that sum to uptake at the community level [21]. This approach also enables the detection of subtle diversity effects, which could be masked by key-species effects when studying cumulative processes only (e.g. nutrient fluxes or bioturbation depth [28]). The use of intact cores with natural infaunal communities under controlled laboratory conditions, and the ability to relate macroalgal uptake to the behaviour of individual species and their distribution in the sediment, give greater insight into the mechanisms underlying the relationships between biodiversity and ecosystem functioning than is typical of studies where the contributions of individual species to community interactions cannot be disentangled. The two dominant bivalves on New Zealand intertidal sandflats, the large, mainly surface deposit-feeding, deep-burrowing tellinid Macomona liliana and the large suspension-feeding endemic venerid Austrovenus stutchburyi, influence sediment characteristics and community composition, which affects ecosystem functions such as nutrient fluxes, metabolism and primary production [19,32,34]. We predict that the densities of these large key-species will drive the patterns of uptake of algal detritus by macrofauna (cf. the key-species hypothesis, a variant of the rivet hypothesis [20]), but that higher functional group diversity, as measured by diversity indices, will also contribute to explaining uptake (redundancy hypothesis). Our knowledge of the natural history of the key-species allows us to hypothesise that i) higher densities of M. liliana will facilitate the uptake of macroalgae detritus by sub-surface deposit feeders, since they draw organic material from the sediment surface with their inhalant siphon and defecate at depth, enhancing the concentration of organic matter 5-10 cm below the sediment surface [35]. In contrast, ii) high densities of M. liliana should decrease uptake by small surface-feeding infauna due to exploitative and interference competition in the surface layer (as found for macrofauna-meiofauna interactions [36]). Furthermore, we hypothesise that iii) high densities of the clam A. stutchburyi will facilitate macroalgal uptake by surface-feeding infauna, since clams, if feeding on resuspended macroalgal detritus, would produce organic-rich deposits in the surface sediment, thereby facilitating uptake by other infauna [37].
However, under laboratory conditions, where resuspended material settles again, their bioturbation and mixing of the upper centimetres of sediment [32,38] may rework algal detritus into the sediment and eventually also increase food access for sub-surface feeders. To summarize, in this study we investigate whether relationships between species diversity, functional group diversity and densities of key-species on the one hand, and ecosystem functioning (detritus uptake) on the other, occur in natural communities. We test this using a multiple regression approach; the uptake of algal detritus at both the individual level (per capita uptake by each species) and the community level (total uptake by the whole community) is evaluated in relation to these measures of community structure.

Macroalgal labelling

The macroalgal species Ulva sp., which blooms in estuaries [39] and later decomposes in soft sediments, was collected on Oct 10, 2011, from northern Tauranga Harbour, New Zealand, at low tide. Healthy-looking thalli were rinsed in GFC-filtered seawater and distributed among aquaria at an Ulva to seawater ratio of 10 g ww L⁻¹. Two days later, we labelled the Ulva with stable carbon and nitrogen isotopes by adding 5% Na¹⁵NO₃, 10% (¹⁵NH₄)₂SO₄ and 99% NaH¹³CO₃ to the seawater, in quantities similar to Rossi [40]. We also added KH₂PO₄ according to the Redfield ratio to improve growth conditions and hence ensure that assimilation of the isotopes would result in sufficient isotope enrichment. The Ulva was kept in a constant-temperature room set at 18°C on a 12 h light:dark cycle for 6 d. The thalli were then carefully and repeatedly rinsed in MilliQ water, quickly dried using paper towels, freeze-dried and ground to a fine homogenised powder using a ball mill. The labelled macroalgae were sampled for stable isotope analyses (see below) and stored frozen until the start of the experiment. Isotope analyses confirmed a strong labelling of the Ulva material (δ¹⁵N = 9597 ± 95‰, δ¹³C = 1745 ± 11‰) compared to unlabelled Ulva (δ¹⁵N = 8‰ and δ¹³C = −12‰).

Collection of intact cores

Salinity and temperature were 29.3 and 16.7°C on the outgoing tide and 25.9 and 20.1°C on the incoming tide on the day of sampling. The distinct feeding tracks of M. liliana and the holes created by anemones (Anthopleura aureoradiata) attached to A. stutchburyi enabled estimates of their respective abundances, so that cores spanning low to high bivalve density could be collected while avoiding destructive sampling of individuals close to core edges. Preliminary sampling indicated higher species richness at the Austrovenus site than at the Macomona site, so the former was sampled more intensively. After sacrificing some cores for initial analyses (see below), there were 41 and 29 experimental cores for the Austrovenus and Macomona sites, respectively, to which labelled Ulva was added. Back at the laboratory, cores from the two sites were randomly allocated to 12 tanks connected to a flow-through seawater system that generated a 12 h tidal cycle with a 6 h immersion/emersion period. The cores were fitted with an 800 μm mesh net around the circumference of the core, extended above the simulated "high tide" mark, to prevent amphipods escaping. An 800 μm mesh net also covered the base of each core so that water could drain through the sediment with the rise and fall of the "tide". The thermo-constant laboratory had windows, which allowed natural light to reach the cores (PAR 4.3 ± 2 μE, 15 cm above the sediment surface).
The light:dark cycle was 12:12 h (8 am:8 pm). Artificial saltwater was used in the experiment (salinity 29.4) and the temperature was set at 19°C.

Start of the experiment

The cores were left to acclimatize for two tidal cycles. At low tide the next day (23 Nov), 0.60 ± 0.01 g dw of the labelled, finely ground Ulva was mixed with 20 ml of seawater and added to each core by carefully spreading it evenly on the sediment surface using a Pasteur pipette. Recovery of added Ulva from sub-sampled sediment in six cores containing few (< 2) M. liliana and A. stutchburyi after 24 h was 97 ± 3%, supporting visual observations (S1 Fig) and verifying that detrital recovery at the end of the experiment could be attributed to faunal activity rather than to resuspension and loss due to the simulated tidal cycles.

Experimental procedures and termination of experiment

The experiment was checked twice a day, and occasionally dead A. stutchburyi were carefully removed from the surface sediment. After 10 d, each core was sieved on a 500 μm mesh and the fauna preserved in 70% ethanol until sorted to species level under a stereomicroscope. All specimens were counted and biomass was measured (after drying at 60°C) or estimated. For larger polychaetes, which were often incomplete, a width-biomass relationship (r² = 0.84-0.92) was established from intact individuals of each species [41]. Bivalves were weighed without shells, since we were interested in macroalgal uptake into organic material. The species abundances (core⁻¹) and total biomass are given in S1 and S2 Tables. The eleven most common macrofaunal species were selected for isotope analyses. Within species, similar-sized individuals were selected to minimize biomass/growth-dependent enrichment [42]. Only adult individuals were used for isotope analyses, with the exception of Naineris sp., which was present as juveniles only. For abundant species with a small biomass (Aonides trifida, Prionospio aucklandica, Naineris sp.), the first 10-20 individuals encountered (to obtain enough biomass for analyses) from each core were collected and transferred to a pre-weighed tin capsule. For amphipods (Parawaldeckia sp.), about 6 individuals core⁻¹ were used. Larger species (Scoloplos cylindrifer, Orbinia papillosa, Nereis sp., Heteromastus filiformis, Nucula sp., M. liliana and A. stutchburyi) were weighed or measured individually, then pooled and homogenised to obtain a representative sample for isotope analyses from each core. Other species either had too small a biomass for isotope analyses or did not occur in enough cores to allow statistical analysis; however, a few of these additional species were screened for enrichment to improve the community uptake estimates.

Isotope analyses and calculations

Aliquots (about 2 mg dw) of the samples for isotope analyses of carbon and nitrogen were packed in tin capsules and analysed at the Chemistry Department, University of Otago, in a Carlo Erba NA1500 elemental analyser coupled to a Europa 20/20 mass spectrometer. Internal standards, calibrated against international standards, were run in each batch of samples. The average standard deviation for all runs was ± 0.2 for δ¹⁵N and ± 0.1 for δ¹³C. The C and N isotope ratios are expressed in the ‰ notation, using the equation

δX = (R_sample / R_standard − 1) × 10³

where R is the ratio between the heavy and light isotopes (¹³C:¹²C or ¹⁵N:¹⁴N). The stable isotope ratio, denoted by δ, is defined as the deviation in ‰ from an international reference standard (Vienna PeeDee Belemnite for C, and atmospheric nitrogen gas for N).
Higher δ values indicate a higher proportion of the heavy isotope. To quantify the macroalgal (Ulva sp.) nitrogen (N) taken up in faunal tissue, a linear two-source mixing model was used [21]:

δ¹⁵N_sample = f1 × δ¹⁵N_Ulva + f2 × δ¹⁵N_sediment, with f1 + f2 = 1,

where f1 is the proportion of Ulva N in the animal sample and f2 is the proportion of N derived from the initial sediment. The amount (mg) of Ulva-N taken up by each animal was calculated from the mixing model (proportion of N from Ulva) and the total N content (mg) of the animal. This amount was extrapolated to the number of individuals of that species found in the core. To obtain the community uptake of Ulva-N, the species-specific total uptake values (based on core-specific density) were summed. Uncorrected δ values were used in the mixing model, since species-specific differences in fractionation [43] and fat content [44] were negligible compared to the strong labelling. C uptake is not shown, since δ¹³C and δ¹⁵N enrichment were highly correlated for all species (Pearson product-moment correlation r > 0.95; δ¹³C, δ¹⁵N, and C and N content (%) in benthic fauna can be found in S3 Table).

Functional group categorisation and selection of species for statistical analyses of uptake

All species were included in a biological traits matrix containing 32 traits based on an organism's living position, the sediment topographic features it creates, the direction of sediment particle movement, the degree of motility, feeding behaviour, body size, shape and hardness [45,46]. Based on these traits, species were assigned to functional groups [47] (Table 1). The large key-species (M. liliana and A. stutchburyi) separated into single-species functional groups (deposit-feeding bivalves and suspension-feeding bivalves). The overall prediction was that the densities of these two functional groups (species), as well as the functional group and species diversity of the community, would determine per capita uptake by infauna, and hence total community uptake (the summed uptake of all community members). Species diversity and functional group diversity were calculated for each core using Shannon's H', which accounts for both the abundance and the evenness of the species (or functional groups) present and is therefore a density-independent measure of community composition. In addition to key-species densities and diversity indices, we also included the densities of another four functional groups as explanatory variables in the statistical analyses, since their biomass and/or abundance dominated community structure (i.e. they constituted 73 ± 13% and > 95% of total abundance and biomass, respectively). The four additional functional groups selected were: "Head-down deposit feeders" (also represented by only one species, the capitellid Heteromastus filiformis), "Large, mobile deposit-feeding polychaetes" (mainly orbiniids, dominated by S. cylindrifer), "Large, mobile predators/scavengers" (mainly nereids and nemertines) and "Small, surface-deposit-feeding polychaetes" (mainly spionids, highly abundant). See Table 1 for details on the classification of all functional groups and Table 2 for the site macrofauna metadata. As response variables in the statistical approach taken (described under Data analyses and statistics), we used the δ¹⁵N enrichment of the ten most abundant species (Table 1; one test for each species) and the total uptake of Ulva-derived nitrogen by the macrofaunal community. Only three of the selected species were abundant at both sites (see Results), so statistical analyses were restricted to within-site comparisons, with the exception of community uptake.
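A minimal sketch of the mixing-model uptake calculation described above is given below. The enrichment, N content and density values are hypothetical, and the sediment end-member is assumed to sit near natural abundance; only the Ulva end-member follows the labelling values reported earlier.

# Two-source mixing model: f1 = (d_sample - d_sediment) / (d_ulva - d_sediment).
D15N_ULVA = 9597.0      # labelled Ulva end-member (per mil), from the labelling check
D15N_SEDIMENT = 8.0     # assumed initial-sediment end-member (per mil)

def ulva_fraction(d15n_sample):
    """Proportion f1 of N in an animal sample derived from labelled Ulva."""
    return (d15n_sample - D15N_SEDIMENT) / (D15N_ULVA - D15N_SEDIMENT)

def ulva_n_per_core(d15n_sample, n_content_mg, n_individuals):
    """Ulva-derived N (mg) for one species in one core: mixing-model fraction
    times the N content of one individual, scaled by core-specific density."""
    return ulva_fraction(d15n_sample) * n_content_mg * n_individuals

# Hypothetical species: delta15N = 250 per mil, 0.8 mg N per individual, 12 per core.
print(f"{ulva_n_per_core(250.0, 0.8, 12):.2f} mg Ulva-N in this core")

Community uptake is then obtained by summing these species-specific totals over all species present in the core.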
Differences in isotope enrichment among species depend partly on differences in feeding mode [21,48] and partly on differences in growth rate and metabolic turnover, resulting in differences in the time needed to reach isotopic equilibrium with the diet [42]. For this reason, we avoided statistical comparisons of δ15N enrichment among species (for simplicity, referred to as per capita uptake throughout the paper). Since A. stutchburyi did not show any per capita uptake, it was excluded as a response variable (but still included as a predictor variable, see above).

Table 2. Macrofaunal metadata. Differences between the Austrovenus and Macomona sites in terms of infaunal species richness, functional group richness (FG), Shannon diversity index for species (H'SP) and functional groups (H'FG), total density of individuals and the density of the key FG (see Table 1 for explanations of abbreviations). Values are mean ± 1 SD. Headings in bold are predictors for statistical analyses.

Data analyses and statistics

Differences in macrofaunal community composition, biomass and total macroalgal N uptake between sites. Multidimensional scaling using principal coordinate analysis (PCO) and permutational ANOVA (Permanova), as implemented in PERMANOVA+ of PRIMER v6 [49], were used to assess inter-site differences in community biomass and in species and functional group composition. Analyses were based on the Bray-Curtis similarity index and fourth-root transformed abundance data [49]. For PCO analyses, we considered species/functional groups with a Spearman correlation > 0.6 with either of the first two ordination axes as significantly contributing to the difference between sites. Community biomass and community uptake of Ulva-derived N, calculated for each core (see Methods), were tested for differences between sites using Permanova. Biomass-normalised N uptake (core-specific total uptake divided by the total biomass of the core, using only those species contributing to uptake, i.e. excluding A. stutchburyi biomass) was tested in the same way to account for inter-site differences in biomass.

Predictors of community macroalgal N uptake. To test the overall prediction that community macroalgal N uptake was determined by the density of functional groups and the functional diversity of the community, the relationship between community macroalgal N uptake and selected explanatory variables (which included total community biomass (all species) and those listed in Table 2) was assessed for each site separately, using distance-based linear models (DistLM) in PERMANOVA+ of PRIMER v6 [49]. DistLM is a multiple regression routine in which a resemblance matrix (in this case based on the Bray-Curtis distance of community macroalgal N uptake values, using cores as samples) is regressed against a set of explanatory variables. Prior to analyses, both the response data and the explanatory variables were square-root transformed to improve normality. Skewness of the explanatory variables was inspected using pair-wise Draftsman plots of all variable combinations. The explanatory variables were generally not strongly correlated with each other (Pearson's r < the critical 0.95 according to [49]) and distributions were not strongly skewed. See Table 3 for relationships between the explanatory variables at each site and for sites combined. Marginal DistLM was first used to determine which variables accounted for a significant proportion of N uptake when considered alone in the model, ignoring all other variables.
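The following is a rough, single-response OLS analogue (not PRIMER's DistLM routine) of the marginal tests just described, together with the AICc-based 'best' subset selection applied next; predictor names and data are hypothetical.

```python
# Sketch of marginal tests plus AICc-based all-subsets selection,
# as an ordinary least squares analogue of DistLM; data are hypothetical.
import itertools
import numpy as np

def aicc(rss, n, k):
    """Corrected Akaike information criterion for an OLS fit with k predictors."""
    p = k + 1  # predictors plus intercept
    aic = n * np.log(rss / n) + 2 * p
    return aic + (2 * p * (p + 1)) / (n - p - 1)

def fit(y, X):
    """Least squares fit; returns residual sum of squares and R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    rss = float(resid @ resid)
    r2 = 1 - rss / float(((y - y.mean()) ** 2).sum())
    return rss, r2

rng = np.random.default_rng(1)
n = 40
predictors = {"FG3_density": rng.gamma(2, 5, n),
              "H_FG": rng.uniform(0.5, 2.0, n),
              "FG5_density": rng.gamma(2, 3, n)}
# Hypothetical community uptake driven mainly by FG3 density:
y = 0.1 * predictors["FG3_density"] + rng.normal(0, 0.3, n)

# Marginal tests: each predictor considered alone.
for name, x in predictors.items():
    _, r2 = fit(y, x.reshape(-1, 1))
    print(f"marginal {name}: R^2 = {r2:.2f}")

# 'Best' selection: evaluate all predictor subsets, rank by AICc.
names = list(predictors)
best = min((aicc(fit(y, np.column_stack([predictors[m] for m in combo]))[0], n, len(combo)), combo)
           for r in range(1, len(names) + 1)
           for combo in itertools.combinations(names, r))
print("best model by AICc:", best[1])
```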
The variables included in the final DistLM models for each species and site were selected using the 'best' selection procedure, which evaluates all possible combinations of explanatory variables to determine which combination accounts for the greatest proportion of uptake explained (model R²), based on the corrected Akaike information criterion (AICc). To remove the effect of differential biomass between sites, biomass-normalized community uptake from both sites was tested in a DistLM with the addition of a categorical factor, site (Macomona or Austrovenus).

Predictors of per capita uptake (individual δ15N enrichment). The association between δ15N isotope enrichment (per capita uptake) and the selected explanatory variables was assessed for each species and site separately using DistLM, as described above. This resulted in 12 species-specific models: nine for the Austrovenus site and three for the Macomona site. Since the main purpose of these individual uptake models was to generalize among responses and predictors, we present only significant marginal results and the variables included in the 'best' model based on AICc. For those models where AICc values were within 2 units, the model with the highest explanatory power was chosen rather than the most parsimonious model, since the purpose was to find the combinations of species that would best explain enrichment patterns. The specific hypotheses relating to the effects of key-species feeding mode on macroalgal N uptake by surface- and subsurface-feeding infauna were tested by examining whether A. stutchburyi (FG1) or M. liliana (FG2) was included in the best model for a species with those particular feeding modes (Table 6).

Community composition and sediment characteristics at the Macomona and Austrovenus sites

At the Austrovenus site, 30 macrofaunal species and 13 functional groups were encountered, while at the Macomona site only 22 species and 9 functional groups were encountered (Table 2). There was a significant difference in macrofaunal community composition (based on species abundance) between the Macomona and Austrovenus sites (Permanova, Pseudo-F1,67 = 63.28, p = 0.0001, Fig 1A, S1 Table). The same clear separation between sites was obtained when functional group composition was used (Permanova, Pseudo-F1,67 = 64.54, p = 0.0001, Fig 1B, Table 2). However, the species which dominated the biomass (A. stutchburyi, M. liliana and the orbiniid Scoloplos cylindrifer) were present at both sites. Other common infaunal species and taxa occurring at both sites were the polychaetes Nereis sp., Prinospio aucklandica, Scolecolepides benhami, Scolelepes sp. and Naineris sp., the amphipod Parawaldeckia sp., the anemone Edwardsia sp., and nemertines and oligochaetes.

Community uptake of macroalgal N in relation to predictors

Community infaunal biomass was similar between sites, 0.56 ± 0.25 (Austrovenus site) and 0.53 ± 0.32 mg core⁻¹ (Macomona site) (Fig 3A); however, the macrofaunal community at the Austrovenus site had taken up approximately three times more Ulva-N than at the Macomona site (0.83 ± 0.86 vs 0.25 ± 0.13 mg; Permanova, Pseudo-F = 16.779, p = 0.0001). This difference was even more pronounced (5 times) after normalizing uptake by the enriched biomass, since A. stutchburyi did not contribute to uptake (Pseudo-F = 21.877, p = 0.0001). Two species, M. liliana and S. cylindrifer, were mainly responsible for the amount of Ulva-derived N taken up in faunal biomass during the experiment (Fig 3B).
S. cylindrifer took up on average 89% of this nitrogen at the Austrovenus site and 33% at the Macomona site, whereas M. liliana took up 6% and 55% at the respective sites. This can be compared with the average contribution to community biomass by the same species: at the Austrovenus site, A. stutchburyi, M. liliana and S. cylindrifer constituted 39%, 33% and 17%, respectively, and at the Macomona site, 7%, 90% and 1%, respectively (Fig 3A). Of the other species, only Naineris sp. (Macomona site, 7%) and Nereis sp. (Austrovenus site, 3%) contributed more than 1% to total community uptake. Accordingly, marginal tests showed that FG3 (i.e. S. cylindrifer) explained most of the variance in total community uptake (Tables 3 and 4, Fig 4). Using biomass-normalized data across sites, the same two species, as well as head-down feeders (i.e. H. filiformis) and site, explained most of the variance. When the biomass effect of M. liliana is removed, the combined-sites analysis shows that it has a negative effect on community uptake (in agreement with per capita uptake, which is biomass independent). Even though not ranked as the most important predictors, it is worth noting that both functional group diversity (as predicted) and species diversity were significant in marginal tests or included in the 'best' model (Tables 4 and 5, Fig 4).

Table 4. Predictors of total community uptake of Ulva-derived nitrogen. DistLM marginal test results reporting the proportion of total community N uptake at the Austrovenus (n = 41) and Macomona (n = 29) sites and both sites combined (biomass normalized) explained by diversity indices and FG densities (see Tables 1 and 2 for definitions of abbreviations).

Per capita uptake in relation to predictors

The results from site- and species-specific DistLM analyses of per capita uptake (δ15N enrichment at the species level) are summarised in Table 6 (both marginal tests and 'best' models), and significant relationships are shown in Fig 5. At the Austrovenus site, M. liliana had, as hypothesised, a negative effect on the per capita uptake of the two surface feeders (P. aucklandica and Parawaldeckia sp.) and a positive effect on H. filiformis (Table 6). H. filiformis, in turn, was positively associated with per capita uptake in species representing different feeding modes (Fig 5). As predicted, this was also the case for species and functional group diversity, which were positively correlated with per capita uptake in one and three species, respectively. Per capita uptake of the larger species (S. cylindrifer, M. liliana) had a lower proportion of variance explained than that of smaller species (this was true for both sites). Naineris sp. (abundant only at the Macomona site) also had no clear relationship to any of the explanatory variables included in the analyses. (Fig 4: see Table 4 for details on the statistical models.)

Table 5. 'Best' model of total community uptake of Ulva-derived nitrogen. Results from the 'best' model selection procedure for different numbers of predictor variables at the Austrovenus site, the Macomona site and both sites pooled (biomass normalized). AICc denotes the corrected Akaike information criterion and R² is the total cumulative variance explained by the model. See Tables 1 and 2 for definitions of abbreviations.

Discussion

This study shows that the densities of only a few species in natural communities strongly influence the community uptake of macroalgal detritus.
Using isotopically labelled macroalgae, we were able to relate macroalgal detrital uptake to the ecological role of individual species and to demonstrate the importance of key-species densities in influencing ecosystem functioning. Using natural communities restricts us to a correlative statistical approach, which should not be confused with the species-substitution approach commonly used in traditional biodiversity-ecosystem functioning studies. Still, the use of an isotope tracer provides greater insight into the mechanisms underlying the relationships between biodiversity and ecosystem functioning than is typical of studies where cumulative processes such as nutrient fluxes are the only endpoints measured, e.g. [28]. Further, by using intact benthic communities with known functional traits, complex direct and indirect interactions among naturally co-occurring species could be discerned. (Fig 5: see Table 6 for details on the statistical models and Tables 1 and 2 for definitions of abbreviations.) Although the core constrains the mobility of the species, those selected a priori as key-species are sedentary and likely to be less affected. Similarly, most of the species analysed for isotope enrichment are small in body size, and the cores could be thought of as mesocosms rather than microcosms. The only exceptions are the few large and mobile polychaete species encountered, and the uptake models for these species accordingly explained a lower proportion of the variance. It is, however, possible that in the field their uptake rates are influenced more by environmental conditions than by community structure, owing to their mobility. By sampling gradients in the density of a priori selected key-species and measuring detrital uptake (the first step in benthic secondary production), our study bridges a gap between controlled experiments with selected species combinations and field data, where environmental conditions are difficult to control.

Uptake of macroalgal (Ulva) nitrogen by the whole community was three-fold (or five-fold when normalized for biomass) greater at the Austrovenus-dominated site than at the Macomona site. Previous studies have documented the importance of Austrovenus stutchburyi for ecosystem functioning due to the physical properties of the bivalve bed and biological activities such as elevating sediment organic content through biodeposition [19,32]. Our study demonstrates that A. stutchburyi also indirectly facilitates detrital uptake and food web efficiency in benthic infaunal communities, since its density was positively associated with higher functional diversity, higher species diversity and higher densities of the head-down feeder Heteromastus filiformis (FG5, Table 3), which, in turn, were all positively correlated with higher isotope enrichment at the individual level (hereafter referred to as per capita uptake of the Ulva nitrogen) for several species (Table 6). Further, A. stutchburyi density was positively related to per capita uptake in four species, and it was the variable contributing most to explaining per capita uptake by M. liliana (at both sites). Although A. stutchburyi is a suspension-feeder, we expected some of the detritus, which was added as a fine powder to the sediment surface, to be resuspended by bioturbation activities and thereafter consumed and assimilated by the clam [50].
This was, however, not the case during the experimental period, perhaps due to the slow growth and metabolic turnover of bivalve foot muscle tissue (up to 1 year to reach isotopic equilibrium [51]), but perhaps also due to the sheltered hydrodynamic conditions in the experimental set-up minimizing resuspension processes, meaning that our results potentially underestimate its direct contribution to community uptake. Below we discuss mechanistic reasons for the higher community uptake at the Austrovenus site compared to the Macomona site by examining the species-level data.

Higher densities of the head-down feeder H. filiformis, which was absent from the Macomona site, were positively related to per capita uptake in three of the surface-dwelling deposit-feeders (the spionid Prinospio aucklandica, the amphipod Parawaldeckia sp. and the small bivalve Nucula sp.) as well as in the omnivorous Nereis sp. and in H. filiformis itself at the Austrovenus site. It was also included as a predictor of macroalgal uptake in the best model for five species (Table 6). Possibly, buried Ulva detritus was brought back to the surface layers through the feeding mode of this species. Similar positive interactions between head-down feeders and the performance of other species have been found; e.g., Weinberg and Whitlatch [52] reported increased growth of small suspension-feeding bivalves when kept in close proximity to a polychaete with this feeding mode. The other small polychaete, Aonides trifada, was only abundant when H. filiformis densities were low, so this particular relationship could not be properly tested here. It is, however, possible that A. trifada also feeds deeper in the sediment and should not be categorised as a surface-dwelling deposit-feeder, since these two species had very similar initial isotope signatures (Fig 2B): relatively depleted δ13C but enriched δ15N values, indicating feeding primarily on aged organic matter in the sediment [43,53]. In support of this, per capita uptake by A. trifada was not influenced negatively by M. liliana, which feeds mainly in the surface sediment. In other systems, deposit-feeders separate resources by depth in the sediment and/or by feeding on different fractions of the organic matter, e.g. fresh and aged [54,55,36]. Such niche differentiation increases resource utilization and thus promotes a positive biodiversity-ecosystem functioning relationship, as suggested by Karlson et al. [48].

The initial isotope values of the fauna suggest that a broader range of primary producers supports the food web at the more species-rich Austrovenus site than at the Macomona site. Although the aim of this study was not to disentangle the importance of different primary producers to the diet of macrofauna, the more depleted δ13C of A. stutchburyi indicates that phytoplankton and macroalgae are its primary food sources, whereas the enriched δ13C of M. liliana (at both sites) suggests feeding on microphytobenthos and seagrass detritus [50]. The generally more enriched δ15N values at the Austrovenus site compared to the Macomona site could indicate greater microbial conditioning of detritus, which enriches nitrogen isotope values [43], and perhaps also an effect of the higher density of individuals and higher species richness at this site.
Interpretation of these differences, however, requires caution, since the fauna were preserved in ethanol prior to analyses, which may enrich δ13C values by a few ‰ [56], although other studies have found negligible effects of ethanol preservation on δ13C or δ15N, e.g. [57].

In contrast to the positive effect of the head-down feeding H. filiformis, and as hypothesised, higher densities of M. liliana were negatively associated with per capita uptake in two surface-feeding species, P. aucklandica and the amphipod Parawaldeckia sp. (both in marginal tests and in the best-model results). This is likely due to the removal of added detritus from the surface sediment to deeper layers by the bivalve, partly through consumption and defecation (as evident from the enriched isotope signal in M. liliana tissues demonstrating uptake of Ulva-derived nitrogen). There is a similar situation in the species-poor Baltic Sea, where the functionally and morphologically similar deposit-feeding bivalve Macoma balthica reduces access to food for other surface-feeding species, including amphipods [58,48], and through interference competition lowers uptake rates of phytodetritus by meiofauna [36]. An alternative explanation is that increased oxygenation from the feeding mode of bivalves results in rapid mineralization of the organic matter by the bacterial community [21]. M. liliana generates pore-water pressure gradients during its feeding and burrowing behaviour that may stimulate bacterial activity through alteration of sediment oxygen dynamics [35].

The hypothesised increase in macroalgal uptake by sub-surface feeders, i.e. H. filiformis, with higher densities of M. liliana (defecating at depth) was partly supported by our data (Table 6). Even more important in predicting H. filiformis per capita uptake was, however, higher functional group diversity (Shannon H'FG), suggesting that more of the added material reached deeper into the sediment when more bioturbation modes were present. In a modelling study, Solan et al. [59] found that loss of species richness leads to a decline in bioturbation depth. Larger species, e.g. M. liliana, S. cylindrifer and Nereis sp., generally had a lower proportion of their respective per capita uptake explained by the densities of other species/functional groups. For the polychaetes, this is most likely due to their mobility, which enables them to feed in the whole sediment column. Interestingly, the functional group of large scavengers was selected in the best model for these species as well as for H. filiformis. We speculate that pre-conditioning of the refractory macroalgal food source resulting from the feeding activities of, e.g., Nereis sp., an opportunistic omnivore (and the first species to show high uptake of isotopically labelled Ulva in the field after only 1 d of incubation), facilitates uptake for the other species. This pre-conditioning is not likely to influence the isotope signal of the Ulva food source, since isotope fractionation effects are negligible compared to the strong enrichment from the labelled macroalgae, especially in a 10 d experiment. Bioturbation activities by Nereis sp. result in spatially redistributed food sources, improving their availability to bacteria and hence promoting stable co-existence through such scale-based partitioning of resources [60]. Species diversity (Shannon H'SP) was positively associated with isotope enrichment in only one species, Nucula sp.
(both in marginal tests and selected in the best model), while functional group diversity was significant for three species (Nucula sp., P. aucklandica, H. filiformis), although it was only selected in the best model for H. filiformis. Interestingly, not only the per capita uptake but also the density of H. filiformis itself was significantly positively correlated with functional group diversity (Table 3). The negative (M. liliana) and positive (H. filiformis) effects of key-species density on per capita uptake in smaller surface-feeders were also mirrored when their density was considered as a response variable. For example, P. aucklandica density was negatively correlated with M. liliana density (Spearman ρ = −0.34, p < 0.05, S1 Table), whereas Nucula sp. and P. aucklandica densities were positively correlated with H. filiformis density (ρ = 0.50-0.75, p < 0.05, S1 Table). On a larger scale, these similarities between uptake and abundance could help explain why few spionids were found at the M. liliana-dominated site. In a field experiment, Thrush et al. [19] removed large M. liliana, which resulted in increased densities of P. aucklandica and A. trifada. In agreement with these findings, Baltic Sea clam and amphipod abundances are negatively correlated in the field, and so are their uptake rates in laboratory experiments [61,48]. Moreover, the negative relationship between meiofaunal uptake rates and macrofaunal species diversity due to interference competition found in experimental work agrees well with field data on meiofaunal abundance and biomass, both of which decrease with higher macrofaunal diversity [36].

Although the spionid P. aucklandica, the amphipod Parawaldeckia sp. and the orbiniid Naineris sp. all had high per capita uptake and high densities, their small body mass (and hence low body nitrogen content) still down-weighs the importance of these species for total community uptake of macroalgal nitrogen. In contrast, M. liliana, which had a low per capita uptake during the experiment, most likely due to its slower growth and turnover relative to polychaetes and amphipods, was nevertheless the most or second-most important species for total macroalgal nitrogen uptake in the community when its large body size is taken into account. The orbiniid S. cylindrifer had both the highest per capita uptake and a large body size, meaning that a large amount of Ulva-derived nitrogen was taken up in its tissues (Figs 2 and 3), suggesting it is a key-species for the conversion of detritus to secondary production in this ecosystem. Due to competition, or other factors, the other orbiniid species had either too small a body mass (Naineris sp.) or too low an abundance and too low a per capita uptake (O. papillosa) to replace the function of S. cylindrifer. The fact that S. cylindrifer dominated macrofaunal community uptake suggests little redundancy for this particular ecosystem function during the initial rapid breakdown of macroalgal detritus. M. liliana and H. filiformis, on the other hand, were the only representatives of their respective functional groups, meaning that species and functional identity cannot be differentiated; hence, it is impossible to distinguish between the redundancy and rivet hypotheses. Indirectly, however, our results lend some support to the redundancy hypothesis, since functional group diversity (Shannon's H'FG) contributed significantly to explaining both per capita uptake and total community uptake.
As expected from the redundancy hypothesis, functional group diversity correlated positively with community uptake only at the Macomona site, which had low numbers of species and functional groups (Fig 4A). The observation that an ecosystem process rate saturates at a rather low number of species has been made in experimental work with synthetic assemblages representing, e.g., soil communities, but is rarely shown in natural assemblages [2,14,62]. However, the relatively short duration of the experiment limits uptake of the labelled nitrogen by slow-growing or predatory (or omnivorous) animals. It is likely that the importance of species richness for detrital uptake increases over larger spatial and temporal scales, as has been shown for ecosystem processes (e.g. biomass production and cover) in both terrestrial and aquatic systems [2,63,64].

S. cylindrifer had no effect on per capita uptake in other species (with the possible exception of Parawaldeckia sp.). H. filiformis, on the other hand, did not have a large uptake itself but instead facilitated uptake for surface-feeders through its unique bioturbation mode or by pre-conditioning the detritus into finer particles or more palatable material. The effect of M. liliana, also the only representative of its functional group (deposit-feeding large bivalves), was more ambivalent, since it negatively influenced per capita uptake of other, smaller surface-feeders through exploitative and/or interference competition. However, as hypothesised, it had a positive effect on H. filiformis, which in turn was positively associated with uptake rates of other community members. Finally, the large size of M. liliana resulted in this species dominating community uptake of macroalgal nitrogen at the Macomona site, supporting the importance of large body size for ecosystem functioning [33,36,65].

In conclusion, our results demonstrate the importance of species identity, body size and density for ecosystem functioning, showing that large key-species determine the uptake of algal detritus by macrofauna. These findings highlight the complex interactions underlying the loss of ecological services and underscore the importance of understanding compositional and density changes of key-species with declining biodiversity.
A Study of Bacterial Vaginosis and Associated Risk Factors among Married Women in Zakho City, Kurdistan Region, Iraq

Bacterial vaginosis (BV) is a leading cause of reproductive tract problems, affecting mostly women of reproductive age worldwide. The aim of this study was to determine the infection rate of BV and evaluate the associated risk factors among married women in Zakho city, Iraq. This cross-sectional study was performed among 150 women of reproductive age from October 2021 to April 2022. A structured questionnaire was administered to record demographic data, risk factors and clinical characteristics. Vaginal swabs were collected from each subject and used for microscopic examination, including wet mount, vaginal pH, germ tube and Gram stain methods, to determine the infection rate. Univariate regression analysis was applied to determine the relationships between BV and the associated risk factors and clinical characteristics. The average age of the participants was 32.64 years (±8.01 SD). The prevalence of BV among the married women was 27.33% (41/150). Twelve (8%) and one (0.67%) of the participants had mixed infections with Candida albicans and Trichomonas vaginalis, respectively. BV was found mostly in the age group under 20 years (41.67%), followed by the 40-50 years age group (37.93%). We found a higher infection rate among subjects from rural areas (34.78%), but the difference was not statistically significant (p = 0.17). A higher number of births was statistically associated with BV (OR 1.17, 95% CI 1.006-1.37; p = 0.003). BV was also strongly associated with abnormal vaginal discharge among symptomatic patients (OR 4.18, 95% CI 1.89-9.23; p = 0.002), genital ulcer (OR 0.34, 95% CI 0.13-0.84; p = 0.01) and a vaginal pH above 4.5 (OR 0.009, 95% CI 0.002-0.043; p = 0.001). BV is still prevalent among the married women in our region, and the higher infection rate was significantly associated with a higher number of births, vaginal discharge, genital ulcer and higher vaginal pH. Regular screening for bacterial vaginosis among symptomatic women is urgently required; early detection of the risk factors associated with BV is therefore critical to improving the health of married women and preventing the condition.

I. INTRODUCTION

Bacterial vaginosis (BV) is a leading cause of reproductive system problems, specifically among women of childbearing age, worldwide (Bertini, 2017). It is generally characterized by a rise in vaginal pH, a discharge (mainly gray in color) with a fishy smell after adding 10% KOH, the occurrence of clue cells, and an overgrowth of facultative and anaerobic bacteria (Allsworth and Peipert, 2007). Previous studies have found that BV is a major risk factor for adverse gynaecological and obstetric outcomes in both pregnant and non-pregnant women, and it is considered to play a major role in the transmission of sexually transmitted infections (STIs) (Fethers et al., 2008). BV is a clinical condition characterized by vaginal dysbiosis, caused by a decrease in H2O2-producing Lactobacillus and by a polymicrobial flora including Gardnerella vaginalis (Allsworth and Peipert, 2007). Moreover, BV is detected in about 10-31% of adolescent girls, but it is more common among sexually active women, with prevalence rates as high as 55-60% in high-risk populations (Yen et al., 2003).
In general, the infection rate is higher in Africa (Kenyon et al., 2013), but it has been found to vary with ethnicity and race in different parts of the world (Alcendor, 2016). It is well known that BV during pregnancy can increase pregnancy complications such as spontaneous abortion, postpartum infections such as endometritis, premature rupture of membranes, chorioamnionitis, preterm delivery and low birth weight (Shimaoka et al., 2019). The risk factors and causes of abortion and miscarriage among married women have been extensively reported in Zakho city, but no studies have been conducted on BV (Naqid et al., 2020a; Naqid et al., 2019). A few indications concerning the cause of BV have been discovered by evaluating epidemiologic factors; women with BV were more likely to use intrauterine devices and other contraceptives (Yen et al., 2003). Furthermore, females with BV are more susceptible to acquiring other sexually transmitted infections (STIs) such as trichomoniasis, gonorrhea and chlamydia, and BV raises the risk of pre-term delivery among pregnant women (Fethers et al., 2008). Several studies have reported associations between BV and socio-demographic, clinical and behavioural characteristics among women (Achondou et al., 2016; Bitew et al., 2017); however, other researchers have found no significant association with some of these risk factors (Shayo et al., 2012). Several other reports have implicated sexual activity in BV, while others have observed BV in sexually inexperienced women (Verstraelen et al., 2010). That study suggested that BV should be thought of as a sexually associated infection rather than a sexually transmitted infection (Verstraelen et al., 2010). Another report found a significant association between BV and multiple sex partners (Fethers et al., 2008). In several countries, studies have assessed the prevalence of infection among pregnant women (Shayo et al., 2012; Verstraelen et al., 2010), and in Zakho city high bacterial resistance profiles have been reported (Naqid et al., 2020b; Naqid et al., 2020c; Naqid et al., 2020d), but there are very limited data from the Kurdistan Region, Iraq, regarding the infection rate of BV and its risk factors among married women. To avoid complications due to BV among married women, assessment of the prevalence of BV and its associated risk factors is essential for therapy and prevention control programs. Thus, the aim of the present study was to evaluate the infection rate of bacterial vaginosis and its risk factors among married women in Zakho city, Kurdistan Region, Iraq.

II. MATERIALS AND METHODS

A. Study design and sampling

The present study was conducted as a cross-sectional study among married women attending the Private Specialist Laboratory and the Obstetrics and Gynecological Hospital in Zakho, Duhok, Kurdistan Region, Iraq. A total of 150 married women with symptoms of vaginitis and cervicitis visited these facilities, with ages ranging from 18 to 48 years.

B. Sampling technique and data collection

Data were collected from October 2021 to April 2022 using a structured questionnaire covering participants' sociodemographic and clinical characteristics.
Demographic data included age, educational level, place of residence, smoking, type of contraceptive used, past medical history and recurrent infection, while the clinical data collected comprised symptoms, presence of vaginal discharge, colour of discharge and vaginal pH. All married women who complained of vaginal discharge, vaginal itching, painful intercourse or painful urination were considered symptomatic. After physical and gynaecological examination, high vaginal swabs (HVS) were immediately collected from each subject following insertion of a sterile, unlubricated speculum into the vagina. Two sterile vaginal swabs were taken and the samples labelled properly. One was placed on a glass slide with normal saline and 10% KOH for the wet mount method; the other was smeared on a clean glass slide, air-dried, heat-fixed, and then Gram stained. The prepared slides were examined under the ×40 and ×100 (oil immersion) objectives for the detection of "clue" cells, and the wet mount was examined under the same objectives to investigate Trichomonas vaginalis (TV). Culture on Sabouraud Dextrose agar (SDA) and the germ tube test were also carried out on the samples to isolate and identify Candida albicans; this yeast produces germ tubes after incubation in serum at 37 °C for 2 hours.

C. Identification of Bacterial vaginosis

By international consensus, bacterial vaginosis is diagnosed when at least 3 of the following 4 Amsel clinical criteria are present (Bhujel et al., 2021a): (1) vaginal pH > 4.5; (2) a thin, grayish-to-white discharge coating the vagina and vestibule; (3) a positive whiff test, i.e. a fishy odour after adding 10% KOH; and (4) detection of clue cells under microscopic examination.

D. Inclusion and Exclusion Criteria

The inclusion criteria were married women aged 18 years or older who agreed to participate in the present study. Participants were excluded if they were unmarried, did not agree to participate, had previously received antibiotics for vaginitis or cervicitis, or were older than 48 years.

E. Ethical Approval and Informed Consent

Participation was voluntary and informed consent was obtained from each subject. The protocol and procedures of this study were approved by the Scientific Committee of the Shekhan Technical College of Health, Duhok Polytechnic University, Iraq, and by the Duhok Governorate's Research Ethics Committee (reference number: 15092021-9-4).

F. Statistical analysis

Significant differences between variables were assessed using GraphPad Prism version 8. Descriptive information was expressed as frequencies and percentages. Univariate logistic regression analysis was used to analyze the relationship between BV and associated risk factors according to demographic and clinical characteristics, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated. The significance level was set at p < 0.05.

III. RESULTS

A. Characteristics of study participants

Overall, 150 married women participated in the present study; their demographic and clinical features are presented in the accompanying tables.

D. Risk factors associated with bacterial vaginosis

Univariate logistic analysis was applied to assess the relationship between risk factors and demographic characteristics (Table 4).
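To make the reported OR and 95% CI figures concrete before turning to the results, here is a minimal sketch (not the authors' code; counts are hypothetical) of the standard 2x2-table odds ratio with a Wald confidence interval:

```python
# Univariate odds ratio with a Wald 95% CI from a 2x2 table; hypothetical counts.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: BV status cross-tabulated against abnormal discharge.
or_, lo, hi = odds_ratio_ci(a=30, b=40, c=11, d=69)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```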
In this analysis, none of the possible risk factors was significantly associated with BV except the number of births (OR 1.17, 95% CI 1.006-1.37; p = 0.003). The highest rate of infection was recorded among women aged less than 20 years (41.67%), followed by the 40-50 years age group (37.93%). The prevalence of BV did not differ significantly among education levels (p = 0.89). With respect to residence, the highest prevalence was among participants from rural areas (34.78%), but the difference was not significant (p = 0.17). The rate of infection was slightly higher among married women who used an intrauterine device as a contraceptive (44.44%) than among users of other contraceptives, but this difference was also not significant (p = 0.61). The other investigated risk factors showed no significant influence, as presented in Table 4, and the other clinical characteristics of the participants likewise showed no significant influence (Table 5).

IV. DISCUSSION

BV is a leading cause of reproductive system problems among women of childbearing age globally (Bertini, 2017). It has previously been reported that BV is a potential risk factor for adverse gynaecological and obstetric outcomes, and it is thought to play an important role in the transmission of sexually transmitted infections (Fethers et al., 2008). This study was performed to analyse the relationship between bacterial vaginosis, risk factors and clinical characteristics among participating women in Zakho city, Iraq. The infection rate reported in different studies varies among populations (11-71%) (Georgijević et al., 2000). In this report, the overall rate of BV infection among participants was 27.33%, with the highest prevalence among women aged less than 20 years (41.67%); the reason may be that women are more sexually active at this age. Our results agree with several studies in which the occurrence of BV was not significantly associated with age group (Bhakta et al., 2021; Bitew et al., 2017). The infection rate in this study was lower than that reported in Iran (61.7%) (Moussavi and Behrouzi, 2004), but agrees with the results of studies conducted in Jordan in 2001 (Abu Shaqra, 2001) and Indonesia in 2001 (Joesoef et al., 2001). Our results are also in accordance with reports from Cameroon (26.2%) (Kamga et al., 2019), Nepal (24.4%) (Ranjit et al., 2018) and India (24%) (Modak et al., 2011). However, the infection rate in our study was higher than in several other countries: India (19.6%) (Gupta et al., 2016), Brazil (20.7%) (Gondo et al., 2010), Nigeria (17.3%) (Ibrahim et al., 2014) and Ethiopia (20.1%) (Yalew et al., 2022). This discrepancy could be due to random sample variation, population size, analysis methods, geographical distribution, and socioeconomic and behavioural differences among the studied populations. These differences may also arise from variation in the diagnostic criteria used for BV and in the studied population sizes (Afolabi et al., 2016; Lamichhane et al., 2014). There are two common methods used for the detection of BV: the Nugent criteria and the Amsel clinical criteria.
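As a concrete illustration of the Amsel rule applied in this study (at least 3 of the 4 criteria listed in the Methods), a minimal sketch with a hypothetical patient:

```python
# Sketch of the Amsel rule: BV is diagnosed when >= 3 of 4 criteria are met.
def amsel_positive(ph, thin_gray_discharge, whiff_positive, clue_cells):
    """ph is the vaginal pH (float); the other arguments are booleans."""
    criteria = [
        ph > 4.5,
        thin_gray_discharge,   # thin, grayish-to-white discharge
        whiff_positive,        # fishy odour after adding 10% KOH
        clue_cells,            # clue cells on microscopy
    ]
    return sum(criteria) >= 3

# Hypothetical patient: pH 5.0, discharge present, whiff positive, no clue cells.
print(amsel_positive(5.0, True, True, False))  # True: 3 of 4 criteria met
```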
Nugent scoring is expensive, requires specialists and laboratory equipment, and is time-consuming, which can cause many problems, especially in developing countries including Iraq, although it has high sensitivity and reproducibility. The Amsel clinical criteria, in contrast, are simple, inexpensive and fast (Bhujel et al., 2021b). It is important to note that BV is clearly associated with adverse reproductive and gynecological outcomes among women. However, the major causes of BV remain poorly understood and have commonly been linked with sociodemographic, reproductive, sexual and behavioural characteristics (Kamga et al., 2019). In public health, the complexity of demographic characteristics continues to challenge investigations of the role of bacteria and their association with a host of biomedical and social conditions that could lead to major community health problems. These factors include educational level, socioeconomic status, reproductive history, past medical history and contraceptive use. In the present study, age, educational level, residence, contraceptive use, smoking, past medical history and history of recurrent infection were not significantly associated with BV. In contrast to the present study, other reports have found that a history of recurrent infection, rural residence, educational level and age over 45 years were significantly associated with the infection rate (Geng et al., 2016; Kamga et al., 2019; Yalew et al., 2022). In the current study, women using an intrauterine device as a contraceptive were at an increased risk of infection, although this was not statistically significant (p = 0.61), while condom users were at a decreased risk of BV. We also found a higher infection rate among women with a low education level; this difference was also not significant (p = 0.84). This could be because a higher education level is associated with greater health awareness and use of conventional medicine. Regarding the type of contraceptive, the relationship between intrauterine device use and BV is not yet understood. Several studies have detected an increased risk of infection in intrauterine device users, possibly because the device may alter the normal vaginal flora in favor of bacterial growth (suggesting women should be screened before device insertion), but others found no association (Bartalena et al., 2007). Similar to our study, other reports have found BV to be more prevalent among intrauterine device users than nonusers (Hodoglugil et al., 2000). In contrast to our study, a 2001 study among African-American women found that contraceptive use and low educational level were strongly associated with such infection (Holzman et al., 2001). Another study conducted in Iran found that contraceptive use appeared protective, while a low level of education was significantly associated with an increased prevalence of infection (Ashraf Ganjoei, 2005); these results are similar to ours. These differences may be due to variation in stressors, environmental, socioeconomic and behavioural status, and geography. Screening for bacterial vaginosis among women is essential, as BV can lead to premature rupture of membranes, stillbirths, abortion, postpartum infections, preterm birth and low-birth-weight infants (Yudin and Money, 2008).
In our study, the infection rate was higher among women who had a higher number of births (p = 0.03). The commonest presenting symptom of women with bacterial vaginosis is a malodorous vaginal discharge. Bacterial vaginosis was also significantly more frequent among women with abnormal discharge (p = 0.002). Additionally, the infection rate was strongly associated with genital ulcer (p = 0.01) and a vaginal pH above 4.5 (p = 0.001), while the other characteristics did not differ significantly between BV-negative and BV-positive women. We suggest that a regular screening program for symptomatic women with vaginal discharge is vital for detecting BV in order to prevent complications and preterm delivery. Our results for genital ulcer, abnormal vaginal discharge and vaginal pH were similar to those of studies conducted in India (Bhakta et al., 2021; Nayak et al., 2020). It has also previously been reported that genital itching, abnormal vaginal discharge and burning remain the major problems associated with bacterial vaginosis (Valsangkar et al., 2014). Our study has some limitations. Firstly, fewer participants than expected were enrolled, which may reduce statistical power and indirectly influence tests of significance. Secondly, anaerobic culture media were not available and facilities for molecular detection of bacterial species were lacking.

V. CONCLUSION

This study provides important epidemiologic data on BV for future risk-behaviour and population-based studies. BV is one of the major causes of vaginal discharge and itching among women of childbearing age and poses a health problem for this group in our community. BV remained highly prevalent among the married women in this study. A higher number of births, a low level of education, vaginal discharge, higher vaginal pH, genital ulcer and intrauterine device use were the main risk factors. Regular screening for BV among symptomatic women with abnormal vaginal discharge is urgently needed. Therefore, early detection of the risk factors associated with BV is critical to improving the health of married women and preventing the condition.
Optimization and Simulation of Dynamic Performance of Production–Inventory Systems with Multivariable Controls

The production–inventory system is a problem of multivariable input and multivariant output in mathematics. Selecting the best system control parameters is a crucial managerial decision for achieving and dynamically maintaining optimal performance in terms of balancing the order rate and stock level under the dynamic influence of the many factors affecting system operations. This study examines the dynamic performance of the popular APIOBPCS model and the newly modified 2APIOBPCS model for optimal control of production–inventory systems. The examination is based on a leveled ground, with a new simulation scheme that incorporates a designated multi-objective particle swarm optimization (MOPSO) algorithm into the simulation, enabling the optimal set of system control parameters to be selected to achieve the situational best possible performance of the production–inventory system under study. The dynamic performance is measured by the variance ratio between the order rate and the sales rate, related to the bullwhip effect, and by the integral of absolute error, related to inventory responsiveness, in response to a random customer demand. Our simulation indicates that the 2APIOBPCS model performed better than, or at least no worse than, the APIOBPCS model, and was more robust under different conditions.

Introduction

Simulation of a production-inventory system, even the simplest model comprising one manufacturer and one retailer, is a problem of multivariable input and multivariant output in mathematics. The demand (related to the new order rate) and the supply or stock level (related to customer satisfaction) must be balanced under the dynamic influence of many factors affecting system operations [1]. A low level of stock may lead to an increase in customer dissatisfaction and even the loss of business opportunities to other competitors. An exaggerated order rate may lead to over-supply, which reduces financial flexibility or could drive down the sale price for the retailer. For example, Cisco encountered $2.2 billion in overstocked inventory due to an imbalance between supply and demand in May 2001 [2]. Sony Electronics faced excessive production costs because of an over-anticipation of the demand for PlayStation® 3 [3]. Either way, the production-inventory system would not be operating at a desired status for both profit generation and customer satisfaction. Overstocking is mainly caused by the "bullwhip" effect in the production-inventory system, the scenario in which orders to the suppliers tend to have larger fluctuations than sales to the buyers [4]. Holweg et al. [5] found that the actual demand signal from the customers in a supermarket for a soft drink was amplified many times before it reached the soft drink supplier. As no system is perfect all the time, industries have to cope with real-world bullwhip, not just a 1-to-2 amplification but a 1-to-20 or higher amplification [6]. Production-inventory systems are also subject to a variety of sources of uncertainty, such as unpredicted delays in manufacturing [7]. These combined impacts can affect the dynamic performance of any production-inventory system. Therefore, studying the dynamic control of production-inventory systems for achieving optimal performance under various uncertainties has been attempted with various models and/or methods by researchers worldwide [1,4,[6][7][8][9][10]].
Among these models and/or methods, control theory with feedback mechanisms has become a popular choice for analyzing and simulating production-inventory systems through different mathematical tools, such as Laplace transforms, Z-transforms, transfer functions, block diagrams and frequency analysis, and numerous studies have been conducted in this space [8][9][10][11][12]. The inventory and order based production control system (IOBPCS) model proposed by Towill [13] has been recognized as a common framework for modeling the control of a production-inventory system. John et al. [14] made an important extension to the IOBPCS model by including the work-in-process (WIP) feedback and proposed the automatic pipeline, inventory, and order based production control system (APIOBPCS) model, which has been widely used for modelling production-inventory systems since then [15][16][17][18][19]. In a new project started in 2017, the authors have attempted to extend the classic APIOBPCS model to a new model named the two automatic pipeline inventory and order based production control system (2APIOBPCS) by incorporating the completion production rate into the production-inventory control system, so as to mitigate uncertainties in manufacturing in the system modelling [20][21][22]. Our first outcome from this project was a comprehensive literature review on the applications of classical and modern control theory to production-inventory problems [20]. The second output was mainly focused on deriving the mathematical formulation of the 2APIOBPCS model in the state space by Laplace transforms and demonstrating its stability and convergence with respect to the APIOBPCS model [21]. As both the APIOBPCS and 2APIOBPCS models represent complicated systems with more control parameters, choosing the control parameters used to be experience-based; there is a need for an intelligent way to select the control parameters to achieve optimal outcomes. The third study was aimed at adopting multi-objective particle swarm optimization (MOPSO) for selecting the best system parameters to achieve optimal control of production-inventory systems, with a focus on simulating the well-regarded APIOBPCS model as a benchmark [22]. However, an integrated solution for the 2APIOBPCS model and its performance in the dynamic control of a production-inventory system has not been systematically examined and compared to the APIOBPCS model to demonstrate its credibility and usefulness for potential industry adoption. The purpose of this work is to fill this gap by simulating the dynamic performance of these two models under different scenarios, including the consistency between the order and the production (lead time) and flexibility in production capacity, for a simple production-inventory system.

The rest of this study is organized as follows. Section 2 provides a brief review of the APIOBPCS and 2APIOBPCS models and a summary of their mathematical formulations for simulations, including the performance metrics. Section 3 details the procedure of simulation and experimental considerations. Section 4 presents the simulation outcomes, incorporating comparisons and discussions. Section 5 concludes the study by summarizing the main findings of this work.

A Brief Review of the APIOBPCS and 2APIOBPCS Models

A production-inventory system is a basic unit in the supply chain that integrates inventory control policies with the production process.
In modelling and simulation practice, a production-inventory system is represented by a block diagram, with the well-known APIOBPCS shown as an example in Figure 1. There are three major parts in APIOBPCS: the forecasting mechanism, the production lead time, and the controller strategy [15][16][17][18][19][20][21][22].

• The forecasting mechanism is a feed-forward loop designed to provide the estimated average sales (AVCONS) and to set the desired work-in-process (WIP) level (DWIP). CONS represents the sales or consumption. The feed-forward gain (Tp) works as a safety factor to compensate for the production delay and equals the production lead time (Tp). The estimated average sales (AVCONS) is commonly used to control the inventory steady-state error. Exponential smoothing with a time constant (Ta), representing the average age of the data, is a forecasting method commonly used to smooth the demand because of its simplicity and mathematical comprehensibility for practitioners. DWIP is obtained by multiplying AVCONS by the feed-forward gain Tp.

• The production lead time represents the total time required between placing an order and receiving the product as a finished item in the inventory. The controller designer cannot manipulate the lead time, as it is considered a characteristic of the system. The production lead time in the production-inventory control system is modelled as a first-order lag with time constant Tp that responds to a sudden change in demand.

• The controller strategy utilizes the forward and feedback information to generate a sophisticated decision determining the manufacturing rate for the production-inventory system. In the APIOBPCS model, the production policy is based on the pipeline output, where the completion rate (COMRATE) is compared to the averaged demand AVCONS and their difference is fed back to the controller. Ti is an inventory time constant for proportional control.

The APIOBPCS model utilizes three policies (demand, inventory level and pipeline policies) to determine the order rate (ORATE). The average consumption rate AVCONS, based on exponential smoothing forecasts with time constant Ta, is the forward control policy. The feedback consists of two control policies: the fraction 1/Ti of the difference between the desired inventory Dinv and the actual inventory Ainv, and the fraction 1/Tw of the difference between the desired WIP DWIP and the actual WIP AWIP. The 2APIOBPCS model [21], shown in Figure 2, has a similar structure to the APIOBPCS model, with an additional feedback loop using the fraction 1/Tc of the difference between the desired completion production rate DCOMP and the actual completion production rate ACOMP as an extra control for the production-inventory system. Tc is a time constant for the completion rate (COMP) proportional control.

Mathematical Formulation of the Two Models in the State Space

The state-space representation of the APIOBPCS model has three state variables [13,15,[20][21][22]],

x = [x1, x2, x3]^T,

where x1 denotes the inventory level; x2 is the items in the production process that are not yet finished (WIP); and x3 represents the consumption or sales state. With the production process modelled as a first-order lag (so that COMRATE = x2/Tp) and exponential smoothing of the demand CONS, the derivative state of the APIOBPCS model is

ẋ1 = x2/Tp − CONS,
ẋ2 = ORATE − x2/Tp,
ẋ3 = (CONS − x3)/Ta.
The outputs of the system are the inventory level Ainv and the order rate ORATE, defined by

Ainv = x1,
ORATE = x3 + (Dinv − x1)/Ti + (Tp·x3 − x2)/Tw.

Substituting ORATE into the state equations gives the continuous closed-loop state-space representation of the APIOBPCS model:

dx1/dt = x2/Tp − CONS,
dx2/dt = −x1/Ti − (1/Tw + 1/Tp)·x2 + (1 + Tp/Tw)·x3 + Dinv/Ti,
dx3/dt = (CONS − x3)/Ta.

Similarly, by considering the completion rates, the continuous closed-loop state-space representation of the 2APIOBPCS model is obtained by adding the completion-rate feedback term to the order rate [21]:

ORATE = x3 + (Dinv − x1)/Ti + (Tp·x3 − x2)/Tw + (DCOMP − ACOMP)/Tc,

where the actual completion rate is ACOMP = x2/Tp.

Performance Metrics

The performance metrics used to evaluate a production-inventory system should have implications for total costs (inventory-related and production-related costs) and the customer service level (CSL) [23]. In this study, the dynamic performance of the production-inventory system is evaluated firstly by the variance ratio (Var) between the order rate and the consumption or sales, defined in Equation (12):

Var = σ²ORATE / σ²CONS, (12)

where σ²ORATE is the variance of the orders placed to the manufacturer and σ²CONS is the consumption variance. The Var index is used as a metric of the bullwhip effect: there is zero bullwhip if Var = 1, the system amplifies orders if Var > 1, and the system smooths orders if Var < 1.

The second measure, used to evaluate the inventory responsiveness of a production-inventory system, is the integral of absolute error (IAE) between the actual and target inventory levels, defined in Equation (13):

IAE = ∫₀ᵗ |E(τ)| dτ, (13)

where t is the period and E is the error in the inventory level, measured as the deviation of Ainv from Dinv. The IAE weighs positive and negative errors equally; a lower IAE indicates that the system provides a better customer service level (CSL). The bullwhip effect and inventory responsiveness are two objectives with direct impacts on the basic trade-off between keeping the order rate at its optimal level, to avoid high order amplification, and keeping stocks at the desired level to improve the CSL.

Simulation Procedure

Simulations of production-inventory systems have traditionally been conducted with sets of input parameters chosen mainly from the practitioner's experience in operating similar systems. The best values for the control parameters among the different sets of simulation outcomes are then determined by comparing the statistical results of those outcomes, and the inputs must be reset for each new system. By introducing a designated MOPSO algorithm into the simulation of the APIOBPCS model [22], once the demand pattern and production lead time are fed to the system, the automated process produces sets of control parameters (Ti, Tw, and Ta), each of which achieves the best balance between the variance ratio (Var) and the integral of absolute error (IAE), in other words, between the bullwhip effect (cost-effectiveness for the industry) and the CSL or customer satisfaction. These sets of control parameters are usually presented as a Pareto optimality curve, from which the system manager can choose any desired set as the operating control parameters. With the designated MOPSO algorithm embedded, the process of dynamic simulation of a production-inventory system can be summarized as in Figure 3. The inputs to the simulation are the demand pattern and the lead time required for orders; they trigger the MOPSO algorithm, which optimizes with both Var and IAE as objectives.
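To make the formulation above concrete, the following is a minimal Python sketch of the closed-loop APIOBPCS simulation together with the two performance metrics. It assumes the state equations reconstructed above; the Euler discretization, the sinusoidal demand waveform, and the parameter values are illustrative choices, not the authors' exact MATLAB settings.

```python
import numpy as np

def simulate_apiobpcs(Tp=4.0, Ta=8.0, Ti=6.0, Tw=12.0, D_inv=0.0,
                      days=180, dt=0.1):
    """Euler integration of the APIOBPCS state equations.

    States: x1 = inventory (Ainv), x2 = WIP (AWIP),
            x3 = smoothed demand (AVCONS).
    """
    n = int(days / dt)
    t = np.arange(n) * dt
    # Illustrative sinusoidal demand standing in for the paper's pattern
    cons = 100.0 + 20.0 * np.sin(2 * np.pi * t / 30.0)
    x1, x2, x3 = 0.0, 0.0, cons[0]
    orate_hist, inv_hist = np.empty(n), np.empty(n)
    for k in range(n):
        # Order rate: smoothed demand plus inventory- and WIP-error corrections
        orate = x3 + (D_inv - x1) / Ti + (Tp * x3 - x2) / Tw
        # State derivatives: completion rate = x2/Tp (first-order production lag)
        dx1 = x2 / Tp - cons[k]
        dx2 = orate - x2 / Tp
        dx3 = (cons[k] - x3) / Ta
        x1 += dx1 * dt
        x2 += dx2 * dt
        x3 += dx3 * dt
        orate_hist[k], inv_hist[k] = orate, x1
    var_ratio = orate_hist.var() / cons.var()       # Equation (12)
    iae = np.sum(np.abs(inv_hist - D_inv)) * dt     # Equation (13), discretized
    return var_ratio, iae

v, iae = simulate_apiobpcs()
print(f"Var = {v:.3f}, IAE = {iae:.1f}")
```

In the paper's workflow, an outer MOPSO loop would repeatedly call a routine like this with candidate (Ti, Tw, Ta) sets and retain the non-dominated (Var, IAE) pairs as the Pareto front.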
The outputs of this optimization process are the best system configurations, i.e., those producing optimal performance in terms of improving the system responsiveness related to the CSL and reducing the demand amplification related to the bullwhip effect. A chosen set of best control parameters is then fed to the system to simulate the desired order rate ORATE and inventory level Ainv, which helps the manager adjust the stock level toward the optimal status.

Experimental Considerations

Any simulation is subject to system constraints and operational assumptions. As the purpose of our simulation is to make a level comparison of the dynamic performance of the APIOBPCS and 2APIOBPCS models, we chose a simple production-inventory system comprising one retailer and one manufacturer for one product; more sophisticated production-inventory systems would, if anything, only magnify the difference between the two control models. Since demand patterns can vary widely, following common practice in the simulation of production-inventory systems we chose the variable sinusoidal pattern shown in Figure 4 to represent, to some extent, the random nature of demand. Other variables were assigned the values used in various published works on production-inventory simulation [8,13,14,[17][18][19][24][25][26]. The major assumptions in all our simulations are as follows.

• The physical production lead time is four units of time (Tp = 4). This could of course be assigned a different value, but doing so would not materially alter the general trend.
• Backorders (negative inventory) are permitted.
• The desired inventory is set to zero (Dinv = 0).
• The day is the basic time unit in the model.
• The simulation was run for 180 days for each scenario.
• The production process can only produce a single unit at a time.

The simulations with our designated MOPSO were conducted in MATLAB. The optimization process returns the best sets as a vector p1 = {Ta, Ti, Tw} for the APIOBPCS model. As the completion-rate time constant Tc for the 2APIOBPCS model is in practice a fixed value that is not optimized, it is simply appended to p1 to form the control vector for the 2APIOBPCS model, p2 = {Ta, Ti, Tw, Tc}.

The MOPSO parameters used in the simulation were:

• The maximum number of iterations was set to 100.
• The number of particles in the swarm was set to 50.
• The learning coefficients for local and global searches were both set to 2.
• The inertia weight was set to 0.6.
• The size of the archive was set to 20.

The performance of the two models was first examined by simulating them with the demand pattern under a normal scenario, matched lead time with flexible production capacity, followed by three further scenarios: matched lead time with fixed production capacity, mismatched lead time with flexible production capacity, and mismatched lead time with fixed production capacity. The implications of these scenarios are as follows.

• Matched lead time means that the actual and estimated lead times are assumed to coincide during operation; in other words, the ordered amount of product is delivered by the manufacturer to the retailer on time.
• Mismatched lead time means that there is a delay in delivering the ordered product from the manufacturer to the retailer. This may be caused by machine breakdowns and/or material shortages at the manufacturer; in such a situation, a longer lead time is expected.
The mismatched scenarios evaluate the robustness of the two models by measuring how the systems recover from such disruptions and return to the normal level. Such a simulation is represented by a lead time that starts at the nominal value Tp = 4, rises to Tp = 6 for a period, and returns to Tp = 4.

• Flexible production capacity means that the manufacturer has no difficulty producing the ordered product on time. Even if there is a disruption during production, the manufacturer is able to mitigate the negative impact without delaying delivery of the ordered product.

• Fixed production capacity means that there is a limit on what the manufacturer can produce within the timeframe. In a normal scenario, the ordered amount would match the top limit of the manufacturer's capacity; however, this situation is vulnerable to any disruption of production caused by machine breakdowns and/or material shortages. When the production capacity in a period is insufficient to complete the production for an order, the capacity of the next period is used to continue producing that order. The order in the affected period is capped by a constant C, i.e., ORATE = min(ORATE, C). In our simulation the capacity was set to C = 110 items per day (it could of course be a different number without loss of generality). These scenario mechanics are sketched in code below.

Simulation Results and Discussion

The control vector p1 = {Ta, Ti, Tw} represents a Pareto curve of infinitely many combinations that can lead to optimal performance, and each combination results in a set of simulation results. In this work, we selected three sets to simulate the two models separately (Table 1). These three selections correspond to three different ranges of the bullwhip effect:

• Set 1: bullwhip smoothing in the range 0.8 < Var < 1;
• Set 2: bullwhip avoidance, where Var = 1;
• Set 3: small bullwhip in the range 1 < Var < 1.3.

Table 1. Three optimal control sets used to simulate the systems (Tc for 2APIOBPCS only); columns: Var, Ti, Tw, Ta, Tc. Note that the optimal parameters result from MOPSO under the same demand pattern and lead time (Tp) for both models within the same bullwhip range, so the values of the control parameters for the two models in the same range are identical. The completion-rate constant for the 2APIOBPCS model is chosen in proportion to each set accordingly.

In discussing the simulation results, a new indicator named the improved inventory responsiveness (IIR) is used: the percentage ratio of the IAE difference between the two models to the IAE of the APIOBPCS model, i.e.,

IIR = (IAE_APIOBPCS − IAE_2APIOBPCS) / IAE_APIOBPCS × 100%.

Table 2 shows the performance of the two models under the three optimal control sets in the normal scenario (Case 1: matched lead time with flexible production capacity). As expected, there is no difference in the bullwhip effect, since the orders are fulfilled at the same level in both models. However, with the information on partly completed orders as dynamic feedback, the system with the 2APIOBPCS model produces a better IIR than the APIOBPCS model. The improvement varies from 3% for no or smoothed bullwhip effect to 9% for a small bullwhip effect. The improvement is the accumulated outcome over the simulation period: on most occasions the inventory level of the 2APIOBPCS model is closer to the desired inventory level of zero than that of the APIOBPCS model, as shown in Figure 5, even though the order rates of the two models are almost identical throughout the period.
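As a concrete illustration, the following minimal Python helpers encode the scenario mechanics and the IIR indicator described above, intended to plug into a simulation loop such as the APIOBPCS sketch in Section 2. The disruption window, the carry-over treatment of capped orders, and all function names are illustrative assumptions, not the authors' MATLAB implementation.

```python
def lead_time(day, tp_nominal=4.0, tp_disrupted=6.0, window=(60, 90)):
    """Mismatched lead-time scenario: Tp rises from 4 to 6 during a
    disruption window (the window days are illustrative), then recovers."""
    lo, hi = window
    return tp_disrupted if lo <= day < hi else tp_nominal

def capped_release(orate, backlog, capacity=110.0):
    """Fixed production capacity: orders beyond the cap C spill over as
    backlog that consumes the next period's capacity."""
    total = orate + backlog
    released = min(total, capacity)
    return released, total - released  # (units started now, carried over)

def iir(iae_apiobpcs, iae_2apiobpcs):
    """Improved inventory responsiveness: percentage IAE reduction of
    2APIOBPCS relative to APIOBPCS."""
    return 100.0 * (iae_apiobpcs - iae_2apiobpcs) / iae_apiobpcs
```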
Table 3 shows the performance of the two models under the three optimal control sets in the second scenario (Case 2: matched lead time with fixed production capacity). Because of the limit on production capacity, a delay in product delivery is highly likely regardless of the level of orders. Owing to the uncertainties in production and delivery introduced by the capacity constraint (Figure 6), the order was capped at the top limit, which effectively smoothed the bullwhip effect for both models, reflected in a smaller Var for each. As the system waits for the slow production, the actual IAE is the same as in the normal scenario of Case 1. This is in line with the observation reported in [18].

Table 4 shows the performance of the two models under the three optimal control sets in the third scenario (Case 3: mismatched lead time with flexible production capacity). As the production capacity is sufficient to produce the ordered product, on-time delivery is guaranteed; in this situation, a mismatched lead time between order and delivery would be caused by the retailer's miscalculation of stock or by miscommunication between the retailer and the manufacturer. With the information on partly completed orders as dynamic feedback, the system with the 2APIOBPCS model shows a significant advantage in this case, with a larger IIR than the APIOBPCS model. For example, if the retailer's miscalculation resulted in a lower order than the required level, the 2APIOBPCS model mitigates the potential loss through the dynamic feedback, reducing the IAE by 17% relative to the APIOBPCS model. If the miscalculation resulted in a higher order than the required level, the 2APIOBPCS model likewise reduces the IAE by 12% relative to the APIOBPCS model. For the desired situation without bullwhip effect, the IAE is reduced by about 14%. This accumulated outcome over the simulation period is shown in Figure 7.

Table 5 shows the performance of the two models under the three optimal control sets in the fourth scenario (Case 4: mismatched lead time with fixed production capacity). Similar to Case 2, the capacity constraint makes a delay in product delivery highly likely, regardless of the level of orders or of miscommunication between the retailer and the manufacturer (Figure 8). The capacity constraint on production effectively smoothed the bullwhip effect, reflected in a smaller Var for both models. As the system waits for the slow production, the actual IAE is the same as in Case 3.

Conclusions

Modelling a production-inventory system is mathematically a problem of multivariable input and multivariate output. Selecting the best system control parameters is therefore a crucial managerial decision for achieving and dynamically maintaining optimal performance, in terms of balancing the order rate and the stock level under the dynamic influence of the many factors affecting system operation. Simulation is perhaps the best means of dealing with the dynamics of such a multivariable, multivariate system. By integrating our designated multi-objective particle swarm optimization (MOPSO) algorithm into the popular control model APIOBPCS and our newly modified model 2APIOBPCS, this study compared the dynamic performance of these two models for production-inventory systems subject to a random customer demand and production variations.
By using the MOPSO-optimized control parameters, our simulations point to the following trends for the two models.

• Both models can produce the best achievable balance between the order rate and the inventory level under the same bullwhip effect if the production lead time is matched, regardless of the production capacity. Even so, the 2APIOBPCS model appears to improve inventory responsiveness by a few percentage points over the APIOBPCS model.

• The 2APIOBPCS model appears to improve inventory responsiveness by more than 10% over the APIOBPCS model, under the same bullwhip effect, when the production lead time is mismatched.

• Imposing a constraint on production capacity appears to reduce the bullwhip effect for both models while leaving the inventory responsiveness unchanged.

The dynamic performance of the 2APIOBPCS model was better than, or no worse than, that of the APIOBPCS model under the different conditions, and it appears more robust than the APIOBPCS model when the production lead time is miscalculated. Hence, the 2APIOBPCS model may have good potential for companies seeking to better manage their production-inventory systems and to maintain optimal performance dynamically. However, we regard our findings only as general trends in achieving optimal performance for production-inventory systems; they must be further cross-validated by other researchers, using different and more sophisticated cases in intensive simulation experiments, to ensure objectivity.
Molecular Heterogeneity in Localized Diffuse Large B-Cell Lymphoma

The clinical and molecular characteristics of localized diffuse large B-cell lymphoma (DLBCL) with single nodal (SN) or single extranodal (SE) involvement remain largely elusive in the rituximab era. The clinical data of 181 patients from a retrospective cohort and 108 patients from the phase 3 randomized trial NHL-001 (NCT01852435) were reviewed. Meanwhile, genetic aberrations, gene expression patterns, and tumor immunophenotype profiles were revealed by DNA and RNA sequencing of 116 and 53 patients, respectively. SE patients showed clinicopathological features similar to those of SN patients, except for an increased percentage of low-intermediate risk in the National Comprehensive Cancer Network–International Prognostic Index. According to the molecular features, increased MPEG1 mutations were observed in SN patients, while SE patients were associated with upregulation of the TGF-β signaling pathway and downregulation of the T-cell receptor signaling pathway. SE patients also presented an immunosuppressive status, with lower activity of killing of cancer cells and recruiting of dendritic cells. Extranodal involvement had no influence on progression-free survival (PFS) or overall survival (OS) in localized DLBCL. Serum lactate dehydrogenase >3 times the upper limit of normal was an independent adverse prognostic factor for OS, and ATM mutations were related to inferior PFS. Although the overall prognosis is satisfactory, specific clinical, genetic, and microenvironmental factors should be considered for future personalized treatment in localized DLBCL.

INTRODUCTION

Diffuse large B-cell lymphoma (DLBCL) is the most common subtype of non-Hodgkin's lymphoma and represents a heterogeneous entity with various clinical, immunophenotypic, and molecular features (1, 2). The anti-CD20 monoclonal antibody rituximab in combination with cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) has significantly improved the outcome of DLBCL patients (3), particularly in the low-risk group of the International Prognostic Index (IPI). In addition to the IPI (4), the National Comprehensive Cancer Network (NCCN)-IPI has recently been established, stratifying patients according to a more refined age range and serum lactate dehydrogenase (LDH) level, as well as specific extranodal sites including the gastrointestinal (GI) tract, central nervous system (CNS), liver, lung, and bone marrow (5). In the pathological setting, the cell-of-origin (COO) subtypes, germinal center B-cell (GCB) and non-GCB (6), as well as BCL2 (≥50%) and MYC (≥40%) double expressors (7), are recognized as important prognostic factors in DLBCL. However, the clinical characteristics and prognostic features of localized DLBCL remain largely elusive in the rituximab era, since these patients respond well to R-CHOP immunochemotherapy and are often excluded from clinical trials of DLBCL. According to the involved sites, localized DLBCL is divided into single nodal (SN) and single extranodal (SE) groups. Other than lymph nodes, Waldeyer's ring and the spleen are considered nodal tissue (8), while the GI tract, breast, and CNS are the most common extranodal sites. More recently, extranodal involvement has been identified as an important prognostic factor for inferior survival in localized DLBCL (9), suggesting potential heterogeneity between nodal and extranodal involvement. Distinct gene mutations have been related to specific extranodal sites of DLBCL.
For example, mutations in MYD88 and CD79B were frequently observed in primary CNS, breast, female genital tract, and testicular DLBCL (10,11) but rarely in primary GI tract DLBCL (12,13). In addition to the lymphoma cells themselves, the tumor microenvironment is essential for tumorigenesis and tumor progression in DLBCL (14). Therefore, the genetic and microenvironmental heterogeneity of localized DLBCL needs to be further investigated. In the present study, we analyzed the clinical characteristics and prognostic features of localized DLBCL in both retrospective and prospective cohorts, and evaluated the molecular heterogeneity between SN and SE disease, including genetic aberrations, gene expression patterns, and tumor microenvironment profiles, which may be helpful for future personalized treatment in localized DLBCL.

Patients

From April 2003 to February 2019, a total of 432 stage I patients with newly diagnosed DLBCL were included in this study. Histological diagnoses were reviewed according to the World Health Organization 2016 classification (15). A flow chart describing the cohort selection is outlined in Figure 1. After excluding 19 patients with primary testicular DLBCL, 17 patients with primary CNS lymphoma, 12 patients with primary mediastinal B-cell lymphoma, 56 patients receiving chemotherapy alone, and 39 patients who discontinued treatment because of adverse events or by their own decision, a total of 289 patients receiving the R-CHOP regimen were analyzed. Among them, 181 patients were retrospectively reviewed, and 108 patients were from the prospective phase 3 trial NHL-001 (NCT01852435), randomly receiving the R-CHOP50 (doxorubicin 50 mg/m²), R-CEOP70 (epirubicin 70 mg/m²), or R-CEOP90 (epirubicin 90 mg/m²) regimen as previously described (16). DNA sequencing was performed on 116 patients for the detection of genetic aberrations, and RNA sequencing was carried out on 53 patients for gene set enrichment analysis and tumor immunophenotyping (TIP). The study was approved by the Ruijin Hospital Ethics Committee, with written informed consent obtained in accordance with the Declaration of Helsinki.

DNA and RNA Sequencing

For frozen tumor tissue samples, genomic DNA was extracted using a QIAamp DNA Mini Kit (Qiagen, Hilden, Germany). For formalin-fixed paraffin-embedded (FFPE) samples, genomic DNA was extracted using a GeneRead DNA FFPE Tissue Kit (Qiagen). Targeted sequencing (n = 51), whole-exome sequencing (WES) (n = 51), or whole-genome sequencing (WGS) (n = 14) was performed on 116 patients (52 SN and 64 SE patients) with frozen or FFPE tumor tissue samples. Among the 65 patients with WES or WGS, the DNA sequencing data of 64 patients were from our previous report on extranodal DLBCL (17), and the data of one patient were newly added. For WGS, the library was validated on an Agilent 2100 Bioanalyzer, and sequencing was performed on the Illumina HiSeq platform with a 150-bp paired-end strategy at WuXi NextCODE, Shanghai. For WES, the exome regions were captured with a SeqCap EZ Human Exome kit (version 3.0), and sequencing was performed on the HiSeq 4000 platform with a 150-bp paired-end strategy at Righton, Shanghai. For targeted sequencing, PCR primers were designed with Primer 5.0 software, and multiplexed libraries of tagged amplicons from tumor tissue samples were generated with the Shanghai Righton Bio-Pharmaceutical Multiplex-PCR Amplification System. GATK HaplotypeCaller and GATK UnifiedGenotyper were applied to call single-nucleotide variations (SNVs) and indels.
SNVs reported with low confidence, defined by depth (<10) or variant allele frequency (<0.05), were excluded. WGS (n = 17) and WES (n = 25) were performed on 42 matched peripheral blood samples to exclude germ-line polymorphisms. The detailed procedures for DNA sequencing and variant calling were carried out as previously described (17). RNA was extracted with Trizol and an RNeasy Mini kit (Qiagen) using frozen tumor tissue samples. RNA sequencing was performed on 53 patients (32 SN and 21 SE patients). Among them, the RNA sequencing data of 47 patients were from our previous report on extranodal DLBCL (17), and the data of six patients were newly added. RNA purification, reverse transcription, library construction, and sequencing were performed at WuXi NextCODE according to the manufacturer's instructions (Illumina). The detailed procedures for RNA sequencing were conducted as previously described (17). Gene enrichment analysis was performed by overlapping the genes in a module with Kyoto Encyclopedia of Genes and Genomes gene sets using GSEA (v4.0.3) with the C2 collection of the MSigDB (18,19). The web server TIP was applied to evaluate the tumor microenvironment using the RNA sequencing data (20).

Statistical Analysis

The baseline and molecular characteristics of patients were analyzed using Pearson's χ²-test or Fisher's exact test for qualitative data and the independent-sample t-test or Mann-Whitney U-test for quantitative data. Progression-free survival (PFS) was calculated from the date of diagnosis to the date when disease progression was recognized or the date of last follow-up (March 1, 2020). Overall survival (OS) was measured from the date of diagnosis to the date of death or the date of last follow-up. Survival curves were estimated using the Kaplan-Meier method and compared by the log-rank test. Univariate hazard estimates were generated with unadjusted Cox proportional hazards models. Clinical and pathological covariates demonstrating significance with P < 0.100 on univariate analysis were included in the multivariate analysis.

Clinical and Pathological Characteristics

Genetic Aberrations

A total of 55 genes related to the tumorigenesis of DLBCL according to the literature were analyzed (Figures 2A,C). Among common sites of involvement, including the GI tract, lymph node, Waldeyer's ring, and breast, genetic aberrations of PIM1, TET2, KMT2D, BTG2, BTG1, and MYD88 were assessed (Figure 2D). Patients with lymphoma involvement of the GI tract had significantly fewer PIM1 (9.1 vs. 25.0%, P = 0.034) and MYD88 (0 vs. 20.8%, P = 0.001) mutations than those without GI tract involvement. Patients with lymphoma involvement of the breast had more MYD88 mutations (36.4 vs. 10.5%, P = 0.049) than those without breast involvement.

Gene Expression Pattern

RNA sequencing was performed on 32 of 126 SN patients and 21 of 163 SE patients. The SN and SE patients differed significantly in gene expression pattern, with 1,894 genes differentially expressed (Supplementary Table 2). Of those, 790 genes were upregulated in the SN group, while 1,104 genes were upregulated in the SE group. Compared with SN patients, SE patients were associated with upregulation of the transforming growth factor-beta (TGF-β) signaling pathway and downregulation of the T-cell receptor (TCR) signaling pathway (Figure 3A). Among genes related to the TGF-β signaling pathway, the expression levels of TGFB2, BMP2, and BMP4 were significantly increased in SE compared with SN patients (Figure 3B).
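The survival workflow described above (Kaplan-Meier estimation, log-rank comparison, and univariate Cox screening at P < 0.100 followed by a multivariate Cox model) can be reproduced with standard tools. The sketch below uses the Python lifelines package as one possible choice; the authors do not name their statistical software, and the file and column names (e.g., pfs_months, ldh_gt3uln) are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical patient table: months to event/censoring, event flags,
# and candidate covariates (e.g., LDH > 3 ULN, SE vs. SN involvement)
df = pd.read_csv("localized_dlbcl.csv")

# Kaplan-Meier PFS curves for SN vs. SE, compared by the log-rank test
sn, se = df[df["group"] == "SN"], df[df["group"] == "SE"]
kmf = KaplanMeierFitter()
kmf.fit(sn["pfs_months"], event_observed=sn["progressed"], label="SN")
res = logrank_test(sn["pfs_months"], se["pfs_months"],
                   event_observed_A=sn["progressed"],
                   event_observed_B=se["progressed"])
print(f"log-rank P = {res.p_value:.3f}")

# Univariate Cox screen; covariates with P < 0.100 enter the multivariate model
candidates = ["ldh_gt3uln", "age_gt60", "atm_mutated"]
selected = []
for cov in candidates:
    cph = CoxPHFitter().fit(df[["os_months", "death", cov]],
                            duration_col="os_months", event_col="death")
    if cph.summary.loc[cov, "p"] < 0.100:
        selected.append(cov)

# Multivariate Cox model on the screened covariates
cph = CoxPHFitter().fit(df[["os_months", "death"] + selected],
                        duration_col="os_months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```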
Downstream molecules of the TGF-β signaling pathway related to tumor metastasis, including ANGPTL4 and IL11, were also significantly upregulated in SE compared with SN patients (Figure 3B). As for genes associated with the TCR signaling pathway, the expression levels of ZAP70, LCK, CD40LG, CD28, and ICOS were significantly decreased in SE compared with SN patients (Figure 3B).

Tumor Microenvironmental Pattern

The tumor microenvironment was evaluated by the web server TIP using RNA sequencing data (20). The anti-tumor immune response is generated through a series of stepwise events referred to as the cancer-immunity cycle: release of cancer cell antigens (step 1), cancer antigen presentation (step 2), priming and activation (step 3), trafficking of immune cells to tumors (step 4), infiltration of immune cells into tumors (step 5), recognition of cancer cells by immune cells (step 6), and killing of cancer cells (step 7) (21). Among these seven steps, a significantly lower immune activity score for killing of cancer cells (−1.329 vs. −0.905, P = 0.025) was observed in SE compared with SN patients, while the other six steps showed no obvious differences between the SN and SE groups (Figure 3C). As for specific immune cells, SE patients exhibited a significantly lower recruiting activity of dendritic cells (1.644 vs. 2.199, P = 0.010) than SN patients (Figure 3D). Interactions between dendritic cells and the chemokines and chemokine receptors of the tumor microenvironment were evaluated: the expression levels of CCR7, CCL3, CCL4, CCL5, and CCL21 were positively correlated with the recruiting activity of dendritic cells (Figure 3E). In addition, the expression of the dendritic cell marker ITGAX (5.862 vs. 7.261, P = 0.010) was significantly decreased in SE compared with SN patients (Figure 3F).

Survival Analysis

The median follow-up time was 49.5 (range, 5.1-203.9) months. For the 289 stage I DLBCL patients, the 4-year PFS and OS rates were 90.3 and 94.1%, respectively (Figures 4A,B), and did not differ significantly between the SN and SE groups (Figures 4G,H). In univariate analysis, serum LDH >3 times the upper limit of normal (ULN) was significantly prognostic for inferior PFS and OS (Table 3). Other clinical or pathological factors, including age, ECOG performance status, specific extranodal sites, the Hans algorithm, and BCL2/MYC double expression, had no obvious influence on either PFS or OS. In addition, common sites of lymphoma involvement, including the GI tract, lymph node, Waldeyer's ring, and breast, had no significant impact on PFS or OS. Among oncogenic mutations, ATM mutations were prognostic for inferior PFS. In multivariate analysis, serum LDH >3 ULN was an independent adverse prognostic factor for OS (Supplementary Table 3). The 4-year OS rate was 60.0% for patients with serum LDH >3 ULN, significantly shorter than for those with serum LDH ≤3 ULN (94.7%, P < 0.001).

DISCUSSION

Among localized DLBCL patients, 56.4% were extranodal in origin, consistent with a previous report (22). The GI tract was the most common site of extranodal involvement. Clinically, the majority of localized DLBCL patients presented with young age, good ECOG performance status, normal LDH, and no bulky tumors. A significantly increased percentage of low-intermediate risk NCCN-IPI was observed in SE patients because of extranodal involvement of the GI tract, liver, and lung (5). Pathologically, 44.9% of localized DLBCL patients were of the GCB subtype, similar to the ratio of 42% in DLBCL overall (6).
Patients with BCL2/MYC double expression accounted for 14.5% of localized DLBCL patients, while this ratio is up to 20-30% in DLBCL overall (23). Meanwhile, SN and SE patients exhibited similar distributions of COO subtype and BCL2/MYC double expressors. In the rituximab era, the treatment outcome of stage I DLBCL patients is satisfactory. In our study of 289 patients, the 4-year PFS and OS rates were 90.3 and 94.1%, respectively, much higher than in patients receiving chemotherapy alone (24). Besides that, extranodal involvement showed no obvious influence on either PFS or OS in localized DLBCL, which seems contradictory to a previous report that noted the inferior survival of extranodal disease (9). This may be attributed to the different study enrollments of the two studies. Moreover, compared with the previous study (9), our study included more SE patients with GI tract involvement, which is related to favorable outcomes, and fewer SE patients with bone involvement, which is related to unfavorable outcomes (17). As reported in DLBCL generally (25), serum LDH was also recognized as an unfavorable prognostic factor in localized DLBCL, indicating that potentially more effective immunochemotherapy regimens should be applied in this subset of localized DLBCL patients to improve their outcome. Among oncogenic mutations, ATM mutations were related to inferior PFS. ATM is an important cell cycle checkpoint kinase, and ATM mutations have also been shown to predict inferior prognosis in GCB-DLBCL patients (26). However, TP53 mutations did not have any effect on clinical prognosis, probably owing to the limited number of TP53-mutant patients and the different mutation types in our study. Therefore, multicenter clinical cooperation should be carried out using matched patient cohorts with similar distributions of specific extranodal sites. As for the molecular features, the most frequently altered genes in localized DLBCL included PIM1, TET2, KMT2D, BTG2, BTG1, MYD88, ARID1A, HIST1H1E, MPEG1, TNFAIP3, TP53, CREBBP, FAS, GNA13, and TMSB4X, which have also been reported to be commonly mutated in DLBCL (27,28). Of note, MPEG1 mutations were more frequent in SN than in SE patients. MPEG1 encodes a pore-forming protein, Perforin-2, which is crucial for anti-bacterial defense in human cells (29). Given its high mutation rate in DLBCL, the functions of MPEG1 mutations need to be investigated further. In concordance with previous reports in DLBCL (11,13,30), MYD88 and PIM1 mutations were frequent in localized DLBCL patients with breast involvement but rare in those with GI tract involvement. As for oncogenic cascades, the TGF-β signaling pathway has been reported to be associated with extranodal involvement in DLBCL (31,32). Indeed, key members of the TGF-β superfamily, including TGFB2, BMP2, and BMP4, as well as functional molecules of the TGF-β signaling pathway, including ANGPTL4 and IL11 (31), were significantly increased in SE patients. Recently, anti-TGF-β therapies have demonstrated potent antitumor activity in several clinical studies (33). Therefore, therapeutic targeting of the TGF-β signaling pathway may be effective in counteracting extranodal involvement in DLBCL. Meanwhile, the TCR signaling pathway was downregulated in SE patients: the proximal TCR signaling molecules ZAP70 and LCK (34) and the costimulatory molecules CD40LG, CD28, and ICOS (35,36) were all significantly decreased in SE patients.
In addition, evaluation of the tumor microenvironment through the cancer-immunity cycle revealed that, compared with SN patients, SE patients exhibited lower activity for killing of cancer cells and recruiting of dendritic cells. Dendritic cells are antigen-presenting cells that are crucial for T-cell priming and antitumor activity (37). Chemokines and chemokine receptors associated with the recruiting and homing of dendritic cells, including CCR7, CCL21, CCL3, CCL4, and CCL5 (38-40), showed a positive correlation with the recruiting activity of dendritic cells. Moreover, the dendritic cell marker ITGAX, which has been related to superior survival in DLBCL patients (41), was significantly decreased in SE patients. Therefore, localized DLBCL with extranodal involvement may feature a relatively immunosuppressive tumor microenvironment, suggesting immunomodulatory agents as potentially effective alternatives for targeting extranodal lesions. In conclusion, localized DLBCL patients may differ between nodal and extranodal involvement and present distinct genetic alterations, gene expression patterns, and tumor microenvironment profiles, which could provide a clinical rationale for future mechanism-based therapy in localized DLBCL.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.biosino.org/node, OEP001143.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ruijin Hospital Ethics Committee. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

WZ, PX, LW, and SC designed the study. WQ, HY, LD, QSo, XJ, and ZL acquired data. WQ, QSh, WZ, PX, DF, HH, and JH analyzed the data and made the figures. WZ, PX, WQ, and DF drafted the manuscript. All authors contributed to the article and approved the submitted version.
A Novel Suturing Technique for Choroidal Avulsion

Ocular trauma has been one of the leading causes of visual impairment, and choroidal avulsion is especially devastating. Surgical treatment of choroidal avulsion is challenging, and very few surgical techniques have been reported. We experienced two cases of globe rupture with 360-degree avulsion of the choroid-ciliary body from the peripheral section. After vitrectomy for a globe rupture, the choroid gradually slid down to the posterior pole over time and vision deteriorated even though the retina was attached. We treated the choroidal avulsion using two surgical methods: a mattress suturing technique using a 10-0 Prolene long needle and a 7-0 nylon single suture technique. In both methods, the retina-choroid, which had slipped down to the posterior pole, was suspended and fixed to the sclera assisted by a wide-angle viewing system, improving visual acuity. These two methods are considered to be useful surgical procedures for the treatment of an avulsed choroid.

Introduction

Ocular trauma is one of the leading causes of visual impairment worldwide, especially when the posterior segment of the eye is involved [1,2]. Traumatic choroidal injury is a serious complication and is closely related to the prognosis of ocular trauma, and a variety of choroidal injuries have been reported [3][4][5][6]. It has been reported that choroidal avulsion is especially devastating, with 92.2% of choroidal avulsion cases having a poor prognosis [7]. Choroidal avulsion is the detachment of the choroid from the sclera with discontinuity of the detached choroid, sometimes with the choroid and ciliary body detaching together at the scleral spur as a whole unit [7]. However, few reports have focused on choroidal avulsion, mainly because of the lack of effective treatment. Even with the development of vitreoretinal surgery techniques, there are very few treatments for choroidal avulsion [3,5]. Surgical treatment of choroidal avulsion is complicated and very challenging. In this study, we treated two cases of choroidal avulsion, in which the retina-choroid had slipped down to the posterior pole, using two different surgical techniques assisted by a wide-angle viewing system. These two methods are considered to be useful surgical procedures for the treatment of an avulsed choroid.

Patients and Methods

The ethics committee of Akita University Hospital (Akita, Japan) approved the procedures, and the procedures conformed to the tenets of the Declaration of Helsinki. Informed consent was obtained from participants after explaining the nature and possible complications of the study.

Subjects

Patient 1: A 17-year-old man had globe rupture, vitreous hemorrhage, retinal detachment, and choroidal hemorrhage, and his vision was limited to light perception in 2017. We performed vitrectomy with concomitant encircling. The lens was not present in the eye, and a 360-degree choroidal-ciliary body avulsion was observed during surgery. Silicone oil was injected into the vitreous cavity after drainage of the choroidal hemorrhage from the sclera, fluid-air exchange, and laser photocoagulation. The retina was attached, and visual acuity improved to 20/150. Although the retina remained attached, the retina-choroid gradually slid down to the posterior pole together, and part of the choroid was rolling (Figure 1A). The exposed sclera gradually widened, and the retina-choroid was folded at the posterior pole.
The patient was aware of visual field narrowing, and visual acuity decreased to 20/2000 four months after the surgery. The intraocular pressure (IOP) was 3 mmHg. Choroidal suturing was then scheduled to repair the choroidal avulsion.

Patient 2: A 55-year-old woman had globe rupture, vitreous hemorrhage, retinal detachment, and severe choroidal hemorrhage, and her vision was limited to light perception in 2020. During the initial surgery with scleral suturing and vitrectomy, it was found that the lens was not present in the eye and that the choroid-ciliary body was avulsed over 360 degrees. After drainage of the choroidal hemorrhage from the sclera, fluid-air exchange, and silicone oil tamponade, the retina was successfully attached, and her visual acuity improved to 20/250. However, the choroid was still detached from the sclera postoperatively, the retina-choroid gradually slid downward to the posterior pole together, and the exposed sclera widened with time (Figure 2A). Visual acuity decreased to 20/1000 three months after the surgery, and the IOP was 4 mmHg. The patient underwent additional surgery with choroidal suturing for the choroidal avulsion.

Figure 2. Fundus photographs before and after the 7-0 nylon single suture technique. A 55-year-old woman underwent scleral suturing and vitrectomy for globe rupture, and the choroid-ciliary body was found to be avulsed over 360 degrees during surgery. The choroid remained detached from the sclera postoperatively, the retina-choroid gradually slid downward to the posterior pole together, and the exposed sclera widened with time (A). The avulsed choroid was sutured to the sclera, and the exposed sclera could be covered by the retina-choroid (B).

Surgical Techniques

Patients with choroidal avulsion received a 25-gauge pars plana vitrectomy under retrobulbar anesthesia. After creation of the three ports, two different suturing techniques were used to reattach the avulsed choroid to the sclera. In both techniques, a wide-angle viewing system and chandelier illumination were used to assist bimanual manipulation.

For the mattress suturing technique, a 10-0 Prolene long needle (1713G, ETHICON, LLC, San Lorenzo, PR, USA) was inserted, in an aphakic eye, into the vitreous cavity at the sclera 6 mm posterior to the limbus (Figure 3). It was passed through the avulsed choroid by grasping the avulsed choroidal edge with vitreous forceps (V-ARTIST disposable micro forceps, HOYA Corporation, Tokyo, Japan) (Figure 3C). Because the eye was aphakic, a 27G needle was inserted through the corneal side port to access the vitreous body, and the Prolene needle was inserted into the 27G needle intravitreally. The 27G needle, with the Prolene needle inside, was withdrawn through the corneal side port. After the Prolene needle was withdrawn from the 27G needle, the 10-0 Prolene was reinserted through the same corneal side port and retained in the anterior chamber, and the 27G needle was inserted into the vitreous cavity through the sclera approximately 5 mm horizontally from the first scleral insertion site. The 27G needle was passed through the avulsed choroid by grasping the avulsed choroidal edge with vitreous forceps. The Prolene needle was inserted into the 27G needle intravitreally, as before, and the 27G needle, with the Prolene needle inside, was withdrawn through the sclera. The 10-0 Prolene thread was tied outside the sclera. The same procedure was performed at two locations in each quadrant, and the avulsed choroid could thus be sutured to the sclera. During the procedure, the choroid was torn several times by the mattress-suturing attempts to secure it as peripherally as possible on the sclera; however, no choroidal hemorrhage occurred when the choroid was torn.

Figure 3. Fundus schema before (A) and after (B) the repair of the choroidal avulsion. A 10-0 Prolene needle was inserted into the vitreous cavity and passed through the avulsed choroid by grasping the avulsed choroidal edge with vitreous forceps (C). A 27G needle was inserted through the corneal side port to access the vitreous body, and the Prolene needle was inserted into the 27G needle intravitreally. After the Prolene needle was withdrawn from the 27G needle, the 10-0 Prolene was reinserted through the same corneal side port, and the 27G needle was inserted into the vitreous cavity through the sclera (D). The 27G needle was passed through the avulsed choroid by grasping the avulsed choroidal edge with vitreous forceps. The same procedure was performed at two locations in each quadrant (E).

For the 7-0 nylon single suture technique, a 7-0 nylon (1696G, ETHICON, LLC, San Lorenzo, PR, USA) was introduced into the vitreous cavity at the sclera 6 mm posterior to the limbus (Figure 4). It was passed through the avulsed choroid by grasping the anterior margin of the avulsed full-thickness choroid to accomplish suturing. The needle was then rotated vertically, and the 7-0 nylon was withdrawn from the sclera at the position of the ciliary body, so as to suture the choroid to the sclera as peripherally as possible. During this maneuver, an assistant grasped the tip of the 7-0 nylon needle as it emerged from the sclera, taking care that the needle did not stray into the vitreous cavity. The 7-0 nylon was tied outside the sclera. The same procedure was performed at two locations in each quadrant. Finally, fluid-air exchange and silicone oil tamponade were performed at the end of the surgery in both procedures.

Figure 4. Fundus schema before (A) and after (B) the repair of the choroidal avulsion. A 7-0 nylon was introduced into the vitreous cavity at the sclera 6 mm posterior to the limbus (C). It was passed through the avulsed choroid by grasping the anterior margin of the avulsed full-thickness choroid to accomplish suturing. The needle was then rotated vertically, and the 7-0 nylon was withdrawn from the sclera at the position of the ciliary body, so as to suture the choroid to the sclera as peripherally as possible (D). The 7-0 nylon was tied outside the sclera. The same procedure was performed at two locations in each quadrant (E).

Results

Patient 1: The retina-choroid, which had slipped down to the posterior pole, was suspended and fixed to the sclera by the mattress suturing technique using the 10-0 Prolene long needle (Figure 1B). The mattress suturing method allowed the choroid to be sutured over a wide area and close to its original peripheral position. The folded retina of the posterior pole was stretched out, and vision improved to 20/200. The subjective symptom of visual field narrowing also improved. However, the IOP remained 3 mmHg after the surgery.

Patient 2: The retina-choroid was fixed to the sclera, and the exposed sclera was mostly covered by the retina-choroid after the 7-0 nylon single suture technique (Figure 2B). The retina-choroid, which had slid downward to the posterior pole, was stretched out, and vision improved to 20/250. The IOP was 8 mmHg after the surgery.

Discussion

In this study, we treated choroidal avulsion using two surgical methods. In both methods, the retina-choroid, which had slipped down to the posterior pole, was suspended and fixed to the sclera assisted by a wide-angle viewing system, unfolding the retina at the posterior pole and improving visual acuity. These two methods are considered to be useful surgical procedures for the treatment of an avulsed choroid.

Choroidal avulsion is one of the most serious traumatic conditions [1,2]. If the choroid and ciliary body are disrupted as a whole unit at the scleral spur and lose their adhesion to the sclera, the choroid in that region is easily detached. When the choroid-ciliary disruption at the scleral spur extends 360 degrees circumferentially, the choroid slides down to the posterior pole. Because of the different tissue compositions of the retina and choroid, it is difficult to reattach the avulsed choroid by the usual methods used in the treatment of retinal detachment. Despite the development and improvement of microsurgical techniques in vitreoretinal surgery, eyes with choroidal avulsion can rarely be repaired. This is due to the connection between the vitreous cavity and the suprachoroidal space after uveal injury, which results in a free flow of aqueous humor between these two compartments and a decrease in intraocular pressure. In addition, the formation of these two compartments significantly reduces the volume of the vitreous cavity, thus reducing the likelihood of retinal reattachment. Thus, without effective treatment, eyes with choroidal detachment usually suffer from phthisis bulbi and cannot be saved.

Very few surgical treatments have been attempted to repair choroidal avulsion. It has been reported that medical fibrin glue was injected into the suprachoroidal space before silicone oil injection [5]. However, because fibrin glue can only be applied after fluid-air exchange in a fluid-free surgical environment, its use has in many cases been limited to repairing an avulsed choroid before the retina is reattached. A method has also been reported in which a full-thickness scleral incision of the same length as the extent of the avulsed area is made at the equator, and the avulsed choroid is incarcerated into the scleral incision and sutured in place [3]. However, this suturing technique may result in exposure of the uvea and may cause sympathetic ophthalmia. When using the 10-0 Prolene procedure, we grasped the anterior margin of the avulsed choroid with vitreous forceps to accomplish suturing and passed the Prolene needle through to perform mattress suturing at the ideal width.
Mattress suturing allows the choroid to be sutured over a wide area and close to its original peripheral position. A major problem in suturing the choroid is that the choroid itself is a fragile tissue that tears easily when strong traction is applied. In other words, the choroid is easily torn if it is forcibly suspended in the periphery with mattress sutures when there is not enough choroid-ciliary body tissue to completely cover the exposed sclera. Therefore, although we would prefer to perform mattress suturing in a more peripheral area, it is important to perform this technique in each quadrant with some length to spare.

The 7-0 nylon needle is long and curved, making it suitable for threading the sclera, choroid, and sclera in a single vertical pass. Suturing with a 7-0 nylon needle is relatively simple, allowing multiple sutures to be placed in a short time. In the present case, the suture was made vertically, so a U-turn suture was not used; however, a U-turn suture should be possible by suturing in the circumferential direction. Considering that a U-turn suture with the 10-0 Prolene needle procedure can be performed only in aphakic eyes and that the procedure is complicated, we believe that a circumferential suture using a 7-0 nylon needle may be useful for U-turn suturing in the future.

A possible complication of choroidal suturing is choroidal hemorrhage. The choroid is a highly vascularized tissue, and bleeding can occur not only from trauma but also from many other causes. However, the blood circulation in the choroid should be severely compromised after choroidal avulsion occurs; therefore, suturing the avulsed choroid seldom causes bleeding. In the present procedures, there was no bleeding when the 10-0 Prolene was used, and some bleeding occurred when the 7-0 nylon needle was used, but it was controllable with endodiathermy.

Conclusions

In this study, we treated choroidal avulsion using two surgical methods. In both methods, the retina-choroid, which had slipped down to the posterior pole, was suspended and fixed to the sclera assisted by a wide-angle viewing system, unfolding the retina at the posterior pole and improving visual acuity. These two methods are considered to be useful surgical procedures for the treatment of an avulsed choroid.

Informed Consent Statement: Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
Precision treatment of Singleton-Merten syndrome with ruxolitinib: a case report

Background Singleton-Merten syndrome 1 (SGMRT1) is a rare type I interferonopathy caused by heterozygous mutations in the IFIH1 gene. IFIH1 encodes the pattern recognition receptor MDA5, which senses viral dsRNA and activates antiviral type I interferon (IFN) signaling. In SGMRT1, IFIH1 mutations confer a gain-of-function which causes overactivation of type I IFN signaling, leading to autoinflammation.

Case presentation We report the case of a nine-year-old child who initially presented with a slowly progressive decline of gross motor skill development and muscular weakness. At the age of five years, he developed osteoporosis, acro-osteolysis, alveolar bone loss and severe psoriasis. Whole exome sequencing revealed a pathogenic de novo IFIH1 mutation, confirming the diagnosis of SGMRT1. Consistent with constitutive type I interferon activation, patient blood cells exhibited a strong IFN signature, as shown by marked up-regulation of IFN-stimulated genes. The patient was started on the Janus kinase (JAK) inhibitor ruxolitinib, which inhibits signaling at the IFN-α/β receptor. Within days of treatment, psoriatic skin lesions resolved completely and the IFN signature normalized. Therapeutic efficacy was sustained, and over the course of treatment muscular weakness, osteopenia and growth also improved.

Conclusions JAK inhibition represents a valuable therapeutic option for patients with SGMRT1. Our findings also highlight the potential of a patient-tailored therapeutic approach based on pathogenetic insight.

Background Heterozygous gain-of-function mutations in the IFIH1 gene underlie a spectrum of autoinflammatory phenotypes including Aicardi-Goutières syndrome type 7 (AGS7) [1,2] and Singleton-Merten syndrome type 1 (SGMRT1) [3]. IFIH1 encodes interferon-induced helicase C domain-containing protein 1, also known as melanoma differentiation associated gene 5 protein (MDA5), a pattern recognition receptor of the innate immune system which plays a pivotal role in antiviral defense. IFIH1/MDA5 recognizes viral double-stranded RNA (dsRNA) in the cytosol and, upon ligand binding, activates antiviral type I interferon (IFN) signaling [4]. IFIH1 mutations in AGS7 and SGMRT1 act as gain-of-function mutations that lead to inappropriate sensing of self-derived RNA, resulting in constitutive overproduction of type I IFN with subsequent autoinflammation [1,2]. As such, AGS and SGMRT are also referred to as type I interferonopathies, a genetically and phenotypically heterogeneous group of autoinflammatory and autoimmune diseases associated with perturbation of the type I IFN system [5]. While AGS is characterized by inflammatory neurodegeneration and skin disease, the clinical features of SGMRT comprise abnormal calcification of the aorta and cardiac valves, alveolar bone loss, dental caries, osteoporosis, psoriasis, and muscular weakness [3]. However, a phenotypic overlap between these disorders has been described, suggesting that AGS and SGMRT due to IFIH1 gain-of-function mutations constitute facets of the same disease spectrum [6-8]. Janus kinase (JAK) inhibitors have recently been reported as a promising treatment option for type I interferonopathies [9-11].
However, whether JAK inhibition also ameliorates symptoms in patients with SGMRT is unknown. Here, we report the case of a child with SGMRT1 in whom treatment with the JAK inhibitor ruxolitinib led to sustained clinical improvement.

Case presentation We report on a male patient who was born at term after an uncomplicated pregnancy to non-consanguineous parents. Birth weight, height and head circumference were within normal limits. The patient initially thrived well but was noted to reach developmental milestones later than expected. He was able to sit at 12 months of age, to stand at 18 months of age, and to walk unsupported at 27 months of age. At the age of four years, he presented with muscular weakness of the lower extremities. While he was able to walk on a flat surface, he had difficulty climbing stairs or squatting. In addition, his growth was stunted, and his height was below the 3rd percentile. During clinical examination, he showed a stiff gait with overextension of the knee joints and a mild hyperlordosis. In contrast to the lower extremities, the patient exhibited normal function of his upper extremities, with precise and well-controlled visual coordination of his hands. His speech and cognitive functions were within the normal range. Neurophysiological examination revealed a mildly reduced motor nerve conduction velocity (N. tibialis 39 m/s [normal > 45 m/s], N. peroneus 43 m/s [normal > 45 m/s]) with unremarkable findings on repetitive nerve stimulation. His muscular mass was reduced with a normal muscle texture, without any signs of inflammation on ultrasound and MR imaging. An MRI of the brain and spine showed normal findings. Endocrinological workup for short stature revealed normal values for growth hormone, IGF-1 and IGFBP-III. His thyroid function test was normal, and the inflammatory marker C-reactive protein (CRP) was below 5 mg/L. However, an X-ray of the hands showed osteopenia with a thinned cortex of the finger bones, and a DEXA scan revealed reduced mineralization of bones (femur, Z-score -2.7 SD; hip, Z-score -2.8 SD; spine, Z-score -0.8 SD). An X-ray of the skull revealed an absent nasal bone (Fig. 1A). The X-ray of the hip and femurs showed a pathological caput-collum-diaphyseal angle (Fig. 1B). The family declined further genetic testing at that point. The child was treated with physiotherapy and orthopedic support. Due to progressive pes equinus, an Achilles tendon lengthening was performed.

In the following years, the patient exhibited normal cognitive development, but his growth remained retarded and his muscular weakness worsened. By the age of seven years, he could barely climb a stair without intense support. A boutonniere deformity of both hands was noted (Fig. 1C), and a hand X-ray showed acro-osteolysis (Fig. 3C, D), consistent with inflammatory bone destruction. At the age of eight years, the patient developed severe psoriasis (Fig. 2A), unresponsive to steroid treatment. In addition, the patient exhibited failure of secondary dentition with aplasia of several teeth (18, 28, 31, 38 and 48) and hypoplastic roots of the remaining teeth (Fig. 1D, E). A cardiologic examination, including echocardiography and electrocardiography, and an ophthalmologic exam were unremarkable. The clinical combination of neuromuscular symptoms with inflammatory involvement of bones and skin suggested a genetic etiology, and eventually genetic testing was initiated. A trio exome analysis revealed a heterozygous de novo variant in the IFIH1 gene (NM_022168.4: c.2465G > A; p.Arg822Gln) in the patient.
This variant has been previously reported in patients with SGMRT1 [3], confirming the diagnosis of SGMRT1. Although the inheritance pattern of SGMRT1 is autosomal dominant, neither of the healthy parents carried the disease-causing variant, p.Arg822Gln, indicating that it had occurred de novo. In addition, there was no evidence of mosaicism in the NGS data. The IFN signature in blood was measured as previously described [12]. In line with constitutive type I IFN activation, peripheral blood mononuclear cells of the patient exhibited a strong interferon signature (an IFN score above 12.49), as shown by up-regulation of IFN-stimulated genes (Fig. 2C).

Based on the genetic and laboratory findings, and in view of the refractory skin disease, the decision was made to treat the child with ruxolitinib, a JAK1/2 inhibitor, which blocks signaling at the IFN-α/β receptor. Following oral administration of 0.5 mg/kg ruxolitinib per day, the child experienced significant improvement of the psoriatic skin lesions that was already visible after three days of treatment and over the course of treatment resulted in complete resolution of cutaneous inflammation (Fig. 2B). In addition, the interferon signature was markedly reduced during treatment (Fig. 2C). Six weeks after treatment was started, the child showed an increase in body weight and length (Fig. 3A, B). A hand X-ray revealed a significant increase in bone mineralization of the fingers (Fig. 3C). Remarkably, after five months of treatment, the acro-osteolysis of the thumb had completely resolved (Fig. 3D). The patient also experienced improvement of his muscle weakness, and his gross motor function classification system (GMFCS) score [13] improved from GMFCS level two (ambulatory with assistance) to GMFCS level one (independently mobile).

While the patient was on ruxolitinib, he experienced mild upper respiratory symptoms and elevated temperature for two days, followed by quick recovery. As the patient's sister had tested SARS-CoV-2-positive, a COVID-19 infection was suspected. Serologic testing confirmed seroconversion for both anti-spike IgG and IgM. Three months after initiation of ruxolitinib, the dosage was slightly increased to a final maintenance dose of 0.75 mg/kg per day. The treatment with ruxolitinib was well tolerated without any side effects. Repeated measurements of blood count and biochemistry did not show any changes, and the patient felt very well.

Discussion and conclusions In 1973, Singleton and Merten, and shortly thereafter, in 1976, Gay and Kuhn, reported four patients with dental dysplasia, osteoporosis, widened medullary cavities of the hand bones and calcification of the thoracic aorta [14,15]. In addition, some of the patients presented with muscle weakness as well as psoriasiform skin lesions. Feigenbaum et al. noted dominant inheritance with significant phenotypic variability of SGMRT, even within families, and summarized the core manifestations to include progressive aortic calcification, dental anomalies, osteopenia and acro-osteolysis and, to a lesser extent, glaucoma, psoriasis, muscle weakness, and joint laxity [16]. In 2015, Rutsch et al. identified a p.Arg822Gln substitution in IFIH1 as the cause of SGMRT1 in three unrelated families and demonstrated by functional analysis that the mutation exerts a gain-of-function, resulting in a heightened inflammatory state due to overproduction of type I IFN [3].
Activating mutations in IFIH1 were subsequently shown to underlie AGS7, an early-onset inflammatory leukoencephalopathy characterized by basal ganglia calcification and constitutive type I IFN activation [2]. Notably, the p.Arg822Gln mutation initially identified in SGMRT1 was also observed in a patient presenting with typical clinical features of AGS, suggesting that SGMRT1 and AGS7 due to IFIH1 gain-of-function mutations are part of the same disease spectrum [8].

The patient described here exhibited many of the typical features of SGMRT1, including dental anomalies, osteopenia, acro-osteolysis, psoriasis and muscle weakness, yet lacked the core feature of aortic calcification. Given that most patients with SGMRT1 develop aortic calcification early in childhood [17], the lack of this feature was unexpected. However, a patient with SGMRT1 without cardiac involvement, carrying a different variant in the IFIH1 gene (p.Leu329Pro), has recently been reported [18]. Nonetheless, because of the high cardiovascular risk, we monitor the patient regularly by echocardiography.

After establishing the diagnosis of SGMRT1 by genetic testing, we confirmed constitutive type I IFN activation in the patient by demonstrating up-regulation of IFN-stimulated genes in blood. Given the progressive disease course, and in particular the refractory skin inflammation, this led us to consider treatment with the JAK inhibitor ruxolitinib. While JAK inhibition had been shown to ameliorate symptoms in patients with type I interferonopathies such as STING-associated vasculopathy, CANDLE syndrome or AGS [9-11], there have been no reports of targeted treatment approaches in SGMRT1 so far. Based on the assumption that uncontrolled type I IFN signaling was driving the inflammatory symptoms in our patient, we initiated treatment with ruxolitinib at 0.5 mg/kg bodyweight to inhibit type I IFN signaling. The patient responded with significant improvement that was visible within the first weeks of administration of the drug. Thus, psoriatic lesions vanished within days, muscle weakness and bone mineralization improved, and the patient showed a significant weight gain. Clinical improvement was accompanied by a marked reduction of the interferon signature in blood, indicating that inhibition of overactive type I IFN signaling was therapeutically effective.

In summary, this case report demonstrates that targeting of uncontrolled type I IFN activation by JAK inhibition is of therapeutic benefit in patients with SGMRT1 and highlights the role of precision medicine in the treatment of rare diseases in children. Moreover, our findings also suggest a hitherto unappreciated role of the interferon signaling pathway in bone metabolism.
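The IFN signature used above to confirm pathway activation and to monitor the treatment response is, per the cited method [12], a summary statistic over a panel of IFN-stimulated genes measured relative to healthy controls. Purely as an illustration, the following minimal Python sketch computes such a score as a median fold change; the gene panel, the numbers and the positivity cutoff are invented placeholders, not the values of the cited assay.

```python
from statistics import median

# Hypothetical relative expression (fold change vs. the median of healthy
# controls) for a panel of IFN-stimulated genes (ISGs). The gene names and
# values are illustrative placeholders only.
patient_fold_changes = {
    "IFI27": 40.2, "IFI44L": 22.8, "IFIT1": 9.5,
    "ISG15": 14.1, "RSAD2": 18.7, "SIGLEC1": 11.3,
}

def ifn_score(fold_changes):
    """Summarize ISG up-regulation as the median fold change."""
    return median(fold_changes.values())

score = ifn_score(patient_fold_changes)
# A positivity cutoff must be derived from one's own control cohort
# (e.g., mean + 2 SD of control scores); 2.47 here is a placeholder.
CUTOFF = 2.47
print(f"IFN score = {score:.2f} ({'elevated' if score > CUTOFF else 'normal'})")
```

Run on serial samples, the same score would fall toward the control range under effective IFN blockade, which is the normalization the case report describes.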
Trends in the utilization of nuclear medicine technology in Jamaica: Audit of a private facility

This study sought to evaluate the types and frequencies of nuclear medicine studies that were carried out at a privately-run nuclear medicine facility in Kingston, Jamaica. Previous studies of this nature have not been done among this population; the researchers therefore sought to gather data which may prove to be useful for the growth of nuclear medicine practice in Jamaica. The study was a nonexperimental, retrospective study which involved an assessment of the records of all nuclear medicine patients who received a radiopharmaceutical during January 01, 2017, to December 31, 2018. The data extracted included age, gender, radiopharmaceutical administered, indication for study, and impression from scan. The total number of nuclear medicine scans carried out at the facility for the 2-year period was 3756. Of this number, 1889 (50.3%) were male and 1866 (49.7%) were female, with ages ranging from 3 months to 100 years. The most frequently conducted studies were bone (2116, 56.3%), renal (867, 23.1%), thyroid (307, 8.2%), and lung (254, 6.8%) scans. Patients aged 60 years and over accounted for the majority of the bone scans (1353/2116). The age group 26-59 years accounted for most of the scans of the lung (123/254), thyroid (209/307), parathyroid (34/65), and whole body (26/34). Patients under 12 years of age accounted for the majority of the renal (596/867), gastrointestinal (22/26), and hepatobiliary (16/28) scans. The audit of this private facility reflects the documented demand in the International Atomic Energy Agency database for Latin America and the Caribbean, and demonstrates the need for continuity of this specialized service in our population.

INTRODUCTION Nuclear Medicine (NM) is an essential technology of medicine globally in the management of both communicable and noncommunicable diseases. Evaluated trends have indicated growth in both diagnostic and therapeutic applications across different regions. [1,2] In spite of this growth, variations exist in the provision of NM services across territories due to differences in factors such as qualified personnel, instrumentation, radiopharmaceuticals, [1] and financial resources. For example, Canada, in 2017, recorded 330 single-photon emission computed tomography (SPECT), 261 SPECT/computed tomography (SPECT/CT), and 51 positron emission tomography/CT (PET/CT) units, [3] averaging 17.6/million people, while reports obtained up to April 2016 from Member States of the International Atomic Energy Agency (IAEA) for Latin America and the Caribbean have documented that the region possesses a total of 1348 gamma cameras, averaging 2.25/million inhabitants. [4] With a population of approximately 2.8 million, Jamaica currently has two SPECT units and one PET unit, available at private institutions. As a Member State of the IAEA, Jamaica has been a participant in a number of national and regional projects aimed at developing nuclear technologies. The IAEA Technical Cooperation Project JAM6012, geared at re-establishing NM capacity in the public health system in Jamaica, [5] has resulted in the training of an NM physician, a technologist, a physicist, and a radiopharmacist. The project will boost NM services with the addition of a SPECT/CT gamma camera at the University Hospital of the West Indies (UHWI), which is the largest hospital in Jamaica and the largest teaching hospital in the region.
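The device-density figures quoted above are simple per-capita rates, and the comparison can be reproduced with a few lines of arithmetic. In the sketch below, the Canadian population is back-calculated from the quoted rate and is therefore an approximation, not a figure taken from the cited reports.

```python
# Imaging units per million inhabitants, as used in the comparison above.
def per_million(units, population):
    return units / (population / 1_000_000)

# Canada, 2017: 330 SPECT + 261 SPECT/CT + 51 PET/CT units.
canada_units = 330 + 261 + 51          # = 642
canada_pop = 36.5e6                    # approximate 2017 population (assumed)
print(f"Canada:  {per_million(canada_units, canada_pop):.1f}/million")   # ~17.6

# Latin America and the Caribbean: 1348 gamma cameras at 2.25/million
# implies a service population of roughly 600 million.
print(f"LAC population implied: {1348 / 2.25:.0f} million")

# Jamaica: two SPECT units and one PET unit for ~2.8 million people.
print(f"Jamaica: {per_million(3, 2.8e6):.2f}/million")                   # ~1.07
```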
This partnership between Jamaica and the IAEA is geared at increasing access to NM technology for all citizens of the country. While private institutions require full payment from all patients for services rendered, the offering of NM services by the UHWI will facilitate a reduction in out-of-pocket expenses for patients, and therefore increase the availability of the technology to a larger proportion of individuals. This is in keeping with the strategy for universal access to health and universal health coverage, which Jamaica adopted from the World Health Organization in 2014. [6] In an assessment of the IAEA initiative to advance NM, Dondi et al. stated that the process should be guided by trends that match individual country needs. [1] This study sought to evaluate the types and frequencies of NM studies that were carried out at a privately-run NM facility in Kingston, Jamaica. It is intended to be used in assessing the diagnostic needs of the Jamaican population, as well as to highlight the prevalence of diseases that require NM technologies for their management. Previous studies have not been done among this population; the researchers therefore sought to gather data which may prove to be useful for the growth of NM practice in Jamaica.

METHODS This study was a nonexperimental, retrospective study aimed at establishing the types and frequencies of NM procedures that were carried out at a privately-run NM facility in Kingston, Jamaica. It involved an assessment of the records of all NM patients who received a radiopharmaceutical during January 1, 2017 to December 31, 2018. The facility caters to a maximum of 15 NM patients each day. NM diagnostic procedures offered at the facility include bone, lung, thyroid, parathyroid and renal scans, as well as gastrointestinal scans. Patients with cancer of the thyroid may also receive NM therapy. The data extracted included age, gender, radiopharmaceutical administered, indication for study, and impression from scan. Information gathered from the files of patients was assessed with the assistance of a consultant radiologist. Ethical approval was sought and granted by the University of the West Indies Mona Campus Ethics Committee prior to the commencement of the study (ECP 201, 18/19). All methods and procedures were performed in accordance with the guidelines and regulations of the committee. Statistical analysis was performed with IBM SPSS Statistics (version 22; Armonk, New York, United States) and Microsoft Excel 2013. Results are expressed as means, medians, interquartile ranges, or percentage frequencies, as appropriate.

RESULTS The total number of NM scans carried out at the facility for the period of January 2017 to December 2018 was 3756. Of this number, 1889 (50.3%) were male and 1866 (49.7%) were female, with ages ranging from 3 months to 100 years. Of the 3756 patients, the indications of 2186 patients were documented in their files, while the age of 23 patients was not recorded. The most frequent indications were breast, prostate, colon, and cervical cancers for bone scans; urinary tract infection and hydronephrosis for renal scans; and Graves' disease, hyperthyroidism, and thyrotoxicosis for thyroid scans [Table 1].

DISCUSSION This is the first study to report on the demand for NM technology services in Jamaica. The primary indications for NM services over the 2 years were found to be related to bone, renal, thyroid, lung, and parathyroid scans.
These findings are consistent with IAEA reports from both developing and developed countries. [1,7] Most of the bone scans were requested for patients diagnosed with prostate and breast cancer. These are the leading sites of cancer in Jamaica for males and females, [8,9] and in 2012 accounted for mortality rates of 33% in males and 20% in females. [10] The assessment of bone metastases, a prevalent form of metastasis among these patients, is a crucial component of breast and prostate cancer management. [11-14] Emphasis should, therefore, be given to advancing the utilization of NM in Jamaica as a tool for the early detection and management of patients with bone metastases from these cancers.

Renal scans were most frequent in the pediatric population. In pediatric NM units, nephro-urology scans make up the majority of studies conducted, [15] as they are used for the detection of both structural and functional abnormalities, and for the determination of glomerular filtration rates in this category of patients. [16] Studies have demonstrated a rise in the incidence of chronic renal failure (from 3.2 to 7.83/million) among the Jamaican pediatric population over the 28-year period 1995-2012, [17-19] and with global trends of pediatric patients presenting with febrile urinary tract infection, [20] early detection of kidney involvement is of importance.

The audit of this single private NM facility reflects the documented demand in the IAEA database for Latin America and the Caribbean. [1] However, the findings are limited, as the IAEA database also documented a demand for cardiovascular and brain NM procedures in these countries, which are not services offered at the selected private institution. This investigation provides an assessment of the NM services at a facility in Jamaica and demonstrates the need for continuity of this specialized service in our population. The data provided may serve to guide future engagements with the IAEA in its role of providing support for countries which aim to address health needs through the peaceful applications of nuclear technologies.
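Computationally, the audit reported above is a frequency tabulation of scan records by type and by age group. A minimal Python sketch of that tabulation follows; the record rows are invented examples standing in for the 3756 audited studies, and the age bands mirror those used in the results.

```python
from collections import Counter

# Illustrative records: (age in years, scan type). Invented examples only.
records = [
    (67, "bone"), (72, "bone"), (0.5, "renal"), (8, "renal"),
    (45, "thyroid"), (33, "lung"), (51, "parathyroid"), (61, "bone"),
]

def age_band(age):
    """Age grouping mirroring the bands used in the result tables."""
    if age < 12:
        return "<12"
    if age < 26:
        return "12-25"
    if age < 60:
        return "26-59"
    return ">=60"

by_type = Counter(scan for _, scan in records)
by_type_and_band = Counter((scan, age_band(age)) for age, scan in records)

total = len(records)
for scan, n in by_type.most_common():
    print(f"{scan}: {n} ({100 * n / total:.1f}%)")
for (scan, band), n in sorted(by_type_and_band.items()):
    print(f"{scan} {band}: {n}")
```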
Mobility of Manganese in a Compacted Residual Gneissic Soil Under Laboratory Conditions

Given the shortage of information available in the literature on transport parameters of heavy metals in Brazilian tropical soils, the mobility of manganese (Mn²⁺) in a residual gneissic compacted soil is studied in this work. Manganese can be found in toxic concentrations in landfill leachate, besides being one of the main contaminants from acid mine drainage. Column tests were performed on two groups of compacted soil samples to determine the manganese retardation factor. The sample groups presented slightly different soil compaction degrees and water contents. Soil samples were initially saturated by upward percolation of distilled water without applied counter pressure. A multi-species contaminant solution was then percolated through the soil columns. A different behavior of the hydraulic conductivity over time was observed between the two groups, during water as well as solution percolation. Manganese mobility was observed to be independent of soil hydraulic conductivity, k, for the range of k-values attained in this investigation, emphasizing the importance of evaluating the mobility of this metal in compacted soil barriers. Even when these barriers present low hydraulic conductivity values, this cation's high mobility may cause it to reach soil layers below the compacted layer, resulting in groundwater contamination.

Introduction Municipal solid waste (MSW) dump sites and areas surrounding mining activities are normally subjected to heavy metal contamination. The leachate produced by MSW generally contains high concentrations of metals, including manganese, while acid mine drainage exhibits low pH and high concentrations of iron, aluminum and manganese. Heavy metals are chemical elements frequently associated with contamination since they may accumulate and cause disturbances in living organisms in a given environment. Studies concerning their behavior in soil have received considerable attention and have helped to increase our understanding of the phenomena related to mobility and retention of these elements in the environment and their inclusion in the food chain. Concern over manganese is relatively recent. However, like other essential elements such as zinc and copper, it can be responsible for soil and groundwater contamination when it is present above certain concentrations. Groundwater pollution below contaminated areas is related to contaminant mobility: when mobility is high, a greater risk exists. Manganese, a plant and animal micronutrient, is a transition element of the iron family. It is among the most abundant elements (Group VII B), representing 0.09% of the weight of the Earth's crust (Wills, 1992). It is employed in metallurgy as well as in the production of fertilizers, electrolytic batteries, ceramics, varnish and paints, among other uses. According to Barceloux (1999), manganese is present in almost all types of soils in divalent and tetravalent forms and in concentrations varying between 40 and 900 mg kg⁻¹. In mining areas its concentration can reach levels of about 7000 mg kg⁻¹. The formation of Mn²⁺ complexes in the process of adsorption, and the consequent mobility, depend on the properties of the metal, the type and amount of ligands, the composition of the soil solution and the soil pH (Alleoni et al., 2005).
Therefore, the main objective of this work was to evaluate Mn²⁺ mobility and determine its retardation factor when percolating a multi-species contaminant solution through compacted gneissic residual soil columns.

Soil The soil used in this study, extracted from a slope at the Visconde do Rio Branco, MG, sanitary landfill, was collected from the B horizon of a yellow-red latosol classified, according to the Unified Soil Classification System (USCS), as inorganic silt of high compressibility (MH) and, according to the Highway Research Board (HRB) system, as an A-7 soil with group index 12 (Azevedo et al., 2006). The soil was characterized through geotechnical tests, clay fraction mineralogical analysis and chemical and physicochemical analyses. Soil characterization and compaction tests were performed according to the Brazilian Standards ABNT NBR-7181/84 for particle size; ABNT NBR-6459/84 and NBR-7180/84 for consistency limits; ABNT NBR-6508/84 for specific weight of solids; and ABNT NBR-6457/86 for compaction. The chemical and physicochemical analyses were performed according to EMBRAPA (1987). Geotechnical properties are presented in Tables 1 and 2, while the results of the chemical and physicochemical analyses are listed in Table 3. X-ray analysis was conducted with a Rigaku D-Max diffractometer equipped with a cobalt tube (Co-Kα radiation) and a graphite curved crystal monochromator operated at 40 kV and 30 mA. The X-ray analysis of the soil clay fraction was performed on three different types of samples: (i) random-powder, prepared on a glass slide with a cavity in which the natural clay was packed in powder form; (ii) oriented-aggregate, prepared with natural clay by the paste method according to Theisen and Harward (1962) for better mineral preferential orientation; and (iii) oriented-aggregate, prepared after treating the clay to remove the iron oxides, to enhance the preferential orientation of the silicate layer species present (Fig. 1). Analysis of these three sample types allowed definition of the soil clay fraction composition as kaolinite, goethite and a very small amount of hematite (Nascentes, 2006). The amount of iron was determined using the dithionite-citrate extraction method (Coffin, 1963) to quantify the presence of iron oxides. The iron oxide content was 13.3% by mass, which was entirely attributed to goethite. It is important to determine the amount of iron oxides in the clay fraction of the soil since these mineral constituents exhibit a high-energy retention capacity for heavy metals.

Heavy metals contaminant solution An artificial contaminant solution (synthetic landfill leachate) consisting of six heavy metals was used in the column tests. This solution was prepared by adding water-soluble nitrate salts, available at the laboratory, of manganese, zinc, cadmium, copper, lead and chromium, metals commonly encountered in landfill leachates (Azevedo et al., 2006). The pH and heavy metal concentrations used (Table 4) are within the range of values for Brazilian landfill leachate (Oliveira & Jucá, 1999).

Column tests The flexible-walled permeameter used in the column tests is similar to a triaxial cell and is capable of simultaneously testing four soil samples of 0.05 m in diameter by 0.10 m in height. Each sample cell has an inlet for the percolating fluid and an outlet for effluent collection. Fluid flows upward through the soil samples. Each inlet is connected, by a latex hose, to a Mariotte bottle containing the contaminant fluid.
The equipment also has an inlet for applying confining pressure that allows reproduction of in situ horizontal stresses (Azevedo et al., 2003). The tests were performed under controlled temperature conditions (17 to 21°C). A confining pressure of 50 kPa was applied to the samples to simulate a 10 m deep urban solid waste layer over the liner. Tests were performed on two groups of samples, with different compaction energies. Three samples from group I and eight from group II were dynamically compacted in a 0.05 m diameter metallic cylinder at 21.9% (group I) and 22.5% (group II) water content, corresponding to 95% of the optimum specific dry density (15.63 kN m⁻³). The compaction energy was such that all samples were compacted until they reached 0.10 m in height and 0.05 m in diameter. As the water content varied slightly, the compaction energy also varied, as shown in Fig. 2. Tables 5 and 6 present a summary of molding and testing conditions for groups I and II, respectively. Different gradients and, consequently, different percolation velocities were adopted for samples CP06, CP07, CP10 and CP11 of group II with the purpose of evaluating the diffusion coefficient, which proved not to be possible.

The procedure used in this type of test is similar to that used in constant head permeability tests. The main differences are the need for measuring the effluent chemical concentration (Ce) and the generation of several pore volumes of chemical solution. During the tests, both affluent and effluent chemical concentrations were determined at regular intervals. The relationship Ce/C0 was calculated considering the value of C0 read at the instant preceding the collection of the effluent. The hydraulic gradient was maintained constant during the tests. Soil samples were initially saturated by upward percolation of distilled water, without applying counter pressure, prior to the percolation of the contaminant solution. Soil columns were considered saturated when constancy of flow was observed. The soil hydraulic conductivity coefficient was determined using Darcy's law (Lambe & Whitman, 1979). Samples CP04 and CP09 from group II were percolated with distilled water to serve as references for the other group II samples, which were saturated with distilled water and then percolated with the contaminant solution. Column effluents were collected daily from 50 mL burettes fixed to the base of the equipment and stored in bottles, previously washed with a solution of nitric acid, for subsequent determination of metal concentrations in an atomic absorption spectrophotometer. After measuring the effluent concentration of each metal (Ce) for each percolated pore volume (T), breakthrough curves (Ce/C0 vs. T) were elaborated for manganese.

Two methods can be used for the analysis of effluent concentration data from column tests: the traditional method and the cumulative mass method. The traditional method consists in measuring instantaneous concentrations vs. time, determining the breakthrough curve and applying an analytical model to determine the retardation factor and the hydrodynamic dispersion coefficient. The concentration of solute at any point of the column is calculated using Eq. (1) (Ogata & Banks, 1961), for the initial and boundary conditions given in Eq. (2), as follows:

C(x,t)/C0 = (1/2) {erfc[(Rd x - vs t) / (2 (Dh Rd t)^(1/2))] + exp(vs x / Dh) erfc[(Rd x + vs t) / (2 (Dh Rd t)^(1/2))]}  (1)

C(x, 0) = 0;  C(0, t) = C0;  ∂C/∂x(∞, t) = 0  (2)

where C [ML⁻³] is the solute concentration at depth x and time t; vs [LT⁻¹] is the seepage velocity; Dh [L²T⁻¹] is the hydrodynamic dispersion coefficient; and Rd is the retardation factor. When the length of the column is sufficiently long, the second term on the right side of Eq. (1) is negligible compared to the first, so that the effluent concentration at x = L is given by (Shackelford, 1993):

Ce/C0 = (1/2) erfc[(Rd L - vs t) / (2 (Dh Rd t)^(1/2))]

or, in dimensionless form,

Ce/C0 = (1/2) erfc[(Rd - T) / (2 (Rd T / PL)^(1/2))]

where Ce [ML⁻³] is the effluent concentration at x = L; T is the number of pore volumes; PL is the column Peclet number; and L [L] is the soil column height.
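To make the analytical model above concrete, the following Python sketch evaluates the dimensionless breakthrough solution Ce/C0(T) for a given retardation factor Rd and column Peclet number PL, using the two-term form together with the long-column simplification quoted from Shackelford (1993); the parameter values in the example are illustrative, not measured ones.

```python
import math

def breakthrough(T, Rd, PL):
    """Dimensionless effluent concentration Ce/C0 after T pore volumes,
    for retardation factor Rd and column Peclet number PL (Ogata & Banks,
    1961, in the dimensionless form used by Shackelford, 1993)."""
    s = 2.0 * math.sqrt(Rd * T / PL)
    first = 0.5 * math.erfc((Rd - T) / s)
    # For long columns the second term is negligible; guard against
    # overflow of exp(PL) by dropping it when its erfc factor underflows.
    arg2 = (Rd + T) / s
    second = 0.0 if arg2 > 25 else 0.5 * math.exp(PL) * math.erfc(arg2)
    return first + second

# Illustrative parameters: Rd = 10 and PL = 54 (the mean Peclet number
# reported for group II is 54.2). Note Ce/C0 = 0.5 at T = Rd.
for T in (2, 5, 10, 15, 20):
    print(f"T = {T:5.1f}  Ce/C0 = {breakthrough(T, Rd=10, PL=54):.3f}")
```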
Percolation of distilled water Soil hydraulic conductivity (k) vs. number of pore volumes (T) curves obtained from distilled water percolation through the group I and group II sample columns are presented in Fig. 3. A significant variation in the hydraulic conductivity values for group II samples with time is evident, as shown in Fig. 3b. Since these samples were compacted with greater water content than those of group I, their structure was slightly more dispersed. Therefore, saturation with distilled water promoted greater variations in the hydraulic conductivity of group II samples, which reached constant flow after percolation of almost ten times more pore volumes, compared to the samples of group I. For these last samples, however, constant flow was reached more quickly (for a smaller number of pore volumes) because of a more flocculated soil structure after compaction, as compared to samples of group II. More flocculated soil structures facilitate the exit of air, which in turn allows constant flow values to be reached for a smaller number of percolated pore volumes. Percolation of group II samples with distilled water, associated with colloidal dispersion and double layer expansion, led to a decrease in soil solution ionic concentration (Na⁺, Ca²⁺, Mg²⁺), as shown in Fig. 4. An expansion of this layer results in a narrower and more tortuous solution percolation path and, consequently, in lower soil hydraulic conductivity. In other words, there was more salt leaching and, as a consequence, a larger double layer thickness, for a greater number of distilled water pore volumes percolated through the samples. The average final values of hydraulic conductivity were 1.5 × 10⁻⁸ m/s for group I and 5.0 × 10⁻⁹ m/s for group II. The heterogeneity of the soil samples tested, and peculiarities of this testing procedure, probably contributed to the slight difference in the final values of k between the two groups.

Percolation of contaminant solution Soil hydraulic conductivity vs. number of pore volumes curves for percolation of the contaminant solution through the two soil sample groups are shown in Figs. 5 and 6. The significant difference observed in hydraulic conductivity behavior for the two groups is attributed to the distinct double layer thicknesses attained by each after the saturation process. In other words, the soil hydraulic conductivity during contaminant solution percolation depended mainly on the compacted soil structure and the previous percolation with distilled water. Group I samples exhibited an initial large increase in hydraulic conductivity followed by a pronounced decrease, while a monotonic, significant decrease was observed for all group II samples. However, this decrease occurred in a distinct way for each sample, possibly as a result of the different structures formed after the saturation process. The large difference in the number of distilled water pore volumes directly influenced the behavior of the hydraulic conductivity by the time the contaminant solution was percolated.

The small difference in the number of percolated distilled water pore volumes in samples CP07 and CP08 from group II (40.1 and 44.1, respectively), the approximately equal amounts of leached cations and the same Standard Proctor compaction degree of 94.5% led to similar behaviors in hydraulic conductivity when the contaminant solution was percolated through these samples. The introduction of chemical substances to soil generally produces variations in its hydraulic conductivity. The contact between these substances and the soil may lead to redistribution of pore spaces as a result of clay particle rearrangement (flocculation or dispersion) and chemical reactions, such as dissolution or precipitation of solids, between these substances and clay minerals. As a result of this contact, ionic changes may occur that can cause double layer contraction or expansion. The thickness of the double layer and the magnitude of the acting forces depend mainly on the dielectric constant, temperature, electrolytic concentration in the interstitial fluid and cation valence, and to a lesser extent on cation size, fluid pH and anion adsorption on clay particle surfaces (Boscov, 1997). The samples in group I showed more flocculated structures (thinner double layer) than those in group II after percolation with distilled water. Thus, the initial increase in hydraulic conductivity probably occurred as a result of the exchange of monovalent ions, naturally found in the soil, with divalent and trivalent cations present in the contaminant solution, leading to flocculation. Hydraulic conductivity started to decrease when the soil exhausted its capacity to retain zinc and manganese, precisely the metals present in high concentrations in the contaminant solution. In group II samples, the increase in pH could have been a factor favouring metal precipitation and leading to a decrease in hydraulic conductivity as a consequence of the obstruction of soil pores by metal precipitates. In this case the soil structure was more dispersed and the duration of contact between the contaminant solution and the samples was greater, which also favours precipitation.

Effluent pH was measured for all samples, and curves of pH vs. T are shown in Fig. 7a. In Fig. 7b, for the sake of clarity, only the pH vs. T curves for soil columns CP10 and CP11 (group II) are presented. When heavy metal solutions percolate through soil columns, variations in the pH of the effluent, due to sorption and desorption reactions, are common. In these reactions, the cations naturally present in the soil are liberated and leached, usually in association with the hydroxyl ion (OH⁻). In this way, the pH of the effluent varies according to the type and leached amount of the cation, which could explain the oscillation of the pH value around the value that would be reached once the sorption and desorption reactions cease. According to Fig. 6b, the hydraulic conductivity in sample CP10 was lower than that of CP11 for values of T between 13 and 50, approximately. In this range, the effluent pH in CP10 was greater than in CP11, indicating possibly higher precipitation in the former sample. Both samples showed similar pH values as well as hydraulic conductivities between T = 50 and T = 104. From T = 104 on, the effluent pH in sample CP11 increased relative to that in sample CP10, and the hydraulic conductivity in CP11 consequently decreased more than in CP10.

Determination of transport parameters Breakthrough curves (curves of relative concentration Ce/C0 vs.
number of percolated pore volumes, T) were constructed for manganese for both sample groups and are presented in Figs. 8 and 9. Manganese is a metal that occurs naturally in large amounts in tropical soils. The easily exchangeable concentration of this element in the studied soil is approximately 0.046 cmolc kg⁻¹, obtained by the sequential extraction method with CaCl₂, which accounts for the total amount of Mn²⁺ released into solution when in competition with other ions for adsorption sites. The largest Mn²⁺ desorption, in CP03 of group I and in all samples of group II, as shown in Figs. 8c and 9, may be explained by the greater time of contact between the solution and the soil particles, as indicated by the hydraulic conductivity variations observed during the tests. According to Azevedo et al. (2006) and Nascentes (2006), the mobility of the other heavy metals differed from that of manganese and was shown to depend on soil hydraulic conductivity.

The Peclet number is a parameter that helps in determining the predominant type of transport. This number for each column was calculated using Eq. (5b) (PL = vs L / Dh), considering the average percolation velocity of each test up to Ce/C0 = 1, as shown in Table 7. A mean value of 54.2 was determined for group II samples, implying, according to the classification proposed by Sun (1995), that the predominant transport processes in the column tests were advection and mechanical dispersion, since PL was higher than 10 and lower than 100; both processes depend on hydraulic conductivity.

The retardation factor (Rd) values shown in Table 8 were determined using the traditional method (Rowe et al., 1995), with Rd given by the value of T for Ce/C0 equal to 0.5. Korf et al. (2008) conducted column tests on an undisturbed clayey soil, which was percolated with a synthetic multi-species solution composed of Cu²⁺ (20 mg/L), Cr³⁺ (20 mg/L), Mn²⁺ (1 mg/L), and Zn²⁺ (10 mg/L), and obtained an average value of 10.1 for the retardation factor. It can be noted that the breakthrough curves for the two sample groups shown in Figs. 8 and 9 are quite similar, as are the Rd values presented in Table 8. These similarities indicate that the mobility of manganese (Mn²⁺) did not depend on hydraulic conductivity, for the range of k values of this investigation, even though k was markedly different for the two groups when the contaminant solution was percolated through the soil columns. The importance of test duration must be emphasized, since the reactions between the soil and the contaminant solution did not occur in the same way for manganese and the remaining heavy metals. Long-term tests allow the chemical interactions of each heavy metal, in competition with the others, with the soil particles to develop, since a great number of pore volumes of contaminant solution are allowed to percolate through the soil column. In both sample groups, more than 60 pore volumes of the multi-species solution percolated through the soil columns, but in spite of significant differences in the values of hydraulic conductivity, the mobility of manganese in the soil was nearly the same in both groups.
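The traditional method used above amounts to reading, from the measured breakthrough curve, the number of pore volumes at which Ce/C0 crosses 0.5. A minimal Python sketch of that reading, with linear interpolation between data points, is given below; the data pairs are invented, and in the regime classification only the 10-100 band is stated in the text, the labels for the outer bands being our assumption.

```python
def retardation_factor(T_vals, C_ratios, level=0.5):
    """Traditional method (Rowe et al., 1995): Rd is the number of pore
    volumes T at which Ce/C0 first reaches `level` (linear interpolation)."""
    pairs = list(zip(T_vals, C_ratios))
    for (t0, c0), (t1, c1) in zip(pairs, pairs[1:]):
        if c0 < level <= c1:
            return t0 + (level - c0) * (t1 - t0) / (c1 - c0)
    raise ValueError("breakthrough did not reach the requested level")

# Invented breakthrough data (Ce/C0 vs. pore volumes T).
T = [0, 2, 4, 6, 8, 10, 12, 14]
C = [0.00, 0.02, 0.10, 0.28, 0.47, 0.62, 0.78, 0.90]
print(f"Rd = {retardation_factor(T, C):.2f}")   # 8.40 for these data

def transport_regime(PL):
    """Regime bands; only the middle band is stated in the text (Sun, 1995)."""
    if PL < 10:
        return "diffusion plays a significant role (assumed label)"
    if PL <= 100:
        return "advection and mechanical dispersion"
    return "advection-dominated (assumed label)"

print(transport_regime(54.2))
```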
Conclusion The main conclusions drawn from this study can be summarized as follows. The hydraulic conductivity behavior of a compacted clay layer saturated with distilled water and subsequently leached with a heavy metal solution is sensitive to the number of percolated pore volumes in the saturation process as well as to the compaction energy, which can promote significant alterations in the structure of the material. A mean Peclet number of 54.2 was determined for the group II samples, implying that the predominant transport processes in the column tests were advection and mechanical dispersion. The mobility of manganese in the test columns was not influenced by the compaction water content varying within ±0.5% of the optimum value, indicating the potential of this metal to contaminate soil and groundwater even for low values of saturated hydraulic conductivity.
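For completeness, the hydraulic conductivity values discussed throughout were obtained from constant-head readings via Darcy's law (k = v/i). The short sketch below shows that computation for a column of the dimensions used in this study; the discharge reading and gradient are invented illustration values, not measurements from the tests.

```python
import math

def hydraulic_conductivity(volume_m3, time_s, diameter_m, gradient):
    """Constant-head estimate via Darcy's law: k = v/i = (Q/A)/i."""
    area = math.pi * diameter_m ** 2 / 4.0   # column cross-section [m2]
    discharge = volume_m3 / time_s           # Q [m3/s]
    return discharge / (area * gradient)     # k [m/s]

# Illustrative reading: 12 mL collected over 24 h through a 0.05 m
# diameter column under a hydraulic gradient of 20 (invented numbers).
k = hydraulic_conductivity(12e-6, 24 * 3600.0, 0.05, 20.0)
print(f"k = {k:.2e} m/s")   # ~3.5e-9 m/s, of the order reported for group II
```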
Translation elongation by a hybrid ribosome in which proteins at the GTPase center of the Escherichia coli ribosome are replaced with rat counterparts.

The ribosomal L10·L7/L12 protein complex and L11 bind to a highly conserved RNA region around position 1070 in domain II of 23 S rRNA and constitute a part of the GTPase-associated center in Escherichia coli ribosomes. We replaced these ribosomal proteins in vitro with the rat counterparts P0·P1/P2 complex and RL12, and tested them for ribosomal activities. The core 50 S subunit lacking the proteins on the 1070 RNA domain was prepared under gentle conditions from a mutant deficient in ribosomal protein L11. The rat proteins bound to the core 50 S subunit through their interactions with the 1070 RNA domain. The resultant hybrid ribosome was insensitive to thiostrepton and showed poly(U)-programmed polyphenylalanine synthesis dependent on the actions of both eukaryotic elongation factors 1α (eEF-1α) and 2 (eEF-2) but not of the prokaryotic equivalent factors EF-Tu and EF-G. The results from replacement of either the L10·L7/L12 complex or L11 with the rat counterpart showed that the P0·P1/P2 complex, and not RL12, was responsible for the specificity of the eukaryotic ribosomes for eukaryotic elongation factors and for the accompanying GTPase activity. The presence of either E. coli L11 or rat RL12 considerably stimulated the polyphenylalanine synthesis by the hybrid ribosome, suggesting that L11/RL12 proteins play an important role in post-GTPase events of translation elongation.

The "GTPase center" of the ribosome is a region involved in interaction with GTP-bound translation factors, GTP hydrolysis (1), and post-GTPase events including tRNA movements on the ribosome (2, 3). Translation elongation is markedly stimulated by the interaction of this region with two elongation factors in a GTP-dependent manner. The GTPase center includes two essential RNA regions around positions 1070 and 2660 (Escherichia coli numbering) of the 23 S/28 S rRNA (4), which appear to bind to the elongation factors (5, 6). Despite the highly conserved structure of the 1070 and 2660 RNA regions, ribosomes show a kingdom-dependent accessibility for translation factors, i.e., prokaryotic ribosomes do not engage in translation elongation with the eukaryotic factors in place of the prokaryotic factors (7-9). Furthermore, there are differences in the rate of GTPase turnover between the two systems; in vitro, eukaryotic eEF-2/80 S ribosome-dependent GTP hydrolysis is 10-fold slower than that of the prokaryotic EF-G/70 S ribosome system (10). This may reflect, in part, the elaborate regulation of eukaryotic translation. The other important component of the GTPase center is the acidic stalk protein, termed L7/L12 in prokaryotes (11-15). Four copies of this protein bind to protein L10 and form a stable complex (16), designated here as L10·L7/L12. This protein complex and another protein, L11, are assembled on the 1070 RNA domain (17). The flexible property of the L7/L12 protein in the ribosome (16, 18-20) seems to be correlated with the fast turnover of the EF-G-dependent GTPase. The eukaryotic counterparts of the prokaryotic L7/L12 and L10 are P1/P2 and P0, respectively (21, 22). Although formation of the complex, termed P0·P1/P2, has been clarified (21, 23-26), its structure and function have not been characterized extensively. We previously tried replacement of the acidic stalk protein complex L10·L7/L12 in the E.
coli ribosome with rat P0·P1/P2 in vitro and showed, by this replacement, that the ribosome acquired GTPase activity dependent on the eukaryotic translocase eEF-2 instead of the prokaryotic EF-G (10). This activity was comparable with that of the rat 80 S ribosome. Meanwhile, other groups exchanged the 1070 RNA region within the 23 S/26 S rRNA, with which the acidic stalk protein complex interacts, between E. coli and yeast, and this showed no major functional effect (27, 28). These studies suggest that the P0·P1/P2 protein complex on the 1070 RNA domain, but not the RNA itself, is important for the kingdom-specific function. The E. coli ribosome in which L10·L7/L12 was replaced with rat P0·P1/P2, however, showed no significant activity of eEF-1α/eEF-2-dependent polyphenylalanine synthesis in our previous study. This may be due to damage caused during preparation of the core ribosome lacking both L10 and L7/L12, using 50% ethanol, 0.5 M NH₄Cl at 30°C. To prepare the core ribosome employing milder conditions, we use here an L11-lacking ribosomal mutant from which the L10·L7/L12 complex is easily removed at 0°C. Both the rat P0·P1/P2 complex and RL12 (the rat counterpart of E. coli L11) are incorporated into the core ribosome. This hybrid ribosome has appreciable activity in polyphenylalanine synthesis dependent on the two eukaryotic elongation factors. The present results clearly show that the eukaryotic ribosomal proteins bound to the 1070 RNA domain play crucial roles in translation elongation regulated by the eukaryotic factors.

Rat Ribosomal Proteins and Their Binding to the E. coli Core Ribosome - The rat P0·P1/P2 complex and RL12, counterparts of the E. coli L10·L7/L12 complex and L11, respectively, were prepared as described previously (24, 30). In a typical experiment, 10 pmol of the 50 S subunit cores were incubated with 2 μg of P0·P1/P2 complex and 0.4 μg of RL12 in a solution (25 μl) containing 10 mM MgCl₂, 75 mM NH₄Cl, 20 mM Tris-HCl, pH 7.6, at 37°C for 5 min and used for various assays. In some experiments, E. coli L10·L7/L12 and L11 (30) were added instead of the rat proteins. Incorporation of these proteins was confirmed by sucrose density gradient (10) and native agarose-acrylamide composite gel (see below) analyses.

RESULTS We have characterized the rat ribosomal protein P0·P1/P2 complex and RL12, counterparts of the E. coli acidic stalk protein complex L10·L7/L12 and L11, respectively (24, 30). To investigate the functional significance of these rat proteins in the eukaryotic translational mechanism, here we attempted to substitute the proteins for L10·L7/L12 and L11 in E. coli 50 S ribosomal subunits. The E. coli core particle lacking the L10·L7/L12 and L11 assembled on the domain around position 1070 of 23 S rRNA was prepared from the 50 S subunit of the L11-deficient AM68 strain (Fig. 1A, lane 2). L10·L7/L12 was easily and selectively removed from the mutant 50 S subunit (Fig. 1A, lanes 3 and 4) in 50% ethanol, 0.5 M NH₄Cl at 0°C. Binding of the rat proteins to the E. coli core particle was examined by native agarose-acrylamide composite gel electrophoresis (Fig. 1B). The mobility of the core particle (lane 2) was much higher than that of the intact 50 S subunit (lane 1). The addition of the rat proteins changed the gel mobility of the core particle (lane 3). This mobility shift by the rat proteins was prevented by adding an excess amount of an RNA fragment containing residues 1029-1127 of E.
coli 23 S rRNA, to which the rat P0·P1/P2 complex (10) and RL12 (30) cross-bind (lane 4), suggesting that the rat proteins bind to the 1070 RNA domain within the E. coli 50 S core particle. The binding of the rat proteins to the 1070 RNA region was also confirmed by chemical footprinting. The region of the gel with the shifted ribosomal band (lane 3) was cut out and tested by immunoblotting for reactivity with autoimmune serum that recognizes rat ribosomal proteins P0, P1, P2, and RL12 (35) (Fig. 1C). Fig. 1C, lane 2, clearly shows that all of the added rat proteins comigrated with the E. coli core particle. From these results, we concluded that E. coli L10·L7/L12 and L11 are replaced with the rat protein counterparts on the 1070 RNA domain of 23 S rRNA in the 50 S subunit, as illustrated in Fig. 1D.

The hybrid ribosomes were tested for activity in poly(U)-dependent polyphenylalanine synthesis. Unlike intact E. coli ribosomes, hybrid ribosomes showed activity dependent on the eukaryotic eEF-1α and eEF-2 (Fig. 2A) but not on the prokaryotic EF-Tu and EF-G (Fig. 2B). Therefore, the ribosomal specificity for elongation factors was changed by replacing E. coli L10·L7/L12 and L11 on the 50 S subunit with the rat counterparts. This eEF-1α/eEF-2-dependent activity was suppressed by the addition of the RNA competitor (data not shown), which prevented the hybrid formation (Fig. 1B, lane 4). The polyphenylalanine synthetic activity of the hybrid ribosome was not as high as that of the intact rat ribosome (Fig. 2C); the initial rate of polymerization by the hybrid ribosome was about one-third that of the rat 80 S ribosome. To confirm whether the polymerization activity of the hybrid ribosome depends on the actions of both eEF-1α and eEF-2, the activity was assayed without either eEF-1α or eEF-2. As shown in Fig. 2D, the polymerization activity of the hybrid ribosome was detected only when both eEF-1α and eEF-2 were present, indicating that the hybrid ribosome allows functional access to both eukaryotic elongation factors.

To investigate the individual contributions of the P0·P1/P2 complex and RL12 to translation elongation by the hybrid ribosome, we performed partial replacement of either L10·L7/L12 or L11 in the E. coli 50 S subunit with the respective rat counterparts. The core ribosome lacking L10·L7/L12 and L11 (Fig. 1A) was incubated with L10·L7/L12-like proteins (E. coli L10·L7/L12 or rat P0·P1/P2) and with L11-like proteins (E. coli L11 or rat RL12), and we tested the ribosomal functions dependent on eukaryotic elongation factors (Fig. 3).

FIG. 2. Polyphenylalanine synthesis by the hybrid ribosome dependent on eukaryotic elongation factors. The eukaryotic eEF-1α/eEF-2-dependent (A) and prokaryotic EF-Tu/EF-G-dependent (B) polyphenylalanine syntheses were assayed at individually determined times with 10 pmol of E. coli core ribosomes lacking L10·L7/L12 and L11 (open triangles), hybrid ribosomes formed by preincubation of E. coli cores with rat P0·P1/P2 and RL12 (filled circles), and intact E. coli ribosomes (open squares), as described under "Materials and Methods." C, comparison of polypeptide synthetic activity between the hybrid (filled circles) and rat 80 S (open circles) ribosomes. Polyphenylalanine synthesis by 10 pmol of hybrid ribosomes was assayed at individually determined times as described under "Materials and Methods." The activity was also assayed with 10 pmol of rat 80 S ribosomes (37) under the same conditions except that the salt concentrations used were 5 mM MgCl₂, 50 mM NH₄Cl, 100 mM KCl, 50 mM Tris-HCl, pH 7.5, 0.1 mM dithiothreitol (optimum for rat ribosomes). D, functional accessibility of the hybrid ribosome to both eukaryotic elongation factors eEF-1α and eEF-2. The hybrid ribosomes (10 pmol) were tested for polyphenylalanine synthesis with both 5 μg of eEF-1α and 1 μg of eEF-2 and without either factor.

The GTPase activities dependent on eEF-2 (Fig. 3A) and eEF-1α (Fig. 3B) were markedly stimulated by the addition of rat P0·P1/P2 to the core ribosome. The addition of E. coli L11 or rat RL12, together with P0·P1/P2, slightly enhanced the eEF-2-dependent GTPase (Fig. 3A) but had no effect on the eEF-1α-dependent GTPase (Fig. 3B). In contrast to the GTPase, polyphenylalanine synthetic activity with eEF-1α and eEF-2 was stimulated to only a small extent by the addition of the P0·P1/P2 complex alone to the core ribosome (Fig. 3C). This activity was, however, enhanced 3-fold by the further addition of E. coli L11 and more than 4-fold by the addition of rat RL12. Replacement of E. coli L11 alone with RL12 had no appreciable effect on either the polyphenylalanine synthesis or the GTPase.

The hybrid ribosomes were tested for sensitivity to the antibiotic thiostrepton, which recognizes the E. coli 1070 RNA domain associated with L11 (1). The E. coli ribosomes in which L10·L7/L12 alone was replaced with rat P0·P1/P2 retained thiostrepton sensitivity, as described previously (10). Replacement of both L10·L7/L12 and L11 with P0·P1/P2 and RL12 resulted in ribosomes insensitive to the drug (Fig. 4), suggesting that rat RL12 is responsible for the thiostrepton insensitivity of the hybrid ribosome.

DISCUSSION The ribosomal proteins L10·L7/L12 and L11 bind to the 1070 RNA region in domain II of 23 S rRNA, forming a mobile region that constitutes a part of the GTPase-related functional center of the E. coli ribosome. We here performed in vitro replacement of the L10·L7/L12 complex and L11 in the 50 S subunit with the rat counterparts P0·P1/P2 complex and RL12, respectively. By this replacement, the ribosomal specificity for elongation factors is changed; the hybrid ribosome engages in polypeptide synthesis by the actions of the two eukaryotic elongation factors, eEF-1α and eEF-2, but not of the prokaryotic EF-Tu and EF-G. It has been known since the earliest work that prokaryotic 70 S ribosomes do not engage in protein synthesis with the eukaryotic translation factors and that eukaryotic 80 S ribosomes are inactive with the prokaryotic factors (7-9). The present results strongly suggest that a limited number of ribosomal proteins assembled on the 1070 RNA domain are the major components responsible for the kingdom-dependent specificity between ribosomes and GTP-bound translation factors. The functional contributions of rat P0·P1/P2 and RL12 in the hybrid ribosome were clarified by their individual substitutions for E. coli L10·L7/L12 and L11, respectively (Fig. 3). The P0·P1/P2 complex, but not RL12, contributes substantially to the specificity for the eukaryotic factors and to the GTPase. Not only rat RL12 but also E. coli L11, however, stimulates the polyphenylalanine synthetic activity dependent on eEF-1α and eEF-2, although the stimulation by RL12 is greater than that by E. coli L11. The availability of mutants deficient in L11-type proteins in bacteria (29, 36, 37) and yeast (38) indicates that L11-like proteins are not essential for cell viability. However, the growth rate of the E. coli mutant AM68 lacking L11 was very slow; its doubling time was five times longer than that of strain Q13, which does have L11 (data not shown).
This finding suggests that the efficiency of protein synthesis by ribosomes lacking L11 is quite low within the cell, in line with the present in vitro data on poly(Phe) synthesis. RL12/L11 appears to participate in improving the efficiency of a step in post-GTPase events, such as translocation of tRNAs.

There is a clear difference between E. coli L11 and rat RL12. Binding of L11, but not of RL12, to the 1070 RNA domain makes the ribosome sensitive to the antibiotic thiostrepton (Fig. 4). This is consistent with our previous binding experiment, i.e., thiostrepton stabilizes a complex between L11 and the E. coli 1070 RNA domain but not the RL12-RNA complex (30). The difference in thiostrepton sensitivity may be due to N-terminal differences in the amino acid sequences between L11 and RL12 around residue 22, which is important for thiostrepton binding (39). Despite the divergence of the sequences, a conformation important for function appears to be conserved between the two proteins.

FIG. 4. E. coli core 50 S subunits lacking L10·L7/L12 and L11 were preincubated with rat P0·P1/P2 and RL12 (●) or with E. coli L10·L7/L12 and L11 (□). The ribosomal samples were then incubated with the indicated amounts of thiostrepton and assayed for eEF-2-dependent and EF-G-dependent GTPase activities, respectively.

The present results clearly show that the P0·P1/P2 complex plays a crucial role in the functions of the two eukaryotic elongation factors. The involvement of P1/P2 proteins in translation elongation has been suggested previously by immunochemical inhibition assays (40) and partial reconstitution studies with rat and yeast ribosomes (23, 41). An essential role of P0 for cell viability has been demonstrated in yeast (23). Cryo-electron microscopic studies of the complex containing the E. coli ribosome, EF-Tu, and aminoacyl-tRNA (42), as well as of the ribosome·EF-G complex (43, 44), have demonstrated direct contacts of these factors (GTP-binding domain) with the L7/L12 stalk and also with its base region. Considering these previous data and the present results together, the eukaryotic P1/P2 stalk and P0/RL12 constituting its base region seem to bind directly to eEF-1α and eEF-2. This view is also supported by chemical cross-linking of eEF-2 with P2, P0 (LA33), and RL12 (45) and of eEF-1α with RL12 (46). Therefore, the ribosome-factor specificity may be explained by the direct interaction between the ribosomal proteins and the elongation factors.

In addition to interactions with the translation factors, the P0·P1/P2 complex and RL12 have another important function, which is rRNA binding. Because rRNAs appear to play essential roles in the translational mechanism (47), it is important to know the effect of the protein binding on rRNA. The P0·P1/P2 complex and RL12 bind to overlapping regions of the 1070 (E. coli numbering) domain of 28 S rRNA and affect the RNA conformation (24). In the 1070 RNA region, there is also a site for eEF-2 binding, as detected by footprinting (48). It is likely that adjustment of the 1070 RNA region by protein binding may be important for the functional interaction of eEF-2 with the RNA. This is also the case in the hybrid ribosome. Because rat P0·P1/P2 and RL12 cross-bind to the E. coli 1070 RNA domain (10, 30) and stimulate ribosome function dependent on eEF-1α and eEF-2 (present study), these rat proteins appear to affect the structure and function of the 1070 RNA region within the E. coli ribosome.
An important and interesting point yet to be addressed is the effect of protein binding on the 2660 RNA region (the sarcin/ricin loop), another important RNA region to which elongation factors bind. Because the 1070 and 2660 RNA regions are neighbors in the 50 S subunit (4), it is also likely that binding of rat proteins to the 1070 region may affect the interactions of elongation factors with the 2660 region.

The activity of our hybrid ribosomes in translational elongation implies that tRNA movement, as well as tRNA binding, occurs properly in this artificial construction. It has been shown that translocation of the A-site tRNA to the P-site is stimulated by GTP hydrolysis on EF-G (2) and possibly by the interaction of EF-G with several sites, including the decoding region of the E. coli ribosome (43, 44, 49). Our results demonstrate from a functional aspect that the ribosomal compartments engaging in translational elongation, except for the acidic stalk protein complexes, are highly conserved between E. coli and rat. This is in agreement with structural evidence from crystallographic studies, i.e., the ribosomal intersubunit space, including the peptidyltransferase and decoding sites, is constructed mainly of conserved rRNA moieties (50, 51). Furthermore, cryo-electron microscopic studies showed a structural resemblance between rat and E. coli in the interface surfaces of both the large and small subunits (52). Meanwhile, it is highly likely that there is strong structural similarity between EF-G and eEF-2 in domains 3, 4, and 5 of the translocases, which appear to mimic the structure of tRNA (reviewed in Refs. 3 and 53), although the amino acid identity between them is low (26.4% between E. coli EF-G and rat eEF-2). We infer that the basic mechanism of the factor-dependent translocation of tRNAs, which occurs in the RNA-rich intersubunit space of ribosomes, is identical between prokaryotes and eukaryotes. The acidic stalk protein complex located on the outer side of the ribosome appears to participate in kingdom-specific regulation through its mobile properties. This protein complex may regulate the action of the translocase not only by binding directly to the translocase and triggering GTP hydrolysis but also through interaction with functional regions of rRNA. Further detailed knowledge of the characteristic features of the P0·P1/P2 complex could provide insights into eukaryote-specific regulation of translational elongation.

It has been difficult to determine the precise functional roles of eukaryotic ribosomal proteins in vitro. The present replacement studies, a "molecular plantation," as it were, of mammalian proteins into the E. coli ribosome, provide a novel methodology for research on eukaryotic ribosomal proteins.
HW/SW Architecture Exploration for an Efficient Implementation of the Secure Hash Algorithm SHA-256

—Hash functions are used in the majority of security protocols to guarantee integrity and authenticity. Among the most important hash functions is the SHA-2 family, which offers higher security and solved the insecurity problems of other popular algorithms such as MD5, SHA-1 and SHA-0. However, these security algorithms involve a certain amount of complex computation and consume a lot of energy. In order to reduce the power consumption, as required in the majority of embedded applications, one solution consists in offloading the critical part to a hardware accelerator. In this paper, we propose a hardware/software exploration for the implementation of the SHA-256 algorithm. For the hardware design, two principal design methods are followed: low-level synthesis (LLS) and high-level synthesis (HLS). The exploration allows the evaluation of performance in terms of area, throughput and power consumption. The synthesis results under a Zynq 7000 based FPGA reflect a significant improvement of about 80% and 15%, respectively, in FPGA resources and throughput for the LLS hardware design compared to the HLS solution. For better efficiency, hardware IPs are deduced and implemented within a HW/SW system on chip. The experiments are performed using the Xilinx ZC702-based platform. The HW/SW LLS design records a gain of 10% to 25% in terms of execution time and 73% in terms of power consumption.

I. INTRODUCTION

Nowadays, the Field-Programmable Gate Array (FPGA) is becoming a good alternative to the application-specific integrated circuit (ASIC), especially when dealing with complex implementations such as image or signal processing applications [1]. Indeed, thanks to the progress made on programmable circuits, it has become possible to design a System on Chip (SoC) component based on single or multiple processors and a programmable logic. This kind of system can be exploited in a wide range of applications thanks to its flexibility, short time to market, low power and high capacity of integration. In addition, the reconfigurability of FPGA circuits encourages designers to implement their own programs using a Hardware Description Language (HDL) and also to make several optimizations on the hardware architecture.

Manuscript received January 7, 2021; revised April 1, 2021. Date of publication April 29, 2021. Date of current version April 29, 2021. The associate editor prof. Toni Perković has been coordinating the review of this manuscript and approved it for publication. M. Kammoun is with the Digital Research Center of Sfax, Sfax, Tunisia. M. Elleuchi and M. Abid are with the CES Research Laboratory, National Engineering School of Sfax, Digital Research Center of Sfax, Sfax, Tunisia. A. M. Obeid is with the National Center for Electronics, Communications and Photonics at KACST (e-mails: manelkammounenis@gmail.com, manelelleuchi@gmail.com, mohamed.abidces@yahoo.fr, obeid@kacst.edu.sa). Digital Object Identifier (DOI): 10.24138/jcomss-2021-0006
For decades, low-level synthesis (LLS) has been adopted as the design method for FPGA implementations, as it is more reliable and requires an explicit coding of the control path. This option makes it possible to improve design capabilities by optimizing whichever parameters matter. Nevertheless, designing the final netlist takes much time and effort, particularly in the case of complex algorithms. At this level, hardware developers face a new challenge, where many constraints should be taken into account to fulfill market requirements. Consequently, it is time to think about new design methods which can help to save design time and facilitate the implementation task on FPGAs. The solution is to raise the abstraction level from LLS to high-level synthesis (HLS) using a specific high-level description language (Matlab, C/C++, etc.). For many reasons, HLS is becoming more and more useful than LLS [2], [3]. One of the key benefits of working with HLS is the ability to simulate multiple algorithms in the shortest time [4]. Moreover, modern HLS tools (Vivado HLS [5], Catapult-C [6], etc.) are able to provide an estimated report of area cost, frequency and latency much more quickly. Also, several optimizations can be exploited at the level of the C function to further improve design performance in terms of throughput and hardware cost. For instance, the use of pipeline and unroll pragmas can help to reach higher throughput at the cost of additional logic gates. However, there are some restrictions that should be kept in mind before working with HLS tools. First, it is not a simple conversion from a high-level language to the RTL level; in fact, the code must be rewritten in a specific way to be correctly implemented on an FPGA platform. Second, some particular C constructs based on pointers and recursion are not synthesizable and can cause memory overhead in the context of an FPGA. All these reasons prevent designers from fully optimizing their architectures. Consequently, the main focus of this work is to study the influence of the LLS and HLS design methods when facing such a computationally intensive application as a cryptographic algorithm.

Most application domains use secure techniques and algorithms to protect themselves from attacks while respecting the security requirements. Several security protocols and frameworks are based on symmetric cryptographic techniques, asymmetric cryptographic techniques, hybrid cryptographic techniques and hash functions [7]. Hash functions are the most important key to keeping data safe and secure. They are used as building blocks in various cryptographic and security services such as electronic commerce, digital signatures and information authentication. SHA-2 is a family of hash functions designed by the US National Security Agency (NSA), based on SHA-1 and SHA-0. Like most hash functions, SHA-2 takes as input a message of arbitrary size and produces a fixed-size output. Fig. 1 represents a general model of this type of function [8].

The size of the hash is indicated by the suffix: 224 bits for SHA-224, 256 bits for SHA-256 and 512 bits for SHA-512. For instance, SHA-256 is a secure hash operation under the SHA-2 [9] banner with a digest length of 256 bits. This family of hashing algorithms uses large message digests, making them more resistant to possible attacks and allowing them to be used with large amounts of data, up to 2^128 bits in the case of SHA-512.
SHA-256 is a hash algorithm that comprises many computational rounds. In this context, various works have tried to optimize the complexity of the SHA-256 function using hardware accelerators, as in [10] and [11]. However, these solutions suffer from a lack of flexibility and from performance degradation. To overcome these deficiencies, two design methods are adopted in this work, based on LLS and HLS synthesis. With system-on-chip (SoC) designs growing in complexity, system-level approaches that leverage HLS and LLS techniques are becoming the workhorse of current SoC design flows. These solutions provide reasonable trade-offs in terms of FPGA resources, throughput and power consumption. To highlight our contribution, the proposed LLS and HLS accelerators are integrated in a HW/SW context in order to estimate performance in terms of execution time and power consumption.

The remainder of this paper is organized as follows. Section II introduces related works that have implemented the SHA-256 algorithm on FPGA platforms. Section III presents our proposed architectures implemented on the Xilinx ZC702 evaluation board [12]. The experimental findings of the HW/SW implementations in terms of throughput and power consumption are discussed in Section IV. Finally, the conclusion and future work are provided in Section V.

II. RELATED WORKS

Several works have experimented with the implementation of cryptographic hash functions on FPGA-based platforms. For instance, the example given in [13] proposed improved schemes of the SHA-256 algorithm implemented on a Virtex-2 XC2VP-7. These designs were based on a rearrangement technique to compute the inner loop of the SHA-256 hash function, such as computing values in advance and changing the control path without increasing the clock cycles. In the best case, the maximum throughput achieved in this work was about 909 Mbps with an efficiency of 0.713 Mbps/slice.

In [14], the authors reported a parallel architecture for an efficient usage of encryption/decryption modules. The synthesis was done on a Virtex-5 based platform, which provided a rate of 405 Mbps for the SHA-256 implementation.

Furthermore, in [15] a SHA processor design was described which implemented the three hash algorithms SHA-512, SHA-512/224 and SHA-512/256 on both Virtex-5 and Virtex-4 LX FPGA chips. The main purpose of this architecture is to reuse data to keep a high efficiency, minimize the critical path and reduce memory accesses through the use of a cache memory. The implementation results demonstrated that the proposed design used fewer resources while achieving higher performance and efficiency; the data transfer speed, however, was around 50 Mbps.

In addition, a multi-mode architecture is presented in [16] which is able to perform either a SHA-384 or SHA-512 hash or to treat two independent SHA-224 and SHA-256 blocks. The main goal of this approach is to minimize the computational overhead, with zero latency caused by the processing of the input message. However, the maximum throughput achieved by the SHA-256 hardware block only reaches 308 Mb/s.
Another VLSI architecture is provided in [17] which can support three hash functions (256, 384 and 512). The principal contribution of this work is the allocation of the same area resources for all hash algorithms, which affects the speed of the global design (about 291 Mbps in the case of SHA-256). There are also other works focusing on SHA-256 hardware implementation, such as [18] and [19]. For instance, in [18], the authors implemented the SHA-256 secure hash algorithm on both Virtex-5 and Virtex-4 LX FPGA chips. The purpose of this architecture is to exploit a data-reuse technique to keep a high efficiency, minimize critical paths and reduce memory accesses. The synthesis results using a Virtex-5 device demonstrated fewer FPGA resources in use, while the data transfer speed was around 50 Mbps.

On the other hand, a Totally Self-Checking (TSC) design was implemented in [19] on a Virtex-5 XC5VLX330 FPGA device. Hence, the different components of the SHA-256 function, such as the rotation/shift registers and multiplexers as well as the counter and addition components, should obey the described TSC rules. Moreover, a TSC system, even though it introduces a penalty in performance and in area consumption, is more efficient in terms of throughput compared to the existing solutions, as it can produce a throughput of up to 3.88 Gbps.

In contrast, HLS has been adopted as a design method in diverse fields such as financial [20], video coding [21] and stereovision [22] algorithms. Nevertheless, the number of published works on secure algorithms using the HLS method is relatively small. In this context, the hardware proposed in [23] is designed using the Vivado HLS tool under a Xilinx Zynq 7000 SoPC. After adding the suitable optimizations, it was capable of processing 1088 bits in 70 clock cycles.

In light of the above findings, we note that the previously proposed solutions are entirely developed in hardware, which allows achieving higher throughput at the cost of affecting the flexibility of the design. It is therefore necessary to take into account the synchronization between the hardware IP and the bus interface throughputs when dealing with a processor and an FPGA. In this context, the next section is devoted to developing the hardware implementation of the SHA-256 hash function using low-level and high-level design methods under the Zynq 7000 SoC platform.

III. TOWARD EFFICIENT HW/SW IMPLEMENTATION

Several design methods can be explored to perform the implementation of the SHA-256 hash function. Usually, SW solutions are more flexible and do not require a lot of time to verify and validate the IP, which is not the case for the HW implementation. The latter is better suited to satisfying real-time constraints at a low power cost, albeit at the price of longer simulation times. In order to ensure the best trade-off between flexibility and performance, the HW/SW concept is considered the best solution, as it combines a microprocessor system and a programmable logic in the same chip.

Thereby, this section discusses the different proposed solutions (SW, HW and HW/SW) for the implementation of the SHA-256 hash algorithm. After studying the whole operation of the hash function, it is first implemented in a SW environment using the ARM Cortex-A9 processor in order to identify the most time-consuming part of the SHA-256 function. Based on the profiling results, diverse hardware solutions are then developed for the implementation of the critical function.

A. SHA-256: Specification and Complexity
The concept of a cryptographic hash function consists of assigning a single relationship between the input message and the hash value. An ideal cryptographic hash algorithm should satisfy several criteria. First, it should be hard or infeasible to invert the hash function, i.e., given a hash output value h, to produce an input message M such that H(M) = h. Second, given an input m1, it should be difficult to produce the same hash value with another input value m2; this feature is referred to as weak collision resistance. The iterative structure is another property specific to secure hash functions, whereby the hash value of the current block is computed using the digest of the previous block [24]. This makes the compression function output more secure and collision resistant. Thanks to these advantages, hash functions such as MD5 [25] and SHA-1 [26] are today widely exploited in real-life applications. However, before proceeding with any implementation task, it is necessary to present the characteristics of the secure hash algorithms, as summarized in Table I.

The different steps followed to generate the digest message using the SHA-256 hash algorithm are as follows. SHA-256 operates in the same manner as MD5 and SHA-1. The input message is first padded in such a way that the resulting length is a multiple of 512 bits. Second, it is parsed into 512-bit blocks M(1), M(2), ..., M(N). The message blocks are computed sequentially one by one, starting from an initial hash value H(0), as given in equation 1:

H(i) = H(i-1) + C(H(i-1), M(i))   (1)

where C is the compression function, + means word-wise addition mod 2^32, and H(N) is the hash of M.

Generally, SHA-256 can be expressed in the form of four functions. The 'sha256 init' function initializes the eight 32-bit variables H0, H1, H2, H3, H4, H5, H6, H7 for use with 'SHA256 Update' and 'SHA256 final'. 'SHA256 final' is called when all data have been added via 'SHA256 Update' and loads the message digest. On the other hand, 'SHA256 Transform' is used by 'SHA256 Update' and 'SHA256 final' to hash the 512-bit input blocks, and it builds the core of the algorithm. Fig. 2 shows the pseudo-code descriptions of the SHA-256 functions.

1) Preprocessing (Overview): As in prior hash algorithms, SHA-256 padding is computed as follows. The message is first padded so that the final length (L) becomes a multiple of 512 bits [27]. A single 1-bit is appended at the end of the message, followed by K zero bits, where K refers to the smallest positive solution of the equation L + K + 1 = 448 mod 512. A 64-bit representation of L is then appended to the result of the padding. For instance, take the example of the (8-bit ASCII) message "abc", which has length 8×3 = 24. This message is padded with a 1, then with (448 − (24+1)) = 423 zero bits, and finally with its length, to obtain the 512-bit binary message presented in Fig. 3 [28]. This message is parsed into individual 512-bit blocks M(1), M(2), ..., M(N) and then passed to the message expander.

2) Hash Operation (Overview): A set of logical functions operating on 32-bit words is used in the SHA-256 algorithm [29]. These functions are given in equations 2 to 7, where ⊕, ∧ and ∼ denote respectively the bitwise XOR, the bitwise AND and NOT, while R and S represent the right shift and the right rotation by n bits.
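Since equations 2 to 9 did not survive extraction, the following C sketch reconstructs them from the standard SHA-256 definition in FIPS 180-4: the Ch, Maj and sigma functions (equations 2-7) and the message-schedule recurrence on W(u-2), W(u-7), W(u-15) and W(u-16) (equations 8 and 9). The macro names (ROTR, SHR, SIG0, sig0, etc.) are ours, not the paper's.

```c
#include <stdint.h>

/* Right rotation and right shift on 32-bit words (the paper's S^n and R^n). */
#define ROTR(x, n) (((x) >> (n)) | ((x) << (32 - (n))))
#define SHR(x, n)  ((x) >> (n))

/* Eqs. 2-3: choice and majority functions used in the compression rounds. */
#define CH(e, f, g)  (((e) & (f)) ^ (~(e) & (g)))
#define MAJ(a, b, c) (((a) & (b)) ^ ((a) & (c)) ^ ((b) & (c)))

/* Eqs. 4-5: big-sigma functions applied to the working registers a and e. */
#define SIG0(a) (ROTR(a, 2) ^ ROTR(a, 13) ^ ROTR(a, 22))
#define SIG1(e) (ROTR(e, 6) ^ ROTR(e, 11) ^ ROTR(e, 25))

/* Eqs. 6-7: small-sigma functions used by the message expander. */
#define sig0(x) (ROTR(x, 7) ^ ROTR(x, 18) ^ SHR(x, 3))
#define sig1(x) (ROTR(x, 17) ^ ROTR(x, 19) ^ SHR(x, 10))

/* Eqs. 8-9: message schedule. W[0..15] come straight from the 512-bit
   block; W[16..63] follow the recurrence on the four earlier words.    */
void sha256_schedule(const uint32_t M[16], uint32_t W[64])
{
    for (int u = 0; u < 16; u++)
        W[u] = M[u];
    for (int u = 16; u < 64; u++)
        W[u] = sig1(W[u - 2]) + W[u - 7] + sig0(W[u - 15]) + W[u - 16];
}
```

The same bitwise description underlies both the SW implementation profiled below and the HLS designs, since these operations map directly onto FPGA logic.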
Fig. 4 illustrates the different steps followed in order to hash a message M containing N blocks, where Wu denotes the expanded message words (W0, W1, ..., W63), computed as given in equations 8 and 9, while Ku is a sequence of 64 constants used to initialize hash values.

According to the profiling results supplied in Fig. 5, it is evident that the 'SHA256 Transform' function is the most complex and consumes about 47% of the total execution time required by the SHA-256 algorithm. Thus, it is sufficient to design an efficient hardware architecture for the 'SHA256 Transform' function in order to ensure a good trade-off between flexibility and performance.

B. Hardware Exploration of the 'SHA256 Transform' Block

As the 'SHA256 Transform' function is the most time-consuming function in the SHA-256 algorithm, we present in this section two hardware implementations, based on a low-level architecture and a high-level architecture, in order to find the optimal solution in terms of throughput and power consumption.

1) Low-level proposed architecture: The block diagram of the proposed low-level architecture, dedicated to the 'SHA256 Transform' function, is detailed in Fig. 6. This design comprises a set of components designed as follows:

• Input/output registers: a 512-bit register per block, organized in the form of 16 inputs coded on 32 bits. The 8×32-bit digest message generated at the output constitutes the hash value, which is the concatenation of (H0, ..., H7).

Furthermore, the pipeline process is applied between the compression engine, which is responsible for loading the (a, b, c, ..., h) values, and the Wu computing unit. In all, 65 clock cycles are enough to load the digest message, which is the concatenation of the eight hash registers. The maximum throughput achieved by the proposed design is computed as given in equation 10:

δ = (MB × freq) / C   (10)

where δ is the maximum throughput, MB is the message block size, freq is the operating frequency and C is the total number of clock cycles.

Table II summarizes the FPGA resources, the operating frequency and the throughput of the proposed architecture, which was implemented on an XC7Z020 Zynq [30] device and simulated using the ModelSim tool. The next section is devoted to developing the high-level proposed architecture. The main objective of this study is to determine which design method among LLS and HLS provides better performance in terms of area cost, throughput and power consumption.

2) High-level proposed architecture: In this section, HLS is adopted as the design method in order to improve design performance in terms of area cost and throughput. At this stage, the Vivado HLS tool is used to develop several optimized hardware solutions for the 'SHA256 Transform' function on the Zynq 7000 FPGA platform. In the top-level function, we use two 32-bit input vectors, data1[8] and data[16], to store respectively the eight initial hash values (H0 to H7) and the 16 words representing the 512-bit input message. At the output, the hash message is loaded into an 8×32-bit RAM memory block. The block diagram of the proposed HLS architecture is illustrated in Fig. 9. To improve design productivity, two hardware solutions are elaborated by adding optimization pragmas incrementally to the design, in the spirit of the sketch below.
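As an illustration only, and not the authors' actual source, a Vivado HLS round loop annotated with an UNROLL directive might look like the following sketch; SIG0/SIG1/CH/MAJ are the macros from the earlier listing, and K is the table of 64 round constants.

```c
#include <stdint.h>

/* Hypothetical sketch of a pragma-annotated 'SHA256 Transform' round loop. */
void sha256_rounds(uint32_t state[8], const uint32_t W[64], const uint32_t K[64])
{
    uint32_t a = state[0], b = state[1], c = state[2], d = state[3];
    uint32_t e = state[4], f = state[5], g = state[6], h = state[7];

ROUNDS:
    for (int u = 0; u < 64; u++) {
#pragma HLS UNROLL factor=4   /* trade area for latency, as in the #Optimized solution */
        uint32_t T1 = h + SIG1(e) + CH(e, f, g) + K[u] + W[u];
        uint32_t T2 = SIG0(a) + MAJ(a, b, c);
        h = g; g = f; f = e; e = d + T1;
        d = c; c = b; b = a; a = T1 + T2;
    }

    /* Feed-forward of equation 1: add the round result to the chaining value. */
    state[0] += a; state[1] += b; state[2] += c; state[3] += d;
    state[4] += e; state[5] += f; state[6] += g; state[7] += h;
}
```

Because each round depends on the result of the previous one, unrolling mainly duplicates round logic and removes loop-control overhead, which is consistent with the observation below that the UNROLL directive cuts latency at the price of area.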
• #Solution 1: In this first experiment, the SW code of the 'SHA256 Transform' function is kept in its initial form, without any optimization, and the hardware design is synthesized for the Zynq 7000 FPGA using the Vivado HLS tool. The experimental results supplied in Table III show that the FPGA implementation uses 339 slices, 1036 LUTs, 5 BRAMs and 1322 FFs, with an operating frequency of 222 MHz. The maximum throughput achieved by this solution reaches 96 Mbps.

• #Optimized solution: In the second experiment, the UNROLL directive is applied to the external loops of the computation and scheduling equations. This optimization reduces latency and improves the maximum throughput achieved by the hardware design. Although this technique reduced the latency by up to 87% compared to #Solution 1, it comes with a penalty in area cost.

On the other hand, we conclude from Table III that the manual solution provides 15% to 90% higher throughput compared to the #Optimized solution and #Solution 1, respectively, and that it also uses 80% fewer area resources than the #Optimized solution. We also evaluated the power consumption of the different proposed solutions. From Table III, it is evident that our proposed solution consumes less power than #Solution 1 and the #Optimized solution. This is explained by the fact that the operating frequency of the manual approach is lower than that of the HLS solutions. Consequently, the proposed manual architecture is more efficient than the HLS designs, as it provides the best trade-off between FPGA resources and throughput.

After achieving all hardware acceleration tasks, Section IV is reserved for the study of the developed solutions in a SW/HW environment, with a deeper evaluation of design performance in terms of FPGA resources, execution time and power consumption.

IV. SW/HW SOLUTIONS EXPLORATION AND PERFORMANCE EVALUATION

To highlight the influence of hardware acceleration in terms of time, area cost and power consumption, the SW/HW concept is adopted as a design method. This solution exploits the Zynq SoC architecture, which incorporates an ARM Cortex-A9 [31] processor operating at 667 MHz and a programmable logic (PL). The communication between the processor and the PL is ensured through an AXI4-Stream protocol controlled by the TDATA, TVALID, TLAST and TREADY signals [32]. In the case of the HLS design method, the streaming interface is developed based on the #Optimized solution, which provides the best trade-off between throughput and area cost. The different signals used to control the transfer status between the processor and the stream interfaces are detailed in Fig. 10. AXI4-Stream uses between 2 and 9 signals to ensure the communication between the master and slave interfaces. The main signals are TVALID, which indicates the presence of data; the TREADY flag, which is equal to 1 when the slave is ready to receive data; and the TLAST signal, which notifies the end of the frame. This behavior is illustrated in Fig. 11.
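On the processor side, a stream transfer of this kind is usually driven through the AXI DMA engine. The sketch below is our own illustration, assuming the standard Xilinx standalone driver (xaxidma.h); the function and buffer names are hypothetical and do not come from the paper.

```c
#include "xaxidma.h"
#include "xil_cache.h"
#include "xstatus.h"

/* Hypothetical sketch: push one 512-bit padded block to the accelerator
   and read back the 256-bit digest over AXI4-Stream via AXI DMA.        */
int hash_one_block(XAxiDma *dma, uint32_t block[16], uint32_t digest[8])
{
    /* Make the buffers visible to the DMA (Cortex-A9 caches are enabled). */
    Xil_DCacheFlushRange((UINTPTR)block, 16 * sizeof(uint32_t));
    Xil_DCacheFlushRange((UINTPTR)digest, 8 * sizeof(uint32_t));

    /* Arm the receive channel first, then send the message block. */
    if (XAxiDma_SimpleTransfer(dma, (UINTPTR)digest, 8 * sizeof(uint32_t),
                               XAXIDMA_DEVICE_TO_DMA) != XST_SUCCESS)
        return XST_FAILURE;
    if (XAxiDma_SimpleTransfer(dma, (UINTPTR)block, 16 * sizeof(uint32_t),
                               XAXIDMA_DMA_TO_DEVICE) != XST_SUCCESS)
        return XST_FAILURE;

    /* Busy-wait until both channels complete (TLAST closes the frame). */
    while (XAxiDma_Busy(dma, XAXIDMA_DMA_TO_DEVICE) ||
           XAxiDma_Busy(dma, XAXIDMA_DEVICE_TO_DMA))
        ;

    Xil_DCacheInvalidateRange((UINTPTR)digest, 8 * sizeof(uint32_t));
    return XST_SUCCESS;
}
```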
On the other hand, the work presented in [33] showed that the performance of the AXI4-Lite bus is limited, since it provides a sequential transfer of only 32-bit data. This incurs a large communication overhead and slows the transfer. With the streaming interface, in contrast, it suffices to fix the maximum packet size depending on the input message. The 8-bit input data are then concatenated into 32-bit registers and communicated as a burst to the DDR memory.

The synthesis results of the proposed hardware IPs designed with the HLS (#SHA256 Transform HLS) and LLS (#SHA256 Transform LLS) methods and connected to the streaming interface are presented in Table IV. From these implementation results, we confirm that the #SHA256 Transform LLS solution provides a gain of about 84% in LUTs compared to the #SHA256 Transform HLS solution, while the operating frequency of the #SHA256 Transform HLS is 22% better than that of the #SHA256 Transform LLS.

Furthermore, the full SW/HW designs of the SHA-256 algorithm are carried out in standalone execution mode using the Xilinx ZC702 board. The SW/HW LLS design, given in Fig. 12, includes one DMA channel configured in read/write mode connected to the #SHA256 Transform LLS IP. This routing provides direct access to the external DDR memory in order to read and write data. However, since the #SHA256 Transform HLS stream interface requires two input vectors, the SW/HW HLS design uses two DMA channels to connect the #SHA256 Transform HLS stream interface. The first DMA is configured in read/write mode while the second operates in write mode only, as detailed in Fig. 13.

The next stage of this project consists of comparing the different proposed design methods (SW, SW/HW HLS, SW/HW LLS) in terms of time and power consumption. As a first experiment, we study the impact of the 'SHA256 Transform' accelerators alone on execution time, without including the whole SHA-256 chain. The experimental results are compared to the SW implementation, as presented in Fig. 14. The general formula used to evaluate the execution time 'DTime' in microseconds is provided in equation 11:

DTime = 1.0 × (End − Start) / (CNT / 1,000,000)   (11)

where End is the end time, Start is the start time, and CNT = 100,000,000/2. As shown in Fig. 14, the execution time of the developed SW/HW LLS and SW/HW HLS solutions is reduced by nearly 43% and 35%, respectively, compared to the SW case.

In the second experiment, we evaluate the execution time of the whole SHA-256 chain after adding the 'SHA256 Transform' accelerators designed with the LLS and HLS methods, as illustrated in Fig. 15. The experimental results show that the execution time of the SW/HW LLS solution is 10% and 25% better than that of the SW/HW HLS and SW designs, respectively.
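As a concrete reading of equation 11, the measurement can be coded on the Cortex-A9 with the global timer exposed by the Xilinx standalone BSP. This is our own hedged sketch, not the authors' measurement code: the xtime_l.h API is assumed, and sha256_run stands for whichever SW or SW/HW chain is under test.

```c
#include "xtime_l.h"

/* CNT mirrors the paper's 100,000,000/2 counts per second. */
#define CNT (100000000 / 2)

/* Hypothetical illustration of equation 11: time one run in microseconds. */
double time_sha256_us(void (*sha256_run)(void))
{
    XTime start, end;

    XTime_GetTime(&start);   /* Start = 64-bit global timer before the run */
    sha256_run();            /* the SHA-256 chain under test */
    XTime_GetTime(&end);     /* End = timer value after the run */

    /* DTime = 1.0 * (End - Start) / (CNT / 1,000,000) */
    return 1.0 * (double)(end - start) / ((double)CNT / 1000000.0);
}
```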
V. CONCLUSION

This HW/SW co-design approach improves the embedded system architecture across the traditional boundaries of hardware and software, and it lets developers think about design in terms of a trade-off between performance and flexibility. Despite the improvements achieved by our work, the results remain average, which pushes us to look for better ways to optimize the accelerator so that it consumes fewer resources. As future work, we will migrate to a more sophisticated RISC-V based processor to obtain better results. In addition, we will study other recent hash algorithms using the HW/SW design method in order to estimate their implementation constraints and compare the results with the proposed SHA-256 solution. Furthermore, we can also exploit reconfigurable methods as a solution to avoid memory overhead and decrease power consumption.

Fig. 6. The block diagram of the proposed architecture. • The compression engine: this unit is responsible for computing the intermediate hash values (a, b, ..., g, h). Given the expanded message Wu, the constant values Ku and the 32-bit initialized registers (a to h), it computes the T1 and T2 values used to update the A and E registers. Afterwards, the shift registers are used to update the remaining registers in each clock cycle. To accomplish this task, we exploit the pre-calculated functions Σ0(a), Σ1(e), Maj(a,b,c) and Ch(e,f,g). • Wu unit: presented in Fig. 7, it generates the Wu used by the round computation. For the first 16 rounds, W0 to W15 are transmitted to the compression engine as input1, input2, ..., input16 to provide the first values of Wu. After that, Wu is computed recursively from its previous values Wu−2, Wu−7, Wu−15 and Wu−16; this calculation requires the evaluation of σ0(Wu) and σ1(Wu).

Fig. 12. The block diagram of the SW/HW LLS design.

Table I. Secure hash algorithms characteristics.

Table III. Implementation results of the HLS solutions under the FPGA platform.

Table IV. Implementation results of the proposed streaming interfaces.
Impaired PTPN13 phosphatase activity in spontaneous or HPV-induced squamous cell carcinomas potentiates oncogene signaling via the MAP kinase pathway

Human papillomaviruses (HPV's) are a causative factor in over 90% of cervical and 25% of head and neck squamous cell carcinomas (HNSCC's). The C-terminus of the high risk HPV 16 E6 oncoprotein physically associates with and degrades a non-receptor protein tyrosine phosphatase (PTPN13), and PTPN13 loss synergizes with H-RasV12 or ErbB2 for invasive growth in vivo. Oral keratinocytes that have lost PTPN13 and express H-RasV12 or ErbB2 show enhanced Ras/RAF/MEK/Erk signaling. In co-transfection studies, wild type PTPN13 inhibited Ras/RAF/MEK/Erk signaling in HEK 293 cells that over-express ErbB2, EGFR, or H-RasV12, while an enzymatically inactive PTPN13 did not. Twenty percent of HPV negative HNSCC's had PTPN13 phosphatase mutations that did not inhibit Ras/RAF/MEK/Erk signaling. Inhibition of Ras/RAF/MEK/Erk signaling using the MEK inhibitor U0126 blocked anchorage independent growth in cells lacking PTPN13. These findings show PTPN13 phosphatase activity plays a physiologically significant role in regulating MAP kinase signaling.

INTRODUCTION

Malignant transformation often occurs through random, accumulated genetic changes resulting in characteristic features shared by nearly all cancers (Hanahan and Weinberg 2000). It is estimated that viral gene expression plays a role in 20% of cancers. Viral genes frequently target key cellular pathways that are also altered in non-viral cancers. Because viral genes alter these pathways in a mechanistically consistent way, studies of their function often serve as a starting point to understanding non-viral mechanisms of transformation. In most viral cancers, synergistic cellular changes must occur for malignant progression to occur. Therefore, it is important to study viral gene function in the context of these cellular changes. The following study examines a synergy between HPV viral oncogene function and cellular changes that lead to invasion.

High risk HPV's promote cancerous growth through over-expression of two multifunctional viral oncoproteins, E6 and E7. Their known transforming functions include inactivation of pRB by E7 and degradation of p53 and activation of telomerase by E6 (Longworth and Laimins 2004). E6 oncoproteins from HPV subtypes that are high risk for malignant progression also contain a C-terminal PDZ binding motif (PDZBM), which has a poorly understood yet necessary role in malignant transformation. PDZBM's are short C-terminal amino acid sequences capable of binding PDZ domain containing proteins (Jelen et al 2003). We have previously investigated the transforming effects of the E6 PDZBM of HPV type 16 in HPV related head and neck squamous cell cancers (HNSCC's) (Spanos et al 2008b) and cervical cancer (Nowicki et al, unpublished data) and have shown it physically associates with and induces loss of PTPN13, a non-receptor protein tyrosine phosphatase that contains five PDZ domains. In addition, HPV 16 E6 or shRNA mediated PTPN13 loss synergizes with H-RasV12 for invasive growth in vitro and in vivo models of HNSCC (Spanos et al 2008a, Spanos et al 2008b). Besides our data, PTPN13 has been reported as a putative tumor suppressor in a wide range of epithelial cancers (including breast, colon, and hepatocellular) (Wang et al 2004, Yeh et al 2006, Ying et al 2006).
Analysis of synergistic changes associated with PTPN13 loss in colon cancers showed that a majority had mutations in the MAP kinase pathway (Wang et al 2004). Though some reports show a significant association between Ras mutations and HPV in cervical cancers (Landro et al 2008, Lee et al 1996), direct activating Ras mutations (like H-Ras V12) are less common in HNSCC's (Hardisson 2003, Lu et al 2006, Yarbrough et al 1994). Ras pathway stimulation may alternatively be achieved in HNSCC's by overexpression of membrane bound growth factor receptors, most notably the ErbB family of receptor tyrosine kinases. The four members of this family (ErbB1-4) are commonly overexpressed in HNSCC's and are associated with activation of several major cancer associated signaling cascades, including signal transducers and activators of transcription (STAT's), Ras/RAF/MEK/Erk (MAP kinase), and PI3 Kinase/AKT (Ford and Grandis 2003). ErbB2 specifically is over-expressed in up to 47% of HNSCC's (Cavalot et al 2007), and when combined with expression of E6/E7 causes invasive growth in primary oral keratinocytes, although the mechanism of HPV/ErbB2 synergy and the contribution of the E6 PDZBM were not explored (Al Moustafa et al 2004). Therefore, we have investigated whether the common HNSCC oncogene ErbB2 synergizes with HPV 16 E6 induced PTPN13 loss to result in invasive growth in vivo. To understand how PTPN13 loss alters cell signaling to promote invasion, we investigated the phosphorylation status of relevant effector pathway signaling components in the presence or absence of functional PTPN13. We describe a mechanism of PTPN13's phosphatase activity: the regulation of MAP kinase cascade signaling. Furthermore, we provide evidence that PTPN13 loss of function may play a crucial role in potentiating MAP kinase cascade signaling downstream of multiple different Ras activating oncogenes found in both HPV positive and negative epithelial cancers.

RESULTS

ErbB2 synergizes with PTPN13 loss allowing invasive growth in vivo

Previously, we have shown that H-Ras V12 expression synergizes with HPV 16 E6 or shRNA mediated PTPN13 loss to allow invasive growth (Spanos et al 2008b). Here, we sought to determine if ErbB2, a common HNSCC oncogene which activates Ras, would also cause invasive growth when PTPN13 is absent. In human cancers, ErbB2 over-expression is oncogenic, but in rodent cancer models activating mutations are required for efficient tumor formation (Moasser 2007). We therefore cloned mouse ErbB2 cDNA into a retroviral vector and performed site directed mutagenesis to introduce a transmembrane region point mutation (ErbB2 V660E) corresponding to that present in a constitutively active rat ErbB2 mutant (neuNT) (Moasser 2007). Previously described MTE cell lines stably expressing HPV16 E6, HPV16 E6Δ146-151, or shPTPN13 (Hoover et al 2007, Spanos et al 2008b) were then transduced with the ErbB2 V660E expression construct. E6Δ146-151 is a deletion mutant of HPV16 which degrades p53 but lacks a PDZ binding motif and does not induce loss of PTPN13 (Spanos et al 2008b). The shPTPN13 cells lack PTPN13 due to an shRNA mechanism (Spanos et al 2008b). Through western blot, we confirmed that retroviral transduction with ErbB2 V660E increases total and phosphorylated ErbB2 as compared to the parental cells (Figure 1a). Previously described PTPN13 levels (Spanos et al 2008b) were not altered by addition of ErbB2 V660E (not shown).
To determine if HPV 16 E6 PDZBM mediated degradation of PTPN13 is required for invasive growth in synergy with ErbB2, we injected 1 × 10^6 cells from each ErbB2 V660E expressing cell line subcutaneously into the right hind legs of C57BL/6 mice (5 mice per group). Weekly caliper measurements were used to calculate average mouse leg circumferences and estimate tumor growth rates over time (Figure 1b). All mice receiving MTE HPV 16 E6/ErbB2 V660E or MTE shPTPN13/ErbB2 V660E cells formed tumors and met criteria for euthanasia within 50 and 30 days, respectively, but no mice receiving MTE HPV 16 E6Δ146-151/ErbB2 V660E cells showed signs of tumor formation for more than 80 days (Figure 1c). Representative mouse legs from each group are shown at two weeks post-injection (Figure 1d). MTE cells expressing both HPV 16 E6 and E7, and ErbB2 V660E, formed tumors at a similar rate to HPV 16 E6/ErbB2 V660E (data not shown). Thus, oncogenic forms of ErbB2 (Figure 1) and H-Ras (Spanos et al 2008b) synergize with HPV 16 E6 in a similar PDZBM-dependent manner, suggesting that growth factor receptor mediated Ras activation may be important to allow invasive growth in HPV related HNSCC's.

PTPN13 loss correlates with increased Erk phosphorylation

Our in vivo findings show that expression of ErbB2 V660E or H-Ras V12 is tumorigenic in oropharyngeal keratinocytes only if PTPN13 levels are decreased in parallel, suggesting a potential role for PTPN13 in the regulation of cancer signaling pathways affected in common by both ErbB2 and its downstream effector, Ras. To assess which common signaling pathways are altered by loss of PTPN13 and Ras/ErbB2 expression, we examined phosphorylation levels of Erk and AKT. MTE HPV 16 E6Δ146-151/H-Ras V12, MTE HPV 16 E6/H-Ras V12, and MTE shPTPN13/H-Ras V12 cells were grown to confluency, serum starved for 24 h, and protein lysates collected for immunoblot. Although we observed variation in phospho-AKT levels among the MTE cell lines, we found no correlation between phospho-AKT levels, in vivo invasive growth potential, and the presence/absence of PTPN13. However, cell lines with decreased PTPN13 showed increased levels of phospho-Erk 1/2 (Figure 2a), a finding which correlates with tumor forming ability in vivo. MTE cells expressing ErbB2 V660E showed similar results (Figure 2b). Immunohistochemistry of mouse tumor samples demonstrated increased levels of phospho-Erk compared to the overlying normal epithelium; a representative MTE shPTPN13 ErbB2 V660E tumor is shown in Figure 2c. We also examined phospho-Erk levels in primary human tonsil epithelial (HTE) cells and those expressing combinations of HPV 16 E6, HPV 16 E6Δ146-151, E7, and H-Ras V12, as well as in one HPV 16 positive (UPCI-SCC 90) and one HPV negative (UMSCC 84) head and neck squamous cell carcinoma cell line. In previous work, we have shown that UPCI-SCC90 cells have decreased PTPN13 levels compared to UMSCC84 (Spanos et al 2008b). Consistent with our findings in the mouse, HTE cells expressing E6Δ146-151 and E7 showed decreased phospho-Erk levels compared to cells expressing wild type E6 and E7. The addition of H-Ras V12 further increased phospho-Erk levels. Moreover, UPCI-SCC90 cells show increased phospho-Erk compared to HTE primary cells. Interestingly, the HPV negative UMSCC84 cell line also showed elevated phospho-Erk, although to a lesser degree than UPCI-SCC90.
These findings in human and mouse cells suggest that induced loss of PTPN13, in conjunction with Ras activation, permits potentiation of signaling through the MAP kinase pathway.

PTPN13 attenuates MEK and Erk phosphorylation

To further study the role PTPN13 plays in regulating growth factor receptor mediated signaling, we performed co-transfection experiments in HEK 293 cells and examined MAP kinase cascade signaling in response to PTPN13 status, postulating that if loss permits enhanced signaling then replacement of PTPN13 should suppress signaling. Cells were split to equal, subconfluent densities and co-transfected as indicated with mammalian expression vectors for human PTPN13, ErbB2 (wild type), EGFR (ErbB1), H-Ras V12, or an empty vector control. Following 24 h of serum starvation, protein lysates were collected and vector expression confirmed by immunoblot. As expected, transfection with ErbB2, H-Ras V12, or EGFR, as compared to empty vector control conditions, increased levels of phospho-Erk, which was inhibited by co-expression of PTPN13 (Figure 3). We observed a similar change in phospho-MEK, the immediate upstream activator of Erk. Interestingly, phospho-AKT levels were not increased by transfection with Ras/ErbB2/EGFR or inhibited by PTPN13 (Figure 3). Since a previous report showed that PTPN13 may dephosphorylate ErbB2 directly (Zhu et al 2008), we also examined levels of phospho-ErbB2 (tyr 1248) and observed a small effect in the presence of wild-type PTPN13 (Figure 4). These findings show that under serum starved conditions, PTPN13 consistently attenuates MEK 1/2 and Erk 1/2 phosphorylation in response to activation by various oncogenes common in cervical and/or head and neck cancers.

The phosphatase activity of PTPN13 is required to attenuate MAP kinase activation

Apart from its phosphatase domain, PTPN13 has many domains likely important for localization, binding potential substrates, or possibly regulatory roles (Erdmann 2003). The functional significance of the phosphatase domain in preventing transformation has not been demonstrated. In fact, PTPN13 phosphatase domain null mice show only minor phenotypic defects (Nakahira et al 2007, Wansink et al 2004). Contrary to this transgenic mouse data, a recent screen of colorectal cancers suggests that loss of phosphatase function may promote cancer, because phosphatase domain mutations frequently co-occurred with mutations in the Ras/MAP kinase cascade (Wang et al 2004). To determine if PTPN13 phosphatase activity regulates MAP kinase signaling, we performed site directed mutagenesis to create a phosphatase dead PTPN13 construct for use in our co-transfection studies. An identical mutation resulting in a phosphatase domain cysteine to serine substitution (PTPN13 C2389S) has been shown to abolish PTPN13 catalytic activity (Dromard et al 2007). We repeated the HEK 293 transfection assay with expression vectors for EGFR/ErbB2 and either wild type or C2389S PTPN13 constructs. Immunoblot analysis shows that a functional phosphatase domain of PTPN13 is necessary to inhibit MAP kinase signaling in the presence of ErbB2 or EGFR, as evidenced by increased levels of phospho-Erk and phospho-MEK under PTPN13 C2389S conditions compared to wild type PTPN13 (Figure 4).

Mutations in HPV negative HNSCC's do not attenuate MAP kinase signaling

The above findings support the hypothesis that loss of PTPN13 phosphatase activity is instrumental in oncogenic signaling.
To examine whether HPV negative human tumors demonstrate alternative ways to abrogate PTPN13 enzymatic function, we sequenced PTPN13 exons 44-48 (encoding the PTP domain) from 10 HPV negative, human HNSCC specimens. Two patients showed a total of 4 mutations in the coding sequence (one patient had three). This percentage (20%) compares well with that observed in colorectal cancers (9%) (Wang et al 2004). The mutations were cross-matched with known SNP's to rule out common polymorphisms and aligned with the protein sequence to determine in which domains the mutations occurred. HNSCC 132 contained one mutation in exon 45 (6874C>T P2276S) and two mutations in exon 46 (7271C>T P2408L and 7316T>C L2423P). HNSCC 134 contained one mutation (c.7205C>T T2386I) in exon 46. The mutations occurring in exon 46 were either within, or close to, important phosphatase domain functional groups (the phosphatase domain binding loop or WPD loop). Therefore, we chose to determine if these mutations affect PTPN13 phosphatase activity and prevent MAP kinase potentiation in conjunction with wild type, human ErbB2. We performed site directed mutagenesis to create the corresponding mutations in our PTPN13 expression construct and assessed the PTP activity of the immunopurified P2408L and L2423P mutant proteins from HNSCC 132 using DifMUP as an artificial substrate. Results indicated that, as compared to wild type PTPN13, both mutants from HNSCC 132 behaved like the catalytically dead C2389S PTPN13 version (Supp. Figure 1). We also repeated the transfection assay in HEK 293 cells, and two of the three mutants, one from each HNSCC, did not reduce levels of phospho-Erk 1/2 (Figure 5b). The results of the phospho-Erk 1/2 inhibition assay do not fully correspond with the DifMUP phosphatase activity, because the P2408L mutant shows no activity in that assay yet is still able to inhibit Erk phosphorylation. An examination of phospho-Erk levels, by immunohistochemistry, in the human cancers from which the mutations were isolated showed focal areas of dense staining compared to little or none in normal tonsillar epithelium (Figure 5c). Many of the HNSCC samples without phosphatase domain mutations also showed increased levels of phospho-Erk (data not shown), indicating there may be multiple mechanisms through which MAP kinase activity is increased in HNSCC's. Our findings suggest PTPN13 phosphatase domain mutations are one mechanism through which increased MAP kinase activity occurs in HNSCC's.

MEK inhibition by U0126 abrogates Erk phosphorylation and anchorage independent growth in tumorigenic cell lines that have lost PTPN13

Impaired PTPN13 phosphatase activity in HNSCC's may alter signaling cascades other than the MAP kinase pathway. To determine the importance of MAP kinase signaling for the tumorigenic process and assess the possible therapeutic potential of our findings, we examined levels of phospho-Erk 1/2 in response to the MEK inhibitor U0126 in two tumorigenic cell lines: a cell line derived from a mouse tumor generated by injection with MTE E6E7/ErbB2 V660E cells, and the MTE shPTPN13/ErbB2 V660E cells characterized above. For both cell lines, treatment with increasing doses of U0126 correlated with decreasing levels of phospho-Erk (Figure 6a). To determine the physiological significance of decreased Erk phosphorylation, we examined anchorage independent growth (AIG) colony forming efficiencies during U0126 treatment.
Anchorage independent growth correlates with invasive potential in vivo (Reddig and Juliano 2005, Zhan et al 2004), and we have previously shown a correlation between PTPN13 loss and AIG in both mouse and human cell lines (Spanos et al 2008a, Spanos et al 2008b). In the cell lines examined, colony forming efficiencies decreased in a dose dependent response to U0126 that correlates with the decrease in phospho-Erk (Figure 6b). These studies provide evidence that HPV 16 related cancers are dependent on enhanced MAP kinase signaling during invasive growth and that pharmacological inhibition at the level of MEK may serve as a useful therapy.

DISCUSSION

The conversion of a normal epithelial cell to a malignant one requires multiple cellular alterations (Hahn and Weinberg 2002). Our findings show that loss of enzymatic activity of a key phosphatase (PTPN13) synergizes with aberrant MAP kinase signaling resulting from hyperactivity of epithelial oncogenes like ErbB2 and H-Ras to allow invasive growth in vivo. Importantly, this synergy in potentiating malignant growth is relevant for both HPV positive and HPV negative tumors.

Our data strongly suggest that HPV 16 mediated PTPN13 loss synergizes with ErbB2 activity during invasive growth in HPV related head and neck cancers, a finding which expands on several previously published reports. ErbB2 and H-Ras have previously been known to synergize with E6 and E7 to allow invasive growth (Al Moustafa et al 2004, Schreiber et al 2004). ErbB2/EGFR over-expression also occurs in a large proportion of cervical cancers (Perez-Regadera et al 2009), the majority of which are HPV positive. We show that the HPV 16 E6 PDZBM degrades PTPN13 and that this induced loss is needed to synergize with ErbB2. The site(s) where PTPN13 acts to exert this control on MAP kinase signaling is still not clear. Using an independent experimental system, Zhu et al have shown that PTPN13 physically associates with and decreases phosphorylation of ErbB2 at phosphotyrosine 1248, and that PTPN13 loss enhances carcinogenic signaling downstream of ErbB2 (Zhu et al 2008). Our findings are largely in agreement, in that we also show an effect on ErbB2 mediated signaling at the level of MEK and Erk in response to PTPN13, and we also showed a change in ErbB2 phosphorylation at tyrosine 1248 that correlated with PTPN13 status. The fact that both ErbB2 and H-RasV12 were potentiated by PTPN13 loss, and that PTPN13 inhibited MAP kinase signaling downstream of multiple oncogenes (ErbB2, EGFR, H-RasV12), suggests that the phosphatase target that inhibits MAP kinase signaling may not be limited to ErbB2 tyrosine 1248. Taken together, the above findings suggest a mechanism of how PTPN13 loss of function synergizes with MAP kinase activating oncogenes to promote invasive growth across both HPV positive and negative epithelial cancers. Our identification of non-functional PTPN13 mutants that allow aberrant MAP kinase signaling in HPV negative HNSCC specimens is consistent with studies correlating PTPN13 phosphatase domain mutations in colorectal cancers with alterations in the Ras/MAP kinase cascade (Wang et al 2004). Also, mounting evidence points to PTPN13 as a potential tumor suppressor in a broad range of other epithelial cancers, including breast, cervical, gastric, and hepatocellular carcinomas (Revillion et al 2009, Wang et al 2004, Yeh et al 2006, Ying et al 2006). There is still some question as to what domain of PTPN13 is required for MAP kinase inhibition.
Our data did not show a strict correlation between the phosphatase activity of a mutant and its ability to inhibit Erk phosphorylation. Mutation P2408L showed very little phosphatase activity by DifMUP assay, yet when transfected with ErbB2 it is able to inhibit MAP kinase phosphorylation the same as wild-type. This finding could be explained by a difference in enzymatic specificity for the native substrate versus the artificial DifMUP assay. An alternative explanation is that other domains of the phosphatase are important in forming a possible complex to inhibit MAP kinase signaling. The mechanism of inhibition and the binding partners involved will be the focus of future studies.

While we have shown a specific synergy with H-RasV12 and ErbB2 in vivo, our findings demonstrate that PTPN13 loss likewise alters signaling of EGFR, another common epithelial oncogene. Other receptor tyrosine kinases or pathways that activate MAP kinase signaling may also synergize with PTPN13 disruption during invasive growth. Besides the three examined in this study, a number of MAP kinase activating oncogenes/mutations have been reported in cancer types known to have decreased or mutated PTPN13. Examples include ErbB3 in the head and neck (Ford and Grandis 2003), RAF in colorectal (Barault et al 2008, Wang et al 2004), and insulin like growth factor receptor (IGFR) and RAF in hepatocellular carcinomas (Avila et al 2006, Hopfner et al 2008). While our findings provide some insight regarding the role of PTPN13 in MAP kinase signaling, many questions remain unanswered, such as what the target of the phosphatase is and what the roles of the non-phosphatase domains are (Abaan and Toretsky 2008, Erdmann 2003). Future work will be required to better understand the complete mechanism of suppression.

From a clinical perspective, these findings provide a rationale for the development of better targeted therapies in cancers that have lost PTPN13 function. HPV related cancers, in particular, offer a distinct opportunity for such therapies, since the viral oncogenes can be rapidly identified in clinical tumor samples and are associated with consistent changes in cell signaling pathways. It would also be possible to test which tumors have synergizing oncogenes that potentiate MAPK signaling. We have shown that Erk phosphorylation and anchorage independent growth in cells lacking PTPN13 can be inhibited at the level of MEK with U0126; however, U0126 has been shown to be ineffective in vivo, likely due to poor solubility/bioavailability (Wang et al 2007). Clinical trials have been undertaken for MEK inhibitors that show greatly improved pharmacological properties in vivo compared to U0126 (Adjei et al 2008, Lorusso et al 2005, Rinehart et al 2004). These new pharmacological MEK inhibitors may have utility in improving outcomes in HPV related cancers.

MATERIALS AND METHODS

In Vivo Invasive Growth

Experiments were carried out as previously described (Hoover et al 2007, Spanos et al 2008b). Examination of these tumors by a pathologist demonstrated invasive growth into muscle and capillary/lymphatic growth, as was seen in our past model (Spanos et al., 2008). In brief, MTE cells were harvested from tissue culture plates using trypsin (0.25%) and resuspended at 10^3 cells/μL in E-Media. Using an 18-gauge needle, 100 μL of cell suspension was injected subcutaneously in the right hind leg of each C57BL/6 mouse.
Leg circumference was estimated from weekly caliper measurements using the equation Circumference = π(3(a + b) − [(a + 3b) × (3a + b)]^(1/2)), where a is the longest leg dimension and b the shortest. Mice were humanely euthanized once tumors reached 2 cm in greatest dimension, or the animal became emaciated or had functional leg impairment. An 80-day survival study of 20 mice in four equally sized groups was analyzed using the log rank test with Kaplan-Meier estimated survival functions (Figure 1c). The one death in the E6Δ group was a non-tumor related anesthesia death; this is indicated in the survival graph.

Transfection Assay

HEK 293 cells were routinely cultured in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% Fetal Bovine Serum and 1% penicillin/streptomycin. Twenty-four h prior to transfection, cells were split and plated at 6×10⁶ cells/10 cm plate. Co-transfections were performed per manufacturer instructions using Polyfect® (Qiagen 301105) and 4 μg of each indicated plasmid. Twenty-four hours post-transfection, plates were washed twice with phosphate buffered saline and serum starved for 16-24 h in DMEM before protein lysis.

Anchorage Independent Growth Assay and MEK Inhibitor (U0126) Studies

U0126 (Cell Signaling #9903) was solubilized in dimethylsulfoxide at 8 mg/mL before being added to DMEM at indicated concentrations. To create the MTE E6E7 ErbB2 V660EA tumor cell line, tumor tissue was dissected from the leg of a freshly euthanized, male C57BL/6 mouse injected three weeks prior with MTE E6E7 ErbB2 V660E cells. Tissue was finely minced using a scalpel and subsequently treated with Dispase II. Cells were cultured using previously described methods (Hoover et al 2007). Indicated amounts of U0126 were applied to the cells in EMEM for 48 h prior to lysis and subsequent immunoblots. Anchorage independent growth assays were performed by applying a 100 μL base layer of one percent noble agar to 12 mm Transwells® with 0.4 μm polyester membrane inserts (Corning #3460) and allowing it to solidify at 4 °C. Cells were suspended at 2×10³ cells/mL in a mixture of 0.33% noble agar and E-Media at 37 °C. 500 μL of this suspension was added in triplicate to Transwells® and allowed to harden 5 min at 4 °C. Transwells® were added to sterile, twelve-well tissue culture plates containing enough growth media to cover the 1% agar layer (approximately 1.5 mL). Media was changed every other day and supplemented with U0126 at indicated concentrations. Cells were grown in suspension for 17 days. Colonies reaching diameters of 100 μm or greater were counted. AIG percent colony forming efficiency was determined using the equation: (AIG colonies/cells seeded) × 100. ANOVA was used to calculate significance.

Patient Samples

Formalin fixed paraffin embedded oropharyngeal squamous cell carcinoma cases and tonsil tissues from tonsillectomies were obtained from the Department of Pathology archives at University of Iowa Hospitals and Clinics (UIHC). This study was approved by the Institutional Review Board of the University of Iowa. To enrich the amount of DNA collected from tumor cells for PCR and sequencing, grossly visible tumor was isolated from surrounding normal tissue by macrodissection.

PTPN13 sequencing

Exons 44-48 of PTPN13 were amplified from genomic DNA using PCR. The conditions for the PCR reactions are described in the supplemental methods. PTPN13 mutations were analyzed for significance using several methods.
Each mutation was compared to a list of known PTPN13 SNPs using the NCBI and Ensembl databases (www.ncbi.nlm.nih.gov, www.ensembl.org). Sequence conservation among 10 amniotic vertebrates was analyzed using Ensembl's GeneSeqAlignView (www.ensembl.org). PTP amino acid conservation was analyzed using an alignment of 5 PTPs (Villa et al 2005). BLOSUM scores were calculated using a BLOSUM matrix. Each mutation was analyzed for functional consequence using SIFT technology (blocks.fhcrc.org). The location of each mutation within the phosphatase domain was identified using available crystal structure data (Villa et al 2005).

Figure 1. ErbB2 synergizes with PTPN13 loss during invasive growth in vivo. (a) MTE cell lines stably expressing HPV 16 E6, E6Δ 146-151, or PTPN13 shRNA (shPTPN13) were retrovirally transduced to express activated mouse ErbB2 (ErbB2 V660E). Functional protein expression, as compared to non-transduced cell lines, was confirmed by immunoblot with antibodies to total and phosphorylated ErbB2. Actin immunostaining served as loading control. (b,c) One million cells from each line were injected subcutaneously into the right hind legs of C57BL/6 mice (n=5 per group). Mouse leg circumferences and tumor growth rates were calculated based on weekly caliper measurements of leg dimensions, and mice were euthanized once tumors reached 20 mm in greatest dimension. The three treatment groups were compared to E6Δ using the 2-sided Dunnett adjustment for multiple comparisons developed by Hsu (Hsu 1992). The shPTPN13 and E6 groups were significantly different from the control group (p < 0.0001 and p = 0.009, respectively). The median survival times, in the same order of reporting as above, were 18, 44 and 80 days. (d) Representative mouse legs from each group are shown at approximately two weeks post injection.

Figure 2. PTPN13 loss correlates with increased Erk 1/2 phosphorylation. (a,b) Lysates of serum starved MTE cell lines stably expressing HPV 16 E6, E6 and E7, E6Δ 146-151, or PTPN13 shRNA (shPTPN13) as well as H-Ras V12 or ErbB2 V660E were subjected to immunoblot analysis with indicated antibodies. (c) Immunohistochemistry for total and phosphorylated Erk 1/2 was performed on a representative tumor derived from a mouse implanted with MTE shPTPN13 ErbB2 V660E cells. Insets display control immunostainings of parallel sections to reveal antibody specificity. (d) HNSCC cell lines (SCC 90 and SCC 84) as well as human tonsil epithelial (HTE) primary cells or those stably expressing HPV 16 E6, E7, E6Δ 146-151, and H-Ras V12 as indicated were subject to immunoblot analysis with antibody to phosphorylated Erk 1/2. Immunostaining for GAPDH was performed to monitor equal loading of protein samples.

DNA sequencing of HPV negative, paraffin embedded HNSCC samples revealed phosphatase domain mutations in two out of ten patients. (a) HEK 293 cells were cotransfected with ErbB2 and PTPN13 constructs containing phosphatase domain point mutations corresponding to those found in the two HNSCC samples. Cells were serum starved following transfection and immunoblot was performed using indicated antibodies. (b) Immunohistochemistry of these two HNSCC samples and of normal tonsil tissue was performed to assess PTPN13 expression and the levels of total and phosphorylated Erk 1/2. Insets display control immunostainings of parallel sections to reveal antibody specificity.
Figure 6. MEK inhibition by U0126 prevents anchorage independent growth in tumorigenic cell lines. (a) Cell lines derived from mice injected with MTE E6E7 ErbB2 V660E cells or MTE shPTPN13 ErbB2 V660E cells were treated for 4 h with 0, 1, 5, or 20 μM U0126 in serum free medium. Levels of phosphorylated Erk 1/2 in the respective lysates were assessed by western blot using GAPDH immunostaining as a loading control. (b) The same cell lines were plated in triplicate, suspended in semisolid agar in 12 mm transwells, and fed growth media with indicated concentrations of U0126 for approximately two weeks. Average colony forming efficiencies for each condition are indicated. Significant differences compared to DMSO controls were obtained as indicated (p values: * = 0.002 and ** < 0.0001).
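Both quantitative formulas used in the methods above (the caliper-based leg-circumference estimate and the AIG percent colony forming efficiency) are simple enough to encode directly. The following is a minimal Python sketch; the input values are made-up placeholders, not data from this study.

```python
import math

def leg_circumference(a: float, b: float) -> float:
    """Ellipse-type approximation used in the in vivo methods:
    C = pi * (3(a + b) - sqrt((a + 3b) * (3a + b))),
    where a is the longest and b the shortest leg dimension."""
    return math.pi * (3 * (a + b) - math.sqrt((a + 3 * b) * (3 * a + b)))

def colony_forming_efficiency(colonies: int, cells_seeded: int) -> float:
    """AIG percent colony forming efficiency: (colonies / cells seeded) * 100."""
    return colonies / cells_seeded * 100

# Illustrative caliper values (mm) and colony counts, not data from the study.
print(f"circumference ~ {leg_circumference(12.0, 9.0):.1f} mm")
print(f"CFE = {colony_forming_efficiency(25, 1000):.1f}%")
```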
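The BLOSUM-based severity scoring described in the sequencing-analysis methods can be approximated with Biopython, assuming the BLOSUM62 matrix. This is an illustrative sketch rather than the authors' actual pipeline, and the second substitution in the list is a hypothetical placeholder (only P2408L comes from the text).

```python
# Requires Biopython (pip install biopython).
from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")

mutations = [("P", "L", 2408),  # P2408L, reported in the text
             ("G", "R", 2372)]  # hypothetical example, not from the study

for wt, mut, pos in mutations:
    score = float(blosum62[wt, mut])
    # Negative BLOSUM scores mark substitutions that are rarely tolerated in
    # conserved protein families and are more likely to be damaging.
    print(f"{wt}{pos}{mut}: BLOSUM62 score = {score:g}")
```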
Almost finiteness and homology of certain non-free actions

We show that Cantor minimal $\mathbb{Z}\rtimes\mathbb{Z}_2$-systems and essentially free amenable odometers are almost finite. We also compute the homology groups of Cantor minimal $\mathbb{Z}\rtimes\mathbb{Z}_2$-systems and show that the associated transformation groupoids satisfy the HK conjecture if and only if the action is free.

Introduction

The property of almost finiteness for ample groupoids was introduced by Matui and applied to questions in groupoid homology in [13]. Furthermore, in [2], [10], [11] and [19], this property was applied to problems in classification of C*-algebras. Kerr and Szabó in [11] and Suzuki in [19] observed that it is a consequence of work of Downarowicz and Zhang [5] that any free action of a group with subexponential growth on the Cantor set is almost finite. Moreover, free odometers arising from sequences of finite index normal subgroups of an amenable group were shown to be almost finite by Kerr in [10]. On the other hand, there exist interesting examples of non-free odometers. For example, in [18], certain non-free Z ⋊ Z_2-odometers were shown to be counterexamples to the HK conjecture, which is a conjecture posed by Matui in [14] that relates the homology groups of an ample groupoid G with the K-theory of its reduced C*-algebra. In Section 2 of this work, we give two different proofs that these Z ⋊ Z_2-odometers are almost finite. First, we prove that an amenable odometer is essentially free if and only if it is almost finite. The forward implication uses the fact proven by Kar et al ([8]) that for such odometers the acting group admits a Følner sequence consisting of complete sets of coset representatives. In the case of the Z ⋊ Z_2-odometers mentioned above, this can be done using a specific Følner sequence for Z ⋊ Z_2 (Example 2.7), which we employ next for showing that any Cantor minimal Z ⋊ Z_2-system is almost finite. In Section 3, we compute the homology groups of Cantor minimal Z ⋊ Z_2-systems and conclude that these systems satisfy the HK conjecture if and only if the action is free. It should come as no surprise that, in determining the homology groups, we are able to mainly follow the ideas introduced by Bratteli et al in [3] and Thomsen in [21] in their computation of the K-theory of the associated crossed products since, although the HK conjecture is now known to be false, it has been verified in many cases (see the work of Proietti and Yamashita [16, Remark 4.7] and the references therein). This work was carried out during the tenure of an ERCIM 'Alain Bensoussan' Fellowship Programme.

Almost finiteness

In this section, we review some terminology about étale groupoids and verify almost finiteness for certain classes of non-principal groupoids.

2.1. Almost finite groupoids. Let G be a Hausdorff étale groupoid with range and source maps denoted by r and s. A bisection is a subset S ⊂ G such that r|_S and s|_S are injective maps. Let G′ := {g ∈ G : r(g) = s(g)} and let G^(0) be the unit space of G. Then G is said to be principal if G′ = G^(0) and effective if the interior of G′ equals G^(0). The orbit of a point x ∈ G^(0) is the set G(x) := r(s^(−1)(x)). We say that G is minimal if the orbit of each x ∈ G^(0) is dense in G^(0). Also, G is said to be ample if its unit space is totally disconnected.

Example 2.1. Let X be a locally compact Hausdorff space and let α : Γ ↷ X be an action of a discrete group Γ on X. Given g ∈ Γ and x ∈ X, we will denote by gx := α(g)(x).
As a space, the transformation groupoid G associated with α is G := Γ × X equipped with the product topology. The product of two elements (h, y), (g, x) ∈ G is defined if and only if y = gx, in which case (h, gx)(g, x) := (hg, x). Inversion is given by (g, x)^(−1) := (g^(−1), gx). The unit space G^(0) is naturally identified with X. Note that G is principal if and only if α is free (gx = x ⇒ g = e). If X is totally disconnected, then G is ample.

Let G be an ample groupoid with compact unit space. A subgroupoid K ⊂ G is said to be elementary if K is compact-open, principal and K^(0) = G^(0). The groupoid G is said to be almost finite if for any compact subset C ⊂ G and ε > 0, there is an elementary subgroupoid K ⊂ G such that, for any x ∈ G^(0), we have that |CKx \ Kx| < ε|Kx|, where Kx := K ∩ s^(−1)(x).

For compact groupoids, the following holds:

Proposition 2.2. Let G be an ample almost finite groupoid. If G is compact, then G is principal.

Proof. Compactness of G implies that there is a finite partition of G into compact-open bisections. Hence, there is M > 0 such that, for each x ∈ G^(0), we have that |s^(−1)(x)| ≤ M. Suppose that G is not principal and take g ∈ G′ \ G^(0) with x := s(g). Then, for any elementary subgroupoid K ⊂ G, we have that g ∉ K, so that, taking C := {g}, one gets g ∈ CKx \ Kx and hence |CKx \ Kx| ≥ 1 > ε|Kx| whenever ε < 1/M. Therefore, G is not almost finite.

2.2. Almost finite actions. Let us recall the characterization of almost finiteness for transformation groupoids presented in [11] and [19]. We will restrict ourselves to actions on totally disconnected spaces. Let Γ be a group acting on a compact Hausdorff totally disconnected space X. A clopen tower is a pair (V, S) consisting of a clopen subset V of X and a finite subset S of Γ such that the sets sV for s ∈ S are pairwise disjoint. The set S is said to be the shape of the tower. A clopen castle is a finite collection of clopen towers {(V_i, S_i)}_{i∈I} such that the sets sV_i, for i ∈ I and s ∈ S_i, are pairwise disjoint. Given ε > 0 and K ⊂ Γ finite, we say that a finite set F ⊂ Γ is (K, ε)-invariant if |KF △ F| < ε|F|.

Let G be the transformation groupoid associated to the action of Γ on X. Then G is almost finite if and only if, given K ⊂ Γ finite and ε > 0, there is a clopen castle which partitions X, and whose shapes are (K, ε)-invariant (see [19, Lemma 5.2] for a proof of this fact). In this case, we say that the action is almost finite.

Recall that an action of a group Γ on a locally compact Hausdorff space X is said to be minimal if the orbit of any x ∈ X is dense in X. Clearly, this is equivalent to the associated transformation groupoid being minimal. It is also equivalent to every open (or closed) Γ-invariant set being trivial. If the action is minimal, then any f ∈ C(X, Z) which is Γ-invariant must be constant. We will say that a homeomorphism ϕ on X is minimal if the Z-action induced by ϕ is minimal. By a Cantor minimal Γ-system, we mean a minimal action of Γ on the Cantor set.

Given Γ a group acting on a compact Hausdorff space X, we denote by M_Γ(X) the set of Γ-invariant probability measures on X. For g ∈ Γ, let Fix g ⊂ X be the set of points fixed by g. If µ ∈ M_Γ(X), we say that the action of Γ on (X, µ) is essentially free if, for any g ∈ Γ \ {e}, we have µ(Fix g) = 0. The action is said to be topologically free if the interior of Fix g is empty for each g ∈ Γ \ {e}. Notice that topological freeness is equivalent to effectiveness of the associated transformation groupoid. Also observe that if Γ ↷ X is a minimal action, then any µ ∈ M_Γ(X) has full support. Therefore, for minimal actions, essential freeness of Γ ↷ (X, µ) implies topological freeness of Γ ↷ X.

Remark 2.4.
If G is a transformation groupoid associated to an action of a group Γ on a compact Hausdorff space X and µ ∈ M(G) ≈ M_Γ(X), then the condition in (1) is equivalent to the action of Γ on (X, µ) being essentially free. It follows from Remark 2.3 that if the action is almost finite, then Γ ↷ (X, µ) is essentially free.

2.3. Odometers. Let Γ be a group and (Γ_i) a decreasing sequence of finite index subgroups of Γ, and let X := lim←− Γ/Γ_i. Then X is homeomorphic to the Cantor set and Γ acts in a minimal way on X by γ(x_i) := (γx_i), for γ ∈ Γ and (x_i) ∈ X. This action is called an odometer. Given j ≥ 1 and γΓ_j ∈ Γ/Γ_j, let U(j, γΓ_j) := {(x_i) ∈ X : x_j = γΓ_j}. Notice that X admits a unique Γ-invariant probability measure. Hence, there is no ambiguity in calling an odometer action essentially free. It is known (see, for example, [7, (1.6)]) that the odometer is essentially free if and only if, for each g ∈ Γ \ {e}, the proportion of cosets in Γ/Γ_i fixed by g tends to zero as i → ∞.

Let us now give a characterization of almost finite odometers. First recall that a Følner sequence (F_n) for Γ is a sequence of finite subsets of Γ such that, for every g ∈ Γ, we have that |gF_n △ F_n|/|F_n| → 0.

Theorem 2.5 ([9]). Let Γ be a countable amenable group and Γ ↷ X := lim←− Γ/Γ_i an odometer. The following conditions are equivalent:
(i) The action is essentially free;
(ii) There is a Følner sequence (F_n) for Γ such that each F_n is a complete set of representatives for Γ/Γ_n;
(iii) The action is almost finite;
(iv) There is a unique tracial state on C(X) ⋊ Γ.

Proof. The equivalence of (i) and (iv) is a consequence of [9, Corollary 2.8]. That (iii) implies (i) is a consequence of Remark 2.4. The implication from (i) to (ii) is the content of [8, Theorem 7]. Let us show then that (ii) implies (iii). Take C ⊂ Γ × X compact and ε > 0. We are going to find an elementary subgroupoid K ⊂ Γ × X such that, for each x ∈ X, |CKx \ Kx| < ε|Kx|. By enlarging C, we may assume that C = D × X for some D ⊂ Γ finite. Take n ∈ N such that F_n is (D, ε)-invariant, and let K := {(γη^(−1), x) : γ, η ∈ F_n, x ∈ U(n, ηΓ_n)}. It is straightforward to check that K is an elementary subgroupoid of Γ × X. Furthermore, given γ ∈ F_n and x ∈ U(n, γΓ_n), we have that |Kx| = |F_n| and |DKx \ Kx| ≤ |DF_n \ F_n| < ε|F_n|. Therefore, Γ × X is almost finite.

A few remarks are in order about the result above: We do not know whether, for a countable amenable group Γ, topological freeness of a Γ-odometer implies essential freeness. If one drops the amenability assumption, there are several counterexamples in the literature (see, for example, [1, Theorem 1]).

Example 2.7. Recall that the infinite dihedral group is the semidirect product Z ⋊ Z_2 associated to the action of Z_2 on Z by multiplication by −1. Let (n_i) be a strictly increasing sequence of natural numbers such that n_i | n_{i+1}, for every i ∈ N. Define Γ := Z ⋊ Z_2 and, for i ≥ 1, Γ_i := (n_i Z) ⋊ Z_2. By [18, Lemma 3.2] and the fact that any element of the form (n, 1) ∈ Z ⋊ Z_2 is conjugate to either (0, 1) or (1, 1), we obtain that any g ∈ Γ \ {e} fixes at most finitely many points in lim←− Γ/Γ_i. Hence, this odometer is essentially free. Let us describe a Følner sequence for Z ⋊ Z_2 satisfying condition (ii) in Theorem 2.5. Given m ∈ N, one can choose a finite set F_m ⊂ Z ⋊ Z_2 such that (F_m)_{m∈N} is a Følner sequence for Z ⋊ Z_2 and each F_m is a complete set of representatives for (Z ⋊ Z_2)/((mZ) ⋊ Z_2).

2.4. Almost finiteness of Cantor minimal Z ⋊ Z_2-systems. Notice that any action α of Z ⋊ Z_2 on a set X is given by a pair of bijections (ϕ, σ) on X such that σ² = Id_X and σϕσ = ϕ^(−1), so that α_(n,i) = ϕ^n σ^i, for (n, i) ∈ Z ⋊ Z_2.

Proposition 2.8. Let α := (ϕ, σ) be a minimal action of Z ⋊ Z_2 on the Cantor set X. The following holds:
(i) The Z-action induced by ϕ is free and α is topologically free. If α is not free, then either σ or ϕσ has at least one fixed point.
(ii) If the Z-action induced by ϕ is not minimal, then there exists a clopen set Y such that Y ∩ σ(Y) = ∅, Y ∪ σ(Y) = X, ϕ(Y) = Y and ϕ|_Y is minimal. In particular, α is free.

Proof. (i) Suppose that the Z-action induced by ϕ is not free. Then there are n ∈ Z \ {0} and x ∈ X such that ϕ^n(x) = x; then also ϕ^n σ(x) = σϕ^(−n)(x) = σ(x). Hence, the α-orbit of x is finite, which contradicts the fact that α is minimal. Therefore, the Z-action induced by ϕ is free. Suppose that α is not topologically free. Then there is a non-empty open set U ⊂ X and n ∈ Z such that ϕ^n σ fixes U pointwise. Fix x ∈ U. By minimality of α, there is (m, i) ∈ Z ⋊ Z_2 such that ϕ^m σ^i(x) ∈ U and ϕ^m σ^i(x) ≠ x. Furthermore, by multiplying (m, i) on the left by (n, 1), we can assume that i = 0. Then y := ϕ^m(x) ∈ U satisfies σ(y) = ϕ^(−n)(y) and σ(x) = ϕ^(−n)(x), from which ϕ^(2m)(x) = x; by freeness of the Z-action, m = 0 and y = x, a contradiction. If α is not free, then there is n ∈ Z and x ∈ X such that ϕ^n σ(x) = x. Since any element of the form (n, 1) ∈ Z ⋊ Z_2 is conjugate to (0, 1) or (1, 1), we conclude that either σ or ϕσ has at least one fixed point.

The following lemma is a slight modification of [3, Lemma 1.4], and we include the proof for the sake of completeness.

Lemma 2.9. Let (ϕ, σ) be a minimal action of Z ⋊ Z_2 on the Cantor set X and Y ⊂ X a non-empty clopen set such that σ(Y) = Y. Then there exist K ∈ N, natural numbers J_1 < J_2 < · · · < J_K and a partition of Y into clopen sets Y_1, . . . , Y_K such that the sets ϕ^k(Y_i), for i = 1, . . . , K and k = 0, . . . , J_i − 1, form a partition of X.

Proof. Define λ(y) := min{n > 0 : ϕ^n(y) ∈ Y}, for y ∈ Y. From Proposition 2.8, it follows that the map λ is well-defined, in the sense that for each y ∈ Y there is n > 0 such that ϕ^n(y) ∈ Y. It is easy to check that λ is continuous. Hence, it has a finite range {J_1, . . . , J_K}, where J_1 < J_2 < · · · < J_K. Define Y_i := λ^(−1)(J_i) for each i. Then the sets {ϕ^k(Y_i) : i = 1, . . . , K, k = 0, . . . , J_i − 1} are pairwise disjoint. From Proposition 2.8, it follows that the union of these sets is ϕ- and σ-invariant, hence minimality of the action implies that this is a partition of X.

We can now show that any Cantor minimal Z ⋊ Z_2-system is almost finite. Given y ∈ X, let Z be a clopen neighborhood of y and Y := Z ∪ σ(Z). Then σ(Y) = Y and, given N ∈ N, if we take Z sufficiently small, we can assume that the sets {Y, ϕ(Y), . . . , ϕ^(N−1)(Y)} are disjoint. Then, by Lemma 2.9, we can partition Y into clopen sets Y_1, . . . , Y_K such that the sets ϕ^k(Y_i), for 0 ≤ k ≤ J_i − 1, form a clopen castle which partitions X. Consider the Følner sequence (F_m) introduced in Example 2.7, and notice that each J_i ≥ N. By taking N sufficiently big, we can make the shapes of this castle arbitrarily invariant.

Homology of Cantor minimal Z ⋊ Z_2-systems

In this section, we compute the homology groups of Cantor minimal Z ⋊ Z_2-systems. Given a group Γ and a Γ-module M, we denote by M_Γ the quotient of M by the subgroup generated by elements of the form m − mg, for m ∈ M and g ∈ Γ. Recall that M_Γ is canonically isomorphic to H_0(Γ, M). If i : Λ → Γ is an embedding, we denote the canonical map i_* : H_*(Λ, M) → H_*(Γ, M) by cor. We will use the following result about the homology of free products, whose proof can be found in [20, Theorem 2.3].

Theorem 3.1. Let Γ_1 and Γ_2 be groups and M a (Γ_1 * Γ_2)-module. Then, for n ≥ 2, H_n(Γ_1 * Γ_2, M) ≅ H_n(Γ_1, M) ⊕ H_n(Γ_2, M), and there is an exact sequence

0 → H_1(Γ_1, M) ⊕ H_1(Γ_2, M) → H_1(Γ_1 * Γ_2, M) → M → M_{Γ_1} ⊕ M_{Γ_2} → M_{Γ_1 * Γ_2} → 0.

Given an action of a group Γ on a topological space X, C(X, Z) has a structure of Γ-module given by (fa)(x) := f(ax) for every f ∈ C(X, Z), a ∈ Γ and x ∈ X.

Lemma 3.2. Let a and b be involutive homeomorphisms on the Cantor set X and suppose that the Z-action induced by ab is minimal. Denote by A and B the abelian group C(X, Z) endowed with the Z_2-action given by a and b, respectively. Then (cor, cor) : H_n(Z_2, A) ⊕ H_n(Z_2, B) → H_n(Z_2 * Z_2, C(X, Z)) is an isomorphism for n ≥ 1, and the following sequence, whose first map is (cor, −cor), is exact:

0 → C(X, Z) → A_{Z_2} ⊕ B_{Z_2} → C(X, Z)_{Z_2 * Z_2} → 0.

Proof. By Theorem 3.1, it suffices to show that the map (cor, −cor) : C(X, Z) → A_{Z_2} ⊕ B_{Z_2} is injective. Take f ∈ C(X, Z) such that (cor, −cor)(f) = 0.
This implies that there exist g, h ∈ C(X, Z) such that f = g − ga = h − hb. In particular, fa = fb = −f. Therefore, f(ab) = f. Since the Z-action induced by ab is minimal, we conclude that f is constant. Finally, as fa = −f, we must have f = 0.

In order to apply Theorem 3.1 and Lemma 3.2 to Cantor minimal Z_2 * Z_2-systems, we need to compute homology groups of the form H_*(Z_2, C(X, Z)).

Lemma 3.3. Let a be an involutive homeomorphism on a compact, Hausdorff, totally disconnected space X. Then, for k ≥ 0, we have H_(2k+1)(Z_2, C(X, Z)) ≅ C(Fix a, Z_2).

Proof. By [22, Theorem 6.2.2], we have that H_(2k+1)(Z_2, C(X, Z)) ≅ {f ∈ C(X, Z) : f = fa}/{f + fa : f ∈ C(X, Z)}. Let E : {f ∈ C(X, Z) : f = fa}/{f + fa : f ∈ C(X, Z)} → C(Fix a, Z_2) be the map given by restriction to Fix a. Clearly, this is a well-defined homomorphism, and we will show that it is bijective. Given F ⊂ Fix a clopen, take for each x ∈ F a clopen set U_x ⊂ X such that x ∈ U_x, U_x ∩ Fix a ⊂ F and a(U_x) = U_x. Then there exist x_1, . . . , x_n ∈ F such that F = ⋃_i (U_{x_i} ∩ Fix a). Let U := ⋃_i U_{x_i}. Notice that a(U) = U. Hence, E([1_U]) = 1_F. Therefore, E is surjective. Let us now show injectivity of E. Take f ∈ C(X, Z) such that f = fa and E([f]) = 0, and we will show that [f] = 0. We claim that we can assume that f|_{Fix a} = 0. Indeed, let U_1, . . . , U_n be a-invariant clopen subsets of X whose union covers Fix a, and such that f is constant on each U_i. By taking differences, we can assume that these sets are disjoint. By summing f with functions of the form 2m_i 1_{U_i}, we get our claim. Assume then that f|_{Fix a} = 0. We have f = Σ_{q∈Z\{0}} q 1_{f^(−1)(q)}. Since the support of f does not intersect Fix a, we can, for each q ∈ Z \ {0}, partition f^(−1)(q) as f^(−1)(q) = A_q ⊔ a(A_q), for some A_q clopen. Hence, [f] = 0.

Lemma 3.4. Let a be an involutive homeomorphism on a compact, Hausdorff, totally disconnected space X, and let G_a := {f ∈ C(X, Z) : fa = f and f(Fix a) ⊂ 2Z}. Then the map ψ : C(X, Z)_{Z_2} → G_a given by ψ([g]) := g + ga is an isomorphism and, for k ≥ 1, H_(2k)(Z_2, C(X, Z)) = 0.

Proof. Clearly, ψ is a well-defined homomorphism, and we will show that it is bijective. Given f ∈ C(X, Z), suppose that ψ([f]) = f + fa = 0. Then f|_{Fix a} = 0. Take A a clopen neighborhood of Fix a such that a(A) = A and f vanishes on A. Then we can partition A^c as A^c = B ⊔ a(B) for some clopen set B. Let g := f 1_B. Then f = g − ga. Hence, ψ is injective. Let us now show surjectivity of ψ. Take f ∈ C(X, Z) such that fa = f and f(Fix a) ⊂ 2Z, and let us show that f ∈ Im ψ. Let U_1, . . . , U_n be a-invariant clopen subsets of X whose union covers Fix a, and such that f is constant on each U_i. By taking differences, we can assume that these sets are disjoint. By summing f with functions of the form 2m_i 1_{U_i}, we may assume that f|_{Fix a} = 0. We have f = Σ_{q∈Z\{0}} q 1_{f^(−1)(q)}. Since the support of f does not intersect Fix a, we can, for each q ∈ Z \ {0}, partition f^(−1)(q) as f^(−1)(q) = A_q ⊔ a(A_q), for some A_q clopen. Let g := Σ_{q∈Z\{0}} q 1_{A_q}. Then ψ([g]) = f. Finally, notice that, by [22, Theorem 6.2.2], we have that, for k ≥ 1,

H_(2k)(Z_2, C(X, Z)) ≅ {f ∈ C(X, Z) : f + fa = 0}/{f − fa : f ∈ C(X, Z)}.   (5)

Since the right-hand side of (5) is equal to ker ψ, the result follows.

Theorem 3.5. Let α := (ϕ, σ) be an action of Z ⋊ Z_2 on the Cantor set X such that the restricted Z-action is minimal. If α is not free, then tr is an isomorphism. If α is free, then ker tr ≅ Z_2 and it is generated by a class determined by clopen sets K and L such that X = K ⊔ σ(K) = L ⊔ ϕσ(L).

Proof. Let f ∈ ker tr. In this case, there is h ∈ C(X, Z) for which h − hϕ = (h − hϕ)σ = hσ − hϕσ. Therefore, h + hϕσ = hσ + hϕ. Composing the right-hand side of this equation with ϕ^(−1), we obtain that h + hϕσ is ϕ-invariant. Since ϕ is minimal, we conclude that there is an integer z such that h + hϕσ = z. Let G_σ and G_ϕσ be as in Lemma 3.4.
By Lemmas 3.2 and 3.4, there is an isomorphism identifying H_0(Z ⋊ Z_2, C(X, Z)) with the corresponding quotient of G_σ ⊕ G_ϕσ, and the description of ker tr follows. Indeed, if there is g ∈ C(X, Z) such that 1 = g + gσ = g + gϕσ, then gσ = gϕσ, which implies that g is ϕ-invariant, hence constant. But this contradicts the fact that 1 = g + gσ.

The next result is essentially a summary of what we have obtained so far.

Theorem 3.6. Let α := (ϕ, σ) be a minimal action of Z ⋊ Z_2 on the Cantor set X.

Proof. (i) The existence of Y is the content of Proposition 2.8. Notice that Shapiro's Lemma [4, Proposition 6.2] then implies that H_*(Z ⋊ Z_2, C(X, Z)) ≅ H_*(Z, C(Y, Z)). Finally, the homology groups of a Cantor minimal Z-system are easy to compute. (ii) Since H_0(Z, C(X, Z)) is torsion-free, it follows from [6, Theorem 24.1] that the map tr in Theorem 3.5 admits a right inverse. The remaining computations of the homology groups in cases (ii) and (iii) are a consequence of Theorem 3.1 and Lemmas 3.2, 3.3 and 3.4. In case (iii), since the Z-action induced by ϕ is free, notice that Fix σ is disjoint from Fix ϕσ.

Let us now apply Theorem 3.6 to an example which had its K-theory computed in [3, Corollary 4.4].

Example 3.7. Fix θ ∈ (0, 1) an irrational number. Let X̃ be the set obtained from R by replacing each t ∈ Z + θZ by two elements {t^−, t^+}, and endow X̃ with the order topology. Notice that there is an action Z ↷ X̃ by translations (α_n(x) = n + x). We let X := X̃/Z. Then X is homeomorphic to the Cantor set and there is a minimal homeomorphism R_θ : X → X given by R_θ(x) = x + θ. Furthermore, there is an involutive homeomorphism σ on X given by σ(x) = −x and σ(t^±) = t^∓. Notice that the only fixed point of σ is 1/2, and the only fixed points of R_θ ∘ σ are (1+θ)/2 and θ/2. It was shown in [17, Theorem 2.1] that H_0(Z, C(X, Z)) ≅ Z^2, and the generators are [1_{[0^+, θ^+)}] and [1_{[θ^+, 1^+)}]. Observe that σ_* acts trivially on these two elements.

Let α be a minimal action of Z ⋊ Z_2 on the Cantor set X. If α is not free, then the associated crossed product C(X) ⋊ Z ⋊ Z_2 is AF [3, Theorem 3.5], hence K_1(C(X) ⋊ Z ⋊ Z_2) = 0. On the other hand, it follows from Proposition 2.8 and Theorem 3.6 that H_(2n+1)(Z ⋊ Z_2, C(X, Z)) ≠ 0. Therefore, α is a counterexample to the HK conjecture. If α is free, it follows from [21, Theorems 4.30 and 4.42] and Theorem 3.6 that α satisfies the HK conjecture. Alternatively, this also follows from Theorem 3.6 and [16, Remark 4.7].

Given an effective, minimal, second countable groupoid G with compact unit space homeomorphic to the Cantor set, Matui conjectured in [14] that the index map I : [[G]]_ab → H_1(G) is surjective and that its kernel is a quotient of H_0(G) ⊗ Z_2 under a certain canonical map (AH conjecture). Assuming that G is also minimal and almost finite, Matui proved that the index map is surjective and, if G is also principal, that it satisfies the AH conjecture. We have not been able to verify whether the AH conjecture holds for non-free Cantor minimal Z ⋊ Z_2-systems in general, but note that in [18] it was shown that the Z ⋊ Z_2-odometers from Example 2.7 satisfy it.
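The two-periodic pattern behind Lemmas 3.3 and 3.4 is the classical computation of the homology of a finite cyclic group. The following restatement (our recollection of the standard result, cf. the reference cited as [22] above) makes the quotients appearing in the proofs explicit.

```latex
% Homology of a cyclic group C_m = <s> with coefficients in a C_m-module M,
% written with the norm element N = 1 + s + ... + s^{m-1}.
\[
H_n(C_m; M) \cong
\begin{cases}
M/(s-1)M, & n = 0,\\
M^{C_m}/NM, & n \text{ odd},\\
\{\,x \in M : Nx = 0\,\}/(s-1)M, & n \text{ even},\ n \ge 2.
\end{cases}
\]
% For m = 2 and M = C(X, Z) with the involution a, one has Nf = f + fa, which
% recovers the quotients used in the proofs of Lemmas 3.3 and 3.4.
```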
Possible Long-Term Cardiovascular Effects of COVID-19

Coronavirus Disease 2019 is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and has become a worldwide pandemic. Since 2019, the virus has mutated into multiple variants that have made it harder to eradicate and have increased the rate of infection. This virus can affect the structure and the function of the heart and can lead to cardiovascular symptoms that can have long-lasting effects despite recovery from COVID-19. These symptoms include chest pain, palpitations, fatigue, shortness of breath, rapid heartbeat, arrhythmias, cough and hypotension. These symptoms may persist due to myocardial injury, cardiac inflammation or systemic damage that may have been caused during infection. If these symptoms persist, the patient should visit their cardiologist for diagnosis and a treatment plan for any type of cardiovascular disease that may have developed post-COVID-19.

INTRODUCTION

SARS-CoV-2 is the virus that caused the novel disease COVID-19, which emerged in late 2019. As of October 11, 2021, COVID-19 has affected over 200 countries, with a total of 237,383,711 cases worldwide and a total of 4.8 million fatalities [1]. COVID-19 is spread primarily by saliva droplets, coughing or sneezing from an infected person. Common symptoms include fever, chills, cough, shortness of breath, fatigue, muscle aches, headaches, loss of sense of smell and taste, vomiting and diarrhea [2]. Serious complications of COVID-19 include acute respiratory failure, pneumonia, acute respiratory distress syndrome, acute liver injury, acute cardiac injury, septic shock, disseminated intravascular coagulation, multisystem inflammatory syndrome, chronic fatigue, rhabdomyolysis and myocardial infarction [3].

People with preexisting cardiovascular diseases such as uncontrolled hypertension, arrhythmias, deep venous thrombosis, congestive heart failure, cardiomyopathy and myocarditis have a 4-fold higher risk of death [4]. The renin-angiotensin-aldosterone system may play a role in the pathogenesis of a COVID-19 infection [5]. There have been some concerns that the pandemic has affected the availability of acute cardiovascular care, which may indirectly contribute to excess mortality in affected patients [6].
Coronaviruses are enveloped, single-stranded RNA viruses and can be divided into four genera: α, β, γ, and δ [7]. MERS-CoV (Middle East respiratory syndrome coronavirus) and SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) are both classified under the β coronaviruses [8]. Over time, viruses constantly change through mutations, and new variants are expected. These variants are important to identify since they differ in pathogenicity, infectivity, transmissibility and antigenicity. There are currently 4 different variants identified in the United States: alpha, beta, gamma, and delta [9]. The alpha variant, from SARS-CoV-2 B.1.1.7, was first identified in South Eastern England in September 2020. This variant is known to be 1.5 times more transmissible, and the risk of death is 1.6 times higher than with previous variants [10]. The beta variant, from SARS-CoV-2 B.1.351, first identified in South Africa, has the ability to re-infect people who have previously been infected with COVID-19 and has been shown to be resistant to a few vaccines [11]. The gamma variant, produced from SARS-CoV-2 P.1 and first identified in Brazil, is known to spread faster than other variants, and specific monoclonal antibody treatments have been shown to be less effective against it. The fourth variant, delta, produced by SARS-CoV-2 B.1.617.2, was first identified in India [12]. As of August 26, 2021, the Centers for Disease Control and Prevention have classified the delta variant as twice as contagious as the other variants; it may also cause more severe illness [13].

With an increase in variants, there is an increase in COVID-19 cases and faster spread of infection, resulting in a strain on healthcare resources. Furthermore, this can also lead to more hospitalizations, ultimately increasing a country's mortality rate [14]. Although some immune responses driven by current vaccines may be less effective against specific variants, vaccinations do offer a degree of protection against most variants. The current COVID-19 vaccinations authorized by the United States Food and Drug Administration still offer significant protection against variants and may help prevent serious illness [15].

PATHOPHYSIOLOGY

There are three different pathways via which SARS-CoV-2 can affect the heart: direct myocardial injury, indirect myocardial injury, or hypoxia secondary to respiratory failure. In direct myocardial injury, the spike protein of SARS-CoV-2 fuses to angiotensin-converting enzyme-2 receptors that are present in the lung, heart, ileum, kidney and bladder. Once the virus attaches, this leads to myocardial infection. Angiotensin-converting enzyme-2-dependent myocardial infection leads to decreased angiotensin-converting enzyme-2 expression [16]. The infection causes macrophage infiltration in the heart, leading to myocarditis [17]. In severe cases, myocarditis can lead to blood clots, abnormal heartbeat, heart failure and sudden death [18]. Healthy, young people are at a greater risk of developing fatal heart complications with the delta COVID-19 variant. A study done in the United States focused on COVID-19-associated cardiac problems and found that 14,000 people between the ages of 12 and 17 developed heart inflammation from COVID-19 [19].
Indirect myocardial injury is another mechanism that causes damage to the heart. It can be caused by different mechanisms that activate type 1 and type 2 T-helper cells, which release cytokines. Direct and indirect myocardial injury can lead to impairment of intracellular calcium transport and of signal transduction through β-adrenergic receptors, which can affect myocardial contractile function. Signs of myocardial injury can be seen with elevated cardiac troponin markers, abnormal ECG findings and magnetic resonance-based imaging (Fig. 1) [16, 20].

Hypoxia can occur due to microvascular causes, such as hypercoagulability [16]. Lack of oxygen can lead to increased pulmonary vasoconstriction in an attempt to redistribute pulmonary blood flow from regions of low PO2 to regions of high oxygen availability. Thus, chronic pulmonary vasoconstriction can result in pulmonary hypertension that will increase afterload on the right ventricle, resulting in heart failure [21]. Severe infections can not only cause hypoxia but can also predispose the patient to thrombotic events [22]. One of the major risk factors for acquiring COVID-19 is pre-existing cardiovascular disease [23]. Pre-existing cardiovascular diseases may include arrhythmias, myocarditis, acute coronary syndrome, left ventricular systolic dysfunction, reverse Takotsubo syndrome and heart failure [24-26]. In a study of 416 hospitalized patients, 19.7% had cardiac injury. This study also showed that 40% of confirmed COVID-19 patients who were hospitalized had a history of cardiovascular disease [27]. Patients with an underlying cardiovascular condition have a greater progression of the disease, which can become critical. This progression can happen due to heightened coagulation function, pro-inflammatory effects, increased viscosity during febrile illnesses, and endothelial dysfunction. This can ultimately lead to heart failure [28, 29]. In a study of 191 hospitalized patients in Wuhan, China, 23% of infected COVID-19 patients died of heart failure [30].

Notable mutations in variants have altered the biochemistry of the spike proteins present in the SARS-CoV-2 virus, affecting the transmission rate of the virus. This has been seen in the alpha variant, which presents a deletion of two amino acids, H69/V70. This deletion has made the virus twice as infectious [31, 32]. Another mutation, called N501, present in alpha, gamma, and beta, has made the virus more infective [33]. These variants can cause severe disease, evade diagnostic tests or resist antiviral treatment [34].

Current vaccines are effective in providing immunological protection against the SARS-CoV-2 virus. These vaccines were designed for the initial strains and are still recommended for new mutants even if their effectiveness against the mutants is lowered [35]. Booster vaccines are recommended after a person is vaccinated to provide additional protection against mutated forms of the virus [36]. In rare instances, myocarditis and pericarditis have been associated with COVID-19 vaccinations. The Centers for Disease Control and Prevention have reported 4.8 cases out of 1 million, mostly young males, with myocarditis after the 2nd vaccination. On other rare occasions, pericarditis can be seen in older patients who have received vaccines [37].

SIGNS AND SYMPTOMS OF CARDIOVASCULAR INJURY

Symptoms from myocardial injury include cough, fever, myalgia, headache, dyspnea, palpitations, chest pain, hypotension, cardiogenic shock, and heart attack (Fig. 1) [38, 39].
Cardiovascular injury may be asymptomatic in patients, so it is essential to check for cardiac troponin elevation, asymptomatic cardiac arrhythmias, N-terminal pro-brain natriuretic peptide levels and cardiac imaging (Fig. 1) [40]. Coronavirus can increase the risk of ST-segment elevation myocardial infarction (STEMI) [41]. COVID-positive patients have a 36% risk of a primary outcome of in-hospital death, stroke, recurrent myocardial infarction, or repeat revascularization. COVID-positive patients with STEMI have an increased risk of a high thrombus burden. ST elevation can occur due to arrhythmias and hypoxia [42, 43].

In a Spanish study, 139 healthcare workers who were diagnosed with COVID-19 underwent an ECG study 10 weeks after diagnosis. It was noted that 40% of the cases had pericarditis and 11% had myocarditis, and some participants presented with some degree of pericarditis coexisting with myocardial inflammation (Fig. 1). Some of these patients will then be at risk for subsequent arrhythmias. In a study by the Centers for Disease Control and Prevention in July 2020, a third of the people tested claimed that they had not returned to their normal state of health two to three weeks after testing positive [44]. There is also growing evidence of an association between post-COVID-19 patients and arterial thrombotic events (Fig. 1). There is evidence that COVID-19 can lead to increased viscosity of blood, which can lead to venous thromboembolic events. Symptoms that these patients can present with are swelling, leg pain, chest pain, numbness or weakness [45]. Post-COVID-19 syndrome may present with symptoms such as fatigue, shortness of breath, cough, joint pain, chest pain, cognitive difficulties, difficulty concentrating, depression, muscle pain, headache, rapid heartbeat, and intermittent fever (Fig. 1). These symptoms are most likely from systemic damage to the organs post-COVID, grief, loss and post-traumatic stress disorder after treatment in the intensive care unit [46]. There is still ongoing research on long-term damage to the lungs, heart, immune system, brain and other organs [44].

TREATMENT

Patients with persisting symptoms post-COVID-19, such as chest pain, fatigue, dyspnea, and heart palpitations, should visit their cardiologist four weeks after initial diagnosis to be assessed for any cardiovascular complications. Testing such as ECG, echocardiography, other diagnostic tests, and laboratory tests may be required to check for cardiac biomarkers and to assess cardiac function (Fig. 1) [47, 48]. Afterwards, the cardiologist will come up with a personalized care plan and introduce cardiac treatment [49]. Patients diagnosed with pericarditis may be prescribed corticosteroids or colchicine to treat the inflammation. Additionally, they may benefit from over-the-counter medications such as Advil or Motrin to relieve the pain [50]. For people diagnosed with myocarditis, angiotensin-converting enzyme inhibitors/angiotensin receptor blockers may be prescribed to lower blood pressure, beta blockers to improve arrhythmias and remodeling, diuretics to decrease fluid congestion, and corticosteroids to reduce inflammation [51].

[Fig. 1: Long-term cardiovascular complications such as pericarditis, myocarditis, arterial thrombotic events and post-COVID syndrome may develop after a person recovers from COVID-19. After initial recovery, cardiovascular symptoms may develop, in which case a patient should visit a cardiologist for any abnormalities in blood tests, measurements of cardiac enzymes, BNP, lactic acid and medical tests [38, 39, 44-48].]

Treatment for arterial thrombosis may include embolectomy, thrombolytic injection, angioplasty, or coronary artery bypass graft. Medical therapy may include statins to lower cholesterol, drugs to reduce blood pressure such as angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, and anticoagulants and antiplatelets to reduce blood clotting [52]. There is currently ongoing research to treat the overall symptoms of people who suffer from post-COVID-19 syndrome, one candidate being a new drug called Leronlimab. This drug is currently being tested in new trials targeting patients who still have symptoms after being diagnosed with COVID-19. It is a double-blinded trial that has been approved by the Food and Drug Administration. There is still insufficient evidence to suggest its efficacy for this syndrome [53].

CONCLUSION

Long-term cardiovascular effects of COVID-19 include pericarditis, myocarditis, coexisting pericarditis with myocarditis, arterial thrombosis, and post-COVID-19 syndrome. These conditions leave patients previously diagnosed with COVID-19 with long-term symptoms such as fatigue, breathlessness, and chest pain. The cause of these symptoms may be cardiac myocyte damage, cardiac inflammation, formation of clots, worsening of pre-existing cardiovascular conditions or systemic problems that occurred during the initial COVID-19 infection. There are still many unknown reasons as to why some of these symptoms may persist after treatment. The current management of these symptoms is to visit a cardiologist, undergo cardiac evaluation, and go through a care plan with the cardiologist. Depending on the diagnosis, medications such as anticoagulants, blood-pressure-lowering drugs, or corticosteroids may be utilized. There are still new medications that are up for trial, with an emphasis on long-lasting symptoms post-COVID-19.

FUNDING

No funding was received for this paper from any government agency or institution.
Pharyngeal Reconstruction Methods to Reduce the Risk of Pharyngocutaneous Fistula After Primary Total Laryngectomy: A Scoping Review

Introduction: The most common early postoperative complication after total laryngectomy (TL) is pharyngocutaneous fistula (PCF). Rates of PCF are higher in patients who undergo salvage TL compared with primary TL. Published meta-analyses include heterogeneous studies, making the conclusions difficult to interpret. The objectives of this scoping review were to explore the reconstructive techniques potentially available for primary TL and to clarify which could be the best technique for each clinical scenario.

Methods: A list of available reconstructive techniques for primary TL was built and the potential comparisons between techniques were identified. A PubMed literature search was performed from inception to August 2022. Only case-control, comparative cohort, or randomized controlled trial (RCT) studies were included.

Results: A meta-analysis of seven original studies showed a PCF risk difference (RD) of 14% (95% CI 8-20%) favoring stapler closure over manual suture. In a meta-analysis of 12 studies, we could not find statistically significant differences in PCF risk between primary vertical suture and T-shaped suture. Evidence for other pharyngeal closure alternatives is scarce.

Conclusion: We could not identify differences in the rate of PCF between continuous and T-shape suture configurations. Stapler closure seems to be followed by a lower rate of PCF than manual suture in those patients that are good candidates for this technique.

Key Summary Points

The most common early postoperative complication after total laryngectomy (TL) is pharyngocutaneous fistula (PCF), which increases length of stay and costs, impacts quality of life, and delays the beginning of adjuvant treatment. We explored the reconstructive techniques potentially available after primary TL, and critically appraised published systematic reviews to clarify which could be the best reconstruction technique for each clinical scenario. In a meta-analysis, we could not find statistically significant differences between vertical suture and T-shaped suture after primary total laryngectomy. There is an important deficit of information to evaluate the effectiveness of other reconstructive options, such as regional pedicled flap vs. free flap, after primary laryngectomy.

INTRODUCTION

The larynx is the second most common site for head and neck squamous cell carcinoma (HNSCC).
Currently, upfront primary total laryngectomy (TL) is reserved only for advanced T4a cases [1], while primary treatment for T3 laryngeal cancer consists of chemoradiotherapy (CRT) or induction chemotherapy followed by radiotherapy, combined with salvage TL in cases of incomplete response. TL, partial laryngectomy, or transoral laser microsurgery is indicated in selected cases [2-4]. Since Theodor Billroth performed the first TL in 1873, the most feared complication has been pharyngocutaneous fistula (PCF) [5]. For more than a century, surgeons have been designing surgical techniques aiming to decrease the frequency of PCF, but even in the best hands, the rate for TL remains close to 10% [6]. PCF is the most common early postoperative complication after TL, especially as a salvage procedure after failure of CRT [7], and increases length of stay and costs, impacts quality of life, and delays the beginning of adjuvant treatment. The indication for TL (primary or salvage) is one of the most relevant predictive factors. Rates of PCF are higher in patients who undergo salvage TL compared with primary TL, and several surgical techniques focused on avoiding PCF have been designed [8]. The current literature reports a number of studies exploring the effectiveness of these techniques, but most of them are case series and case reports without comparisons with standard treatments. Moreover, published meta-analyses have tried to evaluate these interventions by combining case series and comparative studies, and primary and salvage TL, which are methodological factors that increase clinical and statistical heterogeneity and introduce a high risk of bias [9, 10]. All these reasons make the conclusions of systematic reviews difficult to interpret and limit their application in real-world practice. Specifically, for the case of primary TL, the available information is limited and heterogeneous [11, 12]. The objectives of this scoping review were to explore the reconstructive techniques potentially available after primary TL and to critically appraise published systematic reviews to clarify which could be the best reconstruction technique for each clinical scenario.

METHODS

The aim of this study was to answer the following research question: Which are the best reconstructive methods to reduce the risk of PCF after primary TL? We designed a scoping review using the recommendations of the Joanna Briggs Institute (JBI) (https://jbi.global/). Of note, scoping reviews are useful for examining available evidence when a robust systematic review cannot be done [13]. Maneuvers aimed at preventing PCF in patients with exclusive salvage laryngectomy will not be discussed in this manuscript. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.

Definition of Available Reconstructive Alternatives After Primary Total Laryngectomy

We first built an inventory of all available reconstructive techniques for this setting. In the first round, the authors generated a list of alternatives for primary closure after primary TL. In the second round, these options were organized in a diagram to identify the potential comparisons between techniques (e.g., for primary closure, including both manual suturing and stapler closure) that helped to make a focused search for studies.
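As a toy illustration of how the second round above turns the inventory into candidate comparisons, the sketch below enumerates unordered pairs of closure options. The option list is a simplified stand-in for the full diagram shown later in Fig. 2.

```python
from itertools import combinations

# Simplified stand-in for the inventory built in the first round.
closure_options = [
    "manual suture (vertical/horizontal/T-shape; one or more layers)",
    "stapler closure",
    "primary closure with regional pedicled flap reinforcement",
    "free flap (RFFF, ALT, jejunum)",
]

# Second round: every unordered pair is a candidate comparison to search for.
for i, (a, b) in enumerate(combinations(closure_options, 2), start=1):
    print(f"Candidate comparison {i}: {a} vs. {b}")
```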
Literature Search

The search was performed in the PubMed/MEDLINE database using related terms ("larynx", …). In the first step, we searched only for studies that mentioned that a systematic review or meta-analysis was performed (in the title, abstract, or methods section). In the second step, we performed a specific search in the reference section of these systematic reviews. In the third step, we selected all primary references to find studies comparing alternatives and complemented them with the primary database search. We did not consider exclusion based on the year of publication or language. All articles were screened for title and abstract. Two investigators (MPO and AS) reviewed the full texts of selected studies. Divergences in selection were solved by consensus. The flowchart of the study search is shown in Fig. 1. The primary search identified 49 studies. After inclusion and exclusion criteria were applied, 24 remained to be appraised. In this review we only considered studies that included adult patients (over 18 years old) with carcinoma of the larynx who required primary TL, compared two or more techniques, and reported outcomes related to PCF. Therefore, only case-control, comparative cohort, or randomized controlled trial (RCT) studies were included. Data from the studies were collected in an Excel spreadsheet (Microsoft Corp., USA). Institutional review board approval was not necessary as a result of the study design.

Analysis

If three or more primary studies were identified and the authors considered them suitable to be pooled, we performed a meta-analysis using Review Manager (RevMan) version 5.4 (The Cochrane Collaboration, 2020). We selected a random effects analysis because of the expected heterogeneity and used a risk difference (RD) outcome with 95% confidence interval (CI).

RESULTS

First, we identified a list of potential alternatives for pharyngeal reconstruction: manual suture (continuous vertical or horizontal suture or T-shape configuration; single or more than one layer); stapler closure; primary closure with regional pedicled flap reinforcement (pectoral or other regional flaps; in-lay or on-lay); and free flaps (radial forearm free flap [RFFF], anterolateral thigh [ALT] with different techniques such as U-shape or tube-shape, or jejunum). In the second round, potential comparisons for reconstruction after primary TL were identified (Fig. 2). A list of potential comparisons was obtained from this diagram.

Comparison 1: Primary Closure vs. Primary Closure with Flap Reinforcement After Primary Total Laryngectomy

We could not find any meta-analyses about this comparison or any of its modifications (on-lay vs. in-lay).

Comparison 2: Stapler Closure

There are three systematic reviews comparing manual and stapler closure [14-16]. Aires et al. included four studies [16], Lee et al. [15] included seven studies, and Chiesa et al. [14] included eight. A fourth meta-analysis, focused on the evaluation of risk factors for fistula after TL, found that suturing with staplers decreased the risk of PCF [17]. Most primary studies included in these meta-analyses were non-randomized retrospective cohorts and only two were RCTs [18, 19]. Although Galletti et al. [19] report their study as an RCT, it is noteworthy that it includes an unbalanced number of patients between groups and lacks a description of the methodological conditions common to this design. All these systematic reviews concluded that stapler closure was superior to manual closure. Aires et al.
[16] performed a subgroup analysis exploring the differential rate between studies that included only primary TL and those that mixed primary and salvage TL, and did not find statistically significant differences (Table 1).

(a) Studies that included primary total laryngectomy exclusively: Seven studies [18, 20-25] exclusively included patients with primary TL. Three new studies, not included in previous systematic reviews, were identified and included in the present analysis [23-25]. Santaolalla et al. [21] added a third group with the open technique of mechanical suture (i.e., the mechanical closure is done after resecting the larynx, aligning the mucosal edges of the resultant vertical defect), which was not considered in the analysis. Sansa-Perna et al. [23] discriminated between patients with primary and salvage TL, and these data were used independently. Asher et al. [25] combined information on T classification and tumor location, making it impossible to extract specific information. Two of these seven studies found a decrease in the rate of PCF [20, 21]. A meta-analysis of them showed an RD of 14% (95% CI 8-20%) in PCF favoring stapler closure, without statistical heterogeneity (I² = 0%) (Fig. 3).

(b) Studies that mixed primary and salvage total laryngectomy: Five studies included patients with primary/salvage TL [19, 26-29]. Two studies [27, 28] found a decrease in the PCF rate while the others did not find statistically significant differences. A meta-analysis could not find differences (RD −11%, 95% CI −26% to 4%) in the rate of PCF and showed moderate statistical heterogeneity (I² = 60%). Because these studies mixed data from primary and salvage TL, the results may be influenced by selection bias (Fig. 3).

[Fig. 3: Meta-analysis of studies assessing stapler versus manual closure. CI, confidence interval.]

Comparison 3: Manual Primary Vertical Suture vs. T-Shaped After Primary Total Laryngectomy

We only found one systematic review assessing the results based on the shape of the suture after TL [30]. However, this review pooled results from comparative and descriptive studies and did not discriminate between primary or salvage TL. Twelve studies were identified comparing the shape of manual suture, which included primary [25, 31-33] and mixed primary/salvage TL [34-41] (Table 2; abbreviations: ND, not determined; RCT, randomized controlled trial; PCF, pharyngocutaneous fistula). Bril et al. [42] did not report specific rates of PCF and the study was thus excluded. El-Marakby et al. [37] used other types of reconstruction, but only data related to suture configuration were used. In a meta-analysis we could not find statistically significant differences between both techniques, neither for primary TL (RD 1%, 95% CI −16% to 19%) nor for the group that mixed primary/salvage TL (RD 3%, 95% CI −15% to 21%), but both comparisons had high statistical heterogeneity (Fig. 4).

Comparison 4: Number of Suture Layers After Primary Total Laryngectomy

Two studies [45, 46] compared the two-layer suture with a modified technique using the remnant of the constrictor muscles as suture reinforcement and found a statistically significant difference favoring the two-layer technique without muscle reinforcement (PCF rates 3% vs. 10% and 0% vs. 27%, respectively).

Comparison 5: Regional Pedicled Flap In-lay vs. On-lay After Primary Total Laryngectomy with Partial/Circumferential Pharyngectomy

We could not find any meta-analyses or primary studies about this comparison. All studies found were focused on patients treated by salvage TL.
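The pooled risk differences reported above were obtained with RevMan's random-effects model; the same kind of pooling can be reproduced with a standard DerSimonian-Laird computation, sketched below in Python. The event counts are illustrative placeholders, not the data extracted from the included studies.

```python
import math

# Each study: (events, total) in the stapler arm and in the manual-suture arm.
studies = [(2, 40, 8, 45), (1, 30, 5, 32), (3, 55, 10, 50)]

rds, variances = [], []
for e1, n1, e2, n2 in studies:
    p1, p2 = e1 / n1, e2 / n2
    rds.append(p1 - p2)  # risk difference (stapler minus manual)
    variances.append(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Fixed-effect weights and Cochran's Q feed the heterogeneity estimate.
w = [1 / v for v in variances]
pooled_fe = sum(wi * di for wi, di in zip(w, rds)) / sum(w)
q = sum(wi * (di - pooled_fe) ** 2 for wi, di in zip(w, rds))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)  # DerSimonian-Laird between-study variance

# Random-effects pooling with the variances inflated by tau^2.
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * di for wi, di in zip(w_re, rds)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled RD = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}), "
      f"tau^2 = {tau2:.4f}")
```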
Comparison 6: Regional Pedicled Flap/Free Flap After Primary Laryngectomy with Partial/Circumferential Pharyngectomy We could not find any meta-analyses about this comparison. Kim et al. [47], in an analysis of 676 patients (213 patients in the flap group and 463 in the non-flap group) from the NSQIP database, found statistically significant differences between the group with a flap (pedicled regional or free flap) and the group without a flap regarding wound disruption (1.7% vs. 3.8%) and organ/space infection (0.4% vs. 2.3%), favoring no-flap closure, but this difference disappeared in the multivariate analysis. As this study was based on administrative data, the authors did not discriminate between indications for TL, so it is possible that some cases of hypopharyngeal tumors were included. In addition, they could not isolate the rates of PCF and used wound disruption and organ/space infection as a proxy for PCF. Besides, it is possible that a selection bias favoring non-flap closure exists, because patients who need a flap probably have more extensive tumors and thus require reconstruction of larger mucosal defects. Furthermore, they could not define whether the flap was used as a reinforcement of the primary suture (on-lay technique) or as a part of the pharyngeal wall (in-lay technique). Comparison 7: Regional Pedicled Flap vs. Free Flap After Primary Laryngectomy with Partial/Circumferential Pharyngectomy We could not find any meta-analyses about this comparison. Haidar et al. [48], in a subgroup analysis of the National Cancer Database, found that free flap reconstruction has rates of PCF similar to those with regional pedicled flaps in patients who underwent primary TL. Kim et al. [47], in a subgroup analysis of data from the NSQIP, did not find differences between the two types of flap. DISCUSSION Therefore, the first aims of this review were to describe the surgical techniques of pharyngeal closure and to identify the potential comparisons needed to determine the best options. The first step allowed the design of a framework of alternatives for pharyngeal closure in primary TL and of the potential comparisons needed to resolve uncertainty. The simplest approach is primary suture of the pharynx using a manual suture, and the approach can become as complex as a case in which a free flap is needed. At each of these levels of complexity there are common surgical questions about technical details, such as the number of layers to be sutured, the type of suture to be used, and the need for mechanical devices, through to the selection of the most appropriate flap. This effort to organize the available literature helps us to design a search to answer these common questions, but it also serves as a map for designing future trials to fill the knowledge gaps that still exist on this specific subject. This exercise identified seven potential comparisons to be evaluated, but this number could be higher depending on the specificity of the research question. However, it is unlikely that a flap would be needed for reconstruction after primary TL. Flap reconstructions due to insufficient mucosa for direct closure in laryngeal tumors are therefore sporadic, and some alternatives are rarely used, as was evident in comparisons 1 and 5. The most basic question was how to suture the pharynx. It can be done primarily with a manual suture, which may be continuous or interrupted.
We identified only one comparative study specifically evaluating this question, which showed that a continuous suture decreases the rate of PCF. Avci et al. [43] found an almost 20% lower rate of PCF with a two-layer suture and, although theirs is a non-randomized observational study, the magnitude of the difference is so large that it should be accepted as conclusive [44]. Regarding the number of layers the situation was similar: two layers were more effective than a single one [44]. However, when the question was whether it is worthwhile adding a third layer using the constrictor muscles, the answer was not so clear. Only two studies [45,46] evaluated this strategy and, although the incidence of PCF was higher in the three-layer group, the magnitude of the difference was not as large, and the results of the studies were very heterogeneous. In this case, a specific trial could help to clarify the issue. Pending that, the decision to use a third layer will depend on the individual conditions of the case and the surgeon's preference. The second question addressed the best configuration for manually suturing the pharynx: a continuous suture, be it vertical or horizontal, or a T-shape configuration. It is a common belief that a T-shape suture could have a higher rate of PCF owing to its greater length and the risk of mucosal necrosis at the intersection of the suture lines [49]. However, T-shape closure builds a wider neopharynx that could improve postoperative swallowing [50]. Our analysis, which included 12 studies in patients with primary TL and mixed primary and salvage TL [25, 31-41], could not find statistically significant differences between the two techniques. However, although this represents the best available evidence, these conclusions could be affected by selection bias due to the observational designs and the high heterogeneity found. The final decision will depend on other factors such as surgeon experience, the size of the defect, and intraoperative findings such as suture tension. The next question was whether using a mechanical device, with its standard distance between staples and avoidance of field contamination with saliva, would decrease the risk of PCF. Confirming the findings of three systematic reviews [14-16], meta-analysis of the 12 identified trials [18-29] showed that stapler suture significantly decreases the rate of PCF in patients who underwent primary TL, by about 14%, with no statistical heterogeneity. However, this conclusion was not reproduced in the trials that combined primary/salvage TL. According to these findings, the use of staplers should be encouraged, but the selection of patients (endolaryngeal tumors without risk of hypopharyngeal extension), the surgical technique (wide liberation of the tracheoesophageal groove and retraction of the epiglottis), and surgeon experience with the procedure are critical factors for obtaining the maximal benefit. However, many surgeons no longer use staplers on a regular basis. For cases in which a larger mucosal resection is needed, an alternative could be the use of a regional flap to reinforce the suture. Unfortunately, we could not find studies evaluating these options. This makes it necessary to design trials focused on the subgroup of primary TL in order to make a more robust clinical recommendation. All the previous scenarios were focused on patients with endolaryngeal tumors without any involvement of the hypopharynx.
However, in patients with tumors invading the hypopharynx or the oropharynx, it is necessary to include resection of extralaryngeal mucosa to obtain free margins. In these cases, a primary suture will not be feasible because of the high risk of neopharyngeal stenosis and/or fistula, and a technical modification will be necessary. Some authors [47,48,51] have suggested the use of regional pedicled or free flaps. Although we did not find published evidence supporting the use of regional or free flaps for patients with significant mucosal defects, the available results are prone to selection biases, and expert opinion generally favors the use of regional or free tissue flaps when there is a significant mucosal deficit. The final decision in these cases will depend on the advantages and disadvantages of each flap (operative time, lack of donor vessels in the neck, functional and cosmetic consequences, availability of a microsurgical team, associated comorbidities, and the surgeon's preference). It is necessary to highlight the limitations of this study. First, most of the meta-analyses included trials with a retrospective observational design and are therefore prone to selection biases. In most cases, the comparisons were not adjusted for other factors such as clinical tumor stage and subsite, extent of surgery, and comorbidities. The data were difficult to analyze because the publications did not discuss the amount of pharyngeal tissue resected or the status of the mucosa (edematous, fragile, etc.). Besides, some studies mixed data from primary and salvage TL, which are populations with very different risks of PCF. To address this difficulty, we performed a subgroup analysis that allowed us to use the data and assess the effect that the combination of the two groups might have on the results. CONCLUSIONS This scoping review evaluated the different options for mucosal reconstruction after TL and found that a continuous double-layer suture offers a lower rate of PCF. We could not identify differences in the rate of PCF between vertical and T-shape suture configurations. Stapler closure seems to be followed by a lower rate of PCF than manual suture in patients who are good candidates for this technique, but there is an important deficit of information for evaluating the effectiveness of other reconstructive options. A framework that identifies knowledge gaps was designed, and it can serve as a tool for future clinical trials addressing specific issues that are still unclear. ACKNOWLEDGEMENTS Funding. Open Access funding provided by Colombia Consortium. No funding or sponsorship was received for this study or publication of this article. Compliance with Ethics Guidelines. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors. Data Availability. The data sets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
2023-07-13T06:17:32.495Z
2023-07-12T00:00:00.000
{ "year": 2023, "sha1": "a234b6bd912a95b1c01a1c1a066f54559f43beda", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12325-023-02561-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4d165d334b531efe6b555871901d0c4f460ed914", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1689208
pes2o/s2orc
v3-fos-license
Dimensionally continued Oppenheimer-Snyder gravitational collapse II: solutions in odd dimensions Lovelock gravity extends the theory of general relativity to higher dimensions in such a way that the field equations remain of second order. The theory has many constant coefficients with no a priori meaning. Nevertheless it is possible to reduce them to two, the cosmological constant and Newton's constant. In this process one separates theories in even dimensions from theories in odd dimensions. In a previous work gravitational collapse in even dimensions was analysed. In this work attention is given to odd dimensions. It is found that black holes also emerge as the final state of gravitational collapse of a regular dust fluid. I. INTRODUCTION A generalization of Einstein gravity to other dimensions while keeping the same degrees of freedom (the field equations for the metric remain of second order) is given by the Lovelock action [1]. The theory can also be considered as an extension of the Einstein-Hilbert action (see e.g. [2]), in which new terms make their appearance by taking into the action the Euler densities of the spaces with dimensions lower than the space in consideration. In a previous work [3] we studied gravitational collapse in Lovelock gravity for a spacetime with even dimensions, thus extending the Oppenheimer-Snyder collapsing model. Following the work of [2,4], the reason for separating even from odd dimensions in the Lovelock theory comes naturally in a D-dimensional spacetime when one considers embedding the Lorentz group SO(D − 1, 1) into the anti-de Sitter group SO(D − 1, 2). The Lovelock theory then branches into two distinct classes, with Lagrangians for even dimensions and Lagrangians for odd dimensions. One also finds in this way that the number of constants, which proliferates when one goes to higher and higher dimensions, reduces drastically to two, the cosmological constant Λ and Newton's constant G. In this work we study gravitational collapse in odd-dimensional spacetimes and show that black holes form from regular initial data consisting of a dust fluid. We follow closely the nomenclature and the division of sections made in [3]. In section II the Lovelock gravity for restricted coefficients in odd-dimensional spacetimes is presented. In section III we display the static solutions in odd dimensions found in [4]. In section IV we find some cosmological or interior matter solutions for perfect fluids. In section V we match the solutions found in section IV to the solutions of section III. In section VI we show that black holes can form through gravitational collapse in Lovelock odd-dimensional gravity. Section VII comments on the formation of naked singularities and section VIII presents some conclusions. Throughout the paper we usually set G = c = 1. II. THE LOVELOCK THEORY The most general action in D ≥ 3 spacetime dimensions that yields the same degrees of freedom as Einstein's theory is the so-called Lovelock action, given by [1,2]

S = κ Σ_{p=0}^{[(D−1)/2]} α_p ∫ ε_{a_1⋯a_D} R^{a_1a_2} ∧ ⋯ ∧ R^{a_{2p−1}a_{2p}} ∧ e^{a_{2p+1}} ∧ ⋯ ∧ e^{a_D} + S_m ,  (2.1)

where R^{ab} = dω^{ab} + ω^a_c ∧ ω^{cb} is the curvature two-form, e^a is the local frame one-form, and ω^{ab} is the spin connection, with a_i = 0, 1, . . . , D − 1. The symbol [ ] over the summation sign means one should take the integer part of (D − 1)/2. S_m is a phenomenological action which describes the macroscopic matter sources. In general, the constant coefficients α_p are arbitrary. However, it is shown in [4] that by making certain special choices one is able to get simple meaningful solutions.
Following [4] one first considers embedding the Lorentz group SO(D − 1, 1) into the anti-de Sitter group SO(D − 1, 2), and then separates the theory into two distinct classes of Lagrangians: Lagrangians for even dimensions and Lagrangians for odd dimensions. For odd dimensions, D = 2n − 1, one can find a construction similar to the Chern-Simons action construction in three dimensions. One starts with the Euler density in one dimension above D,

E_{2n} = ε_{A_1⋯A_{2n}} R̄^{A_1A_2} ∧ ⋯ ∧ R̄^{A_{2n−1}A_{2n}} ,  (2.2)

with A_1, A_2, … = 0, 1, . . . , 2n − 1 being the anti-de Sitter indices. Here R̄^{AB} is the anti-de Sitter curvature two-form constructed with the SO(D − 1, 2) connection W^{AB}. Equation (2.2) is a locally exact form, and can be written as the exterior derivative of a Lagrangian in 2n − 1 dimensions, i.e., E_{2n} = dL_{2n−1}, see [4]. Decomposing the connection W^{AB} into the connection under D rotations ω^{ab} and inner translations e^a, one finds the anti-de Sitter curvature R̄ in terms of the Lorentz curvature R:

R̄^{ab} = R^{ab} + (1/l²) e^a ∧ e^b ,  (2.3)

where l is a scale factor related to the cosmological constant through l² = −1/Λ. Using Eq. (2.3) one finds that the Lagrangian in Eq. (2.2) can be put in the form of Eq. (2.4), where the coefficients α_p and the constant κ are given by Eqs. (2.5) and (2.6). Varying the action yields the field equations (2.7), where Q^a_D is a (D − 1)-form associated with the energy-momentum tensor T^a_b through the expression (2.8). III. EXTERIOR VACUUM SOLUTIONS In vacuum all components of the energy-momentum tensor vanish, so that the field equations (2.7) reduce to Eq. (3.1). Inserting the coefficients α_p and the constant κ given in (2.5) and (2.6) into equation (3.1), one gets the vacuum equations for odd dimensions (D = 2n − 1), Eq. (3.2). We consider now a static, spherically symmetric spacetime, with metric of the form

ds²₊ = −f²(r) dt² + f^{−2}(r) dr² + r² dΩ²_{D−2} ,  (3.3)

where t and r are the time and radial coordinates and dΩ²_{D−2} is the line element of a unit (D − 2)-sphere. The subscript + is a reminder that (3.3) is to be viewed as an exterior solution. With the metric (3.3) and equations (3.1) and (3.2), Bañados, Teitelboim and Zanelli found the following exact solution for D = 2n − 1 [4]:

f²(r) = 1 + r²/l² − (M + 1)^{2/(D−1)} .  (3.4)

These solutions describe black holes. We will show that they also represent the exterior vacuum solution to a collapsing (or expanding) dust cloud in Lovelock's odd-dimensional theory, as in the even-dimensional case [3]. IV. INTERIOR MATTER SOLUTIONS The interior spacetime is modeled by a homogeneous collapsing (or expanding) dust cloud, described by the Friedmann-Robertson-Walker metric in D dimensions,

ds²₋ = −dt² + a²(t) [ dr²/(1 − k r²) + r² dΩ²_{D−2} ] ,  (4.1)

where the coordinates t and r are comoving coordinates (we omit throughout the subscript − that would indicate an interior solution). Note that k has dimensions of 1/[length]². The energy-momentum tensor of a perfect fluid is given by

T^{αβ} = (ρ + p) u^α u^β + p g^{αβ} ,  (4.2)

where ρ is the energy density, p the pressure, and u^α the D-velocity of the fluid. From the field equations (2.7) with the metric (4.1) and the energy-momentum tensor (4.2) one obtains the evolution equations (4.3)-(4.4), where the coefficients α_p are given in (2.5) and κ is given in (2.6). Equations (4.3)-(4.4) have a first integral, Eq. (4.6), where ρ_0 and a_0 are constants. They also admit a second integral; that is, the solution of Eq. (4.6) is given by Eq. (4.7) (see also [5]), where b is an arbitrary phase which will be neglected henceforward. The quadratic Ricci scalar and the Kretschmann scalar are given by Eqs. (4.8) and (4.9), respectively. We now assume a dust fluid, p = 0. For such an equation of state energy conservation gives

ρ(t) a(t)^{D−1} = ρ_0 a_0^{D−1} ,  (4.10)

where ρ_0 and a_0 are the constants defined above. Inserting Eq. (4.7) in Eq. (4.10), we obtain the evolution of the density in the dust model, Eq. (4.11). We see that the density (4.11) and the curvature scalars (4.8)-(4.9) diverge at t/l = π, which represents the formation of a singularity.
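Given the exterior solution (3.4) as reconstructed above, the horizon structure is easy to check numerically. The short sketch below evaluates the putative event-horizon radius r₊ = l[(M + 1)^{2/(D−1)} − 1]^{1/2} obtained from f²(r₊) = 0; it is an illustrative check under that assumed form of f²(r), not code accompanying the paper. For D = 3 it reproduces the familiar BTZ value r₊ = l√M, and M < 0 gives no horizon, in line with the naked-singularity discussion of section VII.

```python
import math

def horizon_radius(D, M, l):
    """Horizon radius from f^2(r) = 1 + r^2/l^2 - (M+1)^(2/(D-1)) = 0."""
    if D < 3 or D % 2 == 0:
        raise ValueError("the odd-dimensional branch expects odd D >= 3")
    if M < -1:
        raise ValueError("M < -1 lies outside the solutions considered here")
    u = (M + 1.0) ** (2.0 / (D - 1)) - 1.0
    if u < 0:
        return None  # -1 <= M < 0: no horizon (naked-singularity regime)
    return l * math.sqrt(u)

# D = 3 gives the BTZ value r_+ = l*sqrt(M) = 0.5 for M = 0.25, l = 1;
# for fixed M the horizon shrinks as the (odd) dimension grows.
for D in (3, 5, 7):
    print(D, horizon_radius(D, M=0.25, l=1.0))
```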
V. JUNCTION CONDITIONS Now we match the exterior and interior spacetimes found in sections III and IV, respectively, across an interface of separation Σ. The junction conditions are [6]

ds²₊]_Σ = ds²₋]_Σ = ds²_Σ ,  (5.1)
K⁺_{αβ}]_Σ = K⁻_{αβ}]_Σ ,  (5.2)

where K_{αβ} is the extrinsic curvature, defined in Eq. (5.3), n^±_ε are the components of the unit normal vector to Σ in the coordinates x^±, and ξ represents the intrinsic coordinates on Σ. The subscripts ± represent the quantities taken in the exterior and interior spacetimes. Both the metrics and the extrinsic curvatures in (5.1)-(5.2) are evaluated at Σ. The metric intrinsic to Σ is written as

ds²_Σ = −dτ² + R²(τ) dΩ²_{D−2} ,  (5.4)

where τ is the proper time on Σ and dΩ²_{D−2} denotes the line element on a (D − 2)-dimensional sphere. Using the junction condition (5.1), the metric (5.4) and the exterior metric (3.4), we obtain Eqs. (5.5) and (5.6), where · ≡ d/dτ and both equations are evaluated at Σ. From now on, we will usually omit the subscript Σ to denote evaluation at the interface. Using (5.5) in (5.6) we find Eq. (5.7). The unit normal to Σ in the exterior spacetime is given by Eq. (5.8), and from (5.3) we then get the component K⁺_{θθ}, Eq. (5.9). In what follows the other components of K⁺_{ab} are not needed. The unit normal to Σ in the interior spacetime is

n⁻_ε = ( 0, a(t)/(1 − k r²)^{1/2}, 0, ⋯, 0 ) ,  (5.10)

and from (5.3) we have Eq. (5.11). Using the junction condition (5.1) for the interior spacetime yields a r_Σ = R(τ). From the condition K⁺_{θθ} = K⁻_{θθ}, together with (5.9) and (5.11), we obtain Eq. (5.12). Multiplying equation (4.6) by r²_Σ we get Eq. (5.13). Comparing equations (5.12) and (5.13) we arrive at Eq. (5.14), which gives the mass of the cloud expressed in terms of the constants of the problem. VI. BLACK HOLE FORMATION In order to study black hole formation in this theory we work with the solution found in (4.7). The interior and exterior metrics are given in (4.1) and (3.4) respectively, and, as we have shown in section V, it is possible to make a smooth junction between the two spacetimes. To be complete we treat the cases D ≥ 3. The case D = 3 reduces to the collapse studied in [7]. For convenience we rewrite the scale factor (4.7) as Eq. (6.1), the density (4.11) as Eq. (6.2), and the quadratic Ricci and Kretschmann scalars (4.8)-(4.9) as Eqs. (6.3)-(6.4), respectively. In this work we restrict the values of the quantity k r²_Σ, assuming k r²_Σ = 0, ±1/2. These values have no special meaning, although for k r²_Σ positive and large enough there is no solution at all. Note also that the expression (5.14) for the mass is independent of the value chosen for k r²_Σ. Gravitational collapse occurs for π/2 ≤ t/l ≤ π. The time t/l = π/2 marks the onset of collapse. At this moment there are no singularities in the spacetime, as the curvature scalars (6.3)-(6.4) and the density (6.2) indicate. In fact, the singularity appears only at t/l = π, where all these quantities blow up. To know whether a black hole has formed or not, one has to search for the appearance of an apparent horizon and an event horizon. The apparent horizon is defined to be the boundary of the region of trapped two-spheres in spacetime. To find this boundary in the interior spacetime one looks for spheres Y ≡ a(t) r = constant whose outward normals are null, i.e., ∇Y · ∇Y = 0. Using the metric (4.1) this yields

ȧ² r² = 1 − k r² .  (6.5)

Using (6.1) in (6.5) gives the evolution of the apparent horizon in comoving coordinates, Eq. (6.6). Now, the apparent horizon first forms at the surface r_Σ. Then, for r = r_Σ, equation (6.6) gives the time t/l at which the apparent horizon first forms. On the other hand, one should also be able to find the formation time of the apparent horizon on the surface Σ through an equation on Σ, equation (5.12). Indeed, at the junction one has R = a(t) r_Σ.
Then, from the junction condition (5.12) and equation (6.5), we find that the apparent horizon first forms when condition (6.7) holds. Now, using (6.1), the time of formation of the apparent horizon can be found through Eq. (6.8). Given a dimension D and a mass M, one can obtain R_AH through equation (6.7); then equation (6.9) follows. Equation (6.9) can be put in the integral form of Eq. (6.10), where x ≡ (1/2) t/l. Now, the time x_1 is to be precisely equal to the formation time of the apparent horizon, since one expects that in vacuum both horizons coincide [8]. One then has to integrate (6.10) to find the time x_0 at which the event horizon first forms, at r = 0. For instance, for D = 3, M = 0.25 and k r²_Σ = −1/2 we obtain t_0/l = 1.96. A plot in comoving coordinates (t/l, r/r_Σ) shows the evolution of the apparent and event horizons. To study what happens to external observers we note that a light signal emitted from the surface r₊]_Σ at the exterior time t₊ obeys the null condition dt₊/dr₊ = f^{−2}(r₊). Thus t₊/l → ∞ when r₊]_Σ/l → [(M + 1)^{2/(D−1)} − 1]^{1/2}, so the collapse to the event horizon appears to take an infinite amount of time for an exterior observer, and the collapse to r₊ = 0 is unobservable from the outside. Also, one can compute the redshift z of light emitted from the dust edge. When the dust edge crosses the event horizon we have Ṙ = −(1 − k r²_Σ)^{1/2}, so z → ∞. Thus the collapsing dust will fade from sight, as the redshift of the light from its surface diverges. VII. NAKED SINGULARITIES To study the presence of naked singularities, i.e., singularities not hidden by an event horizon, we analyse equations (3.4), (6.1)-(6.4) and (5.14). Naked singularities appear only when M < 0. Although solutions with negative mass are usually considered unphysical, they will be studied here because they generalize the three-dimensional solutions found in [9,10,7]. In the model adopted here it is useful to separate two distinct classes: i) If l remains finite (in which case Λ ≠ 0), for any D ≥ 3 the curvature scalars (6.3)-(6.4) will blow up when t/l = π, indicating the formation of a naked curvature singularity. ii) If we take the limit l → ∞ (in which case Λ = 0) we see from the exterior metric (3.4) that the event horizon is no longer present, and the collapse will form a naked singularity. Taking the limit in Eqs. (6.3)-(6.4) we obtain Eqs. (7.1)-(7.2). For any D > 3 both (7.1) and (7.2) will vanish, because from Eq. (5.14) M = −1 + O(l^{−D+3}), so in the limit we have M = −1. Also, from Eq. (6.1) we have in the limit a(t) = (−k)^{1/2} t, so that the only possible solution is the one with k r²_Σ < 0. Note also that M = −1 implies that the exterior metric (3.4) is the Minkowski one, although the interior density (6.2) is non-zero everywhere in the dust cloud. So at t/l = π we will have ρ → ∞ in a flat Minkowski spacetime. This is analogous to a Newtonian singularity. VIII. CONCLUSIONS We have analysed gravitational collapse in Lovelock gravity for odd-dimensional spacetimes. We have shown that gravitational collapse of a regular initial non-rotating dust cloud proceeds to form event and apparent horizons, and terminates in a spacelike curvature singularity.
2014-10-01T00:00:00.000Z
1996-08-02T00:00:00.000
{ "year": 1999, "sha1": "fc8647f0628808beee55cc1c700a584f259bb055", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/9902054", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fc8647f0628808beee55cc1c700a584f259bb055", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
53790976
pes2o/s2orc
v3-fos-license
Stakeholder involvement in systematic reviews: a scoping review Background There is increasing recognition that it is good practice to involve stakeholders (meaning patients, the public, health professionals and others) in systematic reviews, but there is limited evidence about how best to do this. We aimed to document the evidence-base relating to stakeholder involvement in systematic reviews and to use this evidence to describe how stakeholders have been involved in systematic reviews. Methods We carried out a scoping review, following a published protocol. We searched multiple electronic databases (2010-2016), using a stepwise searching approach, supplemented with hand searching. Two authors independently screened and discussed the first 500 abstracts and, after clarifying the selection criteria, screened a further 500. Agreement on screening decisions was 97%, so screening was done by one reviewer only. Pre-planned data extraction was completed, and the comprehensiveness of the description of the methods of involvement was judged. Additional data extraction was completed for papers judged to have the most comprehensive descriptions. Three stakeholder representatives were co-authors of this systematic review. Results We included 291 papers in which stakeholders were involved in a systematic review. Thirty percent involved patients and/or carers. Thirty-two percent were from the USA, 26% from the UK and 10% from Canada. Ten percent (32 reviews) were judged to provide a comprehensive description of the methods of involving stakeholders. Sixty-nine percent (22/32) personally invited people to be involved; 22% (7/32) advertised opportunities to the general population. Eighty-one percent (26/32) had between 1 and 20 face-to-face meetings, with 83% of these holding ≤ 4 meetings. Meetings lasted 1 h to ½ day. Nineteen percent (6/32) used a Delphi method, most often involving three electronic rounds. Details of ethical approval were reported by 10/32. Expenses were reported to be paid to people involved in 8/32 systematic reviews. Discussion/conclusion We identified a relatively large number (291) of papers reporting stakeholder involvement in systematic reviews, but the quality of reporting was generally very poor. Information from a subset of papers judged to provide the best descriptions of stakeholder involvement in systematic reviews provides examples of different ways in which stakeholders have been involved in systematic reviews. These examples arguably provide the best information currently available to inform and guide decisions around the planning of stakeholder involvement within future systematic reviews. This evidence has been used to develop online learning resources. Systematic review registration The protocol for this systematic review was published on 21 April 2017. Publication reference: Pollock A, Campbell P, Struthers C, Synnot A, Nunn J, Hill S, Goodare H, Watts C, Morley R: Stakeholder involvement in systematic reviews: a protocol for a systematic review of methods, outcomes and effects. Research Involvement and Engagement 2017, 3:9. https://doi.org/10.1186/s40900-017-0060-4. Electronic supplementary material The online version of this article (10.1186/s13643-018-0852-0) contains supplementary material, which is available to authorized users.
Background The concept of active involvement in research of people with a healthcare condition, their families, friends and carers was founded on the principle that people affected by the condition have a moral right to contribute to decisions about what research is undertaken and in what way [1-3]. The active involvement of other stakeholders (meaning patients, the public, health professionals, health decision makers and funders) grew from a desire to address the lack of real-world relevance of research and to ensure more effective implementation of research findings into practice [4,5]. It is now widely accepted in many parts of the world that the active involvement of many of these groups (which we collectively refer to as 'stakeholders') is beneficial to the quality, relevance and impact of health research [2,3]. Accordingly, many funding bodies, including government and charities, now mandate that researchers actively involve patients and the public in their research, including systematic reviews [6-9], although there is evidence of international variation in the extent to which patients and the public are involved [10]. Systematic reviews aim to inform and support the delivery of evidence-based practice by finding and bringing together (synthesising), in an explicit and transparent way, all the research evidence that addresses a particular topic or healthcare question. Stakeholder involvement within systematic reviews has been proposed as a way to enhance the actual and perceived usefulness of synthesised research evidence, addressing barriers to the uptake of evidence into practice [11]. In this paper, we define (based on a number of published definitions, e.g. [1,12,13]) 'active stakeholder involvement' as the contribution of people who are not researchers throughout the process of production and dissemination of a systematic review, including the planning and conduct of an individual systematic review. While there are a number of examples of active stakeholder involvement in systematic reviews, the approaches to, and extent of, involvement have varied considerably [14-16], and synthesised evidence and resources to guide practice are lacking. As well as active involvement within individual systematic reviews, stakeholders may also get involved at the level of organisations which commission or carry out systematic reviews. A recent review explored examples of consumer involvement within organisations (such as Cochrane) that support the production of systematic reviews [17], but evidence relating to the relevant activities and roles of individual researchers, and how they may involve stakeholders in their reviews, remains scant [18]. As part of a wider project to provide guidance to researchers about how to involve stakeholders in systematic reviews [19], we undertook a mixed-method evidence synthesis, first completing a scoping review to create a broad map of evidence relating to stakeholder involvement in systematic reviews, followed by two contingent syntheses [20]. Here, we report the results of the scoping review. The aims of this paper are therefore to: 1. Document the evidence-base relating to stakeholder involvement in systematic reviews 2. Use this evidence to describe key features of how stakeholders have been involved in systematic reviews Design We carried out a scoping review, following a protocol [20].
We followed the methodological steps outlined by Arksey and O'Malley [21] and used an iterative team approach, with regular team meetings to discuss progress and reach consensus on next steps, to ensure clarity of purpose and a balance between breadth and comprehensiveness of the review [21-23]. Protocol deviations, with justifications, are described in Additional file 1. Search strategy We implemented a stepwise approach [24] to promote efficient identification of up-to-date literature, balancing the expected large volume of literature with available time and resources. Details of this approach, including pre-agreed criteria and contingencies to inform decisions relating to the extent of the searches, have previously been described [20]; below we report the actual steps of searching and a brief justification for these steps. We used a comprehensive search strategy, adapted for each database (see Additional file 2). In step 1, we searched a comprehensive set of databases (CENTRAL (CDSR, DARE, HTA, Cochrane Methodology Register), Embase (Ovid), MEDLINE (Ovid), CINAHL (EBSCO), AMED, Joanna Briggs Database and ProQuest Dissertations and Theses (handsearched)) within a narrow time period (from 01 January 2014 to 09 April 2016). The aim of step 1 was to identify, in an efficient way, the databases most likely to include relevant papers. In step 2, we searched a more limited set of databases (Embase, MEDLINE, CINAHL and HTA) for a longer time period (01 January 2010 to 31 December 2013), with the aim of exploring whether there was justification for extending the search beyond 2010. Searching and application of the inclusion criteria were completed for each step before progression to the next step. For step 1, we noted the source database (or databases) of each identified record, and the databases from which the greatest numbers of included papers were identified. The results of these explorations were discussed and review team consensus was reached on which databases to include in step 2. After step 2, the review team explored the publication dates of records meeting our inclusion criteria. The majority of papers meeting our inclusion criteria (63%) were published in either 2014 or 2015 (see Additional file 3). The sharp drop in the number of included papers from 2014 to 2013 and the relatively stable number of included papers between 2013 and 2010 were key factors in the team decision not to extend electronic searching to before 2010. Additional sources we searched included the reference lists of recent relevant reports and reviews (e.g. [6,17,25]), the reference lists of all included studies, and articles published in the journal Research Involvement and Engagement. To identify unpublished reports, we contacted authors of published papers and promoted this review via social media. Selection criteria Selection criteria for inclusion were purposefully wide. We included any paper, published or unpublished, regardless of study design, including commentaries, letters and expert opinion, which investigated, reported or discussed any aspect of stakeholder involvement in a systematic review. We anticipated that we would include (but would not be limited to) evidence such as published systematic reviews which reported involvement; reports of methods of involvement in an individual systematic review; studies quantitatively or qualitatively evaluating involvement in individual systematic reviews; and opinion, commentary and discussion relating to involvement in systematic reviews.
We excluded papers that focussed on stakeholder involvement in the generation of research priorities (unless they were specifically generating questions for a systematic review), and papers on involvement in research more broadly or in guideline development, unless there was an explicit mention of involvement in systematic reviews. Systematic reviews that focussed on synthesising the evidence related to stakeholder involvement in primary research were also excluded. We excluded titles without abstracts and review protocols; this was a pragmatic decision made in light of the high volume of search results. Definition of key terms We used the following operational definitions, pre-stated in the protocol [20], to support the application of the selection criteria: Stakeholder - any person who would be a knowledge user of research but whose primary role is not directly in research. Potential stakeholders include a broad range of people, including those who are actual or potential recipients of health or social care, where this may include patients, carers and family members, or people interested in remaining healthy who are seeking information about a health condition or treatment for personal use [26]; members of organisations that represent people who use services; people with a professional role in health and social care; and policy makers and managers. We documented the types of people involved within any evidence included in this review, highlighting where this included patients, carers and family members, and where this included other stakeholders only. Systematic review - a research process in which literature relevant to a stated question is identified and brought together (synthesised) using explicit methods [27], including reporting of inclusion/exclusion criteria, search methods and details of included studies. We accepted systematic reviews regardless of the type of evidence synthesised (i.e. quantitative, qualitative, mixed-methods) and the type of question addressed (e.g. intervention effectiveness, diagnostic test accuracy, patient experiences). Involvement in a systematic review - any role or contribution of stakeholders toward the development of a review protocol, completion of any of the stages of a systematic review or dissemination of the findings of a review. Methods of applying selection criteria One review author (PC) ran the search strategy and excluded any obviously irrelevant titles. Two reviewers (PC, AP) independently reviewed the abstracts and applied the selection criteria to the first 500 records; agreement was explored and a full team discussion was held to clarify the selection criteria. This clarification led to a number of post hoc exclusion criteria (described above under selection criteria and within Additional file 1). Subsequently, we agreed that two independent review authors (PC, AP) should review a further 500 records using the clarified criteria and that, if agreement between independent reviewers was greater than 95% when using these refined criteria, subsequent selection of papers would be performed by one reviewer only; this agreement was 97%, and therefore one reviewer (AP) screened the remaining abstracts.
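The 95% screening-agreement threshold used above reduces to a simple calculation, sketched below. The decision labels are invented for illustration, not the actual screening records; raw percent agreement was the criterion actually applied, and Cohen's kappa is shown only as a commonly reported companion statistic that corrects agreement for chance.

```python
# Dual-screening decisions on the same abstracts: 1 = include, 0 = exclude.
# Invented labels, repeated to mimic a 500-record sample.
a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0] * 50
b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0] * 50

agreement = sum(x == y for x, y in zip(a, b)) / len(a)

# Cohen's kappa: observed agreement corrected for chance agreement.
inc_a, inc_b = sum(a) / len(a), sum(b) / len(b)
pe = inc_a * inc_b + (1 - inc_a) * (1 - inc_b)
kappa = (agreement - pe) / (1 - pe)

print(f"percent agreement = {agreement:.1%}, kappa = {kappa:.2f}")
```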
Data extraction and synthesis Data extraction For all included papers, one reviewer (AP) extracted and categorised data into structured tables. Extracted data included bibliographic information, type of paper, stated aim, topic/focus of the systematic review, study/review methodology, description of reported involvement, details of the people involved, the stage in the review process at which people were involved, and any formal research methods used. Retrospective categorisation of data included focus of review and type of evidence synthesised (see Additional file 1, protocol deviations). Details of the operationalisation of these data extraction items are provided in Additional file 4. Judgement of comprehensiveness of description Our review aim was to describe key features of how stakeholders have been involved in systematic reviews; consequently, we were principally concerned with the comprehensiveness of the description of the methods of involvement, rather than with appraising the quality of the methods of the reviews. We devised a method for judging the comprehensiveness of the description of the method or approach to involvement, given that there are no standardised tools for such a task. Criteria for categorising the comprehensiveness of the description provided within papers were developed, adapted from Pollock [28]. Initially, two reviewers (AP, CS) assigned these criteria independently for a random sample of 20% of the papers identified from step 1 of searching; this was 42 of 210 papers. There was agreement between independent reviewers for 57% (24/42) of the assessed sample. The agreement between reviewers, the implications relating to disagreements and the perceived risk of bias to the review results are reported in Additional file 5. Following discussion and clarification of the criteria (see Additional file 5), it was agreed that one reviewer would assign judgements to the remaining papers, using the following criteria: 'Green' - comprehensive description of one (or more) specific method or approach to the involvement of stakeholders in systematic reviews; description sufficient to enable replication of the methods. 'Amber' - a brief or partial description of one (or more) specific method or approach to the involvement of stakeholders in systematic reviews; description sufficient to enable partial replication of the methods. 'Red' - few details provided and/or an inadequate description of the method or approach to the involvement of stakeholders in systematic reviews; description insufficient to enable any replication of the methods. Detailed description of methods or approaches to involvement Additional, more detailed data extraction was performed for papers that were judged 'green' for comprehensiveness of description. In addition to a narrative description of the methods or approaches to involvement, one reviewer (AP) extracted and tabulated the stated aim of involvement, the number and characteristics of people involved, methods of recruitment, format of involvement (e.g. face-to-face meeting, telephone meeting, written consultation, online survey), amount of involvement (number of meetings, number of days involved), details of ethical approval and financial compensation given to stakeholders, evaluation of the involvement, and tools used for reporting involvement. Stakeholder involvement in this systematic review One consumer (HG) and two consumer representatives (RM, CS) were members of the project and author team for this systematic review. All contributed to the face-to-face discussions which led to the development of the review protocol, and read, commented on and had authorship of the published protocol.
All contributed to project teleconferences throughout the review, particularly when making decisions relating to the stepwise search methods. Additionally, CS independently applied judgements of comprehensiveness to a sample of full papers. All three discussed the key findings of this review and contributed as authors to the final manuscript. Results of the search We screened 12,908 titles and abstracts and applied selection criteria to 672 full papers. Three hundred sixty-nine of these 672 full papers were excluded: 118 as they were abstracts only, 18 as they were protocols, 16 as they were duplicates and 217 as they did not meet our inclusion criteria. Reasons that these 217 did not meet our inclusion criteria are listed in the table of excluded studies (Additional file 6); the main reasons for exclusion were that the paper was a systematic review but there was no involvement of people (approximately 30%), the paper did not describe or report a systematic review (approximately 25%), or the paper described involvement in research other than a systematic review (approximately 25%). This left 291 papers that met our criteria for inclusion in the scoping review (see Fig. 1). Characteristics of included papers Details of the 291 included papers are provided in the table of included studies (Additional file 7). A brief summary is provided below. Type of paper Thirty-one percent of included papers were published systematic reviews; 54% were reports of a guideline or recommendation in which a systematic review component was described; and 5% were papers specifically describing methods of involving stakeholders in a systematic review. Stakeholders involved Thirty percent of the included papers involved patients and/or carers within the systematic review process, while 41% involved other stakeholders (e.g. health professionals, academic experts, representatives of patient organisations) but not patients or their family members. In almost one third of the included papers (29%), it was not clear who the stakeholders involved in the review were and whether this included patients and/or carers. Country One third (31.6%) of papers were from the USA, one quarter (26.1%) from the UK and 10.0% from Canada. Of the remaining papers, 22.7% were from Australia, the Netherlands, Germany, Italy, France or Spain, and 9.6% from a further 15 countries with between 1 and 4 papers each (see Table 1). Stage of the review process In almost half of the papers (47.8%), the stage of the review process at which stakeholders were involved was unclear. In just over one quarter (27.5%), stakeholders were involved in interpreting the results after the evidence had been synthesised. In around one fifth (22.3%), stakeholders were involved either throughout the whole review process or during one or more stages of review completion (see Table 2). Focus of the review Seventy-one percent of the included systematic reviews were judged to be focussed on one of the International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10) categories (Table 3 and table of included studies (Additional file 7)). Most frequently (10%), this was 'factors influencing health status and contact with health services', where reviews covered topics such as the effectiveness or implementation of care pathways for specific populations (e.g. paediatrics, geriatrics, emergency care).
The specific diseases or health areas covered by the greatest numbers of reviews were mental and behavioural disorders (8.6%), neoplasms (6.9%), diseases of the musculoskeletal system and connective tissue (6.2%) and certain infectious and parasitic diseases (5.5%). Thirteen percent of the reviews which did not fit one of the ICD-10 categories were focussed on a specific intervention, most commonly medical or surgical interventions (8.6%) and public health interventions (5.2%). Ten percent of reviews were focussed on an area of research, rather than a specific health or disease area or intervention; more than half of these (55%, n = 29) were focussed on methods of stakeholder involvement or engagement, while the remainder focussed on other areas of research methods, such as methods of statistical tests within primary research. The remaining 7% could not be categorised within any of these groups and focussed on, for example, areas such as teaching, data protection and criminal justice. Comprehensiveness of description of method or approach to involvement Table 4 shows the assigned judgements of the comprehensiveness of the description of the method or approach to involvement. Figure 2 illustrates the proportions of different types of paper which were judged to be 'green', 'amber' or 'red', when patients/carers were involved, and at different stages in the review process. In total, 59% of the included papers were judged to provide few or inadequate details ('red'), with only 10% judged to provide a comprehensive description of one, or more, method or approach to involvement ('green'). Detailed description of methods or approaches to involvement The 30 papers which were judged as providing a comprehensive ('green') description of their methods or approaches to involvement included 14 'methods' papers describing an experience of stakeholder involvement in one (or more) systematic review [25, 29-41]; 11 systematic reviews in which the stakeholder involvement was concurrently described [42-52]; 2 guidelines or clinical recommendations in which the involvement in the systematic review component was described [53,54]; and 1 paper which described the development of a tool to report stakeholder involvement [55]. Two of the papers each described two different systematic reviews [37,55], meaning that a total of 32 systematic reviews are described. Table 5 summarises the key characteristics of these 32 systematic reviews, and Table 6 summarises the data relating to stakeholder involvement; a brief narrative summary of key features is provided below. Table 5 states the aim and focus of the 32 systematic reviews. Sixty-eight percent were focussed on one of the ICD-10 categories, with mental and behavioural disorders being the most common health topic (22%). Sixteen percent were focussed on a specific intervention rather than a disease area, most commonly a public health intervention (12%). The remaining 16% were focussed on either research or another topic. Review aim/focus A majority of the reviews (56%) synthesised both qualitative and quantitative evidence, while 19% included only quantitative studies and 12.5% included only qualitative studies. The type of evidence included was unclear for 12.5%. Two of the reviews described using a 'realist' review methodology, and 2 were Cochrane reviews of randomised controlled trials.
(Table 3 notes: no papers were categorised as XVI Certain conditions originating in the perinatal period or XVIII Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified; the asterisked categories are III Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism, VII Diseases of the eye and adnexa, VIII Diseases of the ear and mastoid process, XVII Congenital malformations, deformations and chromosomal abnormalities, XX External causes of morbidity and mortality, and ICHI Functioning intervention. Table 4, Comprehensiveness of description of method or approach to involvement, tabulates the judgements against the 'green', 'amber' and 'red' criteria defined in the Methods.) People involved Seventy-eight percent of the systematic reviews involved patients, carers or family members, while in one (3%) the people involved were peer support workers. In 19% of systematic reviews, the only people involved were professionals or academic experts, although one of these [56] aimed, but failed, to recruit patient representatives. Where there were face-to-face meetings, the number of stakeholders involved ranged from 2 to 27; where there were one-off events, often advertised as open to the general public, the numbers of stakeholders involved ranged from 15 to 81; where involvement did not require a face-to-face meeting, for example using an electronic Delphi or survey, the numbers invited ranged from 29 to 340 (see Table 6). Geographical location (from which stakeholders were recruited) The majority of the involvement occurred in the UK, with two thirds (66%) of papers describing UK-based activities. Of the remaining 34%, 2 recruited people from across Europe, 3 were carried out in Canada, 3 in the Netherlands, and 1 each in Australia, the USA and Spain. How people were recruited In the majority of the systematic reviews, known individuals or groups were personally invited to be involved (e.g. [32,38,40,56]). For a further 7/32 of the systematic reviews, involvement opportunities were advertised to the general population, often snowballing information out via target groups and organisations, and anyone who volunteered could get involved [25,36,47,49,50,52,53]. A combination of different recruitment strategies was used for 1 systematic review [33], and the method of recruitment was unclear for 3 systematic reviews [30,45,51]. Format of involvement The format of involvement comprised direct, face-to-face interaction in 81% of the systematic reviews and an electronic Delphi method or survey in 19%. The face-to-face interaction was either in the format of a meeting (53%; [29, 31, 32, 34, 35, 38, 40-44, 46, 50, 52, 55, 58]), a larger workshop or public event (19%; [25,36,37,47,49]) or a combination of both (9%; [33,45,48]). In each of the 6/32 systematic reviews which used an electronic Delphi method, there was a specific and focussed aim of stakeholder involvement; in 4/32 [30,53,54,56], this was broadly related to reaching consensus on factors, recommendations or statements arising from the results of the systematic review, and in 2/32 [39,55], it was to reach consensus on the topic or focus of the systematic review.
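Where an electronic Delphi was used, consensus across rounds comes down to tallying, per statement, the proportion of stakeholders rating it highly and carrying unresolved statements into the next round. The sketch below shows one way such a tally might be run; the ratings, the 1-9 scale and the 75% threshold are illustrative assumptions, not values reported by the included reviews.

```python
# One electronic Delphi round: stakeholder ratings (1-9) per statement.
# Invented data; the consensus threshold is an assumption for illustration.
ratings = {
    "statement_1": [8, 9, 7, 8, 9, 6, 8],
    "statement_2": [3, 7, 8, 2, 9, 5, 4],
    "statement_3": [7, 8, 8, 9, 7, 8, 9],
}
THRESHOLD = 0.75  # fraction rating 7-9 required to declare consensus

for name, scores in ratings.items():
    support = sum(7 <= s <= 9 for s in scores) / len(scores)
    verdict = "consensus reached" if support >= THRESHOLD else "carry to next round"
    print(f"{name}: {support:.0%} support -> {verdict}")
```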
Amount of involvement Where there was direct face-to-face interaction, there could be between 1 and 20 meetings or events. The majority (83%) of the 24 reviews providing this information held 4 or fewer meetings (median 2 meetings), while one held 5 meetings plus 3 public workshops [45]. Three held multiple meetings (12, 15 and 20 meetings, respectively [25,46,58]); in each of these three examples, the approach is described as 'participatory'. Where reported, the length of face-to-face meetings varied from 1 h to half a day. Generally, the Delphi approach involved three rounds of an electronic survey, although in one example, after two rounds of Delphi voting, there was a direct face-to-face consensus meeting [54]. Ethical approval Details of ethical approval were reported for 31% of systematic reviews; for details, see Table 6. One paper reported that ethical approval was sought but not required [44]. No details relating to ethical approval were provided by the remaining 66% of papers. Financial compensation Expenses (such as travel, accommodation and care costs for family members) were reported to be paid to people involved in 25% of systematic reviews; in two, this was expenses only, while in six, money or a voucher was provided in addition to expenses (see Table 6). No details relating to financial compensation are reported in the remaining 75% of systematic reviews. Tools or method of reporting involvement Thirty-four percent of the included papers had a clear method of reporting involvement. Four used some sort of tool, framework or checklist: Concannon et al. [42] developed and used a 7-item questionnaire for reporting stakeholder involvement in research, Liabo [46] used a framework for considering the impact of involvement, and Martin et al. [36] reported an evaluation based on reporting standards within the Guidance for Reporting Involvement of Patients and the Public (GRIPP). (Extracts from Table 6, summarising for individual reviews the aims of involvement, the people involved, the format and amount of involvement, and ethical approval and compensation details, appeared here; the legible fragments concern Braye and Preston-Shoot [31], Morgan et al. [58] and Pearson et al. [50].) A further six systematic reviews reported information about involvement within a specific section, table or additional file [30,35,37,38,47,50].
Within the remaining 66% of systematic reviews, information about methods of involvement was not reported within a particular section, table or file, but was distributed throughout the paper. Evaluation of the methods of involvement None of the 32 studies carried out any formal evaluation of the impact of involving stakeholders; however, 28% collected data relating to the views and experiences of the people involved. Of these, four used a questionnaire to elicit the views and experiences of stakeholders [29,30,36,41]; three held a discussion with stakeholders in which they were encouraged to share or reflect on their experiences and perspectives [31,33,47]; and two used both a questionnaire and a discussion [38,44]. In addition, Liabo [46] reported data arising from audio recordings and minutes of all meetings, Hyde et al. [34] described 'impact' within a table, and the reflections of the researchers on the process of involvement were discussed by others [35,37,40,52]. Discussion Key findings: evidence-base relating to stakeholder involvement in systematic reviews We identified 291 papers describing stakeholder involvement in systematic reviews. Approximately two thirds of published examples describe UK activities, but we found examples from at least 24 countries. Reporting of who was involved, in what ways and at what stage in the review process was generally very poor, and the majority of the papers (59%) were judged to provide few details and/or an inadequate description of the method or approach of involving stakeholders. Thirty percent of systematic review teams clearly involved patients/carers, but in many cases (41%) the stakeholders involved were health professionals, academic experts or representatives of patient organisations, but not patients or their family members. We identified 30 papers, describing 32 systematic reviews, which we judged to have sufficiently comprehensive reporting to allow a more in-depth synthesis of methods or approaches to the involvement of stakeholders in a systematic review. We have described key features of how stakeholders have been involved in systematic reviews, using data from these 32 examples. However, it was notable that, despite the selection of systematic reviews which were judged to provide a comprehensive description of one or more methods of involvement, there was still inadequate (or absent) reporting of a number of features in which we were interested. For example, the majority of papers did not provide any information relating to ethical approval or financial compensation for the stakeholders involved. A key contributing factor to the poor reporting of how stakeholders were involved may have been the lack of a tool or standardised method for reporting. On the few occasions where a particular tool was used to support the reporting of information relating to involvement, the tool had often been developed specifically by the systematic review authors. In many cases, the method of reporting comprised a written description of the activities in which stakeholders had been involved, but we found inconsistencies in the type of information presented and in the location of this information within published papers. Implications: methods of involving stakeholders in systematic reviews The evidence which we have synthesised demonstrates that actively involving stakeholders within systematic reviews is feasible and can be incorporated into a wide range of different types of systematic review.
While there can be considerable variation in how stakeholders are involved and in the types of stakeholders who are involved, and while there is currently an absence of evidence to directly inform choices of methods for stakeholder involvement within future reviews, a number of implications can be drawn from our synthesised evidence. In particular, evidence drawn from the 32 examples explored in this review can highlight some of the methodological decisions which may be made when planning stakeholder involvement in future reviews. These include:
Who to involve? Will people directly affected by the healthcare topic addressed within the systematic review (i.e. individual patients, carers or family members) be involved? Will health professionals, academic experts or representatives from patient organisations be involved?
How to find people to involve? Within our 32 examples, we found two key methods of recruiting stakeholders to be involved in systematic reviews: in the majority of our examples, there were personal invitations to known individuals or groups, but in some cases recruitment occurred through advertising to the general population in order to find stakeholders willing to volunteer to be involved.
How will people be involved? Within our 32 examples, two distinct methods of involving people in a systematic review were identified: (i) face-to-face meetings or events or (ii) an electronic Delphi method. Where there were face-to-face meetings, these could be attended by invited participants only, or could be an open event or workshop which members of the public are invited to attend. Invited participants may attend only a small number (often between 1 and 4) of meetings during the course of a systematic review, but many more meetings may be held where a participatory approach is used.
How many stakeholders to involve? The current evidence base indicates that the number of stakeholders depends on the way in which they will be involved. Evidence from the 291 papers in our synthesis shows that 1 stakeholder may be a coauthor on a systematic review, 2-10 stakeholders may be members of a steering group, 5-50 stakeholders may attend face-to-face meetings or focus groups, and 20-400 stakeholders may participate in Delphi rounds or attend events or conferences.
Use of research methods? Our examples highlighted that the following research methods have sometimes been incorporated into stakeholder involvement in systematic reviews: focus groups, interviews and a number of consensus decision-making techniques such as Delphi, Nominal Group Technique and voting/ranking processes.
Other issues to consider when planning stakeholder involvement in systematic reviews are whether ethical approval will be required, and resources for payment of expenses and any other financial compensation or reward. Although there is insufficient evidence to directly inform choices relating to who to involve and in what way, the findings arising from the 32 papers identified in this review have been used to produce, in collaboration with Cochrane Training, freely available online learning material and resources [60]. There have been many urgent calls for high-quality training materials, reporting guidelines and examples of best practice to support active stakeholder involvement and to enhance the relevance, usefulness and accessibility of systematic reviews [2,16,18,33,61]; the evidence from this review can therefore currently play a key role in learning and support relating to active stakeholder involvement in systematic reviews.
Implications: reporting stakeholder involvement in systematic reviews
Recording and reporting of stakeholder involvement is important, both to ensure transparency in relation to the contributions and roles of different stakeholders within the review process and to contribute to the evidence base relating to this field. This scoping review highlights that the current reporting of involvement in systematic reviews is very poor and sometimes absent, and rarely provides a comprehensive description of who was involved and in what way. While there are a number of tools and frameworks which review authors could consider using (e.g. [36,46,55]), there is not currently any tool, guidance or recommendation specifically designed to support reporting of involvement within systematic reviews. Generic guidance relating to the reporting of stakeholder involvement in research has recently been updated (GRIPP2, [62]); however, this guidance has not been specifically tested for use with systematic reviews and lacked international input during development. It is clear that there is an urgent need for improved reporting of the involvement of stakeholders in systematic reviews. Such reporting should enhance the ability to develop evidence-based guidance around how to involve stakeholders in systematic reviews, and to explore and evaluate the impact of involvement.
Identification of relevant systematic reviews and data extraction
It is unlikely that we identified all relevant examples of stakeholder involvement in systematic reviews, as we adopted a pragmatic search approach aimed at efficiency within project time and resource constraints. This was compounded by poor reporting and inconsistent terminology in this area. We believe it is highly likely that there are many systematic reviews where stakeholders played a key role that our methods could not identify. Our decision to exclude titles without abstracts and review protocols at the study selection stage may have introduced publication bias into our results, with a bias toward inclusion of papers published in peer-reviewed journals. Only one review author extracted data from the included studies, and there is the potential that this may have introduced bias and errors in extraction. In an attempt to improve transparency and reduce data extraction errors, we copied and pasted data verbatim from included papers into an electronic data extraction sheet. This is reported in the table of included studies (Additional file 7).
Judgement to identify those with the most comprehensive description
The agreement between independent reviewers when applying the 'comprehensiveness' judgement to a subset of papers indicated that there were disagreements on around 17% of 'green' categorisations. We did not have the time or resources to obtain independent judgements on a higher proportion of studies. We are therefore not confident that our subset includes all papers which may provide an adequate description of some parts of the methods of involving people in a review. However, as the aim of this phase was to identify and describe methods of involvement from examples of systematic reviews, the impact of potentially falsely including or excluding a paper from this subset was perceived to be low. We present the included 'green' papers as examples of systematic reviews in which there was involvement of stakeholders, and take care to stress that these are examples rather than a comprehensive sample.
Our judgement of the comprehensiveness of the description of the methods was not a judgement of the quality of the involvement methods, and relates only to the depth of the description of stakeholder involvement provided in the identified paper. Over half (54%) of the 291 included papers were reports of a guideline or recommendation, but only 2 of these were judged as 'green' for comprehensiveness of description. A potential explanation for this finding could be that stakeholder involvement is generally a core component of guideline development, but the primary focus of related journal publications is often the key clinical messages and implications, rather than the methods of the guideline, which are often fully described elsewhere. A judgement of 'amber' or 'red' for the comprehensiveness of the description of the method of involvement in the published paper is not an indication either that the quality of the methods was poor or that details of methods of stakeholder involvement are not available elsewhere.
Conclusion
This systematic review summarises evidence relating to the involvement of stakeholders in systematic reviews. We identified a relatively large number (291) of papers reporting stakeholder involvement in systematic reviews, but the quality of reporting was generally very poor. The level of reporting of the involvement of stakeholders in systematic reviews, and the consistency with which it is reported, must be improved so that guidance around how people can be involved in systematic reviews can be developed and the impact of involvement explored. This scoping review lends support to calls for high-quality training materials and examples of best practice to support active patient and public involvement and enhance the relevance, usefulness and accessibility of systematic reviews [2,16,18,61,63]. We identified a subset of 30 papers which we judged to provide a comprehensive description of stakeholder involvement in systematic reviews, and used these examples to summarise different ways in which stakeholders have been involved in systematic reviews. These examples arguably currently provide the best available information to inform and guide decisions around the planning of stakeholder involvement within future systematic reviews. This evidence has been used by Cochrane Training to develop online learning resources relating to how to involve people in systematic reviews [60], and has been used to develop a framework for describing stakeholder involvement in systematic reviews (Pollock A, Campbell P, Struthers C, Synnot A, Nunn J, Hill S, Goodare H, Morris J, Watts C, Morley R: Development and application of a framework to describe how stakeholders have been involved in systematic reviews, submitted).
Authors' information
Sophie Hill is the Head of the Centre for Health Communication and Participation (www.latrobe.edu.au/chcp) at La Trobe University, a centre she has established from the foundation of the Cochrane Consumers and Communication Review Group (http://cccrg.cochrane.org/). The Centre has an applied focus, with three roles: coordinating the production and publication of evidence on interventions to communicate with people about health; innovative research on communication issues that have been neglected, such as multimorbidity; and a knowledge translation function, for getting evidence into practice and policy.
Heather Goodare is a Cochrane consumer reviewer for breast cancer and stroke; she was the first patient representative on the BMJ Editorial Board (1995–1999) and is a Life Fellow of the Royal Society of Medicine. Originally an academic book editor, Heather trained as a counsellor after her own experience of breast cancer and of a flawed research study (Chilvers et al. [64]) in which she was a patient (see Goodare [65]). Chris Watts is the Learning and Support Officer for Cochrane Training. He works on a range of Cochrane learning projects including the design and development of learning materials and pathways to support learners in a variety of Cochrane roles, particularly through online resources and initiatives for distance learners. Chris is a researcher by background and previously worked at the Royal College of Nursing in the UK, where he led a team of Research Analysts delivering evidence synthesis and evaluation projects supporting professional development and policy. Richard Morley is the Consumer Coordinator for Cochrane, supporting consumer involvement in the production and dissemination of Cochrane evidence. He has extensive experience of public engagement and partnership working in the voluntary, public and education sectors. Jacqui Morris is a Reader in Rehabilitation Research with a particular interest in how systematic review evidence can be implemented in allied health professionals' practice. She is a co-author on several Cochrane reviews.
Ethics approval and consent to participate
Not applicable
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2018-11-24T13:03:31.471Z
2018-09-11T00:00:00.000
{ "year": 2018, "sha1": "93d13ddeb30bd3ba5bf788e652bb132f798f59a8", "oa_license": "CCBY", "oa_url": "https://systematicreviewsjournal.biomedcentral.com/track/pdf/10.1186/s13643-018-0852-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93d13ddeb30bd3ba5bf788e652bb132f798f59a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16914464
pes2o/s2orc
v3-fos-license
Cosmic Electroweak Strings
We examine the Standard Model field configurations near cosmic strings in a particular class of models. This class is defined by the condition that the generator of the flux in the string, $T_s$, commutes with the Standard Model Lie algebra. We find that if the Standard Model Higgs carries a charge $F_h /2$ under $T_s$, cosmic string solutions have Z-flux $\Phi_Z =[n-F_h N/F_{\phi}]4\pi \cos \theta_w /g$, where $n$ is any integer and $4\pi N/qF_{\phi}$ is the flux of the gauge field associated with $T_s$. Only the configuration with the smallest value of $|n-F_h N/F_{\phi}|$ is stable, however. We argue that the instabilities found at higher $\Phi_Z$ are just associated with paths in configuration space reducing $|n-F_h N/F_{\phi}|$ by one unit. This contradicts recent claims that the instabilities in such models represent the spontaneous generation of current along the string. We also show that the stable strings have no Standard Model fermion zero modes: therefore there is no possibility of supercurrents carried by Standard Model particles in this class of models.
In Grand Unified Theories (GUT) of particle physics with spontaneous symmetry breaking (SSB) there are often topological defects [1,2]. In fact, if the symmetries of nature are unified into one simple Lie group G, then there must exist topological monopoles. The mass density from such monopoles in a cosmological setting would dominate the universe, and this is not observed. This is the monopole problem, for which a number of solutions have been proposed; the leading contenders being inflation or the formation of strings at a later SSB which connect monopole and anti-monopole and lead to their annihilation. The existence of other topological defects, specifically domain walls and strings, is dependent on the details of the SSBs present in the model. If domain walls are formed they too will dominate the mass density of the universe. This can be avoided by inflation or by strings carving up the walls. Such considerations of cosmological implications can lead to restrictions on the allowable GUTs [3]. Topological strings, on the other hand, generally do not lead to cosmological catastrophes. In fact, strings are considered a possible source of large-scale structure in the universe. Witten showed [4] that for some particle physics models strings could be superconducting and support very large currents, via two different mechanisms. The first involves the occurrence of a charged scalar condensate; the other involves the appearance of fermion zero modes on the string. A third mechanism using charged vector bosons was later identified [5,6]. All three types depend upon the details of the GUT model and none are generic. Recent papers [7,8], however, have argued that all GUT scale strings become superconducting at the electroweak symmetry breaking, and furthermore that a supercurrent spontaneously develops without an applied electric field. This could have serious cosmological implications, not least because superconducting string loops can shrink to form stable rings, and a population of such loops is as disastrous as topological monopoles. It is therefore important to check whether GUT strings are 'generically' superconducting, and in doing so we return to an old question: how does a cosmic string affect the fields of the electroweak theory in its vicinity?
So consider a GUT with two symmetry breakings and a lagrangian built from gauge-covariant kinetic terms for the scalar fields φ and H, where $D_\mu$ is the covariant derivative, and a general gauge-invariant fourth-order potential $V(\phi, H)$ (a sketch of this lagrangian is given at the end of this passage). Now φ will acquire a vacuum expectation value (vev), which we take to be at the GUT scale of $10^{16}$ GeV, and breaks the symmetry group G down to $G_1$; then H acquires a vev at a lower energy scale and breaks the symmetry group $G_1$ down further to $G_2$. In general there can be more stages of symmetry breaking, but we are only considering two for simplicity. At the first symmetry breaking some of the various components of the scalar field H will acquire masses, while a subset of the H fields will develop an effective potential that will lead to the second SSB. We will take this subset to be such that its elements can be identified with the Higgs doublet of the electroweak Standard Model, and so the second symmetry breaking occurs at the electroweak scale of $10^2$ GeV. Now suppose that the first homotopy group for the first symmetry breaking is nontrivial, $\pi_1(G/G_1) \neq 0$, so that there are stable string solutions with the asymptotic forms $\phi = \phi_0 \exp(i T_s \theta)$ and $X^s_\theta = T_s/(gr)$ as $r \to \infty$, where $T_s$ is the string generator. Now consider the terms that may be present in the lagrangian above that couple the GUT string fields to the Standard Model fields. There may be a cross term in the potential $|\phi|^2 |h|^2$ between the GUT scalar φ and the Higgs doublet h, and the Standard Model covariant derivative may contain an additional term proportional to $X^s_\mu h$. These terms were stated in ref. [8] to be the most general terms coupling the GUT string fields to the Standard Model fields. However, there are a number of other terms that may be present in the lagrangian above. For example, there could be terms of the form $\mathrm{tr}((\partial_\mu \tilde H) X^{s\mu} h)$ and $\mathrm{tr}(\tilde H X_\mu X^{s\mu} h)$, where $\tilde H$ is a component of H orthogonal to h, $X^s_\mu$ is the string gauge field and $X_\mu$ is some other vector boson. The GUT string may be unstable to solutions with non-zero values of the fields $\tilde H$, h and $X_\mu$, which could possibly give a charged condensate on the string. Whether this possibility is realised or not would depend upon the details of the GUT model. So we eliminate this latter sort of term from consideration by assuming that the string generator commutes with all the electroweak generators, i.e. $[T_s, \tau^a] = 0$ and $[T_s, Y] = 0$, where $\tau^a$ are the weak isospin generators and Y is the hypercharge generator. Note that this implies that the GUT string cannot be superconducting in the sense of Everett, because $[T_s, Q] = 0$, where Q is the charge generator. We will further assume that the GUT string is not superconducting in the sense of Witten either, because the effect of the electroweak phase transition on a superconducting string has already been considered in refs. [9,10]. To illustrate the effect of the potential term and the addition of $X^s_\mu h$ to the covariant derivative, we will follow refs. [7,10] and extend the electroweak model to include an extra U(1). So we consider the $SU(2)_L \times U(1)_Y \times U(1)_F$ model, whose potential contains the φ-h cross coupling and whose covariant derivatives carry the extra $U(1)_F$ charges (see the sketch below), where $\sigma^a$ are the Pauli spin matrices, $W^a_\mu$ are the $SU(2)_L$ gauge fields, $B_\mu$ are the $U(1)_Y$ gauge fields and $X_\mu$ are the $U(1)_F$ gauge fields. h is the electroweak Higgs doublet and has a coupling to the $U(1)_F$ gauge field, while φ is an electroweak singlet.
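The display equations defining the lagrangian, potential and covariant derivatives did not survive in this copy of the text. The following LaTeX block is a minimal sketch consistent with the surrounding description; the overall normalizations, the factors of $q/2$ multiplying the $U(1)_F$ charges $F_\phi$ and $F_h$ (chosen to match the abstract's statements that the Higgs carries charge $F_h/2$ under $T_s$ and that the string flux is $4\pi N/qF_\phi$), and the exact quartic form of $V$ are assumptions rather than the authors' verbatim equations:

$\mathcal{L} = |D_\mu \phi|^2 + (D_\mu h)^\dagger (D^\mu h) - \tfrac{1}{4} W^a_{\mu\nu} W^{a\,\mu\nu} - \tfrac{1}{4} B_{\mu\nu} B^{\mu\nu} - \tfrac{1}{4} X_{\mu\nu} X^{\mu\nu} - V(\phi, h)$

$D_\mu h = \left( \partial_\mu - i\tfrac{g}{2}\sigma^a W^a_\mu - i\tfrac{g'}{2} B_\mu - i\tfrac{q}{2} F_h X_\mu \right) h, \qquad D_\mu \phi = \left( \partial_\mu - i\tfrac{q}{2} F_\phi X_\mu \right) \phi$

$V(\phi,h) = \lambda_\phi \left( |\phi|^2 - \tfrac{v_\phi^2}{2} \right)^2 + \lambda_h \left( h^\dagger h - \tfrac{v_h^2}{2} \right)^2 + f \left( |\phi|^2 - \tfrac{v_\phi^2}{2} \right) \left( h^\dagger h - \tfrac{v_h^2}{2} \right)$

With this (assumed) shifted form of the cross coupling, setting $|\phi| = 0$ in the string core moves the minimum of the $h$ potential to $|h|^2 = v_h^2/2 + f v_\phi^2/(4\lambda_h)$, which reproduces the behaviour described below: the Higgs expectation value is raised in the core for $f > 0$, lowered for $f < 0$, and formally driven below zero (hence set to zero) for $f$ sufficiently negative.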
If we assume that this symmetry group is unified into a simple Lie group G, then the $U(1)_F$ charges $F_\phi$ and $F_h$ will in general be rational, but we cannot characterize them further without specifying G. The $U(1)_F$ symmetry is broken first and gives rise to topologically stable string solutions of Nielsen-Olesen form, with profile functions S(r) and A(r) [11] and winding number N (a sketch of this ansatz is given at the end of this passage). The string is taken to be along the z axis. The scalar field φ shall be taken as a GUT scale field and so $\phi_0 \simeq 10^{16}$ GeV, whereas the Higgs field acquires a vev of the order $10^2$ GeV. Since the characteristic scale over which a field of mass m varies is of the order 1/m, we see that the characteristic scale of h is fourteen orders of magnitude bigger than that of φ. So the internal structure of the GUT string is irrelevant and we need only consider the asymptotic forms of the Nielsen-Olesen string, which are S(r) = 1 and A(r) = 1 for $r \to \infty$. We first consider the case $F_h = 0$, when the potential term is the only coupling between the Higgs doublet and the GUT string. The minimum of the potential determines the vacuum values of the fields. If we consider a region where $|\phi| = 0$, then the potential energy is minimized by a shifted value of $|h|^2$. We can see that for f > 0 the expectation value of the Higgs is likely to be raised in the string core, while for f < 0 it is lowered. For f sufficiently negative, the value of $|h|^2$ that formally minimizes the potential becomes less than zero, so we must take |h| = 0. Consequently the electroweak symmetry can be restored about a GUT string; this is the result given in [10]. Note that electroweak symmetry restoration for $F_h = 0$ only occurs for a range of parameters, and that the above considerations do not give this range because we have ignored the self-energy potential terms and the kinetic terms. Conversely, for f > 0 the electroweak symmetry is always broken in a region of size $m_h^{-1}$ around the GUT string [12]. We now look for a solution of the form $h^\dagger = (0, h_d^*)$ in the background of the GUT string. Since the GUT string is so massive, the back-reaction of an electroweak field configuration on the GUT string will be negligible. In the equation of motion for $h_d$, the GUT string enters through the Nielsen-Olesen profile S(r). The width of the GUT string is approximately $1/\sqrt{\lambda_\phi v_\phi^2}$, and so for $v_\phi \gg v_h$ the potential cross term is well approximated by a delta function δ(r). For f large and negative the delta function gives the boundary condition at the origin (taken to be the location of the GUT string), $h_d'(0) = 0$. The profile obtained by solving the equation of motion for $h_d$ with this boundary condition is shown in Figure 1. Note that it does not appear to satisfy $h_d'(0) = 0$. This is because on the GUT scale the Higgs gradient is of order $h_d/r_\phi$, and we are considering the limit where $r_\phi = 1/m_\phi \to 0$. Essentially, the electroweak symmetry is restored completely on the scale for which $|\phi|^2 = 0$, i.e. on the GUT scale; then $h_d$ returns to its vacuum value over its characteristic length scale $1/m_h$. Now when $F_h \neq 0$ the potential term is irrelevant, except possibly for very large positive values of the parameter f. This is because the energy density has a contribution of the form $|X_\theta h|^2$ from the covariant derivative, which for the GUT gauge field $X_\theta = 2N/(qF_\phi r)$ and the vacuum $h_d = v_h/\sqrt{2}$ will give a logarithmically divergent contribution to the energy per unit length [10,13].
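The display equation for the GUT string ansatz is missing from this extraction. A standard Nielsen-Olesen form consistent with the asymptotics quoted in the text ($X_\theta = 2N/(qF_\phi r)$, flux $4\pi N/qF_\phi$) would be the following sketch, in which the exact normalization of the gauge profile is an assumption:

$\phi(r,\theta) = \frac{v_\phi}{\sqrt{2}}\, S(r)\, e^{iN\theta}, \qquad X_\theta(r) = \frac{2N}{q F_\phi r}\, A(r), \qquad S(r),\, A(r) \to 1 \ \text{as} \ r \to \infty$

Substituting the vacuum value $h_d = v_h/\sqrt{2}$ into the covariant derivative then gives a large-r energy density contribution $|X_\theta h|^2 \sim (N F_h/F_\phi)^2\, v_h^2/(2r^2)$, whose integral $\int d^2r\, r^{-2}$ diverges logarithmically; this is the divergence referred to in the text.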
To cancel this logarithmic part of the θ covariant derivative requires either a θ dependence for $h_d$, a nonzero $Z_\theta$, or both. So consider a field configuration in which $h_d$ carries a winding set by a parameter α and the Z field carries a profile a(r) (a sketch of this configuration and of the energy functional (1) is given at the end of this passage), where we take $a(r) \to 1$ as $r \to \infty$, and α is such that $\alpha F_h/F_\phi$ is an integer, so that $h_d$ is a single-valued function of θ. Substituting these fields into the covariant derivative above, one finds that cancelling the logarithmic divergence requires α + γ = N, where γ parametrizes the Z-field winding; using this, the covariant derivative can be rewritten in terms of the combination $\gamma F_h/F_\phi$ alone. The energy of the above configuration, after rescaling the fields and the radial coordinate, is then the same as for the Nielsen-Olesen string but with the winding number replaced by $-\gamma F_h/F_\phi$, which is in general non-integer; we refer to this energy expression as (1) below. We are using the standard field basis of $W^+_\mu$, $W^-_\mu$, $Z_\mu$ and $A_\mu$ for the electroweak fields. The profiles $h_d(r)$ and a(r) will therefore be string-like, as can be seen in Figure 2, and the energy per unit length in the electroweak fields is $\sim \pi v_h^2$. Electroweak strings are non-topological and it is possible for them to unwind via 'W-condensation' [14] to the electroweak vacuum. In the case we are considering it is not possible for the string-like solution to decay to the electroweak vacuum, because of the logarithmic term in the energy that would result. There are, however, a range of possible values of α and γ that satisfy the condition α + γ = N. To see which values give stable string-like solutions, we consider $h_u$ and $W^+_\mu$ perturbations about the solution and look for negative modes. Since the GUT string fields are so massive, we need not consider perturbations in the GUT string fields. This means that the perturbations about the string-like solution give rise to the same perturbation equations as in the electroweak string case [15], but with the winding number replaced by $-\gamma F_h/F_\phi$. We expand the perturbations in modes of angular momentum, with effective index $m' = m + (\alpha F_h/F_\phi)$. The symbols ↑ and ↓ refer to the component of spin along the z axis being +1 or -1 respectively. The resulting perturbation equations in the background gauge involve two operators, $D_1$ and $D_2$, with the fields rescaled as before. The parameter ε is $\gamma F_h/F_\phi$. There are two terms in the perturbation equations which can give negative contributions; they are the potential term in $D_1$ and the last term of $D_2$. This latter term corresponds to a $-\vec m \cdot \vec B$ interaction energy between the Z-magnetic moment ($\vec m$) of the W-boson and the Z-magnetic field ($\vec B$) of the string-like solution. We know that for integer values of ε (which will occur for $F_h/F_\phi$ an integer) the string solution has negative eigenvalues for modes corresponding to the W-bosons acquiring nonzero values in the core of the string [15,16]. But if the equations of motion are solved with this 'W-condensate', it has been shown that the solution is gauge equivalent to a string of lower winding number [14].
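The display equations for the string-like configuration and its energy were lost in extraction. The following is a sketch of a standard Z-string-type ansatz and Nielsen-Olesen energy functional of the kind the surrounding text appears to describe; the signs, the normalization of $Z_\theta$, and the identification of the rescaled radius ρ and of $\beta$ as $(m_h/m_Z)^2$ are our assumptions, consistent with but not verbatim from the paper:

$h_d(r,\theta) = \frac{v_h}{\sqrt 2}\, h(r)\, e^{-i\alpha (F_h/F_\phi)\theta}, \qquad Z_\theta(r) = -\frac{2\cos\theta_w}{g}\,\frac{\gamma F_h}{F_\phi}\,\frac{a(r)}{r}$

$E = \pi v_h^2 \int_0^\infty \rho\, d\rho \left[ h'^2 + \frac{\epsilon^2 (1-a)^2}{\rho^2}\, h^2 + \frac{\epsilon^2 a'^2}{\rho^2} + \frac{\beta}{2}\,(h^2 - 1)^2 \right], \qquad \epsilon \equiv \frac{\gamma F_h}{F_\phi}$

This reduces to the Nielsen-Olesen energy with winding $-\epsilon$, and the total Z-flux is $\oint Z_\theta\, r\, d\theta = \pm\, \epsilon\, 4\pi\cos\theta_w/g$, matching the abstract's $\Phi_Z$ up to sign convention.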
In the case of the string-like solution, the equations of motion for a 'W-condensate' are the same as in ref. [14] but with the winding number replaced by −ε. Since the generator of the GUT string acts on the electroweak doublet as a constant times the identity, and the GUT scalar field is an electroweak singlet, the presence of the GUT string does not prevent a similar gauge transformation from being made. This will still be true for non-integer values of ε. So if we find negative modes to the equations above, we must distinguish between those which are 'W-condensation' and those which result in a physical W-boson condensate trapped in the string core. The former are unwindings of the string, while the latter would give a charged condensate which would break the U(1) of electromagnetism and so give rise to superconductivity. If we consider the energy expression (1), we see that the energy is lower for smaller values of $|\gamma F_h/F_\phi|$. Since $N F_h/F_\phi$ is fixed and $\alpha F_h/F_\phi$ can only change by an integer, we conclude that $\gamma F_h/F_\phi$ (the Z-flux of the string-like solution in units of $4\pi/g_z$) can only change by an integer. We would expect this lowering of the flux by integer amounts to occur by 'W-condensation' for all ε, as it does for integer ε. Thus we expect ε to be lowered by integer amounts until it lies in the range $-\frac{1}{2} < \epsilon < \frac{1}{2}$. If the string-like solutions with ε in this range were to have any negative modes, they could not be interpreted as unwindings, since unwinding would raise the energy, and so they would have to be interpreted as the occurrence of a physical charged condensate. To investigate the above arguments numerically for a GUT string of winding N = 1, we consider $F_h/F_\phi$ values of (a) 0, (b) 1, (c) 0.4 and (d) 0.5. Case (c) is actually realised in an SO(10) model considered by Alford and Wilczek [17]. For case (a) the δ-function potential cross term is the only coupling between the GUT string and the electroweak fields, and its only effect is to give $h_d'(0) = 0$. As seen earlier, this condition is satisfied on the GUT scale but is negligible on the electroweak scale, and so for string solutions the effect of the potential term on the profiles is negligible. So the string solutions and their stability are the same as for electroweak string solutions; they are unstable for physical values of the parameters [15,16]. The 'vacuum' solution in this case is that shown in Figure 1. For case (b) ε will be an integer, and so the string solutions and their stability will again be the same as for the electroweak string. The 'vacuum' in this case will have ε = 0, but the Higgs field will still have a winding in order to cancel the logarithmic contribution to the energy from the GUT gauge field. The 'vacuum' solution is again given by the profile in Figure 1. For case (c), first consider (α, γ) = (5/2, −3/2), which gives ε = −0.6 and is outside our proposed stability range. The profiles for the string-like configuration were solved for by a relaxation method on the energy (1) and substituted into the perturbation equations. These were then solved by direct matrix methods for $\sin^2\theta_w = 0.23$. A negative mode was found for angular momentum m = −1. As with the electroweak string, this mode is interpreted as an instability to the winding (ε) increasing by one unit.
The stability line, i.e. the line in $(\beta, \theta_w)$ parameter space for which $\omega^2 = 0$, is an approximately vertical line at about $\theta_w = \pi/4$. For ε < −0.6 this line moves up to higher $\theta_w$, while for ε > −0.6 it moves down to lower $\theta_w$, and so for some ε we would expect no negative modes to occur at $\sin^2\theta_w = 0.23$. Now consider case (c) with (α, γ) = (0, 1), which gives ε = 0.4, i.e. the solution to which the above configuration decayed. This had no negative modes and so is a stable solution. For case (d) the parameter values (α, γ) = (0, 1) and (α, γ) = (2, −1) have ε values of +0.5 and −0.5 respectively, and so the two solutions are degenerate in energy. For $\sin^2\theta_w = 0.23$ both of these solutions were found to be stable. For $\sin^2\theta_w = 0$ the ε = −0.5 solution was found to have an m = −1 zero mode, while the ε = 0.5 solution had an m = 1 zero mode. Integer-ε strings also have zero modes at $\sin^2\theta_w = 0$, and these also occur at angular momentum m = 2ε [15]. These modes are to be interpreted as transitions between the −ε and +ε solutions via a W-string. For $\sin^2\theta_w > 0$ the energy of the W-string is above that of the corresponding Z-string and so there is a barrier to such transitions, while for $\sin^2\theta_w = 0$ the W-string and Z-string solutions are degenerate in energy. So at $\sin^2\theta_w = 0$ all strings with |ε| ≤ 0.5 are stable. At $\sin^2\theta_w = 0.23$, in addition to the stable strings above, there are metastable string solutions for |ε| in the approximate range 0.51–0.53 for β in the range 0.25–4.0. Now in [7] it was claimed that, because the electroweak symmetry was restored, the W-bosons would be massless, since the W-boson gets its mass from a term proportional to $|h_d|^2$. The confinement energy was, however, ignored, and since the potential well is approximately $1/m_h$ wide and the characteristic scale of the W-boson, $1/m_W$, is comparable, the confinement energy will be sizeable. So we looked at the W-boson bound modes at $\sin^2\theta_w = 0.23$ for $\sqrt\beta$ = 0.5, 1.0 and 2.0 for the various cases above. The eigenvalues obtained for the bound W-bosons in case (a) with angular momentum m = 0 are ω = 0.88, 0.91 and 0.92 $m_W$ respectively ($m_W$ is the mass of the W-boson in the vacuum), which in view of the above comments is to be expected. The remaining cases also possessed bound W-bosons for angular momentum m = 0 with similar-sized eigenvalues. For angular momentum m = −1, the most likely case to have bound W-bosons with significantly lower eigenvalues than the above is |ε| = 0.5, since there are zero modes at $\sin^2\theta_w = 0$. We found that for ε = 0.5 the lowest bound-mode eigenvalues are ω = 0.25, 0.27 and 0.28 $m_W$ respectively. These are the lowest bound-mode eigenvalues that were obtained for any of the stable strings for the $\theta_w$ and β values given above. We therefore find no massless W-boson states on the strings [19].
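As a quick numerical restatement of the flux rule above, the following Python sketch (variable and function names are ours, not the paper's) enumerates the allowed flux factors $n - F_h N/F_\phi$, in units of $4\pi\cos\theta_w/g$, for the four cases and picks the one of smallest magnitude, which the analysis above identifies as the stable configuration; for case (d) the two values ±1/2 are degenerate.

def z_flux_factors(fh_over_fphi, N=1, n_values=range(-3, 4)):
    """Allowed Z-flux factors n - Fh*N/Fphi (flux in units of 4*pi*cos(theta_w)/g)."""
    return [(n, n - fh_over_fphi * N) for n in n_values]

def most_stable(fh_over_fphi, N=1):
    """Configuration with the smallest |flux factor|; higher-flux configurations
    decay towards it by 'W-condensation'. For Fh*N/Fphi half-integer (case d)
    the values +1/2 and -1/2 are degenerate and this returns the first of the pair."""
    return min(z_flux_factors(fh_over_fphi, N), key=lambda nf: abs(nf[1]))

for label, ratio in [("a", 0.0), ("b", 1.0), ("c", 0.4), ("d", 0.5)]:
    n, factor = most_stable(ratio)
    print(f"case ({label}): Fh/Fphi = {ratio}: n = {n}, flux factor = {factor:+.1f}")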
Finally, we consider whether there are any fermion zero modes present on the stable string-like configurations. We know that electroweak strings possess fermion zero modes [20], and so we might expect there to be fermion zero modes on the string-like solutions as well. To investigate the possible existence of fermion zero modes, consider the $SU(2)_L \times U(1)_Y \times U(1)_F$ invariant lagrangian for the first family of leptons, with the usual kinetic terms and a Yukawa coupling with constant $Y_e$. For the Yukawa coupling term to be $U(1)_F$ invariant we must have $F_{e_L} - F_{e_R} = F_h$. Writing $e_R^T = (c_1, c_2)$ and $e_L^T = (d_1, d_2)$, and expanding $c_1$, $c_2$, $d_1$ and $d_2$ in modes of angular momentum m, we obtain the Dirac equations in the presence of the GUT string with an electroweak string-like configuration about it. We are looking for zero modes, so we consider |ω| = |k| = 0. Then the equations separate into two pairs of coupled equations, (2) and (3). For there to be a zero mode solution, both fields in the pair must be non-singular at the origin [21]. For (2), regularity of both fields yields a condition on $\gamma F_h/F_\phi$ (where we used α + γ = N), and similarly for (3); together these imply that fermion zero modes require $|\gamma F_h/F_\phi| \geq 1$. But we showed earlier that the string-like solutions are only stable for $|\gamma F_h/F_\phi| \leq 1/2$, and so the stable solutions do not have fermion zero modes. So, in conclusion, we find that if the only coupling between the GUT string fields $(\phi, X_\theta)$ and the electroweak fields is a $qF_h X_\mu h/2$ term in the covariant derivative and a $|\phi|^2|h|^2$ term in the potential, then there are electroweak string-like solutions about the GUT string. The Z-flux of such strings is $\Phi_Z = (n - F_h N/F_\phi)\,4\pi\cos\theta_w/g$, where n is an integer and N is the integer winding of the GUT string. We found no evidence for the formation of stable charged condensates: the strings with $|n - F_h N/F_\phi| > 1/2$ did possess negative modes, but we surmise these instabilities are due to the string decaying to one with lower Z-flux by the 'W-condensation' mechanism of ref. [14]. Those strings with $|n - F_h N/F_\phi| \leq 1/2$ possess no negative modes, and so GUT strings can have stable electroweak strings around them, similar to those found around global strings in a two-Higgs-doublet model in ref. [22]. The mechanism for superconductivity given in ref. [7] required the occurrence of W-boson zero modes on the string. We have shown that these do not occur for this class of string solutions, and so supercurrents do not arise as claimed in refs. [7,8]. We have further shown that the stable string solutions do not possess fermion zero modes, and so conclude that a non-superconducting GUT string does not become superconducting after the electroweak phase transition. The effects of there being stable electroweak string solutions about GUT strings should be negligible. First of all, particle production by the string due to the coupling between the GUT string and light particles has been considered in ref. [23], where it was shown that gravitational radiation was a more significant energy loss mechanism. We would not expect any significant change in the dynamics of the GUT strings due to the forces between the electroweak strings, because these are negligible in comparison to the GUT string mass.
In ref. [24] a baryon production mechanism was outlined which involved the de-linking of linked electroweak strings. Electroweak strings are, however, highly unstable, and so it is unclear whether or not they form. Here we have stable electroweak strings forming about GUT strings. However, the GUT string network will have reached a scaling solution by the electroweak phase transition, and so the number density of linked strings would be extremely low. The net baryon number produced by this mechanism would be negligible.
Figure captions
Figure 1: $h_d$ profile showing symmetry restoration about a GUT string for $F_h = 0$.
Figure 2: $h_d(r)$ and $a(r)$ profiles for ε = 0.4 (solid line) and those for a Nielsen-Olesen string (dashed line, ε = 1) for comparison.
2014-10-01T00:00:00.000Z
1995-10-27T00:00:00.000
{ "year": 1995, "sha1": "10c8958fbb113673a16a8865847c934077250bd3", "oa_license": null, "oa_url": "https://arxiv.org/pdf/hep-ph/9510434", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "10c8958fbb113673a16a8865847c934077250bd3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
6730142
pes2o/s2orc
v3-fos-license
Structural development and dorsoventral maturation of the medial entorhinal cortex
We investigated the structural development of the superficial layers of the medial entorhinal cortex and parasubiculum in rats. The grid layout and cholinergic innervation of calbindin-positive pyramidal cells in layer 2 emerged around birth, while reelin-positive stellate cells were scattered throughout development. Layer 3 and parasubiculum neurons showed a transient calbindin expression, which declined with age. Early postnatally, layer 2 pyramidal but not stellate cells co-localized with doublecortin – a marker of immature neurons – suggesting delayed functional maturation of pyramidal cells. Three observations indicated a dorsal-to-ventral maturation of entorhinal cortex and parasubiculum: (i) calbindin expression in layer 3 neurons decreased progressively from dorsal to ventral, (ii) doublecortin in layer 2 calbindin-positive patches disappeared dorsally before ventrally, and (iii) wolframin expression emerged earlier in the dorsal than the ventral parasubiculum. The early appearance of the calbindin pyramidal grid organization in layer 2 suggests that this pattern is instructed by genetic information rather than experience. Superficial-layer microcircuits mature earlier in the dorsal entorhinal cortex, where small spatial scales are represented. Maturation of ventral entorhinal microcircuits – representing larger spatial scales – follows later, around the onset of exploratory behavior. DOI: http://dx.doi.org/10.7554/eLife.13343.001
Introduction
The representation of space in the rodent brain has been investigated in detail. The functional development of spatial response properties has also been investigated in the cortico-hippocampal system (Ainge and Langston, 2012; Wills et al., 2014), with studies suggesting the early emergence of head-directional selectivity (Tan et al., 2015; Bjerknes et al., 2015), border representation (Bjerknes et al., 2014) and place cell firing, but a delayed maturation of grid cell discharges (Wills et al., 2010; Langston et al., 2010). Even though there is information on the emergence of functional spatial properties in the hippocampal formation, remarkably little is known about the structural development of the microcircuits which bring about these properties. To understand this, we investigated the development of the architecture of the medial entorhinal cortex (MEC) and parasubiculum (PaS), two key structures in the cortico-hippocampal system. In adult animals, layer 2 of MEC contains two types of principal cells, stellate and pyramidal cells (Alonso and Klink, 1993; Germroth et al., 1989). Stellate and pyramidal neurons are distinct in their intrinsic conductance (Alonso and Llinás, 1989; Klink and Alonso, 1997), immunoreactivity (Varga et al., 2010), projections (Lingenhöhl and Finch, 1991; Canto and Witter, 2012) and inhibitory inputs (Varga et al., 2010). Pyramidal neurons in layer 2 of MEC can be identified by calbindin immunoreactivity (Varga et al., 2010) and are clustered in patches across various mammalian species (Fujimaru and Kosaka, 1996; Ray et al., 2014; Naumann et al., 2016), while stellate cells can be identified by reelin immunoreactivity (Varga et al., 2010) and a lack of structural periodicity. In rodents, the grid-like arrangement of pyramidal cell patches is aligned to cholinergic inputs (Naumann et al., 2016). Functionally, about a third of all cells in layer 2 exhibit spatial tuning, with grid, border, irregular and head-directional discharges being present.
Neurons in layer 3 of MEC are characterized by rather homogeneous in vitro intrinsic and in vivo spatiotemporal properties (Tang et al., 2015). A majority of cells exhibit a lack of spatial modulation, and the remaining cells are mainly dominated by irregular spatial responses (Tang et al., 2015), with a fraction also exhibiting grid, border and head-directional responses (Boccara et al., 2010). The parasubiculum is a long and narrow structure flanking the dorsal and medial extremities of MEC (Video 1). The superficial parasubiculum, corresponding to layer 1 of MEC, is divided into large clusters, while the deeper part, corresponding to layers 2 and 3 of MEC, is rather homogeneous (Tang et al., 2016). In terms of functional tuning, a majority of the cells of PaS show spatially tuned responses, including grid, border, head-directional and irregular spatial cells (Boccara et al., 2010; Tang et al., 2016). Here we investigate the emergence of the periodic pyramidal-cell patch pattern in layer 2 of MEC, as well as the development of cellular markers that characterize the architecture of adult MEC and PaS. The results indicate an early emergence of pyramidal cell organization, a delayed maturation of pyramidal but not stellate cells, and a dorsal-to-ventral maturation of MEC circuits.
Results
We first investigated the development of brain size and of the thickness of the layers of the MEC (Figure 1) by observing rats at E18, P0, P4, P8, P12, P16, P20, P24 and adulthood (>P42). The majority of brain development takes place within the first few weeks postnatally (Figure 1a), with brain size increasing 1000%, from 0.12 ± 0.00 g at E18 (mean ± SD; n=3) to 1.23 ± 0.07 g at P12 (n=5). Subsequently, growth slows to a further ~25%, with the brain weighing 1.71 ± 0.08 g at P24 (n=6) and 2.11 ± 0.14 g in adults (n=9) (Figure 1b).
eLife digest
Many animals, from rats to humans, need to navigate their environments to find food or shelter. This ability relies on a kind of memory known as spatial memory, which provides a map of the outside world within the animal's brain. Specifically, cells in a part of the brain called the medial entorhinal cortex act like the grids present on a map, and are known as grid cells. Other cells in this region represent boundaries in the environment and are known as border cells. These cells and other cells connect to each other to make the spatial memory circuit. Previous research had reported that the grid cells were not present in the very early stages of an animal's life. It was also not clear how the different cell types involved in spatial memory develop after birth. Ray and Brecht have now studied rats and found that certain characteristic structures in the circuit are present at birth. For example, cells that were most likely to become grid cells were already laid out in a grid, indicating that this layout is instructed by genetic information rather than experience. Ray and Brecht also found that the cells that most likely become grid cells matured later than the cells that most likely become border cells. Further analysis then revealed that the circuits in the top part of the medial entorhinal cortex, which represents nearby areas, matured earlier than those in the bottom part of this region, which represent farther areas.
These findings could therefore explain why rats explore nearby areas earlier in life before going on to explore further away areas at later stages. More work is needed to characterize other components of the neural circuits involved in spatial memory to provide a complete understanding of how these memories are formed. Future experiments could also ask if encouraging young rats to explore a wider area can cause the circuits to mature more quickly.
The superficial layers (layers 1-3) of the MEC (Figure 1c) double in thickness during this early postnatal period, from 243 ± 35 µm at P0 (mean ± SD; n=21, 4 rats) to 652 ± 50 µm at P12 (n=24, 4 rats). A similar increase is also observed in the deeper layers (layers 4-6), from 167 ± 21 µm at P0 (n=21, 4 rats) to 329 ± 54 µm at P12 (n=24, 4 rats). The overall thickness plateaus around this point, at 981 ± 81 µm at P12 (n=24, 4 rats), and remains 882 ± 78 µm in adults (n=24, 4 rats) (Figure 1d). Proportionally, the thickness of the layers remains similar during development, with layer 2 accounting for ~20% and layers 3 and 5/6 each accounting for ~30% of the MEC. Layers 1 and 4 are the thinnest, at about 10% and 5% of the total thickness respectively (Figure 1d). We next investigated the microcircuit organization of the superficial layers of MEC. Calbindin, a calcium-binding protein, is selectively expressed in layer 2 pyramidal cells (Varga et al., 2010; Fujimaru and Kosaka, 1996), which form a grid-like arrangement in adult animals. Concurrently, reelin, an extracellular matrix protein, is selectively expressed in stellate cells in layer 2 of MEC, which are scattered throughout layer 2. To visualize the development of entorhinal microcircuits we first prepared tangential sections (see our video animation on preparing tangential sections, Video 1) through layer 2 of the medial entorhinal cortex and stained for calbindin immunoreactivity. From the earliest postnatal stages, calbindin+ neurons in the MEC exhibited clustering, forming patches at P0 (Figure 2a). The calbindin+ patches at P0 already exhibited a grid-like arrangement (Figure 2a, Figure 3a). However, the calbindin+ patches in the MEC did not exhibit clustering of their dendrites, as previously described in adults, at E18 and P0 (Figure 3a,b). Some dendritic clustering could be observed at P4 (Figure 3c), while from P8 (Figure 3d-h) the dendritic clustering of calbindin+ pyramidal neurons was similar to that in adults. In layer 3 of the MEC, we observed a transient presence of calbindin expression. The number of calbindin+ neurons in layer 3 declined progressively from prenatal stages to P20 (Figure 3a-g), where it attained adult-like levels, with rarely any calbindin+ neurons in layer 3 (Figure 3h). Quantitatively, calbindin+ neuronal density (calbindin+ neurons per mm²) decreased from 955 ± 315 (mean ± SD; count refers to n=3776 neurons in 8 rats) in P4-P8 rats to 333 ± 99 (n=2104 neurons, 8 rats) in P12-P16 rats to 141 ± 56 (n=828 neurons, 7 rats) in adults (Figure 3i). A closer analysis of the co-localization of the immature neuronal marker doublecortin with calbindin+ pyramidal cells and reelin+ stellate cells (Figure 7a-c) revealed doublecortin to be mostly co-localized with calbindin+ rather than reelin+ neurons (Figure 7d). Spatial cross-correlations between doublecortin and either calbindin or reelin (Figure 7e; n=8 rats from ages P8-P20), from triple-immunostained calbindin, reelin and doublecortin regions of layer 2 of the MEC, revealed a greater overlap of doublecortin with calbindin (0.54 ± 0.10) than with reelin (0.08 ± 0.13). This difference in the Pearson's cross-correlation coefficient was significant at p=0.0009 (Mann-Whitney, two-tailed).
Third, wolframin expression, a marker which co-localizes with calbindin+ pyramidal neurons in layer 2 of MEC in adult rodents (Kitamura et al., 2014), develops from dorsal to ventral in layer 2 of the medial entorhinal cortex and parasubiculum (Figure 8). Specifically, wolframin expression starts to appear in the dorsal MEC and the dorsal PaS shortly after birth (Figure 8a) and is present only in the dorsal ~10% of the PaS. It extends progressively more ventrally (Figure 8b), covering ~40% of the PaS at P8 and ~75% at P12. At P20 it is expressed throughout the full extent of the medial entorhinal cortex and the parasubiculum (Figure 8c).
Discussion
Neurogenesis in the medial entorhinal cortex is completed prior to E18 (Bayer, 1980a; 1980b), and at this time the basic laminar organization of medial entorhinal cortex is already evident. While the basic structure of medial entorhinal cortex appears early, we observe massive developmental changes during the early postnatal period. The clustering of layer 2 MEC calbindin+ neurons into patches is also an early developmental event, and key aspects of the grid layout of calbindin+ neurons are already present at birth. This observation indicates that the periodic structure of patches is a result of genetic signaling rather than spatial experience. Periodic patterns are ubiquitous in nature, and several chemical patterning systems have been explained on the basis of interactions between dynamical systems (Turing, 1952). Since it has been suggested that the grid layout of calbindin+ neurons is functionally relevant for grid cell activity, it would be interesting to investigate whether genetic manipulations would result in changes of layout periodicity and have functional effects. The dendritic clustering of calbindin+ pyramidal neurons is similar to dendritic development in the neocortex (Petit et al., 1988) and is established by the end of the first postnatal week. The cholinergic innervation of the calbindin+ patches was present by P4, in line with other long-range connectivity patterns in the MEC (O'Reilly et al., 2015), which are also established early in development. Reelin is an important protein in cortical layer development (D'Arcangelo et al., 1995), and in the early stages of postnatal development we see the strongest reelin expression in layer 1, where reelin-secreting Cajal-Retzius cells are involved in radial neuronal migration (Pesold et al., 1998). Stellate cells in layer 2 of MEC, which can be visualized by reelin immunoreactivity (Varga et al., 2010), were scattered throughout postnatal development. Layer 3 of the MEC features a complementary transition of calbindin+ and reelin+ neurons during the first couple of postnatal weeks. While the density of reelin+ neurons increases, there is a concurrent decline in calbindin+ neuronal density in layer 3 of MEC, though part of the calbindin+ neuronal density decline can be attributed to the increasing brain size. Taken together with the presence of radial-neuronal-migration-promoting Cajal-Retzius cells in layer 1 during this period, it would be interesting to investigate the origin of this complementary transition. An interesting observation is the presence of clusters of neurons in the parasubiculum which transiently express calbindin in early postnatal stages, and subsequently express wolframin. Transient expression of calbindin has been observed in early postnatal development in the neocortex (Hogan and Berman, 1993) and midbrain regions (Liu and Graybiel, 1992), but its functional significance remains largely unknown.
Our data show, however, that at early developmental stages the parasubiculum and medial entorhinal cortex share a similar organization in calbindin+ patches. Additionally, the expression of wolframin in the parasubiculum persists in adults, while calbindin+ neurons in MEC layer 2 also exhibit wolframin (Kitamura et al., 2014) from the end of the first postnatal week. Current studies generally focus on cell-type-specific investigations using proteins expressed by these cells. However, investigations of the specific roles of these proteins (Li et al., 1995) might provide interesting insights towards understanding the finer differences in the functionalities exhibited by these cells. For instance, calbindin is a calcium buffer and reduces the concentration of intracellular calcium (Mattson et al., 1991), while wolframin is implicated in increasing intracellular calcium levels (Osman et al., 2003). With the medial entorhinal cortex and parasubiculum having many similarities in their spatial discharge properties (Boccara et al., 2010; Tang et al., 2016), a structure-function comparison of the wolframin+/transiently-calbindin+ neurons in the parasubiculum and the wolframin+/permanently-calbindin+ neurons in the medial entorhinal cortex would be worthwhile. A dorsal-to-ventral development profile was observed in the superficial layers of the MEC and parasubiculum. This conclusion was suggested by the progressive disappearance of calbindin expression in layer 3 from dorsal to ventral; the progressive disappearance of doublecortin expression in layer 2 and parasubiculum from dorsal to ventral; and the progressive appearance of wolframin expression in superficial layer 2 of MEC and parasubiculum from dorsal to ventral. Homing behavior in rats, as well as spontaneous exploratory behavior, develops around the end of the second postnatal week (Wills et al., 2014; Bulut and Altman, 1974), while spontaneous exploration of larger environments outside the nest emerges towards the end of the third postnatal week (Wills et al., 2014). This is coincident with the timeline of maturation of calbindin+ patches in the dorsal and ventral MEC respectively. Since the dorsal MEC represents smaller spatial scales and the ventral MEC progressively larger scales (Hafting et al., 2005; Stensola et al., 2012), these data may indicate that the rat's navigational system matures from small to large scales. Early eyelid-opening experiments have indicated an accelerated development of spatial exploratory behaviour (Kenny and Turkewitz, 1986; Foreman and Altaha, 1991), and similar experiments might provide insights into whether early behavioral development is accompanied by an accelerated development of the microcircuit underlying spatial navigation. The higher co-localization of doublecortin with calbindin+ pyramidal cells than with reelin+ stellate cells further supports the dichotomy of structure-function relationships exhibited by these two cell types (Tang et al., 2014). Grid and border cells have been implicated to be largely specific to pyramidal and stellate cells respectively, and the delayed structural maturation of pyramidal cells might reflect the delayed functional maturation of grid cells (Wills et al., 2010; Langston et al., 2010), with the converse being applicable to stellate and border cells (Bjerknes et al., 2014).
The divergent projection patterns of pyramidal and stellate cells, with the former projecting to CA1 (Kitamura et al., 2014) and contralateral MEC (Varga et al., 2010) and the latter to the dentate gyrus (Varga et al., 2010; Ray et al., 2014) and deep layers of MEC (Sürmeli et al., 2015), have differing theoretical interpretations in spatial information processing. The same sets of neurons, which correspond to grid and border cells, have also been implicated to be differentially involved in temporal association memory (Kitamura et al., 2014) and contextual memory (Kitamura et al., 2015) respectively. An underlying differential structural maturation timeline of the microcircuits governing these processes may also translate into a differential functional maturation profile of these memories. We conclude that the structural maturation of the medial entorhinal cortex can be coarsely divided into an early appearance of the calbindin+ neuron patches and a progressive cell-type-specific refinement of the cellular structure, which proceeds along the dorsal-to-ventral axis.
Materials and methods
All experimental procedures were performed according to the German guidelines on animal welfare under the supervision of local ethics committees (LaGeSo), under the permit T0106-14.
Brain tissue preparation
Male and female Wistar rats (n=83) from E18 to P24 and adults (>P42) were used in the study. The ages were accurate to ± 1 day. Animals were anaesthetized with isoflurane, and then euthanized by an intraperitoneal injection of 20% urethane. They were then perfused transcardially, first with 0.9% phosphate-buffered saline solution, followed by 4% formaldehyde, from paraformaldehyde, in 0.1 M phosphate buffer (PFA). For prenatal animals, pregnant rats at E18 were perfused in the aforesaid manner and the E18 animals were then extracted from the uterus. Subsequently, brains were removed from the skull and postfixed in PFA overnight. Brains were then transferred to 10% sucrose solution for one night and subsequently immersed in 30% sucrose solution for at least one night for cryoprotection. The brains were embedded in Jung Tissue Freezing Medium (Leica Microsystems Nussloch, Germany), and subsequently mounted on the freezing microtome (Leica 2035 Biocut) to obtain 60 µm thick sagittal sections or tangential sections parallel to the pia. Tangential sections of the medial entorhinal cortex were obtained by separating the entorhinal cortex from the remaining hemisphere by a cut parallel to the surface of the medial entorhinal cortex (Video 1). For subsequent sectioning, the surface of the entorhinal cortex was attached to the block face of the microtome. Immunohistochemical stainings were performed according to standard procedures. Briefly, brain sections were pre-incubated in a blocking solution containing 0.1 M PBS and 2% Bovine Serum Albumin. For quantifying overlap, Pearson's cross-correlation coefficient r was computed between the monochromatic images f1 and f2 without smoothing, as r = (n Σ f1 f2 − Σ f1 Σ f2) / sqrt((n Σ f1² − (Σ f1)²)(n Σ f2² − (Σ f2)²)), where n is the number of pixels in the image (an executable sketch of this computation is given at the end of this section). The Pearson's cross-correlation coefficient can vary from -1 (anti-correlated) through 0 (un-correlated) to 1 (correlated). For analysis of dorso-ventral variation in overlap between doublecortin and calbindin, two regions of the same size were selected from a section double-stained for calbindin and doublecortin. One region was selected from the dorsal half of the section and another from the ventral half, and the regions were represented as pairs.
Where, due to section damage, it was not possible to obtain regions from both the dorsal and ventral parts, the data were presented as unpaired. For the analysis of variation in the overlap between doublecortin and calbindin/reelin, comparisons were performed between the same regions of a section triple-stained for calbindin, reelin and doublecortin.
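To make the overlap metric concrete, here is a minimal Python sketch of Pearson's cross-correlation coefficient between two monochromatic images; the function name and the NumPy-based implementation are our own illustrative choices, not taken from the original analysis code.

```python
import numpy as np

def pearson_overlap(img1, img2):
    """Pearson's cross-correlation coefficient r between two monochromatic
    images of identical size (no smoothing applied); r ranges from -1
    (anti-correlated) through 0 (un-correlated) to 1 (correlated)."""
    f1 = np.asarray(img1, dtype=float).ravel()
    f2 = np.asarray(img2, dtype=float).ravel()
    f1 -= f1.mean()   # center each channel so the numerator is a covariance
    f2 -= f2.mean()
    return float(f1 @ f2 / np.sqrt((f1 @ f1) * (f2 @ f2)))
```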
MATHEMATICS CONNECTION ABILITY FOR JUNIOR HIGH SCHOOL STUDENTS BASED ON LEARNING INDEPENDENCE LEVEL

Mathematical connection ability is one of the basic mathematics skills that must be possessed by junior high school students. It turns out that, so far, students' learning outcomes in mathematics have not been encouraging. This study aims to describe the mathematical connection skills of junior high school students in solving a problem in terms of learning independence. This research is a qualitative descriptive study on eighth-grade students in a junior high school in Lamongan district, East Java. Data were collected using test questions, interviews and a learning-independence questionnaire.

Introduction

Mathematics is a science in schools that must be learned by all students, starting from elementary school through junior and senior high school to college. This is because mathematics is a tool that can develop thinking (Burton, 1984; Silver & Others, 1990). Mathematics and science cannot be separated from other sciences; they are always related to one another and to everyday life (Kiray, Gok, & Bozkir, 2015; Ng, Lay, Areepattamannil, Treagust, & Chandrasegaran, 2012). Therefore, every student needs to have mathematical connection skills to support their understanding of mathematics. In the 2004 curriculum, the ability to make mathematical connections is one of the basic mathematical skills that must be possessed and mastered by junior high school students. Regarding the importance of mathematical connections, the NCTM argues that "when students can connect mathematical ideas, their understanding is deeper and more lasting" (National Council of Teachers of Mathematics, 2009). In other words, when a student can connect mathematical ideas, their thinking will be deeper and the ideas will be stored in their memory for a long time. In mathematics and science, concepts are hierarchical and related to one another, meaning that when we learn a certain concept, the previous concepts need to be studied first as prerequisites (Septian & Rizkiandi, 2017; Suhandri, Nufus, & Nurdin, 2017). However, in reality, in terms of mathematical connections, the learning outcomes of Indonesian school students and Mexican pre-university students have not been satisfactory at all (Anita, 2014; Diana, Suryadi, & Dahlan, 2020; García-García & Dolores-Flores, 2021; Kenedi, Helsa, Ariani, Zainil, & Hendri, 2019). For this paper, the following definition applies: mathematical connection refers to the ability to recognise and make connections between mathematical ideas, between mathematics and other subjects, and between mathematics and everyday life. It takes the effort and attention of educators for student learning outcomes to be achieved. One of the student factors that can bring out their potential to succeed in achieving satisfactory learning outcomes is learning independence (Arista & Kuswanto, 2018; Mulyono, 2017; Suhendri, 2015). Other researchers define learning independence as a process within a person of playing an active role and not depending on others to achieve certain goals in learning (Fajriyah, Nugraha, Akbar, & Bernard, 2019; Rustyani, Komalasari, Bernard, & Akbar, 2019). Based on the discussion above, the researchers were very interested in conducting the study indicated in the title.

Method

This paper uses descriptive qualitative research.
The descriptive method is an analytical method that clearly describes the conditions of the thing being studied by collecting data, after which the data are classified, analyzed and interpreted. The qualitative method is a process linked to everyday reality. This study aims to describe and examine the connection abilities of junior high school students in solving problems in terms of learning independence. This research was conducted with subjects consisting of 4 students of class VIII in a junior high school in the city of Lamongan. The four subjects will be referred to as Subject 1 (S1), Subject 2 (S2), Subject 3 (S3) and Subject 4 (S4). The data analysis covered 3 essay questions on the subjects' mathematical connection ability, interviews and their transcriptions, and learning-independence questionnaires. The questions and questionnaires used are shown in Figure 1 and Figure 2.

There are 3 instruments used in this study: questions to measure mathematical connection ability; an interview rubric used by the researcher to elicit the desired information; and documentation in the form of recordings, student answer sheets and learning-independence questionnaires. The student learning-independence questionnaire rubric is presented in Table 1. The scoring technique for the student learning-independence questionnaire is as follows: number of items per aspect = 6; minimum score = 1; minimum value = 1 × 6 = 6; maximum score = 4; maximum value = 4 × 6 = 24; number of interval classes = 5.

To obtain a mathematical connection ability test score, scoring was carried out using an assessment adopted from Nursaniah, Nurhaqiqi, & Yuspriyati (2018), which can be seen in Table 3 below.

Table 3. Mathematical connection scoring rubric (excerpt).
- There is an answer that does not match the criteria: 1. No answer: 0.
- Mathematics in daily life: the answer is correct and recognizes and understands the use of mathematical concepts in everyday life: 4; the answer is correct according to the requested criteria but some parts are not quite right: 3; the answer is correct but does not match some of the requested criteria: 2; there is an answer that does not match the criteria: 1; no answer: 0.

The categorization of mathematical connection ability test scores by Agustiani (2020) was adopted in this study; it can be seen in Table 4 below.

Result and Discussion

The results obtained by the researchers after testing students with mathematical connection ability test questions and a learning-independence questionnaire are presented below.

The Subject Classification

The results of the classification of learning-independence subjects were obtained according to the calculations in Table 2 and the categorization of students' mathematical connection abilities in Table 4, and the subject classification results are shown in Table 5. Based on the data in Table 5, we can see that the first subject (S1) has a fairly high learning-independence level, while S2, S3 and S4 are subjects with low learning-independence levels. In Table 5, we can also see that there is one subject (S2) who has high mathematical connection ability with low learning independence, one subject (S1) who has moderate mathematical connection ability with fairly high learning independence, and two subjects (S3 and S4) who have low mathematical connection ability with low learning independence.

The Mathematical Connections of Subjects

The researcher used 3 essay questions to measure mathematical connection ability, and the assessment was carried out according to the rubric in Table 3.
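To make the scoring rule concrete, the following Python sketch maps a six-item subscale total onto one of the five interval classes implied by the numbers quoted above (minimum value 6, maximum value 24, interval width (24 − 6)/5 = 3.6); the class labels are illustrative assumptions on our part, since the rubric table is not fully reproduced here.

```python
def independence_level(total_score, n_items=6, min_score=1, max_score=4, n_classes=5):
    """Map a learning-independence subscale total onto one of five interval
    classes: minimum value 1 * 6 = 6, maximum value 4 * 6 = 24, so each of
    the 5 classes spans (24 - 6) / 5 = 3.6 points."""
    lo = n_items * min_score            # 6
    hi = n_items * max_score            # 24
    width = (hi - lo) / n_classes       # 3.6
    # Class labels are assumed for illustration; the paper does not list them here.
    labels = ["very low", "low", "moderate", "fairly high", "high"]
    idx = min(int((total_score - lo) / width), n_classes - 1)
    return labels[idx]

print(independence_level(20))  # a total of 20 falls in the fourth class, "fairly high"
```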
After obtaining the test scores for each subject, their mathematical connection abilities were categorized based on learning independence. The following is a description of the answers and interviews for each subject.

The Subject 1

Figure 3. The answer to question number 1 by S1.
Figure 3 shows that S1 is quite capable of using connections between mathematical topics and can solve the problem quite well. Related to this, the following is an excerpt from the researcher's (Q) interview with the first subject (S1). The interview quoted in script 1, however, shows that S1 has not been able to use connections between mathematics topics.

Q: In number one, what was asked?
S1: Determine the value of …
Q: What is known?
S1: 7 2022
Q: What result did you get?
S1: 72023 (seventy-two thousand twenty-three)
Q: Are you sure it's true?
S1: Yes

Figure 4. The answer to question number 2 by S1.
Figure 4 shows that S1 is quite capable of using connections in everyday life and can solve the problem quite well. Related to this, the following is an excerpt from the researcher's interview with S1. The interview excerpt in script 2 shows that S1 is quite capable when using connections in everyday life.

Figure 5. The answer to question number 3 by S1.
Figure 5 shows that S1 is quite capable of using connections between mathematical topics and can solve the problem quite well. Related to this, the following is an excerpt from the researcher's interview with S1. The interview excerpt in script 3 shows that S1 has not been able to use connections between mathematical topics properly according to the specified criteria.

The Subject 2

Figure 6. The answer to question number 1 by S2.
Figure 6 shows that Subject 2 can use connections between mathematical topics and can solve the problem well. Related to this, the following is an excerpt from the researcher's interview with the second subject (S2). The interview excerpt in script 4 shows that S2 can use connections between mathematical topics; it can be seen that S2 has been able to solve problem number 1 well.

Figure 7 shows that S2 is able to use mathematical connections in everyday life and can solve the problem quite well according to the criteria, although some parts of the answer are not quite right. Related to this, the following is an excerpt from the researcher's interview with subject 2 (S2).

Q: What is known in the second number?
S2: 1, 2, 4, 6, and 9
Q: What was asked in number two?
S2: (silent for a moment) The number difference.
Q: Which numbers?
S2: The biggest and the smallest.
Q: What results did you get?
S2: 8, 5, 3, 1, 0
Q: Where did this number 36 come from?
S2: From the name of the olympiad, the same.

The interview excerpt in script 5 shows that Subject 2 is capable of using mathematical connections in everyday life. This indicates that S2 has been able to solve question number 2 quite well according to the criteria, although one part of the answer is not quite right.

Figure 8 shows that S2 has not been able to use connections between mathematical topics properly according to the specified criteria. Related to this, the following is an excerpt from the researcher's interview with S2. The interview excerpt in script 6 shows that S2 has not been able to use connections between mathematical topics properly according to the specified criteria.
The Subject 3

Figure 9. The answer to question number 1 by S3.
Figure 9 shows that S3 is quite capable of using connections between mathematical topics and can solve the problem quite well. Related to this, the following is an excerpt from the researcher's interview with S3. The interview excerpt in script 7, however, shows that S3 has not been able to use connections between mathematical topics well.

Figure 10. The answer to question number 2 by S3.
Figure 10 shows that S3 is able to use mathematical connections in everyday life and solves the problem quite well, but does not meet the criteria. Related to this, the following is an excerpt from the researcher's interview with S3. The interview quotes in script 8 show that S3 has not been able to use connections in everyday life well.

Figure 11. The answer to question number 3 by S3.
Figure 11 shows that S3 has not been able to use connections between mathematical topics well. Related to this, the following is an excerpt from the researcher's interview with S3. The interview excerpt in script 9 shows that S3 has not been able to use connections between mathematical topics well.

The Subject 4

Figure 12. The answer to question number 1 by S4.
Figure 12 shows that S4 is quite capable of using connections between mathematical topics and can solve the problem quite well. Related to this, the following is an excerpt from the researcher's interview with S4. The interview excerpt in script 10 shows that S4 has not been able to use connections between mathematical topics well.

Figure 13. The answer to question number 2 by S4.
Figure 13 shows that S4 is quite capable when using connections in everyday life and solves the problem quite well, but does not meet the criteria. Related to this, the following is an excerpt from the researcher's interview with S4. The interview excerpt in script 11 shows that S4 has not been able to properly use connections in everyday life.

Figure 14. The answer to question number 3 by S4.
Figure 14 shows that S4 is not good at using connections between math topics. The following is an excerpt from the researcher's interview with the fourth subject (S4). The interview excerpt in script 12 shows that S4 has not been able to use connections between mathematical topics well.

Regarding the results obtained, the low mathematical connection ability of the students appears to stem from difficulty understanding the tests given; they often work according to their own assumptions without understanding the concept. Shodikin et al. (2019) found that pre-service teachers behaved similarly. In this study, the ability to make mathematical connections when modeling the given problems was still not good; students often forgot the model and did not write it down. When interviewed, many of the subjects looked nervous and afraid, making it difficult for them to express their thoughts and feelings. The learning independence of three of the four subjects was low, in line with research suggesting that students' learning independence is still low (Anita, 2014; Diana et al., 2020; García-García & Dolores-Flores, 2021; Kenedi et al., 2019; Wastono, 2015). In tackling these problems, teachers or educators could regularly give practice questions requiring mathematical connections during lessons; teachers could also hold storytelling sessions in front of the class for each student, so that students can express their thoughts and feelings without nervousness or fear.
Teachers should also provide learning innovations so that lessons focus on the students rather than the teacher, helping students become independent learners, think critically, and experience learning as fun rather than boring. The role of parents is also needed to motivate their children to be more active and independent in learning.

Conclusion

The low ability of students in mathematical connection and independent learning contributes to their difficulty in working on problems; they were not able to understand the concept. Internal factors that influenced this include the students themselves, a lack of practice questions, and a lack of motivational stimulus to encourage independent study. External factors that affected the students include the level of the questions given, which were too difficult. Based on the results of this study, it is recommended that teachers always motivate students in the learning process, increase practice questions, encourage independent study, instill a never-give-up spirit, and choose a learning model that is more oriented to student needs. To strengthen mathematical connection ability, teachers can make a habit of reflecting on student understanding at the beginning and at the end of learning.
Parameter identification in Choquet Integral by the Kullback-Leibler divergence on continuous densities with application to classification fusion

Classifier fusion is a means to increase the accuracy and decision-making of classification systems by designing a set of basis classifiers and then combining their outputs. The combination is made by a non-linear functional, dependent on fuzzy measures, called the Choquet integral. It constitutes a vast family of aggregation operators, including the minimum, maximum and weighted sum. The main issue before applying the Choquet integral is to identify the 2^M − 2 parameters for M classifiers. We follow a previous work by Kojadinovic and one of the authors where the identification is performed using an information-theoretic approach. The underlying probability densities are made smooth by fitting continuous parametric densities, and then the Kullback-Leibler divergence is used to identify the fuzzy measures. The proposed framework is applied to widely used datasets.

Introduction

In most pattern recognition tasks, a first step consists of extracting relevant features bringing information on the classes of interest. Features are then transformed into membership degrees for the different classes by entities called classifiers. A classifier is a system that takes as input a Q-dimensional vector X^T = [x_1 ... x_Q] (also called features or attributes) and generates a degree of confidence in the statement "X belongs to class ω_j" for all classes in Ω = {ω_1 ... ω_K}. Multiple Classifier Systems (MCS) [1] are designed when complementary (and sometimes redundant) information sources (here, classifiers) are used in order to improve classification accuracy and decision-making. MCS can also be viewed as information fusion systems whose inputs are classifiers. MCS can take several forms, among them the parallel one, which takes as input M × K partial degrees of confidence and generates an output, called the global confidence degree, made up of K degrees of confidence (one for each class). We denote by φ_{m,j}(X) the degree of confidence delivered by classifier m ∈ {1 ... M} for class ω_j ∈ Ω given the observation X. Usual combinations of classifier outputs include the product, naive Bayes and decision templates, among others [1], but most of them can be used only if each output represents an independent source of information. However, the independence assumption is not always satisfied. To face this problem, an approach that considers interactions among classifier outputs, such as fuzzy integrals and in particular the Choquet integral [2,3], can be used. The explicit interaction coefficients (in the 2-additive form) provide very interesting information on the complementarity and redundancy of the fused data, which can also be used for subset selection [4]. A fuzzy integral is a type of non-linear functional dependent on fuzzy measures, which constitutes a vast family of aggregation operators including many widely used operators (minimum, maximum, weighted sum, ordered weighted sum and so on) [5]. In order to be combined by the Choquet integral, the commensurability [6] of the classifier outputs must be satisfied; that is, the classifier outputs must be defined on the same measurement scale. The combination of all partial confidence degrees provided by the classifiers is thus made by a Choquet integral, which is described in the next section.

Choquet capacities and Choquet Integral

Let the M classifiers (sources) be denoted by Θ = {θ_1, θ_2, ..., θ_M}.
A fuzzy measure µ_k for a given class ω_k weighs the importance of a subset of sources S ⊆ Θ and is defined by [2,3]:

$$ \mu_k : 2^{\Theta} \rightarrow [0,1] \qquad (1) $$

satisfying the following constraints:
- µ_k(∅) = 0 and µ_k(Θ) = 1;
- µ_k(A) ≤ µ_k(B) whenever A ⊆ B ⊆ Θ (monotonicity).

The fuzzy measure is said to be additive when µ_k(A ∪ B) = µ_k(A) + µ_k(B) for all disjoint A, B ⊆ Θ, super-additive when µ_k(A ∪ B) ≥ µ_k(A) + µ_k(B), and sub-additive when µ_k(A ∪ B) ≤ µ_k(A) + µ_k(B).

In classification problems, the fuzzy measure is used in order to take into account interactions between sources. One fuzzy measure is tuned for each class, and each discrete Choquet integral aggregates the information provided by the sources as follows [2,3]:

$$ C_{\mu_k}(\varphi_1,\ldots,\varphi_M) = \sum_{i=1}^{M} \left(\varphi_{(i)} - \varphi_{(i-1)}\right) \mu_k\!\left(S_{(i)}\right) \qquad (2) $$

where µ_k(S_(i)) is the importance of the subset of sources S_(i) = {θ_(i), ..., θ_(M)} and the value φ_(i) is provided by source θ_(i). The notation (·) indicates a permutation of the indices according to the values provided by the sources, such that φ_(1) ≤ φ_(2) ≤ ... ≤ φ_(M) ≤ 1 (and by convention φ_(0) = 0). The Choquet integral thus coincides with the weighted arithmetic mean when the fuzzy measure is additive. One approximation of Eq. 2, called the 2-additive Choquet integral, is often used and consists of considering a 2-order additive capacity, which takes into account both the weights of each source and the interactions between pairs. The weight ν_i of a source θ_i (for the detection of class ω_k) and the coefficient I_ij of interaction between sources θ_i and θ_j can be obtained from the fuzzy measure µ_k by [2,3]:

$$ \nu_i = \mu_k(\{\theta_i\}) + \frac{1}{2}\sum_{j \neq i} I_{ij} \qquad (3a) $$

$$ I_{ij} = \mu_k(\{\theta_i,\theta_j\}) - \mu_k(\{\theta_i\}) - \mu_k(\{\theta_j\}) \qquad (3b) $$

These parameters are interesting for interpreting the fuzzy measure and also for highlighting which sources are important and how they interact. When the interaction between two sources is positive, the sources are said to be complementary, while they are said to be redundant when the interaction is negative. The problem of Choquet integral parameter identification has been treated by several authors [4,7]. In the context of classification as considered here (where the classes are known), the method proposed by Grabisch [3], called Heuristic Least Mean Square (HLMS), is often used. However, it requires the global scores (the real output of the Choquet integral) to perform the optimization of the fuzzy measure. Recently, two information-theoretic methods, based on entropy [8,6] and on relative entropy [9], were proposed. The former is purely unsupervised and requires only the degrees of confidence of the classifiers, while the latter requires the ground truth, i.e., the real class of each pattern. The relative entropy-based approach is supervised but requires less prior information than HLMS.

A probabilistic view

Each fuzzy value µ_k(S) expresses the relative importance of a subset S for distinguishing class ω_k from the others [8,6]. In order to identify them, the authors in [9] proposed to use the relative entropy, also called the Kullback-Leibler divergence (KL) [10], which is a measure of divergence between two densities. It can be interpreted as the expected discrimination information between two hypotheses and thus appears very natural for the identification of fuzzy measures. To compute the KL, one first needs to compute:
- the distribution (say P_k^S) of the confidence degrees in class ω_k conditional on class ω_k,
- and the distribution (say P̄_k^S) of the confidence degrees in class ω_k conditional on the other classes (ω̄_k = Ω\ω_k),
both given a subset of sources S. These distributions characterize the input data (confidence degrees), and the greater the difference (calculated by the KL) between them, the higher the discrimination power. Identifying a fuzzy measure using a probabilistic approach was introduced in [8,6], where the author proposed an unsupervised entropy-based method.
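Before turning to the identification itself, here is a minimal Python sketch of Eq. 2 and of the 2-additive relations of Eqs. 3a-3b; representing the fuzzy measure as a dict keyed by frozensets of source indices is our own choice, made for clarity rather than taken from the paper.

```python
import itertools
import numpy as np

def choquet_integral(phi, mu):
    """Discrete Choquet integral (Eq. 2). `phi` holds the M confidence
    degrees; `mu` maps frozensets of source indices to capacities, with
    mu[frozenset()] == 0 and mu[frozenset(range(M))] == 1."""
    phi = np.asarray(phi, dtype=float)
    M = len(phi)
    order = np.argsort(phi)                        # phi_(1) <= ... <= phi_(M)
    sorted_phi = np.concatenate(([0.0], phi[order]))
    return sum((sorted_phi[i] - sorted_phi[i - 1])
               * mu[frozenset(order[i - 1:].tolist())]   # S_(i) = {theta_(i), ..., theta_(M)}
               for i in range(1, M + 1))

def two_additive_summary(mu, M):
    """Pairwise interactions (Eq. 3b) and source weights (Eq. 3a) of a
    2-additive fuzzy measure."""
    I = {(i, j): mu[frozenset({i, j})] - mu[frozenset({i})] - mu[frozenset({j})]
         for i in range(M) for j in range(i + 1, M)}
    nu = [mu[frozenset({i})]
          + 0.5 * sum(I[tuple(sorted((i, j)))] for j in range(M) if j != i)
          for i in range(M)]
    return nu, I

# Additive sanity check: with mu(S) = sum of the weights in S, the Choquet
# integral reduces to the weighted arithmetic mean.
w = [0.2, 0.3, 0.5]
mu = {frozenset(S): sum(w[i] for i in S)
      for r in range(4) for S in itertools.combinations(range(3), r)}
print(choquet_integral([0.4, 0.9, 0.1], mu))   # 0.40 = 0.2*0.4 + 0.3*0.9 + 0.5*0.1
```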
When the class is known for each input pattern, the KL-based approach proposed in [9] should be used. It fully exploits the available information provided by the training dataset and, as expected, increases the discrimination power.

Relative entropy

We assume all confidence degrees to be commensurable values in [0, 1], which is generally true in classification. Let P_k^Θ(φ_{1,k}(X), ..., φ_{M,k}(X)) (resp. P̄_k^Θ(φ_{1,k}(X), ..., φ_{M,k}(X))) be the probability that classifiers 1, 2, ..., M jointly provide the values φ_{1,k}(X), φ_{2,k}(X), ... and φ_{M,k}(X) given that the ground truth is class ω_k (resp. given ω̄_k) and observation X. In [9], the distributions were assumed discrete and the relative entropy (KL) of both distributions was thus given by:

$$ D\left(P_k^S \,\|\, \bar P_k^S\right) = \sum_{Y} P_k^S(Y)\,\log\frac{P_k^S(Y)}{\bar P_k^S(Y)} \qquad (4) $$

For the sake of simplicity, we will denote by R_k(S) the KL value given by D(P_k^S || P̄_k^S) for a given subset of sources S ⊆ Θ. Note that when the distributions P_k^Θ and P̄_k^Θ have been computed, the distributions P_k^S and P̄_k^S for S ⊂ Θ are obtained by marginalizing out the components θ ∈ Θ, θ ∉ S. To compute Eq. 4, the support of the distribution P_k^S must be included in the support of the distribution P̄_k^S, otherwise the relative entropy diverges towards infinity. In order to respect this constraint, the skew divergence was used in [9].

From relative entropy to Choquet capacities

The relative entropy has to satisfy the conditions presented in Section 2 in order to be interpreted as a Choquet capacity. For that, the relative entropy R_k(S) for a subset S is normalized, as in Kojadinovic's method [8,6], by the relative entropy of the whole set of sources, R_k(Θ):

$$ \mu_k(S) = \frac{R_k(S)}{R_k(\Theta)} $$

Moreover, the relative entropy is zero when the set S is empty, but also when both distributions are identical. Therefore, a source that provides the same degrees of support for a sought-after class ω_k and for the other classes ω̄_k is assigned a low importance value, since it cannot distinguish class ω_k from the others. This is exactly what is meant by discrimination power. The relative entropy also has to satisfy the monotonicity constraint (Section 2); i.e., given two sources θ_i and θ_j, it has to satisfy:

$$ R_k(\{\theta_i\}) \le R_k(\{\theta_i,\theta_j\}) \quad \text{and} \quad R_k(\{\theta_j\}) \le R_k(\{\theta_i,\theta_j\}) $$

In order to check these constraints, one can rewrite the relative entropy using the chain rule [11,8,6]:

$$ R_k(\{\theta_i,\theta_j\}) = R_k(\{\theta_i\}) + D\left(P_k^{\theta_j|\theta_i} \,\|\, \bar P_k^{\theta_j|\theta_i}\right) $$

where the last term, a conditional relative entropy, is always positive; the relative entropy therefore has a monotonic behavior [11,8,6,9]. This reasoning extends easily to larger subsets of sources. Therefore, the normalized relative entropy satisfies all the constraints needed to be interpreted as a Choquet capacity.

Modeling positive and negative interactions

When the sources θ_i and θ_j, which provide the distributions P_k^S and P̄_k^S, are independent, the relative entropy has an additive behavior [11,8,6]:

$$ R_k(\{\theta_i,\theta_j\}) = R_k(\{\theta_i\}) + R_k(\{\theta_j\}) $$

When the sources θ_i and θ_j interact with each other, the relative entropy can be expressed by:

$$ R_k(\{\theta_i,\theta_j\}) = R_k(\{\theta_i\}) + R_k(\{\theta_j\}) + \Delta_{ij} $$

where the last term Δ_ij can be negative or positive according to the sources θ_i and θ_j, implying that the identified Choquet capacities can be super-additive or sub-additive. Therefore, the proposed method is able to model and identify both positive and negative interactions, whereas Kojadinovic's approach can only identify negative ones.

On using a continuous approach

The core of the KL-based method is the evaluation of the multidimensional probability distributions (P_k^S and P̄_k^S). In [8,6,9], the distributions were computed using a discretization of the confidence degrees (histograms). We rather propose to remain in the continuous space (the space of the degrees of confidence) and to use parametric continuous densities for modelling the confidence degrees.
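Before moving to the continuous densities, here is a hedged sketch of the discrete, histogram-based computation of Eq. 4 with the skew-divergence fix mentioned above; the smoothing parameter alpha is an assumption on our part, as the value used in [9] is not stated here.

```python
import numpy as np

def skew_kl(p, q, alpha=0.99):
    """Discrete relative entropy of Eq. 4 computed as the skew divergence
    D(p || alpha*q + (1 - alpha)*p): mixing a little of p into q keeps the
    support condition satisfied, so the sum stays finite."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                    # normalize both histograms
    q = q / q.sum()
    q_skew = alpha * q + (1.0 - alpha) * p
    mask = p > 0                       # terms with p(Y) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q_skew[mask])))
```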
These densities allow one to:
- ensure an infinite support for the distributions, and therefore avoid using artificial methods to solve the problem of minimum support;
- avoid the need to find the optimal number of bins for the histograms, which can be a serious problem for high-dimensional data such as in image processing or in complex systems diagnosis;
- obtain a more precise paving of the input space, and therefore generate smooth distributions and improve the computation of the relative entropy by summing over more data points sampled from the continuous densities.

Modelling

We assume that the joint probability density function related to P_k^Θ (and similarly for P̄_k^Θ) has a continuous and parametric form. For example, we consider mixtures of Gaussians, which are very general and have interesting properties:

$$ f_k(Y) = \sum_{a=1}^{L_k} c_{k,a}\, \mathcal{N}\!\left(Y \,|\, \alpha_{k,a}, \Sigma_{k,a}\right) \qquad (14) $$

with Y = (φ_{1,k}(X), φ_{2,k}(X), ..., φ_{M,k}(X)) ∈ [0,1]^{|Θ|} the joint observation of the degrees of confidence, c_{k,a} the mixing coefficient of component a (among L_k, with Σ_a c_{k,a} = 1) and N(Y | α_{k,a}, Σ_{k,a}) a multidimensional normal density with mean α_{k,a} and covariance Σ_{k,a} (positive definite):

$$ \mathcal{N}(Y \,|\, \alpha, \Sigma) = (2\pi)^{-M/2}\,|\Sigma|^{-1/2}\exp\!\left(-\tfrac{1}{2}(Y-\alpha)^{T}\Sigma^{-1}(Y-\alpha)\right) \qquad (15) $$

The dimension of the parameters is that of Θ, which is M. The same expression holds for P̄_k^Θ, denoted g_k, with different parameters indexed by the subscripts (k, b).

Learning parameters of densities

The parameters of the densities can be estimated automatically by standard methods such as the Expectation-Maximization algorithm (EM [12]), where L, the number of components, can also be estimated. When the parameters of the distributions P_k^Θ and P̄_k^Θ have been specified, it is easy to compute P_k^S and P̄_k^S for subsets S ⊂ Θ by marginalization. In case the joint density related to P_k^Θ is represented by a mixture of multivariate Gaussians, the marginal is also a mixture of multivariate Gaussians where some components (marginalized out) have been eliminated. In particular, the |S| components of the mean vector of the marginal are the means of the variables in S, and its covariance matrix is composed of the pairwise covariances of the same variables.

Continuous relative entropy

For two unimodal multivariate normal densities f_k and g_k (with L_a = L_b = 1), the KL has an exact closed form [13]:

$$ D(f_k \,\|\, g_k) = \frac{1}{2}\left[\log\frac{|\Sigma_g|}{|\Sigma_f|} + \mathrm{tr}\!\left(\Sigma_g^{-1}\Sigma_f\right) + (\alpha_g-\alpha_f)^{T}\Sigma_g^{-1}(\alpha_g-\alpha_f) - M\right] \qquad (16) $$

When the densities are multimodal, the continuous relative entropy is obtained by integrating on the support of P_k^S, Supp(P_k^S) = {Y : P_k^S(Y) > 0}:

$$ R_k(S) = \int_{\mathrm{Supp}(P_k^S)} P_k^S(Y)\,\log\frac{P_k^S(Y)}{\bar P_k^S(Y)}\, dY \qquad (17) $$

To evaluate this expression, several methods can be used [13]. In this paper, we have used Monte Carlo sampling (MC) and a variational approximation (VA). The MC method consists of drawing samples from the mixture associated with P_k^S. For that, a component is chosen randomly using the distribution c_{k,·}; a continuous sample is then drawn from the associated Gaussian component and the density is evaluated. Given {Y_i, i = 1 ... N}, the set of i.i.d. sampled points, we can approximate the integral (17) by its MC estimate:

$$ \hat R_k(S) = \frac{1}{N}\sum_{i=1}^{N} \log\frac{P_k^S(Y_i)}{\bar P_k^S(Y_i)} \qquad (18) $$

In the VA method, the integral is approximated by the following expression [13]:

$$ R_k(S) \approx \sum_{a} c_{k,a}\,\log\frac{\sum_{a'} c_{k,a'}\, e^{-D(f_{k,a}\|f_{k,a'})}}{\sum_{b} c_{\bar k,b}\, e^{-D(f_{k,a}\|g_{k,b})}} \qquad (19) $$

where D(f_{k,a} || g_{k,b}) is the exact value of the KL between component a of f_k and component b of g_k, given by Eq. 16.
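Hedged Python sketches of Eq. 16 and of the MC estimator of Eq. 18 follow; the use of SciPy for density evaluation, the default sample size and the fixed seed are our own implementation choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def kl_gaussians(mu_f, S_f, mu_g, S_g):
    """Closed-form KL between two multivariate normals (Eq. 16)."""
    d = len(mu_f)
    S_g_inv = np.linalg.inv(S_g)
    diff = np.asarray(mu_g) - np.asarray(mu_f)
    return 0.5 * (np.log(np.linalg.det(S_g) / np.linalg.det(S_f))
                  + np.trace(S_g_inv @ S_f) + diff @ S_g_inv @ diff - d)

def mc_kl_mixtures(w_f, comps_f, w_g, comps_g, n=100_000, seed=0):
    """Monte Carlo estimate of KL(f || g) for two Gaussian mixtures (Eq. 18);
    comps_* are lists of (mean, covariance) pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(w_f), size=n, p=w_f)          # pick components of f
    Y = np.array([rng.multivariate_normal(*comps_f[a]) for a in idx])
    f = sum(w * multivariate_normal(m, c).pdf(Y) for w, (m, c) in zip(w_f, comps_f))
    g = sum(w * multivariate_normal(m, c).pdf(Y) for w, (m, c) in zip(w_g, comps_g))
    return float(np.mean(np.log(f / g)))               # (1/N) * sum log f/g
```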
Final algorithm

The overall algorithm for computing the fuzzy measure is as follows:

Require: L_k, the set of confidence degrees in class ω_k of the M classifiers given that the ground truth is class ω_k
Require: L̄_k, the set of confidence degrees in class ω_k of the M classifiers given that the ground truth is a class different from ω_k
Ensure: the fuzzy measure µ_k for class ω_k
1: P_k^Θ ← estimate the parameters of the densities on L_k
2: P̄_k^Θ ← estimate the parameters of the densities on L̄_k
3: for each non-empty subset S ⊆ Θ do
4:   P_k^S, P̄_k^S ← marginalize P_k^Θ and P̄_k^Θ onto S
5:   R_k(S) ← D(P_k^S || P̄_k^S) (Eq. 17, estimated by MC or VA)
6: end for
7: µ_k(∅) ← 0
8: for each non-empty subset S ⊆ Θ do
9:   µ_k(S) ← R_k(S) / R_k(Θ)
10: end for

From µ_k, one can compute the weight of each source (i.e., classifier) and their interactions using Eqs. 3a-3b. These values can help an end-user, or anyone interested, to know which classifiers contribute to the final results as well as how they interact.

Experiments

A toy example is first presented. Then, the proposed method is evaluated on two datasets from UCI [14]: vehicle and image segmentation. The classifiers used were the following: an Evidential Neural Network (EvNN) [15] (with 4 prototypes for each class), an Evidential Nearest Neighborhood classifier (EvKNN) [16] (with K = 5) and Support Vector Machines (SVM) [12] (with a Gaussian kernel of size 2.2). The classifiers were learnt using a 1-vs-1 strategy for each class, and the final scores were obtained by a weighted vote. The SVM scores were transformed into probabilities using a sigmoid transfer function. Note that the classifier parameters were not "optimized" for each dataset, since the goal here is to assess the fusion process. The KL was assessed using the MC method with 10^6 samples.

A toy example

As an example, let us consider two classifiers θ_1 and θ_2 with confidence degrees in ω_k, given that the ground truth is ω_k, distributed according to Fig. 2, and according to Fig. 3 for the confidence degrees in ω_k given that the ground truth is another class ω̄_k. From these densities, we want to characterize the importance of the coalition {θ_1, θ_2} in distinguishing ω_k from ω̄_k. In Fig. 2, the confidence degrees of classifier 1 are globally close to unity given class ω_k. That means classifier 1 often provides high scores for ω_k when the ground truth is ω_k. Classifier 2, however, seems to provide some results close to 0.5, meaning classifier 2 is frequently not certain about the predicted class. Given that the ground truth is ω̄_k (Fig. 3), the classifier outputs are globally close to 0 for ω_k. That means the classifiers generally provide low values for ω_k when the ground truth is ω̄_k, as expected. In order to quantify the importance µ_k({θ_1, θ_2}) of the coalition {θ_1, θ_2} given ω_k, we compute the divergence between both distributions. The higher the divergence, the higher the importance of {θ_1, θ_2} for distinguishing ω_k from the other classes. In this example, the densities were obtained using two mixtures; with these parameters, Eq. 17 leads to R_k({θ_1, θ_2}) ≈ 9.44 (with N = 10^6).

Vehicle dataset

The UCI vehicle dataset is a four-class problem composed of 946 examples almost uniformly distributed between classes. The goal is to classify data into one of the following types of vehicle: OPEL (ω_1), SAAB (ω_2), BUS (ω_3) and VAN (ω_4). Half of the dataset was used for classifier training and the other half for testing. Figures 4-6 show the ROC curves of the individual classifiers, and Figure 7 those computed from the results of the fusion process proposed in this paper. Table 1 also gives the obtained fuzzy measures, while the interaction and classifier weights computed from them (as detailed previously) are provided in Tables 2 and 3.
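For orientation, here is a hedged end-to-end sketch of the algorithm above, using scikit-learn's GaussianMixture for the EM step and reusing mc_kl_mixtures from the previous sketch; the component count and MC sample size are illustrative, and in practice Monte Carlo noise may require small corrections to keep the estimated measure monotone, a point the algorithm itself does not address.

```python
import itertools
import numpy as np
from sklearn.mixture import GaussianMixture

def identify_fuzzy_measure(L_k, L_k_bar, n_components=2, n_mc=100_000):
    """Fit one Gaussian mixture per conditional set of joint confidence
    degrees, marginalize onto every subset S of sources, estimate R_k(S)
    by Monte Carlo (Eq. 18) and normalize by R_k(Theta)."""
    M = L_k.shape[1]
    f = GaussianMixture(n_components, covariance_type="full").fit(L_k)
    g = GaussianMixture(n_components, covariance_type="full").fit(L_k_bar)

    def marginal(gmm, S):
        idx = list(S)                  # keep only the variables in S
        comps = [(m[idx], c[np.ix_(idx, idx)])
                 for m, c in zip(gmm.means_, gmm.covariances_)]
        return gmm.weights_, comps

    R = {}
    for r in range(1, M + 1):
        for S in itertools.combinations(range(M), r):
            (w_f, c_f), (w_g, c_g) = marginal(f, S), marginal(g, S)
            R[frozenset(S)] = mc_kl_mixtures(w_f, c_f, w_g, c_g, n=n_mc)
    total = R[frozenset(range(M))]
    mu = {frozenset(): 0.0}
    mu.update({S: val / total for S, val in R.items()})
    return mu
```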
The ROC curves clearly show the complementarity of the individual classifiers. For example, class ω_1 is better recognized using EvKNN (Fig. 5), with almost 98% good classification. Class ω_2 is better recognized using the SVM (Fig. 6), with an accuracy close to 83%. Class ω_3 is well recognized (about 93%) by EvNN (Fig. 4) and EvKNN (Fig. 5). Lastly, class ω_4 is better detected by the SVM (Fig. 6).

Table 3: Weight values associated with the fuzzy measure of Table 1.

As shown in Figure 7, the proposed fusion process draws benefit from all these classifiers, providing AUCs close to 99%, 84%, 97% and 88% for classes ω_1, ω_2, ω_3 and ω_4, respectively (an improvement close to 10%). The interaction indices can explain this result. Indeed, class ω_1, which is well detected by all classifiers, is represented by a fuzzy measure with negative interactions because of redundancy. The highest redundancy is detected for class ω_2 between EvNN and EvKNN (I_12 = −0.20), while the highest complementarity is detected for class ω_3 between the EvNN and SVM classifiers (I_23 = +0.33). The weights are also the highest for the classifiers with the best accuracies, except for class ω_2. In general, an efficient classifier also has a relatively high weight (Tab. 10), and when this is not the case, the interaction values provide compensation.

Table 10: Weight values for application 2.

Conclusion

We proposed an information-theoretic approach relying on the Kullback-Leibler divergence for fuzzy measure identification in the context of supervised classification. The use of well-known parametric and continuous functions for the representation of the confidence degrees simplifies the estimation of the joint densities and their marginalization. We showed its application on widely used datasets, where the fuzzy measure brought a lot of useful information concerning classifier importance and interactions. The results also emphasized that the proposed fusion process, on the one hand, improves classification results and, on the other hand, is robust to classifier mistakes. Further investigations concern the study of the algorithms used for learning the distribution parameters, which are of key importance.
Influencing Factors on the Household-Waste-Classification Behavior of Urban Residents: A Case Study in Shanghai

As the process of urbanization in China continues to accelerate, the amount of domestic waste generated correspondingly increases and directly affects the living space of residents. This implies that, to reduce the production of municipal solid waste and the burden of garbage disposal and recycling, household-waste-classification activities by residents are of great significance. Using Shanghai as a case study, this study investigated the influencing factors on residents' household waste classification by conducting a survey. Statistical analysis was then adopted, as specified below. First, this study proposed research hypotheses related to the influencing factors of residents' domestic-waste-sorting behavior at three levels: government, society and individuals. Second, the study designed a questionnaire from five perspectives: individual characteristic variables, government, society, residents and classification behavior. Then, SPSS software was used to carry out descriptive statistical, reliability and validity assessments using ANOVA, correlation and regression analyses on the sample data obtained from the questionnaire. The results suggested that the research hypotheses were statistically significant: (1) females and residents with higher education were more likely to participate in domestic waste classification; (2) reward and punishment measures had the most significant impact on residents' waste-classification behavior; and (3) publicity and education, classification standards, classification facilities, the recycling system, subjective norms, environmental knowledge and environmental attitudes all had a positive effect on residents' household waste classification. Finally, based on the results of the empirical analysis, this paper provides reference suggestions for the further development of domestic waste classification in Shanghai.

Introduction

With the continuous acceleration of China's modernization process and the rapid improvement of people's living standards, the quantity and types of domestic waste have gradually increased. According to the 2020 National Annual Report on the Prevention and Control of Environmental Pollution by Solid Waste in Large and Medium Cities released by the Ministry of Ecology and Environment of China, the total production of domestic waste in 196 large- and medium-sized cities in China totaled 235.602 million tons. Among them, Shanghai produced the largest amount of domestic waste, 10.768 million tons, or approximately 4.57%. The extremely large quantity of waste produced has become a common problem faced throughout the country and, as such, strongly affects the cities' hygiene and the residents' health. To meet the growing demand for a high-quality living environment, domestic waste classification was proposed to reduce this waste at its source, thereby improving treatment efficiency at later stages and realizing the resource utilization of municipal solid waste. From the residents' perspective, the classification and management of domestic waste were deeply analyzed, and the influencing factors of residents' implementation of household waste classification were thoroughly examined.
Subsequently, the problems existing in the process of promoting waste classification in Shanghai were identified, the reasons were clarified and corresponding solutions were proposed, which provides a decision-making reference for the orderly development of waste classification in Shanghai.

Research Hypothesis

The influencing factors of residents' source classification and disposal of municipal solid waste are diverse, including not only the external environment but also the subjective factors of the residents. Among the external factors, the most commonly studied is policy and incentives, which were shown to substantially influence residents' waste-sorting behavior [1][2][3]. However, internal factors, such as psychological constructs, can also produce an essential influence on people's waste-separation intention [4]. To the best of our knowledge, however, no previous work has investigated both the external and internal factors of people's waste-sorting behavior in Shanghai, the first city in China to launch an official waste-sorting campaign. In this regard, this research contributes to filling that gap. Based on previous research, the influencing factors of the household-waste-classification behavior of Shanghai residents were hypothesized at three levels: government, society and residents.

Governmental Factors

A. Impact of publicity and education on residents' household-waste-classification behavior
In the process of promoting household waste classification in various countries, publicity and education are widely used as basic means. Rousta et al. [5], Liu et al. [6], Choon et al. [7] and Sarbassov et al. [8] investigated the household-waste-classification behavior of Swedish, Chinese, Malaysian and Kazakh residents, respectively. The results suggest that when government departments publicize the relevant contents of waste classification to residents, residents' awareness of participating in waste classification can be effectively enhanced and their implementation of waste-classification behavior effectively promoted. Cui et al. [9] researched waste sorting in Beijing, China. They determined that, in order to carry out household waste classification work well, the relevant government departments should strengthen publicity, enrich the means of publicity, improve publicity facilities, and innovate the perspectives and methods used in the publicity process to improve residents' acceptance of classification knowledge, thereby improving residents' enthusiasm for participating in waste classification. Therefore, the following hypothesis was put forward:

Hypothesis 1a (H1a). Publicity and education have a positive effect on residents' household-waste-classification behavior. In other words, the greater the publicity intensity and the more varied the forms of publicity, the more likely residents are to participate in household-waste-classification activities.

B. Impact of classification criteria on residents' household-waste-classification behavior
Zheng et al. [10] and Wang [11] verified that the household-waste-classification standard is related to whether residents can understand and easily implement waste classification. Scientific and reasonable classification standards can promote residents' waste-classification awareness and improve their enthusiasm for participating in waste classification. Therefore, the following hypothesis was put forward:

Hypothesis 1b (H1b).
The classification standard has a positive effect on residents' household-waste-classification behavior. In other words, the more reasonable and understandable the classification criteria are, the more likely residents are to participate in household-waste-classification activities.

C. Impact of reward and punishment measures on residents' household waste classification
Lucia et al. [12], Guo et al. [13], Miafodzyeva et al. [14], Convery et al. [15] and Wu [16] claimed that residents' household waste classification is significantly affected by reward and punishment policies, and that positive economic incentives are more easily accepted by residents. Wadehra et al. [17] confirmed, in an investigation of waste classification in India, that the quality of the reward and punishment mechanism is closely related to the implementation effect of the classification policy, and that a high-quality reward and punishment mechanism can strengthen residents' willingness to classify garbage and prompt residents to implement waste-classification behavior. Therefore, the following hypothesis was put forward:

Hypothesis 1c (H1c). Reward and punishment measures have a significant positive effect on residents' household-waste-classification behavior. In other words, the greater the rewards and punishments, the more likely residents are to participate in household-waste-classification activities.

Social Factors

A. Impact of classification facilities on residents' household waste classification
Based on surveys of residents' willingness to classify waste, Liu et al. [18], Malmir et al. [19], Wan et al. [20] and Kirakozian [21] showed that waste-classification infrastructure has an impact on residents' willingness to participate in household waste classification, and that its quality, in particular its convenience, is positively correlated with residents' enthusiasm for participating in waste classification. Zhang et al. [22], studying the current situation of the garbage sorting and recycling system in Chengdu, reported that low-quality waste-collection facilities significantly reduce residents' willingness to classify garbage. Therefore, the following hypothesis was put forward:

Hypothesis 2a (H2a). Classification-supporting facilities play a positive role in promoting residents' household-waste-classification behavior. In other words, the higher the quality of the classification facilities, particularly their convenience, the more likely residents are to participate in household-waste-classification activities.

B. Impact of the recycling system on the household waste classification of residents
Cui et al. [9] further standardized the process of waste collection and showed that this can effectively strengthen residents' willingness to classify garbage. Vassanadumrongdee et al. [23] and Meng et al. [24] insisted on standardizing the recycling, transportation and disposal of the garbage sorted by residents, suggesting that this is very important for enhancing residents' enthusiasm for participating in waste classification. If residents find that the sorted garbage is not being processed accordingly, their enthusiasm for continued classification will drop, reducing their willingness to participate in waste classification. Therefore, the following hypothesis was put forward:

Hypothesis 2b (H2b). The recycling system has a positive promoting effect on residents' household-waste-classification behavior.
In other words, the more standardized the recycling system is, the more likely residents are to participate in household-waste-classification activities.

Resident Factors

A. Impact of subjective norms on residents' household waste classification
Subjective norms refer to the influence of other individuals or organizations that are important to residents, such as family, friends, neighbors, colleagues, the government and environmental protection associations, whose attitudes and behaviors have a profound impact on individual residents. They include the subject's perception of the passive pressure of public opinion and the subjective will to meet the expectations behind that opinion. Shaufique et al. [25] showed, in their research on garbage recycling in Minnesota, that social pressure has an impact on individual behavioral decisions; that is, the expectations and views of other important individuals or organizations around an individual often affect residents' willingness to participate in household waste classification. Janmaimool [26] and Wang et al. [27] came to a similar conclusion: subjective norms significantly affect residents' garbage-sorting behavior. Therefore, the following hypothesis was put forward:

Hypothesis 3a (H3a). Subjective norms play a positive role in promoting residents' household-waste-classification behavior. In other words, the stronger the residents' subjective perception of the expectations of the social reference groups and the higher the degree of compliance, the more likely they are to participate in household waste classification.

B. Impact of environmental knowledge on residents' household waste classification
Márquez et al. [28], Babaei et al. [29] and Almasi et al. [30] carried out investigations of the waste-classification influencing factors of Mexican and Iranian residents. Those studies pointed out that improving residents' knowledge of waste classification can effectively enhance residents' willingness to participate in waste classification. Therefore, the following hypothesis was put forward:

Hypothesis 3b (H3b). Environmental knowledge has a positive effect on residents' household-waste-classification behavior. In other words, the richer the residents' environmental knowledge, the more likely they are to participate in household-waste-classification activities.

C. Impact of environmental attitudes on residents' household waste classification
Mahmud et al. [31] and Pakpour et al. [32] found, in their studies of the waste-classification influencing factors of Malaysian and Iranian residents, that residents' attitudes and views on waste classification have an indirect impact on their classification intention. This is supported by Rauwald et al. [33] and Li et al. [34], who found that residents' views on waste classification have a direct impact on their willingness to classify waste. Therefore, the following hypothesis was put forward:

Hypothesis 3c (H3c). A pro-environmental attitude has a positive effect on residents' household-waste-classification behavior. In other words, the more positive the residents' attitude toward environmental protection, the more likely they are to participate in household-waste-classification activities.

Questionnaire Design

Surveys have been validated as one of the best ways to obtain first-hand data for studies concerning waste management [35,36]. Based on the above research hypotheses, a total of 42 questions were designed to investigate household-waste-classification activities in this study.
The concise questionnaire formulation process is clarified below in this section. All questions originated from previous studies and were modified to fit this research context. In total, 700 questionnaires were distributed through online and offline channels in the urban area of Shanghai, which produced 517 valid electronic and 106 valid paper questionnaires. The online questionnaires were distributed through www.wjx.cn (accessed on 12 October 2021), which is the most popular online questionnaire design, distribution and collection website and could reach the most representative respondents for this study. The offline paper questionnaires were mainly distributed to the elderly, who had difficulties accessing the internet. Several neighborhoods with high numbers of elderly residents in Shanghai's urban districts were selected. To avoid sample bias, the number of questionnaires distributed to different districts was varied according to population size. The research team comprised four people from the university research lab that conducted this survey. Two months and one week were used for the questionnaire pre-test, modification and official distribution.

In the first section of the questionnaire, questions were set to investigate the respondents' socio-demographic characteristics, such as gender (i.e., male or female), age, level of education (i.e., senior high school or below, junior college, bachelor's, master's or above) and profession (i.e., student, self-employed, education worker, government staff, retired worker, company employee or other). The independent variables were set according to the three levels of government, society and residents, and 31 items were designed. Following the Likert scale, five responses were offered: "strongly agree", "agree", "neutral", "disagree" and "strongly disagree". The dependent variable was the construct of household-waste-classification behavior, which comprised seven items (questions). There were five options for each question: "always classified", "often classified", "occasionally classified", "improbably classified" and "never classified".

Government

From the point of view of the government, three independent variables were set, namely, publicity and education, classification standards, and reward and punishment measures (Tables 1-3).

Table 1. Publicity and education questions [37].
Q5: My community has launched a publicity campaign for household waste classification.
Q6: The household-waste-classification campaign can guide me in the correct classification of household waste.
Q7: Regular publicity of household waste classification promotes my correct classification of household waste.
Q8: A variety of waste-classification publicity activities promote my correct classification of household waste.

Table 2. Classification criteria questions.
Q9: I think the current household-waste-classification standard is reasonable.
Q10: I think the current household-waste-classification standard is simple and easy to understand.
Q11: I think the unification of the household-waste-classification standard is helpful for daily classification.

Table 3. Reward and punishment questions [38,39].
Q12: My community has reward and punishment measures for household waste classification as required.
Q13: If there are incentives for household waste classification, I will be willing to classify.
Q14: If I will be penalized for not classifying household waste, I will be willing to classify.
Q15: If household waste classification implements charging by volume, I will be willing to classify.

H1a. Publicity and education have a positive effect on residents' household-waste-classification behavior; in other words, the greater the publicity intensity and the more abundant the publicity forms, the more likely residents are to participate in household-waste-classification activities. H1a was examined using four item dimensions.

H1b. The classification standard has a positive effect on residents' household-waste-classification behavior. In other words, the more reasonable and understandable the classification criteria are, the more likely residents are to participate in household-waste-classification activities. H1b was examined using three item dimensions.

H1c. Reward and punishment measures have a significant positive effect on residents' household-waste-classification behavior. In other words, the greater the rewards and punishments, the more likely residents are to participate in household-waste-classification activities. H1c was examined using four item dimensions.

Social

Two independent variables were set up from a social point of view, namely, classification-supporting facilities and the recycling system (Tables 4 and 5).

Table 4. Classification-supporting facilities questions [40].
Q16: There are household-waste-classification collection facilities in my community.
Q17: There are eye-catching classification standard descriptions on the household-waste-classification facilities in my community.
Q18: The household-waste-classification collection facilities in my community make it convenient for me to dispose of household waste.
Q19: Intelligent waste-classification equipment can attract me to carry out household waste classification.

Table 5. Recycling system questions.
Q20: The cleaning staff in my community sort and recycle the classified waste.
Q21: The sanitation department in our community transports the classified waste separately.
Q22: The sanitation department in our community sorts and disposes of the sorted waste.
Q23: The recycling norms in our community will prompt me to sort waste.

H2a. Classification-supporting facilities play a positive role in promoting residents' household-waste-classification behavior. In other words, the higher the quality of the classification facilities and the higher the convenience, the more likely residents are to participate in household-waste-classification activities. H2a was examined using four item dimensions.

H2b. The recycling system has a positive promoting effect on residents' household-waste-classification behavior. In other words, the more standardized the recycling system is, the more likely residents are to participate in household-waste-classification activities. H2b was examined using four item dimensions.

Residents

Three independent variables were set from the perspective of residents, namely, subjective norms, environmental knowledge and environmental attitudes (Tables 6-8).

Table 6. Subjective norm questions [41].
Q24: My family supports the sorting of household waste.
Q25: My friends all think I should sort household waste.
Q26: Seeing others in my community sort household waste will motivate me to sort.
Q27: I think I should be consistent with the people around me.

Table 7. Environmental knowledge questions.
Independent variable: Environmental knowledge [42]
Q28: I know the categories of various household wastes.
Q29: I know what recyclable waste includes.
Q30: I know to separate organic perishable waste from other waste.
Q31: I know which classification bin household waste should be put into after classification.

Table 8. Environmental attitude questions.
Independent variable: Environmental attitude [43,44]
Q32: I think household waste should be sorted.
Q33: I think household waste classification is beneficial to resource recycling and energy saving.
Q34: I think sorting household waste is a responsible behavior.
Q35: I think household waste classification can reduce pollution and protect the environment.

H3a. Subjective norms play a positive role in promoting residents' household-waste-classification behavior. In other words, the stronger the residents' subjective perception of the expectations of their social reference groups and the higher their degree of compliance, the more likely they are to participate in household waste classification. H3a was examined using four item dimensions.

H3b. Environmental knowledge has a positive effect on residents' household-waste-classification behavior. In other words, the richer the residents' environmental knowledge, the more likely they are to participate in household-waste-classification activities. H3b was examined using four item dimensions.

H3c. Environmental attitude has a positive effect on residents' household-waste-classification behavior. In other words, the more positive the residents' attitude toward environmental protection, the more likely they are to participate in household-waste-classification activities. H3c was examined using four item dimensions.

Classification Behavior
In this study, residents' household-waste-classification behavior was used as the dependent variable to evaluate the classification of household waste. Taking the implementation level of household waste classification as the model's dependent variable, the questionnaire offered five options for each item: "always classified", "often classified", "occasionally classified", "rarely classified" and "never classified" (Table 9).

Table 9. Classification behavior questions.
Dependent variable: Classification behavior [45]
Q36: I will separate waste cardboard.
Q37: I will separate kitchen waste.
Q38: I will separate waste batteries and electronic equipment into categories.
Q39: I will separate waste plastics.
Q40: I will separate waste medicines.
Q41: I will separate scrap metal.
Q42: I will separate waste glass products.

Data Analysis
A combination of electronic and on-site questionnaires was used in this study. A total of 517 valid electronic and 106 valid paper questionnaires were collected, totaling 623 valid questionnaires. This section comprises the results of the descriptive analysis, the reliability and validity analysis of the scale, the difference analysis, the correlation analysis and the regression analysis.

Descriptive Statistical Analysis
First, this study analyzed the basic characteristics of the valid questionnaires and profiled the respondents according to their personal trait variables (gender, age, education level, occupation, etc.), reflecting the suitability of the questionnaire coverage.
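The statistical software used for the analyses below is not named in the text. As a minimal sketch of the data preparation such a design implies, the Likert labels can be coded 1-5 and each construct's items averaged into a score; the data frame `survey` and the column names q5 ... q42 are hypothetical.

```r
# Hypothetical data frame `survey`: one row per respondent,
# columns q5..q42 holding the response labels as text.
likert <- c("strongly disagree" = 1, "disagree" = 2, "neutral" = 3,
            "agree" = 4, "strongly agree" = 5)
freq   <- c("never classified" = 1, "rarely classified" = 2,
            "occasionally classified" = 3, "often classified" = 4,
            "always classified" = 5)

# Recode item responses to numeric scores
items <- paste0("q", 5:35)                 # independent-variable items
survey[items] <- lapply(survey[items], function(x) unname(likert[x]))
deps  <- paste0("q", 36:42)                # dependent-variable items
survey[deps]  <- lapply(survey[deps], function(x) unname(freq[x]))

# Average each construct's items into a single score, e.g.:
survey$publicity <- rowMeans(survey[paste0("q", 5:8)])   # Q5-Q8
survey$standards <- rowMeans(survey[paste0("q", 9:11)])  # Q9-Q11
survey$behavior  <- rowMeans(survey[deps])               # Q36-Q42
```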
As shown in Table 10, in terms of gender, there were slightly more males (51.7%) than females (48.3%). In terms of age structure, the largest number of respondents were aged between 35 and 55, accounting for 50.4% of the total, followed by those aged between 19 and 35, accounting for 28.6%. In terms of education level, the highest proportion of respondents held a bachelor's degree (45.4%), followed by a junior college degree (27.9%) and a master's degree or above (14.0%). In terms of occupation, the highest percentage of respondents were company employees (40.0%), followed by education workers (17.8%), with the remaining occupations relatively evenly distributed. In general, the demographic characteristics of the valid sample in this study are relatively evenly distributed and representative.

Reliability Analysis
A reliability analysis was performed on the questionnaire to guarantee the consistency and stability of the data. In this study, the reliability of the questionnaire was tested using Cronbach's α reliability coefficient. The reliability of a questionnaire is generally considered to be very high when Cronbach's α is greater than 0.9. When Cronbach's α is greater than 0.7 but less than 0.9, the reliability is high. When Cronbach's α is greater than 0.6 but less than 0.7, the reliability is acceptable. If Cronbach's α is less than 0.6, the reliability of the questionnaire is poor, and the questionnaire needs to be revised and more data collected in a new survey [46]. In this study, the reliability analysis was conducted separately on the data for each variable of the sample. As shown in Table 11, the Cronbach's α values for the governmental, social and resident factors, as well as for waste-sorting behavior, were all greater than 0.8. This indicated that the reliability of the questionnaire in this study was relatively high and that the questionnaire could be analyzed empirically.

Validity Analysis
The validity of the variables (publicity and education, sorting standards, reward and punishment measures, auxiliary facilities for sorting, recycling system, subjective norms, environmental knowledge, environmental attitudes, sorting behavior) was tested using structural validity analysis. As shown in Table 12, the KMO value of the sample was 0.873, which is greater than 0.6; the chi-squared value of Bartlett's test of sphericity was 4932.868; the degrees of freedom were 149; and the significance was 0.000. This indicated that the variables were correlated and reasonably set, and that the questionnaire was valid.
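A minimal sketch of these reliability and validity checks with the psych package in R; `survey` and the item columns are the hypothetical names introduced in the coding step above.

```r
library(psych)

gov_items <- survey[paste0("q", 5:15)]   # governmental-factor items

# Cronbach's alpha for one block of items (Table 11 reports one per factor)
alpha(gov_items)$total$raw_alpha

# KMO measure of sampling adequacy and Bartlett's test of sphericity (Table 12)
KMO(gov_items)$MSA
cortest.bartlett(cor(gov_items), n = nrow(gov_items))
```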
Difference Analysis
In this study, the differences in waste-sorting behavior between respondents with different demographic characteristics were analyzed; the test procedures and results are shown below.

Analysis of Differences in Waste-Sorting Behavior between Respondents of Different Genders
Differences in the waste-sorting behavior of respondents of different genders were analyzed using the independent-sample t-test. As shown in Table 13, there was a significant difference in waste-sorting behavior between respondents of different genders (t = −2.574, p < 0.05), indicating that females were more likely to sort household waste than males.

Analysis of Differences in Waste-Sorting Behavior between Respondents of Different Ages
One-way ANOVA was used to test for differences in waste-sorting behavior among respondents of different ages. As shown in Table 14, there was no significant difference in waste-sorting behavior between respondents of different ages (F = 1.521, p > 0.05), indicating that age had no significant effect on respondents' waste-sorting behavior.

Analysis of Differences in Waste-Sorting Behavior between Respondents with Different Education Levels
One-way ANOVA was used to test for differences in waste-sorting behavior between respondents with different education levels. As shown in Table 15, there was a significant difference in waste-sorting behavior among respondents with different education levels (F = 7.644, p < 0.05): the higher the education level, the more likely the respondents were to engage in waste sorting.

Analysis of Differences in Waste-Sorting Behavior between Respondents of Different Occupations
One-way ANOVA was used to test for differences in waste-sorting behavior between respondents of different occupations. As shown in Table 16, there was no significant difference in waste-sorting behavior between respondents with different occupations (F = 1.952, p > 0.05), indicating that occupation had no significant effect on respondents' waste-sorting behavior.

Correlation Analysis
First, the correlations between the governmental, social and resident factors and the respondents' household-waste-sorting behavior were investigated using Pearson's correlation analysis; the results are shown in Table 17. Among the governmental factors, publicity and education (r = 0.522, p < 0.01), sorting standards (r = 0.548, p < 0.01), and reward and punishment measures (r = 0.562, p < 0.01) were significantly and positively correlated with respondents' waste-sorting behavior. Among the social factors, auxiliary facilities for sorting (r = 0.508, p < 0.01) and the recycling system (r = 0.525, p < 0.01) were significantly and positively correlated with respondents' waste-sorting behavior. Among the resident factors, subjective norms (r = 0.515, p < 0.01), environmental knowledge (r = 0.509, p < 0.01) and environmental attitude (r = 0.477, p < 0.01) were significantly and positively correlated with respondents' waste-sorting behavior.

Regression Analysis
In order to further investigate the influence of the governmental, social and resident factors on the respondents' waste-sorting behavior, a multivariate regression analysis was carried out with publicity and education, sorting standards, reward and punishment measures, auxiliary facilities for sorting, recycling system, subjective norms, environmental knowledge and environmental attitude as independent variables, and waste-sorting behavior as the dependent variable. The multivariate regression equation can be expressed as

Y = β0 + β1X1 + β2X2 + ... + βpXp + ε,

where Y is the dependent variable; X1, ..., Xp are the independent variables; β0 is the intercept; β1, ..., βp are the estimated coefficients; and ε is the random error.

As shown in Table 18, from the model summary, the R² of the model was 0.535, indicating that publicity and education, sorting standards, reward and punishment measures, auxiliary facilities for sorting, recycling system, subjective norms, environmental knowledge and environmental attitude together explained 53.5% of the variance in waste-sorting behavior; overall, the explanatory power of the model was fair.
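A minimal sketch of the difference tests and the regression in R, reusing the hypothetical `survey` columns from the coding step; the `gender` and `age_group` columns and the remaining construct scores are assumed to have been built the same way.

```r
# Gender difference (Table 13) and one-way ANOVAs (Tables 14-16)
t.test(behavior ~ gender, data = survey)
summary(aov(behavior ~ age_group, data = survey))

# Pearson correlations (Table 17), e.g. publicity vs. behavior
cor.test(survey$publicity, survey$behavior)

# Multivariate regression (Tables 18-20); z-scoring all variables first
# makes lm() return standardized coefficients directly
std <- as.data.frame(scale(survey[c("behavior", "publicity", "standards",
        "rewards", "facilities", "recycling", "norms",
        "knowledge", "attitude")]))
fit <- lm(behavior ~ ., data = std)
summary(fit)   # R-squared, F-statistic, standardized betas
```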
As shown in Table 19, the ANOVA results for the model gave an F-value of 88.356 (p < 0.001), indicating a significant linear relationship between the independent variables and the dependent variable in this study. The regression coefficients of the model are shown in Table 20. Among the governmental factors, the standardized regression coefficient of publicity and education on waste-sorting behavior was 0.143 (p < 0.01). This indicated that the implementation of publicity and education activities was conducive to motivating and promoting residents' household-waste-sorting behavior; moreover, the greater the publicity efforts and the richer their forms, the more likely the residents were to participate in household-waste-sorting activities. Therefore, H1a of this study was supported. The standardized regression coefficient of sorting standards on waste-sorting behavior was 0.155 (p < 0.01), indicating that sorting standards had a positive effect on encouraging and promoting residents' household-waste-sorting behavior; in other words, the more reasonable and easily understood the sorting standards were, the more likely the residents were to participate in household-waste-sorting activities. Therefore, H1b of this study was supported. The standardized regression coefficient of reward and punishment measures on waste-sorting behavior was 0.181 (p < 0.01), indicating that the implementation of reward and punishment measures was conducive to promoting household-waste-sorting behavior among residents; that is, the stronger the reward and punishment measures, the more likely the residents were to participate in household-waste-sorting activities. Therefore, H1c of this study was supported. The three influencing factors, in order of weight proportion, were reward and punishment measures (0.157), sorting standards (0.131), and publicity and education (0.121).

Among the social factors, the standardized regression coefficient of auxiliary facilities for sorting on waste-sorting behavior was 0.100 (p < 0.01), indicating that auxiliary facilities for sorting played a positive role in stimulating and promoting residents' household-waste-sorting behavior; in other words, the more complete and convenient the auxiliary facilities for sorting were, the more likely the residents were to participate in household-waste-sorting activities. Therefore, H2a of this study was supported. The standardized regression coefficient of the recycling system on waste-sorting behavior was 0.089 (p < 0.05), indicating that the recycling system was conducive to encouraging and promoting residents' household-waste-sorting behavior; in other words, the more standardized the recycling system was, the more likely the residents were to participate in household-waste-sorting activities. Thus, H2b of this study was verified. The influencing factors, in order of weight, were the recycling system (0.082) and auxiliary facilities for sorting (0.079).

Among the resident factors, the standardized regression coefficient of subjective norms on waste-sorting behavior was 0.134 (p < 0.01), indicating that subjective norms were conducive to promoting and facilitating residents' household-waste-sorting behavior; in other words, the higher the residents' subjective perception of the expectations of their social reference group and the higher their degree of compliance, the greater the likelihood that residents engaged in household waste sorting.
Therefore, H3a was supported. The standardized regression coefficient of environmental knowledge on waste-sorting behavior was 0.122 (p < 0.01), indicating that environmental knowledge played a positive role in encouraging and promoting residents' household-waste-sorting behavior; in other words, the more environmental knowledge residents had, the more likely they were to participate in household-waste-sorting activities. Therefore, H3b was supported. The standardized regression coefficient of environmental attitude on waste-sorting behavior was 0.093 (p < 0.01), indicating that environmental attitude played a positive role in stimulating and promoting residents' household-waste-sorting behavior; in other words, the more positive the residents' attitude toward environmental protection, the more likely they were to participate in household-waste-sorting activities. Therefore, H3c was supported. The three influencing factors, in order of weight, were subjective norms (0.122), environmental knowledge (0.116) and environmental attitude (0.074).

Discussion and Conclusions
Considering the importance of waste classification in medium and large cities, this study investigated questionnaire responses from residents of Shanghai. Drawing on 623 valid samples in total, this study produced the following results and arguments after statistical analysis.

With regard to the socio-demographic characteristics and waste-classification behavior of residents, the results suggested that females and people with higher education tended to be more willing to sort waste, which is consistent with previous studies [1]. Women were more likely to participate in household waste sorting than men, which may be related to the fact that women undertake more housework. Additionally, the higher the education level of residents, the higher the likelihood of their participation in waste sorting. In the process of education, residents acquire relevant knowledge about waste sorting: the longer they are educated and the higher their education level, the more environmental knowledge they receive, which is conducive to residents engaging in household-waste-sorting activities.

Regarding the independent variables at the government, society and resident levels, the results are presented below. The governmental factors that influenced residents' waste-sorting behavior were, in order of weight, reward and punishment measures, sorting standards, and publicity and education. This suggested that economic means can significantly promote residents' waste-sorting behavior and that the relevant government departments should introduce supporting policies, improve reward and punishment measures, and set reasonable and easily understood sorting standards, as validated multiple times in previous research [48][49][50][51]. The effect of publicity and education on residents' waste-sorting behavior persists for a long period after implementation, and a permanent mechanism should be established. Socially, the factors that influenced residents' waste-sorting behavior were, in order of weight, the recycling system and the supporting facilities for sorting. The standardization of the recycling system enabled residents to feel that sorting their waste was meaningful, which, in turn, could effectively increase their motivation to sort waste at the source.
Therefore, the sanitation department and the waste-recycling companies should standardize operations across the whole process, improve the supporting facilities for waste sorting and motivate residents to sort at the source. At the resident level, the factors that influenced waste-sorting behavior were, in order of weight, subjective norms, environmental knowledge and environmental attitudes. People live in a society and are influenced by those around them at all times; the waste-sorting behavior of the people around residents creates public-opinion pressure on residents themselves, pushing them to engage in waste-sorting activities. The community can mobilize the power of the masses so that everyone engages in household waste sorting, led by Chinese Communist Party members and officials, guided and monitored by volunteers, and participated in by residents. At the same time, the community should step up its efforts to publicize environmental knowledge in residential areas, guide residents to become actively involved in the public affairs of the community, cultivate residents' sense of ownership and enable them to participate in waste-sorting work with a positive attitude.

The innovation of this study lies in exploring both the external and internal factors of the waste-sorting behavior of Shanghai residents. The results demonstrated that government and society, as well as residents' environmental attitudes and knowledge, can influence their intention to classify waste. Moreover, publicity played a very important role in promoting public recognition of waste sorting and should be rolled out broadly, including to primary school students. However, there were limitations to this study. First, this study did not dig deeper into how the three groups of independent variables interact with each other and synergistically affect residents' sorting behavior. Second, the COVID-19 pandemic may have affected people's willingness to separate waste; more studies should be conducted to explore the influence of COVID-19 on people's sorting behavior in the future.
Feeding Ecology of Sicydium bustamantei (Greeff 1884, Gobiidae) Post-Larvae: The "Little Fish" of São Tomé Island: The rivers of São Tomé Island are colonized by Sicydium bustamantei (Greeff 1882), an amphidromous fish that spawns in those areas. After hatching, larvae drift to the ocean with the river flow. In the marine realm, the planktonic larvae develop and migrate to freshwater as post-larvae. The migrations of post-larvae support important local fisheries at the mouths of rivers on tropical volcanic islands. Amphidromous post-larvae rely on plankton as their main source of organic matter. However, the biology and ecology of S. bustamantei in the West African islands are understudied, despite its importance for local fisheries. Thus, this study aimed to start bridging this gap by studying its feeding ecology. Our objectives were to identify the main prey of S. bustamantei post-larvae, combining gut content with stable isotope analyses. The gut contents included zooplankton (Chaetognatha, Ostracoda, and unidentified crustaceans), debris from plant- and/or macroalgae-derived material, and microplastics (including microfibers). The stable isotope analysis indicated that zooplankton and macroalgae detritus were the main sources of organic matter assimilated by this species. We also demonstrated that S. bustamantei post-larvae are omnivorous secondary consumers. These data provide pioneering information that can be used in management plans that still need to be developed.

Introduction
Amphidromy is a type of diadromy that requires freshwater-marine connectivity in the early stages of a species' life cycle [1,2]. Amphidromous species such as gastropods, decapods, and fish are adapted to tropical and subtropical insular environments [2][3][4][5]. Post-larvae support significant artisanal fisheries during the return migration (goby-fry fisheries), with notable nutritional, cultural, and socio-economic value in developing tropical and sub-tropical countries [3,5,7,8,11]. Returning post-larvae can be caught using beach seine nets made from mosquito nets or traps (baskets) made from vegetable fibers [12,13]. Globally, goby-fry fisheries are declining due to overfishing and to the degradation and loss of suitable habitat and river-ocean connectivity caused by instream barriers (e.g., channelization, riverine and coastal zone development) [7,12,14,15]. Goby-fry fisheries are largely unmanaged, with insufficient biological and fishery data [7,15], albeit some species are listed as endangered [12].

In São Tomé island (São Tomé and Príncipe archipelago), Sicydium bustamantei (Greeff 1882), called "peixinho" (little fish), is caught as post-larvae in several rivers (e.g., the Io-Grande, Manuel Jorge, Malanza, and Ouro Rivers) and sold in local fish markets. This species is one of the main sources of income and protein for these communities. It has been found on several islands across the Gulf of Guinea (West Africa), namely Bioko, São Tomé, Príncipe, and Annobón [9,[16][17][18]. Little is known about the biology and ecology of this species, particularly during the return migrations of its early life-cycle stages. However, according to local knowledge in São Tomé, S. bustamantei forms shoals at the mouths of rivers. Here, fish are caught with baskets, mosquito nets, or even cloths. The post-larvae are caught throughout the year, but mainly in the dry season during the full and new moon periods.
These descriptions coincide with the scientific information available elsewhere for other Sicydiinae species [5,8,11,13,19]. The IUCN (International Union for Conservation of Nature) has not yet attributed a conservation status to S. bustamantei due to insufficient scientific data [20] and, given its importance to many human populations across the species' distribution range, it is necessary to obtain scientific data to start implementing sound management plans. We opted to start by studying the food web ecology of this species, as well as the prevalence of microplastics in its diet. There are two main reasons for this decision. First, food web ecology discloses the relationship patterns between species across the multidimensional mosaic of habitats where they live. This is especially true for migratory species that move across ecosystems and serve as links and conduits of energy between the land and the ocean. Second, the prevalence of microplastics off São Tomé and Príncipe is unknown but likely high, considering the high levels of plastic pollution on beaches. Studying microplastic pollution is relevant because the contaminants sorbed onto or incorporated into microplastics may dysregulate the physiological processes of the animals that accumulate them [21]. These contaminants may be transferred and accumulated throughout the food web, impacting the health of multiple species, including humans [21]. Thus, our specific objective was to identify the main food sources consumed and assimilated by S. bustamantei post-larvae in the Gulf of Guinea, using the population of São Tomé island (São Tomé and Príncipe) as a model population. For that, we combined gut content analysis with carbon (δ13C: 13C/12C) and nitrogen (δ15N: 15N/14N) stable isotope analysis. The gut content analysis also provided a first assessment of the seriousness of microplastic pollution in São Tomé and Príncipe and of its prevalence in the guts of such an important species for the people of this country.

Study Area and Collection of Samples
The Democratic Republic of São Tomé and Príncipe includes two islands, São Tomé and Príncipe, which form an archipelago with the Bioko and Annobón islands (Equatorial Guinea) in the Gulf of Guinea. São Tomé island (Figure 1) is a volcanic island with high relief, located about 150-200 km off the west coast of Africa, and is the second-largest island (859 km²) of the archipelago [22]. Sicydium bustamantei post-larvae were acquired at the fish market of the city of São Tomé (caught mostly in the southern part of the island) and caught in the mangrove of the Malanza River (in the south of São Tomé) in January 2017 and August 2017, corresponding to the wet season (October to May) and the dry season (June to September), respectively (Figures 1 and 2). Samples were preserved in 96% ethanol and later identified as S. bustamantei, a gobiid endemic to the Gulf of Guinea region, with the help of Dr. Peter Wirtz (independent researcher). Since the samples from the fish market included multiple species, we separated and quantified the individuals by taxonomic group. The total length of the larvae (TL; ±0.01 mm) was measured from photographs taken under a stereomicroscope (Leica 58APO, coupled with a Leica MC170 HC camera) using ImageJ (v1.50i). The standard deviation was used as a measure of data dispersion in this paper. A t-test was used to analyze the differences in total length between the wet and dry seasons.
The analysis was carried out using the R 3.5.3 statistical software, with the level of significance set at p ≤ 0.05.

Gut Content Analysis
The diet of S. bustamantei post-larvae was determined by analyzing the guts of 30 individuals collected in each season. The gut contents were exposed by dissecting the abdomen with fine needles and were identified under a stereomicroscope (Leica 58APO) and an inverted microscope (Zeiss MB). The prey items were identified to the lowest taxonomic level possible. The presence of microplastics and microfibers was also recorded. The incidence of food items was calculated as the percentage of post-larvae with at least one prey item in their guts. A chi-square test was used to compare the incidence of each food item between the wet and dry seasons. The analysis was carried out using the R 3.5.3 statistical software, with the level of significance set at p ≤ 0.05. The graphical method proposed by Costello [23] and modified by Amundsen et al. [24] was used to analyze the feeding strategy of S. bustamantei post-larvae. Individuals with no gut content were excluded from the analysis. Briefly, each point in the plot corresponds to the frequency of occurrence (i.e., the percentage of guts with a specific prey item) and the prey-specific abundance (i.e., the percentage of a prey taxon in relation to all prey items in the guts in which this prey was present). The importance of each prey and the feeding strategy were inferred by examining the distribution of points along the axes of the plot.
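A minimal sketch of the Costello-Amundsen computation in R, assuming a hypothetical count matrix `guts` (rows = individuals with food, columns = prey categories, cells = number of items of that prey in that gut); the matrix and its column names are illustrative.

```r
# guts: hypothetical matrix, one row per non-empty gut, one column per prey taxon
present <- guts > 0

# Frequency of occurrence: % of guts containing each prey taxon
FO <- 100 * colMeans(present)

# Prey-specific abundance: % of all items that a taxon makes up,
# counted only over the guts in which that taxon occurs
Pi <- sapply(seq_len(ncol(guts)), function(j) {
  rows <- present[, j]
  100 * sum(guts[rows, j]) / sum(guts[rows, ])
})

plot(FO, Pi, xlim = c(0, 100), ylim = c(0, 100),
     xlab = "Frequency of occurrence (%)",
     ylab = "Prey-specific abundance (%)")
text(FO, Pi, labels = colnames(guts), pos = 3)
```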
Stable Isotope Analyses
The main sources of organic matter assimilated by post-larvae were identified and quantified using carbon (δ13C: 13C/12C) and nitrogen (δ15N: 15N/14N) stable isotopes. We analyzed five individuals collected during the wet season. Samples were also collected during the dry season, but due to visible signs of deterioration after collection they were not included in the analysis. The potential prey were collected near the mouth of the Malanza River, on the south coast of São Tomé island, also during the wet season (January 2017), and included zooplankton (the chaetognath Pterosagitta draco). Zooplankton were collected using a plankton net with a mesh size of 500 µm. Macroalgae and seagrasses were collected in the intertidal and subtidal areas by freediving. Tree leaves were hand-collected on the beaches near the mouth of the Malanza River. Samples were cleaned with deionized water, oven-dried at 60 °C for at least 48 h, and ground to a fine and homogeneous powder using a mortar and pestle (animals) or a mixer mill (plants and macroalgae). Stable isotope ratios were measured using a Thermo Scientific Delta V Advantage IRMS via a Conflo IV interface (Marinnova, University of Porto). The raw data were normalized by three-point calibration using international reference materials, namely IAEA-N-1 (δ15N = +0.4‰), IAEA-NO-3 (δ15N = +4.7‰), and IAEA-N-2 (δ15N = +20.3‰) for the nitrogen isotopic composition, and two-point calibration using USGS-40 (δ13C = −26.39‰) and USGS-24 (δ13C = −16.05‰) for the carbon isotopic composition. Stable isotope ratios are reported in δ notation, δX = (Rsample/Rstandard − 1) × 10³, where X is the C or N stable isotope and R is the ratio of heavy to light stable isotopes. Vienna Pee Dee Belemnite and air are the standards for δ13C and δ15N, respectively. The analytical error, the mean standard deviation of the replicate reference material, was ±0.1‰ for δ13C and δ15N. The zooplankton and S. bustamantei post-larvae δ13C values were corrected for lipid content [25], and the δ13C and δ15N values were corrected for ethanol preservation [26].

To identify and quantify the contribution of the most likely food sources to the S. bustamantei post-larvae biomass, we combined biplot analysis (post-larvae δ13C and δ15N values were adjusted for trophic fractionation [27]) with the results of the dual-stable isotope mixing model produced by SIAR (Stable Isotope Analysis in R) [28,29]. This mixing model uses Bayesian inference to solve the indeterminate equations (more than n + 1 sources relative to n stable isotopes) and produces a probability distribution that represents the likelihood that a given source contributes to the consumer biomass [28]. The model also allows each of the sources and the trophic fractionation (TEF, or trophic enrichment factor) to be assigned a normal distribution [28]. SIAR produces a range of feasible solutions to the mixing problem, to which credibility intervals (CIs) are assigned (in this study, 95% CI) [28]. SIAR also includes a residual error term. For the SIAR mixing model, the δ13C and δ15N values were adjusted for one trophic level using the trophic fractionation estimates of Vander Zanden and Rasmussen [27] (+0.47 ± 1.23‰ for δ13C, +3.40 ± 0.41‰ for δ15N).

Results and Discussion
Post-larvae collected during the wet season were larger than those collected during the dry season (t(29) = 67.08, p < 0.001). The total length of the S. bustamantei post-larvae varied between 17 and 30 mm (26.7 ± 2.9 mm) in the wet season and between 18 and 28 mm (24.8 ± 2.2 mm) in the dry season (Figure 3). These values are within the range described for the total length of post-larvae of other Sicydiinae species during their return migrations (recruitment) [3,7]. The feeding incidence was higher during the wet season (53.3%) than during the dry season (20.0%) (Table 1). This may be due to the fact that runoff is higher during the wet season than during the dry season, which increases the downstream transport of food and nutrients from upriver to the estuaries/mangroves and adjacent coastal areas, consequently increasing food availability [30].
Table 1. Total number of Sicydium bustamantei (Greeff 1882) post-larvae guts examined and guts with food items; feeding incidence (%); and incidence of plant and/or macroalgae, zooplankton, microplastics, and microfibers (%). Samples were collected during the wet and dry seasons of 2017 on the island of São Tomé (São Tomé and Príncipe).

The incidence of each food item was not statistically different between seasons (χ²(2) = 4.98, p = 0.082). However, zooplankton (16.7%), such as Chaetognatha, Ostracoda, and unidentified crustaceans, were only observed in the guts of post-larvae during the wet season. Most of the gut contents consisted of plant and/or macroalgae detritus (46.7% in the wet season and 20.3% in the dry season) and microplastics/microfibers (20.0% and 23.3% in the wet and dry seasons, respectively) (Table 1). Thus, S. bustamantei post-larvae showed a specialist feeding strategy [22,23], feeding on a dominant prey taxon (vascular plant- and/or macroalgae-derived material) and occasionally on small proportions of other, rarer prey types (zooplankton) (Figure 4).

The stable isotope values of S. bustamantei post-larvae, after correction for trophic fractionation, indicate that they assimilated 15N- and 13C-enriched sources, such as zooplankton and macroalgae detritus, and also tree detritus (Figure 5). In fact, based on the SIAR mixing model (95% CI), zooplankton was the source with the highest relative contribution to the S. bustamantei post-larvae biomass during the wet season, varying between 0.42 and 0.71, followed by macroalgae detritus (0.16-0.50) and tree detritus (0.02-0.18) (Table 2). In Table 2, the mode values represent the most likely values, and the low 95% and high 95% values represent the 95% Bayesian credibility intervals calculated by the dual-stable isotope mixing model produced by SIAR (Stable Isotope Analysis in R) [28,29].
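SIAR explores the full space of feasible solutions by Bayesian inference; for intuition, the determined case that it generalizes (three sources, two isotopes, plus mass balance) can be solved directly. A minimal sketch in R with purely illustrative endmember values, applied after TEF correction of the consumer:

```r
# Columns: zooplankton, macroalgae detritus, tree detritus (illustrative values)
d13C <- c(-19.0, -17.5, -28.0)   # hypothetical source means (per mil)
d15N <- c( 10.0,   5.0,   2.0)
mix  <- c(-19.5, 8.0)            # hypothetical TEF-corrected consumer values

# f1*s1 + f2*s2 + f3*s3 = mix for each isotope, and f1 + f2 + f3 = 1
A <- rbind(d13C, d15N, rep(1, 3))
b <- c(mix, 1)
fractions <- solve(A, b)   # source contributions, sum to 1
round(fractions, 2)        # ~0.66, 0.25, 0.10 for these illustrative inputs
```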
The stable isotope and gut content analyses showed different results for the relative contribution of each food source during the wet season. While the gut contents were mainly composed of vascular plant- and/or macroalgae-derived material, zooplankton was the source with the highest relative contribution to the post-larvae biomass. Detritus is not the main source of energy for most aquatic organisms [31], because it is less likely to be assimilated than animal-derived material [32,33]. Moreover, because we collected zooplankton using a net with a 500 µm mesh size, small-sized zooplankton such as Ostracoda were not included in the stable isotope analysis. However, this would probably not change the main conclusions about the contribution of zooplankton to post-larvae biomass. We expect that larger zooplankton, such as the carnivorous Chaetognatha, would present higher δ15N values than Ostracoda (or other small-sized zooplankton), which feed on phytoplankton and detritus. Because we do not know the origin of the basal sources that support their biomass (pelagic or benthic), we cannot speculate about the potential differences in their δ13C values. Thus, if Ostracoda had lower δ15N values than the value estimated for zooplankton, the contribution of zooplankton to the S. bustamantei post-larvae would likely increase.

Food Items
Our data indicate that S. bustamantei post-larvae are secondary consumers and omnivorous during their pelagic phase. They feed on zooplankton, as reported for other Sicydiinae species [34,35], and on plant/macroalgae detritus. During recruitment, metamorphosis occurs and modifications to anatomical feeding structures lead to a change in the S. bustamantei diet, from a carnivorous fish feeding on plankton to an herbivorous fish feeding on the benthos [3,5,34]. This may explain the omnivory of this species during the post-larval phase. Although the number of samples analyzed was too small to draw firm conclusions about the foraging habitat of post-larvae, the fact that they showed high δ13C values so close to those of marine zooplankton and macroalgae indicates that these fish spent part of their life in the marine environment before moving to freshwater streams, as described for other Sicydiinae post-larvae [36]. Still, other studies have reported that the biomass of recruiting amphidromous fishes has an inshore signature typical of environments influenced by freshwater. This suggests that S. bustamantei post-larvae can be retained temporarily in the freshwater plumes of rivers while waiting for the appropriate conditions to start the return migration [37,38]. Large amounts of microplastics/microfibers (20.0-23.3%) were found inside the guts of S. bustamantei, along with zooplankton and vascular plant- and/or macroalgae-derived material (Table 1).
Unfortunately, large amounts of plastic litter lie on the beaches of São Tomé, some of which will break down into microplastics. The ingestion of microplastics by fish larvae has been associated with decreased growth rates; changes in feeding preferences, innate behavior, swimming behavior, and response to olfactory cues; and increased mortality [39,40]. Thus, plastic pollution may compound the deleterious effect of overfishing upon several populations of this and other marine species across the Gulf of Guinea. It has been reported globally that amphidromous fishes suffer many anthropogenic threats beyond overfishing, namely water abstraction, degradation and loss of suitable habitat and connectivity due to instream barriers, and pollution [1,7,12,14,15,41,42], with consequences for their physiology, reproduction, and migration patterns between freshwater and marine coastal areas [13]. In fact, Bell [7] considered these land-use threats more likely to cause population declines than overfishing. In the samples collected during both the wet and dry seasons, S. bustamantei post-larvae corresponded to 80% of the total biomass, while the remaining 20% included small crustaceans (19.4%) and other non-identified fish species (0.6%). As described for other countries, the goby-fry fishery in São Tomé and Príncipe is not selective and is not regulated; i.e., when local people find fish schools, they catch as much as they can. Small crustaceans (e.g., isopods and decapods) and the post-larvae and juveniles of other fish species are commonly found performing upstream migrations together with S. bustamantei post-larvae. This has been observed for other Sicydiinae species, and by-catch is often discarded during the goby-fry fishery [19,41].

Conclusions
Sicydium bustamantei is a secondary consumer with an omnivorous diet during the post-larval phase and is not exclusively carnivorous, as described for other species of the same genus. Additionally, large amounts of microplastics/microfibers were ingested by post-larvae. Plastic pollution, and not only overexploitation, may harm the conservation status of this species. Thus, the silent health risk that microplastic pollution may pose to humans through the consumption of S. bustamantei, together with the ecological and economic importance of this species, represents another compelling reason to undertake a critical long-term monitoring program to assess the conservation status of the species.
Range expansion of the alien red-eared slider Trachemys scripta (Thunberg in Schoepff, 1792) (Reptilia, Testudines) in Eastern Europe, with special reference to Latvia and Ukraine: An increasing number of thermophilic invasive species are spreading and becoming naturalized in Eastern Europe, at least partially due to recent climate change. This can be exemplified by the current expansion of the red-eared slider, Trachemys scripta, in Latvia and Ukraine. We collected 44 records of the species in Latvia and 79 in Ukraine. Two of the three subspecies have been found, T. s. elegans and T. s. scripta.

Introduction
Climate change in recent years has allowed more thermophilic alien species to naturalize and actively invade Eastern Europe (Pupins and Pupina 2011; Nekrasova 2013; Cerasoli et al. 2019; Kuybida et al. 2019; Nekrasova et al. 2019a, b, c, 2021a, b; Marushchak et al. 2021). One of the reasons for the emergence of new alien species is the trade in exotic animals and their uncontrolled release into nature. Freshwater turtles are among the most traded reptiles. Until the 1990s, the trade in freshwater turtles focused mostly on the American red-eared slider Trachemys scripta (Thunberg in Schoepff, 1792). In the period 1989-1997, more than 52 million red-eared sliders were exported from the USA (Telecky 2001). Today red-eared sliders are one of only two reptiles included in the list of the "World's Worst Invasive Alien Species" (Global Invasive Species Database 2021) and are considered one of the 100 worst invasive alien species in Europe (Scalera 2009). In Ukraine, several alien turtle species have been discovered: T. scripta, Testudo horsfieldii, Mauremys rivulata, M. caspica (Nekrasova 2013; Kukushkin et al. 2017) and, perhaps, Testudo graeca (Nekrasova and Tytar 2012). In Latvia, T. scripta, M. rivulata, M. caspica, Pelodiscus sinensis, and T. horsfieldii have been recorded (Pupins and Pupina 2011).

Among the noted species, T. scripta represents the greatest threat to local biodiversity because this species can transmit diseases to which native turtles are susceptible, occupy ecological niches similar to those of native turtles, compete for their food resources, displace them from their favored basking sites, interfere in breeding-courtship attempts between the autochthonous turtles, and overall reduce their survival (Cadi and Joly 2004; Pupins 2007; Semenov 2009; Cerasoli et al. 2019; Espindola et al. 2019; Pupins et al. 2019; Nekrasova et al. 2021b). According to the Handbook of global freshwater invasive species (Ficetola et al. 2012), pond sliders are recorded in Finland, Latvia and Lithuania (but not in Estonia), Poland, Slovakia, Hungary, Romania, Bulgaria, Russia and other countries (Cadi and Joly 2004; Semenov 2009; Pupins and Pupina 2011; Kukushkin et al. 2017; Kornilev et al. 2020). Numerous findings of exotic turtles continued to appear at the beginning of the 21st century in several regions of Ukraine: Odesa, Crimea, and Transcarpathia, areas next to the border with Hungary and Romania (Nekrasova 2013; Kurtyak and Kurtyak 2013; Kukushkin et al. 2017).

Recently, the red-eared slider has been found in a number of regions in Ukraine due to breeding in captivity and release into various wetlands (mostly urban; Nekrasova et al. 2021b). Despite the availability of some distribution maps at different scales (Rödder et al. 2009; Banha et al.
2017), there are no distribution maps concerning the species in Eastern Europe, particularly Latvia and Ukraine. Yet, for biodiversity conservation and species management purposes, it would be important to predict how far the red-eared slider can advance. The availability of spatially explicit maps of the risk of establishment may allow specific preventive measures to be set up in different regions, such as trade regulation or appropriate communication campaigns (Masin et al. 2014). In this study we attempted to summarize findings of the red-eared slider in outdoor settings in both Latvia and Ukraine and, by using a species distribution modeling (SDM) approach, to identify areas where the species could survive under current climatic conditions.

To explore the potential distribution of T. scripta in our study area, we employed Bayesian Additive Regression Trees (BART), a machine learning technique consisting of a Bayesian approach to Classification and Regression Trees (CART) that is capable of producing highly accurate predictions without overfitting to noise or to particular cases in the data. Models of this kind estimate the probability of a given output variable (a binary classification of habitat suitability or species presence) based on decision "trees" that split predictor variables with nested, binary rule-sets (Carlson 2020). Running SDMs with BARTs has recently been greatly facilitated by the development of an R package, "embarcadero". The algorithm computes habitat suitability values ranging from 0, for fully non-suitable habitat, to 1, for fully suitable habitat. Model performance was assessed using two measures of accuracy: the area under the receiver-operator curve (AUC) (Fielding and Bell 1997) and the true skill statistic (TSS) (Allouche et al. 2006).

As input, SDMs require georeferenced biodiversity observations and geographic layers of environmental information. Because the use of restricted data (similar to not capturing the species' full environmental range) strongly reduces the combinations of environmental conditions under which the models are calibrated and reduces the applicability of the model for predictive purposes (Pearson and Dawson 2003; Thuiller et al. 2004), we used the full set of European invasive records from the GBIF database (Trachemys scripta, GBIF 2021), updated by our personal field investigations. These occurrence points (n = 1,147) varied in spatial density due to variable sampling intensity. As a result, and to avoid overemphasizing heavily sampled areas, the BART algorithm selects points for model calibration using subsampling to reduce sampling bias and spatial autocorrelation, which would otherwise produce models of lower rather than higher quality (Beck et al. 2013). The niche for the species (not discriminating between subspecies) was described based on climate. We used 30 climate variables from the CliMond global dataset (Kriticos et al. 2014, see Table S3). This dataset consists of climatic variables in raster format covering temperature, rainfall, solar radiation, and seasonality conditions that reportedly influence reptile ranges (Martínez et al. 2020); following Araújo et al. (2006), a 10′ grid resolution was chosen. Because collinearity among environmental predictors increases uncertainty in model parameters and decreases statistical power, we used principal components analysis (PCA) in SAGA GIS (Conrad et al. 2015) to reduce collinearity (Petitpierre et al.
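The PCA step was run in SAGA GIS; a minimal sketch of the equivalent computation with the broken-stick criterion in R, assuming a hypothetical matrix `clim` of CliMond values (one row per grid cell, 30 bioclimatic columns):

```r
# Hypothetical matrix `clim`: one row per grid cell, 30 CliMond bioclim columns
pca <- prcomp(clim, center = TRUE, scale. = TRUE)

# Broken-stick criterion (Jackson 1993): keep components whose observed
# proportion of variance exceeds the broken-stick expectation
p   <- ncol(clim)
eig <- pca$sdev^2 / sum(pca$sdev^2)                  # observed proportions
bs  <- sapply(1:p, function(k) sum(1 / (k:p)) / p)   # broken-stick proportions
keep <- which(eig > bs)

scores <- pca$x[, keep]   # orthogonal predictors passed to the SDM
```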
2017), giving a new, simpler environmental space defined by fewer, fully orthogonal axes. The significance of the components was identified using the broken-stick method (Jackson 1993). PCA reduced collinearity among the 30 variables from the CliMond dataset (Table S3); the first four components were identified as significant, together accounting for almost 90% of the variation (according to standard techniques, https://www.climond.org/BioclimData.aspx, Kriticos et al. 2014, Tables S4-S5). To differentiate between suitable and non-suitable environments in terms of invasion risk, two thresholds were used: the minimum training presence (Pearson et al. 2007) and the more conservative one-percentile threshold. These are the thresholds at which all, or all but 1%, of the training presences are required to be included within the projected suitable environments, respectively. Contour lines depicting the threshold values separating suitable from unsuitable areas for the species were produced in SAGA GIS. Maps of habitat suitability in GeoTIFF format were processed and visualized in SAGA GIS.

Results and Discussion
We collected 123 records of T. scripta in Latvia and Ukraine (Figure 1; Tables S1-S2). Turtles were found mostly in stagnant water bodies. In Latvia, pond sliders were first found in 1999 (Table S1) and again in 2006, when six adult animals were recorded in Nitaure, a village in Amata municipality, where they had successfully over-wintered. In addition to the records of Pupins (2007) and Pupins and Pupina (2011), 11 new findings of T. s. scripta (Schoepff, 1792) and 33 of T. s. elegans (Wied-Neuwied, 1839) were added to our database (Figure 1; Table S1).

In Ukraine, the turtle was found in the drainage waters of Kyiv in the late 1990s, namely at the Bortnychi sewage water treatment plant (50.3837N, 30.6642E), where juveniles and adults were recorded (Table S2). Interestingly, at any time of the year various exotic species of fish and plants could also be found in the warm waters (about +19 to +20 °C) of this sewage water treatment plant, for instance the guppy (Poecilia reticulata Peters, 1859) (Nekrasova et al. 2021a, b).

Two of the three subspecies of T. scripta were found in Latvia and Ukraine: T. s. elegans and T. s. scripta. The more common one was T. s. elegans, accounting for 75% of the records in Latvia and 97.5% of the records in Ukraine. It is possible that the subspecies T. s. troostii also occurs in both study areas, but it is rather difficult to identify.

In Ukraine the red-eared slider is often kept in captivity and is a fairly popular pet. This is suggested by a simple Google Search query using the keywords "buy" and "red-eared slider" (in Ukrainian), which returned 1,350 results (4th of August 2021). Unfortunately, turtles are commonly released into the wild in nearby ponds and lakes, thus invading urban ecosystems. In the wintertime, turtles in these habitats usually do not survive, and this becomes noticeable in the spring. For instance, dead turtles were found on the 21st of March 2013 in ponds of Myrhorod (Poltava Region) and on the 13th of April 2015 in the fish ponds of Didorivka (a suburb of Kyiv).

Occasional records have been made of egg-laying and courtship behavior. In Latvia (Daugavpils), an individual was recorded laying eggs after successfully over-wintering in a fenced outdoor pool (Figure 2A); these eggs were fertilized, but the turtles did not hatch. Similarly, a female T.
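The embarcadero package wraps the dbarts implementation of BART; a minimal sketch of the model fit and the two thresholds, assuming hypothetical objects `pa` (1 for presences, 0 for background points) and `pcs` (the retained PCA scores at those points):

```r
library(dbarts)   # embarcadero builds on dbarts' BART implementation

# Hypothetical inputs: pa = presence/background labels, pcs = PCA predictors
fit <- bart(x.train = pcs, y.train = pa, keeptrees = TRUE)

# Posterior mean suitability: back-transform the probit-scale draws
suit <- colMeans(pnorm(fit$yhat.train))

# Minimum training presence: lowest suitability at any known presence;
# the one-percentile threshold drops the lowest 1% of presences instead
mtp <- min(suit[pa == 1])
p01 <- quantile(suit[pa == 1], 0.01)
suitable <- suit >= p01   # points flagged as climatically suitable
```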
scripta was observed laying eggs on the 28th of June 2017 in Mariupol (Donetsk Region) (personal communication, O. Lazarenko; Figure 2B; Table S2), but the fate of the clutch is unknown. Courtship behavior in Ukraine was recorded in the ponds of the Athletic Park in Odesa on the 21st of June 2020 (Figure 2C).

In urban ecosystems, T. scripta was found alongside a native species, Emys orbicularis (Linnaeus, 1758). In these ecosystems in southern Ukraine, T. scripta is numerically dominant over E. orbicularis. For instance, of the 80 turtles found on the 8th of May 2020 in the Athletic Park, 94% were T. scripta. Within this system of lakes in the park, koi fish (a domesticated variety of the common carp) are also found. These exotic fish, together with the red-eared slider, are assumed to have been successfully over-wintering here for at least 10 years. In Europe, strong competition occurs between T. scripta and E. orbicularis, since they occupy similar habitats and ecological niches (Cadi and Joly 2004). For Europe, habitat suitability maps showed a strong correlation (Pearson's r = 0.68) between the two species, meaning high chances of co-occurrence and the potential for interspecific competition (Pupins et al. 2019).

The PCA-derived variables were used for the modelling instead of the original set of environmental information (Tables S3-S5). In terms of discrimination accuracy, the BART model showed acceptable performance: AUC = 0.825 and TSS = 0.513 (Swets 1988; Jiménez-Valverde et al. 2011). Analyzing the models, we concluded that the minimum training presence logistic threshold, which sets the minimum requirements for the species' climate preferences, indicated all sites in Latvia and Ukraine as suitable in terms of the bioclimatic niche for the red-eared slider, although on average the habitat suitability (HS) conditions for the species are higher in Ukraine (HS 0.219 against HS 0.106 in Latvia), meaning the risk of invasion is potentially higher in Ukraine. Employing the more conservative one-percentile threshold shows that in Latvia the areas more suitable for the species are in the south and south-west of the country, whereas in Ukraine these are located primarily in the west and south. Moreover, this species successfully over-winters in the south of Ukraine. Therefore, specific preventive measures should be planned and undertaken here. These include both legal measures to control the pet trade and campaigns to explain the problems caused by imported turtles and to encourage people to change their attitudes towards nature and support biodiversity conservation (Ficetola et al. 2012; Masin et al. 2014).

Figure 1. Distribution of records of the red-eared slider in Latvia (LV, green dots) and Ukraine (UA, red dots); numbers correspond to information in Tables S1-S2 of the Supplementary material.

Figure 3. Predicted habitat suitability for the red-eared slider. Warmer colours indicate higher bioclimatic suitability for the species. The contour line in fuchsia depicts the one-percentile threshold: A, Latvia; B, Ukraine. Areas of the highest habitat suitability (> 0.3-0.5) are colored in red and areas of the lowest (< 0.2) in blue (SAGA GIS).
2022-02-15T16:02:20.464Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "c0202f6004a8b2d337c31025d8cdc018fd414c4e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3391/bir.2022.11.1.29", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "428c4677c452c672397eef379916e53ad164aa42", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
17477565
pes2o/s2orc
v3-fos-license
Evaluation of Distributed Intelligence on the Smart Card We describe challenges in the risk management of the smart card based electronic cash industry and a method to evaluate the effectiveness of distributed intelligence on the smart card. More specifically, we discuss the evaluation of the distributed intelligence function called "on-chip risk management" of the smart card for a global electronic cash payment application using micro dynamic simulation. Handling the uncertainty related to the future economic environment and to various potential counterfeit attack scenarios requires simulating such an environment in order to evaluate on-chip performance, and creating a realistic simulation of the electronic cash economy, the transaction environment, consumers, merchants, and banks is a challenge in itself. In addition, we show examples of the detection capability of off-chip, host-based counterfeit detection systems based on data sets generated by the micro dynamic simulation model.

INTRODUCTION

The smart card market is expanding rapidly as a result of its superior security, reliability, and capacity. Its ability to carry intelligent applications such as "access", "credit/debit", and "electronic cash" gives the smart card an ever widening range of uses: it provides distributed processing power, a computer in your wallet.

With dynamic authentication, the terminal generates some random data, known as a seed, and asks the smart card to encrypt it. On receipt of the encrypted data, the terminal decrypts it; if the decrypted data is the same as the seed, then the card is genuine. Dynamic authentication is only possible with smart cards, due to their ability to perform cryptography.
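As a minimal sketch of this challenge-response exchange, the following assumes a single AES key shared by card and terminal; a deployed scheme would typically use card-unique keys or public-key cryptography, and the class and method names here are illustrative only, not part of any card specification.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class SmartCard:
    """Toy card: holds a secret key inside the chip and answers challenges."""
    def __init__(self, key: bytes):
        self._key = key  # in a real card this key never leaves the chip

    def encrypt_challenge(self, seed: bytes) -> bytes:
        enc = Cipher(algorithms.AES(self._key), modes.ECB()).encryptor()
        return enc.update(seed) + enc.finalize()

class Terminal:
    """Toy terminal: issues a random seed and verifies the card's reply."""
    def __init__(self, key: bytes):
        self._key = key

    def authenticate(self, card: SmartCard) -> bool:
        seed = os.urandom(16)                 # one AES block of random data
        reply = card.encrypt_challenge(seed)
        dec = Cipher(algorithms.AES(self._key), modes.ECB()).decryptor()
        return dec.update(reply) + dec.finalize() == seed

shared_key = os.urandom(16)
print(Terminal(shared_key).authenticate(SmartCard(shared_key)))  # True: genuine card
```

A counterfeit card without the key cannot produce the correct encryption of a fresh random seed, which is why this check cannot be replayed the way a static magnetic-stripe credential can.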
As card industries move from magnetic stripe cards to smart cards, the ability to process information on the card itself increases drastically. With a magnetic stripe card, it is imperative to rely on the host system's intelligence to authorize transactions (e.g., credit/debit), since the card has no information processing capability of its own. With the smart card, the intelligence no longer has to be concentrated in the host system; it can be moved from the host to a more balanced combination of host and card.

DISTRIBUTED INTELLIGENCE ON SMART CARD AS RISK MANAGEMENT TOOL

Security and risk management are integral parts of the development and deployment of a "risk managed" smart card application for global electronic cash payment such as Mondex electronic cash. There are three critical components -- prevention, detection, and containment -- needed to achieve a balanced, risk managed smart card application. Security is primarily concerned with prevention; risk management is primarily concerned with detection and containment in the event that the security were to be broken. A discussion of the security can be found in [Maher 1997].

The objectives of smart card electronic cash risk management can be summarized as follows:
• To contain the economic risk exposure to a predetermined level, and
• To ensure the stability and continuity of the product.

One of the key economic risk exposures is due to "counterfeit" electronic currency. Among other things, the security and risk management are designed to address this threat head-on and minimize the impact of such attacks, while ensuring the stability and continuity of the product.

More specifically, to accomplish the smart card electronic cash risk management objectives, the risk management strategy stands on four pillars, one of which is micro dynamic simulation. Each pillar has its unique contribution to the objectives, but when they are balanced and combined they become a formidable structure on which to base the risk management strategy and accomplish those objectives. It may seem obvious, but prudential risk management is essential to the success of the product: it includes corporate governance and structural control, and it is the foundation onto which the rest of the risk management is built.

Micro dynamic simulation makes it possible to evaluate the effectiveness of the on-chip detection, the on-chip incident response, and the off-chip detection systems. It also generates data sets from which to create off-chip detection models. As risk management succeeds, real counterfeit transactions will not be available for study; the evaluation of new enhancements to on-chip functionality and the recalibration of off-chip detection models therefore have to come from the simulator, using real market inputs.

The paper is organized as follows. Section 2 describes the Mondex global electronic cash payment scheme to set the stage. Section 3 discusses distributed intelligence -- the on-chip risk management capability of the smart card -- as an example of such intelligence. Section 4 discusses the micro dynamic simulation. Section 5 discusses the quantification of the impact of counterfeiters' threat scenarios using the micro dynamic simulator. Section 6 discusses the effectiveness of off-chip, host system based counterfeit detection systems. Section 7 summarizes the discussion.

GLOBAL SMART CARD BASED ELECTRONIC CASH PRODUCT

A global smart card based electronic cash product such as Mondex electronic cash has the security and the risk management to prevent, detect, contain, and recover from potential counterfeit activities. It is designed to make the counterfeiter's "chain" of tasks as difficult as possible at every step of the way [Ezawa et al. 1998]. The product is designed for efficient electronic cash payment transactions: it performs purse (chip) to purse (chip) transactions without central authorization. It has many on-chip capabilities and features, such as physical security, cryptographic security, a purse class structure (i.e., it restricts the interactions of different types of purses), purse limits, on-chip risk management capability (e.g., the credit turnover limit), and migration. The purse class structure, purse limit, and credit turnover limit will be revisited in the following section. Ideally, an advanced smart card based electronic cash scheme, as a substitute for "real" money, should parallel the existing money supply and banking system.

DISTRIBUTED INTELLIGENCE -- ON-CHIP RISK MANAGEMENT

As we have already discussed, one of the fundamental strategies in smart card electronic cash risk management such as Mondex electronic cash is to economically exploit the on-chip data processing power of the smart card to the maximum extent. This allows risk management tasks to be performed on the chip autonomously, for each transaction, without external intervention. On-chip functionality in the "security" arena has been around for many years, but in the risk management arena it is a new and relatively unexplored field. The on-chip risk management capability is protected by the chip (it is tamper resistant); to disable the capability, an attacker has to pass through the layers of security of the chip.
One of the critical elements and advantages of the on-chip risk management capability is that it continues to function even under a complete physical security breakdown. It is true that the risk management functionality of a compromised chip will be disabled. But for the counterfeiters to benefit from their activities, i.e., to obtain economic gain, they need to interact with other, legitimate purses (cards), which still have active and functioning on-chip risk management capabilities unique to each purse. The wide range of this functionality is discussed in the next subsection, but it is a formidable task to pass all the screens without triggering some action from the on-chip risk management.

Lastly, it is more cost effective to invest in on-chip risk management functionality than in off-chip (host) risk management infrastructure performing the same function. Risk management must still invest in off-chip (host-based) detection, not to duplicate the on-chip functionality but to complement it; each has a unique contribution to make to the overall risk management. Off-chip risk management is discussed in [Ezawa et al. 1999].

DISTRIBUTED INTELLIGENCE PORTFOLIO -- ON-CHIP RISK MANAGEMENT PORTFOLIO

There are two primary methods for fraud and counterfeit detection in general: one measures the "velocity" of transactions, and the other compares transactions against a "statistical signature" of the purse. This is true for both on-chip and off-chip (i.e., host system based) detection. The "velocity" method, which monitors the amount and volume of transactions, is widely used in the telecommunications and financial industries to monitor potentially fraudulent transactions. The "statistical signature" method, which monitors transactions against past behavioral patterns, is more computationally intensive and requires more infrastructure support. It too is widely used in the telecommunications and financial industries, to monitor net bad debt as well as fraudulent transactions and accounts [Ezawa 1995, 1996]. Risk management can use both "velocity" and "statistical signature" methods in on-chip as well as off-chip risk management. The "credit turnover limit" is a good example of a "velocity" based method implemented as an on-chip risk management monitoring and detection capability.

There are two merchant segments in the simulated economy. The "Simulator" node allows us to define simulation properties such as duration, starting date, etc. The "MXICA" node represents the Certificate Authority and allows us to send C3 commands, "value creation", etc. The "Originator" node controls the circulation of the currency in the simulated territory (e.g., a country). The simulated diffusion of the counterfeit value, and the effectiveness with which it can be detected and contained, provide the critical information that allows us to quantify the threat scenario in question.

We have briefly described the micro dynamic simulator under development. This simulator can provide quantitative information with which to analyze the effectiveness of on- and off-chip risk management schemes. It will also be useful for recruiting new members and satisfying financial authorities, as well as existing members, by demonstrating and quantifying the security and risk management issues.

EVALUATION

We evaluated the above mentioned detection systems in the "Street Corner Counterfeit Value Distribution Threat Scenario" discussed in [Ezawa et al. 1998]. These are still preliminary results.
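Before turning to the threat scenario, the sketch below illustrates how a velocity-style credit turnover limit might behave on the chip. The paper does not specify the exact semantics, so the choices made here -- locking the purse on a breach, and resetting the counter only on an authorised issuer transaction -- are our assumptions, not the Mondex implementation.

```python
class Purse:
    """Toy purse applying a velocity-style credit turnover limit: the purse
    tracks cumulative value received and refuses further credits once the
    running total would exceed its limit, until an authorised reset."""

    def __init__(self, credit_turnover_limit: float):
        self.limit = credit_turnover_limit
        self.turnover = 0.0    # cumulative value credited so far (assumed semantics)
        self.balance = 0.0
        self.locked = False

    def receive(self, amount: float) -> bool:
        """On-chip check executed for every purse-to-purse credit."""
        if self.locked or self.turnover + amount > self.limit:
            self.locked = True   # assumed behaviour: stop transacting until reset
            return False
        self.turnover += amount
        self.balance += amount
        return True

    def issuer_reset(self) -> None:
        """Assumed: an authorised transaction (e.g., with a bank) resets the counter."""
        self.turnover = 0.0
        self.locked = False

purse = Purse(credit_turnover_limit=1000.0)
print(purse.receive(400.0))   # True  -- normal activity
print(purse.receive(700.0))   # False -- would breach the turnover limit
```

The point of the mechanism is that a counterfeiter pumping large amounts of value through legitimate purses trips the limit quickly, forcing contact with the issuing infrastructure where further checks can be applied.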
This counterfeit threat scenario assumes that the counterfeiters will sell counterfeit electronic cash at a discount to a fraudulent population, in exchange for "real" local currency. The fraudulent population is defined as one that would engage in such transactions knowingly and willingly. This population is not necessarily as loyal as the agents of the counterfeit organization, and the "secret" is bound to be leaked to law enforcement institutions or to the electronic cash issuing institution; the earlier analysis showed that this is quite a difficult scheme to carry out flawlessly. For the sake of the evaluation of the on-chip risk management capability, we assumed the following:
• The counterfeit organization has a well financed, well established worldwide network and a large number of dedicated agents in place.
• It successfully broke the security of the chip / purse application on the smart card, which required complete secrecy over the extended period of time during which the various tasks needed to break the security were performed.
• It created a counterfeit electronic cash application -- a "shrink wrap" product, a "golden goose" that can generate counterfeit electronic cash with flawless imitation of the electronic cash application (e.g., Mondex purse) functionality.
• It established counterfeit value distribution channels with no "informants".
• The counterfeiters and their agents can correctly identify the "fraudulent" population willing to buy counterfeit value at a discount, and they never make mistakes; if they approached a normal, honest person, he or she might inform the financial institution or the authorities.

COUNTERFEIT ATTACK SCENARIO

The simulation model was set to run for 180 days, with the counterfeit attack starting in the last 6 days. The length of the run was chosen so that the simulated transaction data would contain a significant amount of normal transactions. On the first day of the attack, April 1, 1998, the counterfeiters inject a very small amount of counterfeit value into the electronic cash economy to test the system.

RESULTS

In this section we show the effectiveness of the credit turnover limit only. Although the central command based security renewal and dynamic re-customization were also found to be very effective, their discussion is omitted for security reasons.

CURRENCY MONITORING SYSTEM

The objective of the currency monitoring system is to detect the presence of potential counterfeit value in circulation.

SUMMARY

We discussed the risk management of the smart card based electronic cash industry and a method to evaluate the effectiveness of the distributed intelligence function called "on-chip risk management" of the smart card for a global electronic cash payment application using micro dynamic simulation. We found that it is critical to evaluate the distributed intelligence capability quantitatively using micro dynamic simulation, and we demonstrated that one element of this distributed intelligence, the credit turnover limit, is very effective in detecting and containing counterfeit activities. We also showed examples of the detection capability of off-chip, host-based counterfeit detection systems based on data sets generated by the micro dynamic simulation model, and found them to be very effective.
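The mechanics of the currency monitoring system are not spelled out above, but one plausible reading is a reconciliation of circulating value against the value the Originator has issued, since legitimate purse-to-purse payments conserve the total while counterfeit injection does not. The toy run below follows the scenario's timeline (180 days, attack in the final six) under that assumption; all quantities, and the omniscient view of purse balances, are invented for illustration -- a real system would reconcile through bank transaction records.

```python
import random

random.seed(1)

N_PURSES, DAYS, ATTACK_START = 1000, 180, 175
purses = [0.0] * N_PURSES   # purse balances in the toy economy
issued = 0.0                # value legitimately created by the Originator

for day in range(1, DAYS + 1):
    # Normal activity: the Originator issues value to random purses.
    # Purse-to-purse payments conserve the total, so they are omitted here.
    for _ in range(20):
        purses[random.randrange(N_PURSES)] += 10.0
        issued += 10.0
    # Attack: counterfeit value is injected during the final six days.
    if day >= ATTACK_START:
        purses[random.randrange(N_PURSES)] += 500.0
    # Off-chip currency monitoring: value in circulation should never
    # exceed the value the Originator has issued.
    excess = sum(purses) - issued
    if excess > 0:
        print(f"day {day}: suspected counterfeit value = {excess:.0f}")
```

Under this reconciliation, any positive excess flags the presence of counterfeit value in the economy, although localizing which purses carry it requires the on-chip and statistical methods described earlier.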
2013-01-23T07:57:51.000Z
1999-07-30T00:00:00.000
{ "year": 2013, "sha1": "54101455030248f4f2abbdee62beba4c57622986", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "54101455030248f4f2abbdee62beba4c57622986", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }