On the lack of X-ray iron line reverberation in MCG-6-30-15: Implications for the black hole mass and accretion disk structure

We use the method of Press, Rybicki & Hewitt (1992) to search for time lags and time leads between different energy bands of the RXTE data for MCG-6-30-15. We tailor our search in order to probe any reverberation signatures of the fluorescent iron Kα line that is thought to arise from the inner regions of the black hole accretion disk. In essence, an optimal reconstruction algorithm is applied to the continuum band (2-4 keV) light curve which smooths out noise and interpolates across the data gaps. The reconstructed continuum band light curve can then be folded through trial transfer functions in an attempt to find lags or leads between the continuum band and the iron line band (5-7 keV). We find reduced fractional variability in the line band. The spectral analysis of Lee et al. (1999) reveals this to be due to a combination of an apparently constant iron line flux (at least on timescales of a few × 10^4 s) and flux-correlated changes in the photon index. We also find no evidence for iron line reverberation and exclude reverberation delays in the range 0.5-50 ks. This extends the conclusions of Lee et al. and suggests that the iron line flux remains constant on timescales as short as 0.5 ks. The large black hole mass (> 10^8 M⊙) naively suggested by the constancy of the iron line flux is rejected on other grounds. We suggest that the black hole in MCG-6-30-15 has a mass of M_BH ∼ 10^6-10^7 M⊙ and that changes in the ionization state of the disk may produce the puzzling spectral variability. Finally, it is found that the 8-15 keV band lags the 2-4 keV band by 50-100 s. This result is used to place constraints on the size and geometry of the Comptonizing medium responsible for the hard X-ray power law in this AGN.

INTRODUCTION

The X-rays from active galactic nuclei (AGN) are thought to originate from the innermost regions of an accretion disk around a central supermassive black hole. Thus, in principle, the study of these X-rays should allow one to probe the immediate environment of the accreting black hole as well as the exotic physics, including strong-field general relativity, that operates in this environment. In the past decade X-ray astronomy has begun to fulfill that promise. Both EXOSAT and Ginga discovered iron K-shell features (including the Kα fluorescent line of cold iron at 6.4 keV) in the X-ray spectra of Seyfert galaxies which were interpreted as 'reflection' of the primary X-ray continuum by cold, optically-thick material in the immediate vicinity of the black hole (Guilbert & Rees 1988; Lightman & White 1988; Nandra et al. 1989; Nandra, Pounds & Stewart 1990; Matsuoka et al. 1990). It was suggested that this cold reflecting material was the putative accretion disk of AGN models. With the launch of ASCA and the advent of medium resolution spectroscopy, the iron line in several objects was shown to be broad (∼ 80 000 km s⁻¹ FWZI) and skewed (Tanaka et al. 1995; Nandra et al. 1997). The overall line profiles are in good agreement with models for fluorescent line emission from the innermost regions of geometrically-thin black hole accretion disks (Fabian et al. 1989).
Such data allow us to address issues such as the location of the radius of marginal stability, the spin of the black hole, and the inclination distribution of various classes of AGN (see Reynolds 1999 and references therein for a review of these studies). In the current, RXTE era, we can now probe the iron line and Compton reflection hump in individual objects in some detail (e.g., MCG-5-23-16, Weaver et al. 1998; MCG-6-30-15, Lee et al. 1998, 1999a; NGC 5548, Chiang et al. 1999).

While these spectral studies have been successful, a complete picture of the AGN phenomenon is not possible without addressing the timing properties. Timing studies are important for two intertwined reasons. Firstly, AGN are inherently variable systems. In general, the variability timescale in a given object is seen to shorten as one considers higher frequency radiation. In the X-ray and γ-ray bands, dramatic variability has been seen in many Seyfert galaxies with doubling timescales of only a few minutes (e.g. see Reynolds et al. 1995). Although it is poorly understood to date, the nature of this violent variability is a vital component of any final AGN model. Careful characterization of the timing properties, as well as determining the observed spectral evolution during dramatic temporal events, is required if we are to understand this phenomenon.

Secondly, timing studies are needed to break certain degeneracies that exist in models which, to date, have only been constrained by purely spectral data. The spin of the black hole in MCG-6-30-15 provides an excellent example of such a degeneracy: by fitting the 'very-broad' state (Iwasawa et al. 1996) of the iron line in this object with models consisting of a thin, disk-hugging corona, Dabrowski et al. (1997) inferred that the black hole in this AGN must be almost maximally rotating, with a dimensionless spin parameter of a > 0.94. However, by including line emission from within the radius of marginal stability, Reynolds & Begelman (1997) showed that a geometry in which the X-ray source is at some height above the disk plane can produce the same line profile even if the black hole is completely non-rotating. While there are subtle spectral differences between the two scenarios (Young, Ross & Fabian 1998), the most obvious way of distinguishing these scenarios is through their timing properties. The Reynolds & Begelman (1997) geometry predicts substantial time delays between fluctuations in the primary power-law continuum and the responding fluctuations in the iron line. More generally, the reverberation characteristics of the iron line contain tremendous information on the mass and spin of the black hole as well as the geometry of the X-ray source (Stella 1990; Reynolds et al. 1999).

The observational situation is more complex.
Lee et al. (1999b) and Chiang et al. (1999) have analyzed extensive RXTE datasets for MCG-6-30-15 and NGC 5548, respectively, in order to study the timing properties and spectral variability. In both of these objects, the same pattern of spectral variability is seen. Firstly, the X-ray photon index displays flux-correlated changes in the sense that the source is softer when it is brighter. Secondly, and more surprisingly, the iron line flux was found to be constant over the timescales probed by these direct spectral studies (∼ 50-500 ks). As discussed by both sets of authors, these results are difficult to interpret in the framework of standard X-ray reflection models, since the breadth of these lines indicates that they originate from a small region. It appears that some feedback mechanism regulates the amount of iron line emission in order to produce an approximately constant iron line flux. Flux-correlated changes in the ionization state of the disk represent one such mechanism (we discuss this in more detail in Section 5 of this paper). Unless this feedback mechanism operates instantaneously, we might still expect variability of the iron line flux on short timescales.

Driven by these motivations, this paper addresses the problem of determining causal relationships between light curves in different X-ray bands, with particular emphasis on timescales shorter than those that can be probed by direct spectroscopy. In particular, we use the long RXTE observation of the bright Seyfert 1 galaxy MCG-6-30-15 reported by Lee et al. (1999a,b) and consider the relationship between the 2-4 keV band (hereafter called the continuum band) and the 5-7 keV band, which contains most of the iron line photons (hereafter called the line band). An important special case is one in which there is a linear transfer function relating one band to the other:

b(t) = ∫ Ψ(t − t′) a(t′) dt′,

where a(t) and b(t) are continuum and line band fluxes respectively, and Ψ is the transfer function. Such relationships between bands contain much of the important physical information, such as the reverberation characteristics of the iron line.

Mathematically, the linear transfer equation can be easily inverted using Fourier methods to obtain

Ψ̃(ω) = b̃(ω)/ã(ω),

where ã(ω) represents the Fourier transform of a(t). However, in real situations, a large number of regularly sampled measurements are required to obtain an accurate deconvolution using this simple method. More often, deconvolution is achieved using maximum entropy techniques or some other regularization method (Horne et al. 1991; Krolik et al. 1991). Another common approach (and one that is often used with less well sampled data) is to compute cross-correlation functions (CCFs), or some variant thereof which accounts for the finite and irregular sampling often encountered in real data. The discrete correlation function (DCF; Edelson & Krolik 1988) is one example of such a variant. Lee et al. (1999b) apply such methods to the observation of MCG-6-30-15 considered in this paper and detect both phase and time lags between RXTE bands (also see Nowak & Chiang 1999). While these methods are powerful, it can be difficult to separate subtle time leads/lags from the autocorrelation properties of the data.
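To make the idealized Fourier inversion concrete, here is a minimal sketch of the naive deconvolution Ψ̃(ω) = b̃(ω)/ã(ω). It assumes evenly sampled, gap-free light curves, exactly the condition that real data fail to meet; all names and the regularization constant are illustrative, not part of the paper's method.

```python
# Naive Fourier deconvolution of a linear transfer function.
import numpy as np

def estimate_transfer(a, b, dt, eps=1e-8):
    """Estimate Psi(t) from b = Psi * a via Psi~(w) = b~(w) / a~(w)."""
    a_w = np.fft.rfft(a)
    b_w = np.fft.rfft(b)
    psi_w = b_w / (a_w + eps)              # crude regularization of tiny amplitudes
    return np.fft.irfft(psi_w, n=len(a)) / dt

# Example: b is a copy of a delayed by 10 bins
rng = np.random.default_rng(0)
a = rng.standard_normal(1024)
b = np.roll(a, 10)
psi = estimate_transfer(a, b, dt=1.0)
print(np.argmax(psi))                      # ~10: the imposed delay is recovered
```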
Here, we take an alternative approach which is heavily based on the method of Press, Rybicki & Hewitt (1992; hereafter PRH92). In essence, we use the correlation properties of the continuum band data to reconstruct an optimal continuum light curve in which the data gaps have been interpolated. Most importantly, we also compute the expected deviation of the continuum flux from the interpolated curve. The reconstructed continuum band light curve is convolved with a trial transfer function and compared with the line band light curve in a χ² sense. We then examine changes in the χ² statistic as a function of the parameters that define the trial transfer function.

Section 2 recaps the PRH92 method. This is then applied to the RXTE data for MCG-6-30-15 in Section 3. The robustness and validity of our approach is demonstrated by applying this method to simulated data (Section 4). Section 5 draws together our results and discusses their implications for the nature of this source. In particular, we argue that the black hole in this AGN has a mass of only 10^6-10^7 M⊙. In order to explain the spectral variability, it is suggested that there are flux-correlated changes in the ionization state of the surface layers of the accretion disk. Section 6 presents a short summary of the results and relevant astrophysical implications.

The optimal reconstruction

The continuum band light curve is reconstructed from the data using the technique of PRH92. For completeness, this section summarizes their method. The reader who is primarily interested in the application of this method may skip to Section 3.

Suppose that the true flux of the source at time t is s(t), but we measure y(t) = s(t) + n(t), where n(t) is the noise in the measurement. In our case, the noise is Poisson in nature. Our knowledge of s(t) is further impeded by the fact that the measurement is only made at a finite number of times t_i, where i = 1, ..., N. We denote y(t_i) as y_i and refer to this as the continuum data vector.

We seek an optimal reconstruction of s(t) which is continuous in time, ŝ(t), such that ⟨[ŝ(t) − s(t)]²⟩ is minimized for all t. As usual, angle brackets denote the expectation value. We impose that ŝ(t) is linear in the data vector in the sense that

ŝ(t) = Σ_i q_i(t) y_i,

where the q_i(t) are a set of inverse response functions that are also continuous in time.

Assuming that the noise is uncorrelated with both s(t) and itself, PRH92 showed that eqn (3) can be minimized to yield

ŝ(t) = Σ_{i,j} φ_i(t) (T⁻¹)_{ij} y_j.   (5)

Here, T_{ij} = ⟨y_i y_j⟩ = ⟨s_i s_j⟩ + ⟨n_i n_j⟩ is the total covariance matrix. To keep the notation concise, PRH92 define the correlation statistics φ_i(t) ≡ ⟨s(t) s_i⟩ and σ_s² ≡ ⟨s(t)²⟩. These functions define what PRH92 call the 'covariance model'. The expected variance of the real signal about the optimal reconstruction in eqn (5) is then given by

⟨[ŝ(t) − s(t)]²⟩ = σ_s² − Σ_{i,j} φ_i(t) (T⁻¹)_{ij} φ_j(t).   (10)

Once the covariance model is known, eqns (5) and (10) define the optimal reconstruction of the continuum light curve together with a statistic measuring the expected deviation of the real signal from the reconstruction.
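A minimal sketch of the PRH92 estimator of eqns (5) and (10) follows. The exponential covariance model and all parameter values here are illustrative assumptions for demonstration, not the covariance model fitted in this paper; the estimator itself is the familiar Gaussian-process-style linear predictor.

```python
# PRH92-style optimal reconstruction: s_hat(t) = phi(t)^T T^{-1} y,
# var(t) = sigma_s^2 - phi(t)^T T^{-1} phi(t).
import numpy as np

def reconstruct(t_data, y, t_grid, sig_s=1.0, tau0=100.0, sig_n=0.1, mean=0.0):
    C = lambda dt: sig_s**2 * np.exp(-np.abs(dt) / tau0)     # assumed covariance model
    T = C(t_data[:, None] - t_data[None, :]) + sig_n**2 * np.eye(len(t_data))
    Tinv_y = np.linalg.solve(T, y - mean)
    phi = C(t_grid[:, None] - t_data[None, :])               # phi_i(t) = <s(t) y_i>
    s_hat = mean + phi @ Tinv_y
    var = np.maximum(sig_s**2
                     - np.sum(phi * np.linalg.solve(T, phi.T).T, axis=1), 0.0)
    return s_hat, np.sqrt(var)                               # reconstruction and its error
```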
The covariance model

Here, again, we follow the method of PRH92 to determine the covariance model for our continuum data. At this stage, we make the assumption that the underlying process is statistically stationary, so that

⟨s(t_i) s(t_j)⟩ = C(t_i − t_j),

where C(τ) is the autocorrelation function that we have to determine. This function is related to the first-order structure function V(τ) by

V(τ) = ½ ⟨[s(t + τ) − s(t)]²⟩ = C(0) − C(τ),

and V(τ) can be approximated by forming pair-wise estimates for all distinct pairs of data points in the continuum light curve, and then binning by the time lag of the pairs. We find that the analytic form given by eqns (13) and (14) fits the structure functions of this paper well. In fact, the reconstruction is fairly insensitive to the exact analytic form used to approximate the structure function. Making the reasonable assumption that C(τ) → s̄² as τ → ∞, our final expression for the autocorrelation function is

C(τ) = s̄² + V(∞) − V(τ).

In this Section, we apply the method outlined above to a long RXTE observation of the bright Seyfert 1 galaxy MCG-6-30-15.

The RXTE data

RXTE observed MCG-6-30-15 for approximately 7 × 10^5 s starting on 4-Aug-1997. We retrieved these data from the NASA-HEASARC public archive situated at the Goddard Space Flight Center. Our data reduction closely parallels that of Lee et al. (1999a), who have studied the spectral characteristics of this observation. Since, as mentioned in the introduction, we are interested in the soft X-ray continuum and the iron line band, the Proportional Counter Array (PCA) is the appropriate instrument for us to consider. Examining the housekeeping files for this observation reveals that Proportional Counter Units (PCUs) 3 and 4 suffer occasional breakdown and shut off. Hence, we do not consider data from these units and, instead, extracted STANDARD-2 data from PCUs 0-2. We applied fairly standard faint-source screening criteria to these data: the source must be at least 10° above the Earth's limb (ELV>10), the source must be located within 0.02° of the nominal pointing position (OFFSET<0.02), there must be at least three PCUs on (NUM_PCU_ON>2), at least 30 minutes must have passed since a passage through the South Atlantic Anomaly (TIME_SINCE_SAA>30), and the electron background must not be too high (ELECTRON0<0.1). After application of these screening criteria, approximately 3.5 × 10^5 s of 'good' data remain. From these data, 2-4 keV and 5-7 keV light curves were extracted using 64 s bins. We also extracted the 8-15 keV light curve, which we will use in Section 3.3.

The background was estimated using the L7-240 background models, which are appropriate for faint sources such as AGN. Background light curves were computed and subtracted from the measured light curves in order to form the final background-subtracted light curves that we shall use in our study. Figure 1 shows the continuum band (2-4 keV) light curve that results from this procedure. For clarity, the light curve shown in this figure has been binned with 256 s bins.

Searching for lags and leads

We now apply the procedure outlined in Section 2 to these light curves. To begin with, we must estimate the structure function for these data. Figure 2a shows a pair-wise estimate of the continuum band structure function obtained following the method of PRH92. This figure also shows our analytic approximation, which is given by eqns (13) and (14). Using this covariance model, the PRH92 reconstruction was applied to the continuum light curve using N = 5000 data points. A portion of the resulting reconstructed light curve is shown in Fig. 2b.
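A minimal sketch of the pair-wise structure-function estimate described above is given here. The bin edges are illustrative, and the subtraction of the measurement-noise contribution (which PRH92 also account for) is omitted for brevity.

```python
# Pair-wise first-order structure function estimate, binned in time lag.
import numpy as np

def structure_function(t, y, bins):
    i, j = np.triu_indices(len(t), k=1)            # all distinct pairs of points
    lag = np.abs(t[j] - t[i])
    v = 0.5 * (y[j] - y[i])**2                     # per-pair estimate of V(tau)
    which = np.digitize(lag, bins)
    V = np.array([v[which == k].mean() if np.any(which == k) else np.nan
                  for k in range(1, len(bins))])
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, V
```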
The next stage in the procedure is to convolve the reconstructed continuum band light curve with a trial transfer function and compare the result with the line band light curve in a χ² sense. We can then minimize the χ² statistic in order to constrain free parameters in the trial transfer function. We also minimize χ² over multiplicative and additive offsets between the continuum and line band light curves, i.e. we set

b_model(t) = B (ψ ∗ ŝ)(t) + K,

and minimize over B and K as well as the parameters describing the trial transfer function ψ. In this work, we choose two trial transfer functions. The first represents the case where some fraction f_tr of the line band flux is a delayed copy of the continuum band with a time delay t_tr:

ψ₁(t) = (1 − f_tr) δ(t) + f_tr δ(t − t_tr).

The second represents the case where some fraction f_tr of the line band flux is a delayed and smeared copy of the continuum band flux, where a Gaussian kernel is used:

ψ₂(t) = (1 − f_tr) δ(t) + [f_tr / √(2π σ_tr²)] exp[−(t − t_tr)² / (2σ_tr²)].

No extrapolations were performed during this procedure. In order to avoid extrapolating, the χ² statistic was calculated using a subset of data points. For the trial transfer function ψ₁, only data during times t_start + t_tr,max < t < t_end − t_tr,max were used to compute χ², where t_start and t_end are the times of the start and end of the reconstructed continuum light curve. For ψ₂, χ² is computed based upon data from times t_start + t_tr,max + 2σ_tr,max < t < t_end − t_tr,max − 2σ_tr,max.

Figure 3 shows the χ² surfaces and confidence contours once this procedure has been performed. When displaying the χ² surfaces, we plot log₁₀(χ² − χ²_min + 1) in order to highlight the topography of the surface near the global minimum. It can be seen that the minimum of the χ² surface corresponds to the two lines f_tr = 0 and t_tr = 0, i.e. no time-delayed component of the line band light curve is detected. Here we only show the results for ψ₁; the ψ₂ results are trivial (i.e. the χ² surface is completely flat) since the preferred solutions always have f_tr = 0. The best-fit values of the multiplicative and additive constants are B = 0.78 and K = 0.90.

The overall time delays between bands

By considering the f_tr = 1 slice through the χ² surface produced with trial transfer function ψ₁, we can examine overall lags and leads between energy bands. Examining the 2-4 keV and 5-7 keV light curves for MCG-6-30-15 in this way, we find that the χ² slice possesses a minimum at zero lag, i.e. we find no evidence for overall time lags or leads between the continuum and line bands down to 64 s, the bin size of the data. Performing the same procedure for the 2-4 keV and 8-15 keV light curves reveals a one-bin offset in the position of the minimum (Fig. 4), indicating that the 8-15 keV light curve is delayed by ∼ 50-100 s as compared with the 2-4 keV light curve. Lee et al. (1999b) have applied CCF methods to this RXTE dataset. By carefully comparing with simulations, they find evidence that the 7.5-10 keV band lags the lower energy bands with a phase delay of φ ∼ 0.6. They also find evidence that the hard band (10-20 keV) lags the softer bands with a time delay similar to that found in this work. Figure 5 shows the DCF for our 2-4 keV and 8-15 keV light curves (this is very similar to Fig. 17 of Lee et al. 1999b). A small time lag of 50-100 s between these two bands is evident. Thus, CCF methods and the optimal reconstruction method both suggest a time lag of 50-100 s between the 2-4 keV and 8-15 keV bands.
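A minimal sketch of the χ² scan with trial transfer function ψ₁ follows. It assumes the reconstructed continuum and the line band light curve are binned onto the same even grid, scans only non-negative lags, and solves for B and K by weighted linear least squares at each grid point; names and grids are illustrative.

```python
# chi^2 surface over (f_tr, t_tr) for psi_1:
#   model(t) = B * [(1 - f) a_hat(t) + f a_hat(t - t_tr)] + K
import numpy as np

def chi2_surface(a_hat, line, err, dt, f_grid, lag_grid):
    chi2 = np.empty((len(f_grid), len(lag_grid)))
    for ii, f in enumerate(f_grid):
        for jj, lag in enumerate(lag_grid):
            shift = int(round(lag / dt))
            conv = (1 - f) * a_hat[shift:] + f * a_hat[:len(a_hat) - shift]
            y, e = line[shift:], err[shift:]       # trim to avoid extrapolation
            X = np.column_stack([conv, np.ones_like(conv)])
            (B, K), *_ = np.linalg.lstsq(X / e[:, None], y / e, rcond=None)
            chi2[ii, jj] = np.sum(((B * conv + K - y) / e)**2)
    return chi2
```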
The meaning of an additive offset

Our fitting in Section 3.2 clearly reveals the need for an additive offset (i.e. a non-zero K value) between the line and continuum band light curves. In other words, the fractional variability about the mean level is less in the line band than it is in the continuum band.

The spectral analysis of Lee et al. (1999b) allows this behaviour to be understood in terms of the spectral phenomenology. Firstly, Lee et al. found that on timescales of a few × 10 ks, the iron line flux does not track the continuum flux and, instead, remains approximately constant. Secondly, it was found that there are flux-correlated changes in the photon index by as much as ∆Γ ≈ 0.2 in the sense that higher flux states are softer. Both of these spectral changes will tend to reduce variability in the line band as compared with the (softer) continuum band.

Constructing the simulated light curves

In order to assess the significance and robustness of the above results, this section describes the application of this method to simulations. We tailor our simulation to match the RXTE observation of MCG-6-30-15 as much as possible. EXOSAT showed that the high frequency fluctuations of MCG-6-30-15 possess a power spectrum of the form P(f) ∝ f^−1.36. We use this power spectrum with an additional low-frequency cutoff at f_c = 10^−6 Hz. We then make a (noiseless) simulated continuum light curve, F(t), by summing Fourier components of random phase between f_min = 10^−7 Hz and f_max = 1 Hz, i.e.

F(t) = Σ_f [P(f)]^{1/2} sin[2πf t + φ(f)],

where φ(f) is uniformly randomly distributed in the range 0 to 2π for each distinct value of f. Without loss of generality, we assume that the line band flux possesses the same mean normalization as the continuum band light curve. However, in order to mimic the situation found in Section 3 as closely as possible, we assume that there is an additive offset between the continuum band and line band light curves as well as the convolution with a transfer function. In other words, we compute a (noiseless) line-band light curve using the expression

G(t) = (1 − f) F(t) + (Ψ_{1,nzl} ∗ F)(t) + K.

Here, Ψ_{1,nzl} is the non-zero-lag component of our imposed simulated transfer function, for which we use a Gaussian:

Ψ_{1,nzl}(t) = [f / √(2πσ²)] exp[−(t − t₀)² / (2σ²)],

where f is the fraction of the continuum flux that is delayed, t₀ is the mean time delay, and σ is the temporal standard-width of the smearing. For concreteness, we set these parameters so as to crudely mimic the effect of iron line reverberation with a 10^4 s time delay; this value of f is approximately the fraction of the line-band flux which originates from the iron line. Our value of K is set to be similar to that found for MCG-6-30-15 above.

From these 'perfect' noiseless light curves, we formed Poisson-sampled noisy light curves assuming a mean count rate of 4 cps in both the continuum and line bands, and using 64 s bins. The data-gap structure of the real MCG-6-30-15 dataset was then imposed on the simulated light curves.

We now examine our realistic, simulated data in order to assess how well we can detect the existence of the imposed lag and recover the properties of Ψ_{1,nzl} using our method.

Extracting the lag from the simulations

We use the methods of Sections 2.1 and 2.2 to form an optimally reconstructed, evenly-sampled continuum light curve. The covariance model used is given by eqns (13) and (14). A total of N = 3000 simulated data points were used to form the reconstruction, which spans a simulated observation time of 400 000 s. A portion of the simulated dataset and its reconstruction are presented in Fig. 6.
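A minimal sketch of the random-phase light curve construction described above is shown here. The flat cutoff below f_c and the log spacing of the summed components (with a √df amplitude weight) are assumptions about details the text does not spell out.

```python
# Noiseless light curve from a power-law PSD with uniformly random phases.
import numpy as np

def powerlaw_lightcurve(t, f_min=1e-7, f_max=1.0, f_c=1e-6, slope=1.36, seed=1):
    rng = np.random.default_rng(seed)
    f = np.geomspace(f_min, f_max, 2000)          # log-spaced Fourier components
    df = np.gradient(f)
    P = np.maximum(f, f_c)**(-slope)              # P(f) ~ f^-1.36, flat below f_c
    amp = np.sqrt(P * df)                         # amplitude ~ sqrt(P(f) df)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    return amp @ np.sin(2 * np.pi * np.outer(f, t) + phi[:, None])

t = np.arange(0, 400_000, 64.0)                   # 64 s bins, 400 ks span
F = powerlaw_lightcurve(t)
```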
Note how well the reconstruction algorithm recovers the real signal during the times with data, and brackets the real signal during other times. Figure 7 presents the χ² surfaces and confidence contours that result from passing the simulated light curves through the trial transfer functions ψ₁ and ψ₂, including minimization over any additive offset between the continuum and line band light curves. Both trial transfer functions clearly detect the imposed lag, insofar as a deep and isolated hole is present in the χ² surface at approximately the right time delay, delay fraction and delay width. Note that the f_tr dimension, which has been suppressed in the ψ₂ plots, has a value of f_tr = 0.16 at the global minimum. This demonstrates the power of this technique for finding and characterizing subtle time lags or leads that are present in such data.

DISCUSSION

In order to bring structure to the discussion that follows, we will summarize the pertinent results from this paper.

1. We clearly see reduced fractional variability in the iron line band (5-7 keV) as compared with the continuum band (2-4 keV). This is the origin of the additive offset, K, that was introduced in Section 3. The spectral fitting results of Lee et al. (1999b) suggest that this is due to a combination of a constant iron line flux and flux-correlated changes in the photon index.

2. Our analysis finds no evidence for iron line reverberation effects. By running a number of simulations, we find that any reverberation time delays must be less than ∼ 500 s or greater than ∼ 50 ks. Together with the above result, this suggests an approximately constant iron line flux over these timescales. Thus, we can extend the work of Lee et al. (1999b) and infer a constant iron line on timescales down to 0.5 ks.

3. Any overall time lag between the 2-4 keV and 5-7 keV bands is less than ∼ 50 s. However, we do find that the 8-15 keV band is delayed with respect to the 2-4 keV band by 50-100 s. We can use this time delay to obtain a rough size scale for the Comptonizing cloud that is producing the hard X-rays. Assuming a coronal temperature of ∼ 100 keV, it takes approximately 3 inverse Compton scatterings for a photon to be boosted between the 2-4 keV and 8-15 keV bands. Thus, the mean free path of a photon is approximately 15-30 light-seconds. This is a lower limit on the size of the Comptonizing region.

As we will see, this combination of facts presents problems for current models.

Simple reflection models

Initially, let us discuss these results in the light of simple X-ray reflection models (e.g. George & Fabian 1991). Assuming that variations in the primary flux are not accompanied by gross changes in geometry, we expect to observe one of two cases. Firstly, if the light crossing time of the fluorescing part of the disk is shorter than the timescale being probed by the observation, an iron line with constant equivalent width will result (i.e. the iron line flux will track the flux of the illuminating primary X-ray source). On the other hand, if the light crossing time of the fluorescing disk is greater than the timescale being probed, a constant flux line will result.
Within the context of these simple reflection models, we are forced to conclude that the light crossing time of the fluorescing region is larger than ∼ 50 ks. Since the line is relativistically broad, most of the fluorescence occurs in the central r ∼ 20GM/c² of the disk. Setting the light crossing time of this region to be greater than 50 ks gives a black hole mass of M_BH ∼ 2 × 10^8 M⊙. Given such a large black hole mass, the accretion rate must be less than 1% of the Eddington rate in order to produce the observed luminosity of L_bol ∼ 10^44 erg s⁻¹ (Reynolds et al. 1997). Furthermore, the size of the X-ray emitting blobs must be small, r_blob/r_disk ∼ 10^−3, in order to produce the very small time delays seen between different bands. Despite being so small, these blobs must be at large distances above and below the accretion disk plane, or else one would still see iron line variability as a flaring blob illuminated the patch of disk directly beneath it.
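One way to recover the quoted mass scale, treating the light-crossing time as the diameter 2r/c of the fluorescing region with r = 20GM/c² (a minimal order-of-magnitude sketch, not the paper's exact calculation):

$$
t_{\rm cross} \simeq \frac{2r}{c} = \frac{40\,G M_{\rm BH}}{c^{3}}
\approx 2\times10^{-4}\left(\frac{M_{\rm BH}}{M_\odot}\right)\,{\rm s},
\qquad
t_{\rm cross} \gtrsim 5\times10^{4}\,{\rm s}
\;\Longrightarrow\;
M_{\rm BH} \gtrsim 2\times10^{8}\,M_\odot .
$$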
To date, there are no dynamical measurements of the black hole mass in MCG-6-30-15, and hence such a model does not explicitly contradict any data. However, there are several indirect arguments that lead us to reject the inference of a large black hole mass in MCG-6-30-15. An independent indicator of the black hole mass is possible by estimating the bulge mass of the host galaxy and then applying the bulge/hole mass relationship of Magorrian et al. (1998). The B-band magnitude of the S0 galaxy which hosts this Seyfert nucleus is approximately m_B = 13.7 (RC3 catalogue), and this is likely to be completely dominated by the bulge, since the nucleus is heavily reddened in the B-band and the galactic disk is very weak. Using a Hubble constant of H₀ = 65 km s⁻¹ Mpc⁻¹, the absolute B-band magnitude of the bulge is then M_B ≈ −19. Using the standard relations (Faber et al. 1997), the bulge mass is then M_bulge ∼ 3 × 10^9 M⊙. Finally, applying the Magorrian et al. (1998) scaling factor between bulge mass and black hole mass gives M_BH ∼ 1-2 × 10^7 M⊙, an order of magnitude smaller than the black hole mass estimate in the previous paragraph.

There are also X-ray constraints that suggest a black hole mass significantly smaller than 10^8 M⊙. MCG-6-30-15 has exhibited large-amplitude X-ray variability on timescales as short as 100 s. However, the dynamical timescale of the accretion disk where the bulk of the energy is released is t_dyn ∼ 10^5 (M/10^8 M⊙) s. Thus, if the black hole really is as massive as M_BH ∼ 2 × 10^8 M⊙, large-amplitude variability would be occurring on timescales as short as 10^−2 t_dyn. It is difficult to conceive of processes which would give such variability. The final X-ray argument against a 2 × 10^8 M⊙ black hole in MCG-6-30-15 comes from the power spectrum derived by Lee et al. (1999b) and Nowak & Chiang (1999). By comparing the power spectral density (PSD) of MCG-6-30-15 with that of NGC 5548 and Cygnus X-1, they estimate that the black hole in MCG-6-30-15 has a mass of M_BH ∼ 10^6 M⊙.

FIG. 7.-Results for the simulated light curves: χ² surfaces and confidence contours resulting from applying trial transfer functions ψ₁ and ψ₂ to the reconstructed continuum light curves and comparing with the line band light curve (allowing for an additive offset between the bands). Surfaces are plotted using log₁₀(χ² − χ²_min + 1) as the ordinate in order to display the topography of the region near the minimum. Contours are shown at the levels χ² − χ²_min = 2.3, 4.6, 9.2, 20, 50, 100, 200. The first three of these contours correspond to 1σ (68.3%), 90% and 99% confidence for two interesting parameters and are shown in bold. In both cases, the existence of a deep hole in χ² space demonstrates that the imposed lag has been clearly detected and its parameters recovered.

More complex scenarios

Since the application of simple X-ray reflection arguments led us to deduce an unacceptably large black hole mass, we must examine alternative avenues. Indeed, the spectral fitting of Lee et al. (1999b) forces us to consider complications beyond the simple reflection picture: in their spectral fitting, they found that the Compton reflection continuum fails to show the expected correlation with the iron line equivalent width (in fact, the two are anti-correlated; Lee et al. 1999b). Very similar behaviour is also seen in NGC 5548 (Chiang et al. 1999).

Ionization of the disk surface is one of the few physical phenomena that can (partially) decouple the strength of the Compton reflection continuum from the strength of the iron line. Matt, Fabian & Ross (1993) demonstrated that the iron emission line is more sensitive to ionization effects than the general form of the Compton reflection continuum. In other words, patches of the disk with certain (surface) ionization parameters can produce a Compton reflection continuum without producing appreciable iron fluorescence.

We use this fact to construct the following simple model. Let the X-ray flux illuminating the surface layers of the accretion disk be

F(r) ∝ F_X r^−β,

where F_X is the flux of the illuminating source. A variety of X-ray source geometries give β ≈ 3 at large radii, and β < 3 as one approaches the innermost parts of the disk. Now, the ionization parameter at the surface of the disk is given by

ξ(r) = 4π F(r)/n(r),

where n(r) is the density of the surface layers of the disk. We suppose that n(r) ∝ r^−γ. Hence, we have

ξ(r) ∝ F_X r^{γ−β}.

Standard disk models (Shakura & Sunyaev 1973) give γ ≈ 2 at large radii, and γ < 2 near the inner part of the disk. Now, suppose that there exists a critical ionization parameter ξ_crit above which there is no iron line produced. For reasonable values of β and γ, this gives a critical radius r_crit ∝ F_X^{1/(β−γ)} within which no iron line is produced. The total iron line flux expected from the object is then given by

F_line ∝ ∫_{r_crit}^∞ F(r) dr,

which is readily manipulated to give

F_line ∝ F_X^{(1−γ)/(β−γ)}.

For our canonical values of β and γ, this gives F_line ∝ F_X^{−1}. Thus, this simple model produces an iron line flux which is anti-correlated with the flux of the illuminating source. Provided a strong Compton reflection continuum can still originate from the ionized portions of the disk, this type of picture may explain the spectral behavior that we observe.

One simple prediction of this model is that the velocity width of the line profile gets smaller as the continuum flux increases (due to an outward migration in the inner radius of the line emitting region). Of course, the toy model presented above only captures the crudest aspects of the problem. Fully self-consistent ionized reflection models must be calculated (taking into account the vertical structure of the disk; e.g. see Nayakshin, Kazanas & Kallman 1999) and compared with the data in order to test whether the picture sketched here is reasonable or not.
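A quick numeric check of the toy-model scaling derived above (the integral form follows the reconstruction given in the text; the inner/outer radii, ξ_crit and FX values are arbitrary placeholders):

```python
# Verify F_line ~ F_X^((1-gamma)/(beta-gamma)), i.e. slope -1 for beta=3, gamma=2.
import numpy as np
from scipy.integrate import quad

beta, gamma = 3.0, 2.0

def line_flux(FX, xi_crit=1.0, r_out=1e6):
    r_crit = (FX / xi_crit)**(1.0 / (beta - gamma))   # xi(r_crit) = xi_crit
    return quad(lambda r: FX * r**(-beta), r_crit, r_out)[0]

FX = np.array([1.0, 2.0, 4.0, 8.0])
F_line = np.array([line_flux(f) for f in FX])
slope = np.polyfit(np.log(FX), np.log(F_line), 1)[0]
print(slope)   # ~ -1: the iron line flux is anti-correlated with F_X
```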
Even if global, flux-correlated changes in the ionization of the disk surface are responsible for the observed spectral changes, we would still expect reverberation signatures on short timescales. We have set upper limits of ∼ 500 s on the timescale of any reverberation delay. If the black hole mass is M_BH ∼ 1 × 10^7 M⊙, the light crossing time of the iron line producing region is ∼ 2000 s, and hence we need to infer a disk-hugging corona (with h/r ∼ 0.3) in order to be compatible with the reverberation limits. If, instead, the black hole is M_BH ∼ 1 × 10^6 M⊙, the light crossing time of the entire line producing region is only 200 s, and so the X-ray source geometry is unconstrained by our reverberation limits. The corresponding Eddington ratios are ∼ 10% and ∼ 100% for black hole masses of 10^7 M⊙ and 10^6 M⊙ respectively.

CONCLUSIONS

In this paper, we have used an interpolation method based upon that of PRH92 to search for temporal lags and leads between the 2-4 keV, 5-7 keV and 8-15 keV bands in a long RXTE observation of the bright Seyfert 1 galaxy MCG-6-30-15. In essence, we use the PRH92 method to compute an optimal reconstruction of the 2-4 keV light curve in which the data gaps are interpolated across. We then fold this reconstructed light curve through trial transfer functions and compare with data from the other bands in a χ² sense.

Our search for lags and leads was tailored to find reverberation effects in the iron line, which is thought to originate from the innermost regions of the black hole accretion disk. We find no evidence for any reverberation, and rule out reverberation delays in the range 0.5-50 ks. We can extend the conclusions of Lee et al. (1999b), and infer that the iron line possesses a constant flux on timescales as short as 500 s. We also find that the hard band (8-15 keV) is delayed by 50-100 s relative to the 2-4 keV band.

We attempt to put these various results together into a coherent picture for this object. The constancy of the iron line flux leads one to consider large black hole masses (in excess of 10^8 M⊙). However, such a large mass is found to be unacceptable from the standpoint of both X-ray variability constraints and constraints based on the mass of the galactic bulge. Indeed, using the bulge/hole scaling factor of Magorrian et al. (1998), we estimate that the hole has a mass of M_BH ∼ 1-2 × 10^7 M⊙. Given that this is a more reasonable mass estimate, some mechanism beyond the simple X-ray reflection model must be invoked to explain the temporal variability of the iron line and Compton reflection continuum. We suggest that flux-correlated changes in the average ionization state of the surface layers of the accretion disk may be such a mechanism. While we support this suggestion with a toy model, the plausibility of this suggestion can only be assessed once detailed modeling has been performed.

FIG. 1.-2-4 keV band, 3-PCU light curve for the 1997-Aug-4 RXTE observation of MCG-6-30-15. For display purposes, a bin size of 256 s has been used, although 64 s bins are used in the analysis presented in this paper.
FIG. 3.-Results for MCG-6-30-15: χ² surfaces and confidence contours resulting from applying trial transfer function ψ₁ to the reconstructed continuum light curves and comparing with the line band light curve. Surfaces are plotted using log₁₀(χ² − χ²_min + 1) as the ordinate in order to display the topography of the region near the minimum. Contours are shown at the levels χ² − χ²_min = 2.3, 4.6, 9.2, 20, 50, 100, 200. The first three of these contours correspond to 1σ (68.3%), 90% and 99% confidence for two interesting parameters and are shown in bold.

FIG. 5.-Discrete Correlation Function (DCF; Edelson & Krolik 1988) between our 2-4 keV and 8-15 keV light curves. Note the asymmetry in the DCF, which validates our detection of a ∼ 50-100 s time lag between these two bands using the PRH92 method.
A note on "A mixed integer programming formulation for multi floor layout" [African Journal of Business Management 3 (2009) 616-620]

In the aforesaid paper, some pages were inadvertently omitted and are corrected here. In this paper, the two-floor facility layout problem with unequal departmental areas in multi-bay environments is addressed. A mixed integer programming formulation is developed to find the optimal solution to the problem. This model determines the position and number of elevators while considering conflicting objectives simultaneously. The objectives are to minimize material handling cost and to maximize closeness rating. A memetic algorithm (MA) is designed to solve the problem, and it is compared with the corresponding genetic algorithm for large-sized test instances and with a commercial linear programming solver for small-sized test instances. Computational results proved the efficiency of the solution procedure.

Two common objectives are minimizing the total cost of material transportation and maximizing the total closeness rating between departments. In some cases, they are combined as (Meller and Gau, 1996):

min z = α Σ_i Σ_j f_ij c_ij d_ij − (1 − α) Σ_i Σ_j adj_ij x_ij,

where α is the weighting coefficient of the objective functions, f_ij is the material flow between departments i and j, c_ij is the cost of moving a unit of material flow over a unit distance between the two departments, adj_ij is the closeness ratio between the two departments, and x_ij is an indicator which is 1 when the departments have a common boundary and zero otherwise. Setting the parameter α has been studied by Meller and Gau (1997). Aiello et al. (2006) represented a two-stage multi-objective flexible-bay layout. A Genetic Algorithm (GA) was used to find Pareto-optimal solutions in the first stage, and the selection of an optimal solution was carried out by the Electre method in the second stage. The objectives considered were minimization of the material handling cost, maximization of the satisfaction of weighted adjacency, maximization of the satisfaction of distance requests, and maximization of the satisfaction of aspect ratio requests. Pierreval et al. (2003) described evolutionary approaches to the design of manufacturing systems. Chen and Sha (2005) presented a multi-objective heuristic which considered workflow, closeness rating, material-handling time and hazardous movement. Şahin and Türkbey (2008) proposed a simulated annealing algorithm to find Pareto solutions for multi-objective facility layout problems including total material handling cost and closeness rating. A qualitative and quantitative multi-objective approach to facility layout was developed by Peters and Yang (1997). Peer and Sharma (2008) considered material handling and closeness relationships in multi-goal facilities layout. Konak et al. (2006) conducted a survey on multi-objective optimization using genetic algorithms, and Loiola et al. (2007) provided a review paper on the quadratic assignment problem (QAP) which covered the multi-objective QAP.

In this paper, we consider both the multi-objective and the multi-floor issues. Nowadays, when it comes to the construction of a factory in an urban area, land is generally insufficient and expensive. The limitation of available horizontal space creates a need to use the vertical dimension of the workshop. It can then be relevant to locate the facilities on several floors (Drira et al., 2007). Meller and Bozer (1997) compared approaches to multi-floor facility layout.
Lee et al. (2005) used a GA for multi-floor layout which minimized the total cost of material transportation and the adjacency requirement between departments while satisfying constraints on the areas and aspect ratios of departments. A five-segmented chromosome represented the multi-floor facility layout. Many firms are likely to consider renovating or constructing multi-floor buildings, particularly in those cases where land is limited (Bozer and Meller, 1994). Matsuzaki et al. (1999) developed a heuristic for multi-floor facility layout considering the capacity of the elevator. Patsiatzis et al. (2002) presented a mixed integer linear formulation for the multi-floor facility layout problem. This work extended the single-floor model of Papageorgiou and Rotstein (1998).

In this paper, we formulate a multi-floor layout considering conflicting objectives. The objectives are commonly used in previous works and include minimizing material handling cost and maximizing closeness rating. Then, a memetic algorithm is developed to solve the problem.

Sets and indices

Set of cells in the block layout graph; coordinates of the centroid of department i on the second floor; d^x_ij: distance between the centroids of departments i and j in the x-axis direction on the first floor; d′^x_ij: distance between the centroids of departments i and j in the x-axis direction on the second floor; coordinates of the northeastern corner of department i; coordinates of the southwestern corner of department i; weights of the objective functions.

C. Assumptions

i. The coordinates of the southwestern corner of the facility are (0, 0).
ii. In the model description, the long side of the facility is along the x-axis direction, and bays are assumed to be vertically arranged within the facility.
iii. If a department is assigned to a bay, then the bay must be completely filled.
iv. If the aspect ratio is specified to control departmental shapes, then …

D. Problem formulation

In our paper, we extend their model with the following constraints. Constraints (1) to (8) linearize the absolute value term in the rectilinear distance function on the first and second floors. Constraints (9) and (10) state that each department is located in a bay. Constraints (11) to (33) state restrictions on the length and width of each department and determine the coordinates of each department. Constraints (34) to (44) determine which two departments can have a common boundary. Constraint (45) calculates the material handling cost if two departments are on the same floor. Constraints (46) to (50) determine the material handling cost between two departments if they are on different floors. The objectives were formulated in a weighted form using (56) and (57). Constraints (58) to (61) linearize the product of a continuous variable and an integer variable.

GA AND MA IMPLEMENTATION

Evolutionary algorithms have been applied to many fields of optimization, and it has been observed that the combination of evolutionary algorithms with problem-specific heuristics can lead to highly effective procedures. A memetic algorithm (MA) is a hybrid algorithm that augments a population-based search approach with a local search heuristic. The MA is similar to the genetic algorithm (GA). However, the GA is based on biological evolution, while the MA imitates cultural evolution, in the sense that memes can be modified during an individual's lifetime whereas genes cannot. Thus, MAs are more likely to improve the quality of an individual. This approach proved to be highly efficient for this problem when large-sized test instances were used.
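To make the GA/MA distinction concrete, here is a minimal sketch of a memetic loop: a GA whose offspring pass through local search before entering the population. It assumes a minimization problem, a crossover operator returning two children, and elitist survival; all operator names are placeholders, and the paper's chromosome additionally carries a second-floor segment and elevator bits.

```python
# Memetic algorithm skeleton: GA + local search on each offspring.
import random

def memetic_algorithm(init_pop, fitness, crossover, mutate, local_search,
                      generations=200, p_mut=0.1):
    pop = list(init_pop)
    for _ in range(generations):
        parents = [min(random.sample(pop, 3), key=fitness)   # tournament of size 3
                   for _ in range(len(pop))]
        children = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            for child in crossover(p1, p2):
                if random.random() < p_mut:
                    child = mutate(child)
                children.append(local_search(child, fitness))  # the "memetic" step
        pop = sorted(pop + children, key=fitness)[:len(pop)]   # elitist survival
    return min(pop, key=fitness)
```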
Chromosome definition

A flexible bay structure is used to place departments. The bay width is flexibly adjusted for the departments assigned to the bay. However, the emphasis is on the location of the bay rather than on the arrangement of the departments.

A chromosome is divided into three sections (Figure 1), two of which are composed of numbers that represent the departments for the first floor and the second one; there is no explicit indication of the breakpoints between bays. However, the sequentially arranged departments can be grouped if they meet the conditions for forming a bay when the areas of the departments are added sequentially from left to right. If the summed area is in the range of the designated bay size and the lengths and widths of the departments are within permissible intervals, then this set of departments can be grouped into a bay. Thus, a set of candidate breakpoints between each pair of bays can be determined. The third section of the chromosome contains the binary variables S1, S2 and S3, which determine the number and location of elevators. There can be one or two elevators. The mathematical model and the solution procedure determine the number of elevators after a trade-off between vertical material handling cost and elevator setup cost.

Parent selection

The first individuals are uniformly randomly generated, and the selection of the parents for the next generation is done through tournaments of size 3; consequently, the best individuals in a generation are more likely to move to the next generation.

Crossover

A modified partially matched crossover is used to prevent the generation of infeasible solutions; since the partially matched crossover method exchanges genes within a parent, feasible offspring are obtained. The integer numbers in a chromosome, which denote the departments, should not be duplicated after the crossover is finished, since each number is unique within the chromosome. After the exchange between the chromosomes, each chromosome is checked to see whether the same number occurs twice in the new chromosome. If there is a repeated number, then the repeated gene outside the crossed-over segment is selected and replaced with the first number from 1 to n which is absent from that chromosome. This is iterated until there is no repeated number. This method is easier to code and performs the crossover with fewer computations than the standard partially matched crossover (a sketch is given after the mutation description below). Figure 2 shows an example of the crossover method. After exchanging genes between the two chromosomes, in child 1 the first and third genes (number 5) are the same. So, the first gene, which is outside the crossed-over segment, must be replaced with 4, the first number from 1 to n which does not already exist within child 1. The same method is applied to child 2.

Mutation

A swap mutation operator is used for the mutation. Two uniformly randomly selected genes are swapped to generate a new chromosome. As with the crossover operator, the gene that represents each department is unique in the chromosome and should not be duplicated. The standard mutation operator chooses one of the genes in the chromosome and changes its value, but in this problem the chromosome would become an infeasible representation, because the new gene would be either one of the integer numbers already in the chromosome or a number that does not stand for one of the departments.
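The sketch below implements the repair rule described above for a single permutation segment: swap a random segment between two parents, then replace each duplicated gene outside the segment with the smallest absent department number. The segment bounds are chosen randomly and the example chromosomes are illustrative, not taken from the paper.

```python
# Modified partially matched crossover with duplicate repair.
import random

def modified_pmx(p1, p2):
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    c1, c2 = p1[:], p2[:]
    c1[a:b+1], c2[a:b+1] = p2[a:b+1], p1[a:b+1]        # exchange the segment
    for child in (c1, c2):
        inside = set(child[a:b+1])
        missing = iter(sorted(set(range(1, n + 1)) - set(child)))
        for i in list(range(0, a)) + list(range(b + 1, n)):
            if child[i] in inside:                      # repeated gene outside segment
                child[i] = next(missing)                # smallest absent department
    return c1, c2

print(modified_pmx([4, 1, 3, 5, 9, 6, 2, 8, 7], [8, 7, 2, 1, 4, 3, 9, 5, 6]))
```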
Local search

A simple 2-opt algorithm, also known as a pairwise-interchange heuristic, was used to enhance the GA into an MA. The 2-opt algorithm considers only two departments at a time for exchange, and the algorithm discards the previous best solution whenever a better solution is found.

Figure 1. Chromosome structure (schematic): the first-floor department sequence (e.g., 4 1 3 5 9 6 …, grouped into Bay 1, Bay 2, …), the second-floor department sequence (8 7 2 …, grouped into Bay 1′, …), and the elevator binary variables S1, S2, S3.

Computational results

The evolutionary algorithm was tested for different problem sizes. For small-sized test instances, the mathematical model yielded the optimal solution in reasonable time using Lingo 8.0. The algorithm obtained the optimal solution to some small-sized test instances. For large-sized test instances, the combined SA and MA was compared with the combination of SA and the corresponding GA, and proved to yield better solutions, though with more CPU runtime. The results proved that the designed algorithm is quite efficient in terms of runtime and quality of solutions.

CONCLUSION

In this paper, a multi-objective mixed integer linear programming model was developed to find the optimal solution to the multi-floor facility layout problem with unequal departmental areas in multi-bay environments where the bays are connected at one or two ends by an inter-bay material handling system. Also, a memetic algorithm was designed to solve large-sized test problems, and it obtained the optimal solution to some small-sized test instances. It proved to be highly efficient after a comparison with the corresponding genetic algorithm for large-sized test instances and with the mixed integer formulation for small-sized test instances.

Nomenclature: w_k: width (the length in the x-axis direction) of bay k on the first floor; w′_k: width of bay k on the second floor; w¹_ik: width of department i in bay k on the first floor; w²_ik: width of department i in bay k on the second floor; l^y_i: height (the length in the y-axis direction) of department i on the first floor; l′^y_i: height of department i on the second floor; coordinates of the centroid of department i on the first floor; d^y_ij: distance between the centroids of departments i and j in the y-axis direction on the first floor; d′^y_ij: distance between the centroids of departments i and j in the y-axis direction on the second floor; h_ik: height of department i on the first floor; h′_ik: height of department i on the second floor; n: number of departments; W: width of the facility along the x-axis; H: width of the facility along the y-axis; f_ij: amount of material flow between departments i and j; c_ij: amount of material cost between departments i and j if they are on different floors; adj_ij: adjacency ratio between departments i and j.

Figure 2. An example to illustrate the crossover method.
Deformation of rutting due to temperature change from recycle of hot-mix asphalt with crumb rubber

The cover layer of a pavement can function as a structural or a non-structural layer. Improvement of road surface conditions is done by overlaying with additional layers above it or by stripping the surface layers of the road. Reclaimed Asphalt Pavement (RAP), the product of asphalt stripping, creates a new problem of aggregate-asphalt waste. Utilization of material from the pavement surface layer can reduce environmental pollution and can save maintenance cost. Research has been done by determining the asphalt content and aggregate gradation of the stripped asphalt through an extraction process. Modified asphalt, Asbuton Retona Blend-55 (Asbuton-R), and fresh aggregates were added to obtain an aggregate-bitumen mixture meeting the specification of the Asphalt Concrete Wearing Course (ACWC) mixture type. Crumb Rubber (CR) from used tire rubber was added to the new aggregate and mixed with the old asphalt mixture, and then the modified asphalt Asbuton-R was added. Crumb rubber (CR) contents of 0.5%, 1%, 1.5% and 3% were added for each variation of asphalt content of 1.5%, 2% and 4%. The resulting recycled aggregate-asphalt mixture was tested with standard Marshall and Marshall immersion tests at the optimum Asbuton-R content. The deformation resistance of the pavement mixture to weather changes was simulated using a Wheel Tracking Machine (WTM) at 26 °C, 30 °C and 35 °C. The results showed that a RAP content of more than 80% can still be utilized in the recycling process. The resistance of the asphalt pavement mixture to rutting deformation due to temperature change showed an increase with the addition of 1.5% of the optimum Asbuton-R content.

Introduction

On roads that receive relatively heavy traffic loads, road distress occurs frequently. Road deterioration is usually repaired by an overlay; this approach makes the pavement thick and leaves the structure prone to damage. Road pavement recycling technology has become one of the prospective alternatives. The damaged paved layer is scraped off and crushed using a recycling machine. This scraped material is then called Reclaimed Asphalt Pavement (RAP). Some agencies place restrictions on the percentage of RAP used, ranging from 10 to 30% in their regulations, due to concerns over pavement performance [1]. Other studies have even maximized the use of RAP: warm-mix recycled asphalt specimens were prepared with 100% RAP and different emulsion contents [2]. But not all applications of RAP can be maximized; the outcome depends on the type of mixture of recycled materials and the amount of material, since factors such as the source and replacement binding rates, the aging rate of the recycled materials, and their proportion in the mixture influence the recycling dose [3]. RAP can even be used for the sub-base, and it has been observed that California Bearing Ratio (CBR) values of 100% RAP are in the range of 8-20%. It is observed from a previous study that CBR values of 50% RAP + 50% crushed stone aggregate with 2% cement are in excess of 100% [4]. This study investigated 20-40% RAP content with added crumb rubber (CR) against rutting deformation under temperature change. Deformation was tested using a wheel tracking machine (WTM). This recycling technology offers various advantages such as saving aggregate and asphalt needs, and reducing fuel requirements and gas emissions.
Regarding the utilization of RAP, this potential trend has become a major attractive solution to conserve energy and keep environmental sustainability [5]. Used tire waste in Indonesia is predicted to continue to grow in line with the growth of vehicle ownership, which is marked by the increase in vehicle volume. The crumb rubber (CR) produced from scrap tires is generally used for the construction of asphaltic pavements. Usually crumb rubber is used in two ways for asphaltic pavement applications: (i) the dry process, where crumb rubber is mixed with the aggregates first and then with the asphalt binder for production of mixes; and (ii) the wet process, where crumb rubber is mixed with the binder first, like a polymer modified binder, and then used for preparation of mixes [6]. The use of CR as an additive material in asphalt mixtures cannot be standardized, because the microstructure of rubber asphalt may not be stable at elevated temperature, which could lead to separation of crumb and asphalt during storage [7]. The development of new technologies was always focused on increasing cost-effectiveness, but recently, along with the economic effects, environmental issues have been considered, such as reducing the negative impact on the environment in the production of materials [8].

This research aims to determine the performance of recycled hot-mix asphalt against the repeated wheel loads of vehicles under temperature change, where the recycled hot-mix asphalt has crumb rubber added as an additive. It examines the contribution of the recycled hot-mix asphalt and additives with regard to the effect on rutting deformation. The use of additives is also intended to provide an asphalt mixture that is environmentally friendly, especially with respect to temperature change. A series of laboratory tests was conducted in sequence to determine the characteristics of the RAP.

Material and Method

The RAP material was taken from the Jakarta Outer Ring Road-S Section (JORR-S) toll road segment in August 2017. The average maximum grain size is about 9.5-12.5 mm. The asphalt content of the RAP was determined by extraction using a reflux apparatus, as described in table 1. The aggregate sieve analysis was then performed using a sieve shaker, with the results described in table 2.

Mix design and fresh aggregate adding

The addition of fresh aggregate is done to improve the gradation of the RAP so that it approaches the aggregate requirements for hot asphalt mixtures based on the Indonesian standard. The addition is done so that the gradation curve lies below the maximum standard and above the minimum standard. For RAP weighing 1000 g, the following additions were obtained: 5 g of aggregate passing the 12.5 mm sieve; 90 g passing the 9.5 mm sieve; 10 g passing the 2.36 mm sieve; and 75 g passing the 0.075 mm sieve. The total weight of the RAP aggregate after adding fresh aggregate is 1123.3 g. More details related to the design and the addition of aggregate are given in the graph in figure 1.

Buton Natural Asphalt (Asbuton)

BNA is produced by refining Buton Island rock asphalt to separate minerals and increase the bitumen content from 13-20% to 55-60% [9]. The BNA-R was used as an additive or modifier to modify the properties of the base bitumen.
The BNA-R was pulverized to a smaller particle size and then mixed with Pen 60/70 asphalt at a temperature of 140 °C while stirring at a speed of approximately 2000 rpm. The addition of BNA-R reduced penetration and increased the softening point and viscosity [10]. In this study, Asbuton Retona Blend-55 (Asbuton-R) was used.

Experimental Design and Test Procedures Testing in this study was conducted using RAP and CR with Asbuton-R of penetration grade 60. The different forms of HMA were analyzed using the standard Marshall and WTM tests. Two types of samples were tested: RAP, and modified RAP+CR asphalt concrete with an added asphalt content of 1.5% and a CR content of 1%. Marshall Stability The Marshall test was performed under standard and immersion conditions. The Marshall test was performed on cylindrical specimens, 102 mm in diameter and 64 mm in height, at a temperature of 60 °C and a loading rate of 51 mm per minute. The Marshall test arrangement is shown in figure 2, and the WTM test results are depicted in figure 3. Figure 3 shows a RAP briquette sample, to which 20% fresh aggregate and 1.5% Asbuton were added, after WTM testing, and figure 4 shows a modified RAP specimen, to which 1% CR was added, after a WTM test was performed.

Results and discussion The curves illustrated in figure 5 are the WTM results at test temperatures of 26 °C, 30 °C, and 35 °C for the modified hot-mix RAP and the modified hot-mix RAP with CR addition. They show clearly that the modified RAP with 1% CR and 1.5% Asbuton has relatively good resistance to rutting deformation at 26 °C and 30 °C compared with the mixture without CR. However, when the test temperature is raised to 35 °C, the sample performance decreases drastically in the WTM test. Many authors consider that the development of permanent strain over loading cycles is divided into primary, secondary, and tertiary zones [11]. The VESYS model was developed for rutting prediction and can be written as

$\varepsilon_p(N) / \varepsilon_r = \mu N^{-\alpha}$

where $\varepsilon_p(N)$ is the permanent or plastic strain due to a single load application, i.e., at the Nth application; $\varepsilon_r$ is the elastic or resilient strain, generally assumed to be independent of the number of load repetitions $N$; $\mu$ is the permanent deformation parameter representing the constant of proportionality between the permanent strain and the elastic strain; and $\alpha$ is a permanent deformation parameter indicating the rate of decrease in permanent strain as the number of load applications increases [11][10]. With an empirical approach relating $d_i / N_i$ to $N_i$, a function is obtained that can be written in power-law form as

$d_i / N_i = a N_i^{\,b}$

The coefficients "a" and "b" are the parameters of the recycled HMA characteristics and can be identified through a series of WTM tests. The equations generated from the wheel tracking tests, summarized in figures 6, 7, and 8, show that increasing the temperature in the wheel tracking test increases the rutting deformation. Likewise, the addition of CR to the RAP mixture relatively improves the pavement quality, so that the rutting deformation that arises becomes smaller. From the results of the above tests it can be concluded that recycled hot-mix asphalt is not recommended for application at high temperatures, and that the addition of CR does not help significantly when the mixture is used at relatively high temperatures.

Conclusions The use of additives to improve the performance of recycled hot-mix asphalt greatly affects the deformation resistance of the hot-mix asphalt (HMA) mixture. Asbuton Retona Blend-55, made from semi-extracted Asbuton, and crumb rubber can be used as modifiers of the base bitumen of the RAP mixture.
In general, RAP with bitumen modification (1.5% Asbuton and 1.0% CR) withstands repeated loads better at low temperatures; high-temperature use is not recommended for this mixture. Further research is needed so that the permanent-deformation performance of recycled hot-mix asphalt matches or exceeds that of conventional HMA.
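As an illustration of how the coefficients a and b of the power-law relation above might be identified from wheel-tracking data, the sketch below fits $d_i/N_i = a N_i^{\,b}$ by linear regression in log-log space. It is a minimal example using assumed, hypothetical rut-depth data, not the authors' analysis or their measured values.

```python
import numpy as np

# Hypothetical WTM data: number of passes and incremental rut depth per pass (mm).
N = np.array([100, 300, 1000, 3000, 10000], dtype=float)
d_per_N = np.array([2.0e-3, 8.5e-4, 3.2e-4, 1.3e-4, 5.0e-5])  # d_i / N_i

# Fit d/N = a * N**b by least squares on log(d/N) = log(a) + b*log(N).
b, log_a = np.polyfit(np.log(N), np.log(d_per_N), 1)
a = np.exp(log_a)
print(f"a = {a:.3e}, b = {b:.3f}")  # b is expected to be negative: the rate decreases with N

# Predicted incremental deformation at an unseen number of passes:
print("predicted d/N at N = 5000:", a * 5000.0**b)
```

Fitting in log-log space turns the power law into a straight line, so the slope estimates b and the intercept estimates log(a); repeating the fit at each test temperature would reproduce the per-temperature equations summarized in figures 6-8.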
Hadwiger number always upper bounds the chromatic number -- 1852-1943 -- A far-reaching generalisation of Guthrie's postulate

In a simple graph $G$, we prove that the \textit{Hadwiger number}, $h(G)$, of the given graph $G$ always upper bounds the \textit{chromatic number}, $\chi(G)$, of the given graph $G$, that is, $\chi(G) \leq h(G)$. This simply stated problem, posed by Hugo Hadwiger in 1943, is one of the fundamental questions in combinatorial mathematics. Consequently, it independently verifies the most famous Four-Color Theorem: the case $h(G) = 4$ is equivalent to the Four-Color Theorem, that is, every planar graph is $4$-colourable. In our novel approach, we use algebraic settings over a finite field $\mathbb{Z}_p$. The algebraic setting, in essence, begins with the complete graph with $h(G)$ vertices (which is a minor, $\mathcal{M}$, of the given graph $G$) and iteratively extends to the simple graph $G$. This conjecture has remained elusive, owing to a lack of understanding of the interdependence of the key steps, particularly the importance of Lemma 3.1, Lemma 3.2, Lemma 3.3, and Lemma 3.6 in Section 3.

The Hadwiger number, $h(G)$, is the number of vertices in the largest complete graph to which the simple graph $G$ can be contracted. The chromatic number, $\chi(G)$, is the minimum number of colors needed for a vertex coloring of a simple graph $G$ (a vertex coloring of a graph $G$ is a map $f : V(G) \to K$, where $K$ is a set of colors, such that no adjacent vertices are assigned the same color). In algebraic terms, Hadwiger's conjecture can be stated as follows. In order to prove that Claim 3.2 is true (which will be proved in Corollary 3.4), we have to prove that Hypothesis 3.1 is not true (which will be proved in Corollary 3.3). In order to prove that Hypothesis 3.1 is not true, we prove Claim 3.3 and Claim 3.4. We only consider the following two elementary operations, because removing a vertex is the same as removing all edges incident to that vertex and deleting the isolated vertex: 1. Removal of an edge. 2. Contraction of an edge. Most importantly, we assume that each elementary operation will remove or contract at most one edge, and that the resulting graph will be a simple graph (any isolated vertex will be deleted). Consider a graph $G = (V(G), E(G))$ with $n$ vertices, that is, $V(G) = \{v_1, v_2, \ldots, v_n\}$. Let $G_1$ and $G_2$ be simple graphs. If we say $G_1 = G_2$, then $V(G_1) = V(G_2)$ and $E(G_1) = E(G_2)$. Let $\wr_i$ denote an elementary operation. Let $\wr_1 < \wr_2 < \ldots < \wr_q < \wr_{q+1}$ be a sequence of $q+1$ elementary operations performed on the graph $G$ to obtain the minor of the graph $G$, that is, $\mathcal{M}$. The following overview summarizes the setting and gives a bird's-eye view of the paper's presentation: • The $t$-color set is $K = \{1, 2, \ldots, t-1, t\}$; $p > 2\Delta^2 n$ is a prime number, where $\Delta$ is the maximum degree of the graph $G$ with $n$ vertices. For $q+1 \geq i \geq 0$, proving that a simple graph $M_i$ is $t$-colorable is exactly equivalent to proving that $P'_i(v) \equiv 0 \bmod p$. • Interweaving relations among the polynomials and $K(v)$ defined in Section 3 are shown here. WLOG, we assume that $e = v_s v_b$ is the edge that is either contracted or deleted by an elementary operation $\wr_i$ on the simple graph $M_{i-1}$ to obtain the simple graph $M_i$. • In light of the above interwoven relations, it is apparent that just because there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K'$ ($1 \leq j \leq n$) such that $P_{i-1}(\alpha) \equiv 0 \bmod p$, it does not necessarily imply that there exists a $\beta$ with $\beta_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_{i-1}(\beta) \equiv 0 \bmod p$. Let $M_0 = G < M_1 < M_2 < \ldots <$
$M_q < \mathcal{M} = M_{q+1}$ denote a sequence of graphs obtained from $G$ by the sequence of elementary operations $\wr_1 < \wr_2 < \ldots < \wr_q < \wr_{q+1}$; note that each graph $M_i$ ($q+1 \geq i \geq 1$) is a simple graph obtained by performing an elementary operation $\wr_i$ on the simple graph $M_{i-1}$, and that the sequence of operations and graphs is ordered with respect to the subscript. In other words, to obtain a simple graph $M_i$ (which is finite, connected, and without loops and multiple edges), elementary operations are performed as follows; here $N(v_c)$ denotes the set of vertices adjacent to the vertex $v_c$ in $G$. We use the notation $G/e$ to represent edge contraction, which results in a simple graph, and $G \setminus e$ to represent edge deletion (see Figure 3 and Figure 4 for examples), which results in a simple graph or the union of two simple graphs. Suppose $e = v_s v_b$ is an edge that is contracted in the simple graph $G$. Without loss of generality, we choose the vertex $v_s$ of the graph $G$ for isolation and deletion. Then $G/e$ denotes the simple graph obtained after an edge-contraction operation that deletes all edges incident to $v_s$ and the isolated vertex $v_s$, and adds new edges such that the vertex $v_b$ is adjacent to all the vertices in $N(v_s)$. Let $p$ ($> 2\Delta^2 n$) be a prime number, and let $\mathbb{Z}_p$ be the finite field of order $p$. Let $F(v_1, v_2, \ldots, v_r) \in \mathbb{Z}_p[v_1, v_2, \ldots, v_r]$ denote a polynomial in $v_1, v_2, \ldots, v_r$ over $\mathbb{Z}_p$. Further, by Fermat's theorem, we know that $x^p \equiv x \bmod p$. Let $F'(v_1, v_2, \ldots, v_r)$ denote the polynomial obtained after applying Fermat's theorem, that is, $v_i^p \equiv v_i \bmod p$ ($1 \leq i \leq r$), whenever the exponent of a variable in $F(v_1, v_2, \ldots, v_r)$ is $\geq p$. We can then observe that the exponent of each variable in $F'$ is at most $p-1$. [Figure 5: WLOG, we choose the vertex $v_2$ for isolation and deletion.]

Algebraic settings and results We know that the given graph $G$ with $n$ vertices has no $K_{t+1}$ ($t > 0$) minor, and that the minor $\mathcal{M}$ ($= M_{q+1}$) is the complete graph with $t$ vertices, i.e., $h(G) = t$. Since $\mathcal{M}$ is the complete graph, it is a fact that the $t$ vertices of $\mathcal{M}$ can be coloured using $t$ colors. To prove Hadwiger's conjecture, we must show that the $n$ vertices of the given graph $G$ can also be coloured using $t$ colors. We know that $M_i$ ($q+1 \geq i \geq 1$) is a simple graph, and that it can be obtained by performing an elementary operation $\wr_i$ on $M_{i-1}$. So, given a graph $G$, we now define the algebraic settings for colouring the $n$ vertices of the given graph $G$ with $t$ colors as follows ($p > 2\Delta^2 n$, where $\Delta$ is the maximum degree of the graph $G$): We notice that proving that the $n$ vertices of a given graph $G$ can be coloured with $t$ colors is exactly equivalent to proving that there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_0(\alpha) \equiv 0 \bmod p$. The vertex colouring of the given graph $G$ with $t$ colors is then defined by the mapping $\ell(v_j) = \alpha_{v_j}$ ($1 \leq j \leq n$). Remark 3.1. Suppose that there exists an $n$-tuple $\alpha$ such that $P_0(\alpha) \equiv 0 \bmod p$. We now concentrate on proving the following lemmas, which are fundamental to the algebraic settings in this paper and are also necessary for the proof of Claim 3.2 to be valid. In other words, without the validity of Lemmas 3.1, 3.2, 3.3, and 3.6, as well as the fact that the largest complete graph to which the given graph $G$ is contractible is $K_t$ ($t > 0$), Claim 3.2 may face an existential crisis, undermining the foundation of the algebraic approach. Then there exists an … Proof. Because the simple graph $H \setminus e$ is $t$-colorable, as shown by Lemma 3.1, there exists an $h$-tuple $(\alpha_{v_1}, \alpha_{v_2}, \ldots)$ … Then there exists an … Proof. This follows from the fact that $H \setminus e$ is $t$-colorable. Proof.
We already know that $|V(H)| = |V(H \setminus e)| + 1$. We assume that $V(H \setminus e) = V(H) \setminus \{v_s\}$ without loss of generality. Because the graph $H \setminus e$ is $t$-colorable, we can see that all of the vertices in the graph $H$ are coloured with $t$ colors except the vertex $v_s$; colouring the vertex $v_s$ is sufficient to make $H$ $t$-colorable. We can always colour the vertex $v_s$ such that $H$ is $t$-colorable because the degree of the vertex $v_s$ is one. Corollary 3.2. In a simple graph $H$, let $e = v_s v_b$ be an edge that is removed such that $|V(H)| = |V(H \setminus e)| + 1$. Suppose the graph $H \setminus e$ is $t$-colorable (the isolated vertex will be deleted). The algebraic setting of the simple graph $H$ with vertex set … Although the following two lemmas, Lemma 3.4 and Lemma 3.5, are based on the fact that the largest complete graph to which the given graph $G$ is contractible is $K_t$ ($t > 0$), their importance cannot be ignored. … is defined as follows, given $Y = \{v_{y_1}, v_{y_2}, \ldots, v_{y_t}, v_{y_{t+1}}\} \in \mathcal{Y}$, $1 \leq y_1, y_2, \ldots, y_t, y_{t+1} \leq n$: … This implies that the vertices in the vertex set $Y$ induce a complete subgraph with $t+1$ vertices; that is, we must have an induced subgraph from the set $Y = \{v_{y_1}, v_{y_2}, \ldots, v_{y_t}, v_{y_{t+1}}\} \in \mathcal{Y}$ in which any two vertices of $Y$ are adjacent if and only if they are adjacent in the graph $M_{i-1}$, and this is a complete subgraph with $t+1$ vertices. This contradicts the statement that $K_t$ ($t > 0$) is the largest complete graph to which the given graph $G$ can be contracted. The following Lemma 3.5 is a stronger version of the previous Lemma 3.4, and it will be proven using Brooks' theorem [5]. The lemma proves that an induced simple graph whose maximum degree is at most $t$ can be coloured using $t$ colors. Proof. The three cases are as follows: … It follows from the fact that the minor $\mathcal{M}$ of the graph $G$ is $K_t$. Therefore, this case is not possible. Remark 3.2. Suppose that $e = v_s v_b$ is an edge that is contracted or deleted by an elementary operation $\wr_i$ ($q+1 \geq i \geq 1$) on the simple graph $M_{i-1}$ to obtain the simple graph $M_i$, and $M_i$ is $t$-colorable. The polynomial $H_{i-1}(v)$ is defined as follows: … $\prod_{l=t+1}^{p}(v_c - l)$ … Then Lemma 3.1 and Lemma 3.2 guarantee that there exists an … In light of Remark 3.2, the following Lemma 3.6 will give us all the hints on how the vertex coloring of the simple graph $M_{i-1}$ ($q+1 \geq i \geq 1$) is interdependent on Lemma 3.1, Lemma 3.2, and Lemma 3.3 and on the fact that the largest complete graph to which the given graph $G$ is contractible is $K_t$ ($t > 0$). Lemma 3.6. Suppose that an edge $e = v_s v_b$ is contracted or deleted by an elementary operation $\wr_i$ ($q+1 \geq i \geq 1$) on the simple graph $M_{i-1}$ to obtain the simple graph $M_i$. The polynomial $S_{i-1}(v)$ is defined as follows, and there is an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $S_{i-1}(\alpha) \equiv 0 \bmod p$. Then $P_{i-1}(\alpha) \equiv 0 \bmod p$. Proof. Here we have two cases. Case 1: Suppose that the edge $e = v_s v_b$ is deleted by an elementary operation $\wr_i$ on the simple graph $M_{i-1}$ to obtain the simple graph $M_i$. Here, again, we have two more cases. Case 1a: when $|V(M_{i-1})| = |V(M_i)|$. By the definition of $P_{i-1}(v)$, it can be rewritten as … that is, … Since $S_{i-1}(\alpha) \equiv 0 \bmod p$, we can conclude that … Case 1b: when $|V(M_{i-1})| = |V(M_i)| + 1$. Without loss of generality, we assume that $V(M_i) = V(M_{i-1}) \setminus \{v_s\}$. By the definition of $P_{i-1}(v)$, it can be rewritten as … From Corollary 3.2, it follows that there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_{i-1}(\alpha) \equiv 0 \bmod p$.
By the definition of $P_{i-1}(v)$, it can be rewritten as … Since $S_{i-1}(\alpha) \equiv 0 \bmod p$, we can conclude that … So far, we have proven all of the necessary lemmas to validate the main result. Now we will concentrate on proving the main point. Since the minor of the given graph $G$, $\mathcal{M}$, is the complete graph with $t$ vertices, we can make the following claim. Proof. We know that the minor $\mathcal{M}$ is a complete graph consisting of $t$ vertices. It is a basic fact that the simple graph $\mathcal{M}$ can be colored using $t$ colors. Therefore, there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that the polynomial $P_{q+1}(\alpha) \equiv 0 \bmod p$. We have proven that Claim 3.1 is true, that is, … Then there exists some $i$ ($q+1 \geq i \geq 1$) such that the following Hypothesis 3.1 has to be true: … We have to prove that Hypothesis 3.1 is not true; in other words, we have to prove that $P'_{i-1}(v) \equiv 0 \bmod p$. Without loss of generality, we assume that $e = v_s v_b$ ($1 \leq s, b \leq n$) is the edge that is either contracted or deleted by an elementary operation $\wr_i$ on the simple graph $M_{i-1}$ to obtain the simple graph $M_i$. That is, … will then colour the vertices of the simple graph $M_{i-1}$, resulting in a $t$-colorable simple graph $M_{i-1}$; Hypothesis 3.1 is therefore false. Suppose $S'_{i-1}(v) \equiv 0 \bmod p$ (congruence relation (3.2)), that is, there does not exist an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $S_{i-1}(\alpha) \equiv 0 \bmod p$; in other words, we have … It can be rewritten as … We denote the under-brace products by $Q_{i-1}(v)$: … $\prod_{l=t+1}^{p}(v_s - l) \equiv 0 \bmod p$, that is, … We can easily claim the following. Proof. We know that $P'_i(v) \equiv 0 \bmod p$, that is, there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_i(\alpha) \equiv 0 \bmod p$. Then the mapping $\ell(v_j) = \alpha_{v_j}$ ($1 \leq j \leq n$) will color the vertices of the simple graph $M_i$. This implies we can use the same colors … The claim follows if we are able to color just the vertex $v_s$. And this is possible in light of the absence of the products $\prod_{l=t+1}^{p}(v_s - l)$ in $Q_{i-1}(v)$; that is, we have no restriction that the color of the vertex $v_s$ has to be chosen from the $t$-color set $K$ while coloring the vertex $v_s$ in the simple graph $M_{i-1}$. Therefore, there exists $(\gamma'_{v_1}, \gamma'_{v_2}, \ldots, \gamma'_{v_n})$ with $\gamma'_{v_j} \in K$ for $j \neq s$ ($1 \leq j \leq n$) and $\gamma'_{v_s} \in \mathbb{Z}_p$ such that … Remark 3.4. In this remark, we will understand the reason why $S'_{i-1}(v) \equiv 0 \bmod p$ (congruence relation (3.2)). From Claim 3.3 we have that $Q'_{i-1}(v) \equiv 0 \bmod p$, which is nothing but … And $Q'_{i-1}(v)$ can be written as follows (after applying Fermat's theorem to each $v_r$, $1 \leq r \leq n$, $r \neq s$, in $Q_{i-1}(v)$): … $\prod_{l=t+1}^{p}(v_s - l) \equiv 0 \bmod p$. In order to prove that Hypothesis 3.1 is false, our strategy is to find a new color set $K' = \{1, 2, \ldots, t-1\} \cup \{\beta\}$, where $\beta \in \mathbb{Z}_p \setminus \{0, 1, 2, \ldots, t-1, t\}$, such that the following Claim 3.4 is true. Later, we use Claim 3.4 to prove that Hypothesis 3.1 is false in Corollary 3.3. We define the polynomial $\hat{S}_{i-1}(v)$ as follows, which will be used in the following claim: … Since $\hat{S}_{i-1}(\beta'_{v_1}, \beta'_{v_2}, \ldots, \beta'_{v_n}) \equiv 0 \bmod p$, using Lemma 3.6 we can conclude that $\hat{P}_{i-1} \equiv 0 \bmod p$. Therefore, the mapping $\ell(v_j) = \beta'_{v_j}$ ($1 \leq j \leq n$, $\beta'_{v_j} \in K'$) will give a vertex coloring of the simple graph $M_{i-1}$. On recoloring the vertices in $M_{i-1}$ that are coloured $\beta$ with the colour $t$, we notice that the simple graph $M_{i-1}$ remains vertex-colored. This implies that there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_{i-1}(\alpha) \equiv 0 \bmod p$, that is, … $\prod_{l=t+1}^{p}(v_c - l) \equiv 0 \bmod p$, which implies Hypothesis 3.1 is false. Corollary 3.3 has established that Claim 3.2 is true. Corollary 3.4.
Claim 3.2 is true; that is, there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_0(\alpha) \equiv 0 \bmod p$. We can use this fact to claim that $\chi(G) \leq h(G)$ in the following corollary. Corollary 3.5. Hadwiger's conjecture is true. Proof. We know from Claim 3.2 that there exists an $n$-tuple $\alpha$ with $\alpha_{v_j} \in K$ ($1 \leq j \leq n$) such that $P_0(\alpha) \equiv 0 \bmod p$. The mapping $\ell(v_j) = \alpha_{v_j}$ ($1 \leq j \leq n$) will use $t$ colors to colour the vertices of the given simple graph $G$ with $n$ vertices. Corollary 3.5 and Wagner's Theorem together establish that the Four-Color Theorem is true. Corollary 3.6. Every planar graph is 4-colorable. Proof. We know from Wagner's Theorem that the Hadwiger number of a planar graph is at most 4, and Corollary 3.5 guarantees that the chromatic number of a planar graph is therefore at most 4. Proof. This follows from Corollary 3.5.
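The inequality $\chi(G) \leq h(G)$ is easy to sanity-check by brute force on tiny graphs. The sketch below is a minimal illustration unrelated to the paper's polynomial machinery: it computes $\chi(G)$ by exhaustive coloring and $h(G)$ by enumerating partitions of the vertex set into connected branch sets whose quotient graph is complete. It assumes the input graph is connected (so a largest complete minor can be realized by branch sets that partition the vertices) and is only feasible for very small n; it is a check, not a proof.

```python
from itertools import product

def chromatic_number(n, edges):
    """Smallest t such that the graph has a proper t-coloring (brute force)."""
    for t in range(1, n + 1):
        for coloring in product(range(t), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return t
    return n

def partitions(items):
    """Yield all set partitions of a list (each partition is a list of blocks)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_connected(block, adj):
    seen, stack = {block[0]}, [block[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in block and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(block)

def hadwiger_number(n, edges):
    """Largest h with a K_h minor, assuming a connected graph: enumerate
    partitions into connected blocks whose quotient graph is complete."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = 1
    for part in partitions(list(range(n))):
        if not all(is_connected(b, adj) for b in part):
            continue
        blocks = [set(b) for b in part]
        complete = all(
            any(v in adj[u] for u in blocks[i] for v in blocks[j])
            for i in range(len(blocks)) for j in range(i + 1, len(blocks))
        )
        if complete:
            best = max(best, len(blocks))
    return best

# Check chi(G) <= h(G) on a small example: the 5-cycle C5.
n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
chi, h = chromatic_number(n, edges), hadwiger_number(n, edges)
print(f"C5: chi = {chi}, h = {h}, chi <= h: {chi <= h}")  # chi = 3, h = 3
```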
Imported Haycocknema perplexum Infection, United States

We report an imported case of myositis caused by a rare parasite, Haycocknema perplexum, acquired in Australia, in a 37-year-old man who had progressive facial, axial, and limb weakness, dysphagia, dysphonia, increased levels of creatine kinase and hepatic aminotransferases, and peripheral eosinophilia for 8 years. He was given extended, high-dose albendazole.

The Study A 37-year-old man from New Zealand who had previous long-term residence in Australia came to the Mayo Clinic (Rochester, MN, USA) because of an 8-year history of progressive weakness, muscle atrophy, and 32-kg weight loss. Onset was gradual, first involving the pectoralis and biceps brachii, then neck, facial, and distal limb muscles. Additional symptoms included dysphagia, dysphonia, and dyspnea on exertion. Laboratory testing showed peripheral eosinophilia (5%, reference value <3%) and an increased level of creatine kinase (maximum ≈2,000 U/L, reference range 39-308 U/L). Toxoplasmosis had originally been suspected based on the finding of a possible Toxoplasma gondii cyst on muscle biopsy 1 year after symptom onset, but his weakness progressed despite trimethoprim/sulfamethoxazole therapy, and T. gondii serologic test results were negative. Prednisone therapy worsened his symptoms. He became wheelchair-bound 7 years after onset of symptoms. He had lived in coastal northern Queensland (Mackay region), Australia, from ages 8-20 years, where he had extensive bush exposure but denied bush meat consumption. Neurologic findings included profound asymmetric weakness predominantly affecting the proximal upper and lower limbs, neck flexors, and sternocleidomastoids (Figure 1). He also had asymmetric scapular winging, severe weakness of the frontalis, and mild weakness of the orbicularis oris. Formalin-fixed, paraffin-embedded muscle tissue from a previous muscle biopsy specimen was obtained, and additional sections showed nonencapsulated male and gravid female nematodes within muscle fibers consistent with H. perplexum (Figure 2). The presence of adult worms enabled trichinellosis to be definitively excluded because only the larval stage of Trichinella sp. is detected in muscle. Attempts at molecular amplification of the cytochrome c oxidase subunit 1 and 18S rRNA genes as described (2,8) from archival formalin-fixed, paraffin-embedded tissue were unsuccessful. The patient was prescribed a 3-month course of albendazole (400 mg 2×/d). Nineteen months after completing albendazole, the patient reported no further deterioration. However, his muscle power did not improve. Creatine kinase levels decreased to within the reference range.

Conclusions Haycocknema perplexum is an enigmatic and presumably zoonotic nematode. Clinical histories of affected patients indicate that contact with wilderness or marsupial wildlife in Australia (n = 6) and consumption of bush meat (n = 4) might be associated with infection, but this possibility has not been confirmed (Table, https://wwwnc.cdc.gov/EID/article/28/11/22-0286-T1.htm). The phylogenetic position of H.
perplexum is unresolved, but it appears to be intermediate between Oxyuridomorpha (e.g., Enterobius vermicularis) and Ascaridomorpha (e.g., Ascaris lumbricoides) (2). All cases of haycocknematosis to date have originated in Australia, specifically in the tropical north of Queensland, and in Tasmania (Table). Nonhuman animal hosts are unknown. The route of human infection is also unknown but is presumed to be linked to consumption of, or contact with, mammalian wildlife. Because females are ovoviviparous (eggs hatch in utero within the female worm), infection caused by the ingestion of embryonated eggs is unlikely. With an apparent single-host (monoxenous) life cycle, an arthropod vector is also unlikely. Ongoing release and maturation of larvae result in persistent infections. Of the 13 known case-patients (Table), 12 had weakness and muscle wasting, 7 had dysphagia, and 2 had dysarthria or dysphonia. One case was discovered incidentally during evaluation of low back pain; the patient was otherwise asymptomatic. All case-patients had increased levels of creatine kinase (270-6,218 U/L). Peripheral eosinophilia was observed in 12 (92%) of 13 patients. Myalgias, unintentional weight loss, increases in erythrocyte sedimentation rates, and mild-to-moderate increases in levels of liver aminotransferases were also common. Needle electromyography findings were available for 8 patients (patients 2, 4, 7, 8, 10, 11, 12, and 13). Except for patients 10 and 12, who had ambiguous or limited findings, the remaining patients had myopathic motor unit potentials. Results of nerve conduction studies were within reference ranges when described. The time from symptom onset to diagnosis ranged from 1.5 to 8 years, with the case-patient in this study having the longest known timeframe. Seven patients, including our patient, had received corticosteroids at some point in their illness for a presumptive diagnosis of polymyositis, during which time most experienced progressive deterioration. All patients were given extended, high-dose albendazole therapy, and 7 patients had a partial to near-complete recovery. One patient (patient 7) died from complications resulting from corticosteroid administration, mechanical ventilation, and a prolonged stay in the intensive care unit. Diagnosis of haycocknematosis is based primarily on histopathologic features. The morphologic characteristics of H. perplexum nematodes in histopathologic preparations include a thin cuticle, meromyarian/platymyarian musculature, amphidelphic uteri (females), lateral bacillary bands (especially conspicuous in immature females), and conspicuous subventral glands (10). There are no cephalic inflations or lateral alae. Adult males, adult females, and larvae might be observed in muscle specimens, but ex utero eggs are never seen. Adults often have an undulating, serpentine morphology, which is parallel with the muscle fibers. Male worms have a maximum width of 15 µm (range 14-15 µm), and female worms have a maximum width of 36 µm (range 15-36 µm) (10). Larvae are similar in size to adult males, have a maximum width of 15 µm (range 12-15 µm) (10), and complete their lifecycle in the host. Other parasites that may be found in biopsy specimens include Trichinella spp., Strongyloides stercoralis, and Halicephalobus gingivalis, but these parasites can be differentiated by morphologic, clinical, and epidemiologic features. H. perplexum and other nematodes might also potentially be confused with tissue cysts of Toxoplasma gondii and Sarcocystis spp.
when seen in cross-section (as in our case-patient), but this finding can usually be resolved by examining deeper sections from the tissue block to identify additional parasite forms. A PCR was developed that enabled diagnosis of the 10th case of H. perplexum nematode infection from a muscle biopsy in the absence of visible nematodes (2,8). This PCR was unsuccessful when performed for our case-patient. However, this result is not unexpected, given the age of the block (7 years at the time of testing) and the relatively large sizes of the PCR amplicons (400 bp for cytochrome c oxidase subunit 1 and 830 bp for 18S rRNA). It is unknown why, to date, human cases appear to have been acquired only in Tasmania or the northern regions of Queensland. Molecular sequence data demonstrate that the strains from Queensland and Tasmania belong to the same species (2). It is possible that the parasite also occurs across wider mainland Australia but that these infections have not been detected because of lack of awareness and difficulty in diagnosis. The optimal antimicrobial drug management for treatment of haycocknematosis is unknown. Our patient was given albendazole based on experiences from previously reported cases. In 1 case, viable nematodes were still observed after 4 weeks of treatment, but not after 9 weeks (7). Additional studies are needed to determine the most efficacious antiparasitic treatment for haycocknematosis. Patients who have H. perplexum parasitic myositis might present a diagnostic challenge to clinicians and pathologists, particularly when seen outside disease-endemic regions. The disease is progressive, potentially life-threatening, and might persist for >8 years with a delayed diagnosis, as shown by our case-patient. A high degree of suspicion is required to diagnose this treatable mimic of muscular dystrophy and inflammatory myopathy, and to avoid harm through corticosteroid treatment. Additional studies are needed to clarify the exposure risks, parasite life cycle, disease prevention, and treatment.
Restriction of dietary protein leads to conditioned protein preference and elevated palatability of protein-containing food in rats

The mechanisms by which intake of dietary protein is regulated are poorly understood despite their potential involvement in determining food choice and appetite. In particular, it is unclear whether protein deficiency results in a specific appetite for protein and whether influences on diet are immediate or develop over time. To determine the effects of protein restriction on consumption of, preference for, and palatability of protein, we assessed patterns of intake for casein (protein) and maltodextrin (carbohydrate) solutions in adult rats. To induce a state of protein restriction, rats were maintained on a low protein diet (5% casein) and compared to control rats on a non-restricted diet (20% casein). Under these dietary conditions, relative to control rats, protein-restricted rats exhibited hyperphagia without weight gain. After two weeks, on alternate conditioning days, rats were given access to either isocaloric casein or maltodextrin solutions that were saccharin-sweetened and distinctly flavoured, whilst consumption and licking patterns were recorded. This allowed rats to learn about the post-ingestive nutritional consequences of the two different solutions. Subsequently, during a preference test when rats had access to both solutions, we found that protein-restricted rats exhibited a preference for casein over carbohydrate whereas non-restricted rats did not. Analysis of lick microstructure revealed that this preference was associated with an increase in cluster size and number, reflective of an increase in palatability. In conclusion, protein restriction induced a conditioned preference for protein, relative to carbohydrate, and this was associated with increased palatability.

Introduction There is considerable evidence that, of the three macronutrients, dietary protein is most tightly regulated [1][2][3]. As such, when presented with diets that differ in macronutrient content, rats will adjust their consumption to ensure that protein intake meets a baseline level [4]. The mechanisms by which these adjustments occur are still not fully understood. An important outstanding question is whether the drive for protein is immediate and innate or whether there is a role for learning using post-ingestive consequences [5,6]. Some evidence suggests that, when animals are protein-restricted, a specific appetite for protein arises, similar to the appetite for sodium that arises under conditions of sodium depletion. Rats have been shown to rapidly increase their intake of a number of protein sources when protein-restricted, in a manner that precludes using post-ingestive effects to guide their intake [7]. Further research suggested these rapid effects on protein appetite were driven by olfactory cues [8]. However, a large body of evidence indicates that adjustments to protein intake are slow, require experience with each food/diet, and likely involve post-ingestive feedback. For example, when allowed to select between diets that differ in protein content, it takes rats several days to adjust their intake appropriately [9]. This adaptation is more rapid in young rats, although still not immediate, presumably because protein requirements are elevated early in development and positive post-ingestive feedback is enhanced. The majority of the above studies have assessed food intake and diet selection in home cage tests in which diets are given ad libitum.
This arrangement does not allow precise monitoring of lick patterns over time. Sophisticated analysis of lick patterns, or lick microstructure, is a key method for assessing palatability of solutions in rodents [10]. As such, when individual licks are grouped into runs based on interlick intervals (termed bursts, clusters, and bouts), increases in palatability are associated with longer runs of licking. Importantly, with respect to protein appetite, lick microstructure has not yet been investigated. Learned shifts in the palatability of protein or protein-containing foods could contribute significantly to increased protein intake under protein restriction. As a striking example, when rats are sodium-depleted, normally aversive concentrations of sodium chloride become highly palatable [11]. Moreover, learning an association between conditioned flavors and intragastric infusions of glucose leads to an increase in palatability of the flavors paired with positive post-ingestive consequences [12,13]. However, increased intake is not always associated with shifts in palatability. For example, rats made deficient in a single essential amino acid increase their intake of the missing amino acid but this is not associated with an increase in palatability [14]. Here, we have used analysis of lick patterns to assess the effect of protein restriction on intake and palatability of isocaloric protein- and carbohydrate-containing solutions in adult rats. We find that protein-restricted rats, relative to controls, develop a learned preference for protein-containing solutions over carbohydrate and that this is associated with an increase in relative palatability.

Animals Forty adult male Sprague-Dawley rats were used for experiments (Charles River; >275 g at start of experiment). Twenty-four of these rats were used for the main behavioral experiment and a further sixteen contributed to the food intake data. Rats were group-housed (2-3 per cage) in IVCs with bedding materials as recommended by NC3Rs guidelines. Temperature was 21 ± 2 °C and humidity was 40-50%, with a 12h:12h light/dark cycle (lights on at 07:00). Water was available ad libitum; chow containing different protein:carbohydrate ratios was available ad libitum (details below). All experiments were covered by the Animals (Scientific Procedures) Act (1986) and carried out under the appropriate license authority (Project License: 70/8069). Diet manipulations All rats were initially maintained on standard laboratory chow containing 20% dietary casein. To induce a state of protein restriction in half of the rats, standard chow was switched for experimental diets based on modified AIN-93G that differed in protein:carbohydrate ratio (Table 1). Non-restricted diet (#D11051801, Research Diets, New Brunswick, NJ) contained 20% casein whereas protein-restricted diet (#D11092301, Research Diets) contained 5% casein. Body weight data were collected daily throughout the experiments. As rats were group-housed, food intake data were collected by cage and divided by the number of rats in the cage to give an average intake per animal. Conditioning experiments started 2 weeks following the diet switch. Table 1. Experimental diets used in study. List of ingredients (upper) and macronutrient breakdown (lower) in control diet (#D11051801; 20% casein) and protein-restricted diet (#D11092301; 5% casein). Behavioural testing All testing took place within standard operant chambers (in cm: 30.5 L, 24.1 D, 21.0 H; Med Associates, St.
Albans City, VT) equipped with a house light and two bottles. Each bottle was connected to a contact lickometer calibrated to detect individual licks. Licks were recorded on a computer for all sessions as a measure of intake. All sessions lasted for one hour. For one to three days at the start of each experiment, rats were placed in the chambers with 0.2% sodium saccharin in both bottles to familiarize them with the apparatus. Following this, rats underwent a series of conditioning sessions and a preference test. In conditioning sessions, which occurred in a block of 4 days, only one bottle each day was available and was filled with either a protein-containing solution (4% casein + 0.21% methionine + 0.2% sodium saccharin + 0.05% flavored Kool-Aid) or an isocaloric carbohydrate-containing solution (4% maltodextrin + 0.2% sodium saccharin + 0.05% flavored Kool-Aid) on alternate days. Methionine was added to the protein-based solution to make up for the relatively low levels of this amino acid that are present in casein [3]. Flavors (cherry vs. grape Kool-Aid) associated with each macronutrient and order of presentation (protein on days 1 and 3 vs. carbohydrate on days 1 and 3) were counter-balanced. In preference test sessions, both bottles and test solutions were available. Analysis and statistical methods Lick timestamp data from all experiments were analyzed in Python. All data files and custom scripts are available as supplemental files and are deposited on Mendeley Data (doi:10.17632/wgd83v3ntb.1). Lick microstructure was analyzed by using interlick intervals to divide licks into clusters [10]. Clusters were defined as runs of licks with no interlick intervals >500 ms. Body weight data were analyzed using two-way mixed ANOVA with diet as between-subjects factor and day as repeated measure. Food intake data were analyzed with cage as the statistical unit using an unpaired Student's t-test. Lick data for conditioning days, preference test, and measures of palatability were analyzed using two-way ANOVA with dietary group (non-restricted vs. protein-restricted) as between-subjects factor and solution (casein vs. maltodextrin) as within-subjects factor. On preference test day, protein preference was calculated as licks for casein divided by total licks. Non-restricted vs. protein-restricted rats were compared using unpaired Student's t-test. For all analyses, α was set at .05 and all tests were two-tailed.

Food intake and body weight data across low-protein/high-protein diets First, we assessed whether maintenance on protein-restricted diets affected food intake and body weight of adult rats (Fig. 1). To date, much of the work on protein restriction has used younger rats, when protein requirements are greater than in true adulthood. Here, we examined data from rats following the initial dietary manipulation but before conditioning sessions had started, so that intake during these sessions did not confound our interpretations. No difference in body weight was observed between the diet groups over the course of the experiment (Fig. 1A). As such, two-way ANOVA revealed a main effect of day but no main effect of diet. As rats were group-housed, we obtained food intake data by cage. Food intake data from the eight cages of rats (three rats per cage) that participated in the main study are shown in Fig. 1B, and visual inspection suggests a slight increase in intake (hyperphagia) in rats on the protein-restricted diet. However, the small number of data points precludes statistical analysis.
To address this, we combined this data set with food intake data from a pilot experiment in which an additional eight cages of rats were monitored (two rats per cage) and examined this extended data set (Fig. 1C). Statistical analysis of these data showed that protein-restricted rats did increase their intake of the low protein diet, relative to the intake of non-restricted rats (t(15)=3.179, p=0.007). Thus, restriction of dietary protein resulted in hyperphagia without changes in body weight. Protein restriction leads to development of preference for protein-containing solutions Next, we asked whether rats would display a greater preference for protein-containing solutions over carbohydrate-containing solutions when they were protein-restricted. During conditioning sessions, when only one solution or the other was available, we found no significant differences in the amount of consumption between protein-restricted and non-restricted rats, although there was a trend for protein-restricted rats to drink more of both solutions than non-restricted rats (Fig. 2). As such, two-way mixed ANOVA revealed a trend towards a main effect of diet (F(1,22)=3.609, p=0.0707) but no main effect of solution (F(1,22)=1.203, p=0.285) and no interaction between diet and solution (F(1,22)=2.087, p=0.163). On day 5, after these four conditioning sessions, rats were given access to both solutions during the same session (Fig. 3). In this session, protein-restricted rats drank more casein than maltodextrin, and this elevated intake appeared to occur in the first twenty minutes of the session (Fig. 3A). Furthermore, protein-restricted rats showed a significant preference for casein over maltodextrin whereas non-restricted rats did not (Fig. 3B & 3C). As such, two-way ANOVA revealed that there was a main effect of solution (F(1,22)=7.466, p=0.01216) and an interaction between solution and diet (F(1,22)=11.677, p=0.00247). Subsequent analysis of each diet group individually showed that protein-restricted rats licked more for casein than maltodextrin (t(11)=4.630, p=0.0007) but non-restricted rats did not (t(11)=0.458, p=0.656). In addition, we calculated a casein preference score by dividing casein licks by total licks (Fig. 3D) and found that protein-restricted rats showed a greater protein preference, relative to non-restricted rats (t(21)=2.660, p=0.0146). Palatability of protein-containing solutions is increased by protein restriction Finally, we used analysis of lick microstructure [10] to examine whether the palatability of protein-containing solutions was affected by the state of protein restriction. Lick patterns were divided into clusters, separated by interlick intervals greater than 500 ms. An increased number of licks per cluster is generally thought to reflect increased palatability. We found that the state of protein restriction influenced palatability of casein, relative to maltodextrin (Fig. 4). As such, two-way ANOVA revealed a significant interaction between solution and diet (F(1,22)=7.099, p=0.0142). Further analysis of each diet group separately showed that casein and maltodextrin had similar palatability in non-restricted rats (t(11)=0.761, p=0.4626) but the palatability of casein was elevated relative to maltodextrin in protein-restricted rats (t(11)=2.688, p=0.0211).
In addition, the number of clusters was also influenced by the state of protein restriction, as two-way ANOVA revealed a main effect of Solution (F(1,22)=5.677, p=0.0263) and an interaction between Solution and Diet (F(1,22)=7.119, p=0.0140). Analysis of each diet group separately showed that in non-restricted rats the number of clusters was similar for casein and maltodextrin (t(11)=0.203, p=0.843), whereas protein-restricted rats had an increased number of clusters for casein, relative to maltodextrin (t(11)=3.550, p=0.005).

Discussion Here, we examined the effect of protein restriction on development of preference and palatability for protein- vs. carbohydrate-containing solutions. We found that maintenance on a protein-restricted diet resulted in rats developing a preference for protein vs. carbohydrate when given a choice between the two. Moreover, the increase in protein intake was associated with an increase in palatability of the protein-containing solution, relative to the carbohydrate-containing solution. We monitored food intake and body weight for the two weeks following the change to a protein-restricted diet but before beginning behavioral sessions. Previous studies have found that rats on diets that are moderately low in protein show hyperphagia without weight gain [9,15,16]. In support of these studies, we found that protein-restricted rats increased food intake, relative to controls, without changing their body weight. It is of note, however, that the slight increase in food intake we observe is still far below what would be needed to match the protein intake of control, non-restricted rats. In our studies, we used a low protein diet that contained 5% protein, whereas other studies using rats have found effects on behavioural and metabolic parameters using diets containing 10% protein [15]. Our choice of 5% was based on pilot experiments, in which we found no effects of a 10% protein diet on food intake or conditioned preferences in adult rats (data not shown). The likely explanation for this variation in effective dietary manipulations is different protein requirements during development. Many studies have used late adolescent or young adult rats rather than mature animals, and differences in the effects of low protein diets across age and development are well documented [9,17]. In conditioning sessions, over four days rats were given one type of solution (containing either protein or carbohydrate) and lick patterns were monitored. Rats from both dietary conditions drank similar amounts of casein and maltodextrin during these sessions, although there was a suggestion (p=0.07) that protein-restricted rats drank slightly more of both solutions than control rats. This may reflect a moderate form of hyperphagia, similar to the home cage intake reported above. Interestingly, in the case of these conditioning sessions, when only one solution was available, consumption was increased similarly for the carbohydrate-containing solution, meaning that protein restriction may also generate a hyperphagic response that disregards the macronutrient content of the food on offer. In the preference test, when rats were given access to both solutions, we found a strong preference towards the protein-containing solution in protein-restricted rats. This preference was not present in control rats. This finding corroborates other work showing that protein-restricted rats can direct their behavior to increase protein intake.
In addition, we have extended these previous studies by analyzing the precise temporal patterns of licking to assess how lick macrostructure and microstructure are affected by protein restriction. By analyzing lick microstructure during the preference test, we found that the palatability of the protein-containing solutions increased in protein-restricted rats, indicating that this might be a mechanism that drives increased intake of protein-containing foods. This situation parallels studies that examined palatability after flavor-nutrient conditioning. When flavored saccharin is paired with intragastric glucose infusions, palatability of the paired flavor is elevated [12]. Our studies used a similar paradigm in which solutions were sweetened with saccharin and distinctly flavored with Kool-Aid, as is common in studies of flavor-nutrient conditioning [18]. Thus, increased palatability (flavor evaluation) might be a mechanism that drives increased intake by promoting more meals and longer meals. The presentation of macronutrients in combination with saccharin and flavoring means that we do not know whether the changes in palatability that we observe reflect a change in palatability of individual components of the solution or of the combination. When rats are made sodium-deficient, the nutrient itself, sodium, immediately becomes more palatable in an experience-independent manner [11]. This shift is profound, as it applies to high concentrations of sodium, which are normally evaluated as aversive in sodium-replete animals. Moreover, sodium-evoked dopamine signals and appetitive behavioral responses to sodium-associated cues also emerge with no experience of sodium in a depleted state [19,20]. The literature suggests that appetite for protein may differ from sodium appetite. For example, when rats are maintained on a diet deficient in a single essential amino acid (lysine), they develop compensatory responses, which increase their intake of lysine, but these responses take ~30 min to emerge, and longer if they are required to discriminate between two different amino acids [14]. Interestingly, in that study no evidence of an increase in palatability, assessed by bout size, was observed. One of the most thought-provoking theories developed to explain the obesity crisis is the protein leveraging hypothesis [21,22]. This theory posits that a steady decrease in the proportion of protein in Western diets occurring over the last few decades has resulted in carbohydrate and fat being overconsumed. The relatively minor role of protein in overall energy intake (generally less than 20%) produces this leveraging ability and means that compensating for even small changes in protein can lead to significant overconsumption of energy from fat and carbohydrate. An important assumption of this hypothesis is that deficiencies in specific nutrients influence our feeding behavior by triggering consumption but that this consumption is indiscriminate and not well targeted to replenish the nutrient in deficit. Contrary to this assumption, our data suggest that, at least in rats, protein restriction does recruit mechanisms that enable rats to guide their behavior towards consumption of protein-rich food. However, our studies are far from modelling the human situation, and there are numerous important discrepancies to be addressed. First, the level of protein restriction is likely more severe in our protocol than that which most humans in the developed world encounter.
Second, the choice of food provided in our studies was limited (protein vs. carbohydrate with similar sweetness but distinct flavor) and did not include foods that contained a mixture of macronutrients. Third, the pattern of experience (each solution separately on alternate days) was designed to maximize the ability of rats to discriminate post-ingestive effects and learn about the nutritional value of each solution. In the human situation, where foods contain mixtures of macronutrients and other flavorings, fine discrimination of the nutritional consequences of ingestion is likely far more difficult. Moreover, numerous other factors influence our intake, such as social setting, cultural norms, and access, which may bias us against choosing foodstuffs based solely on nutritional outcome. Future studies will attempt to address the ability of rats to develop protein preferences in more challenging situations that better model the human context.
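As a concrete illustration of the lick-microstructure analysis described in the Methods, the sketch below groups lick timestamps into clusters using the 500 ms interlick-interval criterion and computes the casein preference score. It is a minimal reconstruction for illustration, not the authors' deposited analysis code, and the sample timestamps are hypothetical.

```python
def lick_clusters(timestamps, max_ili=0.5):
    """Group lick times (s) into clusters: runs with no interlick interval > max_ili."""
    clusters = []
    current = [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > max_ili:  # gap longer than 500 ms ends the current cluster
            clusters.append(current)
            current = []
        current.append(t)
    clusters.append(current)
    return clusters

# Hypothetical lick timestamps (s) for one rat in the preference test.
casein_licks = [0.10, 0.25, 0.40, 0.55, 5.00, 5.15, 5.30, 5.45, 5.60]
malto_licks = [1.00, 1.15, 9.00, 9.15, 9.30]

for name, licks in [("casein", casein_licks), ("maltodextrin", malto_licks)]:
    cl = lick_clusters(licks)
    sizes = [len(c) for c in cl]
    # Larger clusters are read as higher palatability.
    print(f"{name}: {len(cl)} clusters, mean cluster size {sum(sizes)/len(sizes):.1f}")

# Preference score: casein licks / total licks (0.5 = indifference).
pref = len(casein_licks) / (len(casein_licks) + len(malto_licks))
print(f"casein preference = {pref:.2f}")
```

Running the same computation per rat and submitting the cluster counts and sizes to the two-way ANOVA described above would reproduce the structure of the palatability analysis.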
Trajectories of mental health outcomes following COVID-19 infection: a prospective longitudinal study

Background The COVID-19 pandemic has triggered a global mental health crisis. Yet, we know little about the lasting effects of COVID-19 infection on mental health. This prospective longitudinal study aimed to investigate the trajectories of mental health changes in individuals infected with COVID-19 and to identify potential predictors that may influence these changes. Methods A web-survey that targeted individuals who had been infected with COVID-19 was used at three time points: T0 (baseline), T1 (six months), and T2 (twelve months). The survey included demographics, questions related to COVID-19 status, previous psychiatric diagnosis, post-COVID impairments, fatigue, and standardized measures of depression, anxiety, and insomnia. Linear mixed models were used to examine changes in depression, anxiety, and insomnia over time and to identify factors that impacted the trajectories of mental health outcomes. Results A total of 236 individuals completed assessments and were included in the longitudinal sample. The participants' ages ranged from 19 to 81 years (M = 48.71, SD = 10.74). The results revealed notable changes in mental health outcomes over time. The trajectory of depression showed significant improvement over time, while the trends in anxiety and insomnia did not exhibit significant changes over time. Younger participants and individuals who experienced severe COVID-19 infection in the acute phase were identified as high-risk groups with the worst mental ill-health. The main predictors of the changes in the mental health outcomes were fatigue and post-COVID impairments. Conclusions The findings of our study suggest that mental health outcomes following COVID-19 infection exhibit a dynamic pattern over time. The study provides valuable insights into the mental health trajectory following COVID-19 infection, emphasizing the need for ongoing assessment, support, and interventions tailored to the evolving mental health needs of this population.

Background The SARS-CoV-2 infection (COVID-19) outbreak has led to mental health problems in the general population [1][2][3], most profoundly affected by demographic variables such as age, sex, and education, as well as preexisting mental health problems [4,5]. In addition, there have been notable changes in mental health problems since the onset of the pandemic, marked by a spike during the first wave of the COVID-19 pandemic and a subsequent decline from the initial baseline assessment to subsequent follow-ups [6][7][8][9]. However, levels of mental ill-health have been found to be more elevated in individuals infected with COVID-19 compared to the general population [10], suggesting that the mechanisms through which COVID-19 infection impacts mental health may differ from those observed in the general population.
Studies investigating mental ill-health following COVID-19 infection shed light on a bidirectional association between SARS-CoV-2 infection and mental ill-health [11][12][13][14][15]. However, the impact of COVID-19 infection on mental health becomes more intricate in the context of long-term complaints of COVID-19. Follow-up studies on COVID-19 survivors highlighted the associations between mental ill-health and post-COVID complications [10,16]. Long-term impacts after COVID-19 infection include multi-systemic problems, disabilities, and mental health problems, of which fatigue has emerged as the most reported symptom [17][18][19]. As many as almost half of all who have a history of probable or confirmed COVID-19 infection experience symptoms after recovery from infection [18], and about 40% of COVID-19 survivors experience fatigue three months after infection, with anxiety, depression, and psychiatric comorbidity generating elevated risk [20]. We have previously shown in a cohort study that individuals with a history of probable or confirmed COVID-19 infection/infections are more likely to suffer from mental health problems, with post-COVID impairments and fatigue appearing as the main predictors of mental ill-health [10]. To summarize, available data highlight that COVID-19 patients are a high-risk group for mental ill-health, and point to an interplay between COVID-19 infection and mental ill-health and a possible bidirectional association. However, more knowledge is needed regarding the specific role of post-COVID impairments, especially fatigue, on mental health following COVID-19 infection. Hence, we aimed to investigate the trajectories of mental health changes over time in individuals infected with COVID-19, and to explore potential predictors that may influence these changes.

Participants In this longitudinal study, we used data from a web-based longitudinal project to study the impacts of COVID-19 infection on a sample of the Swedish population [10,17]. To recruit participants, we used convenience sampling by spreading e-posters on platforms of COVID-19-related Facebook groups, the Swedish COVID organization (Svenska Covidföreningen), and the Karolinska Institutet website. Participants could access the web-survey through an online platform, Research Electronic Data Capture (REDCap), hosted locally at Karolinska Institutet [21,22]. Inclusion criteria were: (i) having been infected with COVID-19; (ii) age ≥ 18 years; and (iii) ability to understand Swedish and use the internet to complete the web-survey. The main exclusion criterion in the current study was the absence of a prior COVID-19 infection, which served as a key parameter for participation. The web-based survey was conducted at three time points: (i) baseline or T0 (February/March 2022), (ii) first follow-up or T1 (September/October 2022), and (iii) second follow-up or T2 (February/March 2023). The number of participants in each cross-sectional data collection varied. A total of 501 participants responded at baseline (T0), while the response rate was 60.1% at T1 and 57.3% at T2. The longitudinal analysis included 236 (47.1%) participants who completed the survey at all time points. Ethical considerations The study was approved by the Swedish national ethical board (Dnr 2021-06617-01). Informed consent was obtained from all participants. All procedures utilized in collecting data for the current paper followed the ethical standards of the Helsinki Declaration of 1964 and subsequent amendments [23].
Time-invariant covariates
Time-invariant covariates in the current study consisted of sociodemographic variables, COVID-19-related variables, and previous psychiatric diagnosis, all obtained at T0 and assumed to remain unchanged across the study. Sociodemographic variables included age, gender, educational level, work status, and economic status. Age was grouped by decade.

COVID-19-related variables included time of first infection, hospitalization for COVID-19, vaccination against COVID-19, and COVID-19 severity in the acute phase. Time of first infection was measured with a single item in which respondents stated the date (year and month) of their first infection. The variable was dichotomized into infection during 2020 versus during 2021 or 2022, in line with our previous study, which revealed that individuals first infected during the first and second pandemic waves in Sweden (the spring and autumn of 2020) experienced more COVID-19-related problems [17]. Hospitalization for COVID-19 was measured with a single binary item asking whether respondents had been hospitalized because of COVID-19 (yes/no). Vaccination against COVID-19 was measured with a single binary item asking whether respondents had received a vaccine against COVID-19 (yes/no). COVID-19 severity in the acute phase was measured with a 15-item scale describing common symptoms of COVID-19 infection, namely fever, fatigue, cough, loss of smell and taste, difficulty breathing or shortness of breath, headache/migraine, aches or pain in the body, diarrhoea, skin rash, runny or blocked nose, nausea/vomiting, arrhythmia/palpitations, sore throat, cognitive difficulties such as memory and attention problems, and mental health problems such as sleep problems, depression, and anxiety [24,25]. Participants rated the symptoms they had at the beginning of the infection and during the following 4 weeks on a 4-point scale (0 = no, 1 = mild, 2 = moderate, 3 = severe). Responses to the 15 symptom items were summed to yield an acute-phase COVID-19 severity score (range 0-45, α = 0.77).

Previous psychiatric diagnosis was assessed with a single binary item asking whether respondents had received a psychiatric diagnosis before their COVID-19 infection (yes/no).

Time-varying covariates
Fatigue and post-COVID impairments were treated as time-varying covariates, assumed to be subject to change across the study. Time-varying covariates were assessed at all three time points (T0, T1, and T2).

Fatigue
The Multidimensional Fatigue Inventory (MFI) is a self-report instrument measuring fatigue. The MFI is a 20-item scale consisting of five subscales: general fatigue, physical fatigue, reduced motivation, reduced activity, and mental fatigue. Each subscale contains four items, each rated on a 5-point scale from 1 (Yes, that is true) to 5 (No, that is not true) [26], and the total score is calculated by summing all items. Higher scores indicate higher fatigue levels [27], and a total score > 60 has been reported as indicating clinically significant fatigue in a previous study [28]. In this study, we used the Swedish version, which has shown adequate psychometric properties [29,30].
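To make the scoring concrete, here is a minimal sketch in Python/pandas of the summed scores described above. This is not the authors' code: the DataFrame `df` and all column names are assumptions for illustration only.

```python
import pandas as pd

# `df` is a hypothetical wide-format DataFrame, one row per respondent.
# 15 acute-phase symptom items, each rated 0-3; the sum spans 0-45.
symptom_cols = [f"symptom_{i}" for i in range(1, 16)]
df["covid_severity"] = df[symptom_cols].sum(axis=1)

# MFI-20 total: 20 items rated 1-5, summed to a 20-100 total; a total
# score > 60 is taken as clinically significant fatigue, per the cutoff
# cited above. (Any reverse-coding of items is omitted in this sketch.)
mfi_cols = [f"mfi_{i}" for i in range(1, 21)]
df["mfi_total"] = df[mfi_cols].sum(axis=1)
df["clinical_fatigue"] = df["mfi_total"] > 60
```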
Post-COVID impairments
Post-COVID impairments were measured using a scale consisting of 54 items rated on a 4-point Likert scale (0 = no, 1 = mild, 2 = moderate, 3 = severe), developed and used in our previous studies [10,17]. Items were categorized into four sub-categories according to the International Classification of Functioning, Disability and Health [31]: impairments in mental functions, impairments in sensory functions and pain, impairments in body system functions, and impairments in activities and participation. Responses within each sub-category were summed and divided by the number of items to obtain a mean score for each sub-category.

Study outcomes
Mental health variables were the study outcomes and consisted of depression, anxiety, and insomnia. Depression was measured with the Patient Health Questionnaire-9 (PHQ-9), which consists of nine items answered on a four-point Likert scale (0-3), with a total score ranging from 0 to 27 [32-34]. Anxiety was assessed with the Generalised Anxiety Disorder-7 item scale (GAD-7), which contains seven items answered on a four-point Likert scale (0-3), with a score range from 0 to 21 [35-38]. Insomnia was measured with the Insomnia Severity Index (ISI), which consists of seven items assessing the nature, severity, and impact of insomnia, answered on a five-point Likert scale (0-4); the total score ranges from 0 to 28 [39,40]. The recommended cutoff score of ≥ 10 on each scale was used to define clinically significant depression, anxiety, and insomnia in the current study [33,36,40].

Statistical analysis
Descriptive statistics for sociodemographic variables are provided as percentages, means, and standard deviations for both the baseline and longitudinal samples. Descriptive statistics for fatigue, post-COVID impairments, and the study outcomes are presented as means and standard deviations. Additionally, we computed the intraclass correlation coefficient (ICC) to evaluate variation between the baseline and follow-up assessments for the time-varying covariates and study outcomes. An ICC below 0.4 was categorized as very low, 0.4 to 0.74 as low to acceptable, and 0.75 or higher as excellent [41].

To assess the potential impact of the covariates, we used mixed-effects models, which are well-suited to longitudinal data analysis. Participants were included in a model only if data from all three measurements were available for the given mental health outcome. The two-tailed significance level was set at α = 0.05.

We ran linear mixed models with random intercepts to examine changes in mental health outcomes (PHQ-9, GAD-7, and ISI scores) over time, adjusting for sociodemographic variables, COVID-19-related variables, and previous psychiatric diagnosis. Furthermore, we ran linear mixed models to identify factors that affected the trajectories of depression, anxiety, and insomnia by including both time-invariant and time-varying covariates in the model. We used the AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) as measures of model fit; lower AIC or BIC values indicate a better fit. Statistical analyses were performed using the statsmodels library (version 0.13.5) in Python and the IBM Statistical Package for the Social Sciences (SPSS; version 26).
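As an illustration of the modelling step, the following is a minimal sketch, using the statsmodels library named above, of a random-intercept linear mixed model for one outcome (PHQ-9). This is not the authors' code: the long-format DataFrame `df_long` and all column names are assumptions, and the authors' exact model specification may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# `df_long` is a hypothetical long-format DataFrame with one row per
# participant per time point (time = 0, 1, 2 for T0, T1, T2).

# Sub-category mean for post-COVID impairments in mental functions,
# assuming item columns named impair_mental_1, impair_mental_2, ...
mental_items = [c for c in df_long.columns if c.startswith("impair_mental_")]
df_long["impair_mental"] = df_long[mental_items].mean(axis=1)

# Random intercept per participant; time, time-invariant covariates and
# time-varying covariates (fatigue, impairments) enter as fixed effects.
model = smf.mixedlm(
    "phq9 ~ time + age_group + covid_severity + prior_diagnosis"
    " + mfi_total + impair_mental",
    data=df_long,
    groups=df_long["participant_id"],
)
result = model.fit(reml=False)  # ML fit so that AIC/BIC are defined
print(result.summary())         # coefficients with CIs and p-values
print(result.aic, result.bic)   # fit metrics for comparing models
```

Fitting one such model per outcome (PHQ-9, GAD-7, ISI), with and without time-by-covariate interactions, and comparing AIC/BIC mirrors the model-comparison logic described above.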
Descriptive statistics
Descriptive statistics for sociodemographic variables are presented for the baseline sample and the longitudinal sample (Table 1). We examined whether sociodemographic variables predicted completion of the surveys at each time point. There were no significant differences between participants who completed the survey at all time points and those who did not with regard to sex, age, education level, marital status, work status, or economic status.

The majority of the longitudinal sample had been infected with COVID-19 for the first time during 2020 (69.5%), had not been hospitalized for COVID-19 (85%), and had been vaccinated against COVID-19 (83.9%). The average severity of COVID-19 in the acute phase was 24.7 (standard deviation = 7.8, ranging from 4 to 44). Furthermore, 27.6% of the respondents reported that they had received a psychiatric diagnosis before their COVID-19 infection.

Table 2 presents descriptive statistics for fatigue, post-COVID impairments, and mental health outcomes over time in the longitudinal sample. A decline in the mean total fatigue score was observed from T0 to T2. In addition, the prevalence of clinically significant fatigue (scores > 60 points) decreased steadily from 90.5% at T0 to 83.5% at T2. The mean values of post-COVID impairments decreased slightly from T0 to T2. Figure 1 presents the proportion of participants with clinically significant levels of depression (≥ 10 points on the PHQ-9), anxiety (≥ 10 points on the GAD-7), and insomnia (≥ 10 points on the ISI) over time.

Predictors of the trajectories of depression, anxiety, and insomnia
Adjusted estimates of the changes in depression, anxiety, and insomnia scores over time from the linear mixed models are shown in Table 3. The results demonstrated a significant decline in depression over time, while no significant changes were observed in anxiety or insomnia. We also examined interactions between time and the other variables, including sociodemographic variables, COVID-19-related variables, and previous psychiatric diagnosis, but none of the interactions proved significant. The model fit metrics (AIC and BIC) indicated that adding the interactions only worsened model fit.

Table 4 presents estimates derived from the linear mixed models examining the associations between sociodemographic variables, COVID-19-related variables, previous psychiatric diagnosis, and the outcome variables. Separate models were employed for depression, anxiety, and insomnia. The findings indicated that younger adults and individuals who experienced more severe COVID-19 infection in the acute phase exhibited poorer mental health outcomes (Table 4).

The outcomes of the linear mixed models examining the associations between fatigue, post-COVID impairments, and the outcome variables (depression, anxiety, and insomnia) are presented in Table 5. We conducted the analysis at the individual level, ensuring implicit adjustment for sociodemographic factors, COVID-19-related variables, and previous psychiatric diagnosis. The results showed that fatigue was a significant predictor of all outcomes, and impairments in mental functions were an additional significant predictor of depression and anxiety. Both variables were positively associated with the outcomes, with fatigue being the strongest predictor.
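For concreteness, the proportions shown in Fig. 1 follow directly from the ≥ 10 cutoffs defined in the Methods; a minimal pandas sketch (again with assumed names, not the authors' code):

```python
# Share of the longitudinal sample at or above each clinical cutoff,
# per wave; `df_long` and its column names are assumptions.
cutoffs = {"phq9": 10, "gad7": 10, "isi": 10}
for wave in (0, 1, 2):  # T0, T1, T2
    at_wave = df_long[df_long["time"] == wave]
    shares = {scale: round((at_wave[scale] >= cut).mean() * 100, 1)
              for scale, cut in cutoffs.items()}
    print(f"T{wave}:", shares)
```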
Discussion
We investigated trajectories of mental health outcomes over one year in Swedish adults infected with COVID-19, using a three-wave survey. Our results demonstrated a significant decline in depression over time, while changes in anxiety and insomnia were small and non-significant. In this study, levels of depression decreased steadily; anxiety exhibited a slight increase followed by a subsequent decrease, remaining below the baseline level; and insomnia increased slightly and then decreased, consistently remaining above the baseline level. Our findings are in line with previous studies indicating that mental health problems remain more prevalent among individuals who have had a COVID-19 infection [42-47]. However, symptoms of depression and anxiety decreased over time regardless of the initial severity of the disease [48,49].

There are several possible explanations for these findings. Firstly, depression and anxiety symptoms have shown a decreasing trend in the general population, including our participants, over the course of the COVID-19 pandemic [50]. During the COVID-19 pandemic in Sweden, individuals were encouraged to work from home when possible. Additionally, gatherings of more than 50 people were prohibited, many businesses and higher education institutions voluntarily transitioned to video conferencing, and non-essential travel was significantly reduced. However, at the onset of the study period in February 2022, Swedish authorities changed their pandemic response strategies in line with other European nations, lifting the majority of COVID-19 restrictions [51]. The relaxation or removal of COVID-19-related restrictions, facilitated by the global vaccination campaign, enabled people to resume their pre-pandemic lifestyles and activities. This transition may have alleviated depression and anxiety symptoms, as individuals restored a sense of normality and participated in activities that provided them with joy and fulfillment. Another potential factor is the enactment of mental health recovery strategies by policymakers in various countries, including Sweden. These strategies include initiatives to monitor, inform, educate, intervene, and research mental health issues in society [52], and the efforts target both immediate and long-term mental health outcomes. A third possible explanation is sustained recovery from COVID-19-related persistent symptoms over time. A substantial proportion of individuals infected with COVID-19 report experiencing at least one moderate-to-severe impairment due to the infection, with fatigue being the most commonly reported symptom [17,53-61]. Furthermore, our previous cross-sectional study revealed that post-COVID impairments and fatigue emerged as significant predictors of mental ill-health in individuals infected with COVID-19 [10]. However, progressive improvement has been observed across a wide array of symptoms over time [48,62,63]. Our results indicate that impairments in mental functions and fatigue affect changes in depression and anxiety over time. These factors shape the dynamics of depression and anxiety and are key to their longitudinal course; thus, managing these complaints may improve mental well-being. In summary, the reduction in symptoms of depression and anxiety observed in this study may be linked to the global recovery from the COVID-19 pandemic and the improvement of post-COVID complaints, especially fatigue.
We found that insomnia, unlike depression and anxiety, increased slightly before decreasing during COVID-19 recovery, but remained above baseline throughout the study period. This indicates a complex interaction of factors affecting sleep quality in this population. These findings are consistent with a previous study that demonstrated a decrease in symptoms of depression and anxiety but an increase in symptoms of insomnia among COVID-19 patients over time [64]. Additionally, another study indicated no significant change in insomnia over time among COVID-19 patients [65]. Several factors may explain why insomnia exhibits a different pattern than other symptoms of mental ill-health. First, rates of insomnia, like other mental health issues, increased significantly during the COVID-19 pandemic [4], and the prevalence of insomnia was higher in COVID-19-infected patients than in the general population [10,64,66]. The initial increase in insomnia could be attributed to the physiological and psychological effects of the acute phase of COVID-19 infection and to the side effects of COVID-19-related medications, which disrupted sleep quality and quantity during the early stages of recovery and increased the risk of developing chronic insomnia later on [55,67-69]. Second, sleep-related problems are among the most commonly reported remaining symptoms after recovery from COVID-19 [17,48]. However, post-COVID impairments did not significantly contribute to changes in insomnia following COVID-19 infection in the current study. Interestingly, fatigue emerged as a significant predictor of insomnia. The co-occurrence of fatigue and insomnia has previously been found to be highly prevalent among individuals recovering from COVID-19 infection [70], suggesting that these two symptoms frequently manifest together in individuals who have experienced the illness. Additionally, several studies have highlighted fluctuations and relapses in post-COVID-19 fatigue over time [48,71]. The interplay between fatigue and insomnia can create a vicious cycle, particularly among patients with long COVID [72]: fatigue can contribute to increased sleep difficulties, while insomnia can exacerbate feelings of fatigue and prolong recovery. This bidirectional relationship may lead to a chronic cycle of symptoms and further impact overall well-being. Lastly, it is essential to consider the bidirectional relationship between mental health and sleep. Insomnia can exacerbate persistent symptoms of depression and anxiety, while these mental health conditions can also contribute to sleep disturbances [73]. Reducing symptoms of depression and anxiety may help improve insomnia in COVID-19 survivors, and better mental health and coping skills can improve sleep quality. Insomnia therefore warrants ongoing assessment and treatment in individuals infected with COVID-19; addressing fatigue and mood may also reduce insomnia.
Further analysis revealed that younger adults and individuals who experienced more severe COVID-19 infections in the acute phase exhibited poorer mental health outcomes. Previous studies have demonstrated that younger adults were more profoundly affected by the pandemic and exhibited higher levels of mental health problems [4,74,75]. Younger adults, despite primarily experiencing mild COVID-19 infections, faced greater challenges related to the long-term impacts of COVID-19 infection, which significantly disrupted their participation and performance in work, education, and daily activities. Hence, young adults remain an at-risk group for mental ill-health following COVID-19 infection. Moreover, prior research has consistently demonstrated that the severity of COVID-19 infection in the acute phase is strongly linked to persistent post-infection symptoms [53,55,56,61], emerging as the most robust predictor of post-COVID impairments (Badinlou et al., 2023). It has also been shown to contribute to higher levels of mental ill-health following COVID-19 infection [10]. Therefore, it is reasonable to conclude that individuals who experienced severe COVID-19 infection in the acute phase continue to be at risk for mental ill-health. These findings highlight the importance of considering sociodemographic and COVID-19-related factors when examining the impact of COVID-19 on mental well-being.

The primary objective of our study was to explore the potential trajectories of mental health changes following COVID-19 infection. To achieve this goal, we focused on minimizing the risk of overlooking real effects (i.e., Type II errors) rather than strictly controlling the risk of falsely identifying effects (i.e., Type I errors). This approach was deemed more appropriate for exploratory research, since it allowed us to prioritize detecting patterns in the data, even at a slightly increased risk of falsely identifying some effects.

The current study has several practical implications. First, understanding changes in mental health outcomes following COVID-19 infection and identifying risk factors could help healthcare providers develop targeted interventions to support those who have been infected and may be experiencing psychological problems. Second, the findings provide policymakers with evidence-based insights for implementing strategies to mitigate the long-term mental health impact of COVID-19 infection and to promote mental well-being in individuals infected with COVID-19, including those who experienced only a mild infection. Finally, the study contributes to the broader body of research on the mental health consequences of infectious diseases, potentially guiding future pandemic preparedness and response efforts.
Nevertheless, it is important to interpret the results of the study in the context of its limitations and to consider potentially confounding factors. First, the current study relies on self-reported data for mental health outcomes, which may be biased or inaccurate compared to clinical assessments. Second, it uses a convenience sample, which may limit the generalizability of the findings; future studies should use more representative samples. Third, it may suffer from non-response bias, as participants who continued and those who dropped out may differ in important ways. Fourth, we could not establish causality between COVID-19 infection and mental health changes, as there may be other confounding factors. Fifth, the study lacks a control group that did not contract COVID-19, which makes it hard to isolate the effects of the infection on mental health. Sixth, the majority of participants in the current study were female, introducing the possibility of gender-related biases and potentially limiting the generalizability of the findings to a more balanced demographic.

Conclusions
This study provides a longitudinal perspective on mental health issues following COVID-19 infection, shedding light on the dynamic nature of mental health outcomes over time and underscoring the importance of continued support and interventions tailored to the changing mental health needs of this affected population. Further research is needed to understand the underlying factors contributing to these changes and to develop targeted interventions for individuals experiencing persistent mental health symptoms.

Table 1 Sociodemographic characteristics of the baseline sample and the longitudinal sample

Table 2 Descriptive statistics for fatigue, post-COVID impairments, and mental health outcomes over the three measurement points (N = 236). ICC Intraclass Correlation Coefficient, SD standard deviation, MFI-20 Multidimensional Fatigue Inventory-20, PHQ-9 Patient Health Questionnaire-9, GAD-7 Generalised Anxiety Disorder-7 item scale, ISI Insomnia Severity Index

Fig. 1 Proportion of people reporting clinically significant levels of depression, anxiety, and insomnia over time

Table 3 Adjusted estimates of the change in depression, anxiety, and insomnia over time from linear mixed models. PHQ-9 Patient Health Questionnaire-9, GAD-7 Generalised Anxiety Disorder-7 item scale, ISI Insomnia Severity Index, Coef regression coefficient, CI Confidence interval. a Linear mixed model with random intercepts adjusted for sociodemographic variables. b Linear mixed models with random intercepts adjusted for sociodemographic and COVID-19-related variables. c Linear mixed models with random intercepts adjusted for sociodemographic variables, COVID-19-related variables, and previous psychiatric diagnosis. * p < .05. ** p < .01. *** p < .001
Pushed back, pulled forward: Exploring the impact of COVID-19 on young adults' life plans and future mobility

The COVID-19 pandemic has caused unprecedented disruption to how people live, work and travel. There has been a recent surge of research on the short-term impacts that the pandemic has had on travel behaviour. However, the long-term impact of the pandemic on travel behaviour is still uncertain and difficult to predict. In particular, young adults are facing some of the most significant disruptions from the pandemic; these disruptions are likely to have long-term impacts on their lives. This study aims to unpack the direct and indirect effects that COVID-19 may have on the travel behaviour of young adults. It does this through in-depth interviews with 26 young adults living in Melbourne and Victoria, Australia. Interviews suggest that while the pandemic has had significant impacts on the short-term travel behaviour of all young adults, the long-term impacts are more complex and are mediated by how they are moving through key life milestones. Many respondents are relatively unimpacted by the pandemic. Others have faced a significant disruption to their lives. Those who had planned to live or work overseas have found their life plans 'accelerated', which may also accelerate their dependence on the car. In contrast, those who have lost work are facing a significant delay to their life plans. We propose a framework for how COVID-19 may directly and indirectly impact travel behaviour in the short and long term. The strongest impacts on mobility, through changes to life stage transitions, are indirect and unevenly spread across the population of young adults.

Introduction
The COVID-19 pandemic and the experience of stay-at-home orders have had a profound impact on society. In response, there has been a recent flurry of research on the impacts of COVID-19 on travel behaviour. Initial findings are that overall rates of travel plummeted during stay-at-home orders, especially for trips on public transport (Beck and Hensher 2020; de Haas et al., 2020). However, what is not clear is whether these changes are short-lived and will be reversed when vaccination rates provide populations with herd immunity. In many cities, car travel quickly returned to pre-pandemic levels when domestic travel restrictions eased (Sipe 2020). Even public transport has seen significant recoveries in places like New Zealand, where infection rates are extremely low (Ipsos 2020). Any lasting impacts on travel behaviour are likely to play out indirectly, through changes to the upstream influences on travel behaviour such as home and work location, lifestyle and preferences. Already it has been shown that COVID-19 is having a disproportionate impact on young adults, who are more likely to have lost their job (Blundell et al., 2020) or to face negative mental health impacts (Pieh et al., 2020). Prior to the pandemic, young Australians already faced a precarious economic future. Compared with older generations, they are more likely to be underemployed, make up a disproportionate share of the casual labour force and have experienced stagnant income growth in recent years (Productivity Commission 2020; de Fontenay 2020). In addition, in many countries young adults are taking longer to obtain their licenses, are more likely to use transit and own fewer cars (Kuhnimhof et al., 2011; Delbosc and Currie 2013).
In part, this has been attributed to delays in reaching 'adulthood' milestones, such as starting full-time work, buying a home and having children (Grimsrud and El-Geneidy 2014; Hjorthol 2016). Young adults are at a formative period, cementing their future lifestyle aspirations and, in particular, on the cusp of making long-term decisions regarding housing and employment. As such, the impacts of COVID-19 are likely to be more pronounced among this group. Using the results of 26 qualitative interviews with young Australians, this paper seeks to explore whether the pandemic has compounded or alleviated the delays in reaching 'adulthood' milestones. Further, it seeks to understand how the pandemic may have shaped future lifestyle aspirations and, in turn, how these changes may influence travel behaviour into the future. Although much of this paper explores how the pandemic has impacted the lives of young people beyond travel and transport, the findings provide valuable context for transport decision-makers. It also builds upon a popular framework that characterises the relationship between life events and travel behaviour (Müggenburg et al., 2015), providing guidance for transport researchers looking at the relationships between COVID-19 and travel behaviour. After first discussing the relevant literature and the approach to conducting the interviews, this paper outlines how the pandemic has had one of three different effects on the life course of young adults. It then discusses enduring changes to lifestyle aspirations that have been shaped by the lockdown experience. It concludes with some discussion and policy implications of these findings.

Literature review
Travel behaviour is shaped by myriad influences, from short-term demand through to long-term mobility decisions, such as housing location and car ownership (Müggenburg et al., 2015). The COVID-19 pandemic has significantly disrupted the spatial and social environment in which young people operate. This is likely to have significant impacts on the short-term travel behaviour of all, and on the long-term locational and lifestyle behaviour of some.

The impact of COVID-19 on short-term travel behaviour
Countries that have enforced stay-at-home orders in response to the COVID-19 pandemic have inevitably experienced drastic changes in daily travel patterns. For instance, de Haas et al. (2020), using Dutch panel data, show that the frequency of trips reduced by over half (58%) during the Dutch lockdown period (March/April 2020) compared with the previous survey wave. As a proportion of all trips, walking increased while all other modes decreased. In Australia and Canada, similar reductions in travel were evident during the same period and under comparable stay-at-home orders: trip frequency reduced by approximately 50% in both Australia and British Columbia (Fatmi 2020). Moreover, attitudes towards private transport have tended to become more favourable while attitudes towards public transport have become more negative (de Haas et al., 2020). For instance, an Australian survey conducted in the initial period of the first lockdown in Australia (late March and early April 2020) showed the vast majority of respondents (84%) would feel most comfortable travelling by private car. Similarly, de Haas et al. (2020) found attitudes towards the private car improved while attitudes towards public transport became notably less favourable. Little change was evident in attitudes towards walking and cycling (de Haas et al., 2020).
Stay-at-home orders have had a significant impact on how many people work, shop and socialise. Although it depends on the degree of local travel restrictions, surveys suggest that between a third and a half of workers switched to working from home during the height of the pandemic (Ipsos 2020; Roy Morgan 2020). Travel for 'non-essentials' (recreation, socialising and non-grocery shopping) drops significantly during stay-at-home restrictions but tends to rebound quickly when restrictions are eased (Ipsos 2020). Although the short-term impacts on travel behaviour are now fairly well documented, the long-term impacts are less certain. Moreover, it is unclear how COVID-19, as an unprecedented and sudden event, might interact with cohort and period effects in shaping future travel behaviour. For instance, recent research has shown that US commuters who came of age during the 1970s oil crisis were less likely to commute by car, even decades later. The authors attribute this to the formation of cost-sensitive transport preferences during early adulthood that endure throughout adulthood (Severen and Van Benthem 2019). However, as restrictions ease and vaccination programs accelerate, lasting impacts (if any) are most likely to be mediated by two broad factors: changes to when and how people transition through life stages, and changes to long-term mobility choices.

Life stage, lifestyle and travel behaviour
A range of research explores the important part that life stage and life transitions play in determining travel behaviour (Müggenburg et al., 2015). In particular, young adulthood is a formative period in which future lifestyle preferences are cemented. Finishing schooling, moving out of the parental home, entering the workforce, buying a house and starting a family are all significant milestones for many young adults. Moreover, for a significant proportion of young adults, travelling overseas (sometimes for years at a time, to live and work) has become a defining part of young adulthood (Delbosc and Nakanishi 2017). In a large-scale qualitative study of teenage Australians' future life aspirations, over half of respondents' imagined futures featured travel, typically before 'settling down' to start a family (Bulbeck 2005). However, how young people approach these milestones (if at all) varies considerably, as young adults have a greater variety of life courses available compared to previous generations (Du Bois-Reymond 1998; Delbosc and Nakanishi 2017). In recent years, some young adults have been making more sustainable travel choices compared with earlier generations (McDonald 2015; Delbosc 2016). This is in part attributed to the 'delayed' life course becoming increasingly common, where the transition to 'adulthood' milestones associated with increasing car use, such as starting full-time work and having children, happens later in life, once other activities such as travel and further education are pursued (Delbosc and Currie 2014; Delbosc and Nakanishi 2017). This delay, in turn, creates fewer constraints on daily travel choices and, potentially, delays the transition to more car-dependent lifestyles. The impact of COVID-19 on when and how young adults move through life stages is hard to predict. However, it is quite likely that for young adults (who are less likely to experience the health impacts of the pandemic), the largest impacts on their life stage will be felt through the economic recessions that are unfolding in many countries.
A breadth of research has examined the economic effects of the Great Recession of 2007-2009 on young adults' lives. Much of this research, largely undertaken in a US context, shows that young adults were disproportionately impacted by the recession, which, in turn, led to far-reaching ramifications. Some researchers found that even before the recession, economic factors were the primary reason for lower driving among American youth. Fewer employment prospects led to a greater uptake of education (Clark 2011) and more young people returning or continuing to live with their parents (Fry 2013). Moreover, an estimated 2 million fewer births occurred in the US during the five years following the 2007 Great Recession, suggesting that starting or growing a family was postponed or forgone (Sobotka et al., 2011). These findings suggest that COVID-19 may act to delay key life stage transitions, which in turn may delay the transition to car dependence.

Long-term mobility decisions
Travel behaviour is strongly influenced by 'upstream', long-term decisions such as where to live and where to work (Salomon and Ben-Akiva 1983). The COVID-19 pandemic has significantly disrupted working practice and may also be influencing preferences for home location. Preliminary research suggests that working from home may be an area where enduring changes occur. In Australia and New Zealand, between a third and half of workers have had to work from home during the pandemic (Ipsos 2020; Roy Morgan 2020), an unprecedented disruption to workplace location. Several recent studies have demonstrated that the majority of individuals working from home have had positive experiences doing so and would want to work from home more in the future (Rubin et al., 2020). However, it is premature to determine whether working-from-home rates will remain high in the long term (Mokhtarian 2020). Potential long-term impacts on housing preferences are even less clear. The preliminary research on this topic tends to focus on modelling housing prices in response to the economic shock of the pandemic (e.g. Allen-Coghlan and McQuinn 2020; Evans et al., 2020). Other work looks at how the unfolding economic crisis is making housing more precarious and unstable, especially for renters or people living in share housing (Raynor and Panza 2020). For those with the means to purchase a home, housing prices in 2020 and early 2021 continued to rise in many countries, driven largely by low interest rates (The Economist 2020). However, initial reports that people were fleeing apartment and inner-city living for suburban and rural homes have proven to be more nuanced. After dropping in 2020, housing prices in inner-city areas of many American cities saw a strong rebound in early 2021 (Gopal and Wittenberg 2021); in contrast, apartments in inner areas of Australian cities have seen significant price drops due to the decrease in demand from the international student market (Eddie and Booker 2021). The impact that these price changes will have on young adults is likely to depend on whether they were already in a position to take advantage of historically low interest rates.

Summary
Short-term changes to travel behaviour as a result of travel restrictions introduced in response to the COVID-19 pandemic are now increasingly understood. However, what is still unclear is the impact that these changes present for long-term mobility. Understanding the long-term impacts on young adults is particularly important as they are at a formative period of their lives.
The housing and employment decisions that they make now are likely to have long-term effects on their mobility. As such, this paper seeks to address this gap by exploring the impact of COVID-19 on young adults' life plans and their future mobility.

Study location
This study was conducted in Victoria, Australia. Greater Melbourne, Victoria's capital city and the location where the majority of interviewees were based, comprises a large land area of just under 10,000 square kilometres and, in 2020, a population of 4.9 million. On average, there are 1.7 motor vehicles per household. Reflecting the high rates of car ownership, the city has a predominantly sprawling, car-dominated urban environment. Nearly two-thirds of residents (67%) travel to work by car, compared with just 16% travelling by public transport and 5% walking or cycling (Australian Bureau of Statistics, 2016).

Interview approach
Interviews were conducted during late July and early August 2020. During this period, Australia recorded the highest number of COVID-19 cases since the first case was reported in January 2020. In Melbourne, Victoria, where most interviewees were located, stay-at-home orders were in place throughout the interview period. Midway through the interview period, mandatory face covering policies were introduced and stay-at-home orders increased in duration and restrictiveness. These changes, and an upward trajectory of reported case numbers, may have compounded a sense of uncertainty about future plans among participants. As stay-at-home orders were in place, interviews were conducted either over the phone or via an online video meeting. Interviews ranged in length from 20 minutes to an hour.

Interview participants
Twenty-six interviewees were recruited from participants of the Millennial Mobility Panel, a three-year study exploring the prospective life plans and travel behaviour of Australian millennials. The survey panel consisted of 885 residents of Melbourne and regional Victoria aged between 21 and 25 when recruitment began in 2017 (aged 24-29 in 2020). Once a year, for three years, they were surveyed about their travel behaviour, life stage and plans for the future. See Delbosc and Farhana (2019) for more information about this survey. Panel members were contacted and asked to participate in interviews about the impact of COVID-19 on their travel behaviour. We aimed for the participants to be broadly representative of Victorian Millennials in terms of gender, housing and employment status. However, as the interviews progressed we deliberately targeted specific demographics to reduce the chances of missing key themes. These groups included people without a driving license (as their mobility experience is likely to differ from those who can drive), people without a university degree and people who had lost their job. Interviewees were in a range of housing and employment situations (see Table 1). Most interviewees had moved out of home and were now living independently, with flatmates or a partner. Over half of the interviewees were in full-time employment, generally within the early stages of their careers. The remaining interviewees were either unemployed prior to the pandemic, had lost work as a result of the pandemic and were currently unemployed, or were studying. A minority of interviewees had purchased their first home, but the majority aspired to home ownership, even if this was presently a very distant prospect.
Only one respondent had a child, two respondents were planning on having a child in the next few years, and the remainder either were not planning on having children or had no firm plans.

Analysis approach
All interviews were audio-recorded and transcribed. A thematic analysis of the interview material was conducted following the process set out by Braun and Clarke (2006). After analysing 23 interviews, initial themes were discussed within the project team. A further 3 interviews were conducted to further develop the preliminary themes identified and to broaden the demographic range of participants. After 26 interviews no new themes were emerging and we considered data saturation to be achieved. Interviewees' quotations have been coded using the following characteristics: gender, age and typology. For example, "F26_Stable" refers to a female, aged 26, and a member of the 'Stable' typology.

Results
Interview participants focussed extensively on how the pandemic impacted their lives in the short term. Through the course of coding the discussions, it became evident that these short-term impacts had diverging effects on the life course pathways that young adults were pursuing. This, in turn, is likely to have differing impacts on their long-term mobility choices. The next three sections unpack these three themes.

Short-term impacts on travel behaviour
Short-term changes in interviewees' travel behaviour echoed recent findings in the COVID-19 travel literature (de Haas et al., 2020). All respondents reported, unsurprisingly, a reduction in the amount that they travelled once the stay-at-home orders were in place. As a proportion of travel modes used, some reported driving, walking and cycling more, whilst most also reported marked reductions in using ridesharing and public transport. The vast majority of interviewees anticipated returning to the travel modes that they used prior to the pandemic once travel restrictions eased. Interviewees described a breadth of activities that had formerly been conducted in person which had been replaced with online versions. Work and study were largely conducted from home. Furthermore, interviewees engaged in a range of recreation activities online, such as catching up with friends and family and even attending festivals and volunteering. However, with the exception of some work and retail shopping, most interviewees reported they were likely to return to in-person versions of these activities once restrictions ease. This was particularly the case for social activities, which nearly all interviewees commented did not replace in-person interactions. "I don't really do them much [online social video chats] unless it's someone's birthday because I don't like them … It's just not quite the same and it also just seems to always go on forever and then I don't always feel more connected to people after them." F25_Stable. Moreover, online social activities tended to reduce in frequency as the novelty wore off. As one interviewee describes: "I think initially during that first wave I was on zoom and all the things that everyone else was getting on … but now I think I mainly only use those sort of chat things for special occasions like if it's friend's birthday or something." F27_Stable. Furthermore, the experience of stay-at-home orders and restrictions on gatherings with friends and family had a profound impact on all interviewees.
The majority of interviewees expressed that they found the experience of lockdown socially isolating, with many reporting deteriorating mental health. This, in turn, appeared to reinforce the importance of in-person social contact. "It's just made me feel a lot more alone because, I mean, sure I've been living on my own but before the pandemic I could still see my friends whenever I wanted to see them … I went through months without hugging anyone and even that - like even that was something that took me to a really bad place." M25_Delayed. Overall, interviewees said that short-term changes to their travel behaviour and activities were unlikely to result in enduring changes to the way they travel. However, subtle changes to housing preferences and employment practices, discussed in the final part of this section, may end up instigating changes to their long-term mobility.

Changing pathways and adulthood milestones
Study participants were in different stages of life before the pandemic. Some were living with parents or studying; others were living with partners or working full-time; a few had already purchased a home. The COVID-19 pandemic influenced how young adults approached these life stages in different ways, resulting in three distinct categories of disruption: Stable, Delayed or Accelerated. The largest group of respondents (18 interviewees), referred to throughout as the 'Stable' typology, did not experience a significant disruption to their life course. The remaining two groups experienced a significant disruption in their lives, prompting either an acceleration or a delay in their life course. The unifying characteristic of interviewees in the 'Stable' typology was that they did not experience a significant economic disruption as a result of the pandemic. They were not all at the same stage of life: although nearly all members of this group were in full-time employment or education, some were not in the workforce due to mental or physical health. Rather, the defining feature is that their economic circumstances had not changed. As a result, only subtle changes were apparent in the future plans of the Stable typology. These ranged from slight education and career progression delays to, in some cases, financially driven milestones (such as buying a home) being brought forward due to savings accruing more quickly. In contrast, the remaining eight interviewees experienced a significant disruption in their lives as a result of the pandemic. Most commonly, the disruption was propelled by losing employment and difficulties finding new employment. However, for one interviewee working overseas at the start of the pandemic, the disruption was prompted by returning to Australia in response to the government's requests for citizens to return. Overall, the effect of this disruption was either to accelerate or to delay their movement through life stage milestones. Although these two 'groups' were quite small (4 participants each), they shared a range of characteristics that warrant discussion. The common characteristic among interviewees in the 'Accelerated' group (4 respondents) was a shift in focus from living and working overseas to starting a career or further studies in Australia. This, in turn, created new long-term commitments in Australia that meant plans to live and work overseas became increasingly unlikely. In contrast, among the 'Delayed' group (4 respondents), the impact of losing employment and difficulty regaining employment prompted significant delays in their life course.
Most commonly, delays were anticipated in the career and home ownership steps. One's life path includes changes to one's career, travel, housing and family milestones. The next sections discuss how (if at all) the pandemic changed these milestones among the Stable, Accelerated and Delayed typologies.

Career
Stable interviewees, by definition, did not experience a significant disruption and therefore their career milestones were largely unaffected (at least in the short term). However, they frequently reported concerns about anticipated career delays due to an increasingly uncertain labour market. This made several interviewees more reluctant to change jobs and, as another interviewee expressed, meant that there were fewer internal opportunities for progression: "We have our promotion cycles and I think that at the moment because the industry is incredibly vulnerable and also uncertain. I think that there may not be many promotions this year, although you may be performing quite well, so I think that maybe will come into effect and maybe in two or three years' time you'll have a lot of people fighting over positions because there's a lag as well coming through." M25_Stable. Of Stable respondents in education, the majority were studying a professional qualification part-time alongside full-time employment, while three were studying full-time and working part-time. Several interviewees reported delays to their study due to in-person components, such as placements and practical coursework, being postponed. "What was supposed to be a six-month course at [university] and that's just been prolonged again and again because of delays in teaching and delays in being able to, I guess, offer examinations for things so that's actually still ongoing" F28_Stable. Among the 'Accelerated' respondents, however, study plans were typically brought forward. For instance, two respondents had planned on taking extended overseas trips prior to commencing postgraduate studies. International travel bans brought forward these study plans by, in one case, several years. "Everything that was future has become current if that makes sense. So, you know, the plan to move [back] to Australia and commence study and everything that was supposed to happen next year and it is happening now instead." M26_Accelerated. In contrast to the 'Stable' and 'Accelerated' interviewees, interviewees in the 'Delayed' segment experienced pronounced career disruptions. For one interviewee, a job loss immediately prior to the pandemic was compounded by pandemic-driven changes in the skills expected for the role: "I lost my job in January … The pandemic has really just made it hard to get any job because I'm a teacher. So once everything was online, nobody was hiring … Especially because now a lot of interview questions are around distance education and online learning and because I haven't had a job, I don't have any experience doing that." F27_Delayed. One interviewee reported delays not only to entering the workforce but also to returning to study. "I know whatever I want to do, I have to go back to study and the couple of things I have looked at there is no online version, so it is going to have to wait until things reopen … I'm definitely nowhere near where I thought I would be 2 years ago, 5 years ago, 10 years ago." F25_Delayed.
Moreover, for one interviewee, difficulty finding employment during the pandemic delayed not only their career but also their goal of attaining financial independence from their parents. "The main reason that I want to get full-time work is to be able to pay all [my rent and bills], so that my parents don't have to anymore." M25_Delayed. Overall, most interviewees (regardless of which segment they were in) expressed prioritizing stable and secure employment more than they had done previously. Several interviewees expressed adjusting their career aspirations towards industries perceived as more secure: "I'm no longer thinking about academia as a likely career. It's more just something I'm interested in whereas I'd previously thought that was something I definitely wanted to do and would try and do." F24_Accelerated.

International travel
International travel (both for recreation and for work) is an important milestone for many young Australians. The restrictions on international travel were experienced differently across the three segments of respondents. Those in the 'Stable' group who had firm plans to live and work overseas continued to aspire towards these goals, albeit at a later date than originally planned. The timing was highly dependent on the feasibility of overseas travel resuming. As one interviewee describes: "The original plan was to be the latter half of this year [2020] or early next year. Clearly that's not going to really happen, well, there is always opportunity, but, it doesn't look likely now. But yeah definitely still in the short to medium term future hopefully." M25_Stable. Stable respondents with less concrete aspirations to live and work overseas appeared less certain that these plans would now eventuate. Interviewees who no longer aspired to live and work overseas generally attributed this to the perception that it was now a riskier prospect. As one interviewee describes: "I don't want to move and have to deal with, let's say, visa issues and stuff like that when there's uncertainty about a job or an ability to get government support, so staying in Australia would be better for just having that security, that if something were to happen I'd be supported." F25_Stable. These considered responses are in stark contrast with those in the disrupted category, whose plans to live and work overseas were completely forgone. Among disrupted interviewees, a desire for stable employment and housing became more pressing and, as one interviewee described, there was less willingness to lose their present housing and employment stability: "At the moment my partner has a good job and our house is great and all those kinds of things so if we were to leave and had to come back, they're things like we'd have a loss of stability there and I don't think prior to the pandemic that would have bothered of us so much because the likelihood of us then being able to find another house, another job that would be good, the chances of that would be much higher." F27_Delayed. Similarly, among the 'Accelerated' group, travel was forgone rather than postponed. This was generally due to a shift in focus from living and working overseas to starting a career or further studies in Australia.
This, in turn, created new long-term commitments in Australia that meant plans to live and work overseas became increasingly unlikely: "I would still really like to do the travel but I think because it's so hard to see when it could happen … I have full time work coming up again next year, the plan was always to [travel] whilst I still wasn't tied down to a full time job … It's actually pretty disappointing." F27_Accelerated.

Housing
While the pandemic prompted slight delays in career and travel milestones, for many interviewees in the Stable group the goal of home ownership actually became more attainable. As one interviewee describes: "I guess it's mainly helped a bit with savings at the moment not really going out and I've been saving up probably a bit more. So, I think hoping to hopefully buy a house in maybe the next two years." F27_Stable. Home ownership also became a more likely prospect for one of the Accelerated interviewees as travel plans were forgone. This prompted plans for home ownership to be brought forward, as they planned to live in Australia for the foreseeable future: "[Buying a home is] probably something I'm considering more now than I have been, because of not being able to travel. So, the thought I'm going to be stuck here." F29_Accelerated. In contrast, all interviewees in the Delayed group reported delays to housing aspirations. The lack of a stable income affected not only the ability to save for a home but also, as savings were drawn down to cover an unexpected period of unemployment, made home ownership a more distant prospect for several interviewees. "So last year we thought we would be in a position to, around this time this year, actually put a down-payment on a house. That was also under the assumption that, where I was working last year, was going to be ongoing. And then it turned out that it wouldn't be and I lost my job … now we just have, we have no idea." F27_Delayed. Furthermore, for several interviewees, the experience of losing employment in an uncertain economic environment appeared to have long-term impacts on moving out of their parents' home. For instance, one interviewee had been just about to move out of their parents' home when they lost their job: "It was like I was moving out. That was going to be very soon and now that is not even a thought now … and that was something that I really, really wanted to do." F25_Delayed. Finally, several interviewees reported mixed experiences of living with flatmates or living alone during a pandemic. Several interviewees described feeling fortunate to be living with others rather than alone during lockdown, which was perceived as an even more socially isolating experience. However, for a minority of interviewees, frictions and unease between housemates came to the fore, prompting several to move. As one interviewee, who moved in with her partner during the pandemic, describes: "I [now] don't have to worry about living with housemates who like, while they were great people and very trustworthy, you don't know where they've been or who they're seeing and that kind of thing. It would kind of worry you during a pandemic … I probably just get a bit impatient with people so being locked down with my housemates even though they're lovely, I just started getting a little bit stir crazy." F28_Stable.

Starting a family
The pandemic had a significant impact on finding and living with a partner.
Among those who were single before the pandemic, stay-at-home orders have significantly delayed that process. "Well, I was single before COVID and that's certainly not changing and with COVID going on it's certainly doesn't look like it will be changing any time soon so yeah that's particularly frustrating, when you are single and you know, you can't really do much on that." M28_Stable. In contrast, several interviewees in a relationship but not living with their partner prior to the pandemic brought forward plans to move in together. Typically, moving in together was prompted by the need to minimise travel while stay-at-home orders were in place, but this was generally viewed as a permanent change in their living arrangements. "During the restrictions that have come in … my partner has moved in to stay with us, so we haven't had the need to travel in between our houses" F26_Stable. Among several interviewees there was a recognition that sharing the extraordinary lockdown experience had cemented their relationship more quickly than would otherwise have occurred. "Even when you move in with someone you still have your time apart, whereas now you're sort of like compacted together, and it's like you better learn to really be liking them or you know, if you have any problems or little quirks with each other, you've really got to be vocal about it and talking about it." M23_Stable. Although interviewees were aged between 24 and 29, for most the prospect of starting a family was either not envisioned or was a very distant prospect. Among interviewees who were considering starting a family, most felt this would occur in a sequential order, only after secure employment (stable career) and housing (home ownership) had been achieved. A delay in either the career or home ownership step, in turn, may prompt delays to starting a family. As one interviewee describes: "Financial factors would affect that [getting married and having a child] so if that's something that was on the cards in the future, it's probably even more in the future now and even more delayed" F27_Delayed. Summary of life stage impacts Overall, the extent to which life plans were altered as a result of the pandemic typically depended on whether respondents' economic livelihoods were disrupted. So-called 'Stable' respondents, whose employment was unaffected by the pandemic, showed subtle adjustments to future plans, such as buying a home or living and working overseas. In contrast, 'Disrupted' respondents, who lost employment, who struggled to find employment, or whose plans to live and work overseas were indefinitely postponed, showed more pronounced changes to their future life courses. Among the 'Disrupted' respondents, the changes suggest either an acceleration towards some milestones, as careers are prioritised over plans for overseas travel, or, for some respondents, exacerbated delays due to growing financial insecurity. Table 2 summarises the impact of the pandemic on interviewees' attainment of milestones by each typology. Long-term mobility changes All respondents, regardless of whether they had experienced a disruption to their employment or life plans, were affected by stay-at-home orders. The experience of spending extended periods at home was shown to shape participants' future aspirations for housing and employment. This, along with subtle shifts in attitudes towards private transport, is likely to impact long-term mobility decisions such as housing location and car ownership.
Car ownership Attitudes towards car ownership tended to differ depending on the interviewee's car ownership status. Overall, the response to the pandemic appears to have confirmed, rather than altered, interviewees' decisions to own (or not own) a car. As one interviewee describes, in response to a question about whether the pandemic had changed his preferences about owning a car: "No, not really. I have [found more need for a car] but still not quite enough to justify the expense of buying a car. But that's always been the thing that, sure a car would be useful but not useful enough to justify the expense." M27_Stable. Among several interviewees, perceptions of car ownership had become even more positive as a result of public health campaigns discouraging public transport use (to allow those who needed to use this mode sufficient space to physically distance). This was most apparent among those who had recently purchased a car, as they could more readily draw on their previous experience without a car. As one interviewee who recently became a car owner for the first time describes: "I'm definitely happy to have a car because I realise that some people are in a bit of a pickle now that transport is a bit … all the changes have meant owning a car is very useful in Melbourne whereas previously you could very easily get around without needing one" F24_Accelerated. In contrast, interviewees who were carless by choice typically expressed a level of validation about their decision not to own a car. As one interviewee describes: "… when you are on restrictions and you can't really travel much, having a car just sitting in the driveway or the street just wouldn't make sense to me" F25_Stable. Although the pandemic generally reinforced beliefs about car ownership, for some interviewees changes to work practices prompted them to reconsider the importance of their car. For instance, one car commuter, who anticipated that remote working would become a permanent feature of his employment, was prompted to consider selling a car given its lack of use. "We're considering dropping to one car between me and my girlfriend, because we don't really need to own two cars at the moment because, you know, she works in a hospital so she needs to drive there but I work from home so I don't need to drive anywhere." M26_Stable. Overall, these discussions suggest that the pandemic largely reinforced existing beliefs about car ownership. However, in the long term the use of the car may shift in response to changes in home or work locations. Housing preferences and employment practices Overall, those in the Stable and Accelerated groups were more likely to have experienced remote working during the pandemic and to be approaching a financial position from which to contemplate purchasing their first home. In contrast, for those in the Delayed group, home ownership had become a considerably more distant prospect. Further, their focus tended to be on the more immediate concern of obtaining employment rather than the ways in which their future employment would differ from pre-pandemic work. As such, much of the material in this section draws on interviewees in the Stable group. Among those in the Stable group, changes in long-term mobility are likely to be propelled by subtle changes in housing preferences and employment practices. Among interviewees yet to purchase their first home, the experience of spending extended periods at home shaped their preferences for the type of home they would buy in the future.
Several interviewees reported a shift to prioritising space over location and buying a home with some 'greenery' and a 'back yard'. As one interviewee describes: "It's certainly made me rethink any sort of thoughts I would have had about purchasing an apartment and I guess a house and land or townhouse property, is certainly looking more appealing" I2_Stable. For a minority of interviewees, it prompted an aspiration for a 'tree change' to a more rural area. The appeal of a rural area was attributed to the scenic outlook and more affordable housing, but also, given the rapid pace at which pandemic events unfolded and the growing uncertainty, to the greater space to become self-sufficient: "I would really like to have a bit of land to grow a bit of food. If not just for my own peace of mind. But I mean that's something I've always considered, but it's just made it a bit more to the forefront of what I would choose now." I3_Stable. The shift in housing preferences for some interviewees was linked with the anticipated widespread adoption of remote working practices. Early literature examining the impact of COVID-19 on remote working practices suggests that for most people, where remote working is an option, a combination of home-based and office-based work is preferred (Rubin et al., 2020). This sentiment was largely shared by the interviewees, who reported a range of positive and negative experiences of remote working. As one interviewee describes: "Definitely no commute time is extremely nice. Then being able to fit life admin around it like being able to check on laundry and like eat whatever I want out of my own kitchen, just like home comforts …. [The downsides are] the isolation and the level of self-management required. Like there's a little too much freedom with hours, a little too much freedom with like when I can work and how much I can work and it's a lot harder to collaborate … I definitely miss being physically with other people working on something together." I13_Accelerated. Many interviewees anticipated a combination of home and office working becoming the norm, or at least something they would request from their employers. This, in turn, prompted some interviewees to consider moving further away from the city centre given the prospect of commuting less frequently. However, some interviewees still valued the benefits that inner-city living provided, such as proximity to work and social activities, at least in the short term: "I think my preference while I'm in my twenties would be to still stay in the inner suburbs rather than moving let's say an hour or 40 min out … [It's] just a social thing more than anything." I26_Accelerated. Taken together, these discussions suggest that for young people who are able to work from home, purchasing a home with more space (but farther from their workplace) may be more appealing in the long term. However, these attitudes are by no means universal. Discussion A number of conceptual models of travel behaviour explore the relationship between long-term processes (lifestyle, life events, home location decisions) and short-term travel behaviour (see, for example, Salomon and Ben-Akiva 1983; Van Acker et al., 2010; Müggenburg et al., 2015). Fig. 1 draws upon the framework of Müggenburg et al. (2015) to illustrate how COVID-19 is potentially influencing travel behaviour through its impact on life events, long-term mobility decisions and short-term travel behaviour. As this framework has emerged from discussions with young adults, Fig. 1 emphasises life events relevant to this age group.
Both these interviews and other survey research indicate that the direct impact of COVID-19 on day-to-day mobility is considerable. However, those impacts are likely to be short term; longer-term impacts are mediated by the potential influence of the pandemic on life events, long-term mobility decisions and, potentially, long-term processes affecting travel behaviour, such as new norms regarding remote working. The interviews suggest that in the short term COVID-19 has a weak and uncertain impact on long-term mobility choices like where to live and work and whether to own a car. For most interviewees, home and work locations became 'locked in' place because of a strong desire for stability (although a few participants moved to more stable living locations at the start of the pandemic). In the longer term, some interviewees expressed a preference for larger and more rural housing, though these wishes were not universal and whether they translate into reality is still uncertain. Although it was not expressed in these interviews, other research suggests that the recession caused by the pandemic is already increasing housing stress and instability, especially among renters (Raynor and Panza 2020); people under housing stress are unlikely to be able to act on a desire for spacious housing. Rather, it appears that the longer-term impacts of COVID-19 may be driven more strongly by uneven changes to life stage transitions. The majority of interviewees were relatively stable, with few major disruptions to their lives; their progression through life stages was largely untouched and long-term changes to their mobility choices are likely to be driven by subtle shifts in preferences for home location, or for working from home, that play out in the future. In contrast, other interviewees faced a major disruption to their life course trajectory, which charted a different pathway to mobility changes. For those who were 'Accelerated' through their life plans (generally because plans to live overseas were cancelled), life milestones such as starting a career or purchasing a home have been moved forward. Based on past research about the impact of life stage on travel behaviour (Kitamura 2009; Busch-Geertsema and Lanzendorf 2017), this is likely to accelerate car dependence, especially if home and work locations are chosen while the pandemic is still active. Conversely, for those who have lost work because of the pandemic, this financial disruption is likely to delay life milestones and restrict choices of home and work locations. The delay in life milestones, in turn, may postpone the shift towards car dependence. Moreover, if income is unstable or uncertain, it is difficult to act on a desire for a larger home; when unemployment is high there are fewer job locations to choose from. And many of the jobs that have been lost (such as retail and hospitality) cannot be conducted from home, reinforcing the need for jobs and housing to be co-located. Policy implications and future research directions As qualitative research, this study has its limitations. Recruiting from an existing survey panel meant that the study could be conducted quickly, targeting young adults who are likely to feel considerable impacts of the pandemic due to their stage in life. However, this also means that it may disproportionately recruit people in a more stable life situation, compared to young adults who had ceased participating in the panel survey.
For this reason, the size of the response typologies should not be used to represent the proportion of the population in Stable, Delayed or Accelerated life segments. Furthermore, the long-term travel behaviour impacts discussed in this paper should be considered informed speculation. There is still much that is unknown about the impact of COVID-19: how often cities will face additional virus 'waves' and lock-downs, how extensive international travel restrictions will be, the depth of the economic recession, and how long it will take until a viable vaccine is developed and distributed. But at this stage it appears safe to say that long-term processes such as housing decisions and life stage transitions will be disrupted, at least for some portion of young adults. This suggests several policy implications and areas for future research, discussed below. COVID-19 as an opportunity for travel behaviour change In a 'business as usual' situation, the dominant mobility regime is strongly entrenched by socio-technical regimes: technology, policy, governance and habits (Geels 2012). An extensive body of work has demonstrated the capacity for travel behaviour change in response to significant disruptions (Marsden et al., 2020). The COVID-19 pandemic is a dramatic example of a shifting landscape that disrupts the dominant regime and opens a potential window for change. This presents an unprecedented opportunity to encourage the widespread adoption of more sustainable travel choices in a post-pandemic world. However, the pandemic has already resulted in more favourable attitudes to private transport (Beck and Hensher 2020; de Haas et al., 2020). If significant changes to policies and infrastructure do not occur, it is likely that the dominant regime of car dependence will re-crystallise as the pandemic eases. In this study, the Delayed group faced a significant economic disruption (generally a job loss) which constrained their ability to choose where to live and how to travel. For this group, low-rent housing close to job opportunities will not only help them economically but can also support sustainable transport choices. Many cities are already transforming their transport systems in ways that will benefit these young adults. Successful examples include investments in active transport infrastructure, such as reallocating road space to walking and cycling (Barbarossa 2020; De Vos 2020) and reducing speed limits to favour 'slower' modes (Katrakazas et al., 2020). Furthermore, ensuring that public transit is frequent and reliable (to reduce overcrowding) can make it a safe and viable option for those who cannot afford a car (De Vos 2020). In contrast, young adults in the Stable and the Accelerated groups are the most at risk of quickly returning to car dependence. Prior to the pandemic, the Accelerated group may have delayed this transition because of plans to live and work overseas; instead, they are more rapidly moving to a stable career and home ownership in Australia. Stable and Accelerated respondents were also more likely to be interested in moving farther from work in order to enjoy a larger home and outdoor space. If urban living is to remain attractive to these young adults, long-term policies should aim to better accommodate the needs of households as they move through life stage milestones, purchase homes and start families. For instance, increasing the supply of family-friendly housing, outdoor space, quality schools and childcare in urban areas may help curb a potential shift to the suburbs (McLaren 2016).
Furthermore, encouraging businesses to support work-from-home practices after the pandemic may help reduce travel demand. Although these policies are likely to reduce peak-hour travel demand, there is still a great deal of disagreement in the literature over whether the reduction in commute travel is compensated for by increases in local travel (see, for example, Zhu 2012; Shabanpour et al., 2018; Hook et al., 2020). And if these work-from-home policies encourage more people to relocate to more distant suburban homes, this has flow-on effects in how planners manage increased local travel demand in more auto-oriented housing locations. Recent research on the travel behaviour of American millennials found that the majority of young adults in suburban areas were heavily reliant on driving for their daily travel. COVID-19 has significantly disrupted the transport landscape in the short term. Although much is still uncertain, times of change also provide the opportunity for cities to re-think the future of their transport systems. Future research areas This study opens up possibilities for future research using quantitative methods. It identified that not all young adults are facing the same impacts from COVID-19. Future survey research might measure the extent to which COVID-19 has impacted different areas of young adults' lives. In turn, this information could be used to estimate future demands on the transport system or design policies to support young adults facing disproportionate impacts. CRediT authorship contribution statement Alexa Delbosc: Research conception, research design, data collection, interpretation, paper writing. Laura McCarthy: Research design, recruitment, lead data collection, interpretation, paper writing.
Antigiardial Activity of Acetylsalicylic Acid Is Associated with Overexpression of HSP70 and Membrane Transporters Giardia lamblia is a flagellated protozoan responsible for giardiasis, a worldwide diarrheal disease. The adverse effects of the pharmacological treatments and the appearance of drug resistance have increased the rate of therapeutic failures. In the search for alternative therapeutics, drug repositioning has become a popular strategy. Acetylsalicylic acid (ASA) exhibits diverse biological activities through multiple mechanisms. However, the full spectrum of its activities is incompletely understood. In this study, we show that ASA displayed direct antigiardial activity and affected the adhesion and growth of trophozoites in a time-dose-dependent manner. Electron microscopy images revealed remarkable morphological alterations in the membrane, ventral disk, and caudal region. Using mass spectrometry and real-time quantitative reverse transcription PCR (qRT-PCR), we identified that ASA induced the overexpression of heat shock protein 70 (HSP70). ASA also induced a significant increase in the expression of five ATP-binding cassette (ABC) transporters (giABC, giABCP, giMDRP, giMRPL and giMDRAP1). Additionally, we found low toxicity on Caco-2 cells. Taken together, these results suggest an important role of HSPs and ABC drug transporters in contributing to stress tolerance and protecting cells from ASA-induced stress. Introduction Giardia lamblia is a ubiquitous protozoan that colonizes the human upper small intestine, causing an acute and chronic diarrheal disease worldwide, giardiasis. The most commonly used medications for the treatment of this disease are metronidazole (MTZ), tinidazole, nitazoxanide (NTX) and albendazole (ABZ) [1][2][3]. Even though current therapies have proven to be useful, all of them present variable efficacies and adverse side effects. Additionally, drug resistance is an increasing concern [4,5], so much so that the search for new safe and effective treatments continues to be a very important issue in experimental and clinical research. The discovery and development of new drugs are long (10-15 years), complex and expensive processes, and the success rate is only 2.01%. In this regard, drug repurposing is a highly efficient strategy to find new indications for existing approved drugs that have already passed preclinical and clinical stages [6,7], and it stands out as an attractive way to find alternative antigiardial treatments. In this context, acetylsalicylic acid (ASA) has emerged as an interesting option for repurposing drugs against parasites. ASA may also be used for secondary prevention of stroke and acute cardiac events. ASA acts on the two cyclooxygenase isoforms (COX-1 and COX-2) via acetylation of serine 532 in the active site, inhibiting both enzymes and preventing the formation of prostaglandins from arachidonic acid [8]. Epidemiological and clinical studies showed that ASA also reduces the incidence of epithelial tumors by the acetylation of multiple proteins including transcription factors, cytoskeleton proteins, stress response proteins (including the heat shock proteins, HSPs), membrane proteins, among others [9][10][11], suggesting additional mechanisms and molecular targets. Some studies have provided evidence of antiparasitic activity. Aaina and Sushma (2010) showed that ASA improved the antifilarial activity of diethylcarbamazine [12]. In Entamoeba histolytica, it affects the dynamics of the actin cytoskeleton, decreasing amebic movement [13].
On the other hand, several studies have shown that ASA modifies the expression of distinct multidrug-resistance genes, interfering with their transporter activity and generating diversity in the multidrug-resistant phenotype [14][15][16]. ATP-binding cassette (ABC) transporters constitute one of the largest families of integral membrane proteins. In protozoa, they mediate ATP-dependent transport of a wide variety of chemotherapeutic drugs, away from their targets inside the parasites [17]. Another important molecule, HSP70, plays a crucial role in tissue defense mechanisms due to its chaperone activity [18]. In this work, we demonstrated that exposure of Giardia to ASA affected the growth and adhesion of trophozoites. Microscopy images revealed dramatic changes in membrane and cell morphology. Remarkably, ASA activity was associated with the modulation of HSP70 and the overexpression of five ABC transporters. ASA Inhibits the Growth, Adhesion and Cell Viability of Giardia lamblia Trophozoites The effect of ASA on parasite growth was kinetically determined. A reduction in parasite number was observed 12 h after the assay was initiated (Figure 1A), and the IC50 value was 0.29 mM at 24 h (Table 1). The maximal inhibitory effects were observed after 48 h of incubation. ASA at 0.25 mM decreased cell growth by 73.9%, whereas treatment with 0.5 and 1 mM decreased cell growth by 90.4% and 98.6%, respectively (Figure 1B), suggesting a time-dose-dependent effect. All assayed ASA concentrations promoted the detachment of trophozoites; parasites showed weak cell surface adhesion depending on dose and time. Adherence was reduced to 20%, 35%, 44% and 76% with ASA 0.125, 0.25, 0.5 and 1 mM, respectively, after 48 h of incubation (Figure 1C). Finally, viable parasites after exposure to ASA were determined by the trypan blue exclusion test. ASA at 0.125, 0.25, 0.5 or 1 mM inhibited cell viability 12 h after the culture started (by 5%, 5%, 19% and 36%, respectively). The most dramatic effect was observed with 0.5 and 1 mM at 48 h; only 42% and 24% of trophozoites were viable (Figure 1D). The positive control with 1 µM MTZ inhibited trophozoite growth by 24%, 45.7% and 76.5% at 12, 24 and 48 h, respectively. DMSO-treated cells did not exhibit any significant differences compared with untreated cells. ASA Alters the Morphology of Giardia lamblia Trophozoites Analyses of ultrastructural changes in trophozoites after 48 h of treatment were performed using scanning electron microscopy. The micrographs revealed untreated and DMSO-treated trophozoites with normal morphology; the ventral disc, flagella and ventrolateral flange showed no alterations (Figure 2A). Tubulin Expression Is Not Modified by ASA Treatment To address whether the morphological alterations caused by ASA might be associated with microtubule destabilization, levels of soluble and polymerized tubulin were detected by Western blotting in parasites exposed to the drug for 24 and 48 h. The results showed similar tubulin expression levels; neither drug concentration nor exposure time influenced the soluble and insoluble tubulin fractions (Figure 3A,B).
When data were expressed as a percentage of polymerized tubulin, a slight decrease of protein in the soluble fraction was observed with 0.125 mM of ASA; however, this was not significant (Figure 3C,D). HSP70 Is Associated with ASA Damage in Giardia Following electrophoretic separation of Giardia extracts, the whole protein pattern after ASA treatment was almost identical to that of the controls, except for one protein, ranging between 55 and 70 kDa, that was more abundant with increasing concentrations of ASA. The latter was most evident with 0.5 and 1 mM at 48 h (Figure 4A,B). To determine the identity of this protein, it was processed by tryptic digestion and analyzed by UPLC-TOF-MS mass spectrometry. Figure 5 shows the obtained mass spectrum of the tryptic peptides. Eleven proteins were identified with more than 96% reliability; among them, 8 corresponded to uncharacterized proteins. The cytosolic HSP70, phosphomannomutase 2, arginine deiminase and BiP had the highest sequence coverages of 49.4%, 35.9%, 31.4% and 30.1%, respectively. The other identified proteins with lower coverage were the 21.1 protein, ATPase, HSP90, glucose-6-phosphate isomerase and 2 hypothetical proteins (Table 2). ASA Induced HSP70 Overexpression in Giardia lamblia Trophozoites To determine whether ASA regulates hsp70 expression in Giardia, parasites were exposed to 0.5 mM ASA or DMSO, the diluent, for 24 and 48 h, and the expression of hsp70 was analyzed by SYBR Green real-time quantitative RT-PCR. After ASA treatment, the HSP70 mRNA level began to increase after 24 h, reaching a nearly sixfold maximum expression level at 48 h compared with the DMSO control (Figure 6).
Figure 6. Relative-quantitative RT-PCR assay for hsp70. Total RNA was obtained from parasites exposed to DMSO or 0.5 mM of ASA for 24 h and 48 h (* p < 0.02, ** p < 0.0001). ASA Modifies the Expression Level of ABC/MDR Transporter Genes In mammalian cells, it has been demonstrated that some multidrug resistance (MDR) transporters play a critical role in cell death induced by ASA treatment [14,16]. Several works suggest that heat shock factor 1 (HSF1) induces multidrug resistance; it has been related to MDR1 expression and to the establishment of P-gp transporters, suggesting that HSPs and MDRs could be regulated simultaneously by a different mechanism [19][20][21][22]. To determine whether the stress response generated in Giardia by treatment with ASA is also related to MDR gene overexpression, we first performed a BLAST analysis in the Giardia genome database using human P-gp/MDR1, MRP4 and ABCG2 mRNA sequences [14][15][16]. Based on bit score, sequence identity and e-value, five sequences were identified: ABC transporter family protein (giABC), ABC transporter, putative (giABCP), multidrug resistance ABC transporter ATP-binding and permease protein (giMDRP), MRP-like ABC transporter (giMRPL) and multidrug resistance-associated protein 1 (giMDRAP1) (Table 3). Relative-quantitative RT-PCR analysis demonstrated that all five ABC transporter genes were differentially overexpressed upon ASA treatment (Figure 7). Interestingly, mRNA expression levels of giABC, giABCP, giMDRP and giMRPL in the first 12 h were significantly higher than in the DMSO control. Subsequent measurements revealed no differences between the MDRs and DMSO at 24 h, followed by significantly increased levels at 48 h (Figure 7A-D). At this time, giMDRP reached a maximum expression level nearly twofold that observed with DMSO, followed by giMRPL (1-fold), giABCP (0.7-fold) and giABC (0.3-fold). For giMDRAP1, a tendency of the expression level to increase over time was observed (Figure 7E). Figure 7. Asterisks indicate statistically significant differences between the DMSO control and treatment groups (* p < 0.006, ** p < 0.0001). In (E), the giMDRAP1 transcript amount was significantly lower than the DMSO control (indicated with &, p < 0.004).
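The relative expression values reported here come from the 2^(−ΔΔCt) method described in the Methods, with shippo1 as the reference gene. The following is a minimal sketch of that calculation in Python; the Ct values are hypothetical, chosen only so the output lands near the reported ~sixfold hsp70 induction at 48 h.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^(-ddCt) relative expression.

    ct_target / ct_ref: Ct values of the gene of interest and the
    reference gene (here shippo1) in the treated (ASA) sample.
    ct_target_ctrl / ct_ref_ctrl: the same Ct values in the control
    (DMSO) sample.
    """
    d_ct_treated = ct_target - ct_ref            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for illustration only (not from the paper):
# hsp70 amplifies ~2.6 cycles earlier relative to shippo1 after ASA,
# which corresponds to roughly the sixfold induction reported at 48 h.
re = relative_expression(ct_target=22.4, ct_ref=20.0,
                         ct_target_ctrl=25.0, ct_ref_ctrl=20.0)
print(f"relative expression ~ {re:.1f}-fold")  # ~6.1-fold
```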
ASA Is Partially Cytotoxic to Caco-2 Cells at High Doses The cytotoxicity of ASA to human cells was evaluated to support its potential use against Giardia lamblia. Caco-2 cells were treated with ASA (0.125, 0.25, 0.5 or 1 mM) for 24 h and, to evaluate viability, an MTT assay was used. Figure 8 shows that ASA at 0.125 and 0.25 mM was not cytotoxic to Caco-2 cells, whereas 0.5 and 1 mM of ASA reduced viability (by 24.2% and 37.7%, respectively, * p < 0.05); the CC50 value was 2.32 mM (Table 1). Discussion Giardiasis is a globally distributed diarrheal disease. There are six classes of compounds whose efficacies are well studied and that have been approved for the treatment of this infection [23,24]. However, due to undesirable side effects, resistance and an alarming increase of cases refractory to first-line treatments [4,5,25], the search for new alternatives with greater efficacy is still important. Drug repurposing of an approved drug is among the most advantageous strategies to identify new therapeutic alternatives against parasitic protozoan diseases. Acetylsalicylic acid, commonly named Aspirin, is one of the best documented medicines in the world and is one of the most used drugs of all time. It is the most prescribed anti-inflammatory and analgesic-antipyretic agent [26]. In addition, it has been widely studied to demonstrate new beneficial biological properties as an anticancer, antibacterial or antiparasitic compound [13,[27][28][29][30].
In this study, the effect of ASA against the parasite G. lamblia was demonstrated, with an IC50 of 0.29 mM and a selectivity index (SI) of 8 (Table 1); the SI follows directly from the ratio CC50/IC50 = 2.32/0.29 ≈ 8. This result indicates a degree of cytotoxic selectivity against Giardia with minimal host-cell toxicity; however, its therapeutic use as an antigiardial candidate is unclear, since the SI value is low compared to drugs of therapeutic use [31][32][33]. Even though the effective doses of ASA against Giardia are higher than those of MTZ, it is important to consider that they are different classes of drugs. Additionally, depending on the source, ASA doses range from 75 to 650 mg (approximately 0.4-3.6 mM) for pain relief or cardiovascular disease. As an antirheumatic drug, ASA can be used at higher doses, in a range of 3.3 to 24.9 mM/day (http://www.arthritis.org), indicating that it is very often a well-tolerated drug [34,35]; thus, ASA shows credible promise as an alternative antigiardial treatment. After ASA treatment, SEM images revealed dramatic alterations of the cell membrane and cytoskeletal structures, which can be associated with the decrease in adhesion capacity (Figure 1D). Given that microtubules are an essential part of the Giardia cytoskeleton [36], soluble and insoluble tubulin was determined by Western blot using a monoclonal anti-α-tubulin antibody, revealing that ASA does not affect tubulin dynamics. According to different investigations, ASA locally perturbs lipid bilayers in a concentration-dependent manner by primarily interacting with lipid head groups and cholesterol, thereby inhibiting raft formation [37,38]. Considering that lipids and fatty acids play an important role in regulating the growth and encystation of Giardia and that cholesterol is the only sterol present in trophozoites [39,40], our results suggest that ASA kills the parasite principally by altering the composition of the parasite membrane and causing loss of its integrity, as observed in the SEM micrographs (Figure 2). On the other hand, prokaryotic and eukaryotic cells respond to potentially harmful stimuli by inducing the synthesis of stress proteins, the heat shock proteins (HSPs), and other metabolites [41]. Using UPLC-TOF-MS mass spectrometry, we identified HSP70, among other stress-related proteins (Table 2). A variety of works have reported that the overexpression of these proteins correlates with resistance to apoptosis induced by a wide range of stimuli [18]. Additional studies are necessary to assess the possible involvement of HSP70 in Giardia apoptosis. Additionally, in Giardia, one of the mechanisms related to drug resistance seems to act through ABC membrane transporters [17]. Giardia strains resistant to MTZ showed overexpression of different ABC transporters, as well as negative regulation of others, suggesting a possible role of ABC transporters in protecting parasites against this drug [42]. Previous reports have shown that ASA has activity on the human P-gp/MDR1, MRP4 and ABCG2 genes [14][15][16]. The search for homologous sequences in GiardiaDB revealed five ABC transporters with 25% to 35% identity to the query sequences. A time-dependent change was observed in the expression of all the transporters. In contrast to that of giMDRAP1, mRNA expression of giABC, giABCP, giMDRP and giMRPL showed a significant increase at 12 h, but returned to normal levels at 24 h. These changes in ABC transporter expression might be due to morphological changes in the cell membrane caused by ASA.
The transporters provide a defense against ASA treatment but eventually fail to afford protection during trophozoite proliferation. This is the first study to provide evidence that ASA kills Giardia lamblia trophozoites and that, as in mammalian cells, the effect of the drug is accompanied by variations in the mRNA expression of important molecules such as ABC transporters and HSP70, necessary for defense against cellular stress. Additional studies are required to establish its mechanism of action. Materials and Methods Giardia lamblia Trophozoite Culture Trophozoites of Giardia lamblia (WB clone C6) were grown axenically at 37 °C in borosilicate culture tubes containing Diamond's TYI-S-33 medium, pH 7.0 (supplemented with 0.5 mg/mL of bovine bile and 10% fetal bovine serum) [43]. Cultures were maintained by subculturing the cells twice a week. Growth Inhibition Assay To evaluate the effect of ASA on Giardia lamblia growth, 10,000 parasites/mL were grown in TYI-S-33 medium containing 0.125, 0.25, 0.5 or 1 mM of ASA (Sigma-Aldrich, Saint Louis, MO, USA), and incubated for 12, 24 and 48 h. Untreated cells and the drug diluent, 0.09% dimethyl sulfoxide (DMSO, Sigma-Aldrich, Saint Louis, MO, USA), were used as negative controls, while 1 µM of metronidazole (MTZ) was used as a positive control. After the incubation periods, the cells were harvested by cooling and counted using a Neubauer chamber. To determine if ASA could affect the pH level, pH was monitored after cell counting, showing a slight increase from 7.01 to 7.05. The percentage of parasite growth inhibition was calculated in relation to the negative control, which was defined as 100% parasite growth. Cell Viability Assay A dye exclusion test was used to determine the viability of trophozoites after DMSO or ASA treatment: about 10 µL of culture was mixed with 10 µL of 0.4% trypan blue (Gibco-BRL, Gaithersburg, MD, USA). The total number of parasites (including those which had excluded the dye) was counted in a Neubauer chamber, and cell viability was calculated as the percentage of viable cells in the samples relative to untreated cells. Adherence Assay To evaluate the effect of ASA on adherence, 10,000 parasites/mL were grown at the concentrations and times described above. After the incubation periods, the medium containing nonadherent cells was removed and kept on ice; tubes were filled with cold phosphate-buffered saline (PBS) and placed on ice for 30 min to detach the adherent trophozoites. The numbers of adherent and nonadherent trophozoites were determined by counting in a Neubauer chamber. The effect on adherence was expressed as the percentage of adhered trophozoites in relation to the total number of cells, and the results obtained were compared with control cultures. Scanning Electron Microscopy (SEM) To analyze the morphology of trophozoites after ASA or DMSO treatment, parasites were ice-chilled for 20 min, harvested by centrifugation for 10 min at 1973× g (4 °C), washed with PBS, fixed for 1 h with 2.5% glutaraldehyde (Sigma-Aldrich, Saint Louis, MO, USA) in PBS and adhered to polyethylenimine-coated coverslips (Sigma-Aldrich, Saint Louis, MO, USA). The fixed cells were washed three times with PBS and post-fixed for 1 h in 1% OsO4. Next, cells were washed with PBS, dehydrated in a graded ethanol series (50-100%) and subjected to critical point drying with CO2. Finally, cells were mounted on stainless steel holders, sputter-coated with a thin layer of gold and examined in a JEOL JSM-6510LV SEM (JEOL, Tokyo, Japan).
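The growth-inhibition, viability and adherence readouts above each reduce to a simple ratio against a control count. As an illustrative sketch only (the counts below are hypothetical, not the paper's raw data):

```python
def percent_inhibition(n_treated, n_control):
    # Negative (untreated/DMSO) control is defined as 100% growth.
    return 100.0 * (1.0 - n_treated / n_control)

def percent_viability(n_unstained, n_total):
    # Trypan blue exclusion: viable cells exclude the dye.
    return 100.0 * n_unstained / n_total

def percent_adherence(n_adherent, n_nonadherent):
    # Adhered trophozoites relative to the total cells in the tube.
    return 100.0 * n_adherent / (n_adherent + n_nonadherent)

# Hypothetical Neubauer-chamber counts for illustration:
print(percent_inhibition(n_treated=2.6e4, n_control=1.0e5))      # 74.0, cf. ~73.9% at 0.25 mM
print(percent_viability(n_unstained=4.2e4, n_total=1.0e5))       # 42.0, cf. 42% viable at 0.5 mM
print(percent_adherence(n_adherent=2.4e4, n_nonadherent=7.6e4))  # 24.0
```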
Preparation of Protein Fractions and Western Blotting G. lamblia trophozoites grown in the presence and absence of ASA under the previously mentioned conditions were used to extract the soluble (cytosolic) and insoluble (polymerized) tubulin fractions according to an earlier report [44]. Briefly, the parasites were collected by cooling and subsequent centrifugation at 1973× g for 10 min at 4 °C. The cell pellets were suspended in lysis buffer (50 mM Tris-Cl, pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100 and Complete Protease Inhibitor (Roche)) and incubated on ice for 30 min. The supernatant (soluble fraction) was obtained by centrifugation at 13,000× g at 4 °C. To collect the insoluble fraction, the remaining pellets were resuspended again in lysis buffer and kept at 4 °C. The protein concentration of each fraction was determined by Bradford assay using the Pierce Detergent Compatible Bradford Assay Kit (Thermo Scientific, Waltham, MA, USA) reagent. Readings were made at 595 nm in a microplate reader (Bio-Tek Synergy HT, Winooski, VT, USA). Soluble and polymerized tubulin were then analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) (10%) and Western blot. Samples (15 µg) were prepared by the addition of 2× Laemmli Sample Buffer (Bio-Rad) supplemented with 5% 2-mercaptoethanol and heated at 100 °C for 5 min in a StableTemp thermoblock (Cole-Parmer, Vernon Hills, IL, USA). The electrophoretic separation of the proteins was carried out at a constant voltage of 100 V for 2 h; gels were either stained with Coomassie blue or transferred to PVDF membranes (Amersham Pharmacia Biotech, Little Chalfont, UK) at 100 V for 70 min with a Transblot apparatus (Bio-Rad, Hercules, CA, USA). After transfer, the membranes were blocked for 1 h with a 5% solution of low-fat milk in PBS supplemented with 0.05% Tween-20 (PBS-T). Three washes with PBS-T were performed and the membranes were incubated for 2 h with 1/200 mouse anti-α-tubulin antibody (Invitrogen, #13-8000, Carlsbad, CA, USA). Subsequently, five washes were performed with 0.05% PBS-T and the membranes were incubated for 1 h with 1/10,000 goat anti-mouse IgG antibody coupled to horseradish peroxidase (Pierce, #31437, Waltham, MA, USA). The membranes were thoroughly washed with PBS-T and the signal was detected by chemiluminescence (ECL Immobilon Western, Millipore, #170-5060, Burlington, MA, USA); the signal was captured with the C-Digit system. Semiquantitative evaluation was performed by densitometry using Image Studio Digits Software version 5.2. Protein Extract and SDS-PAGE Trophozoites treated with DMSO or ASA (0.125, 0.25, 0.5 or 1 mM) were collected by centrifugation, suspended in PBS (supplemented with protease inhibitors) and lysed by sonication for 30 s at 130 W (3 cycles) (ultrasonic processor, Sonics & Materials Inc.). Cell debris was removed by centrifugation (10,000× g for 10 min) and protein concentration was determined by a micro-Bradford assay. Total protein extracts (15 µg) were separated by electrophoresis in 10% (w/v) SDS-PAGE at 100 V for 2 h. The gel was stained with Coomassie blue dye to reveal the protein bands. A prominent single major band (55-70 kDa) was excised and processed for mass spectrometry analysis. Tryptic Digest Protocol Subsequent to Coomassie Staining For mass spectrometry, the selected band was cut into smaller pieces, destained with a 50% methanol/5% acetic acid solution and rinsed with deionized water.
After that, the gel pieces were incubated in 100 mM ammonium bicarbonate (NH4HCO3). To reduce disulfide bonds and alkylate free cysteines, the gel pieces were incubated for 45 min at 55 °C in 50 mM dithiothreitol (DTT); this solution was then exchanged with 30 mM iodoacetamide (IAA), and the gel pieces were incubated for 30 min at room temperature in the dark. The gel pieces were once more washed with 100 mM NH4HCO3 and dehydrated with 100% acetonitrile for 10 min. The proteins in the gel pieces were digested with porcine trypsin (20 ng/µL) at 37 °C for 18 h. The resulting peptides were extracted with 50% acetonitrile (v/v) and 5% formic acid (v/v) for 30 min and vacuum dried. Dried peptides were dissolved in 1% formic acid (20 µL), desalted, and concentrated with ZipTip C18 tips. Ultra-Performance Liquid Chromatography-Time of Flight Mass Spectrometry (UPLC-TOF/MS) All chromatographic measurements were performed on a nanoACQUITY UPLC system. The peptides were analyzed on a Waters nanoACQUITY UPLC HSS T3 C18 column. Relative-Quantitative RT-PCR The mRNA expression levels of giHSP70 and the drug transporters (giABC, giABCP, giMDRP, giMRPL and giMDRAP1) were determined by relative quantification in real-time PCR (qRT-PCR). Total RNA was obtained from DMSO- or ASA- (0.125, 0.25, 0.5 or 1 mM) treated trophozoites using a Total RNA Purification Kit (Norgen Biotek, Thorold, ON, Canada). cDNA was synthesized by a reverse transcriptase reaction (Verso cDNA Synthesis Kit, Thermo Scientific™) using 1 µg of RNA and an Oligo dT primer. This kit is supplied with RT Enhancer to remove contaminating DNA, eliminating the need for DNase I treatment. Specific primers for each gene were designed and are listed in Table 4. The amplification was performed in a StepOne™ Real-Time PCR System (Applied Biosystems™, Foster City, CA, USA) using Maxima SYBR Green/ROX qPCR Master Mix (Thermo Scientific). The expression levels of the above-mentioned genes were normalized to the expression level of shippo1. A melting curve was performed to confirm the absence of contaminants and of dimer formation by the primers. The conditions for RT-qPCR were as follows: hot start at 95 °C for 10 min and 40 cycles of 95 °C for 15 s, 60 °C for 30 s and 72 °C for 30 s. The data were analyzed using StepOne v2.3 Software based on the 2^(−ΔΔCt) method (RE = 2^(−ΔΔCt), relative expression). Table 4. Specific primers to assess the effect of ASA on the expression level of HSP70 and MDRs by qRT-PCR (columns: GenBank accession number, gene, primer 5'-3'; first entry XM_001707918.1, HSP70). Caco-2 Cell Culture The cells were cultured at 37 °C in Dulbecco's modified Eagle's medium (DMEM), supplemented with 10% fetal bovine serum (FBS; Invitrogen), in a humidified atmosphere (5% CO2 and 95% air). For routine maintenance, cells were split twice a week by detachment with 0.25% Trypsin-0.025% EDTA (Sigma-Aldrich, Saint Louis, MO, USA) and re-seeded in 25 cm2 flasks at a split ratio of 1:4. For experiments, the number of Caco-2 cells per well was estimated by counting the cells in a Neubauer hemocytometer chamber. Cell Viability (MTT Assay) Caco-2 cell viability was evaluated by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) tetrazolium reduction assay. The MTT assay was performed with slight modifications as previously described [45]. Briefly, the cells were seeded in 96-well cell culture plates at a density of 5000 cells per well and incubated at 37 °C for 48 h. Then cells were treated with 0.09% DMSO or ASA (0.125, 0.25, 0.5 or 1 mM) for 24 h.
After the incubation period, the medium was removed, 100 µL of MTT reagent (0.8 mg/mL in serum-free medium) was added to each well, and the cells were incubated at 37 °C for 4 h in a 5% CO2-95% air atmosphere. Next, the medium was replaced with DMSO (150 µL) to dissolve the formazan crystals and the absorbance was measured at 570 nm with a microplate reader (Bio-Tek Synergy HT). Statistical Analysis All experiments were performed in triplicate. Data were analyzed by ANOVA and post hoc Dunnett's multiple comparisons test. p values of ≤0.05 were considered statistically significant compared with untreated cells. Error bars in graphs indicate standard deviations for the experiments. IC50 values were calculated by nonlinear regression. GraphPad Prism version 6.01 for Windows (GraphPad Software, La Jolla, CA, USA) was employed.
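For the nonlinear-regression step the authors used GraphPad Prism; a rough Python equivalent with a Hill-type dose-response model is sketched below. The dose-response values are hypothetical, picked only so the fitted IC50 comes out near the reported 0.29 mM; a CC50 could be fitted the same way from the MTT viability data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_curve(dose, ic50, hill):
    # % growth relative to the untreated control, assuming 100% growth
    # at zero dose and complete inhibition at saturating dose.
    return 100.0 / (1.0 + (dose / ic50) ** hill)

# Hypothetical % growth at each ASA dose (mM); illustrative values only.
dose = np.array([0.125, 0.25, 0.5, 1.0])
growth = np.array([75.0, 55.0, 25.0, 8.0])

(ic50, hill), _ = curve_fit(hill_curve, dose, growth, p0=[0.3, 1.5])
print(f"fitted IC50 ~ {ic50:.2f} mM, Hill slope ~ {hill:.1f}")
# The selectivity index then follows directly: SI = CC50 / IC50 (2.32/0.29 ~ 8).
```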
Downregulation of Heat Shock Protein 70 Impairs Osteogenic and Chondrogenic Differentiation in Human Mesenchymal Stem Cells Human mesenchymal stem cells (hMSCs) show promise for bone and cartilage regeneration. Our previous studies demonstrated that hMSCs with periodic mild heating had enhanced osteogenic and chondrogenic differentiation with significantly upregulated heat shock protein 70 (HSP70). However, the role of HSP70 in adult tissue regeneration is not well studied. Here, we revealed an essential regulatory mechanism of HSP70 in osteogenesis and chondrogenesis using adult hMSCs stably transfected with specific shRNAs to knock down HSP70. Periodic heating at 39 °C was applied to hMSCs for up to 26 days. HSP70 knockdown resulted in significant reductions of alkaline phosphatase activity, calcium deposition, and gene expression of Runx2 and Osterix during osteogenesis. In addition, knockdown of HSP70 led to significant decreases of collagens II and X during chondrogenesis. Thus, downregulation of HSP70 impaired hMSC osteogenic and chondrogenic differentiation as well as the enhancement of these processes by thermal treatment. Taken together, these findings suggest a putative mechanism of thermally enhanced bone and cartilage formation and underscore the importance of HSP70 in adult bone and cartilage differentiation. Bone marrow-derived human mesenchymal stem cells (hMSCs) have shown great potential for tissue-engineering applications. This is due to their ability to differentiate into different tissue types such as bone, cartilage, adipose tissue, and muscle 1 . Our own previous studies revealed that periodic heat shock at 41 °C enhanced osteogenic 2 and chondrogenic differentiation of hMSCs 3 . We also observed that heat shock protein 70 (HSP70), a highly conserved protein family member in mammalian cells, was significantly upregulated by heat shock in differentiated hMSCs 2,3 , indicating a potential role of HSP70 in hMSC bone and cartilage differentiation. HSP70 plays an important role during development and repair under normal cellular homeostasis and stress conditions via its molecular chaperone functions 4,5 . Most of the heat shock proteins are constitutively expressed in animals and humans. The stress-inducible HSP70 is rapidly synthesized in response to many physiological (e.g. hyperthermia, energy depletion, inflammation, exercise, and differentiation) and pathological (e.g. hypoxia, ischemia, acidosis, viral infection, and reactive oxygen species) conditions 6,7 . The cellular protection function of HSP70 induced by mild heating provides thermo-tolerance, protecting cells in culture and animals from extreme heat-induced cellular damage 8,9 . In the case of tissue injury, HSP70 serves as a modulating molecule for immune and inflammatory responses. For example, macrophages with accumulated HSP70 had decreased secretion of the inflammatory cytokines TNF-α and IL-1 10 , and HSP70 enabled cells to have a high tolerance to inflammatory cytokines. Accumulating evidence also indicates that HSP70 plays specific roles in cell differentiation and tissue development. One study on erythropoiesis showed that HSP70 bound to apoptosis-inducing factor (AIF) to block AIF-induced apoptosis, thus enabling the differentiation of erythroblasts 11 . Another study showed an increase of HSP70 protein expression in the cerebral hemispheres and kidney throughout the postnatal development of rats 12 .
A study of endochondral bone development in mouse embryos correlated HSP expression with different stages of bone and cartilage differentiation 13 . Another study linked HSP70 to differentiation of mammalian osteoblasts 14 . However, the role of HSP70 in bone and cartilage tissue engineering using adult stem cells, especially during heat-enhanced bone formation, has not been explored. Results Periodic exposure of hMSCs to mild heat shock at 39 °C upregulated HSP70 expression and enhanced osteogenic and chondrogenic differentiation. As shown in Fig. 1A, mild heat shock at 39 °C for one hour followed by 16 hours of incubation at 37 °C increased the inducible HSP70 expression in hMSCs in growth medium. Effects of periodic mild heating at 39 °C for one hour once every two days on hMSC chondrogenic differentiation at Day 17 are illustrated in Fig. 1B, with upregulated type II collagen in pellet cultures using hMSCs transfected with scrambled shRNA. For osteogenic differentiation of hMSCs transfected with scrambled shRNA, Fig. 1C and D show characteristic osteogenesis indicated by upregulation of ALP activity and calcium deposition in cultures with differentiation medium. Furthermore, the same heating pattern also significantly enhanced ALP activity at Day 6 (Fig. 1C) and calcium deposition (Fig. 1D) in differentiated hMSCs with the scrambled shRNA. These differentiation data are consistent with our previous reports 2,3 , indicating that osteogenic and chondrogenic functions of hMSCs as well as heat-enhancing effects on both types of differentiation were not altered by non-specific shRNA transfection. Figure 1. (A) HSP70 expression (full blots in Figure S2) and density quantification of bands, respectively; (B) immunohistochemical staining of collagen II in 3D pellets at Day 17 during chondrogenesis; (C) ALP activity at Day 6 and 12; and (D) calcium deposition at Day 19 and 26 during osteogenesis. One enzyme Unit of ALP is the amount of enzyme which releases 1 nmol p-nitrophenol per 15 minutes. Each sample had 10^4 seeded hMSCs and N = 4. HS: heat shock; S: scrambled; Chon: chondrogenic differentiation. *P < 0.05; **P < 0.01. HSP70 shRNA specifically knocked down HSP70 expression in hMSCs and confirmation of hMSC surface markers after HSP70 knockdown. To investigate the role of HSP70 in hMSC osteogenic and chondrogenic differentiation, an shRNA-based gene silencing strategy was employed to knock down HSP70 expression in hMSCs. The efficiency of HSP70 knockdown using shRNAs was confirmed in heated hMSCs at both the mRNA and protein levels (Fig. 2A and B). RT-PCR showed that one out of six shRNAs (shRNA5, targeting HSPA1B) dramatically downregulated HSP70 expression in hMSCs (Fig. 2A). Densitometry analysis confirmed that the expression of HSP70 mRNA was reduced by more than 70% in heated hMSCs transfected with shRNA5 compared with those transfected with a scrambled shRNA. Consistent with the reduction of the mRNA level in heated hMSCs, HSP70 protein expression in these samples decreased to 50% of the control treated with scrambled shRNA (Fig. 2B). In this study, passage 3 hMSCs were used for HSP70 gene silencing or the control cultures with shRNA5 or scrambled shRNA, respectively, and passage 4 hMSCs were used for osteogenic and chondrogenic differentiation. To examine whether HSP70 knockdown changed the surface marker expression of hMSCs, we performed flow cytometric analysis using hMSCs at passage 4.
In Fig. 2C and Table 1, flow cytometric analysis shows that isolated hMSCs of passage 4 were positive for the surface markers CD29, CD44, and CD147, and negative for CD34 and CD45. Furthermore, there was no significant difference in the expression of these surface markers between hMSCs with HSP70 knockdown and the control (Table 1).

Figure 2. (A) RT-PCR results (full gels in Figure S3) and (B) Western blot (WB) results (full WB membrane in Figure S4) with band density quantification of heated hMSCs transfected with a scrambled shRNA (control) and HSP70 shRNAs #1 to #6; (C) Representative dot plots of flow cytometric analysis of surface marker expressions of hMSCs transfected with scrambled shRNA (top row) and HSP70 shRNA#5 (bottom row).

The proliferation of hMSCs after HSP70 knockdown was also studied. During hMSC osteogenic differentiation, cells were harvested on Days 6, 12, 19, and 26 and samples were analyzed for dsDNA content. As shown in Fig. 2D, there was a slow increase in dsDNA during the early differentiation stage, which plateaued on Day 19. Figure 2D also shows that there was no significant difference in dsDNA content between samples in growth and osteogenic conditions on the same days. In addition, hMSCs with HSP70 knockdown had a similar dsDNA content during osteogenic differentiation as the control samples with scrambled shRNA. Furthermore, mild heating at 39 °C for one hour once every other day did not affect dsDNA content in samples of different culture conditions on the same culture days (Fig. 2D). Thus, HSP70 knockdown did not affect MSC proliferation. These data from hMSCs stably transfected with HSP70 shRNA demonstrate that knockdown of HSP70 did not change the proliferation of hMSCs or their stemness in terms of MSC surface marker expression.

Expression of HSP70 in hMSCs with HSP70 knockdown during osteogenesis and chondrogenesis. To confirm HSP70 knockdown efficiency in differentiated hMSCs during osteogenesis, HSP70 mRNA expression levels were assessed by real-time RT-PCR. In Fig. 3A-D, real-time RT-PCR analysis on Days 6, 12, 19, and 26 of osteogenic differentiation showed a significant reduction of HSP70 expression in hMSCs stably transfected with HSP70 shRNA5 compared with controls transfected with scrambled shRNA. Interestingly, periodic mild heat shock did induce slightly more HSP70 expression in heated than non-heated hMSCs with HSP70 knockdown at all four time points during osteogenic differentiation (Fig. 3A-D). This may be explained by an incomplete knockdown of HSP70 in hMSCs by shRNA5. As shown in Fig. 3E, pellets with the scrambled shRNA had increased staining of HSP70 in heated compared to non-heated conditions on Days 17 and 24 during chondrogenesis. Meanwhile, in Fig. 3E, pellets with HSP70 knockdown ("HSP70 KD") had much lower HSP70 expression compared to controls with scrambled shRNA on both days, and mild heat shock slightly rescued the dramatic reduction of HSP70 expression in the knockdown samples. However, the slightly rescued HSP70 expression could not reach the same high level of HSP70 expression after heat shock in the control pellets with scrambled shRNA. Taken together, these results confirm that HSP70 was efficiently knocked down in hMSCs undergoing chondrogenesis in pellet culture, and that periodic mild heat shock slightly rescued HSP70 expression in HSP70 knockdown pellets, likely because shRNA5 did not completely abrogate HSP70 expression.
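The knockdown efficiencies quoted above (>70% at the mRNA level, ~50% at the protein level) come from band densitometry normalized to actin, as described in the Methods. A toy calculation illustrates the arithmetic; all band intensities below are made up for illustration only:

```python
# Hypothetical band densities (arbitrary units); knockdown efficiency is computed
# from HSP70 band intensity normalized to the actin loading control.
hsp70_scrambled, actin_scrambled = 1850.0, 2100.0  # scrambled shRNA (control) lane
hsp70_shrna5, actin_shrna5 = 410.0, 2050.0         # HSP70 shRNA5 lane

norm_control = hsp70_scrambled / actin_scrambled   # ~0.88
norm_knockdown = hsp70_shrna5 / actin_shrna5       # ~0.20
reduction = 1 - norm_knockdown / norm_control
print(f"HSP70 expression reduced by {reduction:.0%}")  # ~77%, i.e. >70% knockdown
```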
Periodic mild heating slightly recovered the diminished ALP activity at Day 6 and calcium deposition at Days 19 and 26 in hMSCs with HSP70 knockdown, as shown in Fig. 4A and B (p < 0.05). However, the slight recovery of ALP and calcium from heating in HSP70 knockdown samples could not compensate for the reduced expression of osteogenic markers due to HSP70 knockdown. For example, the mean ALP activity of 45 units from the heated HSP70 knockdown samples is much lower than the 99 units from the non-heated samples with scrambled shRNA (Fig. 4A, Day 6). Similarly, there was significantly less calcium (83.11 μg) in heated HSP70 knockdown samples than in non-heated controls (173.78 μg) (Fig. 4B, Day 19). The complete results of ALP activities and calcium deposition for growth and osteogenic conditions with and without HSP70 knockdown are included in Tables S1 and S2 in the supplementary materials. We further examined the expression of the osteogenesis-specific genes Runx2 and Osterix during MSC osteogenesis. Figure 4C and D show the results of Runx2 and Osterix gene expression assessed by real-time RT-PCR on Days 12, 19, and 26 during osteogenic differentiation. As expected and shown in Fig. 4C and D, undifferentiated hMSCs showed very low expression levels of Runx2 and Osterix compared with the differentiated hMSCs, while both Runx2 and Osterix expression gradually increased in osteogenic samples from early- to late-stage differentiation. The Day 12 data in Fig. 4C and D show that there was no significant difference in the expression of Runx2 and Osterix between hMSCs with HSP70 knockdown and control hMSCs (i.e. hMSCs with the scrambled shRNA). However, HSP70 knockdown significantly reduced Runx2 expression on Days 19 and 26.

Downregulation of chondrogenic markers in hMSC pellet culture after HSP70 knockdown. 3D pellet culture was performed for chondrogenic differentiation of hMSCs with and without HSP70 knockdown. Pellets were exposed to mild heat shock at 39 °C for one hour every other day. The pellet samples were examined for collagen II, collagen X, and aggrecan expression by immunohistochemistry at Days 17 and 24 during chondrogenesis (Fig. 5). Table 2 includes quantified results of the immunohistochemical staining intensity of collagens II and X and aggrecan. The mild heating at 39 °C upregulated the expression of the chondrogenic markers collagen II and aggrecan, and the hypertrophic cartilage marker collagen X, in chondrogenic pellets with scrambled shRNA at Days 17 and 24 (Fig. 5A, B and C, comparisons between panels a and b). HSP70 knockdown reduced collagen II and X expression dramatically in chondrogenic pellets (Fig. 5A and B, comparisons between panels a and c, and b and d; Table 2). Similar to the osteogenic conditions, there was a slight increase in collagen II and X expression in heated HSP70 knockdown pellets that might be caused by the partial knockdown of HSP70 (Fig. 5A and B, comparisons between panels c and d). Interestingly, HSP70 knockdown did not result in a dramatic decrease of aggrecan expression in non-heated and heated conditions (Fig. 5C, comparisons between panels a and c, and b and d). Additionally, heating itself could lead to a significant increase of aggrecan expression even though the samples had the same ~70% HSP70 knockdown (Fig. 5C, comparisons between panels c and d).

Discussion

Bone can regenerate itself in vivo, but large bone defects and slow bone growth remain obstacles. Heat shock proteins and their distribution have been observed during bone development in mouse embryos 13 .
Mild periodic heat stimulation facilitated and enhanced mesenchymal stem cell differentiation toward bone and cartilage lineages 2,3 . Here, our mechanistic study revealed that HSP70 plays an essential role in osteogenesis and chondrogenesis of human mesenchymal stem cells (hMSCs) as well as in the thermally induced enhancement of osteo-chondrogenic differentiation. Our results demonstrated that inhibition of HSP70 with HSP70 shRNAs led to a significant reduction in the expression of osteogenic markers (e.g. ALP activity, calcium deposition, Runx2 and Osterix), chondrogenic markers (e.g. collagen II), and hypertrophic cartilage markers (e.g. type X collagen) during osteogenic and chondrogenic differentiation of hMSCs, respectively. Mild periodic heat shock to hMSCs with downregulated HSP70 could not restore normal osteogenic and chondrogenic differentiation. Owing to the potential cytotoxic and off-target effects of chemical inhibitors, we used target-specific HSP70 shRNAs to knock down HSP70 expression. The data from hMSCs with scrambled shRNA indicate that osteogenic functions of hMSCs were not affected by shRNA transfection during differentiation. Furthermore, HSP70 silencing of up to ~70% did not affect hMSC proliferation or hMSC surface marker expression. In contrast, studies from Jaattela et al. showed that high expression of HSP70 was required for the survival of glioblastoma, breast, and colon carcinoma cells 24,25 . They showed that RNA interference-based knockdown of HSP70 resulted in massive cell death in various cancer cell lines such as MDA-MB-468, LoVo-36, U373MG, and PC-3 24 . Interestingly, Jaattela et al. also found that the survival of noncancerous breast epithelial cells or fetal fibroblasts was not affected by the inhibition of HSP70, in spite of the massive death of human breast cancer cells 26,27 . However, other studies reported that individual silencing of HSP70 had no significant effect on proliferation and apoptosis of HCT116 and A2780 cancer cells 28,29 . We speculate that the differences in the extent of cell death and proliferation observed across these HSP70-depletion studies are due to inherent differences in the cell types used. It is well known that primary healthy cells have lower growth rates than cancer cells. The fast and abnormal metabolic reactions in cancer cells may cause more protein aggregation than in healthy cells, leading to a higher demand for HSP70 chaperones in cancer cells. To verify the specificity of the HSP70 shRNA, the expression of HSP27 and HSP90 mRNA was examined in hMSCs transfected with HSP70 shRNAs and the control plasmid. The HSP70 shRNA used in this study had no effect on HSP27 and HSP90 expression in hMSCs, as indicated by the lack of difference in the expression of HSP27 and HSP90 between hMSCs knocked down with HSP70 shRNA and hMSCs infected with scrambled shRNA during osteogenic differentiation on Days 12 and 26 (see Figure S1 in the Supplement). In this study, we first confirmed heat-enhanced osteogenesis and chondrogenesis using hMSCs transfected with scrambled shRNAs and periodically heated at 39 °C once every other day, consistent with our previous studies using a heating protocol of 41 °C once a week in hMSCs without shRNA transfection 2,3 . Furthermore, a direct and essential role of HSP70 in MSC osteogenesis and chondrogenesis was revealed even though the expression level of HSP70 in hMSCs during osteogenic and chondrogenic differentiation is low.
The need for more chaperone proteins (e.g. HSP70) is most likely caused by faster biochemical reactions, higher protein synthesis levels, and more protein aggregation during stem cell differentiation 30,31 compared with the relative equilibrium state of slow stem cell growth. However, the exact signaling pathways through which HSP70 interacts with MSC osteogenesis and chondrogenesis are not clear. It is widely known that bone morphogenetic proteins (BMPs), members of the TGF-β superfamily, promote osteogenic differentiation, indicated by increased expression of osteoblast-specific markers such as ALP, calcium deposition, Osterix, and Runx2 after BMP stimulation 32,33 . Involvement of BMPs in the induction of chondrogenesis in primary cultures of articular chondrocytes has been demonstrated in vitro, showing that BMPs increased the expression of collagen II 34,35 , collagen X 35 , and aggrecan 36 . Our previous studies revealed that periodic heat shock actually enhanced BMP2 expression during the late stages of osteogenic differentiation of hMSCs in 2D culture and in 3D culture on PuraMatrix peptide hydrogels 2 . The signaling pathways associated with thermal enhancement of MSC differentiation and HSP70 involvement in MSC osteogenesis and chondrogenesis are likely more complicated than the action of BMP2 alone. For example, it was reported that BMP2 activated mitogen-activated protein kinase components, such as p38 and Erk1/2, during osteoblast differentiation, while phosphorylated p38 and Erk1/2 also mediate BMP2-induced Osterix expression 37,38 . More in-depth investigations are needed to unveil the exact signaling pathways that drive HSP70-mediated MSC osteogenesis with or without thermal enhancement. In vitro chondrogenesis of hMSCs provides a useful model for studying cellular differentiation into the chondrogenic lineage using a high-cell-density pellet culture system 39 . HSP70 knockdown in hMSCs by shRNAs resulted in a significant reduction of collagens II and X in heated and non-heated chondrogenic pellets during chondrogenesis. Collagen type II is a major component of hyaline cartilage, and collagen X is considered a marker of hypertrophic chondrocyte differentiation in articular cartilage. Kubo et al. and Tonomura et al. investigated the role of HSP70 in chondrocytes and revealed HSP70 as a positive mediator in the protection of chondrocytes against extreme heat stress via inhibition of NO-induced apoptosis and promotion of the metabolic activity of chondrocytes 40,41 . Supporting our findings, it was shown that HSP70 was elevated in rabbit articular chondrocytes concomitant with enhanced expression of collagen II and proteoglycan core protein after low-energy microwave stimulation 42 . Direct upregulation of HSP70 via a vector in a human chondrocyte-like cell line (HCS-2/8) resulted in increased mRNA expression of proteoglycan core protein 43 . Aggrecan is the most abundant structural proteoglycan in articular cartilage and plays an important role in mediating chondrocyte-chondrocyte and chondrocyte-matrix interactions 44 . Interestingly, our data showed that HSP70 knockdown did not significantly affect aggrecan expression in chondrogenic pellets. This discrepancy might indicate that HSP70 is not the only regulatory molecule that directs aggrecan core protein expression. More investigations into how HSP70 interacts with collagens II and X but not aggrecan during MSC chondrogenesis will help elucidate the specific mechanisms of HSP70-mediated thermal enhancement of chondrogenesis.
Overall, this study revealed a direct and positive regulation by HSP70 of hMSC osteogenic and chondrogenic differentiation in vitro. Further work is underway to use a vector to upregulate HSP70 expression without heat shock and to examine whether HSP70 upregulation alone can enhance hMSC osteogenic and chondrogenic differentiation. There is potential to develop new therapeutic strategies (e.g. ultrasound heating) using heat shock or heat shock proteins for accelerated bone regeneration via MSC osteogenesis and chondrogenesis, to speed recovery from trauma or pathological/physiological bone loss.

Methods

Confirmation of Gene Silencing by Western Blotting and Reverse Transcription PCR. Human MSCs transfected with HSP70 shRNAs were harvested 14-16 hours after mild heating at 39 °C for one hour. Protein gel blotting analyses were performed as previously described 2 . Briefly, cells were lysed in lysis buffer (150 mM NaCl, 1% NP-40, 0.5% deoxycholic acid, 0.1% SDS, 50 mM Tris pH 8.0) containing protease inhibitor cocktails (Sigma, St. Louis, MO). Proteins were separated by SDS-polyacrylamide gel electrophoresis, transferred to a PVDF membrane (Bio-Rad, Hercules, CA), and immunoblotted with primary antibodies against HSP70 and actin, followed by incubation with a horseradish peroxidase (HRP)-conjugated secondary antibody. A tetramethylbenzidine (TMB) substrate kit (Vector Laboratories, Burlingame, CA) was used to visualize the protein bands. Membranes were dried and scanned into digital images. Protein bands were analyzed quantitatively using Image Pro Plus software (Media Cybernetics). Protein expression levels were presented as ratios of the total intensity of the protein bands normalized by that of the actin band to minimize protein loading variation. For RT-PCR, briefly, total RNA from hMSC pellets was isolated using the RNeasy Kit (Qiagen), followed by reverse transcription with SuperScript III RT (Invitrogen). Taq DNA polymerase (Invitrogen) was used in the PCR cycles, and β-actin was used as a control. Primers for human HSP70 and β-actin are CCATGGTGCTGACCAAGATGAAG (forward)/CACCAGCGTCAATGGAGAGAACC (reverse), and CCAGAGCAAGAGAGGCATCC (forward)/CTGTGGTGGTGAAGCTGTAG (reverse), respectively 46 .

Human MSC Osteogenic Differentiation. Ten thousand (10⁴) cells at passage 4 were seeded per well in 24-well plates. Osteogenic differentiation was induced the next day (Day 0) after seeding using osteogenic medium consisting of growth medium with 50 µM ascorbic acid phosphate (Wako Chemicals USA, Richmond, VA), 10 mM β-glycerophosphate, and 0.1 µM dexamethasone.

Heat Exposure. Human MSCs were exposed to mild heating at 39 °C for one hour once every other day starting on Day 1 in an incubator with 5% CO2 and humidified air. After heating, the medium was changed to avoid any possible loss of water, and the cells were returned to the 37 °C incubator. The medium change was also performed for the non-heated culture samples.

dsDNA Quantification. dsDNA content was determined on Days 6, 12, 19, and 26 during differentiation using the Quant-iT PicoGreen dsDNA reagent kit (Invitrogen) following the manufacturer's instructions. Briefly, 0.5% Triton X-100 was used to lyse cells to release DNA. The cell lysate was incubated with pepsin overnight at 4 °C. Samples were neutralized with pH 8.0 Tris buffer and incubated with PicoGreen working solution for 5 minutes at room temperature.
The fluorescence signals were measured using a SpectraMax M2e microplate reader (Molecular Devices) at excitation and emission wavelengths of 480 nm and 520 nm, respectively.

Alkaline Phosphatase Assay. ALP activity was measured on Days 6 and 12 during differentiation as previously described 2 . Briefly, cells were lysed with 0.5% Triton X-100. Samples were incubated with 1.5 M alkaline buffer solution and phosphatase substrate solution for 15 minutes at 37 °C. After incubation, 1 N NaOH was added to stop the reaction. The absorbance of the p-nitrophenol product was measured at 405 nm using the SpectraMax M2e microplate reader. One enzyme unit of ALP is defined as the quantity of enzyme which produces 1 nmol p-nitrophenol per 15 minutes.

Calcium Quantification. Calcium deposition was quantified on Days 19 and 26 during differentiation as previously described 2 . Briefly, samples were collected using 0.5 N HCl, and calcium was extracted by shaking the samples for 4 hours at 4 °C. Sample calcium concentrations were measured using the Stanbio Laboratory Calcium Liquicolor kit (Fisher Scientific).

Gene Expression Measured by Real-time Quantitative RT-PCR. Real-time RT-PCR was performed for HSP70, Runx2, Osterix, and GAPDH using SYBR Green RT-PCR Master Mix (Life Technologies). Real-time quantitative RT-PCR was performed using the ABI Prism 7300 Thermocycler (Applied Biosystems) following the manufacturer's instructions. The thermal profile for real-time PCR was 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. Relative gene expression levels from real-time PCR experiments were assessed using the 2^−ΔΔCt method 47 . Primers for human Runx2, Osterix, and GAPDH are AAATCGCCAGGCTTCATA (forward)/CTGCCAGGAGTGGTCAAA (reverse); CCTGCGACTGCCCTAATT (forward)/GCGAAGCCTTGCCATACA (reverse); and GGATTTGGTCGTATTGGG (forward)/GGAAGATGGTGATGGGATT (reverse), respectively 2 .

Histology and Immunohistochemistry (IHC). Chondrogenic pellets were harvested for histological analyses on Days 17 and 24 during differentiation. Immunohistochemistry for HSP70, collagen type II, collagen type X, and aggrecan was performed on chondrogenic pellets as previously described 3 .

Statistical analysis. Data are reported as mean ± standard deviation with a sample number of four for each condition. Student's t-tests were applied to analyze surface marker expression from FACS and dsDNA data for proliferation of hMSCs. ANOVA with post hoc Holm-Sidak tests was performed for all other experimental data. A P value of less than 0.05 was considered statistically significant.

Data Availability. The datasets generated and/or analyzed during the current study are available from the corresponding author on request.
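To make the relative quantification concrete, here is a short worked example of the 2^−ΔΔCt calculation described above; the Ct values are hypothetical and chosen only to illustrate the arithmetic:

```python
# Worked example of the 2^(-ΔΔCt) method for relative gene expression.
# ΔCt = Ct(target) - Ct(reference gene); ΔΔCt = ΔCt(treated) - ΔCt(control).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: HSP70 vs. GAPDH, knockdown vs. scrambled control.
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 2^-(8 - 6) = 0.25, i.e. a 75% reduction
```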
2018-04-03T00:00:38.028Z
2018-01-11T00:00:00.000
{ "year": 2018, "sha1": "047bdeee2d948795bb3819d9b829fc9cd118d317", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-18541-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "780f871255ed17fd5574f2c8032cd7af3e73317e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
256627675
pes2o/s2orc
v3-fos-license
Towards Meaningful Anomaly Detection: The Effect of Counterfactual Explanations on the Investigation of Anomalies in Multivariate Time Series

Detecting rare events is essential in various fields, e.g., in cyber security or maintenance. Often, human experts are supported by anomaly detection systems, as continuously monitoring the data is an error-prone and tedious task. However, among the anomalies detected may be events that are rare, e.g., a planned shutdown of a machine, but are not the actual event of interest, e.g., breakdowns of a machine. Therefore, human experts are needed to validate whether the detected anomalies are relevant. We propose to support this anomaly investigation by providing explanations of the anomaly detection. Related work only focuses on the technical implementation of explainable anomaly detection and neglects the subsequent human anomaly investigation. To address this research gap, we conduct a behavioral experiment using records of taxi rides in New York City as a testbed. Participants are asked to differentiate extreme weather events from other anomalous events such as holidays or sporting events. Our results show that providing counterfactual explanations does improve the investigation of anomalies, indicating potential for explainable anomaly detection in general.

INTRODUCTION

Detecting rare events is essential in various domains [10,25]. For example, in manufacturing, engineers want to find early indicators of machine failures that would allow them to conduct maintenance preventively. In cyber security, experts aim to find security breaches and attacks. At the same time, monitoring continuous data streams is very challenging even for human experts [33,55], primarily due to the vast amounts of data (in terms of granularity and variability) that need to be analyzed. For this reason, system designers build so-called anomaly detection systems (ADS) that aim to support human experts in identifying anomalies. Recently, more and more of these systems are based on machine learning (ML) [5,26], with the autoencoder being the most commonly applied method [14]. However, anomaly detection in general, and autoencoders in particular, cannot perform fully automated detection of relevant anomalies. This is because ADS can only find anomalies, not the specific events that domain experts are interested in. Validating whether an identified anomaly is relevant is a challenging process that requires close collaboration between human experts and ADS. In manufacturing, for example, engineers are not interested in every anomaly in the production process but only in those that might indicate machine failure. Figure 1 highlights this distinction between relevant and non-relevant anomalies. To perform anomaly investigation, domain experts often need to investigate hundreds of different features [42], not all of which are relevant to a specific anomaly. For example, in maintenance, a typical production line registers thousands of measurements per second. To improve the accuracy of this classification process (the anomaly investigation), experts need support in boiling down the vast amount of interrelated data to the relevant parts. It becomes evident that the anomaly detection problem is actually twofold: detecting anomalies in the data and investigating them afterward. While the first process step is well researched, the second lags behind [17,51,64].
Even though it may not be possible to automatically investigate anomalies due to missing labels in many use cases, we hypothesize that ADS can still provide valuable information that improves the human experts' classification. More precisely, we hypothesize that explanations of the anomaly detection may support the subsequent classification task. In the manufacturing example, the ADS could highlight which sensor signals led to the detection of anomalies. This highlighting potentially simplifies the investigation of the anomaly by reducing the number of sensors that must be investigated. A review of related literature reveals that while there are certainly studies focusing on explainable ADS [18,62], none empirically evaluate whether these explanations support the subsequent anomaly investigation. Therefore, we derive the following research question:

Research Question: Can explanations of anomaly detection improve the anomaly investigation?

To answer the research question, we conduct a behavioral experiment. In this work, we focus on detecting and investigating anomalies in multivariate time series, a data type with many anomaly detection use cases [39,43]. As the basis of our ADS, we have chosen an autoencoder, as it is one of the most common approaches [43]. As previous research has pointed out, the most suitable type of explanation in the presence of multivariate time series might be counterfactual explanations [4]. Counterfactual explanations are as similar as possible to the sample explained while having a different classification label; i.e., "if the values of these particular time series were different in the given sample, the classification label would have been different." Other types of explanations exist, e.g., feature-based methods that highlight features relevant to the decision or example-based explanations that give an exemplary sample from the same class. In the setting of time series, Ates et al. [4] argue that counterfactual explanations are superior to the previously mentioned explanation types, as for feature importance, experts need to have knowledge of normal value ranges. Regarding example-based explanations, they argue that showing examples is not sufficient to understand the underlying patterns. We follow their thought process and focus on counterfactual explanations. To summarize, we focus on counterfactual explanations for autoencoders in multivariate time series to investigate our research question. Standard anomaly detection benchmark datasets for multivariate time series, e.g., from manufacturing [58] or cyber security [21], have the drawback that they are too complex for behavioral online experiments. Therefore, based on a set of requirements, we searched for a suitable dataset and chose the New York City taxi trips. We use these recordings as a testbed and design an autoencoder that can identify different events by their substantial impact on the local taxi industry. As a subsequent classification task, we ask participants to differentiate extreme weather events from other events. We conduct a behavioral experiment with 64 participants to answer our research question. We find that counterfactual explanations indeed improve the anomaly investigation. To our knowledge, we are the first study that empirically investigates the effect of explanations of anomaly detection on anomaly investigation. By validating this potential, we motivate novel use cases based on anomaly detection that could have a major impact on how ADS are approached in the future.
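To make the notion of a counterfactual for a multivariate window concrete, the following toy sketch searches for one by swapping in "normal" reference series one feature at a time. It is purely illustrative and is not the CoMTE implementation used later; the anomaly_score function, the threshold, and the reference window are assumed inputs:

```python
def greedy_counterfactual(window, reference, anomaly_score, threshold, max_swaps=4):
    """window, reference: NumPy arrays of shape (timesteps, features);
    anomaly_score: callable mapping a window to a scalar anomaly score."""
    cf, changed = window.copy(), []
    for _ in range(max_swaps):
        if anomaly_score(cf) <= threshold:
            break  # counterfactual found: the window is no longer flagged
        # Try swapping each untouched feature for its reference series and
        # keep the swap that lowers the anomaly score the most.
        candidates = {}
        for j in range(window.shape[1]):
            if j in changed:
                continue
            trial = cf.copy()
            trial[:, j] = reference[:, j]
            candidates[j] = anomaly_score(trial)
        if not candidates:
            break
        j_best = min(candidates, key=candidates.get)
        cf[:, j_best] = reference[:, j_best]
        changed.append(j_best)
    return cf, changed  # minimally modified window and the features that were changed
```

A real method such as CoMTE additionally optimizes which "distractor" sample the replacement series are drawn from; the sketch above only conveys the principle of changing as few features as possible to flip the label.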
Next, we introduce the fundamentals of our work in Section 2. Following that, in Section 3, we introduce our dataset, the design of our explainable autoencoder, and our experimental design. In Section 4, we present the results of our experiment. In Section 5 we discuss our results, and in Section 6 we conclude.

RELATED WORK

In this section, we introduce the fundamentals of our work and provide an overview of related work. First, we introduce foundations of anomalies, anomaly detection, investigation, and explainable AI. Then, we introduce the related work that covers explainable autoencoder-based anomaly detection in multivariate time series.

Anomaly Definition. First, it is imperative to define the term "anomaly" to establish a common ground. An anomaly is essentially a data point or a sequence of data points with substantial deviations from the majority of data points [28,32]. The term anomaly does not describe a specific event but rather a property of those events. Such events are described as "unusual", "rare", and simply not "normal" [28]. Anomalies can be categorized into different types: Point anomalies are the most trivial to find, as these anomalies are only single points located outside the normal value range. Next, contextual anomalies can consist of sequences and can only be identified as anomalous in comparison to different points with the same context. The most complex type is the collective anomaly. Collective anomalies always span over sequences and only gradually show a different pattern compared to normal data. Individual values within this type of anomaly may seem ordinary and only collectively raise suspicion [12]. In most use cases where anomalies are to be detected, the ultimate goal is not to detect any anomaly, but particular ones [42,63]. Therefore, most anomaly detection use cases essentially boil down to a rare event classification. For example, in manufacturing, the goal of operators may be to detect early indicators of machinery failures. However, not only these early indicators but also other events deviate from the "normal" operation, e.g., planned shut-downs. This means only a subset of the anomalies is actually of interest. To classify those rare events of interest, knowledge-based systems were used in the past, i.e., systems that explicitly store the knowledge of experts to detect the events [64]. However, experts have a limited, more global view, and as data size grows, it becomes harder for experts to explain deviations in values and their effects [64]. Moreover, acquiring their knowledge is a time-consuming and challenging task [64]. For this reason, more and more ADS were developed. However, ADS cannot classify the detected anomalies based on their relevancy. This means a human anomaly investigation is still imperative [63]. In this work, we focus on improving this anomaly investigation with explanations of the anomaly detection.

Anomaly Detection. Anomaly detection approaches consist of either classification (e.g., isolation forest), nearest-neighbor (e.g., distance-based), compression-based (e.g., autoencoder), or clustering methods [46]. Further, anomaly detection can be categorized into three different classes [47]. Supervised anomaly detection aims to build a classifier model that learns from a labeled training dataset. Here, the training dataset contains labels for normal and anomalous instances. In real-world settings, it may be challenging to create such datasets due to anomalies being rare events and models requiring vast amounts of data.
Next, semi-supervised anomaly detection requires only a training dataset with instances labeled as normal. Accordingly, any instance different from the normal class is classified as an anomaly. Finally, unsupervised anomaly detection is the most common type of anomaly detection, as it does not require any labels in the training dataset, with autoencoders being one of the most powerful model classes. While there are numerous methods to perform anomaly detection, the scope of this work is related to deep anomaly detection due to its superior performance [14]. Deep anomaly detection describes the application of deep learning to the anomaly detection task. Autoencoders are the most frequently used deep anomaly detection method [14]. This architecture was already introduced in the 1980s [60] and attempts to compress the input data and then reconstruct it with as little information loss as possible [7]. The most basic structure of an autoencoder contains an encoder that generates a compressed representation of the input data and a decoder that aims to reconstruct the input data from the compressed representation [8]. Not all of the information can be stored in the so-called bottleneck layer, so the model must learn statistical patterns in the training data [9]. These lower-dimensional representations are the latent space of an autoencoder [19]. For time series anomaly detection, autoencoders are usually equipped with LSTM layers that can capture temporal dependencies [43]. To decide whether a sample is anomalous, the autoencoder's reconstruction error is used: if the reconstruction error of a sample exceeds a certain threshold, the sample is labeled as an anomaly. The threshold can be tuned manually or set by a certain percentage of the highest errors.

Anomaly Investigation. Only a few articles focus on anomaly investigation [42,61]. Anomaly investigation can happen on multiple levels based on the necessary and available data [61]. It can either be conducted based on the same data used for the anomaly detection or by taking additional data into account. This work focuses on use cases where the same data is used. Most of the existing work deals with data visualization to improve anomaly investigation [61,67]. Xue and Yan [67] develop an ADS for detecting and analyzing anomalies in cloud computing performance. They provide rich visualization and interaction designs to help understand the anomalies in a spatial and temporal context. Soldani and Brogi [61] improve the process of detecting and investigating anomalies in time series data in industrial contexts. To do so, they characterize six required design elements and develop a visual ADS to support this process. They argue that future work should investigate how explanations can improve anomaly investigation.

Explainable Anomaly Detection. Explanations are required to understand how specific predictions are generated [69]. On an abstract level, approaches can be divided into local and global explanations [4]. Global explanations focus on the entire dataset [35], whereas local explanations refer to individual observations [54]. By reviewing related work, it becomes apparent that there are many implementations of explainable anomaly detection [18,62]. However, in most cases, the models were the only aspect evaluated quantitatively with metrics. Overall, to the best of our knowledge, no study has ever empirically evaluated whether the explanations of the anomaly detection also provide a benefit for anomaly investigation.
Therefore, we argue that this work's topic is highly relevant.

Explainable Autoencoder-based Anomaly Detection in Multivariate Time Series. Within the context of multivariate time series, a lack of explainable AI approaches can be observed, while, simultaneously, analytics for these time series are increasing in popularity [4]. Counterfactuals are a promising explainability technique for time series [11]. While there are many counterfactual approaches in various domains, the multivariate time series domain remains mostly uncovered [29]. The work of Ates et al. [4] provides the only known framework for counterfactual explanations in time series classification. As their approach is model-agnostic, they only require class probabilities as the model's output to create explanations. To do so, they modify the input data in a way that is as close as possible to the original input while receiving a different class label. However, not all available input features are altered; instead, only the ones with the highest deviations between the original input and the modified instance with a different label. Reducing the number of adjusted variables helps human experts, as previous research has pointed out that humans are only capable of processing four variables simultaneously [31]. A typical example of counterfactual explanations outside the domain of time series is a loan application scenario: the AI declines a person's request, stating that similar customers have also been declined. In contrast, a counterfactual statement can convey that the request would have been accepted if the person had slightly lowered the credit amount [38]. During the review of related works that implement explainable autoencoders, it becomes apparent that most works utilize some form of feature importance as an explanation technique. Several works [1,20,27] use the model's built-in reconstruction error to detect important features. However, Roelofs et al. [59] argue that this methodology is not very robust, as the reconstruction error does not always match the actual feature importance. Other work uses well-known frameworks such as SHAP or LIME to generate feature importance through a surrogate model (e.g., [36]) or even deploys multiple SHAP explanations to capture temporal and feature interactions, respectively [34]. Ha et al. [30] calculate the feature importance through SHAP by applying a flattening layer to their LSTM autoencoder. The new model uses the weights from the autoencoder and generates explanations by using Gradient SHAP. Oliveira et al. [48] design their own framework, the residual explainer, which interprets deviations of the reconstruction errors to create feature importance. In an experiment, the approach produces better results than SHAP and takes only a fraction of the time. The only work to our knowledge that uses an explanation technique besides feature importance is the work of Sulem et al. [65], who generate counterfactual explanations. To summarize, we do not find any study that empirically researched the influence of explainable anomaly detection on anomaly investigation.

METHODOLOGY

In this section, we first derive hypotheses on the impact of explanations on anomaly investigation. Next, we provide information about the data and task we use to test the hypotheses. Then, we describe the development of the autoencoder and the counterfactual explanations, together forming our explainable ADS. Finally, we present the experimental design.

Hypotheses. Anomaly investigation is often an error-prone and challenging task [37].
For example, in condition-based monitoring, experts are often left with an alarm and thousands of sensors, any of which could have led to the anomaly being detected. Explanations may have the potential to localize the anomalies, i.e., highlight the features relevant for the anomaly detection [68]. Further, in the case of counterfactual explanations, they may give context information on how typical values look. Therefore, we hypothesize:

H1: Providing explanations of the anomaly detection improves the effectiveness of the anomaly investigation.

Additionally, explanations could not just improve the performance but also make the anomaly investigation faster and improve the efficiency of the process. Therefore, we hypothesize:

H2: Providing explanations of the anomaly detection improves the efficiency of the anomaly investigation.

In addition to formalizing hypotheses on the direct effect of explanations on anomaly investigation, we discuss a first potential mediator of the effect: cognitive load [53]. Traditionally, cognitive load can be divided into intrinsic cognitive load, i.e., the difficulty of the task, and extraneous cognitive load, i.e., the visualization of the task. In this work, we focus on extraneous cognitive load, as intrinsic cognitive load cannot be changed [53]. The explanations can help boil down the potentially large number of variables to those relevant for anomaly investigation. Accordingly, fewer inter-variable effects must be considered, which leads to less extraneous cognitive load. Therefore, we formulate:

H3: Providing explanations of the anomaly detection decreases the extraneous cognitive load required for the anomaly investigation.

Following related work [24,57], we hypothesize that cognitive load negatively influences anomaly investigation performance as well as efficiency:

H4: Reduced extraneous cognitive load improves the effectiveness of the anomaly investigation.

H5: Reduced extraneous cognitive load improves the efficiency of the anomaly investigation.

Having defined these general hypotheses, in the following we introduce our task and dataset to test them.

Dataset, Task & Data Preprocessing. Dataset. We search for a suitable task and dataset by specifying a list of requirements the dataset must fulfill. The dataset must consist of multivariate time series and must include anomalies and, ideally, external information about the respective anomalies. Since the participants are non-experts, the dataset must come from a context they can understand. Based on these requirements, we evaluate several well-known multivariate benchmark datasets frequently used in anomaly detection on multivariate time series (e.g., [21,58]). All these datasets are multivariate and stem from a technical context. While these characteristics are desirable for a technical evaluation of a model, they conflict with our requirement to be easy to understand. For this reason, we picked a dataset with a more common context. One dataset that meets all these requirements is the public New York City Taxi dataset [66]. Currently, around 1 million trips are recorded every day [66]. The NYC Taxi and Limousine Commission (TLC) has made this data available to the public since 2009. Each trip record contains 19 features, e.g., information about the pick-up and drop-off time and location, the trip distance, payment types, fares, and the number of passengers. Nearly 13 years of data are available; in these years, the taxi industry has changed considerably.
Fares, the availability of cabs, and, for example, new competitors have, among other factors, influenced the collected data and represent a considerable challenge that is out of the scope of this work. We address this issue by using a shorter period of observation. Certain days, such as holidays or days with extreme weather conditions, cause considerable deviations from the usual behavioral pattern. These days are thus suitable as anomalies because they are out-of-distribution by nature while serving as ground truth at the same time [22]. For extreme weather events, ground truth can be found on the governmental extreme weather website 1 . All of the anomalies are collective anomalies, i.e., they are only anomalous when considered collectively, not as individual data points.

Task. Accordingly, we want to also provide an easy-to-understand task based on the dataset that supports the classification of the identified anomalies. Our dataset lays the ideal basis for this task as the anomalies have different classes, e.g., public holidays, extreme weather events, or other events. While it may not be possible to differentiate between such events deterministically, some frequently appearing patterns can be observed; e.g., during extreme weather, fewer people use taxis and, at the same time, the share of tips increases, while people usually give less tip on a holiday. Therefore, we provide participants with the task of classifying whether the shown anomaly is an extreme weather event or not. We employ a binary classification (extreme weather event or not) to be close to a realistic task. For example, in condition-based maintenance, the binary classification may be to discriminate faults from other anomalies.

Data Preprocessing. As the data is provided directly from the recording, it is necessary to clean and preprocess it. The goal of the preprocessing is to increase the data quality and, therefore, also the performance of the entire system [23]. We merely make basic assumptions that ensure the validity of individual recordings while not removing any anomalies the model should detect, e.g., the trip duration should be longer than zero minutes [6]. After the data cleaning, we aggregate the taxi demand hourly and perform a few preprocessing steps. To increase the comprehensibility of the dataset, we drop some of the original 19 dimensions, as they are sometimes difficult to understand and negligible for anomaly detection. Further, we create additional features that are easy to interpret and thus support the classification task. Therefore, our final dataset consists of the following features: trip count, average trip duration per mile, the proportion of tips in the total fare, average trip distance, the proportion of trips starting and ending in the city center, and, finally, the average number of passengers per trip. Lastly, we scale the data to unify the magnitude of the different features [45], as differing magnitudes can otherwise have undesired effects on the model's decision-making process [49].

Explainable Anomaly Detection System. Modeling. In the following, we present our anomaly detection modeling. Similar to related work [27,30,36], we use LSTM layers to take intertemporal and multivariate dependencies into account. To optimize our architecture, we conduct a grid search and identify the following parameters as the best combination: window size (8), step size (2), hidden dimensions (8, 6, 4), and latent space (4). We use the reconstruction error and the detection of known anomalies as the target.
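A minimal sketch of such an architecture is shown below. It is illustrative only (not our actual implementation) and assumes TensorFlow/Keras and the six input features of the preprocessed taxi dataset:

```python
# Minimal sketch of an LSTM autoencoder matching the reported configuration:
# window size 8, hidden dimensions (8, 6, 4), latent space 4, six input features.
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 8, 6

inputs = layers.Input(shape=(WINDOW, N_FEATURES))
# Encoder: stacked LSTMs compress each window into a 4-dimensional latent vector.
x = layers.LSTM(8, return_sequences=True)(inputs)
x = layers.LSTM(6, return_sequences=True)(x)
latent = layers.LSTM(4)(x)  # bottleneck (latent space)
# Decoder: expand the latent vector back into a reconstruction of the window.
x = layers.RepeatVector(WINDOW)(latent)
x = layers.LSTM(6, return_sequences=True)(x)
x = layers.LSTM(8, return_sequences=True)(x)
outputs = layers.TimeDistributed(layers.Dense(N_FEATURES))(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# Training uses the windows themselves as targets, e.g. autoencoder.fit(X, X, ...),
# where X has shape (n_windows, 8, 6); windows with a high mean reconstruction
# error are later flagged as anomalous.
```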
Our approach is to calculate the average reconstruction error of a window over all timesteps and features and compare this value to a threshold. The threshold must be optimized based on the results. The goal of this optimization is the recall of the model, meaning that all known anomalies are identified as such by the model.

Counterfactual Explanations. The standard autoencoder architecture must be extended to enable explanations by common explanation frameworks such as SHAP or CoMTE, which cannot handle the autoencoder output. This is because the autoencoder output has the same dimensions as the input data. Current XAI frameworks, however, expect outputs in the form of a classification or regression prediction. Thus, we design a new layer that manipulates the model's output to provide class probabilities [4]. To calculate the necessary class probabilities, the new layer is given a threshold value in addition to the already existing sum of the reconstruction error, as proposed in [4]. The threshold is determined by calculating the 99th percentile of the training data error. The reconstruction error is then, similar to [2,3], converted to a binary class probability by first subtracting the threshold value τ from the calculated mean error ē; the sigmoid function afterward projects that value to a range between 0 and 1:

P(anomaly) = σ(ē − τ).    (1)

The layer is appended to the model after training the autoencoder. Finally, we use the CoMTE framework to generate the counterfactual explanations [4], which serve two purposes. First, they reduce the number of features that experts need to analyze (on average, our approach changes 3.2 features). Second, they highlight how the time series should have looked to not be flagged as anomalous. Figure 3 depicts two examples of our explanations. The approach modifies four input features for the extreme weather event and three for the public holiday. Having introduced our explainable ADS, we now describe how we conduct our behavioral experiment.

Pilot Study. To get qualitative feedback on our ADS, we conduct a focus group with five experts with backgrounds in machine learning. The session lasted 30 minutes. First, we briefly introduce the basic information about this work and then focus on the core of the study. There, an anomalous window with the respective features is presented. The session is recorded and transcribed to evaluate the results more precisely in retrospect [44]. The experts argue that the only cue generated by the explanations, the distance between the two lines of the counterfactual explanation, is not enough. We observe that it is vital to provide exemplary patterns in the pre-training of the user study to ensure that participants can understand the events being classified. We argue that this also transfers to real-world cases, as domain experts also have prior knowledge within their domain, which they incorporate into the anomaly investigation.

Study Procedure. The research model is tested in an online experiment with a between-subject design. We tested two different conditions: first, a control condition in which the human receives the ADS without counterfactual explanations, and second, a counterfactual explanation (CF) condition. The study is approved by the University IRB.

Sampling Strategy. In each condition, participants are provided with eight events.
For the sampling of the eight events, we apply rules to ensure that the patterns of the underlying event are visible and to prevent participants from being able to classify events based on previously seen anomalies, e.g., the same extreme weather at two different times during a day. Therefore, we first label our anomalies based on the dates and the start and end times of extreme weather provided by the government storm website, and on public holidays 2 . For extreme weather events, we label an anomaly as extreme weather if the identified anomaly starts at most two hours before the start or at most two hours before the end of the extreme weather. Similarly, we label anomalies as a holiday if they start on the date of a public holiday. Finally, we randomly draw four extreme weather events, three holiday events, and one anomaly classified as neither. While sampling, we verify that we do not draw two anomalies from the same day.

Interface. Next, we create visualizations of the sampled anomalies (see Figure 4). Similar to Liu et al. [41], we use two views with varying information: the context view and the zoomed view. First, our context view shows past data of the last three weeks for all variables, with the anomaly being highlighted. This should support participants in understanding the behavior and interactions of the variables in non-anomalous times and thus provides context (following the requirements from the pilot study). However, we refrain from flagging additional anomalies in this period to avoid the possibility of inferring the date from the position of the anomalies, e.g., inferring New Year's Day from the preceding Christmas anomalies. Second, the zoomed view allows a detailed look at the specific time window of the anomaly. It is also the single point in which the treatments differ: while the control treatment solely receives the data during the anomaly, the zoomed view of the counterfactual treatment additionally displays explanations. To support the anomaly investigation, we provide supplemental information about the time of the anomaly to enable a better understanding of the anomaly's patterns without allowing conclusions on the specific date (e.g., Christmas). For example, while an extreme weather event may result in fewer trips, this is also true for nights on regular working days. Additionally, we argue that in real-world cases of anomaly investigation, time is a feature that is also available.

Task Flow. The online experiment is initiated with an attention control question that asks participants to state the color of grass. To control for internal validity, participants are randomly assigned to the condition groups. As multivariate time series are difficult for humans to interpret [37], we include multiple tutorials. First, both conditions receive an introduction to the task and are given examples of extreme weather events and other events. Following that, we explain the two views of our ADS and ask participants four comprehension questions. Afterward, we give a short tutorial on how participants can detect extreme weather events, followed by two comprehension questions. The presented event patterns are sampled based on related literature [40,56]. For the CF condition, we follow on with an explanation of counterfactual explanations. We provide the participants with a general intuition of the explanations rather than specific technical information. During the experiment, we used neither the terms AI and ML nor the term counterfactual explanations, to prevent issues of AI literacy.
Instead, we speak of ADS and expected values. Then, the participants conduct two training tasks (one extreme weather event and one holiday) to familiarize themselves with the task and, depending on the condition, with its explanations. Additionally, the participants receive feedback on the training tasks. After the two training tasks, the participants are provided with the eight main tasks. For each task, we ask them how much they agree with the statement "The anomaly is an extreme weather event" on a four-point Likert scale (strongly agree, agree, disagree, strongly disagree). This allows us to get not only a binary classification but also certainty information. After classifying the anomalies, we collect data on cognitive load and demographic variables.

Reward. To incentivize the participants, they were informed that for every correct decision, they get an additional 12 cents on top of a base payment of 6 pounds per hour. However, the two training classifications do not count toward the final evaluation.

Participant Information. The participants are recruited using the platform "Prolific.co". We note that crowd workers might limit the generalizability of our results. However, our sampling of the task should ensure that crowd workers are capable of doing the task. In total, we conducted the experiment with 66 participants (33 participants per condition). We excluded two participants in the CF condition and two participants in the control condition because they conducted the eight tasks in under one minute. Apart from the attention check, we provided participants with a total of six questions that ensure that participants understand the task and the underlying visualization, e.g., how many weeks of data are displayed in the context view. Based on these questions, we further excluded eight participants in the CF condition and nine participants in the control condition for incorrect answers. Even though this might seem like a high number of excluded participants, one needs to consider that multivariate time series anomaly detection is a very challenging task, and some crowd workers may not even understand what a time series is. In addition, we exclude all participants who fall outside 1.5 times the interquartile range. By doing so, we have excluded two outliers in the CF condition. This leaves us with 22 participants in the control group and 21 in the counterfactual group. Table 1 shows the age, gender, and education distribution of the participants.

Measures. Our first measure, effectiveness, is the accuracy of the participants in the anomaly investigation, i.e., the share of correctly classified events. To calculate this share, we first binarize the result. Due to our sampling strategy, by chance, participants would be able to achieve an accuracy of 50 percent. In addition to the accuracy, we analyze the participants' certainty by comparing the share of agreement and disagreement with the percentage of strong agreement and disagreement. Next, efficiency represents the time needed for the anomaly investigation. For both measures, we calculate the mean per participant for the global evaluation of our hypotheses. Additionally, we examine the effects of the explanation on each type of event more closely. Therefore, we also compute the mean for the four extreme and the four non-extreme weather events. Finally, we collect information about the extraneous cognitive load based on questions used by Chang et al. [16]. It describes the cognitive load put on a person by the presentation of the task.
It is influenced, e.g., by the amount of irrelevant information displayed or unnecessarily presented content. Such information distracts from the relevant contents and thus leads to a higher extraneous cognitive load, which hampers task performance [16]. To analyze our hypothesis, we compute the mean per participant and compare it between treatments.

RESULTS

In the following section, we report the results of our study. First, we provide a qualitative interpretation of typical patterns of detected anomalies that could have been observed by the participants of the experiment alike. Then, we present an analysis of the experiment's results.

Qualitative Interpretation of Detected Anomalies. As mentioned earlier, experiment participants need to classify identified anomalies into extreme weather events and other events. This classification is based on the intuition that each type of event has common patterns that are shared across anomalies. However, these patterns often do not allow a deterministic classification of anomalies. Nevertheless, in the following we qualitatively introduce and interpret certain patterns that were derived with the help of counterfactual explanations.

Extreme Weather. During extreme weather events, three variables mainly differ from regular days: the trip count is lower, the proportion of tips in the total fare increases, and the average trip distance decreases. For an example of a winter storm, see the top of Figure 3. We interpret this pattern to mean that more people stay home on stormy days and forego longer trips, such as visiting relatives or friends in other city districts. Further, people leave their homes only for urgent matters and then rely on taxis. Once they have arrived at their destination, they express their gratitude to the taxi drivers with an increased tip because of the adverse circumstances.

Public Holidays. Compared to extreme weather events, public holidays often have a distinct pattern (see the bottom of Figure 3). On holidays, the number of trips during the night is usually higher, and later in the day, the average number of passengers per trip is higher. We interpret the higher trip count during the night as people going out the night before, which results in more trips compared to regular working days. The second observation, more people sharing taxis, could reflect families visiting relatives or friends together.

Experiment Results. Effectiveness. Analyzing the results of the experiment reveals that the participants' mean accuracy is significantly higher in the CF group compared to the control group (U = 137.5, p = 0.021). Thus, we can confirm H1 and conclude that explanations improve the effectiveness of anomaly investigation. A more detailed analysis based on the type of events reveals that the effectiveness increases for both extreme weather events and non-extreme weather events. Individually applying Student's t-test to the effectiveness of each event type results in a p-value of 0.1 for extreme weather events and also a p-value of 0.1 for non-extreme weather events. While these are not significant, they still provide a tendency that the increase in effectiveness draws from both event types alike, i.e., participants were not only better at detecting extreme weather events. Additionally, Figure 5 shows that the interquartile range is lower for both events when provided with explanations.
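The group comparison above can be reproduced with a standard rank-based test. The sketch below assumes a Mann-Whitney U test (consistent with the half-integer statistic reported) and uses synthetic per-participant accuracies, since the raw data are not part of this text:

```python
# Illustrative between-group comparison of per-participant mean accuracy
# (n=22 control, n=21 counterfactual); the accuracy values below are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
acc_control = rng.normal(0.62, 0.12, 22).clip(0, 1)
acc_cf = rng.normal(0.74, 0.10, 21).clip(0, 1)

u, p = mannwhitneyu(acc_control, acc_cf, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```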
Efficiency. Contrary to H2, counterfactual explanations do not increase human efficiency in anomaly investigation in our setup. Accordingly, no effect on the time needed for the classification was observable between the CF group and the control group. Neither further distinguishing between anomaly types (extreme weather vs. non-extreme weather) nor task performance (correct vs. incorrect decision) yielded significant effects on the efficiency of the participants. We conclude that we cannot verify H2. Next, we examined whether there are differences in efficiency between the types of events. As displayed in Figure 6, the interquartile range is higher between treatments and for both kinds of anomalies considered individually. Cognitive Load. Finally, we collect information on extraneous cognitive load to investigate a potential mediator of the effect. However, we do not find significant differences between treatments (H3). This means we do not find evidence that explanations decrease cognitive load during anomaly investigation. To investigate whether cognitive load in general affects the anomaly investigation, we examine correlations between cognitive load and effectiveness/efficiency (H4/H5). As the Shapiro-Wilk test shows non-normality, we use the Spearman correlation to determine the impact of cognitive load on participants' effectiveness and efficiency. The Spearman test shows no significant correlation between task performance and cognitive load, which is why we cannot confirm H4. However, further analysis shows a significant correlation between extraneous cognitive load and task efficiency, and, accordingly, we can confirm H5.
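A sketch of the H4/H5 correlation analysis described above, using SciPy's Shapiro-Wilk and Spearman routines; the arrays are hypothetical per-participant values, not study data.

```python
# Sketch of the mediator analysis: test normality first, then fall back to
# Spearman's rank correlation, mirroring the procedure described above.
from scipy.stats import shapiro, spearmanr

cognitive_load = [3.1, 4.5, 2.2, 5.0, 3.8, 4.1, 2.9]    # load scores (hypothetical)
task_time = [52.0, 75.5, 40.2, 90.1, 66.3, 70.0, 48.8]  # mean seconds per task

_, p_norm = shapiro(cognitive_load)
if p_norm < 0.05:  # non-normal, as the Shapiro-Wilk test indicated in the study
    rho, p = spearmanr(cognitive_load, task_time)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```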
In total, our experiment highlights the potential for using counterfactual explanations to improve anomaly investigation. In the following section, we discuss our results.

DISCUSSION
In this work, we find that counterfactual explanations improve the accuracy of classifying weather events. This means human experts can transfer insights generated from an ADS to anomaly investigation (H1). In our current setup, we do not find an efficiency improvement from providing counterfactual explanations (H2). This might be because counterfactual explanations also require effort to interpret. In addition to those direct effects, we investigate a first potential mediator: cognitive load. We find a trend that explanations do not reduce the cognitive load but instead increase it (H3). This might be because counterfactual explanations are a nontrivial form of explanation that requires some cognitive effort to interpret. This means future research needs to investigate different potential mediators. Furthermore, future studies should identify the reasons for the increase in cognitive load and whether it can be mitigated, e.g., through a different form of visualization of counterfactual explanations. Implications. To the best of our knowledge, we are the first study empirically showing that explainable anomaly detection can improve anomaly investigation. This result has major implications for research and practice. Our work has implications for every use case with vast amounts of data and rare classes of interest. In previous work, these use cases were usually called disadvantageous for ML [50]. However, we argue they need a different solution approach. Instead of using supervised ML with advanced sampling strategies (e.g., SMOTE), we hypothesize that using explainable anomaly detection together with human expert-based anomaly validation, i.e., human-AI collaboration, can be superior. In addition to detecting rare events of interest, thereby acting as an alarm system, explainable anomaly detection has the potential to work as a data mining tool to generate new knowledge. Anomaly detection can find patterns previously unknown to experts [15]. Explanations could enable experts to validate those patterns and thereby generate completely new insights. Limitations. As always with behavioral experiments, there is the question of how generalizable our results are. We would like to emphasize that our research does not aim to recommend generalizable design features (e.g., the use of counterfactual explanations for time series) but rather shows that explanations can improve the investigation of anomalies per se. We further argue that showing that it works with lay workers in an online experiment highlights even more potential for experts. Still, additional work is needed to derive design recommendations and to test whether our findings hold in other domains. Furthermore, we want to discuss how realistic our chosen testbed is. For this reason, we compare the classification of taxi events with a typical use case for anomaly detection in manufacturing. Compared to our task, the number of features in manufacturing is usually even higher. However, based on the researchers' expertise, we argue that the number of features important for the classification task is usually similarly small. Future Work. A key open question is how our human-AI collaboration workflow is perceived by end users. One key criterion for anomaly detection in the past was the reduction of "false alarms", i.e., detected anomalies that are not a class of interest, to reduce the workload of experts [13,52]. However, our results show that explanations may not reduce the time needed to interpret the anomalies. This means that explanations could be perceived as burdensome. Future research needs to investigate whether experts perceive an explainable ADS as useful and will adopt it.

CONCLUSION
In this work, we address the problem of investigating anomalies regarding their relevancy. Specifically, we analyze the influence of explainable anomaly detection on anomaly investigation. We conduct a behavioral experiment and show that counterfactual explanations of autoencoder-based anomaly detection improve the investigation of anomalies in multivariate time series. We hope our results motivate researchers and practitioners to research, implement, and use explainable anomaly detection.
Disruption of Daily Rhythms by High-Fat Diet Is Reversible
In mammals, a network of circadian clocks coordinates behavior and physiology with 24-h environmental cycles. Consumption of high-fat diet disrupts this temporal coordination by advancing the phase of the liver molecular clock and altering daily rhythms of eating behavior and locomotor activity. In this study we sought to determine whether these effects of high-fat diet on circadian rhythms were reversible. We chronically fed mice high-fat diet and then returned them to low-fat chow diet. We found that the phase of the liver PERIOD2::LUCIFERASE rhythm was advanced (by 4h) and the daily rhythms of eating behavior and locomotor activity were altered for the duration of chronic high-fat diet feeding. Upon diet reversal, the eating behavior rhythm was rapidly reversed (within 2 days) and the phase of the liver clock was restored by 7 days of diet reversal. In contrast, the daily pattern of locomotor activity was not restored even after 2 weeks of diet reversal. Thus, while the circadian system is sensitive to changes in the macronutrient composition of food, the eating behavior rhythm and liver circadian clock are specifically tuned to respond to changes in diet.

Introduction
Circadian rhythms are ~24-hour rhythms of physiology and behavior that are synchronized to environmental cycles of light/dark and fasting/feeding [1]. Circadian rhythms in mammals are orchestrated by a network of clocks located throughout the body. The suprachiasmatic nucleus (SCN) in the hypothalamus receives information about the light-dark cycle and coordinates the phases of other central and peripheral clocks [2,3]. Disruption of the coordinated phase relationship between these clocks in rodents by chronic jet lag and shift work causes obesity [4,5]. Likewise, shift workers are at increased risk of obesity, the metabolic syndrome, and cardiovascular disease [6][7][8][9][10].

We recently demonstrated that eating a diet high in saturated fat for only 1 week disrupts the phase relationship between circadian clocks in male mice by advancing the phase of the liver clock by ~5h [11]. Moreover, high-fat diet markedly reduces the amplitude of the daily rhythm of eating behavior such that eating behavior is almost equally distributed across the day and night [11,12]. This low-amplitude eating rhythm during high-fat diet consumption is a determinant of obesity because correcting the rhythm (by restricting high-fat diet to the nighttime) inhibits diet-induced weight gain [13,14].

Studies in mice have demonstrated that diet-induced obesity and type 2 diabetes (hyperglycemia) caused by chronic high-fat diet consumption are partially or completely reversed by feeding mice low-fat diet [15][16][17]. In this study, we sought to determine whether the effects of chronic high-fat diet consumption on circadian rhythms were ameliorated by diet reversal.
Materials and Methods
Animals
Male C57BL/6J heterozygous PERIOD2::LUCIFERASE mice [3] (23 to 24 generations of backcrossing to C57BL/6J mice from The Jackson Laboratory) were used to analyze the phase of the liver circadian rhythm. Genotyping was performed by measuring light emission from fresh tail pieces using a luminometer. Male wild-type C57BL/6J mice from our breeding colony were used to assess eating behavior rhythms. All mice were bred and maintained in 12h light:12h dark (light intensity ~350 lux) with chow (13.5% kcal from fat, LabDiet 5L0D; 3.02 kcal/g metabolizable energy) and water provided ad libitum in the Vanderbilt University animal facility. Sentinel testing in the animal facility was performed biannually and mice were negative for all Vanderbilt excluded pathogens (excluded pathogens listed at https://www4.vanderbilt.edu/acup/dac/List_of_Excluded_Pathogens_050112.pdf; our colony is negative for mouse parvovirus). Mice were weaned at 3 weeks old and group housed (2 to 4 mice per cage). Starting at 7 weeks old, body weight and food were measured weekly (during the 3 hours before lights off).

Ethics Statement
All experiments were conducted in accordance with the guidelines of the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Mice were euthanized by cervical dislocation without anesthesia followed by decapitation. All procedures were approved by the Institutional Animal Care and Use Committee at Vanderbilt University (protocol number M/13/081).

Experiment I. Protocol for measuring liver circadian rhythm
At 7 weeks old, male heterozygous PER2::LUC mice were singly housed in cages (33 x 17 x 14 cm) with locked running wheels (running wheels were present but could not rotate) and fed chow ad libitum. The cages were housed in light-tight, ventilated boxes in 12h light:12h dark (light intensity ~200-300 lux) at 25.5±1.5°C. At 8 weeks old, mice were either fed chow for 5 weeks (Fig 1A), high-fat diet for 5 weeks (Fig 1B), or high-fat diet for 4 weeks and then chow for 1 week (Fig 1C).

Bioluminescence recording and analysis
Mice were euthanized by cervical dislocation followed by decapitation. For most experiments, mice were euthanized at the end of the light phase, within 1.5h before lights out. To determine if the phase of the liver PER2::LUC rhythm was reset by the culture procedure, some mice were euthanized at the beginning of the light phase, within 1.5h of lights on. Liver explants were cultured as previously described except that CellGro (catalog number 90-013PB plus L-glutamine) recording medium was used [18]. Liver explants were cultured on mesh (Spectra Mesh Woven Filter, 500μm opening) in 35mm dishes in the LumiCycle apparatus (Actimetrics Inc, Evanston, IL) housed in a water-jacketed incubator (temperature 36.5°C ± 0.03°C). Photon counts were integrated over 10-min intervals by the LumiCycle. Bioluminescence data were detrended by subtracting the 24-h moving average and smoothed by 0.5h adjacent averaging using LumiCycle software and then exported to ClockLab analysis software (Actimetrics Inc.). The phase of the liver PER2::LUC rhythm was defined as the peak of bioluminescence occurring between 12h and 36h in culture. To determine if the phase of the liver PER2::LUC rhythm was reset by the culture procedure, cultures were prepared at either Zeitgeber time (ZT) 1 (where ZT12 is lights off) or ZT11 (~10h apart) and the peaks of bioluminescence were plotted relative to the light-dark cycle and relative to the time of culture.
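As a concrete illustration of the analysis pipeline just described, the following is a minimal numpy sketch, assuming photon counts binned every 10 min: detrend with a 24-h moving average, smooth with 0.5-h adjacent averaging, and take the peak between 12 h and 36 h in culture as the phase. It mimics the LumiCycle/ClockLab steps in spirit and is not the Actimetrics software.

```python
# Minimal sketch of the bioluminescence phase analysis (assumed 10-min bins).
import numpy as np

def phase_of_peak(counts, dt_h=1/6):               # dt_h: 10-min bins = 1/6 h
    w24 = int(round(24 / dt_h))                    # 144 bins span 24 h
    trend = np.convolve(counts, np.ones(w24) / w24, mode="same")
    detrended = counts - trend                     # subtract 24-h moving average
    w05 = int(round(0.5 / dt_h))                   # 3 bins span 0.5 h
    smoothed = np.convolve(detrended, np.ones(w05) / w05, mode="same")
    t = np.arange(len(counts)) * dt_h              # hours in culture
    window = (t >= 12) & (t <= 36)                 # phase = peak in 12-36 h window
    return t[window][np.argmax(smoothed[window])]

# Synthetic example: a ~24-h rhythm peaking near 20 h plus a slow decaying trend.
t = np.arange(0, 72, 1/6)
counts = 1000 - 5 * t + 200 * np.cos(2 * np.pi * (t - 20) / 24)
print(f"phase = {phase_of_peak(counts):.1f} h")    # about 20 h
```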
Experiment II. Protocol for measuring eating behavior and locomotor activity
The experimental conditions were identical to those described above for measuring the liver circadian rhythm except that eating behavior and locomotor activity were simultaneously recorded in one group of wild-type male C57BL/6J mice that were fed high-fat diet for 4 weeks and then chow for 1 week (Fig 1C; n = 5).

Behavior recording and analysis
General locomotor activity data were collected every minute using passive infrared sensors (sensors record a maximum of 1 count every 6 secs; model 007.1, Visonic LTD). Double-plotted actograms of locomotor activity were created with ClockLab (10-min bins; normalized setting). Eating behavior was continuously recorded using an infrared video camera (PYLE PLCM22IR Flush Mount Rear View Camera with 0.5 lux Night Vision, Pyle Audio Inc., Brooklyn, NY) interfaced to a computer with VideoSecu4 [11]. Eating behavior was analyzed in 1-minute bins (coded as 1 for eating behavior and 0 for no eating behavior) as previously described [11]. Eating behavior data were plotted in circular histograms (Oriana 4.0; Kovach Computing Services, Wales, UK). Body weights (at 13 weeks old) and the phases of liver PER2::LUC rhythms were compared by one-way ANOVA followed by post-hoc Fisher's least significant difference (LSD) tests (OriginPro 9.1, Northampton, MA). Circular data were plotted and analyzed using Oriana 4.0. The mean vector of each day of behavior data (for individual mice) was determined by Rayleigh's uniformity test to indicate the angle (μ) and degree of clustering (vector length; r); a sketch of this computation follows below. Grand mean vectors (to analyze groups of mice) were analyzed using Hotelling's one sample test. The length of the vector describes the uniformity of the distribution of activity such that short vectors indicate that activity is more evenly distributed across the cycle. Significance was ascribed at p<0.05.

Diet-induced obesity is rapidly ameliorated by diet reversal
After 5 weeks on high-fat diet, mice weighed significantly more than chow-fed mice (Fig 2; 13 weeks old, F(2,23) = 10.01, p < 0.001, LSD p < 0.001). The body weight of diet reversal mice did not differ from chow-fed mice (LSD p = 0.24). Thus, diet-induced obesity was completely reversed by 1 week of chow feeding.

The phase of the liver circadian rhythm is restored by diet reversal
We previously showed that the phase of the liver PER2::LUC bioluminescence rhythm was advanced ~5h after 1 week of high-fat diet consumption [11]. To determine if the liver phase remained advanced during chronic high-fat diet consumption, we fed mice high-fat diet for 5 weeks and measured PER2::LUC bioluminescence rhythms in liver explants (S1 Fig). The phase of the liver was advanced 4h after 5 weeks of high-fat diet eating compared to chow-fed mice (F(2,18) = 10.51, p < 0.01, LSD p < 0.001; Fig 3). To determine if the phase of the liver clock was reversible, we fed mice high-fat diet for 4 weeks and then chow for 1 week. By 1 week of chow feeding, the phase of the PER2::LUC rhythm in liver did not differ from the liver rhythm in chow-fed mice (LSD p = 0.65). These data demonstrate that the phase of the liver rhythm was reversible even after chronic high-fat diet consumption.
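As referenced above, the circular statistics reduce each day's eating minutes to a mean vector tested with Rayleigh's uniformity test. The sketch below shows that computation (mean angle μ, vector length r, and a standard large-sample p-value approximation) on synthetic data; it mirrors Oriana's output in spirit, not its exact implementation.

```python
# Sketch of the circular statistics for the eating-behavior rhythm: each
# eating minute becomes an angle on the 24-h clock, and Rayleigh's test asks
# whether those angles cluster (synthetic data, textbook p approximation).
import numpy as np

def mean_vector(eating_minutes):
    """eating_minutes: minute-of-day indices (0..1439) where eating occurred."""
    theta = 2 * np.pi * np.asarray(eating_minutes) / 1440.0
    C, S = np.cos(theta).sum(), np.sin(theta).sum()
    n = len(theta)
    r = np.hypot(C, S) / n                       # vector length: 1 = fully clustered
    mu = np.degrees(np.arctan2(S, C)) % 360      # mean angle, degrees
    z = n * r**2
    p = np.exp(-z) * (1 + (2*z - z**2) / (4*n))  # Rayleigh p, large-sample form
    return mu, r, max(p, 0.0)                    # clamp: can underflow for large z

# Night-consolidated eating (chow-like) vs. dispersed eating (HFD-like):
night = np.random.default_rng(0).integers(720, 1440, 200)   # clustered minutes
allday = np.random.default_rng(1).integers(0, 1440, 200)    # near-uniform minutes
print(mean_vector(night))    # long vector (r ~ 0.6), vanishing p
print(mean_vector(allday))   # short vector, large p (no significant direction)
```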
To determine if the 4-h advance of the liver PER2::LUC rhythm in high-fat diet-fed mice was an artifact of the culture procedure, we prepared liver explants from chow- or high-fat diet-fed mice at either ZT1 (1h after lights on) or ZT11 (1h before lights off; S2 Fig). We found that the absolute phase of the liver PER2::LUC rhythm was advanced ~2h in cultures prepared at ZT1 vs. ZT11 in both chow- (S2A Fig) and high-fat diet-fed mice (S2B Fig; peak of PER2::LUC expression was plotted relative to the light-dark cycle). The liver clock was not reset by the culture procedure in chow- (S2C Fig) and high-fat diet-fed (S2D Fig) mice because the peak of PER2::LUC expression did not occur at a fixed interval after culture in explants prepared at ZT1 and ZT11. Together these data show that culture time can shift the absolute phase of the liver clock (by ~2h), but the ~4h advance caused by eating high-fat diet compared to eating chow was maintained in both the ZT1 and ZT11 culture conditions.

The effects of chronic high-fat diet on the eating behavior rhythm are rapidly reversed
Eating behavior was consolidated during the night in chow-fed mice, resulting in a robust eating behavior rhythm and long mean vector [11] (Fig 4 and S3 Fig, Chow: Day 7; Table 1). On the first day of high-fat diet consumption, eating behavior was distributed across the day and night, resulting in a low-amplitude or absent eating behavior rhythm. The low-amplitude rhythm was evidenced by a short mean vector and observed in activity profiles of eating behavior of individual mice (Fig 4, HFD: Day 9; S3 Fig). The effect of high-fat diet on the distribution of eating behavior persisted for the duration of chronic feeding (Fig 4 and S3 Fig, HFD: Day 35; data from each of 4 weeks of high-fat diet shown in S4 Fig). On the first day of diet reversal, eating events were consolidated during the night and the robust eating behavior rhythm (and long vector) was restored (Fig 4, S3, S5 and S6 Figs: Day 37; Table 1). By 1 week of diet reversal, the eating behavior rhythm was similar to the rhythm prior to high-fat diet consumption (Fig 4, Table 1, S3 and S6 Figs). These data show that high-fat diet consumption chronically disrupted the eating behavior rhythm, but this effect was rapidly reversed by chow feeding.

Fig 3. Male heterozygous PER2::LUC mice were fed either chow for 5 weeks (black circle), high-fat diet for 5 weeks (red circle), or high-fat diet for 4 weeks and then chow for 1 week (green circle). The mean (±SD) phases were determined from the peaks of PER2::LUC expression in liver explants during the interval between 12 and 36h in culture and were plotted relative to the time of last lights on (24h is lights on and 36h is lights off, white and black bar on top; liver cultures were prepared at ZT11). The sample size is shown (number of rhythmic tissues/number of tissues tested). *p<0.01. doi:10.1371/journal.pone.0137970.g003

The reversibility of high-fat diet disruption of locomotor activity rhythms is variable
The locomotor activity of chow-fed mice occurred mostly during the dark phase of the light-dark cycle, with several consolidated bouts of activity during the day [11] (Table 2). Beginning on the first day of high-fat diet and persisting for the duration of chronic high-fat feeding, activity in the light phase was dispersed across the day (instead of occurring in several distinct bouts). Together, these data show that the daily pattern of locomotor activity was rescued by diet reversal in some mice, but not in others.

Discussion
We previously showed that the liver circadian clock, the daily rhythm of eating behavior, and the daily pattern of locomotor activity were acutely sensitive to high-fat diet consumption [11]. However, it was unknown whether these effects persisted during chronic high-fat diet feeding and whether they were reversible.

After chronic (5 weeks) high-fat feeding, the phase of the liver molecular clock was advanced 4h. This effect was completely reversed by 1 week of chow feeding. Thus, the liver circadian rhythm is exquisitely sensitive to macronutrients, even after chronic exposure to high-fat diet. Likewise, the temporal transcriptional state of the liver is also reversible after chronic high-fat diet consumption [19].

Importantly, we found that the 4-h advance in the phase of the liver clock caused by high-fat diet consumption was not an artifact of the culture procedure. The phase of the liver PER2::LUC rhythm was advanced ~2h when the liver was cultured in the morning compared to livers cultured in the evening, but this occurred in both chow- and high-fat-fed mice. Moreover, the phases of ex vivo liver PER2::LUC rhythms cultured from chow-fed mice (at either time of day) approximated the phase of the PERIOD2 rhythm measured in vivo [20]. Therefore, our results reflect the effects of diet on the phase of the liver PER2::LUC rhythm and were not an artifact of the experimental approach.
This study also demonstrated that the phase of the liver clock correlated with body weight. The phase of the liver circadian clock was advanced during weight gain (positive energy balance) but returned to its normal phase during weight loss and weight maintenance. While it is unknown whether the change in the phase of the liver clock precedes or is a consequence of changes in energy balance, it is intriguing to speculate that the phase of the liver circadian clock contributes to the metabolic dysfunction caused by high-fat diet consumption.

The daily rhythm of eating behavior was markedly disrupted during chronic high-fat feeding. The rhythm was severely compromised (or absent) on the first day of high-fat eating, such that mouse eating behavior was almost evenly distributed across the entire 24h day [11]. In this study, we demonstrated that the effect of high-fat diet persisted during chronic high-fat diet consumption. Beginning on day 1 of high-fat diet and persisting for the entire 4-week feeding protocol, the amplitude of the eating behavior rhythm was markedly reduced or the rhythm was absent. This is consistent with the study by Kohsaka et al. (2007) that showed compromised food intake rhythms during 6 weeks of high-fat feeding. For the first time, we showed that the effects of high-fat diet consumption on the daily rhythm of eating behavior were rapidly reversed. Within 2 days of chow feeding (after 4 weeks of high-fat diet feeding), the eating behavior rhythm was similar to the rhythm prior to high-fat feeding.

Chronic high-fat diet consumption also altered the pattern of locomotor activity. By 1 day of high-fat feeding, locomotor activity became dispersed across the daytime and activity was decreased during the nighttime. Unlike eating behavior, locomotor activity was not immediately restored by diet reversal. In fact, the locomotor activity of some mice never reverted during the 2 weeks of diet reversal examined in this study. Because the locomotor activity rhythm is controlled by the SCN, it is possible that the SCN rhythm is altered by high-fat diet consumption. We think this is unlikely since in our previous study we found that the locomotor activity rhythm was altered on the first day of high-fat diet consumption, yet there was no effect on the phase, period, and amplitude of the SCN PER2::LUC rhythm [11]. Instead, we speculate that a brain region distinct from the SCN responds to high-fat diet, which results in abnormal locomotor activity patterns.
In sum, we found that high-fat diet had long-lasting effects on the liver clock, the daily rhythm of eating behavior, and the daily pattern of locomotor activity. Upon diet reversal, the eating behavior rhythm was rapidly reversed (within 2 days) and the phase of the liver clock was rescued by 7 days of diet reversal. In contrast, some characteristics of the daily pattern of locomotor activity were not restored after 2 weeks of diet reversal. Together these data demonstrate that the circadian system is sensitive to changes in the macronutrient composition of food. The presence of obesity does not inhibit the sensitivity of the liver circadian clock and of the brain region(s) controlling the daily rhythm of eating behavior (in an unidentified anatomical locus) to changes in the fat content of food, as even obese mice experienced a complete reversal of the effects of high-fat diet on these clocks. In contrast, chronic high-fat diet consumption has long-lasting effects on locomotor activity that are not quickly reversed. Thus, the liver circadian clock and eating rhythms may be ideal targets for therapeutic interventions for obesity since they are rapidly reversed even in obese animals.

Fig 1. Experimental Protocol Diagram. Male heterozygous PER2::LUC mice were single-housed in light-tight boxes (12L:12D) at 7 weeks old and fed either chow for 5 weeks (A), high-fat diet (HFD) for 5 weeks (B), or HFD for 4 weeks and then chow for 1 week (C). At 13 weeks old, bioluminescence rhythms were measured from liver explants (n = 7/group). Eating and locomotor activity rhythms were also measured in the diet reversal group (C, n = 5). doi:10.1371/journal.pone.0137970.g001

Table 1. Mean vector properties of eating behavior rhythms. *Hotelling's one sample test was used to test if there was a significant mean direction. †The 95% confidence intervals are reported (in parentheses) for the directions of the grand mean vectors.

Table 2.
Vector properties of locomotor activity rhythms in individual mice. The mean angle (μ) ± circular standard deviation (SD) and vector length (r) are reported for individual mice. *Rayleigh's Uniformity test was used to determine whether the locomotor activity of individual mice had a significant non-uniform direction (all vectors: p < 1x10^-12). doi:10.1371/journal.pone.0137970.t002
The Role of G Protein-coupled Receptor Kinases in Cancer
G protein-coupled receptors (GPCRs) are the largest family of plasma membrane receptors. Emerging evidence demonstrates that signaling through GPCRs affects numerous aspects of cancer biology such as vascular remodeling, invasion, and migration. Therefore, development of GPCR-targeted drugs could provide a new therapeutic strategy for treating a variety of cancers. G protein-coupled receptor kinases (GRKs) modulate GPCR signaling by interacting with the ligand-activated GPCR and phosphorylating its intracellular domain. This phosphorylation initiates receptor desensitization and internalization, which inhibits downstream signaling pathways related to cancer progression. GRKs can also regulate non-GPCR substrates, resulting in the modulation of a different set of pathophysiological pathways. In this review, we will discuss the role of GRKs in modulating cell signaling and cancer progression, as well as the therapeutic potential of targeting GRKs.

Introduction
More than 700 genes have been identified as G protein-coupled receptors (GPCRs), which form the largest protein superfamily in the human genome [1]. GPCRs play key roles in mediating a wide variety of physiological events, from hormonal responses to sensory modulation (vision, olfaction, and taste) [2,3]. GPCR-targeting drugs are used to treat many diseases. Over 40% of all FDA-approved drugs are aimed at targeting GPCRs or their related pathways [4]. GPCRs are now being used as early diagnosis biomarkers for cancer, as they play integral roles in regulating and activating cancer-associated signaling pathways [5,6]. Though GPCRs represent a growing share of all new anticancer therapies, "druggable" GPCRs represent only a small subset of receptors from this superfamily [5,7]. This is mainly due to drug resistance, as studies that have used short- and long-term exposure to GPCR-targeting drugs have observed receptor desensitization [5,8,9]. Therefore, the pharmacological potential of GPCRs and their downstream regulators requires further investigation in order to develop therapies that can efficiently target cell signaling pathways in cancer [6].

GPCR signaling generally results in the transmission of amplified signals throughout the cell, and hyperactivation of these receptors may result in loss of normal cell physiological properties. There are several mechanisms, including GTP hydrolysis, second messenger-related protein kinases (e.g., PKA and PKC), G protein-coupled receptor kinases (GRKs), and arrestins, that prevent hyperactivation of GPCR signaling. However, the most important mechanism of terminating GPCR hyperactivation is GRK-mediated phosphorylation. Upon ligand-dependent receptor activation, GRK phosphorylates its target GPCR to prevent excessive cellular signaling. The details of this GPCR termination mechanism will be discussed in our review. Because GRKs act as negative regulators of GPCR activity, it has been suggested that they may play a role in cancer progression by regulating cell proliferation, migration, apoptosis, invasion, and tumor vascularization in a cell type-dependent manner [10][11][12][13][14]. Therefore, a complete understanding of their role in GPCR signaling is of high clinical importance. In this article, we will review current research linking GRKs to various aspects of cancer biology and discuss the therapeutic potential of targeting GRKs.
GPCR signaling pathway
GPCRs serve as the interface between intracellular and extracellular environments. Receptors of this family are characterized by seven highly hydrophobic transmembrane helix structures. Stimuli such as photons, small chemicals, ions, or protein ligands induce conformational changes in the GPCR, which translate into cell signaling responses by activating coupled trimeric G protein complexes [15]. Based on sequence and structural similarity, GPCRs are classified into five subfamilies, namely rhodopsin, secretin, glutamate, adhesion, and frizzled [1]. The core structure of each GPCR can be divided into three domains: the extracellular region (ECR), which includes the N-terminus and three extracellular loops; the transmembrane region (TM), which includes seven α-helices; and the intracellular region (ICR), which contains three intracellular loops, an intracellular amphipathic helix, and the C-terminus [16]. Generally, the functional role of the ECR is to initiate binding of specific ligands. The TM region forms the receptor core structure, which is important for transducing extracellular stimuli through conformational changes. The TM region may also be involved in ligand binding in the majority of GPCRs. The ICR interacts with coupled G proteins, which are activated through conformational changes and transduce downstream intracellular signaling [16].

Upon induction of conformational changes, GPCRs mediate signaling through G protein heterotrimers (Gα, β, and γ subunits) [17]. These G proteins help transduce downstream signaling amplification based on the different functions of each Gα subunit isoform. Heterotrimeric G proteins transduce signals by exchanging GDP for GTP on the Gα subunit in response to ligand-dependent activation of the GPCR. The Gα-GTP subunit can then separate from the Gαβγ heterocomplex and interact with its downstream signaling effectors. Because different GPCRs interact with a diverse array of intracellularly coupled Gα protein isoforms, they can modulate a variety of downstream signaling pathways and cause unique physiological responses when activated by different stimuli (Figure 1) [18].

Figure 1. In response to GPCR conformational changes, G proteins activate downstream targets by exchanging GDP for GTP. GRKs phosphorylate the intracellular regions of GPCRs. This results in receptor desensitization and arrestin recruitment. Arrestins initiate receptor endocytosis, followed by receptor recycling or degradation.

All mammalian GRKs possess a similar protein structure, with a catalytic domain (~270 amino acids), an N-terminal domain (~185 amino acids), and a variable-length C-terminal domain (~105-230 amino acids). The N-terminal domain contains two conserved motifs for ligand binding and plasma membrane insertion [21,22]. A single regulator of G-protein signaling (RGS) domain (~120 amino acids) is located within each of the N- and C-terminal domains, and both RGS domains contain a kinase catalytic domain (KD). The RGS domains regulate GPCR function via phosphorylation [23][24][25]. Phosphorylation occurs in response to ligand binding to the GPCR. GRK-induced phosphorylation of cytoplasmic serine and threonine residues of the GPCR triggers conformational changes in the receptor, revealing high-affinity helical binding domains for β-arrestins. As a result, β-arrestin binds to the GPCR and inhibits downstream signaling [26].
Because GRKs also belong to the protein kinase A, G, and C families (AGC), all GRKs share a conserved AGC sequence, which is important for kinase activity [27]. Unlike other GRKs, GRK2 and GRK3 have a pleckstrin homology (PH) domain, which is critical for serine/threonine phosphorylation site recognition and termination of Gβγ-induced signaling [28,29]. In addition, the Gβγ binding site of GRK2 is located in the N-terminal domain, which is involved in GRK2 cell surface localization and termination of Gβγ downstream signaling [30]. Furthermore, GRK4 has a phosphatidylinositol-(4,5)-bisphosphate (PIP2) motif in its N-terminus that can increase its catalytic kinase activity [31]. GRK4 is predominately expressed in the brain, kidney, testis, and human granulosa cells [32][33][34][35]. GRK1 and GRK7 are expressed mainly in retinal cells, where they mediate photo-transduction by phosphorylating rhodopsin receptors [36]. All other GRKs are expressed ubiquitously throughout the body and regulate the functions of a variety of GPCRs [37].

There are many more types of GPCRs than GRKs, with each GRK interacting with and phosphorylating multiple GPCRs [13]. However, several groups have found that a subset of GRKs show substrate specificity with a select few GPCRs, despite the structural similarity of their domains. Therefore, the physiological roles of each GRK isoform may vary significantly [38]. In agreement, GRKs can find other regions of their targeted GPCR to phosphorylate even when the domain is mutated [39]. The N-terminus of each GRK is highly conserved and critical for GPCR identification. However, how GRKs recognize GPCR conformational changes is still unknown, making their role during GPCR activation unclear [40].

Termination of GPCR signaling
The major function of GRKs is to mediate signaling termination through phosphorylation of activated GPCRs. The activated GPCR is directly regulated by two major mechanisms involving both GRKs and arrestins, where arrestins are recruited to the GPCR upon phosphorylation by GRKs. These two interacting regulatory proteins terminate GPCR intracellular signal transduction and control G protein subunit coupling. The mechanism of GPCR inhibition involves GRK phosphorylation of the intracellular regions of the target GPCR. Arrestins then bind to the phosphorylated GPCR domain to inhibit further G protein activation. This process leads to GPCR desensitization, internalization, and recycling or degradation, which requires the recruitment of other proteins such as AP-2 and clathrin.

To phosphorylate a ligand-bound GPCR, the GRK must be located at (or translocated to) the plasma membrane, where it forms a complex with the receptor. GRK1, GRK4, GRK5, GRK6, and GRK7 are normally found on the plasma membrane, where they bind and phosphorylate their target receptors. However, GRK2 and GRK3 require further steps in order to localize to the plasma membrane. These two kinases are mainly localized in the cytosol and endoplasmic reticulum. Only upon GPCR activation do they recognize detached Gβγ subunits and undergo translocation to the plasma membrane to desensitize GPCRs [41]. Once a ligand binds to a GPCR, GRKs sense the intracellular conformational changes as the receptor separates from coupled G protein subunits. They can then phosphorylate the C-terminus of the target GPCR [42]. Phosphorylation will then induce additional GPCR conformational changes and increase its affinity for arrestins.
The binding of arrestins prevents further coupling of G protein subunits and inhibits second messenger signal transduction [43]. Furthermore, GRKs also attenuate GPCR signal transduction by controlling the responsiveness of GPCRs to their ligands. This GPCR desensitization allows the cell to distinguish between different types of ligands, such as chemokines, and to define the intensity of the cellular response [44]. Previous studies have shown that GRK specificity for a particular GPCR is determined largely by the ligand bound to the GPCR [45].

GPCR desensitization
GPCRs amplify signals in response to ligand binding in a dose-dependent manner. Stimulated receptors become less sensitive depending on the amount of time they remain in their active form, a phenomenon known as receptor desensitization [24,46,47]. In 1983, Sitaramayya et al. demonstrated that GRK1-mediated rhodopsin phosphorylation correlated with cGMP phosphodiesterase activity, and suggested that GRK1 was involved in GPCR desensitization [48]. Thereafter, the importance of the pleckstrin homology/Gβγ-binding domain in the recruitment of GRK5 during GPCR desensitization was shown in vitro and in vivo. Furthermore, it was found that in the absence of phosphorylation by GRKs, GPCR receptor desensitization was greatly reduced [49]. Overexpression of GRK2 and β-arrestin in COS-7 cells resulted in GRK2-mediated phosphorylation of the β-adrenergic receptor and subsequent recruitment of β-arrestins [50]. Several in vivo studies have identified co-localization of GRK3 and β-arrestin-2 in olfactory neuroepithelium in conjunction with dendritic knobs and cilia, whereas GRK2 and β-arrestin-1 are not expressed in these tissues. In response to odorants such as citralva, GRK3-mediated GPCR desensitization was β-arrestin-2-dependent [51]. With regards to GRK4, no receptor substrate had been identified prior to 2000, when Sallese et al. successfully demonstrated that metabotropic glutamate receptor 1 (mGlu1) signaling could be desensitized by GRK4 in an agonist-dependent manner in HEK293 cells [52]. GRK7 was the last GRK family member identified, and is co-expressed with GRK1 in cone photoreceptor cells [53]. In a zebrafish model, GRK7A deficiency affected cone bleaching adaptation and spontaneous decay, which highlighted the role of GRK7 in GPCR desensitization in scotopic vision [54].

GPCR internalization and degradation
Internalization is an important mechanism used by cells to regulate GPCR activity and to allow recovery after desensitization. Internalization involves the endocytosis of activated receptors from the plasma membrane, a process that requires GRK activity. The presence of GRKs and their interaction with clathrin in endocytosed vesicles highlight the role they play during receptor internalization [55]. GRKs interact with clathrin through a clathrin-box structure located within the C-terminus normally used to form clathrin-coated pits, and the removal of this domain led to loss of GPCR internalization [56]. For example, silencing of the clathrin heavy chain protein inhibited β-adrenoceptor internalization and phosphorylation, suggesting clathrin may also regulate GRK activity [43,57]. Goodman et al. previously demonstrated that β-arrestin subunits 1 and 2 were important in facilitating GPCR internalization [58]. The association of GRKs and arrestins with the C-terminus of various GPCRs led to shuttling of the internalized complex to lysosomes.
Interestingly, the binding of β-arrestin to GPCRs was dependent on GRK phosphorylation of the target receptor. In cases of spontaneous β-arrestin-independent internalization, the receptor may also be recycled back to the plasma membrane [59]. One of the key roles of GRKs is to regulate the number of GPCRs at the plasma membrane through receptor internalization, which can prevent hyperactivation of the target receptor. Internalized receptors are then degraded by endocytic trafficking to lysosomes. This process can occur either rapidly or over a longer period of time. The internalization and degradation of the β2-adrenergic receptor occur within seconds to minutes upon agonist binding, and can be reversed within minutes after removal of the agonist without new receptor synthesis, a process that is regulated by GRK2. Normally, a rapid reduction of cell surface receptors only temporarily desensitizes the receptor. However, slower regulation of receptor numbers requires receptor endocytosis in combination with synthesis of new receptors, which also involves ubiquitin-dependent protein degradation as well as GRK3 and GRK6 [64,65].

The protein kinase catalytic domain, with a short AGC protein kinase domain, is responsible for GRK catalysis [153,154]. The C-terminus has more motif variants than the RGS/C domain, which is critical for receptor recognition [14]. Unlike other isoforms, the GRK2 subfamily (GRK2 and GRK3) has a pleckstrin homology (PH) domain, which is important for terminating Gβγ complex-related downstream signaling. The C-terminus length varies among all GRK subtypes (~105 to 230 amino acids).

GPCR biased signaling with GRK
The discovery of the concept of "biased signaling of GPCR" has revised the classical understanding of GPCR signaling. It is believed that an activated GPCR conformation will promote either G protein or β-arrestin signaling, or both, with the corresponding ligand known as a "G-protein-biased agonist", "β-arrestin-biased agonist", or "balanced agonist". The molecular mechanism of this phenomenon has not yet been fully revealed. The prevailing hypothesis is that a GPCR may adopt a distinct activated conformational state in response to each ligand, leading to different, exclusive downstream modulator(s). β-arrestin does not only serve as the terminator of GPCR signaling but also acts as a signaling protein, and β-arrestin-biased signaling always requires a phosphorylated GPCR. Therefore, GRKs are suggested to be critical for therapeutic strategies targeting β-arrestin-biased signaling, because GRKs are able to promote high-affinity binding of β-arrestin to GPCRs. Interestingly, β-arrestin may respond differently when distinct GRK subtypes phosphorylate the same GPCR. Kelly et al. used the β2-adrenergic receptor (β2AR) as an example to demonstrate that different GRK subtypes phosphorylate distinct intracellular sites [38]. After overexpression or silencing of GRK2 or GRK6 through transfection in HEK293 cells, and treatment with either an unbiased full agonist (isoproterenol) or a β-arrestin-biased agonist (carvedilol), the authors used specific antibodies against 13 phosphorylation sites on the β2AR to analyze the sites phosphorylated by each GRK subtype. The results revealed six (Thr360, Ser364, Ser396, Ser401, Ser407, and Ser411) and two (Ser355 and Ser356) phosphorylation sites for GRK2 and GRK6, respectively. In addition, GRK2 suppressed phosphorylation of the two sites phosphorylated by GRK6. Moreover, GRK6, but not GRK2, is essential for β-arrestin-dependent ERK1/2 activation.
Additional evidence for β2AR phosphorylation using PKA assay screening of Ser355 and Ser356 mutants was provided by Xiaofang et al. [66]. Their data supported the hypothesis that GRK subtypes might show preferential phosphorylation and trigger unique conformational changes in the GPCR. The different phosphorylation patterns therefore directly affect β-arrestin-dependent functions. Other receptors, such as the kappa opioid receptor, mu opioid receptor, angiotensin II receptor type 1A, C-X-C chemokine receptor type 4, and vasopressin type 2 receptor, were also reported to exhibit similar properties and showed opposing roles of GRK2/GRK3 and GRK5/GRK6 in β-arrestin-mediated signaling [67][68][69][70][71][72]. However, there are limited studies on the physiological and pharmacological functions of GRKs in GPCR-biased signaling. More importantly, because β-arrestin-biased signaling is involved in cancer-related signaling pathways, the role of GRKs in β-arrestin-biased signaling in cancer is worthy of attention.

Role of GRKs in non-GPCR signaling pathways
GRKs play significant roles in non-GPCR-mediated signal transduction by phosphorylating proteins such as receptor tyrosine kinases (RTKs). Previous studies have hypothesized a link between GRKs and RTK signaling that not only reveals the diverse role of GRKs in controlling intracellular signaling, but also highlights the potential benefit of targeting GRKs in cancer. In this section, we will discuss several examples of GRK regulation of non-GPCR signaling pathways.

Early evidence for GRKs as modulators of RTK signaling was based on the observation that ligand activation of the epidermal growth factor receptor (EGFR) promoted GRK2 translocation to the plasma membrane to initiate EGFR internalization. EGFR activation of ERK/MAPK via phosphorylation was enhanced in HEK293 cells in response to overexpression of GRK2. These results indicate GRK2 regulated not only EGFR internalization but also its ability to transduce signals [73]. Furthermore, EGF stimulation induced the formation of a GRK2/EGFR complex in a Gβγ-dependent manner [74], and pretreatment with the EGFR tyrosine kinase inhibitor AG 1478 reversed this effect [74]. Other targets of GRK2 include the retinal photoreceptor type 6 cyclic guanosine monophosphate (cGMP) phosphodiesterase (PDEγ), which is found in the internal membranes of retinal photoreceptors, where it reduces cGMP in rod and cone outer segments in response to light [75]. Wan et al. demonstrated that GRK2 targeted PDEγ in a complex with c-Src whereby threonine 62 of PDEγ was phosphorylated [76]. This is consistent with earlier reports that Src mediates GRK2 phosphorylation of other plasma membrane proteins [77]. In addition, GRKs mediate the cross talk between RTK and GPCR signaling pathways, such as that between the δ-opioid receptor (DOR), the μ-opioid receptor (MOR), and EGFR [78]. Reduced platelet-derived growth factor receptor (PDGFR) signaling is also associated with GRK2 expression. By modulating the levels of GRK2 expression, Freedman et al. showed that phosphorylation of PDGFR could be altered by GRK2 [79]. It was subsequently found that GRK2 enhanced the ubiquitination of PDGFR, leading to its desensitization. In smooth muscle cells, overexpression of full-length GRK2 significantly decreased PDGFR activity in a Gβγ-independent manner, but this effect was abrogated in cells expressing a truncated version of GRK2 consisting of only its pleckstrin homology/Gβγ-binding domains [80].
However, in the case of GRK5, which lacks a pleckstrin homology domain, PDGFR could still be phosphorylated [81]. Moreover, knocking out GRK2 in HEK293 cells strongly enhanced downstream PDGFR signaling, resulting in increased AKT activation [82]. Therefore, the catalytic activity of PDGFR depends on the inhibitory activities of GRK2 and GRK5. Interestingly, an opposite effect was found with regards to the modulation of insulin-like growth factor 1 receptor (IGF-1R) signaling by GRK2, where reduced GRK2 levels inhibited IGF-1R activation. Silencing of GRK2 also enhanced the expression of cyclins and IGF-1R in human hepatocellular carcinoma (HepG2) cells [83], and silencing GRK5 and GRK6 also impaired IGF-1R signaling. However, silencing GRK3 had no effect on the status of IGF-1R [84]. In this study, GRK2 and GRK6 were found to counteract each other with regards to IGF-1R activation and ERK phosphorylation. It was then suggested that IGF-1R interacts with both GRK2 and GRK6 to promote β-arrestin recruitment [84]. Previous studies have illustrated an interaction between the vascular endothelial cell growth factor receptor (VEGFR) and GRKs in the cardiovascular system and during tumor angiogenesis. Here, GRK5 was shown to regulate VEGF signaling and activation of its downstream effectors (AKT, ERK1/2, and GSK-3) in human coronary artery endothelial cells (HCAECs). However, Sandeep et al. demonstrated that GRK6 depletion enhanced tumorigenesis and metastasis using a GRK6 knock-out mouse model of Lewis lung cancer [85].

The role of GRK in GPCR transactivation
In the past decade, GPCR transactivation has further broadened this signaling network, with the two known patterns being transactivation from GPCRs to receptor tyrosine kinases (RTKs) and to receptor serine/threonine kinases (RS/TKs). Two mechanisms, ligand-dependent and ligand-independent, are generally accepted for GPCR-RTK transactivation. Membrane-bound matrix metalloproteases (MMPs) play predominant roles in ligand-dependent receptor transactivation. Following GPCR activation, MMP levels increase to enhance the release of membrane-anchored heparin-binding EGF-like growth factor (HB-EGF) [104]. HB-EGF further activates EGFR and hence triggers the metastatic and invasive behaviors of cancers. The ligand-independent pathway is based on GPCR activation of Src family proteins, which phosphorylate the intracellular tyrosine residues of the RTK to activate its downstream signaling pathway. Examples of GPCR-RTK transactivation in cancers are well summarized by Almendro et al. [105]. Regarding RS/TK transactivation by GPCRs, recent accumulating evidence establishes a GPCR-ROCK-integrin-TGFBR (ALK5) pathway. Burch et al. demonstrated that thrombin activated PAR-1 to transactivate the TGF-β receptor via integrin binding to the LLC, with a conformational alteration that led to TGF-β ligand initiation of the ALK5 downstream pathway in vascular smooth muscle cells [106]. This report also supported an earlier research model of integrin-LLC-ALK signaling [107]. As discussed above, current studies of GRKs involve GPCR-dependent and RTK-dependent mechanisms. As recently evidenced by Kamato et al. using RNA sequencing in vascular smooth muscle cells, over 50% of GPCR-activated gene expression was accounted for by RTK and RS/TK transactivation [108]. Thus, the role of GRKs in the regulation of GPCR-RTK or GPCR-RS/TK transactivation is becoming more apparent and important in cancer research (Figure 4).
The role of GRKs in cancer pathology
Because GPCR signaling is an important contributor to tumor growth and metastasis, discerning how GRKs regulate GPCR activity in cancer cells may greatly improve our understanding of tumorigenesis and oncogenesis, and help develop novel anticancer therapies. In the following section, we discuss the role of GRKs in cancer progression (an overview is summarized in Table 1).

Figure 4. Accumulating evidence suggests that GRKs have regulatory functions in GPCR, RTK, and RS/TK signaling pathways. As GPCR-induced transactivation of RTK or RS/TK has further broadened the GPCR network map, the next research area will certainly turn to the potential functions of GRKs during the transactivation process. Black arrow: studies have confirmed an interaction between GRKs and the target substrate. Dotted blue arrow: potential function of GRK in the transactivation of RTK or RS/TK. T in circle: signaling termination. P in circle: receptor phosphorylation. R in circle: regulation of receptor activities after interaction.

GRK1 and GRK7
Because GRK1 and GRK7 are almost exclusively expressed in retinal tissue (cone and rod cells) and phosphorylate opsin GPCRs during visual acquisition, only a handful of cancers have been implicated in dysfunctional GRK1/7 activity. Several groups have shown that GRK1 and GRK7 may be involved in embryogenesis, but also indirectly interact with proteins such as Rho GDP-dissociation inhibitor (RhoGDI) and PDEγ, both of which have been found to be aberrantly regulated in cancer [109,110]. Furthermore, reductions in GRK1 activity due to a mutation or deletion causing nonfunctional protein have been shown to contribute to the onset of Oguchi disease (stationary night blindness, Oguchi type 2) [111]. A direct link between reduced GRK1/7 activity and cancer progression has not yet been confirmed. However, GRK1/7 may play a role in cancer-associated retinopathy through its interaction with recoverin (a calcium-binding protein) in patients with lung cancer [112].

GRK2
The role of GRK2 in cancer has been well characterized. High levels of GRK2 expression are found in differentiated thyroid carcinoma, whereas GRK5 levels are significantly decreased compared to normal thyroid tissue. The levels of GRK2 protein are correlated with its activity, as it rapidly desensitizes the thyroid-stimulating hormone receptor (TSHR), whereas GRK5 inhibits desensitization of this receptor. In thyroid carcinoma, activation of TSHR increased cancer cell proliferation [113]. In human hepatocellular carcinoma HepG2 cells, GRK2 negatively regulates insulin-like growth factor 1 receptor (IGF-1R) signaling. In GRK2 knockdown cells, reduced GRK2 levels enhanced cyclin and IGF-1R expression. These results highlight the role of GRK2 in suppressing cell cycle progression [83]. Another study using hepatocellular carcinoma cells (HCCs) demonstrated that GRK2 inhibits IGF1 signaling, thereby regulating proliferation and migration [12]. It is generally believed that IGF-1 induces the expression of early growth response protein 1 (EGR1) through reactive oxygen species (ROS)-dependent activation of ERK1/2/JNK and PI3-K/PKB [114]. Modulating the levels of GRK2 expression in HCCs demonstrated that GRK2 regulated EGR1 expression after IGF1 treatment. Altogether, these studies reveal the therapeutic potential of GRK2 through inhibiting IGF1-mediated responses [12,83,84].
Moreover, in clinical studies, high GRK2 expression correlated with a high tumor (T) stage and poor survival rates in patients with pancreatic cancer. However, these levels could not be considered an independent prognostic marker [115]. Yao et al. used a rat osteosarcoma model to evaluate the association between GRK2 and cancer-related pain levels in dorsal root ganglion neurons. Implantation of breast carcinosarcoma cells in the tibia resulted in increased expression of GRK2. Nerve growth factor (NGF), together with GRK2, promoted the phosphorylation of opioid receptors and an increase in pain. In addition, anti-NGF therapies mediated the effects of GRK2 and arrestins to significantly relieve cancer bone pain [116]. GRK2 is also involved in miRNA-mediated pathways. MiR-K3, a Kaposi's sarcoma-associated herpesvirus (KSHV)-encoded miRNA, facilitates endothelial cell migration and invasion. Interestingly, overexpression of GRK2 in KSHV-infected tumor cells reversed this miR-K3-dependent induction. In addition, miR-K3 directly up-regulated GRK2 expression. Moreover, downregulation of miR-K3 by GRK2 inhibited the migration and invasion of KSHV-infected HUVEC cells. These results suggest that GRK2 plays a role in suppressing KSHV-associated tumor progression [117]. GRK2 has also been implicated in breast cancer progression. GRK2 levels are elevated in precursor lesions of mammary glands in MMTV-HER2 (mouse mammary tumor virus with Her-2 amplification) mice. The expression of GRK2 is also dependent on estrogen receptor alpha (ERα) signaling in human breast cancer cell lines. Moreover, increased GRK2 levels may contribute to cellular transformation by promoting mitogenic and anti-apoptotic activities during tumor development. Data obtained from breast cancer patients showed that GRK2 levels were significantly increased in infiltrating ductal carcinomas. This strongly suggests that a relationship exists between GRK2 activity and breast cancer progression [118]. Interestingly, a recent study demonstrated that up-regulation of GRK2 was associated with increased histone deacetylase 6 (HDAC6) expression. GRK2 increased the growth of both luminal and basal breast cancer cells in an HDAC6- and peptidyl-prolyl cis/trans isomerase (Pin1)-dependent manner, and inhibition of GRK2 increased the sensitivity of these cells to commonly used chemotherapeutic compounds. Therefore, the GRK2-HDAC6-Pin1 axis may be a potential therapeutic target for combination therapy [119]. Finally, GRK2 was found to be a negative regulator of CXCR4 (C-X-C chemokine receptor type 4), a chemokine receptor that mediates metastasis and is typically used as an indicator of patient prognosis [120]. GRK2 also plays a role in gastric cancer progression, as studies using human gastric carcinoma MKN-45 cells have shown that GRK2 mediated the homologous desensitization of H2 receptors in poorly differentiated cancers [121]. Desensitization of H2 receptors by histamine was inhibited in response to treatment with a GRK2, but not GRK6, antisense phosphorothioate oligo-DNA (PON). This indicates that GRK2 and GRK6 play significantly different roles in modulating H2 receptor pathways in MKN-45 cells. With regards to the role of GRK2 in skin cancer, low expression of GRK2 in melanoma cells could enhance cAMP production in response to MC1R agonists. MC1R has been shown to regulate pigmentation and differentiation of epidermal melanocytes and to contribute to melanoma progression. This effect was reversed by overexpressing GRK2 and GRK6 [63].
Thus, GRK2 and GRK6 may regulate the development of skin cancer by modulating MC1R signaling. [Table 1 legend: N/A, not reported; ↑, increases or promotes the related mechanism; ↓, decreases or diminishes the related mechanism; --, associated with.] A systematic study by Penela et al. described the regulatory role of GRK2 in tumor vessel formation, as it modulated the proliferation and migration of endothelial cells, and promoted hypoxia and macrophage infiltration during tumor angiogenesis [122]. Rivas et al. further suggested that GRK2 is involved in tumor vessel formation and acts as a regulator of angiogenesis through neovascularization in breast cancer. Using hemizygous GRK2 (GRK2+/-) and endothelium-specific GRK2 knockout mice, it was also shown that reduced GRK2 levels enhanced tumor growth and promoted new blood vessel formation [98].

GRK3

Fitzhugh et al. first showed a role for GRK3 in breast cancer progression, as knockdown of this gene in MDA-MB-231 and MDA-MB-468 cells inhibited CXCL12-mediated chemotaxis, suggesting that GRK3 regulates CXCL12-induced activation of CXCR4 [123]. In vivo, stable knockdown of GRK3 resulted in a high metastatic rate of xenograft breast cancer. Interestingly, according to the study by Billard et al., GRK3 regulated CXCR4 signaling in triple-negative and other molecular subtypes of breast cancer [13]. In these studies, GRK3 protein and mRNA expression levels were associated with chemokine-mediated migration. This observation was supported by other studies that used data extracted from The Cancer Genome Atlas (TCGA) database. Here, the authors demonstrated that the ratio between GRK3 and CXCR4 may be a key factor in controlling tumor migration and metastasis. Although these authors did not perform in vivo experiments to confirm their findings, they clearly showed that breast cancer metastasis was inhibited by GRK3 through regulation of CXCR4 signaling. Prostate cancer has also been used as a model to discern the role of GRK3 in metastasis, tumorigenesis, and angiogenesis. Li et al. found significantly increased migration of endothelial cells in response to elevated expression of GRK3. One group implanted GRK3 knockdown cells into the prostates of SCID mice and observed reduced proliferation and metastasis of the primary tumors. Microvessel density was modulated by overexpression of GRK3 in primary tumor cells, indicating GRK3-mediated angiogenesis. In agreement, the expression levels of GRK3 were found to be significantly higher in patients with metastatic disease than in those with early-stage disease [124]. The first report involving GRK3 in retinoblastoma progression dates to 2001. By measuring corticotropin-releasing factor (CRF)-stimulated intracellular cAMP production and PKA signaling pathway activation in the Y-79 retinoblastoma cell line, Dautzenberg et al. found that GRK3 controlled desensitization of the CRF1 receptor. Because the CRF1 receptor is known to increase stress adaptation in cancer cells [125], decreased levels of GRK3 may be beneficial to cancer cell survival. Finally, recent studies have shown that GRK3 is aberrantly expressed in oral squamous carcinoma cells. The mRNA expression levels of GRK3 in tumor samples from patients with oral squamous cell carcinoma were significantly higher than in normal tissues. This study suggests that high expression of GRK3 is associated with oral squamous cell carcinoma tumorigenesis, possibly through activation of the β2-adrenergic receptor [126].
GRK4

GRK4 is expressed primarily in the testis, ovaries, brain, kidney, and myometrium [127]. Because of this restricted, low-level expression, there are few reports regarding its role in cancer. Several studies have shown that GRK4 modulates arterial angiotensin type 1 (AT1) receptor and dopamine receptor signaling [128,129]. In malignant ovarian granulosa cells, the expression of GRK4α/β was also found to be significantly lower than in benign granulosa cells [35]. Silencing of GRK4α/β resulted in decreased kinase activity and impaired follicle-stimulating hormone receptor (FSHR) desensitization. Such impaired desensitization may play a crucial role during ovarian granulosa cell transformation, although further investigation is required to fully discern the role of GRK4 in this process.

GRK5

GRK5 plays various roles during tumorigenesis. The expression of GRK5 is associated with worse prognosis in patients with grade II to IV glioma. Kaur et al. showed that the levels of GRK5 expression were significantly higher in high-grade primary and recurrent glioblastoma multiforme (GBM) than in low-grade gliomas. Samples obtained from patients with recurrent disease had higher GRK5 expression than tumor samples taken prior to recurrence. Knockdown of GRK5 was further shown to decrease the rate of proliferation and the expression of stem cell markers in glioblastoma cells derived from a patient with GBM [130]. Note that, as previously discussed, GRK2 and GRK5 have opposite roles in thyroid cancer, as GRK2 inhibited the activity of the TSH receptor, while GRK5 promoted its activation [113]. In prostate cancer, it appears that increased expression of GRK5 correlates with tumorigenesis and oncogenesis. Silencing of GRK5 inhibited the proliferation of prostate cancer PC3 cells by decreasing the number of cells in G1 and increasing those in the G2/M phase of the cell cycle. These results suggest that GRK5 regulates cell cycle progression in prostate cancer cells [131]. Other studies found that GRK5 was involved in the migration, invasion, and cell adhesion of prostate cancer cells through a possible interaction with moesin, a protein known to regulate cell spreading. Overexpression of the phosphomimetic moesin-T66D in PC3, DU145, and LNCaP prostate cancer cells significantly reduced cell migration, consistent with phosphorylation of moesin at T66 by GRK5, whereas expression of a phosphorylation-deficient moesin-T66A protein enhanced its activity [132]. In vivo studies have also demonstrated that knockdown of GRK5 in mice with xenografted prostate cancer cells suppressed tumor growth, invasion, and metastasis [132,133]. Finally, it has been shown that GRK5 can directly phosphorylate the p53 tumor suppressor in U2OS and Saos-2 osteosarcoma cells and promote its degradation, thereby inhibiting tumor cell apoptosis [90].

GRK6

Studies investigating the function of GRK6 in patients with hepatocellular carcinoma using immunohistochemistry have demonstrated a positive correlation between GRK6 expression and Ki-67 expression, pathological disease stage, metastasis, and survival rate. These authors hypothesized that GRK6 could be used as a biomarker for the early diagnosis of hepatocellular carcinoma [134]. Furthermore, recoverin, which is functionally associated with GRK6 (as well as GRK2 and GRK5), was aberrantly expressed in SSTW-2 gastric cancer cells [11]. GRK6 mRNA and protein expression was also found to be lower in hypopharyngeal squamous cell carcinomas than in normal tissues [135].
Cancer progression appears to be linked to aberrant methylation of GRK6, whose expression correlates with cancer cell invasion and disease stage. In vitro, treatment of FaDu cells with the demethylating agent 5-aza-2'-deoxycytidine elevated GRK6 expression and inhibited invasion [135]. In medulloblastoma cells, PDGFR/Src signaling may mediate GRK6 activity to regulate CXCL12-induced activation of CXCR4, increasing the rate of cell migration [136]. GRK6 depletion also enhanced tumorigenesis and metastasis in a Lewis lung cancer mouse model, as MMP-2 and MMP-9 expression was significantly increased in GRK6−/− animals. Pharmacological inhibition of CXCR2 activation abrogated this effect in an NF-κB-dependent manner [85].

GRK inhibitors

Increased GRK expression levels are correlated with chronic or acute use of GPCR-targeted drugs. In a study investigating GPCR-targeted drug tolerance in the brain, it was shown that increased GRK expression might be responsible for drug resistance [137,138]. In addition, various pathological conditions such as heart failure, depression, Alzheimer's disease, and Parkinson's disease are associated with the modulation of endogenous GRK expression [139-143]. Because scientists are beginning to understand more about the biology of GPCR/GRK signaling in cancer, targeting GRKs is emerging as a new anticancer strategy. The most likely methods of developing highly selective GRK inhibitors will be to target their unique kinase domains or to decrease GRK expression using selective RNA aptamers [144]. To date, no effective GRK inhibitors have been approved for clinical practice [145]. Because GRKs are a subfamily of AGC kinases, and their kinase domains are relatively similar in structure, nonselective GRK inhibitors are likely to cross-react with other AGC kinases [146]. However, Takeda Pharmaceuticals has developed a highly selective GRK2 inhibitor, known as Takeda Compound 103A, that inhibits GRK2 activity 50-fold more potently than other AGC kinases [147]. Other relatively selective compounds such as paroxetine (GRK2), GSK180736A (GRK3), balanol (GRK5), Takeda Compound 101 (GRK2), and sangivamycin (GRK6) have been widely studied [148]. Although they are reasonably selective for GRKs, they all show cross-reactivity with other kinases. The RNA aptamer C13, first described by Mayer et al. [149], competes with the alkaloid staurosporine for binding to the kinase domain of GRK2 without binding to the N- or C-terminus [144,149]. To our knowledge, the regions targeted by effective inhibitors are frequently mutated in the population, limiting these drugs' therapeutic potential [150].

Conclusion and perspectives

In conclusion, GRKs cooperate with arrestins and clathrin to control GPCR signaling. Because GPCRs and RTKs are the primary signal transducers of extracellular stimuli, GRKs act as important regulators that protect cells from overstimulation. Consequently, GRK activity can directly modulate the ability of endocrine hormones to influence the function of cells and tissues. In addition, GRKs help maintain homeostasis by inhibiting signal transduction and facilitating cellular communication in response to extracellular stress. Recent studies have shown that GRK expression levels are implicated in almost every facet of cancer biology, from proliferation, invasion, and migration to metastasis and oncogenic transformation [151].
The significant alterations of GRK expression observed in different cancers indicate that GRKs could be promising molecular targets for modulating GPCR responsiveness. Because of the heterogeneity of GRK structures, each GRK appears to play a different role in cancer progression. Investigating the intriguing, sometimes contradictory roles of each GRK subtype may enable the development of potent drugs that inhibit some isoforms but activate others for better disease control. GRKs regulate hundreds of unique GPCRs simultaneously, making it difficult to fully discern how they modulate cellular signaling. As more GPCR-targeted anticancer therapies are developed, it is important to identify these mechanisms. Our current hypothesis is that GRKs primarily act in feedback loops that alter GPCR activity, especially for arrestin-biased signaling. Furthermore, targeting GRKs may result in unintended consequences with regard to the effects of GPCR-targeting drugs, and may induce drug resistance. Because GPCR signaling is becoming a more attractive topic in cancer pharmacology research [152], it is necessary to gather a full understanding of these GPCR-GRK interactions to inform more practical and efficient drug design and development. Collectively, pharmacological intervention against GRKs provides a novel concept in cancer therapy.
The Value of Qualitative Methods in Cross Cultural Education: A Case Study from a First Person Perspective

A B S T R A C T

This paper presents a first-person account of using qualitative research methods to address medical residency education. The results of this project have been published. However, the study's process and its educational impact on the participants have not been well-described. The purpose of this article is to describe the background and conduct of the study itself. A family medicine residency program, the setting for this project, had recently begun accepting international medical graduates (IMGs) who had lived and received medical school education outside of the United States. The author, a faculty member in the residency and a clinical psychologist, and the physician faculty observed residents as they saw patients in the family medicine residency clinic. Concern was expressed about some of the IMG resident physicians' knowledge base and their ability to develop rapport with patients. In providing instruction in behavioral science, the author and a psychologist colleague noted that some of the IMG residents were confused by aspects of U.S. family life and the educational system. The relationship with clinical instructors and expectations of faculty also differed from the pedagogical norms in U.S. medical education. As a result, a qualitative interview project was undertaken to understand better how these IMG residents were experiencing and interpreting faculty-learner and resident physician-patient interactions. The results were beneficial in multiple ways. First, recognizing that faculty members were interested in their experiences helped develop rapport and trust between the faculty and residents. Providing the project results to the residents helped open discussion about cultural differences in medical education and patient care. For educators who may have difficulty understanding the perspective that learners bring to their education, the process described could be of potential benefit.
Background and Context

While there are multiple sets of published guidelines for systematically formulating and testing hypotheses (Morling, 2020), choosing a statistical technique for quantitative studies (Watt & Collins, 2022), conducting interviews (Creswell & Baez, 2021), and determining validity in qualitative research (Hayashi et al., 2021), there are few recent first-person accounts of conducting research (Streiner & Sidani, 2010). The purpose of this paper is to describe the process of conducting an applied qualitative research study in a medical education setting. The focus in terms of methodology is not on the specific qualitative interview questions that guided the data gathering but, instead, on the rationale (Mulisa, 2022) for the qualitative approach used in this particular project, how it was carried out, ethical issues, and the presentation of the findings to the participants. This latter technique, member checking, is considered a qualitative validity assessment (Motulsky, 2021). The study's results are available (Searight et al., 2014, 2020; Searight & Gafford, 2006). It is hoped that this account will be of value to faculty and researchers who may find themselves in a situation in which a study of this type would be useful for enhancing the quality of education (Thompson Burdine et al., 2021). In keeping with the goals and theme of the article, the narrative will often be presented in the first person.

This qualitative study was conducted at a community-based, university-affiliated family medicine residency program. The author is a clinical psychologist who also received an additional master's degree in public health. I was the Director of Behavioral Science in the residency and a Clinical Associate Professor of Community and Family Medicine at the time of this project. The residency program with which I was affiliated had recently undergone changes that added a distinct international dimension to medical education. As a result, the recent classes of incoming residents for the three-year family medicine program included many physicians who had received their education and medical school training outside the United States.

International Medical Graduates and Residency Education

From a clinical perspective, my psychologist colleague and I taught behavioral science in two venues. First, we both regularly saw patients whom primary care physicians referred for diagnostic evaluation or brief psychotherapy. Residents typically joined us for these encounters. In addition, we were responsible for clinical instruction in physician-patient interaction and effective patient interviewing. As a result of these experiences, I, along with other faculty members, noted several issues among our international medical graduates that we had not typically seen among graduates of U.S. medical and osteopathic schools.
For example, while mental health is a core component of family medicine and a basic competency in the specialty, many of our international graduates appeared uneasy with it. They were unclear about how to proceed when patients presented with symptoms of depression or anxiety disorders. Additionally, many patients with mental health issues that a family physician could effectively manage were being referred to me or to outside psychiatrists. As we got to know these residents better, they indicated that they had had little formal instruction in psychiatry during medical school. Often, their sole exposure to mental health was following an attending physician on ward rounds in large long-term psychiatric facilities. As part of my role as a faculty member, I also participated in monthly reviews of resident progress in the program. In these discussions, resident evaluations from recent rotations were reviewed, and observations by physician preceptors about the residents' performance in the family practice clinic were described. During these meetings, concern was expressed about the perceived knowledge and skill level of some international medical graduates (IMGs). For example, one repeated situation was that, when verbally presenting a patient to a supervising physician, an IMG resident would describe the presenting problem, the relevant aspects of the history, and any physical examination findings, but then not go on to the expected differential diagnosis and recommendations for treatment. As a result, supervising faculty often concluded that these physicians had an inadequate knowledge base. We later learned that this style was an expression of humility and respect for the faculty physicians and not indicative of a knowledge deficit (Searight & Gafford, 2006). There were also observations that the IMG residents did not seem to appreciate some of the psychosocial aspects of patient care. For example, in an encounter with a pregnant adolescent female, the resident physician focused solely on the technical aspects of the pregnancy, such as assessing fetal heart rate and fundal height and asking about diet and smoking. However, issues surrounding the psychosocial development of the pregnant 15-year-old were not addressed (Searight et al., 2014).

Choice of Research Method

While I had some tentative hypotheses about the possible cultural dynamics underlying the behavioral patterns described, I was also aware that my piecemeal understanding was uncertain at best. I also thought that any observations and recommendations regarding the training of IMG residents would have a more significant impact if they were based on systematic inquiry. Since this was a topic area in which there was little in the medical education literature for guidance, a qualitative project appeared appropriate (Ng et al., 2018). Several recent qualitative health studies that I had conducted had yielded surprising findings and led to a much deeper understanding of the topic under study than corresponding quantitative findings. I had previously collaborated with clinical pharmacists overseeing drug trial studies in the family medicine clinic. We were initially interested in participants' recall and understanding of the study information presented, assessed with an instrument that provided a summary score (Miller et al., 1994). Participants recalled approximately 70% of the information over which they were tested.
It was helpful to have a psychometrically sound measure of the elements of informed consent (there were few such instruments at the time). However, one of my pharmacy colleagues and I became interested in how drug trial participants viewed their role, their perceptions of the consent process, and the corresponding explanation of the study they received. We developed a series of open-ended questions and interviewed 14 participants from recent clinical studies. After coding the interview narratives for common themes, we generated a description of the participants' views of informed consent ("It prevents lawsuits") and random assignment ("I imagine they rolled dice for it.") (Searight & Miller, 1996). Given my previous successful experience with ethnographic methods, such as the long interview (McCracken, 1988) and thematic coding of the resulting narrative transcripts (Strauss et al., 1998), this approach appeared to be a good fit for this study. Another reason for selecting a qualitative interview approach was that this research topic had received relatively little attention. Qualitative studies are often recommended for the initial exploration of a domain of interest (Pokhrel et al., 2021). The number of participants required for valid qualitative research is typically much smaller than for a quantitative study. Data saturation is used to determine adequate sampling (Hennink & Kaiser, 2022). At saturation, adding further interview narratives does not add information to the study's findings. Another standard suggested for data saturation is the number of interviews necessary for another investigator to replicate the results with a similar sample (Hennink & Kaiser, 2022). For extended, in-depth interviews of the type used in this study, saturation may occur with as few as six protocols. Ten IMGs participated in the study. Member checking involves presenting the study's results to the participants who were the source of the data (Thomas, 2016). The process of presenting, and the feedback generated, may also help further refine the descriptions of the themes generated through the coding process. Member checking is also a validity check (Harper & Cole, 2012; Motulsky, 2021) that can ensure that the inductively generated themes are accurately described.

Research Design

Before the IMG study, I had used qualitative interview methods in several previous studies in the family medicine setting. Qualitative interviews are particularly useful in understanding health-related issues that have not been well studied. In particular, an open-ended inductive approach is most beneficial when the goal is to understand the subjective perspective of a specific culture or subgroup. Additionally, in-depth interviews provide flexibility in that an unanticipated category of meaning may spontaneously arise. For example, medical education includes the official curriculum of courses, clinical clerkships, and residencies, but also has a hidden curriculum of norms, values, and expected behavior (Mahood, 2011). This hidden curriculum was something IMGs were found to be struggling to understand, as this excerpt illustrates: "I don't know how to interact with other staff in the office, like the nursing staff. In India, they would never question a doctor. Also, it is hard to figure out who does what here. For example, at home, the doctor would draw the patient's blood" (Searight & Gafford, 2006).
In conducting the interviews, I followed Spradley's (2016) general guidelines by opening the interview with "grand tour" questions ("What has your experience been as a resident in the U.S.?"). These were often followed by "mini tour" questions focusing on a particular area ("How are you finding the patients that you are encountering in the residency clinic and hospital?"). Compare-and-contrast questions help clarify key differences and the boundaries of the interviewees' meaning categories ("We have been talking about how mental health problems are common in American primary care patients. Are these the same types of problems that would be placed in the category of 'mental health problems' in your home country?"). Primarily because of the IMG residents' interest, the findings were presented at a conference attended by all family medicine residents and faculty. As noted earlier, this presentation was an unplanned forum for member checking (Thomas, 2016).

Research Practicalities

In this study, there were a few obstacles. One was that being a faculty member while conducting the research interviews could be perceived as a conflict of interest. On the other hand, the topic area and the methodology were not areas in which other faculty had much experience. While I did not provide ongoing counseling or psychotherapy for residents, it was not uncommon for residents to occasionally share personal information with me. Although I had also discussed some of the interview topics informally with residents during clinical instruction, I did promise our informants confidentiality. I indicated that we wanted their input to help us improve behavioral science education within the program. At the outset of the interviews, I had no clear intention of doing anything formal with the findings, such as publishing or presenting them at a conference.

Methods in Action

As noted above, in conducting qualitative interview studies, I typically begin with a "grand tour" question, a broad, open-ended query about the topic area. Then, based on participants' responses to this question, I will follow up with a more specific question or follow the respondent's lead if they take me in an unanticipated direction. I have found that these surprises often lead to issues I had not considered. While I usually have a formal list of questions (often necessary for Institutional Review Board proposals), I may depart from these prepared queries when a potentially meaningful yet unexpected issue arises. For example, in the informed consent study, participants were aware they were in a double-blind, placebo-controlled pharmaceutical trial. However, when I questioned them about their understanding of these conditions, nearly all participants indicated that they were sure they had been given the active drug and not the placebo (Searight & Miller, 1996). Given the number of interviewees and the process of assignment to experimental or control conditions, it was highly unlikely that all of these interviewees had indeed been prescribed the active drug. This issue, called the "therapeutic misconception," had not been widely recognized in biomedical research (Searight & Miller, 1996). In the current study, I was surprised by the degree of engagement that the IMG residents demonstrated during the interviews.
While several interviewees were initially a bit guarded, my expressions of support and interest seemed to increase their comfort rapidly. It often seemed as if they were eager to discuss their experience and were pleased, and even flattered, that a faculty member was interested. One issue in the interviews was that many of the IMGs lived with an ongoing fear that they would be "found out" and their visas canceled, resulting in their being sent back to their home countries. This "imposter" experience seemed to contribute to keeping a low profile and a highly deferential style when engaging with faculty. As a faculty member, I was surprised to hear this theme and responded empathically, albeit with some sadness on my part, that this fear was so prominent and pervasive.

Practical Lessons Learned

The history of this study illustrates how a research project can have unintended consequences. To some extent, the focus on IMGs was an "elephant in the room" issue. IMGs have often been viewed as medical second-class citizens (Chen et al., 2011). A high level of sensitivity was necessary to carry out and present the research findings without conveying discriminatory overtones. Unfortunately, these overtones are familiar in discussions of IMGs. For example, the primary governing and oversight bodies for medical education often made a clear distinction between U.S. citizens educated in U.S. medical schools, U.S. citizens receiving their medical education abroad, and non-citizen IMGs. There has often been an unspoken prejudice against these physicians, of which IMGs are very aware (Moore & Rhodenbaugh, 2002; Searight et al., 2014). While I have published multiple qualitative studies, this project did not originate as a formal study per se. Instead, the focus was to better understand some of the educational issues that arose when U.S.-trained faculty were teaching and supervising the clinical work of IMGs. While IMGs comprise about 25% of the U.S. physician workforce, at the time of this study there was little information for medical educators aside from demographics. In particular, little literature was available about how these physicians experienced and managed the abrupt transition from countries where outpatient medical care was less developed. Many new researchers become discouraged when considering a topic with little associated research literature. However, the absence of literature on a topic is often a cue that a qualitative study would be of benefit. When the initial manuscript based on this project was submitted for publication to a specialty medical education journal, it was promptly rejected and not sent out for review. The editor indicated that the paper would be considered only if we presented quantitative findings consistent with our qualitative results. This editorial response illustrates a common issue with qualitative research in the scholarly community. We submitted the manuscript to another journal (Academic Medicine), which enthusiastically accepted the paper with minor revisions and used it as the center of several articles on international medical graduate education. With 86 citations to date, it is hoped that the paper has positively influenced medical education.

Conclusion

Educators at all levels encounter situations in which learners are not demonstrating progress in meeting educational goals. Rather than assuming deficiencies in the learners, educators may wish to consider how the learners are interpreting the educational setting and the instructors' expectations.
Qualitative methods are particularly useful in investigations in which the goal is to understand how a minority culture interprets situations created by the dominant culture. Notably, the dominant culture rarely reflects upon the values, assumptions, and meanings inherent in its habitual behavior. Qualitative interviewing often brings additional dimensions to awareness that would not be captured by a Likert-scale quantitative survey.

Competing Interests

The author has no competing or potentially competing interests with respect to this article.

Publisher's Note

AIJR remains neutral with regard to jurisdictional claims in institutional affiliations.
A Comparison of Saudi Building Code with 1997 UBC for Provisions of Modal Response Spectrum Analysis Using a Real Building

The study uses an actual building to compare the modal response spectrum analysis results of the Saudi Building Code (SBC) and the 1997 Uniform Building Code (UBC) used in Saudi Arabia before the introduction of the SBC. A sample of four buildings with previously reported IBC-UBC comparisons is used to verify the analysis procedure. Eight sample places from the SBC map of Saudi Arabia, together with two sample places of high seismic activity in the USA, were taken for the comparisons. The study used the software package ETABS for modeling and analysis. The results differ from the comparisons reported for the test places in the USA. It is concluded that at most places the SBC base shear is higher for both ELFP and MRSA; however, this cannot be generalized to all cases. The same holds for the overturning moments. Consequently, we cannot report that the SBC is more conservative than the UBC for all scenarios.

Introduction

Several academic studies have offered assessments of building code provisions for seismic analysis. Reference [1] reported on seismic design for building structures in Canada, the United States, Chile and New Zealand. The code provisions were applied to a 9-storey building situated in an area of each country having similar seismic circumstances. Comparable seismic loads were identified for this structure in Canada and Chile, whereas considerably lower seismic effects were used for the U.S. The use of the dynamic (response spectrum) analysis procedure resulted in lighter and more flexible structures as compared to the corresponding static force system in all countries. Seismic stability requirements had a greater influence on designs in Canada and New Zealand. Reference [2] advocated for higher emphasis on the evaluation of international standards. The standards considered are the Eurocode, the IBC (American Society of Civil Engineers) and the Indian code, i.e., IS 1893:2002. These findings also help in identifying the main elements that lead to reduced performance of structures subjected to earthquakes, and guide the achievement of satisfactorily safe performance under future earthquakes. The lateral seismic forces, applied at the center of gravity of the structure, are calculated manually for each floor in the X and Z directions. The analytical output for the model buildings is shared graphically and in tabular form. Differences in the findings using the three codes, i.e., the Eurocode, the IBC (ASCE) and the Indian code, are reported. A comparative examination is done in terms of base shear, displacement, axial load, and moments in the Y and Z directions for particular columns. It also presents a floor-wise comparison of the different codes for selected columns and beams in terms of displacement, axial load, and moments in the Y and Z directions.
Reference [3] reported a comparative study of the code-based record selection procedures suggested by Eurocode 8, ASCE 41-13 and NZS 1170.5:2004. Different procedures in the seismic evaluation of four steel buildings, designed according to different criteria, are employed to report comparable findings. Reference [4] reported an assessment of the seismic provisions of three design codes, the Philippine code, Eurocode 8 and the American code, applied to well-known residential frames of normal occupancy, and compared regular and irregular reinforced concrete frames for four-story building structures. The response spectrum and the seismic parameters of NSCP 2010 were taken for the horizontal load action with diverse load combinations. Based on the findings of column axial load-bending moment interaction diagrams, EC8 was reported to be conservative when matched against NSCP 2010 and the 2009 IBC. It was concluded in [4] that, for the design and investigation of ordinary RC residential buildings with certain irregularities, the EC8 provisions can be chosen as safer. The most relevant paper in this regard is authored by Nahhas [5], reporting an assessment of the seismic forces generated from a Modal Response Spectrum Analysis (MRSA) by implementing two building codes, the 1997 Uniform Building Code (UBC) and the 2000-2009 International Building Code (IBC), for normal domestic buildings of standard occupancy. The UBC was reported to be considerably more conservative than the IBC for all the investigated cases. The UBC design response spectra have higher spectral accelerations; therefore, the response spectrum investigation delivered a much higher base shear and higher moments in the structural members as compared to the IBC. These studies lead to the conclusion that normal office and domestic buildings designed using UBC 1997 are considered to be overdesigned, and are therefore relatively safer than designs based on the IBC provisions. Reference [6] assessed the Turkish Earthquake Code and Eurocode 8 against the UBC based on MRSA, using a finite element investigation process for the structural examination. This is the only research that compared two codes using MRSA. In this research, the IBC response spectra were reported to be dissimilar from the UBC and others; however, no MRSA data comparing the IBC and UBC were reported. Reference [7] also presented a comparison of the IBC and UBC. The findings reported in this study were obtained using the ELFP, not MRSA. It is a key study on this topic and has significant but non-conclusive findings. The results are mixed for the two specifically designated sites in San Francisco and Sacramento. This research found that the UBC is more conservative in some areas, but the findings reported are not definite. Another paper [8] compared the IBC with Mexico's code.
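For context on what these ELFP-based comparisons actually evaluate, the static design base shear can be written out explicitly. This is a recap of the published UBC 1997 formulation (Section 1630.2 of that code), included as background rather than as a result of the studies above:

```latex
% UBC 1997 (Sec. 1630.2): ELFP design base shear
V = \frac{C_v\, I}{R\, T}\, W,
\qquad
0.11\, C_a\, I\, W \;\le\; V \;\le\; \frac{2.5\, C_a\, I}{R}\, W
% In seismic Zone 4 the code adds the further lower bound
V \;\ge\; \frac{0.8\, Z\, N_v\, I}{R}\, W
% C_a, C_v: zone- and site-class-dependent seismic coefficients; I: importance
% factor; R: response modification factor; T: fundamental period; W: seismic
% dead load; Z: zone factor; N_v: near-source factor.
```

The dependence of the bounds on Ca and Cv is why the site class and the mapped seismicity of a location drive the ELFP base shear directly.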
It is important to note that the IBC does not permit the use of the ELFP for computing the base shear and other internal forces for structures in seismic design category D and above if the fundamental modal period of the structure computed by FEM is larger than that given by the ELFP. In such cases, a procedure such as modal response spectrum analysis, linear response history analysis, the nonlinear static procedure or nonlinear response history analysis must be used [9]. The normally employed procedure is the response spectrum analysis method (ASCE 7-05 Section 162), which is also acceptable to the IBC. This paper has adopted the same for all cases. This paper extends the work presented in [5], using a real building to compare the modal response spectrum analysis results of the Saudi Building Code (SBC) and the 1997 Uniform Building Code (UBC) used in Saudi Arabia before the introduction of the SBC. Eight sample places from the SBC map of Saudi Arabia, together with two sample places of high seismic activity in the USA, are taken for the comparisons. This study was important to ensure that the conclusion drawn by [5], that the UBC is more conservative than the IBC for academic frames, is valid for the SBC using the spectral accelerations from the maps provided in the code.

Test Locations

Eight test locations were randomly selected in the KSA for comparing the Saudi Building Code (SBC) with the UBC; they are shown on the SBC map in Figure 1. The selected test locations in the USA are shown in Figure 2. Both USA locations are in areas of high earthquake probability.

Modeling

The software package ETABS was used in this study for modeling and analysis. The structures were modeled as special moment-resisting frame systems, which is a requirement of building codes for higher seismic zones. Slabs were modeled using shell elements to represent the real slab behavior, providing stiffness in all directions and transferring the mass of the slab to the beams. A rigid diaphragm was assumed at all floor levels. The modal combination method used for all models was CQC (Complete Quadratic Combination). It is preferred over SRSS (Square Root of Sum of Squares) because, for structural models such as the sample buildings used here, CQC results are generally much more accurate for structures with closely spaced modes [10] (both combination rules are written out in the formula recap near the end of this paper). The internal forces obtained using CQC were about the same as with SRSS; this was verified for all the buildings used in this work.

Results of Verification

Before modeling the real building structures, verification was done using the published problems in [5], which calculated base shear and moments using the UBC and IBC at four US locations for four different building structures modeled as 3-D frames. That paper shows results for soil types A to E. This study reproduces exactly the same results as published by Nahhas [5]. The results vary from building to building quantitatively but remain the same qualitatively. The paper shows that the maximum base shear varies from sample location 1 (an area of low seismicity) to sample point 4 (an area of high seismicity) in a logical manner. Also, the maximum base shear values increase with varying site class from A to E for each sample location except sample location 4. For sample location 4, for all buildings, the maximum base shear for site class E is significantly lower than for site class D. An explanation has been provided in the paper for this anomaly; it is related to the modal contributions and the design response spectrum for sample point 4.
The design response spectra for soil types D and E at sample point 4 show a large difference in peak spectral acceleration. For site class D it is close to 1.7 g, whereas for site class E it is about 1.4 g. This discrepancy does not exist for the other sample locations. This is how the code has been developed for this area of high seismicity.

Real-World Building

A real-world building was modeled using ETABS. The framing plan for the building is shown in Figure 3, and Figure 4 shows a perspective view of the building structure. The important specifications of the structure are given in Table 1. The fundamental time periods of the structure, calculated using the Equivalent Lateral Force Procedure (ELFP) as well as MRSA, are shown in Table 3. The ELFP calculation has been done using the UBC and SBC. In MRSA, the code does not specify any parameters, so the period is independent of the code. MRSA gives a higher time period (about 40% higher) when compared to the ELFP. The first MRSA mode represents a mass participation along the X-direction equal to 29.4% of the total mass, as shown in Table 2. A much bigger mass participation is along the Y direction, with a value of 63.42% for the second mode at a time period of 1.228 s. Mode 3, with a time period of 1.014 s (about the same as the ELFP time period), is also significant, with a participation of 35.79%. Mode 4 has no significant participation, and Mode 5 has a participation of 16.48% along Y. Table 2 shows that about 80% of the mass participation is accounted for in the first 10 modes. The remaining modes do not participate significantly; however, 44 of the 50 modes together account for about 99% participation. Table 4 shows the parameters used to generate the design response spectra for the real building, assumed to be located at 10 different locations. These locations are shown in Figure 1 and Figure 2. Out of the 10 different locations, 8 are in various areas of the Kingdom of Saudi Arabia, and two have been selected in the severe earthquake area of California, USA. The parameters are shown for both the UBC 1997 code and the Saudi Building Code 2007. For each of the ten locations, the five site classes A, B, C, D and E are assumed, generating 50 cases of design spectra. Values of the SBC parameters SS and S1 for these locations are also given in Figure 1 and Figure 2. Since MRSA requires the input of design spectra, the design spectra were generated for all site classes at all sample locations (the construction of these spectra is sketched in code after the summary list below). Figure 5 shows the design spectra for Location 1; five different design spectra are shown for the five site classes A, B, C, D, E. Similarly, Figures 6-14 give the design response spectra for locations 2 to 10. It is important to note the following about these design spectra:

1) The response spectra for locations 1 and 2 correspond to Zone 0, with no earthquake activity according to the UBC. Therefore, the UBC design spectrum is just a horizontal line of zero magnitude, shown in red. But for the SBC, the location does represent a probability of earthquake, and therefore a normal design spectrum is obtained. This is an important finding in the context of the difference between the UBC and SBC.

2) It can be seen that the design spectra shapes do not change much as the site class is varied from A to D, but the peak value goes up, ranging from 0.055 g to 0.17 g.

3) The design spectra shapes and peak magnitudes change drastically from one location to another. This ensures that the selected locations have quite different geological properties.

4) Looking at the peak accelerations of the design spectra, the following is observed for Location 1: P_SBC > P_UBC (P_UBC = 0).

The results of the analysis are summarized as follows:

1) Figure 15 summarizes the base shear data for the UBC and SBC ELFP for the real building. It can be seen that at most locations the SBC base shear is higher. However, this is not always true.

2) Figure 16 compares the base shear for the UBC and SBC using MRSA for the real building. Again, at most locations the SBC base shear is higher, but not always.

3) Figure 17 compares the overturning moments for the UBC and SBC using MRSA for the real building. Again, at most locations the SBC overturning moment is higher, and again this is not always true.
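As referenced above, the construction of these design spectra follows directly from the published shapes in the two codes. The sketch below is a minimal illustration of both constructions: the UBC 1997 spectrum (Fig. 16-3 of that code) and the SBC/IBC-style two-parameter spectrum built from SS and S1. The coefficient values in the example are illustrative only, not the paper's Table 4 values:

```python
import numpy as np

def ubc97_spectrum(Ca, Cv, T):
    """UBC 1997 (Fig. 16-3) design response spectrum, Sa in g.
    Ca, Cv: zone/site-class seismic coefficients; T: periods in s (> 0)."""
    Ts = Cv / (2.5 * Ca)        # end of the constant-acceleration plateau
    To = 0.2 * Ts               # start of the plateau
    T = np.asarray(T, dtype=float)
    return np.where(T <= To, Ca + 1.5 * Ca * T / To,   # linear rise from Ca
           np.where(T <= Ts, 2.5 * Ca,                 # plateau at 2.5*Ca
                    Cv / T))                           # velocity branch Cv/T

def sbc_spectrum(SS, S1, Fa, Fv, T):
    """SBC/IBC-style two-parameter design spectrum, Sa in g.
    SS, S1: mapped spectral accelerations; Fa, Fv: site coefficients."""
    SDS, SD1 = (2.0 / 3.0) * Fa * SS, (2.0 / 3.0) * Fv * S1
    Ts = SD1 / SDS
    To = 0.2 * Ts
    T = np.asarray(T, dtype=float)
    return np.where(T <= To, SDS * (0.4 + 0.6 * T / To),  # rise from 0.4*SDS
           np.where(T <= Ts, SDS,                          # plateau at SDS
                    SD1 / T))                              # descending branch

# Illustrative coefficients only (not taken from the paper's Table 4)
T = np.linspace(0.01, 4.0, 400)
print(ubc97_spectrum(Ca=0.40, Cv=0.56, T=T).max())   # plateau = 2.5*Ca = 1.0 g
print(sbc_spectrum(SS=1.0, S1=0.4, Fa=1.0, Fv=1.5, T=T).max())
```

Because the UBC spectrum collapses to zero in Zone 0 (Ca = Cv = 0 there) while the SBC always yields a non-trivial SS/S1 pair from its maps, this construction reproduces the behavior noted in item 1 of the list above.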
Conclusions

This study of a real building compares the UBC and SBC and reports the results. The findings are not in agreement with a previous similar study comparing the UBC and IBC, even though the results were obtained using the same models. The IBC-UBC comparisons published in previous work were also verified first. The design response spectra of the SBC were evaluated and compared with the UBC. The effect of soil class and geographical location on the design response spectra was generated in accordance with the SBC, the effects were studied, and the response spectra were presented. Code-compliant analyses for a real-world building were performed for eight soil types and geographical locations of Saudi Arabia to compare the base shear and internal forces in the structure. The design spectra shapes and peak magnitudes change drastically from one location to another, which ensures that the selected locations have quite different geological properties.

The study concludes that about 80% of the mass participation is accounted for in the first 10 modes; the remaining modes do not participate significantly. Therefore, it is recommended that the SBC require MRSA using at most the first 15 modes.

The ELFP calculation was done using the UBC and SBC. MRSA gives a higher time period (about 40% higher) when compared to the ELFP. It is recommended that the SBC provide guidelines relating MRSA to the ELFP; further research is required to develop such guidelines.

It is found that at most locations the SBC base shear is higher for both ELFP and MRSA. However, this is not always true, and the same holds for the overturning moments. Therefore, we cannot conclude that the SBC is more conservative than the UBC for all cases. It is recommended that further research be pursued to look more deeply into the cases where the SBC base shear is lower than the UBC base shear; the SBC seismic maps may not be very accurate. It is also advisable to develop proper guidelines in the SBC to handle such cases.

[Figure and table captions: Figure 1, sample test locations in Saudi Arabia; Figure 2, sample test locations in the USA; Figure 4, perspective view of the real building; Figure 16, MRSA base shears for UBC and SBC; Figure 17, comparison of overturning moments at the base for UBC and SBC; Table 1, real building specifications.]
Table 2 shows the modal periods and participating mass for the structure; results for 50 modes are presented.

[Table captions: Table 2, modal periods and participating mass for the real building; Table 3, fundamental time periods of the real building; Table 4, real building design spectra cases.]
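For reference, the two modal combination rules used in the Modeling section, and the effective modal mass underlying the participation ratios quoted from Table 2, can be written out explicitly. This is the standard structural dynamics formulation (with the equal-damping Der Kiureghian correlation coefficient for CQC), included as background rather than reproduced from the paper:

```latex
% SRSS and CQC combination of peak modal responses r_n
r_{\mathrm{SRSS}} = \sqrt{\textstyle\sum_n r_n^{2}},
\qquad
r_{\mathrm{CQC}} = \sqrt{\textstyle\sum_i \sum_j r_i\,\rho_{ij}\,r_j},
\qquad
\rho_{ij} = \frac{8\,\zeta^{2}\,(1+\beta)\,\beta^{3/2}}
                 {(1-\beta^{2})^{2} + 4\,\zeta^{2}\,\beta\,(1+\beta)^{2}},
\quad \beta = \omega_i/\omega_j
% For well-separated frequencies, rho_ij ~ 0 for i != j, so CQC reduces to
% SRSS, consistent with the near-identical internal forces reported above.

% Effective modal mass behind the participation ratios of Table 2
\Gamma_n = \frac{\phi_n^{\mathsf T} M\,\iota}{\phi_n^{\mathsf T} M\,\phi_n},
\qquad
M_n^{*} = \frac{\left(\phi_n^{\mathsf T} M\,\iota\right)^{2}}{\phi_n^{\mathsf T} M\,\phi_n},
\qquad
\sum_n M_n^{*} = \iota^{\mathsf T} M\,\iota
% phi_n: mode shape; M: mass matrix; iota: influence vector for the ground
% motion direction; zeta: (equal) modal damping ratio. The participating mass
% is M_n^*/(iota^T M iota); carrying enough modes drives its sum toward 100%.
```

This is also why codes typically require enough modes to capture the bulk of the seismic mass, in line with the paper's recommendation of using at most the first 15 modes.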
Increased dose of re-irradiation therapy improves the survival of patients with local recurrent esophageal squamous cell carcinoma after radiotherapy

Introduction: Local recurrence (LR) threatens the treatment of esophageal squamous cell carcinoma (ESCC). This study interrogated the optimal re-irradiation dose for LRESCC following radical (chemo)radiotherapy. Methods: We retrospectively analyzed a total of 125 patients with LRESCC after initial radiotherapy. Based on the re-irradiation dose, 58 patients were assigned to the low-dose (LD) group (50–54 Gy) while the remaining 67 were classified into the high-dose (HD) group (55–60 Gy). We recorded the response rate (complete + partial response), the 1-, 2- and 3-year survival rates, and toxicity. We then analyzed the impact of the different radiotherapy doses and of combination chemotherapy on the survival of the LRESCC patients. Results: After re-irradiation, the 1-, 2- and 3-year survival rates were 48.3%, 24.1% and 10.3% in the LD group, and 61.2%, 34.3% and 19.4% in the HD group (P<0.05), respectively. The median survival time of patients receiving radiotherapy alone was 9 months in the LD group and 15 months in the HD group (P<0.05). Whereas the survival rate of patients treated with chemoradiotherapy was higher than that of patients treated with radiotherapy alone in the LD group, chemoradiotherapy showed no advantage over radiotherapy alone in the HD group. In addition, the incidence of esophagitis, the most common toxicity, was higher in the HD group compared to the LD group (68.7% vs 58.6%, P<0.05). Our multivariate analysis demonstrated that re-irradiation dose was an independent favorable prognostic factor in patients with LRESCC. Conclusion: Taken together, our data show that increasing the re-irradiation dose (55-60 Gy) improves the long-term survival of patients with LRESCC after radiotherapy, with tolerable toxicity.

Introduction

Esophageal squamous cell carcinoma (ESCC) is the most common esophageal malignancy in China. Radiotherapy and surgery are the mainstay treatment options for ESCC. Unfortunately, the local recurrence rate of esophageal cancer remains at 40%-60% following radiotherapy [1]. Without active intervention, most of the patients with disease progression die within 1 year [2,3]. To date, there is still no consensus on the retreatment of LRESCC following (chemo)radiotherapy. Previous studies have shown that salvage surgery is effective in the treatment of LRESCC with a radiation history [4-8]. However, fibrosis of the tissues surrounding the tumor bed after radiotherapy complicates surgical intervention and increases perioperative risk, even with carefully selected patients [9,10]. Besides, Chen et al. reported that re-irradiation can achieve a survival outcome similar to salvage surgery [2]. Therefore, re-irradiation remains an important salvage treatment option for patients with LRESCC [2,11]. Radiotherapy dose is a vital factor that influences survival outcome [12]. Studies have shown that a salvage radiation dose > 50 Gy and < 60 Gy yields better survival for LRESCC [13,14]. However, there is a lack of literature on whether higher doses of re-irradiation, between 50 and 60 Gy, would be more beneficial to patients with LRESCC after initial radiation. Here, we interrogated the survival outcome and toxicity of different re-irradiation doses combined with or without chemotherapy in the treatment of LRESCC.
Patients and clinical features

We retrospectively selected 207 patients with LRESCC who underwent re-irradiation therapy in the Chengdu Fifth People's Hospital between January 2012 and December 2016. We included patients with a non-recurrence interval of > 6 months after initial radiotherapy; confirmed recurrence (by pathological examination, imaging or gastroscopy) of the primary esophageal cancer with or without local lymph node metastasis, or only local lymph node metastasis (supraclavicular fossa, mediastinal, esophageal, or para-aortic lymph nodes); a re-irradiation therapy dose of 50-60 Gy; an Eastern Cooperative Oncology Group (ECOG) score of 0-2; and functional heart, liver, kidneys, lungs, and bone marrow hematopoiesis. We excluded patients with distant organ metastasis, tumors in other organs, or incomplete information. Patients with hyper-fractionated radiotherapy or a history of surgical resection of esophageal cancer were also excluded. Finally, 125 patients were included: 84 men and 41 women with a median age of 68 years (range 50-89 years). All the patients had either refused surgical intervention or were unable to undergo salvage surgery. The locally recurrent tumors of all the patients were located in the previously irradiated area. This study was approved by the Ethics Committee of Chengdu Fifth People's Hospital.

Treatment and follow-up

Two-dimensional radiotherapy, three-dimensional conformal radiotherapy (3D-CRT) or intensity-modulated radiotherapy (IMRT) was used for the initial radical treatment of the 125 patients, with a median dose of 60 Gy (50-66 Gy). Most of the patients received chemotherapy regimens containing cisplatin, paclitaxel or fluorouracil. After the initial treatment, gastroscopy, esophageal barium meal radiography, or enhanced computed tomography (CT) was used to evaluate the efficacy; all the patients achieved either complete or partial response. All the patients received re-irradiation with 3D-CRT or IMRT at 1.8-2.0 Gy per fraction, 5 days/week. Gross tumor volume (GTV) was assessed by esophageal barium meal examination and CT images. The GTV for metastatic lymph nodes (GTVnd) included enlarged lymph nodes with a short diameter > 10 mm, or lymph nodes suggested to be metastatic by positron emission tomography-CT examination. The planning target volume (PTV) was formed by extending 0.5-1.0 cm radially from the GTV and 1.0-2.0 cm above and below the GTV, or 0.5 cm outside the GTVnd in all directions. The lymphatic drainage area was not prophylactically irradiated. The dose limits for normal tissues and organs were: bilateral lungs V20 < 25%, spinal cord Dmax < 20 Gy, and mean radiation dose to the heart < 30 Gy (a small verification sketch of these checks follows at the end of this section). Overall survival (OS) was defined as the time from the beginning of retreatment until last follow-up or patient death. Efficacy was evaluated at the end of retreatment and 1 month after the end of retreatment. Patient evaluation methods included endoscopy, tumor markers, esophageal barium meal, or enhanced CT. The response rate (RR) was defined as the percentage of patients with complete response or partial response among the total number of cases. Therapeutic evaluation followed the Response Evaluation Criteria in Solid Tumors (RECIST 1.1). In addition, toxicity was graded by the Common Terminology Criteria for Adverse Events (CTCAE v4.0). According to the re-irradiation dose, 58 patients were assigned to the low-dose (LD) group (50-54 Gy, median dose 50 Gy) and the remaining 67 to the high-dose (HD) group (55-60 Gy, median dose 60 Gy).
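The organ-at-risk limits quoted above are simple dose-volume checks. The following is a minimal sketch of how such constraints could be verified from per-voxel dose arrays; the function, array names and random test data are hypothetical (a real plan would use dose-volume histograms exported from the treatment planning system):

```python
import numpy as np

def check_oar_limits(lung_dose_gy, cord_dose_gy, heart_dose_gy):
    """Check the normal-tissue limits quoted in the text:
    bilateral lungs V20 < 25%, spinal cord Dmax < 20 Gy,
    mean heart dose < 30 Gy. Inputs: per-voxel doses in Gy."""
    lung = np.asarray(lung_dose_gy, dtype=float)
    v20 = 100.0 * np.count_nonzero(lung > 20.0) / lung.size  # % volume > 20 Gy
    return {
        "lung_V20_pct": round(v20, 2),
        "lung_V20_ok": bool(v20 < 25.0),
        "cord_Dmax_ok": bool(np.max(cord_dose_gy) < 20.0),
        "heart_mean_ok": bool(np.mean(heart_dose_gy) < 30.0),
    }

# Hypothetical per-voxel dose samples (Gy), for illustration only
rng = np.random.default_rng(0)
print(check_oar_limits(rng.gamma(2.0, 4.0, 10000),   # lungs
                       rng.uniform(0.0, 18.0, 2000),  # spinal cord
                       rng.uniform(0.0, 25.0, 5000)))  # heart
```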
Statistical analysis

We used SPSS version 23.0 (IBM Corporation, Armonk, NY, USA) to analyze the data in this study. Continuous and categorical variables in the two treatment groups were compared by Student's t-test and the chi-square test, respectively. Survival rates were calculated by the Kaplan-Meier method, and differences in survival curves were compared by the log-rank test. Cox's proportional hazards regression model was used to determine the effect of single or multiple factors on survival (an illustrative analysis sketch follows the Toxicity subsection below). Time to recurrence (TR) was defined as the time between the beginning of initial treatment and confirmed recurrence. A two-sided P < 0.05 was considered statistically significant.

Results

The baseline clinical characteristics of the two patient groups are shown in Table 1. Sixty-eight (54.4%) patients relapsed within 2 years of initial definitive radiotherapy. The median time to recurrence for all patients was 21 months (range 8-201 months). After re-irradiation, the median follow-up time was 19 months (range 4-65 months). At the end of follow-up, 2 patients in the LD group and 7 in the HD group were still alive. There were 67 cases of primary recurrence (PR), 33 cases of PR with local lymph node recurrence (PR+LNR) and 25 cases of local lymph node recurrence (LNR). Sixty-one of the 125 patients received concurrent or 1-4 sequential cycles of chemotherapy, consisting of single- or double-drug regimens containing paclitaxel, platinum or fluorouracil. In the LD group, 8 patients received platinum-containing dual-drug sequential chemotherapy, and 17 received single-drug fluorouracil or platinum-containing dual-drug concurrent chemotherapy. In the HD group, 14 patients received platinum-containing dual-drug sequential chemotherapy, and 22 received single-drug fluorouracil or platinum-containing dual-drug concurrent chemotherapy. The two groups of patients were then stratified according to whether they received chemotherapy. Patients who received radiotherapy alone in the HD group had significantly better survival than patients who received radiotherapy alone in the LD group (median survival time: 15 months vs. 9 months, P < 0.05).

Cox regression analysis for the overall sample

We conducted univariate and multivariate analyses to evaluate whether the re-irradiation dose affected patient survival (Table 3). In the univariate analyses, ECOG score, age, TR and re-irradiation dose were significantly associated with OS (P < 0.05 for each comparison). In the multivariate analysis, an ECOG score of 0-1 (P = 0.048), age < 70 years (P = 0.028), TR > 24 months (P = 0.001) and a re-irradiation dose of 55-60 Gy (P = 0.013) were independent factors favoring OS.

Toxicity

Radiation esophagitis was the most common acute toxicity in both groups; its incidence was 58.6% in the LD group and 68.7% in the HD group (P = 0.868). Other acute toxicities included hematological toxicity (P = 0.004) and gastrointestinal reactions (nausea, vomiting, loss of appetite and constipation) (P = 0.732). Rare acute toxicities included radiation-induced pneumonitis, radiation tracheitis and skin reactions. Grade ≥ 2 radiation-induced pneumonitis was noted in only 1 patient, in the HD group. A total of 11 patients (7 with chemotherapy and 4 without) developed severe treatment-related toxicity, such as esophageal perforation/fistula and bleeding.
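For readers who wish to reproduce this style of analysis, the sketch below carries out the same three steps described in the Statistical analysis subsection (Kaplan-Meier estimation, a log-rank comparison of the LD and HD groups, and a multivariate Cox model) using the Python lifelines package rather than the SPSS workflow the authors used. The ten-row table and every value in it are hypothetical placeholders, not study data:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical toy data: one row per patient (all values invented).
df = pd.DataFrame({
    "time":      [9, 15, 24, 7, 30, 12, 18, 5, 40, 22],  # months from retreatment
    "event":     [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],         # 1 = death, 0 = censored
    "high_dose": [0, 1, 1, 0, 1, 0, 1, 0, 1, 0],         # 1 = HD group (55-60 Gy)
    "age_lt_70": [1, 1, 0, 0, 1, 1, 0, 1, 1, 0],         # 1 = age < 70 years
})

ld, hd = df[df.high_dose == 0], df[df.high_dose == 1]

# Kaplan-Meier estimate of overall survival in each dose group
km = KaplanMeierFitter()
for name, grp in (("LD (50-54 Gy)", ld), ("HD (55-60 Gy)", hd)):
    km.fit(grp["time"], grp["event"], label=name)
    print(name, "median OS:", km.median_survival_time_, "months")

# Log-rank comparison of the two survival curves
lr = logrank_test(ld["time"], hd["time"],
                  event_observed_A=ld["event"], event_observed_B=hd["event"])
print("log-rank P =", lr.p_value)

# Multivariate Cox proportional hazards model (covariates = remaining columns)
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```

With real data, the Cox summary would report the hazard ratio and P value for each covariate, which is the form in which the dose effect is reported in Table 3.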
The TR of most of these 11 patients (9/11) was ≤ 24 months. Within 3 months of re-irradiation, 2 cases of bleeding and 3 cases of esophageal perforation/fistula occurred in the LD group, and 1 case of bleeding and 2 cases of esophageal perforation/fistula occurred in the HD group. In addition, 1 case of bleeding and 2 cases of esophageal perforation/fistula occurred in the LD group more than 3 months after re-irradiation. Severe late complications were observed beyond 6 months after re-irradiation: 2 patients in each group had severe esophageal stenosis and underwent esophageal dilatation (Table 2).

Discussion

This study evaluated the survival outcome and toxicity of different re-irradiation doses, with or without chemotherapy, in the treatment of LRESCC. A recent study showed that a radiotherapy dose of > 59.4 Gy after standard concurrent chemoradiotherapy (50.4 Gy) led to complete response in patients with ESCC and significantly improved local control, the 5-year recurrence-free survival rate and the 5-year OS rate [15]. Earlier studies indicated that survival in LRESCC can be significantly prolonged with a re-irradiation dose of > 50 Gy [2,13,14]. For instance, Kobayashi et al. demonstrated that 60 Gy was an appropriate salvage dose for LRESCC after surgery [16], and another study argued that patients with LRESCC and a radiation history should be given higher doses of radiotherapy [17]. These studies suggest that increasing the radiotherapy dose for recurrent esophageal cancer may prolong overall survival. Here, we compared the effects of median re-irradiation doses of 50 Gy and 60 Gy on the survival of patients with LRESCC and a radiation history. Our data showed that 54.4% (68/125) of the patients experienced recurrence within 2 years of initial (chemo)radiotherapy, consistent with previous reports [2,13,18,19]. Xu et al. reported that the 2-year survival rate and median survival time of LRESCC patients receiving ≥ 50 Gy re-irradiation were 37.5% and 18 months, respectively [14]. Although this outcome was better than that of our cohort as a whole, it was similar to our results in the HD group. In this study, the 2-year survival rate and median survival time for the whole cohort were 29.6% and 14 months, respectively. The 1-, 2- and 3-year survival rates and median survival time were 61.2%, 34.3% and 19.4% and 18 months in the HD group, versus 48.3%, 24.1% and 10.3% and 11 months in the LD group. As demonstrated by our multivariate analyses, the radiotherapy dose was an important determinant of survival. The median survival time of patients receiving radiation alone in the HD group was 15 months, significantly longer than the 9 months in the LD group. These results compare favorably with previous studies [2,13,17]; the higher radiation doses may simply have achieved better tumor control. We therefore recommend higher re-irradiation doses for patients with recurrent ESCC after radiotherapy. Local control is also a vital factor affecting survival [20]. To date, few studies have reported local control rates in patients with LRESCC after re-irradiation. Previous studies reported that increasing the radiation dose can significantly improve both the local control rate of esophageal cancer and survival [21,22].
Here, the 3-year locoregional control rates in the LD and HD groups were 3.4% and 14.9%, respectively. This outcome is discouraging, probably because the lymphatic drainage area was not included in the treatment field. Many studies have demonstrated a positive effect of chemotherapy in the initial treatment of ESCC [23], but data on the role of chemotherapy during re-irradiation of ESCC remain scant. Chen et al. reported 1-, 2- and 3-year survival rates of 51.7%, 21.4% and 12.2%, respectively, in 36 LRESCC patients receiving re-irradiation with concurrent chemotherapy (paclitaxel + cisplatin) [2]. Katano et al. reported a median survival time of 13.6 months in 6 patients who underwent concurrent chemotherapy (nedaplatin and tegafur) with re-irradiation [10]. In a stratified analysis, our data showed that chemotherapy combined with radiotherapy could increase the survival rate. In the LD group, the median survival time with chemoradiotherapy was 14 months, consistent with the results of Katano et al. [10]. In the HD group, however, chemoradiotherapy showed no survival benefit over radiotherapy alone. Our results therefore suggest that chemotherapy increases the survival of LRESCC patients undergoing re-irradiation when the radiation dose is low. Acute radiation esophagitis was the most common toxicity in the whole cohort. Although the incidence of esophagitis was higher in the HD group than in the LD group (68.7% vs 58.6%), the difference was not significant. A previous study reported an incidence of severe acute radiation esophagitis of 15-25% in patients receiving thoracic radiotherapy [24]; similarly, grade ≥ 3 acute esophagitis was rare in both of our groups. Hematological toxicity and gastrointestinal reactions were also common, but only hematological toxicity differed significantly between the two groups. More than two-thirds of the patients with hematological toxicity or gastrointestinal reactions had received chemoradiotherapy. The incidence of radiation pneumonitis in this study was lower than previously reported [25]; grade 1-2 radiation pneumonitis occurred in six (10.3%) and eight (11.9%) patients in the LD and HD groups, respectively, with controllable symptoms. Severe complications occurred mainly within 3 months of re-irradiation. A total of 11 patients had severe treatment-related toxicity, such as bleeding, esophageal perforation, and esophageal fistula, eight of whom were in the LD group. This may be related to the fact that most patients in the LD group (36/58, 62.1%) had a TR of ≤ 24 months. Severe late complications, namely esophageal stenosis and the associated dysphagia, were effectively relieved by esophageal dilation. Higher re-irradiation doses therefore appear to be a safe treatment option for these patients.

Conclusion

Taken together, our data show that a higher re-irradiation dose (55-60 Gy) for LRESCC patients with a radiation history may yield better long-term survival, with tolerable toxicity. Re-irradiation with lower doses (50-54 Gy) combined with chemotherapy can also improve the survival of LRESCC patients with a radiation history. For patients with a short time to recurrence (TR ≤ 24 months), it is important to monitor for serious treatment-related toxicity.
Availability of data and materials

All data generated or analyzed during this study are included in this article.

Competing interests

The authors declare that they have no competing interests.

Funding

None.

Authors' contributions

XUN WU and JUNRU CHEN participated in the research design. Data acquisition was performed by XINGSHENG HU and XUEMEI YANG. Evaluation of the images was conducted by XINGSHENG HU and LANG HE. The manuscript was written by XUN WU. All authors read and approved the final manuscript.

Figure 1. There was a significant difference in locoregional control rate between the LD and HD groups.

Figure 2. Survival curves of patients in the LD and HD groups after re-irradiation therapy.
Mechanism of Gαi-mediated Inhibition of Type V Adenylyl Cyclase

The topology of mammalian adenylyl cyclase reveals an integral membrane protein composed of an alternating series of membrane and cytoplasmic domains (C1 and C2). The stimulatory G protein, Gαs, binds within a cleft in the C2 domain of adenylyl cyclase, while Gαi binds within the opposite cleft in the C1 domain. The mechanisms of these two regulators also appear to be in opposition. Activation of adenylyl cyclase by Gαs or forskolin results in a 100-fold increase in the apparent affinity of the two domains for one another. We show herein that Gαi reduces C1/C2 domain interaction and thus formation of the adenylyl cyclase catalytic site. Mutants that increase the affinity of C1 for C2 decrease the ability of Gαi to inhibit the enzyme. In addition, Gαi can influence binding of molecules to the catalytic site, which resides at the C1/C2 interface. Adenylyl cyclase can bind substrate analogs in the presence of Gαi but cannot simultaneously bind Gαi and transition state analogs such as 2′d3′-AMP. Gαi also cannot inhibit the membrane-bound enzyme in the presence of manganese, which increases the affinity of adenylyl cyclase for ATP and substrate analogs. Thus homologous G protein α-subunits promote bidirectional regulation at the domain interface of the pseudosymmetrical adenylyl cyclase enzyme.

The classic mammalian adenylyl cyclases (AC) consist of two repeats of a unit that includes six transmembrane spans and a cytoplasmic domain. Nine isoforms of adenylyl cyclase have been cloned that share this common topology; however, each of these enzymes displays distinct patterns of regulation (reviewed in Refs. 1 and 2). For example, types V and VI adenylyl cyclase can be activated by GTPγS-Gαs (3), forskolin (4), and protein kinase C (α and ζ; Ref. 5) and inhibited by Gαi (6,7), calcium (8,9), cAMP-dependent protein kinase (10,11), and protein kinase C δ (type VI only; Ref. 12). Regulation by heterotrimeric G proteins has been the hallmark of adenylyl cyclase activity, but the details of this regulation have remained unclear until recently. In vitro, Gαi can inhibit both Gαs- and forskolin-stimulated adenylyl cyclase activities in a non-competitive manner (7,13), but much of our recent knowledge is based upon experiments utilizing the cytoplasmic (or soluble) domains of adenylyl cyclase. The two cytoplasmic domains of adenylyl cyclase (C1 and C2) create a pseudosymmetrical heterodimer that forms the catalytic moiety of the enzyme and is the target for most known intracellular regulators (reviewed in Ref. 14). The cytoplasmic domains contain a 200-250 amino acid region that is ~50% similar between the two domains and 50-90% similar to the corresponding regions of other adenylyl cyclase isoforms. The C1 and C2 domains can be independently expressed as soluble proteins in Escherichia coli and mixed to reconstitute full adenylyl cyclase activity, including activation by Gαs and forskolin and inhibition by P-site inhibitors and Gαi (15-19). These soluble adenylyl cyclase fragments have proven invaluable in determining the stoichiometry of Gαs, forskolin, and ATP binding, in localizing binding sites for Gαs and Gαi, and in discerning the catalytic mechanism of P-site inhibition of adenylyl cyclase.
Much of this early work utilized C1 and C2 proteins from dissimilar isoforms of adenylyl cyclase (C1 from type I and C2 from type II adenylyl cyclase), although more recent systems contain both domains from a single adenylyl cyclase isoform (19-22). Crystal structures of complexes containing the C1 and C2 domains, Gαs, and forskolin reveal that forskolin and ATP analogs bind at the interface of the C1 and C2 domains (23-25), while the major binding site for Gαs is located on the C2 domain, in the cleft formed by the α2′ and α3′ helices (17,23,26). A similar groove is formed by the corresponding structural elements of C1 (23). Mutagenesis and binding studies indicate that Gαi binds to this corresponding site in the C1 domain (19); however, the mechanism of Gαi-mediated inhibition is still unclear. Activation of adenylyl cyclase by Gαs or forskolin results in a 100-fold increase in the apparent affinity of the two domains for one another (15,16). We show herein that Gαi works in opposition to Gαs to reduce domain interaction and thus the formation of the adenylyl cyclase catalytic site.

G Protein Subunits

All G protein α-subunits were synthesized in E. coli as described by Lee et al. (27). Gαi was co-expressed with yeast protein N-myristoyltransferase (27,28) for synthesis of myristoylated protein. Purified α-subunits were activated by incubation with 50 mM NaHEPES (pH 8.0), 5 mM MgSO4, 1 mM EDTA, 2 mM dithiothreitol, and 400 µM [35S]GTPγS at 30 °C for 30 min for Gαs or 2 h for Gαi. Free GTPγS was removed by gel filtration. All G proteins were activated with GTPγS unless stated otherwise.

Expression and Purification of Adenylyl Cyclase in E. coli

The C2 domains of type II and type V adenylyl cyclase (IIC2, VC2) and the C1 and C1a domains of type V adenylyl cyclase were expressed in E. coli and purified by metal affinity chromatography followed by ion exchange, as described previously (15,17,19).

Adenylyl Cyclase Assays

Adenylyl cyclase activity was measured as described (29). All assays were performed for 7-10 min at 30 °C in a final volume of 50-100 µl with 5 mM MgCl2 and 100 µM ATP for the membrane-bound enzyme. In assays containing C1 and C2 domain proteins (reconstitution assays), limiting concentrations of the C1 domain protein were first incubated with Gαi for 15 min on ice, followed by addition of Gαs and VC2 prior to initiation of the assay. The final concentration of C2 was at least 0.5 µM to promote interaction between the C1 and C2 proteins, except where indicated otherwise. The final concentration of ATP was 1 mM for reconstitution assays, unless stated otherwise. All determinations were performed in duplicate and are representative of at least two experiments. Filter membranes were dried in a vacuum desiccator, and the bound ligand was quantified by liquid scintillation counting.

RESULTS

We have previously shown that Gαi binds within the cleft formed by the α2 and α3 helices of the C1a domain of type V adenylyl cyclase. Using [35S]GTPγS-Gαi, a clear shift in the molecular weight of Gαi is observed upon addition of the C1a domain, as determined by gel filtration. However, the tight binding of the C1a domain and Gαi is partially disrupted by formation of a C1-C2 complex in the presence of forskolin (Fig. 1A). Furthermore, the complex of C1a and Gαi is completely abolished in the presence of both Gαs and forskolin.
Gαs does not compete with Gαi for binding to the C1 domain; however, full activation of adenylyl cyclase by Gαs and forskolin does limit the extent of inhibition by Gαi (7,19). We therefore examined the formation of a potential Gαi-C1-C2-Gαs complex in the absence of forskolin (Fig. 1B), using the full-length C1 domain, which has a 10-fold higher affinity for Gαi (19). Even in the presence of high concentrations of C1, C2, Gαs, and Gαi, complexes between C1-Gαi and C2-Gαs are formed, but no heterotetramer is ever observed. Note that myristoylated Gαi, and complexes containing the myristoylated protein, tend to be slightly retarded on gel filtration columns, most likely because of the hydrophobic nature of the myristate group. Both forskolin and Gαs have been shown to increase the affinity of C1 for C2 by 100-fold over basal, and the combined action of the two activators increases the apparent affinity of C1 for C2 by more than 1000-fold. Our gel filtration results suggest that Gαi may work in opposition to these regulators to decrease the affinity between C1 and C2. We therefore examined the ability of Gαi to inhibit adenylyl cyclase with increasing concentrations of C2 protein to drive the interaction between the two domains (Fig. 2A; the mass-action logic of this titration is sketched after this passage). At low concentrations of C2, Gαi greatly inhibits adenylyl cyclase activity. However, as the concentration of C2 increases, the interaction between C1 and C2 increases, reducing the ability of Gαi to inhibit the enzyme. At maximal concentrations of C2 protein, no inhibition by Gαi is observed. Therefore, the interaction between the C1 and C2 domains not only decreases the binding of Gαi to C1 but also decreases the ability of Gαi to inhibit the enzyme. A similar phenomenon is observed with a mutant of C1 (E418A), which has a 6-fold higher affinity for Gαi (19). Once again, maximal inhibition occurs at low C2 concentrations, and the effect of Gαi decreases with increasing C2 (Fig. 2B). In this case, however, the effect of Gαi is not eliminated at maximal concentrations of C2. This mutation lies within the Gαi binding cleft, has no effect on the apparent affinity of the C1 and C2 domains (Fig. 2C), and does not increase dimerization of C1 as measured by gel filtration (data not shown). The inability to eliminate the effect of Gαi at high C2 concentrations is most likely caused by the increased efficacy of Gαi-mediated inhibition of both the membrane-bound enzyme and the cytoplasmic domains of adenylyl cyclase when this mutation is present (19). These data suggest that the mutation of Glu-418 to alanine induces a conformational change that facilitates Gαi inhibition independent of changes at the C1-C2 interface. We have also examined the interaction between the C1 and C2 domains utilizing a mutant within the C2 domain (K1014N in type II AC) with increased affinity between the C1a and C2 domains in both the presence and the absence of Gαs (30). When paired with the full-length C1 domain from type V, the C2-K1014N mutant shows a similar 10-fold shift in affinity for C1 (Fig. 3A). This point mutation is located 16-28 Å from the Gαi-interacting residues, on the opposite domain from the one Gαi binds, and is thus not predicted to interfere directly with Gαi binding. Nevertheless, inhibition of C1 plus the mutant C2 domain by Gαi is remarkably diminished as compared with wild type (Fig. 3B).
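The C2 titration of Fig. 2 follows the logic of simple mass action: with C2 in excess over C1, the fraction of C1 sequestered in the C1-C2 complex is [C2]/(Kd + [C2]), and any inhibition that requires Gαi access to free or weakly paired C1 should fade as this fraction approaches 1. The sketch below is a minimal illustration of that idea only; the Kd values are hypothetical round numbers chosen to echo the reported ~100-fold tightening upon Gαs/forskolin activation, not measured constants:

```python
import numpy as np

def fraction_complexed(c2_conc, kd):
    """Fraction of C1 sequestered in the C1-C2 complex when C2 is in excess."""
    return c2_conc / (kd + c2_conc)

kd_basal = 10.0                  # hypothetical basal C1/C2 Kd, in uM
kd_activated = kd_basal / 100.0  # ~100-fold tighter with Gas or forskolin bound

for c2 in np.logspace(-2, 2, 9):  # C2 concentration swept from 0.01 to 100 uM
    f_basal = fraction_complexed(c2, kd_basal)
    f_act = fraction_complexed(c2, kd_activated)
    # If Gai requires access to free C1, the inhibitable pool is 1 - f.
    print(f"[C2] = {c2:8.2f} uM   complexed (basal) = {f_basal:.2f}   "
          f"complexed (activated) = {f_act:.2f}")
```

Raising [C2] and lowering the effective Kd have the same qualitative consequence in this picture, which is why both activation and the K1014N mutation blunt Gαi inhibition.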
Therefore, once again, as the affinity of C1 for C2 is increased, the ability of Gαi to inhibit adenylyl cyclase is decreased. The Gαi binding cleft is located in close proximity to the catalytic site, particularly to the residues contacting the magnesium ions at the active site and the phosphate moieties of substrate or P-site molecules. Therefore, Gαi may affect binding of molecules to the active site. To address this possibility, kinetic analysis of inhibition by Gαi together with P-site inhibitors or substrate analogs was performed. P-site inhibitors bind at the active site and are postulated to mimic a product-like transition state. Classic P-site inhibitors (such as 2′d3′-AMP) require the product pyrophosphate for binding to the enzyme and show uncompetitive inhibition with respect to MgATP upon activation with Gαs (18). The P-site inhibitor 2′d3′-AMP and Gαi behave as mutually exclusive inhibitors of adenylyl cyclase, giving rise to a family of parallel lines on a Dixon plot obtained at different Gαi concentrations (Fig. 4A). This kinetic pattern holds for both the recombinant membrane-bound enzyme and the cytoplasmic domains of type V adenylyl cyclase (Fig. 4B). Therefore, any complex of adenylyl cyclase containing Gαi has a significantly reduced ability to adopt a conformation capable of binding this type of transition state analog. The same pattern of inhibition is also observed for the membrane-bound adenylyl cyclase with the more potent P-site inhibitor 2′5′dd3′-ATP (data not shown) (31). This is an important feature of catalysis because product release is one of the rate-limiting steps along the reaction coordinate for adenylyl cyclase (18). Ap(CH2)pp is a non-hydrolyzable analog of ATP that competes with ATP for binding (18). As shown in the Dixon plot for the membrane-bound adenylyl cyclase (Fig. 5A), Ap(CH2)pp also gives a nearly parallel family of lines at different Gαi concentrations, indicating that each of these two inhibitors binds with greatly reduced affinity in the presence of the other. The intersection of these lines well below the x axis indicates a reduction in the Ki for Ap(CH2)pp of at least 6-fold in the presence of Gαi. However, the cytoplasmic domains of type V show an intersection of lines on the x axis, suggesting no significant reduction in the affinity of the enzyme for Ap(CH2)pp in the presence of Gαi (Fig. 5B). This non-competitive interaction is consistent with the formation of a Gαi-C1-C2-ATP complex. To further understand the binding of Ap(CH2)pp to the cytoplasmic domains of adenylyl cyclase, we measured the direct binding of Ap(CH2)pp to C1, C2, and Gαs in the presence or absence of Gαi. These measurements were made using Mn2+, which greatly increases the affinity of adenylyl cyclase for Ap(CH2)pp as compared with Mg2+ (18). An identical pattern of non-competitive inhibition was obtained for the soluble type V domains with Ap(CH2)pp and Gαi in the presence of Mn2+, as previously shown for Mg2+ (Fig. 6). Gαs is required to form a complex at reasonable concentrations of C1 and C2, but Gαs also reduces the effectiveness of Gαi in this assay by reducing the binding of Gαi to the C1 domain. Therefore, a complete loss of binding is not anticipated, but a clear 50% reduction in Ap(CH2)pp binding is observed both by filter binding assays (Fig. 6B) and by equilibrium dialysis (data not shown).
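The parallel-versus-intersecting Dixon patterns described above can be reproduced from textbook multiple-inhibition kinetics. In the sketch below, with entirely hypothetical Vmax and Ki values, mutually exclusive binding gives 1/v = (1 + [I1]/Ki1 + [I2]/Ki2)/V, whose slope with respect to the P-site analog is independent of the Gαi level (parallel lines), whereas allowing a ternary complex adds a cross term [I1][I2]/(alpha * Ki1 * Ki2) that makes the slope grow with Gαi (intersecting lines):

```python
import numpy as np

V = 100.0    # hypothetical Vmax, arbitrary units
KI_G = 5.0   # hypothetical Ki for Gai (e.g., nM)
KI_P = 20.0  # hypothetical Ki for the P-site analog or Ap(CH2)pp (e.g., uM)

def v_exclusive(i_gai, i_psite):
    """Mutually exclusive inhibitors: no AC-Gai-P-site ternary complex forms."""
    return V / (1.0 + i_gai / KI_G + i_psite / KI_P)

def v_simultaneous(i_gai, i_psite, alpha=1.0):
    """Both ligands can occupy the enzyme at once; the cross term couples them.
    alpha > 1 would mean each ligand weakens the other's binding."""
    return V / (1.0 + i_gai / KI_G + i_psite / KI_P
                + (i_gai * i_psite) / (alpha * KI_G * KI_P))

i_psite = np.linspace(0.0, 100.0, 5)   # the Dixon-plot x axis
for i_gai in (0.0, 5.0, 10.0):         # one Dixon line per Gai concentration
    slope_excl = np.gradient(1.0 / v_exclusive(i_gai, i_psite), i_psite)[0]
    slope_sim = np.gradient(1.0 / v_simultaneous(i_gai, i_psite), i_psite)[0]
    print(f"Gai = {i_gai:4.1f}   slope exclusive = {slope_excl:.6f}   "
          f"slope simultaneous = {slope_sim:.6f}")
```

Running this shows the exclusive-case slope fixed at 1/(V*KI_P) regardless of Gαi, while the simultaneous-case slope increases with Gαi, which is exactly the diagnostic distinction drawn from Figs. 4 and 5.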
Scatchard analysis of the filter binding data would suggest non-competitive binding of Ap(CH2)pp, although we were unable to fully saturate binding. One unusual aspect of Gαi-mediated inhibition of adenylyl cyclase is the inability of Gαi to inhibit the membrane-bound enzyme in the presence of manganese. This was reported previously by Hildebrandt and Birnbaumer (32), but it was unclear at that time whether it was caused by an effect of Mn2+ on the G protein or on the catalytic subunit. We show that Gαi is unable to inhibit type V adenylyl cyclase in the presence of Mn2+ alone, Mn2+ and Gαs, or Mn2+ and forskolin (Fig. 7A). In fact, Gαi actually increases forskolin-stimulated activity in the presence of Mn2+. This may be due to the weak ability of Gαi to compete at the Gαs binding site (7). The inability of Gαi to inhibit adenylyl cyclase in the presence of Mn2+ is not caused by inactivation of Gαi, since Gαi can inhibit the Mn2+-Gαs-stimulated cytoplasmic domains of adenylyl cyclase under identical conditions (Fig. 6A). In addition to 3 mM Mn2+, these assays all contain 0.5 mM Mg2+, which should be sufficient to maintain activation of the G proteins. The inclusion of an additional 1 mM Mg2+ has no effect on the membrane-bound enzyme in the presence of Mn2+ (Fig. 7B). The mechanism of this metal effect on Gαi-mediated inhibition, and the difference between the soluble and membrane-bound adenylyl cyclase, are discussed below.

DISCUSSION

Preliminary kinetic analysis suggests that Gαs and Gαi can bind simultaneously to adenylyl cyclase; however, a C1-C2-Gαs-Gαi complex has not been observed (7,19). This suggests a model in which Gαi inhibits the enzyme by decreasing the affinity of one domain for the other. This is in stark contrast to Gαs and forskolin stimulation, where the affinity between C1 and C2 increases with increasing activation. We show that binding of Gαi to the C1 domain is weakened or lost in the presence of a C1-C2-forskolin or a C1-C2-Gαs-forskolin complex (Fig. 1). However, Gαi inhibits the fully stimulated Gαs-forskolin-activated enzyme very poorly, and we would not expect to observe a complex with both Gαs and Gαi under these conditions. Therefore, we also examined whether a heterotetrameric complex could be formed with Gαi, Gαs, and adenylyl cyclase in the absence of forskolin. Even at high protein concentrations, a C1-C2-Gαs-Gαi complex is never observed. A direct test of our hypothesis is the measurement of C1/C2 affinity in the presence of Gαi. It is clear that increased C1-C2 complex formation, driven by increasing C2 concentrations, decreases the ability of Gαi to inhibit adenylyl cyclase activity (Fig. 2). This is consistent with the limited ability of Gαi to inhibit the most stimulated forms of adenylyl cyclase, which display the highest C1/C2 affinity (7,19). In fact, a mutant C2 protein (K1014N, Ref. 30) with increased affinity for C1 displays dramatically decreased inhibition by Gαi (Fig. 3). Although the membrane spans of adenylyl cyclase physically link C1 and C2 in close proximity, the structural changes that we measure as a change in affinity still occur in the native enzyme. Supporting a structural change at the C1/C2 interface is the fact that residues on one face of helix α2 interact with Gαi, whereas residues on the opposite face of α2 interact with the C2 domain (19,23).
A conformational change at the C1/C2 interface, together with the close proximity of the Gαi binding site to the catalytic site, might also suggest that Gαi influences the binding of molecules to the active site of adenylyl cyclase. Kinetic analysis of inhibition by P-site inhibitors and Gαi reveals that the actions of these inhibitors are mutually exclusive. Therefore, binding of Gαi prevents formation of specific conformations at the active site, particularly those capable of binding these transition state analogs. This is true for both the soluble and membrane-bound enzymes, and this pattern of inhibition is observed with both uncompetitive (2′d3′-AMP) and non-competitive (2′5′dd3′-ATP) types of P-site inhibitors. It is clear that the key to regulation of adenylyl cyclase is the conformational state of the C1/C2 domain interface. The change in affinity is a measure of the conformational changes at the active site located at this interface. This type of movement is mimicked somewhat by P-site inhibitors. Even with bound Gαs and forskolin, the unliganded active site maintains an open conformation; upon addition of P-site inhibitors that mimic a product-like transition state, the active site clamps down on the inhibitor, as one might expect in an induced-fit model of regulation (33). In the absence of activator, an even more open active site is expected, similar to the structure obtained for the C2 homodimer (34). This open-to-closed transition involves the inward collapse of structural elements around the active site. The Gαi binding site is directly adjacent to these regions and may affect binding of molecules to the catalytic site. In fact, the most flexible regions of the C1 domain are located on the back side of the Gαi binding site. We put forth evidence that Gαi mediates its effects by reducing the ability of the enzyme to attain a closed conformation. This is observed as a reduced affinity between the C1 and C2 domains and as a reduced ability, relative to the ground state, to form the transition state conformation necessary for binding P-site inhibitors. The kinetic analysis of inhibition by the substrate analog Ap(CH2)pp reveals a different pattern. Kinetic and binding data for the soluble domains and the membrane-bound enzyme show non-competitive behavior between Gαi and Ap(CH2)pp, as displayed by intersecting lines on a Dixon plot. However, the soluble domains show no reduction in the Ki for Ap(CH2)pp in the presence of Gαi, while the membrane-bound enzyme displays a reduced affinity for Ap(CH2)pp and Gαi in the presence of each other. Despite this difference, clearly both Gαi and a substrate analog can bind simultaneously to adenylyl cyclase. However, it is not clear whether a Gαi-ATP-bound enzyme can produce cAMP. Gαi binding may allow limited binding of substrate to an inactive enzyme. Alternatively, Gαi binding may produce a catalytically competent enzyme with greatly reduced activity. The latter rationale might explain why Gαi inhibition of either the soluble domains or the membrane-bound type V AC never reduces activity to zero or basal levels. Generally, the maximal inhibition by Gαi is 50-70% when the enzyme is stimulated with modest levels of Gαs (7), suggesting that an AC-Gαi complex might retain low activity and hence bind substrate. Additional experiments are required to differentiate between these two possibilities.
The effects observed with manganese may also point to an inability of Gαi to interact with a closed conformation. Residues from both the C1 and C2 domains are required for binding ATP and ATP analogs (23,35,36). In fact, P-site analogs bound at the active site can stabilize complex formation between C1 and C2 (30). Manganese acts to increase the affinity of adenylyl cyclase for ATP analogs. This is observed as a 22-fold decrease in the Kd for Ap(CH2)pp and a 2.5-fold decrease in the Km for ATP with the Gαs-stimulated cytoplasmic domains of type V adenylyl cyclase in the presence of Mn2+ versus Mg2+ (36). The difference between Mn2+ and Mg2+ is even more dramatic for the basal or forskolin-stimulated soluble enzyme (36). This phenomenon is also present in the membrane-bound enzyme, yielding a 4-7-fold reduction in the Km for ATP in the presence of Mn2+ (37) and a 5-fold reduction in the Ki for competitive ATP analogs (38). We measure a 30-fold reduction in the IC50 for Ap(CH2)pp in the presence of 100 µM ATP with manganese versus magnesium (data not shown). A similar reduction in Ki is observed in the presence of manganese for all P-site analogs; however, this reduction may be due to the increase in enzyme activity rather than an increase in P-site affinity (18,39). By increasing the affinity of metal-ATP for the enzyme, manganese may serve to increase the interaction between C1 and C2, driving the enzyme toward a more closed conformation. The question is why this is observed only for the membrane-bound enzyme. The soluble enzyme may be more susceptible to Gαi inhibition by its very nature as two separate proteins, whereas the membrane-bound enzyme holds the C1/C2 domains in the optimal orientation for catalysis. Although Vmax is comparable to that of the soluble domains, the Km for metal-ATP and the corresponding constants for ATP analogs are considerably lower for the membrane-bound enzyme (~13-fold difference in the Km for Mg-ATP; data not shown). This may be part of the reason that Gαi reduces the affinity of Ap(CH2)pp for the membrane-bound enzyme but not for the soluble domains. The increased affinity for metal-ATP, particularly Mn-ATP, may underlie the inability of Gαi to inhibit this enzyme as compared with the C1/C2 domains, which have substantially weaker affinity for substrate; for the C1/C2 domains, an increase in affinity for Mn-ATP is not sufficient to significantly reduce inhibition by Gαi. As with all model systems, the cytoplasmic domains have been an invaluable tool but may have limits in faithfully reproducing the kinetic features of adenylyl cyclase. In summary, these data suggest a model in which Gαi decreases the interaction of C1 and C2 with each other and also decreases catalytic activity by decreasing formation of the active site. This is directly opposite to the actions of Gαs and highlights the pseudosymmetry of the adenylyl cyclase structure, which is poised for bidirectional regulation.
Chromatoid Bodies in the Regulation of Spermatogenesis: Novel Role of GRTH

Post-transcriptional and translational control of specialized genes plays a critical role in the progression of spermatogenesis. During the early stages, mRNAs are actively transcribed and stored, temporarily bound to RNA-binding proteins in chromatoid bodies (CBs). CBs are membrane-less, dynamic organelles which serve as storehouses and processing centers for mRNAs awaiting translation during later stages of spermatogenesis. CBs can also regulate the stability of mRNAs to secure the correct timing of protein expression at different stages of sperm formation. Gonadotropin-regulated testicular RNA helicase (GRTH/DDX25) is an essential regulator of spermatogenesis. GRTH transports mRNAs from the nucleus to the cytoplasm, and phospho-GRTH transports mRNAs from the cytoplasm to the CBs. During spermiogenesis, there is precise control of the mRNAs transported by GRTH from and to the CBs, directing the timing of translation of critical proteins involved in spermatid elongation and acrosomal development and resulting in functional sperm formation. This chapter presents our current knowledge of the roles of GRTH, phospho-GRTH and CBs in the control of spermiogenesis. In addition, it compares the components of CBs with those of stress granules and P-bodies.

Introduction

Spermatogenesis is a complex, serial and highly specialized differentiation program during which progenitor cells undergo a series of cellular reorganization events, resulting in the production of mature, functional sperm [1-3]. This differentiation process is controlled by the integrated expression of an array of genes and specific proteins in a precise temporal sequence that produces genetically unique spermatozoa [2-4]. Gene expression in haploid spermatids requires temporal uncoupling of transcription and translation in the adult mammalian testis [2,5,6]. Post-meiotic haploid round spermatids possess complex transcriptomes, and efficient and accurate quality control mechanisms are therefore necessary to handle the great diversity of RNAs transcribed in these germ cells [7]. The initial stages of spermatogenesis are marked by active transcription, and the resulting mRNAs are transported to, and stored transiently in, large cytoplasmic ribonucleoprotein granules called "chromatoid bodies" (CBs) [8,9]. The CB is a perinuclear organelle and a ribonucleoprotein (RNP) granule present in the cytoplasm of male germ cells [9]. The functions of CBs overlap with those of the P-bodies and stress granules of somatic cells. Because it contains a wide array of proteins involved in different steps of RNA metabolism, together with different classes of RNAs, including microRNAs (miRNAs) and Piwi-interacting RNAs (piRNAs), the CB appears to function as an RNA processing center [9,10]. Through diverse pathways and mechanisms, CBs regulate a wide variety of processes, including RNA transport, regulation, decay, surveillance, translational arrest, and translation machinery assembly. Although much has been learned in the last half-decade, several intrinsic mechanisms, specifically the role of CBs in the storage and regulation of the mRNAs required to make healthy sperm, are still not fully understood [10].
Mature mRNAs are transported to CBs during the early, transcriptionally active stages of spermatogenesis, bound to RNA-binding/transport proteins such as gonadotropin-regulated testicular RNA helicase (GRTH/DDX25) [9,11,12].

RNPs Are Critical for Cellular and Organism Function: Role of CBs

RNP complexes and granules are powerful composite structures with merged functions and unique properties. RNP granules form under physiological conditions, and granule formation can also be induced by stresses such as heat, starvation, and oxidative stress. These granules are dynamic in nature, occurring in condensed or diffused states depending on the requirements of the cell. Specific mutations can prevent RNP assembly or disassembly and diminish cellular and organism function, resulting in several pathologies. Examples include fragile X-associated primary ovarian insufficiency in human germ cells, and amyotrophic lateral sclerosis (a progressive neurodegenerative disease that affects nerve cells in the brain and the spinal cord), which possibly arises from a failure of stress granule disassembly. The roles of RNPs are diverse, and their complete functions are still not fully understood; their main functions include carrying out intricate tasks in RNA-processing pathways and the post-transcriptional regulation of mRNAs. The association of RNA-binding proteins such as DDX4 with RNA in the germ line of several organisms gives rise to non-membranous structures called germ granules. These are known by several different names in different organisms, with minor variations in function: nuage, pole plasm, and the piNG-body (piRNA nuage giant body) in Drosophila; P granules in C. elegans; and the chromatoid body, Balbiani body, and Cajal body in mice and zebrafish [13]. One of the biggest RNP granules is the CB, which consists primarily of RNAs and RNPs involved in RNA processing in male germ cells. The CB is a membrane-less, filamentous-lobular, perinuclear organelle (0.5-1 µm) with an electron-dense structure, present in the cytoplasm of male germ cells [9,14]. It acts as a repository of important mRNAs, stored as mRNPs awaiting translation during later stages of spermatogenesis. CBs move dynamically within the cytoplasm and exhibit continuous changes in shape and size. The biochemical composition of CBs is unique: they are primarily composed of small RNAs (piRNAs, siRNAs and miRNAs), mRNAs, long non-coding RNAs, RNA-binding proteins, members of small RNA pathways such as MIWI, Argonaute proteins, the Dicer endonuclease, and decapping enzymes, together with other proteins involved in post-transcriptional RNA regulation, which may play a critical role during spermatid elongation and spermiogenesis [8,9,15,16]. CBs also contain GRTH [9], phospho-GRTH [10] and MVH/DDX4, a mouse homolog of VASA and a germ cell marker [16]. In fact, these proteins constitute the majority of the CB: DDX25, DDX4, MILI, MIWI, TDRD6, TDRD7, D1PAS1, PABP1, and HSPA2, together with other proteins involved in the piRNA pathway, the nonsense-mediated RNA decay (NMD) pathway, and post-transcriptional and translational RNA regulation, account for around 70% of the CB proteome [9,10,17,18]. The origin of CBs is still debated.
The most accepted view is that CBs emerge from small granules (CB precursors) associated with the nuclear envelope near the electron-dense inter-mitochondrial cement in late pachytene spermatocytes (Figure 1). The CB can be detected during all steps of round spermatid differentiation (steps 1-8 of spermiogenesis), with the largest CBs observed at steps four, five and six [18]. At later stages of spermatid elongation, CBs move caudally to the neck region and split into two separate structures; one is discarded along with the residual cytoplasm, and the other forms a ring around the base of the flagellum (Figure 1).

Figure 1. Schematic representation of CB organization and fate during spermiogenesis in mice. The inter-mitochondrial cement (IMC; green) intermixed with mitochondria (blue) and the CB precursors (pink) co-exist in late pachytene spermatocytes. The Golgi complex is depicted in gray. The CB (pink) condenses to its final single form in early round spermatids. At step 8 of spermiogenesis, the CB is found at the base of the flagellum. Later, it splits into two separate structures (step 9 onwards) and eventually disappears.

There is limited information about the specific mechanisms of CB function during spermatid elongation. With the identification of small non-coding RNA-mediated gene regulation and other associated mechanisms, the functions of the CB are slowly beginning to be understood. However, several of the molecular mechanisms remain unclear and require further in-depth molecular studies.

CBs Are Analogous to Stress Granules and P-Bodies

Compartmentalization of molecular processes is accomplished by various intracellular organelles that spatially segregate functionally related molecules. RNPs act as organelles that lack any demarcating membrane and play a key role in mRNA homeostasis. RNP granules formed under physiological conditions in male germ cells are called CBs, while in somatic cells they are called RNA processing bodies or P-bodies. The overall functions of CBs fall between those of P-bodies and stress granules and serve the maintenance of RNA regulation. Stress granules and processing bodies are also membrane-less RNA granules that dynamically sequester translationally inactive messenger ribonucleoprotein particles (mRNPs) into compartments distinct from the surrounding cytoplasm [19,20]. These granules are dynamic in nature and exist in a condensed or diffused state depending on conditions and requirements. Like P-body assembly, stress granule assembly depends on the pool of non-translating mRNAs, and stress granules and P-bodies can physically interact to facilitate the shuttling of RNA and protein between them. The main difference between P-bodies and stress granules is that P-bodies assemble around the key enzymes of cytoplasmic RNA degradation under physiological conditions, whereas stress granules assemble around essential components of the translation machinery under stress conditions such as heat, glucose deprivation, viral or bacterial infection, hypoxia, and oxidative stress [19,21]. In contrast to CBs and stress granules, P-bodies are not associated with the regulation of translation initiation; instead, they serve as sites for mRNA degradation, translational repression, and the storage of non-translating mRNAs and RNA-binding proteins (Figure 2) [10,20,22]. P-bodies are uniquely enriched with factors related to mRNA decay and the NMD pathway, such as members of the mRNA decapping machinery, including the decapping enzymes DCP1/2; UPF1/2; the activators of decapping EDC3, Dhh1/RCK/p54, Pat1, and Scd6/RAP55; the LSM1-7 complex; and the exonuclease XRN1 (Figure 2) [20,22-24]. P-bodies are independent of initiation factors and translational assembly, whereas CBs appear to regulate mRNA storage and decay, as well as the translational machinery, all in one compartment [10,20]. Recent studies have shown that liquid-liquid phase separation is a main driver of the assembly of RNP granules such as stress granules, P-bodies, and other related membrane-less organelles [25]. It would be very interesting to determine the involvement of biological mediators and other proteins in the assembly of RNP granules. A few studies have implicated post-translational modifications of RNP granule proteins as drivers of RNP granule assembly and disassembly [13,19]. Irrespective of the exact driver that initiates RNP formation, the role these granules play is unique and irreplaceable, and is critical for the survival of the cell. Like stress granules, CBs contain proteins and mRNAs involved in the translation process, such as initiation and elongation factors, ribosomal subunits and other associated factors, and mRNA regulatory factors such as PABPC1 and UPF2 [10,17,21]. RNP granules induced by heat stress are detected in spermatogonia, preleptotene spermatocytes, and early pachytene spermatocytes [26], but detailed information on how similar these heat-induced germ-cell RNP granules are to CBs is lacking. Further studies of the interactions between these RNPs inside the cell should provide vital clues to the molecular mechanisms of gene regulation. CBs also resemble the recently described TIS-associated granules and the proposed interconnections between TIS granules and the ER (the TIGER domain structure) [27].

Small RNAs of CBs and Regulation of Spermatogenesis

During spermatogenesis, the early precursors of sperm use small regulatory RNAs such as microRNAs to control the expression of an array of genes at the transcriptional or post-transcriptional level during the complex and specialized process of sperm production. The role of CBs in small RNA-mediated gene control and the associated mechanisms are not clearly delineated at present.
The repertoires of miRNAs and piRNAs have been identified, and both serve as important regulators of male germ cell differentiation. miRNAs are small (21-23 nucleotides) non-coding RNAs which act as endogenous gene regulators and participate in a wide array of biological functions by promoting target mRNA degradation and inhibiting translation [28-30]. Each miRNA can target the mRNAs of several different genes and thus regulates gene expression stringently in a stage- and tissue-specific manner in every organ, including the testis. miRNAs recognize their target mRNAs by sequence-specific base pairing within the RNA-induced silencing complex, together with Argonaute (AGO) proteins [9] (a toy sketch of this seed-based recognition is given at the end of this section). Most miRNAs are derived from primary miRNA transcripts, which are processed by the Drosha-DGCR8 complex in the nucleus to generate precursor miRNAs (pre-miRNAs); these are transported to the cytoplasm, where mature miRNAs are generated via Dicer-dependent or -independent routes. One of the crucial components of the miRNA and siRNA pathways is the cytoplasmic endonuclease Dicer, which is critical for male fertility [31]. Dicer interacts with the germ-cell-specific RNA helicase MVH (mouse VASA homolog). Sertoli cell-specific deletion of Dicer in mice results in spermatogenic malfunction, defective maturation, and infertility [31]. Transcripts of AGO proteins, Drosha, and Dicer have been demonstrated in germ cells and Sertoli cells [29,32]. GRTH regulates proteins of the microprocessor complex, Drosha and DGCR8 (miRNA biogenesis), at both the mRNA and protein levels [29]. miRNA pathway proteins accumulate in the CBs of haploid round spermatids, suggesting that the CB and GRTH have a role in miRNA-dependent gene regulation [28,29]. Testis-specific miRNAs such as miR-469, the testis-preferred miR-34c and miR-470, let-7 family members (let-7a/d, b, and e-g), and miR-203 were upregulated in the round spermatids of GRTH KO mice. Furthermore, the enzyme complex (Drosha-DGCR8) required for miRNA processing was also upregulated in the KO mice. miR-469 targets Tp2 and Prm2 mRNAs by binding to their coding regions, thereby preventing translation of these essential mRNAs into proteins [29]. GRTH negatively regulates overall miRNA biogenesis via the Drosha/DGCR8 microprocessor complex, including the generation of mature miR-469 and other miRNAs that could play a role during spermatogenesis. miRNAs such as miR-469, through their inhibitory action on Tp2/Prm2 mRNAs, control the timely expression of proteins needed for chromatin compaction and the progression of spermatogenesis [29]. piRNAs comprise the biggest and most complex class of small non-coding RNAs. Unlike miRNAs (~22 nt long), piRNAs are slightly longer (26-32 nucleotides), and their biosynthesis differs from that of miRNAs and siRNAs [33,34]. During post-meiotic germ cell differentiation, the CB accumulates piRNAs and proteins of the piRNA machinery, as well as several other proteins involved in distinct RNA regulation pathways. Embryonic and post-natal male germ cells express high levels of piRNAs in late meiotic cells and haploid round spermatids. CBs provide platforms for the Piwi-interacting RNA (piRNA) pathway and appear to be involved both in piRNA biogenesis and in piRNA-targeted RNA degradation. In addition, RNA regulatory mechanisms such as the NMD pathway are also known to operate inside the CB, providing exciting new insights into CB function.
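As a toy illustration of the seed-based recognition described above: canonical animal miRNA targeting is anchored by nucleotides 2-8 of the miRNA (the seed), which appear in the target 3′ UTR as the seed's reverse complement. Both sequences below are invented placeholders, not the real miR-469 or the Tp2/Prm2 transcripts:

```python
def revcomp(rna: str) -> str:
    """Reverse complement of an RNA sequence."""
    return rna.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based positions in a 3' UTR that match the miRNA seed.

    Canonical seed: nucleotides 2-8 of the miRNA (a 7-mer), recognized in
    the target as its reverse complement.
    """
    site = revcomp(mirna[1:8])  # miRNA positions 2-8 as a 0-based slice
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# Hypothetical sequences for illustration only
mirna = "UAUUGCACUUGUCCCGGCCUGU"
utr = "AAGUGCAAUACCCUUAAGUGCAAUAGGG"
print(seed_sites(mirna, utr))  # -> [2, 17]: two candidate seed-match sites
```

Real target prediction additionally weighs site context, conservation, and supplementary 3′ pairing, but the seed match above is the core sequence rule the text refers to.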
Another important and most fascinating feature of the CB is its dynamic and non-random movements in the cytoplasm of haploid spermatids, thereby facilitating the sharing of selective mRNAs, small RNAs and proteins between neighboring spermatids, and making them an attractive organelle for specific pathways' interconnectivity, in addition to coordinating mRNA regulation in an efficient and accurate manner. GRTH-Mediated Regulation of Germ-Cell-Specific mRNAs in CBs during Spermatogenesis GRTH, first identified in our laboratory, is a multifunctional RNA helicase and is a member of the Glu-Asp-Ala-Glu (DEAD)-Box family of proteins which plays an essential Cells 2022, 11, 613 6 of 11 role in the process of spermatogenesis [35]. GRTH displays low amino acid sequence similarity with other members of the DEAD-box protein family (Figure 3). It is expressed in meiotic spermatocytes, round spermatids, and Leydig cells, and its expression is controlled by hormonal stimulation via gonadotropin/androgen regulation [35,36]. GRTH-Mediated Regulation of Germ-Cell-Specific mRNAs in CBs during Spermatogenesis GRTH, first identified in our laboratory, is a multifunctional RNA helicase and is a member of the Glu-Asp-Ala-Glu (DEAD)-Box family of proteins which plays an essential role in the process of spermatogenesis [35]. GRTH displays low amino acid sequence similarity with other members of the DEAD-box protein family (Figure 3). It is expressed in meiotic spermatocytes, round spermatids, and Leydig cells, and its expression is controlled by hormonal stimulation via gonadotropin/androgen regulation [35,36]. GRTH acts as a negative regulator of steroidogenesis (in Leydig cells) and mitochondrial, death receptor, and nuclear factor k-B (NF-kB) pathways, and plays a central role in preventing germ cell apoptosis [37]. In addition to its inherent helicase activity, GRTH transports mRNAs from nucleus to cytoplasm and to the CBs, and has a vital role in the completion of spermatogenesis. GRTH binds to specific mRNAs as an integral component of RNPs [38]. Furthermore, GRTH is also associated with polyribosomes to regulate target gene translation during germ cell differentiation [39]. The high transcriptional activity in early round spermatids, and mRNA transported by GRTH and its subsequent storage in CBs, provide critical control mechanisms to participate in the post-transcriptional RNA regulation. GRTH knock-out mice are sterile with halted spermatogenesis (step eight of round spermatids stage) with no elongated spermatids, and the sperm and lumen of the epididymis contain only degenerating spermatids. These mice have normal levels of circulating gonadotropins and testosterone with normal sexual behavior. The CBs of GRTH KO mice are highly condensed and markedly reduced in size, with a lack of the "nuage" appearance which is usual at all steps of round spermatids. These changes in the CB of null mice are consistent with their lack of GRTH-dependent nuclear-cytoplasmic transport of messages concerned with the progress of spermatogenesis. Microarray differential gene expression analysis of polysome-bound RNA revealed a genome-wide perspective of GRTH-regulated genes, with the ubiquitin-proteasome-heat shock protein signaling network pathway and NFkB/TP53/TGFB1 signaling networks [39]. 
In germ cells, there are two species of GRTH: a 56 kDa non-phosphorylated form, predominantly found in the nucleus, and a 61 kDa phosphorylated form (pGRTH), present exclusively in the cytosol and associated with polyribosomes. Previous studies from our laboratory have shown that 5.8% of Japanese infertile men (non-obstructive azoospermia) carry a specific missense mutation (R242H) in the gene encoding GRTH, which results in the lack of phospho-GRTH (pGRTH). GRTH knock-in (KI) transgenic mice (carrying the human mutant GRTH gene with R242H) lack the 61 kDa phospho-species in CBs and cytoplasm, while the non-phospho form is still present in the cytoplasm, nucleus, and CBs of germ cells [4,10]. Androgen and gonadotropin levels were not altered, and mating behavior was normal. GRTH KI mice (lacking phospho-GRTH) are sterile, with reduced testis size and a complete lack of sperm and elongated spermatids due to spermatogenic arrest at step 8 of the round spermatid stage. The round spermatids of GRTH KI mice contain CBs that are significantly smaller and more condensed than the CBs of WT mice (Figure 4A-C).
Detailed methodology for the immunofluorescence, EM, and RNA-Seq analyses was described earlier [10]. The absence of the phospho form of GRTH in the CBs of GRTH KI mice has a direct impact on the structural integrity of CBs in round spermatids (RS), as phospho-GRTH is one of the abundant RNA-binding proteins of the CB, along with other CB proteins such as MVH and MIWI. Previous studies from our laboratory demonstrated that GRTH protein, through its conserved binding motifs, binds to the 3′ UTR region of Tp2 mRNA [40]. In a recent study, we also demonstrated that it is the phospho form of GRTH that plays the more important role in the translation of Tp2, through binding of its 3′ UTR [4]. In round spermatids, GRTH participates in the transport of specific mRNAs from the nucleus to cytoplasmic sites via the CRM1 pathway. In the cytoplasm, GRTH is phosphorylated, and the association of phospho-GRTH with relevant messages prevents their degradation and presumably contributes to the transport of mRNA to chromatoid bodies (CBs), where messages are temporarily stored and translationally repressed, awaiting translation during later stages of spermiogenesis [6,9]. The loss of phospho-GRTH in GRTH KI mice therefore provides a very good model to study the relevance of GRTH phosphorylation to CB transcriptomic/mRNA storage profiles and healthy sperm formation. The CBs of GRTH KI mice, analyzed using electron microscopy, were markedly smaller than those of WT mice (Figure 4B). Furthermore, the amorphous "nuage" texture with irregular boundaries that is typical of CBs was lost completely. The CBs of mutant germ cells are small and store very few of the mRNAs essential for sperm formation, in addition to showing a marked decrease in mRNA half-lives. The levels of non-phospho GRTH were unaltered, while pGRTH protein was completely absent in the CBs of GRTH KI mice. This model clearly illustrates the functions mediated by phospho-GRTH in round spermatids. Phospho-GRTH is one of the important structural proteins of CBs [10], along with MVH and MIWI. In GRTH KI mice, the levels of MVH and MIWI were not altered; however, phospho-GRTH was completely lost. mRNA transport to the CB decreased due to the loss of phospho-GRTH, resulting in the diminished size of CBs in GRTH KI mice (Figures 4 and 5).
Figure 5. RNA transport and associated changes occurring in CBs of GRTH-KI mice compared to wild-type mice. GRTH transports essential mRNAs from the nucleus to the cytoplasm for translation. pGRTH, through its interaction with actively elongating polyribosomes, regulates translation of target mRNAs. It is involved in the transport of mRNPs into and out of the chromatoid body, where they are transiently stored and/or degraded when not needed. mRNAs are transported out of the CB by pGRTH for translation at specific times during spermatogenesis.
Since the mRNA-binding function in the CBs was impacted significantly in GRTH KI mice (lacking pGRTH), the result was impaired chromatin compaction and spermatid elongation, with spermiogenesis stalled at step 8 owing to the failure of translation machinery assembly and the loss of mRNAs to decay inside the CBs (Figure 6). The transcriptome profiles of germ cells are exceptionally diverse, and RNA-Seq profiles of isolated CBs (mRNA storage profiles) show altered expression of mRNAs involved in spermatid development, differentiation, chromatin remodeling, RNA transport, and transcriptional and translational regulation [10]. A comparison of gene expression in germ cells with the changes in gene abundance in CBs obtained from wild-type and GRTH-KI mice is depicted in Figure 4D.
Figure 6. Schematic diagram showing progression of the spermatogenesis process in the presence and absence of pGRTH, with reference to chromatoid bodies and the associated mRNA and protein expression. Loss of phospho-GRTH altered the CB structure and biochemical composition. It also diminished the transport of essential transcripts between the cytoplasm and CBs, thereby altering mRNA storage profiles and causing impaired spermatid elongation, resulting in loss of spermatozoa and, subsequently, infertility.
During the process of spermiogenesis, the RS undergo elongation, in which histones are hyperacetylated and replaced by the highly basic transition proteins TNP1/2, which constitute 90% of the chromatin basic proteins, followed by the deposition of protamines PRM1/2 [41,42]. These chromatin remodeling proteins play a crucial role in the hyper-condensation and compaction of the RS nucleus and in reshaping the nucleus of elongating and condensing spermatids. Transcripts coding for transition proteins, protamines, and TSSKs were greatly reduced inside the CBs due to the loss of phospho-GRTH, leading to the failure of the chromatin remodeling that is essential for the condensation of chromatin in developing spermatids during spermiogenesis. UPF2, which is involved in NMD, an mRNA surveillance pathway that eliminates transcripts with premature stop codons, was decreased in the CBs, resulting in inefficient mRNA surveillance due to the loss of phospho-GRTH [10]. Different poly(A)-binding proteins (PABP) and poly(rC)-binding proteins (PCBP) are found in the CB, supporting the role of the CB in mRNA processing, splicing, regulation, translation, and turnover. Due to the loss of phospho-GRTH, transcripts coding for these proteins were increased in the CBs of GRTH KI mice, as the reduced pool of mRNAs in the CB requires more stabilization by PABPC and PCBP proteins.
Furthermore, transcripts of several initiation factors (eIF4e2, eIF4ebp2, eIF3l, and eIF3m), together with mRNAs related to the 60S ribosomal subunit (Rpl10l/Rplp0), were increased and accumulated in the CB, resulting in disruption of the translation machinery of germ cells, which caused spermatogenic arrest and loss of sperm in GRTH KI mice.
Conclusions
CBs are highly critical organelles in the developing sperm, precisely controlling the spermatid elongation process by mediating post-transcriptional and translational control. In the absence of phospho-GRTH there is a loss and degradation of mRNAs (in CBs) coding for essential proteins involved in chromatin condensation, spermatid elongation, and spermatozoa formation, resulting in a lack of sperm and infertility. Our studies in this line of work have unraveled the role of phospho-GRTH in the cell-specific regulation occurring in the CBs of germ cells. Additional functional analyses would reveal more specific mechanisms governing functional sperm production and male fertility. Furthermore, future studies will unravel the molecular shuttling events of mRNA transport to the CB and to the polysomes by phospho-GRTH, and its role in translation.
Conflicts of Interest: The authors declare no conflict of interest.
2022-02-13T16:28:54.677Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "9e456c2f6d64ce88244752be03fd649ac49c9f19", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/11/4/613/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b1c41482af75f0b0b1eaa812113c4e4a1222d3a4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
243230799
pes2o/s2orc
v3-fos-license
IoT-Based Health Big-Data Process Technologies: A Survey Recently, the healthcare field has undergone rapid changes owing to the accumulation of health big data and the development of machine learning. Data mining research in the field of healthcare has different characteristics from those of other data analyses, such as the structural complexity of medical data, the requirement for medical expertise, and the security of personal medical information. Various methods have been implemented to address these issues, including machine learning models and cloud platforms. However, the machine learning model presents the problem of opaque result interpretation, and the cloud platform requires more in-depth research on security and efficiency. To address these issues, this paper presents recent technology for Internet-of-Things-based (IoT-based) health big data processing. We present a cloud-based IoT health platform and health big-data processing technology that reduces medical data management costs and enhances safety. We also present a data mining technology for health-risk prediction, which is the core of healthcare. Finally, we propose a study using explainable artificial intelligence that enhances the reliability and transparency of the decision-making system, which is called the black-box model owing to its lack of transparency.
Introduction
The later sections of this paper present a health-risk-prediction technology based on data mining; the conclusions of this study are presented in Section 5.
Cloud-based IoT Health Platform
In the past, the focus was on disease treatment and saving a patient's life. The current concept of medical services is to make innovative changes in the medical industry using technologies such as the cloud, IoT, big data, and AI. Among these, the cloud-based IoT health platform uses IoT medical devices to reduce the human and material resource costs that are unnecessarily consumed in the medical treatment process. Collected information is stored in the cloud to provide easy access to patient information, thus making it possible to provide more efficient and effective services [8]. In addition, the doctor-patient relationship, previously focused on the doctor's diagnosis, has evolved into a cloud-based IoT medical system in which a patient knows his/her own condition and a consensus is drawn through continued discussions with the doctor. This allows doctors and patients to communicate, and appropriate medical services are determined. Table 1 presents recent studies conducted on cloud-based IoT health systems.
- Developed a patient-centered medical data management system using blockchain technology for storage to achieve privacy protection; an encryption function was used to encrypt health data and ensure anonymity.
Y. Karaca et al. [10] (2019): Converged the mobile cloud environment with cloud computing for the purpose of medical information processing.
M. Rahman et al. [11] (2019): Developed a lossless deoxyribonucleic-acid-sequence (DNA-sequence) hiding method to ensure the authenticity of DNA sequences in mobile cloud-based medical systems.
R. Ganiga et al. [12] (2019): Presented a secure cloud architecture by building a private cloud; managed patient data in the medical environment by building a personal cloud using open-source tools.
M. Pham et al. [13] (2018): Developed a cloud-based smart home environment; collected physiological, motion, and voice signals via non-invasive wearable sensors and provided situational-awareness services.
S. Miah et al. [14] (2018): Evaluated patient data and medical histories; developed diagnostic skills through health professionals and community clinics in a cloud-based solution.
P. Verma et al. [15] (2018): Developed a cloud-centered IoT-based health-diagnosis system; defined a smart, interactive health system for IoT environments.
T. Bhardwaj et al. [16] (2018): Developed technology to provide services to WBAN users based on sensory data volume and application type; developed a computing system for maintenance at the Edge of Things; developed a framework to regulate computing resources in the cloud.
Cloud-based WBAN Healthcare
Many WBAN studies have been conducted because cloud technology is useful for big-data management, processing, and analysis, and in recent years many studies have examined the benefits of the cloud for medical applications. The hierarchical structure of the cloud network consists of three service layers between the client and server layers, which provide various services [10,13]. Fig. 1 shows the service layers of a cloud network. First, infrastructure as a service (IaaS) provides network technology such as load balancers and virtual private networks (VPNs), and is divided into a physical and a virtual network layer [16]. IaaS differs from traditional physical network devices because it is delivered virtually to each user. Second, platform as a service (PaaS) is the platform layer, which provides virtual technology for development platforms. Third, software as a service (SaaS) is the software layer, which provides the user's medical information via virtual software services such as web applications. Peer-to-peer networking physically connects these services to the server [17]. The overlay cloud computing service is then set up between the client and software layers so that the system can provide automatic network configuration. Observation areas are set up across the layers to collect observed values from each layer, and the control area sends virtual network configuration commands to the virtual network layer.
Health prediction system using scalable cloud and big-data technology
P. Sahoo et al. [18] conducted a study on health-prediction technology that analyzes healthcare big data to forecast future health conditions. Fig. 2 shows a health-prediction system built on scalable cloud and big-data technology. The patient's health status is monitored via their WBAN, and the data are stored in the scalable cloud. A signature-based access control mechanism prevents unauthorized users from accessing the data. The patient sets up profiles, configures who can access the data, and determines whether monitoring is continuous, on request only, or periodic. Furthermore, machine learning was applied to signals collected by a WBAN sensor to classify congestive heart failure among system users; a minimal sketch of this kind of classification step is shown below. However, as the amount of data increased, the data traffic grew rapidly; it is therefore necessary to consider methods for reducing the waiting time as the data inflow rate and volume increase.
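To make that classification step concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. The feature names, the synthetic data, and the choice of a random forest are illustrative assumptions only; they are not the model of Sahoo et al. [18].

# Minimal sketch of a health-risk classifier over WBAN-style vital signs.
# The feature names and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: heart rate, respiration rate, SpO2, systolic BP.
X = np.column_stack([
    rng.normal(80, 15, n),   # heart rate (bpm)
    rng.normal(16, 4, n),    # respiration rate (breaths/min)
    rng.normal(96, 3, n),    # SpO2 (%)
    rng.normal(125, 20, n),  # systolic blood pressure (mmHg)
])
# Toy label: flag "at risk" when several vitals deviate together.
y = (((X[:, 0] > 95) & (X[:, 2] < 94)) | (X[:, 3] > 150)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

In a deployed system of the kind surveyed here, the training data would come from labeled WBAN records in the cloud store rather than a random generator, but the fit/predict structure would be the same.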
Medical IoT Healthcare Network Platform using a WBAN
The WBAN is a key element of IoT-based healthcare networks for healthcare applications. Fig. 3 depicts an IoT-based healthcare network, which consists of a topology, an architecture, and a platform. The topology represents the flow of the network, wherein the WBAN sensors and the data collected through them are transmitted within the IoT-based healthcare system. Because the healthcare environment is dynamic owing to the mobility of users, the network interface conditions change. The server and WBAN sensors that constitute the topology determine optimal operating values through a session setup process that periodically sends and receives control signals. In other words, the operating environment of the WBAN sensor, the sensor type and communication method, the traffic types and patterns, and the reliability and delay requirements for communication are used to determine optimal values for the parameters required at each protocol layer and to maintain an efficient healthcare environment. In addition, various features related to device movement, channel status, communication status, and the amount of data transmitted are reflected in the pattern management of communication and traffic between the WBAN devices in a continuous healthcare environment. The healthcare platform [17,19] consists of four layers and is formed via the interactions between the layers (sketched below). The data layer consists of components for the storage and processing of data collected via the WBAN sensors and performs real-time data filtering to increase the reliability and consistency of the data analysis; in this process, users go through authentication and encryption procedures to enhance privacy. The information layer analyzes the user's behavior and performs situation-inference modeling, which can be used to predict a user's situation and behavior pattern through life-pattern recognition and inference based on the collected data. The knowledge layer analyzes the user's health information based on the medical information database established for healthcare services and creates and manages knowledge according to the situation. The service layer provides customized services by converting the knowledge information collected and processed through each layer into the user healthcare service.
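A toy rendition of that four-layer flow, with each layer as a function transforming the output of the one below it, may help fix the idea. All class names, thresholds, and rules here are hypothetical placeholders rather than the platform of [17,19].

# Illustrative sketch of the four-layer platform: each layer is a function
# that transforms the output of the layer below it. All names and rules
# are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Reading:
    user_id: str
    heart_rate: float
    spo2: float

def data_layer(raw: list[Reading]) -> list[Reading]:
    # Real-time filtering: drop physiologically impossible samples.
    return [r for r in raw if 20 < r.heart_rate < 250 and 50 < r.spo2 <= 100]

def information_layer(readings: list[Reading]) -> dict:
    # Situation inference: summarize the user's current state.
    hr = sum(r.heart_rate for r in readings) / len(readings)
    return {"user_id": readings[0].user_id, "mean_hr": hr}

def knowledge_layer(info: dict) -> dict:
    # Compare against a (toy) medical knowledge base.
    info["tachycardia"] = info["mean_hr"] > 100
    return info

def service_layer(knowledge: dict) -> str:
    # Turn knowledge into a user-facing service message.
    return ("Consider contacting a clinician." if knowledge["tachycardia"]
            else "Vitals within normal range.")

raw = [Reading("u1", 112.0, 95.0), Reading("u1", 108.0, 96.0), Reading("u1", 500.0, 97.0)]
print(service_layer(knowledge_layer(information_layer(data_layer(raw)))))

The third reading is discarded by the data layer as implausible, and the remaining samples propagate upward into a service-layer recommendation, mirroring the layered responsibilities described above.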
Cloud-based Health Big-Data Management
Cloud-based health big-data management applies data mining, machine learning, and other forms of detailed analysis to the vast amounts of collected data to find meaningful associations between a patient's symptoms and conditions and, further, to determine effective treatments for various conditions. Moreover, a doctor can remotely provide treatment methods and medical-care-related feedback to the patient. In this process, data processing can be guaranteed through the delay processing of the cloud system to handle errors and losses caused by delay. When demand on the system increases, sharing resources such as the bandwidth and storage space provided by all clients of the cloud network adds scalability and link connectivity to the total system capacity, thus increasing the accessibility and reliability of the network [20]. One study [21] considered cloud-based monitoring of patients' emotional states and developed a cloud-WBAN system for this purpose; it measures users' physical and mental changes through the WBAN sensors and conducts a big-data analysis of the relationships between them. Fig. 5 shows the cloud-based patient-emotional-state monitoring system. The patient data collected via the WBAN sensors are stored in the cloud module, and data mining is used to extract key data based on the patient's condition, which is then used to infer knowledge. To maximize the storage space, a pre-treatment process removes and integrates redundant or unwanted data in the database. In this system, cloud storage is not just a repository for health records but a knowledge base that can be used to construct new knowledge via deduction and inference through machine learning, reinforcement learning, and data mining. The recommendation information created through the knowledge base is reprocessed based on the user's living environment and health-status information, and the resulting information is provided to the user after an authentication process with duplication prevention and rule-consistency verification to increase reliability.
Health Big-Data Processing using Explainable Artificial Intelligence (XAI)
A big-data-processing algorithm processes health big data and supports user decision making through prediction and recommendation models. However, owing to the complexity of the algorithm, the inside of the model is a black box, and it is difficult to clearly explain the rationale and process behind a derived result. In health big-data processing, the reliability and accuracy of the obtained results are important; a clear explanation is therefore required for the validity of all processes and results generated by the decision-making system. Users should be provided with a clear description of the health big-data results, and researchers and experts should provide a step-by-step description of the characteristics and advantages/disadvantages of the algorithm. For these reasons, explainable AI (XAI) technology has attracted attention as a new research field [22]. In the USA, the Defense Advanced Research Projects Agency (DARPA) is the leader in XAI research and has forecast the development of AI [23]. Fig. 6 presents the research conducted on XAI for health big data [23]. The study being conducted as part of DARPA's XAI program will continue until 2021 and has comprised 11 sub-projects since 2017. Among companies, H2O.ai is a representative developer of explainable AI [24], and Microsoft will provide it through Azure. In particular, Kyndi is conducting research on XAI in the healthcare field. DARPA has divided XAI into an explainable model, which shows the interior of the aforementioned black box, and an explanation interface for users. Development strategies for explainable models are summarized in Table 2.
In-depth explanation learning [25]: Develop deep-learning technology that attaches explanation labels to the nodes of a neural network's hidden layers and transforms or supplements the existing neural network into a hybrid form; perform semantic interpretation of the final conclusion by backtracking the nodes on which the network focuses its attention.
Decision tree [26]: Use machine learning to learn decision-tree logic that explains the neural-network operation in connection with the decision-tree process; check the consistency of the results by combining with highly interpretable learning methods such as decision trees; use an explanatory model in the form of a tracer.
Model inference [27]: Infer and explain the results of the black-box model through experiments and observations, using a separate statistical model.
There are two methodologies that can be used to explain the operation of big-data-processing algorithms: sensitivity analysis (SA) [28] and layer-wise relevance propagation (LRP) [29]. SA evaluates the change in the result as the input data are varied; the contribution of each part or item of the data to the final result is quantified and explained (see the sketch after this paragraph). LRP explains the final result by decomposing the layers of a hierarchical model, such as a deep neural network, and identifying the amount of change in the result as the input changes at each layer. In these methodologies, the contribution of each item or layer is visualized as graphs and images, which are provided to users. LRP uses the backpropagation algorithm, normally employed during the learning phase of neural networks, for visualization: whereas a general neural-network algorithm backpropagates contributions to each node of the previous layer based on the learned weights, for visualization the contributions in the hierarchical model are assembled into a heat map during the backpropagation step. The heat map of each layer can be visualized and expressed comprehensively, and the user can intuitively observe which parts of the neural network had a significant effect on the results. LRP is especially useful in image analyses and provides a clear basis for disease judgment in medical image analyses, thus facilitating its effective use by medical personnel for verifying information. It has also been used to explain the operation of PilotNet, NVIDIA's deep-learning-based autonomous driving control system [30].
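The SA idea fits in a few lines: perturb one input feature at a time and record how far the prediction moves. The sketch below assumes an arbitrary fitted scikit-learn model and synthetic data; the perturbation size is an arbitrary choice, not a prescribed value.

# Minimal sensitivity analysis: perturb each input feature in turn and
# measure how much the model's predicted probability changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

x0 = X[0]                       # the instance to explain
base = model.predict_proba([x0])[0, 1]
eps = 0.5                       # perturbation size (an arbitrary choice)
for j in range(X.shape[1]):
    x = x0.copy()
    x[j] += eps
    delta = model.predict_proba([x])[0, 1] - base
    print(f"feature {j}: change in P(class=1) = {delta:+.3f}")

Features 0 and 2, which drive the synthetic label, produce the largest shifts, which is exactly the per-item contribution that SA quantifies.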
Internal Analysis of the Health Big-Data Algorithm
Representative big-data-algorithm analysis models include local interpretable model-agnostic explanations (LIME), a technique for interpreting the results of a big-data-processing model [31]. There are various ways to understand the results of image classification in big data. Ribeiro [32] presented a method for identifying the major factors using images, in which visualization is used to determine which parts of an image are important. LIME divides an image into several smaller parts and checks the resulting score changes, a method called the super-pixel method [33]. LIME can be applied to various algorithms, such as neural networks, random forests, and support vector machines (SVMs), and to heterogeneous data forms (e.g., numerical data, images, and text). Therefore, the results of various black-box models can be interpreted in a reliable way. LIME identifies the variables that are important for predicting results by approximating the model with an interpretable linear model. Fig. 7 illustrates the LIME process. The LIME model has a process for predicting the expression of a specific disease, which is presented through an explainer. In the process of implementing the algorithm, the explainer analyzes the influence of the input data on the output results, weighting the items of the input data list against the prediction. The magnitude of the weights and their positive and negative effects are analyzed relative to each other to highlight the important symptoms that affect the results. This helps medical practitioners to definitively diagnose a patient's condition. In addition, algorithms such as Shapley additive explanations (SHAP) have been studied for general use in machine learning [34]. SHAP measures the importance of attributes; to this end, LIME is complemented by the integration of a number of techniques, such as game theory and local explanations. R packages such as the XGBoost Explainer show the inside of an XGBoost model as a white box, outputting the effect analysis at the terminals of the decision tree in table form. This allows the ensemble model to be organized as a transparent, easy-to-understand graph and its internal tree structure to be analyzed. As shown above, XAI provides information that explains the interpretation of the algorithm analysis: the user can understand the system results, and the researcher can check the model's predictive evaluation more intuitively, beyond a simple accuracy evaluation, with additional help in understanding the internal workings of the model.
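LIME's core loop — perturb an instance, query the black box, and fit a locally weighted linear surrogate whose coefficients serve as the explanation — can be written out directly. This is a bare-bones rendition of the idea in [31,32], not the lime library itself; the data, kernel width, and sample counts are arbitrary assumptions.

# Bare-bones LIME-style local surrogate: sample around one instance,
# weight samples by proximity, and fit a linear model whose coefficients
# act as the local explanation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))
y = ((X[:, 1] > 0) ^ (X[:, 3] > 1)).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)

x0 = X[0]
Z = x0 + rng.normal(scale=0.5, size=(1000, 4))     # local perturbations
pz = black_box.predict_proba(Z)[:, 1]              # black-box outputs
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(d ** 2) / 0.5)                        # proximity kernel (arbitrary width)

surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)
for j, c in enumerate(surrogate.coef_):
    print(f"feature {j}: local weight {c:+.3f}")

The signed weights printed at the end play the role of the symptom-importance list described above: locally influential features get large coefficients, and the sign indicates whether they push the prediction up or down.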
Mining Multi-layer Association Rules in Health Transactions
Multi-layer association rule mining is a method of discovering multi-dimensional relationships between variables and attributes that occur frequently in health transactions that have undergone the pre-treatment step. As a method of discovering multi-dimensional association rules from frequent item sets, it determines association rules through the static discretization of quantitative attributes: frequent items are extracted from the transactions, and rules are found in which the association between different independent items satisfies a minimum level of support. To improve efficiency and scalability, frequent-item-set mining with transaction reduction, splitting, and sampling, or without candidate generation, has been developed in recent years. Data mining can be used on health transactions to discover hidden relationships such as the cause of a disease, complications, treatments, and relationships between diseases. Multi-layer association rule mining also finds associations between items for health-risk prevention [35]. Hypertension data were collected from the Korea Centers for Disease Control and Prevention (KCDC), which provides health information on disease definitions, causes and risk factors, symptoms, complications, and treatment [36]. The collected hypertension data were converted into transactions through preprocessing to determine the association rules that satisfy a minimum support of 0.7 and a confidence of 0.8. Fig. 8 shows the support and lift of the multi-layer association rule mining performed on the hypertension data [35,37]. The size and color of a node represent the level of support and the size of the lift, respectively: the darker the color of the node, the higher the lift, and the larger the node, the higher the support. K. Xia et al. [38] developed methods for the treatment and diagnosis of chronic diseases through association-mining analyses. In the optimization stage of diagnosis and treatment, the association rules of frequent-pattern growth and the Apriori algorithm were used to find correlations in the clinical data and to generate association rules for clinical treatment. This optimized the clinical pathway, thus improving the associated cost and medical quality.
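As an illustration of rule mining at these thresholds, the sketch below runs a self-contained toy miner over a handful of invented "health transactions". The items and transactions are made up for the example (they are not the KCDC data), the thresholds mirror the 0.7/0.8 values quoted above, and only single-item antecedents and consequents are considered for brevity.

# Toy association-rule mining over invented "health transactions",
# mirroring the support/confidence thresholds quoted in the text
# (minimum support 0.7, minimum confidence 0.8).
from itertools import permutations

transactions = [
    {"hypertension", "headache", "high_salt_diet"},
    {"hypertension", "high_salt_diet"},
    {"hypertension", "headache", "high_salt_diet", "obesity"},
    {"hypertension", "high_salt_diet", "obesity"},
    {"hypertension", "headache", "high_salt_diet"},
]
n = len(transactions)

def support(itemset):
    # Fraction of transactions that contain every item in the itemset.
    return sum(itemset <= t for t in transactions) / n

items = sorted(set().union(*transactions))
frequent = [i for i in items if support({i}) >= 0.7]

for a, b in permutations(frequent, 2):
    sup = support({a, b})
    if sup >= 0.7:
        conf = sup / support({a})          # confidence of a -> b
        lift = conf / support({b})         # lift of a -> b
        if conf >= 0.8:
            print(f"{a} -> {b}: support={sup:.2f}, "
                  f"confidence={conf:.2f}, lift={lift:.2f}")

The printed support, confidence, and lift values are exactly the quantities encoded by node size and color in Fig. 8; a production run would use Apriori or FP-growth over the full transaction set instead of this brute-force pass.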
Disease-Risk Prediction and Classification using a Regression Analysis
A regression analysis can be used to mathematically estimate linear correlations in health data and model them using independent and dependent variables. Independent variables, also called explanatory variables, are the causative variables needed to obtain predictions; dependent variables, also called target or response variables, are the results of the predictions. A regression analysis used for disease-risk prediction determines the extent to which the independent variables affect the dependent variables through causal relationships. Regression analyses include linear, multiple, and nonlinear regression. A linear regression models the linear correlation of the dependent and independent variables and is classified as either simple linear or multiple linear regression, depending on the number of independent variables [39]. Regression analyses can be applied to a patient's medical data to predict the risk of disease. The colon-cancer-patient example uses the colon data from R's survival package [37]. The attributes of the data consist of age, sex, cancer status, censorship status, etc., in categorical or continuous form; for example, a gender category of 1 or 2 indicates male or female, respectively. Independent variables are extracted from the colon data to predict the censorship status as the target variable, and the influence and predictive power of the independent variables are then identified. Fig. 9 presents the results of a regression analysis of the colon-cancer-patient data. Here, the dotted line represents Cook's distance, and the normalized residuals and leverage describe the influence on the data: the horizontal axis represents the influence of the variable, and the vertical axis represents the Pearson residual, which indicates how well the model predicts the observed values [39]. Fig. 9 shows the results of applying a linear regression model that generalized the treatment progress of colon-cancer patients as cure, recurrence, or death. The dependent variable was the status, and the independent variables comprised the remaining influence factors. The predicted results for the treatment outcome of colon-cancer patients had small Pearson residual values, and the regression model was therefore considered appropriate. G. Manogaran et al. [40] used the stochastic gradient descent (SGD) method and a scalable logistic regression analysis to analyze health risk. An SGD algorithm was used to develop scalable diagnostic and logistic regression models, along with a scalable data structure and disease-prediction model for cloud computing to determine the health risk.
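A minimal sketch of an SGD-trained logistic risk model of the sort just described can be built with scikit-learn's SGDClassifier using a logistic loss (recent scikit-learn versions name it "log_loss"). The synthetic features and coefficients below stand in for real patient records and are assumptions of the example, not the model of [40].

# Logistic regression fitted by stochastic gradient descent (SGD),
# in the spirit of the scalable risk models described above.
# Synthetic data stand in for patient records.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 2000
age = rng.normal(50, 12, n)
bmi = rng.normal(27, 5, n)
sbp = rng.normal(130, 18, n)
X = np.column_stack([age, bmi, sbp])
# Toy risk label increasing with age and blood pressure.
logit = 0.04 * (age - 50) + 0.05 * (sbp - 130) - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss="log_loss", alpha=1e-4, max_iter=1000, random_state=0),
)
model.fit(X, y)
print("predicted risk for age 65, BMI 30, SBP 160:",
      model.predict_proba([[65, 30, 160]])[0, 1].round(3))

Because SGD updates the weights one mini-batch at a time, the same estimator scales to streaming or cloud-resident data via partial_fit, which is the property the scalable models above rely on.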
Classification is a method of constructing health data via data purification, relevance analysis, and data transformation, and of assigning records to class labels according to a range of predefined attributes [42]. The classification of health data makes use of decision trees, random forests, the k-nearest-neighbors algorithm, SVMs, and neural networks. Patient data are classified into classes, such as the presence or absence of diabetes or hypertension, and their characteristics are extracted; as a result, diet, exercise, treatment, or other suitable recommendations can be provided for patient management. Fig. 10 shows the classification results presented as a decision tree for the colon-cancer-patient data from R's survival package [35,37,41]. K. Dauda et al. [41] developed a decision-making model for survival data that includes competing risks; the decision tree is constructed using the classification-and-regression-tree algorithm to process the validated data for the regression and classification trees. R. Vijayarajeswari et al. [43] developed a classification method for the early detection of breast cancer using an SVM classifier and the Hough transform: the Hough transform extracts features of a particular shape from an image, which are then classified using the SVM. This method can be used effectively to classify abnormal X-ray images.
Ensemble Technique for Predictions
The ensemble method derives the most appropriate prediction by combining the prediction results of various models created from the given health data. The main approaches include bagging predictors using a simple majority-vote method, the random forest method, and weighted boosting methods. L. Breiman [44] introduced the bagging of predictors, known as bagging, a bootstrap-aggregating algorithm and a type of ensemble method: after bootstrap datasets and corresponding predictive models are created, the ensemble method is applied to the results. A simple majority-vote method includes the random forest, which creates several decision trees with randomness and decorrelation and then determines the result by a majority vote; this structure is also useful for data that include noise. The randomization of the trees is constructed through the bagging process, the trees are trained on the training dataset, and their outputs are combined by a majority vote. This addresses the tendency of single decision trees to overfit and generalize poorly to new data. Boosting is an ensemble method based on weighting, studied by Y. Freund and R. Schapire [45]. This method up-weights the error data that the boosting model predicts poorly; by correcting models that produce negative results, the susceptibility to overfitting is reduced, and even if the performances of the individual models are poor, the final model provides improved results. Adaptive boosting (AdaBoost) is the basic boosting method; it can be used with algorithms such as decision-tree learning and focuses learning on the more difficult data. The gradient boosting machine (GBM) is a machine learning technique that combines gradient descent with boosting [46]. The GBM connects many simple, shallow-tree models and is constructed so that each tree compensates for the errors of the previous trees; the core of the GBM is thus error correction of the previous trees. Gradient descent and the learning rate are used for this error correction, and models of varying complexity can be constructed according to the learning rate, as sketched below.
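The error-correction loop at the heart of the GBM — fit a shallow tree to the current residuals, then add a learning-rate-scaled fraction of its predictions to the ensemble — can be written out in a few lines. This is a minimal regression-flavored sketch with arbitrary synthetic data and hyperparameters, not any particular library's implementation.

# Minimal gradient boosting for regression: each shallow tree is fitted to
# the residuals of the current ensemble, scaled by a learning rate.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=400)

learning_rate = 0.1
pred = np.full_like(y, y.mean())   # start from the mean prediction
trees = []
for _ in range(100):
    residual = y - pred                      # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += learning_rate * tree.predict(X)  # correct the previous trees' errors
    trees.append(tree)

print("final training MSE:", np.mean((y - pred) ** 2).round(4))

A smaller learning rate with more trees yields a smoother, better-regularized fit at the cost of computation, which is the trade-off the learning-rate discussion above refers to; libraries such as XGBoost add regularization and systems optimizations on top of this same loop.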
Using relatively shallow trees, the GBM uses less memory, performs well, and can carry out both regression and classification analyses. In particular, it performs well on X-Y grid-type data and provides excellent prediction performance compared with other machine learning algorithms [47]. Recently, various derived algorithms and packages have been developed to take advantage of the superior performance of the GBM, e.g., Python-based packages such as XGBoost [48], LightGBM [49], and CatBoost [50]. These improve the performance of GBMs and are applied to big-data processing, which requires a significant number of computations [51]; various methods have also been attempted to use the hardware efficiently. Table 3 shows the types of boosting algorithms. In general, health big-data algorithms face the problem of high dimensionality in the learning data [52], and each type of machine learning performs effectively only on certain data forms, so the data form or settings must be adjusted for good algorithm performance. The boosting concept is more general and provides effective performance with fewer parameters; in addition, by selecting effective feature data, it reduces the dimensionality of the health big-data learning network and improves the execution time.
Conclusion
The key to research on health big-data systems is the acquisition of varied data and the accuracy of the data analyses. Recently developed health big-data analysis algorithms show positive effects in terms of accuracy and speed. They provide personalized healthcare services while reducing the medical expenses and time required. In addition, it is possible to provide medical professionals with analyses and research results in a short time, along with simulations and predictions of the toxicity and side effects of drugs. A healthcare cloud system protects personal privacy and reduces data management costs. In addition, more advanced, explainable big-data-processing technologies provide users with explainable predictive results. Recently, using multi-layer association rule mining and regression analysis, attempts have been made to develop methods for predicting the risk of disease and for uncovering hidden relationships such as the cause of a disease, complications, treatments, and relationships between diseases. In addition, ensemble models use the prediction results of various models to derive more effective predictions. In particular, XAI technology can be used to visualize the decision process of AI models and to explain the elements of deep-learning models involved in decision making. In the future, XAI is expected to develop toward creating automated or interactive reports by combining it with natural-language generation technology. This would allow experts to understand the contents of an analysis and provide a reasonable basis for decision making. Research on these techniques may contribute to the development of AI systems in various fields, including law, finance, economics, and medical treatment, and is expected to allay concerns regarding automation systems and to provide highly reliable information to users.
2021-08-23T18:26:48.136Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "55ff16527dac009ccdc175296351306b6ee4ec5d", "oa_license": null, "oa_url": "http://itiis.org/digital-library/manuscript/file/24360/TIIS%20Vol%2015,%20No%203-9.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f8a9c3b3be125d53d48979b4af7ae44b4536729d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
237400828
pes2o/s2orc
v3-fos-license
HIV prevalence in patients with cervical carcinoma Abstract The Human Immunodeficiency Virus (HIV) seropositive prevalence among women with cervical cancer varies in different parts of the world and even within a country. This study aimed to document the prevalence of HIV infection in women with newly diagnosed cervical cancer at a secondary hospital in South Africa. This study is a retrospective review of the records of 89 women who were newly diagnosed with cervical cancer between 01 June 2010 and 31 May 2013 at Pelonomi Hospital, Mangaung, South Africa. Data such as age, parity, gravidity, marital status, occupation, HIV status, CD4 count, anti-retroviral treatment status, and clinical stage of disease were retrieved from the case files, the Meditech patient record, and the Disa laboratory system. Data analysis was done using the SAS statistical package. The HIV-seropositive prevalence was 52.4%, with the highest prevalence (91.3%) in the age group 40 years and younger. In HIV-positive women, the mean CD4 cell count was 280 cells/mm3, and 43% of them were not on anti-retroviral treatment. The majority (86%) of all patients presented with late-stage disease (International Federation of Gynecology and Obstetrics Stage III and IV) when newly diagnosed with cervical cancer. This study highlights the high HIV-seropositive prevalence, severe immunosuppression, and late presentation of disease in women newly diagnosed with cervical cancer. Cervical cancer screening programs need to be fully integrated into existing HIV health care services to allow for ideal prevention and early detection of the disease. Anti-retroviral treatment needs to be prioritized for HIV-positive women.
Introduction
Cancer of the cervix remains a major cause of morbidity and mortality among women, especially in the developing world. [1,2] Progress has been made in screening, prevention, and treatment, but many challenges remain in low-resource settings. These include the lack of proper epidemiological data, the differing Human Immunodeficiency Virus (HIV) infection prevalence rates reported in women with cervical cancer, and inadequate health care services, all of which hamper progress in reducing the morbidity and mortality of cancer of the cervix. [3] HIV and Acquired Immunodeficiency Syndrome have resulted in a high cervical cancer incidence, and cervical cancer has therefore been classified as an Acquired-Immunodeficiency-Syndrome-defining disease. [4][5][6] HIV-seropositive women have been found to be at higher risk of Human Papilloma Virus (HPV) infection due to their immune-compromised status, and they are more likely than HIV-negative women to develop cervical precancerous lesions that lead to cervical cancer. [4,5] Globally, 1% to 2% of HIV-negative women develop high-grade Cervical Intraepithelial Neoplasia annually, while HIV-positive women are 10% more prone to develop high-grade Cervical Intraepithelial Neoplasia lesions. [7] A systematic review by Mapanga and colleagues found that lower economic status, multiple sexual partners, early sexual debut, and smoking may also be confounding factors in cervical cancer, which makes prevention of cervical cancer very important. [3] Denny reported that in South Africa during the year 1998/1999, the highest age-specific rate of cervical cancer across all race and age groups occurred among black women aged 66 to 69 years, with a rate of 152.5/100,000. [1]
Due to the lack of data from low-income settings, and in particular from the rural Provinces of South Africa, it was decided in 2013 to review the clinical case records and HIV status of newly diagnosed cervical cancer patients attending the Pelonomi Hospital in South Africa. The Pelonomi Hospital is a public sector facility that renders health services to low- or no-income communities. Results from this retrospective cohort study can serve as baseline data for developing countries and can be compared with future studies looking at special prevention and treatment programs for cervical cancer patients. They can also assist in improving cervical cancer screening guidelines for HIV-positive women in low-income settings, as HIV-seropositive women are a high-risk group for developing cervical cancer. Identifying the ideal cervical cancer screening methodology for these women in low-income settings will assist in reducing premature mortality among them. [4] The results of this study will also enhance the education of clinicians and other health care workers on the association between HIV infection and cervical cancer.
Methods
This is a retrospective descriptive study; permission to conduct the study was obtained from the Ethics Committee of the Faculty of Health Sciences, University of the Free State, and the Clinical Head at the Pelonomi Hospital. The study was carried out in accordance with the Declaration of Helsinki and local guidelines for conducting research. Case records were reviewed for 89 women who were newly diagnosed with cervical cancer between 01 June 2010 and 31 May 2013 at the Gynecologic Unit of the Pelonomi Hospital, Mangaung, South Africa. A structured data form was used to collect information on each patient. Data such as age, parity, gravidity, marital status, occupation, HIV status, CD4 cell count, anti-retroviral treatment (ART) status, clinical stage of disease, and tumor histological type were retrieved from the case files, including the MEDITECH patient record and the Disa laboratory system. The International Federation of Gynecology and Obstetrics (FIGO-2009) cervical cancer clinical staging was used to stage the disease and was extracted from the case records. [8] Patients accessing the Gynecologic Unit of the Pelonomi Hospital were all referred from primary health care clinics and primary hospitals in the Mangaung district. HIV testing and CD4 cell counts had previously been done as part of implementing anti-retroviral treatment in relevant HIV-positive women. These HIV test results were obtained through the referring centers and their laboratories, and the results were available on the Disa laboratory system. HIV tests were all confirmed by enzyme-linked immunosorbent assay (ELISA - 10283020 Centaur CHIV) in the Pelonomi hospital laboratory. CD4 lymphocyte counts were done using flow cytometry. For known HIV-positive women, a new CD4 cell count was requested after the cervical cancer diagnosis was made on histology, to obtain their recent CD4 cell count. The CD4 cell counts were done at the Pelonomi hospital laboratory. The information collected was entered on a structured form, and each patient was coded to protect confidentiality. The FREQ and MEANS procedures of the SAS statistical package were used to analyze the data, and the results are presented as frequencies, percentages, and charts.
Results
Records from 89 women attending Pelonomi Hospital who were newly diagnosed with cancer of the cervix between 01 June 2010 and 31 May 2013 were reviewed.
Table 1 shows selected characteristics and the HIV infection prevalence in these patients. In this cohort, 23 (25.8%) of the women were 40 years of age or younger. The mean age was 49.7 years, with ages ranging from as young as 25 up to 79 years. Fifty-two (58.4%) had 3 or more children, 43.8% were unmarried, and unemployment was high at 85.3%. Among the total of 89 women, 44 were HIV-positive, 40 were seronegative, and 5 had unknown HIV status; the latter was most likely due to declining HIV testing or being unable to provide consent. The HIV prevalence was 52.4% (44/84) with a 95% CI of 41. In this study, the HIV infection was further analyzed per age group. Figure 1 shows that women 40 years of age and younger had the highest HIV infection prevalence, at 91.3% (21/23), followed by 52.4% (22/42) in the age group 41 to 60, and only 5.3% (1/19) in the age group older than 60 years. All 89 patients were clinically staged, and Table 2 shows the cancer characteristics.
Table 1. Demographic and selected characteristics, and HIV infection prevalence, of newly diagnosed cervical cancer patients (n = 89).
Discussion
During the 3-year period, 89 patients were referred by the primary hospitals and health care clinics to Pelonomi Hospital and were newly diagnosed on presentation with cervical carcinoma. Their ages ranged between 25 and 75 years, with a mean age of 49.7. The majority of patients (85%) presented with late-stage disease. The HIV infection prevalence among the newly diagnosed women with cervical cancer was 52.4% (44/84). The high HIV infection prevalence in this cohort of women within the Free State Province of South Africa contrasts with the much lower HIV prevalence reported by a number of other studies conducted in South Africa and in other African countries; [9][10][11][12][13][14][15][16] see Table 3. Lomalisa et al found an HIV prevalence of 7.2% in women who presented with cervical cancer during January 1997 to June 1998 in Gauteng Province. [9] In KwaZulu-Natal Province, Moodley M reported the prevalence of HIV infection among women with cervical carcinoma over the 2 time periods of 1999 and 2003 as 21% and 21.8%, respectively. [10] Within Kenya, 2 studies among cervical carcinoma patients also reported different HIV infection prevalence rates. [11][12] Rogo and Kavoo-Linge in 1990 reported an HIV prevalence of 1.5% among cervical cancer patients from Nairobi, [11] and Gighangi et al in 2003 found an HIV-seropositive prevalence of 15% among women with cervical cancer referred to the National Kenyatta Teaching Hospital in Nairobi. [12] Newton et al in 2001 reported a 32% HIV prevalence among cervical cancer patients in Uganda, [13] and in 2007 Sekirime and Gray reported an HIV prevalence of 18% from case records reviewed between 1993 and 1995. [14] More recently (2018), a study undertaken in BUTH, Zaria, North-Western Nigeria, using data retrieved from case files, reported a 4% HIV seropositivity among patients with cervical cancer. [15] Also in 2018, Simonds et al in the Western Cape Province of South Africa reported an HIV infection prevalence rate of 14.4% among patients with cervical cancer. [16] Different background HIV prevalence rates may account for these differences, as shown by the 3 South African studies from sites with different HIV prevalence rates. [9,10,16]
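The headline prevalence is simple arithmetic that can be checked directly. Because the confidence interval in the text above is truncated, the normal-approximation (Wald) interval computed below is illustrative arithmetic only, not the authors' reported value.

# Prevalence and a normal-approximation (Wald) 95% confidence interval
# for 44 HIV-positive women out of the 84 with known status.
# Illustrative arithmetic only; the interval reported in the paper
# is truncated in the text above.
import math

k, n = 44, 84
p = k / n
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"prevalence = {p:.1%}, approx 95% CI = ({lo:.1%}, {hi:.1%})")
# prints: prevalence = 52.4%, approx 95% CI = (41.7%, 63.1%)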
The South African National HIV Prevalence Survey of 2012 demonstrated different general-population background HIV prevalence rates in different provinces: KwaZulu-Natal (eThekwini Metro) had the highest HIV prevalence rate, at 14.5%, followed by Gauteng (City of Johannesburg Metro, 11.1%), Free State (Mangaung Metro, 7.9%), and Western Cape (City of Cape Town, 5.2%). [17] A number of studies have reported that HIV-positive patients present with cervical cancer at a younger age than HIV-negative patients. [9-16] In this study, HIV infection was further analyzed by age group, and the data showed that women 40 years old and younger had the highest HIV prevalence, at 91.3%, followed by 52.4% in the age group 41 to 60 and only 5.3% in the age group 61 and older, confirming the reports by others. Clinical staging of cervical cancer among the 89 women was evaluated at the time of presentation. The majority of women presented with FIGO (2009) advanced stage III and IV disease. Limited access to health care facilities and poor implementation of cervical cancer screening programs are the most likely contributing factors to late-stage presentation. Maiman et al demonstrated that HIV-positive women presented with more advanced stages of cervical cancer than HIV-negative women, [18] and the high prevalence of HIV infection (52.4%) in this study might also have contributed to the advanced cervical cancer stages at presentation. It is historically well established that the most common histological type of cervical cancer is squamous cell carcinoma, followed by adenocarcinoma. In this study, squamous cell carcinoma was observed in 94% of the women, adenocarcinoma in 5%, and adenosquamous carcinoma in 1%. A similar pattern was observed by Maiman et al. [18] The mean CD4 cell count in this cohort was 280 cells/mm³, and the immunosuppression status might be due to the high prevalence of HIV infection as well as the advanced stages of disease. Lomalisa et al reported that HIV-seropositive patients with CD4 cell counts below 200 cells/mm³ had significantly more advanced cervical carcinoma than HIV-seronegative patients. [9] The severe immunosuppression found in this cohort of women might influence their overall disease progression and response to future treatment. Among the HIV-positive women with newly diagnosed cervical cancer, 43.2% were not on anti-retroviral treatment. It has been shown that women living with HIV infection and demonstrating low CD4 cell counts are more likely to acquire oncogenic HPV and less likely to clear HPV infection, but these risks can be mitigated once women are on ART. [19] Good clinical management of patients with both HIV infection and cervical cancer requires integrated screening, prevention, and treatment services. South Africa's current cervical cancer screening policy in the public health sector offers asymptomatic women 3 free cervical smears, beginning at the age of 30 years and repeated every 10 years. For women with HIV infection, cervical cancer screening is done at 3-yearly intervals for the duration of the woman's life, and only when a screening test is positive for the disease is annual screening recommended for the duration of the woman's life. [20] Screening programs in resource-poor areas remain challenging. [21] For HIV screening in South Africa, HIV Counselling and Testing was launched in the year 2000.
The South African Department of Health's national HIV Testing Services Policy, 2010 (updated 2016), renders HIV counselling and testing services to all members of the public. The service delivery platforms are available at health facilities (hospitals, clinics, mobile clinics) and community sites (home-based, workplace, schools, and higher learning institutions). Any person aged 12 years or older with sufficient maturity and mental capacity to understand the benefits, risks, and social and other implications of HIV testing may give consent for HIV Testing Services in South Africa. [22] Intensive execution of cervical cancer prevention and treatment programs is strongly advocated for women living with HIV infection.

Conclusion

This study highlights a high HIV infection prevalence among younger women newly diagnosed with cancer of the cervix, severe immunosuppression in these women, and late presentation of the disease. It further highlights that close to half (43.2%) of the HIV-positive women were not receiving ART at the time of their cervical cancer diagnosis. These results can be used for (1) improving screening and treatment guidelines for cervical cancer in HIV-infected women, (2) monitoring the effect of scaling up ART in HIV-infected women, and (3) educating health care workers on the importance of HIV infection and cervical cancer as a dual disease. Cervical cancer screening programs need to be fully integrated into existing HIV health care services; this will allow for ideal prevention and early detection of the disease in low-resource settings with high HIV prevalence.
COMPARING AND ALIGNING OUTCOMES OF TWO ENGINEERING AND TECHNOLOGY DISCIPLINES IN ONTARIO

The separate development of engineering and technology programs in Ontario has made transfer between these program types a complicated process. The process often requires assessment on a case-by-case basis and considers different aspects of knowledge, skills, and performance. This study was conducted to determine the level of equivalency between two engineering and technology disciplines, with the purpose of informing the development of transfer policy and comprehensive bridging programs in the province. Outcomes, content, and function of engineering and technology programs in Ontario were analyzed using a common framework in two disciplines: mechanical and electrical. Material from 7 engineering and 10 technology programs, including syllabi, learning outcomes, and reports, was collected and analyzed, along with publicly available information about the programs. Slightly less than 40% of the courses in representative first-year Mechanical and first- and second-year Electrical/Electronics Technology programs had equivalency to courses in engineering degree programs. The level of cognitive process expected for problem-solving outcomes is higher in the engineering programs than in the technology programs, and vice versa for outcomes related to hands-on skills. Overall, the analysis indicated sufficient alignment between engineering and technology programs to suggest transfer students may have acquired the necessary skills and knowledge of introductory-level courses that are similar in content. Through hybrid bridging subjects and tests of prior knowledge, engineering programs can ensure incoming transfer students meet all CEAB accreditation criteria.

INTRODUCTION

Establishing points of equivalency for transfer between engineering and technology programs in Ontario has been a complex process that often requires the assessment of a student's knowledge profile and skills on an individual basis. This problem exists because these programs were initially developed to serve separate needs in the engineering and technology industry, without the intention of supporting efficient student transfer between the two [8]. The difference in both outcomes and curricula, despite some similar content, makes it difficult to provide credit for prior learning when transferring. Additionally, stringent engineering accreditation qualifications set out by the Canadian Engineering Accreditation Board (CEAB) make it challenging to accept credit from technology programs into accredited engineering programs [3]. CEAB defines the content that must be covered by accredited engineering programs. Institutions are responsible for ensuring other accreditation criteria are met, including elements of the program environment pertaining to the quality of the educational experience, faculty (expertise, competency, professional status), and contact hours. This situation provides an opportunity to rethink how credit transfer is done in Ontario, perhaps differently from provinces like British Columbia and Alberta that have a stronger link between technology and engineering programs. The specific goal of this study was to identify the level of similarity of the learning outcomes and course content between engineering and technology programs in the disciplines of electrical engineering/technology and mechanical engineering/technology.
Nicole Fallon explains that transfer between engineering technology and engineering programs is a multifaceted problem that requires investigation of both the content and context of learning [4]. It is important that both the topics covered (content) and the expectations for depth of understanding (context) are aligned before transfer credits are granted. For this reason, a framework developed by Zakani et al. (2016) was used to capture both elements [9]. This framework considers the equivalency of content, context of learning, and function of courses in the overall curriculum, and was designed to be applicable for transfer into engineering or technology.

METHODS

Engineering and technology programs were compared using an approach developed by Moskowitz and Stephens for assessing the equivalency of programs [6]. They explain that a comprehensive evaluation of program equivalency can be made through the systematic analysis of four elements:
(i) Content - The fundamental concepts and content that are covered by a program. This addresses the area of concern for content disparity between program types.
(ii) Context - The depth and complexity of tasks and learning outcomes. This addresses which cognitive processes are the focus for development.
(iii) Function - The relationship between courses in the curriculum. This addresses whether programs offer standalone courses or courses that build on each other for deeper understanding.
(iv) Structure - The order and sequence of content delivery. This addresses surface-level similarities and differences between these program types and the order/timing of course delivery.
This method was shown to be effective at capturing differences between fundamental engineering science courses and engineering design courses [8]. To capture all the elements for the systematic analysis of equivalency for Mechanical and Electrical/Electronics at a larger program level, several sources and artifacts were needed. For this analysis specifically, program learning outcomes and syllabi or course descriptions from each program's curriculum were used as data sources.

Program Learning Outcomes

Program learning outcomes provided insight into the content and context of each program. Specifically, the Ontario Qualifications Framework was used to determine the learning outcomes for technology programs. For engineering programs, learning outcomes were obtained by contacting engineering programs and asking permission to share their CEAB program-level indicators. Program learning outcomes were also obtained from publicly available resources. Bloom's Taxonomy of Educational Objectives was used to measure the complexity of learning that students are expected to demonstrate in a given program or course, as defined by the program learning outcomes [2]. Bloom's was adopted because it is a common framework used to assess learning outcomes. It can be applied when information on other courses or activities in a program is limited, and its language has a high degree of overlap with the typical language used to describe explicit learning outcomes. Using Anderson and Krathwohl's revision of Bloom's Taxonomy, this study focused on the cognitive process aspect of the cognitive domain [1]. The following levels describe the different cognitive processes in the framework:
(i) Remember - Retrieve and recall knowledge from long-term memory.
(ii) Understand - Construct meaning from instructions and demonstrate acumen.
(iii) Apply - Use appropriate procedures or processes in a given situation.
(iv) Analyze - Break down problems into parts to determine how they relate to one another, and use the parts to construct an overall solution.
(v) Evaluate - Make decisions based on standards or criteria.
(vi) Create - Put elements together to form a coherent whole.
To distinguish between the cognitive process expectations at different levels of learning, learning outcomes were categorized by five curriculum components known as the CEAB Accreditation Units (AUs): mathematics, natural sciences, engineering science, engineering design, and complementary studies. Though technology programs are not required to cover all five components, they provided a point of comparison for program structure. Based on these frameworks, the learning outcomes were analyzed in four steps:
1) Assign a CEAB AU content area to the learning outcome
2) Identify the nomenclature that describes the topics covered in the AU
3) Isolate the action verb of the learning outcome
4) Associate a cognitive process with the action verb
(A minimal code sketch of steps 1-4 follows at the end of this Methods section.)

Course Descriptions

Course descriptions were used to analyze the content and structure of programs. For transfer between engineering technology and engineering programs, or vice versa, one of the first steps is to identify the similarities and differences in content delivery. Kopera-Frye et al. assert that curriculum mapping is a versatile tool for comparing the quality of teaching and learning in higher education [5]. Plaza et al. also highlight that curriculum mapping can provide a visualization of the courses, how they are related, and the timeline of a program [7]. For both analyses, 70% content similarity was used to determine equivalency between courses. The Queen's University and Seneca College curriculums were used as benchmarks for comparison with the other programs included in the analyses. The courses from ten institutions offering both Mechanical Technology and Electrical/Electronics Technology programs were mapped to courses in the Queen's Mechanical and Electrical Engineering curriculums. Additionally, the courses in the Mechanical and Electrical/Electronics Engineering curriculums of seven institutions were mapped to technology program courses at Seneca College. Data was obtained by contacting engineering and technology programs and instructors and asking permission to share their course descriptions and syllabi. Online course descriptions were used when syllabi were not available.
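The four-step coding of learning outcomes lends itself to a simple illustration. The sketch below is a hypothetical Python rendering, not the authors' instrument: the verb-to-cognitive-process lookup and the sample outcomes are invented for illustration, and a real coding pass would need human judgment rather than a first-word heuristic.

import pandas as pd

# Hypothetical lookup from action verbs to Bloom cognitive processes.
BLOOM_LEVEL = {
    "define": "Remember", "describe": "Understand", "apply": "Apply",
    "troubleshoot": "Analyze", "evaluate": "Evaluate", "design": "Create",
}

outcomes = pd.DataFrame({
    "au_category": ["Engineering Science", "Engineering Design", "Complementary"],
    "au_topic": ["statics", "modelling", "safety codes"],
    "text": ["apply equilibrium equations to trusses",
             "design a gear train for a given load",
             "describe workplace safety regulations"],
})

# Steps 3 and 4: isolate the action verb (naively, the first word)
# and associate it with a cognitive process.
outcomes["verb"] = outcomes["text"].str.split().str[0]
outcomes["process"] = outcomes["verb"].map(BLOOM_LEVEL)

# Counts at the AU category / topic / verb levels are exactly the
# macro-meso-micro hierarchy that a treemap (cf. Figures 1-2) displays.
print(outcomes.groupby(["au_category", "au_topic", "verb"]).size())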
Comparisons of Program Learning Outcomes

Analysis of program learning outcomes from engineering and technology programs indicated differences between task expectations (context) and skill emphasis (content). Results were visualized using treemaps, which display the relationships between common structures as sets of hierarchical (macro-meso-micro) nested rectangles. In this analysis, the first three steps corresponded to the levels of hierarchy: learning outcomes were first grouped according to their AU category, and within their AU category groupings, learning outcomes were separated by AU topic followed by the learning outcome action verb. Sample treemaps for Mechanical Engineering and Mechanical Technology program learning outcomes can be seen in Figure 1 and Figure 2, respectively. For Mechanical Engineering programs, results show that institutions tend to emphasize Engineering Science, Engineering Design, and Complementary content areas in the first two years of study. These programs focus on topics such as physics, modelling, materials science, statics and solid mechanics, thermodynamics, and fluid mechanics. Conversely, learning outcomes for the first two years of Mechanical Technology programs concentrate on Complementary studies, followed by Engineering Design and Engineering Science. For Mechanical Technology curriculums, Complementary studies involve understanding and following predefined codes and safety practices, and topics in Engineering Design and Engineering Science emphasize engineering drawing, materials science, and basic sciences. From the analysis of Electrical Engineering program learning outcomes, results indicate that institutions tend to offer curriculums focusing on Engineering Science, Complementary, and Engineering Design. Electrical/Electronics Technology programs place a higher priority on Engineering Design, Complementary studies, and Engineering Science content areas. Furthermore, differences between engineering and technology program learning outcomes within the same AU content category were present. For example, it is important to note that Engineering Design topics in representative engineering program learning outcomes emphasize the development of problem-solving skills, while technology programs focus on hands-on skills. A general comparison of all action verbs in the learning outcomes of both disciplines was completed and is displayed in Figure 3. From the figure, it can be seen that both engineering and technology program learning outcomes are associated with a wide variety of cognitive processes. However, engineering programs appear to have a larger proportion of higher cognitive processes than technology programs, particularly in the categories of engineering design and engineering science. It is also evident that technology programs use a wide range of verbs to describe their learning outcomes.

Curriculum Mapping

Before comparing courses in engineering and technology programs, curriculum mapping was performed between Queen's University Mechanical Engineering and Electrical Engineering and the other Mechanical Engineering and Electrical Engineering programs at the representative Ontario institutions. This analysis indicated that 78% and 86% of courses in the first and second year of engineering at Queen's University are covered by courses in other Mechanical Engineering programs. For Electrical Engineering, 73% and 60% of courses in the first and second year of the Queen's University curriculum are covered by the curriculums of other Electrical Engineering programs. This high level of mapping confirmed the equivalency of engineering programs across various institutions in Ontario. Curriculum mapping of Queen's University Mechanical Engineering to other Mechanical Technology programs shows that only 30% and 55% of courses in the first and second year at Queen's are covered in the technology programs. Results from the curriculum mapping of Queen's University Electrical Engineering to other Electrical/Electronics Engineering Technology programs reveal that only 33% and 38% of courses in the first and second year of the Queen's University curriculum are covered by courses in the engineering technology programs. For consideration of transfer from engineering to technology programs, curriculum mapping was performed using Seneca College as the benchmark for technology programs. Interestingly, for Seneca's Mechanical Technology program, only 37% and 19% of courses in the first and second year were covered by Mechanical Engineering programs.
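The coverage percentages above reduce to a simple computation once pairwise content-similarity judgments are in hand. A minimal sketch, assuming similarity scores in [0, 1] derived from syllabi comparisons (the scores and course names below are hypothetical) and using the paper's 70% equivalency threshold:

def coverage(similarity, threshold=0.70):
    # Fraction of benchmark courses matched by at least one course in
    # the compared program at or above the similarity threshold.
    matched = sum(1 for scores in similarity.values()
                  if any(s >= threshold for s in scores))
    return matched / len(similarity)

# Hypothetical first-year benchmark of four courses, each scored
# against two courses of a compared program.
sim = {
    "Calculus I":         [0.90, 0.20],
    "Statics":            [0.75, 0.10],
    "Engineering Design": [0.40, 0.55],
    "Chemistry":          [0.30, 0.65],
}
print(f"coverage = {coverage(sim):.0%}")  # -> coverage = 50%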
Electrical Engineering programs have a much higher level of mapping, with coverage of 40% and 63% of courses in the first and second year of Seneca's Electronics Technology program.

DISCUSSION AND CONCLUSIONS

The analysis of program learning outcomes indicates that engineering programs place more emphasis on skills related to design, including problem solving, developing models, and using models. This indicates a demand for the higher cognitive levels associated with design activities. Technology programs have a heavier emphasis on developing hands-on skills not common in engineering programs, such as troubleshooting, testing, installing, and maintaining. These skills are associated with mid-level cognitive processes. This distinction is clearly illustrated in the use of modelling in the study of engineering and technology: engineering programs teach their students to develop and validate models, while technology programs focus on developing skills related to selecting and applying appropriate models. The selection and application of models and procedures results in a higher focus on topics of quality control, codes and standards, and health and safety, which is evident in technology program learning outcomes. This finding may be a result of preparing students for different work environments. For curriculum mapping, the use of Queen's University as a benchmark was validated by the high level of mapping between Queen's and the other engineering programs at the representative institutions for both disciplines. From the analysis, less than 40% of the courses in representative first-year Mechanical and first- and second-year Electrical/Electronics Technology programs had equivalency to courses in engineering degree programs. Though some specific technology programs in both disciplines exhibited more than 50% coverage in the first two years, the majority of technology programs had lower levels of mapping. This suggests that, in terms of content, only certain introductory-level courses will have equivalency (typically courses in calculus, discipline-specific engineering science, and complementary studies), and consideration of the specific institution a student is transferring from is warranted. Furthermore, the differences between Mechanical and Electrical/Electronics Technology programs highlight the need to design discipline-specific transfer strategies. It was interesting to note the higher level of mapping for second-year Mechanical Technology. This may be a result of engineering programs covering a wide range of engineering science topics in first year, while discipline-specific courses begin in second year. Technology programs tend to have a discipline-specific focus throughout the entire curriculum, hence the greater coverage of a second-year engineering degree. Additionally, institutions should be aware that coverage of first- and second-year courses in engineering curriculums may have a contribution from courses in third-year technology curriculums. Limitations of the study include the number of institutions involved, access to clear and complete course descriptions and syllabi, and the language used to describe program learning outcomes. In particular, it was clear from the analysis that learning outcomes from technology programs are often meticulously written, while those of engineering programs tend to be more generic.
However, engineering programs have only recently started using learning outcomes on a wide scale, and it is expected that learning outcomes will improve quickly to clearly and accurately reflect course learning. Overall, the combined analysis of program learning outcomes and curriculum mapping demonstrates some level of alignment between engineering and technology programs in the first two years of study. Though the emphasis on cognitive skills may differ, for course content that matches, there are enough similarities to suggest students may have acquired the necessary understanding and skills expected in introductory-level courses. Therefore, for students transferring within their respective discipline, a Prior Learning Assessment and Recognition (PLAR) can be administered, in which students are given tasks to evaluate missing skills, similar to the process currently used in British Columbia. With this approach, programs can develop appropriate paths of study for incoming students, and those who have demonstrated proficiency in the PLAR can then focus on the courses that were not covered in their previous program. Furthermore, the conclusions from this study inform the development of bridging programs, which can focus on addressing the gaps in skills and knowledge of incoming transfer students. By ensuring that hybrid bridging subjects cover all necessary content and program environment requirements, students will be able to complete the remainder of their degrees while still meeting CEAB accreditation criteria.
The effectiveness of biofertilizer on edamame productivity

This study aimed to determine the effectiveness of bio-fertilizer at different concentrations on the productivity of the soybean "Edamame" (Glycine max). Productivity comprised the number of pods, empty pods, filled pods, and edamame weight per plant. The bio-fertilizer in this study consists of a microbial consortium (Lactobacillus, Pseudomonas, Bacillus, Saccharomyces, Rhizobium, Azotobacter, Azospirillum, and Cellulomonas). The treatments consisted of 3 bio-fertilizer concentrations (25%, 50%, and 75%), as well as a negative control and a positive control (100% chemical fertilizer, equivalent to 5 g/plant). The results showed that treatment B3 (bio-fertilizer 75%) gave better productivity results than the other treatments.

Introduction

Bio-fertilizer is a fertilizer containing microbes, including Bacillus, Pseudomonas, Rhizobium, Azotobacter, Azospirillum, Mycorrhiza, and Trichoderma. These microbes can be present singly or as a combination of several types, called a microbial consortium. Microbes used as biological fertilizer are able to spur plant growth, fix nitrogen, dissolve phosphate, and inhibit the growth of plant diseases [1][2]. Bio-fertilizer is one of the numerous components that are critical to improving the system for the supply of nutrients in agriculture. Several types of soil microbes are often used as bio-fertilizer, among others non-symbiotic and symbiotic N-fixing bacteria, molds, mycorrhizae, and phosphate-solubilizing bacteria. When utilized together and correctly in an organic farming system, soil microbes can positively impact the availability of nutrients needed by crops and the control of diseases and pests, and can increase crop growth and productivity [3]. Edamame (Glycine max (L.)) is a potential crop because it has an average production of 3.5 tons ha⁻¹, higher than regular soybean, whose average production is 1.7-3.2 tons ha⁻¹ [1]. Besides, edamame also has extensive export market opportunities: export demand from Japan reaches 100,000 tons per year, and from the US 7,000 tons per year. Meanwhile, Indonesia meets only 3% of Japan's market needs, while 97% is filled by China and Taiwan [4]. Edamame can be consumed as a vegetable when the young pods are still green. Edamame has a complete protein, with quality equivalent to the protein in milk, eggs, or meat. It provides complete protein, dietary fiber, and micronutrients, particularly folate, manganese, phosphorus, and vitamin K. The balance of fatty acids in 100 grams of edamame is 361 mg of omega-3 fatty acids to 1794 mg of omega-6 fatty acids. Edamame also contains anti-cholesterol compounds, making it well suited for consumption. Chemical fertilizers are the types of fertilizer most frequently used in planting edamame soybean. The use of chemical fertilizers in excess doses currently poses a serious problem: it is harmful not only to agriculture but also to human health, as farmland ecosystems are damaged, natural predators disappear, and the balance of nutrient elements in the soil is disturbed [5]. The dose of chemical fertilizer used on edamame has been too high, so bio-fertilizer was used with the intent of reducing the high dose of chemical fertilizer. Research on using biofertilizer to increase edamame productivity is still scarce.
Edamame soybean farmers mostly use chemical fertilizer at a very high dose, 600 kg per hectare, which is feared to damage soil fertility, so an alternative, bio-fertilizer, is required to minimize the use of chemical fertilizer. Based on this background, microbial consortium biofertilizer technology needs to be developed to increase the productivity of the edamame crop (Glycine max (L.) Merrill).

Biofertilizer

Biofertilizer is a biologically active product consisting of microbes that can increase the efficiency of fertilization, soil health, and fertility. Microbes harnessed to increase soil fertility are known as biofertilizer (microbial fertilizers). A wide range of benefits can be obtained from the use of microbes: (1) providing a source of nutrients for plants; (2) protecting the roots from pests and diseases; (3) stimulating the development of a complete root system and extending the roots; (4) stimulating mitosis in meristem tissue at the growing points of shoots, buds, and stolons; (5) acting as an antidote to the toxicity of some heavy metals; (6) producing growth-regulating metabolites; and (7) acting as a bio-activator that breaks down organic materials. The microbes contained in biological fertilizer/biofertilizer are thus useful as N₂ fixers, plant growth promoters, phosphate solubilizers, and decomposers of organic matter [5].

Nitrogen-fixing microbes

Nitrogen (N) is the most crucial element for plants and plays a role in their vegetative growth. Nitrogen in the soil derives, among others, from organic materials, the binding of N from the air by microbes, fertilizers, and rain. Soil nitrogen content is generally low, so N should always be added in the form of manure or other sources at the beginning of each planting. In addition to its low level, N in the soil is dynamic (easily changing from one form into others, such as NH₄⁺ to NO₃⁻, NO, N₂O, and N₂) and is easily volatilized and leached away with drainage water [3]. Different types of bacteria carry out biological N₂ fixation, among others rhizobia, cyanobacteria (blue-green algae), photoautotrophic bacteria on the surface of stagnant water, and heterotrophic bacteria in the soil and root zone. These bacteria can bind nitrogen from the air, either in symbiosis (root-nodulating bacteria) or non-symbiotically (free-living nitrogen-fixing rhizobacteria). Utilization of N₂-fixing bacteria, whether applied through the soil or sprayed on the plant, is capable of improving the efficiency of fertilization. Using N₂-fixing bacteria can potentially reduce the need for synthetic N fertilizers, improving production and farming income with cheaper inputs. Azotobacter is an N₂-fixing bacterium able to produce growth-promoting substances such as gibberellin, cytokinin, and indole-acetic acid, and so it can spur root growth. The population of Azotobacter in soils is affected by fertilization and plant type [6].

Phosphate-solubilizing microbes

Phosphorus (P) in the soil consists of organic and inorganic P that comes from organic materials and P-containing minerals (apatite). The availability of soil P to plants is low because P is bound by clay, organic material, and oxides of Fe and Al in soils with low pH (acid soils with pH 4-5.5), and by Ca and Mg in soils with high pH (neutral and alkaline soils with pH 7-8). Soils generally have a near-neutral pH between 5.5 and 6.5, so that the availability of P is not a problem.
Due to massive P fertilization continually over the years, P has accumulated in the soil [3]. One alternative for increasing the efficiency of phosphate fertilizer, and for overcoming the low availability or the saturation of phosphate in the soil, is to utilize phosphate-solubilizing microbes that convert unavailable phosphate into forms available to the plant. As expressed above, the release of phosphate from iron phosphate on waterlogged lands can occur through the reduction of iron, increasing the availability of phosphate for the plant. This increasing P availability in the soil also explains why rice requires relatively little P fertilizer. Various types of phosphate-solubilizing microbes, such as Pseudomonas, Micrococcus, Bacillus, Flavobacterium, Penicillium, Fusarium, Sclerotium, and Aspergillus, have high potential for dissolving bound phosphate so that it becomes available in the soil [7].

Growth-promoting microbes

Growth-promoting bacteria directly produce phytohormones able to induce growth. Increased plant growth can occur when a rhizobacterium produces metabolites that act as phytohormones, enhancing plant growth directly. Besides phytohormones, the resulting metabolites include antibiotics, siderophores, cyanide, and others. The phytohormones or growth hormones produced can include auxin, ethylene, cytokinin, and abscisic acid. Growth-promoting bacteria also inhibit pathogens indirectly through the synthesis of antibiotics, acting as a biological control. Some endophytes are mutualistic symbionts of their host plants, increasing resilience against insect pests through the production of toxins and anti-microbial compounds; for example, the fungus Pestalotiopsis microspora from Taxus wallachiana produces taxol (an anti-cancer substance) [6].

Organic matter-decomposing microbes

Microorganisms that decompose organic material, or bio-decomposers, are microorganisms that break down lignin, fiber, and organic compounds containing nitrogen and carbon from organic matter (the organic remnants of plants or animals that have died). These decomposers include Trichoderma reesei, T. harzianum, T. koningii, Phanerochaete chrysosporium, Cellulomonas, Pseudomonas, Thermospora, Aspergillus niger, A. terreus, Penicillium, and Streptomyces. Fungi generally have a better ability than bacteria to decompose plant remnants (cellulose, hemicellulose, and lignin); in general, microbes able to degrade cellulose are also able to degrade hemicellulose. The group of fungi shows the most apparent bio-decomposition activity, quickly decomposing soil organic matter into simpler organic compounds that also act as primary ion exchangers, storing and releasing nutrients around the plant [8].

Edamame Soybean

Edamame soybean is a plant species included in the category of vegetables (green vegetable soybean); in its native country, Japan, edamame is eaten as a vegetable and a snack. Edamame soybeans have high nutritional value: per 100 g, the seeds contain 582 kcal, 11.4 g protein, 7.4 g carbohydrate, 6.6 g fat, 100 mg vitamin A (beta-carotene), 0.27 mg vitamin B1, 0.14 mg B2, 1 mg B3, and 27 mg vitamin C, as well as minerals such as phosphorus (140 mg), calcium (70 mg), iron (1.7 mg), and potassium (140 mg). Edamame soybeans are also rich in isoflavones, organic compounds that act as antioxidants and help prevent cancer. Edamame (Glycine max (L.) Merr.)
is a plant that needs to be developed because it has an average production of 3.5 tons ha⁻¹, higher than regular soybean production, which averages 1.7-3.2 tons ha⁻¹. Besides, edamame also has extensive export market opportunities [4].

Plant Productivity

Crop productivity is the ability of a plant to produce product/results, as seen from measurements of dry mass. In modern production cultivation, crop production aims to increase and maximize the growth rate through genetic and environmental manipulation, to obtain maximum yields. In other words, the production of a crop can be interpreted as the result obtained from a plant after the growth process is completed [7].

Producing the Biofertilizer

A 2% molasses solution was made (140 mL molasses in 6860 mL water), heated to boiling, and left to cool. The respective bacterial and yeast cultures were then added; the total volume of cultures incorporated was 10% (700 mL of microbial consortium in 6300 mL of 2% molasses). Counting of the biological fertilizer in the molasses medium was done by total plate count (TPC) over a dilution series down to the smallest dilution. From each dilution in the series, 1 mL was pour-plated on Nutrient Agar medium (for bacterial TPC) and Potato Dextrose Agar (for yeast TPC), incubated at 37 °C for 24 hours, and the TPC calculated.

Measurement of crop productivity

Edamame was harvested when most of the leaves had yellowed, the pods had turned dark green, and the pods looked solidly filled. The harvest age of edamame is around 75-100 days. Plant productivity data collected included the number of pods, empty pods, filled pods, and edamame weight per plant.

Methods and analysis

This experimental research used a completely randomized design with 4 treatments (0%, 25%, 50%, and 75%). Each treatment was repeated 3 times, and every repetition consisted of 5 plants. The data were analyzed using ANOVA at the 5% significance level and continued with Duncan's test to compare treatments. Before the ANOVA, homogeneity and normality tests were performed.

Result

Edamame seeds were planted on May 29, 2017; three days before planting, the land was watered so that it would be moist when the seeds were sown. Four days after planting, the plastic covering of the dike wall was opened so as not to interfere with germination. The edamame plants flowered almost simultaneously across all treatments, when the plants were 30 days old. Productivity data collected included the number of pods per plant, the number of empty pods, the number of filled pods, and the fresh edamame weight per plant. Harvesting was done on 5-10 August 2017, when the edamame plants were 67-72 days old; it was spread over these days because the maturity of the individual plants was not the same. Data were analyzed with ANOVA using SPSS version 16, with the following results:

Table 1. Results of the analysis of plant productivity with application of the microbial consortium biofertilizer. Note: numbers followed by the same letter in the same column are not significantly different, while numbers followed by different letters in the same column are significantly different.
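Before the results, a minimal sketch of the analysis pipeline named in the Methods: normality and homogeneity pre-tests followed by one-way ANOVA. The study used SPSS 16; the Python below (scipy) is a stand-in, and the replicate values are hypothetical, since the raw measurements are not published in the text. Duncan's multiple range test is not available in scipy; SPSS provides it, and Tukey's HSD is a common substitute.

import numpy as np
from scipy import stats

# Hypothetical edamame weights (g/plant), 3 replicates per treatment.
b0 = np.array([118, 121, 115])  # 0% biofertilizer (control)
b1 = np.array([125, 130, 127])  # 25%
b2 = np.array([133, 131, 136])  # 50%
b3 = np.array([142, 139, 145])  # 75%

# Pre-tests: Shapiro-Wilk normality per group, Levene homogeneity.
for name, g in [("b0", b0), ("b1", b1), ("b2", b2), ("b3", b3)]:
    print(name, "Shapiro p =", stats.shapiro(g).pvalue)
print("Levene p =", stats.levene(b0, b1, b2, b3).pvalue)

# One-way ANOVA at the 5% level; a significant result (p < 0.05)
# is then followed by a multiple-range test on the group means.
f, p = stats.f_oneway(b0, b1, b2, b3)
print(f"F = {f:.2f}, p = {p:.4f}")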
Based on the results of the analysis, all biofertilizer data on the productivity of the edamame soybean plants were normally distributed and homogeneous (p > 0.05), so the analysis was continued with ANOVA. The ANOVA showed that the significance for the number of pods was p = 0.116, so biofertilizer administration had no significant effect on the number of edamame soybean pods (p > 0.05), and these data were not followed up with Duncan's test. For the other productivity data, the following results were obtained: the number of empty pods had a significance value of p = 0.001, the number of filled pods p = 0.04, and the edamame soybean weight p = 0.024. It can therefore be said that the biofertilizer significantly affected the number of empty pods (the smallest value), the number of filled pods, and the weight of edamame soybeans per plant (p < 0.05). Since these data showed a significant effect, they were continued with the Duncan test; the Duncan test results can be observed in Table 2. The table shows that, of all the biofertilizer concentration variants, B3 (75% biofertilizer concentration) is significantly different from the other variations, so it can be said that the best biofertilizer concentration for edamame soybean productivity is 75%.

Discussion

This research used a microbial consortium biofertilizer comprising 8 kinds of microorganisms: Rhizobium spp., Azotobacter sp., Azospirillum sp., Bacillus sp., Pseudomonas sp., Cellulomonas sp., Lactobacillus sp., and Saccharomyces sp. To date, several bacteria have been reported to have a beneficial influence on plants and are classified in the group of PGPR (Plant Growth-Promoting Rhizobacteria), including the genera Azoarcus, Azospirillum, Azotobacter, Arthrobacter, Bacillus, Clostridium, Enterobacter, Pseudomonas, Gluconoacetobacter, and Serratia [9]. The microbes used in this study belong to the PGPR group; besides providing nutrients for plants, they can also produce hormones that spur plant growth. Azotobacter, in addition to binding N from the air, is also able to produce indole-acetic acid (IAA) in an amount directly proportional to its density; it can also generate cytokinin, gibberellin, and abscisic acid [10]. Among the microbes that make up this biofertilizer are N-fixing bacteria (Rhizobium sp., Azotobacter sp., and Azospirillum sp.). Nitrogen (N) is the most important element for plants and plays a role in their vegetative growth. Nitrogen in the soil comes from organic matter, the binding of N from the air by microbes, fertilizers, and rainwater. The nitrogen contained in the soil is generally low, so it must always be added in the form of fertilizer or other sources at the beginning of each crop. In addition to its low level, N in the soil is dynamic (easily changing from one form to another, such as NH₄⁺ to NO₃⁻, NO, N₂O, and N₂) and easily volatilizes and is washed away with drainage water [3].
The microbial consortium biofertilizer used in this study includes N-fixing bacteria that can bind nitrogen from the air, either in symbiosis (root-nodulating bacteria) or non-symbiotically (free-living nitrogen-fixing rhizobacteria), and so can supply the nutrients needed by plants. In addition, the biofertilizer in this study also used phosphate (P)-solubilizing bacteria (Pseudomonas sp., Lactobacillus sp.). The availability of soil P to plants is low because P is bound by clay, organic matter, and Fe and Al oxides in soils with low pH (acid soils with pH 4-5.5), and by Ca and Mg in soils with high pH (neutral and alkaline soils with pH 7-8). One alternative for overcoming the low availability or the saturation of phosphate in the soil is to utilize phosphate-solubilizing microbes that convert unavailable phosphate into available forms. Thus, Pseudomonas sp. and Lactobacillus sp. can dissolve bound phosphate into free phosphate that can be absorbed directly by plants. The consortium also contains microbes (Saccharomyces sp., Cellulomonas sp.) that can break down lignin, fiber, and organic compounds containing nitrogen and carbon from organic matter (the organic remnants of plant or animal tissue that has died). Generally, these microbes break down soil organic matter into simpler organic compounds, which also act as primary ion exchangers, storing and releasing nutrients around the plant [11]. The microbes can therefore provide the nutrient elements required by plants. A microbial consortium biofertilizer can provide soil organic matter, which is very beneficial in restoring the physical, chemical, and biological fertility of the soil, and is useful as a binder of soil particles through the soil aggregation process. Soil aggregation can produce micro-pore spaces, so soil aeration becomes better, creating the optimum state for the absorption of nutrient elements useful to the plant. Organic matter influences chemical soil fertility through, among others, the cation and anion exchange capacity, and increases soil microbial activity through the decomposition and mineralization of organic materials. Besides, organic matter can absorb and retain water, which in turn affects the accumulation of food substances and the products of metabolism stored in the stem, leaves, fruit, and seeds [12]. The nutrient elements that can increase edamame soybean productivity were produced by the microbes found in the microbial consortium biofertilizer; thus the application of biofertilizer at a 75% microbial consortium concentration (B3) shows the best productivity results compared with the other concentrations. This is because the more concentrated the biofertilizer solution, the more microorganisms it contains, so the microbes can provide nutrient elements in abundance and the edamame plants can directly absorb those nutrients to improve their productivity. Based on the results of this research, it is known that the administration of microbial consortium biofertilizer can increase plant productivity in terms of the number of pods per plant, the number of filled pods, the number of empty pods, and the edamame weight, and the best treatment was the 75% (B3) biofertilizer concentration.

Conclusion

The application of biofertilizer affects productivity in edamame soybean; the 75% biofertilizer concentration (B3) performed best compared with the other concentrations.
Public panic over Covid-19 Outbreak: Criticism toward panic theory in collective behavior study

This article evaluates the issue of public panic over the coronavirus outbreak in Indonesia from March to April 2020. The study was conducted through secondary data analysis sourced from credible online media, and aimed to expand the scope of panic research as well as to criticize the related theories in the study of collective behavior. The results showed inconsistencies in how public panic occurs in a crowd. This includes panic while dealing with natural disasters, terror, sinking ships, fires, collapsed buildings, or other physical threats, and also panic over invisible dangers, as observed with viruses. In addition, the individual disposition persists for a long time on occasions where the authorities, both political and academic, fail to immediately strategize a convincing countermeasure. Based on these findings, the study provides a critique of several theories, comprising crowd panic, emerging norms, and moral panic.

These moments also enhanced awareness of changes in individual behavior toward collective respect for government decisions, such as the policy of Large-Scale Social Restrictions (PSBB), a form of lockdown, applied with a few exceptions in some pandemic epicenter cities. These policies were accompanied by social policies intended to prevent an increase in poverty, pursued by accelerating the disbursement of the Employment Card (Kartu Pekerja), direct cash assistance, and non-cash food assistance, exempting the poor from electricity costs, and continuing existing social assistance programs. Public panic was sustained, adjusting to news developments related to the spread of the virus. These facts confirm that the public consists of active media users, and huge waves of panic heightened tension, conflict, and resistance to government decisions. Moreover, the responses of regional heads were more about image than about coordination and resource utilization in managing the outbreak. The Governor of DKI Jakarta in mid-March 2020 restricted mass transportation (especially TransJakarta), prohibited inbound and outbound bus movements, proposed the termination of commuter train services, and prepared a lockdown scenario. Subsequently, a meeting was set with the Minister of Home Affairs, delegated by the President, at the City Hall to cancel these decisions and plans (https://regional.kompas.com, 2020/03/17/). Similar events were observed with the Mayor of Tegal (tempo.co, 1327529/), the Regent of Mamberamo Tengah, the Mayor of Sorong, and the Governor of Papua (cnnindonesia.com, 20200327). This power contestation lingered for several weeks, as the central government was considered sluggish. The Regent of Mamberamo, responding to the lockdown cancellation request, declared: "We are more knowledgeable of this area; the central government is encouraged to handle problems at the center" (rmol.id, 2020/04/02). Media representation, including on social media, inevitably triggered widespread panic among individuals. The coronavirus differs from other viruses that are known to be prevented by inculcating personal hygiene and maintaining a healthy lifestyle.
Initially, the methods proposed by the health authorities, including avoiding gatherings, keeping a physical distance, and thoroughly washing hands with soap, were perceived as insignificant and contradictory to regular traditions. Gatherings, particularly in places of worship, are predominant practices of Indonesians encouraged by every religion, and other products and economic processes also revolve around people meeting, e.g., in local markets, cafes, and malls. Personal customs such as shaking hands, hugging, cheek-kissing, and going home to one's village are equally common routines, known for hundreds of years to foster connection between people. Consequently, almost every Indonesian found it complicated to shun gatherings, handshakes, or other social arrangements. The conflict between the philosophies of logic and faith, involving religious leaders and health experts, appeared fierce as the police struggled to implement the PSBB rules despite opposition from several spiritual authorities. For this reason, the government mobilized its entire forces, including the army and civil instruments, to enforce the policy. By this time, the panic level had continued to increase in areas including panic shopping, the eviction of medical personnel from boarding houses and rented apartments, the rejection of the bodies of those who died of COVID-19, and the fear of layoffs. These are collective behaviors that differ from what panic theories describe. The theories of crowd panic, contagion, emerging norms, and moral panic that dominate the study of collective behavior are unable to comprehend this pandemic panic phenomenon.

Purpose and Method

The purpose of this research, therefore, is to critique panic theories in collective behavior using secondary data analysis. Data were obtained from reports by 3 mass media agencies with national credibility, and the analysis was conducted by comparing facts and theories. Further comprehension of the resulting concepts was provided after the parameters had been confirmed. This method of analysis was selected due to the availability of abundant data (Schwab, 2019), although the corona outbreak did not permit the researchers to visit several affected locations.

Data: Public Panic Bubbles

Precisely on Monday, 2 March 2020 at 11:35 WIB, the Indonesian president, Joko Widodo, announced that two citizens had tested positive for Covid-19. By 13:00, a large crowd was observed at a shopping center in North Jakarta attempting to stock up on food supplies, including rice, sugar, noodles, eggs, oil, milk, snacks, and even spices. On average, each buyer was spotted with 2 trolleys in a long queue estimated at 20 m, with 5-8 cashiers (cnnindonesia.com/ekonomi/20200302). Other items, including masks, hand sanitizers, wipes, multivitamins, and alcohol, were bought up at drugstores and pharmacies. The prices of these commodities multiplied tenfold: a standard mask usually sold for 40 thousand rupiahs per box increased to 350-400 thousand. Similar incidents occurred in the suburbs of Jakarta such as Bekasi, Tangerang, Depok, and Bogor (https://m.m.detik.com/d-4922328; https://regional.kompas.com/read/2020/03/02). The growth in public panic was also attributed to live television reports from the home of a suspected Covid-19 patient, featuring a reporter wearing an anti-tear-gas mask and showing a "police line". Following health experts' opinions, large crowds were witnessed in shopping centres storing food reserves for 2 to 3 weeks.
This situation prompted the Indonesian Chief of Police to issue letter number B/1872/III/Res.2.1/2020/Bareskrim requesting all minimarkets, malls, and retail stores to limit sales of staples, especially rice, cooking oil, sugar, and instant noodles. Furthermore, the government's disposition in handling the crisis appeared confusing and subsequently influenced the rate of panic. For instance, in early March 2020, health authorities affirmed that masks were intended only for sick people; three weeks later, masks became mandatory for all residents. In addition, the virus was initially thought to transmit only through fluids but, after a month, was said to spread by air. This attitude was exploited by several religious leaders to reject the PSBB policy, especially the prohibition of worship services. The clash between government policies and expressions of faith raised people's emotions about religious gatherings. The rejection of the bodies of Covid-19 victims further intensified public panic. This first occurred in Semarang when a nurse died: some community leaders steered the residents to block the ambulance, and although the family and health authorities submitted a clarification that the body was virus-free, the funeral was denied, and the body was finally returned to the hospital (cnnindonesia.com/nasional/20200410). Similar cases were also reported in Lampung and Medan. The community raised banners stressing the rejection of Covid-19 funerals, and residents took turns guarding the entrance to the cemetery, which had been blocked with stones and wooden blocks to prevent ambulances from gaining access (https://m.detik.com/news/berita/d-4957959). Informal leaders in Pasuruan mobilized youths armed with machetes to block the hearse (https://regional.kompas.com/read/2020/04/13). Residents of 6 villages in Bolaang Mongondow declined a body that was about to be buried; the hearse went to several villages looking for a burial place but was refused. A village head highlighted fears of infecting the entire community with the virus, which originated in Wuhan, if a Covid-19 body were allowed to be buried (https://regional.kompas.com/read/2020/04/07). More tragically, the tomb of a victim was demolished in Banyumas, where the body had previously been refused for two days by several villages. The rejection was directly led by a village head, while the Banyumas Regent took responsibility for the demolition (https://regional.kompas.com/read/2020/04/03). Various panic behaviors were also displayed by people discriminating against medical personnel. Health workers were evicted by the owners of their residences (boarding houses, rented houses) for being considered virus carriers. An initial event occurred in Sukoharjo district, where three nurses were ejected by a homestay owner who was herself a midwife; the decision was triggered by the insistence of other residents worried about contracting the virus (https://news.detik.com/berita/d-4995283). Further discrimination was recorded in several big cities, including Jakarta, Palembang, Banda Aceh, Lampung, and Medan. Medical personnel such as doctors and nurses were poorly treated by neighbors and evicted; the hospitals where they worked became their lodging before local governments provided special hotels (https://megapolitan.kompas.com/read/2020/03/25). The panic also hit financial market players.
The exchange rate of the rupiah against several foreign currencies, including the United States dollar, the yen, and the euro, dropped significantly in just a few weeks; the rupiah fell by 2,250 against the US dollar in 3 weeks. Capital outflows also scaled up, immediately followed by a declining national economy. Thousands of companies reported plans to cut jobs. The government responded with incentives, such as delaying installment payments for 6 months and providing tax incentives valued at 123.01 trillion rupiahs, or a 30% discount, in an effort to cushion the effects, although layoffs were inevitable. The number of laid-off workers up until mid-May 2020 reached 15 million (https://www.cnnindonesia.com/ekonomi/20200501181726). The enactment of the PSBB policy in big cities created a complicated situation for laid-off workers facing a high cost of living. Many were stranded and unable to return to their hometowns due to travel prohibitions. Some devised various ways to survive, such as posing as homeless people or looking for unguarded routes or short cuts. Others were forced to live on the streets or in uncontrolled public spaces, e.g., under bridges and in empty government buildings, due to inability to pay rent. Meanwhile, those fortunate enough to have returned home were rejected by several residents for fear of contracting the virus, and a few were indeed discovered to be carriers; hospitals in small towns reported treating an influx of Covid-19 patients after Eid Al-Fitr 1441 H.

[Figure: weekly occurrence of three cases of panic (panic buying, rejection of bodies, and expulsion of medical personnel) from week 1 of March to week 4 of April 2020.]

Data Analysis

Several concepts and theories have been developed to explain human behavior in the face of emergencies. The earliest proposal, by Le Bon in 1895, indicated a high tendency toward a unified mind among individuals in crowds exposed to strong group pressure and panic situations, leading to irrational behavior (Le Bon, 2004). This postulation is the foundation for studies of collective behavior. Related theories developed in the early twentieth century argue for the possibility of identity loss by individuals confronted with physical threats, followed by immersion in group emotions, the elimination of rational considerations, and an increase in mob mentality (Drury, 2002). This theory is limited in explaining public panic over the COVID-19 outbreak. Following the World Health Organization (WHO) recommendations, people are expected to maintain a distance of 1-2 meters from each other in crowds, since the coronavirus is easily transmitted through close physical contact. Under such conditions, the group or mass pressure that Le Bon's concept requires cannot be felt by communities. People are instead relatively independent in acting and making decisions on a broad, macro scope: individual identity is protected, and the collective mind of Le Bon's description does not form. Contagion theory stipulates that collective behavior is irrational and a product of interpersonal transmission. Le Bon (2004) analyzed specific subconscious processes whereby information or beliefs spread throughout social groups in the form of mass transmission.
Individuals beyond the main actor were therefore considered passive, with a tendency to be easily hypnotized. Although this theory has been widely refuted by empirical studies, it remains popular because it readily offers an explanation for several kinds of events, e.g., mass behavior in natural disasters, including the tsunamis in Aceh in 2004 and Palu in 2018. On this account, crowds influence individuals, instigating the development of a mob mentality and the loss of the ability to think; the crowd becomes trapped in the experience of a small group of people and goes on to display emotional and irrational behavior. From the 1960s, sociologists working in this field conducted empirical research that rejected these classical concepts and theories, as observed in Smelser (1963) and Turner and Killian (1973). The arguments they constructed ran contrary to classical theory: Smelser attributed collective behavior to generalized beliefs associated with demands for change, opinions tied to political and economic conditions, deviance, violations of norms, and the like. Contagion, moreover, is driven by ongoing interaction in which the parties interpret the meaning of one another's actions, as demonstrated by Blumer (1969) and by the recent studies of Burgess et al. (2018) in educational institutions. Public panic over the dangers of COVID-19 was not transmitted quickly, nor did it produce a mob mentality or the hypnosis of individuals. People's social identities remained very clear, and they retained the capacity to mobilize the measures needed to counter the danger of the outbreak. The transmission of panic to various places was nonetheless evident in the expulsion of nurses from private homes, the rejection of sufferers' bodies in various areas, the refusal to admit residents returning to their villages, and other forms. The patients' identities were not lost, and the capacity for rational thought itself facilitated the panic. The informal leaders in Semarang who mobilized residents to block the ambulance carrying the corpse argued: 'if there is a tomb of a COVID-19 sufferer in this village, we are all threatened'. Despite the logical fallacy, a kind of rational reasoning can be discerned in how the citizens reportedly moved. Amid these doubts about the accuracy of crowd theory and contagion theory, emerging norm theory arose with Turner and Killian (1973). It links group behaviors, including crowd formation, riots, and others, to the development of new norms in response to crisis or panic. These novel customs are unstable, appear immediately, and justify individuals' behavior in response to external factors perceived to threaten safety. They are formed through brief experience and are modified as the crowd develops (Lemonik Arthur, 2013). The process is initiated by sudden exposure to a new or foreign situation, perceived as significantly different from daily events. The norms then justify members' actions, both to survive and to escape quickly from threats; collective panic thus results in joint action with no fixed pattern. The novel customs in this study are not norms in the usual sense of social institutions tens or hundreds of years old.
These are only quick 'agreements' between members, aimed at accelerating escape from threats. Emerging norm theory has its shortcomings, and various studies have shown that old norms prevail in panicking crowds, with public responses contrary to the theory's predictions. For example, during the terrorist attack on the World Trade Center in 2001, people prioritized saving women and children and demonstrated acts of mutual assistance (Antao, 2013). Mass panic, in fact, rarely occurs and is short-lived when it does appear. The persistence of rational dispositions in extraordinary events indicates that panic does not grip an entire community; this has been observed in the WTC terror, the sinking of the Titanic, the Bali bombings of 2002 and 2004 (Gurtner, 2004; Henderson, 2008), and the Jakarta bombing of 2016. Panic shopping, the rejection of bodies, and the expulsion of medical personnel from private homes were not customary behaviors, yet these actions were performed on rational grounds of protecting the community. New norms nevertheless emerged in confronting the crisis, following the government's suggestions after it heeded the advice of health authorities: washing hands with soap, wearing masks, maintaining physical distance from one another, avoiding crowds, and staying at home. These norms are expected to become the daily behavior of the community, so as to successfully form a healthy lifestyle. The public panic demonstrated in dealing with Covid-19 thus did not eliminate old norms, but rather drew on them to help people manage the crisis. In several places, Indonesians possess strong social capital, including cooperation, arisan, and tulung-tinulung (mutual help); these help the government deal with the economic crisis the outbreak is estimated to have caused. The theory of moral panic was generated by the public reaction to threats of declining morality resulting from crowd behavior. The concept was postulated by the sociologist Stanley Cohen (1973) after his examination of mass clashes on the coasts of southern England. Cohen interpreted such events or situations as perceived threats to the interests of the wider community, rhetorically amplified by the media. Such representations create 'folk devils' and victims, including societal morality itself, and the resulting publicity generates public panic over threats to moral and social values. This condition prompts government action: rules are formulated to prevent wider public panic, control the behavior of groups, and temper public anger (Eversman and Bird, 2017; Mannion and Small, 2019). The public panic observed in the first half of 2020 was not a reaction to the violation of community norms, which prevents its classification as a moral panic in Cohen's sense. The invisible danger posed by the virus, moreover, increased the challenges experienced in prevention and resistance, and its continuous spread to an astonishing number of sufferers called the scientific authority of health institutions into question. Peace and trust in a community can only be realized through prompt and effective management; the delayed deployment of a large state budget to answer the problems increased public cynicism and further reduced trust in the government and health authorities. Even so, these bodies were not cast as folk devils, nor was there discrimination against ethnic Chinese.
The term 'folk devils' is better reserved for terrorists who threaten the social order (Dingley & Hermann, 2017) and for certain organizations or groups (Alonso & Delgado, 2020). The analysis above brings the characteristics of public panic in dealing with Covid-19 into sharper focus. First, the cause is a virus, not a natural disaster, fire, bombing, terror attack, sinking ship, or other accident. The impending danger is therefore invisible and requires special countermeasures and expertise. The panic bubble kept growing once the holders of academically certified expertise themselves became victims, leaving no reliable power to neutralize the threat. Each person was instead expected to practice self-defense by following the prevention protocol, although freedom from infection was by no means guaranteed. This created an atmosphere of mutual suspicion in which any individual could be a potential source of transmission: despite every reason to care for one another, thousands of people were separated from family members, and friendship and worship were curtailed. Second, individuals removed from crowds were expected to practice social and physical distancing. Recommendations to keep one's distance are not easily realized among Indonesians, whose primary institutions are built on communal groups such as cooperation, arisan, neighborhood patrols, and community service; asocial behavior is commonly opposed by the community as contrary to cultural values. Third, the panicking public remained free to move about, despite the restrictions, whether large-scale or based on local social limitations. This freedom allowed individuals to act rationally: panic spending, the rejection of bodies, the expulsion of medical personnel, the falling value of the rupiah, and layoffs were all carried out on the basis of deliberate calculation. Fourth, given the macro scope and long duration of the threat, the community deliberated on preferable ways to overcome it. The incidence of Covid-19 thus enriches the study of collective behavior with a type of public panic different from those introduced by Le Bon (2004), Turner and Killian (1973) and Cohen (1973): one occurring on a macro scale and over a long duration, among individuals of limited rationality facing an invisible danger, while the authorities, both political and academic, are unable to devise an immediately convincing remedy.

Conclusion

The analysis above yields new findings for the study of collective behavior, showing that panic in a crowd is not a certainty, even in situations where people scramble to find a way out of life-threatening danger. Similar manifestations have also appeared among people outside any limited locus, in open spaces, as observed in the coronavirus experience. The efforts made were aimed at building a defense against invisible viral attack, and the human disposition toward togetherness as a social creature had to be modified into keeping one's distance and wearing protective equipment. Behavior of limited rationality was demonstrated by a fraction of community members, through the hoarding of staples and medical equipment, the rejection of corpses, the expulsion of medical personnel from neighborhoods, and market panic. People as a whole, however, were able to think rationally and to convert panic into vigilance.
Government efforts, assisted by religious leaders, were successful in controlling individual behavior and also played an important role in preventing dangerous situations. In addition, a new lifestyle, previously campaigned for in the mid-2000s and popularly known as the Clean and Healthy Lifestyle (PHBS), was established as the new norm under the Covid-19 protocol. This study
2020-09-10T10:23:05.897Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "008994ebf25c6d939bbb97ad9f9c8fabb7f870b6", "oa_license": null, "oa_url": "https://techniumscience.com/index.php/socialsciences/article/download/1355/518", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1892245b216fb54a62b93571c468ab11c7997418", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Political Science" ] }
119403042
pes2o/s2orc
v3-fos-license
Big Bang Nucleosynthesis with Stable $^8$Be and the Primordial Lithium Problem A change in the fundamental constants of nature or plasma effects in the early universe could stabilize $^8$Be against decay into two $^4$He nuclei. Coc et al. examined this effect on big bang nucleosynthesis as a function of $B_8$, the mass difference between two $^4$He nuclei and a single $^8$Be nucleus, and found no effects for $B_8 \le 100$ keV. Here we examine stable $^8$Be with larger $B_8$ and also allow for a variation in the rate for $^4$He + $^4$He $\longrightarrow$ $^8$Be to determine the threshold for interesting effects. We find no change to standard big bang nucleosynthesis for $B_8<1$ MeV. For $B_8 \gtrsim 1$ MeV and a sufficiently large reaction rate, a significant fraction of $^4$He is burned into $^8$Be, which fissions back into $^4$He when $B_8$ assumes its present-day value, leaving the primordial $^4$He abundance unchanged. However, this sequestration of $^4$He results in a decrease in the primordial $^7$Li abundance. Primordial abundances of $^7$Li consistent with observationally-inferred values can be obtained for reaction rates similar to those calculated for the present-day (unbound $^8$Be) case. Even for the largest binding energies and largest reaction rates examined here, only a small fraction of $^8$Be is burned into heavier elements, consistent with earlier studies. There is no change in the predicted deuterium abundance for any model we examined.

A particularly interesting possibility is that an appropriate change in the constants of nature might allow for the stability of $^8$Be, which normally spontaneously fissions into $^4$He + $^4$He with a very short lifetime. Coc et al. [14] investigated the effects of stable $^8$Be on BBN. More recently, Adams and Grohs [17] examined stellar evolution with stable $^8$Be. Their goal was not to constrain such a model, but rather to refute anthropic arguments for the fine-tuning needed to allow the triple-$\alpha$ reaction to proceed, by showing that stable $^8$Be could provide an acceptable alternative pathway for the production of heavier elements. A completely different mechanism to stabilize $^8$Be has been suggested by Yao et al. [18]. They proposed that plasma effects could stabilize $^8$Be in the early universe, obviating the need for new physics. To keep our results as general as possible, we will not assume a particular model for stable $^8$Be, but will instead treat the $^8$Be binding energy as a free parameter. Following Ref. [14], we define the mass difference between two $^4$He nuclei and a single $^8$Be nucleus to be $$B_8 = \left(2\,m_{^4\mathrm{He}} - m_{^8\mathrm{Be}}\right)c^2. \qquad (1)$$ Present-day measurements give $B_8 = -0.092$ MeV. However, if the constants of nature during BBN were sufficiently different so as to make $B_8$ positive, then $^8$Be would be stable, significantly altering the reaction network; a similar effect might occur due to plasma effects in the early universe. Coc et al. examined BBN for $B_8 \le 100$ keV and found no significant effect on any of the resulting element abundances. Here, we extend this calculation to larger values of $B_8$. Given uncertainties in the nuclear rates when the constants of nature are allowed to change, we also parametrize the rate for $^4$He + $^4$He $\longrightarrow$ $^8$Be in terms of an overall multiplicative factor. In the next section, we present our calculations of the primordial element abundances and give the results of these calculations. We discuss our results in Sec. III.
We find that BBN can be significantly altered for $B_8 \sim 1-3$ MeV, with a large reduction in the $^7$Li abundance, while the predicted deuterium and $^4$He abundances are unchanged. The nuclear fusion rates necessary to achieve this are similar to those calculated for the present-day $^8$Be binding energy.

II. CALCULATION OF ELEMENT ABUNDANCES

Consider first the standard model for BBN (for recent reviews, see Refs. [19, 20]). In the first stage of BBN, the weak interactions interconvert protons and neutrons, maintaining the thermal equilibrium ratio $$n/p = e^{-(m_n - m_p)/T}, \qquad (2)$$ while a thermal abundance of deuterium is maintained via $$p + n \longleftrightarrow d + \gamma. \qquad (3)$$ After the weak reactions drop out of thermal equilibrium at $T \sim 0.8$ MeV, free neutron decay continues until $T \sim 0.1$ MeV, when the thermal equilibrium abundance of deuterium becomes large enough to allow rapid fusion into heavier elements. Almost all of the remaining neutrons end up bound into $^4$He, with a small fraction remaining behind in the form of deuterium. There is also some production of $^7$Li via $$^4\mathrm{He} + {}^3\mathrm{H} \longrightarrow {}^7\mathrm{Li} + \gamma, \qquad (4)$$ $$^4\mathrm{He} + {}^3\mathrm{He} \longrightarrow {}^7\mathrm{Be} + \gamma, \qquad (5)$$ where the $^7$Be decays into $^7$Li via electron capture at the beginning of the recombination era [21]. The element abundances produced in BBN depend on the baryon/photon ratio $\eta$, which can be independently determined from the CMB. We adopt a value of $\eta = 6.1 \times 10^{-10}$, consistent with recent results from Planck [22]. This value of $\eta$ yields predicted abundances of D and $^4$He consistent with observations. Recent observational estimates of D/H include those of the Particle Data Group [23]: D/H = $(2.53 \pm 0.04) \times 10^{-5}$, and Cooke et al. [24]: D/H = $(2.547 \pm 0.033) \times 10^{-5}$. The primordial $^4$He abundance, designated $Y_p$, is not as well established. Izotov et al. [25] give $Y_p = 0.2551 \pm 0.0022$, while Aver et al. [26] give $Y_p = 0.2449 \pm 0.0040$. The Particle Data Group limit is [23] $Y_p = 0.2465 \pm 0.0097$. Given these discrepant estimates, a safe limit on $^4$He is $Y_p = 0.25 \pm 0.01$. As noted, both the deuterium and $^4$He abundances are consistent with the predictions of standard BBN with the CMB value for $\eta$. The same cannot be said for the $^7$Li abundance. The primordial lithium abundance is estimated to be [23] $^7$Li/H = $(1.6 \pm 0.3) \times 10^{-10}$. However, standard BBN with $\eta \sim 6 \times 10^{-10}$ predicts a primordial value for $^7$Li/H that is roughly three times higher than this observationally-inferred value. For this value of $\eta$, most of the primordial $^7$Li is produced in the form of $^7$Be, which decays into $^7$Li much later, as noted above. This discrepancy between the predicted and observationally-inferred primordial $^7$Li abundances has been dubbed the "lithium problem," and it remains unresolved at present (for a further discussion, see Ref. [27]). Hypothetical changes in the constants of nature have been invoked previously as a possible solution of the lithium problem [11]. In standard BBN, $^8$Be is excluded from the reaction network, as it undergoes spontaneous fission with a lifetime $\sim 10^{-16}$ sec; the energy liberated in this fission is $-B_8$. Here we assume that during the era of BBN, $B_8 > 0$, so that $^8$Be is stable. Coc et al. [14] examine a specific model for time variation of the fundamental constants, in which all of the binding energies can be calculated as functions of the change in the nucleon-nucleon interactions. A similar approach is taken by Epelbaum et al. [28]. Adams and Grohs [17] take a more general approach and discuss several ways in which changes in the fundamental constants might alter $B_8$.
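As a rough numerical illustration of this first stage, the equilibrium ratio of Eq. (2) and the resulting helium yield can be evaluated directly. The following is a minimal sketch, not part of the AlterBBN network calculation used below; the freeze-out temperature and the diluted ratio $n/p \approx 1/7$ at the onset of fusion are the standard textbook values.

```python
import math

DELTA_M = 1.293  # neutron-proton mass difference in MeV

def n_over_p(T_MeV):
    """Thermal-equilibrium neutron/proton ratio, Eq. (2), at temperature T (MeV)."""
    return math.exp(-DELTA_M / T_MeV)

def yp_if_all_neutrons_in_he4(n_p):
    """4He mass fraction if every surviving neutron ends up bound into 4He."""
    return 2.0 * n_p / (1.0 + n_p)

# Ratio at weak freeze-out (T ~ 0.8 MeV); free neutron decay then dilutes
# it to roughly 1/7 by the time deuterium fusion becomes rapid (T ~ 0.1 MeV).
print(n_over_p(0.8))                       # ~0.20
print(yp_if_all_neutrons_in_he4(1.0 / 7))  # 0.25, matching Y_p = 0.25 +/- 0.01
```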
Since we are interested in isolating the particular effects of stable $^8$Be, we shall adopt the latter approach and treat $B_8$ as a free parameter. The major limitation of our treatment is that we do not consider changes in the other nuclear binding energies; these require the assumption of a specific model like the one in Ref. [14]. We discuss this issue further in Sec. III. Given the existence of stable $^8$Be, the primary new reactions of importance are $$^4\mathrm{He} + {}^4\mathrm{He} \longrightarrow {}^8\mathrm{Be} + \gamma, \qquad (8)$$ $$^4\mathrm{He} + {}^8\mathrm{Be} \longrightarrow {}^{12}\mathrm{C} + \gamma, \qquad (9)$$ along with the corresponding reverse reactions. Estimated rates for these two reactions have been derived by Nomoto et al. [29], Langanke et al. [30] and Descouvemont and Baye [31]. Adams and Grohs [17] use the nonresonant reactions from Ref. [29], while Coc et al. [11] derived their own expressions for these rates based on a particular model for changes in the nuclear interaction strength. There are, of course, large uncertainties in any calculation of this kind. For example, the nonresonant rate calculated in Ref. [29] is not a direct-capture rate; instead, it represents the low-energy wing of the resonance at the current $^8$Be binding energy. Any change in $B_8$ might have profound effects on this rate. On the other hand, the calculation of Ref. [14] assumes a particular model for changes in the nuclear interaction strength. We have chosen to parametrize the uncertainty in this calculation by expressing the rate for reaction (8) in terms of the standard expression for charged-particle interactions along with an overall multiplicative constant, which we allow to vary. This allows us to determine the threshold for interesting effects, which can then be compared (at least in order of magnitude) to previous estimates for the rate. For reaction (9), which is less important for our results, we follow Ref. [17] and use the nonresonant rate from Ref. [29]. Recall that for a charged-particle reaction like reaction (8), the cross section as a function of center-of-mass energy $E$ can be written as $$\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta}, \qquad (10)$$ where $\eta$ is the Sommerfeld parameter, $\eta = Z_1 Z_2 e^2/\hbar v$, with $Z_1$ and $Z_2$ the charges on the incoming nuclei and $v$ their relative velocity. If the reaction is nonresonant, then $S(E)$ is generally a slowly-varying function of $E$ (see, e.g., Ref. [32] for a pedagogical discussion). The standard procedure is to expand $S(E)$ in a power series around $E = 0$ and convolve the cross section with the thermal distribution of nuclei. For reaction (8) we obtain an expression of the form [33] $$N_A \langle\sigma v\rangle = T_9^{-2/3}\, e^{-a/T_9^{1/3}} \sum_N F_N\, T_9^{N/3}, \qquad (11)$$ where $T_9$ is the temperature in units of $10^9$ K, $N_A$ is Avogadro's number, and $a$ is the usual constant determined by the charges and reduced mass of the incoming nuclei. In this expression for the reaction rate, the $F_N$ are functions of $S(E)$ and its first and second derivatives at $E = 0$ [34]. For the purposes of this study, we take only the constant term $F_0$ in Eq. (11), and we ignore all of the higher powers of $T_9^{1/3}$, so that $$N_A \langle\sigma v\rangle = F_0\, T_9^{-2/3}\, e^{-a/T_9^{1/3}}. \qquad (12)$$ Effectively, this amounts to treating $S(E)$ as a constant as $E \to 0$. We do not claim that this is likely to be the most accurate description of the form for the reaction rate. However, it gives a simple one-parameter model for this rate that can be compared (at least at the order-of-magnitude level) with other expressions for the cross section. As noted earlier, for reaction (9), we simply use the nonresonant rate of Nomoto et al. [29]. As we will see, this process has little impact on our final results. The reverse reaction rates can be calculated from the forward rates using detailed balance. For reactions of the form $i + j \longrightarrow k + \gamma$, we have (see, e.g., Ref.
[33]) $$\lambda_{k+\gamma \to i+j} \;\propto\; \left(\frac{A_i A_j}{A_k}\right)^{3/2} T_9^{3/2}\, e^{-Q^*/T}\; N_A \langle\sigma v\rangle_{i+j \to k+\gamma}, \qquad (13)$$ where the $A$'s are mass numbers, where we have used the fact that all of the nuclei in reactions (8) and (9) are spin singlet states, and where $Q^*$ denotes the $Q$ values for reactions (8)-(9) when $B_8$ is allowed to vary from its present value. The present-day $Q$ values for these two reactions are, respectively, $Q_{\alpha\alpha} = -0.092$ MeV and $Q_{\alpha\,^8\mathrm{Be}} = 7.27$ MeV. When we allow the binding energy of $^8$Be to change, the new $Q$ values become $Q^*_{\alpha\alpha} = B_8$ and $Q^*_{\alpha\,^8\mathrm{Be}} = 7.27\ \mathrm{MeV} - B_8$. We expect reactions (8) and (9) to be the most important new pathways for the buildup of heavier elements when $^8$Be is stable. However, we have also examined the effects of the additional reactions (14)-(23) involving $^8$Be. To get a rough estimate of the effect of these reactions, we simply used the rates for the corresponding $2\alpha$ reactions. We calculated the element abundances both with and without reactions (14)-(23). Over our parameter range of interest, we found no significant difference in the predicted element abundances when we included these additional reactions, in agreement with the earlier results of Coc et al. [14]. We calculated the primordial element abundances using the AlterBBN computer code [35], stripping out the triple-$\alpha$ reaction and replacing it with reactions (8) and (9). We allowed $B_8$ to vary up to 3 MeV, and we examined $F_0$ from $10^9$ to $10^{12}$. Our results for $^4$He and $^8$Be are displayed in Fig. 1 for $F_0 = 10^{11}$, and Fig. 2 gives the $^7$Li abundance as a function of both $B_8$ and $F_0$. Note first that the primordial $^2$H abundance (not displayed) is completely insensitive to $B_8$, even for the largest values of $F_0$ we examined. This makes sense, as this abundance is determined by the rate of deuterium burning into heavier elements, which is unaffected by helium burning into beryllium. This implies that the excellent agreement between the observed and predicted abundances of $^2$H is preserved (although see the discussion in Sec. III regarding the deuterium binding energy). We also find essentially no change in any of the element abundances for small binding energies. Our results agree with Ref. [14], which found no significant change in the primordial element abundances for $B_8$ as large as 100 keV. We can extend this conclusion to larger values of the binding energy: we find no discernible changes in element abundances for $B_8$ as large as 600 keV, and significant changes only occur for $B_8 > 1$ MeV. In Fig. 1, we show $Y_p$ (the $^4$He mass fraction) and the $^8$Be mass fraction as functions of $B_8$ for $F_0 = 10^{11}$. The results for our other values of $F_0$ are qualitatively similar. As $B_8$ increases from 1 to 3 MeV, there is a sharp reduction in $Y_p$ and a corresponding increase in the $^8$Be abundance. Naively, one might expect this reduction of $Y_p$ to values far below the value estimated from observations to rule out this model, but this is not the case. We know that $B_8$ must assume its present-day negative value at some time after BBN. When this occurs, $^8$Be will no longer be stable and will fission back into $^4$He. Thus, the present-day mass fraction of $^4$He will be given by the sum of the $^4$He and $^8$Be mass fractions. We have plotted this sum in Fig. 1. Only an infinitesimal fraction of $^8$Be is burned into heavier nuclides, so this sum is constant and equal to its value at $B_8 = 0$. Thus $^4$He, like $^2$H, is essentially unaffected by stable $^8$Be, even for very large binding energies and large rates for reaction (8). However, large values of $B_8$ do have an important effect on the $^7$Li abundance, which is displayed, relative to hydrogen, in Fig. 2.
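Equations (12) and (13) are simple enough to sketch in code. The snippet below is a schematic illustration, not the modified AlterBBN implementation: the Gamow constant $a \approx 13.49$ for $\alpha + \alpha$ is the standard value $4.2487\,(Z_1^2 Z_2^2 \mu)^{1/3}$ with reduced mass $\mu \approx 2$, the factor $11.605/T_9$ converts $Q^*$ from MeV into an exponent, and the prefactor C in the reverse rate is a placeholder for the order-unity spin and mass factors.

```python
import math

A_GAMOW = 13.49  # 4.2487 * (Z1^2 Z2^2 mu)^(1/3) for alpha+alpha (Z1=Z2=2, mu~2)

def forward_rate(T9, F0):
    """Eq. (12): N_A<sigma v> for 4He + 4He -> 8Be + gamma with S(E) ~ const."""
    return F0 * T9 ** (-2.0 / 3.0) * math.exp(-A_GAMOW / T9 ** (1.0 / 3.0))

def reverse_rate(T9, F0, B8_MeV, C=1.0):
    """Eq. (13), schematically: photodissociation of 8Be back to two alphas.
    Q* = B8 when 8Be is stable; C stands in for the (A_i A_j / A_k)^(3/2)
    prefactor, which is of order unity here."""
    return C * T9 ** 1.5 * math.exp(-11.605 * B8_MeV / T9) * forward_rate(T9, F0)

# The reverse rate is exponentially suppressed once T9 drops well below
# ~11.6 * B8, which is why a large B8 lets 8Be survive and sequester 4He.
for T9 in (1.0, 0.5, 0.1):
    print(T9, forward_rate(T9, 1.0e11), reverse_rate(T9, 1.0e11, B8_MeV=1.0))
```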
FIG. 2: The abundance (relative to hydrogen) of $^7$Li, as a function of $B_8$, the mass difference between two $^4$He nuclei and a single $^8$Be nucleus, for (top to bottom) $F_0 = 1.0 \times 10^9$ (black), $1.0 \times 10^{10}$ (magenta), $1.0 \times 10^{11}$ (red), $1.0 \times 10^{12}$ (blue), where $F_0$ parametrizes the $^4$He + $^4$He rate in Eq. (12). The $^7$Li abundance is the sum of the primordial $^7$Li and $^7$Be abundances, as the latter decays into the former. Horizontal dashed lines give the range for the observationally inferred value of $^7$Li/H.

The value of $^7$Li/H is clearly very sensitive to both the $^8$Be binding energy and the rate of reaction (8). For $F_0 \le 1.0 \times 10^9$ there is essentially no effect on the lithium abundance. Significant reduction begins to occur at $F_0 = 1.0 \times 10^{10}$ and $B_8 \ge 2$ MeV. Lithium abundances in agreement with the observations can be achieved as $F_0$ increases from $1.0 \times 10^{10}$ to $1.0 \times 10^{11}$, and for larger $F_0$ there is a narrow range of values of $B_8$ for which the predicted primordial lithium abundance agrees with the observationally-inferred value. As in standard BBN, we find that most of the $^7$Li, at our chosen value of $\eta$, is produced in the form of $^7$Be. The physical mechanism for this decrease is the sequestration of $^4$He in the form of $^8$Be, as seen in Fig. 1. This decrease in the $^4$He abundance during BBN then inhibits reactions (4)-(5). The CNO elements are produced in very small amounts in standard BBN, with typical abundances relative to hydrogen of CNO/H $\sim 10^{-15} - 10^{-14}$ [36]. A larger primordial production of CNO elements would be interesting, as the results of Ref. [37] suggest that the value of CNO/H begins to affect the first generation of stars (population III) when CNO/H increases above $10^{-11}$. However, our results agree with those of Ref. [14]; even for the largest $B_8$ and $F_0$ values we examined, we see no significant primordial production of CNO elements.

III. DISCUSSION

We find that BBN with stable $^8$Be can begin to produce interesting changes in the final element abundances for $B_8 \gtrsim 1$ MeV. The deuterium and $^4$He abundances are unchanged, although the latter is sequestered in the form of $^8$Be until $B_8$ drops below zero at late times. This sequestration leads to a reduction in the $^7$Li abundance and can push it into a regime consistent with observations for a sufficiently large $^4$He + $^4$He rate. We can compare the value of $F_0$ needed to produce this reduction in the lithium abundance with the nonresonant cross section of Ref. [29]. For $T_9 \sim 1 - 0.1$, the prefactor in Ref. [29] corresponding to our $F_0$ lies between $4 \times 10^{11}$ and $2 \times 10^{10}$. As we have already noted, it is not at all clear that the rates of Ref. [29] can be extrapolated to a model with large binding energies for $^8$Be. However, this comparison with Ref. [29] does indicate that the reaction rates examined here are not completely unreasonable. A value of $B_8 \sim 1$ MeV is larger than has been considered in previous BBN calculations. In the context of plasma effects, it requires a very large Debye mass ($m_D \sim 3$ MeV) if one simply extrapolates the linear approximation of Yao et al. [18]. This is larger than the value of $m_D$ predicted from plasma effects during BBN, although there are some uncertainties in these calculations [18]. Such a large value of $B_8$ can be more plausibly achieved in the context of time variation of the fundamental constants.
A value of $B_8 \sim 1$ MeV can be obtained with a change in the strong coupling constant of $\sim 15\%$, or changes in the quark masses or fine structure constant by a similar amount [17, 28]. The major caveat in this discussion is that we have limited our analysis to changes in the $^8$Be binding energy alone. This was intentional, as we wished to isolate the effects of large changes in this binding energy in a model-independent way. A realistic model would result in changes to all of the nuclear binding energies, as in Ref. [14]. In changing the other nuclear binding energies, the one likely to have the largest impact is deuterium [6-8, 11]. In the model presented in Ref. [14], $\sim 1$ MeV values of $B_8$ would result in a 50% increase in the deuterium binding energy. A larger deuterium binding energy would result in an earlier onset of nuclear fusion, leading to more $^7$Li, and potentially cancelling the reduction in $^7$Li noted here. However, all of these conclusions depend on the particular model invoked to alter the nuclear binding energies. A systematic estimate of the effects of changing other binding energies can be found in Ref. [12]. It is possible that the plasma effects proposed by Yao et al. [18] would have a much larger effect on the $^8$Be binding energy than on the other nuclear binding energies, since these plasma effects are sensitive to the existence of the 92 keV resonance in $^8$Be. Our results indicate that it is difficult to produce significant abundances of CNO elements in BBN even with MeV-scale binding energies for $^8$Be. In that regard, the famous "mass gap" at $A = 8$ is misleading; the failure to produce heavier elements in the early universe is a result of the lower densities and shorter times for nuclear fusion than prevail in stars [14]. This analysis ignores the possibility that, for large values of $B_8$ and $F_0$, the build-up of a large mass fraction of $^8$Be might allow the reaction $^8$Be + $^8$Be $\longrightarrow$ $^{16}$O + $\gamma$ to compete with reaction (9) as a mechanism for the production of the CNO elements, but that seems unlikely in view of the large Coulomb barrier. Of course, these results are also sensitive to the assumed rate for $^8$Be + $^4$He; a rate that diverges from that of Ref. [29] could alter our conclusions regarding the CNO elements. This work is admittedly speculative; our goal was to establish a threshold on the $^8$Be binding energy and the $^4$He + $^4$He reaction rate that would produce a reduction in the primordial lithium abundance. While the possibility of solving the lithium problem through a change in the constants of nature, including the binding energies of the light nuclei, is not new [11], the sequestration of $^4$He during BBN noted here represents a qualitatively new mechanism to achieve this.

IV. ACKNOWLEDGMENTS

We thank F.C. Adams, R. Galvez, and X. Yao for helpful discussions.
2017-09-13T16:05:09.000Z
2017-07-12T00:00:00.000
{ "year": 2017, "sha1": "d4c0e9eeb5f4b0d52ced2c19579708c263456e22", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1707.03852", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b7e178cfa42cfd914b0a370c4548430cb482991b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
116962758
pes2o/s2orc
v3-fos-license
A conjectural Lefschetz formula for locally symmetric spaces We formulate a conjectural Lefschetz formula for locally symmetric spaces of finite volume. The formula can be verified in the compact case and for Riemann surfaces. In this paper we suggest a Lefschetz formula for non-compact finite volume spaces and we prove it in the case of Riemann surfaces by exploiting the properties of the Selberg zeta function. This way of proof might be extended to rank one spaces, but for higher rank a new idea is required.

Global Lefschetz numbers

Let $G$ denote a connected semisimple Lie group with finite center. Fix a maximal compact subgroup $K$ with Cartan involution $\theta$. So $\theta$ is an automorphism of $G$ with $\theta^2 = \mathrm{Id}$ and $K$ is the set of all $x \in G$ with $\theta(x) = x$. Let $\mathfrak g_{\mathbb R}$, $\mathfrak k_{\mathbb R}$ denote the real Lie algebras of $G$ and $K$ and let $\mathfrak g$ and $\mathfrak k$ denote their complexifications. This will be a general rule: for a Lie group $H$ we denote by $\mathfrak h_{\mathbb R}$ the Lie algebra of $H$ and by $\mathfrak h = \mathfrak h_{\mathbb R} \otimes \mathbb C$ its complexification. Let $b: \mathfrak g \times \mathfrak g \to \mathbb C$ be a positive multiple of the Killing form. On $G$, $K$ and all parabolic subgroups as well as all Levi components we install Haar measures given by the form $b$ as in [20]. Let $H$ be a non-compact Cartan subgroup of $G$. Modulo conjugation we can assume that $H = AB$, where $A$ is a connected split torus and $B$ is a closed subgroup of $K$. Fix a parabolic $P$ with split component $A$. Then $P$ has Langlands decomposition $P = MAN$ and $B$ is a Cartan subgroup of $M$. Note that an arbitrary parabolic subgroup $P' = M'A'N'$ of $G$ occurs in this way if and only if the group $M'$ has a compact Cartan subgroup. In this case we say that $P'$ is a cuspidal parabolic. The choice of the parabolic $P$ amounts to the same as a choice of a set of positive roots $\Phi^+(\mathfrak g,\mathfrak a)$ in the root system $\Phi(\mathfrak g,\mathfrak a)$. The Lie algebra $\mathfrak n$ of the unipotent radical $N$ can be described as $\mathfrak n = \bigoplus_{\alpha \in \Phi^+(\mathfrak g,\mathfrak a)} \mathfrak g_\alpha$, where $\mathfrak g_\alpha$ is the root space attached to $\alpha$, i.e., $\mathfrak g_\alpha$ is the space of all $X \in \mathfrak g$ such that $\mathrm{ad}(Y)X = \alpha(Y)X$ holds for every $Y \in \mathfrak a$. Define $\bar{\mathfrak n} = \bigoplus_{\alpha \in \Phi^+(\mathfrak g,\mathfrak a)} \mathfrak g_{-\alpha}$. This is the opposite Lie algebra. Let $\bar{\mathfrak n}_{\mathbb R} = \bar{\mathfrak n} \cap \mathfrak g_{\mathbb R}$ and $\bar N = \exp(\bar{\mathfrak n}_{\mathbb R})$. Then $\bar P = MA\bar N$ is the opposite parabolic to $P$. Let $\mathfrak a^*$ denote the dual space of $\mathfrak a$. Since $A = \exp(\mathfrak a_{\mathbb R})$, every $\lambda \in \mathfrak a^*$ induces a continuous homomorphism from $A$ to $\mathbb C^*$, written $a \mapsto a^\lambda$ and given by $(\exp(H))^\lambda = e^{\lambda(H)}$. Let $\rho_P \in \mathfrak a^*$ be half of the sum of all positive roots, each weighted with its multiplicity, so that $a^{2\rho_P} = \det(a|_{\mathfrak n})$. Let $\mathfrak a_{\mathbb R}^- \subset \mathfrak a_{\mathbb R}$ be the negative Weyl chamber, consisting of all $X \in \mathfrak a_{\mathbb R}$ such that $\alpha(X) < 0$ for every $\alpha \in \Phi^+(\mathfrak g,\mathfrak a)$. Let $A^- = \exp(\mathfrak a_{\mathbb R}^-)$ be the negative Weyl chamber in $A$, and let $\overline{A^-}$ be the closure of $A^-$ in $A$; this is a manifold with corners. Let $K_M = M \cap K$. Then $K_M$ is a maximal compact subgroup of $M$ and it contains $B$. Fix an irreducible unitary representation $(\tau, V_\tau)$ of $K_M$; then $V_\tau$ is finite dimensional. Let $\breve\tau$ be the dual representation to $\tau$. Let $\hat G$ denote the unitary dual of $G$, i.e., the set of all isomorphism classes of irreducible unitary representations of $G$. Let $\hat G_{\mathrm{adm}} \supset \hat G$ be the admissible dual. For $\pi \in \hat G_{\mathrm{adm}}$ let $\pi_K$ denote the $(\mathfrak g, K)$-module of $K$-finite vectors in $\pi$ and let $\Lambda_\pi \in \mathfrak h^*$ be a representative of the infinitesimal character of $\pi$. Let $H^\bullet(\mathfrak n, \pi_K)$ be the Lie algebra cohomology with coefficients in $\pi_K$. By [21], for each $q$ the $(\mathfrak a \oplus \mathfrak m, K_M)$-module $H^q(\mathfrak n, \pi_K)$ is admissible of finite length, i.e., a Harish-Chandra module. For $\lambda \in \mathfrak a^*$ and an $A$-module $W$ let $W_\lambda$ denote the generalized $\lambda$-eigenspace, i.e., $W_\lambda$ is the set of all $w \in W$ such that there is $n \in \mathbb N$ with $(a - a^\lambda)^n w = 0$ for every $a \in A$.
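For quick reference, the structural data fixed above can be collected in a single display; this merely restates the definitions just given.

\[
P = MAN, \qquad
\mathfrak n = \bigoplus_{\alpha \in \Phi^{+}(\mathfrak g,\mathfrak a)} \mathfrak g_{\alpha}, \qquad
\bar{\mathfrak n} = \bigoplus_{\alpha \in \Phi^{+}(\mathfrak g,\mathfrak a)} \mathfrak g_{-\alpha}, \qquad
a^{2\rho_{P}} = \det\!\bigl(a\,\big|\,\mathfrak n\bigr),
\]
\[
W_{\lambda} = \bigl\{\, w \in W \;:\; (a - a^{\lambda})^{n} w = 0
\ \text{for some } n \in \mathbb N \text{ and every } a \in A \,\bigr\}.
\]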
Let $\mathfrak m = \mathfrak k_M \oplus \mathfrak p_M$ be the Cartan decomposition of the Lie algebra $\mathfrak m$ of $M$. For $\pi \in \hat G$ and $\lambda \in \mathfrak a^*$ let $L^\tau_\lambda(\pi)$ denote the representation-theoretic Lefschetz number, given by $$L^\tau_\lambda(\pi) = \sum_{p,q \ge 0} (-1)^{p+q} \dim\bigl(H^q(\mathfrak n, \pi_K)_\lambda \otimes \wedge^p \mathfrak p_M \otimes V_{\breve\tau}\bigr)^{K_M}.$$ For a given smooth and compactly supported function $f \in C_c^\infty(G)$ we define its Fourier transform $\hat f: \hat G \to \mathbb C$ by $\hat f(\pi) \stackrel{\mathrm{def}}{=} \mathrm{tr}\,\pi(f)$. (b) The sum in (a) is finite; more precisely, the Lefschetz number $L^\tau_\lambda(\pi)$ is zero unless there is an element $w$ of the Weyl group of $(\mathfrak g,\mathfrak h)$ such that $\lambda = (w\Lambda_\pi)|_{\mathfrak a} - \rho_P$. Proof: The proof of part (a) is contained in section 4 of [12], and (b) is a consequence of Corollary 3.32 in [21].

Local Lefschetz numbers

Let $\Gamma \subset G$ be a discrete subgroup of finite covolume. Let $X = G/K$ be the symmetric space and $X_\Gamma = \Gamma\backslash X = \Gamma\backslash G/K$ the corresponding locally symmetric quotient. The group $\Gamma$ is called neat if it is torsion-free and, for every finite dimensional representation $\rho$ of $G$ and every $\gamma \in \Gamma$, the linear map $\rho(\gamma)$ does not have a root of unity other than 1 as an eigenvalue. Every arithmetic group has a finite index subgroup which is arithmetic and neat [3]. One now defines the $H$-index; we record some remarks.
• If $L$ is reductive, $H$ a compact Cartan subgroup and $\Gamma$ cocompact and torsion-free, then the $H$-index equals the Euler characteristic, where $K_L$ is a maximal compact subgroup of $L$.
• Assume $L$ reductive, $\Gamma$ neat and $H = AB$ a Cartan subgroup with $A$ central in $L$. Let $C$ be the center of $L$; then $\Gamma_A$ is a discrete and cocompact subgroup of $A$, and under these circumstances the $H$-index can be made explicit.
• Let $G$ be as before and let $H = AB$ be a non-compact Cartan subgroup of $G$. Let $\Gamma \subset G$ be neat and let $[\gamma] \in E_P(\Gamma)$. Assume that $\gamma$ is regular. Let $K_\gamma \subset G_\gamma$ be a maximal compact subgroup and let $X_\gamma = \Gamma \cap G_\gamma \backslash G_\gamma / K_\gamma$ be the corresponding modular subvariety of $X_\Gamma$. Being semisimple, the element $\gamma$ lies in a Cartan subgroup $H_\gamma = A_\gamma B_\gamma$, where $A_\gamma$ is a split connected torus and $B_\gamma$ is compact. Hence $\gamma = \tilde a_\gamma \tilde b_\gamma$. If we assume that $\tilde a_\gamma$ is a regular element of $A_\gamma$, then $A_\gamma$ is uniquely determined by $\gamma$.

Back to the notation of the first section, let $E_P(\Gamma)$ denote the set of all conjugacy classes $[\gamma]$ in $\Gamma$ such that $\gamma$ is conjugate in $G$ to an element $a_\gamma b_\gamma$ of $A^-B$. Then there is a conjugate $H_\gamma$ of $H$ such that $\gamma \in H_\gamma$. For $[\gamma] \in E_P(\Gamma)$ one then defines the local Lefschetz number $L^\tau(\gamma)$.

The Lefschetz formula

The unitary $G$-representation on $L^2(\Gamma\backslash G)$ decomposes as $L^2(\Gamma\backslash G) = L^2_{\mathrm{disc}} \oplus L^2_{\mathrm{cont}}$, where $L^2_{\mathrm{disc}} = \widehat\bigoplus_{\pi \in \hat G} N_\Gamma(\pi)\,\pi$ is a direct sum of irreducibles with finite multiplicities and $L^2_{\mathrm{cont}}$ is a sum of continuous Hilbert integrals. In particular, $L^2_{\mathrm{cont}}$ does not contain any irreducible subrepresentation. Let $r$ be the dimension of $A$ and let $\alpha_1, \dots, \alpha_r \in \mathfrak a^*_{\mathbb R}$ be the primitive positive roots. Let $\mathfrak a^{*,+}_{\mathbb R} = \{t_1\alpha_1 + \cdots + t_r\alpha_r : t_1,\dots,t_r > 0\}$ be the positive dual cone and let $\overline{\mathfrak a^{*,+}_{\mathbb R}}$ be its closure in $\mathfrak a^*_{\mathbb R}$. For $\mu \in \mathfrak a^*$ and $j \in \mathbb N$ let $C^{\mu,j}(\overline{A^-})$ denote the space of all functions on $A$ which
• are $j$-times continuously differentiable on $A$,
• together with their derivatives $D\varphi$, for invariant differential operators $D$ on $A$ of degree at most $j$, are $O(a^{\mu})$ on $A^-$.
This space can be topologized with seminorms: since the space of operators $D$ as above is finite dimensional, one can choose a basis $D_1, \dots, D_n$ and set $$\|\varphi\| = \sum_{i=1}^n \sup_{a \in A^-} \bigl|D_i\varphi(a)\, a^{-\mu}\bigr|.$$ The topology of $C^{\mu,j}(\overline{A^-})$ is given by this norm and thus $C^{\mu,j}(\overline{A^-})$ is a Banach space.

Conjecture 3.1 (Lefschetz Formula) For $\lambda \in \mathfrak a^*$ and $\pi \in \hat G_{\mathrm{adm}}$ there is an integer $N_{\Gamma,\mathrm{cont}}(\pi,\lambda)$, which vanishes if $\mathrm{Re}(\lambda) \notin \overline{\mathfrak a^{*,+}_{\mathbb R}}$, and there are $\mu \in \mathfrak a^*$ and $j \in \mathbb N$ such that for each $\varphi \in C^{\mu,j}(\overline{A^-})$ the global and local Lefschetz numbers are related by the asserted identity. Either side of this identity represents a continuous functional on $C^{\mu,j}(\overline{A^-})$. In the following cases the conjecture is known. (a) The conjecture holds if $\Gamma$ is cocompact. In that case the numbers $N_{\Gamma,\mathrm{cont}}(\pi,\lambda)$ are all zero.
This is shown in [12]. (b) In the next section we will prove the conjecture for $G = \mathrm{PSL}_2(\mathbb R)$. We will now make the conjecture more precise for congruence subgroups. For this assume that $G = \mathbf G(\mathbb R)$ for some semisimple linear group $\mathbf G$ defined over $\mathbb Q$. Let $\mathbb A = \mathbb A_{\mathrm{fin}} \times \mathbb R$ be the adele ring over $\mathbb Q$. Assume that $\Gamma$ is a congruence subgroup, i.e., there exists a compact open subgroup $K_\Gamma$ of $\mathbf G(\mathbb A_{\mathrm{fin}})$ such that $\Gamma = \mathbf G(\mathbb Q) \cap K_\Gamma$. To explain the conjectured nature of the number $N_{\Gamma,\mathrm{cont}}(\pi)$ we will recall Arthur's trace formula. This formula is the equality $J_{\mathrm{geom}}(f) = J_{\mathrm{spec}}(f)$ of two distributions on $\mathbf G(\mathbb A)$. The geometric distribution $J_{\mathrm{geom}}$ can be described in terms of weighted orbital integrals. Our interest, however, is focused on the spectral distribution $J_{\mathrm{spec}}$. According to [1], Theorem 8.2, one has $J_{\mathrm{spec}} = \sum_\chi J_\chi$, where $\chi$ runs through conjugacy classes of pairs $(M_0, \pi_0)$ consisting of a $\mathbb Q$-rational Levi subgroup $M_0$ and a cuspidal automorphic representation $\pi_0$ of it, the sum being absolutely convergent. The particular terms $J_\chi$ have expansions in which the operator-valued integrand extends to a meromorphic function on $(\mathfrak a^G_L)^*$. For $\nu \in (\mathfrak a^G_L)^*$ let $R_\nu$ denote the residue of this operator-valued function at $\nu$. Arthur proved that the distribution $f \mapsto \mathrm{tr}\,(R_\nu\, \rho_{\chi,\eta}(P,\nu,f)) = D(f)$ is invariant. Let $1_{K_\Gamma}$ be the indicator function of $K_\Gamma$. We conjecture that the resulting distribution on $G$ is a finite linear combination of traces with integer coefficients, i.e., of the form $\sum_\pi c(\chi,\eta,P,\nu,\pi)\,\mathrm{tr}\,\pi(f)$ for some $c(\chi,\eta,P,\nu,\pi) \in \mathbb Z$.

PSL$_2$(R)

In this section we will prove the conjecture in the simplest case, that of the group $G = \mathrm{PSL}_2(\mathbb R) = \mathrm{SL}_2(\mathbb R)/\{\pm 1\}$. For this group there is, up to conjugation, only one non-compact Cartan subgroup. Let us recall some facts from the representation theory of $\mathrm{PSL}_2(\mathbb R)$. Let $\lambda \in \mathfrak a^*$ and denote by $\pi_\lambda$ the corresponding principal series representation; so $\pi_\lambda$ lives on the space of measurable functions $f: G \to \mathbb C$ with $f(anx) = a^{\lambda+\rho} f(x)$ which are square integrable on $K$, modulo null functions. The representation is the right regular representation, i.e., $\pi(y)f(x) = f(xy)$. For each natural number $n$ there are exact sequences $$0 \to D^+_{2n} \oplus D^-_{2n} \to \pi_{(2n-1)\rho} \to \delta_{2n-1} \to 0,$$ $$0 \to \delta_{2n-1} \to \pi_{-(2n-1)\rho} \to D^+_{2n} \oplus D^-_{2n} \to 0,$$ where $D^\pm_{2n}$ are the discrete series, resp. limit of discrete series, representations as in [26], and $\delta_{2n-1}$ is the unique irreducible representation of $G$ of dimension $2n-1$. In all other cases the representation $\pi_\lambda$ is irreducible. If $\lambda$ is purely imaginary, then $\pi_\lambda$ is isomorphic with $\pi_{-\lambda}$, and this is the only isomorphism between different principal series representations. The admissible dual $\hat G_{\mathrm{adm}}$ consists of all irreducible principal series representations, all $D^\pm_{2n}$ and all $\delta_{2n-1}$. The unitary dual $\hat G$ comprises all irreducible $\pi_\lambda$ where $\lambda$ is purely imaginary, all $\pi_{t\rho}$ for $0 < t < 1$, and all $D^\pm_{2n}$. For $\lambda \in \mathfrak a^*$ we also write $\lambda$ for the quasi-character $a \mapsto a^\lambda$ of the group $A$.

Proof: Recall the Iwasawa decomposition $G = ANK$. Let $\lambda = s\rho$, $s \in \mathbb C$. Then, for $x \in \mathbb R$, one evaluates $f$ explicitly along the Iwasawa decomposition. Since $f \in \pi_K$, the function $f$ is continuous on $K$, so the limit as $x \to \infty$ must exist, which implies $s = -1$ or $\mathrm{Re}(s) < -1$. In the case $s = -1$ the constant function $f(x) = 1$ indeed gives a basis for $H^0(\mathfrak n, \pi_K)$. If $\mathrm{Re}(s) < -1$ then $f$ can only be a smooth function on $K$ if $s = 1-2k$ is integral and odd. In order to determine the $A$-actions we need to introduce some notation. For a $\mathbb C[A]$-module $V$ and $\lambda \in \mathfrak a^*$ we write $V_\lambda$ for the generalized $\lambda$-eigenspace in $V$. This means $v \in V_\lambda$ if and only if there is $n \in \mathbb N$ such that $(a - a^\lambda)^n v = 0$ for every $a \in A$. On $\mathfrak a^*$ we introduce a partial order $<$, defined in terms of a representative $\Lambda_\pi \in \mathfrak h^*$ of the infinitesimal character of $\pi$, with the sum running over $w$ in the Weyl group $W(\mathfrak g,\mathfrak h)$. Let $\pi = \pi_{(1-2k)\rho}$.
According to part (a) of the Lemma, the group $A$ acts on $H^0(\mathfrak n, \pi_K)$ either via $(2k-2)\rho$ or $-2k\rho$. The second possibility is excluded by part (b) of the Lemma. This proves part (a) of the Proposition. Part (b) of the Proposition follows from highest weight theory, and part (c) follows from part (a) and the fact that $D^+_{2n} \oplus D^-_{2n}$ is a subrepresentation of $\pi_{(2n-1)\rho}$.

Proposition 4.3 For $\pi \in \hat G_{\mathrm{adm}}$ the dimension of $H^1(\mathfrak n, \pi_K)$ is one, except if $\pi = \pi_\lambda$ where $\lambda$ is purely imaginary (unitary principal series), in which case the dimension is two. (b) Let $\pi_\lambda$ be a unitary principal series; then the Lefschetz numbers are as listed, and $L_\lambda(\pi_\mu) = 0$ in all other cases. (d) Let $n \in \mathbb N$; then $L^\tau_\lambda(D^\pm_{2n}) = 0$ with a single exception.

Next we turn to the local Lefschetz numbers. If $[\gamma] \in E_P(\Gamma)$, then $\gamma$ is $G$-conjugate to an element of $A^-B$. An element $\gamma$ of $\Gamma$ is called primitive if $\gamma = \sigma^n$ for $\sigma \in \Gamma$ and $n \in \mathbb N$ implies $n = 1$. Each $\gamma \in E_P(\Gamma)$ is a power of a unique primitive $\gamma_0$, which will be called the primitive underlying $\gamma$. We write $L(\gamma)$ for $L^\tau(\gamma)$ since $\tau$ is trivial anyway.

We will now recall some facts about the Selberg zeta function [15, 24]. Let $E^p_P(\Gamma)$ denote the set of all primitive classes in $E_P(\Gamma)$. The Selberg zeta function is given by the product $$Z(s) = \prod_{[\gamma_0] \in E^p_P(\Gamma)}\ \prod_{k \ge 0} \left(1 - e^{-(s+k)\,\ell(\gamma_0)}\right),$$ where $\ell(\gamma_0)$ denotes the length of the closed geodesic attached to $\gamma_0$. The product converges locally uniformly for $\mathrm{Re}(s) > 1$. The zeta function extends to a meromorphic function of finite order on the plane. It has a simple zero at $s = 1$ and zeros at $s = \frac12 \pm u$ of multiplicity $N_\Gamma(\pi_{2u\rho})$. These are all zeros or poles in $\mathrm{Re}(s) \ge \frac12$, except for $s = \frac12$, where $Z(s)$ has a zero or pole of order $N_\Gamma(\pi_0)$ minus the number of cusps. The poles and zeros in $\mathrm{Re}(s) < \frac12$ can be described through the scattering matrix or intertwining operators [15, 24].

Recall the inversion formula for the Mellin transform. Let the function $\psi$ be integrable on $(0,\infty)$ with respect to the measure $\frac{dt}{t}$, in other words, $\psi \in L^1\bigl((0,\infty), \frac{dt}{t}\bigr)$. Then the Mellin transform of $\psi$ is given by $$M\psi(s) = \int_0^\infty \psi(t)\, t^s\, \frac{dt}{t}.$$ If $\psi$ is continuously differentiable and $\psi'(t)t$, $\psi''(t)t^2$ are also in $L^1\bigl((0,\infty), \frac{dt}{t}\bigr)$, then the following inversion formula holds: $$\psi(t) = \frac{1}{2\pi i} \int_{\mathrm{Re}(s)=C} M\psi(s)\, t^{-s}\, ds.$$ Now assume that $\psi$ is supported in the interval $[1,\infty)$ and that $\psi(t) = O(t^{-\mu})$ for some $\mu$. Then it follows that the integral $M\psi(s)$ defines a function holomorphic in $\mathrm{Re}(s) < \mu$, and the integral in the inversion formula can be shifted to $\mathrm{Re}(s) = C$ for every $C < \mu$. Every $\gamma \in E_P(\Gamma)$ can be written as $\gamma = \gamma_0^n$ for a uniquely determined $\gamma_0 \in E^p_P(\Gamma)$ and a unique $n \in \mathbb N$. A computation yields, for $\mathrm{Re}(s) > 1$, $$\frac{Z'}{Z}(s) = \sum_{[\gamma_0] \in E^p_P(\Gamma)} \sum_{n \ge 1} \frac{\ell(\gamma_0)\, e^{-sn\ell(\gamma_0)}}{1 - e^{-n\ell(\gamma_0)}}.$$ Let $\psi$ be as above with $\mu > 1$ and let $1 < C < \mu$. Then, since $\frac{Z'}{Z}(s)$ is bounded on $\mathrm{Re}(s) = C$, we can interchange integration and summation. Then $\varphi \in C^{2,2\mu\rho}(\overline{A^-})$ and $$\frac{1}{2\pi i} \int_{C-i\infty}^{C+i\infty} \frac{Z'}{Z}(s)\, M\psi(s)\, ds$$ is the right hand side of the Lefschetz formula.

Now suppose that $\varphi \in C^{j,2\mu\rho}(\overline{A^-})$ for some $j \in \mathbb N$ and some $\mu > 1$. Then the functions $\psi(t), \psi'(t)t, \dots, \psi^{(j)}(t)t^j$ are all $O(t^{-\mu})$. Integration by parts shows that $M\psi(s) = O\bigl((1+|s|)^{-j}\bigr)$ uniformly in $\{\mathrm{Re}(s) \le \alpha\}$ for every $\alpha < \mu$. For $r > 0$ and $a \in \mathbb C$ let $B_r(a)$ be the closed disk around $a$ of radius $r$. Let $g$ be a meromorphic function on $\mathbb C$ with poles $a_1, a_2, \dots$. We say that $g$ is essentially of moderate growth if there is a natural number $N$, a constant $C > 0$, and a sequence of real numbers $r_n > 0$ tending to zero, such that the disks $B_{r_n}(a_n)$ are pairwise disjoint and that on the domain $D = \mathbb C \setminus \bigcup_n B_{r_n}(a_n)$ it holds that $|g(z)| \le C|z|^N$. Every such $N$ is called a growth exponent of $g$. Lemma 4.5 Let $h$ be a meromorphic function on $\mathbb C$ of finite order and let $g = h'/h$ be its logarithmic derivative.
Then $g$ is essentially of moderate growth, with growth exponent equal to the order of $h$ plus two. Proof: This is a direct consequence of Hadamard's factorization theorem applied to $h$. This Lemma, together with the growth estimate for $M\psi$, implies that for $j$ large enough the contour integral over $C + i\mathbb R$ can be moved to the left, deforming it slightly so that one stays in the domain $D$, and gathering residues. Ultimately the contour integral tends to zero, leaving only the residues. One gets the asserted identity, and this implies the conjecture in the case $G = \mathrm{PSL}_2(\mathbb R)$.

Applications

If we assume the Lefschetz formula in general, then most applications known in the compact case carry over to the non-compact case. In this section we will only highlight the prime geodesic theorem as an example. For this we assume that the parabolic $P = MAN$ is minimal, i.e., that the group $M$ is compact. Let $r = \dim A$ and let $\alpha_1, \dots, \alpha_r$ be positive multiples of simple roots such that for the modular shift $\rho_P$ we have $2\rho_P = \alpha_1 + \cdots + \alpha_r$. Proof: The proof of the compact case [13] carries over. We further note a consequence of the Prime Geodesic Theorem which comes about when one applies it to $G = \mathrm{SL}_d(\mathbb R)$ and $\Gamma = \mathrm{SL}_d(\mathbb Z)$. Let $d$ be a prime number $\ge 3$. Let $\mathcal C$ be the set of all totally real number fields $F$ of degree $d$. Let $\mathcal O$ be the set of all orders $O$ in number fields $F \in \mathcal C$. For an order $O \in \mathcal O$ let $h(O)$ be its class number and $R(O)$ its regulator. For $\lambda \in O^\times$ let $\sigma_1, \dots, \sigma_d$ denote the real embeddings of $F$, ordered in such a way that $|\sigma_k(\lambda)| \ge |\sigma_{k+1}(\lambda)|$ holds for $k = 1, \dots, d-1$. For $k$ in the same range one defines corresponding truncation parameters. The constant $c > 0$ comes about as a correction factor between the Haar measure normalization used in the Prime Geodesic Theorem and the normalization used in the definition of the regulator. Then we have, as $T_1, \dots, T_{d-1} \to \infty$,
2019-04-12T09:13:38.119Z
2004-01-21T00:00:00.000
{ "year": 2004, "sha1": "86059343ea0f491d3384d3574a3bf9313a76d359", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "86059343ea0f491d3384d3574a3bf9313a76d359", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
78337591
pes2o/s2orc
v3-fos-license
The NOMA track module on nutrition, human rights and governance: Part 2. A transnational curriculum using a human rights-based approach to foster key competencies in nutrition professionals The education of health professionals has not kept pace with the major challenges involved in providing health security for all during the 21st century.[1,2] Despite professional regulatory bodies requiring certain competencies, which are featured in the teaching and learning policies of training institutions, these competencies are not necessarily embedded in the formal curriculum and are often assumed to be acquired through the 'hidden' curriculum.[3] Professional competencies are seen to include, but go beyond, disciplinary expertise or technical knowledge: 'they are the qualities that also prepare graduates as agents for social good in an unknown future'.[4] Various frameworks are used by educational institutions worldwide to demonstrate the professional competence of graduates. For example, one of the participating universities in this study bases its professional competencies[5] (Table 1, left column) on the CanMEDS competency framework developed by the Royal College of Physicians and Surgeons of Canada.[6] Similar competency attributes are also embedded in the principles of a human rights-based approach (HRBA), which centres on the primary rights and responsibilities of the rights holders (such as vulnerable population groups) and the corresponding duties of those responsible for improvements (duty bearers). For example, HRBA principles emphasise participation, transparency, non-discrimination and sustainability.[7] Thus, the professional competency attributes mentioned correspond with several of the human rights principles required by nutrition professionals to fulfil their roles as duty bearers, honouring their obligation towards the fulfilment of the relevant human rights of the rights holders.[8] The HRBA also implies that nutrition professionals should not function in isolation. Through transprofessional collaboration between several professions, such as nutrition, law, economics and agriculture, among others, sustainable solutions may be found to deep-rooted nutrition problems.[8] Such a transprofessional approach provides nutrition professionals with a combination of enabling competencies valuable to the development and implementation of policies and programmes aimed at addressing the myriad of nutrition-related challenges faced by vulnerable population groups. Agreeing on many of the educational challenges of the 21st century, educators at universities in Norway, South Africa (SA) and Uganda collaboratively developed the NOrwegian MAsters (NOMA) track module on nutrition, human rights and governance (further referred to as 'the module'). Funding was obtained from the Norwegian government (through the Centre for International Cooperation in Education (SIU)).[9] Participating students were registered for a Master's degree in nutrition at their respective universities. The 18-week module was presented for 6 weeks in each of the three countries, and students had to adapt to different cultures and educational systems twice. NOMA students (n=22) were exposed to the associated culture shock, which caused some anxiety owing to the absence of support systems, familiar surroundings and cultural practices.
The available literature reports that international students are typically exposed to different beliefs about what constitutes knowledge, and how it should be learnt, taught and assessed.[10] Furthermore, the transition period in a foreign country is associated with disorientation, insecurity and incomprehension, all of which may negatively affect the learning process and preclude skills transfer. During the transition period there may be a disparity between a student's learning expectations and accomplishments and those anticipated by lecturers.[10]

Background. In response to the challenge of the global health needs of the 21st century, four academic institutions in Norway, South Africa and Uganda, each offering a Master's degree in nutrition, collaboratively developed the NOrwegian MAsters (NOMA) track module on nutrition, human rights and governance, integrating a human rights-based approach into graduate education in nutrition. Objective. To capture students' perceptions about the NOMA track module, focusing on the development of key competencies. Methods. Employing a qualitative approach, 20 (91% response rate) in-depth telephonic interviews were conducted with participating students, voice recorded and transcribed. Through an inductive process, emerging themes were used to compile a code list for content analysis of the transcribed text. Relevant themes were reported according to the professionals' roles described by the CanMEDS competency framework. Results. Participation in the module enhanced key competencies in the students, e.g. communication skills and the adoption of a holistic approach to interaction with people or communities. Their role as collaborator was enhanced by their learning to embrace diversity and cultural differences and similarities. Students had to adapt to different cultures and educational systems. They were inspired to contribute in diverse contexts and act as agents for change in the organisations in which they may work, or act as leaders or co-ordinators during interaction with community groups and policy makers. Higher education institutions offering transnational modules should support lecturers to manage the inherent diversity in the classroom as a way of enhancing student performance. Conclusion. The development of future transprofessional modules will benefit from the inclusion of desirable key competencies as part of the module outcomes by following a competency-by-design process.

Table 1. Examples of professional roles and attributes enhanced through Master of Nutrition students' participation in the NOMA track module 'Nutrition, Human Rights and Governance', as perceived by the students

Role of HNPs:[5] As communicators, HNPs effectively facilitate the carer/service-user relationship and the dynamic exchanges that occur before, during and after interaction.
Summary of perceptions of NOMA track students: Gained confidence in expressing own feelings; communication with people from different cultures.
Selected quotes illustrating the development of professional attributes:
'I felt within the group I could ask the questions I needed to ask to get an understanding, because the others were better in English than we Norwegians. Sometimes they laughed at our understanding but I can handle that.' (Female student, Norway)
'And sometimes you don't know what they are thinking and that can lead to a lot of confusion because you could potentially keep on saying things that annoy them, but in their culture they don't complain … they just keep it inside or ignore it.' (Female student, SA)
'[There is a] different way of thinking: as a Norwegian person I may have understood one situation as positive and an African person may have understood the situation completely otherwise … you meet people and think everything is okay and that you are just behaving normally, but then you realise after a while that you have been rude or been perceived that you have been rude.' (Female student, Norway)

Role of HNPs:[5] As collaborators, HNPs effectively work within a team to achieve optimal service-user care (the community included).
Summary of perceptions of NOMA track students: Students from different countries embraced the diversity as a platform to grow as a person and as a professional; adapting to foreign cultures; embracing cultural differences; conflict management.
Selected quotes illustrating the development of professional attributes:
'There were some good interactions among the students. We got to know each other. By the time we left Norway, we were very familiar with each other … eventually we became one team.' (Female student, Uganda)
'We respected the fact that we were from different cultures, we are different people raised up in different countries. So we needed to respect each other.' (Female student, Uganda)
'Try not to compare it to your own culture or the culture you just been to, but you have to accept it that is just the way it is.'

In the first part of this series, Marais et al.[9] reported the perceptions of NOMA students about the development and process of the NOMA track module, which presented students with different challenges. The objective of this article is to describe attributes associated with professional competence, deduced from NOMA students' own accounts of their experiences of the module. Twenty NOMA students (16 female and 4 male), enrolled for a Master's degree in nutrition at universities from different countries (4 from Norway, 7 from SA and 9 from Uganda), consented to participate in the study (91% response rate). Their mean age was 30.2 (standard deviation (SD) 6.0) years. Some participants had between 1 and 18 years of working experience as community dietitians, nutritionists, research scientists or cooks, while others had only been registered students with no previous work experience.[9]

Methods

Data were collected during October and November 2012. As the students resided in different countries, two trained research assistants conducted in-depth interviews (35 - 125 minutes) telephonically in English. A discussion guide was used, based on topics and probes relevant to the module. An example of one topic was students' experience of participating in the module and its effect on their personal skills and professional competencies. Transcriptions were checked to ensure that the text was a true reflection of the recorded interviews, and a systematic approach was used to analyse the unstructured data. Constant comparison of information ensured that the themes reflected the original data. An inductive process was followed, as themes emerging from the text were used to compile a code list and code the transcribed text, using a text analysis computer programme (ATLAS.ti version 6, Germany). Results from the study are reported in two articles. As reported in the first part of this series, the participants appreciated the module content, study visits, experienced lecturers and interactive teaching style.[9]
[9]Another set of themes that emerged related to development of the competence required of nutrition professionals; the attributes displayed by the participants were grouped according to the seven professional CanMEDS roles and are presented in this article. Ethics and legal aspects Approval for the study was obtained from the Health Research Ethics Committee of the Faculty of Medicine and Health Sciences, Stellenbosch University (ref.no.N12/08/044).Informed written consent for voluntary participation as well as for voice recording of interviews was obtained from all participants.Anonymity and confidentiality were maintained during interview transcription and whenever direct quotes were used.The transcripts and voice recordings were stored in protected files and the voice recordings were destroyed after 6 months. Results The participants generally described the module as memorable and a once-in-a-lifetime opportunity, with 'an incredible learning curve' .The study illustrates the concept of lifelong learning, as participants testified to professional development and personal growth resulting from the experience.This was also evident from the set of emerging themes grouped according to the different professional roles [5] that nutrition professionals [5] Summary Research must fulfil (Table 1), i.e. those of communicator, collaborator, manager and leader, scholar, health (and nutrition) advocate and a professional, culminating in being a (nutrition) practitioner. [6]mmunicator: Learning to effectively participate during dynamic exchanges Students testified to personal growth as they grew more independent during the study period in foreign countries and gained confidence in expressing their feelings.Even though the more reserved students were afraid of 'saying anything wrong' whenever sensitive issues were discussed, other students felt supported by the group and free to ask questions: 'You learn when to keep quiet and when to say your say … to state why you disagree or have different ideas.' (Female student, SA) It was a source of frustration for some students when fellow students did not voice their opinion during the lectures.Some identified communicating in English as a second language as a barrier, limiting spontaneous participation and self-expression at times.Students willing to interact in a meaningful way learnt from each other how to participate in discussions and debates in a culturally sensitive manner. Collaborator: Learning to embrace differences Participating students, being from different countries and studying at different universities, were introduced to perspectives, values and social norms that partially differed from those they were used to: 'Take it in your stride and inhale as much as you possibly could … Look to compare … so many differences but so many similarities … .' (Female student, SA) Overall, students embraced the opportunity to meet people from different nations and used the opportunity to find out 'why they believe what they believe' .Mature students or those who had been exposed previously to other world views and cultures seemed more tolerant of and respectful towards inherent differences.In this context, culture is understood in the broader sense; it is the total way of life in a society, which distinguishes members of human groups from others in terms of shared beliefs, ideologies and norms that influence actions. 
[11]These cultural differences were a potential source of misunderstanding and conflict; for example, differences in time management sometimes interrupted the teaching schedule. Cultural differences became most pronounced in Uganda and some of the foreign students adapted with difficulty.To enable co-operation and develop an understanding of different cultures required some effort, and awareness that there may be issues within one's own culture or country unacceptable to foreigners.What was considered as rude or discourteous differed according to the cultural context.For example, in Uganda, all conversations start with a reciprocal enquiry about each individual's wellbeing before the actual conversation begins.In contrast, people in Norway use fewer formalities, 'If you want to do something, you don't waste any time doing it' .Additionally, in Norway, religion is regarded as a private matter but in Uganda it is discussed freely: 'When it comes to culture, it brings out a positive something.But as soon as it does not coincide with the other countries, then they [foreign students] bring up those issues of someone being offended.' (Female student, Uganda) Diversity also provided many opportunities for interesting and sometimes heated debates, and those who were able to accept differences refined the skill of dealing with difficult situations.Unknowingly, an 'ignorant' question was sometimes perceived as being offensive or it came across as being derogatory.For example, a lack of awareness that sometimes things in Uganda are just accepted and not challenged created a situation where a student offended people by asking questions -according to the Ugandan culture, it is rude to tell someone if they are in the wrong. People have different ways of coping with stress and unfamiliar situations.More than one student admitted that they needed to become more tolerant and to learn how to deal with conflict.One student mentioned that she initially became psychologically disengaged to avoid offending people by saying 'something wrong' .The stress caused another student to overreact; she became emotional 'where I didn't expect I would have' .Others learnt how to manage their own emotions and felt better equipped to handle difficult situations in future. Manager: Learning to enhance effectiveness 'I certainly feel more equipped and competent to work at a level where I am not just on the ground but I am able to work with people who are possibly making policy decisions … .' (Female student, SA) Problem-solving skills were enhanced as students had to evaluate situations, identify areas for improvement and compare different countries.Students felt better equipped to be part of an inter-or transprofessional group, as knowledge of the HRBA '… adds to any professional that works with people, policies or scarce resources that you need to redistribute' .Students learnt how to prioritise their responsibilities and how to manage large volumes of information.One student realised that '… time is a very, very important factor which I did not take into consideration [previously]' . According to their current job description, some students felt apprehensive about immediate implementation of an HRBA, realising that to effectuate change, the usual planning process needed to be followed, requiring hard work and perseverance: 'I feel the impact will come.It's not going to be an immediate thing where you suddenly see the light and that everything just flows smoothly.It is a process.' 
(Female student, SA) Health and nutrition advocate: Learning to influence the wellbeing of individuals and communities Students became aware that nutrition is interrelated and integrated, and that issues of food, nutrition and food security cannot be addressed without attention to broader sociocultural, political, economic and technical issues.They anticipated the future implementation of an HRBA in their daily practice by assessing individuals in a holistic manner, involving the person/ community in decision-making processes and consulting the community about new projects: 'Not to just rush into something and try and change things but rather to look at the reason why you want to change things … what impact it will have on people.' (Female student, SA) Some students were enthusiastic about newly acquired skills when advocating for nutrition or engaging with public officials or non-governmental organisations.Another student was more cautious, as she realised that government officials may have a limited understanding of food as a human right.She regarded it a challenge for nutrition professionals who want to act as agents for change: Research 'We have to first teach people in government about human rights … because if they don't understand it, they can't accept nor implement it.' (Female student, Uganda) Scholar: Lifelong learning The module provided students with a global perspective and challenged them on intellectual, emotional and physical levels.Students felt enriched by being exposed to new concepts and unique experiences.It motivated them to share their knowledge with colleagues and to train other health professionals. Students accepted the responsibility of acting or speaking on behalf of vulnerable groups in the future and providing accurate information about their situation.In various ways, diversity helped to develop a better comprehension of the module content.Some male students expressed the opinion that diversity in the group prevented the module from becoming 'static' .Students with previous work experience were familiar with working in an inter-or transdisciplinary environment and could provide practical examples, explain specific situations or compare policies and programmes implemented in different countries. Students from all countries benefited from new information about their own countries and found that they understood global events and processes better.Other interests relevant to nutrition were developed, e.g. the interrelationship between nutrition, agriculture and political stability. Professional: Conducting ethical practice Examples given above show that students embraced diversity and adopted a holistic approach, indicating their enhanced perception of professional and ethical practice.Some students felt relieved when they realised that they did not necessarily need to conform to peer pressure and that they should remain true to their values. Healthcare and nutrition practitioners: Integrating their competencies Overall, an awareness of hardship experienced by vulnerable population groups was developed, one which helped to foster a changed mindset, '… to give more than I receive.To look where I can make a difference …' .Students' passion for nutrition was reinvigorated, inspiring them to serve needy communities in a meaningful way.They finally realised the extent of their calling as dietitians/nutritionists, and that it included being 'advocates and consultants for human rights' . 
NOMA students regarded themselves as privileged.They realised that they had previously had a narrow technical focus without a broader contextual understanding of food and nutrition security.They were now equipped to foster a person-centred approach, as part of a global network promoting the right to adequate food. Discussion Frenk et al. [1] argued that 'tribalism of professions should be replaced with collaboration to optimise mutual learning opportunities across countries' .As an example of transnational education (where a student is in a different country than the host university and where academic qualification is obtained), [12] the development of the NOMA track module was brought about through successful collaboration between universities from different countries, with a willingness to form a network and share educational resources.In search of sustainable solutions to nutrition-related problems, the module strived to integrate human rights and nutrition using an HRBA.Professionals from both fields aimed at educating students to contribute to societies as they currently exist and for future changes as they evolve. [7,13]rofessionals representing both fields had as their objective the education of students, through whom current and future changes to societies will be influenced. The structure of the module was such that the NOMA students had to adapt to different cultures and educational systems every 6 weeks.Even with the assistance of peers, international students still needed time to adjust to the different sets of social rules that regulate interaction and communication.Kelly [10] suggests that while students are still adapting, they are less inclined to be interactive; this may have caused gaps in understanding, as lecturers and peers may have perceived students as being unwilling or unable to participate. It is particularly important for lecturers of transnational students to be aware of the potential for culture to influence student preferences and expectations and introduce sufficient flexibility into their approach to teaching to accommodate various nationalities, educational backgrounds, learning styles and language proficiencies. [14]Lecturers may have had unrealistic assumptions about NOMA students' competence, e.g.their ability to manage a large volume of literature. [9]Even though international students may be English literate, using a second language may negatively affect their ability to participate optimally during interactive learning opportunities. [10]Based on the findings of this research, it is recommended that the following aspects should be considered during the introduction of any transnational module: introductory lectures on world view, time management and academic writing (including referencing). Academic institutions should strive to reduce the transition period for international students by reassessing whether the curriculum is culturally responsive and relevant to the needs of such students, making them feel included rather than excluded or disadvantaged.If not, the potential exists to promote surface learning and/or an inability to solve problems independently. [10]ifferent teaching strategies to help the adaptation process and to enhance learning were employed in the module. 
[9]However, there is no single correct way to learn.Ultimately, different learning cultures have the potential to stretch individual students beyond their established styles, and to develop learning strategies/approaches that are more adaptive.This may also create a greater capacity to engage in lifelong learning and professional development opportunities. [14,15]Thus, lecturers need to consider carefully the choice of approaches that they encourage/discourage, and their use and development. [14]everal NOMA students from different countries formed close relationships and it is possible that collaboration and understanding would have been further enhanced if more opportunities for socialisation were integrated in the module programme. [10,16]During the development of future modules for transnational students, the use of team bonding exercises, cross-cultural communication activities and allocation of mentors, to facilitate the adaptation process and to develop skills in collaboration and teamwork, may be considered. [10]he interviews revealed that some underlying tension and conflict during the training period may be ascribed to interpersonal differences.However, this may have been influenced by power differences between groups formed during the 4-month period, or caused by a lack of leadership, indicating the absence of a common group identity and resulting in misunderstandings Research arising from poor communication. [17]There were situations that arose because of cultural insensitivity or poor communication that might have been avoided by proactively developing a mutually agreed process for handling disagreements within such a diverse group. [18]ome students embraced diversity by learning more about observable elements (i.e.language) and hidden elements of cultural characteristics (i.e.customs). [19]NOMA students identified the need for an introductory lecture about cultural diversity to enhance mutual understanding. [9]However, students should also be advised that they will not always fully understand a foreign culture, that it is often helpful to assume the role of the 'respected outsider' and be encouraged to focus on commonality rather than separateness. [19]enerally, the need to develop competence generates an intrinsic interest in what is being learnt. [10]Students who previously had a strictly scientific approach to nutrition were drawn to participate in the module because of their keen interest in the link between nutrition and human rights.Students were also introduced to aspects of political science and agriculture, nurturing the potential to join in public reasoning as informed citizens and on behalf of vulnerable groups. [1,2]After completion of the module several students were inspired to contribute in diverse contexts beyond their own countries [9] and to act as agents for change in the organisations in which they may work or act as leaders or co-ordinators during interaction with community groups and policy makers. [1,2]imilarly to undergraduate module development, it is recommended that during the development of modules at a Master's degree level, a rigorous competency-based curriculum design process is followed, clarifying beforehand the competencies that the specific module should help to develop and, most importantly, how these competencies will be assessed. 
Conclusion According to the recommendation made by the Lancet Commission to 'promote quality, uphold a strong service ethic, and be centred around the interests of [individuals and] populations' , the NOMA track module addressed an integrated approach to human rights and nutrition.Based on the perceptions of the students, it became evident that the professional competency attributes of a group of Master of Nutrition students were also enhanced. Transnational and transprofessional education provided nutrition professionals the opportunity to broaden their competency base.Besides learning to respect diversity and embracing cultural differences and similarities, the students learnt to see critical issues from the perspective of political, social and agricultural sciences.Without this understanding, intolerance and prejudice often create a barrier to optimal intervention or education of a person/community requiring professional advice.The development of future transnational modules will benefit from the inclusion of professional competencies as part of the module outcome, by following a competency by design process. of perceptions of NOMA track students Selected quotes to illustrate the development of professional attributes You were not just hearing the lecturer speak … just give you information … You' d go back and discuss.It wasn't "this is the way it is … and there's no other way".You could engage … argue back and forth … and fight.You have an idea of how it can be applied in different settings.' (Female student, SA) Not to just rush into something and try and change things but rather to look at the reason why you want to change things … what impact it will have on people' .(Female student, SA) 'I believe using the human rights way where you have inclusion … even the marginalised people, if they are involved in the planning, if you work to get an impact, then they will help you; to create a sense of ownership and responsibility for everyone.' (Female student, Uganda) 'I feel the impact will come.It's not going to be an immediate thing where you suddenly see the light and that everything just flows smoothly.It is a process.' (Female student, SA) 'I don't think I could have foreseen how it would change me or the way I look at things.' (Female student, SA) 'My interest within the nutritional field is at a global level … and the prevention of diseases more than the clinical treatment of diseases.It made me more politically interested and I see my future more clearly than I did before.' (Female student, Norway) HNP = health and nutrition professional.
Space-Time Structure of Loop Quantum Black Hole

In this paper we improve the semiclassical analysis of the loop quantum black hole (LQBH) in the conservative approach of a constant polymeric parameter. In particular we focus our attention on the space-time structure. We introduce a very simple modification of the spherically symmetric Hamiltonian constraint in its holonomic version. The new quantum constraint reduces to the classical constraint when the polymeric parameter goes to zero. Using this modification we obtain a large class of semiclassical solutions parametrized by a generic function of the polymeric parameter. We find that only a particular choice of this function reproduces the black hole solution with the correct asymptotically flat limit. At r = 0 the semiclassical metric is regular and the Kretschmann invariant has a maximum peaked at the Planck length. The radial position of the peak depends neither on the black hole mass nor on the polymeric parameter. The semiclassical solution is very similar to the Reissner-Nordström metric. We construct the Carter-Penrose diagrams explicitly, giving a causal description of the space-time and its maximal extension. The LQBH metric interpolates between two asymptotically flat regions, the $r \to \infty$ region and the $r \to 0$ region. We study the thermodynamics of the semiclassical solution. The temperature, entropy and the evaporation process are regular and can be defined independently of the polymeric parameter. We also study the particular metric obtained when the polymeric parameter goes to zero. This metric is regular at r = 0 and has only one event horizon, at r = 2m. Its Kretschmann invariant maximum depends only on the Planck length; the polymeric parameter plays no role in the resolution of the black hole singularity. The thermodynamics is the same.

INTRODUCTION

Quantum gravity is the theory that attempts to reconcile general relativity and quantum mechanics. In general relativity the space-time is dynamical, so it is not possible to study other interactions on a fixed background, because the background itself is a dynamical field. The theory called "loop quantum gravity" (LQG) [1] is the most widely pursued such approach today. It is a non-perturbative and background-independent approach to quantum gravity. LQG is a quantum geometric fundamental theory that reconciles general relativity and quantum mechanics at the Planck scale, and we expect that it could resolve the classical singularity problems of general relativity. Much progress has been made in this direction in recent years. In particular, the application of LQG techniques to the early universe in the context of minisuperspace models has solved the initial singularity problem [2], [3].

Black holes are another interesting arena for testing the validity of LQG. In past years, applications of LQG ideas to the Kantowski-Sachs space-time [4] led to some interesting results in this field. In particular, it has been shown [5], [6] that it is possible to solve the black hole singularity problem by using tools and ideas developed in full LQG. Other remarkable results have been obtained in the non-homogeneous case [7]. There are also works of a semiclassical nature which try to solve the black hole singularity problem [8], [9]. In these papers the authors use an effective Hamiltonian constraint, obtained by replacing the Ashtekar connection A with the holonomy h(A), and solve the classical Hamilton equations of motion exactly or numerically.
In this paper we try to improve the semiclassical analysis by introducing a very simple modification of the holonomic version of the Hamiltonian constraint. The main result is that the minimum area [11] of full LQG is the fundamental ingredient that solves the black hole space-time singularity problem at r = 0. The $S^2$ sphere bounces on the minimum area $a_0$ of LQG and the singularity disappears. We show that the Kretschmann invariant is regular in all of space-time and that the position of its maximum is independent of the mass and of the polymeric parameter introduced to define the holonomic version of the scalar constraint. The radial position of the curvature maximum depends only on $G_N$ and $\hbar$.

This paper is organized as follows. In the first section we recall the classical Schwarzschild solution in Ashtekar variables and introduce a class of Hamiltonian constraints expressed in terms of holonomies that reduce to the classical one in the limit where the polymeric parameter $\delta \to 0$. We solve the Hamilton equations of motion, obtaining the semiclassical black hole solution for a particular choice of the quantum constraint. In the third section we show the regularity of the solution by studying the Kretschmann scalar, and we write the solution in a very simple form similar to the Reissner-Nordström solution for a black hole with mass and charge. In section four we study the space-time structure and construct the Carter-Penrose diagrams. In section five we show that the solution has a Schwarzschild core at $r \sim 0$. In section six we analyze the black hole thermodynamics, calculating temperature, entropy and the evaporation. In section seven we calculate the limit $\delta \to 0$ of the metric and obtain a regular semiclassical solution with the same thermodynamic properties but with only one event horizon, at the Schwarzschild radius. We analyze its causal space-time structure and construct the Carter-Penrose diagrams.

I. SCHWARZSCHILD SOLUTION IN ASHTEKAR VARIABLES

In this section we recall the classical Schwarzschild solution inside the event horizon [5], [6]. For the homogeneous but non-isotropic Kantowski-Sachs space-time, the Ashtekar variables [12] are

$A = c\,\tau_3\, dx + b\,\tau_2\, d\theta - b\,\tau_1 \sin\theta\, d\phi + \tau_3 \cos\theta\, d\phi$,

together with the conjugate densitized triad, parametrized by the momenta $p_b$ and $p_c$ (2). Using the general relation $E^a_i E^b_j \delta^{ij} = \det(q)\, q^{ab}$ ($q_{ab}$ is the metric on the spatial section), we obtain $q_{ab} = \mathrm{diag}(p_b^2/|p_c|,\ |p_c|,\ |p_c|\sin^2\theta)$. We restrict the integration over x to a finite interval $L_0$, and the Hamiltonian takes the form (4) [6]. The Hamilton equations of motion are given in (5). The solutions of equations (5), using the time parameter $t \equiv e^T$ and fixing the integration constant $e^{T_0} = 2m$ (see the papers in [5], [6]), are given in (6). This is exactly the Schwarzschild solution inside, and also outside, the event horizon, as we can verify by passing to the metric form defined by $h_{ab} = \mathrm{diag}(p_b^2/(|p_c| L_0^2),\ |p_c|,\ |p_c|\sin^2\theta)$ (m contains the gravitational constant $G_N$); the line element is given in (7). Introducing the solution (6) into (7), we obtain the Schwarzschild solution in all of space-time except at t = 0, where the classical curvature singularity is located, and at r = 2m, where there is a coordinate singularity; here $d\Omega^{(2)} = \sin^2\theta\, d\phi^2 + d\theta^2$.

To obtain the Schwarzschild metric we choose $L_0 = p_b^0$. In this way we fix the radial cell to have length $L_0$, and $p_b^0$ disappears from the metric. In the semiclassical LQBH metric, $p_b^0$ does not disappear on fixing $L_0$. At this stage we have not fixed $p_b^0$, but only the dimension of the radial cell. This is the correct choice to reproduce the Schwarzschild solution.
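The dictionary between the Kantowski-Sachs momenta and the Schwarzschild interior can be checked symbolically. The explicit solution (6) is not reproduced in this extraction, so the sketch below assumes the standard identification $p_c = t^2$, $p_b = L_0\, t\sqrt{2m/t - 1}$ (hypothetical stand-in, consistent with the metric form quoted above) and verifies that it yields the Schwarzschild interior components:

```python
import sympy as sp

t, m, L0 = sp.symbols('t m L_0', positive=True)

# Assumed standard identification (Eq. (6) is not printed in this extraction):
p_c = t**2
p_b = L0 * t * sp.sqrt(2*m/t - 1)

# Spatial metric h_ab = diag(p_b^2/(|p_c| L0^2), |p_c|, |p_c| sin^2(theta)):
g_xx = sp.simplify(p_b**2 / (p_c * L0**2))
g_thth = p_c

print(g_xx)    # -> 2*m/t - 1   (Schwarzschild g_xx inside the horizon)
print(g_thth)  # -> t**2        (two-sphere radius squared)
# Together with a lapse N^2 = (2m/t - 1)^(-1), this is the Schwarzschild
# interior; the same expressions continue it outside the horizon (t <-> r).
```

Note also that $p_b^0$ cancels here exactly because of the choice $L_0 = p_b^0$, which is the point made in the text.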
We have defined the dimension of the cell in the x direction to be $L_0 = p_b^0$, obtaining the correct Schwarzschild metric in all of space-time, and we will make the same choice for the semiclassical metric. With this choice $p_b^0$ will not disappear from the semiclassical metric, and in particular from the $p_c(t)$ solution. We will use the minimum area of the full theory to fix $p_b^0$. For the semiclassical solution, at the end of section V we will also give a possible physical interpretation of $p_b^0$.

II. A GENERAL CLASS OF HAMILTONIAN CONSTRAINTS

The correct dynamics is the main open problem of loop quantum gravity. LQG is well defined at the kinematical level, but it is not clear what the correct version of the Hamiltonian constraint is or, more generically, in the covariant approach, what the correct spin-foam model is [13]. An empirical principle for constructing the correct Hamiltonian constraint is to demand the correct semiclassical limit [14].

When we impose spherical symmetry and homogeneity, the connection and densitized triad assume the particular form given in (1). We can choose a large class of Hamiltonian constraints, expressed in terms of holonomies $h^{(\delta)}(A)$, which all reduce to the same classical one (4) when the polymeric parameter $\delta$ goes to zero. We introduce a parametric function $\sigma(\delta)$ that labels the elements in the class of Hamiltonian constraints compatible with spherical symmetry and homogeneity. We call $C_{\rm LQG}$ the constraint of the full theory and $C_{\sigma(\delta)}$ the constraint of the homogeneous spherical minisuperspace model; the reduction from the full theory to the minisuperspace model is represented schematically in (8), where the arrow denotes the spherically symmetric reduction of the full LQG Hamiltonian constraint. To obtain the classical Hamiltonian constraint (4) in the limit $\delta \to 0$, the function $\sigma(\delta)$ must satisfy the condition (9), i.e. it must reduce to unity for $\delta \to 0$. We are going to show that just one particular choice of $\sigma(\delta)$ gives the correct asymptotically flat limit for the Schwarzschild black hole: the asymptotic boundary condition selects the particular form of the function $\sigma(\delta)$.

The classical Hamiltonian constraint can be written in the form (10), where $\Omega = -\sin\theta\,\tau_3\, d\theta \wedge d\phi$ and ${}^0F = dK + [K, K]$ (K is the extrinsic curvature, $A = \Gamma + \gamma K$ and $\Gamma = \cos\theta\,\tau_3\, d\phi$). The holonomies along the directions x, θ, φ for a generic path $\ell$ are defined in (11)-(12). We define the field strength ${}^0F^i_{ab}$ in terms of holonomies as in (13); it is a simple exercise to verify that for $\delta \to 0$ the classical field strength is recovered. The Hamiltonian constraint in terms of holonomies is then given in (14), where $V = 4\pi\sqrt{|p_c|}\, p_b$ is the volume of the spatial section. We have introduced modifications depending on the function $\sigma(\delta)$ only in the field strength, but this is sufficient to obtain a large class of semiclassical Hamiltonian constraints compatible with spherical symmetry.

The Hamiltonian constraint $C_\delta$ in (14) can be substantially simplified in the gauge $N = \gamma\sqrt{|p_c|}\,\mathrm{sgn}(p_c)\,\delta/\sin(\sigma(\delta)\delta b)$, yielding (15). From (15) we obtain two independent sets of equations of motion on the phase space (16). Solving the first three equations and using the Hamiltonian constraint $C_\delta = 0$, with the time parametrization $e^T = t$ and imposing that the Schwarzschild event horizon lie at t = 2m, we obtain the solution (17), where we have defined the quantity $\rho(\delta) = \sqrt{1 + \gamma^2\delta^2}$ together with the other quantities collected in (18).
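Before the asymptotic analysis that follows, the content of the holonomic modification can be previewed numerically. The sketch below (ours, not the paper's) checks three things under stated assumptions: the polymeric substitution $b \to \sin(\sigma\delta b)/(\sigma\delta)$ that underlies (14) reduces to b as $\delta \to 0$; the choice $\sigma(\delta) = 1/\sqrt{1+\gamma^2\delta^2}$, which the next paragraphs single out, satisfies $\sigma(\delta)\rho(\delta) = 1$ exactly; and any other exponent $1+\epsilon$ makes $(2m/t)^{1+\epsilon}$ deviate from $(2m/t)$ by a relative term $\approx -\epsilon\log(t/2m)$ that grows without bound. The value of $\gamma$ is an illustrative placeholder.

```python
import numpy as np

gamma = 0.2375  # Barbero-Immirzi parameter; illustrative value only
m = 1.0

sigma = lambda d: 1.0 / np.sqrt(1.0 + gamma**2 * d**2)
rho   = lambda d: np.sqrt(1.0 + gamma**2 * d**2)

# 1) classical limit of the polymeric substitution
b = 1.3
for d in [1.0, 0.1, 0.01]:
    print(d, np.sin(sigma(d)*d*b) / (sigma(d)*d))   # -> 1.3 as d -> 0

# 2) the preferred choice keeps the exponent at exactly one
print(sigma(0.5) * rho(0.5))                        # -> 1.0 identically

# 3) any other exponent spoils the asymptotics logarithmically
eps = 0.01
for t in [1e2, 1e4, 1e8]:
    deviation = (2*m/t)**(1 + eps) / (2*m/t) - 1.0
    print(t, deviation, -eps*np.log(t/(2*m)))       # comparable; unbounded in log t
```

The last block is exactly the reason, developed next, why the asymptotic boundary condition forces $\sigma(\delta)\rho(\delta) = 1$.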
Now we focus our attention on the term $(2m/t)^{\sigma(\delta)\rho(\delta)}$ appearing in (17). The choice of this term, and in particular the choice of the exponent, is crucial to obtain the correct asymptotically flat limit. The exponent has the form $(2m/t)^{1+\epsilon}$; expanding in powers of the small parameter $\epsilon \sim \delta^2$, we obtain $(2m/t)^{1+\epsilon} \sim (2m/t)\left[1 - \epsilon\log(t/2m)\right]$ at large distance, $t \gg 2m$ (we recall that outside the event horizon the coordinate t plays the role of the spatial radial coordinate). It is straightforward to see that there exists only one possible way to obtain the correct asymptotic limit, and it is given by the choice $\sigma(\delta) = 1/\sqrt{1+\gamma^2\delta^2}$, for which $\sigma(\delta)\rho(\delta) = 1$. In other words, for any $\epsilon \neq 0$ the factor $x^{\epsilon} \sim 1 + \epsilon\log(x)$ deviates logarithmically at large distance ($x \gg 1$). We therefore take $\sigma(\delta) = 1/\sqrt{1+\gamma^2\delta^2}$. Given the correct large-distance limit, and also the regularity of the curvature invariant in all of space-time, we will extend the solution outside the event horizons via the redefinition $t \leftrightarrow r$; we come back to this extension in the next section.

A crucial difference with respect to the classical Schwarzschild solution is that $p_c$ has a minimum at $t_{\rm min} = (\gamma\delta m p_b^0/2)^{1/2}$, with $p_c(t_{\rm min}) = \gamma\delta m p_b^0$. The solution has a space-time structure very similar to the Reissner-Nordström metric and presents an inner horizon at $r_-$, given in (19); for $\delta \to 0$, $r_- \sim m\gamma^4\delta^4/8$. We observe that the inner-horizon position satisfies $r_- < 2m$ for all $\gamma \in \mathbb{R}$ (we recall that $\gamma$ is the Barbero-Immirzi parameter).

Now we study the trajectory in the plane $(p_b/p_b^0, \log(p_c))$ and compare the result with the Schwarzschild solution. In Fig. 1 we show a parametric plot of $(|p_b|, \log(p_c))$: we can follow the trajectory from t > 2m, where the classical (dashed) and semiclassical (continuous) solutions are very close. At t = 2m, $p_c \to (2m)^2$ and $p_b \to 0$ (this point corresponds to the Schwarzschild radius). Decreasing t from this point, we reach a minimum value $p_{c,m} \equiv p_c(t_{\rm min}) > 0$. From $t = t_{\rm min}$, $p_c$ starts to grow again until $p_b = 0$; this point corresponds to a new horizon, located at $t = r_-$. In the interval $t < t_{\rm min}$, $p_c$ grows together with $|p_b|$ and, as is very clear from the plot, the solution approaches a second, specular black hole for $t \to 0$. In particular, there is a second flat asymptotic region at $t \sim 0$.

Metric form of the solution. In this section we write the solution in metric form and extend it to all of space-time. We recall that the Kantowski-Sachs metric is $ds^2 = -N(t)^2 dt^2 + X(t)^2 dx^2 + Y(t)^2 d\Omega^{(2)}$; the metric components are related to the connection variables by (20). We have introduced $\Omega(\delta)$ through a coordinate transformation $x \to \Omega(\delta)\,x$; this transformation is useful to obtain the Minkowski metric in the limit $t \to \infty$. The explicit form of the lapse function $N(t)^2$ in terms of the coordinate t is given in (21). Using the second relation in (20) we obtain the metric component $X^2(t)$ (22), while $Y^2(t)$ corresponds to $|p_c(t)|$ given in (17). The metric obtained has the correct asymptotic limit for $t \to +\infty$, and the semiclassical metric also goes to a flat limit for $t \to 0$: we can say that the LQBH interpolates between two asymptotically flat regions of the space-time. The metric obtained in this paper has the correct flat asymptotic limit for $t \to +\infty$ and reproduces the Minkowski metric for $m \to 0$; neither of these limits is satisfied in the work [8]. The small modification introduced in the holonomic form of the Hamiltonian is necessary for these two fundamental consistency requirements.
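Before extending the solution, the bounce of $p_c$ noted above can be made concrete. The explicit solution (17) is not reproduced in this extraction, so the sketch below uses the simplest profile consistent with the stated extremum, $p_c(t) = t^2 + (\gamma\delta m p_b^0)^2/(4t^2)$, which has its minimum $\gamma\delta m p_b^0$ exactly at $t_{\rm min} = (\gamma\delta m p_b^0/2)^{1/2}$; treat it as an assumed stand-in rather than the paper's formula:

```python
import numpy as np

c = 0.1  # c = gamma*delta*m*p_b0 (illustrative value of the minimum of p_c)

def p_c(t):
    # assumed profile with the stated minimum: p_c(t_min) = c at t_min = sqrt(c/2)
    return t**2 + c**2 / (4.0 * t**2)

t_min = np.sqrt(c / 2.0)
print("t_min =", t_min, "  p_c(t_min) =", p_c(t_min), "  expected:", c)

t = np.logspace(np.log10(t_min) - 2, np.log10(t_min) + 2, 9)
print(np.c_[t, p_c(t)])  # p_c ~ t^2 for t >> t_min and p_c ~ (c/2t)^2 for t << t_min
```

Both asymptotic branches grow like the area of a two-sphere, which is the metric counterpart of the two asymptotically flat regions described above.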
III. LQBH IN ALL SPACE-TIME

In this section we extend the semiclassical metric solution obtained in the previous section to all of space-time. As explained in the previous subsection, the metric solution has the correct flat limit for $t \to 0$ and goes to Minkowski for $m \to 0$. Now we show that the Kretschmann scalar $K = R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ is regular in all of space-time. In terms of N(t), X(t) and Y(t), the Kretschmann scalar is given by (23). In Fig. 2 a graph of K is plotted: it is regular in all of space-time, and its large-t behaviour reproduces the classical, singular scalar $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} = 48 m^2/t^6$.

What about $p_b^0$? We now fix this parameter using the full theory (LQG). In particular, we choose $p_b^0$ in such a way that the position $r_{\rm Max}$ of the maximum of the Kretschmann invariant is independent of the black hole mass. This means that the $S^2$ sphere bounces on a minimum radius that is independent of the mass of the black hole and of $p_b^0$, and depends only on $l_P$. We consider the solution $p_c(t)$ and impose that the minimum area $A_{\rm Min} = 4\pi\gamma\delta m p_b^0$ of the $S^2$ sphere equal the minimum area gap of loop quantum gravity, $a_0 = 2\sqrt{3}\,\pi\gamma l_P^2$. With the choice $\gamma\delta m p_b^0 = a_0/4\pi$ we obtain a significant physical result: we have not imposed that $p_c(t)$ have a minimum at $a_0$; we have only imposed that the minimum of $p_c(t)$ coincide with the minimum area of the full theory. The minimum area of the two-sphere is a result, not a requirement. We observe that this choice of $p_b^0$ fixes the absolute maximum and the relative minimum of $p_b(t)$ to be independent of the mass m, as is manifest from the plot in Fig. 3.

We want to provide an argument to support the choice $p_b^0 \sim a_0/m$. In the paper [15] it is shown that the phase space is parametrized by m and its conjugate momentum $p_m$, and that both are constants of motion (in our notation $p_m = p_b^0$). As is usual in elementary quantum mechanics when deriving the Heisenberg uncertainty relation, we can introduce the state $|\phi\rangle = (\hat m + i\lambda \hat p_m)|\psi\rangle$, where $\hat m$ and $\hat p_m$ are the mass and momentum operators and $\lambda \in \mathbb{R}$; the result is $\langle m^2\rangle = 4m_0^2$ (for $\Delta = \sqrt{3}\, m_0$). Using the Heisenberg uncertainty relation we can then determine $(p_b^0)^2$. We have kept all the coefficients explicit, but the main result is $p_b^0 \sim a_0/m$. However, this is just an argument, not a proof. At the end of section V we will give a physical interpretation of $p_b^0$.

We now want to underline the similarity between the equation of motion for $p_c(t)$ and the Friedmann equation of loop quantum cosmology. We can write the differential equation for $p_c(t)$ in the form (26), from which it is manifest that $p_c$ bounces on the value $a_0/4\pi$. This is quite similar to the loop quantum cosmology bounce [16]. As is evident from Fig. 4, the maximum of the Kretschmann invariant is independent of the mass and is located at $r_{\rm Max} \sim \sqrt{a_0}$ ($a_0 \sim l_P^2$).

At this point we redefine the variables $t \leftrightarrow x$ (with the subsequent identification $x \equiv r$) and the metric components, to bring the solution into the standard Schwarzschild form (27). Schematically, the properties of the metric are listed in (28), and we consider the property (28) sufficient to extend the solution to all of space-time. The solution is summarized in a table (in which the parameter $p_b^0$ is not yet fixed).

As we said in the previous section, the metric solution has two event horizons. An event horizon is defined by a null surface $\Sigma(r, \theta) = \mathrm{const}$. The surface $\Sigma(r, \theta) = \mathrm{const}$ is a null surface if the normal $n_i = \partial\Sigma/\partial x^i$ is a null vector, i.e. satisfies $n^i n_i = 0$. The latter identity says that the vector $n^i$ lies on the surface $\Sigma(r, \theta)$ itself; in fact $d\Sigma = dx^i\, \partial\Sigma/\partial x^i = dx^i n_i = 0$ on the surface.
The norm of the vector $n^i$ is given by (29). In our case (29) reduces to $g^{rr}(\partial\Sigma/\partial r)^2$, and this equation is satisfied where $g^{rr}(r) = 0$, provided the surface is independent of $\theta$, $\Sigma(r, \theta) = \Sigma(r)$. The points where $g^{rr} = 0$ are $r_-$ and $r_+ = 2m$.

We can write the metric in another form, more similar to the Reissner-Nordström space-time: the metric can be written as in (31). If we expand the metric (31) in the parameter $\delta$ and in the minimum area $a_0$, at zeroth order we obtain the Schwarzschild solution; in particular $g_{\theta\theta}(r) = g_{\phi\phi}(r)/\sin^2\theta = r^2 + O(a_0^2)$. The metric thus receives corrections from the polymer parameter $\delta$ and also from the minimum area $a_0$. To check the semiclassical limit, we calculate the perturbative expansion of the curvature invariant for small $\delta$ and $a_0$, and we obtain a quantity that diverges at r = 0 at every order of the expansion. The regularity of K is a non-perturbative result; in fact, for small values of the radial coordinate r, $K \sim 3145728\,\pi^4 r^6/(a_0^4\gamma^8\delta^8 m^2)$, which diverges for $a_0 \to 0$. (For the semiclassical solution the trace of the Ricci tensor, $R = R^\mu{}_\mu$, is not identically zero, as it is for the Schwarzschild solution; we have calculated this operator as well and obtained a quantity regular at r = 0.)

We conclude this section by showing that the position of the peak of the Kretschmann invariant is independent of the polymeric parameter $\delta$. We have plotted the invariant $K(\delta, r)$, with the result shown in Fig. 7: the position of the maximum of the Kretschmann invariant is evidently independent of $\delta$.

Corrections to the Newtonian potential. In this paper we are interested in the singularity problem of black hole physics, not in the post-Newtonian approximation; nevertheless, we give the first correction to the gravitational potential. The gravitational potential is related to the metric by $\Phi(r) = -(g_{tt}(r) + 1)/2$. Expanding the $g_{tt}$ component of the metric in powers of 1/r to order $O(r^{-7})$, for fixed values of the parameter $\delta$ and the minimal gap area $a_0$, we obtain the potential (33), where $\mathcal{P} \equiv \mathcal{P}(\delta)$ is defined in (18).

IV. CAUSAL STRUCTURE AND CARTER-PENROSE DIAGRAM

In this section we construct the Carter-Penrose diagrams [17] for the semiclassical metric (31). To obtain the diagrams we perform several coordinate changes, which we enumerate from one to eight.

1) We can put the metric (31) in the form $ds^2 = g_{00}(r(r^*))(dt^2 - dr^{*2})$ by introducing the tortoise coordinate $r^*$, defined by (34).

2) The second coordinate set is $(u, v, \theta, \phi)$, where $u = t - r^*$ and $v = t + r^*$. The metric becomes $ds^2 = g_{00}(u, v)\, du\, dv$.

3) The singularity on the event horizon $r_+$ disappears using the coordinates $(U^+, V^+, \theta, \phi)$ defined in (36); we also introduce an auxiliary parametric function needed in the subsequent steps.

Now we show that no massive particle can fall to r = 0 in a finite proper time. Consider the radial geodesic equation for a massive point particle (42), where the dot denotes the proper-time derivative and $E_n$ is the point-particle energy. If the particle falls from infinity with zero initial radial velocity, the energy is $E_n = 1$. We can write (42) in a more familiar form in terms of an effective potential $V_{\rm eff}(r)$ (43); a plot of $V_{\rm eff}$ is given in Fig. 9. At r = 0, $V_{\rm eff}(r = 0) = 4m^4\pi^2\gamma^8\delta^8/a_0^2$, so any particle with $E_n < V_{\rm eff}(0)$ cannot reach r = 0. If the particle energy is $E_n > V_{\rm eff}(0)$, the geodesic equation for $r \sim 0$ is $\dot r^2 \sim r^4$; integrating, $\tau \sim 1/r - 1/r_0$, i.e. $\Delta\tau \equiv \tau(r_0) - \tau(0) \to +\infty$.
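The divergence of the proper time follows from a one-line integration. Near r = 0 the text gives $\dot r^2 \sim r^4$, i.e. $d\tau = dr/(\sqrt{k}\, r^2)$ for some constant k; the sketch below (ours, with k set to 1 for illustration) integrates this numerically and shows the elapsed proper time growing like $1/\varepsilon$ as the inner endpoint $\varepsilon \to 0$:

```python
from scipy.integrate import quad

r0 = 1.0  # starting radius (arbitrary units); k = 1 for illustration
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    tau, _ = quad(lambda r: 1.0 / r**2, eps, r0)
    print(f"eps={eps:.0e}  tau={tau:12.1f}  1/eps - 1/r0={1/eps - 1/r0:12.1f}")
# tau equals 1/eps - 1/r0, so tau -> infinity as eps -> 0:
# the surface r = 0 is reached only after an infinite proper time.
```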
We can compose the diagrams in Fig. 8 to obtain a maximal extension similar to the Reissner-Nordström one; the result is represented in Fig. 10.

V. ASYMPTOTIC SCHWARZSCHILD CORE NEAR r ∼ 0

In this section we study the $r \sim 0$ limit of the metric (31). Expanding the metric very close to the point $r \sim 0$, we obtain the form (44), with parametric functions a, b, c, d; in particular $a = 64\,\Omega(\delta)\, m^4\pi^2\gamma^4\delta^4\,\mathcal{P}(\delta)^2$, the remaining functions being given in (45). We consider the coordinate change $R = 1/(r\sqrt{c})$; the point r = 0 is mapped to $R = +\infty$. The metric in the new coordinates is (46), where $m_1$ and $m_2$ are functions of m, $a_0$, $\delta$, $\gamma$ (47). For small $\delta$ we obtain $m_1 \sim m_2$, and (46) converges to the Schwarzschild metric of mass $M \sim a_0/(2m\pi\gamma^4\delta^4)$. We can conclude that the space-time near the point $r \sim 0$ is described by an effective Schwarzschild metric of mass $M \sim a_0/m$ in the large-distance limit $R \gg M$: an observer in the asymptotic region r = 0 experiences a Schwarzschild metric of mass $M \sim a_0/m$.

We now want to give a possible physical interpretation of $p_b^0$. If we reintroduce $p_b^0 \sim a_0/m$ into the core mass M defined above, we obtain $M \sim p_b^0$; we can therefore interpret $p_b^0$ as the mass of the black hole as seen by an observer at $r \sim 0$. In [9] the authors interpret $p_b^0$ as the mass of a second black hole; in our analysis, instead, $p_b^0$ appears to be the mass of the same black hole, but from the point of view of an observer in the asymptotic region $r \sim 0$.

VI. LQBH THERMODYNAMICS

In this section we study the thermodynamics of the LQBH [19]. The metric calculated in the previous sections has the general form (48), where the functions f(r), g(r) and h(r) depend on the mass parameter m and are the components of the metric (31). We can introduce the null coordinate v to express the metric (48) in Bardeen form. The null coordinate v is defined by the relation $v = t + r^*$, where $r^* = \int^r dr/\sqrt{f(r)g(r)}$, so that $dv = dt + dr/\sqrt{f(r)g(r)}$. In the new coordinates the metric takes the form (49).

We can regard our black hole solution as generated by an effective matter fluid that simulates the loop quantum gravity corrections (in analogy with the paper [19]). The effective gravity-matter system satisfies, by definition, the Einstein equations $G = 8\pi T$, where T is the effective energy tensor. The stress-energy tensor of a perfect fluid compatible with the space-time symmetries is $T^\mu{}_\nu = (-\rho, P_r, P_\theta, P_\theta)$, and in terms of the Einstein tensor the components are $\rho = -G^t{}_t/8\pi G_N$, $P_r = G^r{}_r/8\pi G_N$ and $P_\theta = G^\theta{}_\theta/8\pi G_N$. The semiclassical metric at zeroth order in $\delta$ and $a_0$ is the classical Schwarzschild solution $g^C_{\mu\nu}$, which satisfies $G^\mu{}_\nu(g^C) \equiv 0$.

A. Temperature

In this paragraph we calculate the temperature of the quantum black hole solution and analyze the evaporation process. The Bekenstein-Hawking temperature is given in terms of the surface gravity $\kappa$ by $T = \kappa/2\pi$; the surface gravity is defined by $\kappa^2 = -g^{\mu\nu}g_{\rho\sigma}\nabla_\mu\chi^\rho\nabla_\nu\chi^\sigma/2 = -g^{\mu\nu}g_{\rho\sigma}\Gamma^\rho_{\mu 0}\Gamma^\sigma_{\nu 0}/2$, where $\chi^\mu = (1, 0, 0, 0)$ is the timelike Killing vector and $\Gamma^\mu_{\nu\rho}$ is the connection compatible with the metric $g_{\mu\nu}$ of (48). Using the semiclassical metric, we can calculate the surface gravity at r = 2m and from it the temperature (50). The temperature (50) coincides with the Hawking temperature in the large-mass limit. In Fig. 11 we plot the temperature as a function of the black hole mass m: the dashed trajectory corresponds to the Hawking temperature and the continuous trajectory to the semiclassical one.
There is a substantial difference at small masses: the semiclassical temperature tends to zero rather than diverging for $m \to 0$. The temperature is maximal at $m^* = 3^{1/4}\sqrt{a_0}/\sqrt{32\pi}$, where it takes the value $T^* = 3^{3/4}\,\sigma(\delta)\sqrt{\Omega(\delta)}/\sqrt{32\pi a_0}$. This result, too, like the regularity of the curvature invariant, is a quantum-gravity effect: $m^*$ depends only on the Planck area $a_0$. If we take the limit $\delta \to 0$ in T(m) and $T^*$, we obtain two physical quantities that are independent of $\delta$ (51).

B. Entropy

In this section we calculate the entropy of the LQBH metric. By definition, the entropy as a function of the ADM energy is $S_{BH} = \int dm/T(m)$. Calculating this integral for the LQBH, we find (52). We can express the entropy in terms of the event horizon area: the event horizon area (at r = 2m) is given in (53); inverting (53) for m = m(A) and introducing the result into (52), we obtain the entropy as a function of the area (54). A plot of the entropy is given in Fig. 12: the first panel represents the entropy as a function of the event horizon area A, and the second panel the event horizon area as a function of m. The semiclassical area has a minimum value $A = a_0$ at $m = \sqrt{a_0/32\pi}$.

As for the temperature, we can also calculate the limit $\delta \to 0$ of the entropy, and we obtain a regular quantity that depends on the event horizon area and on the Planck area but is independent of $\delta$ (55). In the limit $a_0 \to 0$, $S \to A/4$. We emphasize that the parameter $\delta$ plays no regularizing role in the observable quantities T(m), $T^*$, $m^*$, nor in the evaporation process studied in the next section: we obtain finite quantities even in the limit $\delta \to 0$. This is an important prediction of the model.

C. The evaporation process.

In this section we focus our attention on the evaporation of the black hole mass, and in particular on the energy flux from the hole. First of all, the luminosity can be estimated using the Stefan law, $L(m) = \alpha A(m) T_{BH}^4(m)$, where (for a single massless field with two degrees of freedom) $\alpha = \pi^2/60$, A(m) is the event horizon area and T(m) is the temperature calculated in the previous section. At first order in the luminosity, the metric (49), with the decreasing mass written as a function of the null coordinate v, is again a solution, but with a new effective stress-energy tensor, as underlined previously. Introducing the results (50) and (53) of the previous paragraphs into L(m), we obtain

$L(m) = \dfrac{4194304\, m^{10}\pi^3\alpha\,\sigma^4\Omega^2}{(a_0^2 + 1024\, m^4\pi^2)^3}$.
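The thermodynamic relations quoted above can be cross-checked numerically. The displayed equations (50)-(55) are not reproduced in this extraction, so the sketch below (our reconstruction, not the paper's code) uses $T(m) = 128\pi\sigma\sqrt{\Omega}\, m^3/(1024\pi^2 m^4 + a_0^2)$ and $A(m) = 16\pi m^2 + a_0^2/(64\pi m^2)$, forms consistent with every stated property: the Hawking limit $1/8\pi m$, the quoted maximum $(m^*, T^*)$, the minimum area $A = a_0$ at $m = \sqrt{a_0/32\pi}$, and the quoted luminosity $L = \alpha A T^4$:

```python
import numpy as np

a0 = 1.0                 # minimum area gap (Planck units, illustrative)
sigma = Omega = 1.0      # delta -> 0 values, so T has the pure Hawking limit
alpha = np.pi**2 / 60    # Stefan constant for one massless field

def T(m):   # temperature consistent with the quoted m*, T* and Hawking limit
    return 128*np.pi*sigma*np.sqrt(Omega)*m**3 / (1024*np.pi**2*m**4 + a0**2)

def A(m):   # horizon area consistent with the quoted minimum A = a0
    return 16*np.pi*m**2 + a0**2/(64*np.pi*m**2)

def L(m):   # luminosity exactly as displayed in the text
    return (4194304*m**10*np.pi**3*alpha*sigma**4*Omega**2
            / (a0**2 + 1024*m**4*np.pi**2)**3)

m_star = 3**0.25*np.sqrt(a0)/np.sqrt(32*np.pi)
T_star = 3**0.75*sigma*np.sqrt(Omega)/np.sqrt(32*np.pi*a0)
print("T(m*) =", T(m_star), " vs quoted T* =", T_star)          # equal
print("Hawking ratio at m=100:", T(100.0)*8*np.pi*100.0)         # -> 1
print("A at m=sqrt(a0/32pi):", A(np.sqrt(a0/(32*np.pi))))        # -> a0
m = 0.7
print("Stefan check:", L(m), " vs alpha*A*T^4 =", alpha*A(m)*T(m)**4)
# Evaporation time: dm/dv = -L(m), so t = integral of dm/L(m); near m = 0
# one has L ~ m^10, hence the integral diverges: evaporation takes
# infinite time in this semiclassical treatment, as stated below.
```

All printed checks close numerically, which supports the reconstruction; the authoritative expressions remain Eqs. (50)-(55) of the original paper.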
VII. THE METRIC FOR δ → 0

We have shown in the previous section that some physical observables can be defined independently of the polymeric parameter $\delta$. This result suggests calculating the limit of the semiclassical metric (31) for $\delta \to 0$: we will obtain a regular metric and study its space-time structure. In the quantum theory we cannot take the limit $\delta \to 0$, because we do not have weak continuity in the polymeric parameter $\delta$. However, the LQBH metric (31) is very close to the Reissner-Nordström metric, which is unstable, and this suggests that (31), too, could be unstable when inhomogeneities are considered [20]. If that is the case, then the horizon $r_-$ disappears or, in other words, by (19), $\mathcal{P}(\delta) \to 0$. Another motivation for calculating and studying this extreme limit of the metric is to show that the polymeric parameter plays no role in the resolution of the singularity problem. The behaviour of $\mathcal{P}(\delta)$ for $\delta \to 0$ is shown in Fig. 13. We redefine the metric of (31) with an explicit dependence on $\delta$ (the redefinition is $g_{\mu\nu}(r) \to g_{\mu\nu}(r; \delta)$).

The new metric is mathematically defined by $\lim_{\delta\to 0} g_{\mu\nu}(r; \delta) \equiv g_{\mu\nu}(r)$ (60). The result of this limit is the very simple metric (61), which is independent of the polymeric parameter $\delta$. This metric has an event horizon at $r_+ = 2m$, in accordance with the solution for general values of $\delta$; in fact $\lim_{\delta\to 0} r_- = 0$. The question now is whether the solution is regular in all of space-time, and in particular at r = 0. We can calculate the Kretschmann invariant, obtaining

$K(r) = \dfrac{65536\,\pi^4 r^2}{(a_0^2 + 64\pi^2 r^4)^6}\Big(-6291456\, a_0^2\pi^6\, m(2m - r)\, r^{12} + 50331648\, m^2\pi^8 r^{16} + a_0^8(15m^2 - 24mr + 11r^2) - 128\, a_0^6\pi^2 r^4(36m^2 - 56mr + 17r^2) + 4096\, a_0^4\pi^4 r^8(294m^2 - 272mr + 63r^2)\Big). \quad (62)$

The invariant (62) is regular in all of space-time, and in particular at r = 0. For $a_0 \sim 0$ we find $K(r) = 48m^2/r^6 + O(a_0^2)$, while for $r \sim 0$ we have $K(r) = 983040\, m^2\pi^4 r^2/a_0^4 + O(r^3)$, which shows the non-perturbative character of the singularity resolution. From the second picture in Fig. 16 it is evident that the r-coordinate of the peak of the curvature invariant K is independent of the black hole mass; this is exactly the result obtained for general $\delta$, now in the limit $\delta \to 0$. From this point on, the analysis is the same as before: temperature, entropy and evaporation are the same as in (51), (55) and (58).
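Since (62) is given in closed form, its regularity and the mass-independence of the peak can be verified directly. A minimal check (ours), in Planck units with $a_0 = 1$:

```python
import numpy as np

a0 = 1.0

def K(r, m):
    # Kretschmann invariant (62) of the delta -> 0 metric
    poly = (-6291456*a0**2*np.pi**6*m*(2*m - r)*r**12
            + 50331648*m**2*np.pi**8*r**16
            + a0**8*(15*m**2 - 24*m*r + 11*r**2)
            - 128*a0**6*np.pi**2*r**4*(36*m**2 - 56*m*r + 17*r**2)
            + 4096*a0**4*np.pi**4*r**8*(294*m**2 - 272*m*r + 63*r**2))
    return 65536*np.pi**4*r**2 / (a0**2 + 64*np.pi**2*r**4)**6 * poly

r = np.logspace(-3, 3, 20001)
for m in [1.0, 10.0, 100.0]:
    r_peak = r[np.argmax(K(r, m))]
    print(f"m={m:6.1f}  peak at r={r_peak:.4f}  "
          f"large-r ratio K/(48 m^2/r^6) = {K(r[-1], m)/(48*m**2/r[-1]**6):.4f}")
# K vanishes at r = 0, is finite everywhere, tends to 48 m^2/r^6 at large r,
# and the peak radius is (nearly) the same for every mass: it is fixed by
# a0 ~ l_P^2 alone, as the text states.
```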
Causal structure and Carter-Penrose diagrams

In this section we construct the Carter-Penrose diagrams for the metric obtained in the limit $\delta \to 0$. To obtain the diagrams we must perform several coordinate changes, which we enumerate from one to five.

1) First of all we calculate the tortoise coordinate $r^*$ for the metric (61), defined by $dr^{*2} = -g_{11}(r)\, dr^2/g_{00}(r)$ (64). The coordinate (64) reduces to the Schwarzschild tortoise coordinate $r^* = r + 2m\log|r - 2m|$ for $a_0 \to 0$. Using the coordinates $(t, r^*, \theta, \phi)$, the metric is $ds^2 = g_{00}(r(r^*))(dt^2 - dr^{*2}) + g_{\theta\theta}(r(r^*))\, d\Omega^{(2)}$ (65), where $g_{00}(r(r^*))$ is implicitly defined by (64) (from now on we omit the $S^2$ part of the metric).

2) Now we write the metric in the coordinates $(v, w, \theta, \phi)$ defined by $v = t + r^*$ and $w = t - r^*$. The metric becomes (66), where r is defined implicitly in terms of v and w.

3) We can perform another coordinate change that leaves the two-dimensional sector conformally invariant. The new coordinates $(v', w', \theta, \phi)$ are defined by $v' = v'(v)$ and $w' = w'(w)$; the metric takes the form (67), with all the coordinates in the conformal factor implicitly defined in terms of t', x'. At this point we choose the functions v'(v) and w'(w) explicitly so as to eliminate the singularity at r = 2m. Following the analysis of the Schwarzschild case, we take $v'(v) = \exp(v/\lambda)$ and $w'(w) = -\exp(-w/\lambda)$, where $2/\lambda = 512\pi^2 m^3/(a_0^2 + 1024\pi^2 m^4)$. This is the correct coordinate change, in our case too, to eliminate the coordinate singularity on the event horizon. We define the function $F^2(r) = -g_{00}\,(\partial v/\partial v')(\partial w/\partial w')$, whose expression in terms of the radial coordinate r is given in (68).

To conclude the analysis, we extend the radial coordinate to negative values. The surface $\Sigma(r, \theta) = r = 0$ is a null surface, as can be shown following the analysis in section III (in particular $g^{rr}|_{r=0} = 0$). We can extend the radial coordinate r to negative values because the space-time is singularity free. The metric is asymptotically flat for $r \to -\infty$ and at order $O(r^{-2})$ takes the form (71). Since $r_+ = 2m > 0$ and $r_- = 0$, there are no event horizons in the negative-r region. The metric (61) is regular in all of space-time, $-\infty < r < +\infty$.

The Carter-Penrose diagrams are shown in Fig. 18. We can obtain the same results of this section in another, equivalent way. Essentially, what we have done in this section is to show that, to solve the black hole singularity problem at the semiclassical level, it is sufficient to replace the component c(t) with the holonomy $h = \exp(\delta c)$, without replacing the component b(t) with the corresponding holonomy. In fact, the solution (61) can be obtained directly from the semi-quantum Hamiltonian constraint (72). The scalar constraint (72) is classical in the $(b, p_b)$ sector but quantum in the $(c, p_c)$ sector ($N = \gamma\sqrt{|p_c|}\,\mathrm{sgn}(p_c)/b$ and $\sigma(\delta) = 1$).

The constraint introduced in (14) is not the most general one. We can introduce two different polymeric parameters, $\delta_b$ and $\delta_c$, in the directions $\theta, \phi$ and r respectively, obtaining the constraint (73), with $N = \gamma\sqrt{|p_c|}\,\mathrm{sgn}(p_c)\,\delta_b/\sin(\sigma(\delta_b)\delta_b b)$. The scalar constraint (72) is then obtained by taking the limit $\delta_b \to 0$ (74).

The main result is that the singularity problem is solved by a bounce of the two-sphere on a minimal area $a_0$. The parameter $\delta$ plays no role in the resolution of the singularity problem: this is evident from the Kretschmann invariant (62), which is independent of $\delta$. The parameter $\delta$ is related to the position of the inner horizon, and for $\delta \to 0$ the horizon $r_-$ disappears.

CONCLUSIONS & DISCUSSION

In this paper we have introduced a simple modification of the holonomic Hamiltonian constraint which, when the Hamilton equations of motion are solved, gives a metric with the correct semiclassical asymptotically flat limit. We recall the LQBH metric here as (75). We have shown that the LQBH metric (75) has the following properties:

1. $\lim_{r\to+\infty} g_{\mu\nu}(r) = \eta_{\mu\nu}$,
2. $\lim_{r\to 0} g_{\mu\nu}(r) = \eta_{\mu\nu}$,
3. $\lim_{m, a_0 \to 0} g_{\mu\nu}(r) = \eta_{\mu\nu}$,
4. $K(g) < \infty\ \ \forall r$,
5. the position $r_{\rm Max}$ of the maximum of K is independent of m and $\delta$.

In particular (see point 5), the position $r_{\rm Max}$ at which the Kretschmann invariant is maximal is independent of the black hole mass and of the polymeric parameter $\delta$. The metric has two event horizons, which we have called $r_+$ and $r_-$: $r_+$ is the Schwarzschild event horizon and $r_-$ is an inner horizon. The solution has many similarities with the Reissner-Nordström metric, but without curvature singularities. In particular, the region r = 0 corresponds to another asymptotically flat region, which no massive particle can reach in a finite proper time. A careful analysis shows that the metric has a Schwarzschild core at $r \sim 0$, of mass $M \sim a_0/m$.

We have calculated the limit $g_{\mu\nu}(\delta \to 0; r)$ of the LQBH metric, obtaining another metric regular at r = 0. This solution can also be obtained from (75) by taking the limit $\delta \to 0$ or, more simply, by setting $\mathcal{P}(\delta) = 0$ and $r_- = 0$; the result is the metric (76). This metric can be seen as a solution of the Hamilton equations of motion for the semi-quantum scalar constraint (72). Our analysis shows that the singularity problem is solved by a bounce of the $S^2$ sphere on a minimum area $a_0 > 0$. This happens for both metrics obtained in this paper, the first one of Reissner-Nordström type (75) and the second one of Schwarzschild type (76). The parameter $\delta$ plays no role in the singularity resolution. The solution (76) has all the good properties of (75), and in particular it is singularity free. This metric has an event horizon at r = 2m, and its thermodynamics is exactly that of (75). When we consider the maximal extension to r < 0, we find a second, internal event horizon at r = 0.
We have studied the black hole thermodynamics: temperature, entropy and the evaporation process. The main results are:

1. The temperature T(m) is regular for $m \to 0$ and coincides with the Hawking temperature in the large-mass limit (77).

2. The black hole entropy can be expressed in terms of the event horizon area and the LQG minimum area eigenvalue (cf. (55)).

3. The evaporation process requires an infinite time in our semiclassical analysis, but the difference from the classical result becomes evident only at the Planck scale. Under such extreme energy conditions a complete quantum-gravity analysis is necessary, which could imply a complete evaporation [18].

We have shown that it is possible to take the limit $\delta \to 0$ in T(m), S(A) and in the evaporation-process equation $F(m; m_0, a_0) = v$, obtaining regular quantities independent of the polymeric parameter $\delta$. The results of this limit are physical quantities that depend only on the Planck area, not on the polymeric parameter.

We want to conclude the discussion with a stimulating observation. In this paper we have calculated the temperature (77), which in general can be seen as a relation between temperature, mass and the minimum area $a_0$. If we solve (77) for the minimum area, we obtain the universal critical behaviour $a_0 \sim (T_c - T)^{1/2}$. The critical exponent $\zeta = 1/2$ is independent of the mass and of the particular choice of Hamiltonian-constraint modification. The critical temperature is the classical Hawking temperature $T_c = 1/8\pi m$ [21].

Some open problems. In this paper we have fixed the parameter $p_b^0$ (which arises from the integration of the Hamilton equations of motion) by introducing the minimum area $a_0$ of the full theory into the metric solution. In this way we have obtained a bounce of the $S^2$ sphere on the minimum area $a_0$. A priori it is not obvious how to obtain the same bounce at the quantum level; however, solving the quantum constraint, we expect to obtain a bounce on a minimum area $a_0 \sim G_N \hbar$. The quantum Einstein equations (QEE) contain only dimensionless quantities, namely the eigenvalues $\tau, \mu$ of the operators $\hat p_c$, $\hat p_b$ and the polymeric parameter $\delta$. When we reintroduce length dimensions in the QEE, we have $\mu \equiv 2p_b/\gamma l_P^2$ and $\tau \equiv p_c/\gamma l_P^2$; then in the quantum evolution $l_P^2$ will play the role played by $a_0$ in the semiclassical analysis, and we will have a quantum bounce of the wave function on $l_P^2 \sim a_0$. This is manifest in the effective Wheeler-DeWitt equation obtained from the QEE in the limit $\mu \gg \delta$, $\tau \gg \delta$ [6], where $a_0^2 \sim l_P^4$ appears explicitly. However, the quantum evolution of a coherent Schwarzschild state remains an open problem.

A related problem is that we have fixed the integration in the x direction to a cell of finite volume $L_x$, which can imply a resolution of the singularity problem that is not invariant under a rescaling $L_x \to L'_x$ [23]. Another problem concerns the entropy calculation: we obtain a regular entropy, but not the usual logarithmic correction. We think it is possible to solve this problem with a simple modification of the holonomic version of the Hamiltonian constraint, or by taking into account the possibility that quantum properties of the background space-time alter the geometry near the horizon [24]. Other problems are related to the maximal extension of the space-time. If we observe the diagram in Fig. 17 carefully, we can see that closed time-like curves (CTCs) are possible. This is manifest in Fig. 19, where a null CTC is represented by a closed black curve; in the second diagram of Fig. 19 we have represented the light cones along a CTC. We can also have CTCs with just one diagram if we identify the upper and lower extremes of the diagram in Fig. 18.
Population fitness has a concave relationship with migration distance in Sanderlings Handling Editor: Jennifer Gill Abstract In Focus: Reneerkens, J., Versluijs, T. S. L., Piersma, T., Alves, J. A., Boorman, M., Corse, C., ... Lok, T. (2020). Low fitness at low latitudes: wintering in the tropics increases migratory delays and mortality rates in an Arctic breeding shorebird. Journal of Animal Ecology, 89, 691–703. A central question in migratory ecology has been to understand the fitness consequences of individual variation in migration distance among different species and populations. Reneerkens et al. (2020) investigated the demographic consequences of long-distance migration for Sanderlings Calidris alba, an Arctic-breeding species of sandpiper. Their study population has a remarkable geographic distribution with a breeding range that is concentrated in northeast Greenland and Ellesmere Island, Canada but a nonbreeding range that extends across 85° of latitude from Scotland to Namibia. The authors report on unexpected patterns of latitudinal variation in three demographic parameters: timing of passage on northward migration, probability of juvenile migration and apparent survival of adults. Sanderlings travelling 1,800–2,800 km to settle at north temperate sites during the nonbreeding season had earlier passage dates, and also higher probabilities of migration and apparent survival. In contrast, birds travelling 6,000–7,800 km to equatorial sites experienced later passage dates, delayed maturity and lower apparent survival. However, if Sanderlings migrated even farther and flew over 11,000 km to nonbreeding sites in Namibia, then their performance was restored to early passage dates and higher survival. Movement tracks from birds tagged with geolocators showed that birds wintering in Namibia make nonstop flights of 7,500 km that bypass West Africa during northward migration. Thus, all lines of evidence suggest that Sanderlings face adversity when spending the nonbreeding season at equatorial latitudes. Moreover, the central finding that components of fitness can have nonlinear relationships with migration distance is a novel discovery that leads to many additional questions. The new findings have broader implications for theoretical models of migration, and for understanding how different patterns of movements may arise or be maintained in migratory species. Long-distance migration has evolved multiple times in different groups of animals and is one of the most amazing features of the natural world. Patterns of migration vary widely with numerous examples of partial migration where populations include both resident and migratory individuals, and differential migration where migration distances vary among different demographic groups (Cristol, Baker, & Carbone, 1999). Investigations of migratory behaviour from a cost-benefit perspective often consider migration to be a costly stage of the life cycle due to high energetic costs and elevated mortality rates, with the expectation that exposure to risk should increase over longer migration distances (Alerstam, Hedenström, & Åkesson, 2003). 
Arctic-breeding waders are an interesting group for field studies of the costs of migration due to their global distribution and use of different habitats, an ability to cross ecological barriers with extreme nonstop flights, and a remarkable diversity of mating systems and parental care (Conklin, Senner, Battley, & Piersma, 2017; García-Peña, Thomas, Reynolds, & Székely, 2009). Sanderlings occupy nonbreeding sites both along the eastern Atlantic flyway (Grond, Ntiamoa-Baidu, Piersma, & Reneerkens, 2015) and in the Western Hemisphere (Myers et al., 1990). Sanderlings are intriguing because they exhibit a diverse range of behaviours at all stages of their annual cycle. During the breeding season, the mating system includes social monogamy with biparental incubation, but also social and genetic polygamy with uniparental incubation (Reneerkens, Veelen, Velde, Luttikhuizen, & Piersma, 2014). During the nonbreeding season, Sanderlings can adjust their social behaviour to alternate between territorial defence or flocking depending on resource dispersion and the cost of defending feeding territories against intruders (Myers, Connors, & Pitelka, 1979). Here, Reneerkens et al. (2020) present new data that show that Sanderlings also vary greatly in their migration behaviour, with a fivefold difference in migration distance for birds that travelled to different nonbreeding sites across 85° of latitude between Scotland and Namibia. Tracking small-bodied birds across such vast distances presents great logistical challenges. Reneerkens and Versluijs successfully led an international coalition of investigators with coauthors from eight different countries in Europe and West Africa. Together, the research team captured a total of 5,863 birds, of which 89% were individually colour-ringed and available for resighting in a 7-year period from 2007 to 2013. Resighting data were collected by the authors with additional help from over 2,000 citizen science observers who helped to track colour-ringed birds across the entire eastern Atlantic flyway. One immediate question that arises from the new results is: what factors determine the incredible variation in migration distance for Sanderlings, which is presumably due to settlement decisions taken during their natal year? Differential migration is common among Arctic-breeding sandpipers (Calidris spp.), with the general patterns that males and juveniles tend to winter at more northerly sites, whereas females and adults often migrate longer distances (Nebel et al., 2002; Tavera, Lank, & Gonzalez, 2016). In an early paper, Myers (1983) suggested that 'Sanderlings have no friends' and argued that intraspecific competition is important for structuring populations during the nonbreeding season. In the case of Western Sandpipers Calidris mauri, the sexes exhibit strong dimorphism in bill length, and differential migration is related to latitudinal clines in intertidal food resources (Mathot, Smith, & Elner, 2007). Reneerkens et al. (2020) were able to reject similar explanations for Sanderling in the eastern Atlantic flyway because they found little evidence of biologically relevant variation in morphology, body condition or demographic classes across their latitudinal gradient of nonbreeding sites. The factors that determine initial settlement at a nonbreeding site remain unknown, but the authors suggest that equatorial sites in West Africa could be an ecological trap if strong site fidelity limits the options for surviving birds to explore or disperse to a better nonbreeding site. Reneerkens et al.
(2020) confirmed that migration distance entails fitness costs but, quite unexpectedly, they found that the relationships with fitness were nonlinear. One of the great features of their study system was that Sanderlings from different nonbreeding sites stage together at coastal estuaries in Iceland before migrating to Greenland and northern Canada. Their first finding was that passage dates on northward migration (D_m) had a convex relationship with migration distance for birds from different nonbreeding sites. Timing of migration was estimated for birds resighted at the staging site, and actual passage dates might be different if some proportion of the population did not stop in Iceland. Nevertheless, timing of northward migration was early for Sanderlings from Europe, intermediate for Namibia, and latest for birds from West Africa. Supplementary tracks from birds tagged with geolocators showed that birds wintering in Namibia were able to bypass West Africa on northward migration and catch up with more northerly populations. Timing of migration is likely to be correlated with arrival at the breeding grounds, particularly since Iceland is the last stop for northbound birds. Fitness could be affected by seasonal declines in reproductive performance (Weiser et al., 2018), but daily nest survival actually increases during the breeding season for Sanderlings at the authors' long-term study site in eastern Greenland (Reneerkens et al., 2016). Early breeding might still be advantageous if it allows females to find mates or lay replacement clutches (Morrison, Alves, Gunnarsson, Þórisson, & Gill, 2019). Another component of fitness that remains poorly known for arctic-breeding waders is the effect of seasonal timing on juvenile survival on southbound migration. Early breeding and early departure might also help juveniles avoid seasonal increases in predation risk from falcons at stopover sites (Niehaus & Ydenberg, 2006). A second key finding was that migration distance had an effect on the subsequent probability that a juvenile Sanderling would migrate north as a yearling (M_j). Reneerkens et al. (2020) found that the probability of juvenile migration followed a threshold model where it was <0.3 for birds at nonbreeding sites in West Africa but increased to one for birds at nonbreeding sites in Europe. The authors had no information on probability of migration for Namibia, but juvenile Sanderlings that winter further south in South Africa have a high probability of migration (M_j = 0.95, Summers, Underhill, & Prŷs-Jones, 1995). Other species of arctic-breeding sandpipers also oversummer as yearlings with delayed maturity (O'Hara, Fernandez, Becerril, de la Cueva, & Lank, 2005; Summers et al., 1995; Tavera et al., 2016), and sometimes as adults with intermittent breeding (Martínez-Curci et al., 2015). Migratory divides, where yearlings migrate north to breed from northern nonbreeding sites but oversummer at equatorial sites, are now known to occur in Sanderlings and also Western Sandpipers (Fernández, O'Hara, & Lank, 2004). Proximate explanations for delayed maturity might include effects of migration distance on feather wear and the timing or costs of moult, or access to the food resources needed for migratory fueling.
Counterintuitively, Sanderlings spend less time foraging and have higher rates of food intake at nonbreeding sites at equatorial latitudes, but food quality was lower because the diet was mainly bivalves in Ghana but soft-bodied polychaetes in the Netherlands (Grond et al., 2015). Oversummering as yearlings likely reduces fitness due to delayed entry of their offspring into the breeding population. Delayed age at maturity could be adaptive if life-history trade-offs with productivity or adult survival were present, but that does not seem to be the case in Sanderlings. If Sanderlings do not oversummer as yearlings in Namibia, then the life-history strategy of long-distance migrants may have a comparable fitness to birds that winter at north temperate latitudes. The last major finding was a concave relationship between the annual probability of adult survival (S_a) and nonbreeding latitude, where apparent survival rates were c. 0.74 in West Africa but 0.84-0.87 in Namibia and Europe. Interpretation of apparent survival as a measure of fitness can be challenging because population losses can be due to mortality, permanent emigration or some combination of the two processes (Sandercock, 2020). If all losses were due to mortality, low apparent survival at equatorial nonbreeding sites would indicate fitness costs are due to low true survival. Reneerkens et al. (2020) argued that their estimate was close to true survival because Sanderlings were highly site faithful to nonbreeding areas, observers resighted birds at all key staging sites and auxiliary resightings were collected rangewide. However, site fidelity of waders to nonbreeding sites can be variable (Myers, 1984; Rehfisch, Insley, & Swann, 2003), and short movements might have led to emigration to sites without observers. The geolocator tracks also demonstrate that Sanderlings can skip staging sites during migration and might not always be available for resighting. Even with remarkable effort from a network of citizen scientists, it is unlikely that sampling coverage could be complete for the entire migratory range. Thus, an alternative explanation for low apparent survival at equatorial latitudes could be that poor quality feeding conditions led to emigration from tropical nonbreeding sites or to migratory movements where birds were harder to detect. The three demographic parameters can be combined with components of fecundity and seasonal estimates of daily nest survival to calculate fitness (McGraw & Caswell, 1996). Population fitness of Sanderlings has a concave relationship with migration distance and is also sensitive to annual variation in nest survival (Figure 1).

FIGURE 1. Sanderlings Calidris alba breed at arctic sites in Greenland and Canada but have a fivefold variation in migration distance to nonbreeding sites that extend across 85° of latitude from Scotland to Namibia. (a) Reneerkens et al. (2020) report that three demographic parameters varied among nonbreeding populations at different latitudes: a convex relationship for passage dates on northward migration through Iceland (D_m, day 145 = 25 May), a threshold relationship for the probability of migration among juveniles (M_j), and a concave relationship for the apparent survival of adults (S_a). What are the combined effects of such variation on population fitness?
(b) Migration dates can be extrapolated to expected nest survival based on a 3-week delay before clutch initiation, with nest survival (S_n) calculated as the product of consecutive daily survival rates for a 22-day exposure period (day 175 = 24 June, Reneerkens et al., 2016). (c) Other components of fecundity were calculated from nests attended by 1-2 parents (135 nests for 113 pairs) and a modal clutch size of four eggs (Reneerkens et al., 2014), with a sex ratio set at 1:1 and juvenile survival set to be S_j = 0.3. (d) Population fitness (λ) for the different nonbreeding sites was then estimated from a pre-breeding matrix model where the confidence intervals were calculated with parametric bootstrapping. Population fitness of Sanderlings has a concave relationship with migration distance and is affected by annual variation in nest survival: fitness is lower for birds that winter in Ghana or Mauritania, and higher for birds at the edges of the nonbreeding range in Namibia and Europe.

The new finding that the costs of migration have a nonlinear relationship with migration distance has interesting implications for understanding the evolution and maintenance of migratory systems. If nonbreeding sites at equatorial latitudes are ecological sinks with low fitness, they might be maintained by immigration from other areas or eventually be abandoned. Spatial variation in fitness among different sites in a flyway could also be a stepping stone towards development of more complex distribution patterns resulting from chain or leap-frog migration. In the future, comparative data are needed to determine whether the lower fitness of Sanderlings at equatorial sites is a general pattern that also occurs in other populations of migratory birds in the same or different flyways. A matrix-model sketch of the fitness calculation is given below.

ACKNOWLEDGEMENTS

I thank Prof. Jenny Gill for helpful comments on the manuscript.
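As a minimal sketch of the pre-breeding matrix model described in the Figure 1 caption: the two-stage (yearling/adult), female-based structure and the function name below are ours; S_j = 0.3, a four-egg modal clutch and a 1:1 sex ratio come from the caption, while the nest-survival and adult-survival values in the example are illustrative placeholders, not the authors' estimates.

```python
import numpy as np

def sanderling_lambda(S_n: float, S_a: float, M_j: float,
                      S_j: float = 0.3, clutch: int = 4,
                      sex_ratio: float = 0.5) -> float:
    """Dominant eigenvalue (lambda) of a 2-stage pre-breeding matrix.

    Stage 1 = yearlings (breed only if they migrated north, probability M_j),
    stage 2 = adults (all breed). F = female fledglings per breeding female
    that survive to their first census (nest survival x clutch x sex ratio x S_j).
    """
    F = S_n * clutch * sex_ratio * S_j
    A = np.array([[M_j * F, F],
                  [S_a,     S_a]])
    return float(max(abs(np.linalg.eigvals(A))))

# Illustrative comparison of a temperate vs. an equatorial population
# (S_n and S_a values are placeholders, not the authors' estimates):
print(sanderling_lambda(S_n=0.6, S_a=0.85, M_j=1.0))  # e.g., Europe
print(sanderling_lambda(S_n=0.6, S_a=0.74, M_j=0.3))  # e.g., West Africa
```

Bootstrapping confidence intervals, as in panel (d), would amount to resampling the input rates from their estimated sampling distributions and recomputing λ.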
Indication for hypertrophy posterior longitudinal ligament removal in anterior decompression for cervical spondylotic myelopathy Abstract This retrospective study aimed to investigate the indication for hypertrophy posterior longitudinal ligament (HPLL) removal in anterior decompression for cervical spondylotic myelopathy (CSM). A total of 138 consecutive patients with CSM were divided into 2 groups, with developmental cervical stenosis (DCS) (group S) and without DCS (group N), according to the Pavlov ratio. These 2 groups were subdivided into 2 further subgroups according to whether the HPLL was removed or preserved: group SR (49 patients) and group SP (32 patients) in group S, and group NR (21 patients) and group NP (36 patients) in group N. The modified Japanese Orthopedic Association score (mJOA), the modified recovery rate (mRR), quality of life (QoL), and relevant clinical data were used for clinical and radiological evaluation. The mJOA scores improved from 7.3 ± 2.2 to 15.0 ± 1.8 in the SR group and from 7.9 ± 2.3 to 14.2 ± 1.5 in the SP group (P = .036), with postoperative QoL significantly higher in the SR group than the SP group. A reduction in the diameter of enlarged spinal canals occurred at a significantly faster rate in the SP group compared with the SR group (P = .002). Multivariate regression analyses showed that removal of the HPLL correlated with mJOA scores (coefficient = 7.337, P = .002), mRR (%) (coefficient = 9.117, P = .005), PCS (coefficient = 12.129, P < .001), and MCS (coefficient = 14.31, P < .001) in the S group at 24 months postoperatively, while removal of the HPLL did not correlate with clinical outcomes in the N group. The HPLL should, therefore, be removed when its mobility is reduced and the spinal cord remains compressed after anterior decompression procedures in patients with DCS. However, in non-DCS patients, it remains unclear whether removal of the HPLL provides any clinical benefit; thus, HPLL removal may not be necessary.

Introduction

Anterior cervical discectomy and fusion (ACDF), described by Cloward in 1958, [1] is the most commonly utilized technique for the treatment of compressive cervical myelo-radiculopathy. Since the initial application of this procedure, ACDF has been increasingly applied for the treatment of cervical spinal disorders following improvements in surgical techniques and instrumentation. However, controversy still exists as to whether the hypertrophy posterior longitudinal ligament (HPLL) should be removed or preserved in ACDF procedures. [2] This debate has primarily arisen due to differences in management and because the posterior longitudinal ligament (PLL) is associated with complications in some patients. In particular, the clinical indications for HPLL removal during ACDF procedures are still unknown. To date, studies are limited regarding the indication for removal of the HPLL in ACDF procedures for CSM in patients with or without developmental cervical stenosis (DCS), that is, congenital stenosis of the cervical vertebral canal. Therefore, in our retrospective study, the clinical and radiological outcomes of patients who underwent HPLL preservation were compared with those of patients with HPLL removal, in order to elucidate the clinical indications for removal during ACDF procedures and to provide a standardized protocol based on HPLL mobility. In our study, the HPLL was defined as abnormal thickening of the PLL, greater than 3.5 mm and less than 5 mm, without any ossified or calcified fragments.
The presence of thickened HPLL was confirmed by T2-weighted magnetic resonance imaging (MRI) scanning, demonstrating the presence of a compressed segmental epidural bursa composed of dura mater, spinal cord, and cerebrospinal fluid (CSF). The exclusion criteria were as follows: patients with spondylotic amyotrophy, tumors, rheumatoid arthritis, pyogenic spondylitis and comorbidities, central nervous system disorders, peripheral neuropathy, known psychiatric illnesses, ligamentum flavum hypertrophy or facet hypertrophy, ossification or calcification of the posterior longitudinal ligament, or a history of previous cervical spine surgery or injury; and patients with a spinal cord compression rate greater than 30% measured by MRI according to the Takahashi classification system. [3] We set criteria for DCS patients, defined as a Pavlov ratio [4,5] <0.75. The Pavlov ratio was defined as the ratio between the sagittal diameter of the spinal canal and the diameter of the vertebral body in the same segment, measured on T2-weighted MRI scanning. [5,6] According to the aforementioned criteria, the patients were divided into 2 groups: DCS (group S) and non-DCS (group N). For anterior decompression procedures based on the mobility of the HPLL, the 2 groups of patients were subdivided into 2 further subgroups according to whether the HPLL was removed or retained, as determined under the operating microscope and head lamp. By judging the relative position of the HPLL with respect to the line connecting the posterior edges of the adjacent upper and lower vertebrae, we made the decision to remove or preserve the PLL: if the HPLL showed insufficient mobility and the spinal cord remained compressed by the HPLL after decompression procedures behind the adjacent upper and lower vertebral posterior edge attachment line, then the HPLL was removed. Otherwise, the HPLL was preserved, as shown in Fig. 1. A total of 21 patients (13.2%) were lost to follow-up (9 due to noncompliance, 7 due to relocation, and 5 due to unknown whereabouts). Follow-up information was available for 138 of the 159 patients (86.8%). Consequently, the included patients were divided into 4 groups: group SR (HPLL removed, n = 49; 32 males, 17 females; 58.4 ± 9.5 years), group SP (HPLL preserved, n = 32; 20 males, 12 females; 54.8 ± 8.2 years), group NR (HPLL removed, n = 21; 13 males, 8 females; 61.5 ± 11.8 years), and group NP (HPLL preserved, n = 36; 18 males, 18 females; 58.8 ± 11.0 years) (Table 1). Clinical demographic data showed no statistically significant differences (P > .05) between groups SR and SP, or between groups NR and NP (Table 1). The study protocol was approved by our institution's ethics committee, and written informed consent was obtained from all participating patients.

Surgical procedures

All operations were performed by the same senior orthopedic surgeon in our spinal surgery department. ACDF procedures for CSM were performed as described in previous studies. [7,8] To expose the involved HPLL, we used a standard right-sided anterior approach and removed any pathological structures, including degenerated discs, herniated intervertebral nuclei, and proliferative osteophytes. In the removal group, the HPLL was separated from the dura mater and meticulously resected. In the preserved group, the HPLL was left without specific management. The spinal cord was free of compression after the procedures in all patients.
During the discectomy procedure, the cartilage endplates were removed with curettage while avoiding any additional damage to the endplate. The Caspar distractor was placed between adjacent vertebral bodies to perform distraction of approximately 2 to 3 mm. A trial spacer was used to determine the appropriate implant size and type; a suitable polyether-ether-ketone cage (Medtronic Sofamor Danek USA Inc., Memphis, TN) packed with autologous local bone pieces taken from the operation site was placed safely into the intervertebral space. After implantation of the cage, the Caspar distractor was released. A titanium Codman plate (Johnson & Johnson), Zephir plate (Medtronic Sofamor Danek), or Orion plate (Medtronic Sofamor Danek) (Fig. 2) was fixed onto the adjacent vertebrae with screws. Postoperatively, the patients were allowed to mobilize with a soft neck collar after bed rest for several days. Collar supports were removed after 4 to 6 weeks in all patients. All patients were followed up at 1, 6, 12, and 24 months postoperatively.

Clinical and radiological evaluation

Neurological function was evaluated using the Modified Japanese Orthopedic Association Cervical Spine Myelopathy Functional Assessment Scale, as described by Benzel et al. [9,10] The modified recovery rate (mRR) was calculated using the same formula as that applied for the original Japanese Orthopedic Association (JOA) score, changing the cumulative score from 17 to 18: [9,10]

mRR (%) = (postoperative mJOA score − preoperative mJOA score) / (18 − preoperative mJOA score) × 100%

The Medical Outcomes Study 36-item short-form health survey (SF-36), with scores ranging from 0 to 100 and comprising the physical component summary (PCS) and mental component summary (MCS) measures, [11,12] was used for the assessment of quality of life (QoL), with higher scores representing better health. The SF-36 is a health status questionnaire developed 2 decades ago to assess functional status and well-being, [11] which has since been applied in a variety of clinical settings. [13][14][15] The mean operating time, blood loss, and any associated complications were also evaluated and compared between groups. Radiological examinations (MRI, CT, and X-rays) were performed preoperatively. X-rays were performed every 3 months, and MRI and CT scans every 12 months postoperatively for all patients. To determine intra- and interobserver variability, radiological measurements were carried out by 2 senior radiologists, who performed radiological evaluations independently for each patient. The anteroposterior median sagittal diameter of the cervical spinal canal was measured at the most stenotic operated vertebral level (Fig. 3) on sagittal T2-weighted MRI. [16] Bone fusion was defined when the following criteria were satisfied: no movement between the spinous processes; absence of a radiolucent gap between the graft and the endplate; and presence of continuous bridging bony trabeculae across the graft-endplate interface (Fig. 4).

Statistical analysis

Data were expressed as means ± standard deviation (SD) for quantitative variables. Qualitative data are represented as relative percentages. The Student t test and chi-square test were used for comparison of clinical and radiological data. Multivariate regression was used to analyze the effects of independent variables on clinical outcomes.
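Before turning to the regression details, a quick numeric check of the mRR formula above may help. A minimal sketch: the function name is ours, and the example uses the mean SR-group scores reported in the Results purely as an illustration (means of individual ratios will not exactly equal the ratio of means):

```python
def modified_recovery_rate(pre_mjoa: float, post_mjoa: float) -> float:
    """mRR (%) = (post - pre) / (18 - pre) * 100, with 18 the mJOA ceiling."""
    return (post_mjoa - pre_mjoa) / (18 - pre_mjoa) * 100

# Mean SR-group scores from the Results section (illustrative only):
print(round(modified_recovery_rate(7.3, 15.0), 1))  # 72.0, close to the reported 71.7 +/- 13.7
```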
Multivariate regression was adjusted for age, gender, spinal cord compression rate, mean duration of symptoms, pre- and postoperative sagittal diameter, pre- and postoperative modified Japanese Orthopedic Association score (mJOA), pre- and postoperative PCS, pre- and postoperative MCS, and surgical method (HPLL removed = 1; HPLL preserved = 0). [17] A P-value less than 0.05 (P < .05) was considered statistically significant. Statistical analysis was performed using the SPSS statistical software package 15.0 for Windows (SPSS Inc., Chicago, IL).

Clinical and radiological outcomes

The 138 patients were followed up for more than 24 months, and no deaths occurred in either group. The mJOA scores improved from 7.3 ± 2.2 to 15.0 ± 1.8 in the SR group and from 7.9 ± 2.3 to 14.2 ± 1.5 in the SP group (P = .036). Furthermore, an increase from 6.8 ± 1.8 to 14.3 ± 2.0 was observed in the NR group, and an increase from 7.4 ± 2.1 to 13.5 ± 1.4 in the NP group (P = .104). The mRR (%) differed significantly between the SR and SP groups (71.7 ± 13.7 vs 64.1 ± 12.9, P = .014) but not between the NR and NP groups (64.3 ± 13.3 vs 58.6 ± 12.2, P = .110). QoL, including MCS and PCS, was significantly higher in the SR group than in the SP group, but no significant difference was found between the NR and NP groups (Table 2).

Table 1. Demographics and operative data of groups SR, SP, NR, and NP (columns: Variable; Group SR, n = 49; Group SP, n = 32; P; Group NR, n = 21; Group NP, n = 36; P).

Radiological examination showed that the sagittal median diameter of the cervical spinal canal in the SR group was significantly wider than that in the SP group (P = .002), with no significant difference between the NR and NP groups (P = .151). Reduction in the postoperative spinal canal diameter occurred at a significantly faster rate in the SP group compared with the SR group (P = .002). No significant difference was found in postoperative canal size between the NR and NP groups (P = .157) (Table 2). There were no significant differences between groups in terms of operating time, bleeding, bone graft union, and complications (Tables 1 and 2).

Intraoperative and postoperative complications

No spinal cord injuries occurred in either group. Hoarseness, dysphagia, wound infection, and epidural hematomas were found in all 4 groups and were managed conservatively. Cervical instability and displacement of grafts and steel fixators were also present in some patients; these were treated conservatively with immobilization, neck collars, and orthoses. Seven cases among DCS patients and 5 cases among non-DCS patients demonstrated transient postoperative shoulder muscle weakness. No neurological deterioration developed in participants in our study, and 3 patients developed CSF leakage in the SR and NR groups. This was attributed to significant adherence between the HPLL and dura mater, resulting in damage to the latter during surgery. CSF leakage was treated conservatively. No statistically significant difference was found in the incidence of complications between groups (Table 2).

Discussion

Although anterior decompression is a common and widely accepted surgical technique for cervical myelo-radiculopathy, it is still difficult to determine whether the HPLL should be removed during ACDF procedures for CSM. [18] Some studies have reported that the PLL is fundamental in protecting the spinal cord and stabilizing the cervical spine.
[2,19] However, Loughenbury et al [20] and Chen et al [21] demonstrated that the PLL also prevents protrusion of disc tissue into the spinal canal. PLL removal may lead to instability of the cervical spine and increase the risk of damage to the anatomical components of the cervical canal, including the dura mater, spinal cord, nerve roots, and epidural vascular plexus. [12] Nevertheless, a clinical study conducted by Bai et al [17] described the benefit of degenerative PLL removal in ACDF procedures for CSM; however, no definite indication was identified. Therefore, our study sought to identify the clinical indication for HPLL removal and to standardize removal decisions in advance, based on HPLL mobility detected during ACDF in CSM patients. The results of our study revealed that HPLL removal may not markedly influence cervical spine stability, bone graft union, or the incidence of graft and internal fixation displacement in either DCS or non-DCS patients (Table 2). However, the postoperative reduction in spinal canal diameter in group SP was significantly faster than that in group SR in DCS patients, which may reduce the long-term beneficial therapeutic effects of anterior decompression; no significant difference was found between the NR and NP groups in non-DCS patients (Table 2). Similarly, our data from DCS patients demonstrated that postoperative mJOA scores and neurological mRR scores in the SR group were significantly higher than those found in the SP group. These findings were in accordance with previously published studies. [18,22] Sagittal T2-weighted MRI scans showed that the mean median sagittal diameter of the cervical spinal canal was significantly wider in the SR group compared with the SP group. These results indicated that decompression was more effective after removal of the HPLL in DCS patients. Improved mJOA and mRR scores (Table 2) may have resulted from complete removal of the HPLL. In contrast to DCS patients, no significant differences were found in postoperative mJOA scores, neurological mRR scores, or the mean median sagittal diameter of the cervical spinal canal between the NR and NP groups (Table 2). Postoperative recovery following ACDF procedures is affected by the presence of DCS. [23] In our study, diagnosis of DCS was based on the Pavlov ratio, which is a reliable determinant, instead of the true diameter of the cervical spinal canal. [24] A Pavlov ratio value less than 0.75 indicates cervical canal stenosis. [4,5] After multivariate adjustment for other covariates in DCS patients, removal of the HPLL also correlated with higher mJOA scores and mRR, and with improved PCS and MCS (P < .05), representing an improved QoL. These findings suggest that HPLL removal correlated with positive clinical outcomes in DCS patients (Tables 3-6) at 24-month follow-up. However, removal of the HPLL did not correlate with outcomes in non-DCS patients, indicating that removal of the HPLL did not influence mJOA and mRR scores or PCS and MCS at 24-month follow-up. Although no significant differences in blood loss or operating time were observed between groups, HPLL removal procedures required more complicated techniques. Determining HPLL mobility after discectomy is important and requires careful detection and surgical experience. The safety of removal procedures depends on minimally traumatic manipulation of the spinal cord and protection of the dura mater, nerve structures, and epidural vascular plexus.
[18] The dura mater can adhere to the thickest portion of the HPLL in many patients, and the procedure to separate the HPLL from the dura mater should be performed carefully to avoid tearing the dura. CSF leakage is a well-known complication after removal of the PLL in spinal canal decompression, with an incidence varying from 4.5% to 32%. [18] This complication is often found in patients with adhesion of the PLL. Yamaura et al [25] reported that the use of the "floating method" for removal of ossification of the posterior longitudinal ligament could decrease the incidence of CSF leakage, but to date, no comparative studies between DCS and non-DCS patients have been carried out to investigate the clinical effects of spinal canal decompression. Some studies have reported the application of dura sac repair as a management alternative for the treatment of CSF leakage. [18,26]

Table 2. Imaging, mJOA, SF-36 scores, and complications of groups SR, SP, NR, and NP (columns: Variable; Group SR, n = 49; Group SP, n = 32; P; Group NR, n = 21; Group NP, n = 36; P).

In the 3 cases of CSF leakage in the removal groups in our study, the dura sac defect was repaired by placement of a gelatin sponge, a small piece of muscle with fibrin glue, or direct suture of the dura onto the defect. Our study showed that removal of the HPLL did not significantly increase the incidence of CSF leakage, and the repair procedure was safe and effective for the management of CSF leakage. Iatrogenic neurological injury is a concern for spinal surgeons; preoperative planning and proper surgical technique help minimize potential injury. Seven cases among DCS patients and 5 cases among non-DCS patients were found to have transient postoperative muscle weakness. This may have arisen due to C5 nerve root withdrawal, [27] which may not be caused by nerve injury and has no definite association with HPLL removal. Fortunately, these symptoms resolved gradually over a few weeks, and no significant difference was observed between the removal and preservation groups in both the DCS and non-DCS groups. Additionally, postoperative hematomas due to epidural vascular plexus injury have occasionally been described, [28] consistent with the findings in our study. Although no serious compressive spinal cord symptoms were observed in our study, attention should be paid to avoiding epidural vascular plexus injury, as such complications are often serious. Some limitations should also be noted in the present study. First, this was a single-center retrospective study; unlike in prospective studies, the indication for HPLL removal is heterogeneous in a retrospective one. Second, follow-up at 24 months may not be sufficient, meaning that the reliability of the conclusions drawn from our study may be questionable. Additionally, our study included only 138 patients without random allocation. Therefore, further multicenter prospective randomized controlled studies with longer follow-up durations and larger sample sizes are urgently required to address these issues. Based on the present retrospective study, we were able to draw several conclusions from our findings. In DCS patients, if the HPLL has reduced mobility and the spinal cord remains compressed after decompression, the HPLL should be removed. Accordingly, removal of the HPLL in such procedures appeared, from our findings, to be beneficial, providing more complete spinal cord decompression and improved postoperative outcomes.
Furthermore, patients who underwent HPLL removal had an improved QoL at 24-month follow-up. Although these procedures were more complicated and required a more skillful approach, they were generally safe and effective. However, in non-DCS patients, it remains unclear whether removal of the HPLL provides any clinical benefit; thus, HPLL removal may not be necessary.
A Summary of the United States Food and Drug Administrations' Food Safety Program for Imported Seafood; One Country's Approach It is well known that the vast majority of seafood is captured or farmed in emerging countries and exported to developed countries. This has resulted in seafood being the number one traded food commodity in the world. Food safety is essential to this trade. Exporting countries should understand the regulatory food safety programs of the countries they ship to in order to comply with their applicable laws and regulations and avoid violations and disruptions in trade. The United States (U.S.) imports more seafood than any individual country in the world, but the European Union (E.U.) countries, as a block, import significantly more. Each importing country has its own programs and systems in place to ensure the safety of imported seafood. However, most countries that export seafood have regulatory programs in place that comply with the import requirements of the E.U. The purpose of this paper is to describe the United States Food and Drug Administration's (USFDA) imported seafood safety program. The primary audience for the information is foreign government regulators, seafood exporters, and U.S. importers. It can also give consumers confidence that U.S. seafood is safe no matter which country it originates from.

Introduction

All major seafood receiving countries have their own requirements to ensure the safety of the seafood they import. This presents a challenge to exporting countries because they have to comply with the requirements of each country to which they export. The collective countries of the European Union make up the largest importer of seafood in the world. The European Commission's Directorate-General for Health and Consumers (SANCO) is responsible for seafood safety in the European Union [1]. Because the E.U. collectively is the largest importer of seafood and has historically worked closely with exporting countries' government seafood safety authorities, most major seafood exporting countries understand the SANCO program and have export programs in place that comply with SANCO's requirements. SANCO has a diverse food safety program that includes a mandatory "recognition" of the official governmental competent authority within the exporting country, various strategies for testing seafood products offered for entry into the member states, training and technical assistance to countries, and other programs. A particular focus of the SANCO program is the requirement that the competent authority develop and maintain a "list" of establishments (e.g., processors, cold storage facilities, etc.) that conform to SANCO requirements or directives. There are also specific requirements for raw material sites, such as the structure of harvesting vessels, landing locations, and aquaculture operations. Another requirement is that shipments into the E.U. must be accompanied by the appropriate health certificates. Audits of the competent authority are conducted to measure its conformance with these and other SANCO requirements. The lead authority for seafood safety in the United States is the U.S. Food and Drug Administration (USFDA). While the USFDA has a number of requirements that are similar to SANCO's, there are significant differences.
Some of the differences are: the USFDA does not identify or recognize a country "competent authority"; does not require a country to maintain a list of processors that conform to USFDA requirements; does not require a processor or exporter of seafood to obtain or include health certificates with each shipment of product to the U.S.; and does not conduct audits of the competent authority in order to measure conformance with USFDA requirements. In addition, while the USFDA has a variety of sanitation and adulteration requirements for all food (e.g., the use of potable water and ice, the requirement that all drugs used on aquaculture farms be approved by the USFDA), there are no specific food safety regulatory programs for aquaculture operations, landing sites, or harvesting vessels. Therefore, because most of the exporting countries are more informed about the E.U. SANCO program and there are significant differences between SANCO and the USFDA, it is important to inform the foreign seafood exporting industry, governments, and others about the USFDA seafood import requirements and programs. If a country exporting seafood to the U.S. is not aware of the requirements for imported seafood and fails to comply with U.S. laws and regulations, its products may not be allowed entry into the U.S. The USFDA refused nearly 2000 shipments of imported seafood in 2015, one of the highest totals in recent years (Figure 1) [2]. More than one-fourth of those refusals were because illegal substances were detected, frequently unapproved veterinary drugs administered to farmed fish and shellfish. Shipments that contained the unapproved drug residues were denied entry into the U.S. and were required to be shipped back to the country of origin. This high violation rate may illustrate a need to better inform the seafood exporting industry about the USFDA seafood safety requirements, including the requirement that only USFDA approved drugs can be used to treat aquacultured animals intended to be exported as seafood to the U.S. The USFDA recognizes and understands that success in ensuring that the seafood in the U.S. market is safe depends on reaching beyond U.S. borders and engaging with its government regulatory counterparts in other nations, as well as with the seafood industry and regional and international organizations, to encourage the implementation of science-based standards to ensure the safety of seafood before it reaches the U.S.

Overview of the USFDA Seafood Import Food Safety Program

The USFDA operates a mandatory safety program for all fish and fishery products under the provisions of the Federal Food, Drug and Cosmetic (FD&C) Act, the Public Health Service Act, and related regulations. The USFDA program is multi-faceted and includes the mandatory seafood HACCP program, foreign inspections, domestic seafood importer inspections, import surveillance and enforcement, a global presence through foreign offices, foreign country assessments of aquaculture food safety, and Food Safety Modernization Act requirements.

USFDA's Mandatory Hazard Analysis and Critical Control Point (HACCP) Program

USFDA's multifaceted and risk-informed seafood safety program relies on various measures of compliance with its seafood HACCP regulation. This is a management system in which food safety is addressed through the analysis and preventive control of biological, chemical, and physical hazards from raw material production, procurement, handling, manufacturing, and distribution to the final point of sale of the finished product. The cornerstone of the HACCP program is the USFDA's Fish and Fisheries Products Hazards and Controls Guidance document, an extensive compilation of the most up-to-date science and policy on the hazards that affect fish and fishery products and effective controls to prevent their occurrence. The fourth edition of this guidance document, which has become the foundation of fish and fishery product regulatory programs around the world, is now available. Under the seafood HACCP regulation, HACCP controls are required for both domestic and foreign processors of fish and fishery products intended for the U.S. market. The regulation also requires that U.S. importers take certain steps to verify that their foreign suppliers meet the requirements of the regulation. USFDA uses a variety of measures to enforce processors' compliance with seafood HACCP, including inspections of foreign processing facilities, use of a screening system to sample imported products, domestic surveillance sampling of imported products, inspections of seafood importers, foreign country program assessments, and information from our foreign partners and USFDA foreign office posts.

Foreign Inspections

Each year since 1999 the USFDA has inspected a limited number of seafood processors that export to the U.S.
In recent years, the USFDA has significantly increased the number of inspections it conducts of foreign food manufacturers, including seafood manufacturers. For example, in fiscal year 2008 the USFDA conducted 303 inspections of foreign food facilities, of which 95 were of seafood processors. In fiscal year 2014 the USFDA conducted 1336 foreign food facility inspections, of which 303 were of seafood processors. Every seafood processor whose product is intended for the U.S. market is required to have and implement a written HACCP plan whenever a hazard analysis reveals at least one food safety hazard that is reasonably likely to occur. The HACCP inspection approach, for the purpose of verifying compliance with the seafood HACCP regulation, is used by USFDA during domestic and foreign inspections of seafood processors to focus its attention on the parts of the process that are most likely to affect the safety of the product. In contrast to historical methods of evaluating processing practices on the day of the inspection, the HACCP approach allows USFDA to evaluate processors' overall implementation of their HACCP systems over a period of time. This is done by assessing the firms' HACCP plans and determining whether the pertinent hazards have been identified. If they have been identified, the investigator checks to see if appropriate preventive controls are in place, and verifies whether or not the firm has monitoring records and corrective action records on file. In this model, it is the seafood industry's responsibility to develop and implement HACCP controls and the regulatory agency's role to ensure that the industry complies. It is important to note that HACCP compliance is only one element of a USFDA inspection. The seafood HACCP regulation is complemented by other regulations, including the Current Good Manufacturing Practice regulation, which provides the basis for determining whether the products have been processed under sanitary conditions, and the Thermally Processed Low-Acid Foods Packaged in Hermetically Sealed Containers and Acidified Food regulation, which specifically addresses the Clostridium botulinum hazard in these products. Together, these regulations provide the regulatory food safety controls to which a processor of fish or fishery products is subject. The frequency of inspection for foreign firms is determined by risk-based product priorities as well as other country-specific factors, including the volume of seafood exported to the U.S., the history of violations associated with the products originating from the country, the outcome of previous inspections conducted of the seafood processors, the outcome of importer inspections, the credibility of information raising safety concerns with a foreign establishment's or country's exports, and the use of a new technology or process by processors that might raise food safety concerns. For example, countries or individual processors that process and export known high priority products, such as vacuum packed raw fish, ready-to-eat fishery products, scombrotoxin-forming fish and aquaculture seafood, are routinely targeted for inspection.
Although inspection frequency is based primarily on risk-based product and country priorities, the USFDA may adjust the frequency if a particular country, region or specific establishment is connected with violative surveillance samples or is associated with an illness outbreak or other event that causes the USFDA to be concerned about the safety of the seafood (e.g., a natural disaster such as a hurricane, earthquake, tsunami, or an aquaculture disease outbreak which may result in the use of illegal or unapproved antibiotics). The regulatory options available to the USFDA with respect to foreign processors that do not comply with USFDA regulations include placing the affected products on Import Alert for detention without physical examination (DWPE). This means the product is subject to detention or refused admission into the U.S. until it is demonstrated to be in compliance.

Domestic Seafood Importer Inspections

It is the importer's responsibility to offer for entry into the U.S. products that are fully compliant with all applicable U.S. laws. Under the seafood HACCP regulation, HACCP controls are required for both domestic and foreign processors of fish and fishery products. Additionally, the regulation requires that U.S. importers take certain steps to verify that their foreign suppliers meet the requirements of the regulation. The importer must meet its obligation by having and implementing written verification procedures for ensuring that the fish and fishery products offered for entry into the U.S. were processed in accordance with the requirements of the regulation. Some verification steps taken by importers include: maintaining a current copy of the foreign processor's HACCP plan along with the processor's written guarantee of compliance with the seafood HACCP regulation, inspecting the foreign processor's facilities to ensure compliance with the seafood HACCP regulation, and obtaining continuing or lot-by-lot certifications from an appropriate foreign government inspection authority certifying that the products were processed in compliance with the seafood HACCP regulation. USFDA conducts inspections of domestic seafood importers to verify compliance with these seafood HACCP requirements. Similar to foreign processor inspections, USFDA prioritizes importer inspections. Importers who seek to bring products with a history of violations into the U.S., or who import from a processor that has a history of compliance deficiencies, may be inspected at a greater frequency than importers dealing with products and processors who do not have a history of violations.

Seafood Import Surveillance and Enforcement

Between 2002 and 2010 overall U.S. food imports, as measured by the number of "lines" of imported food, almost doubled from 4.4 million to 8.6 million import lines (Figure 2). This trend is expected to continue and the U.S. public is likely to consume even more imported food in coming years. USFDA electronically screens all food offered for entry into the U.S., and a subset of those are physically inspected at varying rates, depending on the potential risk associated with them.
To accomplish this, USFDA has implemented an automated screening tool, the Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT) system, which significantly improves the screening of imported food. PREDICT uses automated data mining and pattern discovery to identify data anomalies with regard to the import and compliance history of a firm and/or a specific product, such as the facility inspection history; results of previous field exams, sample analyses, and facility inspections; and types of products that the firm offers for entry into U.S. commerce. For example, if a firm historically imports fresh seafood and suddenly imports canned seafood, this information is detected by PREDICT and may trigger a decision by the Agency to conduct an examination of the new type of imported product. Once an entry is selected, USFDA will examine or analyze it for microbiological contamination, parasites, decomposition and histamine, chemical contaminants (e.g., pesticides, dioxin, etc.), food and color additives, filth, mold, foreign objects, unapproved new animal (aquaculture) drugs, packaging, and labeling. The type of examination and analysis depends on the product and the types of problems that have been associated with it in the past. USFDA may increase its sampling of certain products if current surveillance sampling detects a pattern of violative products from its source (country, foreign processor, or shipper). Increased sampling, beyond that which is part of the annual work plan, may be initiated via a sampling assignment or an Import Bulletin. In FY 2014, USFDA processed approximately 938,000 entries of imported seafood, performed nearly 26,000 physical examinations of seafood imports, and collected over 5600 samples of domestic and imported seafood for analysis at USFDA field laboratories [3]. When USFDA detects adulterated seafood or observes that an importer or foreign processor has failed to implement adequate safety (HACCP) controls, subsequent shipments may be placed on DWPE when introduced for entry into the U.S. USFDA uses a system of publicly accessible Import Alerts to provide information and instructions to its field import review staff on how to process particular entries, including products that are subject to DWPE. Products from firms listed on an Import Alert and subject to DWPE may be refused entry unless the owner or consignee of the goods provides evidence to USFDA that the seafood is not violative. In cases where USFDA determines that a particular problem is widespread in a country or region, the USFDA may place all of a particular type of product from that country or region on an Import Alert for DWPE. The number of Import Alerts varies, but FDA currently has 38 Import Alerts relating to imported seafood products.
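To make the screening idea concrete, here is a purely illustrative sketch. The real PREDICT scoring logic and data fields are not public, so every field, rule, weight, and threshold below is hypothetical; the sketch only mirrors the two signals named above (compliance history and a firm suddenly offering a new product type):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One import line offered for entry (illustrative fields only)."""
    product: str
    firm: str
    prior_violations: int     # violative samples linked to this firm/product
    usual_products: set       # what this firm has historically shipped

def screening_score(entry: Entry) -> float:
    """Toy risk score in the spirit of PREDICT-style screening.

    Hypothetical rules and weights: the actual PREDICT system's data
    sources and scoring are not public.
    """
    score = 0.0
    score += 2.0 * entry.prior_violations            # compliance history
    if entry.product not in entry.usual_products:    # anomaly: new product type
        score += 5.0
    return score

entry = Entry("canned tuna", "ExampleSeafoodCo", prior_violations=1,
              usual_products={"fresh tuna", "fresh mahi"})
if screening_score(entry) > 4.0:                     # threshold is arbitrary
    print("flag for physical examination")
```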
Once an entry is selected, USFDA will examine or analyze it for microbiological contamination, parasites, decomposition and histamine, chemical contaminants (e.g., pesticides, dioxin, etc.), food and color additives, filth, mold, foreign objects, unapproved new animal (aquaculture) drugs, packaging, and labeling. The type of examination and analysis depends on the product and the types of problems that have been associated with it in the past. USFDA may increase its sampling of certain products if current surveillance sampling detects a pattern of violative products from its source (country, foreign processor, or shipper). Increased sampling, beyond that which is part of the annual work plan, may be initiated via a sampling assignment or an Import Bulletin. In FY 2014, USFDA processed approximately 938,000 entries of imported seafood, performed nearly 26,000 physical examinations of seafood imports, and collected over 5600 samples of domestic and imported seafood for analysis at USFDA field laboratories [3].

When USFDA detects adulterated seafood or observes that an importer or foreign processor has failed to implement adequate safety (HACCP) controls, subsequent shipments may be placed on DWPE when introduced for entry into the U.S. USFDA uses a system of publicly accessible Import Alerts to provide information and instructions to its field import review staff on how to process particular entries, including products that are subject to DWPE. Products from firms listed on an Import Alert and subject to DWPE may be refused entry unless the owner or consignee of the goods provides evidence to USFDA that the seafood is not violative. In cases where USFDA determines that a particular problem is widespread in a country or region, the USFDA may place all of a particular type of product from that country or region on an Import Alert for DWPE. The number of Import Alerts varies, but FDA currently has 38 Import Alerts relating to imported seafood products.

USFDA's import surveillance system works in conjunction with USFDA's enforcement of the Seafood HACCP Regulation. For example, the finding of non-compliant product during import surveillance can result in USFDA scheduling an inspection of the importer or foreign processor.

USFDA Global Presence

Today, with 300,000 foreign facilities from more than 150 countries exporting USFDA-regulated products to the United States, USFDA works beyond U.S. borders to ensure that products coming into the United States are safe and effective. This includes working with governments, industry, and academia in foreign countries, as well as with multilateral organizations [4]. This is an important strategy because USFDA faces ever-greater challenges in determining whether a product has been properly manufactured, distributed, and stored, and in determining who has handled the product. The manufacture of a single product can involve multiple parties from different countries that are engaged at various steps throughout the process. Along the way, there are opportunities for the product to be improperly formulated or packaged, contaminated, diverted, counterfeited or adulterated. USFDA utilizes diverse approaches to address the complex issues posed by globalization.
In 2008, USFDA began establishing foreign offices, posting staff in strategic locations around the world, including China, Europe, India, and Latin America (Figure 3). USFDA overseas offices work closely with foreign governments, industry, and other stakeholders to enable USFDA to more effectively protect U.S. consumers. Among other activities, these offices:

• coordinate food safety issues with the competent authorities in the countries/regions in which they are located, such as response to food-borne illnesses traced to imported product;
• educate foreign industry about USFDA requirements;
• provide technical assistance to foreign competent authorities and industries; and
• inspect foreign manufacturers by placing inspectional staff permanently within the high-priority countries/regions.

Foreign Country Assessments of Aquaculture Food Safety

The majority of seafood imported into the U.S. is from aquaculture operations. For this reason, understanding and ensuring the safety of seafood and aquaculture products is an essential component of USFDA's efforts. Foreign country assessments are systems reviews that offer USFDA a broad view of the ability of a country's industry and regulatory infrastructure to control aquaculture drugs. These assessments allow USFDA to become familiar with the controls that a country's competent authority implements over the distribution, availability, and use of animal drugs. USFDA uses country assessments to help determine the risk of unapproved drug residues in the aquaculture products a country may ship to the United States. Some of these unapproved drugs are suspected carcinogens; others present concerns for the development of antibiotic resistance in human bacterial pathogens. The country assessment program helps USFDA direct its foreign inspection and border surveillance resources more effectively and efficiently, and allows USFDA to work directly with countries to resolve drug residue problems.

USFDA uses information from country assessments to:

• better target (i.e., increase or decrease) surveillance sampling of imported aquaculture products;
• inform its planning of foreign seafood HACCP inspections;
• provide additional evidence for potential regulatory actions, such as an import alert;
• improve collaboration with foreign government and industry contacts to achieve better compliance with USFDA's regulatory requirements; and
• better understand the causes of significant changes in a country's drug residue problems, such as a sudden spike in noncompliant samples.

Some of the countries whose aquaculture food safety systems FDA has assessed include the Philippines and Vietnam.

Food Safety Modernization Act

The USFDA Food Safety Modernization Act (FSMA), the most sweeping reform of U.S. food safety laws in more than 70 years, was signed into law by President Obama on 4 January 2011. It aims to ensure the U.S. food supply is safe by shifting the focus from responding to contamination to preventing it. FSMA also gives USFDA new tools and authorities to help make certain imported foods meet the same safety standards as foods produced in the U.S. [5]. Seafood is specifically exempt from some of the new FSMA requirements, for example the FSMA Foreign Supplier Verification Program (FSVP) for Importers of Food for Humans and Animals.
This rule requires that importers of food perform certain risk-based activities to verify that food imported into the United States has been produced in a manner that meets applicable U.S. safety standards. Although importers of fish and fishery products are exempt from this requirement, they are still subject to the seafood HACCP regulation, which requires importers to have and implement written verification procedures (commonly called an "affirmative step") that ensure the fish and fishery products they offer for entry into the U.S. were processed in accordance with the HACCP regulation.

The following are among USFDA's key new import authorities and mandates under FSMA. Specific implementation dates specified in the law are noted in parentheses:

• Third Party Certification: FSMA establishes a program through which qualified third parties who implement certification programs that have been recognized by USFDA can certify that foreign food facilities comply with U.S. food safety standards. This certification may be used to facilitate the entry of imports.
• Certification for high-risk foods: USFDA has the authority to require that high-risk imported foods be accompanied by a credible third-party certification or other assurance of compliance as a condition of entry into the U.S.
• Voluntary qualified importer program: USFDA must establish a voluntary program for importers that provides for expedited review and entry of foods from participating importers. Eligibility is limited to, among other things, importers offering food from certified facilities.
• Authority to deny entry: USFDA can refuse entry into the U.S. of food from a foreign facility if USFDA is denied access by the facility or the country in which the facility is located.
• FSMA requires USFDA to develop a comprehensive plan to expand the technical, scientific and regulatory food safety capacity of foreign governments and their respective food industries in countries from which foods are exported to the United States. Further, USFDA is required to develop the capacity-building plan in consultation with certain stakeholders, including representatives of the food industry, officials from other federal agencies, foreign government officials, non-governmental organizations (NGOs) that represent the interests of consumers, and other stakeholders. The capacity-building plan includes, as appropriate:
Steam treatment of contaminated groundwater aquifers – development of pathogenic micro-organisms in soil

Carsten Suhr Jacobsen, Susanne Elmholt, Carsten Bagge Jensen, Pia Bach Jakobsen and Mikkel Bender

Steam treatment of contaminated soil and aquifer sediment is a promising method of cleaning soil. The treatment is based on steam injection into a water-saturated porous aquifer (Gudbjerg et al. 2004), by which the heat transfers the contaminants into the vapour phase, allowing entrapment in an active carbon filter connected to a large vacuum suction device. The treatment is effective against several important groundwater contaminants, including pentachlorophenol and perchloroethylene, typically found in association with industrial processes or dry cleaning facilities. Furthermore, as an example of removal of non-aqueous phase liquids (NAPLs), large amounts of creosote have been recovered after steam injection in a deep aquifer (Kuhlmann 2002; Tse & Lo 2002).

Steam treatment is dependent on the complete heating of the soil volume under treatment. The steam has a strongly adverse impact on trees and other plants with deep root systems within the soil, but no other visible effects have been reported. The aim of the activities undertaken during collaborative projects carried out by the Geological Survey of Denmark and Greenland (GEUS) and the Danish Institute of Agricultural Sciences (DJF) for the Danish Environmental Protection Agency and the local authorities in Copenhagen (Københavns Amt) was to establish to what extent the microbial community was affected by the steam treatment of the soil. A few results from the literature indicate that the microbial activity increases in steam-treated soil (Richardson et al. 2002), probably due to microbial degradation of the soil contaminants in combination with microbial utilisation of heat-killed organisms. It is, however, not known whether this increased microbial activity is associated with the development of pathogenic micro-organisms; these are typically able to grow at higher temperatures than the general microbial community in soil.
The steam treatment in Hedehusene

Hedehusene is situated approximately 25 km to the west of Copenhagen, and the contaminated soil and groundwater aquifer here result from various industrial activities primarily carried out between 1920 and 1970. These activities include a dry cleaning facility and several small workshops. From both types of industry, trichloroethylene and tetrachloroethylene are often found as groundwater contaminants. The groundwater aquifer in Hedehusene was known to be contaminated with high concentrations of trichloroethylene, which has been a constant hazard to an important drinking water production well downstream from the site.

Pumping, treating and recycling water at the site over many years had controlled the distribution of the contamination, but the main contamination was still present at very high levels and had become a long-term threat to continued groundwater extraction. The steam treatment in Hedehusene was carried out during the winter of 2001–2002. Wells delivering steam were buried nine metres below the land surface, allowing the transfer of steam below the contamination plume (Fig. 1). The steam was pumped continuously for a period of five months, until the temperature reached 90°C. Heating of the soil allowed the transfer of the contaminant to the vapour phase, which was then trapped in an active carbon filter.

Heat-tolerant micro-organisms found at the Hedehusene site

The site was monitored by sampling surface soil and soil from approximately 50 cm depth on 11 September 2001 (before the steam treatment), and by resampling during and after the steam treatment. Sampling was undertaken six times, with the latest sampling on 26 October 2004. In general, it was found that the number of heat-tolerant micro-organisms increased after the heat treatment, and that some of the heat-tolerant micro-organisms could still be found three years after the 2001–2002 steam treatment.

Heat-tolerant bacteria are defined as those able to continue growing at temperatures of 42°C, and heat-tolerant fungi as those able to continue growing at 37°C. Such high temperatures do not occur naturally at the site, and soil micro-organisms originating from this site are not expected to be able to grow at such high temperatures. Heat tolerance is one of the main characteristics that distinguish normal soil micro-organisms from pathogenic micro-organisms found in human patients.
General microbial community adaptation to growth at high temperature

The effect on the general microbial community was investigated by assessing its growth rate on 24 different microbial food sources. A small amount of soil was added to 24 different microbial growth media and incubated at either 20°C or 42°C. This technique revealed that the microbial community in the control plot (an area approximately 30 m away from the heating zone) was very constant in its ability to utilise the different food sources during the sampling period. Furthermore, the microbial community in the control plot showed little ability to utilise food sources at the elevated temperature. In contrast, the heated soil showed a massive and long-lasting increase in the ability of the microbial community to utilise the food sources at 42°C (Fig. 2).
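The comparison in Fig. 2 amounts to counting, for each plot and temperature, how many of the 24 substrate wells show growth. The short sketch below is a hypothetical illustration of that bookkeeping, not the analysis actually used in the project; the well readings and the helper names (`positive_wells`, `fingerprint`) are invented for the example.

```python
# Hypothetical substrate-utilisation ("metabolic fingerprint") comparison.
# Each plate has 24 wells; True means visible growth (coloration) in a well.
# The readings below are invented for illustration only.

def positive_wells(plate):
    """Number of substrate wells showing microbial growth."""
    return sum(plate)

def fingerprint(plates_by_condition):
    """Summarise growth per (plot, temperature) condition."""
    return {cond: positive_wells(p) for cond, p in plates_by_condition.items()}

plates = {
    ("steam-treated", "20C"): [True] * 20 + [False] * 4,
    ("control",       "20C"): [True] * 19 + [False] * 5,
    ("steam-treated", "42C"): [True] * 15 + [False] * 9,   # many heat-tolerant organisms
    ("control",       "42C"): [True] * 2  + [False] * 22,  # few heat-tolerant organisms
}

for (plot, temp), n in fingerprint(plates).items():
    print(f"{plot} at {temp}: {n}/24 substrates utilised")
```

A high count at 42°C, as in the steam-treated plot, corresponds to the pattern interpreted in the project as growth of heat-tolerant, potentially pathogenic organisms.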
It is well known that micro-organisms differ in their ability to survive in soil. Some are able to form spores that can stay inactive in the soil for years, while others die out due to predation and competition with micro-organisms having a very low level of metabolic activity. We have chosen two different representatives of heat-tolerant micro-organisms: a bacterium without the ability to form spores, and a fungus which forms conidia. Although these conidia are able to germinate and grow in the laboratory, they need not be active in the soil. Both species showed a clear response to soil heating, as described below.

Aspergillus fumigatus – an unusual pathogenic and allergenic micro-organism

Aspergillus fumigatus is a remarkable and unusual pathogen because, in addition to causing life-threatening invasive disease in immuno-compromised human patients, it can also cause allergic reactions in persons with fully functional immune systems (Latgé 1999; Denning et al. 2002). A. fumigatus is easily identified and is distinguished by rapidly growing colonies in characteristic turquoise to dark green colours, by the phialides curving to be roughly parallel to each other and to the axis of the stipe, and by the presence of small conidia in columns (Fig. 3; Klich & Pitt 1988). A. fumigatus is regularly reported as a dominant species in various types of compost, but never as a dominant species in soil (Domsch et al. 1993).

A. fumigatus was only found in very low numbers in the untreated control plot, but in the heat-treated soil this fungal species was abundant. A. fumigatus was still present in elevated numbers at the last sampling in October 2004 in the heat-treated soil, but the numbers were slowly declining. It seems likely, however, that the elevated numbers of A. fumigatus will persist for some time due to the ability of the fungus to form conidia.

Pseudomonas aeruginosa – an opportunistic pathogen

Pseudomonas aeruginosa is an opportunistic pathogen, meaning that it exploits any defects in the human host defences to initiate an infection. It causes urinary tract infections, respiratory system infections, and also bone and joint infections. Furthermore, it is associated with gastrointestinal infections and a variety of systemic infections, particularly in patients with severe burns and in immuno-compromised cancer and AIDS patients. P. aeruginosa infections are a serious problem for patients hospitalised with cancer, cystic fibrosis and burns. The case fatality rate for these patients is 50%.

P. aeruginosa increased from non-detectable (less than 100 cells per gram of soil) in the non-treated soils to 10^5 cells per gram of soil in the heat-treated soil (Fig. 4). P. aeruginosa is a representative of fast-growing soil bacteria that are unable to form spores. In contrast to A. fumigatus, the population of P. aeruginosa decreased rapidly after the heat treatment, and after one year the numbers were again below the detection level. This reduction was probably due to predation and a lack of competing abilities when the temperature decreased.

Need for monitoring of microbial side-effects in relation to steam treatment

Micro-organisms differ in their ability to develop resting forms. A. fumigatus develops conidia that can remain in soil for many years. These resting conidia may not be active in the soil, even if they can be detected on agar plates when analysed in the laboratory. In contrast, P. aeruginosa does not form resting spores, and detection on agar plates in the laboratory is connected to activity of the bacterium in the soil.

The present project highlights the need for microbial risk assessments in connection with new steam treatment projects. The high level of potentially pathogenic micro-organisms expected after heat treatment of a soil points to the need for monitoring these organisms in connection with new steam treatment projects.

Fig. 1. Sketch of steam treatment facility at a strongly contaminated industrial site at Hedehusene, west of Copenhagen. Inset map shows location.

Fig. 2. Changes of microbial metabolic fingerprints using comparisons of the ability of micro-organisms to grow on different carbon substrates. The appearance of coloration in each section indicates growth of micro-organisms. A high number of positive sections at 42°C indicates a high number of organisms able to grow at temperatures associated with pathogenic micro-organisms. A: steam-treated soil with growth at 20°C. B: control soil with growth at 20°C. C: steam-treated soil with growth at 42°C. D: control soil with growth at 42°C.

Fig. 4. Pseudomonas aeruginosa isolated from steam-treated soil. The photograph was taken in ultraviolet light to show the characteristic fluorescence of this bacterial genus. Diameter of view is 9 cm.
An improvement on Furstenberg's intersection problem

In this paper, we study a problem posed by Furstenberg on intersections between $\times 2, \times 3$ invariant sets. We present here a direct geometrical counting argument to revisit a theorem of Wu and Shmerkin. This argument can be used to obtain further improvements. For example, we show that if $A_2,A_3\subset [0,1]$ are closed and $\times 2, \times 3$ invariant respectively, assuming that $\dim A_2+\dim A_3<1$ then $A_2\cap (uA_3+v)$ is sparse (defined in this paper) and has box dimension zero uniformly with respect to the real parameters $u,v$ such that $u$ and $u^{-1}$ are both bounded away from $0$.

1. INTRODUCTION AND MOTIVATION

1.1. Furstenberg's intersection problem. Furstenberg ([F70]) posed a series of fundamental questions on the transversality of dynamical systems. A central heuristic of those problems is that $\times 2$ and $\times 3$ should be two very different actions. Towards this direction, we have the following conjecture.

Conjecture 1.1 (The strong Furstenberg intersection problem). Let $A_2, A_3$ be closed $\times 2, \times 3$ invariant sets respectively such that $\dim_H A_2 + \dim_H A_3 < 1$. Then the intersection $A_2 \cap A_3$ contains only rational numbers.

Here $\dim_H$ is the Hausdorff dimension; it will be discussed in Section 3.5. The original form of the above conjecture states that for any irrational number $a$, its $\times 2 \bmod 1$ orbit closure and $\times 3 \bmod 1$ orbit closure have total Hausdorff dimension at least $1$. In this direction, an earlier result of Furstenberg in [F67] states that the $\times 2, \times 3 \bmod 1$ orbit, i.e. $\{2^k 3^m a \bmod 1\}_{k,m\geq 1}$, is dense in $[0, 1]$.

As we mentioned above, Furstenberg posed a series of deep conjectures of this kind. Conjecture 1.1 is perhaps the strongest among all of them. Some weaker conjectures were recently resolved, and we now introduce them. In [F70], Furstenberg introduced the notion of CP-chains and showed that if $\dim_H A_2 + \dim_H A_3 < 0.5$, then $\dim_H A_2 \cap A_3 = 0$. One of the first breakthroughs towards Conjecture 1.1 is the following result, which appeared in [HS12] and settled an earlier sumset conjecture of Furstenberg.

Theorem 1.2 (Hochman–Shmerkin). Let $A_2, A_3$ be closed $\times 2, \times 3$ invariant sets respectively. Then for every projection of the form $\pi(x, y) = ux + y$ with $u \neq 0$,
$$\dim_H \pi(A_2 \times A_3) = \min\{1, \dim_H A_2 + \dim_H A_3\}.$$

Intuitively speaking, the above result says that the projections of $A_2 \times A_3$ along non-trivial directions (i.e. neither horizontal nor vertical) all attain the largest possible dimension. This indicates that the fibres (i.e. intersections with lines orthogonal to the projecting direction) should not be large. A precise form of the intersection result was proved in [S16] and [W19] independently, improving Furstenberg's original result. The following result settled an earlier intersection conjecture of Furstenberg.

Theorem 1.3 (Shmerkin, Wu). Let $A_2, A_3$ be closed $\times 2, \times 3$ invariant sets respectively. For all real numbers $u, v$ with $u \neq 0$ we have
$$\overline{\dim}_B\, A_2 \cap (uA_3 + v) \leq \max\{0, \dim_H A_2 + \dim_H A_3 - 1\}.$$

Here $\overline{\dim}_B$ is the upper box dimension; it will be discussed in Section 3.5. The above theorem is a significant step towards Conjecture 1.1. Notice that when $\dim_H A_2 + \dim_H A_3 < 1$ we have $\overline{\dim}_B\, A_2 \cap (uA_3 + v) = 0$. This indicates that $A_2 \cap A_3$ is small, which strongly supports Conjecture 1.1. From here, one might be curious about whether anything can be obtained beyond dimension zero. Indeed, this is one of the main focuses of this paper. There are other recent results around Furstenberg's intersection problem; see [Al20], [Au20], [BY18], [Y20] for further details. We will show a stronger version of Furstenberg's intersection result.
In the statement, we encounter the notions of the Hausdorff dimension ($\dim_H$), the Assouad dimension ($\dim_A$), densities, sparseness and invariant sets. They are defined and discussed in detail in Sections 3.5, 3.7, 4.2 and 3.6. For concreteness, the results we list here are about $\times 2, \times 3 \bmod 1$ invariant sets. They still hold if we replace $2, 3$ by $p, q$ respectively such that $\log p/\log q \notin \mathbb{Q}$; the bound $O(N^{27s})$ below then needs to be changed to $O(N^{C(p,q)s})$ with a constant $C(p, q)$ depending on $p, q$.

We shall see that being sparse implies having zero dimension. The advantage of considering sparseness is that it provides us with a more quantitative understanding of the size of sets. For example, instead of considering an individual intersection, one can consider all intersections at once, and the box-counting numbers can be controlled in a uniform way. We first introduce the following notion of sparseness, which provides us with a quantitative view of the 'topological smallness' of a compact subset of a line.

Definition 1.4. Let $A \subset \mathbb{R}$ be a compact set. We say that $A$ is sparse (resp. super sparse) near $0$ if its sparse index
$$W(A) = \{k \in \mathbb{N} : \text{there is } x \in A \text{ with } 2^{-k-1} \leq |x| < 2^{-k}\}$$
has upper natural (resp. upper Banach) density $0$. More generally, given any $a \in A$, we define the sparse index of $A$ near $a$ by $W(A, a) = W(A - a)$, and we say that $A$ is (super) sparse near $a$ if and only if $A - a$ is (super) sparse near $0$. In general, when $l \subset \mathbb{R}^2$ is contained in a line not parallel to the $Y$-coordinate axis, we define $W(A, a)$ as $W(A, a) = W(\pi_Y(A), \pi_Y(a))$.

Now we state the main results of this paper.

Theorem 1.5. Let $A_2, A_3$ be closed $\times 2, \times 3$ invariant sets respectively with $\dim_H A_2 + \dim_H A_3 = s < 1/2$, and let $l = l_{u,v} = A_2 \cap (uA_3 + v)$, $u \neq 0$, be an intersection. Then the distance set $|l - l|$ is super sparse near $0$. Moreover, we have the following bound, which is uniform with respect to $k \in \mathbb{N}$:
$$\#\bigl(W(|l - l|) \cap [k + 1, k + N]\bigr) = O(N^{27s}).$$

Theorem 1.6. Let $A_2, A_3$ be closed $\times 2, \times 3$ invariant sets respectively with $\dim_H A_2 + \dim_H A_3 < 1$, and let $l_{u,v} = A_2 \cap (uA_3 + v)$, $u \neq 0$, be an intersection. Then for all $a \in l_{u,v}$, $l_{u,v}$ is super sparse near $a$. Moreover, for each $\gamma > 0$, the following bound is uniform with respect to $|u| \in (\gamma, \gamma^{-1})$, $a \in l_{u,v}$ and $k \in \mathbb{N}$:
$$\#\bigl(W(l_{u,v}, a) \cap [k + 1, k + N]\bigr) = o(N).$$
See Section 3.7 for the meaning of densities.

A direct consequence of the above theorems is the following corollary, a uniform version of the box dimension estimate. It follows by applying Theorem 1.6, Proposition 4.2 and the discussion below Proposition 4.2. We remark that this uniform box dimension estimate is very useful in some problems concerning numbers with restricted digits; see [BY18].

Corollary 1.7 (A uniform estimate for the upper box dimension). Let $A_2, A_3$ be closed $\times 2, \times 3$ invariant sets respectively with $\dim_H A_2 + \dim_H A_3 < 1$. Then for each $\gamma > 0$, $\overline{\dim}_B\, A_2 \cap (uA_3 + v) = 0$ uniformly with respect to the parameters $u, v$ such that $|u| \in (\gamma, \gamma^{-1})$.

A key tool for proving the above results is the notion of sparseness; this is where we make use of the notation $W(\cdot)$ (see Section 4.2). Our notion of sparseness is a very natural indicator that a set is small. We think this topic may be interesting on its own, and we will give a detailed treatment of the notion of sparseness. In Section 4.2 we prove some relations between sparseness and fractal dimensions. In particular, we show that being sparse implies having zero dimension, but the converse is in general not true.
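Before stating the consequences, a small worked illustration of Definition 1.4 may help. The two sets below are our own examples (the first also reappears in the proof of Proposition 4.2), and the computations assume nothing beyond the dyadic-scale definition of $W(A)$ given above.

% Two illustrative sparse-index computations near 0.
% (a) A set meeting every dyadic annulus is NOT sparse: for
%     A = {2^{-k}}_{k>=1} u {0}, each point 2^{-k} satisfies
%     2^{-(k-1)-1} <= 2^{-k} < 2^{-(k-1)}, so k-1 lies in W(A) for all k>=1:
$$A=\{2^{-k}\}_{k\ge 1}\cup\{0\}
  \;\Longrightarrow\;
  W(A)=\{0,1,2,\dots\},\qquad
  \overline{d}\bigl(W(A)\bigr)=1.$$
% (b) A rapidly decaying sequence IS super sparse: the point 2^{-k^2}
%     lies in the dyadic annulus of index k^2 - 1, and any window of
%     length N contains O(sqrt(N)) shifted squares:
$$A'=\{2^{-k^{2}}\}_{k\ge 1}\cup\{0\}
  \;\Longrightarrow\;
  W(A')=\{k^{2}-1\}_{k\ge 1},\qquad
  \#\bigl(W(A')\cap[m+1,m+N]\bigr)=O(\sqrt{N}).$$

So $A'$ is super sparse near $0$, while $A$ is not sparse near $0$ at all; yet both sets have upper box dimension $0$, since at scale $2^{-n}$ each is covered by $O(n)$ intervals. The first set therefore also witnesses the claim in part (4) of Proposition 4.2 below: zero box dimension does not imply sparseness.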
In particular, we have the following results as consequences:

• By Proposition 4.3: if $\dim_H A_2 + \dim_H A_3 = s < 1/27$, then $H^g(l_{u,v}) = 0$ for the gauge function $g$ appearing in Proposition 4.3. In other words, $l_{u,v}$ is actually far away from having positive dimension if $\dim_H A_2 + \dim_H A_3$ is small.

The above weaker consequences already partially revisit the results (see Theorem 1.3 above) of [S16] and [W19] on Furstenberg's intersection conjecture. We note here that one can modify Wu's and Shmerkin's methods to show that $\dim_A l_{u,v} = 0$ as well.

1.2. Wu's ergodic sampling theorem. A key step in Wu's approach to Furstenberg's intersection problem is an ergodic sampling theorem proved as [W19, Theorem 6.1]. The precise statement is rather technical, and we will give a detailed discussion in Section 7. Here we only provide some heuristics. Let $S = \{x_n\}_{n\geq 1}$ be a sequence of numbers in $[0, 1]$, and suppose we want to sample this sequence by taking a subsequence $S' = \{x_{n_k}\}_{k\geq 1}$. Suppose that $S$ equidistributes in $[0, 1]$; then taking a random subsequence $S'$ in a Bernoulli manner (by tossing a coin for each $x_n$ to decide whether or not to put it into $S'$) will almost surely keep the equidistribution property. Now, instead of randomly (in a Bernoulli manner) choosing the subsequence $S'$, we can choose it according to return times with respect to an auxiliary dynamical system $(Y, T)$: given $y \in Y$ and $A \subset Y$,
$$n_k = \text{the integer } i \text{ such that } T^i(y) \in A \text{ for the } k\text{-th time}.$$
If $(Y, T)$ is a 'complicated' system, then we expect that some randomness can be extracted and the subsequence $S'$ will still be nicely distributed. [W19, Theorem 6.1] provides a precise result when $S$ is an irrational rotation orbit and $(Y, T)$ is a dynamical system with positive entropy. We will provide a generalization of this result; see Theorem 7.8. Essentially, our result allows us to take $S$ to be a subsequence rather than the whole irrational rotation orbit. Although not straightforwardly, we remark that one can actually modify Wu's proof in [W19] to prove Theorem 7.8. We have decided to include a full detailed proof of this theorem in Section 7. There are other results in this direction; see [Au20] and [Yu19].

2. STRUCTURE OF THIS PAPER

In Section 3, we briefly recall some basic terminology from dynamical systems, notions of dimensions and densities of integer sequences. In Section 4, we introduce notions of sparseness and their connections with fractal dimensions. We point out the importance of Section 4.3: the dipole direction structure will be useful later. In Section 5, we prove some target hitting estimates using discrepancy theory, and we use them in Section 6 for proving Theorem 1.5. We present in Section 7 a version of Sinai's factor theorem which is closely related to, but different from, the version which appeared in [W19, Section 6]. Finally, in Section 8 we prove Theorem 1.6.

3. NOTATION

3.1. Filtrations, atoms and entropy. Let $X$ be a set with $\sigma$-algebra $\mathcal{X}$. A filtration of $\sigma$-algebras is a sequence $\mathcal{F}_n \subset \mathcal{X}$, $n \geq 1$, such that $\mathcal{F}_n \subset \mathcal{F}_{n+1}$ for all $n \geq 1$. Given a measurable map $S : X \to X$ and a finite measurable partition $\mathcal{A}$ of $X$, we denote by $S^{-n}\mathcal{A}$ the following finite collection of sets (notice that $S$ might not be invertible):
$$S^{-n}\mathcal{A} = \{S^{-n}(A) : A \in \mathcal{A}\}.$$
Then we use $\vee_{i=0}^{n-1} S^{-i}\mathcal{A}$ to denote the $\sigma$-algebra generated by the sets $S^{-i}(A)$, $0 \leq i \leq n-1$, $A \in \mathcal{A}$. In other words, an atom in $\vee_{i=0}^{n-1} S^{-i}\mathcal{A}$ can also be described as follows. Given a sequence $\{A_i\}_{i=0}^{n-1} \in \mathcal{A}^{n}$, we define the following set (which can be empty):
$$\{x \in X : x \in A_0,\ S(x) \in A_1,\ \dots,\ S^{n-1}(x) \in A_{n-1}\}.$$
The above set is an atom, and all atoms have the above form.
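As a concrete instance of these notions (an example of ours, not taken from the paper), take the doubling map with the partition of $[0,1)$ into its two dyadic halves; the atoms are then exactly the dyadic intervals of generation $n$.

% Atoms of \vee_{i=0}^{n-1} S^{-i}A for the doubling map S(x) = 2x mod 1
% and the partition A = {[0,1/2), [1/2,1)} of X = [0,1): prescribing
% x in A_0, S(x) in A_1, ..., S^{n-1}(x) in A_{n-1} fixes the first
% n binary digits of x, so
$$\bigvee_{i=0}^{n-1} S^{-i}\mathcal{A}
  \;=\;\Bigl\{\bigl[\tfrac{j}{2^{n}},\tfrac{j+1}{2^{n}}\bigr) : 0\le j<2^{n}\Bigr\}.$$
% Anticipating the entropy notation defined just below: with respect to
% Lebesgue measure each atom has measure 2^{-n}, so
% H(lambda, \vee_{i=0}^{n-1} S^{-i}A) = n log 2 and h(S, lambda) = log 2.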
In this sense $\vee_{i=0}^{n-1} S^{-i}\mathcal{A}$ is generated by a finite partition $\mathcal{A}_{n-1}$ of $X$ which is finer than $\mathcal{A}$. Let $\mu$ be a probability measure. Then we define the entropy of $\mu$ with respect to a finite partition $\mathcal{A}$ as
$$H(\mu, \mathcal{A}) = -\sum_{A \in \mathcal{A}} \mu(A) \log \mu(A).$$
We define the entropy of $S$ as
$$h(S, \mu) = \lim_{n \to \infty} \frac{1}{n} H\Bigl(\mu, \bigvee_{i=0}^{n-1} S^{-i}\mathcal{A}\Bigr).$$
Here we implicitly used Sinai's entropy theorem; see [PY98, Lemma 8.8].

3.2. Dynamical systems and factors. A measurable dynamical system is denoted as $(X, \mathcal{X}, S, \mu)$, where $X$ is a set with $\sigma$-algebra $\mathcal{X}$ and measure $\mu$, and $S : X \to X$ is a measurable map. In case $\mathcal{X}$ is clear from context (for example the Borel $\sigma$-algebra in Borel spaces), we do not explicitly write it down. Given two dynamical systems $(X, \mathcal{X}, S, \mu)$ and $(X_1, \mathcal{X}_1, S_1, \mu_1)$, a measurable map $f : X \to X_1$ is called a factorization map, and $(X_1, \mathcal{X}_1, S_1, \mu_1)$ is called a factor of $(X, \mathcal{X}, S, \mu)$, if $\mu_1 = f\mu$ and $f \circ S = S_1 \circ f$ holds $\mu$-almost everywhere.

3.3. Dynamics on product sets and components. Let $(X, S, \mu)$ be a measurable dynamical system with $X = X_1 \times X_2$. Denote by $\pi_1 : X \to X_1$ the projection function. Then the $X_1$ component of the measure $\mu$ is the projected measure $\pi_1\mu$. Let $\mathcal{A}$ be a collection of subsets of $X$; the $X_1$ component of $\mathcal{A}$ is $\pi_1\mathcal{A}$. In the case when $S$ is a product or skew-product of maps, namely, for $(x_1, x_2) \in X$, $S(x_1, x_2) = (S_1(x_1), S_2(x_1, x_2))$, then $(X_1, S_1, \pi_1\mu)$ is a factor of $(X, S, \mu)$, and $\pi_1\mu$ is $S_1$-invariant if $\mu$ is $S$-invariant. We call $(X_1, S_1, \pi_1\mu)$ the $X_1$ component of $(X, S, \mu)$.

3.5. Dimensions. We will encounter (and have encountered) in this paper various notions of fractal dimensions. We briefly introduce the definitions. For more details on the Hausdorff and box dimensions, see [F05, Chapters 2, 3] and [M99, Chapters 4, 5]. For the Assouad dimension, see [F14]. We shall use $N(F, r)$ for the minimal covering number of a set $F$ in $\mathbb{R}^n$ by cubes of side length $r > 0$.

3.5.1. Hausdorff dimension. Let $g : [0, 1) \to [0, \infty)$ be a continuous function such that $g(0) = 0$. Then for all $\delta > 0$ we define the quantity
$$H^g_\delta(F) = \inf\Bigl\{\sum_{i} g(\mathrm{diam}(U_i)) : F \subset \bigcup_i U_i,\ \mathrm{diam}(U_i) < \delta\Bigr\}.$$
The $g$-Hausdorff measure of $F$ is
$$H^g(F) = \lim_{\delta \to 0} H^g_\delta(F).$$
When $g(x) = x^s$, then $H^g = H^s$ is the $s$-Hausdorff measure, and the Hausdorff dimension of $F$ is
$$\dim_H F = \inf\{s \geq 0 : H^s(F) = 0\}.$$

3.5.2. Box dimensions. The upper box dimension of a bounded set $F$ is
$$\overline{\dim}_B F = \limsup_{r \to 0} \frac{\log N(F, r)}{-\log r}.$$
Similarly, the lower box dimension of $F$ is
$$\underline{\dim}_B F = \liminf_{r \to 0} \frac{\log N(F, r)}{-\log r}.$$
If the limsup and liminf are equal, we call this common value the box dimension of $F$ and denote it by $\dim_B F$.

3.5.3. Assouad and modified Assouad dimensions. The Assouad dimension of $F$ is
$$\dim_A F = \inf\Bigl\{s \geq 0 : \exists C > 0 \text{ such that } N\bigl(B(x, R) \cap F, r\bigr) \leq C\Bigl(\frac{R}{r}\Bigr)^{s} \text{ for all } x \in F,\ 0 < r < R\Bigr\},$$
where $B(x, R)$ denotes the closed ball of centre $x$ and radius $R$. The modified Assouad dimension of $F$ is obtained by allowing countable decompositions:
$$\dim_{MA} F = \inf\Bigl\{\sup_{i \geq 1} \dim_A F_i : F \subset \bigcup_{i \geq 1} F_i\Bigr\}.$$
In particular, any countable set has modified Assouad dimension $0$, and it is easy to see that $\dim_{MA} F \leq \dim_A F$.

3.6. $\times p$ mod 1 invariant sets. In this paper, given an integer $p \geq 2$, we use $A_p$ to denote an arbitrary closed $\times p \bmod 1$ invariant subset of $[0, 1]$.

3.7. Densities of integer sequences. We also work with various notions of densities of integer sequences. We recall two notions of density for integer sequences.

Definition 3.1. The upper natural density of $W \subset \mathbb{N}$ is defined as
$$\overline{d}(W) = \limsup_{N \to \infty} \frac{\#(W \cap [1, N])}{N}.$$
Similarly, we define the lower natural density by replacing the above limsup with a liminf, and write it as $\underline{d}(W)$. If these two numbers coincide, we call the common value the natural density of $W$ and write it as $d(W)$. The upper Banach density of $W$ is
$$\limsup_{N \to \infty}\, \sup_{k \geq 0} \frac{\#(W \cap [k+1, k+N])}{N}.$$

Given functions $f, g : \mathbb{N} \to [0, \infty)$, we say that $f = O(g)$ if there exists a positive number $C > 0$ such that $f(k) \leq C g(k)$ for all $k$, and $f = o(g)$ if for any $\epsilon > 0$ there exists $N \in \mathbb{N}$ such that for all $k \geq N$ we have $f(k) \leq \epsilon g(k)$. On some occasions there is another parameter set $S$, and we have functions $f, g : \mathbb{N} \times S \to [0, \infty)$; we then write $O_c(g)$, $o_c(g)$ to indicate that the above tendencies depend on the choice of $c \in S$. We say that $f = O(g)$, $o(g)$ uniformly for $c \in S$ if the above tendencies do not depend on the choice of $c$.
3.8. Weak convergence of measures and the Portmanteau theorem. In Section 8, we need the notion of weak* convergence of measures and the Portmanteau theorem. Let $\mu_k$, $k \geq 1$, be a sequence of probability measures on a Borel space $X$. We say that $\mu_k \to \mu$ in the weak* sense (or weakly) if for all bounded continuous functions $f : X \to \mathbb{R}$,
$$\lim_{k \to \infty} \int_X f \, d\mu_k = \int_X f \, d\mu.$$
The following version of the Portmanteau theorem is taken from [K06, Theorem 13.16] and [Su14].

Theorem 3.3 (Portmanteau theorem). Let $\mu_k$, $k \geq 1$, be a sequence in $\mathcal{P}(X)$ (the space of Borel probability measures supported on $X$), where $X$ is a Borel space, and let $\mu \in \mathcal{P}(X)$. The following statements are equivalent:
1: $\mu_k \to \mu$ weakly;
2: $\limsup_k \mu_k(K) \leq \mu(K)$ for all closed subsets $K$ of $X$;
3: $\lim_{k \to \infty} \int_X f \, d\mu_k = \int_X f \, d\mu$ for bounded and $\mu$-almost everywhere continuous real valued functions $f$ on $X$.

There are many other equivalent statements of the Portmanteau theorem; for more details, see [Su14] and the references therein. One particular use of the above result is related to invariant measures of almost continuous dynamical systems. More precisely, let $X$ be a compact metric space, let $T : X \to X$ be a map (not necessarily continuous), for each integer $n \geq 1$ let $x_n \in X$ be arbitrarily chosen, and let $\mu_n = (n + 1)^{-1} \sum_{i=0}^{n} \delta_{T^i(x_n)}$ be a sequence of probability measures on $X$. Let $\mu$ be a weak* limit point of this sequence. In the case when $T$ is continuous, we know that $\mu$ is $T$-invariant; this is the content of the Kryloff–Bogoliouboff theorem. We can extend this result if $T$ is only assumed to be $\mu$-almost everywhere continuous. In fact, for any $f \in C(X)$, we have, along a subsequence of integers $n_k$ with $\mu_{n_k} \to \mu$,
$$\int f \, d\mu = \lim_{k \to \infty} \frac{1}{n_k + 1} \sum_{i=0}^{n_k} f(T^i(x_{n_k})).$$
Now we want to consider the same for the function $f \circ T$. It is continuous wherever $T$ is continuous; then we see that $f \circ T$ is $\mu$-almost everywhere continuous. We have the following result (by Theorem 3.3 (3)):
$$\int f \circ T \, d\mu = \lim_{k \to \infty} \frac{1}{n_k + 1} \sum_{i=0}^{n_k} f(T^{i+1}(x_{n_k})).$$
The last term is equal to
$$\lim_{k \to \infty} \Bigl(\frac{1}{n_k + 1} \sum_{i=0}^{n_k} f(T^{i}(x_{n_k})) + \frac{f(T^{n_k+1}(x_{n_k})) - f(x_{n_k})}{n_k + 1}\Bigr),$$
which is the same as (recall that $f$ is bounded)
$$\lim_{k \to \infty} \frac{1}{n_k + 1} \sum_{i=0}^{n_k} f(T^{i}(x_{n_k})) = \int f \, d\mu.$$
This shows that $\mu$ is $T$-invariant. In general, it is not simple to show that $T$ is $\mu$-almost everywhere continuous. There are some special cases when this can be checked; see for example [W19, Section 5.2].

4. SPARSENESS AND DIPOLES

In this section, we introduce our key tools for approaching Furstenberg's intersection problem. The ideas behind the definitions are very natural and straightforward; however, we were not able to find any direct references. We have decided to give a detailed treatment, which will be more than what we need for Furstenberg's intersection problem.

4.1. Doubling measures. Later we shall use some facts about doubling measures. Here we are interested in doubling measures supported on compact subsets of $\mathbb{R}$. We have the following result; the proofs can be found in [VK], [L98, Theorem 6.10] and [KRS12].

Theorem 4.1. Let $A \subset \mathbb{R}$ be a compact set. Then there is a doubling probability measure supported on $A$. Namely, there is a measure $\mu \in \mathcal{P}(A)$ and there exists an absolute constant (called the doubling constant for $\mathbb{R}$) $D \geq 1$ such that for all $a \in A$ and $r > 0$,
$$\mu(B(a, 2r)) \leq D\, \mu(B(a, r)).$$
According to [L98, Section 6.13], the constant $D$ for $\mathbb{R}$ can be chosen to be $2 \times 3 \times 4 \times 9^5$.

4.2. Sparseness. Definition. Let $A \subset \mathbb{R}$ be a compact set. We say that $A$ is sparse (resp. super sparse) near $0$ if its sparse index
$$W(A) = \{k \in \mathbb{N} : \text{there is } x \in A \text{ with } 2^{-k-1} \leq |x| < 2^{-k}\}$$
has upper natural (resp. upper Banach) density $0$. More generally, given any $a \in A$, we define the sparse index of $A$ near $a$ by $W(A, a) = W(A - a)$, and we say that $A$ is (super) sparse near $a$ if and only if $A - a$ is (super) sparse near $0$.
In general, when $l \subset \mathbb{R}^2$ is contained in a line not parallel to the $Y$-coordinate axis, we define $W(A, a)$ as $W(A, a) = W(\pi_Y(A), \pi_Y(a))$. It is easy to see that (super) sparseness is insensitive to scaling. That is to say, if $W(A, a)$ has upper natural density $0$, then for each real number $c \neq 0$, $W(cA, ca)$ also has upper natural density $0$; a similar result holds for the upper Banach density as well. Given any set $A \subset \mathbb{R}$, we denote by $|A - A|$ its distance set. Intuitively, if $|A - A|$ is sparse near $0$ then $A$ cannot be too large. A less restrictive notion is uniform sparseness: for each $\delta > 0$, there is an integer $N_\delta$ such that $\#(W(A, a) \cap [1, N]) \leq \delta N$ for all $a \in A$ and $N \geq N_\delta$. A similar notion of uniform super sparseness can be formulated as well. In particular, if $|A - A|$ is (super) sparse, then $A$ is uniformly (super) sparse.

Proposition 4.2. We have the following results.
• 1: Any uniformly sparse compact set $A \subset \mathbb{R}$ satisfies $\overline{\dim}_B A = 0$.
• 4: The converses of the above are in general not true. On the other hand, if $A$ is finite, then it is uniformly super sparse.

Proof. The proofs for (1) and (2), (3) are very similar, and we only write the proof for (1). First observe that the last conclusion of (4) is trivial. We now illustrate the first part of (4). Let $A_0$ be the set
$$A_0 = \{2^{-k}\}_{k \geq 1} \cup \{0\}.$$
We see that $\dim_B A_0 = 0$, but $A_0$ is not sparse near $0$, and therefore it is not uniformly sparse.

Now we consider (1). Let $A \subset [0, 1]$ be a uniformly sparse set. Then the set $W(A, a)$ has upper natural density $0$ uniformly across $a \in A$. Since $A$ is compact, we may assume that it is contained in $[0, 1]$. To bound the upper box dimension of $A$ we shall use Theorem 4.1 and find a doubling (with doubling constant $D > 0$) probability measure $\mu$ supported on $A$. Let $a \in A$ be arbitrarily chosen; for any integer $n \geq 0$ we can find a nested sequence of intervals $a \in B(a, 2^{-n}) \subset \cdots \subset B(a, 1)$. Since we assumed that $A \subset [0, 1]$, we see that $\mu(B(a, 1)) = 1$. Now we make use of the uniform sparseness of $|A - A|$. It is clear that if $k \notin W(A, a)$, then $A \cap B(a, 2^{-k}) = A \cap B(a, 2^{-k-1})$ and therefore $\mu(B(a, 2^{-k-1})) = \mu(B(a, 2^{-k}))$. Since $W(A, a)$ has natural density $0$ uniformly across $a \in A$, we see that for all $\epsilon > 0$ there exists an $N_\epsilon$ such that for all $a \in A$ and $N \geq N_\epsilon$ we have
$$\#\bigl(W(A, a) \cap [1, N]\bigr) \leq \epsilon N.$$
Then we see that for all $N \geq N_\epsilon$,
$$\mu(B(a, 2^{-N})) \geq D^{-\epsilon N}.$$
We can cover $A$ with disjoint intervals of length $2^{-N-1}$ and denote the collection of such intervals by $\mathcal{N}_{N+1}$; then for any $I \in \mathcal{N}_{N+1}$ there is an $a \in I \cap A$ such that $I \subset B(a, 2^{-N})$, and therefore $\mu(I) \geq D^{-\epsilon N}$. Since $\mu$ is a probability measure, we see that
$$N(A, 2^{-N-1}) = \#\mathcal{N}_{N+1} \leq D^{\epsilon N}.$$
This implies that $\overline{\dim}_B A \leq \epsilon \log D / \log 2$, and because $\epsilon$ can be chosen arbitrarily small, we see that $\overline{\dim}_B A = 0$. This finishes the proof of (1). The proofs of (2), (3) are similar and we omit the details.
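The quantitative content of the proof of (1) can be condensed into one chain of estimates (our summary of the argument just given, in the same notation):

% For a uniformly sparse A with doubling measure mu (constant D):
% mass is lost only at the dyadic scales listed in W(A,a), of which
% there are at most epsilon*N among the first N, hence
$$\mu\bigl(B(a,2^{-N})\bigr)
  \;\ge\; D^{-\#(W(A,a)\cap[1,N])}\,\mu\bigl(B(a,1)\bigr)
  \;\ge\; D^{-\epsilon N},
  \qquad N\ge N_{\epsilon},$$
% so at most D^{epsilon N} disjoint intervals of length 2^{-N-1} can
% carry mu-mass, giving
$$N\bigl(A,2^{-N-1}\bigr)\;\le\; D^{\epsilon N}
  \quad\Longrightarrow\quad
  \overline{\dim}_{B}A\;\le\;\epsilon\,\frac{\log D}{\log 2}
  \;\longrightarrow\;0 \quad (\epsilon\to 0).$$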
Since the doubling constant $D$ can be chosen independently of $A$, we see that, under the uniform sparseness assumption, for each $\epsilon > 0$ there is an integer $N_\epsilon$ such that $N(A, 2^{-n-1}) \leq D^{\epsilon n}$ for all $n \geq N_\epsilon$. Therefore, if we have a collection of compact sets $\{A_i\}_{i \in I}$ in $[0, 1]$ and we assume uniform sparseness uniformly across $i \in I$, then for each $\epsilon > 0$ there is an integer $N_\epsilon$ such that $N(A_i, 2^{-n-1}) \leq D^{\epsilon n}$ for all $i \in I$ and $n \geq N_\epsilon$. If sparseness holds individually for all $a \in A$, then it is possible to say something about the Hausdorff measure of $A$ with respect to a certain gauge function; this is the content of Proposition 4.3, whose proof we now give.

Proof. Since $A$ is compact, we can find a doubling probability measure $\mu$ with doubling constant $D$ on it; see Theorem 4.1. Let $c > 0$ be an arbitrarily chosen constant. Then for each $a \in A$, because of the sparseness of $A$ around $a$, with a similar argument as in the proof of Proposition 4.2 we see that there exists an integer $N_a$ such that whenever $N \geq N_a$ we have
$$\mu(B(a, 2^{-N})) \geq D^{-cN}.$$
Since $A$ is compact, we see that there is a finite collection of intervals of the form $I_a = B(a, 2^{-N_a})$ that covers $A$. By Besicovitch's covering lemma ([M99, Chapter 2, Section 7]), we see that there exists an absolute constant $C > 0$ and a finite subset $A' \subset A$ such that $\{I_a\}_{a \in A'}$ covers $A$ with multiplicity at most $C$. Let $r_a$ be the length of $I_a$; we see that for an absolute constant $C > 0$,
$$\sum_{a \in A'} g(r_a) \leq C.$$
It is clear that we can make $\max_{a \in A'} r_a$ arbitrarily small. This implies that $H^g(A) < \infty$.

4.3. Dipoles. The following counting fact will be used: there is a constant $c > 0$ such that the following holds for all small enough $\delta > 0$. Let $A \subset \mathbb{R}^2$ be a compact subset and let $E \subset [0, 2\pi]$ be a $\delta$-separated set of directions. Suppose that for each $e \in E$ we can find $x_e, y_e \in A$ such that $|y_e - x_e|$ is bounded away from $0$ and $\infty$, and $y_e - x_e$ points in the direction $e$. Then we have
$$N(A, \delta) \geq c\sqrt{\#E}.$$
To see this, we only need to cover $A$ with disjoint $\delta$-cubes. As long as $\delta$ is small enough, there is a number $c' > 0$ such that, if there is a $\delta$-cube containing $M$ points of the form $x_e$, then the corresponding points $y_e$ are all at least $c'\delta$-separated from each other, and therefore $N(A, \delta) \geq c'M/2$. On the other hand, if none of the $\delta$-cubes contains more than $M$ points of the form $x_e$, then $N(A, \delta) \geq \#E/M$. Then we see that for all integers $M$,
$$N(A, \delta) \geq \min\{c'M/2,\ \#E/M\},$$
and choosing $M = \sqrt{\#E}$ proves the claim.

Definition 4.4. Let $A \subset \mathbb{R}^2$ be a compact subset. The dipole direction set $DD(A)$ of $A$ is the set of unit vectors $(y - x)/|y - x|$ over pairs $x, y \in A$ with $|y - x|$ bounded away from $0$. It is easy to see that when $A$ is compact, $DD(A)$ is also compact. We have shown the following lemma relating $N(A, \delta)$ to the size of $DD(A)$.

5. TARGET HITTING ESTIMATES

Lemma 5.1. Let $\alpha$ be an irrational number and let $A \subset [0, 1]$ be a compact set. Then for each $x \in [0, 1)$, the set $W = \{n \geq 0 : R^n_\alpha(x) \in A\}$ has upper Banach density at most $\lambda(A)$, where $\lambda$ is the Lebesgue measure on $[0, 1]$.

This implies that when $A$ is small, we expect that $R^n_\alpha(x) \in A$ happens not so often. It is known that the circle rotation system with irrational $\alpha$ is uniquely ergodic; therefore it is expected that $R^n_\alpha(0) \in A$ happens not so often.

Proof. For any $\epsilon > 0$, we can cover $A$ with intervals $A \subset \bigcup_{i \in I} I_i$ such that $I$ is a finite set and $\sum_{i \in I} \lambda(I_i) \leq \lambda(A) + \epsilon$. Then we can approximate each $\mathbb{1}_{I_i}$ with a continuous function $f_i$ satisfying
$$\mathbb{1}_{I_i} \leq f_i \leq \mathbb{1}_{(1+\epsilon)I_i},$$
where $(1 + \epsilon)I_i$ is the interval with the same centre as $I_i$ but with length equal to $(1 + \epsilon)$ times that of $I_i$. Then, because of the unique ergodicity, we see that for each $i \in I$ and $x \in [0, 1)$,
$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} f_i(R^n_\alpha(x)) = \int f_i \, d\lambda \leq (1 + \epsilon)\lambda(I_i).$$
Furthermore, the above limit holds uniformly across $x \in [0, 1)$. Therefore, for each $i \in I$ there is a number $N_i$ which does not depend on $x$ such that for each $N \geq N_i$ and $x \in [0, 1)$,
$$\frac{1}{N} \sum_{n=0}^{N-1} f_i(R^n_\alpha(x)) \leq (1 + \epsilon)\lambda(I_i) + \frac{\epsilon}{\#I}.$$
Let $N_\epsilon = \max_{i \in I} N_i$ and let $M \geq N_\epsilon$. Since $M \geq N_i$ for each $i \in I$, we see that for every $k$,
$$\#\bigl(W \cap [k, k + M - 1]\bigr) \leq \sum_{i \in I} \sum_{n=k}^{k+M-1} f_i(R^n_\alpha(x)).$$
This implies that
$$\#\bigl(W \cap [k, k + M - 1]\bigr) \leq M\Bigl((1 + \epsilon)(\lambda(A) + \epsilon) + \epsilon\Bigr).$$
Since $\epsilon > 0$ and $M \geq N_\epsilon$ can be chosen arbitrarily, we see that the upper Banach density of $W$ is at most $\lambda(A)$.

It is natural to consider what happens when $A$ is small in dimension. For this purpose we need to consider error terms in ergodic limits. We will be most interested in the cases when $\alpha = \log p/\log q$ and $\alpha \notin \mathbb{Q}$. It was proved in [B15] that there are numbers $C(\alpha), c(\alpha) > 0$ such that for all integers $n, m \geq 1$,
$$\Bigl|\alpha - \frac{m}{n}\Bigr| \geq \frac{c(\alpha)}{n^{C(\alpha)}}. \tag{GAP}$$
The best known example of this kind is when $\alpha = \log 2/\log 3$; in this case (see [R85, proposition and formulae (6), (7) on page 160]) the above inequality can be written as
$$\Bigl|\alpha - \frac{m}{n}\Bigr| \geq 0.00000000000001 \cdot \frac{1}{n^{14.3}}.$$
For any two different integers $i_1, i_2$ we have
$$\{i_1\alpha\} - \{i_2\alpha\} = (i_1 - i_2)\alpha - (M_1 - M_2),$$
where $M_1 = \lfloor i_1\alpha \rfloor$, $M_2 = \lfloor i_2\alpha \rfloor$. As we can assume that $i_1 > i_2$, the inequality (GAP) implies that
$$\bigl|\{i_1\alpha\} - \{i_2\alpha\}\bigr| = (i_1 - i_2)\Bigl|\alpha - \frac{M_1 - M_2}{i_1 - i_2}\Bigr| \geq \frac{c(\alpha)}{(i_1 - i_2)^{C(\alpha) - 1}}.$$
This is the key point of the inequality (GAP) that we shall use.

Lemma 5.2.
Let $A \subset [0, 1]$ be a set with $\overline{\dim}_B A = s < 1$. Then for any irrational number of the form $\alpha = \log p/\log q$ with integers $p, q > 0$, the following inequality holds for all $\epsilon \in (0, 1 - s)$:
$$\#\{1 \leq i \leq N : \{i\alpha\} \in A\} = O_\alpha\bigl(N^{(C(\alpha) - 1)(s + \epsilon)}\bigr),$$
where $C(\alpha) > 0$ is a constant depending only on $\alpha$.

Remark 5.3. This lemma applies better in the case when $s$ is small. For example, if $\alpha = \log 2/\log 3$, then when $s < 1/14$ we have the polynomial bound
$$\#\{1 \leq i \leq N : \{i\alpha\} \in A\} = O\bigl(N^{13.3(s + \epsilon)}\bigr).$$

Proof. Let $N$ be a large integer and consider the sequence
$$S_N(\alpha) = \{\{i\alpha\} : 1 \leq i \leq N\}.$$
It is clear that elements in $S_N(\alpha)$ never coincide, because $\alpha$ is an irrational number. Then by inequality (GAP) we see that there exist positive numbers $c(\alpha), C(\alpha) > 0$ such that for any distinct $x, y \in S_N(\alpha)$,
$$|x - y| \geq \frac{c(\alpha)}{N^{C(\alpha) - 1}}.$$
Now we choose $r_N = N^{-C(\alpha) + 1}$ and cover $A$ with $k_N = N(A, r_N)$ many disjoint $r_N$-intervals. We denote the union of those $r_N$-intervals by $A_{r_N}$. Then we see that
$$\#\{1 \leq i \leq N : \{i\alpha\} \in A\} \leq \#\bigl(S_N(\alpha) \cap A_{r_N}\bigr) = O_\alpha(k_N).$$
This is because each $r_N$-interval we use to cover $A$ contains at most $O_\alpha(1)$ many points of $S_N(\alpha)$. Then, because of the dimension requirement on $A$, we see that for any $\epsilon > 0$,
$$k_N = N(A, r_N) = O\bigl(r_N^{-(s + \epsilon)}\bigr) = O\bigl(N^{(C(\alpha) - 1)(s + \epsilon)}\bigr).$$
Therefore we see that for a constant $C'(\alpha)$ we have
$$\#\{1 \leq i \leq N : \{i\alpha\} \in A\} \leq C'(\alpha)\, N^{(C(\alpha) - 1)(s + \epsilon)}.$$
This proves the result.

6. SMALL SETS, DIPOLE CONFIGURATIONS

In this section we study $A_2 \cap (uA_3 + v)$ when $\dim_H A_2 + \dim_H A_3 < 1/2$. Furstenberg studied this intersection in [F70]; in particular, he showed that $\dim_H A_2 \cap (uA_3 + v) = 0$ for all $u \neq 0$. In this section, we provide a more straightforward argument and a stronger result.

Proof of Theorem 1.5. We consider the product set $K = A_3 \times A_2$. Then $l$ is, up to rescaling, the same as $l_K = l' \cap K$ with $l' = \{y = ux + v\}$. For convenience we require that $u > 1$, but we note that the cases for other $u \neq 0$ are similar. Now we want to show that $|l_K - l_K|$ is super sparse near $0$. We denote $W_K = W(|l_K - l_K|)$, and we want to show that $W_K$ has zero upper Banach density. Now for each $k \in W_K$ we can find $x_k, y_k \in l_K$ such that
$$2^{-k-1} \leq |y_k - x_k| < 2^{-k}.$$
Without loss of generality we shall assume that the vector $y_k - x_k$ has positive $Y$-component. Now let $\alpha = \log 2/\log 3$, and construct the map $T : \mathbb{R}^2 \times [0, 1] \to \mathbb{R}^2 \times [0, 1]$,
$$T((x_1, x_2), t) = \begin{cases} ((x_1, 2x_2),\ t + \alpha) & \text{if } t + \alpha < 1, \\ ((3x_1, 2x_2),\ t + \alpha - 1) & \text{if } t + \alpha \geq 1. \end{cases}$$
Now let $x, y \in \mathbb{R}^2$ be two different points such that the line segment $xy$ is not parallel to a coordinate axis. Then we can find the following sequence of pairs of points in $\mathbb{R}^2$:
$$\bigl((x_n, t_n) = T^n(x, 0),\ (y_n, t_n) = T^n(y, 0)\bigr)_{n \geq 0}.$$
Now we apply the above map $T$ for $k$ times with the initial pair $x_k, y_k$ and end up with the pair $((x, t_k) = T^k(x_k, 0), (y, t_k) = T^k(y_k, 0))$. We still have to perform the mod 1 operation on each coordinate component of $x$ and $y$. Denote the following doubled set of $K$:
$$\tilde{K} = K \cup (K + (0, 1)) \cup (K + (1, 0)) \cup (K + (1, 1));$$
then, because $|y - x| \in [1/6, 1.5]$, we can find $\tilde{x}, \tilde{y} \in \tilde{K}$ such that $\tilde{y} - \tilde{x} = y - x$. For each $k \in W_K$ we have thus seen that there is a pair of points $x, y \in \tilde{K}$ with $|x - y| \in [1/6, 1.5]$ such that the direction vector $y - x$ has slope $u3^{\{k \log 2/\log 3\}}$. We denote by $e : [0, 1] \to S^1$ the map such that $e(t)$ is the direction vector in $S^1 \subset \mathbb{R}^2$ with slope $u3^t$. It is easy to see that this map is smooth and therefore bi-Lipschitz. Then we see that
$$e(\{k\alpha\}) \in DD(\tilde{K}) \quad \text{for all } k \in W_K.$$
However, the dipole direction set $DD(\tilde{K})$ has upper box dimension at most $2s < 1$, and therefore its Lebesgue measure is $0$. By Lemma 5.1, $W_K$ must have upper Banach density $0$. For the second conclusion, let $N$ be a large integer and let $a$ be an arbitrarily chosen integer. We notice that $\{\{k \log 2/\log 3\}\}_{k \in [a, a+N] \cap W_K}$ consists of $r_N$-separated points for $r_N = N^{-13.3}$; see Lemma 5.2.
Let $\epsilon > 0$ be a small number. The points $\{k\alpha\}$, $k \in [a, a+N] \cap W_K$, are $r_N$-separated and lie in $e^{-1}(DD(\tilde{K}))$, a set of upper box dimension at most $2s$; therefore for all large enough $N$ we see that
$$\#\bigl(W_K \cap [a, a+N]\bigr) \leq N\bigl(e^{-1}(DD(\tilde{K})), r_N\bigr) = O\bigl(N^{13.3(2s + \epsilon)}\bigr).$$
If we choose $\epsilon$ small enough, we see that
$$\#\bigl(W_K \cap [a, a+N]\bigr) = O(N^{27s}).$$

7. SINAI'S FACTOR THEOREM: CASINO WITH CLOCKS

In this section we introduce Sinai's factor theorem and prove Theorem 7.8. The main technicalities here are similar to those in [W19, Section 6], and our Theorem 7.8 can be seen as a generalization of [W19, Theorem 6.1]. To have an intuitive idea in mind, consider a sequence of i.i.d. random variables $\{X_n\}_{n \geq 1}$ with values in $\{0, 1\}$. For any irrational number $\alpha$ we consider the sequence $\{X_n R^n_\alpha(0)\}_{n \geq 1}$. Intuitively, imagine a casino with a clock (which is unrealistic) with only one finger, rotating with irrational angular speed (a $+\alpha \bmod 1$ system). Whenever a gambler throws a coin with head up, he will check the clock. Then a sample path of the above random sequence would be the series of times the gambler observed. The results in this section can be intuitively stated as follows: for each gambler, almost surely, the time series he observed equidistributes in $[0, 1]$; that is, the time series he observed does not depend on whether he is winning or losing. We shall discuss various different aspects of the above intuition.

7.1. Bernoulli systems. Let $\Lambda$ be a finite set of symbols, let $\Omega = \Lambda^{\mathbb{N}}$, and let $S : \Omega \to \Omega$ be the left shift. Then we take the $\sigma$-algebra on $\Omega$ generated by cylinder subsets. A cylinder subset $Z \subset \Omega$ is such that $Z = \prod_{i \in \mathbb{N}} Z_i$ and $Z_i = \Lambda$ for all but finitely many integers $i \in \mathbb{N}$. We construct a probability measure $\mu$ on $\Omega$ by giving a probability measure $\mu_\Lambda = \{p_\lambda\}_{\lambda \in \Lambda}$ on $\Lambda$ and setting $\mu = \mu_\Lambda^{\mathbb{N}}$. We require here that $p_\lambda \neq 0$ for all $\lambda \in \Lambda$. Then this system is weak-mixing and has entropy $h(S, \mu) = \sum_{\lambda \in \Lambda} -p_\lambda \log p_\lambda$. We call this system a Bernoulli system. We can also introduce a metric topology on $\Omega$ by defining
$$d(\omega, \omega') = \#\Lambda^{-\min\{i \in \mathbb{N} : \omega_i \neq \omega'_i\}}.$$
This turns $\Omega$ into a compact and totally disconnected space. For $\omega \in \Omega$ and $r \in (0, 1)$, we use $B(\omega, r)$ to denote the ball around $\omega$ with radius $r$ with respect to the metric $d$ constructed above.

Let $\mathrm{Ber} = (\Omega, S, \mu)$ be a Bernoulli system on $\Omega = \Lambda^{\mathbb{N}}$, let $\alpha \in (0, 1)$ be an irrational number, and consider the skew product $T = S \times R_\alpha$ on $\Omega \times [0, 1)$. Heuristically, the dynamical system $T$ looks like a stochastic process given by a sequence of i.i.d. random variables. For any $B \subset \Omega$ with $\mu(B) > 0$ and $\omega \in \Omega$, the set
$$K(\omega, B) = \{k \in \mathbb{N} : S^k(\omega) \in B\}$$
can be viewed as randomly constructed by choosing each $k \in \mathbb{N}$ independently with probability $\mu(B)$. Then for any subset $K' \subset \mathbb{N}$ the chance that $K(\omega, B) \cap K' = \emptyset$ is $(1 - \mu(B))^{\#K'}$, and it is small when $\#K'$ is large, unless $\mu(B) = 0$, which we assumed not to be the case.

Definition 7.2. Let $(X, S, \mu)$ be a dynamical system and let $B \subset X$ be a subset. Then we can construct the sequence
$$K(x, B) = \{k \in \mathbb{N} : S^k(x) \in B\},$$
and, for $\alpha \in [0, 1)$ and an integer sequence $K$, the set
$$A_K = \overline{\{R^k_\alpha(0) : k \in K\}}.$$

Lemma 7.3. Consider the Bernoulli system $(\Omega, S, \mu)$. Let $\{B_i\}_{i \in I}$ be a finite pairwise disjoint family of measurable subsets of $\Omega$. Suppose that $\sum_{i \in I} \mu(B_i) \geq 1 - \delta$ for a $\delta \in (0, 1)$. Then there exists a set $\Omega' \subset \Omega$ with full $\mu$-measure such that for each $\omega \in \Omega'$ and any integer sequence $K$ of lower natural density $\rho$ larger than $\delta$, there exists an $i = i(\omega, K) \in I$ such that $A_{K(\omega, B_i) \cap K}$ has Lebesgue measure at least $\rho - \delta$.

Proof. For each $i \in I$, $K(\omega, B_i)$ can essentially be viewed as a random sequence of integers obtained by deciding to choose each integer independently with probability $\mu(B_i)$; it is helpful to have this intuition in mind for what follows. We see that for almost all $\omega \in \Omega$, by the ergodicity of Bernoulli systems, $d(K(\omega, B_i)) = \mu(B_i)$, and the sequence of real numbers $\{R^k_\alpha(0)\}_{k \in K(\omega, B_i)}$ equidistributes in $[0, 1]$ (we re-enumerate $K(\omega, B_i)$ with $\mathbb{N}$).
This can be seen by considering the dynamical system $(\Omega \times [0, 1], S \times R_\alpha, \mu \times \lambda)$ ($\lambda$ is the Lebesgue measure), which is ergodic because it is the product of a weakly mixing and a uniquely ergodic system; see also [W19, Lemma 6.5]. Since $I$ is a finite family, we see that for almost all $\omega \in \Omega$ the above results hold for each $i \in I$. We denote this full measure set by $\Omega'$. We see that for each $\omega \in \Omega'$,
$$d\Bigl(\bigcup_{i \in I} K(\omega, B_i)\Bigr) \geq 1 - \delta.$$
Now let $K$ be an arbitrarily chosen sequence with lower natural density $\rho > \delta$; then we see that $K \cap (\bigcup_{i \in I} K(\omega, B_i))$ has lower natural density at least $\rho - \delta > 0$. We denote for each $i \in I$,
$$K_i = K \cap K(\omega, B_i),$$
and write $\rho_i$ for the lower natural density of $K_i$. If we had $\rho_i < (\rho - \delta)\mu(B_i)$ for every $i \in I$, then summing up these bounds would force $\sum_{i \in I} \mu(B_i) > 1$, which is impossible. So we see that there exists an $i \in I$ such that $\rho_i \geq (\rho - \delta)\mu(B_i)$.

Now we denote $\epsilon = \rho - \delta$. For this $i$ we see that $K(\omega, B_i) \setminus K_i$ has upper natural density at most $\mu(B_i) - \rho_i \leq (1 - \epsilon)\mu(B_i)$. Now, by the equidistribution property, we see that for any interval $I' \subset [0, 1]$, the sequence $K'' = \{k \in K(\omega, B_i) : R^k_\alpha(0) \in I'\}$ has natural density $\mu(B_i)|I'|$; therefore if $|I'|\mu(B_i) > (1 - \epsilon)\mu(B_i)$, then $K''$ has natural density strictly larger than $(1 - \epsilon)\mu(B_i)$. Therefore $K_i \cap K''$ cannot be empty, and thus we have $I' \cap A_{K(\omega, B_i) \cap K} \neq \emptyset$. This argument works for finite unions of intervals as well: for any finite collection of intervals with disjoint interiors $I_j$, $j \in J$, with total length $\sum_{j \in J} |I_j| > 1 - \epsilon$, we see that
$$\Bigl(\bigcup_{j \in J} I_j\Bigr) \cap A_{K(\omega, B_i) \cap K} \neq \emptyset.$$
Then we see that $A^c_{K(\omega, B_i) \cap K}$ is open and has Lebesgue measure at most $1 - \epsilon$. This is because for any open set $O \subset [0, 1]$ there exists a countable family $L_m$, $m \geq 1$, of intervals with disjoint interiors such that $\sum_m |L_m| = \lambda(O)$, where $\lambda$ is the Lebesgue measure; see [SS05, Theorem 1.3]. Then for any $\eta > 0$ we can find a finite collection of those intervals with total length at least $\lambda(O) - \eta$. We can apply this argument to $A^c_{K(\omega, B_i) \cap K}$ for arbitrarily small $\eta > 0$. As a result, we see that $\lambda(A_{K \cap K(\omega, B_i)}) \geq \epsilon$, as required.

Theorem 7.4. Let $(X, S, \mu)$ be an ergodic dynamical system with $h(S, \mu) > 0$. We can find a Bernoulli factor $\mathrm{Ber} = (\Omega, S_B, \nu)$ of $(X, S, \mu)$ with $h(S_B, \nu) = h(S, \mu) > 0$. Denote by $f : X \to \Omega$ the factorization map. For a $\delta > 0$, let $B_i$, $i \in I$, be a finite disjoint collection of measurable subsets of $\Omega$ with $\sum_{i \in I} \nu(B_i) \geq 1 - \delta$. Then for $\mu$-almost all $x \in X$ and any integer sequence $K$ with lower natural density $\rho > \delta$, there exists an $i \in I$ such that $A_{K(f(x), B_i) \cap K}$ has Lebesgue measure at least $\rho - \delta$.

Proof. We can find a Bernoulli factor $\mathrm{Ber} = (\Omega, S_B, \nu)$ of $(X, S, \mu)$ with $h(S_B, \nu) = h(S, \mu) > 0$; this is Sinai's factor theorem. Then a straightforward application of Lemma 7.3 gives us the result.

When $\mathrm{Ber}$ is a factor of $(X, S, \mu)$ with the same entropy, then intuitively all the complexity is carried by $\mathrm{Ber}$, and therefore the fibres of $f$ should not be too complicated with respect to the map $S$. The next result expresses this intuition in a clear way; it is known as Rohlin's disintegration theorem, and we adopt the version in [S12].

Definition 7.5. Let $f : X \to Y$ be a measurable map between two measurable spaces and let $\mu$ be a measure on $X$ with projection $\mu_Y = f\mu$ on $Y$. We call a collection of measures $\{\mu_y\}_{y \in Y}$ a system of conditional measures if the following properties hold:
1: For all $y \in Y$, $\mu_y$ is a measure supported on $f^{-1}(y)$, and for $\mu_Y$-almost all $y \in Y$, $\mu_y$ is a probability measure.
2: We have the law of measure disintegration: for all Borel sets $B \subset X$ we have
$$\mu(B) = \int_Y \mu_y(B) \, d\mu_Y(y).$$
If $X, Y$ are also metric spaces ($f$ need not be continuous), we require further that the following holds for $\mu_Y$-almost all $y \in Y$.
3: $\mu_y = \lim_{r \to 0} \mu_{f^{-1}(B(y, r))}$, where the limit is in the weak* sense and $\mu_{f^{-1}(B(y, r))}$ is the conditional measure of $\mu$ on $f^{-1}(B(y, r))$; namely, for any Borel set $B \subset X$ with positive $\mu$-measure,
$$\mu_B(E) = \frac{\mu(E \cap B)}{\mu(B)} \quad \text{for Borel sets } E \subset X.$$

Theorem 7.6. Let $f : X \to Y$ be a measurable map between two metric spaces with the corresponding Borel $\sigma$-algebras. Then there exists a system of conditional measures.

Then we have the following result, due to [W19, Lemma 6.4], which is a direct consequence of the conditional Shannon–McMillan–Breiman theorem, Egorov's theorem and the Portmanteau theorem.

Theorem 7.7 (Wu). Let $(X, S, \mu)$ be an ergodic dynamical system with $X$ a Borel space. Let $\mathcal{A}$ be a finite partition of $X$ such that $\vee_{i=0}^{\infty} S^{-i}\mathcal{A}$ generates the $\sigma$-algebra of $X$. For each $x \in X$ not on the boundaries of sets in $\vee_{i=1}^{n} S^{-i}\mathcal{A}$ and for each $n \in \mathbb{N}$, we denote by $A_n(x)$ the unique atom $A$ of $\vee_{i=0}^{n} S^{-i}\mathcal{A}$ such that $x \in A$. If $\mu$ does not give positive measure to the boundaries of $S^{-i}\mathcal{A}$ for all $i \in \mathbb{N}$ and $h(S, \mu) > 0$, then there exists a Bernoulli factor $(\Omega, S_B, \nu)$ with measurable factorization map $f : X \to \Omega$, and for each $\delta > 0$ there exist a set $X_\delta \subset X$ and a constant $C_\delta$ with the following properties:
1: $\mu(X_\delta) > 1 - \delta$.
2: For all $x \in X_\delta$ and all $n \geq 1$, the conditional measure satisfies $\mu_{f(x)}(A_n(x)) \geq C_\delta^{-1} 2^{-n\delta}$.
3: For all integers $n \geq 1$, there exists a measurable set $B^n_\delta \subset \Omega$ with $\nu(B^n_\delta) \geq 1 - \delta$ and an $r = r(\delta, n) > 0$ such that for all $\omega \in B^n_\delta$ and all atoms $A_n$ intersecting $X_\delta \cap f^{-1}(B(\omega, r))$ we have
$$\mu\bigl(A_n \cap f^{-1}(B(\omega, r))\bigr) \geq (1 - \delta)\, C_\delta^{-1} 2^{-n\delta}\, \mu\bigl(f^{-1}(B(\omega, r))\bigr).$$

The following result is a generalization of [W19, Theorem 6.1].

Theorem 7.8. We adopt the conditions of Theorem 7.7. In addition, we let $\epsilon > 0$ be arbitrarily chosen in $(0, 1)$ and let $\alpha$ be an arbitrary irrational number in $(0, 1)$. For each $\delta \in (0, 1)$, there is a constant $c_\delta > 0$ and a set $X'_\delta$ with full $\mu$-measure such that the following statement holds: for all $n \geq 1$, all $x \in X'_\delta$ and all $K \subset \mathbb{N}$ with lower natural density at least $\rho > 2\delta + \epsilon$, there is a collection $\mathcal{M}_n = \mathcal{M}_n(x, K)$ of at most $c_\delta 2^{n\delta}$ atoms of $\vee_{i=0}^{n} S^{-i}\mathcal{A}$ with the following properties. Denote the union of the elements of $\mathcal{M}_n$ by $M_n$, and construct the sequence
$$K_{M_n}(x) = \{k \in K : S^k(x) \in M_n\}.$$
Then the set $A_{K_{M_n}(x)}$ has Lebesgue measure at least $\epsilon$.

Proof. We use Theorem 7.7 to find a set $X_\delta$ with $\mu(X_\delta) > 1 - \delta$. Then for each integer $n \geq 1$ we can find $B^n_\delta$ with $\nu(B^n_\delta) \geq 1 - \delta$ and $r = r(\delta, n) > 0$. Without loss of generality we shall assume that $r = d^{-k}$, where $d$ is the number of digits of the Bernoulli system and $k$ is an integer. For each $\omega \in B^n_\delta$ we have the estimate in property (3) above. Now, because of the topology we chose for $\Omega$, we see that $B(\omega, r)$ consists of all sequences in $\Omega$ with the same first $k$ digits as $\omega$. In particular, if $\omega' \in B(\omega, r)$, then $B(\omega', r) = B(\omega, r)$; this property reflects the fact that $\Omega$ is an ultrametric space. Notice that for any Bernoulli system $(\Omega, S_B, \nu)$, any ball of positive radius has positive $\nu$-measure. In particular $\mu(f^{-1}(B(\omega, r))) > 0$, and by properties (2) and (3) in Theorem 7.7, for each $\omega' \in B^n_\delta \cap B(\omega, r)$ we have
$$\mu\bigl(A_n(x) \cap f^{-1}(B(\omega, r))\bigr) \geq (1 - \delta)\, C_\delta^{-1} 2^{-n\delta}\, \mu\bigl(f^{-1}(B(\omega, r))\bigr)$$
for every $x \in X_\delta \cap f^{-1}(B(\omega', r))$. Now it is possible to see that for all $x$ in the set $X_\delta \cap f^{-1}(B(\omega, r) \cap B^n_\delta)$, the atom $A_n(x)$ carries at least a $(1 - \delta)C_\delta^{-1}2^{-n\delta}$ proportion of $\mu(f^{-1}(B(\omega, r)))$. On the other hand, the total available mass is clearly $\mu(f^{-1}(B(\omega, r)))$. Since $\mu$ is not supported on the boundaries of any atom, we see that $X_\delta \cap f^{-1}(B(\omega, r) \cap B^n_\delta)$ can intersect at most $(1 - \delta)^{-1} C_\delta 2^{n\delta}$ many atoms of $\vee_{i=0}^{n} S^{-i}\mathcal{A}$, since different atoms can intersect only on boundaries. Now let $Y(\omega) = X_\delta \cap f^{-1}(B(\omega, r) \cap B^n_\delta)$. Since there are only finitely many $r$-balls in $\Omega$, we see that, as $\omega$ varies in $B^n_\delta$, there are finitely many different sets of the form $Y(\omega)$. Denote the collection of these sets by $\{Y_1, \dots, Y_{N(n)}\}$, where $N(n)$ is an integer. For each $i \in I = \{1, \dots, N(n)\}$, let $\Omega(i) \subset B^n_\delta$ be the set of the form $B(\omega, r) \cap B^n_\delta$ such that $Y_i = X_\delta \cap f^{-1}(\Omega(i))$.
We notice here that the union of all $Y_i$ is a rather large subset of $X$; more precisely, $\bigcup_{i\in I}Y_i=X_\delta\cap f^{-1}(B^n_\delta)$, so $\mu(\bigcup_{i\in I}Y_i)\ge1-2\delta$. For each $i\in I$ we write the collection of atoms intersecting $Y_i$ as $\mathcal{M}_n(i)$ and write their union as $M_n(i)$; we saw above that $\mathcal{M}_n(i)$ contains at most $2^{n\delta}(1-\delta)C_\delta$ atoms. Now we consider, for $x\in X$, the sequence
$$K(x)=\{k\in\mathbb{N}: S^k(x)\in X_\delta\};$$
by the ergodic theorem, for $\mu$ almost all $x\in X$ the sequence $K(x)$ has natural density at least $1-\delta$. For each $i\in I$ and $x\in X$ we consider the set $K(f(x),\Omega(i))$. By Lemma 7.3 and Theorem 7.4 we see that for $\mu$ almost all $x\in X$ and any sequence $K$ with lower natural density at least $2\delta+\epsilon$ there exists an $i\in I$ such that $A_{K\cap K(x)\cap K(f(x),\Omega(i))}$ has Lebesgue measure at least $\epsilon$. This is because $K\cap K(x)$ has lower natural density at least $\delta+\epsilon$ for $\mu$ almost all $x\in X$ and $\sum_{i\in I}\nu(\Omega(i))=\nu(B^n_\delta)\ge1-\delta$. The theorem follows since the above argument holds for all $n\ge1$ and we can find a full measure set $X'_\delta\subset X$ which satisfies all our requirements.

8. LARGE SETS, BERNOULLI FACTORS

We now deal with the case when $\dim_H A_2+\dim_H A_3\in(1/2,1)$.

Proof of Theorem 1.6. For the moment let $A\subset\mathbb{R}^2$ be an arbitrary compact set. We define the function $g_A:\mathbb{R}^2\times[0,1]\to\{0,1\}$ as the indicator of the event that $A$ meets the union of segments $[a+0.5v_t,a+v_t]\cup[a-v_t,a-0.5v_t]$, where we use $[x,y]$ with $x,y\in\mathbb{R}^2$ for the line segment from $x$ to $y$, and $v_t$ is the vector with slope $3^t$ whose $Y$-projection has length 1. To see that $g_A$ is measurable it is enough to see that $\{g_A(a,t)=0\}$ is open. Suppose $g_A(a,t)=0$. Since $A$ is compact, for each $\eta\in[0.5,1]\cup[-1,-0.5]$ we see that $a+\eta v_t\notin A$, and therefore there exists a positive number $r(\eta)>0$ such that $B(a+\eta v_t,r(\eta))\cap A=\emptyset$. The two segment pieces are compact, so finitely many of the balls $B(a+\eta v_t,r(\eta))$ cover them. Then it is easy to see that there exist two positive numbers $r(a),r(t)$ such that for each $(a',t')\in\mathbb{R}^2\times[0,1]$ with $|a'-a|<r(a)$ and $|t'-t|<r(t)$ the corresponding segment pieces for $(a',t')$ still avoid $A$. This shows that $\{g_A(a,t)=0\}$ is in fact an open set, and therefore $g_A$ is measurable. From now on we take $A$ to be the union of the translated copies $A_3\times A_2+(i,j)$, $(i,j)\in\{0,\pm1\}^2$ (there are in total 9 translated copies of $A_3\times A_2$), we let $\alpha=\log2/\log3$, and we let $S$ be the skew product on $U=[0,1]^2\times[0,1]$ which rotates the last coordinate by $\alpha$, doubles the second coordinate mod 1 at every step, and triples the first coordinate mod 1 exactly when the last coordinate lies in $[1-\alpha,1)$. In what follows we omit the subscript $A$ in $g_A$. We notice that for any $a\in A$ and any $t\in[0,1]$ the orbit of $(a,t)$ always lies in $(A_3\times A_2)\times[0,1]$. Having defined the dynamics we now construct a measure. Denote $x_k=(a_k,t_k)$. For each $k$ we construct the measure $\mu_k\in P(U)$ obtained by averaging the Dirac measures along the initial orbit segment of $x_k$. Then, by taking a subsequence if necessary, we assume that $\mu_k\to\mu$ weakly in $P(U)$. This measure $\mu$ is not necessarily $S$-invariant, because it might give positive measure to the discontinuities of $S$. If we identify $[0,1]^2$ with $\mathbb{R}^2/\mathbb{Z}^2=\mathbb{T}^2$, then $S$ is discontinuous precisely at points $(a',t')$ with $t'=1-\alpha$: this is where we choose a different multiplication map for the $[0,1]^2$ component. However, it is easy to see that the projection of $\mu$ onto the $[0,1]$ component is precisely the Lebesgue measure, because $\alpha\notin\mathbb{Q}$ and $R_\alpha$ is uniquely ergodic. Thus $S$ is $\mu$-a.e. continuous, and therefore $\mu$ is $S$-invariant (see Theorem 3.3, statement 3). Now we take a $\mu$-typical $x\in U$. Suppose that $x=(a',t')$; we want to estimate the Birkhoff average
$$\lim_{N\to\infty}\frac{1}{N}\sum_{k=0}^{N-1}g(S^k(a',t')).$$
Thus for $\mu$-a.e. $(a',t')$ we denote by $\mu_{a',t'}$ the ergodic component of $(a',t')$, and we see that this average equals $\int g\,d\mu_{a',t'}$. If $\sigma(a',t')$ is the ergodic disintegration measure of $\mu$ against the $S$-invariant $\sigma$-algebra, we see that
$$\int\Bigl(\int g\,d\mu_{a',t'}\Bigr)d\sigma(a',t')=\int g\,d\mu\ge\limsup_{k\to\infty}\int g\,d\mu_k\ge\rho.$$
In the second step, we have used the fact that $\{g(a,t)=1\}$ is a closed set, together with the Portmanteau theorem (Theorem 3.3, statement 2). For the third step we used the fact that $A_3$ is $\times3$ mod 1 invariant and $A_2$ is $\times2$ mod 1 invariant. We would get equality in the third step if the sets $A_2$, $A_3$ were strictly invariant under the maps $\times2$, $\times3$ respectively.
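A minimal numerical sketch of the skew product just described (the branch convention, tripling when the clock lies in $[1-\alpha,1)$, is our reading of the reconstructed definition, not verbatim from the source): over $n$ steps the clock visits the window about $n\alpha$ times, so the first coordinate is contracted by about $3^{n\alpha}=2^n$, matching the almost-square atoms used below.

```python
# Sketch (reconstruction, see lead-in) of the skew product S on
# [0,1]^2 x [0,1]: double the Y-coordinate every step, triple the
# X-coordinate when the clock t lies in [1 - alpha, 1), rotate t by alpha.

import math

ALPHA = math.log(2) / math.log(3)          # ~ 0.6309

def S(ax, ay, t):
    if t >= 1 - ALPHA:                     # the 'triple X' window
        ax = (3 * ax) % 1.0
    ay = (2 * ay) % 1.0                    # Y is always doubled
    return ax, ay, (t + ALPHA) % 1.0

# Unique ergodicity of the rotation makes the window frequency tend to alpha.
ax, ay, t = 0.1234, 0.5678, 0.0
hits, n = 0, 100_000
for _ in range(n):
    if t >= 1 - ALPHA:
        hits += 1
    ax, ay, t = S(ax, ay, t)
print(hits / n, "~", ALPHA)                # empirical frequency vs alpha
```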
Intuitively, we transferred the upper Banach density in our initial data to the upper natural density almost surely along the orbit average. For this reason, for each $(a,t)\in U$ we denote by $W'(a,t)$ the sequence
$$W'(a,t)=\{k\in\mathbb{N}: g(S^k(a,t))=1\}.$$
We see that there is an ergodic component $\mu_{a',t'}$ such that $\int g\,d\mu_{a',t'}\ge\rho$. Consider now the dynamical system $(U,S,\mu_{a',t'})$; it is ergodic by construction, with the property that for $\mu_{a',t'}$ almost all $(a'',t'')\in U$ the sequence $W'(a'',t'')$ has lower natural density at least $\rho$. In order to apply Theorem 7.8 we need to address some issues. We divide the rest of the proof into three subsections.

8.1. Partitions and boundary issues. We fix a finite partition $A$ of $U$ whose $[0,1]^2$ part consists of rectangles with dyadic and triadic edges, and a partition $C$ of $[0,1]$ given by $C_1=[1-\alpha,1]$, $C_2=[0,1-\alpha)$. We see that $\vee_{i=0}^{\infty}S^{-i}A$ generates the Borel $\sigma$-algebra of $[0,1]^2\times S^1$, and therefore $h(S,\mu_{a',t'})=h(S,\mu_{a',t'},A)$. Our first issue is that $\mu_{a',t'}$ could give positive measure to boundaries of $\{S^{-i}A\}_{i\ge0}$. We see that the $[0,1]$ component of $\mu_{a',t'}$ is $+\alpha$ mod 1 invariant, and thus it is the Lebesgue measure. If $\mu_{a',t'}$ does give positive measure to boundaries of $\{S^{-i}A\}_{i\ge0}$, then its $[0,1]^2$ component gives positive measure to boundaries of the $[0,1]^2$ components of $\{S^{-i}A\}_{i\ge0}$, which are rectangles with edges that project to either dyadic rational numbers on the $Y$-axis or triadic rational numbers on the $X$-axis. In the first case, the $Y$-component of $\mu_{a',t'}$ is supported on finitely many rational numbers, since it is $\times2$ mod 1 invariant, and we can focus on the $X$-component. In the other case, the projection on the $X$-axis does not by itself define a dynamical system; it can nevertheless be seen that the $[0,1]^2$ component of $\mu_{a',t'}$ is supported on finitely many vertical lines with rational $X$-coordinates. The former case and the latter case can be treated in a similar way; suppose we are in the former. We consider the dynamical system induced on the product of the $X$-axis with the rotation coordinate, and let $\mu^X_{a',t'}$ be the corresponding projected measure. If $\mu^X_{a',t'}$ is still supported on boundaries, we see that the $[0,1]^2$ component of $\mu_{a',t'}$ is supported on finitely many rational points, and in this case the result is obvious. Therefore we can assume that at least one of the $X$- or $Y$-coordinate projections of $\mu_{a',t'}$ is not supported on boundaries, and we then perform the following entropy arguments for either $(U,S,\mu_{a',t'})$ or one of its projections. We only illustrate the argument for $(U,S,\mu_{a',t'})$; the arguments for its projections are similar.

8.2. Zero entropy. We now consider the case when $h(S,\mu_{a',t'})=0$. In this case, for each integer $n\ge1$ the atoms of $\vee_{i=0}^{n}S^{-i}A$ are of the form $B\times C$, where $B\subset[0,1]^2$ is a rectangle of dimensions $3^{-n'}\times2^{-n}$ with $n'$ satisfying $2^{-n}\le3^{-n'}\le3\times2^{-n}$ (so the rectangle is almost a square) and $C$ is one of the atoms of $\vee_{i=0}^{n}R^{-i}_\alpha C$. The number of atoms in $\vee_{i=0}^{n}R^{-i}_\alpha C$ is at most $2n$, and for each $C$ the number of different atoms $B\times C$ is between $2^{2n}/3$ and $3\times2^{2n}$. Now if the entropy $\frac{1}{n}H(\mu_{a',t'},\vee_{i=0}^{n}S^{-i}A)$ is smaller than a given small number $\epsilon$ for all large enough $n$, then there exists $\delta(\epsilon)=O(\epsilon)$ such that $O(2^{\delta n})$ many atoms of $\vee_{i=0}^{n}S^{-i}A$ support at least a $1-\delta$ portion of the $\mu_{a',t'}$ measure. To see this, let $V$ be a finite set of points of cardinality greater than $2^n$, and for each $v\in V$ give a probability $p_v\in(0,1)$ such that $\sum_{v\in V}p_v=1$. If the entropy $-\sum_{v\in V}p_v\log p_v<n\epsilon'$ for a number $\epsilon'>0$, then for another number $\delta'>0$ we consider the subset $\{v\in V: p_v\ge2^{-\delta'n}\}$: it has at most $2^{\delta'n}$ elements, while the points outside it carry total mass at most of order $\epsilon'/\delta'$ (each such point contributes more than a constant times $\delta'n$ to the entropy per unit of mass), which is small for a suitable choice of $\delta'$.
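The "almost a square" claim for the atoms is easy to check numerically: taking $n'=\lfloor n\log2/\log3\rfloor$ always gives $2^{-n}\le3^{-n'}\le3\times2^{-n}$. A quick sketch (not from the source):

```python
# Check that with n' = floor(n * log 2 / log 3) the 3**(-n') x 2**(-n)
# rectangles of Section 8.2 have aspect ratio between 1 and 3.

import math

def n_prime(n):
    return math.floor(n * math.log(2) / math.log(3))

for n in range(1, 40):
    width, height = 3.0 ** (-n_prime(n)), 2.0 ** (-n)
    assert height <= width <= 3 * height, (n, width, height)
print("all rectangles are almost squares (aspect ratio in [1, 3])")
```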
Returning to the proof: for each direction $e'$ in the relevant family $E$ of directions, one finds a point $x_{e'}\in A_3\times A_2$ and an almost-square $Q\in\mathcal{Q}_n$ (drawn from the $O(2^{\delta n})$ heavy atoms above) such that $x_{e'}\in Q$. Moreover there exists $y_{e'}\in A$ with distance $|x_{e'}-y_{e'}|\in[1/6,1.5]$. We notice that $A$ and $A_3\times A_2$ have the same Hausdorff dimension. For all large enough integers $n$ we can find at least $0.5c(\rho-\delta)2^n$ many $2^{-n}$-separated directions in $E$, and we denote this collection of directions by $E_n$. By the pigeonhole principle we see that there exists $Q\in\mathcal{Q}_n$ which contains $O(2^{-\delta n}(0.5c(\rho-\delta)2^n))$ many points of the form $\{x_{e'}\}_{e'\in E_n}$. Then the corresponding points $y_{e'}$ are all at least $0.5\times2^{-n}$-separated. As this holds for all large enough $n$, this implies that $\dim_B A\ge1-\delta$; but we constructed $\delta=O(\epsilon)$, and therefore, by letting $\epsilon$ be small enough, we obtain a contradiction, because we assumed that $\dim_H A_3\times A_2=\dim_B A_3\times A_2<1$.

8.3. Positive entropy. Now finally we can assume that $(U,S,\mu_{a',t'})$ has positive entropy, that is, $h(S,\mu_{a',t'})>0$. We saw that for $\mu_{a',t'}$ almost all $x=(a'',t'')\in U$, $W'(a'',t'')$ has lower natural density at least $\rho$. Now we want to apply Theorem 7.8. Let $\delta>0$ be such that $\rho>2\delta$. Then there exist a constant $c_\delta>0$ and, for each $n\ge1$, a set $U_\delta\subset U$ of full $\mu_{a',t'}$ measure such that for each $x\in U_\delta$ there is a collection $\mathcal{M}_n$ of at most $c_\delta2^{\delta n}$ atoms of $\vee_{i=0}^{n}S^{-i}A$, with union $M_n$, such that $A_{W'(a'',t'')\cap\{k\in\mathbb{N}:S^k(x)\in M_n\}}$ has Lebesgue measure at least $\rho-2\delta$. The rest of the argument is then the same as in the zero entropy case.

9. FURTHER REMARKS AND PROBLEMS

9.1. Casinos with multidimensional clocks. In this paper we only considered problems related to intersections between two invariant sets. One reason is that in Theorem 7.8 we coupled a Bernoulli system with an irrational rotation on the unit circle. There is no problem if we replace the irrational rotation with an irrational torus rotation. Let $\mathbb{T}^k$ be the unit torus, which we view as $[0,1]^k$. Suppose that $\alpha_1,\dots,\alpha_k$ are irrational numbers which are linearly independent over the field of rational numbers. Then the action $(x_1,\dots,x_k)\mapsto(x_1+\alpha_1\bmod1,\dots,x_k+\alpha_k\bmod1)$ is an irrational torus rotation. Like its one-dimensional counterpart, an irrational torus rotation is uniquely ergodic with respect to the Lebesgue measure. One can also study discrepancy estimates; see [DT97]. All results in Section 7 can be generalized in this way. Let $p_1,\dots,p_k$ be $k\ge2$ integers such that $1,\log p_1/\log p_2,\dots,\log p_1/\log p_k$ are linearly independent over the field of rational numbers. We can consider $l\cap A_{p_1}\times\cdots\times A_{p_k}$ for a line $l$ in $\mathbb{R}^k$ which is not parallel to the coordinate axes. We also assume that $l$ is not contained in any subspace generated by coordinate axes; otherwise we can drop some of $A_{p_1},\dots,A_{p_k}$. To see how to obtain a torus rotation, let $(x_1,\dots,x_k,\theta_2,\dots,\theta_k)\in[0,1]^{2k-1}$ and define the following map (which can be viewed as a higher-dimensional version of the map $T$ defined in the proof of Theorem 1.6):
$$T(x_1,\dots,x_k,\theta_2,\dots,\theta_k)=(y_1,\dots,y_k,\{\theta_2+\log p_2/\log p_1\},\dots,\{\theta_k+\log p_k/\log p_1\}),$$
where $y_1,\dots,y_k$ are determined as follows: $y_1=\{p_1x_1\}$, and for each $i\in\{2,\dots,k\}$, $y_i=\{p_ix_i\}$ precisely when the $i$-th clock coordinate wraps around under its rotation, and $y_i=x_i$ otherwise. Now we allow the direction vector of $l$ to range inside the part of $S^{k-1}$ whose coordinate components are contained in $(\delta,1-\delta)$, where $\delta>0$ can be arbitrarily chosen.
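A minimal sketch of this higher-dimensional map follows; the exact wrap-window convention for when $y_i=\{p_ix_i\}$ is applied is our assumption, since the source only fixes the rotation part explicitly.

```python
# Sketch of the higher-dimensional map T on [0,1]^k x [0,1]^(k-1):
# x_1 is always multiplied by p_1 (mod 1); x_i (i >= 2) is multiplied
# by p_i exactly when its clock theta_i wraps past 1 under its rotation.
# The wrap-window convention is an assumption, see the lead-in.

import math

def make_T(ps):
    p1, rest = ps[0], ps[1:]
    steps = [(math.log(p) / math.log(p1)) % 1.0 for p in rest]  # clock steps mod 1

    def T(xs, thetas):
        ys = [(p1 * xs[0]) % 1.0]
        new_thetas = []
        for x, th, st, p in zip(xs[1:], thetas, steps, rest):
            th_new = (th + st) % 1.0
            wrapped = th_new < th                 # clock passed 1
            ys.append((p * x) % 1.0 if wrapped else x)
            new_thetas.append(th_new)
        return ys, new_thetas

    return T

T = make_T([2, 3, 5])
xs, thetas = [0.11, 0.22, 0.33], [0.0, 0.0]
for _ in range(5):
    xs, thetas = T(xs, thetas)
print(xs, thetas)
```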
Then if $l\cap A_{p_1}\times\cdots\times A_{p_k}$ is large (in terms of sparseness, which can be defined similarly for lines in $\mathbb{R}^k$), by using the torus rotation with vector $(\log p_1/\log p_2,\dots,\log p_1/\log p_k)$ we see that $A_{p_1}\times\cdots\times A_{p_k}$ would have dimension at least $k-1$, the dimension of $S^{k-1}$. Therefore we can upgrade Theorem 1.6 to intersections among more than two sets. As the main technical steps are the same for all $k\ge2$, we only illustrated the proof for $k=2$, in which case we have a better visualization. To be precise, we state the following higher-dimensional version of Corollary 1.7.

Corollary 9.1. Let $k\ge2$ be an integer. Let $A_{p_1},\dots,A_{p_k}$ be $k$ closed invariant subsets of $[0,1]$ with respect to $\times p_1\bmod1$, $\times p_2\bmod1$, $\dots$, respectively. Assume that the numbers $\log p_1/\log p_i$ for $i\in\{2,\dots,k\}$ are irrational and linearly independent over the field of rational numbers. Suppose that $\sum_{i=1}^{k}\dim_H A_{p_i}<k-1$; then for each $2k$-tuple $u_1,\dots,u_k,v_1,\dots,v_k$ of non-zero real numbers the corresponding conclusion of Corollary 1.7 holds. Moreover, let $\delta>0$ be an arbitrarily chosen positive number and suppose that $\delta<|u_i|<\delta^{-1}$ for each $i\in\{1,\dots,k\}$. Then for each $\epsilon>0$ there is an integer $N_\epsilon>0$ with the corresponding uniform counting property.

There is one important point to note. For $k\ge3$ it is, perhaps surprisingly, not an easy task to produce even one example of integers $p_1,\dots,p_k$ satisfying the condition that
$$1,\ \frac{\log p_1}{\log p_2},\ \dots,\ \frac{\log p_1}{\log p_k}\tag{*}$$
are $\mathbb{Q}$-linearly independent. Problems of this kind are related to the study of algebraic relations among logarithms of algebraic numbers. We now show that a conjecture of Schanuel (see below) implies the above $\mathbb{Q}$-linear independence as long as $p_1,\dots,p_k$ are multiplicatively independent, i.e., $1,\log p_2/\log p_1,\dots,\log p_k/\log p_1$ are $\mathbb{Q}$-linearly independent; for example, $p_1=2$, $p_2=3$, $p_3=5$. Let $P_k(x_1,\dots,x_k)$ be the symmetric polynomial of degree $(1,\dots,1)$ (there are $k-1$ many ones); for example, $P_3(x_1,x_2,x_3)=x_1x_2+x_1x_3+x_2x_3$. For the above $\mathbb{Q}$-linear independence of (*) we would need that $(\log p_1,\dots,\log p_k)$ does not solve the polynomial equation $P_k(n_1x_1,\dots,n_kx_k)=0$ for any non-trivial choice of integers $n_1,\dots,n_k$.
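As numerical (not rigorous) evidence for such independence in a concrete case, one can search for small integer relations with the PSLQ algorithm. A sketch using mpmath follows; the coefficient and step bounds are arbitrary choices of ours:

```python
# Numerical companion (evidence, not proof) to the condition (*): look for
# small integer relations among 1, log p1/log p2, log p1/log p3 via PSLQ.
# Finding none within the bounds is consistent with Q-linear independence;
# a proof would need the Schanuel-type input discussed above.

from mpmath import mp, log, pslq

mp.dps = 60                      # work with 60 significant digits

p1, p2, p3 = 2, 3, 5
vec = [mp.mpf(1), log(p1) / log(p2), log(p1) / log(p3)]

relation = pslq(vec, maxcoeff=10**6, maxsteps=10**4)
print(relation)                  # expected: None (no small relation found)
```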
Cognitive and Relational Processes Associated to Mental Health in Italian High School Students during COVID-19 and Russian–Ukrainian War Outbreaks The negative impact of the COVID-19 pandemic on mental health has been widely demonstrated; however, few studies have investigated the psychological processes involved in this impact, including core beliefs violation, meaning-making disruption, interpersonal support, or one’s relational functioning. This study explored the mental health of 215 Italian adolescents during the COVID-19 pandemic and the subsequent outbreak of the Russian–Ukrainian war. By administering a set of questionnaires, several cognitive and emotional variables were investigated, including core belief violation, meaning attribution to the pandemic and war, attachment, and emotion regulation, social media addiction, and relationships with significant others and teachers. We conducted some descriptive, mean difference, correlational, and predictive analyses that revealed a significant association between core belief violation caused by war and pandemic, ability to integrate war and pandemic within personal meaning universe, the relational support received, and mental health. The relationship with teachers during these challenging periods improved significantly according to the respondents’ opinion, becoming both more authoritative and empathic. This study offers insights into what cognitive and relational processes are useful to intervene on to reduce the distress of adolescents who are facing significant moments of crisis due to events that challenge their cognitive and emotional balance. Introduction In the recent past, there have been at least two significant crises in rapid succession worldwide.In 2020, the COVID-19 pandemic brought significant disruptions to our daily lives, leading to profound changes in education and social interactions worldwide.In general, widespread lockdowns, job losses, income reductions, and mobility restrictions were among the many challenges faced [1].Amidst global recovery efforts, the Russian-Ukrainian war, which commenced on 24 February 2022, cast a shadow over post-pandemic economic prospects and triggered a humanitarian crisis across Europe [2,3].Despite not being directly engaged in the ongoing war, Italy and its citizens are not exempt from the repercussions it entails.There is a palpable concern among Italian citizens, as well as those in other European nations, regarding the potential spread of the conflict beyond its current borders.Furthermore, the impact of the war can be observed through rising energy and commodity prices, as well as distressing scenes conveyed via the media, which predominantly focus on the ongoing conflict [4][5][6]. 
The war, alongside the COVID-19 pandemic, carries significant implications, as the experience and expression of negative emotions have the potential to impact the mental well-being of the population [7][8][9].The detrimental impacts of war and terrorism on mental health have been examined in the past.Studies conducted in nations that have endured war and/or armed conflict have consistently demonstrated a marked decline in the mental wellbeing of populations directly affected by these events [10][11][12][13][14][15].In addition, extensive media coverage has made the psychological repercussions of the conflict accessible worldwide, causing anxiety disorders, acute stress reactions, depressive episodes, and PTSD in various populations in the post-pandemic COVID-19 landscape, which had already generated numerous negative implications for individuals' mental well-being [16][17][18][19][20]. Students are one of the demographic groups most affected by the COVID-19 pandemic outbreak and the Russian-Ukrainian war [2,3,21,22].During the COVID-19 pandemic, schools underwent significant changes, shifting from traditional in-person classes to remote learning models due to lockdowns and social distancing measures.This transition to virtual classrooms presented challenges for both educators and students, necessitating adjustments in teaching methods and curriculum delivery [23].The closure of schools also led to the loss of in-person interactions and extracurricular activities, impacting students' social development and well-being [24][25][26][27][28][29][30].Studies on the mental health of students consistently highlight the need for self-regulation and motivation in online learning environments [31][32][33][34].Additionally, there has been an increase in mental health disorders such as Major Depressive Disorder and Generalized Anxiety Disorder among students, attributed to the pandemic's effects on social distancing and uncertainties in educational procedures [35,36].Prolonged exposure to a sense of helplessness, also known as learned helplessness, poses a risk factor for depression, especially considering the lingering psychological aftermath of COVID-19 [37,38].Some authors maintained that the combined impact of COVID-19 and the Ukrainian war would pose a significant risk to the mental health of specific categories, such as women, adolescents, the older people, individuals with disabilities, and healthcare professionals [39].The findings revealed the presence of negative emotions, such as anxiety, anger, and disgust, among the Italian population even if their involvement in the conflict was only indirect.These results have significant implications, as the experience and expression of negative emotions have the potential to impact the mental well-being of the population.Recent systematic literature reviews underlined the increase of symptoms related to anxiety, panic, depression, eating disorders, sleep disorders, social withdrawal, stress disorders, psychotic symptoms, anti-conservative thoughts, and self-harming acts also in adolescents, aggravated by COVID-19 restrictions and the impacts of Russian-Ukrainian war [2,3,7,8,38,40]. 
While traumatic or negative events have been linked to a range of negative physical and psychological outcomes, extensive research has also focused on the significance of personal and social resources, commonly referred to as protective factors, that can positively influence responses to such events.Researchers have identified various risk and protective factors that can affect mental health outcomes in the face of traumatic or negative events.Among the recognized protective factors, there are social support and the ability to make meaning of negative events which had challenged one's core beliefs [41][42][43][44][45]. Protective factors can be both internal or external to the individual (e.g., family, qualified teachers, peer relations, and the community or individual social environment) [46,47].Through social support, significant people can influence the individual's capacity to deal with stressful experiences, cope well with these experiences, and positively face these challenges.Adolescent students especially need good social support to increase resilience when facing pressure or stress [48], and the presence of a supportive atmosphere can offer reassurance and foster a sense of security among students [49].The optimal balance of distress varies and depends on individual predisposing factors.Few studies have proposed general explanation models that include the psychological variables responsible for the impact of direct and indirect stressors on mental health during the pandemic.For example, Milman and colleagues [18] developed an explanatory model to understand the relationship between pandemic-related stressors and their effects on mental health.The model is based on two primary psychological processes mediating these effects, namely, violation of core beliefs and disrupted meaning-making.Core beliefs are the fundamental beliefs individuals hold about the world, themselves, and their relationships, and stressful or traumatic events can violate these beliefs, causing disorientation and mental health disorders.Disrupted meaning-making occurs when individuals struggle to make sense of challenging events and integrate them into their worldview.Some studies [41][42][43][44][45] were conducted on the role of core belief violation and disrupted meaning-making in mental health during the pandemic.They found that these processes mediated the relationship between direct and indirect pandemic stressors and people's depression severity, general anxiety, and coronavirus anxiety more than demographic factors and pandemic stressors combined.Compliance with social isolation measures was found to reduce the burden of the pandemic by reducing the impact on core beliefs associated with predictability and control.Overall, research has also shown that confinement at home and strengthened relationships can reduce core belief violation and facilitate functional meaning-making in well-functioning families.Negri et al. [42] confirmed Milman et al.'s [18] findings, showing that core belief violation, reduced meaning-making, and increased perception of vulnerability and mortality significantly influenced mental health symptoms during the pandemic.These factors mediated the relationship between COVID-19 stressors and mental health outcomes, indicating that individuals experiencing greater core belief violation and reduced meaningmaking suffered more severe mental health issues. 
Based on these considerations and on these studies, which not only capture the consequences of the pandemic on people's health but also explain the mechanisms behind its negative impact, this study aimed to (a) detect whether the negative effects of the pandemic persist over time in the student population and whether they increased with the outbreak of the Russia-Ukraine war, (b) detect whether the possible negative effect of the war is similarly associated with the cognitive factors of disrupted meaning-making and violation of core beliefs, (c) investigate whether other psychological factors more related to relational functioning, such as attachment and perceived support from significant others and teachers, are positively or negatively associated with students' mental health, and (d) explore predictive models of this impact that include the cognitive and relational processes found to be significantly associated with it.

Participants and Procedure

Two hundred and fifteen Italian high school students (46 of whom were male), ranging in age from 14 to 18 (M = 16.23; SD = 1.56), participated in the study. The participants attended two high schools in Italy, one in the north (n = 149) and one in the center (n = 66) of the country. Sociodemographic information can be found in Table 1. Participants were recruited between May and June 2022 and completed the questionnaires through an online form. Before they filled it out, the parents of the participants received detailed information about the study. Only students whose parents had given their consent to participate received the online form containing the questionnaires. Participation was voluntary, and confidentiality and anonymity were assured. Participants could withdraw from the study at any time. The study was approved by the Ethics Committee of Bergamo University (Minutes n. 04/2022 of the ethics committee held on 27 July 2022). The research was conducted according to the ethical guidelines for psychological research established by the Italian Psychological Association.

Measures

Participants were asked to complete online questionnaires and questions collecting information about COVID-19-related stressors, psychological well-being and mental health, personal and sociodemographic information, use of social networks, attachment style, relationships with parents and teachers, core belief violation caused by the war and pandemic outbreaks, and capacity for meaning-making. The instruments applied in the study are described in more detail below.

Survey about COVID-19-Related Stressors and Personal Information

At the beginning of the administration, respondents answered questions about age, gender, educational institution, and year of secondary school attended. Students were also asked whether they had received a psychological diagnosis before the COVID-19 pandemic and whether they had received psychological support during the pandemic. Other questions asked whether they had tested positive for COVID-19, whether they were vaccinated, whether an acquaintance or family member had died from COVID-19, and how many months of school classes they had attended remotely. The last two items were open-ended questions: "In general, what has the pandemic changed in the way you think, feel and live your life?" and "In general, how have the events of the war in Ukraine changed the way you think, feel and live your life?"
Questionnaires on Psychological Factors • Core Belief Inventory (CBI) [50].It is a 9-item questionnaire that assesses if a person's fundamental beliefs about the nature of the universe (as fair and controllable), the predictability of the future, a sense of purpose in life, one's self-worth and identity, as well as spirituality and religion, have been violated by a certain event.On a 6-point scale ranging from "not at all" (0) to "to a very great degree" (5), participants reported how much an "event" led them to critically consider their essential beliefs. In this study, we tested how much two specific events, "the coronavirus pandemic" (CBI_COVID) and "Ukrainian war" (CBI_War), have questioned the core beliefs of respondents.Higher values suggest a more severe violation of core beliefs.In this study, CBI showed good internal consistency (α = 0.85 for COVID and 0.86 for War). • Integration of Stressful Life Experiences Scale-Short Form (ISLES-SF) [51].It is a brief scale (six items) that is an assessment of meaning made of stressful events, in this study, the pandemic (ISLES_COVID) and the Ukrainian war (ISLES_War).In this study, participants rated their level of agreement with the items using a five-point scale that ranges from "strongly disagree" (1) to "strongly agree" (5).A higher score indicates more disruption in meaning-making and the final score is calculated as the sum of the item scores.In this study, ISLES-SF showed acceptable internal consistency (α = 0.79 for COVID and 0.77 for War).• Relationship Questionnaire (RQ) [52].The RQ presents four brief descriptions of the main attachment styles, and the respondent is asked to say which of the four attachment models is most appropriate for describing their own relationships.In addition, for each model, the respondent is asked to give a score on a 7-point scale from "not at all like me" to "very much like me".Two synthetic factors can be computed: the model of the self or attachment anxiety, and the model of other or attachment avoidance.• Emotion Regulation Questionnaire (ERQ) [53,54].It is a 10-item scale designed to measure respondents' tendency to regulate their emotions in two ways: (1) Cognitive Reappraisal and (2) Expressive Suppression.Cognitive Reappraisal is defined as changing the way one thinks about a situation in order to change its emotional impact, and Expressive Suppression is conceptualized as inhibiting behavioral expressions of an emotion [34].Respondents answered each item on a 7-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree).In this study, ERQ showed acceptable internal consistency (α = 0.84 for Cognitive Reappraisal scale and α = 0.78 for Expressive Suppression scale).• The Network of Relationships Inventory: Behavioral Systems Version (NRI-BSV) [55].It is a questionnaire composed by 34 items about eight features of close relationships.The first five features (seeking in the other a safe refuge, seeking in the other a safe base, offering the other a safe base, friendship) make up the support factor, while the last three (conflict, antagonism, and criticism) make up the negative interaction factor.The instrument can be used to investigate the characteristics of relationships with different people.Specifically, the relationships considered are mother, father, same-sex friend, opposite-sex friend, romantic relationship, and significant other.The response mode consists of a 5-point Likert scale, from never to very much.In this study, the internal consistency values for the factors 
Support and Negative Interactions of NRI-BSV were good for all relationships (Cronbach's mean α = 0.83; range 0.80-0.85).• Survey on perceived qualities of the relationship with schoolteachers.We asked participants to talk about their relationship with teachers before the pandemic, during the pandemic, and in the last month, through six questions on some relevant qualities of the relationship with teachers.We, precisely, asked for a score about how much they did feel their teachers were authoritative, empathic, tolerant, allied, stimulating, and supportive.Each question is scored on a 5-point Likert scale that ranges from 0 (nothing) to 5 (very much).Two difference scores were calculated between the ratings given for the periods during and before the pandemic (∆ a ) and between the rating for the last month period and before the pandemic (∆ b ).When the difference scores were positive, there was an increase in scores; when they were negative, a decrease.• Vulnerability and mortality perception.Two individual questions were aimed at asking respondents how much more vulnerable and fragile they felt because of the pandemic (Vulnerability COVID) and the war (Vulnerability War) and how much more the pandemic and the war made them think about their own death (Mortality COVID and Mortality War). Psychological Health • Self-reported risk behaviors survey.We asked participants about their risk-taking behav- iors before the pandemic, during the pandemic, and in the last month.We, precisely, asked about how many times they injured themselves, and how many times they used drugs or alcohol to the point of feeling ill.Each question is scored on a 5-point Likert scale that ranges from 0 (never) to 5 (more than 3 times).Two difference scores were calculated between the ratings given for the periods during and before the pandemic (∆ 1 ) and between the rating for the last month period and before the pandemic (∆ 2 ). When the difference scores were positive, there was an increase in scores; when they were negative, a decrease.• Satisfaction with Life Scale (SWLS) [56].It is the scale that is most frequently used to measure life satisfaction among different populations.A five-item Likert scale with 1 being strongly disagreed and 5 being strongly agreed is used to rate the five items. Item mean scores are calculated by dividing the total score by five.Low scores indicate a low life satisfaction, whereas high scores indicate a high degree of life satisfaction.In this study, SWLS showed good internal consistency (α = 0.85).• Bergen Social Media Addiction Scale (BSMAS) [57].This instrument was used to evaluate problematic social media use (e.g., "How often during the last year have you spent a lot of time thinking about social media or planned use of social media?").It is a six-item scale that rates salience, mood modification, tolerance, withdrawal, conflict, and relapse on a 5-point Likert scale from 1 (very seldom) to 5 (very often).The higher the sum score, the higher the level of addictive social media use.In this study, BSMAS showed good internal consistency (α = 0.88).• Four-item Patient Health Questionnaire (PHQ-4) [58].The PHQ-4 is an ultra-brief tool for detecting both depression and anxiety symptoms experienced over the last two weeks.It has two 2-item subscales: one for anxiety and one for depression.Each item is scored on a 4-point Likert scale that ranges from 0 (not at all) to 3 (nearly every day). The range of the PHQ-4's overall score is 0 to 12. 
Higher scores indicate higher levels of anxiety and depression. In this study, the PHQ-4 showed adequate internal consistency (α = 0.80).
• General Population-Clinical Outcomes in Routine Evaluation (GP-CORE) [59]. It is a 14-item scale developed from the Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM) to measure psychological distress in a non-clinical population. The items concern well-being, problems, and psychological functioning. On a Likert-type scale ranging from 0 (not at all) to 4 (most or all of the time), respondents rate how frequently they felt a particular way over the previous week. Lower scores indicate greater well-being. In this study, the GP-CORE showed good internal consistency (α = 0.84).

Statistical Analyses Overview

To explore the associations between respondents' mental health indices and cognitive, emotional, relational, and COVID-19-related variables, we conducted five different types of analyses. First, through descriptive analyses of the data collected for each measured variable, we outlined a picture of the mental health of the participating students and of the main variables associated with it. Second, a set of analyses of variance (ANOVAs) was conducted to test for significant differences in the means of subgroups of participants defined by the dichotomous variables (gender, COVID-19 deaths, psychological support during the pandemic, COVID-19 diagnosis) and in the average values at different times of the risk behavior and perceived relationship quality scores. Third, by correlational analysis, we tested the associations between the cognitive and relational variables and the mental health indices. Fourth, changes over time (pre-COVID, during COVID, and in recent months) in self-reported risk behavior and relationship-with-teacher scores were tested with a series of repeated-measures ANOVAs. Fifth, for each mental health index, we performed a multiple regression analysis to identify, among the variables correlated with each index, those that could also be considered predictors of mental health levels. We conducted both the correlational and the multiple regression analyses including general satisfaction with life (SWLS) in the models as a covariate, to control for the effect of this variable. Effect sizes (η², eta-squared) were calculated to determine the magnitude of differences between groups in the ANOVA analyses.

Participants' Descriptive Features

As shown in Table 1, most of the respondents were female (78.6 percent) and of Italian nationality (99.6 percent). They were evenly distributed among the five secondary school years, attended in Italy by students between the ages of 14 and 18. The average age of the respondents was 16.23 years (SD = 1.56).

Regarding factors related to COVID-19, participants attended school classes remotely for 9.2 months on average, with considerable variability (SD = 4.06). Most of the students either did not test positive for COVID-19 (55.8%) or tested positive but were asymptomatic (13.0%); 67 out of 215 students (31.2%) tested positive and presented symptoms produced by the virus. A not insignificant proportion of students experienced the loss of a family member (13%) or an acquaintance (44.2%) due to COVID, while 42.8% of respondents did not experience a COVID-related death.
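The internal consistency coefficients reported above for each scale could be computed from item-level data along the following lines; this is a minimal sketch with hypothetical file and column names, not the authors' code:

```python
# Sketch: total score and Cronbach's alpha for a scale such as the 9-item
# CBI.  File name and column names are hypothetical.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per scale item, one row per respondent."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

df = pd.read_csv("survey.csv")                        # hypothetical file
cbi_cols = [f"cbi_covid_{i}" for i in range(1, 10)]   # hypothetical names
df["CBI_COVID"] = df[cbi_cols].mean(axis=1)           # mean of 0-5 ratings
print("alpha =", round(cronbach_alpha(df[cbi_cols].dropna()), 2))
```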
Variables measuring mental health showed a considerable level of psychological suffering at the time of questionnaire administration (May-June 2022): 6.3% of students underwent psychological treatment during the pandemic and 4.2% took psychotropic drugs; respondents' average score on the GP-CORE was 1.66, above the clinical cut-off (females = 1.63; males = 1.49); the PHQ-4 mean (5.18) fell between the mild and moderate levels (normal = 0-2, mild = 3-5, moderate = 6-8, severe = 9-12); the average score on the BSMAS was 16.37, below the clinical cut-off of 24; and, finally, the students reported that in the last months before the administration of the questionnaires they had "rarely" or "sometimes" put themselves in dangerous situations, harmed themselves, or taken drugs or alcohol to the point of feeling ill (M = 1.40; range: from 0 = never to 3 = often). In terms of life satisfaction, respondents achieved a mean score of 22.11 with a standard deviation of 6.61; in general, they were slightly satisfied, but the variability of the scores also covers some dissatisfaction (extremely satisfied = 31-35; satisfied = 26-30; slightly satisfied = 21-25; neutral = 20; slightly dissatisfied = 15-19; dissatisfied = 10-14; extremely dissatisfied = 5-9).

Regarding the psychological factors that may explain the negative impact of the stressful events of the pandemic and the Russia-Ukraine war on mental health, the responses indicate that both events challenged the respondents' beliefs (CBI), but the pandemic (M = 2.52, above the response scale midpoint of 2.50) did so more than the war (M = 1.85). Making sense of these two events (ISLES) was also difficult for the respondents (M = 15.56 for the pandemic and M = 15.95 for the war, just above the scale midpoint); in this case, the outbreak of the war between Russia and Ukraine was the event slightly harder to integrate into one's horizon of meaning. Furthermore, only 19.9 percent of the students recognized themselves in a secure attachment style, while among the others the most widespread attachment style was the fearful one. The emotion regulation strategies of Cognitive Reappraisal and Expressive Suppression, as measured by the Emotion Regulation Questionnaire, were partially used by the respondents, the latter being the most frequently used (respectively, M = 22.14 over 6 items and M = 15.07 over 4 items, with each item rated on a 1-7 scale).
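The group comparisons reported next follow the ANOVA-plus-eta-squared plan described in the analyses overview. A minimal sketch of one such comparison (hypothetical column names):

```python
# Sketch: one-way ANOVA comparing PHQ-4 scores by gender, with the
# eta-squared effect size computed as SS_between / SS_total.
# File and column names are hypothetical.

import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv").dropna(subset=["PHQ4", "gender"])
groups = [g["PHQ4"] for _, g in df.groupby("gender")]

F, p = stats.f_oneway(*groups)

grand_mean = df["PHQ4"].mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((df["PHQ4"] - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"F = {F:.3f}, p = {p:.4f}, eta^2 = {eta_squared:.3f}")
```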
On the relational side, females gave higher scores in attachment avoidance and anxiety; particularly, they recognized themselves more in the fearful attachment pattern.However, female students felt more relational support from their parents and same-sex friends. Another variable that significantly differentiates participants' scores is whether they received psychological support from a professional during the pandemic (See Table 3).The magnitude of the differences (η 2 ) could be considered medium or small in most of the comparisons according to Cohen's [61] interpretive guidance. On average, those who underwent a psychological treatment did not show a more pronounced general discomfort profile (GP-CORE and PHQ-4) than other students but only a lower life dissatisfaction score (SWLS) and higher dysfunctional use of social media.However, in terms of self-reported risk behaviors, those who underwent psychological treatment significantly gave much higher scores than those who did not feel the need for professional psychological support.Moreover, on average, those who received psychological support had attended school classes for more months remotely.The attachment of the group of those who received psychological support tended to be more problematic in terms of anxiety, and the fearful attachment pattern was more frequent than the other students.On the positive side, these students saw a more pronounced improvement in perceived qualities of the relationship with their teachers (∆ b ). Those who obtained help from a psychological professional during the pandemic had, on average, more pronounced perceptions of vulnerability and mortality in the face of the war and pandemic events, and these events challenged these students' core beliefs more. Respondents who tested positive for COVID-19, regardless of whether they presented the corresponding symptoms, had significantly worse mental health scores (GP-CORE, PHQ4, and BSMAS) than students who did not test positive (see Table 4).The values of effect size (η 2 ) were small, according to Cohen's [61] interpretive guidance.Finally, having had a death among one's family members or acquaintances compared to those who did not also led to significant differences in respondents' scores (See Table 5).Also in this case, the magnitude of the differences was small in all comparisons, according to Cohen's [60] interpretive guidance.Those who experienced losses struggled significantly more to integrate the war and the pandemic into their horizon of meanings (ISLES), thought more about their own fragility and mortality, and saw their core beliefs challenged more (CBI). Those who experienced losses due to COVID-19 reported both higher scores in selfreported risk behaviors and a lower pre-post-pandemic delta in these behaviors (∆ 2 ) than those who did not experience bereavements, i.e., the scores of these students decreased less than those who did not lose any loved ones or acquaintances due to COVID-19. Lastly, there were no differences in any variables between those who received the vaccine and those who did not, between those who had a psychiatric diagnosis before the pandemic and those who did not, and between those who took psychotropic drugs during the pandemic period and those who did not. 
The Possible Role Played by Months of Remote Schooling and Cognitive and Relational Processes

To test their possible role in students' mental health, we calculated partial correlation coefficients between, on the one hand, months of remote school attendance and the cognitive and relational processes that we believe help explain the negative impact on well-being and, on the other hand, the mental health indices. We calculated the coefficients by removing any association with level of satisfaction with life (SWLS) that might affect the variables included in the analysis.

The length of remote schooling was positively correlated with self-reported risk behavior in the last month but not with the other indices of mental health (Table 6). A clear and recurring association emerged: the higher the scores on violation of core beliefs (CBI), disruption of meaning-making ability (ISLES), and feelings of mortality and frailty, the higher the scores on depression and anxiety (PHQ-4), social media addiction (BSMAS), and general malaise (GP-CORE), as shown in Table 6. This association was observed for both the pandemic and the war.

As for emotion regulation (ERQ), a characteristic pattern emerged: the more the Cognitive Reappraisal strategy was used, the less dysfunctional the use of social media (BSMAS), while the greater the use of Expressive Suppression, the worse the indices of depression and anxiety (PHQ-4) and general malaise (GP-CORE) (Table 6).

The Cognitive Reappraisal strategy was also associated with higher scores of self-reported risk behaviors during the pandemic and in the last month. Finally, the violation of core beliefs due to the outbreak of the pandemic and the inability to make sense of this event within one's value horizon, together with more vivid thoughts of one's own mortality, were associated with more self-reported risk behaviors during the pandemic period (Table 6).

Looking at relational factors, the greater the attachment anxiety, the worse the mental health scores (PHQ-4 and GP-CORE) (Table 7). In particular, the preoccupied and fearful attachment styles were associated with higher levels of anxiety, depression, and general malaise, and, in the case of fearful attachment, also with dysfunctional use of social media (BSMAS). Attachment characteristics, on the other hand, appear not to be associated with risk behaviors. The dismissive attachment style correlated with neither the mental health indices nor risk behaviors.

Regarding perceptions of relationships, the most important role was played by parents, friends, and partners, although in different ways (Table 7). In general, the levels of general malaise (GP-CORE), depression and anxiety (PHQ-4), social media addiction (BSMAS), and self-reported risk behaviors increased in correspondence with more negative interactions with both the mother and the father.

Looking at the relationships with a same-sex friend, an opposite-sex friend, or a romantic partner, worse mental health scores and more risk behaviors went along with both greater perceived support from and more negative interactions with them. In contrast, the quality of the relationship with teachers showed no significant associations with students' mental health, at least in the students' view (Table 7).
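The partial correlations controlling for SWLS can be computed with the standard first-order formula. A minimal sketch (hypothetical column names):

```python
# Sketch: first-order partial correlation of two variables controlling
# for SWLS, equivalent to correlating the two sets of residuals after
# regressing out SWLS.  Column names are hypothetical.

import numpy as np
import pandas as pd

def partial_corr(x, y, z):
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

df = pd.read_csv("survey.csv").dropna(subset=["CBI_COVID", "PHQ4", "SWLS"])
r = partial_corr(df["CBI_COVID"], df["PHQ4"], df["SWLS"])
print(f"partial r(CBI_COVID, PHQ4 | SWLS) = {r:.2f}")
```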
Changes in Risk Behaviors and Relationships with Schoolteachers

The comparison of the mean scores on the self-reported risk behavior items in the pre-COVID period (M = 1.23, SD = 1.80), during the COVID period (M = 1.33, SD = 2.29), and in the last few months (M = 1.39, SD = 2.44) showed no significant differences. The relationship with teachers, on the other hand, underwent significant changes as reported by the respondents: the positive qualities of the relationship increased significantly from pre-COVID (M = 15.82, SD = 5.86) to the COVID period (M = 17.18, SD = 5.78) and then to the last month (M = 18.46, SD = 4.46) (pre-COVID vs. during COVID: t = −5.056, df = 214, p ≤ 0.001; during COVID vs. last month: t = −4.226, df = 214, p ≤ 0.001). Of all the variables measured, only the anxious style was predictive of a positive increase in teacher relationship quality from pre-pandemic to the last month (β = 0.168, t = 2.488, p = 0.004, R² = 0.028).

The Predictors of Mental Health

Five multiple linear regression models were performed to investigate possible predictors of students' mental health as measured by the GP-CORE, PHQ-4, BSMAS, and the self-reported risk behaviors index during the pandemic and in the last month (Table 8). The most complete model was the one with the level of general malaise, as measured by the GP-CORE, as the dependent variable. The significant predictors, in order of importance, were (a) feeling vulnerable due to the pandemic, (b) difficulty in making sense of the pandemic (ISLES), (c) use of the strategy of Expressive Suppression of emotions (ERQ), (d) testing positive for COVID-19, (e) negative interactions with the father (NRI-BSV), (f) attachment avoidance (RQ), and lower emotional support from (g) the father and (h) the romantic partner (NRI-BSV). This model predicted 56.6% of the variance in GP-CORE scores (R² = 0.566; adjusted R² = 0.550; F = 33.640, p < 0.001).

A second linear regression analysis had the level of depression and anxiety, as measured by the PHQ-4, as the dependent variable, with (a) perception of vulnerability due to the pandemic, (b) perceived negative interaction with the father (NRI-BSV), (c) difficulty in meaning-making about the war (ISLES), (d) violation of core beliefs due to the pandemic (CBI), and (e) months of distance learning as significant predictors. This model predicted 43.5% of the variance in PHQ-4 scores (R² = 0.435; adjusted R² = 0.418; F = 26.638, p < 0.001).

A third model tested the significant predictors of risk behaviors during the pandemic. Only a combination of negative interactions with the partner and the father (NRI-BSV) and core belief violation due to the pandemic predicted risk behaviors during the pandemic. This model predicted 14.8% of the variance in risk behaviors during the pandemic (R² = 0.148; adjusted R² = 0.136; F = 12.261, p < 0.001).

A final linear regression model had self-reported risk behaviors in the last month as the dependent variable. In this model, only negative interactions with the partner and the father (NRI-BSV) emerged as significant predictors. The model was significant and predicted 11.5% of the variance (R² = 0.115; adjusted R² = 0.106; F = 13.712, p < 0.001).

Social media addiction scores (BSMAS) were not predicted by any of the variables considered in the present study.
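Each of these models corresponds to an ordinary least squares regression of the kind sketched below; column names are hypothetical, and the predictor list mirrors the first model:

```python
# Sketch: multiple linear regression predicting GP-CORE, reporting
# R^2, adjusted R^2 and F as in the text.  Column names are hypothetical.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")
predictors = ["vulnerability_covid", "isles_covid", "erq_suppression",
              "covid_positive", "nri_father_negative", "rq_avoidance",
              "nri_father_support", "nri_partner_support"]

X = sm.add_constant(df[predictors])
model = sm.OLS(df["gp_core"], X, missing="drop").fit()

print(model.summary())  # betas, t values, p values for each predictor
print(f"R2 = {model.rsquared:.3f}, adj R2 = {model.rsquared_adj:.3f}, "
      f"F = {model.fvalue:.3f}, p = {model.f_pvalue:.4g}")
```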
Discussion This study drew a less than optimistic picture of the mental health of Italian students during the period marked by two consecutive crises, the outbreak of the pandemic and the outbreak of the Russia-Ukraine war.The mean scores revealed the presence of mild to moderate levels of depression and anxiety (PHQ-4), a level of malaise above the clinical cut-off (GP-CORE), the presence at least sometimes of risk behaviors such as drug abuse, alcohol, or self-injurious behaviors, and satisfaction with one's life only slightly toward the positive side.This is even though only 5.1% of respondents had received a psychiatric diagnosis even before the pandemic and only 4.2% took psychotropic drugs during the pandemic period.In addition, 16.3% felt the need to seek psychological support from a professional during the pandemic period.Thus, regarding the first (a) aim we had set in the present study, we can say that the psychological malaise that began with the pandemic persisted and was maintained even during the initial period of the outbreak of war in Ukraine. These are not surprising data, since direct and indirect stressors related to the COVID-19 pandemic were important: 44.2% of students tested positive for COVID-19 and 31.2% also had symptoms related to the virus, in line with epidemiological data [62,63]; most students (57.2%) had a relative or acquaintance who had died due to COVID-19, a figure that unfortunately matched the general statistics [63]; and finally, participants attended school classes remotely for 9.2 months on average, i.e., about one year of school.The combination of these stressors has certainly had an important negative impact on students' health as much literature has well documented e.g., [60,64]. As a part of the literature highlighted [18,[41][42][43][44][45], also in this study we found a close association between worse scores on mental health dimensions and cognitive factors such as violation of core beliefs (CBI), disruption of meaning-making ability (ISLES), and a heightened sensation of vulnerability and mortality.This suggests an explanatory role of these psychological factors in the relationship between stressors and psychological distress.In other words, a stressful event such as a pandemic or war has a negative impact to the extent that personal meaning attribution processes are challenged.As hypothesized (see aim (b) of this study), this seems to apply not only to the outbreak of the pandemic, which was a very impactful event on people's lives, but also to the outbreak of the Russia-Ukraine war, at least in Italy, where people had been used to having for many years a peaceful relationship between the major geopolitical blocs in the Eurasian area, as some recent studies demonstrated [2][3][4]7,8,10,39,40].In fact, our data say that war made it difficult for students to integrate and make sense of the war event in their cognitive universe of life, similarly to pandemic (ISLES: M = 15.56 and 15.95, respectively, for pandemic and war); the same happened for the violation of basic beliefs although to a lesser extent than pandemic, which involved people more directly and personally (CBI: M = 2.52 and 1.85 respectively for pandemic and war).The war, thus, seems to have had a similar or even inferior effect to that of the pandemic, despite being an event that puts one's own life and that of one's family members at less direct risk.In addition, correlations were very high, consistent, and positive between all indices of mental health (PHQ-4, BSMAS, GP-CORE, 
risky behaviors) and core belief violation (CBI), difficulty in meaning-making capacity (ISLES), and feelings of mortality and vulnerability.Interesting, too, from a clinical perspective is that the Expressive Suppression strategy (ERQ) is associated more with increased general malaise (GP-CORE) and levels of depression and anxiety (PHQ-4), while the Cognitive Reappraisal strategy (ERQ) is associated more with social media addiction (BSMAS) and risky behaviors.Thus, a more internalizing or more externalizing profile of distress seems to emerge. A third aim (see aim (c)) of the present study was to explore the role of relational factors in addition to cognitive factors in the impact of negative events on mental health.In this regard, it appears to be noteworthy the association between relationship factors and students' mental health.In our opinion, this is the original aspect of present study, which sought to add to the cognitive psychological aspects already investigated in the literature by involving relational functioning through a questionnaire on attachment style (RQ), a very complete though little-used one on perceptions regarding relationships with significant others (NRI-BSV), and finally some questions on perceptions of relationship qualities with teachers.Surprisingly, only 19.9% of the students recognized themselves in a secure attachment style, while the most widespread attachment style was the fearful one.We do not know how accurate the respondents' perceptions are, and thus we cannot say whether 80% of the students actually have a disturbed attachment style or whether it is the stressful situations of the pandemic and war that stimulated the activation of anxious and avoidant elements in the respondents.Certainly, the search for attachment becomes more pronounced at these times, and it is not always possible to find partners with whom to establish secure bonds.Schoolteachers in our study generally represented reference points for our students, and the relationship with them during the pandemic improved by becoming more authoritative, empathic, tolerant, allied, stimulating, and supportive, especially for students who identified themselves with a fearful attachment style.Positive relationships with teachers, however, did not correlate with mental health indices, either positively or negatively.This may be explained by the fact that teachers are surely a help and support for students' growth, but they do not become as emotionally important people as family members do.In fact, correlation analysis showed that a very important role in maintaining good psychological balance during the pandemic and outbreak period was played by the support received from parents, a romantic partner, and friends (NRI-BSV); in contrast, negative interactions with them led to worse levels of mental health both in terms of general malaise, anxiety and depression, and risky behaviors.All these results point to the crucial role played by the relationships in construing, maintaining, and protecting a good mental health [48,49].Surprisingly, the most pronounced role among these, either in terms of support or, at the opposite end, of negative interactions is played by the father.He is a less studied figure than, for example, the mother, but in systemic, direct, or indirect terms, he seems to be equally or more important than the other figures in the family as suggested but some scholars and research studies, e.g., [65,66]. 
Finally, as proposed in aim (d) of this study, the results also offer predictive models of mental health in stressful situations such as the pandemic and the war, at least in the student population. Being female, testing positive for COVID-19, and having had a COVID-19-related death among acquaintances and family members were factors associated with worse levels of mental health. General malaise as measured by the GP-CORE was predicted by a combination of factors including feeling vulnerable due to the pandemic, experiencing difficulty in making sense of the pandemic (ISLES), using the strategy of Expressive Suppression of emotions (ERQ), testing positive for COVID-19, having negative interactions with the father (NRI-BSV), attachment avoidance (RQ), and lower emotional support from the father and the romantic partner (NRI-BSV). High levels of anxiety and depression (PHQ-4) were predicted by similar factors: perception of vulnerability due to the pandemic, perceived negative interaction with the father (NRI-BSV), difficulty in meaning-making about the war (ISLES), violation of core beliefs due to the pandemic (CBI), and months of distance learning. Risk behaviors, on the other hand, were predicted mainly by negative interactions with the partner and the father (NRI-BSV). Finally, social media addiction scores (BSMAS) were not predicted by any of the variables considered in the present study. These predictive models give us a picture of the factors involved in students' mental health in this time of pandemic and war. We find this picture more inclusive, and therefore more useful for intervention, than many models in the literature, e.g., [67][68][69], both in terms of the variety of mental health outcomes considered and of the cognitive and relational factors involved.
Some limitations of the present study must be mentioned when interpreting the results. The study utilizes a cross-sectional design, gathering data at a single moment, which constrains the establishment of causation and the tracking of dynamic shifts over time: it provides a snapshot of participants' experiences and mental health status at a specific point in time, without the ability to track changes or assess long-term effects. Future research employing longitudinal designs is warranted to comprehensively explore the ongoing impacts of significant events like the war and the COVID-19 pandemic on mental health outcomes. Another constraint pertains to the sample's representativeness, as it is predominantly composed of high school students from specific Italian regions. This composition may curtail the applicability of the findings to more extensive demographic cohorts or diverse geographic settings. Moreover, the sample enrolled in this study shows a good distribution across years of study, but there is a greater representation of women. This finding is, however, consistent with other studies, in which female participation in research is consistently higher than that of males. The higher engagement of young women in psychological research has been well documented in the literature [70,71], and this warrants caution in generalizing the results to the entire adolescent population. Finally, a last caveat in the interpretation of the results stems from the fact that we asked for ratings based on respondents' memories of different past times. This may well have led to bias due to memory issues or to the emotional and subjective contingencies of the respondents. Having more external and objective measures of adolescents' mental health during the outbreak of the pandemic and war would certainly have given us a more complete picture. However, the mere fact that respondents reported malaise at the time they were answering is important from a psychological perspective, because subjective experience is what most affects mental health, beyond the objective circumstances of life. It would also be important to conduct studies that delve more deeply into how this impact takes shape in the particular developmental stage that is adolescence.
Conclusions

In the wake of the unprecedented challenges presented by the COVID-19 pandemic and the subsequent Russian-Ukrainian war, this study sought to examine the intricate network of psychological impacts on a cohort of Italian high school students. The findings underscore a complex interplay of factors influencing mental well-being, ranging from the direct consequences of the pandemic and war to individual coping mechanisms and social dynamics. The study reveals a substantial psychological toll inflicted by the dual crises, manifesting in high levels of malaise, depression, anxiety, and risky behaviors. Distinct correlations emerged, shedding light on the multifaceted nature of the challenges faced by the participants. Notably, the perceived violation of core beliefs during the pandemic and war and disruptions in meaning-making were identified as significant contributors to mental health outcomes. The study emphasizes that the profile of distress varies based on individual predisposing factors, highlighting the importance of personalized approaches in mental health interventions. In the context of the ongoing global recovery efforts, the study calls for an understanding of the psychological repercussions not only of the pandemic but also of concurrent geopolitical events. The integration of comprehensive models, such as the one proposed by Milman et al. [18] and extended by Negri et al. [42], provides a robust framework for comprehending the intricate relationships between cognitive, emotional, and social variables in shaping mental health outcomes.

This study contributes to the evolving discourse on mental health during complex and protracted crises, offering valuable insights for psychoeducational interventions in school contexts. Our findings offer insights into which cognitive and relational processes are useful targets for intervention in school settings and, in general, with people who are facing significant moments of crisis due to events that challenge their cognitive and emotional balance. It appears very important, for example, to foster students' capacity to make personal or collective sense of stressful and unexpected events such as the COVID-19 pandemic and war, to incorporate emotional intelligence instruction into school programs to improve students' ability to comprehend and manage their emotions effectively, and to organize events or community forums that openly discuss how to interpret events happening in society.

Table 1. Descriptive statistics about the main sociodemographic, COVID-19-related, mental health, and psychological factor variables.

Table 2. Significant differences between Females (F) and Males (M).
Notes: CBI = Core Belief Inventory; ISLES = Integration of Stressful Life Experiences Scale; RQ = Relationship Questionnaire; NRI-BSV = The Network of Relationships Inventory: Behavioral Systems Version; SWLS = Satisfaction with Life Scale; GP-CORE = General Population-Clinical Outcomes in Routine Evaluation; PHQ-4 = Four-item Patient Health Questionnaire; BSMAS = Bergen Social Media Addiction Scale. The higher mean value in the comparison is written in bold.

Table 3. Significant differences between those who received psychological support during the pandemic (Psy) and those who did not (noPsy).
Notes: CBI = Core Belief Inventory; RQ = Relationship Questionnaire; ∆b = difference between the scores given to the last month and to the pre-pandemic period; SWLS = Satisfaction with Life Scale; BSMAS = Bergen Social Media Addiction Scale. The higher mean value in the comparison is written in bold.

Table 4. Significant differences between those who tested positive for COVID-19 (Cov) and those who did not (noCov).

Table 5. Significant differences between those who have had an acquaintance or family member die due to COVID-19 (Deaths) and those who have not (NoDeaths).
Notes: CBI = Core Belief Inventory; ISLES = Integration of Stressful Life Experiences Scale; ∆2 = difference between the scores given to the last month and to the pre-pandemic period. The higher mean value in the comparison is written in bold.

Table 6. Partial correlation coefficients between mental health variables, months of remote schooling, and cognitive processes, controlled for satisfaction with life (SWLS).

Table 7. Partial correlation coefficients between mental health variables and relational variables, controlled for satisfaction with life (SWLS).

Table 8. Significant predictors of mental health indices, weighted for satisfaction with life (SWLS).
Notes: CBI = Core Belief Inventory; ISLES = Integration of Stressful Life Experiences Scale; NRI-BSV = The Network of Relationships Inventory: Behavioral Systems Version; RQ = Relationship Questionnaire; GP-CORE = General Population-Clinical Outcomes in Routine Evaluation; PHQ-4 = Four-item Patient Health Questionnaire.
2024-04-21T15:15:42.216Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "c4982570d1284709dd16599be3a3982c0b68a3b9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/21/4/508/pdf?version=1713537605", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "70562ec76448d49df4b1792bf007fd9eb8bbb987", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
16723637
pes2o/s2orc
v3-fos-license
GRB afterglow plateaus and Gravitational Waves: multi-messenger signature of a millisecond magnetar? The existence of a shallow decay phase in the early X-ray afterglows of gamma-ray bursts is a common feature. Here we investigate the possibility that this is connected to the formation of a highly magnetized millisecond pulsar, pumping energy into the fireball on timescales longer than the prompt emission. In this scenario the nascent neutron star could undergo a secular bar-mode instability, leading to gravitational wave losses which would affect the neutron star spin-down. In this case, nearby gamma-ray bursts with isotropic energies of the order of 1e50 ergs would produce a detectable gravitational wave signal emitted in association with an observed X-ray light-curve plateau, over relatively long timescales of minutes to about an hour. The peak amplitude of the gravitational wave signal would be delayed with respect to the gamma-ray burst trigger, offering gravitational wave interferometers such as the advanced LIGO and Virgo the challenging possibility of catching its signature on the fly.

INTRODUCTION

Thanks to Swift observations (e.g. Nousek et al. 2006; Zhang et al. 2006), it has now become evident that the "normal" power-law behavior of long GRB X-ray light curves, F(T) ∝ T^α with α ∼ −1.2 (where F(T) is the observed flux and T is the observer's time), is often preceded at early times by an initial steep decay (α ∼ −3), followed by a shallower-than-normal decay (α ≳ −0.5, see Fig. 1). The steep-to-shallow and shallow-to-normal decay transitions are separated by two corresponding break times, 100 s ≲ T_break,1 ≲ 500 s and 10^3 s ≲ T_break,2 ≲ 10^4 s. During the shallow-to-normal transition the spectral index does not change, and the decay slope after the break (α ∼ −1.2) is generally consistent with the standard afterglow model (e.g. Mészáros & Rees 1997; Sari et al. 1998), while the decay slope before the break is usually much shallower. The lack of spectral changes suggests that the shallow phase may be attributed to continuous energy injection by a long-lived central engine with progressively reduced activity (for a review see Zhang et al. 2006, and references therein). Recently, Panaitescu & Vestrand (2008) have pointed out that the effects of a late-time energy injection may also be evident in some optical afterglows, around 30−10^4 s after the trigger. Although it is still not clear whether a typical "steep-flat-steep" behavior also exists in short GRB X-ray afterglows, the case of GRB 051221a fits this scheme remarkably well, with a plateau observed right in the middle of the afterglow decay (Soderberg et al. 2006).

Newborn magnetars are among the progenitors proposed to account for the shallow decays or plateaus observed in GRB light curves (Dai & Lu 1998; Zhang & Mészáros 2001; Fan & Dong 2006; Yu & Huang 2007). Independent support for this scenario comes from the observation of SN2006aj, associated with the nearby sub-energetic GRB 060218, suggesting that the supernova-GRB connection may extend to a much broader range of stellar masses than previously thought, possibly involving two different mechanisms: a "collapsar" for the more massive stars collapsing to a black hole (BH), and a newborn neutron star (NS) for the less massive ones (Mazzali et al. 2006).
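As a purely illustrative aid, the "steep-flat-steep" phenomenology described above can be sketched as a piecewise power law; the break times and slopes below are placeholder values within the quoted ranges, not fits to any data.

```python
import numpy as np

def canonical_lightcurve(T, F0=1.0, t_break1=300.0, t_break2=5.0e3,
                         a_steep=-3.0, a_flat=-0.5, a_normal=-1.2):
    """Piecewise power-law sketch of the canonical Swift XRT behavior:
    alpha ~ -3 before T_break,1, a shallow segment (alpha >~ -0.5) up to
    T_break,2, then the 'normal' decay alpha ~ -1.2.  Normalizations are
    chosen so that the three segments join continuously."""
    T = np.asarray(T, dtype=float)
    steep = F0 * (T / t_break1) ** a_steep
    flat = F0 * (T / t_break1) ** a_flat
    normal = F0 * (t_break2 / t_break1) ** a_flat * (T / t_break2) ** a_normal
    return np.where(T < t_break1, steep, np.where(T < t_break2, flat, normal))

T = np.logspace(1, 6, 400)      # 10 s to 10^6 s
F = canonical_lightcurve(T)     # ready to plot on log-log axes
```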
Previous studies aimed at accounting for the afterglow plateaus by invoking a magnetar-like progenitor have assumed that the magnetar's slow-down is dominated by magnetic dipole losses, neglecting the contribution from the emission of gravitational waves (GWs, see Dai & Lu 1998; Fan & Dong 2006; Yu & Huang 2007), or treating them separately as a limiting case for a NS with a sufficiently high, constant eccentricity (Zhang & Mészáros 2001). Such studies have shown how magnetar dipole losses may indeed explain the flattening observed in GRB afterglows. In the simplest version of the magnetar scenario, the end of the shallow decay is accompanied by an achromatic break, while several cases of chromatic breaks have also been observed (e.g. Panaitescu 2009). Additional mechanisms, such as variable micro-physical parameters in the fireball shock front (e.g. Panaitescu et al. 2006) or a structured jet model (e.g. Racusin et al. 2008), can be invoked to explain such chromatic breaks. In any case, a larger sample of simultaneous optical-to-X-ray observations is needed to firmly assess the achromatic or chromatic behavior of the breaks associated with the end of the shallow-decay phase.

In this paper, we investigate in more detail the effects of GW losses on the magnetar's spin-down, and explore quantitatively the signatures which could test whether this is indeed the mechanism at work in the shallow X-ray light curves. Although the precise evolution of a newborn magnetar from birth up to timescales of ∼10^3−10^4 s is difficult to predict or to follow with numerical simulations, here we point out that, among the possible evolutionary paths which one may reasonably consider, one plausible and particularly interesting possibility to explore is that of a newborn NS, left over after a GRB explosion, which undergoes a secular bar-mode instability. In this scenario, simple estimates accounting for the most relevant energy loss processes can provide useful insights into the viability of having efficient GW emission associated with a GRB X-ray afterglow plateau. Although these estimates are clearly approximate, since possible complications like viscosity effects or magnetic-field-driven instabilities are neglected, they nonetheless allow us to make a first statement on the relevance of the considered process. Moreover, while other scenarios are also possible, the interesting aspect of this particular one is that, on the one hand, GW observations would be facilitated by the presence of an electromagnetic signature to pinpoint the GW signal search, while on the other hand the detection of bar-mode-like GWs in coincidence with a GRB X-ray plateau would be a smoking-gun signature of a magnetar pumping energy into the fireball, thus identifying the much-debated plateau mechanism. Given that several alternative scenarios have been invoked to explain the afterglow flattening, which are not expected to be associated with GW signals (see e.g. Panaitescu 2008), this would represent a significant step forward in our understanding of GRB physics. Moreover, identifying the presence of a magnetar would confirm that not all GRB explosions necessarily lead to the prompt formation of a BH. A point of interest for the current analyses that GW detectors are carrying out (see e.g. Abbott et al. 2008a,b) is that the scenario described here involves a new class of GW signals, which should be searched for in coincidence with GRBs.
These would have a longer duration (10^3−10^4 s) and a different frequency evolution than the type of GW signals currently considered to be possibly associated with GRBs. Moreover, being delayed by minutes to ∼1 hour with respect to the prompt γ-ray trigger, the GW signal associated with a GRB plateau would offer the challenging possibility of an on-line detection. In light of the fact that the Virgo and LIGO interferometers are now progressing toward their enhanced/advanced configurations, and getting prepared for performing on-line data analyses, this prospect appears very appealing. It is worth noting that although a GW signal in coincidence with a GRB plateau could also be searched for off-line by LIGO or Virgo, an on-line detection would be highly preferable, since it could serve as a trigger for ground-based optical follow-ups, even if a GRB trigger alert is absent for any reason.

The paper is organized as follows. In Sec. 2 we briefly describe how GRB afterglow plateaus are modeled in the context of the magnetar model. In Sec. 3 we review the main processes that can lead to GW emission associated with NS formation. The aim of this section is to show that, among the different mechanisms that can come into play, the high efficiency of the secular bar-mode instability is conducive to producing GW signals which are detectable also from relatively nearby extragalactic sources. Moreover, it develops on timescales compatible with the observed durations of GRB plateaus. Sec. 4 describes the general idea and particular aspects of the scenario being explored here, and how it can explain GRB afterglow plateaus with the presence of a magnetar whose spin-down includes both magnetic dipole and bar-mode GW losses. In Sec. 5 we present the results of our calculations, and in Sec. 6 we discuss these results, summarizing our conclusions in Sec. 7.

Fig. 1. - Cartoon representation of the typical light curve behavior observed by Swift XRT: the "standard" power-law decay with index α = −1.2 is preceded by a flat phase, lasting 10^2−10^4 s, during which the decay index is α = −0.5 or flatter (Zhang et al. 2006).

GRB PLATEAUS IN THE MAGNETAR SCENARIO

Although a wide range of GRB progenitors end in the formation of a BH-debris torus system, it has been proposed that some progenitors may lead to a highly magnetized, rapidly rotating pulsar (e.g. Usov 1992; Duncan & Thompson 1992; Thompson 1994; Dai & Lu 1998; Kluzniak & Ruderman 1998; Nakamura 1998; Spruit 1999; Wheeler et al. 2000; Ruderman et al. 2000; Levan et al. 2006; Mereghetti 2008; Bucciantini et al. 2009), with such a possibility being realized not only in the case of long GRBs associated with collapsars, but eventually also in scenarios relevant for short GRBs, such as NS binary mergers (Dai & Lu 1998, and references therein). Fast-rotating, highly magnetized pulsars are among the class of progenitors that may be associated with a significant energy input into the fireball on timescales longer than the γ-ray emission, thus being relevant for explaining GRB afterglow plateaus. A detailed analysis of the observable effects linked to the presence of a pulsar pumping energy into the fireball was performed by Zhang & Mészáros (2001), the results of which we briefly recall in what follows.

Consider the general scenario where the GRB is powered by a central engine that emits both an initial impulsive energy input, E_imp, as well as a continuous luminosity, the latter varying as a power law with time, i.e. L = L_0 (T/T_0)^q, where T is the observer's time.
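As a quick numerical aid (anticipating the injected-energy scaling written out in the next paragraph), the competition between the impulsive and continuous components can be sketched as follows; the E_imp and L_0 values are the estimates quoted later in the paper, while T_0 is a placeholder.

```python
def E_inj(T, L0, T0, q):
    """Energy injected by the continuous component up to time T:
    E_inj(T) = L0*T0/(q+1) * (T/T0)**(q+1), valid for q > -1."""
    assert q > -1.0, "the continuous term can only dominate at late times if q > -1"
    return L0 * T0 / (q + 1.0) * (T / T0) ** (q + 1.0)

def T_crossing(E_imp, L0, T0, q):
    """Time T_c at which E_inj(T_c) = E_imp, after which the continuous
    injection dominates the blast-wave dynamics."""
    return T0 * ((q + 1.0) * E_imp / (L0 * T0)) ** (1.0 / (q + 1.0))

# E_imp ~ 1e50 erg and L0 ~ 3e47 erg/s, as quoted later in the text; for
# q = 0 this reduces to T_c ~ E_imp/L0 ~ 300 s, independently of T0.
print(T_crossing(1e50, 3e47, 10.0, 0.0))
```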
This could be the case if the central engine is a pulsar and the initial impulsive GRB fireball is due to ν-ν̄ annihilation or magnetohydrodynamical processes (see e.g. MacFadyen et al. 1999; Popham et al. 1999; Di Matteo et al. 2002; Lee 2005; Oechslin & Janka 2006; Zhang & Dai 2009). In such a case, a self-similar blast wave is expected to form at late times. In a GRB, the timescale T_0 at which the self-similar solution applies is roughly equal to the time for the external shock to start decelerating while collecting material from the interstellar medium (e.g. Sari & Piran 1999). At different times, the total energy in the fireball may be dominated either by the initial impulsive term, or by the continuous injection one, whose contribution scales as E_inj = [L_0 T_0/(q+1)] (T/T_0)^{q+1}. The continuous energy injection term can dominate over the impulsive one for T ≳ T_c (where T_c ≳ T_0, so as to ensure that the self-similar solution has already developed when the continuous injection law dominates), if q > −1 and E_inj(T_c) ∼ E_imp. In the particular case in which L_0 T_0 ∼ E_imp, then T_c ∼ T_0 and the dynamics is dominated by the continuous injection as soon as the self-similar evolution begins. Generally speaking, one can define T_c through the condition E_inj(T_c) ∼ E_imp (Zhang & Mészáros 2001). Note that the continuous injection may, in addition, have another characteristic timescale T_f at which the continuous injection power-law index q > −1 switches to a lower value q < −1. In such a case, it is only for T_c < T_f that the continuous injection has a noticeable effect on the afterglow light curve (Zhang & Mészáros 2001).

During the energy-injection dominated phase, the peak flux, peak frequency and cooling frequency of the synchrotron photons produced by the forward shock (Sari et al. 1998) scale as F_m ∝ T^{1+q}, ν_m ∝ T^{(q−2)/2} and ν_c ∝ T^{−(q+2)/2} (Zhang & Mészáros 2001), which reduce to the standard scalings for q = −1 (Sari et al. 1998). In the case of a nearly constant energy supply, i.e. q ∼ 0, one has F_m ∝ T, ν_m ∝ T^{−1} and ν_c ∝ T^{−1}, respectively. These scalings allow one to compute the temporal indices of the afterglow light curve expected during the injection phase. Assuming slow cooling, one has F_ν ∝ F_m (ν/ν_m)^{−(p−1)/2} ∝ T^{α_1} for ν_m < ν < ν_c, and F_ν ∝ F_m (ν_c/ν_m)^{−(p−1)/2} (ν/ν_c)^{−p/2} ∝ T^{α_2} for ν > ν_c, where we have indicated with p the power-law index of the electron energy distribution in the shock front (Sari et al. 1998). For 2 < p < 4, one has 0.5 > α_1 > −0.5 at frequencies ν_m < ν < ν_c, and 0 > α_2 > −1 at ν > ν_c, to be compared with the α ≳ −0.5 observed during GRB afterglow plateaus. In the absence of energy injection, for the standard adiabatic fireball one would have −3/4 > α_1 > −9/4 for ν_m < ν < ν_c, and −1 > α_2 > −5/2 at ν > ν_c, for the same range of p values. Thus, the presence of a pulsar pumping energy into the fireball at a nearly constant rate is expected to cause a flattening in the typical decay of the afterglow light curve, with α ≳ −0.5, in agreement with Swift observations (see Fig. 1).

GWS BY NS FORMATION

Gravitational collapse leading to the formation of a NS has long been considered an observable source of GWs. During the core collapse, an initial signal is expected to be emitted due to the changing axisymmetric quadrupole moment. A second part of the GW signal is produced when gravitational collapse is halted by the stiffening of the equation of state above nuclear densities and the core bounces, driving an outwards-moving shock, with the rapidly rotating proto-neutron star (PNS) oscillating in its axisymmetric normal modes. In a rotating PNS, non-axisymmetric processes can also lead to the emission of GWs with high efficiency.
Such processes are convection inside the PNS and in its surrounding hot envelope, anisotropic neutrino emission, dynamical instabilities, and secular gravitational-radiation-driven instabilities, which we briefly recall in what follows. We refer the reader to e.g. Kokkotas (2008) for a recent, more detailed review.

- Convection and neutrino emission - 2D simulations of core collapse (Müller et al. 2004) have shown that the GW signal from convection significantly exceeds the core bounce signal for slowly rotating progenitors, being detectable with advanced LIGO for galactic sources. In many simulations, the GW signature of anisotropic neutrino emission has also been considered (Epstein 1978; Burrows & Hayes 1996; Müller & Janka 1997) and estimated to be detectable by advanced LIGO for galactic sources.

- Dynamical instabilities - They arise from non-axisymmetric perturbations and are of two different types: the classical bar-mode instability and the more recently discovered low-T/|W| bar-mode and one-armed spiral instabilities. In Newtonian stars, the classical m = 2 bar-mode instability is excited when the ratio β = T/|W| of the rotational kinetic energy T to the gravitational binding energy |W| is larger than β_dyn = 0.27 (Chandrasekhar 1969). It can be excited in a hot PNS a few ms after core bounce, or alternatively a few tenths of a second later, when the PNS cools due to neutrino emission and contracts further, with β becoming ≳ β_dyn (β ∝ 1/R during contraction). The instability grows on a dynamical timescale (the time that a sound wave needs to travel across the star), which is about one rotational period, and may last from 1 to 100 rotations depending on the degree of differential rotation (e.g. Baiotti et al. 2007; Manca et al. 2007). If the bar persists for ∼10-100 rotation periods, then even signals from distances considerably larger than the Virgo Cluster are estimated to be detectable. An m = 1 one-armed spiral instability has also been shown to become unstable in PNSs, provided that the differential rotation is sufficiently strong (with matter on the axis rotating at least ten times faster than matter on the equator; Centrella et al. 2001; Saijo et al. 2002). In recent simulations of rotating core collapse to which differential rotation was added (Ott et al. 2005), the emitted GW signal reached a maximum amplitude comparable to the core-bounce axisymmetric signal, after ∼100 ms and at a frequency of ∼800 Hz.

- Secular instabilities - At lower rotation rates, a star can become unstable to secular non-axisymmetric instabilities, driven by gravitational radiation or viscosity. Secular GW-driven instabilities are frame-dragging instabilities usually called Chandrasekhar-Friedman-Schutz (CFS; Chandrasekhar 1970; Friedman 1978) instabilities. Neglecting viscosity, the CFS instability is generic in rotating stars for both polar and axial modes. In the Newtonian limit, the l = m = 2 f-mode, which has the shortest growth time of all polar fluid modes (1 s ≲ τ_GW ≲ 7 × 10^4 s for 0.24 ≳ β ≳ 0.15, see Lai & Shapiro 1995), becomes unstable when β ≳ 0.14. The f-mode instability, also referred to as the secular bar-mode instability, is an excellent source of GWs. In the ellipsoidal approximation, Lai & Shapiro (1995) have shown that the mode can grow to a large nonlinear amplitude, modifying the star from an axisymmetric shape to a rotating ellipsoid, which becomes a strong emitter of GWs until the star is slowed down towards a stationary state. This stationary state is a Dedekind ellipsoid, i.e.
a non-axisymmetric ellipsoid with internal flows but with a stationary (non-radiating) shape in the inertial frame. During the evolution, the non-axisymmetric pattern radiates GWs sweeping through the advanced LIGO/Virgo sensitivity window (from 1 kHz down to about 100 Hz), which could become detectable out to a distance of more than 100 Mpc. Two recent hydrodynamical simulations (Shibata & Karino 2004; Ou et al. 2004, in the Newtonian limit and using a post-Newtonian radiation reaction, respectively) have essentially confirmed this picture. Among axial modes, the l = m = 2 r-mode is an important member (see e.g. Andersson 1998; Friedman & Morsink 1998; Lindblom et al. 1998; Owen et al. 1998; Andersson & Kokkotas 2001; Andersson 2003; Bondarescu et al. 2009). If the compact object is a strange star, such an instability is predicted to persist for a few hundred years (at a low amplitude) and, integrating data for a few weeks, could yield an effective amplitude h_eff ∼ 10^{-21} for galactic signals, at frequencies ∼700−1000 Hz (Kokkotas 2008).

- Other magnetic-field-related effects - Finally, mechanisms different from rotational instabilities can be invoked as GW sources in newborn magnetars. E.g., in several scenarios the star's shape may be dominated by the distortion caused by very high internal magnetic fields (e.g. Palomba 2000; Cutler 2002; Arons 2003; Stella et al. 2005; Dall'Osso & Stella 2007; Dall'Osso et al. 2008). GW signals produced by these kinds of processes are typically estimated to be detectable by the advanced interferometers up to the Virgo Cluster (i.e. distances of the order of 20 Mpc).

THE NS SPIN-DOWN

On the longer afterglow timescales that are of interest for the present work, the energy injection into the fireball by a magnetar eventually surviving after the GRB explosion is expected to be mainly through electromagnetic dipolar emission (Zhang & Mészáros 2001). For what concerns GW losses, in this work we focus on the secular bar-mode instability, given its high efficiency in the production of GWs and the fact that its characteristic timescale τ_GW is compatible with that of GRB plateaus (see Sects. 3 and 4).

As discussed in the previous section, a collapsing core rotating sufficiently fast is expected to become non-axisymmetric when β is sufficiently large. Since a newborn NS can be secularly unstable but dynamically stable only if the rotation rate of the pre-collapse core lies in a narrow range, and since during the collapse β increases proportionally to R^{-1}, Lai & Shapiro (1995) considered it more likely that the core becomes dynamically unstable (β > β_dyn) following the collapse, provided the initial β_i is not too small. On a short dynamical timescale, such a NS will evolve toward a nearly axisymmetric equilibrium state, with β decreasing below β_dyn, but possibly remaining above β_sec (see Lai & Shapiro 1995, and references therein). Due to gravitational radiation, the nearly axisymmetric core (a secularly unstable Maclaurin spheroid) will evolve into a non-axisymmetric configuration (a Riemann-S ellipsoid), on a secular dissipation timescale ∼τ_GW. While an initial dynamically unstable phase would possibly produce a GW burst during the GRB, the secular evolution takes place on longer timescales, thus being relevant for the shallow phase (100 s ≲ T ≲ 10^4 s) observed in GRB afterglows (see Fig. 1). For this reason, in what follows we focus on the secular bar-mode instability.
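For reference, a minimal sketch of the β = T/|W| bookkeeping used above; the thresholds are the Newtonian values quoted in the text and are meant only to organize the regimes, not as a physical simulation.

```python
def rotational_instability_regime(beta, beta_sec=0.14, beta_dyn=0.27):
    """Classify a star by beta = T/|W|, using the Newtonian thresholds
    quoted in the text: secular (GW-driven f-mode / bar-mode) instability
    for beta >~ 0.14, classical dynamical m=2 bar mode for beta > 0.27."""
    if beta > beta_dyn:
        return "dynamically unstable (classical m = 2 bar mode)"
    if beta >= beta_sec:
        return "secularly unstable (CFS f-mode, i.e. secular bar mode)"
    return "stable against the bar modes considered here"

for beta in (0.10, 0.20, 0.30):   # 0.20 is the initial value adopted later
    print(f"beta = {beta:.2f}: {rotational_instability_regime(beta)}")
```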
It is worth noting, however, that the presence of a bar-like GW burst from a dynamical bar-mode instability would also provide a hint of a magnetar being formed in the GRB explosion. BH formation, in fact, is not expected to lead to strong quadrupole moments (unless one argues for blobs forming in the infall, see e.g. Kobayashi & Mészáros 2003), and in any case a dynamically unstable magnetar would presumably give rise to a more regular signal.

Fully general relativistic simulations of rotating stellar core collapse in three spatial dimensions, performed for a wide variety of initial conditions (rotational velocity profile, equation of state, total mass), indicate that the threshold β = 0.27 for the onset of the classical dynamical instability is passed if the progenitor of the collapse is: (i) highly differentially rotating; (ii) moderately rapidly rotating, with 0.01 ≲ β_i ≲ 0.02; (iii) massive enough (Shibata & Sekiguchi 2005). More recent numerical collapse simulations of rotating stellar iron cores to PNSs have also provided an extensive set of post-bounce rotational configurations, allowing studies of the prospects for the development of non-axisymmetric rotational instabilities. E.g., Dimmelmeier et al. (2008) found that the rotational barrier imposed by centrifugal forces prohibits the spin-up necessary for the classical dynamical bar-mode instability, but a large subset of post-bounce models exhibits a β above the secular instability threshold. Based on their results, Dimmelmeier et al. (2008) consider it unlikely that a PNS in nature develops a high-β dynamical instability at or shortly after core bounce. While many of the PNSs could theoretically reach β_dyn during their subsequent cooling to the final condensed NS (if angular momentum is conserved), it is considered more likely that the secular instability driven by dissipation or gravitational-radiation back-reaction will set in first (Dimmelmeier et al. 2008). Still, three-dimensional simulations are necessary to provide conclusive tests of these predictions.

Under the hypothesis that a secular bar-mode instability does indeed set in, in this work we follow the NS quasi-static evolution under the effect of gravitational radiation according to the analytical formulation given by Lai & Shapiro (1995). Such an evolution can in principle be studied using the full dynamical equations of ellipsoidal figures (Chandrasekhar 1969), including the gravitational radiation reaction. However, since τ_GW is generally much longer than the dynamical time of the star, the evolution is quasi-static, i.e. the star evolves along an equilibrium sequence of Riemann-S ellipsoids. Differently from what was done by Lai & Shapiro (1995), here we add to the energy losses the contribution of magnetic dipole radiation, under the hypothesis that this will not substantially modify the dynamics, but will act to speed up the overall evolution of the bar along the same sequence of Riemann-S ellipsoids that the NS would have followed in the absence of such losses. As we are going to show in the following section, dipole losses are nearly constant during the bar evolution so that, according to what was discussed in Sec. 2 for the q = 0 case, they can act as a source of continuous energy supply into the fireball, explaining the observed slope of GRB afterglow plateaus. It is worth noting that in a real situation, where magnetic field instabilities and viscosity effects are also present, the relevant timescales may be altered.
A secularly evolving bar can last up to a timescale of the order of 10^3 s, as long as viscosity or magnetic-field-induced instabilities do not substantially modify the dynamics. Viscosity may play a role in the secular evolution when the PNS has cooled to below ∼1 MeV (e.g. Lai & Shapiro 1995; Lai 2001). The time required for the pulsar to cool below such a temperature was estimated as a few hundred seconds by Lai (2001). In the case of a GRB explosion, a less rapid cooling is expected due to continuous in-fall and jet emission (a heating source which was absent in e.g. Lai 2001), so that we may assume that the bar survives at least until the end of the electromagnetic plateau (i.e. T ∼ 10^3 s). Magnetic effects are notoriously difficult to predict (see e.g. Shibata & Karino 2004), and in general require making heuristic assumptions. In the context of the secular r-mode instability, Rezzolla et al. (2001) have shown that the growth of an initial magnetic field, associated with the secular kinematic effects emerging during the evolution of the instability, possibly damps the growth of the instability itself. Despite the different context (r-modes), these results do suggest that magnetically driven instabilities may complicate the scenario. In what follows, we explore the quantitative consequences of making the plausible assumption that magnetic instabilities are less efficient at spinning down the bar than GW emission and magnetic dipole losses. Moreover, apart from magnetic braking (spin-down due to dipole losses), which we do consider here, the presence of a magnetic field can influence the secular evolution in other ways. A magnetic field anchored on the star's surface is in fact perturbed by the instability itself, and this can lead to electromagnetic losses which can enhance the CFS mechanism. E.g., in the context of r-modes, Ho & Lai (2000) have considered the electromagnetic radiation associated with the shaking of magnetic field lines by the r-mode oscillations. This effect has been estimated to be negligible for NSs with magnetic field strengths below 10^16 G. In our case, magnetic field lines anchored on the surface would be distorted by the bar-mode instability. In our treatment, we neglect this effect, and include only dipole losses associated with a magnetic field whose flux is conserved on a sphere of radius equal to the mean radius of the ellipsoid.

MODELING OF THE NS EVOLUTION

A general Riemann-S ellipsoid is characterized by an angular velocity Ω e_3 of the ellipsoidal figure (the pattern speed) about a principal axis e_3, and by internal fluid motions which are assumed to have uniform vorticity ζ e_3 along the same axis (in the frame co-rotating with the figure). Labeling with a_1 and a_2 the principal axes of the ellipsoidal figure in the equatorial plane, and with x_1 and x_2 Cartesian coordinates in such a plane, it can be shown that the fluid velocity in the inertial frame reads u = u_0 + Ω e_3 × r, with u_0 = Λ [(a_1/a_2) x_2 e_1 − (a_2/a_1) x_1 e_2] the velocity in the frame co-rotating with the figure (Chandrasekhar 1969), where e_1 and e_2 are unit vectors along the Cartesian axes x_1 and x_2; r is the position vector; × indicates the vector product; and Λ is the angular frequency of the internal fluid motions, i.e. of the elliptical orbits that the particles span around the rotational axis in addition to the pattern motion.
The velocity u_0 is contained in the plane perpendicular to the rotational axis e_3, so, indicating with r_⊥ the projection of the position vector r onto such a plane, we can write r = r_⊥ + x_3 e_3, and u_0 = d(r_⊥)/dt + (dx_3/dt) e_3 = d(r_⊥)/dt (i.e. dx_3/dt = 0, since the component of u_0 along e_3 is null). Further, we can write Ω_0 = |r_⊥ × u_0|/r_⊥². Note that Ω_0 is defined in such a way that r_⊥ Ω_0 gives the component of the particle velocity perpendicular to the polar radius r_⊥, measured in the inertial frame. However, as underlined above, the motion of fluid particles on the surface can be viewed as the superposition of a circular motion with the pattern frequency Ω, plus an elliptical motion on paths contained on the pattern ellipsoid (resulting in maintaining the pattern fixed). Since the internal fluid motions are ellipses rather than circles, there is an additional component of the velocity parallel to r_⊥. Using Eq. (1), Ω_0 can be expressed explicitly in terms of Ω, Λ and the coordinates x_1 and x_2.

In the frame co-rotating with the pattern, fluid particles on the star's surface move around the rotational axis, on ellipses contained in x_3 = const planes. Those ellipses are self-similar to the equatorial one, with equation a_2² x_1² + a_1² x_2² = a_1² a_2² [1 − (x_3/a_3)²]. In the ellipsoidal approximation, surfaces of constant density are assumed to be self-similar ellipsoids, so the geometry of the configuration is completely specified by the three principal axes a_1, a_2 and a_3, and the axis ratios a_3/a_1 and a_2/a_1 are the same for all interior isodensity surfaces (Lai, Rasio & Shapiro 1993). For such fluid particles, the combination (a_2² x_1² + a_1² x_2²)/(a_1 a_2 r_⊥²) averages to unity over one cycle. Thus, in the inertial frame, fluid particles on the star's surface are characterized by an average angular frequency Ω_eff = Ω − Λ.

Since the gravitational radiation reaction acts like a potential force, the fluid circulation along the equator of the star, C = ∮ u · dl = π a_1 a_2 ζ_0, where dl is taken along the star's equator and ζ_0 is the vorticity in the inertial frame, is conserved in the absence of viscosity (Lai, Rasio & Shapiro 1993). Therefore, the NS will follow a sequence of Riemann-S ellipsoids with constant circulation. Treating the NS as a polytrope of index n (Chandrasekhar 1939) and total mass M, and indicating with R_0 the radius of the non-rotating, spherical equilibrium polytrope with the same mass M, one can define (Lai & Shapiro 1995) the dimensionless circulation C̄ = C̃/√(G M³ R_0), with C̃ = −(k_n M C)/(5π), where G is the gravitational constant and k_n is a constant which depends on the index n of the considered polytrope (see e.g. Lai, Rasio & Shapiro 1993). Note that C̄ is an adimensional quantity, C̃ has the dimensions of an angular momentum, and both are proportional to the conserved circulation C. It can be shown (Lai & Shapiro 1995) that C̃ can be expressed in terms of Ω, Λ and the moment of inertia I = k_n M (a_1² + a_2²)/5 of the NS with respect to the rotational axis.

Along the secular equilibrium sequence, we write the NS spin-down law as (Shapiro & Teukolsky 1983) dE/dT = −L_GW − L_dip, with L_GW = (32/5)(G/c⁵) I² ǫ² Ω⁶ and L_dip = B_p² R⁶ Ω_eff⁴/(6c³), where E is the NS total energy, L_GW = dE_GW/dT accounts for GW losses, while L_dip = dE_dip/dT accounts for the magnetic dipole ones. Here ǫ = (a_1² − a_2²)/(a_1² + a_2²) is the ellipticity; B_p is the dipolar field strength at the poles; Ω is the pattern angular frequency of the ellipsoidal figure; R is the mean stellar radius; c is the speed of light; T is the time measured in an inertial frame where the pulsar is at rest. L_dip is computed by conserving the magnetic field flux over a sphere of radius equal to the mean stellar radius (i.e.
B_p R² = const = B_p,0 R_0² along the sequence, where R is the geometrical mean of the ellipsoid principal axes), and by using the effective angular frequency Ω_eff, which includes both the pattern speed and the effects of the internal fluid motions. The use of Ω_eff = Ω_0 = |r_⊥ × u_0|/r_⊥² accounts for the fact that in the frozen-in magnetic field approximation (see e.g. Goldreich & Julian 1969; Baym et al. 1969; Thompson & Duncan 1996; Morsink & Rezania 2002; Thompson et al. 2002) the magnetic field lines are in effect tied to the fluid particles on the stellar surface. Note that Ω_eff (and the corresponding dipole loss term) is measured in the inertial frame, which is where we compute dE/dT as well.

Once (C̄, n, M, R_0, B_p,0) are assigned, each configuration along a constant-C̄ sequence is completely determined by specifying the axis ratio x = a_2/a_1 in the ellipsoid equatorial plane. Thus, all relevant quantities can be considered as functions of x only, and Eq. (11) can be written as dx/dT = −(L_GW + L_dip)/(dE/dx). We solve the above equation numerically, with its right-hand side evaluated along a constant-C̄ Riemann-S sequence, and imposing an initial condition sufficiently near to a uniformly rotating Maclaurin spheroid (x(t_i) = x_i → 1) of the given circulation C̄.

RESULTS AND DISCUSSION

In Fig. 2 we compare the luminosity emitted in GWs, computed with (black solid line) or without (black dash-dotted line) the addition of the dipole loss term (red dashed line) in Eq. (11), for a typical choice of parameters, (C̄, n, M, R_0, B_p,0) = (−0.41, 1, 1.4 M_⊙, 20 km, 10^14 G). Note that C̄ = −0.41 corresponds to a value of β = 0.20 for the initial Maclaurin configuration, i.e. in the middle of the 0.14 < β < 0.27 range for the secular instability.

Fig. 2. - Upper panel: rate of energy loss when both magnetic dipole (red dashed line) and GW (black solid line) losses are considered; for reference, the rate of energy loss in the case where only GW emission is considered (black dash-dotted line), as in Lai & Shapiro (1995), is also plotted. Lower panel: absolute value of the surface fluid particles' angular frequency divided by a factor of π (i.e. |Ω_eff|/π) when both magnetic dipole and GW losses are considered (black solid line); for reference, the same quantity when only GW losses are taken into account in the magnetar's spin-down law (black dash-dotted line), as in Lai & Shapiro (1995), is also plotted. Note that the vertical axis in the lower panel is a linear scale: between 10^2 s and ∼10^3 s, Ω_eff/π changes from ∼800 Hz to ∼750 Hz, i.e. by less than ∼10% of its initial value. Thus, between 10^2 s and 10^3 s the power-law approximation to the dipole losses is L_dip ∝ T^{−0.11}, so that q ∼ 0 can be assumed for T ≲ 10^3 s. (See the electronic version for colours.)

As evident from the lower panel of Fig. 2, as long as the circulation is conserved, Ω_eff remains nearly constant during the whole evolution, and |L_dip| ∼ 3 × 10^47 ergs/s = L_0 (upper panel, red dashed line). As underlined in Sec. 2, energy pumped into the fireball at a constant rate is sufficient to explain the observed temporal behavior of afterglow plateaus (i.e. α ≳ −0.5, see Fig. 1). For what concerns the duration of the plateau, for a GRB with an impulsive isotropic energy of the order of E_imp ∼ 10^50 ergs, the effect of the energy injection on the light curve will become visible after a time T_c ∼ E_imp/L_0 ∼ (10^50 ergs)/(3 × 10^47 ergs/s) ∼ 300 s (see the red dashed line in Fig. 2 and Sec. 3), which is roughly in the middle of the observed range for T_break,1 ∼ 100−500 s (see Fig. 1).
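As a rough cross-check of the numbers quoted here, the textbook spin-down formulas with the parameters above give luminosities of the same order as L_0; the exact normalizations used in the paper may differ from these by factors of order unity.

```python
import numpy as np

G, c = 6.674e-8, 2.998e10     # cgs units

def L_dip(B_p, R, Omega_eff):
    """Magnetic dipole spin-down luminosity, B_p^2 R^6 Omega_eff^4 / (6 c^3)."""
    return B_p**2 * R**6 * Omega_eff**4 / (6.0 * c**3)

def L_gw_bar(I, eps, Omega):
    """Quadrupole GW luminosity of a rotating triaxial body,
    (32/5) (G/c^5) I^2 eps^2 Omega^6, with eps = (a1^2 - a2^2)/(a1^2 + a2^2)."""
    return 32.0 / 5.0 * (G / c**5) * I**2 * eps**2 * Omega**6

# B_p,0 = 1e14 G, R_0 = 20 km, Omega_eff/pi ~ 800 Hz, as quoted in the text:
print(f"L_dip ~ {L_dip(1e14, 2.0e6, np.pi * 800.0):.1e} erg/s")
# -> ~1.6e47 erg/s, the same order as the quoted L0 ~ 3e47 erg/s
```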
Supposing that the energy injection ends, or starts fading significantly, when the star approaches the final Dedekind state (see the discussion at the end of this section), the GRB light curve will return to its standard behavior after T_break,2 ≳ 10^3 s, to be compared with the observed range of 10^3−10^4 s. Thus, for a GRB with such an impulsive energy, the properties of the plateau associated with the NS secular evolution are in agreement with those typically observed.

The waveform of the GW signal emitted in association with the afterglow plateau is computed following Lai & Shapiro (1995) as a quasi-periodic signal whose two polarizations scale as h_+ ∝ (1 + cos²θ) cos Φ and h_× ∝ 2 cos θ sin Φ, where θ is the angle between the line of sight and the rotation axis of the star, Φ = 2∫_{t_0}^{t} Ω dt is twice the orbital phase, and the overall amplitude scales as √(L_GW) and inversely with the distance d to the source; L_GW and Ω are shown in Fig. 2 (upper panel, black solid line) and Fig. 3 (lower panel, black solid line), respectively. The resulting GW signal is quasi-periodic, with frequency f = Ω/π.

To estimate the GW signal detectability, we proceed as follows. For broad-band interferometers such as LIGO and Virgo, the best signal-to-noise ratio is obtained by applying a matched-filtering technique to the data, when a waveform template is available. In such a case, ρ² = 4 ∫_0^∞ |h̃(f)|²/S_h(f) df, where h̃ is the Fourier transform of the detector response to the two polarizations; S_h(f) is the power spectral density of the detector noise; and F_+, F_× are the beam pattern functions (0 < F_+², F_×² < 1, depending on the source position in the sky, see e.g. Thorne 1987; Flanagan & Hughes 1998). For the signal in Eq. (13), in the stationary phase approximation (Thorne 1987; Cutler & Flanagan 1994; Owen et al. 1998; Lai & Shapiro 1995), the signal-to-noise ratio can be written as an integral over the slowly sweeping signal frequency. Since we expect to be observing the GRB on-axis, θ ≃ 0. In the case of optimal orientation, ρ² = ∫ (h_c/h_rms)² df/f, with h_c = f h(t) √(dt/df) the characteristic amplitude, and h_rms = √(f S_h(f)).

Fig. 3. - Upper panel: characteristic amplitude of the GW signal compared with the detector noise, with dipole-plus-GW and GW-only losses being considered. A typical fit to the sensitivity expected for advanced detectors (purple dashed line, see e.g. Cutler & Flanagan 1994; Owen et al. 1998), the Virgo nominal sensitivity (blue dotted line), and the advanced Virgo sensitivity optimized for binary searches (blue dash-dot-dot-dotted line) are also shown. Lower panel: evolution of the GW signal frequency, with dipole plus GW (black solid line) and only GW (black dash-dotted line) losses being considered in the NS spin-down. (See the electronic version for colours.)

In the upper panel of Fig. 3, we compare h_c, computed for a GRB at d = 100 Mpc, with the h_rms expected for the advanced detectors (Cutler & Flanagan 1994; Owen et al. 1998), for which ρ_max ≳ 5 at d ≲ 100 Mpc, or d ≲ 150 Mpc if we make the assumption that knowledge of the GRB trigger time reduces the detection threshold by a factor which, as a rule of thumb, we take equal to 1.5 (Kochanek & Piran 1993; Cutler & Thorne 2002). Higher confidence in an eventual detection may require ρ_sky = √(2/5) ρ_max ≳ 5 in each of a three-detector network with similar h_rms (Cutler & Flanagan 1994). With the help of the GRB trigger time roughly compensating the factor of √(2/5), this implies d ≲ 100 Mpc. BATSE results show that about 3% of short GRBs are expected to be within 100 Mpc (Nakar et al. 2006), which translates into ∼1−2 short GRBs per year in the Swift (∼10 short GRBs per year) plus GBM (GLAST, ∼1/4 of ∼200 GRBs per year) sample. As far as low-luminosity long GRBs are concerned, two of them (980425 and 060218) have already been observed, at d ∼ 40 Mpc and d ∼ 130 Mpc, and their local rate (≳200 Gpc⁻³ yr⁻¹) is expected to be much higher than that of normal bursts (∼1 Gpc⁻³ yr⁻¹, e.g. Virgili et al. 2009).
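A minimal numerical version of this optimal-orientation estimate is sketched below; the frequency band and amplitudes are toy numbers only, whereas real h_c(f) and h_rms(f) tracks would come from the evolution shown in Fig. 3 and from the detector noise curves.

```python
import numpy as np

def snr_optimal(f, h_c, h_rms):
    """rho = sqrt( int (h_c/h_rms)^2 dln f ), the optimal-orientation
    signal-to-noise ratio quoted in the text, with h_rms = sqrt(f S_h)."""
    return np.sqrt(np.trapz((h_c / h_rms) ** 2, np.log(f)))

# Toy example: a signal sweeping ~1 kHz -> ~100 Hz with a flat
# characteristic amplitude, against a flat noise floor (both illustrative).
f = np.logspace(2.0, 3.0, 500)                   # 100 Hz - 1 kHz
rho = snr_optimal(f, 1.0e-21 * np.ones_like(f), 4.0e-22 * np.ones_like(f))
print(f"rho ~ {rho:.1f}")                        # ~3.8 for these toy inputs
```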
INTEGRAL has detected a large proportion of faint GRBs inferred to be local (Foley et al. 2008), a sample which may be increased by future missions such as Janus (Stamatikos et al. 2009) and EXIST. For comparison with the case discussed here, the standard progenitor scenario for long GRBs predicts ρ ∼ 5 at 27 Mpc for advanced Virgo/LIGO, while the chirp signal from short GRBs is estimated to be detectable up to several hundred Mpc (e.g. Flanagan & Hughes 1998; Kobayashi & Mészáros 2003). GWs eventually detected after a chirp and during an electromagnetic plateau of a short GRB would add a significant piece of information, probing whether a magnetar, rather than a BH, is formed in the coalescence.

Finally, it is worth adding a few more considerations. First, in the scenario we are proposing here, some correlations do exist between the electromagnetic plateau and the GW signal, which could be explored in future analyses so as to test to what extent they may help the GW signal search. For example, a measurement of the initial frequency of the GW signal, for a given NS mass and radius, would allow one to derive an estimate β_GW for the actual value of β (see also Fig. 5 in Lai & Shapiro 1995). In the ellipsoidal approximation, β_GW would predict a specific evolution of the bar, e.g. the expected value of Ω_eff(β_GW) = Ω(β_GW) − Λ(β_GW) during the nearly constant phase. At the same time, the luminosity of the afterglow plateau, for a given NS mass, radius and magnetic field strength, would also allow one to estimate the value of Ω_eff during the constant phase, which could thus be checked for consistency with the value Ω_eff(β_GW) inferred from the GW measurements.

Next, some considerations are required on the fate of the bar after the final Dedekind state is reached. In the absence of dipole losses, the evolution of the NS along the sequence would have maintained Ω nearly constant up to a time T_GW of the order of a few secular growth times τ_GW(β) (Lai & Shapiro 1995). Here β refers to the initial Maclaurin configuration, and is determined by the choice of C̄. In our case, C̄ = −0.41 and β = 0.20, so that τ_GW ≃ 335 s. As evident from the black dash-dotted line in the lower panel of Fig. 3, for such a value of the circulation, when only GW losses are considered, one has T_GW ≃ (1−2) × 10^3 s ≃ (3−6) τ_GW. The addition of magnetic dipole losses speeds up the process, so that the star reaches the stationary "football" configuration somewhat earlier (Fig. 3, lower panel, black solid line). After the end of the secular evolution, we do not know what the fate of the bar is. As Lai & Shapiro (1995) have underlined, while the star approaches a Dedekind ellipsoid the gravitational evolution timescale increases, eventually becoming comparable to the viscous dissipation one. When this happens, C̄ is not conserved anymore and the star is expected to be driven along a nearly-Dedekind sequence to become a Maclaurin spheroid, since this is the only final state that neither radiates GWs nor dissipates energy viscously. The addition of magnetic dipole losses would speed up such an evolution, and further spin down the final Maclaurin state. We thus expect Ω_eff to decrease at some point after the constant-C̄ evolution, with the dipole luminosity L_dip also decreasing accordingly. Correspondingly, the energy injected into the fireball will start decreasing (eventually entering the q < −1 phase, see Sec.
2), and the afterglow plateau is expected to end, with the light curve turning back to the temporal decay expected in the absence of continuous energy injection. In view of these considerations, and for the purpose of this paper, we have limited our discussion to showing that the properties of the electromagnetic plateau, even assuming it ends abruptly after the constant-C̄ evolution, are in agreement with those typically observed in GRBs.

CONCLUSION

We have discussed a possible scenario in which a newly formed magnetar is left over after a GRB explosion, and explored the hypothesis of its being subject to a secular bar-mode instability, including in the spin-down the contributions of both magnetic dipole emission and GW losses. Following the analytical treatment of Lai & Shapiro (1995), we have shown that, for reasonable values of the physical parameters, the typical properties of GRB afterglow X-ray plateaus may be reproduced. A consequence of this is that, on the relatively long timescale of 10^3−10^4 s of the electromagnetic plateau, the advanced LIGO/Virgo interferometers may detect a corresponding GW signal up to d ∼ 100 Mpc by carrying out matched searches. Such a signal would be associated with an afterglow light-curve plateau from a long sub-luminous GRB, or from a short GRB, with isotropic energy ∼10^50 erg, which is typical of most nearby GRBs detected. For the more energetic GRBs, a bar-mode GW signal may be detected without a visible plateau in the afterglow. In conclusion, although there are considerable uncertainties about the evolutionary path of newborn magnetars, our analysis indicates that the scenario proposed here is a plausible and interesting possibility, leading to an efficient GW emission process which is accompanied by a distinctive electromagnetic signature. Thus, in view of the impending commissioning of advanced LIGO and Virgo, we consider that it would be highly worthwhile to test this possibility through matched electromagnetic-GW data searches.

We are grateful to Benjamin Owen for important comments and valuable suggestions on this scenario, and for helping improve the manuscript. AC thanks Fulvio Ricci for crucial support during this project, Giovanni Montani for important discussions, Cristiano Palomba for very helpful suggestions, and Christian D. Ott for useful comments. This work was supported by the "Fondazione Angelo della Riccia" - bando A. , and by NSF PHY-0757155 & NASA NNX08AL40G (PM). AC gratefully acknowledges the support of the Penn State Institute for Gravitation and the Cosmos (IGC).
2009-08-21T10:17:03.000Z
2009-07-14T00:00:00.000
{ "year": 2009, "sha1": "3829bc960325ab6668a7fd292ae54f4d8d4fbcba", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/0004-637X/702/2/1171/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "3829bc960325ab6668a7fd292ae54f4d8d4fbcba", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
224471993
pes2o/s2orc
v3-fos-license
DERMATOGLYPHIC METHODOLOGY FOR ESTABLISHING ZYGOSITY IN THE TWINS

Dermatoglyphic traits (DT) of the palms are an instrument used in anatomy, anthropology, genetics and forensic services. DT can also be applied in medicine and, in part, to the prevention of diseases, early diagnosis and the establishment of important characteristics about life. The principal purpose of this study is to establish a dermatoglyphic instrument which can help to establish the zygosity of twins via experimentation with different quantitative and qualitative dermatoglyphic indicators. The materials for investigation are finger and palm prints from both hands of 21 couples of monozygotic twins (MT) and 22 couples of dizygotic twins. From the experiment, we can conclude that it is well worth prescribing key parameters which direct the attention of doctors and anthropologists to the dermatoglyphic traits that bear on zygosity. This paper presents a cheap and reliable method for establishing zygosity, via an algorithm based on dermatoglyphic indicators in combination with blood group analysis.

INTRODUCTION

Dermatoglyphic traits (DT) of the palms are an instrument used in anatomy, anthropology, genetics and forensic services. DT can also be applied in medicine and, in part, to the prevention of diseases, early diagnosis and the establishment of important characteristics about life. There are investigations into diagnostic methods for cancer of the mammary gland, schizophrenia, breast cancer and many other diseases [1,2]. There is an interest in the so-called twin method. Analyses by other researchers on this issue show that twins are studied by versatile methods for their morphological and functional characteristics. One part of the twin method is dermatoglyphic analysis, which uses qualitative and quantitative variables. Using qualitative indicators, it has been established that palm prints are subject to polygenic inheritance, whereas the quantitative indicators of the palm prints are dermatoglyphic indicators which allow for the opportunity to investigate descent and similarity. A literature review shows that twin studies are not that frequent, one reason being the challenge of obtaining anthropological material. Most twin studies examine physical development and the reasons for multiple pregnancies [3]. Chen Lin Chang and Imaizumi have made multiple studies of the influence of age on, and the consistency of, multiple pregnancies: they concluded that the frequency of multiple pregnancies increases with age, and examined the influence of estrogen hormones and bromocriptine, the distribution of the frequency of encountering 2, 3 or 4 twins in certain age groups [4], and the influence of abortions and contraceptive medicine on the birth of twins. Nowadays, the noninvasive prenatal determination of twin zygosity from maternal plasma (DNA fragments) gives the opportunity to detect zygosity and to compare it with the dermatoglyphic traits of the twins [5]. The main aim of the present investigation, however, is to establish a dermatoglyphic instrument that would help establish the zygosity of twins, via experimenting with different quantitative and qualitative dermatoglyphic indicators.
MATERIALS AND METHODS

This study uses dermatoglyphic methods: dactyloscopy - finger papillary patterns and quantitative ridge counts [6]; palmoscopy - palm patterns over the hypothenar and thenar, main palm lines, and the atd and dat angles [6]; and analysis of the relationships between zygosity and the order of twin birth. The pattern type of the papillary fingerprints is determined using the indices of Dankmeijer (A/W × 100) and Furuhata (W/L × 100), whereas the intensity of the pattern is determined via the delta index Dl10 = (L + 2W)/10. The ridge count is quantified by counting the ridges which cross or touch Galton's line, which joins the central point of the triradius with the centre of the corresponding pattern. The principal purpose of this study is to establish a dermatoglyphic instrument which can help to establish the zygosity of twins via experimentation with different quantitative and qualitative dermatoglyphic indicators. The materials for investigation are finger and palm prints from both hands of 21 couples of monozygotic twins (MT) and 22 couples of dizygotic twins. The finger and palm prints were obtained by the standard method. The fingerprinting was done by covering the palmar surface of the hand with typographic ink, using a glass plate and a roller. The palm printing process was carried out in the standard way, meaning that all movements related to the printing were assisted by the investigator. A rolling technique was used, starting from the first finger of the right hand and finishing with the fifth finger of the right hand. The palmar surface was spread with typographic ink, and the palm print was recorded after the coated palm was placed on a white sheet of paper. The white paper was placed on a convex cylindrical surface. This method ensures a complete printing of the palm surface, including the central part of the palm. The diagnostics of the dermatoglyphic investigations were carried out with the help of a binocular magnifying glass.

RESULTS

The results can be placed in two groups - qualitative and quantitative. In relation to the qualitative indicators, it was established that among the fingerprints the loop (L) predominates, with a higher percentage in the monozygotic twins: 62.27% in MT vs 52.37% in DT. Radial loops occurred in 3.18% of MT, whereas in DT in 1.43%. In relation to the distribution of the loops, it was concluded that there is a prevalence of loops on the 5th finger: 87.5% in MT and 69.04% in DT. Whorls (W) of the fingerprints amounted to 29.77% in MT vs 40.47% in DT. In relation to the main palm lines, a difference was observed in the manner of termination of line D between MT and DT; the terminations of the main palm lines in the twins are displayed in a table. In summary, the quantitative indicators assessed via parametric and nonparametric methods that show a correlation (link) with zygosity, in relation to the left and right hand, are as follows: five indicators of the left hand (ΣRT I-V sin, RT I sin, RT III sin, RT I-V sin, RT V sin), five indicators of the right hand (ΣRT I-V dex, ΣRT d-a dex, RT b-a dex, RT II dex, RT I-V dex) and two total indicators (TRC a-d, TRC I-V).

DISCUSSION

According to the twin method, during development every characteristic depends both on inheritance and on the surrounding environment. Every individual has a certain genotype. This genotype determines the range within which the individual can develop, whereas the resulting phenotype will develop based on the external factors under which development takes place.
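The three pattern indices used in the methods above are simple ratios of the per-subject counts of arches (A), loops (L) and whorls (W); a minimal sketch follows, where the example counts are invented for illustration.

```python
def dankmeijer_index(A, W):
    """Dankmeijer index: (arches / whorls) x 100."""
    return A / W * 100.0

def furuhata_index(W, L):
    """Furuhata index: (whorls / loops) x 100."""
    return W / L * 100.0

def delta_index(L, W, n_fingers=10):
    """Pattern-intensity delta index Dl10 = (L + 2W) / 10: one triradius
    is counted per loop and two per whorl, over ten fingers."""
    return (L + 2 * W) / n_fingers

# Hypothetical subject with 2 arches, 6 loops and 2 whorls over 10 fingers:
A, L, W = 2, 6, 2
print(dankmeijer_index(A, W), furuhata_index(W, L), delta_index(L, W))
# -> 100.0 33.33... 1.0
```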
The task of the twin method is to establish the relative roles of inheritance factors and environmental factors in the variability of different characteristics, in other words, to what extent the variation of characteristics among individuals depends on genetic differences versus changes in the external environment. This method is used in medicine in the study of inherited predisposition to different diseases, for example Down's syndrome, cerebral gigantism, diabetes, schizophrenia, chromosomal anomalies and others [8,9]. The essence of the twin method consists in comparing the intra-pair differences in MT and DZ by way of the "similarity method", on the basis of the genetic uniformity of MT. From the extent to which the variation depends on genetic differences, the heritability of a trait can be calculated. When establishing the zygosity of twins, the loops play a role, more specifically the ulnar loops of the fifth finger in MT, and the presence of radial loops on the second finger of MT. Loops are the typical phenomenon in MT, while for DZ whorls are characteristic, especially on the 4th finger; in DZ, the whorl is also well expressed on the 5th finger. Among the main palm lines, the ending of line D is informative: in MT, termination in field 11 of the right hand is observed in over 75% of cases, whereas in dizygotic twins termination in field 9 is common. The palm triradii also show a significant difference, as the t't variant and the complete absence of a triradius occur only in MT. CONCLUSION Characterising zygosity via dermatoglyphics has great significance for the twin method, which is applied in the study of inheritance in anatomy, medicine, genetics, anthropology and criminology. From this study we conclude that it is well worth establishing key parameters that direct the attention of physicians and anthropologists towards the papillary patterns that bear on zygosity. We also note that the qualitative parameters bear more strongly on zygosity, since they show greater differences between MT and DZ. We propose the following algorithm of dermatoglyphic indicators bearing on zygosity, sketched in code below. The profile of MT corresponds to the following indicators: the presence of ulnar loops on the fifth finger; absence of whorls or, if present, location on the fourth finger; loops prevailing on the fingers of the right hand; presence of a combined t't triradius or complete absence of palm triradii; and main palm line D ending in the 11th or 12th field of the right hand. Dermatoglyphic indicators are an interesting and fast way of establishing zygosity but must be combined with blood group analysis. This article aims to direct future investigators towards the described parameters and to introduce an algorithm for determining zygosity, as there is a need for further investigation in this field.
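A minimal sketch of the proposed screening algorithm; all field names are our own hypothetical labels, and this is an illustration of the decision rule, not a validated instrument. Per the authors, a positive dermatoglyphic profile must still be confirmed with blood group analysis.

def fits_mt_profile(twin):
    """Return True if a twin pair matches the proposed MT dermatoglyphic profile."""
    checks = [
        twin.get("ulnar_loops_finger5", False),              # ulnar loops on 5th finger
        (not twin.get("has_whorl", False))                   # whorls absent, or ...
            or twin.get("whorl_on_finger4", False),          # ... located on 4th finger
        twin.get("loops_prevail_right_hand", False),
        twin.get("combined_tt_triradius", False)
            or twin.get("palm_triradii_absent", False),
        twin.get("line_D_end_field", 0) in (11, 12),         # line D ending, right hand
    ]
    return all(checks)

def likely_monozygotic(twin, blood_groups_match):
    # Dermatoglyphics alone is only suggestive; the authors require that it be
    # combined with blood group analysis before concluding monozygosity.
    return fits_mt_profile(twin) and blood_groups_match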
2020-10-19T08:38:55.216Z
2020-09-17T00:00:00.000
{ "year": 2020, "sha1": "ea4bfb05969ff06beaa74a2cb8b5d635303e54a0", "oa_license": "CCBYSA", "oa_url": "https://www.journal-imab-bg.org/issues-2020/issue3/2020vol26-issue3_3313-3316.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ea4bfb05969ff06beaa74a2cb8b5d635303e54a0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
119101862
pes2o/s2orc
v3-fos-license
Antiferromagnetic ordering and disappearance of pseudogap within the vortex core of Tl_2Ba_2CuO_{6+\delta} Spatially-resolved NMR is used to probe the magnetism in and around the vortex core of nearly optimally-doped Tl_2Ba_2CuO_{6+\delta} (T_c = 85 K). The NMR relaxation rate T_1^{-1} at the ^{205}Tl site, at which antiferromagnetic (AF) fluctuations can be monitored sensitively, provides direct evidence that the AF spin correlation is significantly enhanced in the vortex core region. In the core region the Cu spins show a local AF ordering with moment ∼0.1 µ_B parallel to the layers at T_N = 20 K. Above T_N the core region is in a paramagnetic state reminiscent of the state above the pseudogap temperature (T* ≃ 120 K), indicating that the pseudogap disappears within the core. The relation between superconductivity and magnetism has been a central issue in the physics of high-temperature superconductors (HTSC). In HTSC the superconductivity appears when carriers are doped into the mother compounds, which are antiferromagnetic (AF) Mott insulators. It is well established that strong AF fluctuations play a crucial role in determining many physical properties of the normal state of HTSC. Recently, the influence of the AF fluctuations in the superconducting state has been attracting much attention. In particular, how the antiferromagnetism emerges when the d-wave superconducting order parameter is suppressed is a fundamental problem in HTSC [1,2,3]. In this respect, the microscopic structure of the vortex core, a local normal region created by destroying the superconductivity with a magnetic field, turns out to be a very interesting subject, especially since many unexpected behaviors have been observed experimentally. In conventional s-wave superconductors, the quasiparticles (QPs) are confined by the isotropic pair potential and form the bound states of Caroli, de Gennes and Matricon [4]. However, the core structure of HTSC with d_{x^2-y^2}-wave symmetry is expected to be very different from that of ordinary superconductors, because the pair potential goes to zero along certain crystal directions. At an early stage, the vortex core structure of HTSC had been discussed within the framework of the "semiclassical" approximation, a direct extension of the s-wave vortex core structure [5]. However, recent high-resolution STM experiments have revealed many unexpected properties in the spectrum of the vortex core which are fundamentally different from these semiclassical d-wave vortex cores [6]. A new class of theories has emphasized the importance of the magnetism arising from the strong electron correlation in accounting for the microscopic vortex core structure [7,8,9,10,11,12]. It is therefore crucial for the comprehension of the vortex state of HTSC to clarify how the AF correlation and pseudogap, which characterize the magnetic excitations in the normal state, appear in and around the vortex core.
Despite extensive studies, little is known about the microscopic electronic structure of the vortices, especially concerning the magnetism. The main reason for this is that although STM experiments can probe the local density of states (DOS) with atomic resolution, they do not directly reflect the magnetism. Recent neutron scattering experiments on La_{2-x}Sr_xCuO_4 have reported that an applied magnetic field enhances the AF correlation in the superconducting state [13,14]. These results were interpreted in terms of the competition between superconductivity and static AF ordering (or spin density wave, SDW) [3]. However, the relation between the observed static AF ordering and the magnetic excitations within the vortex core is still not clear, because the neutron experiments lack spatial resolution. Recent experimental [15,16,17] and theoretical [18,19,20] NMR studies have established that the frequency dependence of the spin-lattice relaxation rate T_1^{-1} in the vortex state serves as a probe of the low-energy excitation spectrum that can resolve different spatial regions of the vortex lattice. Up to now, however, all of these spatially-resolved NMR measurements have been carried out at the ^{17}O sites [15,16,17], at which the AF fluctuations are filtered out because the O atoms are located midway between neighboring Cu atoms with antiparallel spins [22]. In this Letter we provide local information on the AF correlation in the different regions of the vortex lattice, extending the measurements to the vortex core region, by performing spatially-resolved NMR imaging experiments on ^{205}Tl nuclei in nearly optimally-doped Tl_2Ba_2CuO_{6+δ}. This approach is particularly suitable for the above purpose because T_1^{-1} at the Tl site, ^{205}T_1^{-1}, can monitor AF fluctuations sensitively. Quite generally, 1/T_1 is expressed in terms of the dynamical susceptibility as 1/T_1 = (2γ_n^2 k_B T / (γ_e ℏ)^2) Σ_q |A_q|^2 χ''(q, ω_0)/ω_0, where γ_n is the nuclear (and γ_e the electronic) gyromagnetic ratio, A_q is the hyperfine coupling between nuclear and electronic spins, χ''(q, ω) is the imaginary part of the dynamical susceptibility, and ω_0 is the Larmor frequency. Because the Tl atoms are located just above the Cu atoms and there are large transferred hyperfine interactions between the Tl and Cu nuclei through the apical oxygen (see the inset of Fig. 1), Tl sees the full wave-vector spectrum of the Cu spin fluctuations; ^{205}T_1^{-1} is dominated by χ(q) at q = (π, π), i.e., AF fluctuations. This should be contrasted with the O sites, at which χ(q) is dominated by uniform fluctuations at q = (0, 0). On the basis of the NMR imaging, we have been able to establish clear evidence of an AF vortex core state in HTSC. NMR measurements were carried out on c-axis oriented polycrystalline powder of high-quality Tl_2Ba_2CuO_{6+δ} (T_c = 85 K) in an external field along the c-axis. The ^{205}Tl spin-echo signals were obtained with a pulse NMR spectrometer. A very sharp spectrum (∼50 kHz) was observed above T_c. The spectrum becomes broad below T_c due to the development of vortices. The solid line in Fig. 1 depicts the NMR spectrum at 5 K measured under the field-cooling condition (FCC) in a constant field (H_0 = 2.1 T). We stress here the importance of measuring under FCC, because the Bean critical current associated with sweeping H not only produces a field gradient in the crystal but also seriously influences T_1 by producing a shift of the QP energy spectrum. The spectrum was obtained by convolution of the respective Fourier-transform spectra of the spin-echo signals, measured in increments of 50 kHz.
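The contrast between the Tl and O sites can be made explicit with the standard hyperfine form-factor argument; the following is a textbook sketch under the usual transferred-hyperfine assumptions, not reproduced from this paper.

% Hyperfine form factor A_q = A \sum_i e^{i q \cdot r_i}, summed over the Cu
% neighbours coupled to the probe nucleus.
% Tl sits directly above a single Cu, so the form factor is featureless:
\[
  |A_q^{\mathrm{Tl}}|^2 = A^2 \quad \text{for all } q .
\]
% O sits midway between two Cu atoms separated by a along x:
\[
  A_q^{\mathrm{O}} = A\,\bigl(1 + e^{i q_x a}\bigr)
  \;\Longrightarrow\;
  |A_q^{\mathrm{O}}|^2 = 4A^2 \cos^2\!\Bigl(\frac{q_x a}{2}\Bigr),
\]
% which vanishes at the antiferromagnetic wave vector q = (\pi/a, \pi/a).
% Hence the O-site 1/T_1 filters out exactly the AF fluctuations that
% dominate the ^{205}Tl relaxation.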
A clear asymmetric pattern of the NMR spectrum, which originates from the local field distribution associated with the vortex lattice (the Redfield pattern), is observed below the vortex lattice melting temperature (∼60 K at H_0). The local field profile in the vortex state is obtained by approximating H_loc(r) with the London result, H_loc(r) = H_0 Σ_G e^{iG·r} e^{-ξ_ab^2 G^2/2} / (1 + λ_ab^2 G^2), where G is a reciprocal vector of the vortex lattice, |r| the distance from the center of the core, ξ_ab the in-plane coherence length, and λ_ab the in-plane penetration length. The thin solid line in Fig. 1 depicts the histogram of the local field, n(H) ∝ ∫_Ω δ(H − H_loc(r)) d^2r, where Ω is the magnetic unit cell. In the calculation we used ξ_ab = 18 Å and λ_ab = 1700 Å, and assumed a square vortex lattice. The upper inset shows an image of the field distribution in the vortex lattice. The magnetic field is lowest at the center of the vortex square lattice (C-point) and highest at the center of the vortex core (A-point). The intensity of the histogram shows a peak at the field corresponding to the saddle point (B-point). The real spectrum broadens due to the imperfect orientation of the powder. The red dotted line represents the spectrum convoluted with a Lorentzian broadening function, f(H_loc) = σ/(4H_loc^2 + σ^2), using σ = 48 kHz. The theoretical curve reproduces the data well except in the high-frequency region, where the deviations become significant; we will discuss this high-frequency tail later. The spectrum shown in Fig. 1 demonstrates that the NMR frequency depends on the position within the vortex lattice. We can therefore obtain spatially-resolved information on the low-energy excitations by analyzing the frequency distribution of the corresponding NMR spectrum. For ^{205}Tl with nuclear spin I = 1/2, the recovery curve of the nuclear magnetization M(t) fits well to a single-exponential relation, R(τ) = (M(∞) − M(τ))/M(∞) = exp(−τ/T_1), in the normal state. In the vortex state, on the other hand, the shape of the recovery curves is strongly position dependent. Figure 2 and its inset display the recovery curves as a function of time τ at the saddle point (B-point in Fig. 1, filled circles) and at the vortex core (A-point in Fig. 1, open circles). The procedure to determine T_1 is as follows. The spin-echo intensities are measured as a function of τ after saturation pulses. The nuclear magnetization recovery curve as a function of τ is then obtained from each frequency component of the Fourier-transform spectra. We obtained the data set of the recovery intensity for each frequency point at 28 kHz intervals, with a Gaussian weight function of σ = 10 kHz. There are two distinct features. First, the decay at the center of the core is much faster than at the saddle point. Second, while the recovery curves are single-exponential at the saddle point, they show a √τ dependence in the core region; we will discuss this √τ dependence later. In what follows, we define T_1 as the time required for the nuclear magnetization to decay by a factor 1/e, in order to define T_1 uniquely for either decay curve. The red filled circles in Fig. 1 display the position dependence of T_1^{-1} obtained in this way. [Fig. 3 caption: filled circles represent the data at the vortex core (A-point in Fig. 1) and open circles the data at the frequency corresponding to the saddle point (B-point in Fig. 1). In (a), T* ≃ 120 K is the pseudogap temperature, and the dotted line represents the Curie-Weiss law determined above T*. In (b), T_N is the temperature at which ^{205}T_1^{-1} at the core, (^{205}T_1^{core})^{-1}, exhibits a sharp peak, which turns out to be the Néel temperature within the vortex core.]
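As a numerical aside to the field-distribution analysis above (our own sketch, not the authors' analysis code), the Redfield pattern can be reproduced by evaluating the London field on a grid over the magnetic unit cell and histogramming it; ξ_ab and λ_ab follow the text, while the grid size and reciprocal-lattice cutoff are illustrative choices.

import numpy as np

# London field of a square vortex lattice with a Gaussian core cutoff,
# evaluated over one magnetic unit cell; the histogram of H is the
# Redfield pattern.
xi, lam = 18e-10, 1700e-10        # in-plane coherence / penetration length (m)
H0 = 2.1                          # applied field in T (field-cooled value)
phi0 = 2.07e-15                   # flux quantum (T m^2)
a = np.sqrt(phi0 / H0)            # square-lattice constant (~314 A here)

x = np.linspace(0.0, a, 128, endpoint=False)
X, Y = np.meshgrid(x, x)
H = np.full_like(X, H0)           # G = 0 term is the average field
n = 8                             # reciprocal-lattice cutoff
for h in range(-n, n + 1):
    for k in range(-n, n + 1):
        if h == k == 0:
            continue
        Gx, Gy = 2 * np.pi * h / a, 2 * np.pi * k / a
        G2 = Gx**2 + Gy**2
        H += H0 * np.cos(Gx*X + Gy*Y) * np.exp(-0.5 * xi**2 * G2) / (1.0 + lam**2 * G2)

counts, edges = np.histogram(H.ravel(), bins=100)  # the Redfield pattern
print(f"field spans {H.min():.4f} to {H.max():.4f} T")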
Part of this spatial variation of T_1^{-1} can be attributed to the local DOS produced by a Doppler shift of the QP energy spectrum by the supercurrents around the vortices [23]. Therefore, the remarkable enhancement of ^{205}T_1^{-1} provides direct evidence that the AF correlation is strongly enhanced near the vortex core region. The decrease of ^{205}T_1^{-1} well outside the core when going from point C to B was also reported in ^{17}O NMR studies [16] and in YBa_2Cu_4O_8 [17]. This phenomenon has been discussed in terms of the suppression of the AF fluctuations by the Doppler-shifted QP DOS [17,20]. T_1^{-1} in the core region, (^{205}T_1^{core})^{-1}, is two orders of magnitude larger than that expected solely from vortex vibration at all temperatures (see Fig. 2 in Ref. [21]); the influence of vortex vibration is therefore negligibly small in the core region. From high temperatures down to about 120 K, (^{205}T_1 T)^{-1} obeys the Curie-Weiss law, (^{205}T_1 T)^{-1} ∝ 1/(T + θ). The lowest T at which this law holds is conventionally called the pseudogap temperature T*. Below T*, (^{205}T_1 T)^{-1} decreases rapidly without showing any anomaly associated with the superconducting transition at T_c, similar to other HTSC [22]. Below 40 K, (^{205}T_1 T)^{-1} is nearly T-independent down to 4 K, due to the DOS induced by impurities in a d-wave superconductor. The T-dependence of (^{205}T_1^{core})^{-1} contains some key features for understanding the core magnetism. [Fig. 4 caption: ∆f increases rapidly below T_N = 20 K; filled circles represent δf, defined as the line width at half intensity; the inset shows the spectrum at 5 K and the definitions of ∆f and δf.] The first important signature is that 1/^{205}T_1^{core} exhibits a sharp peak at T = 20 K, which we label T_N for future reference (Fig. 3(b)). Below T_N, (^{205}T_1^{core})^{-1} decreases rapidly with decreasing T. There are two possible origins for this peak: one is the reappearance of the pseudogap, and the other is the occurrence of a local static AF ordering (or local SDW ordering) in the core region. The sharp peak of ^{205}T_1^{-1} at T_N seems to support AF ordering. The broadening of the Redfield pattern in the high-frequency region, discussed before, gives additional evidence for AF ordering. The open circles in Fig. 4 display the line width of the high-frequency tail, ∆f, defined as the difference between the frequency at the peak intensity and the frequency at which the intensity falls to 1% of the peak. For comparison, we plot as a dashed line the line width calculated from the Redfield pattern (the frequency difference between the A- and B-points in Fig. 1). At high temperatures ∆f agrees well with the calculation, while below T_N = 20 K it becomes much larger. We also plot δf, the line width at half intensity (filled circles). Since δf changes little below T_N, the line broadening occurs only near the core region. We stress that the high-frequency tail below T_N is naturally explained by the transferred hyperfine fields at the Tl site, through the apical oxygen, induced by the AF ordering within the core, which causes the additional broadening. We also point out that the appearance of the local AF ordering is consistent with the √τ-dependent nuclear magnetization decay curve shown in the inset of Fig. 2.
In fact, the √τ dependence has been observed when a microscopically inhomogeneous distribution of T_1^{-1} due to strong magnetic scattering centers is present [24]. On the basis of these results, we are led to conclude that the vortex core region shows local AF ordering at T_N = 20 K; T_N corresponds to the Néel temperature within the core. This AF ordering within the core is consistent with the predictions of recent theories based on the t−J and SO(5) models [7,8,9,10]. The second important signature for the core magnetism is that, as shown by the dotted line in Fig. 3(a), (^{205}T_1^{core} T)^{-1} above T_N lies nearly on the Curie-Weiss line extrapolated from above T*. This fact indicates that the vortex core region appears to be in a paramagnetic state reminiscent of the state above T*; the pseudogap is absent in the core region. We note that the present results seem to be inconsistent with recent theories which predict local orbital currents and in which the pseudogap phenomenon within the core is assumed [11,12]. The present results should also be distinguished from those of the neutron scattering experiments on La_{2-x}Sr_xCuO_4 [13], in which the static SDW coexists with superconductivity even in zero field just below T_c. In the present compound, on the other hand, we do not observe such a static SDW ordering, and the vortex core region is in the paramagnetic state in a wide T-region between T_c and T_N. We finally discuss the spin structure within the core. The broadening occurs only at high frequencies, while it is absent at low frequencies. This indicates that the AF spins are oriented parallel to the CuO_2 layers: if the AF ordering occurred perpendicular to the layers, the broadening would occur on both the high- and low-frequency sides, because in that case the alternating transferred hyperfine fields would be parallel and antiparallel to the applied field. Using the hyperfine coupling constant A_hf = 56 kOe/µ_B, the magnetic moment induced within the core is estimated to be ∼0.1 µ_B. Summarizing the salient features of the spatially-resolved NMR results in the vortex lattice: (1) upon approaching the vortex core, ^{205}T_1^{-1} is strongly enhanced (Fig. 1); (2) near the core region, the NMR recovery curves show a √τ dependence (Fig. 2); (3) (^{205}T_1^{core})^{-1} shows a peak at T = 20 K (Fig. 3); (4) the NMR spectrum near the core region broadens below T = 20 K (Fig. 4). All of these results provide direct evidence that in the vortex core region the AF spin correlation is extremely enhanced and that a paramagnetic-to-AF ordering transition of the Cu spins takes place at T_N = 20 K. We also find that the pseudogap disappears within the core. The present results offer a new perspective on how the AF vortex core competes with the d-wave superconductivity.
2019-04-14T02:00:06.325Z
2002-06-19T00:00:00.000
{ "year": 2002, "sha1": "7e0bbb0eca49bb9b422c916d9170adf7ecdcd65f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c9d976abcb1b7432b30f5c5ab8cf28194e4522c9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
214746497
pes2o/s2orc
v3-fos-license
Comparison Between Processus Vaginalis Sac Tightening Technique and the Conventional Technique in Orchiopexy Surgery Over 10 Years Background Undescended testis (UDT) is a common congenital urogenital anomaly that is treated by orchiopexy. We aimed to introduce the patent processus vaginalis (PPV) sac tightening (PVST) technique and compare it with the conventional technique. Methods We retrospectively studied all UDT patients operated on over 10 years. In the conventional technique, the PPV sac is ligated after being peeled off the spermatic cord. In PVST, the PPV sac was dissected longitudinally from the two sides where its wall was attached to the spermatic cord up to the proximal part; only a narrow thin layer adhering to the spermatic cord was left, and the proximal PPV sac opening was tightened as much as possible with a vicryl suture at the internal inguinal ring level. The significance level was <0.05. Results Of 821 orchiopexies (mean age 24.5±24.2 months), 36.3% were done by the conventional and 63.7% by the PVST technique. Hematoma, edema, hydrocele and wound infection were lower with the PVST technique, but not significantly so (p>0.05). Testicular atrophy and operation time were significantly lower with PVST than with the conventional technique (p<0.001). Conclusion The orchiopexy PVST technique has lower complications and seems to be easier, faster and safer than the conventional technique. Introduction Cryptorchidism or undescended testis (UDT) is one of the most common congenital anomalies of the genitalia and endocrine system in male children [1-4] and the third most common urogenital abnormality affecting primary school children in Iran. 4 Its prevalence is 3.4% to 5.8% in term newborns and up to 30% in preterm infants, falling to 0.8% at one year of age and remaining approximately at that rate until puberty; the prevalence is inversely associated with gestational age in preterm infants and with age in term infants. 1,3-7 Since UDT increases the risk of malignancy and infertility, and the affected testis is more vulnerable to trauma, orchiopexy surgery is the treatment of choice, performed through various laparoscopic or open surgical techniques. 1,2,8,9 These techniques have undergone different modifications in recent years to minimize intra- and post-operative complications, including hernia, testicular atrophy, hydrocele, hematoma and infection. 10 To prevent postoperative hernia, in the conventional (standard) technique it is necessary to ligate the patent processus vaginalis (PPV) sac in the proximal part after it has been peeled off from the other components of the spermatic cord; 7,8 this increases the risk of subsequent damage to the spermatic cord superfine vessels and vas deferens, and of secondary testicular atrophy. 10-12 To prevent this problem, some studies have suggested that the proximal area should be left unligated after peeling the PPV sac off the cord. 1,12,13 However, this method may result in inguinal hernia in the first postoperative days, while peritonealization of the PPV sac is being completed. 14 Given the importance of preventing the aforementioned complications of the techniques detailed above, we designed this study to introduce a new technique, PPV sac tightening (PVST), in which the PPV sac opening in the proximal part is tightened at the internal inguinal ring level instead of the sac being peeled off and ligated. We aimed to compare the intra- and post-operative complications of the conventional and new techniques.
Patients and Methods This retrospective study was conducted in Namazi and Shahid-Faghihi hospitals from 2007 to 2016. Our sample included all UDT patients who underwent non-urgent, elective open orchiopexy surgery using the standard conventional or the new technique, all performed by a single surgeon, the senior pediatric urologist, over the past ten years. We excluded patients with a very short spermatic cord, testicular or spermatic cord torsion, other serious congenital disorders, nubbin or absent testis, a history of previous urologic surgery, or incomplete postoperative follow-up or medical-record information. The remaining patients were divided into two groups: the ligation group (conventional technique) and the tightening group (new technique). Data Collection To collect the required data, we prepared a form with three sections. The first section addressed basic information: age at the time of operation, history of previous surgery, and date of operation. The second section included information about the testis location (based on the post-anesthesia-induction examination, before starting the procedure), surgery time (defined as the time from skin incision to skin closure), the operation side, the type of orchiopexy surgery (scrotal, inguinal, abdominal exploration), and intraoperative complications (PPV sac tearing, vas deferens damage, and damage to the spermatic cord superfine vessels). The third section included information about the postoperative complications at the one-, six- and twelve-month follow-up visits: hematoma, edema, wound infection, testicular atrophy, reascent of the testis, hernia, and hydrocele. Testicular atrophy was defined as a 25% decrease in the volume of the operated testis (measured by orchidometer at the preoperative visit and the postoperative follow-up visits) compared with the contralateral, normally descended testis; therefore, patients with bilateral undescended testes were excluded from that analysis. 15,16 The above data were extracted from the patients' medical records, such as operation report sheets and the medical files recorded in the hospital and clinic before and after the operation. Incomplete information was supplemented through phone calls with the patients' parents. Surgical Technique (Figures 1-4) The patients underwent surgery using the conventional orchiopexy technique from 2007 to 2010 and thereafter using our new technique (PVST). After induction of general anesthesia, palpability and the testis site were checked, and the orchiopexy was done through an inguinal or scrotal incision based on the preoperative examination. Orchiectomy was performed for nubbin testes. Figure 1 shows a schematic cross-sectional view of the spermatic cord and PPV sac in the PVST technique. Conventional Technique (Ligation Group) Upon localization of the testis and spermatic cord, and after cutting the gubernaculum, these structures were released from the inguinal canal walls. The tunica vaginalis was then dissected, and the patent processus vaginalis sac was carefully peeled off from the vas deferens and the spermatic cord superfine vessels up to the level of the internal inguinal ring, where it was clamped. Afterwards, the PPV sac was cut distal to the clamp, and its proximal part was twisted and ligated with a vicryl suture.
PVST Technique (Tightening Group) To prevent possible damage to the spermatic cord superfine vessels and vas deferens, and to avoid tearing the very thin wall of the patent processus vaginalis sac, in this technique, after cutting the gubernaculum and opening the PV, the testis was delivered, the PV was incised up to the closest edge of the spermatic cord structures, and the PV sac was then dissected longitudinally from the sides up to the proximal part, leaving only a narrow thin layer adhering to the spermatic cord (Figure 2). The proximal opening of the PPV sac was then tightened to the extent possible, using a 4-0 vicryl suture at the internal inguinal ring level, without removing the adherent layer on the spermatic cord and its fine structures (Figure 3). To evaluate the adequacy of the tightening for preventing subsequent inguinal hernia, pressure was applied to the lower abdomen; if any abdominal fluid leaked from the tightened proximal opening of the PPV sac, the tightening was increased until no leakage was seen with this maneuver. In the scrotal orchiopexy procedure, performed on lower-lying undescended testes through a scrotal incision, the testis, the spermatic cord and the PPV sac were pulled out of the external inguinal ring by applying sufficient traction, the procedures described above were carried out on the PPV sac, and the traction was then released and the structures retracted back into the canal. The subsequent steps of lowering the testis and fixing it to the scrotal wall were common to all of the techniques detailed above and were similar to the standard technique. The steps of the PVST technique are presented in Figure 4. Statistical Analysis The collected data were analyzed using SPSS software (version 18). Qualitative data were expressed as number and percentage and analyzed with the Chi-square test. Quantitative data were expressed as mean and standard deviation and analyzed using the t-test. The significance level was set at less than 0.05. Results The participants in this study were 643 boys aged 6 months to 12 years, with a mean age of 24.5±24.2 months, who underwent surgery with the primary diagnosis (impression) of UDT. Since 225 of the patients had bilateral UDT, a total of 868 operated testes were reviewed in this study. Of the testes analyzed, 450 (51.8%) were bilateral UDT; of the unilateral ones, 196 (22.6%) were on the right side and 222 (25.6%) on the left side. In addition, 679 testes (78.2%) were palpable after induction of anesthesia, of which 39 (4.5%) were diagnosed as nubbin and underwent orchiectomy. No testis was found after inguinal and abdominal exploration in 8 (0.9%) cases. After excluding the orchiectomy cases and those with absent testis, a total of 821 cases were assessed in the study: 298 (36.3%) underwent surgery with the conventional technique (ligation group) and 523 (63.7%) with the PVST technique (tightening group). Table 1 compares the two groups in terms of postoperative complications and the different preoperative testis positions and surgical approaches. As can be seen, reascent and hernia were not observed in either group. In addition, there were no significant differences between the groups in the postoperative complications (p>0.05), except for testicular atrophy.
In order to assess testicular atrophy, the most important postoperative complication, and because of the need to compare the size of the affected operated testis with the normally descended opposite one, the patients who underwent orchiopexy for bilateral UDT were excluded from this analysis. Postoperative testicular atrophy occurred significantly more often with the conventional than with the PVST technique: 7 (2.3%) cases compared with 2 (0.4%) cases, respectively (p=0.030). No significant difference in postoperative complications was seen between the different preoperative testis positions or surgical approaches (scrotal, inguinal, abdominal) in either group. The mean operation time was 22.8±3.5 min with the conventional and 17.8±2.8 min with the PVST technique (Table 3). Discussion In standard orchiopexy, peeling the PPV sac off the other subtle structures of the spermatic cord and cutting and ligating its most proximal part to prevent a hernia are recommended as necessary steps of the procedure. 7,8 However, there appears to be a risk of spermatic cord damage (to its vessels and vas deferens), especially at younger ages, which may result in delayed secondary testicular atrophy. 10,12 There is a further risk of tearing and retraction of the PPV sac wall, which increases the surgery time for repair. 1 Therefore, in order to prevent these complications, some studies have proposed the san ligation technique, in which the proximal part of the PPV sac is left unligated after the sac is peeled off the spermatic cord. 1,12,13 Many researchers who have proposed the san ligation technique believed that metamorphosis of the mesodermal cells leads to peritonealization of the PPV sac opening. It should be noted, however, that peritoneal repair of the defect at the PPV sac opening starts after 48 hrs and takes 2-3 weeks to be completed. 14 During this period, especially within the first 48 hrs, one cannot be assured that a hernia will not develop through the PPV sac opening; 1,12,13 there are reports of postoperative hernia when the sac opening is left open, and in one reported case an incarcerated hernia led to intestinal resection within a few days of the operation. 6,17 Given the potential risk of postoperative hernia when the PPV sac opening is not closed, and considering the pathology of peritoneal repair, the time needed for spontaneous closure of the defect, and the reports of postoperative hernia, especially when the PPV sac has a wide opening and the defect is large, the san ligation technique cannot be recommended with full confidence. The mean age of our patients was close to the age recommended for surgery in the related studies (under 1-2 years), 6 and the patients operated on at a higher age were those with delayed diagnosis and referral from the primary medical centers. To prevent the complications associated with both the ligation and san ligation techniques, we suggest the PVST technique, in which the PPV sac is dissected longitudinally from the two sides where its wall is attached to the spermatic cord up to the proximal part, and the proximal PPV sac is tightened to the extent possible. This can largely prevent the damage to the subtle structures of the spermatic cord that can result from peeling the PPV sac off the spermatic cord and ligating its proximal opening in the conventional standard technique.
Unlike the san ligation technique, PVST also closes the PPV sac opening and prevents the occurrence of a secondary hernia. Thus, to assess the safety of our technique, we compared the postoperative complications among patients who had undergone surgery using the conventional ligation (standard) technique and the proposed tightening technique over ten years. Testicular reascent, as a postoperative complication, was not seen in either of our groups; this was expected, because in both surgical techniques we fixed the testis to the scrotal wall. 1,6,8 Another postoperative complication that could potentially occur if the PPV sac opening were not closed is a secondary hernia; it was not seen in either group, owing to the closure of the PPV sac opening in both, in line with the results of other studies. 1,8 The occurrence rates of the other postoperative complications, including hydrocele, hematoma, edema and infection of the operation site, were lower with the new technique; however, the differences were not significant, as in previous studies. 1,10 The current results showed that testicular atrophy in the conventional group was similar to previous studies. 8,16,18 Its rate was significantly lower with our PVST technique, which may support our hypothesis about the safety of the tightening technique in preventing damage to the subtle structures of the spermatic cord, especially its superfine vessels. There was no significant difference in the mean age of the patients with postoperative complications between the two groups. The duration of PVST surgery, the new technique, was significantly shorter than that of the conventional technique. Given the ease of implementing the introduced technique compared with the standard technique, and the elimination of the time-consuming complete peeling of the PPV sac off the spermatic cord, the proposed technique may reduce the operation time compared with the conventional technique. Altogether, the proposed PVST technique is easier to perform than the conventional sac ligation technique; if its safety is confirmed in subsequent prospective studies, it would appear to have a shorter learning curve than the standard technique and could be used easily, even by less experienced general urologists, without considerable complications. Limitation This is a retrospective study, so it was impossible to capture some complications, such as tearing of the PPV sac wall or damage to the spermatic cord superfine vessels. Also, due to shortcomings of the records, we lacked data on some postoperative follow-up visits. In addition, Marcaine was administered at the surgical site as local anesthesia to reduce pain, so pain could not be assessed. Conclusion Based on the results of this study, the PVST technique appears to be safer, with fewer complications, and easier and faster to perform than the standard orchiopexy technique; this technique is therefore recommended, provided prospective studies confirm these results. Ethical Approval This study was conducted in accordance with the Declaration of Helsinki and approved by the local ethics committee of Shiraz University of Medical Sciences (IR.sums.med.rec.1397.117) on June 2, 2018. Written informed consent was obtained from a parent of each participant.
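As a rough cross-check of the two headline comparisons reported above (a sketch using scipy, not the authors' SPSS analysis): the post-exclusion atrophy denominators are not restated in the text, so the group sizes used here are our assumptions taken from the full group counts.

from scipy import stats

# Testicular atrophy, 2x2 table [atrophy, no atrophy] per technique;
# denominators assumed close to the full group sizes (7/~300 = 2.3%,
# 2/~520 = 0.4%).
table = [[7, 298 - 7], [2, 523 - 2]]
odds_ratio, p_atrophy = stats.fisher_exact(table)
print(f"atrophy: Fisher exact p = {p_atrophy:.3f}")  # same order as reported p = 0.030

# Operation time from the reported summary statistics (mean, SD, assumed n).
t, p_time = stats.ttest_ind_from_stats(22.8, 3.5, 298, 17.8, 2.8, 523,
                                       equal_var=False)
print(f"operation time: Welch t = {t:.1f}, p = {p_time:.2g}")  # p << 0.001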
2020-03-19T10:20:40.721Z
2020-03-18T00:00:00.000
{ "year": 2020, "sha1": "6115b052af1c108cb16ed50dfb2a9d0948b6905e", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=56877", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "630f5d8508f575881dc620fbe8a6908f08bb5440", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55780114
pes2o/s2orc
v3-fos-license
The Effect of Exhaust Fumes on Glutathione S-Transferase Enzymes in the Lung of Rats Supplemented with Natural Products This study examined the effect of exhaust fumes on the lungs and the impact of dietary supplementation with natural products containing cancer chemopreventive agents in attenuating their effect. Thirty-two rats were divided into eight groups of four rats each. Groups 1-3 were on a non-supplemented diet and exposed to exhaust fumes from a generator for 5 min, 1 h and 2 h, respectively, at a distance of 2.5 m from the generator. Groups 4-6 were fed the supplemented diet and exposed to exhaust fumes for 5 min, 1 h and 2 h, respectively, at a distance of 2.5 m from the generator. Group 7 was a control not exposed to exhaust fumes and fed the diet supplemented with natural products. Group 8 was a control not exposed to exhaust fumes and not on the supplemented diet. Normal cellular architecture was observed in the supplemented control group as in the non-supplemented control group, indicating that tissue integrity was not compromised by food supplementation. However, large deposits of dark spots were seen in the lungs of the non-supplemented groups exposed for 1 h and 2 h. The lungs also showed a significant decrease in the glutathione S-transferase (GST) level on exposure for 5 min, 1 h and 2 h (p<0.05) compared with the respective control groups. It was also observed that the level of malondialdehyde (MDA) increased significantly (p<0.05) in the non-supplemented groups compared with their control groups. The combination of natural products significantly reversed the effect of exhaust fumes on the GST level (p<0.05), and the MDA level in the supplemented exposed groups did not differ significantly from the controls (p>0.05), in contrast to the non-supplemented groups. Supplementation of the diet with natural products had no adverse effect on the integrity of the tissues under examination, as demonstrated by histochemical analysis. Hence, a combination of natural dietary products may provide a useful preventive measure against the tissue injury consequent to the exhaust-fume exposure experienced in our houses and on our roads.
INTRODUCTION The erratic state of the electricity supply in Nigeria has become a great concern to every Nigerian. The situation worsens every day, as most Nigerians go without power supply for weeks and perhaps even months. This has hampered the nation's economic development, as industries, companies, corporate organizations and private establishments have to rely on generators (Emeka, 2008). Regular power supply is the prime mover of technological and social development; there is hardly any enterprise, or indeed any aspect of human development, that does not require energy in one form or another. Nigeria is richly endowed with various energy sources: crude oil, natural gas, coal, hydro-power, solar energy and fissionable materials for nuclear energy. Yet the country consistently suffers from energy shortage, a major impediment to industrial and technological growth. The National Electric Power Authority (NEPA), a government parastatal, has the sole responsibility for managing the generating plants as well as the national distribution of power. The total generating capacity is about 3000 MW, approximately one-third of the current level of national demand (Ajanaku, 2007; Adegbamigbe, 2007). However, the actual power available at any given time is less than 40% of the total capacity, due to poor maintenance; hence, there is a perennial shortage (Ajanaku, 2007; Adegbamigbe, 2007). This situation is exacerbated by a grossly inefficient and poorly maintained distribution system. Industry can only cope with power outages by resorting to internal generating plants (Ajanaku, 2007; Adegbamigbe, 2007). It is disheartening that about 60% of the country still has no access to electric power supply (UNDP, 2001; Ajanaku, 2007; Adegbamigbe, 2007). Libya, with a population of only 5.5 million, has a generating capacity of 4,600 megawatts, approximately the same as Nigeria, which has a population of about 140 million (Odjaka, 2006; Ubani, 2012). Furthermore, South Africa, with a population of only 44.3 million, has a generating capacity of 45,000 megawatts, almost eleven times the generating capacity of Nigeria, which has three times the population of South Africa (Agbo, 2007). Studies and experience have shown that power generation in the country has been dismal, unable to compare even with what is obtainable in smaller African countries. A recent survey of power distribution to the industrial sector in Nigeria showed that the average power outage in the sector increased from 13.3 h per day in January 2006 to 14.5 h in March 2006 and, in a worsening trend, to 16.48 h per day in June. In other words, power distribution to the industrial sector in June 2006 averaged 7.52 h per day (Odjaka, 2006).
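The capacity figures quoted above translate into a stark per-capita gap; a quick sketch of the arithmetic, using only the numbers given in the text:

# Per-capita generating capacity from the figures quoted above.
countries = {
    "Nigeria":      (3000.0, 140.0),   # (capacity in MW, population in millions)
    "Libya":        (4600.0, 5.5),
    "South Africa": (45000.0, 44.3),
}
for name, (mw, pop_millions) in countries.items():
    w_per_person = (mw * 1e6) / (pop_millions * 1e6)
    print(f"{name}: {w_per_person:,.0f} W of installed capacity per person")
# Nigeria comes out around 21 W per person, versus roughly 840 W for Libya
# and about 1000 W for South Africa.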
Hence, the industrial sector, business owners and households sought alternative sources of electricity through the use of generators (Ajanaku, 2007). In densely populated Nigerian cities, generators are placed on doorsteps and on top of houses, and people live in these conditions for years. The exhaust fumes produced by these generators contain hydrocarbons and other toxic substances, such as oxides of carbon, nitrogen and sulphur, lead and the carcinogen benzo(a)pyrene, which pose serious health problems (Timbrell, 1991). Deaths following the use of generators at home occur daily because of exhaust-fume inhalation (IPCS et al., 1999). Similarly, traffic congestion in our major cities has led to acute exposure to exhaust fumes at traffic stop points. Hence, this study attempted to evaluate the damage caused by exposure to exhaust fumes at different time intervals, using the rat as a model. MATERIALS AND METHODS Thirty-two male albino rats weighing 220 g on average were obtained from the National Veterinary Research Institute, Vom, Jos. The rats were housed in eight groups of four rats each in the animal house of Biological Sciences, Bayero University, Kano, with a 12-h light-dark cycle. The rats were allowed free access to standard drinking water and feed ad libitum for 7 days to acclimatize. After this period, the animals were subjected to generator exhaust fumes and fed the combined-antioxidant supplemented feed for a total period of 2 months (8 weeks). Grouping of experimental animals: Group 1: rats fed a normal diet and exposed to exhaust fumes 2.5 m from the electric generator for 5 min. Group 2: rats fed a normal diet and exposed to exhaust fumes 2.5 m from the generator for 1 h. Group 3: rats fed a normal diet and exposed to exhaust fumes 2.5 m from the generator for 2 h. Group 4: rats fed a normal diet supplemented with natural products and exposed to exhaust fumes 2.5 m from the generator for 5 min. Group 5: rats fed a normal diet supplemented with natural products and exposed to exhaust fumes 2.5 m from the generator for 1 h. Group 6: rats fed a normal diet supplemented with natural products and exposed to exhaust fumes 2.5 m from the generator for 2 h. Group 7: rats fed a normal diet supplemented with natural products and not exposed to exhaust fumes. Group 8: rats fed a normal diet without dietary supplementation and not exposed to exhaust fumes. (This design is encoded compactly in the sketch below.) Composition of supplemented diet: the dietary supplement comprised a combination of 0.25 g green tea crude extract, 1.0 g onion crude extract and 1.0 g cabbage crude extract per 100 g of normal diet. Preservation of organs and blood serum for analysis: lungs and blood serum of three rats from each group were collected and preserved at -80°C for biochemical analysis; lungs from the other animals in each group were preserved in 10% formalin for histological analysis.
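Compactly, the grouping above is a diet (supplemented or not) by exposure (5 min, 1 h, 2 h, or none) design; here is one possible encoding, our own representation rather than anything from the paper:

# Encode the eight experimental groups: 4 rats each, exposure at 2.5 m.
design = {}
for i, supplemented in enumerate((False, True)):          # groups 1-3, then 4-6
    for j, exposure in enumerate(("5 min", "1 h", "2 h")):
        design[3 * i + j + 1] = {"supplemented": supplemented,
                                 "exposure": exposure,
                                 "distance_m": 2.5,
                                 "n_rats": 4}
design[7] = {"supplemented": True,  "exposure": None, "n_rats": 4}  # unexposed control
design[8] = {"supplemented": False, "exposure": None, "n_rats": 4}  # unexposed control
print(design[5])  # -> supplemented diet, 1 h exposure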
Preparation of onion (Allium cepa) crude extract: fresh red onions were purchased at Sabon Gari market in Kano State, Nigeria. The bulbs were cut into pieces and dried under shade, away from direct sunlight, to avoid damage to their phytochemicals, and the dried onion was powdered with a pestle and mortar. To 40 g of the powdered onion, 250 cm3 of methanol was added; the mixture was shaken for 10 min and allowed to stand for 2 weeks at 4°C. The mixture was then filtered, and the filtrate was placed on a rotary evaporator with a water bath at 35-40°C to evaporate the solvent and obtain a crude extract rich in quercetin (Won et al., 2005). Preparation of crude cabbage (Brassica oleracea) extract: cabbage was purchased at Sabon Gari market, Kano, rinsed with fresh water, cut into pieces and dried under shade. The dried cabbage was powdered with a pestle and mortar. To 40 g of the powdered sample, 250 cm3 of methanol was added; the mixture was shaken for 10 min and allowed to stand for 2 weeks. The mixture was filtered through filter paper, and the filtrate was placed on a rotary evaporator with a water bath at 35-40°C. The crude extract obtained was rich in indole-3-carbinol (Won et al., 2005). Preparation of crude green tea extract (Camellia sinensis): pure green tea (Twinings brand) was purchased from Shad Stores, Zoo Road, Kano, Nigeria. An exact amount of 5.00 g of green tea powder was added to 250 cm3 of distilled water in a 500 cm3 round-bottom flask and heated at 60°C in a water bath. The mixture was filtered through a 0.45 µm membrane filter to remove dispersed green tea powder. The sample was washed with the same amount of chloroform in a separatory funnel to remove caffeine, pigments and other non-polar impurities; this step was repeated three times, and negligible catechin was found in the chloroform phase owing to the low solubility of catechins in chloroform. The crude extract, containing mainly catechins in the water phase, was extracted into 250 cm3 of ethyl acetate, and this step was repeated three times. The ethyl acetate phase was evaporated to dryness on a rotary evaporator to obtain a crude green tea extract containing catechin compounds (Won et al., 2005). Procedure for histological analysis: histological analysis was carried out at the Histopathology Department of Aminu Kano Teaching Hospital, Kano State, Nigeria. Physical examination of the lung tissues was made, and likely diseased areas of the tissue samples were selected for histochemical analysis (Bancroft and Stevens, 1982). GLUTATHIONE REDUCTASE ASSAY Glutathione reductase activity was assayed by the method of Castro et al. (1990), in which 5 µL of cytosolic extract was added to 125 µL of 0.2 M potassium phosphate buffer (pH 7.0) containing 0.1 mM NADPH; the reaction was initiated by the addition of 25 µL of 10 mM GSSG after a 30 s incubation period at 30°C. Oxidation of NADPH was measured as a decrease in absorbance at 340 nm, and the enzyme activity was calculated from the consumption of NADPH (ε_NADPH = 6300 L/mol/cm). Protein estimation: protein concentration was estimated by the method of Bradford (1976). The Bradford reagent was prepared by dissolving 100 mg of Coomassie Brilliant Blue in 50 mL of 95% ethanol and 100 mL of 85% phosphoric acid, made up to 1 dm3 with distilled water. The reaction was initiated by adding 16 µL of sample to 640 µL of Bradford reagent and 60 µL of distilled water. Protein concentrations were determined from a calibration curve using bovine serum albumin as the standard at varying concentrations.
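The conversion from the measured A340 decrease to enzyme activity follows the Beer-Lambert law; a minimal sketch with our own helper function, using the assay volumes from the method above and an assumed protein concentration for the example:

# Specific activity (nmol NADPH consumed / min / mg protein) from dA340/dt,
# using epsilon_NADPH = 6300 L/mol/cm and a 1 cm light path.
def specific_activity(dA_per_min, assay_vol_ul, extract_vol_ul,
                      protein_mg_per_ml, eps=6300.0, path_cm=1.0):
    rate_mol_per_l_min = dA_per_min / (eps * path_cm)      # Beer-Lambert
    nmol_per_min = rate_mol_per_l_min * 1e9 * assay_vol_ul * 1e-6  # uL -> L
    mg_protein = protein_mg_per_ml * extract_vol_ul / 1000.0
    return nmol_per_min / mg_protein

# Example: a 0.05 A/min drop in the 155 uL assay (5 + 125 + 25 uL) containing
# 5 uL of cytosol at an assumed 8 mg protein/mL.
print(specific_activity(0.05, 155, 5, 8.0))   # ~31 nmol/min/mg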
Statistical analysis of data: the Standard Error of the Mean (SEM) was obtained from the mean and standard deviation for all data collected during the analysis and experiment. Student's unpaired t-test (a test of the significance of the difference between two means) was used to assess changes in enzyme activity, using the statistical programme SPSS 14.0. Mean weekly body weights: a weekly increase in weight was observed for the animals in every group, suggesting that exhaust fumes had no significant effect on the weight of the rats and that the administered chemopreventive agents did not retard the animals' weight gain. Histological profile of tissues: the histological profiles of the lungs are shown in Fig. 1. Both the supplemented and non-supplemented control groups were compared with the supplemented and non-supplemented test groups exposed to exhaust fumes for 5 min, 1 h and 2 h, respectively. The normal lymphocyte content of all tissues in the supplemented control group, as in the non-supplemented control group, indicates that tissue integrity was not compromised by food supplementation. However, large deposits of dark spots (likely carbon) were observed in the lungs of the non-supplemented groups exposed for 1 h and 2 h; furthermore, supplementation did not reduce the deposits of dark spots in the supplemented group exposed for two hours. Effect of exhaust fumes and food supplements on glutathione S-transferase (GST) in tissues: the results of the effect of exhaust fumes on GST activity in the lungs are presented in Table 1. Exposure for 5 min had no significant effect on GST activity in the non-supplemented exposed group compared with the non-supplemented control group. Prolonged exposure for 1 h and 2 h significantly reduced GST activity (p<0.001) in the non-supplemented groups compared with the non-supplemented control group. Interestingly, supplementation significantly increased GST activity (p<0.05) in the 5 min and 1 h exposure groups compared with the exposed non-supplemented groups; at 2 h, supplementation had no significant effect on GST activity compared with the non-supplemented exposed group. Effect of exhaust fumes and chemopreventive natural food supplements on the lipid peroxidation product (MDA): there was a significant increase in the level of malondialdehyde in serum (p<0.05) on exposure to exhaust fumes in the non-supplemented groups compared with the non-supplemented control (Table 2). The supplemented groups exposed to fumes showed a significant decrease in MDA level (p<0.05) compared with the non-supplemented groups exposed to fumes, irrespective of the time of exposure, in lung tissues. [Table footnote: results are mean ± SEM (n = 3); the 5 min, 1 h and 2 h groups are groups 1-3 (non-supplemented) and groups 4-6 (supplemented), respectively; group 8 is the non-supplemented control and group 7 the supplemented control; b: p<0.05, non-supplemented control vs non-supplemented exposed group with respect to time; b1: p<0.05, non-supplemented vs supplemented group; b2: p>0.05, non-supplemented control vs supplemented exposed group with respect to time; x: p>0.05, non-supplemented control vs supplemented control.]
Toxicological studies in animals and epidemiological data suggest possible health risks from diesel exhaust-fume exposure. Acute exposure may lead to decrements in lung function and altered brain function (Scheepers and Bos, 1992; Pope et al., 2002), and lung function decrements are also reported as chronic effects among occupationally exposed persons (Pope et al., 2002). The components of exhaust fumes associated with carcinogenic effects are soot particles, particle-associated organics and/or gas-phase compounds. Direct effects of the particle load may include retardation of lung clearance, inflammation and increased cell proliferation; these effects have all been demonstrated in rodents. The particles may also prolong the residence time of particulate organics or induce the generation of reactive oxygen species, compounds known to react with macromolecules and cause lipid peroxidation (Scheepers and Bos, 1992). There is also a reservoir of basic information on the effects of high exposure to diesel exhaust as an occupational hazard (Groves and Cain, 2000). Our study used the rat as an animal model to evaluate the possible impact of the exhaust fumes to which the general population, not only occupationally exposed groups, is exposed, and the possible impact of supplementing the diet with natural products containing cancer chemopreventive agents in attenuating these adverse effects. The population is exposed to exhaust fumes, especially in urban centres, from a variety of sources. With the collapse of the power sector, individuals have resorted to the use of generators either to provide electricity to their homes or to run businesses. Furthermore, the increasing number of vehicles on our roads also increases people's exposure to exhaust fumes at traffic junctions. A weekly increase in the weight of the rats was observed in all groups, indicating that the exhaust fumes had no effect on the weight of the rats over the period of exposure. In addition, the administered chemopreventive agents did not retard the weight of the animals, in accordance with research on the effects of green tea on weight maintenance after body-weight loss (Kovacs et al., 2004). Studies in several laboratory animal species provide strong evidence that carbon monoxide (CO) exposure produces reductions in birth weight, cardiomegaly, delays in behavioral development and disruptions in cognitive function (IPCS et al., 1999). Nevertheless, a diet rich in natural food supplements (quercetin, indole-3-carbinol, kaempferol and catechins) can go a long way toward averting the severity of the damage. It has been reported that glucobrassicin, a phytochemical in cabbage, is metabolized in vivo to indole-3-carbinol, a powerful inducer of glutathione S-transferase (especially the class Mu enzyme, GST M2-2), which plays a vital role in xenobiotic metabolism (Fong et al., 1999). Hence, the increase in GST activity observed in this study may have contributed to the reduced production of MDA consequent to the metabolism of agents in the fumes that could otherwise contribute to the generation of reactive oxygen species. Furthermore, certain GST isozymes (GST-A and GST-M) catalyze the conjugation to GSH of 4-hydroxy-2-enals (HNE), products of lipid peroxidation, as well as related endogenous electrophiles, in rat and human liver cells (Esterbauer et al., 1991; Schaur et al., 1991).
Another study showed that quercetin has health-promoting effects, such as improving cardiovascular health and reducing the risk of cancer. It also has anti-inflammatory and anti-allergic effects due to its strong antioxidant action; it helps to combat free-radical molecules, which can damage cells, and prevents the oxidation of LDL cholesterol (Luo et al., 1997). People with a high intake of apples (rich in quercetin) have a low risk of certain respiratory diseases (Suganuma et al., 1999a, b). Other laboratory studies show that catechins act as powerful inhibitors of cancer growth in several ways: they scavenge oxidants before cell injuries occur, reduce the incidence and size of chemically induced tumors, and inhibit the growth of tumor cells. In studies of liver, skin and stomach cancer, chemically induced tumors were shown to decrease in size in mice fed black tea. Catechins may also target and repair DNA aberrations caused by oxidants (Dufresne and Farnworth, 2001; Hakim and Harris, 2001). Histological analysis of the lung further highlights its vulnerability to the effect of exhaust fumes on prolonged exposure, even with the chemopreventive agents at the doses administered. Supplementation of the diet did not protect lungs exposed to exhaust fumes for 2 h against carbon deposition and the generation of lipid peroxidation products. This is because the lungs receive 100% of the cardiac output and are therefore extensively exposed to toxic substances in the blood and air (Timbrell, 1991), and possibly because the induction of antioxidant enzymes cannot keep pace with the generation of free radicals. Nevertheless, on comparing the non-exposed control group with the exposed supplemented groups with respect to time, there was no significant decrease in the GST level. This is largely due to the activity of indole-3-carbinol, a powerful inducer of GST (Fong et al., 1999), and the capability of the other antioxidants to boost endogenous antioxidants (Gutteridge and Halliwell, 2002). In rats, the highest level of quercetin was measured in the lungs (De Boer et al., 2005). CONCLUSION Faced with the perplexing electricity situation in our country, one can hardly afford to do without the use of a generator; furthermore, traffic congestion on our roads exacerbates the pollution to which the populace is exposed. Daily exposure to exhaust fumes over a long period causes serious damage to the lung, significantly decreasing the level of GST. It was observed that the natural antioxidant agents studied (catechins, quercetin and indole-3-carbinol) have a significant chemopreventive effect on the lung, as they reduced the deposition of carbon soot and MDA and increased the activity of GST. Hence, consumption of a combination of these natural chemopreventive supplements may help prevent or minimize the lung damage consequent to fume exposure. [Table 1: effect of exhaust fumes from a generator on the activity of the antioxidant enzyme GST in rat lungs (nmol/min/mg protein). Table 2: concentration of serum malondialdehyde in rats exposed to exhaust fumes.]
The Special Developmental Biology of Craniofacial Tissues Enables the Understanding of Oral and Maxillofacial Physiology and Diseases

Maxillofacial hard tissues differ in several respects from bones elsewhere in the human body. These differences could be due to the distinct embryological development of the jaw bones compared to the extracranial skeleton. In particular, the immigration of neuroectodermally differentiated cells of the cranial neural crest (CNC) plays an important role. These cells differ from the mesenchymal structures of the extracranial skeleton. In the ontogenesis of the jaw bones, development via the intermediate stage of the pharyngeal arches is another special developmental feature. The aim of this review is to illustrate how the development of maxillofacial hard tissues occurs via the cranial neural crest and pharyngeal arches, and what significance this could have for relevant pathologies in maxillofacial surgery, dentistry and orthodontic therapy. The pathogenesis of various growth anomalies and certain syndromes will also be discussed.

The Body Plan

The Hox genes define specific morphological characteristics along the body axis, from simple insects to complex mammals. Remarkably, the order of the different Hox genes on a chromosome corresponds to the order of their expression in successive body segments. If one Hox gene fails, the affected body region takes on characteristics of neighboring segments; for example, an additional pair of wings can be induced in the fruit fly (Drosophila), or an additional pair of ribs in the mouse. While the Hox genes in Drosophila are located on a single chromosome, in humans the Hox genes are distributed in four clusters on four chromosomes. Besides the Hox genes, there are numerous other genes that encode and regulate the three-dimensional body plan. These include genes from the Pax, T-box, Wnt and Sonic Hedgehog gene families [1].

Early Embryonic Development

After fertilization of the ovum and formation of the zygote, cleavage divisions initially occur without any increase in the total volume of cytoplasm. The resulting morula then transforms into the blastocyst. The outer cells of the blastocyst form the trophoblast, while the cells inside are called the embryoblast [2]. The trophoblast later forms the placenta, while the embryoblast represents the later embryo [1]. The embryoblast forms two cell layers: the epiblast and the hypoblast. The epiblast develops the amniotic cavity and the hypoblast develops the yolk sac. The contact surface of the epiblast and hypoblast is called the germinal disc, which produces the embryonic cotyledons (germ layers) in the further course of development [1,3].

Development of the Cotyledons

The development of the three embryonic cotyledons begins with the immigration of cells into the space between the epiblast and hypoblast. This process begins with the formation of the primitive gutter (primitive streak) on the epiblast. There, proliferation and epithelial-mesenchymal transition (EMT) of epiblast cells occur. EMT gives the cells migratory abilities and allows them to enter the gap between the epiblast and hypoblast. The cells that come into contact with the yolk sac underneath the epiblast displace the hypoblast cells and form the definitive entoderm [1,3]. The cells that migrate laterally between the epiblast and entoderm form the mesoderm. After differentiation of the mesoderm, the overlying tissue is called the ectoderm. The development of the three cotyledons is called gastrulation [1,2].
Development of the Neural Tube

At the anterior pole of the primitive gutter a thickening, the so-called primitive node, occurs. Behind this, a depression is formed, from which the chorda dorsalis emerges in further development [1]. The chorda dorsalis does not form any embryonic tissue itself, but only induces the development of the nervous system by inducing neural tube formation. The neural tube folds out of the ectoderm and then comes to rest underneath it. In the process of the neural tube folding out of the ectoderm, a lateral migration of cells creates the neural crest [1]. The neural crest (NC) is a multipotent embryonic cell population with stem cell characteristics that undergoes extensive migration during embryogenesis and can produce a variety of tissues such as neurons, melanocytes, cartilage and bone [4]. Cranial, vagal, trunk and sacral neural crest cells can be distinguished, which are characterized by different migration pathways and differentiation into different target tissues [4]. Cranial neural crest (CNC) cells are of particular importance for the understanding of craniofacial development.

Development of the Head

The formation of the head differs in essential aspects from the formation of the tissues in the rest of the body. For example, practically all connective tissue (cartilage, bone, fibroblasts) of the craniofacial region originates from the neural crest, whereas the connective tissue in the rest of the body is of mesodermal origin. In the head, the neural crest can therefore form not only nerve tissue, as in the rest of the body, but also mesenchymal tissue [1].

Development of the Pharyngeal Arches

While in aquatic vertebrates the pharyngeal arches are the origin of the respiratory system, in terrestrial vertebrates they undergo a functional change: the land-living vertebrates develop a new respiratory organ, the lung, from the gullet [1]. In the region of the pharynx, the pharyngeal arches are formed by proliferation of cells migrating from the neural crest. These are five clasp-shaped prominences (a sixth is only rudimentary), each containing a vessel, a nerve branch and a muscle segment. The pharyngeal arches are separated from each other by pharyngeal furrows on the outside (ectodermal) and by so-called pharyngeal pouches on the inside (entodermal) [3]. From the first pharyngeal arch develop, among other things, the masticatory muscles, the N. mandibularis of the N. trigeminus, Meckel's cartilage, which is involved in the formation of the lower jaw, and the maxillary and mandibular prominences. From the second pharyngeal arch develop mainly the mimic muscles and the facial nerve, as well as Reichert's cartilage. The third pharyngeal arch is responsible, among other things, for the upper muscles of the pharynx and the N. glossopharyngeus, while the fourth is responsible for the muscles of the lower pharynx and the N. vagus. The fifth and sixth pharyngeal arches are sometimes involved in the development of the internal laryngeal muscles [1].

Development of the Face

At the beginning of facial development, the mouth bay (stomodeum) is framed by five facial prominences. The five facial prominences (the unpaired frontal nasal prominence, the paired maxillary prominences and both mandibular prominences) are formed by proliferation of cranial neural crest cells. The frontal prominence can be divided into medial and lateral nasal prominences, which enclose the olfactory pits.
In regular facial development, the medial nasal process merges with the maxillary prominences on both sides. Failure to achieve this fusion results in cleft malformations of the lip and jaw [3]. The so-called primary palate is formed by fusion of the two medial nasal prominences with each other; this forms the os incisivum in further development. The secondary palate, on the other hand, is formed by fusion of the palatal processes of the two maxillary prominences. If this fusion does not occur, cleft palates result [3].

Development of the Tongue

The development of the tongue begins with the anterior growth and fusion of the lateral tongue prominences, which originate from the first pharyngeal arch. Dorsally, the unpaired tuberculum impar follows the tongue prominences. The root of the tongue dorsal of the sulcus terminalis, on the other hand, is formed by parts of the second, third and fourth pharyngeal arches [1,3]. The development from the first four pharyngeal arches explains the innervation of the tongue. The sensitive innervation in the anterior two-thirds (up to the sulcus terminalis) is performed by the lingual nerve from the mandibular nerve (first pharyngeal arch). The pharyngeal part of the tongue is innervated by the glossopharyngeal nerve (third pharyngeal arch) and the superior laryngeal nerve originating from the vagal nerve (fourth pharyngeal arch). In the anterior two-thirds, the sensory (taste) innervation of the tongue is provided by the chorda tympani from the facial nerve (second pharyngeal arch). The pharyngeal innervation for taste corresponds to the described sensitive innervation [1,3].

Development of the Nervous System in the Head Area

As described above, the so-called neural tube is created during neurulation. During further embryogenesis, the brain emerges from the front two-thirds of the neural tube, whereas the rear third becomes the spinal cord. In the cranial neural tube, curvatures occur and three functionally different sections are formed: the forebrain (prosencephalon), the midbrain (mesencephalon) and the rhombencephalon [1]. In the area of the forebrain, there are vesicular prominences, from which the paired cerebral hemispheres (telencephalon) develop. This leads to a division of the forebrain into the unpaired, central diencephalon and the paired hemispheres of the cerebrum. The cerebellum develops from the roof of the rhombic brain [1].

Methods

For the preparation of this review, a systematic literature search was conducted. For this purpose, the most well-known German and English textbooks on embryology were used. In addition, a search was conducted using the Pubmed database (https://www.ncbi.nlm.nih.gov/pubmed/). The articles were first screened by title and abstract. If the title and abstract were suitable, the full texts were downloaded as PDF. Systematic reviews and original papers were included. If available, current literature was used. The electronic search in Pubmed initially used the following search terms: Original works, case series and review works were taken into account. Work that consisted of an isolated case report was excluded. Where available, papers published after 2000 were used.

The Role of the CNC

Three-quarters of human malformations affect the craniofacial region [5]. This fact highlights the complexity and susceptibility to disturbance of the embryological processes that form craniofacial tissue. The craniofacial tissues are mainly derived from cells of the cranial neural crest.
These cells develop in the dorsal region of the neural tube and then migrate into the facial prominences and the 1st to 4th pharyngeal arches [5]. In further development, they contribute to the formation of neuronal, skeletal, dermal and mesenchymal structures [5]. CNC cells interact with other cells of the craniofacial tissues in many different ways during their migration, but also after completion of morphogenesis [5]. The skeleton of the face and a large majority of the craniofacial connective tissue are derived exclusively from cells of the cranial neural crest [5]. These are pluripotent cells with exceptional migratory capabilities [6].

Creation of CNC Cells

The CNC cells are formed in the border region between neural and non-neural ectoderm, dorsal to the neural tube [5]. How exactly the differentiation of CNC cells is initiated and regulated is not yet fully understood. It is assumed that the WNT and bone morphogenetic protein (BMP) signaling pathways are of particular importance [5]. The formation of neural crest cells occurs during embryogenesis at about the time of neural tube closure. EMT is required to initiate the CNC and is a prerequisite for the migratory capabilities of the CNC [5,6]. Through EMT, epithelial cells can leave their tissue network and migrate to other regions of the organism. Besides the physiological importance of EMT in embryogenesis, EMT plays an important pathophysiological role in the invasion and metastasis of malignant tumors [5]. For EMT, the CNC cells must first lose their apico-basal polarity and degrade intercellular adhesion molecules, such as cadherins and tight junctions [5]. At the transcriptional level, EMT is mainly regulated by the transcription factors Snail1 and Snail2 (Slug) [5].

Migration of the CNC Cells

CNC cells that originate in the forebrain and rostral midbrain migrate to the frontonasal and periocular facial region. CNC cells from the caudal midbrain migrate to the maxillary portion of the first pharyngeal arch. In the rhombencephalon, CNC cells are derived from the seven rhombomeres and migrate into the pharyngeal arches [6] (Figure 1). During their migration, the CNCs move along defined routes. The migration begins as a continuous wave and then splits into three separate streams [6]. The migration of CNC cells into the craniofacial tissues is regulated by different cytokines. The exact regulation of these cytokines is still unknown; however, it is known that there are attractive and repellent signal molecules [5]. Furthermore, CNC cells show contact inhibition of movement. Thus, the movement of a larger group of migrating CNC cells can be directed in one direction [6]. CNC cells inside the migrating cell group are thereby prevented from disordered movement, while the cells at the tip of the cell assembly do not experience contact inhibition of movement when moving forward [6]. The CNC cells and their migration seem to be of crucial importance for the individual face shape. Transplantation experiments have shown that the final face shape in a host embryo is determined by the donor's CNC cells [7]. The transplantation of mouse CNC cells into chicken embryos has led to the development of dentate jaws. These experiments show that the CNC cells are able to activate the genetic programs for tooth development in the ectodermal cells of the chicken [7]. CNC transplantations between duck and quail showed that the shape of the cranial feathers matched the profile of the CNC donor.
Even in higher mammals, the CNC cells influence both the skeletal and the soft tissue facial shape [7]. After the CNC cells have arrived in their target region, they must differentiate on site. It is not clear whether different CNC cells already carry the information for their final differentiation intrinsically, or whether local signals in the target area are responsible for their differentiation [5]. The CNC cells maintain their multipotent status until late embryonic development [5]. The peripheral nerves are also important for the later embryonic development of the craniofacial tissues. Nerve-associated CNC cells, which can differentiate into various cell types and influence craniofacial morphogenesis, play a special role here [7]. Thus, peripheral nerves can be considered as stem cell niches, from which different cell types, such as bone marrow mesenchymal cells and melanocytes, can differentiate. In addition to their role as pigment cells, melanocytes play a decisive role in the development of the inner ear, where they contribute to the survival of sensory hair cells. In the dental pulp, CNC-derived cells of the pulp nerves play an important role in the regeneration of mesenchymal pulp cells and odontoblasts [7].

Contribution of the CNC Cells to the Development of Facial Prominences

The embryonic face consists of the unpaired frontonasal prominence and the paired maxillary and mandibular prominences. The forehead, nose, upper lip, philtrum and primary palate are formed from the frontonasal prominence (FNP). For the proper development of the FNP, interaction of the migrating CNC cells with the local epithelial cells of the facial ectoderm and the cells of the forebrain is necessary [5]. The lateral region of the FNP fuses with the lateral nasal prominence and the maxillary prominence. The maxillary and mandibular prominences originate from the first pharyngeal arch. Their development requires the interaction of the immigrating CNC cells with the local cells of the surface ectoderm, mesoderm and pharyngeal entoderm [5]. In the pharyngeal arches, a characteristic spatial arrangement of the immigrating CNC and the local cells of the three primary cotyledons occurs. The mesodermal cells are located in the center of the pharyngeal arches and are surrounded by the CNCs [5]. The outer closure is formed by epithelial cells of the ectoderm and the inner closure by the entodermal epithelial cells of the pharynx [5]. The CNC decisively controls the morphogenesis of the face from these different cell populations. During the embryonic development of the face, bone emerges from the CNC cells, while muscle develops from the mesodermal cells [5]. Signals from the CNCs control the differentiation of mesodermal cells into myoblast progenitor cells and, subsequently, the organization of these cells around the developing skeletal elements [5]. Disturbances in the migration and growth of CNC cells are of particular importance for the pathogenesis of cleft malformations [5]. Cleft malformations are the most important congenital craniofacial malformations, occurring in one patient per 700 births and requiring complex combined surgical and orthodontic treatment procedures. While the primary palate originates from the FNP, the secondary palate develops from the palatal processes of the maxillary prominence [5]. The palatal processes consist of CNC cells surrounded by epithelial cells [5]. The Wnt signaling pathway is of particular importance for sufficient growth of CNC cells in the maxillary process.
Reduced activation of the Wnt signaling pathway can lead to reduced growth and, thus, to cleft deformities of the palate. After the palatal processes have approached each other, the epithelial cells must be removed to allow the CNC to fuse. This can be achieved by apoptotic cell death or by migration of the epithelial cells [5]. The epidermal growth factor receptor (EGFR) pathway probably plays an important role in this process [5]. Similar to haematological stem cells, CNC cells initially show pluripotency, while they become increasingly restricted in their developmental potential during further embryogenesis. However, it is not yet clear what proportion of this pluripotency is retained by CNC cells into adulthood [5]. Most of the tooth is formed by CNC cells. Thus, the dentin, cementum, periodontal ligament and most of the pulp are made of CNC. Only the blood vessels of the pulp and the enamel are not CNC derivatives [6] (Figure 1).

Cellular Characteristics of CNC Cells

CNC cells differ in their developmental potential from NC cells of the trunk. CNC cells, for example, activate genes of chondral differentiation [6]. At the transcriptional level, the transcription factors Sox10, Sox9 and Ets1 are characteristic of CNC cells and play a major role in the regulation of their effector genes [6].

The Role of CNC Cells in Tooth Development

Interactions of epithelial and mesenchymal cells are crucial for tooth development [8]. Tooth development begins when cells of the oral epithelium send signals to the underlying mesenchymal tissue derived from CNC cells. During tooth development, the epithelial cells differentiate into ameloblasts, while the CNC mesenchymal cells form odontoblasts [8]. However, the exact signaling pathways that regulate the formation of the tooth roots or the number of roots of each tooth are still unknown [8]. It is known that Hertwig's epithelial sheath (HES) develops apically from the crown of the tooth.

Determination of the Body Axes by Hox and Dlx Genes

Embryologically, the segmental body structure is regulated by the homeobox (Hox) genes [9]. The Hox genes are phylogenetically strongly conserved; in addition to animals, homeobox genes also regulate the blueprints of plants and fungi. In mammals the Hox genes are organized in four clusters (HoxA to HoxD) [10]. Each cluster consists of 9 to 11 Hox genes. During early embryogenesis the Hox genes control the development of the body along the longitudinal axis [10]. In contrast to the rest of the body, Hox gene expression is absent in CNC cells of rhombomeres 1 and 2 [11]. The CNC cells of the first two rhombomeres migrate into the first pharyngeal arch and form the structures of the neurocranium and the jaws [12]. The Hox-positive CNC cells of the caudal rhombomeres (r3 and below) form the cartilages of the larynx [12]. Hox-negative CNC cells are regulated in their morphogenesis by distal-less (Dlx) genes [11]. The Dlx code provides the CNC cells with structural information and regulates their polarity within the pharyngeal arches along the dorsal-ventral and proximal-distal axes. Dlx1 and Dlx2 are expressed in both the maxillary and the mandibular prominence of the first pharyngeal arch, while Dlx5 and Dlx6 are expressed only in the mandibular prominence. In contrast, the expression of Dlx3 and Dlx4 is restricted there [11]. Thus, the Dlx combination code regulates the differentiation of the CNC cells in the first pharyngeal arch into maxilla and mandible.
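The combinatorial logic of this Dlx code (the concrete gene-to-jaw assignments are spelled out in the next paragraph) can be made explicit with a deliberately simplified sketch. Treating gene expression as boolean set membership is an illustration only, not a model of the underlying dose- and time-dependent regulatory network; the gene names come from the text, while the function itself is hypothetical.

```python
# Illustrative sketch only: the first-arch Dlx "code" reduced to set logic.
def arch1_identity(expressed: set) -> str:
    """Map the set of expressed Dlx genes in a first-arch CNC cell to a
    jaw territory, following the Dlx1/2 vs. Dlx1/2/5/6 rule from the text."""
    if {"Dlx1", "Dlx2", "Dlx5", "Dlx6"} <= expressed:
        return "mandibular prominence (lower jaw)"
    if {"Dlx1", "Dlx2"} <= expressed:
        return "maxillary prominence (upper jaw)"
    return "no first-arch Dlx identity"

print(arch1_identity({"Dlx1", "Dlx2"}))                  # maxilla
print(arch1_identity({"Dlx1", "Dlx2", "Dlx5", "Dlx6"}))  # mandible
```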
The Dlx code 1/2 defines the cells of the maxilla, while the expression of Dlx 1/2/5/6 defines the mandible [11]. Besides Dlx, many other genes are involved in the morphogenesis of CNC derivatives. Signals from the fibroblast growth factor (FGF) family have an important influence on the formation of the rostral-caudal axis [11]. The proximal-distal axis is mainly determined by FGF and BMP. Due to the action of the growth and differentiation factors FGF and BMP, the expression of the transcription factors Barx1 and Dlx2 is restricted to the proximal part of the first pharyngeal arch, while Msx1, Msx2 and Alx4 are restricted to the distal part [11]. Among other things, the transcription factor Barx1 is involved in regulating the morphogenesis of teeth from incisors to molars [11].

Development of the Jawbone

Meckel's cartilage is a hyaline cartilage which serves as a guiding structure for the development of the mandibular bone during embryogenesis. The development of Meckel's cartilage within the mandibular prominence begins with the condensation of CNC cells in the area of the later first molar [11]. These cells then differentiate into chondrocytes and form the rod-shaped Meckel's cartilage. Meckel's cartilage initially extends in a ventromedial and dorsolateral direction on both sides and fuses at the most distal tip in the area of the later symphysis mandibulae. The proximal sections of Meckel's cartilage change shape to develop the hammer and anvil (malleus and incus) of the middle ear [11]. The exact regulation of the formation of Meckel's cartilage at the transcriptional level is still unknown; however, the chondrogenic transcription factor Sox9 plays an important role. Sox9 knockout mice cannot form Meckel's cartilage. However, despite the absence of Meckel's cartilage, these animals develop a reduced mandibular bone. This shows that Meckel's cartilage is not necessary for initiating mandibular development [11]. The different parts of the lower jaw show different modes of ossification. The distal portion of the mandible is endochondrally ossified from the symphysis [11]. In the middle part of the corpus mandibulae, intramembranous ossification occurs, while endochondral ossification occurs again in the proximal part. In intramembranous ossification, the CNC cells condense and then differentiate into osteoblasts. These cells begin to secrete osteoid, which then calcifies secondarily. The differentiation of osteoblasts is regulated by various transcription factors: Dlx5 induces the expression of Runx2, and Runx2 in turn induces the expression of Osterix, which regulates the differentiation of preosteoblasts into mature osteoblasts [11]. Furthermore, ossification is induced by BMPs [13]. The late differentiation of osteoblasts culminates in osteocytes embedded in calcified bone; this step is dependent on the factors MEPE and Dmp1. In endochondral ossification, the bone is formed using a cartilaginous template. This involves a condensation of the CNC cells, which then differentiate into chondrocytes. The differentiation of the osteoblasts begins in the perichondrium and then progresses centrally. The differentiation of osteoblasts is controlled by signaling pathways such as IHH, NOTCH, WNT and BMP, and by the transcription factors Dlx5, Runx2 and Osterix [11].

Development of the Tongue

The tongue and the lower jaw have a common developmental-biological origin. They originate simultaneously from the mandibular prominence and their development is closely coordinated.
In the medial part of the mandibular prominence, the tongue protrusion is formed, which also consists of CNC cells. This is followed by the immigration of myoblasts from the occipital somites [11]. Thus, the connective tissue and blood vessels originate from the CNC, while the tongue muscles are formed from mesenchymal myoblasts. The CNC cells are important for the initiation and regulation of tongue development: they can be understood as a matrix for the migrating myoblasts and they determine the pattern of muscle development. Furthermore, the CNC cells regulate the proliferation and differentiation of the myoblasts. The Dlx genes 5 and 6 play an important role in this process: a loss of function of the Dlx5/6 genes in CNC cells leads to the absence of masticatory muscles and disturbed tongue development [11]. In addition, the Hedgehog signaling pathway plays an important role in the development of the tongue. CNC cells react to Hedgehog signals from cells of the tongue epithelium and influence the development of myoblasts. In addition, it has been shown that disturbances in the transforming growth factor (TGF) beta signaling pathway also lead to defective tongue development [11].

Significance for Specific Diseases of Craniofacial Tissue

Craniofacial tissue behaves biologically differently compared to extracranial tissue. One possible cause is the origin of craniofacial bone/tissue from the cranial neural crest (Figure 2). Tissues derived from the cranial neural crest seem to be characterized by the biological peculiarity that cyclic forces evoke greater anabolic responses of craniofacial sutures as well as cranial base cartilage, since gene expression, cell proliferation, differentiation and matrix synthesis are mechanically regulated [14]. Mechanical force thus influences gene expression, whereby the onset of temporomandibular disorders can be explained down to the genetic level [14,15]. The response of CNC derivatives to forces is relevant for orthodontic treatment, which aims to modulate growth. For example, in Class II deformities (mandibular retrognathia), stimulation of condylar growth is desirable. There are several orthodontic approaches to achieve this goal during the growth periods of children. These approaches have in common an increase of muscular activity, an anterior repositioning of the mandible and a relief of compressive forces [14,15]. Another example of the clinical relevance of cranial neural crest-dependent diseases is the difference between craniofacial and extracranial osteosarcoma. Early metastasis is characteristic of extracranial osteosarcoma; in contrast, craniofacial osteosarcomas rarely develop metastases [16,17]. In addition, craniofacial and extracranial osteosarcomas show a different clinical prognosis: craniofacial osteosarcomas have a five-year survival rate of about 77%, while extracranial osteosarcomas have a five-year survival rate of only 55% [18]. The overriding clinical problem with craniofacial osteosarcoma is frequent tumor recurrence. This may be due to the difficulty of achieving a safe R0 tumor resection given the close proximity to vital anatomical structures [16]. The different clinical behavior of craniofacial versus extracranial osteosarcoma may be due to the different developmental biology of craniofacial and extracranial bone. A different activation of the Hedgehog signaling pathway, and also possible immunologic differences between craniofacial and extracranial osteosarcomas, have already been shown [19].
The jawbone exhibits increased resistance to osteoporosis compared to extracranial bone [20]. There is also a disease that affects almost exclusively the jaw: medication-related osteonecrosis of the jaw (MRONJ). This is caused by antiresorptive drugs such as bisphosphonates or denosumab, as well as by angiogenesis inhibitors or various tyrosine kinase inhibitors. Although the aetiopathogenesis of this disease is not yet fully understood, it is believed that the developmental characteristics of the jawbone play an important role in the occurrence of the disease [21,22], principally through the strong activation of the bone-resorbing cells, the osteoclasts.

Significance for Orthodontic Treatment

The orthodontist takes care of the physiological development of craniofacial growth and occlusion by preventing oral dysfunction, regulating jaw growth and moving the teeth within the alveolar bone when necessary, thereby using the special features of the craniofacial tissues. The alveolar bone, for example, is special in that it is inducible by orthodontic tooth movement [23]. These tooth movements not only cause a change in occlusion, but also model the maxillofacial hard tissue and, as a result, the soft tissue. It must be highlighted that the outcome of dentofacial orthopedic appliances is mostly due to deflection/bending of the alveolar bone and to remodeling processes of the periodontal tissues, rather than to skeletal increase from growth stimulation. In this context, mechanical stimuli seem to play an essential role in cell differentiation, proliferation and metabolism through the regulation of the expression of transcription factors, cytokines, growth factors, enzymes and structural proteins [24]. Functional orthodontic and extraoral appliances, in particular, take advantage of the physiological growth of the maxillofacial structures and modulate this growth. In addition, the orthodontist is regularly confronted with various malformations of the maxillofacial tissue. In the case of cleft malformations, for example, protracted orthodontic treatment is required to accompany the surgical interventions. Furthermore, differences in the development of cells of the mucosa compared to cells of the skin enable the understanding of orally induced tolerance against nickel via orthodontic treatment, as well as of the mechanism of sublingual immunotherapy [25,26]. Another clinical observation that should be considered in orthodontic treatment planning is that the craniofacial bone exhibits faster bone healing and increased remodeling, which is crucial for a targeted and successful orthodontic therapy [20]. Therefore, diseases or medications affecting craniofacial development or bone remodeling impair orthodontic treatment, favoring maxillofacial malformation and malocclusion. This may cause difficulties in mastication and speech, favor craniomandibular disorders and lead to a reduced quality of life [27]. Hence, in order to optimally support the developmental processes, dentists and orthodontists should know that the human skull can be differentiated into the viscerocranium and the neurocranium, which differ significantly in their development and growth, and which special features play a role in these processes [28].

Impact for Oral Implant Osseointegration and Mesoderm-Derived Bone Transplants

Brånemark's discovery of titanium osseointegration took place on titanium implants used for intravital microscopy of the rabbit fibula [29].
Since then, much of what we have learned about osseointegration in the past decades has come from studies on the long tubular bones of experimental animals, as these are easily accessible and have a large osteogenic marrow space [30,31]. In everyday clinical practice, most dental implants are inserted into the jawbone. Exceptions are patients who have had reconstruction of the maxilla or, more frequently, the mandible with a microvascular fibula graft, e.g., due to tumor disease of the oral cavity. In these patients, the dental implants are inserted into a long bone of mesodermal origin and can be directly compared to those placed into the jaw bone. Wijbenga et al. showed an implant survival rate of 95% over a follow-up period of 0 to 155 months after microvascular fibular reconstruction and dental rehabilitation with endosseous implants [32]. However, their study provides only limited data on the functional outcome and quality of life of the patients. Similar implant survival rates were found by Sozzi et al., who reported an implant survival rate of 98% after 7.8 years following microvascular reconstruction of the jaws; no statistically significant differences were found between maxillary and mandibular sites or between irradiated and non-irradiated patients [33]. These implant survival rates are similar to that of Howe et al., who, in a meta-analysis, estimated the 10-year survival of dental implants in the jawbone of healthy patients to be 96.4% [34]. In our research group, we were able to show that porcine calvarial frontal bone (neural crest-derived dermatocranium) can serve as a model for bone regeneration of the human maxilla (neural crest-derived splanchnocranium) [35,36]. In this model, we were also able to examine different implant surface modifications and local gene therapy to improve the osseointegration of dental implants [37][38][39][40]. Mouarett et al. found different rates of bone regeneration in a mouse model when comparing the bony healing of defects in the tibial bone vs. defects in the maxillary bone [30]. They also found an influence of the maxillary periosteum on implant osseointegration in the murine maxilla. This is in good agreement with our own data, as we were able to demonstrate supracortical peri-implant bone formation through periosteal elevation in an established model of the porcine frontal skull [41]. When grafting bone into the jaw area, in addition to homotopic grafts from the jaw bone itself, heterotopic bone components of mesodermal origin (fibula, scapula, iliac crest or parietal cranial calvaria) are frequently transplanted into the jaws [36]. Cells of neurocrestal origin, as well as cells of mesodermal origin, can ossify intramembranously as well as endochondrally [36]. The unanswered question is what happens in detail to the transplanted cells. What is the influence of the embryological origin of the cells, and what is the effect of the bony environment in the jawbone? A more precise understanding of these processes could contribute decisively to optimizing the regenerative possibilities of the transplanted bone tissue [33]. This would not only contribute to the well-being of patients requiring bone transplants, but would also have an immense economic benefit, as bone is the second most frequently transplanted tissue after blood [42].

Impact for Syndromes and Malformations

Malformations of the derivatives of the CNC account for about one third to one half of all congenital malformations in humans [6].
The clinical significance of the CNC will be illustrated below using a few syndromes and malformations as examples.

Fetal Alcohol Syndrome

Fetal alcohol syndrome (FAS) is the most common teratogenic insult in humans and the best-known cause of developmental disorders. FAS is characterized by lifelong behavioral and cognitive deficits, as well as impaired attention, learning and motor skills [43]. Phenotypically, affected individuals can be recognized by craniofacial dysmorphia. This includes short palpebral fissures, a flattened philtrum and a thin upper lip, as well as micrognathia and a reduced interocular distance. Micrognathia is often accompanied by tooth displacement and impactions that require orthodontic and surgical treatment. Direct toxic effects of ethanol on the migrating cells of the CNC are considered to be the cause of these changes, whereas the trunk neural crest does not seem to be affected [43]. In FAS, for example, no impairment of the autonomic nervous system or of the melanocytes derived from the trunk neural crest is observed. The exact causes of the CNC specificity of ethanol toxicity are not known [43]. In animal experiments, ethanol led to a disturbed induction of CNC formation and impaired migration of CNC cells with increased apoptosis and, consequently, to morphological changes in the craniofacial structures of embryos. One mechanism of ethanol action on CNC cells is mediated by the Hedgehog signaling pathway: ethanol impairs the formation of the ligand Sonic Hedgehog [43].

Treacher Collins Syndrome

Treacher Collins syndrome (TCS) is a congenital disorder of craniofacial development, also known as dysostosis mandibulofacialis. Characteristics of TCS include hypoplasia of the facial bones, particularly of the upper jaw, lower jaw and zygomatic complex [44]. In severe cases, the zygomatic arches may be completely absent and cleft malformations may be present. Jaw hypoplasia often leads to malocclusion with an anterior open bite. In addition to hypodontia, there are also changes in the shape of the teeth [44]. Moreover, malformations of the outer ears with atresia of the auditory canal and anomalies of the middle ear bones are common, which consequently lead to hearing disorders. In addition, there are impairments in brain development, mental retardation and psychomotor retardation. Directly postpartum, micrognathia can cause airway obstruction by the tongue [44]. The molecular pathomechanism of TCS is now relatively well understood. In affected individuals, a mutation of the TCOF1 gene is present. The gene product of TCOF1 is jointly responsible for the initiation, proliferation and survival of CNC cells. In animal experiments, mutations of TCOF1 lead to a reduced number of CNC cells with undisturbed migration abilities [44].

Cleft Malformations

Despite numerous genome-wide analyses, the evidence for a clear genetic cause of cleft malformations is limited [45]. In this context, missense mutations in the IRF6 gene as well as in the Grainyhead-like-3 (GRHL3) gene have been detected as causal risk factors [46][47][48]. As described above, migration processes of CNC cells and their interaction with local cell populations play an important role in the pathogenesis of cleft malformations [5]. For example, correct proliferation of mesenchymally differentiated CNC cells is required for upper lip closure. Recent studies show a possibly important role of the Hedgehog signaling pathway [45].
The Hedgehog signaling pathway is an important control element of epithelial-mesenchymal interactions during orofacial development. In animal experiments it could be shown that the formation of cleft lips is associated with a reduced proliferation of CNC cells of the medial nasal processes [45]. A reduced expression of the ligand Sonic Hedgehog led to a reduced expression of the downstream transcription factor Foxf2. By increasing the expression of Sonic Hedgehog or Foxf2, an increased CNC proliferation could be achieved and the development of cleft lip could thus be counteracted [45]. Nevertheless, the various forms of cleft lip malformations in humans are a highly complex group of multifactorial malformations in which various genetic and environmental factors interact [45].

Pierre Robin Sequence

The Pierre Robin sequence (PRS) is characterized by a small lower jaw (micrognathia), a posterior displacement of the tongue (glossoptosis) and the associated obstruction of the upper airways. In addition, there is usually a cleft palate and bimaxillary retrognathia with reduced sagittal length of the mandible and maxilla [49]. The exact genetic mechanism of PRS is still unknown; a disturbed migration of CNC cells into the first two pharyngeal arches is assumed [49,50]. Mutations of Sox9 and of the bone morphogenetic protein (BMP) signaling pathway are discussed as potential causes [51].

Hemifacial Microsomia

Hemifacial microsomia is characterized by disorders of the development of the upper jaw, lower jaw, outer ear and middle ear, as well as of the trigeminal and facial nerves on the affected side of the face [52]. Cardiac, vertebral and central nervous malformations are also possible. The phenotypic expression of these developmental disorders is highly variable [49]. Disturbed blood flow during the morphogenesis of craniofacial tissues or localized ischemia is often discussed as a cause; besides such environmental factors, genetic influences are also suspected [52]. Hemifacial microsomia can thus be regarded developmentally as a malformation of the first two pharyngeal arches, i.e., a disturbed morphogenesis of CNC derivatives. Mutation analyses in affected patients have revealed changes in various genes involved in the development and vascularization of CNC cells. Nevertheless, hemifacial microsomia seems to be a heterogeneous, multifactorial disease pattern [52].

Goldenhar Syndrome

Goldenhar syndrome, also known as oculo-auriculo-vertebral syndrome, can be understood as an extended spectrum of hemifacial microsomia. The malformation complex is characterized by impaired development of the eyes, ears, lips, tongue, palate, jaw and zygomatic bone, as well as by dental deformities. It is caused by a malformation of the first and second pharyngeal arches [53]. In addition, ocular dermoid cysts, spinal anomalies and malformations of internal organs, such as the heart and kidney, can occur to varying degrees. Although various genetic changes have been detected in patients with Goldenhar syndrome, no clear genetic cause has been identified. A combination of genetic and environmental factors is probably pathogenetically relevant. For example, it has been discussed that abnormal development of the vascularization in the fourth week of pregnancy, when the first two pharyngeal arches develop, could be the cause [53].

Conclusions

Maxillofacial tissues are characterized by an embryology that is unique in the human organism.
This explains many of the peculiarities of maxillofacial tissues, such as increased bone regeneration and good modulation ability, but also the occurrence of malformations. The complex development via the cranial neural crest and the pharyngeal arches, involving extensive cell migration and multiple redifferentiations, explains the relatively frequent occurrence of craniofacial malformations.
Theoretical evaluation of Lanthanide Binding Tags as biomolecular handles for the organization of Single Ion Magnets and spin qubits

Lanthanoid complexes are amongst the most promising compounds both in single ion magnetism and as molecular spin qubits, but their organization remains an open problem. We propose to combine Lanthanide Binding Tags (LBTs) with recombinant proteins as a path towards an extremely specific and spatially resolved organisation of lanthanoid ions as spin qubits. We develop a new computational subroutine for the freely available code SIMPRE that allows an inexpensive estimate of quantum decoherence times and qubit–qubit interaction strengths. We use this subroutine to evaluate our proposal theoretically for 63 different systems. We evaluate their behavior as single ion magnets and estimate both the decoherence caused by the nuclear spin bath and the interqubit interaction strength due to dipolar coupling. We conclude that Dy3+ LBT complexes are expected to behave as SIMs, but Yb3+ derivatives should be better spin qubits.

Introduction

The spatially controlled positioning of functional building blocks by self-assembly is one of the fundamental visions of nanotechnology. The organisation of devices with a resolution scale below the nanometer and total sizes above the micrometer is a characteristic of molecular biology. Since the first use of ferritin as a template for magnetic nanoparticles, 1 major steps towards this goal have been achieved as part of what has been called synthetic biology. 2 DNA has been used as a programmable building block, 3 while short, self-assembling peptides have been shown to form a variety of stable nanostructures which have already been used for the rational design of functional devices. 4 This bio-nanotechnological strategy will eventually be applied for quantum computing purposes, where the challenging goal of scalability requires the ability to organize different kinds of quantum building blocks. Obviously, the use of biopolymers to control quantum effects in complex organized systems is still a long-term goal. Nevertheless, the nascent field of quantum biology, devoted to the study of coherent quantum effects in processes as diverse as photosynthesis in plants, 5 geolocation in birds 6 and possibly smell in insects, 7 shows that this complex organization of quantum coherent processes already takes place in nature. The challenge is then to achieve this artificially.

Spin-carrying metalloproteins, which are already being studied by manipulating their quantum states via pulsed EPR, 8 are promising systems for this synthetic quantum biology. We will focus on magnetic lanthanoid complexes because of the interest they awaken both as Single Ion Magnets (SIMs) and as spin qubits, that is, because of their favourable magnetic and quantum properties. 9 A key experimental advance in this context is Lanthanide Binding Tags (LBTs). These are oligopeptides based on calcium-binding motifs of EF-hand proteins 10 that have been designed to interact very specifically with lanthanoids. 11 These new building blocks constitute a key advance towards an interdisciplinary region, since an LBT can be seen not only as a small part of a protein but also as a standard coordination complex. LBTs are easily incorporated at the DNA level into any recombinant protein, a potential route to highly complex organisation, e.g. via histones (Fig. 1).
14 The first goal of this work is the development of a new computational subroutine to allow an inexpensive estimate of decoherence times and interaction strengths. A tool of this kind is lacking in the field: while wavefunctions are routinely calculated, only general arguments are given concerning the tunneling gap and its relation to decoherence, and detailed numbers are rarely offered. A second purpose is to theoretically explore the possibility of using LBTs to organise the lanthanoid ions, either for their use as SIMs or as spin qubits. This necessarily includes obtaining realistic estimates for decoherence times and interaction strengths that pave the way for the first experimental studies.

Results and discussion

We used SIMPRE, a tool commonly applied in the field of magnetic lanthanoid complexes, to study Ln-LBT complexes for the nine published crystallographically different LBT coordination environments (see Methods: Structures), using Ln = Nd, Tb, Dy, Ho, Er, Tm, Yb. Five of them correspond to LBTs designed for the exclusion of water and have analogous 8-coordinated environments: two bidentate carboxylates, three monodentate carboxylates, and a carbonyl group belonging to the LBT backbone. The remaining four, designed for an efficient interchange of water for their use as NMR contrast agents, are remarkably diverse, with a variable number of carboxylate groups and water molecules in the vicinity of the lanthanoid.

First we analyzed the aptitude of these systems as single ion magnets. Next, we started from this set of energy levels and wavefunctions and used a specially crafted version of SIMPRE to calculate the expected quantum decoherence due to the interaction of the electron spin qubit with the nuclear spins in the biomolecule, both in the native form and after deuteration (see Methods: Calculations). Finally, we calculated the interqubit coupling, which needs to be strong enough for two-qubit quantum gates to happen within the coherent time window.

Single molecule magnet behavior

We obtained the energy level scheme and the wavefunctions in a moderate field (0.32 T), to get rid of the hyperfine crossings due to the interaction with the lanthanoid nuclear spins, which our current model does not take into account. Note that this method cannot by itself predict SMM behavior, as this depends on many effects not included in the model, such as Raman processes. Nevertheless, the energy level scheme can be related to the single ion magnet potential, considering that it is more common to find SIM behavior in two-level systems with a marked Ising character. Thus, we calculated (a) the expectation values 〈Jz〉, which should be a maximum for an Ising character, and (b) the ratio between the energy barrier (Ω) to the first excited state and the gap (Δ) within the ground doublet, which should be a maximum for a two-level system. Note that large tunneling energies tend to result in fast temperature-independent spin dynamics, while the presence of low-lying excited states tends to favour fast thermal relaxation. Thus, complexes with low Ω/Δ ratios are not expected to present slow relaxation of the magnetization. We represent the results grouped by metal in box-and-whisker diagrams, which graphically divide the data into four quartiles, in order to give a visual idea of the expected character of LBT complexes and of the robustness of these expectations. The expectation values 〈Jz〉 are a maximum for Dy and Er (Fig. 2; other metals in Fig. S1†).
The most favourable Ω/Δ ratios are obtained for Yb, Nd and Dy (Yb and Dy in Fig. 3; other metals in Fig. S2†).

In all the structures studied, the expectation values 〈Jz〉 stay close to the maximum theoretical values for both Dy and Er; that is, an almost pure Ising behavior is obtained for those two metals. This contrasts with the rest of the series, where a dispersion of behaviors is obtained. The second relevant parameter is the energy level scheme, here summarized in the Ω/Δ ratio.

Decoherence from the nuclear spin bath

We work with the same set of energy level schemes and wavefunctions at 0.32 T, which is a typical value for the X-band in a pulsed EPR setup. As quantitative estimates of the qubit potential of the different complexes, we take into account both (a) the previously calculated Ω/Δ ratio, which in this context quantifies the separation of the qubit states from the rest of the spectrum, and (b) the decoherence time τ considering only the coupling with the nuclear spin bath. As this is controlled by the tunneling gap, it is expected to be roughly proportional to the coupling with magnons, which are the second source of decoherence. The third main source of decoherence, namely the coupling with phonons, is related to the rigidity of the coordination environment, and is thus expected to be approximately constant. Again, we represent the results grouped by metal in box-and-whisker diagrams, in order to give a visual idea of the expected quality of LBT complexes as spin qubits and of the robustness of these expectations. The estimated ranges of τ are wide and tend to reach higher values in the best cases for Yb, Tb, Tm and Ho, while being consistently narrow and grouped around low values for Dy, Er and Nd (Tb and Yb in Fig. 4; other metals in Fig. S3†). According to this methodology, and because of the different magnetogyric ratios of H and D, deuteration extends the decoherence time by a factor of 15.2 in all cases, meaning that all calculated times in Fig. 4 and S3† can be extended by up to an order of magnitude, but only if fully deuterated peptides are used. The vast number of experimental possibilities, ranging from labeling of the closest protons to perdeuteration, results in a range of calculated decoherence times.

Interqubit coupling

Let us use 2OJR, a polypeptide with a double-lanthanide-binding tag (see Methods), as an example to estimate the expected order of magnitude of the interqubit coupling in these kinds of systems. In 2OJR, the two lanthanoid ions bound to the same polypeptide are at a distance of r = 19.1 Å. Because of the nature of dipolar coupling, the relative orientation between the magnetic axes and the field sets a vanishing lower bound for the coupling between two ions. Therefore, we estimate here the upper bounds for interqubit coupling in double-lanthanide-binding tags, assuming an optimal alignment between the magnetic axes of two neighbouring magnetic complexes. We do this for two extreme examples: Tm and Dy. Both ions present an adequate energetic isolation of the ground doublet, (Ω/Δ)Tm = 7.97 and (Ω/Δ)Dy = 26.69, but, as discussed below, are practically opposite in the nature of their ground doublets.
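Before turning to the specific numbers for Tm and Dy below, the order of magnitude of such an upper bound can be reproduced with a short back-of-the-envelope sketch. This is not the SIMPRE subroutine itself: the 〈Jz〉 values used here are illustrative placeholders (the actual expectation values are those of Table 1), and the factor 2 corresponds to the optimal head-to-tail alignment assumed for the upper bound.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A
MU_B = 9.274e-24     # Bohr magneton, J/T
EV = 1.602e-19       # joules per electronvolt

def dipolar_upper_bound(gJ, Jz0, Jz1, r_angstrom):
    """Upper bound for the dipolar coupling between two identical lanthanoid
    qubits a distance r apart, from the moment difference between the two
    qubit states |0> and |1> (on-axis, head-to-tail geometry, factor 2)."""
    r = r_angstrom * 1e-10
    dm = gJ * MU_B * abs(Jz0 - Jz1)              # moment difference, J/T
    dB = MU0 / (4 * np.pi * r**3) * 2 * dm       # field difference, tesla
    m_max = gJ * MU_B * max(abs(Jz0), abs(Jz1))  # neighbour moment, J/T
    return dB, m_max * dB / EV * 1e6             # (tesla, micro-eV)

# Dy3+ (gJ = 4/3) at the 2OJR distance r = 19.1 A; <Jz> = +/-7.3 is illustrative
dB, coupling = dipolar_upper_bound(gJ=4 / 3, Jz0=7.3, Jz1=-7.3, r_angstrom=19.1)
print(f"field difference ~ {dB * 1e3:.1f} mT, coupling upper bound ~ {coupling:.1f} ueV")
```

With these placeholder inputs the sketch yields a field difference of a few mT and a coupling of a few μeV, consistent with the order of magnitude quoted in the text.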
In the case of Tm[2OJR], the ground doublet has an easy-plane character, and as a result the expectation values are dominated by the x orientation (see Table 1). In turn, these result in the following differences (Δ01Hα) between the fields (Hα) created by the two qubit states |Ψ0〉, |Ψ1〉 of a Tm site on a neighbouring Tm site: Δ01Hx = 1.10 mT, Δ01Hy = 0.99 mT, Δ01Hz = 0.29 mT. This means an upper limit of 0.198 μeV for the interqubit coupling.

For Dy[2OJR], the ground doublet has a marked easy-axis character, with almost maximal expectation values 〈Jz〉 (see Table 1). These relatively high magnetic moments result in correspondingly larger differences Δ01Hα between the fields created by the two qubit states |Ψ0〉, |Ψ1〉 of a Dy site on a neighbouring Dy site: Δ01Hx = 4.20 mT, Δ01Hy = 2.17 mT, Δ01Hz = 1.49 mT. This means an upper limit of 3.54 μeV for the interqubit coupling.

A dipolar interaction that cannot be switched off ("always-on") in the order of the μeV means times for swap operations in the order of the nanosecond, which is also the order of magnitude for pulsed EPR operations. It is also within technologically accessible limits (1-100 GHz, i.e. 4-400 μeV or 0.05-5 K). According to this estimate, and considering decoherence times as calculated above, this approach is theoretically feasible. The ability of proteins to produce an on-demand spatial distribution of qubits means any conceivable scheme of dipolar couplings is available. Of course, in order to actually exploit polypeptide-organized qubits for a scalable quantum information processor, a new operating scheme would need to be developed. Lloyd's proposal, based on a periodic organisation of three different qubit types, would probably be a good start for this, as it is based on energetic, rather than spatial, addressing of the qubits. 15

Conclusions

Including lanthanide binding tags in recombinant proteins constitutes a very promising pathway for the engineering of highly complex quantum structures, especially given the power of combinatorial peptide libraries. 16 The calculations performed in this work allowed a general estimate of the crystal field created by these polypeptides, and thus an order-of-magnitude prediction of the magnetic and quantum properties in analogous complexes. This is needed both to guide the preparation of new LBT complexes and to prioritize the experimental study of those cases where it has not yet been possible to obtain crystals, a common problem with biopolymers. Thus, out of Nd, Tb, Dy, Ho, Er, Tm and Yb, we were able to confirm that only Dy is consistently expected to produce single molecule magnet behavior in a biological context; as LBTs are chiral, these are expected to behave as chiral magnets. We have also determined that Yb is the best spin qubit candidate, combining a good isolation of the ground doublet from the first excited state and a certain protection from dipolar decoherence. From the methodological point of view, we have developed an extension to the freely distributable tool SIMPRE which adds the capability of offering an inexpensive estimate of both (i) the decoherence time originating from the hydrogen nuclear spin bath and (ii) the through-space qubit–qubit interaction strength. It has to be remarked that this is a first effort and that more refined computational methods will need to be developed to calculate all sources of decoherence, in particular phonon-caused decoherence.
Methods

Structures: from X-ray to coordination sphere

The structures used for the SIMPRE calculations were downloaded from the Protein Data Bank (PDB) and are identified by their PDB IDs, as follows.

1TJB is a 2.0 Å resolution X-ray crystal structure of a 17-residue lanthanide-binding peptide, complexed with Tb3+, which excludes water molecules from the primary coordination sphere. 17 There are two crystallographically independent metal sites, corresponding to two separate copies of the same LBT.

2OJR is a construct of a double-lanthanide-binding tag as an N-terminal fusion of ubiquitin complexed with Tb3+, with a 2.60 Å resolution. 18

3LTQ is a 2.1 Å X-ray crystal structure of a construct containing an LBT insert between the middle S-loop residues of interleukin-1β complexed with Tb3+. 19 There is only one crystallographically independent metal site.

3VDZ is a 2.4 Å X-ray crystal structure of a modified dLBT-ubiquitin chimera complexed with Gd3+. The LBT sequence was modified to (a) enhance the exchange of water molecules in the vicinity of the magnetic site, (b) keep a high affinity for the lanthanoid and (c) favour crystallization. There are four crystallographically independent metal sites, corresponding to two separate copies of a "dinuclear" peptide. 20

For all metal sites in 1TJB, 2OJR and 3LTQ, we considered eight oxygen atoms in the coordination sphere: two bidentate carboxylates, three monodentate carboxylates, and a carbonyl group belonging to the LBT backbone.

In contrast, the coordination spheres in 3VDZ are more diverse and less well-defined, to the point where the chosen coordination number is somewhat arbitrary. An r < 3.5 Å criterion results in eight oxygen atoms in the coordination sphere. One of these always belongs to a backbone carbonyl group, while the rest, depending on the case, come from three to five carboxylate groups and either zero or one water molecule.

Hydrogen atoms were not resolved crystallographically, and their positions were instead estimated with the Mercury software. As our purpose is to estimate the order-of-magnitude effect of a hydrogen cloud on peptide-coordinated lanthanoid ions, this level of approximation is sufficient.

Lanthanoid complexes are commonly isostructural to each other, with the metal-ligand bond distance being the main structural parameter that varies with the nature of the metal. Thus, we adapted the coordination environment from the original Ln = Tb/Gd structures to the complete series Ln = Nd, Tb, Dy, Ho, Er, Tm, Yb by changing the radial coordinates in the coordination sphere according to the variation in the ionic radii (see Table S1†).

Calculations: expectation values, decoherence times

The crystal field Hamiltonian was solved with SIMPRE, 21 building upon previous results so that we are able to work with no adjustable parameters (see Table S2†). 22 A minor modification allows the introduction of the magnetic field as a diagonal component in the Hamiltonian. We use the energy level structure in the presence of this field to define, for the purposes of this paper, Δ as the energy difference between the ground state and the first excited state, and Ω as the energy difference between the ground state and the second excited state. This has the advantage of allowing an automated processing of the data. In terms of evaluating Two-Level Systems (TLSs), this simple definition means that those among the non-Kramers systems which do not actually present a TLS are instead treated as merely low-quality TLSs because of their very low Ω/Δ ratio.
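As a minimal illustration of this automated processing, the following sketch computes Δ, Ω and the Ω/Δ ratio from a sorted list of eigenvalues. The level scheme used here is an invented placeholder, not actual SIMPRE output, and the cm⁻¹ unit is an assumption.

```python
import numpy as np

def tls_descriptors(energies_cm):
    """Return (Delta, Omega, Omega/Delta) from a list of eigenvalues:
    Delta = E1 - E0 (tunneling gap) and Omega = E2 - E0 (barrier), as
    defined in the text. Energies assumed in cm^-1, ground state lowest."""
    e = np.sort(np.asarray(energies_cm, dtype=float))
    delta = e[1] - e[0]
    omega = e[2] - e[0]
    ratio = omega / delta if delta > 0 else float("inf")
    return delta, omega, ratio

# invented placeholder level scheme, NOT actual SIMPRE output
delta, omega, ratio = tls_descriptors([0.0, 0.8, 21.4, 35.0])
print(f"Delta = {delta:.2f} cm^-1, Omega = {omega:.2f} cm^-1, Omega/Delta = {ratio:.1f}")
```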
Note also that the current version of SIMPRE automatically chooses the orientation of the coordinate axes that corresponds to the simplest expression of the wavefunction, and applies the magnetic field in this z direction. We maintain the standard definition of Δ as an extradiagonal term in the qubit basis, meaning that we redefine the qubit states and that they do not necessarily correspond to the spin being aligned with the easy axis of magnetization. In turn, this results in suboptimal decoherence times. As a test case, we chose Yb[3VDZ4], as Yb 3+ is the ion for which we calculated the longest decoherence times and Yb[3VDZ4] is the LBT complex where Yb 3+ is expected to present the most marked Ising behavior: M J = +7/2 accounts for 92% of the wavefunction, resulting in an expectation value 〈J z 〉 = 3.2 and a decoherence time due to the nuclear spin bath of τ = 5.2 × 10 −5 s, which is among the lowest calculated for Yb[LBT] complexes. After a 90° rotation of the molecule, the field is along a hard axis and therefore there is a quenching of the expectation value, 〈J z 〉 = 0.33. As a consequence, the calculated decoherence time rises to τ = 5.7 × 10 −4 s, which is among the highest calculated for Yb[LBT] complexes. This is comparable with Yb[3VDZ5], the LBT complex described as having an easy-plane behavior, with M J = +1/2 accounting for 92% of the wavefunction, resulting in an expectation value 〈J z 〉 = 0.3 and a decoherence time due to the nuclear spin bath of τ = 4.7 × 10 −4 s.

SIMPRE was further adapted to extract the expectation values 〈J α 〉 (with α = x, y, z) from the wavefunctions, using the Pauli matrices σ α . As we mainly intend to distinguish between Ising and non-Ising character here, in Table 1 we report the resulting expectation values 〈J xy 〉 and 〈J z 〉. Moreover, this specially crafted version of SIMPRE also takes the coordinates of the hydrogen atoms as an input. Of course, there is an effectively infinite number of hydrogen nuclei in a crystal structure, so a cutoff radius for the hydrogen nuclei to be included in our calculation is needed. We neglect every hydrogen nucleus which, on average, is expected to produce less than 1/100th of the effect produced by the hydrogen nucleus closest to the metal. As the hyperfine interaction falls off with the third power of the distance, this means the cutoff radius is a factor of 100^(1/3) ≈ 4.64 farther away than the nearest hydrogen atom.

From the expectation values 〈J x 〉, 〈J y 〉, 〈J z 〉 and these coordinates, the dipolar magnetic field (H) felt by each nucleus and the hyperfine interaction energy (E) can be trivially calculated for each of the two states of the qubit:

H = (μ 0 g μ B / 4πr 3 ) (J − 3(J·r̂)r̂)

Of course, by including the nearest magnetic ion, this procedure can be immediately used to estimate the (dipolar) interqubit interaction strength. From the set of hyperfine interactions, we also estimate the nuclear spin bath decoherence time using the standard equation (eqn (3) of ref. 23). This estimate of decoherence depends on the sum, over each proton i, of the energy differences ω i = E 0 − E 1 between the two qubit states |0〉, |1〉; the decoherence time is then estimated as a function of the tunneling gap Δ and this energy sum. Note that this model is only valid at low temperatures and in the cases where the tunneling splitting is much larger than the hyperfine couplings, something that is generally verified for lanthanoid ions and fields of the order of hundreds of mT. 25

For the representation of the results, box plots 24 were generated with Wavemetrics' IgorPro and include the full range of values.
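To make the geometric ingredients above concrete, here is a minimal numerical sketch of the point-dipole field expression and of the 100^(1/3) cutoff radius for the hydrogen bath. The g value, expectation values and distances below are illustrative placeholders, not values from the structures discussed, and only the field magnitude matters for Δ 01 H-type estimates.

```python
import numpy as np

MU0_OVER_4PI = 1e-7   # T*m/A
MU_B = 9.274e-24      # Bohr magneton, J/T

def dipole_field_tesla(g_j, j_expect, r_vec):
    """Dipolar field at a nucleus at displacement r_vec (metres) from the
    ion, following the point-dipole expression in the text:
    H = (mu0*g*muB / (4*pi*r^3)) * (J - 3*(J.rhat)*rhat)."""
    m = g_j * MU_B * np.asarray(j_expect, dtype=float)   # moment, J/T
    r = np.linalg.norm(r_vec)
    rhat = np.asarray(r_vec, dtype=float) / r
    return MU0_OVER_4PI * (m - 3.0 * np.dot(m, rhat) * rhat) / r**3

# Illustrative Dy-like easy-axis state (g_J = 4/3, <J> mostly along z),
# evaluated at a hypothetical 2 nm ion-nucleus separation:
H = dipole_field_tesla(4.0 / 3.0, [0.1, 0.1, 7.4], [0.0, 0.0, 2.0e-9])
print("field magnitude:", np.linalg.norm(H) * 1e3, "mT")   # few-mT scale

# Hydrogen-bath cutoff: keep nuclei closer than r_min * 100**(1/3),
# since the r^-3 falloff makes farther nuclei contribute < 1/100 each.
r_min = 2.5e-10   # hypothetical nearest-proton distance, m
print("cutoff radius:", r_min * 100 ** (1 / 3) * 1e10, "Angstrom")
```

With these placeholder inputs the field comes out at the few-mT scale, in line with the Δ 01 H values quoted for the LBT structures.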
Fig. 1 (a) Coordination environment created by a 17 amino-acid-long LBT. Only the coordinating lateral residues are shown; α-carbons are labeled. (b) Polypeptide chain of the LBT without lateral residues; the α-carbons of the amino acids involved in the coordination sphere are labeled. (c) Each nucleosome (8 histones) could organise up to 8 LBTs (4 shown). (d) Nucleosome Positioning Sequences 12 could organise a scalable sequence of nucleosomes. (c) & (d) Reproduced with permission from ref. 13.

Fig. 2 Box plots with the full distribution of expectation values 〈J z 〉 for diverse LBT structures substituted by Dy (left) and Er (right).

Fig. 3 Box plots with the full distribution of Ω/Δ ratios for diverse LBT structures substituted by Yb (left) and Dy (right).

Fig. 4 Box plots with the full distribution of estimated decoherence times τ for diverse LBT structures substituted by Tb (left) and Yb (right).

Table 1 Main M J contributions to the wavefunction of the two qubit states for Dy[2OJR] and Tm[2OJR], and the resulting expectation values 〈J xy 〉, 〈J z 〉
The Prediction of the Mechanical Properties for Dual-Phase High Strength Steel Grades Based on Microstructure Characteristics

The decrease of emissions from vehicle operation is connected mainly to the reduction of the weight of the car body. The high strength and good formability of dual phase steel grades make them well suited for the structural parts of the car body's safety zones. The plastic properties of dual phase steel grades are determined by the ferrite matrix, while the strength properties are governed by the volume and distribution of martensite. The aim of this paper is to describe the relationship between the mechanical properties and the parameters of structure and substructure. The heat treatment of low carbon steel X60, low alloyed steel S460MC, and dual phase steel DP600 produced states with a wide range of secondary-phase volume fractions and grain sizes. The mechanical properties were determined by tensile testing; the volume fraction of secondary phases and the grain size were measured by image analysis. It was found that increasing the annealing temperature increased the volume fraction of the secondary phase and refined the ferrite grains. Regression analysis was used to derive equations for predicting the mechanical properties from the volume fraction of the secondary phase and the grain size, as functions of the annealing temperature. The hardening mechanism of the dual phase steel grades in the obtained states was described by the relationship between the strain-hardening exponent and the density of dislocations. This allows the design of dual phase steel grades that are "tailored" to the needs of automotive industry customers.

Introduction

For reasons of environmental protection, an increased emphasis has been placed on the reduction of exhaust emissions from car use in recent years. The reduction in vehicle weight is considered to be one of the decisive factors for improving fuel consumption and hence reducing emissions [1,2]. Considerable potential for vehicle weight reduction lies in the body, which accounts for about 25% of the total mass of cars. In the segment of middle and lower vehicle classes, the base material is steel. In the higher-end segment, the concepts of an aluminum-based light alloy body or a combined body of steel, aluminum, and composite materials are applied. When aluminum alloys or composite materials are applied, the weight reduction is achieved, though at the expense of higher costs.
The intention of the automotive industry is to produce vehicles not only with reduced weight but also with a high level of safety characteristics such as strength, stiffness, and deformation work [3][4][5]. In comparison to other materials, the advantage of steel grades is the variability in performance properties (strength, stiffness, energy absorption ability, corrosion resistance of galvanized sheets, and so forth), technological properties (formability and weldability by application of various technologies), their recyclability at the end of the car's lifetime, and the lower production costs. To meet the often contradictory demands of the automotive industry on the utility properties, the steel industry is constantly developing new concepts of high-strength steel grades (DP, dual phase steel; CP, complex phase steel; TRIP, transformation induced plasticity steel; TWIP, twinning induced plasticity steel; and so forth). Thanks to their wide variety of combinations of strength, plastic properties (yield strength R p0.2 = 280-700 MPa, ultimate tensile strength R m = 600-1000 MPa, ductility A = 12-34%, strain-hardening exponent n = 0.09-0.21, and normal anisotropy ratio r = 0.9-1), and cost, dual phase steel grades take the largest share of all the known high-strength steel sheets used in the construction of motor vehicles [6,7].

Dual phase steel grades (DP) consist of a fine-grained ferrite matrix with dispersed islands of martensite or lower bainite and, often, with a certain share of residual austenite. The soft ferritic structure is the carrier of the plastic properties and the hard particles of the martensitic phase are the carriers of the strength properties. The share of martensite in dual phase steel grades ranges from 10 to 30%. With a greater share of martensite in the ferrite matrix, clustering of the martensitic islands may occur, which deteriorates their combination of strength and plastic properties [8][9][10][11][12].

The dual phase ferritic-martensitic structure can be obtained from any low-carbon steel by a controlled rolling or intercritical annealing method, provided that the transformation of austenite to pearlite is avoided [13][14][15]. Pearlite formation is suppressed by the Cr and Mo elements which, at the same time, support the formation of martensite. Further enhancement of hardenability can be achieved by the addition of Mn, Si, and P. Silicon inhibits pearlite and carbide formation, while Nb ensures ferritic grain refinement and raises the non-recrystallization temperature Tnr [16].
Mechanics of Plastic Flow during Deformation

The mechanical properties of dual phase ferritic-martensitic steel depend on the chemical composition, the volume fraction of martensite, the volume fraction of ferrite, the carbon content in martensite, the grain size of martensite, and their strength [17,18]. To describe the behavior of dual phase ferritic-martensitic steel under plastic deformation, various constitutive equations have been proposed [9,15,19-22]. Increasing the intercritical temperature increases the amount of austenite generated, and this is transformed to martensite during rapid cooling. Thus, the strength and hardness of the material increase as well. The carbon content in martensite is larger for dual phase steels with low volume fractions of martensite. Conversely, the carbon content in martensite decreases when the volume fraction of martensite increases. The carbon content in martensite controls the phase hardness and influences the final properties of the material. By controlling the metallurgical processes, it is possible to reach ferritic-martensitic structures with volume fractions of martensite from 35 to 50% with a wide combination of strength and plastic properties [17,18].

The effective use of dual phase steels in the automotive industry requires a better understanding of how they behave in crashes, as well as how they behave when processed by stamping into the structural parts of the safety zones. Nowadays, numerical simulations of crash tests and metal forming processes, based on the Finite Element Method, are widely used to predict the deformation behavior of materials. Thus, to describe the material behavior under deformation, the following constitutive equations are used [23]: the Hollomon law

σ s = K φ i ^n (1)

and its pre-strain form

σ s = K (φ 0 + φ i )^n (2)

where σ s is the true stress, K is the material constant, n is the strain-hardening exponent that expresses the intensity of the strain-hardening and the ability of the material to deform uniformly, φ i is the true strain φ i = ln(1 + ΔL i /L 0 ), and φ 0 is the pre-strain [24].
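In practice, K and n are usually obtained by linear regression on the logarithmic form of Equation (1), ln σ = ln K + n ln φ. The short sketch below demonstrates that standard fit on synthetic data; the chosen values n = 0.18 and K = 1000 MPa are invented for illustration and are not measurements from this paper.

```python
import numpy as np

# Fit the Hollomon law (Eq. (1)) to a stress-strain record via its
# log-linear form ln(sigma) = ln(K) + n*ln(phi).
phi = np.linspace(0.01, 0.12, 25)        # true plastic strain (synthetic)
sigma = 1000.0 * phi ** 0.18             # synthetic "measured" true stress, MPa

n_fit, ln_k = np.polyfit(np.log(phi), np.log(sigma), 1)
print(f"n = {n_fit:.3f}, K = {np.exp(ln_k):.0f} MPa")  # recovers 0.18 and 1000
```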
These models can be used to prepare the production of DP steel grades with precisely defined, "tailor-made" properties for the components of the vehicle's deformation zones at the front and side impact. When selecting material for the car-body safety zones, the main criterion is the resistance to deformation (that is, the deformation work) consumed in the crash. This can be determined from the tensile test record σ s -φ (Figure 1) as

W = V 0 ∫ σ s dφ, integrated from φ 0.002 = 0.002 to φ UE (3)

where V 0 is the specimen volume on the initial length L 0 , φ UE is the uniform true strain (true strain at tensile strength), and φ 0.002 = 0.002 is the true strain at yield strength. The parts of the body deformation zones are elastically and plastically deformed during impact and during their production. However, crash tests are only concerned with plastic deformation. After we insert Equation (1) into Equation (3) and make adjustments, we get

W = V 0 K ∫ φ i ^n dφ (4)

and after integration and adjustment of Equation (4) we get

W = V 0 K (φ UE ^(n+1) − 0.002^(n+1)) / (n + 1) (5)

The values of the material constant K and the strain-hardening exponent n can be determined from the tensile test record by regression analysis. However, to gain a better understanding of the mechanics of the deformation process, the strain-hardening exponent n and the material constant K can also be determined from the mutual relations of the mechanical properties of metallic materials. If Equation (1) (or, by analogy, Equation (2)) is subjected to a logarithmic operation, we get the following linear dependence:

ln(σ s ) = ln(K) + n ln(φ i ) (6)

Expressing the contribution to strain-hardening at the tensile strength R m (R m refers to the ultimate tensile strength) with respect to the yield strength, depending on the uniform deformation, in the interval from φ = 0.002 to φ UE [25], gives

ln σ(φ UE ) − ln σ(φ 0.002 ) = n (ln φ UE − ln 0.002) (7)

which yields the strain-hardening exponent n:

n = ln[σ(φ UE )/σ(φ 0.002 )] / ln(φ UE /0.002) (8)

The n value is not constant throughout the uniform deformation, so it is necessary to expect a certain uncertainty in the calculation of the deformation work and the actual strength, especially in the case of minor strains. The exact determination of the strain-hardening exponent requires the division of the uniform deformation region into several intervals and the expression of the strain-hardening exponent as a function of deformation (Equation (9)), where n 0 is the strain-hardening exponent found in the first interval (for example, for φ i between 0.002 and 0.02) and p is a constant determined by approximating the dependence of the strain-hardening exponent on deformation over the individual intervals.

From Equation (6), it follows that the material constant K is

K = σ s / φ i ^n (10)

which, evaluated at the tensile strength, upon adjustment gives

K = σ(φ UE ) / φ UE ^n (11)

The above-mentioned mechanical properties of materials (the yield strength R e , the tensile strength R m , the material constant K, the strain-hardening exponent n, the maximum value of uniform deformation φ UE , and so forth) are governed by the internal structure of the material, which, in turn, depends on the chemical composition of the steel and on its production technology. The production of "tailored" or "customized" steel grades with exactly defined properties requires knowledge not only of the above-mentioned relationships that determine the mechanical properties but also of the relationships between the structural parameters and the mechanical properties of the metallic materials.
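A worked example of Equations (3), (8) and (11), as reconstructed above: the sketch below computes n, K and the deformation work from a set of illustrative strength values. The inputs (R e = 400 MPa, R m = 650 MPa, A g = 12%, V 0 = 240 mm³) are made up for demonstration, not measurements from this paper.

```python
import math

def hollomon_params(re_mpa, rm_mpa, a_g_percent):
    """n and K from yield/tensile strength and uniform elongation, via the
    log-linear Hollomon relations (Eqs. (6)-(8) and (11))."""
    phi_ue = math.log(1 + a_g_percent / 100.0)     # true strain at R_m
    sig_e = re_mpa * math.exp(0.002)               # true stress at yield
    sig_m = rm_mpa * (1 + a_g_percent / 100.0)     # true stress at R_m
    n = math.log(sig_m / sig_e) / math.log(phi_ue / 0.002)
    K = sig_m / phi_ue ** n
    return n, K, phi_ue

def deformation_work(v0_mm3, K, n, phi_ue):
    """Eq. (5): W = V0*K*(phi_UE^(n+1) - 0.002^(n+1))/(n+1); MPa*mm^3 = mJ."""
    return v0_mm3 * K * (phi_ue ** (n + 1) - 0.002 ** (n + 1)) / (n + 1)

n, K, phi_ue = hollomon_params(re_mpa=400.0, rm_mpa=650.0, a_g_percent=12.0)
print(f"n ~ {n:.3f}, K ~ {K:.0f} MPa")
print(f"W ~ {deformation_work(240.0, K, n, phi_ue):.0f} mJ")
```

For these inputs the result is n ≈ 0.15 and K ≈ 1000 MPa, which falls inside the n = 0.09-0.21 range quoted for DP grades in the Introduction.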
Because the structural parts of the safety zones are deformed at higher strain rates when a car crashes, the influence of the strain rate needs to be included in the constitutive equations. In Reference [26] the authors included the influence of strain rate and temperature in these equations. The authors of References [27,28] included the strain rate influence in the constitutive equations when predicting the deformation work. It has been found that the influence of the strain rate is low at quasistatic strain rates [28], but a notable effect was found at higher strain rates, and this is connected to the evolution of the dislocation density.

In the literature, the structural nature of the material properties of ferritic-martensitic steel grades is given a great deal of attention [29][30][31]. Based on the dislocation theory, founded on the motion of dislocations and their interaction with various obstacles (grain boundaries, precipitates, interstitial atoms, fractions of different phases, as well as other dislocations), the actual stress necessary for plastic deformation flow can be expressed in terms of the individual contributions to hardening:

σ s = σ 0 + Δσ g + Δσ S + Δσ IN + Δσ P + Δσ PR + Δσ SG + Δσ D (12)

where σ 0 is the Peierls stress necessary to overcome the lattice friction stress, the resistance of alloying elements dissolved in solid solution, the precipitation matrix resistance, and the lattice defects [32]; Δσ g is the hardening effect depending on the size of the ferritic grain; Δσ S is the effect of substitutional hardening; Δσ IN is the effect of interstitial hardening; Δσ P is the effect of precipitation hardening; Δσ PR is the effect of pearlite hardening; Δσ SG is the effect of subgrain hardening (also expressible as Δσ FMaB , the hardening through bainitic or martensitic fractions or plates); Δσ D is the dislocation density hardening effect; and so forth [33,34].

For dual phase ferritic-martensitic (DP) steel grades, Equation (12) can be reduced to

σ s = σ 0 + Δσ g + Δσ MaB + Δσ D (13)

where σ 0 is the Peierls stress [32,33], the hardening effect of the ferritic grain size follows the Hall-Petch relation

Δσ g = k y d α ^−0.5 (15)

Δσ MaB is the hardening effect of the martensitic or bainitic fractions, and the hardening effect of the dislocation density follows the Taylor relation

Δσ D = α G b ρ D ^0.5 (17)

where d α is the mean grain size of ferrite, k y is the strengthening coefficient, α is a material constant, G is the shear modulus (80,000 MPa), b is the Burgers vector, and ρ D is the dislocation density.
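The additive model of Equations (13), (15) and (17) can be evaluated directly. In the sketch below, σ 0 , k y , α and b are generic literature-style placeholder values rather than this paper's fitted constants; with α = 0.37, G = 80,000 MPa and b = 2.5 × 10⁻¹⁰ m, the Taylor prefactor αGb comes out near the 7.34 × 10⁻⁶ MPa·m used later in the text. The martensite term Δσ MaB is omitted, since its explicit form is not reproduced in this excerpt.

```python
import math

def hardening_contributions(d_alpha_um, rho_d, sigma0=55.0, k_y=17.4,
                            alpha=0.37, G=80_000.0, b=2.5e-10):
    """Additive hardening terms of Eq. (13), in MPa.

    d_alpha_um : mean ferrite grain size in micrometres
    rho_d      : dislocation density in m^-2
    sigma0 (MPa), k_y (MPa*mm^0.5), alpha, b (m): placeholder values.
    The martensite/bainite term (Delta sigma_MaB) is not modeled here.
    """
    d_mm = d_alpha_um * 1e-3
    dsig_g = k_y / math.sqrt(d_mm)              # Hall-Petch term, Eq. (15)
    dsig_d = alpha * G * b * math.sqrt(rho_d)   # Taylor term, Eq. (17)
    return sigma0, dsig_g, dsig_d

s0, sg, sd = hardening_contributions(d_alpha_um=5.0, rho_d=1.0e14)
print(f"sigma0={s0:.0f}  grain={sg:.0f}  dislocations={sd:.0f} MPa")
```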
The aims of the experimental research were to prepare materials with different volume fractions of martensite of up to 50% from commercial steels, to describe the relationship between the mechanical properties and the temperature of the intercritical annealing, and to describe the relationship between properties that are sensitive to changes of the sub-structural parameters when cold deformed.

Materials and Methods

The deformation behavior of the dual phase steel types (Equations (13)-(17)) depends mainly on the chemical composition, the volume of martensite, its morphology and distribution in the ferrite matrix, as well as the ferrite grain size d α . The aim of the experimental work was to prepare materials (states) with a martensite volume of up to 50% from commercially produced low carbon steel types of 3-3.3 mm thickness: A (X60), B (S460MC), and C (DP600), whose chemical composition and values of carbon equivalent C E calculated from Equation (18) [35] are listed in Table 1. The microstructures of the initial materials A, B, and C are shown in Figure 2.

The low carbon steel (A) microstructure is ferritic-pearlitic (Figure 2a). The low-carbon micro-alloyed steel (B) microstructure is ferritic-pearlitic with a low pearlite content (Figure 2b). The microstructure of steel C is ferritic-martensitic with a martensite volume of 24% (Figure 2c). As can be seen from Table 1, the carbon content and the average size of the ferritic grain d α are approximately equal for the as-received steels A, B, and C. The dispersion of the mean ferrite grain size under the surface and in the middle of the sheet thickness of the as-received A, B, and C materials was ±10%. Prior to the heat treatment, the starting and final temperatures of the transformation of ferrite to austenite, A C1 , A C3 , A r1 , and A r3 , were calculated according to Andrews [36] (Table 2). The non-recrystallization temperature Tnr and the critical cooling time between 800 and 500 °C for the beginning of pearlite precipitation were calculated according to the equations listed in Reference [37]. The samples of the as-received materials were prepared by single-step annealing in a flowing cantalum furnace REH-B-10-60 (Linn High Therm GmbH, Bad Frankenhausen, Germany) with a protective argon atmosphere. The samples made of material A (marked as DPA) were annealed at temperatures of 740, 790, and 840 °C. Then, considering the results reached, the samples made of materials B and C (marked as DPB and DPC) were annealed at temperatures of 750 and 820 °C (which lie between the temperatures A C1 and A C3 ), with the same 10 min hold at each temperature, followed by cooling in water at a cooling rate of 30 °C/s [38].

Samples for metallographic analysis were hot mounted in dentacrylate, wet ground (sandpaper 220-1200), and polished with diamond grit in suspension. Then, the samples were etched in 2% Nital.

The grain size d α was determined by the linear method according to the Slovak standard STN 42 0462 on an Olympus GX71 microscope. The volume fraction of secondary phases (V FSP ) was measured by the grid method (square foil 15 × 15 cm with a 1 × 1 cm grid) and by image analysis using ImageJ at a magnification of 1000× [38].
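The A C1 and A C3 temperatures referenced above are commonly estimated with the Andrews (1965) empirical relations. The sketch below uses the widely cited form of those relations; whether Reference [36] uses exactly these coefficients is an assumption, and the composition values are made up for illustration only.

```python
import math

def andrews_ac1(C=0.0, Mn=0.0, Si=0.0, Ni=0.0, Cr=0.0, As=0.0, W=0.0):
    """Andrews (1965) empirical A_C1 in degC (wt% inputs).
    Note: carbon does not enter the commonly cited A_C1 relation."""
    return 723 - 10.7*Mn - 16.9*Ni + 29.1*Si + 16.9*Cr + 290*As + 6.38*W

def andrews_ac3(C=0.0, Mn=0.0, Si=0.0, Ni=0.0, Mo=0.0, V=0.0, W=0.0):
    """Andrews (1965) empirical A_C3 in degC (wt% inputs)."""
    return (910 - 203*math.sqrt(C) - 15.2*Ni + 44.7*Si
            + 104*V + 31.5*Mo + 13.1*W)

# Hypothetical DP-like composition (wt%), for illustration only:
print("A_C1 ~", round(andrews_ac1(C=0.09, Mn=1.4, Si=0.3)), "degC")
print("A_C3 ~", round(andrews_ac3(C=0.09, Mn=1.4, Si=0.3)), "degC")
```

For this made-up composition the relations give roughly 717 and 863 °C, so the 740-840 °C annealing temperatures used above indeed fall within the intercritical A C1 -A C3 window.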
The mechanical properties of the as-received materials A, B, and C, and of the samples after annealing (DPA, DPB, and DPC), were measured by static tensile tests according to STN EN ISO 6892-1 at room temperature on a TIRAtest 2300 testing machine. These are shown in Table 3. The crosshead speed was 1 mm·min −1 and the corresponding quasistatic strain rate was 0.003 s −1 . Five specimens for each material and annealing state were tested. The specimen's shape is shown in Figure 3.

Results and Discussion

The range of annealing conditions within the temperatures A C1 -A C3 applied to the commercially available low carbon steel X60 (A), low-alloyed steel S460MC (B), and dual phase steel DP 600 (C) made it possible to obtain a wide range of microstructure states, with a martensite volume between 20.4% and 68.2% and a ferritic grain size between 4.7 and 7.7 µm, as seen in Figures 4-6, respectively. Hereinafter, these states are designated as DPA 740 , DPA 790 , DPA 840 , DPB 750 , DPB 820 , DPC 750 , and DPC 820 . In determining the volume fractions of ferrite and martensite, the ferrite fraction was evaluated as the dominant phase, while the sum of all other phases (martensite, residual austenite, and bainite) represents the fraction of the secondary phase particles (FSP). This means that the fraction of the purely martensitic phase is slightly overestimated. However, the fraction of bainite, cementite, and residual austenite in the analyzed states of DPA, DPB, and DPC was within ±3% of the distribution of the volume fraction of martensite. In the samples of the DPA 740 , DPA 790 , and DPA 840 states obtained by heat treatment from the initial material A, the volume fraction of martensite ranged between 23.4% and 68.2% and the ferritic grain size ranged from 4.7 to 7.7 µm; in the samples of the DPB 750 and DPB 820 states obtained by heat treatment from the initial material B, the volume fraction of martensite ranged between 22.4% and 58.6% and the ferritic grain size from 7.7 to 3.1 µm; and in the samples of the DPC 750 and DPC 820 states obtained by heat treatment from the initial material C, the volume fraction of martensite ranged between 23.8% and 64.4% and the ferritic grain size from 5.2 to 7.3 µm. Thus, the assumption that the states obtained from the A and C materials, with higher values of carbon equivalent, would contain greater percentages of the secondary phase fractions (shown in Table 2 and Figure 7a), depending on the annealing temperature in the range between 740 and 820 °C, has been confirmed. The dependence of the volume fraction of the secondary phase on the annealing temperature (Figure 7a) can be described by a regression model (Equation (21)).

Independent of the observed structural differences, an increased annealing temperature T IA resulted in refinement of the mean size of the ferritic grain in the examined states (Figure 7b). This tendency of grain size refinement in the annealing temperature interval between 740 and 820 °C has been described by a regression model (Equation (22)). The interaction between the fraction of the secondary phase and the size of the ferrite grains, as shown in Figure 8, is likewise described by a regression model (Equation (23)).

According to References [38,39], the increase in the secondary phase fractions of the dual phase ferritic-martensitic steel grades is mostly due to the number of grains of the secondary phase fractions rather than to their volume in the structure. Regardless of the annealing temperature, it is possible to further increase the martensite fraction in the volume and the refinement of the ferrite grain by increasing the rate of cooling [40].
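The regression coefficients of Equations (21)-(23) are not reproduced in this excerpt, but the fitting procedure itself is straightforward. The sketch below fits a line through the two DPA end points quoted above (23.4% at 740 °C and 68.2% at 840 °C); the resulting coefficients are therefore only an illustrative stand-in for Equation (21), not the paper's fitted model.

```python
import numpy as np

# DPA end points taken from the text; intermediate values not reproduced here.
T = np.array([740.0, 840.0])      # intercritical annealing temperature, degC
vfsp = np.array([23.4, 68.2])     # secondary-phase volume fraction, %

slope, intercept = np.polyfit(T, vfsp, 1)
print(f"V_FSP ~ {slope:.3f}*T_IA {intercept:+.1f}  [%]")
print("predicted V_FSP at 800 degC:", round(slope * 800 + intercept, 1), "%")
```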
The decisive criterion for the choice of steel sheets for the structural parts of the deformation zone of the car body is the deformation work, which expresses the absorption capacity during a crash. The deformation work can be determined, with greater uncertainty, from the conventional stress-strain diagram seen in Figure 1, using R m and A g /100 (Equation (24)), or with less uncertainty (more precisely) from the true stress-true strain diagram by application of Equation (3). For the obtained states, attention was focused on the analysis of the relationships between the mechanical properties (the yield strength R e , the tensile strength R m , the uniform elongation A g , the total elongation A, the material constant K, and the strain-hardening exponent n) and the parameters of the dual phase ferritic-martensitic steel structure. It follows from the measured results (Table 2 and Figure 9) that the increase in the volume fraction of the secondary phase resulted in yield strength values increased by about 80 MPa and tensile strength increased by about 78 MPa in the DPA states. For the DPB states, the yield strength increased by 92 MPa and the tensile strength by 81 MPa; for the DPC states, the yield strength increased by 85 MPa and the tensile strength by 96 MPa. The higher strength properties (yield strength, tensile strength) of the obtained states are mainly related to the volume fraction of the secondary phase. The results obtained indicate a linear dependence of the yield strength on the volume fraction of the secondary phase, described by the following regression equation:

R e = 1.876 V FSP + 352 [MPa] (25)

Similarly, the dependency of the tensile strength on the volume fraction of the secondary phase was found to be linear in the range between 20% and 50%, described by the regression equation

R m = 3.74 V FSP + 640 [MPa] (26)

No increase in tensile strength has been observed for volume fractions of the secondary phase greater than 50%. Rather, a decrease in tensile strength values has been noted. We assume that, on the one hand, martensite contributes to an increase in tensile strength due to the increased volume of the harder phase (martensite); on the other hand, the carbon content of martensite decreases with the increasing volume of martensite. As is well known, the strength of martensite is mainly determined by its carbon content. Another reason for the reduced tensile strength values may be the size of the martensite islands and the martensite distribution in the ferrite matrix [11,32,39]. At the lower annealing temperatures T IA of 740 or 750 °C, martensite was dispersed along the borders of the ferritic grains; at the annealing temperature of 820 °C, however, the martensitic fractions were larger than in the states obtained at 740 or 750 °C (Figures 4-6).

It follows from Figure 9b that the smaller the ferritic grain size (i.e., the larger d α ^−0.5 ), the larger the observed yield strength and tensile strength values. These dependences have been described by the regression model of Equation (27) for the yield strength and, similarly, in the 20% to 50% interval, by Equation (28) for the tensile strength. However, it should be noted that the tensile strength dependency on the size of the ferritic grain d α ^−0.5 is more of a tendency, because the coefficient of determination R 2 = 0.192 was low.
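Equations (25) and (26) can be used directly to estimate both strengths for a target secondary-phase content. The sketch below simply evaluates the two fitted lines from the text, enforcing the 20-50% validity window quoted for Equation (26).

```python
def strength_from_vfsp(v_fsp):
    """Yield and tensile strength (MPa) from the fitted regressions
    Re = 1.876*V_FSP + 352 (Eq. 25) and Rm = 3.74*V_FSP + 640 (Eq. 26)."""
    if not 20.0 <= v_fsp <= 50.0:
        raise ValueError("Eq. (26) was fitted for V_FSP between 20% and 50%")
    re = 1.876 * v_fsp + 352.0
    rm = 3.74 * v_fsp + 640.0
    return re, rm

re, rm = strength_from_vfsp(30.0)
print(f"V_FSP = 30%: Re ~ {re:.0f} MPa, Rm ~ {rm:.0f} MPa")  # ~408 and ~752
```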
The influence of the strengthening contributions of the individual structural parameters on the strength properties of dual phase ferritic-martensitic materials cannot be assessed separately. For this reason, attention was focused on expressing the summary influence of the structural parameters on the yield strength according to Equation (13) by the unit sum S j of the individual parameters of the structure. The unit sum S j of the parameters of the structure was determined as the ratio of the yield strengths expressed by Equations (25) and (27) with respect to the yield strength value of the reference material DPA 750 (R e = 406 MPa) (Equation (29)), where i is the number of parameters of the structure (i = 3), R e,ref is the yield strength value of the reference material, and k yαi is a constant expressing the influence of the ferritic grain size. The dependence of the yield strength on the unit sum of the parameters of the structure S iRe is given in Figure 10 and expressed through the following regression model:

R e = 433.4 S iRe − 16.9 [MPa] (30)

Then, after inserting Equation (29) into Equation (30), while taking into account the relationships given by Equations (25) and (27), and after subsequent adjustment, we arrive at the following result:

R e = 18.8 d α ^−0.5 + ... (31)

If we insert Equation (22) into Equation (31) as d α and Equation (21) as V FSP , we obtain the relation from which we can predict the yield strength as a function of the annealing temperature (Equation (32)).

To determine the deformation work, it is necessary to know the value of the uniform elongation A g . Figure 11 shows that an increasing volume fraction of the secondary phase causes the values of the total elongation and the uniform elongation to drop. A similar trend was found in Reference [32]. The uniform elongation values A g ranged from 8.8% to 16.3% and the total elongation A from 17.1% to 25.3%. At the annealing temperature of 840 °C, lower A g and A values were recorded compared to the states obtained at 750 °C.
We assume that this tendency may be related in particular to the morphology and the distribution of the secondary phase fractions. It should be noted that in most metallic materials the dependence of force on elongation, or of conventional stress on deformation, is flat in the region of the maximum uniform elongation A g . If we determine the elongation ΔL i from the conventional diagram (Figure 1) at the moment when the force reaches its maximum, the A g value is determined by the relation A g = (ΔL i /L 0 ) × 100 [%]. In that case, the value of the uniform deformation does not allow the deformation work to be determined precisely by Equation (3). The deformation work in the interval from A = 0.2% to the maximum uniform deformation A g does not express the overall deformation work of the material, as the material's resistance to deformation continues to increase even beyond A g (Figure 1). For this reason, we recommend using a reduced elongation value A gs , determined from the tolerance range as 1/3 of the difference between the total elongation A and the uniform elongation A g relative to the standard quadratic deviation of the measured values of A and A g (STDEV A, A g ), according to the six-sigma method. The reduced value of the uniform elongation A gs then replaces A g , and the corresponding true strain (real deformation) is φ gs = ln(1 + A gs /100).

Trend analyses of the dependence of the instantaneous stress value on deformation in the interval from φ = 0.002 to the maximum uniform deformation A gs (Figure 1) allow designers in the automotive industry to understand the differences in the mechanical behavior of conventional and advanced high-strength steel grades in crashes and to optimize the choice of materials for the individual "tailored" parts of the deformation zones. The slopes of the curves for the obtained DPA, DPB, and DPC states (Figure 12) show that these states exhibit a more favorable course of material resistance to deformation, and thus of the deceleration in a crash, than the initial material A. The direction, or slope, of the curves expresses the degree of steel deformability and also the intensity of the deformation resistance upon deformation. Figure 12 shows that the states obtained at lower annealing temperatures, whose strain-hardening exponent values are higher than those of the states obtained at higher temperatures, exhibit the greatest deformation resistance.
Figures 13 and 14 show that the material constant K and the strain-hardening exponent n depend on the thermomechanical history of the steel, with the strain-hardening exponent n being more sensitive to changes in the parameters of the structure than the material constant K. In physical terms, the strain-hardening exponent n determines the ability of the steel to distribute the stress along the tensile specimen. For low-carbon steel grades used in the production of complex car-body shapes, the required value of n is >0.22. The higher the n-value, the more uniform the deformation distribution, the greater the steel's resistance to deformation, and the better its formability [39][40][41]. Equation (13) allows the estimation of the real strength of the material using empirical models based only on the structural parameters of ferritic-martensitic steel grades. This model describes the deformation behavior as dependent on the volume fraction of the secondary phase, the ferrite grain size, and the lattice friction stress required for dislocation motion. Another important parameter that affects the deformation behavior of metallic materials is the density of dislocations. The density of dislocations is different for each material: states with similar combined effects of the structural parameters in model (13) may still differ in their dislocation density. We can express the density of dislocations from the actual (true) stress at the tensile strength,

σ(φ UE ) = R m e^(φ UE ) = σ 0 + Δσ g + Δσ MaB + 7.34 × 10 −6 ρ D,φUE ^0.5 (38)

and the actual stress at the yield strength:

σ(φ 0.002 ) = R e e^(0.002) = σ 0 + Δσ g + Δσ MaB + 7.34 × 10 −6 ρ D,φ0.002 ^0.5 (39)

After adjustment, we obtain the contribution coming from the density of dislocations upon deformation at the tensile strength limit:

Δσ D = σ(φ UE ) − σ(φ 0.002 ) = 7.34 × 10 −6 (ρ D,φUE ^0.5 − ρ D,φ0.002 ^0.5 ) (40)

The high values of the dislocation densities in the dual phase steel grades listed in Table 4 are not surprising, since the transformation of austenite into martensite induces large stresses in the ferrite. Near the martensite fractions, the dislocation density may be even higher (Figure 15). For example, Reference [34] states that, depending on the deformation, the dislocation density values in DP 500 can range from 1.5 × 10 14 m −2 up to 1.7 × 10 15 m −2 .
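Inverting Equations (38) and (39), as reconstructed above, gives a dislocation density from a measured strength once the baseline σ 0 + Δσ g + Δσ MaB is known. In the sketch below that baseline is a single made-up number and the input strengths are illustrative; only the 7.34 × 10⁻⁶ MPa·m prefactor comes from the text.

```python
import math

ALPHA_G_B = 7.34e-6   # Taylor prefactor alpha*G*b from the text, MPa*m

def dislocation_density(stress_eng_mpa, true_strain, baseline_mpa):
    """Invert Eq. (38)/(39): rho_D = ((sigma_true - baseline)/alphaGb)^2."""
    sigma_true = stress_eng_mpa * math.exp(true_strain)
    dsig_d = sigma_true - baseline_mpa
    if dsig_d <= 0:
        raise ValueError("baseline exceeds the true stress")
    return (dsig_d / ALPHA_G_B) ** 2   # m^-2

# Illustrative numbers (not measured values from the paper):
rho_y = dislocation_density(400.0, 0.002, baseline_mpa=330.0)
rho_m = dislocation_density(650.0, 0.113, baseline_mpa=330.0)
print(f"rho_D at yield ~ {rho_y:.1e} m^-2")
print(f"rho_D at R_m   ~ {rho_m:.1e} m^-2")
```

With these placeholder inputs the densities come out in the 10^13-10^15 m^-2 range, the same order of magnitude as the values quoted from Reference [34].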
Conclusions

The experimental work in this paper was focused on producing dual phase steel types with different volume fractions of martensite, within a range of 20-70%. This was done by changing the intercritical annealing temperatures. The obtained states were analyzed using optical microscopy and transmission electron microscopy, and the mechanical properties were measured by the tensile test. The size of the martensitic islands depended on the volume fraction of martensite. It was found that increasing the volume fraction of martensite increases the strength and lowers the ductility. The mechanical properties are strongly influenced by the morphology of the disperse martensitic phase. The predictions of the material constant K and the strain-hardening exponent n obtained from the constitutive relations agreed acceptably with the results reported in Reference [34].

The volume fraction of the secondary phase grew with the increased temperature of the intercritical annealing, and refinement of the ferritic grain appeared. The dependencies of both the volume fraction of the secondary phase and the ferritic grain size on the intercritical annealing temperature have been described by means of regression equations. Thus, the paper proposes equations predicting the yield strength, the uniform elongation, and the true stress-true strain curve depending on both the volume fraction of the secondary phase V FSP and the mean size of the ferrite grain d α . A complex relationship between the yield strength and the microstructure parameters makes it possible to describe the deformation behavior of dual phase steels during deformation in physical terms.

Based on the relationships obtained, material engineers and designers will be able to design dual phase steel grades with a wide range of strength-plastic properties, that is, "tailor-made" to the requirements of the automotive industry. The nature of the deformation behavior of the steel resides in the stress increment, expressed by the strain-hardening exponent. The results obtained show that the strain-hardening exponent n depends on the structural parameters (volume fraction of the secondary phase and grain size) and the state of the substructure (dislocation density). The proposed model might be verified and used by engineers during the selection of materials for car-body structural parts of the safety zones, especially to gain compatibility in crash situations between different classes of cars.

Figure 1. The record of the tensile test.

Figure 3. A specimen for the tensile test (unit: mm).

Figure 7. The dependencies of the structure parameters on the annealing temperature: (a) the volume fraction of the secondary phases; (b) the ferritic grain size.

Figure 8. The dependence of the ferritic grain size on the volume fraction of the secondary phase.

Figure 9. The dependence of the yield strength and tensile strength on (a) the volume fraction of the secondary phase; (b) the size of the ferritic grains.

Figure 10. The dependence of the yield strength on the unit sum of the parameters of the structure.

Figure 11. The dependence of the uniform and total elongation on (a) the volume fraction of the secondary phase; (b) the ferritic grain size.

Figure 12. The dependence of the true stress on the deformation in logarithmic coordinates.

Figure 13. The dependence of the material parameters on the volume fraction of secondary phases: (a) the strain-hardening exponent n; (b) the material constant K.
Figure 14. The dependence of the material parameters on the ferritic grain size: (a) the strain-hardening exponent n; (b) the material constant K.

Figure 15. The dislocation density in the ferrite grain near the martensitic fraction.

Table 1. The chemical composition of the as-received steels (wt %).

Table 2. The calculated temperatures of phase transformations (°C).

Table 3. The mechanical properties of as-received steels and annealed states.

Table 4. The calculated values of the material constant K, the strain-hardening exponent n, and the dislocation density.
Neuroprotective Effects of TRPM7 Deletion in Parvalbumin GABAergic vs. Glutamatergic Neurons following Ischemia

Oxidative stress induced by brain ischemia upregulates transient receptor potential melastatin-like-7 (TRPM7) expression and currents, which could contribute to neurotoxicity and cell death. Accordingly, suppression of TRPM7 reduces neuronal death, tissue damage and motor deficits. However, the neuroprotective effects of TRPM7 suppression in different cell types have not been investigated. Here, we found that induction of ischemia resulted in a greater loss of parvalbumin (PV) GABAergic (gamma-aminobutyric acid) neurons than of Ca2+/calmodulin-kinase II (CaMKII) glutamatergic neurons in the mouse cortex. Furthermore, brain ischemia increased TRPM7 expression in PV neurons more than in CaMKII neurons. We generated two lines of conditional knockout mice of TRPM7, in GABAergic PV neurons (PV-TRPM7−/−) and in glutamatergic neurons (CaMKII-TRPM7−/−). Following exposure to brain ischemia, we found that deleting TRPM7 reduced the infarct volume in both lines of transgenic mice. However, the infarct volume in PV-TRPM7−/− mice was more markedly reduced relative to the control group. Neuronal survival of both GABAergic and glutamatergic neurons was increased in PV-TRPM7−/− mice; meanwhile, only glutamatergic neurons were protected in CaMKII-TRPM7−/− mice. At the behavioral level, only PV-TRPM7−/− mice exhibited significant reductions in neurological and motor deficits. Inflammation-related markers such as GFAP, Iba1 and TNF-α were suppressed in PV-TRPM7−/− more than in CaMKII-TRPM7−/− mice. Mechanistically, p53 and cleaved caspase-3 were reduced in both groups following ischemia, but the reduction in PV-TRPM7−/− mice was greater than that in CaMKII-TRPM7−/− mice. Upstream of these signaling molecules, the Akt anti-oxidative stress signaling was activated only in PV-TRPM7−/− mice. Therefore, deleting TRPM7 in GABAergic PV neurons might have stronger neuroprotective effects against ischemia pathologies than doing so in glutamatergic neurons.

Introduction

Transient receptor potential melastatin-like-7 (TRPM7) is an ion channel protein with a serine-threonine kinase domain in its C-terminal, which is involved in various physiological and pathological processes [1,2]. In neural systems, the TRPM7 ion channel and/or its kinase domain have been shown to regulate synaptic density, plasticity and memory [3] as well as synaptic terminal function and neurotransmitter release [4,5]. Earlier investigations revealed that oxygen-glucose deprivation increases TRPM7 conductance in cortical neurons and that blocking or suppressing TRPM7 reduces cortical neuronal death [6]. TRPM7 expression is increased following focal cerebral ischemia in rats [7]. Suppression of TRPM7 expression (by shRNA) in the rat hippocampus reduces neuronal death, cognitive impairment and motor deficits [8]. Treatment with carvacrol, an inhibitor of the TRPM7 ion channel, decreases neuronal degeneration, microglial activation and oxidative stress damage following global cerebral ischemia in rats [9]. Therefore, global inhibition or suppression of TRPM7 in neural systems exerts neuroprotective effects against ischemia. However, the neuroprotective effects of TRPM7 suppression in different cell types have not been investigated. Brain ischemia in rats increases glutamate and reduces GABA release [10]. Meanwhile, thrombolytic treatment is believed to promote recovery by increasing GABA and reducing glutamate release during acute brain ischemic injury [11].
Studies show that a reduction in GABAergic synaptic transmission contributes to the upregulation of overall excitatory activity in the ischemic brain, which results in neuronal death [12]. Parvalbumin (PV) neurons are the most abundant GABAergic neurons in the cortex [13,14]. Studies show that GABAergic PV neurons are more vulnerable to brain ischemia damage than glutamatergic neurons [15] (but also see [16]). Therefore, glutamatergic and GABAergic neurons have different contributions to, and sensitivities to, ischemia pathologies. Despite the clear neuroprotective effects of TRPM7 suppression, its protective role in GABAergic vs. glutamatergic neurons has not been addressed. In the current study, we compared the neuroprotective effects of TRPM7 deletion in GABAergic PV neurons with those in glutamatergic neurons. To achieve our aim, we generated two lines of conditional knockout mice, with deletion of TRPM7 in PV neurons and in CaMKII neurons, and subjected them to brain ischemia followed by behavioral, histological, cellular and biochemical analyses.

Middle Cerebral Artery Occlusion (MCAO) Models

For the development of the MCAO models, all experimental mice (body weight of 23-28 g) were randomly assigned to a sham or brain ischemia group. Briefly, mice were anesthetized with 1.5% isoflurane in a 30% O 2 /70% N 2 mixture under a spontaneous breathing state. The left carotid arteries were exposed and isolated from the branches, and the left external carotid artery (ECA) was ligated and cut approximately 1.5 mm from the bifurcation. A nylon monofilament coated with a silicone tip was then inserted into the left ECA and advanced along the left internal carotid artery (ICA) to occlude the middle cerebral artery. Sixty minutes later, the monofilament was withdrawn to produce reperfusion. The rectal temperature of the mice was maintained at 37 °C ± 0.5 °C using a temperature-regulated heating pad during surgery. Regional cerebral blood flow (CBF) was monitored in all MCAO mice with laser Doppler flowmetry (PF5000, PERIMED, Järfälla, Sweden) to confirm the induction of ischemia. During MCAO, mice that did not reach a CBF reduction of 75% of the pre-ischemia baseline levels were excluded from the experiments. The sham group mice underwent the same anesthesia and surgical procedures except for the middle cerebral artery occlusion.

To quantify the number of cells in the optical field, ImageJ software (v1.53, NIH, Bethesda, MD, USA) was used. The cell density was then calculated as the number of cells/mm 2 . To quantify the fluorescent signal of TRPM7 immunostaining, Image-Pro Plus (v6.0.0.260, Media Cybernetics, Rockville, MD, USA) was used. All brain sections were stained, imaged and analyzed together under the same conditions. Individual cell bodies were outlined, special filters were applied to calibrate the background to the same level in all images, and then the average optical density of the TRPM7 fluorescent signal within the cell body was calculated. For the TNF-α quantitative analysis, we applied a similar procedure to that used for the TRPM7 analysis, but the quantification of the fluorescent signal was conducted over the entire optical field.
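The density quantification described above amounts to a count divided by the field area. A minimal sketch follows; the field size and counts are invented for illustration only.

```python
def cells_per_mm2(count, field_w_um, field_h_um):
    """Convert a per-field cell count into a density in cells/mm^2."""
    return count / ((field_w_um / 1000.0) * (field_h_um / 1000.0))

# Hypothetical field of 450 x 340 um with 12 PV+ somata (MCAO) vs 31 (sham):
mcao = cells_per_mm2(12, 450.0, 340.0)
sham = cells_per_mm2(31, 450.0, 340.0)
print(f"MCAO: {mcao:.0f} cells/mm^2, {100 * mcao / sham:.0f}% of sham")
```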
Determining the Core, Penumbra and Imaging Areas

Except for the infarct areas (Bregma 1 to −1.72 mm), all our quantitative analyses were done on brain sections taken from Bregma 0.5 to 1.1 mm. All images were taken within the primary somatosensory cortex or the primary motor cortex. We used the core/penumbra boundary as our reference for taking the images as follows: most of the data were obtained from images taken 0-800 µm from the boundary. p53 images were taken right across the boundary line. Caspase data were obtained from images taken 0-800 µm from the boundary, but within the core. The cortical core/penumbra boundary was determined by the obvious disruption in the structure and morphology of the targeted cortical area as observed under the microscope. For the caspase-3 and p53 quantitative analyses, we combined these experiments with NeuN staining to help determine the boundary and the core area precisely.

Neurological Deficit Score

The neurological deficit score was evaluated at 2, 4, and 6 days post-MCAO, as described before [18,19]. Briefly, the score was defined as follows: 0 points, no observable deficit; 1 point, failure to fully extend the right forepaw; 2 points, weakened grip strength of the right forepaw; 3 points, circling to the contralateral side if the tail was held, but moving in any direction if the tail was not held; 4 points, circling to the contralateral side regardless of whether the tail was held or not; 5 points, moving only after stimulation; 6 points, no response to stimulation and a low level of consciousness; 7 points, death.

Foot Fault Test

The foot fault test was performed at 2, 4, and 6 days post-MCAO. Foot fault errors were scored and calculated as described before [17,20]. Briefly, mice were placed on an elevated (by 30 cm) grid surface (L 40 × W 20 cm). The area of each of the grid openings was 4 cm 2 . During movement on the grid, the number of foot faults made by the right forelimb and the total number of right forelimb steps were counted. Each test consisted of three trials, 1 min each, with an interval of 1 min between trials. The foot faults were expressed as the percentage of errors made by the right forelimb out of the right forelimb's total steps.

Western Blot

Seven days after brain ischemia, cortical tissues from the penumbra were harvested and homogenized in RIPA buffer (Beyotime, Nantong, China) containing protease and phosphatase inhibitors (Roche, Penzberg, Germany). The homogenized tissue was centrifuged at 12,000 rpm for 15 min. The supernatants were collected and stored at −80 °C until use.

Enzyme Linked Immunosorbent Assay (ELISA)

Cortical tissue from the penumbra from the same animals used for the Western blot experiments was harvested and homogenized in cold PBS containing protease and phosphatase inhibitors. The homogenized tissue was then centrifuged at 12,000 rpm for 15 min. The supernatants were collected and stored at −80 °C until use. The TNF-α concentration was detected using the Mouse TNF-α ELISA Kit (Biolegend, San Diego, CA, USA) according to the manufacturer's instructions.

Statistical Analysis

The minimum sample size was determined by power calculation using G*Power software (v3.1) (Table S1). For comparison between two groups, the two-tailed unpaired t-test (normally distributed data), the unpaired t-test with Welch's correction (variance not homogeneous) or the Mann-Whitney test (not normally distributed data) was used, depending on the normality and variance homogeneity of the data. For comparisons among multiple groups, one- or two-way ANOVA followed by Bonferroni's post-hoc test (normally distributed data) or the Kruskal-Wallis test followed by the Benjamini post-hoc test (not normally distributed data) was used. All data are presented as mean ± SEM. A p-value < 0.05 was considered statistically significant.
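Both the foot-fault readout and the two-group test selection described above are easy to encode. The sketch below is an illustrative SciPy implementation with synthetic data; the α = 0.05 threshold mirrors the stated significance criterion, and the exact software used by the authors is not implied.

```python
import numpy as np
from scipy import stats

def foot_fault_percent(trials):
    """trials: list of (faults, steps) for the right forelimb, one tuple per
    1-min trial. Returns faults as a percentage of total forelimb steps."""
    faults = sum(f for f, _ in trials)
    steps = sum(s for _, s in trials)
    return 100.0 * faults / steps

def compare_two_groups(a, b, alpha=0.05):
    """Two-group test selection as described: Shapiro normality check,
    then Levene variance homogeneity, then t-test / Welch / Mann-Whitney."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if not normal:
        return "Mann-Whitney", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
    equal_var = stats.levene(a, b).pvalue > alpha
    name = "t-test" if equal_var else "Welch's t-test"
    return name, stats.ttest_ind(a, b, equal_var=equal_var).pvalue

print(foot_fault_percent([(4, 52), (6, 48), (3, 55)]))   # hypothetical counts

rng = np.random.default_rng(0)
sham, mcao = rng.normal(8, 2, 12), rng.normal(14, 3, 12)  # synthetic scores
print(compare_two_groups(sham, mcao))
```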
Brain Ischemia Induces a Greater Loss of PV Neurons than of CaMKII Neurons

Studies have provided evidence that GABAergic PV neurons are more sensitive to brain ischemia damage than glutamatergic neurons; however, other studies have claimed the opposite [15,16]. We evaluated the number of surviving GABAergic PV neurons and glutamatergic CaMKII neurons in the penumbra of the cortex 7 days after inducing MCAO. Our results showed that PV-positive (Mann-Whitney test, U = 153.5, p < 0.001; Figure 1A,B) and CaMKII-positive cells (unpaired t-test with Welch's correction, t (90.09) = 7.58, p < 0.001; Figure 1A,C) were significantly reduced in the MCAO group. Furthermore, we found that the percentage of surviving PV-positive cells was significantly lower than that of the surviving CaMKII-positive cells post-brain ischemia (both calculated as percentages of the corresponding sham control; Mann-Whitney test, U = 961, p = 0.002; Figure 1D). Our results suggest that GABAergic PV neurons in the cortex are more vulnerable to ischemia-induced neuronal death than glutamatergic neurons.

Brain Ischemia-Induced Upregulation of TRPM7 Is Higher in PV Neurons than in CaMKII Neurons

TRPM7 expression is upregulated following brain ischemia [21]. However, previous studies did not investigate this upregulation in GABAergic vs. glutamatergic neurons. We evaluated the expression of TRPM7 in GABAergic PV neurons and glutamatergic CaMKII neurons in the penumbra of the cortex 7 days after inducing MCAO. Our results showed that TRPM7 expression was significantly higher in the PV (unpaired t-test, t (70) = 6.582, p < 0.001; Figure 1E,F) and CaMKII neurons (unpaired t-test, t (70) = 2.777, p = 0.007; Figure 1E,G) of the MCAO group. Thus, brain ischemia resulted in the upregulation of TRPM7 in both PV and CaMKII neurons. Importantly, we found that the percentage of TRPM7 expression in PV neurons was significantly higher than that in CaMKII neurons (both calculated as percentages of the corresponding sham control; unpaired t-test with Welch's correction, t (54.69) = 5.128, p < 0.001; Figure 1H). These results suggest that brain ischemia upregulates TRPM7 expression more strongly in PV neurons than in CaMKII neurons.

Knocking out TRPM7 in PV Neurons Has Better Protective Effects against Brain Ischemia than in CaMKII Neurons

To compare the neuroprotective effects of TRPM7 deletion in PV neurons with those in CaMKII neurons, we generated two lines of conditional knockout mice, PV-TRPM7 −/− and CaMKII-TRPM7 −/− mice. We confirmed the cell-specific knockout of TRPM7 by crossing PV-TRPM7 −/− or CaMKII-TRPM7 −/− mice with Ai3 Cre-reporter mice (Supplementary Figure S1A,B and Supplementary Experimental Procedures). Moreover, we confirmed that the TRPM7 knockout had no effects on body weight (Supplementary Figure S1C) or motor functions (Supplementary Figure S1D). At 1 day before MCAO, we performed the foot fault test for all experimental groups to establish a baseline. At days 2, 4 and 6 post-MCAO, the neurological deficit score and foot fault tests were performed. All mice were sacrificed on day 7. All histological, biochemical and cellular analyses were carried out afterwards by using Western blot, ELISA, immunostaining and imaging (Figure 2A). Before and during the occlusion process, we monitored blood flow by laser Doppler flowmetry.
We confirmed that our MCAO procedure effectively reduced the blood flow equivalently in all groups by 80% (Figure 2B). In all experiments, TRPM7 flox/flox Sham mice were also evaluated as a reference for normal neurological and motor scores. Therefore, we concluded that ischemia-induced neurological and motor deficits are reduced by the deletion of TRPM7 in GABAergic PV neurons, but not in glutamatergic neurons.

TRPM7 Knockout in PV Neurons Exerts Better Anti-Inflammatory Effects Post-Ischemia

Oxidative stress is well known to be induced by brain ischemia and is accompanied by inflammation [22,23]. In the present study, we examined related inflammatory factors in the penumbra of the cortex post-MCAO. The results showed that the number of GFAP-positive cells was dramatically higher in all MCAO groups than in the sham group. However, the GFAP-positive cells in PV-TRPM7 −/− MCAO mice were significantly decreased in comparison with those in TRPM7 flox/flox MCAO and CaMKII-TRPM7 −/− MCAO mice (Figure 3A; one-way ANOVA, F (2,159) = 20.87, p < 0.001). Western blot analysis supported these data. We found that GFAP protein expression in the penumbra was dramatically higher in all MCAO groups than in the sham group. However, the GFAP level in PV-TRPM7 −/− MCAO mice was significantly decreased in comparison with that in TRPM7 flox/flox MCAO mice (Figure 3B; one-way ANOVA, F (2,15) = 5.34, p = 0.018). Concerning reactive microglia, we observed that the Iba1 expression level was higher in all MCAO groups compared with that in the sham group (Figure 3D). The percentage of reactive microglia (as a percentage of total microglia) in the PV-TRPM7 −/− MCAO mice was significantly lower than that in the TRPM7 flox/flox MCAO and CaMKII-TRPM7 −/− MCAO mice (Figure 3C; one-way ANOVA, Kruskal-Wallis test = 24.13, p < 0.001). Protein expression of Iba1 in the PV-TRPM7 −/− MCAO mice was also significantly lower than that in the TRPM7 flox/flox MCAO mice (Figure 3D; one-way ANOVA, F (2,15) = 11.17, p = 0.0011). In all MCAO groups, the expression of TNF-α, an important inflammatory factor, was also higher than that in the sham group. Meanwhile, we found that the TNF-α density in the PV-TRPM7 −/− MCAO mice was significantly reduced in comparison with that in the TRPM7 flox/flox MCAO mice (Figure 3E; one-way ANOVA, Kruskal-Wallis test = 5.77, p = 0.05). These data were confirmed by quantitative Western blot (Figure 3F; one-way ANOVA, F (2,15) = 13.56, p < 0.001) and ELISA assay (Figure 3G; one-way ANOVA, F (2,15) = 25.06, p < 0.001). These results indicated that the deletion of TRPM7 in GABAergic PV neurons, but not in glutamatergic neurons, reduced the post-ischemia inflammatory processes.

Akt/p53/Caspase-3 Signaling Pathways in PV-TRPM7 −/− and CaMKII-TRPM7 −/− Mice Post-Ischemia

Akt signaling has been implicated in ischemia pathologies [24]. An increase in Akt activation (phosphorylation) is associated with enhanced post-ischemia recovery [25]. Therefore, we checked the level of p-Akt in the cortex of all experimental groups and found that the number of p-Akt-positive cells in PV-TRPM7 −/− MCAO mice was significantly higher than that in TRPM7 flox/flox MCAO and CaMKII-TRPM7 −/− MCAO mice (Figure 4A; one-way ANOVA, Kruskal-Wallis test = 12.36, p = 0.002). These results were confirmed by Western blot analysis (Figure 4B; one-way ANOVA, Kruskal-Wallis test = 7.87, p = 0.013).
Further cell-type-specific analysis revealed that the number of p-Akt-positive cells among PV neurons was significantly higher in PV-TRPM7 −/− MCAO mice in comparison with TRPM7 flox/flox MCAO mice, but remained significantly lower than that in sham mice (Figure 4C left; one-way ANOVA, Kruskal-Wallis test = 56.58, p < 0.001). Moreover, the number of p-Akt-positive cells among CaMKII neurons was reduced in both PV-TRPM7 −/− MCAO and TRPM7 flox/flox MCAO mice (Figure 4C right; one-way ANOVA, Kruskal-Wallis test = 6.32, p = 0.042). These results indicated that the deletion of TRPM7 in GABAergic PV neurons, but not in glutamatergic neurons, rescued Akt activity in the post-ischemic brain. Furthermore, this rescue is likely to occur in PV neurons and other types of brain cells, but not in glutamatergic neurons.

(G) Quantitative analysis of the TNF-α level by ELISA assay; Bonferroni's post-hoc test. TRPM7 flox/flox Sham was used as a reference but not statistically compared with the other groups. Quantitative immunostaining data were obtained from n = 54 images per group, from 3 brain sections per mouse and 6 mice per group. Western blot and ELISA assay data were obtained from n = 6 mice per group. * p < 0.05, ** p < 0.01, *** p < 0.001.

The protein p53 is considered a key regulator of the apoptotic processes that are inhibited by Akt [26]. Next, we tested the expression level of p53. We found that the number of p53-positive cells in the PV-TRPM7 −/− MCAO mice was significantly lower than that in the TRPM7 flox/flox MCAO and CaMKII-TRPM7 −/− MCAO mice. Unlike the p-Akt data, we found that p53-positive cells were also significantly reduced in CaMKII-TRPM7 −/− MCAO mice in comparison with TRPM7 flox/flox MCAO mice (Figure 4D; one-way ANOVA, F (2,159) = 24.12, p < 0.001). In line with these data, Western blot analysis revealed that p53 protein expression was significantly lower in PV-TRPM7 −/− MCAO and CaMKII-TRPM7 −/− MCAO mice in comparison with TRPM7 flox/flox MCAO mice (Figure 4E; one-way ANOVA, Kruskal-Wallis test = 13.66, p < 0.001). Finally, we examined the expression of a key executioner of apoptosis, namely cleaved caspase-3, in all experimental groups. Our results showed that the number of cleaved caspase-3-positive cells was significantly reduced in PV-TRPM7 −/− MCAO mice in comparison with that in TRPM7 flox/flox MCAO and CaMKII-TRPM7 −/− MCAO mice. Similar to the p53 results, we found that the cleaved caspase-3-positive cells were also significantly reduced in CaMKII-TRPM7 −/− MCAO mice in comparison with TRPM7 flox/flox MCAO mice (Figure 4F; one-way ANOVA, Kruskal-Wallis test = 47.05, p < 0.001). A similar pattern was observed using Western blot analysis (Figure 4G; one-way ANOVA, Kruskal-Wallis test = 9.98, p = 0.002). These results indicated that the deletion of TRPM7 in both GABAergic PV neurons and glutamatergic neurons protected brain cells from apoptosis-mediating molecules. However, TRPM7 deletion in GABAergic PV neurons additionally activated the anti-oxidative stress/anti-apoptosis Akt signaling.

Benjamini post-hoc test. TRPM7 flox/flox Sham was used as a reference but not statistically compared with the other groups (except for (C)); quantitative immunostaining data were obtained from 6 mice per group, 3 brain sections per mouse. Western blot data were obtained from n = 6 mice per group; co-detection of β-actin served as the normalizing and loading control. * p < 0.05, ** p < 0.01, *** p < 0.001.
Discussion

In the current study, we show that brain ischemia induced a higher loss of GABAergic PV neurons than of glutamatergic neurons in the cortex. The ischemia-induced upregulation of TRPM7 expression was found to be higher in GABAergic PV neurons than in glutamatergic neurons, which might explain the sensitivity of GABAergic PV neurons to ischemia-induced neuronal death. In line with the neuroprotective effects of blocking or suppressing TRPM7, our data show that deleting TRPM7 in PV and/or glutamatergic neurons reduced infarct volume, protected glutamatergic neurons and promoted anti-apoptotic mechanisms. However, we found that knockout of TRPM7 in GABAergic PV neurons was more effective in protecting GABAergic and glutamatergic neurons, promoting neurological and motor recovery, reducing inflammatory processes and inducing anti-apoptosis/anti-oxidative stress signaling pathways.

Both glutamatergic and GABAergic neurons are damaged by cerebral ischemia, but contradictory results have been obtained on their comparative susceptibility to such injuries. Some studies suggest that GABAergic interneurons survive the injury for up to 30 days in cortical and hippocampal regions in the rat brain [27], whereas pyramidal neurons within CA1 were shown to degenerate 2 days after ischemia [28]. On the other hand, short-term ischemia permanently impairs the excitability of inhibitory neurons and GABA-mediated synaptic transmission, leading to glutamatergic excitotoxicity [29]. In the current study, we found that both GABAergic PV and glutamatergic neurons are decreased in the cortex following brain ischemia; however, more GABAergic PV neurons were lost in comparison with glutamatergic neurons. Furthermore, the post-ischemia increase in TRPM7 expression was higher in PV neurons than in glutamatergic neurons. In view of the compelling evidence for a role of TRPM7 upregulation/hyperactivity in mediating post-ischemia neuronal death [1,2], our results correspond more closely with previous studies suggesting that ischemia induces a more pronounced loss of PV neurons than of glutamatergic neurons [15], and they provide a possible explanation for PV neuronal sensitivity, namely the more pronounced upregulation of TRPM7 following ischemia.

Suppression of TRPM7 following brain ischemia reduces neuronal death, infarct volume and motor disability [30]. However, the exact molecular events underlying these neuroprotective effects of TRPM7 suppression remain largely unknown. It is known that excessive Ca2+ influx induces oxidative stress and apoptotic cell death [31,32]. Therefore, it is widely believed that, following hypoxia/ischemia, the ion channel part of TRPM7 mediates Ca2+ overload, leading to cytotoxicity and neuronal death [1,2]. Studies also suggest that Ca2+ overload triggers the production of oxygen/nitrogen species, which increases TRPM7-like conductance, leading to further Ca2+ toxicity and more neuronal death [6]. On the other hand, TRPM7 suppression reduces Ca2+ overload and increases the phosphorylation of endothelial nitric oxide synthase (eNOS), which counteracts the damage caused by oxygen/nitrogen species and hence reduces brain injury [33]. Our data do not contradict this universal (non-cell-type-specific) mechanism: preventing Ca2+ overload can apply to all cell types, which might explain the observed protective effects of deleting TRPM7 in glutamatergic and GABAergic PV neurons.
However, the more pronounced neuroprotective effects of TRPM7 deletion in PV neurons indicate that there are other, cell-type-specific mechanisms. The Akt signaling pathway regulates cell proliferation, growth, differentiation and survival [34,35], and it is implicated in diseases such as cancer, cardiovascular diseases, diabetes and neurological dysfunctions [24]. Akt is considered one of the key signaling pathways contributing to brain ischemia pathologies and recovery [25]. Several lines of research demonstrate a relationship between the Akt signaling pathway and the neuroprotective effects of TRPM7 suppression during brain ischemia. For instance, activation of the Akt signaling pathway protects the brain from ischemic injury by downregulating TRPM7 [7,36]. On the other hand, suppression of TRPM7 expression markedly increases the phosphorylation of Akt after brain injury [33]. Moreover, pharmacological suppression of TRPM7 protects the neonatal brain from hypoxic-ischemic injury by activating Akt [30]. In line with this mechanism, we found that complete restoration of Akt activity (i.e., matching the sham level) occurred only when TRPM7 was deleted in PV neurons. Furthermore, this Akt activation was found to occur in GABAergic PV neurons, but not in glutamatergic neurons. Since the activation of Akt in PV neurons did not reach the sham level (Figure 4B), our data suggest that deletion of TRPM7 in PV neurons might also activate Akt in glial cells and other types of GABAergic neurons, resulting in the observed complete restoration of the overall Akt activity (Figure 4A).

Downstream of Akt is the apoptosis-activating p53/caspase-3 signaling [37]. Attenuation of p53 expression protects against focal ischemic damage in mice [38]. In cell lines exposed to OGD, TRPM7 was found to contribute to the activation of the p53/caspase-3 apoptotic pathway [39], and suppression of TRPM7 expression reduces cleaved caspase-3 in the brain following ischemia in mice [30]. Thus, our data support the link between TRPM7 suppression and activation of Akt under brain ischemia pathologies, but they clearly indicate that such activation depends on the cell type in which TRPM7 was deleted (i.e., a cell-type-specific mechanism). Furthermore, we confirmed the involvement of p53/caspase-3 in the neuroprotective effects of TRPM7 deletion. However, we found that suppression of these pro-apoptosis signaling molecules represents a universal neuroprotective response to TRPM7 deletion in both types of cells. Based on that, one might speculate that the TRPM7-dependent effects on p53/caspase-3 signaling may not necessarily require activation of Akt signaling, at least in glutamatergic neurons, and that activation of Akt signaling might underlie the stronger neuroprotective effects of deleting TRPM7 in PV neurons.

In conclusion, we found that, in comparison with glutamatergic neurons, knockout of TRPM7 in GABAergic PV neurons has better neuroprotective and/or recovery effects following ischemia, as well as stronger activation of anti-oxidative stress signaling within the post-ischemic brain. The results underline the importance of studying the roles of TRPM7 in brain pathology in a cell-type-specific manner. This should advance our understanding of TRPM7 contributions to brain pathologies as well as our knowledge of the molecular mechanisms linked to the channel's functions and/or dysfunctions.

Supplementary Materials: Table S1: Results of the normality test and variance homogeneity test for each figure.

Author Contributions: N.A. and Y.G.
conceived, designed and supervised the project. N.A. and P.Z. wrote the manuscript. P.Z. and W.L. conducted experiments and data analysis. Y.L. performed MCAO models. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement: The animal study was reviewed and approved by the Fudan University Committee for Animal Care and Use (license number: SYXK-2020-0032).
Exploring the Environmental Practices in Hospitality through Booking Websites and Online Tourist Reviews

The major impact of the hotel industry on the environment has become a serious concern for both hoteliers and tourists, with many studies showing that tourists increasingly expect hoteliers to implement environmental practices. Considering that most hotel bookings are now made online through booking websites, it is important for these websites to show the same preoccupation with the environment by promoting environmentally friendly initiatives. The aim of the present study is to emphasize the crossroads between sustainability and digitalization and to increase the role of digital technologies in encouraging the sustainable development of all tourism sectors, including hospitality. The study provides information and identifies important elements for assessing the environmental practices of booking websites. In order to identify the position that booking websites and platforms take towards environmental practices, the filter section of four booking platforms is analyzed. The present study aims not only to analyze the current position of booking platforms towards the environment, but also to identify methods that improve the way they highlight their implemented environmental practices, based on tourist reviews. In order to identify customers' opinions regarding these types of practices, a total of 31,800 tourist reviews posted on Booking.com were analyzed. The results indicate that the level of awareness of the need to protect the environment, among both hoteliers and tourists, must increase, which also implies the need for booking websites to highlight and encourage environmental practices. The obtained results are useful for booking website developers, who can adapt the website interface to be friendlier regarding environmental practices. In addition, hotel managers and entrepreneurs can make use of these results to develop new types of business in the hospitality industry, and in the long run the results are useful for increasing tourists' awareness of the need for environmental practices. Suggestions are made on how booking websites can involve themselves more in environmentally friendly initiatives and on potential subjects for future study.

Keywords: responsible; searching

Introduction

In the current context, during the meeting of the Executive Council of the World Tourism Organization (UNWTO) on 17 September 2020, UNWTO Secretary-General Zurab Pololikashvili highlighted "that the restart of tourism must be properly managed and that our sector lives up to its unique potential".
He stressed that "this crisis has made clear the important role tourism plays in every part of our lives", an aspect that is likely to lay a foundation for the effort to "work together to build a tourism sector that works for everyone, where sustainability and innovation are part of everything we do" [1]. In recent years, the impact of the hotel industry on the environment has become a major problem [2], because the amount of resources it consumes is not to be neglected [3]. So-called green hotels have emerged as a solution to this problem and are considered a long-term trend and a guarantee of success in the hotel industry [4,5]. Because tourists are increasingly aware of environmental problems, they express the intention and desire to buy and consume green products and services [5,6]. Many studies have focused on analyzing the level of satisfaction and loyalty of tourists, but there are too few studies that analyze the impact of green hotels on the level of satisfaction, on the intention to repeat the green hotel experience, and on the availability of tourists to generate positive word of mouth regarding green accommodation [7].

Although digitalization is one of the increasingly important topics in tourism studies, the literature seems to have overlooked the role of digital technologies in encouraging sustainable development and protecting the environment in the tourism and hospitality industry. Scholars have generated, until now, very little information concerning the intersection between sustainability and digitalization in the tourism and hospitality industry [8], this being one of the reasons for the limited research on the link between sustainability and booking platforms. From this point of view, studies on the environmental practices of booking platforms are still in their developmental phase. To fill this research gap, the purpose of this study is to provide information and to identify important elements regarding the sustainable practices of hotels, in order to assess the environmentally friendly practices on booking websites. The objectives of this study are to identify the current position of booking platforms towards the environment and to identify methods that improve the way they highlight their implemented environmental practices, based on tourists' reviews.

Literature Review

The growing concern of people about global warming is reflected in their decision to stay in a green hotel when they travel [9]. Tourists expect the hotel industry to pay more attention to environmental concerns and to operate more sustainably [4,10]. In this context, hoteliers must be aware of customers' changing behavior and of the importance of promoting green products and services as well as consistent management. In order to strengthen their position in the hospitality market, they must implement environmentally responsible practices [11]. An increasing number of companies are trying to publicly demonstrate their commitment to sustainability and sustainable development, aiming to improve their competitive advantage, build their own brand and distinguish themselves on the market [12]. Sustainable tourism is one of the most important topics in the global tourism industry, sustainability often being seen as the ground on which different tourist destinations compete [13].
Although tourists are showing more and more interest in sustainable and innovative proposals, activities in the tourism industry are becoming less and less sustainable, with many companies lacking a clear innovation strategy, let alone a strategy for sustainable growth and innovation. Environmental sustainability brings new challenges along with new opportunities for business. A valid source of innovation, through the enormous amount of data it generates, is social media [14], which has been recognized as an important source of information in the tourism industry. Social media has also become a more trusted source of information, with tourists having more confidence in it than in official channels, because the content creators are users themselves. However, the potential of social media to operate in accordance with environmental protection rules is very little known [15].

A multitude of ecological certification standards for sustainable tourism have been developed. For sustainable hotels, we mention the following: Green Globe, which has developed its criteria on four key topics (sustainable management, the social and economic dimension, cultural heritage, and the environment); Green Key, whose criteria relate to environmental management (water, energy, waste, cleaning, etc.), corporate social responsibility, and sound sustainability education (of staff, customers, suppliers, etc.); Travelife, which uses a package of tools and resources specially designed to improve the impact of the business on the environment, the economy, and the social pillar of sustainability; Earth Check, a specialized standard for the sustainable development and management of a tourist destination; and NEPCon, with its Sustainable Tourism Certification Program, which recognizes the effort made by accommodation units and tour operators who have committed to implementing sustainable practices in their work. Moreover, in 2019, Green Wall, a search engine for green hotels, was developed [16].

The green hotel is defined as an eco-friendly accommodation which establishes and follows well-determined environmental programs and practices (for example, reducing water and energy consumption, reducing solid waste and related costs, etc.) in order to contribute to the global action of protecting the environment [17]. More specifically, green hotels are those that show their commitment to the environment by meeting standards related to efficient energy consumption, green product consumption, conservation of water resources, air-quality management, solid waste management, the management and treatment of wastewater, control of noise pollution, the management of toxic and harmful substances, the management of human resources, cooperation with local organizations, and the policies and practices specific to the hotel business [18]. Considering these measures as solutions to the environmental issue, the hotel industry can benefit from promoting its environmental performance and more detailed information about its green practices [19,20]. Furthermore, the possession of an ecological certificate represents a major factor in tourists' decision process when they have to choose accommodation [21]. Other studies have shown that even though tourists are aware of environmental problems, they do not consider environmental practices a priority in choosing where to stay, and so they opt for conventional hotels [3,22].
Moreover, it has also been argued that tourists are often unaware that the chosen hotel possesses an ecological certificate [23], or even view such certificates with suspicion [24]. For this reason, it is important for hotels to use an efficient communication strategy when informing tourists about environmental sustainability, so that tourists are able to adopt environmentally responsible behavior. In addition, hotel managers must focus on increasing the credibility of their messages when it comes to environmental practices [25].

The use of the Internet in the decision-making process for buying a tourist product or service is constantly increasing [26,27]. The most valuable part of the online shopping experience starts from the moment of searching and consulting booking websites [28], which is why booking websites must attract more and more customers by presenting them with relevant and useful information [29]. Online sales related to the tourism industry, including accommodation services, continue to grow worldwide [30,31]. Even at the level of the European Union member countries, the volume of online sales of accommodation and transport services is growing at a high rate. Along with this, the share of European citizens between the ages of 16 and 74 who bought accommodation and transport services increased as well, from 51.5% (2017) to 53.5% (2018) [32,33]. Due to the advancement of the Internet and online booking technologies, the demand for more interactive hotel websites has increased [34]. The Internet has changed the behavior of tourists towards tourist products, resulting in a longer-than-ever searching and information-gathering process before making a reservation; this process requires consulting an average of 38 websites [35]. Previously, customers only searched the hotel's website for information (provided by the hoteliers themselves) about the hotel, but now tourists typically look for reviews from those who have already had the experience of staying in that hotel [34]. Searches on online booking websites are now very important, considering that these websites are becoming more and more complex [29] and complicated to use, especially for those who are not skilled with what the technology now requires. Hoteliers must periodically monitor customer trends in website usage to identify those attributes which are not being used efficiently. The most popular attributes of websites, such as an online reviews section, chat-bots, and high-resolution images, must be provided for tourists in order to improve a website's utility and to stimulate the intention to book [36]. Moreover, search engines represent important sources of information for tourists and have great influence over purchase decisions [37]. Numerous tourism organizations use e-WOM (electronic word of mouth) to facilitate the process tourists go through to obtain information about tourist packages, tourist destinations, and websites [38]. WOM is seen as a functional means of information which helps people to assess the quality of services, leading them to a buy-or-not decision [39]. A more advanced version of WOM is e-WOM, currently spread through different platforms, as it is more useful in the assessment of tourist services [40].
Furthermore, researchers have found that e-WOM is an efficient way of promoting goods and services [41], through which people can obtain information related to their own interests, such as quality of services, brand products, travel experiences, and food [42]. Moreover, e-WOM offsets the costs of advertising and promotion, making the sale of services much more efficient [43-45]. Currently, service provider organizations are building sustainable connections with tourists by providing them with the best services. Internet users generate reviews about hotels, services, and tourist destinations that are an essential source of information for tourists [46-48]. Every year, hundreds of tourists consult online reviews [49]. The presence of a reviews section on a booking website is currently a much-debated topic, and one that will continue to be discussed in the future [50-54].

Gerdt, Wagner, and Schewe found a relationship between sustainability orientation and customer satisfaction, moderated by star classification [55]. Research on customer satisfaction identifies the most relevant features of hotels that contribute to customer satisfaction: room decorations and amenities (focusing on the cleanliness of the bedroom and bathroom), hotel environment (pool, decor, and view), staff service skills (friendliness and helpfulness of hotel staff), restaurants with local food serving breakfast and dinner, and Internet service (the quality of the Internet signal in the hotel) [56]; rooms, value, cleanliness, sleep quality, service, and location [57]; Wi-Fi, facilities, parking, bathroom, noise, swimming pool, and room cleanliness [58]; staff, rooms, services, front desk, cleanliness, bed conditions, room space, view, quietness, and modernity [59]; and tangible factors (room cleanliness, facilities, hotel location) and intangible factors (service and attitude of hotel staff) [60]. Zhou et al. [61] identified attributes that influence customer satisfaction: satisfiers (public facilities); dissatisfiers (room size, cleanliness, dated quality of facilities, noise level, room price, proximity to attractions, accessibility by public transportation, language skills, efficiency); bidirectional attributes (amenities in the room/bathroom, food quality, dining environment, friendliness of the staff, welcoming extras, food variety, availability of special food service (e.g., room service, vegetarian options)); and neutrals (Wi-Fi services, entertainment facilities, proximity to the airport/railway station, proximity to the city center, other prices, and food and beverage prices). Abrudan, Pop, and Lazar [62], in a study conducted in Romania, identified the relevant hotel attributes that influence customer ratings: general facilities (food-related facilities, restaurants, and complimentary breakfasts), common facilities (the pool and parking spaces, and, for the room, the flat-screen TV), and sustainability facilities (facilities for disabled people and electric vehicle charging stations). The most common environmental measures implemented in the Romanian seaside hotel industry targeted the reduction of energy consumption, followed by water consumption and waste [63]. Gerdt, Wagner, and Schewe [55] expressed concern about the lack of studies on the effect of sustainability measures in hospitality on customer satisfaction as reflected in online assessments and reviews.
This study represents a step forward in this field of research and a starting point for new studies on the environmentally friendly practices of booking platforms. In order to identify the stance of booking websites on environmentally friendly practices, the filters section of four booking platforms is analyzed. Tourists' opinions about environmental practices were identified by analyzing the reviews displayed on the booking platform Booking.com.

Methods

This research, which evaluated the environmental practices of booking websites, consisted of two parts. The first part included the analysis of the filters section of four booking platforms, in order to identify their position on environmental practices. The booking websites and platforms on which the analysis was performed were selected using the criterion of popularity. Consulting the analysis and statistics website SimilarWeb.com, we found that the most popular international booking platforms are Booking.com [64] and TripAdvisor [65]. By entering the Romanian keywords "rezervare hotel online" in the Google search engine, the first results given were the Romanian booking websites Travelminit.ro [66] and Directbooking.ro [67]. The relationship of these four booking websites and platforms to environmental practices was then analyzed. According to the literature, the trend in environmental practices and in efforts to ensure a healthy relationship between mankind and the environment is upward, with many companies promoting their commitment and involvement in achieving these goals.

The reason for choosing the filters section as the subject of analysis was that it is a common attribute of all four selected websites (Booking.com, TripAdvisor, Directbooking.ro, and Travelminit.ro). Aside from this aspect, the filters section is used by anyone who wants to speed up the reservation process, since only those accommodation units that meet the expectations and preferences of the potential tourist are displayed. From the filters section, valuable information can be extracted about the types of facilities and services that accommodation units currently offer; the dedication to environmental protection of both the accommodation units (which make such services available to tourists) and the booking platforms (which display the offers of the accommodation units) can therefore be assessed. Considering the specific attributes of environmental practices, the selection of environmental filters was made using a set of keywords extracted from the international standard for the ecological certification of accommodation units, Green Key [68]. This method was chosen because it allowed us to find out the extent to which accommodation units have adopted environmental practices such as those presented in the ecological certification standard and, because these practices were displayed on the booking platforms and in the reviews section, to conclude that there is an interest in this regard from both the platforms and the tourists.
The keywords extracted from the ecological certification standard for accommodation units are the following: responsible; environmental; ecological cleaning products; waste; efficient use; environmental policy; sustainability; environmental practices; environmental awareness; energy consumption; water; carbon footprint; carbon emission; local food and beverage; environmental initiative; reuse; towel; bed sheet; sign; information; waste saving; environmentally friendly activities; public transportation; shuttle; alternatives for cycling/walking; local transport (bus, train, subway, boat, etc.); electric car; electric car charging station; charging location; available searching computer; water/energy saving; shower cabin; feedback questionnaire; excessive consumption; water flow; recycling; rainwater; eco-label; eco-friendly face towels and toilet paper; microfiber cloth; greenhouse; disposable glasses; composting systems; heating and air-conditioning; light bulbs; bathroom trash can; waste separation; vending machine; outdoor lighting system; solar panels; wind power; biogas from organic waste; geothermal heat; eco-certified energy; key-card; automatic system; heat recovery system; food waste; vegetarian and/or vegan alternatives; tap water; non-smoking; smart irrigation system; native species; noise; pollution; disabilities; discrimination; sustainable products; local small shops; information about nearby parks; landscape and/or nature conservation areas; borrowing or renting bicycles; nature guided tours; and consumables [68].

In the second part of the research, we analyzed tourists' reviews related to the environmental practices of hotels, as displayed on Booking.com (the booking platform with the highest ratio of environmental filters to the total number of filters displayed). The market of Internet users in Romania has some attributes that qualified it for further examination. The study conducted by Hootsuite and WeAreSocial on the global digital market in 2019 revealed that, out of a total of 7.734 billion people, 4.479 billion use the Internet. Globally, an average of 58% of the population has access to the Internet, an increase of 10 percentage points compared to 2018 [69]. Therefore, worldwide, the Internet, as a space of information and communication, is of increasing interest for all categories of the population. Compared with the global market, the penetration of the national digital market in 2019 was very high, with 75.7% of Romanian households having access to the Internet from home, an increase of 3.3 percentage points compared with 2018 [70]. Romania has the fourth-fastest fixed Internet connection speed in the world: with an average download speed of 140.25 Mbps, Romania is surpassed only by Singapore, Hong Kong, and South Korea [71]. In this context, the volume of online purchases of tourist services is increasing in Romania. However, the share of Romanians between the ages of 16 and 74 who booked accommodation and holidays in 2019, out of the total number of people of the same age who ordered goods or services online in 2019, is 17%, well below the EU average (52%) [72]. This study aims to capture interesting aspects of this topic relevant to tourism. The selection of the sample of hotels for analysis was made based on the following criteria: number of hotels, classification category, and distribution of hotels by development regions.
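As an illustration of the keyword-based screening described above, the sketch below flags a platform's displayed filters as environmental when they contain a Green Key-derived keyword stem. Both the stem list and the filter labels are hypothetical examples, not the study's actual data:

```python
# Hypothetical sketch of the environmental-filter screening step.
# Keyword stems loosely derived from the Green Key list above.
GREEN_KEY_STEMS = [
    "electric car", "bicycle", "public transport", "local food",
    "non-smoking", "disab", "noise", "shuttle", "recycl", "boat",
]

# Invented example filter labels for one platform.
platform_filters = [
    "Electric car charging station",
    "Non-smoking rooms",
    "Facilities for disabled guests",
    "Swimming pool",
    "Bicycle rental",
]

environmental = [
    label for label in platform_filters
    if any(stem in label.lower() for stem in GREEN_KEY_STEMS)
]
print(f"{len(environmental)}/{len(platform_filters)} environmental filters:")
for label in environmental:
    print(" -", label)
```

Applied to the real filter lists, the same kind of tally yields the proportions reported in the Results section (for example, 18 of the 116 filters on Booking.com, about 15%).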
There are eight development regions in Romania [73]: the north-east region (NE), including the counties of Bacău, Botoșani, Iași, Neamț, Suceava, and Vaslui; the south-east region (SE), including the counties of Brăila, Buzău, Constanța, Galați, Tulcea, and Vrancea; the south region (S), consisting of the counties of Argeș, Călărași, Dâmbovița, Giurgiu, Ialomița, Prahova, and Teleorman; the south-west region (SW), including the counties of Dolj, Gorj, Mehedinți, Olt, and Vâlcea; the west region (W), including the counties of Arad, Caraș-Severin, Hunedoara, and Timiș; the north-west region (NW), including the counties of Bihor, Bistrița-Năsăud, Cluj, Maramureș, Satu-Mare, and Sălaj; the center region (C), including the counties of Alba, Brașov, Covasna, Harghita, Mureș, and Sibiu; and the Bucharest-Ilfov region (B-IF), including the municipality of Bucharest and Ilfov County.

The sizing of the representative sample, corresponding to the number of hotels in each region, was achieved by correlating the data provided by the Institute of Statistics of Romania in the Statistical Brief of Romania for the years 2018 and 2019 with the data provided by the list of classified accommodation units, presented in Table 1. In Romania, the total number of classified hotels was 1766 (as of 20 April 2020), of which 38 were five-star hotels (2.15%), 381 were four-star hotels (21.57%), 981 were three-star hotels (55.55%), 318 were two-star hotels (18.01%), and 48 were one-star hotels (2.72%). The analysis of tourists' reviews related to environmental practices took into account the hotels classified in the 2-5-star categories (the opinions of tourists who choose 1- and 2-star hotels are quite similar). The hotels were selected for the analysis sample as follows: out of the total hotels classified in the 2-5-star categories (1718), a percentage of 5.82% was chosen, resulting in 100 hotels. Therefore, the analysis sample included both 2- and 3-star hotels and 4- and 5-star hotels, as follows: 14 hotels from the north-west region, 17 hotels from the center region, 9 hotels from the west region, 7 hotels from the south-west region, 11 hotels from the south region, 7 hotels from the Bucharest-Ilfov region, 28 hotels from the south-east region, and 7 hotels from the north-east region (a short consistency check of this allocation is sketched below). In the analysis of tourists' online reviews related to the environmental practices present on the Booking.com platform, the closeness of hotel practices to recognized environmental practices was taken into account. This was done by determining the frequency of use of the previously selected keywords, or their synonyms, in the online reviews posted by tourists who had stayed in the selected hotels.
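For concreteness, the sample size and its regional allocation quoted above can be verified with a few lines, using only the figures given in the text:

```python
# Consistency check of the sampling figures quoted in the text.
total_2_to_5_star = 1718          # classified 2-5-star hotels
sample_fraction = 0.0582          # 5.82% sampling rate
sample_size = round(total_2_to_5_star * sample_fraction)
print(sample_size)                # -> 100

regional_allocation = {
    "NW": 14, "C": 17, "W": 9, "SW": 7,
    "S": 11, "B-IF": 7, "SE": 28, "NE": 7,
}
assert sum(regional_allocation.values()) == sample_size  # allocation adds up
```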
Results Obtained by Analyzing Booking Platforms in Relationship with Environmental Practices

The analysis of the selected booking platforms regarding the features of environmental practices, using the previously presented set of keywords, highlighted the attributes and environmental filters listed in Table 2. Following the analysis of the attributes and filters of the four reservation sites, 18 environmental filters were identified. The previously determined keywords found in the filter section of each website under analysis were related to carbon emissions, public/local transport, bicycle/boat, local food, food waste, electric cars, shower, information about natural landscapes, organized tours, people with disabilities, noise pollution, non-smoking facilities, and small local shops. The extent to which these booking websites provide tourists with the possibility to filter accommodation units using environmental filters is shown in Figure 1. By determining the proportion of environmental filters in the total number of filters displayed by the selected booking platforms (18/116 for Booking.com; 8/108 for TripAdvisor; 4/89 for Directbooking.ro; 9/262 for Travelminit.ro), the following percentages were obtained: 15% for Booking.com, 7% for TripAdvisor, 4% for Directbooking.ro, and 3% for Travelminit.ro. Therefore, the booking platform that offers tourists the greatest possibility of filtering accommodation using environmental filters is Booking.com.

Results Obtained by Analyzing the Reviews of Tourists Displayed on Booking.com

In order to develop the study and obtain the results presented below, a total of 31,800 reviews were analyzed, of which 18,540 were reviews for 2- and 3-star hotels and 13,260 were reviews for 4- and 5-star hotels. The reviews were analyzed by following the frequency of the previously selected keywords, dividing the results into two major categories: factors of satisfaction and factors of dissatisfaction. The results were also divided by hotel rank, as follows: the results obtained by analyzing the reviews of 2- and 3-star hotels are presented in Table 3, and those of 4- and 5-star hotels in Table 4.
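A minimal sketch of this frequency count follows, on invented reviews rather than the 31,800 actually analyzed; the real assignment of a mention to satisfaction or dissatisfaction was a manual judgment, for which a crude negation cue stands in here:

```python
# Sketch of the review-mining step: count keyword hits in reviews and split
# them into satisfaction vs dissatisfaction. Reviews below are hypothetical.
from collections import Counter

reviews = [
    "Great location, short walking distance to attractions.",
    "No parking and the noise at night was terrible.",
    "Lovely view and very clean room, close to public transport.",
]
keywords = ["parking", "noise", "view", "distance", "clean", "public transport"]
negative_cues = ["no ", "not ", "lack", "terrible", "dirty", "missing"]

satisfied, dissatisfied = Counter(), Counter()
for review in reviews:
    text = review.lower()
    negative = any(cue in text for cue in negative_cues)
    for kw in keywords:
        if kw in text:
            (dissatisfied if negative else satisfied)[kw] += 1

print("satisfaction:", dict(satisfied))
print("dissatisfaction:", dict(dissatisfied))
```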
In order to identify the share of each variable in the total number of analyzed reviews (18,540 reviews for 2- and 3-star hotels and 13,260 reviews for 4- and 5-star hotels), percentages were determined and are presented in Table 3 for the 2- and 3-star hotels and in Table 4 for the 4- and 5-star hotels.

The short distance to attractions was the most appreciated factor among people who stayed in 2- and 3-star hotels and was in second place in the case of 4- and 5-star hotels. A considerable number of tourists expressed their dissatisfaction with the long distance to points of interest, both among those staying in 2- and 3-star hotels and among those staying in 4- and 5-star hotels. The location of hotels in the vicinity of major points of interest has implications for the environment, reducing carbon emissions by making the use of personal vehicles unnecessary. Informing tourists about the main attractions in the area, the tourist routes arranged in nature, and the public transport network is an important attribute of sustainable environmental practices. The number of cases in which tourists were informed about the above does not exceed the number of cases in which this information was missing, so the lack of information became a factor of dissatisfaction.

(Table 3. Frequency of selected keywords in the online reviews for 2- and 3-star hotels (N = 66). Note: these variables were extracted from, and correlated with, criteria of the Green Key eco-certification standard [68]; the table lists ranked factors of satisfaction and dissatisfaction.)

Local shops located in the immediate vicinity of a hotel are a way to promote culture, history, and nature through locally sourced materials and products. In Romania, the most appreciated shops are food and clothing stores. In this case, the local economy and the social dimension are encouraged by providing jobs for locals; the environmental dimension is less encouraged, because stores do not fully meet the sustainability requirements. Local culture is also represented by traditional restaurants in Romanian hotels that provide live music with local performers; this was appreciated by tourists staying at all 2-5-star hotels. The use of public transport by tourists is another way to reduce carbon emissions, creating further benefits for the environment; opting for public transport was more common among tourists staying at 2- and 3-star hotels. The study shows that tourists are greatly satisfied by opportunities to walk within reasonable distances from the hotel, the level of dissatisfaction when such possibilities do not exist being higher among those staying at 2- and 3-star hotels. Given that tourists appreciate walking so much, carbon emissions could be further reduced if reception staff encouraged tourists to walk by presenting the various tourist routes in the area where the hotel is located. Bike paths, bicycle rental, boat rental, and the possibility to charge an electric vehicle were elements present in few hotels and appreciated by few people, although they represent important elements of environmental practice. The finding that electric vehicle charging stations are an element of environmental practice confirms the results of Abrudan, Pop, and Lazar [62]. Parking is an essential facility: the dissatisfaction created by its absence was expressed by tourists in the reviews section in greater numbers than the satisfaction when parking is offered.
Regarding the cleanliness around the hotel, this aspect was identified only in a few reviews of tourists staying in 2- and 3-star hotels, with equivalent numbers of expressions of appreciation and of dissatisfaction. The view afforded by a hotel's location in a natural setting with plenty of greenery was very much appreciated by tourists across all four hotel categories; conversely, tourists were quite disturbed when the natural setting was missing. We can therefore consider the presence of a natural environment near hotels to be a tool for enticing tourists to become more interested in the conservation and protection of the natural environment. The location of a hotel in a quiet area, accompanied by appropriate sound insulation, occupied the 3rd position in both categories, with the percentage of mentioned satisfaction being higher in the case of 2- and 3-star hotels. If tourists are satisfied with and interested in peace and quiet, they may also be interested in reducing noise pollution in the natural environment. A considerable number of tourists appreciated the air quality, mentioning this for hotels located in mountain areas or surrounded by trees; more tourists staying at 2- and 3-star hotels appreciated fresh air than those staying at 4- and 5-star hotels. It is important for hotels to organize outdoor activities while informing tourists about the importance of conserving and protecting the environment; this criterion was met by only a single hotel out of the 100 hotels studied in Romania and was appreciated by only 7 tourists. Proper insulation of a hotel building is essential to reduce energy losses for heating, but sound insulation must also be considered for acoustic comfort. The review study showed that 2- and 3-star hotels offer poorer sound insulation than 4- and 5-star hotels. The lack of an elevator was in most cases a factor of dissatisfaction, from which we can conclude that many hotels do not have facilities for people with disabilities; the finding that facilities for people with disabilities are relevant confirms the results of Abrudan, Pop, and Lazar [62]. The automation of certain processes contributes to decreased resource consumption and, implicitly, lower costs; 4- and 5-star hotels are the ones that put more emphasis on such technologies. The lighting system must be efficient, without excessive energy consumption. For this, it is recommended that the bulbs used in the rooms be LED or similarly energy-efficient, while outside the hotel and in the hallways the bulbs should have motion-detection sensors in conjunction with LED or equally efficient bulbs [68]. A considerable number of tourists expressed their dissatisfaction with the lack of adequate lighting in the rooms. Very few commented on the lighting system outside the hotel, which is of particular importance because too much lighting at night, especially at hotels located in forests, can damage the ecosystem. Some tourists staying at 2- and 3-star hotels appreciated the thermal comfort in the rooms, in particular the high temperatures found in the rooms during the cold period; however, there were also several mentions of dissatisfaction, in the case of 2- and 3-star hotels, with room temperatures being low in the cold period, causing discomfort to tourists.
In the case of 4- and 5-star hotels, a considerable number of tourists expressed their dissatisfaction with the temperature in the rooms, the thermal discomfort being generated both by low temperatures during the cold season and by high temperatures during the hot season. A higher temperature generates additional costs, higher energy consumption, and therefore higher carbon emissions. It is recommended to maintain the accommodation spaces at a minimum temperature of 22 °C during cold periods and a maximum temperature of 25 °C during hot periods, this being considered the optimal comfort temperature for tourists. The use of air conditioning in rooms must be controlled so that there is no inappropriate energy consumption under carbon reduction policies. In the case of most hotels in Romania, climate control systems are missing or inefficient, both in 2- and 3-star hotels and in 4- and 5-star hotels. In Romania, most hotels are equipped with air conditioning systems, which was greatly appreciated by a significant number of tourists. Among the first factors of dissatisfaction was the inability to control the temperature in the rooms. Given that more tourists complained about the cold than about the heat, the ability to adjust the temperature could lead to greater energy consumption, resulting in a higher cost for the hotel administration. Tourists were often dissatisfied with the water heating system, frequently mentioning that they waited about 15-20 min for the water to reach the right temperature for a shower. This waiting time generates enormous water consumption, with commensurate associated costs. The water flow must not exceed a consumption of 6 L per toilet flush, and in the case of the shower, water consumption should not exceed 9 L per minute, as long as this does not come at the cost of tourist discomfort [68]. A considerable number of tourists expressed dissatisfaction with the water losses caused by defective toilet tanks, which resulted in a continuous flow of water into the toilet bowl, and by faulty taps that did not close; these water leaks were mentioned more often in 2- and 3-star hotels. The number of those who preferred a bathtub instead of a shower in hotels is approximately equal to the number of those who were dissatisfied with the absence of a bathtub; replacing the bathtub with a shower cabin reduces water consumption. The soap dispenser placed in the bathroom is a solution for reducing the amount of plastic used for hygiene items in the bathroom (shampoo, shower gel, etc.); its existence was mentioned very few times in the online reviews of tourists. The use of a key-card can be an important aspect, as it can be used not only to open the door but also to provide electricity in the room, thus eliminating the risk of energy consumption when the tourist is out; too few opinions of tourists on this subject were found. The presence of a minibar proved to be quite appreciated by tourists staying at 2- and 3-star hotels, especially at the seaside, its absence being reported by even more tourists. In the case of 4- and 5-star hotels, the number of reviews related to the minibar was insignificant. The Green Key eco-certification standard suggests that the in-room minibar be replaced with a vending machine located in the hallway to reduce energy consumption [68].
The only cases in which the presence of a vending machine located in the hotel lobby was mentioned were for 2- and 3-star hotels, and these mentions were positive (10 in number). Tourists referred to the existence or absence of a balcony, considering it an important facility for smoking, for drying towels (at seaside hotels), and for admiring the landscape. The Green Key ecological certification standard suggests that hotels and restaurants should not allow tourists to smoke inside, and if the legislation of certain countries allows indoor smoking, 75% of the spaces should be non-smoking [68]. In Romania, the legislation does not allow smoking in enclosed public spaces; nevertheless, a considerable number of tourists (mostly at 2- and 3-star hotels) complained about the smell of tobacco in the room, coming either from walls that had not been repainted after the publication of the new legislation or through the ventilation system in the bathroom. This proves that many people still smoke in enclosed public spaces, despite the legal provisions. In the case of hotels located in warm and sunny areas, it is recommended that there be curtains or other solutions attached to the windows to block the sun's rays, in order to reduce the greenhouse effect and thus the energy consumption associated with cooling and ventilating the rooms [68]; in the case of 2- and 3-star hotels, there were people who expressed dissatisfaction with the absence of curtains. The cleaning service was more a factor of dissatisfaction than of satisfaction for tourists, who complained that no cleaning was done and that used towels or bed linen were not replaced with fresh ones, with only the garbage being taken away. In this case, excessive consumption of energy, water, and detergents is not a risk for hotels in Romania. Only one person expressed dissatisfaction with the fact that his towels were replaced even though he had requested that they not be. Regarding waste separation and the existence of several trash cans, including in the room, there were some hotels that did not provide any trash cans in the room or around the hotel, and dissatisfaction with this was expressed through reviews. The fresh food products offered to tourists usually come from the area where the hotel is located; by encouraging this practice, carbon emissions can be reduced, because food transport is shortened to distances not exceeding 100 km [68]. More tourists staying at 4- and 5-star hotels expressed their satisfaction with the fresh products offered, compared with those staying at 2- and 3-star hotels. Only 3 people expressed their satisfaction with the organic products offered, in the case of 4- and 5-star hotels, and their absence was reported by very few tourists. The vegetarian or vegan option refers to the possibility for tourists to choose products suited to these food preferences. Vegetarian or vegan food has a lower impact on the environment than meat-based food; therefore, restaurants should also offer vegetarian or vegan products on their menu [68]. In Romania, this need of tourists is not adequately met: in tourists' reviews, dissatisfaction in this regard was expressed more frequently than satisfaction. Disposable cutlery is recommended to be used only at swimming pools and gyms, not in restaurants or rooms [68].
However, there were cases where disposable cutlery replaced porcelain tableware or tableware of other materials, generating dissatisfaction among tourists.

In the current epidemiological context, we considered it important to include the criteria of cleanliness and sanitation in the analysis of reviews. The results were as follows: in the case of 2- and 3-star hotels, the word cleanliness was used 2064 times (11% of the total reviews analyzed) and words describing inadequate hygiene 603 times (3.25%); in the case of 4- and 5-star hotels, the word cleanliness was used 1211 times (9.13%) and words describing inadequate sanitation 197 times (1.5%). The analysis found that a considerable number of tourists expressed dissatisfaction with inadequate sanitation in the case of 2- and 3-star hotels, which, if not changed in the near future, could be harmful for those hotels and, implicitly, for tourism in Romania. (A short check of these shares is sketched below.)
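The quoted shares follow directly from the review totals; a quick check, using only the figures from the text:

```python
# Shares of cleanliness/hygiene mentions among all analyzed reviews.
mentions = {
    "2-3 star: cleanliness":     (2064, 18540),
    "2-3 star: poor hygiene":    (603, 18540),
    "4-5 star: cleanliness":     (1211, 13260),
    "4-5 star: poor sanitation": (197, 13260),
}
for label, (count, total) in mentions.items():
    print(f"{label}: {count}/{total} = {count / total:.2%}")
# -> approximately 11.13%, 3.25%, 9.13%, and 1.49%, matching the text.
```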
Conclusions

Following this study, conclusions can be drawn regarding the environmentally friendly practices of booking platforms, the practices of hotels and their level of professional training, and the opinions of tourists and their level of information about and interest in protecting the environment. The analysis of environmental practices on the four booking platforms, Booking.com, TripAdvisor, Directbooking.ro, and Travelminit.ro, highlighted the presence of 18 environmental filters. Only one environmental filter was identified on all four booking platforms analyzed, namely facilities for people with disabilities. Three environmental filters are present on three of the booking platforms analyzed: short distance from the main points of attraction, short distance from public transport, and bicycle/boat rental. Booking.com has the highest ratio of environmental filters to the total number of filters displayed. Considering that, internationally, the presence of ecological certificates has an impact on the way tourists choose the hotels in which to stay, the authors emphasize that hotels in Romania must take steps to obtain ecological certification; this result confirms those of Barbulescu, Moraru, and Duhnea [63]. After analyzing the reviews of tourists, including those staying in units of hotel brands that have publicly stated their commitment to sustainability, the authors consider that hotels should significantly improve the way they inform tourists about the environmental practices they apply. The analysis of tourists' online reviews related to environmental practices on Booking.com shows that distance to attractions, noise level, view, parking, and facilities for people with disabilities are the most relevant attributes; the finding that facilities for people with disabilities are relevant confirms the results of Abrudan, Pop, and Lazar [62]. The number of tourists who explicitly referred to environmentally friendly issues in reviews was quite small. The results of this study show that the share of tourists concerned about the environmental practices of hotels is quite low in Romania, which is why the authors consider that informing tourists on this matter, including through national awareness campaigns, is needed.

More and more tourists learn about tourist offers and book their tourist packages through online booking sites, with the reservation representing the very beginning of their tourist experience. Therefore, it is important that booking platforms provide relevant and useful information for tourists, which could improve their experience, such as information related to the environmental practices of the hotels and accommodation units listed on that platform. In this regard, the authors recommend that companies managing booking platforms and websites adopt a holistic approach to environmental practices that is visible and allows consumers to access such information at all stages of the booking process: search, information, choice, booking, assistance, exchange of consumer experiences, inspiration of other potential customers, etc. Following this study, booking platform developers can implement and display certain filters, features, or options to improve the user experience regarding environmental practices. Displaying the eco-label owned by hotels can be a way to facilitate the search and to shorten the search time of tourists who want to stay in an accommodation unit where environmental practices are implemented. In addition, for accommodation units that do not have an eco-label, providing information on the fact that environmental practices are implemented at some level, possibly with the number of measures implemented, can be useful for tourists interested in this aspect. Moreover, such measures can have long-term implications, becoming a means of educating tourists about environmental practices, paving the way for awareness of the importance of environmentally friendly tourist behavior, and creating demand in the tourism market for such practices. The implementation of these recommendations would most likely spur action by hoteliers, who would become more interested in implementing environmental practices in accommodation units and possibly in obtaining an eco-label; this could lead both to environmental protection and to the sustainable development of hotels, generating positive effects such as decreased consumption of utilities and, therefore, reduced expenses and increased profit. The results of the study are conclusive and relevant for the management of booking platforms, for hotel managers, and for companies interested in developing new businesses in the field of hospitality, and also for raising tourists' awareness of environmental practices. From an academic point of view, this study helps to strengthen existing studies on digitalization and sustainability in the tourism and hospitality industry and provides a basis for future research. It is a small step forward in proposing a framework for the implementation of environmentally friendly attributes at the level of booking websites.

This study has some limitations. The main limitation is that the sampling for analysis was done only on hotels and for a single booking platform. Another limitation is that the research was conducted for hotels in one country, namely Romania, and most of the tourists were Romanians, who are culturally different from guests from other countries. Future research could be extended to complete this study. The authors aim to continue research on this topic, including qualitative and quantitative studies, to investigate other variables, such as the relationship between environmentally friendly issues and practices and consumer preferences in other countries, and to expand the research to other booking websites.
Funding: This research was supported by funding from Transilvania University of Brasov, Romania.

Conflicts of Interest: The authors declare no conflict of interest.
Discrepancy of the Cause of Death in Autopsy Cases of Cardiovascular Disease with a Focus on Cause of Death Statistics

Introduction

Determining an individual's cause of death (COD) is a difficult, yet very important task. On an individual level, determining the COD represents the final decision of death, whereas from a social perspective, determining the COD in individual cases can provide basic data for establishing social policies concerning public health and social safety. Hospital autopsy rates are decreasing worldwide, but autopsies remain useful [1]. Most autopsies performed in Korea are forensic autopsies, and the autopsy rate in Korea is very low [2]. The autopsy is recognized as the gold standard procedure for investigating death, including the determination of COD [3]. Therefore, it is very difficult for countries with low autopsy rates, such as Korea, to establish accurate nationwide COD statistics from a comprehensive investigation of COD. Statistics Korea determines COD by referencing death certificates issued by doctors and various additional data, including medical records, traffic crash reports from the National Police Agency, and registration data from the National Cancer Center. Among such data, autopsy data from the National Forensic Service (NFS) represent some of the most important data for determining the COD. Accordingly, we aimed to compare and analyze CODs determined by autopsy reports and the corresponding CODs recorded by Statistics Korea.

Materials and Methods

Among the 6,610 cases with COD confirmed by autopsy from all deaths that occurred in Korea in 2015, the present study investigated cases with cardiovascular disease as the COD. Among autopsies performed in Korea during 2015, the COD was cardiovascular disease in 1,920 cases, with cardiac and vascular diseases accounting for 1,417 and 503 cases, respectively [2]. There were 1,468 cases for which the COD could be confirmed by data from Statistics Korea; the numbers of cases involving cardiac and vascular diseases were 1,075 and 393, respectively. The reasons for choosing cardiovascular disease for the present study were as follows. First, we expected that there would be more cases with an unclear COD among natural deaths than among unnatural deaths. Second, cardiovascular diseases accounted for the highest proportion of natural deaths in a statistical study on COD by autopsy [2]. Third, there is an increasing trend of cardiovascular disease among CODs in total death statistics in Korea; thus, cardiovascular disease represents an important disease leading to death among Koreans.
Lastly, various cardiovascular diseases lead to death, so we expected wide variability in determining the COD for cardiovascular diseases.

We compared the CODs in the autopsy reports with the CODs recorded for the same autopsy cases of cardiovascular disease by Statistics Korea. Moreover, the causes of any discrepancies were analyzed from the viewpoint of the forensic pathologist who performed the autopsy and from that of Statistics Korea. No identifying information of the deceased was collected. This study was based on data from medicolegal autopsies performed under a court's warrant requested by the public prosecutor. The authors and the institution that approved this study adjudged it exempt from institutional review board approval.

Results

The analysis results of the COD from Statistics Korea for the 1,075 cases of cardiac disease were as follows (Table 1). Among cases of ischemic heart disease (IHD), the most common COD by Statistics Korea was chronic IHD in 365 cases (86.7%), followed by acute myocardial infarction (AMI) in 30 cases. Other CODs included unattended death and other ill-defined and unspecified causes of mortality in five cases, cardiac arrest in three, and other acute IHD in two. Among cases of sudden cardiac death (SCD), 21 different CODs were noted from Statistics Korea. The most common COD confirmed from Statistics Korea was cardiac arrest in 216 cases (76.1%), followed by other ill-defined and unspecified causes of mortality in 15 cases; unattended death, complications, and ill-defined descriptions of heart disease in eight cases; and AMI and cardiomyopathy in five cases each. Among cases of AMI, the most common COD from Statistics Korea was AMI in 256 cases (94.1%), followed by other diseases of the pericardium in four cases and other ill-defined and unspecified causes of mortality in three cases. Among other cardiac diseases listed on autopsy reports as the COD, there were 28 cases of coronary atherosclerosis, for which chronic IHD was the most common COD in 23 cases, followed by two cases of AMI and one case each of cardiac arrest, atherosclerosis, and systemic lupus erythematosus. Among 21 cases of cardiomyopathy, there were 16 cases of cardiomyopathy and five cases of cardiac arrest. For myocarditis, complications and ill-defined descriptions of heart disease were the most common in nine cases, followed by acute myocarditis in six cases and cardiomyopathy in two cases.

The analysis results of the COD from Statistics Korea for the 393 cases of vascular disease were as follows (Table 2). Among cases of intracerebral hemorrhage (ICH), the most common COD from Statistics Korea was ICH in 135 cases (93.8%), followed by subarachnoid hemorrhage (SAH) in four cases. Among cases of SAH, the most common COD by Statistics Korea was SAH in 85 cases (91.4%). Among cases of AAD, AAD accounted for most CODs (57 cases, 91.9%), along with two cases of other ill-defined and unspecified causes of mortality. For cases of pulmonary embolism (PE) and esophageal varix, various CODs by Statistics Korea were identified. Among cases of PE, the most common COD from Statistics Korea was PE in 26 cases, but this accounted for only 59.1%, and there were 11 other diverse CODs. Among cases of esophageal varix, the most common COD from Statistics Korea was fibrosis and cirrhosis of the liver in 16 cases (37.2%), followed by esophageal varix in 12 cases (27.9%) and alcoholic liver disease in 10 cases (23.3%).

Discussion

The present study showed that cases involving AMI, ICH, SAH, and AAD had relatively high concordance in COD determination, whereas cases involving IHD, SCD, PE, and esophageal varix had relatively low concordance.
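A minimal sketch of how such per-category concordance rates might be tabulated from paired COD records is shown below; the record structure and the toy data are assumptions for illustration, not the study's actual data or procedure.

```python
# Hypothetical sketch of tabulating concordance between the COD on the
# autopsy report and the COD recorded by Statistics Korea. The paired
# record format and the toy data are assumptions, not the study's data.
from collections import Counter, defaultdict

# Each pair: (COD on autopsy report, COD recorded by Statistics Korea).
pairs = [
    ("AMI", "AMI"), ("AMI", "chronic IHD"), ("SAH", "SAH"),
    ("PE", "PE"), ("PE", "liver cirrhosis"),   # toy records only
]

by_autopsy_cod = defaultdict(Counter)
for autopsy_cod, stats_cod in pairs:
    by_autopsy_cod[autopsy_cod][stats_cod] += 1

for autopsy_cod, dist in sorted(by_autopsy_cod.items()):
    total = sum(dist.values())
    concordant = dist[autopsy_cod]
    top_cod, top_n = dist.most_common(1)[0]
    print(f"{autopsy_cod}: concordance {concordant}/{total} "
          f"({concordant / total:.1%}); most common Statistics Korea "
          f"COD: {top_cod} (n={top_n})")
```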
The present study analyzed cases with mismatches between the CODs confirmed by autopsy and the CODs confirmed by Statistics Korea. We identified cases with errors in the COD data provided to Statistics Korea by the NFS after autopsy. For example, in a case in which the COD was diagnosed as AMI after autopsy, the information provided to Statistics Korea was a COD corresponding to "natural death-cardiovascular system-heart-IHD." The COD for this case was confirmed as "chronic IHD" by Statistics Korea, instead of AMI. Moreover, another case of death by cardiac tamponade due to AMI, as determined by autopsy, was provided to Statistics Korea as "natural death-cardiovascular system-heart-cardiac tamponade." The COD for this case was confirmed as "other disease of the pericardium" by Statistics Korea, instead of AMI. In such cases, it would be necessary for forensic pathologists to draft the COD data in sufficient detail that the COD determined by autopsy is accurately relayed to Statistics Korea.

We also found cases with errors in the determination of CODs by Statistics Korea. For example, non-traumatic ICH was diagnosed as the COD after autopsy, but the final COD determined by Statistics Korea was malignant neoplasm of the liver and intrahepatic bile ducts; the deceased had undergone surgery for liver cancer about 4 years earlier. In addition, a case of SAH caused by vertebral artery dissection was recorded with AAD as the final COD by Statistics Korea. For such cases, education is required, and systematic improvement is needed through more active discussion between the staff in charge at Statistics Korea and the forensic pathologists.

Lastly, the present study identified cases with differences between the medical COD and the statistical COD. Clinically, PE and esophageal varix are themselves considered important CODs; statistically, however, the conditions that cause PE and esophageal varix are considered more important in determining the COD. Therefore, for cases involving PE, the final COD is often determined based on a bodily injury or disease that caused immobilization. For similar reasons, the final COD is often determined as hepatic disease in cases involving esophageal varix. This type of discrepancy is due to the different purposes of determining the COD in the medical and statistical settings: the underlying COD is more important than the immediate COD in the statistical setting. Additionally, CODs determined by Statistics Korea are preferentially based on death certificates.

In conclusion, our study showed a high concordance between the COD by autopsy and the COD by Statistics Korea [4]. However, the present study also found cases in which the final COD by Statistics Korea differed from the COD on the autopsy report. We analyzed several causes of this and, accordingly, examined the areas that need further discussion and the measures needed for improvement. Autopsies performed in Korea are mostly forensic autopsies; therefore, autopsies performed by the NFS are not conducted for COD determination by Statistics Korea. However, in any society, COD statistics are the most fundamental health statistics. For countries with low autopsy rates, such as Korea, autopsy-based COD data can serve as important input for nationwide COD statistics. Additionally, to obtain more accurate COD statistics, we think that forensic autopsy results should be efficiently reflected in COD statistics.
To achieve this, the forensic pathologist should draft post-autopsy COD data based on an understanding of the rules for determining COD used by Statistics Korea, and provide such data to Statistics Korea. Moreover, the staff in charge of COD determination at Statistics Korea should determine the final COD based on an understanding of both the medical and statistical aspects of COD. Finally, the COD should be determined through closer discussion between the forensic pathologist and the staff in charge at Statistics Korea, and systematic improvement for reconfirming CODs is deemed necessary. Furthermore, an independent organization that monitors deaths, and the establishment of rules for determining COD, seem to be needed.

We limited the present study to cardiovascular disease as the COD. However, because mismatching cases were confirmed by this study, this comparative study should be expanded to include various CODs in order to obtain more accurate COD statistics.

Conflicts of Interest

No potential conflict of interest relevant to this article was reported.
Association between Excessive Use of Mobile Phone and Insomnia and Depression among Japanese Adolescents

The aim of this study was to investigate the relationship between mobile phone use and insomnia and depression in adolescents. A cross-sectional study was conducted on 295 high school students aged 15–19 in Japan. Insomnia and depression were assessed using the Athens Insomnia Scale (AIS) and the Center for Epidemiologic Studies Depression Scale (CES-D), respectively. Mobile phones were owned by 98.6% of students; 58.6% used mobile phones for over 2 h per day and 10.5% used them for over 5 h per day. Overall mobile phone use of over 5 h per day was associated with shorter sleep duration and insomnia (OR: 3.89 [95% CI: 1.21–12.49]), but not with depression. Mobile phone use of 2 h or more per day for social network services (OR: 3.63 [1.20–10.98]) and online chats (OR: 3.14 [1.42–6.95]), respectively, was associated with a higher risk of depression. Mobile phone overuse can be linked to unhealthy sleep habits and insomnia. Moreover, mobile phone overuse for social network services and online chats may contribute more to depression than use for internet searching, playing games, or viewing videos.

Introduction

The distribution of mobile phones among Japanese adolescents has risen rapidly. According to the Cabinet Office, Government of Japan, the distribution rate of mobile phones among adolescents aged 10-17 was 52.6% in 2011, 59.5% in 2013, and 68.3% in 2015 [1]. For instance, in 2015, 96.7% of senior high school students had mobile phones [1]. Many mobile phone users now have the most advanced version, the smartphone. A smartphone is a useful tool that enables access to the internet and social networks, messaging, viewing videos, and playing games; consequently, comparatively more hours are spent on a smartphone than on a conventional phone [2]. In 2008, less than 40% of adolescents used mobile phones for more than 2 h per day [3], while in 2015 about 50% of adolescents used smartphones for more than 3 h per day [4]. In Japan, smartphones have come to be used widely; approximately 95% of Japanese senior high school students use them [1]. Among Asian countries, Japanese adolescents in particular engage in various internet applications such as online gaming, blogging, instant messenger, and e-mail [5]. It is predicted that smartphone use may increase drastically in the future.

The mobile phone is reportedly a useful tool for health promotion. Mobile applications offer effective ways to improve one's lifestyle, for example, by increasing physical activity [6], controlling weight [7,8], and treating obesity [9]. On the other hand, mobile phone use could cause physical [10-12] and psychological [3,13,14] health problems when used excessively. It is reported that mobile phone use in bed at night negatively impacts sleep outcomes [3,15,16]. This may be due to exposure to bright light from electronic devices, which disturbs circadian rhythms and thus sleep quality [16-18]. Mobile phone overuse may also pose the risk of mobile phone or smartphone addiction [4,14], thus contributing to poor sleep quality [13,14,19,20] and psychological problems such as depression and anxiety [13,21,22]. Similar risks associated with mobile phone overuse have also been reported among Japanese adolescents [23,24]. Nowadays, adolescents are likely to use mobile phones for more hours per day, which can lead to poor sleep and psychological problems.
However, there are only a few studies examining the relationship between hours of mobile phone use per day and health problems among adolescents. Focusing on overall hours of mobile phone use per day and on hours spent per purpose of use, the aim of this study was to investigate the associations between mobile phone overuse and insomnia and depression in senior Japanese high school students.

Design and Sample

A cross-sectional study was conducted using self-reported questionnaires. Participants were recruited from one public senior high school in Gifu Prefecture, Japan, between June and July 2014. This school is a prefectural high school in a local city with several courses, including a general course, an agriculture course, an animal husbandry course, a business course, and an information processing course. About 40% of its students go on to university or technical college, and the others get a job. The school comprised 346 students (120 in the 10th grade, 117 in the 11th grade, and 109 in the 12th grade); the first to third grades of high school in Japan correspond to the 10th to 12th grades in the U.S. Anonymous questionnaires were distributed to these 346 students after their class teacher had explained the nature of the study. Students returned the questionnaires in a sealed envelope to ensure the confidentiality of their information. Of the 346 students, 332 (96.0%) agreed to participate in this study. After excluding 37 questionnaires with incomplete information on mobile phone use, insomnia, or depression, 295 (88.9%) were analyzed. This study was approved by the research and ethics committee of the School of Medicine at the Graduate School of Nagoya University (14-102).

Personal Data

Personal data were gathered on participants' sex, age, and school grade.

Lifestyle

Question items related to lifestyle included participation in school club activities, eating breakfast, talking with family, wake-up time, bedtime, and sleeping hours. Participants selected responses regarding participation in school club activities (sports club, culture club, or none). Similarly, they answered questions on eating breakfast (eat daily or occasionally skip breakfast) and talking with family (talk daily or occasionally do not talk). Participants were also asked about wake-up time (earlier than 06:00, 06:00-07:00, or later than 07:00), bedtime (earlier than 23:00, 23:00-00:00, 00:00-01:00, or later than 01:00), and sleeping hours (<5 h, 5 to 6 h, 6 to 7 h, or ≥7 h).

Mobile Phone Use

Mobile phone use was determined by the question "Do you own a mobile phone?" (own a smartphone, own a conventional phone, or none). Hours of mobile phone use per day were assessed by the question "How many hours per day do you usually spend on a mobile phone during a typical day?", with the responses: none, <1 h, 1 to 2 h, 2 to 3 h, 3 to 4 h, 4 to 5 h, or ≥5 h. The responses regarding purposes of use, such as e-mail, social networking sites (SNS) (e.g., Facebook, Twitter, Instagram), online chat (e.g., Line, Skype, Kakao Talk), internet searching, playing games, and viewing videos, were: none, <30 min, 30 to 60 min, 60 to 120 min, or ≥120 min.

Depression

Depression was evaluated using the Japanese version of the Center for Epidemiologic Studies Depression (CES-D) scale, a 20-item self-administered questionnaire [25]. The CES-D, developed by Radloff [26], is widely used in many countries to assess depressive symptoms in general populations, including adolescents [26,27].
Its reliability and validity have been demonstrated [25,26]. Internal consistency as measured by Cronbach's alpha is reported to be around 0.85 in community samples [26] and in adolescents [27,28]; Cronbach's alpha in the current study was 0.83. This scale assesses the frequency of depressive symptoms experienced during the last week (0: rarely or never; 1: sometimes or on rare occasions; 2: occasionally or a moderate amount of time; and 3: most or all of the time). Scores range from 0 to 60, where a higher score indicates more severe depression. A score of 16 or greater is used to define clinically meaningful depressive symptoms [26,27]. Hence, in this study, the cut-off value of 16 points was used to identify students with depression.

Insomnia

Insomnia was evaluated using the Japanese version of the Athens Insomnia Scale (AIS) [29]. This scale is widely used as a tool to assess insomnia [29,30], and its reliability and validity have been demonstrated. Cronbach's alpha values are 0.88 in community samples [29] and around 0.81 in adolescents [31,32]; Cronbach's alpha was 0.79 in this study. The scale consists of eight items: difficulty with sleep induction; waking up during the night; waking up early in the morning; total sleep time; overall quality of sleep; problems with sense of well-being; problems with functioning; and sleepiness during the day. Each item is rated on a scale of 0 (no problem) to 3 (serious problem), and the total score ranges from 0 to 24. An AIS score of six points is the optimum cut-off based on the ICD-10 diagnosis of insomnia [30]. Therefore, in this study, the cut-off value of six points was used to identify insomnia.

Social Support

Social support was evaluated using the Japanese brief version of the Multidimensional Scale of Perceived Social Support (MSPSS), a 7-item self-administered questionnaire [33,34]. This scale measures perceived social support from family, friends, and a significant other. Its reliability and validity have been shown: Cronbach's alpha values for the MSPSS are reportedly 0.85 in community samples [34], and 0.83 and 0.93 in adolescents [35,36]; Cronbach's alpha was 0.93 in this study. Each item is rated on a 7-point Likert scale, and the total score is calculated by averaging the scores for all items. Scores range from 1 to 7, with a higher score indicating better social support.

Statistical Analysis

Differences between mobile phone use and insomnia, depression, and the other health constructs were tested using the Mantel-Haenszel test for trend and the Kruskal-Wallis test. Associations between mobile phone use and insomnia or depression were examined using multiple logistic regression analyses. The dependent variable was insomnia (0 = no problem [AIS score <6] and 1 = insomnia [AIS score ≥6]) or depression (0 = no problem [CES-D score <16] and 1 = depression [CES-D score ≥16]). Odds ratios (ORs) were calculated from the logistic regressions, adjusting for age, sex, and factors associated with the dependent variable. In the logistic regressions, the variables of overall hours of mobile phone use per day and hours spent on e-mail, SNS, online chat, internet searching, playing games, and viewing videos were each included as predictors. Coefficients of correlation between the time variables were not high (r: −0.025 to 0.569). p-values < 0.05 were considered statistically significant. All statistical analyses were performed using SPSS 20.0 (IBM Japan, Tokyo, Japan) for Windows.
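The following is a hypothetical Python sketch of the scoring and odds-ratio computation just described; the study itself used SPSS 20.0, so the file name, column names, and covariate coding are illustrative assumptions, not the authors' actual code.

```python
# Hypothetical sketch of the scoring and odds-ratio computation described
# above. The study itself used SPSS 20.0; the file name, column names, and
# covariate coding below are illustrative assumptions, not the actual code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")   # assumed: one row per student

# Sum the item scores and dichotomize at the cut-offs given in the text.
ais_items = [f"ais{i}" for i in range(1, 9)]      # eight AIS items, 0-3 each
cesd_items = [f"cesd{i}" for i in range(1, 21)]   # twenty CES-D items, 0-3 each
df["insomnia"] = (df[ais_items].sum(axis=1) >= 6).astype(int)
df["depression"] = (df[cesd_items].sum(axis=1) >= 16).astype(int)

# Logistic regression for insomnia on categorical hours of use,
# adjusting for age and sex (further covariates added as in the paper).
model = smf.logit("insomnia ~ C(use_hours) + age + C(sex)", data=df).fit()

# Exponentiate coefficients to get odds ratios with 95% CIs.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_table)
```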
Results

As shown in Table 1, the 295 students comprised 173 (58.6%) boys and 122 (41.4%) girls, with a mean (standard deviation, SD) age of 16.2 (0.9) years (range: 15 to 19). Mobile phones were owned by 98.6% of participants (n = 291), and 92.9% (n = 274) owned a smartphone. Concerning hours of mobile phone use, 58.6% of participants used them for more than 2 h per day, and 10.5% used them for over 5 h per day.

Relationships between overall hours of mobile phone use per day and lifestyle, social support, insomnia, and depression are shown in Table 2. More hours per day of overall mobile phone use were associated with female sex (p < 0.001), non-participation in the school's club activities (p < 0.001), late bedtime (p = 0.001), short hours of sleep (p = 0.006), and occasionally skipping breakfast (p = 0.007). Insomnia and depression were also associated with longer total hours of mobile phone use (p = 0.025 and p = 0.022, respectively).

Table 3 shows the associations between mobile phone use and insomnia or depression in multiple logistic regression analyses, adjusting for age, sex, and factors associated with insomnia or depression. Insomnia was more frequent in students who used mobile phones for 5 h or more a day than in those using mobile phones for less than 1 h (OR: 3.89, 95% confidence interval (CI): 1.21-12.49). Mobile phone use of 120 min or more for online chat was also associated with insomnia (OR: 2.81; 95% CI: 1.28-6.15), compared with mobile phone use of less than 30 min for online chat. Meanwhile, overall hours of mobile phone use were not related to depression. On the other hand, 120 min or more of mobile phone use for SNS (OR: 3.63; 95% CI: 1.20-10.98) and online chat (OR: 3.14; 95% CI: 1.42-6.95) was associated with depression, compared with mobile phone use of less than 30 min for SNS and online chat. Hours spent using a mobile phone for internet searching or video games were not linked with depression. (Table 3 notes: data are expressed as odds ratios (OR) and 95% confidence intervals (CI) from logistic regression analysis; p values by multiple logistic regression analysis; † adjusted for age, sex, talking with family, and social support; ‡ adjusted for age, sex, talking with family, social support, and hours spent sleeping.)

Discussion

The present study showed that excessively long hours of mobile phone use were associated with insomnia, particularly in students using mobile phones for 5 h or more per day compared with those using mobile phones for less than 1 h per day. On the other hand, no association was found between total hours of mobile phone use and depression. Interestingly, however, long hours of mobile phone use for SNS or online chat were related to depression, particularly in students who spent 120 min or more on SNS and online chat, while hours spent using a mobile phone for internet searching, playing games, or viewing videos were not associated with depression.

This study showed that long hours of mobile phone use were a risk factor for insomnia, and suggests that overuse of 5 h or more a day could be a marker of a higher risk of insomnia. To our knowledge, there are only two studies examining the association between sleep disturbances and hours of mobile phone use. Among adolescents in Hong Kong, long hours of mobile phone use were correlated with short sleep duration, poor sleep quality, and excessive daytime sleepiness [20].
Another study, in Japanese high school students, reported that long hours of mobile phone use were associated with short sleep time and fatigue [23]. Both findings support an association between long hours of mobile phone use and sleep disturbances, as shown in this study. In the present study, among adolescents using mobile phones for 5 h or more a day, 61.3% reported a bedtime after 00:00, and 67.8% slept less than 6 h per day. Thus, mobile phone overuse was linked to disturbances in sleep habits, which are a known risk factor for insomnia [37]. It is therefore considered that mobile phone overuse can impair sleep habits and thus contribute to insomnia.

In the present study, depression was not associated with total hours of mobile phone use. However, long hours spent using mobile phones for SNS and online chat were related to depression, while hours spent using mobile phones for internet searching, playing games, or viewing videos were not. SNS (e.g., Facebook, Twitter, or Instagram) and online chat (e.g., Line, Skype, Kakao Talk) are popular online communication tools among adolescents [38]. Some earlier studies have indicated that their use is associated with mental health problems [39-42]. Additionally, it has been reported that internet addiction can be predicted by the use of SNS and chat rooms [39,40], and that the use of SNS contributes to psychological distress, suicidal ideation, and suicide attempts [41,42]. Sampasa-Kanyinga and Lewis [42] reported that using SNS for more than 2 h every day was independently associated with poor self-rated mental health, high levels of psychological distress, and suicidal ideation. The present study confirmed that 2 h or more a day spent using mobile phones for SNS and online chat could increase the risk of depression. SNS and online chat enable one to communicate and interact with a large number of people; hence, young users may spend more time on them [43]. However, the overuse of SNS and online chat sometimes undermines well-being and life satisfaction [44], increases the risk of cyberbullying victimization [41], and can also be related to depression in adolescents.

The present study showed that female adolescents used mobile phones for longer hours per day than males. Previous studies also indicated that women tended to overuse online applications with a social or communication function, such as e-mail, chat, and SNS [45-47]. Moreover, overuse of online communication was more likely to cause sleep disturbances and stress among women [48]. Thus, female adolescents in particular should be careful to prevent mobile phone overuse. The Japanese government has concerns about various internet-related problems among adolescents, such as bullying, crime, and addiction, and calls for educating school students about appropriate mobile phone use, including restricting hours of mobile phone use [49]. The present findings suggest that appropriate use of mobile phones by adolescents needs to be considered.

The present study had some limitations. The subjects were limited to participants in a single high school in central Japan, so the findings cannot be broadly generalized to other areas and countries. Another limitation is that the information on hours of mobile phone use was not very precise because it was obtained using a self-administered questionnaire; therefore, "5 h or more" is a rough criterion for overuse. Finally, this was a cross-sectional study.
Hence, the present findings do not demonstrate a causal relation. Even so, this study produced meaningful findings: mobile phone overuse could be linked to impaired sleep habits and consequently to insomnia, with 5 h or more of phone use per day in particular possibly increasing the risk of insomnia. Additionally, the use of mobile phones for SNS and online chat for 120 min or more per day might be related to depression among adolescents. Appropriate use of mobile phones should be considered.

Conclusions

The present study found that long hours of mobile phone use were associated with insomnia, particularly in students using mobile phones for 5 h or more a day. Additionally, long hours spent using mobile phones for SNS or online chat were related to depression, particularly in students who spent 2 h or more per day on SNS and online chat. Appropriate use of mobile phones should be considered in order to prevent sleep disturbances and the impairment of mental health among adolescents.

Conflicts of Interest

The authors declare no conflict of interest.
Accessorizing and anchoring the LINC complex for multifunctionality

The linker of nucleoskeleton and cytoskeleton (LINC) complex, composed of outer and inner nuclear membrane Klarsicht, ANC-1, and Syne homology (KASH) and Sad1 and UNC-84 (SUN) proteins, respectively, connects the nucleus to cytoskeletal filaments and performs diverse functions including nuclear positioning, mechanotransduction, and meiotic chromosome movements. Recent studies have shed light on the source of this diversity by identifying factors associated with the complex that endow specific functions as well as those that differentially anchor the complex within the nucleus. Additional diversity may be provided by accessory factors that reorganize the complex into higher-ordered arrays. As core components of the LINC complex are associated with several diseases, understanding the role of accessory and anchoring proteins could provide insights into pathogenic mechanisms.

Introduction

The linker of nucleoskeleton and cytoskeleton (LINC) complex is widely recognized as the major means by which the nucleus is mechanically linked to the cytoskeleton in eukaryotic cells. It is composed of Klarsicht, ANC-1, and Syne homology (KASH) domain proteins in the outer nuclear membrane and Sad1 and UNC-84 (SUN) domain proteins in the inner nuclear membrane (Fig. 1). The KASH domain projects into the perinuclear space between the inner and outer nuclear membranes, where it interacts with the SUN domain of SUN proteins. This interaction prevents the KASH protein from diffusing out of the outer nuclear membrane into the contiguous ER. KASH proteins extend into the cytoplasm and allow the LINC complex to bind to different cytoskeletal elements and signaling molecules. SUN proteins in turn are localized in the inner nuclear membrane, anchoring the LINC complex in the nucleus by interactions with A-type lamins, chromatin-binding proteins, and other proteins.

At its core, the LINC complex is a two-membrane adhesive assembly that is capable of transmitting mechanical force across the nuclear envelope. This capability is adapted for a diverse range of functions including moving the nucleus, maintaining the centrosome-nucleus connection, shaping the nucleus, signal transduction, DNA repair, and moving chromosomes within the nucleus (Starr and Fridolfsson, 2010). This functional diversity is achieved by assembling the LINC complex from distinct KASH proteins that interact with different cytoskeletal filaments and by associating with accessory factors. The LINC complex must be dynamic in order to switch between these functions, and to allow assembly of higher-ordered arrays that can transmit force to the nucleus as a whole or, alternatively, into the nucleus.

We review the core LINC complex and interacting partners that alter cytoskeletal functionality and reinforce the core complex to permit force transduction. We consider how the LINC complex is differentially anchored for transmitting force to or into the nucleus. Furthermore, we examine data revealing that LINC complex components interact with signaling molecules, which suggests a role in signal transduction. Finally, we examine higher-ordered assemblies of LINC complexes and the role that accessory and anchoring proteins play in their formation and function.
We do not address the function of short isoforms of KASH proteins that are generated by alternative transcriptional start sites or splicing, as these forms either do not localize to the nuclear membrane (KASH-less isoforms) or are unlikely to form LINC complexes, given their localization in the inner nuclear membrane (see Rajgor et al., 2012 for further discussion). Additionally, we refer the reader to reviews that cover other aspects of the LINC complex such as the discovery of its components and functions , threedimensional structure (Sosa et al., 2013), role in nuclear positioning and meiosis (Hiraoka and Dernburg, 2009), and association with disease (Burke and Stewart, 2014). The linker of nucleoskeleton and cytoskeleton (LINC) complex, composed of outer and inner nuclear membrane Klarsicht, ANC-1, and Syne homology (KASH) and Sad1 and UNC-84 (SUN) proteins, respectively, connects the nucleus to cytoskeletal filaments and performs diverse functions including nuclear positioning, mechanotransduction, and meiotic chromosome movements. Recent studies have shed light on the source of this diversity by identifying factors associated with the complex that endow specific functions as well as those that differentially anchor the complex within the nucleus. Additional diversity may be provided by accessory factors that reorganize the complex into higher-ordered arrays. As core components of the LINC complex are associated with several diseases, understanding the role of accessory and anchoring proteins could provide insights into pathogenic mechanisms. binding interface lying between adjacent SUN domains, multimerization of SUN monomers through the triple helical coiledcoil is required for KASH peptide binding (Sosa et al., 2012;Zhou et al., 2012b). In addition to multiple noncovalent interactions, the KASH peptide can make a disulfide bond to the SUN domain. The extensive interactions between the KASH and SUN domains provide an explanation for how the LINC complex resists mechanical forces applied on KASH proteins by the cytoskeleton. Based upon the projected length of the coiled-coil of the SUN trimer, it has been proposed that the LINC complex maintains the spacing of the inner and outer nuclear membranes (Sosa et al., 2012). Data are mixed on this issue. In HeLa cells, disrupting the SUN-KASH interaction with dominant-negative versions or knockdowns alters spacing of the two membranes Structure of the LINC complex: implications for force transmission Two groups have described the crystal structure of the SUN2 protein in complex with the KASH domain of Syne-2/nesprin-2 (Sosa et al., 2012;Wang et al., 2012). (Note: the original KASH proteins in mice were named Syne-1 and Syne-2 [Apel et al., 2000], but as the family expanded, most KASH proteins in vertebrates became known as nesprins, for nuclear envelope spectrin repeat [SR] protein [Zhang et al., 2001], a term we use here.) SUN2 is a trimer with a globular head composed of SUN domains and a stalk composed of a triple helical coiled-coil (Fig. 1). The KASH peptide binds along a hydrophobic groove between adjacent SUN domains, with additional interaction provided by a "KASH-lid" that covers part of the peptide. Consistent with the Figure 1. The LINC complex bridges the cytoskeleton and nucleoskeleton. The LINC complex is composed of KASH proteins in the outer nuclear membrane and SUN proteins in the inner nuclear membrane. The lumenal region of SUN proteins forms a triple helical coiled-coil, allowing trimerization of their SUN domains. 
The hydrophobic groove between neighboring SUN domains is required for the KASH peptide to bind, and this interaction is further strengthened by a KASH-lid of the SUN domain (see text). The cytoplasmic extensions of KASH proteins vary in size and interact with different cytoskeletal elements. Mammalian KASH proteins typically contain several SRs (see text). The nucleoplasmic domains of SUN proteins anchor the LINC complex to the nucleoskeleton, through its interaction with nuclear lamina, as well as chromosome-binding proteins and probably other anchoring proteins (see Fig. 3). INM, inner nuclear membrane; ONM, outer nuclear membrane. 2002). However, small KASH proteins such as nesprin-3 and nesprin-4 in mammals and UNC-83 in C. elegans also interact with cytoskeletal elements. Additionally, in at least one case, a small chimeric variant of nesprin-2G is capable of functionally rescuing actin-dependent nuclear movement defects when expressed in cells depleted of nesprin-2G (Luxton et al., 2010). Alternatively, large KASH proteins may provide scaffolding functions that enhance resistance to mechanical force, influence signaling, or contribute to higher-ordered assemblies of LINC complexes. Actin filaments. The giant KASH proteins nesprin-1G, nesprin-2G, ANC-1, and Msp-300 bind directly to actin filaments through paired calponin homology (CH) domains that strongly resemble those in other actin-binding proteins such as -actinin. Aside from SRs or coiled-coils, these CH domains are one of the few recognizable structural domains in cytoplasmic extensions of KASH proteins (nesprin-4 contains a leucine zipper that may contribute to dimerization; Roux et al., 2009). In each case, the CH domains are at the amino terminus of the protein separated by a long stretch of SRs or coiled-coils from the C-terminal membrane-spanning KASH domain (Fig. 2). The CH domains of giant KASH proteins are sufficient for recruiting actin filaments to the nuclear surface and are required for actin-dependent nuclear movement and positioning (Zhang et al., 2001;Starr and Han, 2002;Luxton et al., 2010). Yet, a recent study indicates that these domains alone are not sufficient to resist the mechanical load when actin moves the nucleus (Kutscheidt et al., 2014). Fibroblasts polarizing for migration move their nucleus rearward, resulting in reorientation of the centrosome (Gomes et al., 2005). This movement results from coupling of retrogradely moving dorsal actin cables to the nucleus by SUN2-nesprin-2G LINC complexes that assemble into linear arrays known as transmembrane actin-associated nuclear (TAN) lines (Luxton et al., 2010. The CH domains of nesprin-2G are necessary for TAN line formation and nuclear movement, yet nesprin-2G requires interaction with another actin-binding protein, the formin FHOD1, to assemble TAN lines and move the nucleus (Kutscheidt et al., 2014). FHOD1 has a typical formin domain structure, but has a unique second actinbinding site in its amino terminus that, in conjunction with an adjacent site that binds to SR11-13 of nesprin-2G (see Fig. 2), is sufficient to cross-link nesprin-2G and actin filaments (Kutscheidt et al., 2014). The FHOD1-interacting domain of nesprin-2G is within one of two clusters of SRs that are highly evolutionarily conserved and not contained in nesprin-1G (Kutscheidt et al., 2014). This region is predicted to be a site for protein-protein interaction (Autore et al., 2013), and also binds to the membrane protein meckelin, which participates in ciliogenesis ( Fig. 
2; Dawe et al., 2009). Plant KASH-like proteins have been identified based upon their interaction with SUN proteins, localization to the nuclear envelope, and conserved carboxyl-terminal domains (Tamura et al., 2013;Zhou et al., 2014). Two of the five newly identified plant KASH-like proteins, termed SUN-interacting nuclear envelope (SINE) proteins, bind actin filaments through their amino-terminal armadillo repeats and are required for actindependent anchorage of the nucleus in the center of guard cells (Zhou et al., 2014). Other plant KASH-like proteins called WPP (Crisp et al., 2006). However, disruption of the SUN protein UNC-84 in Caenorhabditis elegans only affects spacing when nuclei are actively under force, as in body wall muscle cells, and deleting a large portion of the luminal domain does not change the spacing (Cain et al., 2014). As nuclei in HeLa cells and other adherent cells are under constant tension, these studies suggest that the LINC complex only contributes to spacing when the nucleus is under stress. A striking feature of SUN protein structure is its trimeric nature. It is clear from the crystal structure that SUN domain interfaces are required for KASH peptide binding, and individual SUN2 domains fused with an unrelated trimeric coiled-coil restore their KASH binding (Sosa et al., 2012). Yet, the trimeric nature of the SUN protein suggests additional features of LINC complex function. The triple helical nature of the SUN stalk may be required for efficient force transmission across the nuclear membranes and to withstand the high loads required for bulk movement of the nucleus or movements of meiotic chromosomes. Another possibility, which we consider later, is that the trimer contributes to the formation of higher-ordered arrays of LINC complexes. Accessorizing the LINC complex through KASH protein interactions KASH protein cytoplasmic extensions. Specificity of the LINC complex for attachment to cytoskeletal elements is determined by specific KASH proteins. These proteins have cytoplasmic extensions with distinct domains that bind directly or indirectly to cytoskeletal filaments. The repertoire of KASH proteins expressed in vertebrates and invertebrates allows for binding to actin and microtubules and, in vertebrates, intermediate filaments . Yeast and plants have divergent KASH-like proteins that engage microtubules in Schizosaccharomyces pombe (Chikashige et al., 2006;King et al., 2008) and actin filaments in Saccharomyces cerevisiae (Conrad et al., 2008;Koszul et al., 2008) and plants (Tamura et al., 2013;Zhou et al., 2014). KASH protein cytoplasmic extensions vary greatly in size from <30 kD to >1 MDa in mammals, C. elegans, and Drosophila melanogaster. For most KASH proteins, and the large ones in particular, the most prominent structural feature in their cytoplasmic extensions is the presence of extended regions containing predicted SRs or coiled-coil domains (Fig. 2). In the "giant" KASH proteins in mammals, the vast majority of the cytoplasmic extension is predicted to be composed of SRs, with 74 in nesprin-1G and 56 in nesprin-2G (Simpson and Roberts, 2008;Autore et al., 2013). Smaller KASH proteins in mammals (nesprin-3, nesprin-4, and small isoforms of nesprin-1 and nesprin-2 arising from alternative splicing and transcriptional initiation) also contain SRs. However, except for the Drosophila Msp-300, SRs are not found in other KASH or KASH-like proteins, including the giant C. 
elegans ANC-1, which instead is predicted to contain coiled-coil segments within tandem repeats (Fig. 2). The significance of the dramatic size variation among KASH proteins is unclear. It has been proposed that the large size and presumed extended length of the giant KASH proteins may enhance their interaction with the cytoskeleton (Starr and Han, a kinesin motor domain that appears to act as a microtubule depolymerase (Tikhonenko et al., 2013). An exception is the KASH-less p50 Nesp1 isoform that contains SRs 48-51 of nesprin-1 and interacts directly with microtubules in cosedimentation assays and colocalizes to P granules in cells (Rajgor et al., 2014). Presumably, all nesprin-1 isoforms containing SR 48-51 have the potential to directly interact with microtubules. In many cases, association of KASH proteins with microtubule motor proteins is direct and occurs through discrete regions in their cytoplasmic extensions and specific subunits of motor proteins (Fig. 2). Nesprin-1, nesprin-2, KASH5, UNC-83, and Zyg-12 interact either directly or indirectly with cytoplasmic dynein (Fig. 2). Detailed mapping has shown that a site near the KASH domain of UNC-83 binds dynein light chain DLC-1 and the dynein regulators NUD-2 (a homologue of mammalian domain-interacting protein (WIP) interact with myosin XI-i through another integral membrane protein called WPP domaininteracting tail-anchored protein (WIT). Myosin XI-i is recruited to the nuclear membrane by WIP-WIT proteins to regulate nuclear shape and dark-induced nuclear movement in plant cells (Zhou et al., 2012a;Tamura et al., 2013). Microtubules. KASH proteins that interact with microtubules include nesprin-1, nesprin-2, nesprin-4, and probably KASH5 in mammals, fue in zebrafish, UNC-83 and ZYG-12 in C. elegans, Klar in Drosophila, Kif9 in Dictyostelium discoideum, and Kms1 and Kms2 in fission yeast Gundersen and Worman, 2013). In almost all cases, the interaction with microtubules is mediated through association of the KASH protein with the motor proteins kinesin, dynein, or both. The D. discoideum KASH protein Kif9 itself contains Schematics are shown summarizing findings in mammals and C. elegans where the most information is available. Lines under KASH proteins indicate binding regions. "Unmapped" refers to proteins whose sites of interaction have not yet been identified. Giant KASH proteins (e.g., nesprin-1G, nesprin-2G, and ANC-1) contain CH domains that bind to F-actin, microtubule motors, and signaling proteins. The small isoforms typically interact with microtubule motors and/or their regulators. Interacting proteins in blue are characterized in isoforms lacking the KASH domain. Note that two KASH proteins, mammalian LRMP and C. elegans KDP-1, were omitted because of the lack of known interacting proteins. Intermediate filaments. Nesprin-3 is the only KASH protein known to interact with cytoplasmic intermediate filaments. It is one of two known isoforms and contains a unique region at its amino terminus that interacts with the actin-binding domain (ABD) of plectin, which binds intermediate filaments through its plakin domain ( Fig. 2; Wilhelmsen et al., 2005). The same region of nesprin-3 also interacts with the ABD of BPAG1n/ dystonin-2a (Wilhelmsen et al., 2005;Young and Kothary, 2008). It is unknown whether the binding of these proteins' ABDs to nesprin-3 prevents simultaneous binding to actin. 
Interestingly, nesprin-3 also binds to the ABD (i.e., CH domains) of nesprin-1G and nesprin-2 and may control nuclear size through formation of a nesprin meshwork (Lu et al., 2012). Nesprin-3 also appears to function in the cellular response to shear stress and force transmission (Lombardi et al., 2011;Morgan et al., 2011;Chambliss et al., 2013) and in fibroblast migration in 3D matrices, where the cells use a distinct form of migration in which the front and rear of the cell are compartmentalized by the nucleus and associated ER so that actomyosin-dependent forward movement of the nucleus creates pressure in the front of the cell to generate lobopodial protrusions (Petrie et al., 2014). Signaling scaffolds. Growing evidence indicates that KASH proteins act to tether signaling molecules. Early studies showed that nesprin-1 interacts with muscle-specific tyrosine kinase (Apel et al., 2000) and muscle A kinase anchoring protein (mAKAP; Pare et al., 2005). A KASH-less form of nesprin-2 interacts with active mitogen-activated protein kinases and promyelocytic leukemia protein (Warren et al., 2010). More recently, nesprin-2 has been shown to interact with -catenin (Neumann et al., 2010). Through this interaction and its interaction with the nuclear envelope protein emerin, nesprin-2 positively regulates the nuclear localization of active -catenin and Wnt signaling (Neumann et al., 2010). Curiously, emerin, which interacts with both nesprin-2 and -catenin, negatively regulates Wnt signaling by restricting nuclear accumulation of -catenin (Zhang et al., 2005;Markiewicz et al., 2006). Given that emerin and -catenin interact with the same region of nesprin-2 (see Fig. 2), it is possible that competition between emerin and catenins for nesprin-2 binding may explain the opposing roles of nesprin-2 and emerin in Wnt signaling. KASH protein regulation of -catenin and Wnt signaling may be phylogenetically conserved. C. elegans ANC-1 interacts with Regulator of Presynaptic Morphology 1 (RPM-1), a regulator of neuronal development and regeneration (Tulgren et al., 2014). Genetic analysis suggests that RPM-1, ANC-1, and -catenin function together to regulate synapse formation in motor neurons and axon termination in the mechanosensory neurons. This function of ANC-1 requires its nuclear localization and is negatively regulated by emerin as in mammalian cells (Tulgren et al., 2014). Although additional research is required to understand how KASH proteins contribute to Wnt signaling, an attractive hypothesis is that they enhance the perinuclear concentration of active -catenin. The LINC complex has also been implicated in very rapid mechanochemical signaling to the nucleus (Isermann and Lammerding, 2013). It is clear that the nucleus responds to force and that LINC complex components are necessary for this force NudE) and BICD-1 . Mammalian KASH5 also binds dynein and dynein regulators near its KASH domain (Morimoto et al., 2012;Horn et al., 2013b). Mammalian NudE/ EL may indirectly interact with the LINC complex, as SUN1/2 and NudE/EL are required for dynein-dependent removal of nuclear membranes from chromatin during nuclear envelope breakdown (Turgay et al., 2014). Sites of dynein interaction with nesprin-1 and nesprin-2 have not yet been identified. Several KASH proteins also bind kinesin-1 motors, including nesprin-2 (Zhang et al., 2009;Schneider et al., 2011;Yu et al., 2011), nesprin-4 (Roux et al., 2009, and UNC-83 (Meyerzon et al., 2009;. 
The interaction is usually mediated by direct binding of kinesin-1 light chains, through their tetratricopeptide repeats, to sites near the KASH domains (Fig. 2). An emerging theme is that KASH proteins do not simply interact with a single motor or cytoskeletal filament but rather functionalize the surface of the nucleus by providing binding sites for multiple cytoskeletal elements. For example, nesprin-2G binds actin filaments at one end through its amino-terminal CH domains and FHOD1 interaction site, whereas its other end binds kinesin-1 (Fig. 2). This calls into question how KASH protein-bound motors and other elements are coordinated to yield the largely unidirectional and single cytoskeletal track movements of nuclei that have been observed. For example, nesprin-2G is involved in actin-dependent nuclear movement in polarizing fibroblasts (Luxton et al., 2010) as well as microtubuleand dynein-dependent movement of nuclei in migrating neurons and developing photoreceptor cells (Zhang et al., 2009;Yu et al., 2011). Clearly, KASH proteins and/or their associated motors must be regulated to select one activity over another. Interaction with multiple microtubule motors may allow nuclei to be moved predominantly in one direction but with the capability of "back-tracking" to negotiate obstacles. In a detailed analysis of nuclear movement in hypodermal precursors in C. elegans, which involves the KASH protein UNC-83 and its binding partners kinesin and dynein, microtubule plus enddirected movements were interspersed with minus end-directed movements and rolling movements . In the absence of dynein, these latter movements were lost and nuclei failed to move efficiently toward the plus ends. KASH protein engagement of microtubule motors also facilitates centrosome association with the nucleus, one of the first functions attributed to the LINC complex. In C. elegans, the KASH protein ZYG-12 interacts with dynein through its light intermediate chain and a centrosome-localized splice variant that lacks the KASH domain to maintain the centrosome near the nucleus (Malone et al., 2003). Several KASH proteins, including nesprin-1, nesprin-2, nesprin-3, and KASH5 in mammals and Kif9 in D. discoideum, have been implicated in maintaining the centrosome in close juxtaposition to the nucleus (Zhang et al., 2009;Schneider et al., 2011;Yu et al., 2011;Horn et al., 2013b;Tikhonenko et al., 2013). In contrast, overexpression of nesprin-4 increases the nucleus-centrosome distance . In yeast, KASH proteins are integral components of the spindle pole body and maintain the close association between the nucleus and microtubule organizing centers (Niwa et al., 2000;King et al., 2008). Additionally, both SUN1 and SUN2 have increased diffusional mobility in nuclei of mouse fibroblasts lacking A-type lamins compared with cells from wild-type mice (Östlund et al., 2009). Finally, experiments on actin-dependent nuclear movement in migrating fibroblasts that lack A-type lamins show that nesprin-2G-SUN TAN lines are relatively unstable and slip over the nucleus rather than move with it, indicating a defect in anchoring . Similarly, nuclear migration in C. elegans is impaired when the interaction between UNC-84 and the lamin LMN-1 is weakened (Bone et al., 2014). Nevertheless, studies on the intracellular localization of SUN proteins show that factors other than lamin A binding must contribute to anchoring the LINC complex, particularly in mammalian cells. Expression of the single C. 
elegans lamin is apparently required for proper localization of UNC-84 , yet its closest mammalian orthologue SUN1 is properly localized in cells lacking A-type lamins or both A-type and B-type lamins (Padmakumar et al., 2005;Crisp et al., 2006;Haque et al., 2006;Hasan et al., 2006). Similarly, SUN2 is only minimally displaced to the ER when lamins A and C are lacking, which suggests that additional factors are also involved in its localization (Crisp et al., 2006). It should be noted that the Lmna / fibroblasts used in some of these studies actually expresses a truncated lamin A (Jahn et al., 2012), which may have dominant-negative effects because some of the phenotypes in the Lmna / cells cannot be rescued by reexpression of wild-type lamin. However, most of the results were confirmed with siRNA knockdown. These studies indicate that at least in mammalian cells, other factors contribute to SUN protein transmission (Maniotis et al., 1997;Lombardi et al., 2011;Chambliss et al., 2013). Nevertheless, it has been difficult to determine whether the LINC complex directly transmits mechanical force into chemical signals within the nucleus. The first evidence for direct mechanosensing by the LINC complex comes from a recent ground-breaking study in which magnetic tweezers were used to pull on nesprin-1 antibody-coated beads attached to isolated nuclei (Guilluy et al., 2014). Pulling on nesprin-1 resulted in a stiffening response in which greater force was required to displace the bead. Stiffening was accompanied by and required the recruitment of A-type lamins to the LINC complex, activation of Src, and tyrosine phosphorylation of emerin. This study identifies the first mechanotransduction pathway into the nucleus and raises several provocative questions including how tension on nesprin-1 activates Src and whether other KASH proteins also mediate mechanotransduction. Anchoring the LINC complex Nuclear lamins. To position and move the nucleus, the LINC complex must be anchored so that it can transmit force to the nucleus. Several studies clearly show that lamins contribute to nucleoplasmic anchoring of the LINC complex (Fig. 3 A). In mammals, the three lamin genes encode lamin B1, lamin B2 (and an alternatively spliced lamin B3), and the A-type lamins, which include the alternatively spliced isoforms lamin A, lamin C, and lamin C2 (Worman, 2012). In support of a LINC complex anchoring function, the carboxyl terminus of lamin A binds to SUN proteins, whereas binding to lamin B1 and lamin C appears to be very weak (Crisp et al., 2006;Haque et al., 2006). The nucleoplasmic tail of SUN2 binds to lamin A and anchors the LINC complex to the nuclear lamina in somatic cells. Samp1 and emerin are required to strengthen this anchorage during nuclear movement, presumably to resist the high mechanical force. For clarity, nesprin-2G is shown as a shorter protein without all of its 56 SRs. INM, inner nuclear membrane; ONM, outer nuclear membrane. (B) The nucleoplasmic tail of SUN1 binds TERB1 and anchors the LINC complex to chromosomes through telomere binding proteins (TRF1 and cohesion) in meiotic cells. Lamin C2 also associates with this complex, probably through SUN1 binding. (C) Nucleoplasmic tails of SUN proteins shown binding to nuclear pores (SUN1) and a hypothetical protein as possible alternative anchors for the LINC complex in somatic cells. 
Thus, emerin may function together with A-type lamins and Samp1 in the nucleoplasmic anchoring of the LINC complex.

Anchors not clearly associated with lamins. Mouse embryonic stem cells harboring deletions of all lamin genes exhibit normal proliferation and can differentiate into fibroblast-like cells, beating cardiomyocytes, and neural progenitor cells in vitro (Kim et al., 2011), which suggests that at least some LINC complex-mediated functions can occur without lamins. Combined with data showing that lack of lamins does not completely disrupt the nuclear location of SUN proteins (discussed earlier), this suggests that proteins other than lamins, or those bound to lamins, can function in anchoring the LINC complex in the nucleoplasm (Fig. 3 C). In mammals, one possible candidate is the nuclear pore. SUN1 associates with the nuclear pore complex, and disruption of SUN1 interferes with nuclear pore assembly and distribution (Liu et al., 2007; Talamas and Hetzer, 2011). Yet it is unclear whether this association reflects SUN1 in a LINC complex, and there are no reports of KASH proteins contributing to nuclear pore distribution. In yeast, which lack nuclear lamins, it has been proposed that the SUN protein Sad1 is anchored within the nuclear membrane through its interaction with Ima1 (King et al., 2008). The binding of Ima1 to centromeric DNA may provide the resistive force to anchor Sad1. Indeed, in the absence of Ima1, the Sad1-Kms2 LINC complex is partially disrupted, causing microtubule-dependent forces to distort the nucleus and depleting spindle pole body components from the nucleus. However, this finding has been questioned recently because it was found that some of the Ima1 deletion strains did not disrupt Ima1 (Hiraoka et al., 2011). Nonetheless, the authors report that Ima1 and two LEM domain proteins, Man1 and Lem2, interact with Sad1 and that disrupting all three produces nuclear phenotypes similar to those originally reported for Ima1.

Meiotic chromosome anchorage. The LINC complex functions in chromosome movements and pairing in meiosis, and for this function the anchoring of the LINC complex is distinct from that in somatic cells. In meiosis, the LINC complex is mobile within the plane of the nuclear membrane and is tethered to defined regions of chromosomes (telomeres in mice and yeast or pairing centers in C. elegans; Fig. 3 B). Tethering is required for movements of chromosomes and is mediated by specific meiotic proteins that link factors at the chromosomal sites with SUN proteins in LINC complexes, which in turn engage the cytoskeleton. Reflecting their distinct chromosomal binding sites, these tethering proteins are not conserved among mice, C. elegans, Drosophila, and yeast. For example, in mice SUN1 interacts with telomeres through TERB1, a meiosis-specific protein that binds the telomere protein TRF1 and telomere repeat sequences and recruits cohesin, which encircles and holds sister telomeres together (Fig. 3 B; Daniel et al., 2014; Shibuya and Watanabe, 2014). Through its interaction with the meiosis-specific KASH protein KASH5, telomere-associated SUN1 engages dynactin and dynein to mediate meiotic chromosome movement (Morimoto et al., 2012; Horn et al., 2013b).
SUN2 is also associated with sites of telomere tethering at the nuclear envelope but may not be required for meiosis (Schmitt et al., 2007). In fission yeast, the telomeric Rap1/Taz1 complex recruits the SUN protein Sad1 to telomeres (Chikashige et al., 2006). This interaction is indirect and is bridged by the meiotic prophase-specific proteins Bqt1 and Bqt2 (Chikashige et al., 2006).

Lamin binding may thus contribute to SUN protein localization in the nucleus and hence potentially to anchoring the LINC complex. A-type lamins also interact directly with nesprin-1 and nesprin-2 through SRs near their KASH domains (Fig. 2). However, this interaction is not likely to contribute to LINC complex anchoring, given that nesprins in the outer nuclear membrane do not contact the lamina. Indeed, nesprin-2G localization in the outer nuclear membrane is not strongly affected by the absence of A-type lamins. Instead, A-type lamins' interactions with nesprins are likely to reflect interactions with smaller nesprin isoforms that enter the inner nuclear membrane.

Phenotypes of genetically modified mice imply that lamins other than A-type lamins may participate in anchoring the LINC complex in certain cell types. Although they develop growth retardation, muscular dystrophy, and cardiomyopathy after birth, mice with germline deletion of A-type lamins develop to term, suggesting that critical LINC complex-mediated events occur in these mice during embryonic development (Sullivan et al., 1999). B-type lamins, despite their weak interaction with SUN proteins, may play a role. In fact, lamin B1-, lamin B2-, nesprin-1/2-, and SUN1/2-deficient mice all show similar defects in neuronal migration, which suggests that B-type lamins may contribute to anchoring the LINC complex in migrating neurons during development (Zhang et al., 2009; Coffinier et al., 2010, 2011). Perhaps the weak interaction between B-type lamins and SUN proteins is strengthened by other factors. Alternatively, B-type lamins may play an indirect role in anchoring the LINC complex, for example by overall stiffening of the nucleus.

Lamin-associated proteins. Two A-type lamin-associated proteins, Samp1 (also known as NET5) and emerin, have also been implicated in LINC complex anchoring. Both of these proteins depend on A-type lamins for their localization to the inner nuclear membrane (Sullivan et al., 1999; Borrego-Pinto et al., 2012). Samp1 was initially reported to interact with SUN1 and emerin and to be required for proper localization of emerin in the inner nuclear membrane (Gudise et al., 2011). Subsequently, Samp1 was found to be necessary for actin-dependent nuclear movement in fibroblasts and to interact with SUN2, lamin A, and lamin C, although the localization of these proteins was not dependent on Samp1 (Borrego-Pinto et al., 2012). Samp1 also colocalized with nesprin-2G and SUN2 in TAN lines in fibroblasts polarizing for migration, although the effect of Samp1 depletion on TAN lines was not addressed. Samp1 colocalization in TAN lines and its requirement for nuclear movement suggest that it enhances anchoring of TAN lines by providing a second interacting site for SUN2 in addition to that provided by A-type lamins. Such a model is supported by a recent report on C. elegans SAMP-1 (Bone et al., 2014). Interestingly, a LINC complex anchoring function was originally reported for the Samp1 yeast orthologue Ima1 (see the following sections). Emerin may also contribute to LINC complex anchoring. Emerin associates with SUN1 and SUN2, and the interaction between SUN1 and emerin has been mapped to their nucleoplasmic domains (Haque et al., 2010).
Consistent with a possible role in anchoring the LINC complex, depletion of emerin from polarizing fibroblasts leads to abnormal nuclear migration and slipping of TAN lines on the nucleus (Chang et al., 2013).

Nesprin-1 interacts with the DNA damage response proteins MSH2 and MSH6 through its CH domains (Sur et al., 2014). It is not yet clear whether DNA breaks associate with the LINC complex in mammalian cells, as appears to be the case in yeasts.

Assembling higher-ordered arrays of LINC complexes

A fascinating aspect of the LINC complex is its formation of higher-ordered assemblies. These assemblies function to move nuclei in fibroblasts polarizing for migration (Luxton et al., 2010), to position nuclei in adherent smooth muscle cells (Nagayama et al., 2014), and to move meiotic chromosomes in numerous organisms. In polarizing fibroblasts, the TAN lines are higher-ordered linear alignments of SUN2-nesprin-2G LINC complexes along dorsal actin cables. Similar linear arrays of nesprin-1 also align with dorsal actin fibers in smooth muscle cells (Nagayama et al., 2014). In contrast, the higher-ordered arrays observed during meiosis in S. pombe, C. elegans, and mice are spot-weld clusters of LINC complexes that tether chromosomes to the nuclear envelope and allow for chromosome movements, which are usually powered by microtubules and dynein (Chikashige et al., 2006; Ding et al., 2007; Schmitt et al., 2007; Penkner et al., 2009; Sato et al., 2009; Morimoto et al., 2012).

The formation of TAN lines and meiotic clusters appears to involve different topological mechanisms. TAN lines only form when dorsal actin cables contact the nucleus, and disruption of actin cables by actin or myosin inhibitors or by myosin II knockdown completely prevents their formation (Luxton et al., 2010; Chang et al., 2013). This "outside-in" initiation of TAN line formation is further emphasized by their absence in cells depleted of the formin FHOD1, which is primarily cytoplasmic (Kutscheidt et al., 2014), and by the observation that "nesprin-2G-only" TAN lines form in cells lacking SUN2. In contrast, meiotic patches form by an "inside-out" mechanism triggered by the accumulation of meiosis-specific proteins at telomeres (or of pairing center-associated proteins in C. elegans). Evidence for this includes: (1) the temporal correlation between appearance (and disappearance) of telomere/pairing center-associated proteins and the LINC complex patches in meiotic prophase; (2) the failure of LINC complexes to redistribute into patches in cells deficient in telomere/pairing center-associated proteins; (3) the formation of clusters of SUN proteins in somatic cells ectopically expressing the telomere-associated proteins; and (4) the absence of effects of SUN or KASH protein depletion on the accumulation of the telomere/pairing center-associated proteins (Chikashige et al., 2006; Ding et al., 2007; Schmitt et al., 2007; Penkner et al., 2009; Sato et al., 2009; Morimoto et al., 2012). In contrast to TAN line formation, disruption of the associated cytoskeleton in meiocytes does not prevent the formation of meiotic patches of LINC complexes, although it reduces their size and increases their number, presumably reflecting an inability to cluster patches in the absence of cytoskeleton-derived forces (Sato et al., 2009).
A similar inside-out mechanism may function during homology-directed DNA repair in yeast, as shown by the accumulation of Sad1-Kms1 LINC complexes after initiation of double-strand DNA breaks (Swartz et al., 2014).

On the outer nuclear membrane, the fission yeast KASH protein Kms1 binds to dynein to facilitate the movement of telomeres, which is essential for telomere clustering and the formation of the "telocentrosome" (Shimanuki et al., 1997; Yoshida et al., 2013). In C. elegans, specific pairing center proteins (HIM-8 and ZIM-1-3) attach chromosomes to LINC complexes composed of SUN-1/Matefin and the KASH protein ZYG-12, which in turn binds dynein (Phillips and Dernburg, 2006; Sato et al., 2009). Budding yeast also uses a telomere-specific binding protein (Ndj1) to attach telomeres to the LINC components Csm4 and Mps3 for actin-dependent chromosome movements during meiosis (Conrad et al., 2007, 2008; Koszul et al., 2008; Wanat et al., 2008).

How the LINC complex in meiotic cells is modified to allow force transmission to chromosomes rather than, for example, the lamina is still unclear. One possibility is that LINC complex components are posttranslationally regulated. In C. elegans, specific phosphorylation of Ser/Thr residues in the nucleoplasmic tail of SUN-1/Matefin occurs during meiosis and is required for meiotic chromosome movements (Penkner et al., 2009). These modifications may contribute to the reduced constraints on LINC complex mobility that have been observed at the onset of meiosis in C. elegans (Wynne et al., 2012). Another possibility is that the lamina itself is modified. In mammalian germ cells, a meiosis-specific A-type lamin, lamin C2, is expressed and localizes to sites of LINC complex-mediated telomere tethering (Jahn et al., 2010; Link et al., 2013). Lamin C2 lacks the amino-terminal head and part of the central α-helical rod domain necessary for assembly into filaments, and it shows higher diffusional mobility than lamin C when expressed in somatic cells (Jahn et al., 2010). Lamin C2 overexpression in somatic cells alters the distribution of lamin B1 and SUN proteins, which suggests that it may modify their normal anchoring mechanisms. Despite these considerations, tethering of telomeres to the nuclear periphery and their rearrangement into the characteristic bouquet conformation occur normally in meiocytes lacking lamin C2, indicating that the formation of LINC complexes at telomeres and their initial movements during meiosis do not require this protein. Release of chromosomes from the bouquet stage was affected, however, so perhaps lamin C2 is only required for these later movements.

Chromosome anchorage during DNA repair. Evidence has accumulated that the LINC complex also functions in DNA repair. Initial work in budding yeast showed that the SUN protein Mps3 is required for localization of DNA double-strand breaks to the cell periphery, delaying homologous repair and enhancing repair through an alternative pathway (Oza et al., 2009). More recently, in fission yeast, both LINC complex components Sad1 and Kms1 were shown to localize at sites of DNA double-strand breaks and to participate in repair (Swartz et al., 2014). Interestingly, microtubules also colocalize at these sites, presumably through interaction with Kms1, and promote movements of the complexes and DNA repair.
The LINC complex also appears to participate in DNA repair in mammalian cells: SUN1 and SUN2 interact with DNA-dependent protein kinase, which functions in DNA repair, and early events in the repair process are defective in cells lacking SUN1 and SUN2 (Lei et al., 2012).

Several diseases and syndromes are caused by mutations in genes encoding core LINC complex components, including Emery-Dreifuss muscular dystrophy, cerebellar ataxia, arthrogryposis, and progressive high-frequency hearing loss (Gros-Louis et al., 2007; Zhang et al., 2007; Attali et al., 2009; Horn et al., 2013a; Meinke et al., 2014). Polymorphisms in genes encoding LINC complex components have also been putatively linked to autism, bipolar disorder, and several cancers (Sjöblom et al., 2006; Doherty et al., 2010; O'Roak et al., 2011; Green et al., 2013; Schoppmann et al., 2013; Yu et al., 2013). Emery-Dreifuss muscular dystrophy is a particularly provocative case, given that disease-causing mutations occur in genes encoding SUNs and nesprins as well as in genes encoding their binding proteins emerin and A-type lamins (Bione et al., 1994; Bonne et al., 1999; Zhang et al., 2007; Puckelwartz et al., 2010; Taranum et al., 2012; Meinke et al., 2014). All of these proteins function in nuclear positioning, which suggests that mispositioning of nuclei may be a contributing factor in the disease. Future research should determine whether other proteins associated with the LINC complex can be implicated in specific pathways that reflect their contribution to disease pathogenesis.

In order for higher-ordered assemblies of LINC complexes to form, SUN and KASH proteins must have sufficient mobility in the membrane to permit their clustering. In fact, fluorescence recovery after photobleaching (FRAP) experiments in fibroblasts show that both SUNs and nesprins are relatively mobile in the nuclear envelope (Östlund et al., 2009). In TAN lines, by contrast, nesprin-2G becomes relatively immobilized. In meiosis, both KASH proteins and SUNs redistribute from dispersed sites in mice and C. elegans and from the spindle pole body in fission yeast, which implies their mobility. To understand these, and perhaps other still-to-be-discovered, higher-ordered assemblies of LINC complexes, it is worth considering the trimeric nature of the SUN2 protein and the possibility that LINC complexes themselves may contribute to clustering. As noted by Sosa et al. (2012), clustering of LINC complexes could occur if oligomeric KASH proteins were attached through their KASH domains to separate SUN trimers. Although the oligomeric state of KASH proteins is unknown, evidence suggests that they may form homo-oligomers (Mislow et al., 2002; Ketema et al., 2007) or even hetero-oligomers, as shown by the binding of nesprin-1 and nesprin-2 CH domains to nesprin-3 (Lu et al., 2012). Zhou et al. (2012b) offered the alternative possibility that SUNs themselves cluster through formation of hybrids of their amino-terminal dimeric and carboxyl-terminal trimeric coiled-coils. Another possibility is that other proteins interacting with SUN or KASH proteins contribute to clustering, as seems to be the case for SUN interaction with telomere-binding proteins in meiosis and as suggested by the requirement for the dimeric formin FHOD1 in nesprin-2G TAN line formation (Kutscheidt et al., 2014). These possibilities are not mutually exclusive, and more than one may contribute to the assembly of higher-ordered LINC complexes. The varied possibilities suggest that exploring the basis for higher-ordered assemblies of LINC complexes will be a rich area for further understanding of how the nucleus is attached to the cytoskeleton.
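The FRAP measurements of SUN and nesprin mobility cited above are conventionally quantified by fitting the fluorescence recovery curve and reporting a mobile fraction. The Python sketch below illustrates one common, simplified analysis (a single-exponential recovery model); all intensity values and time constants are invented for illustration and are not data from the studies cited.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, i0, a, tau):
    """Single-exponential FRAP recovery: post-bleach intensity i0
    recovering with amplitude a and time constant tau (seconds)."""
    return i0 + a * (1.0 - np.exp(-t / tau))

# Illustrative, normalized data (pre-bleach intensity = 1.0).
t = np.linspace(0, 120, 25)                       # seconds after bleach
rng = np.random.default_rng(0)
obs = frap_recovery(t, 0.2, 0.55, 20.0) + rng.normal(0, 0.02, t.size)

(i0, a, tau), _ = curve_fit(frap_recovery, t, obs, p0=(0.2, 0.5, 10.0))

# Mobile fraction: recovered amplitude relative to the bleached depth.
pre_bleach = 1.0
mobile_fraction = a / (pre_bleach - i0)
print(f"tau = {tau:.1f} s, mobile fraction = {mobile_fraction:.2f}")
```

In this framework, an immobilized population (e.g., nesprin-2G in TAN lines) would appear as a reduced mobile fraction rather than a change in the recovery time constant.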
Future perspectives

The identification of LINC complex-associated proteins has begun to explain how the LINC complex is adapted to the growing list of functions attributed to it. We have considered how proteins associated with KASH proteins determine the complex's specificity for the cytoskeleton, enhance its resistance to mechanochemical force, and contribute to both conventional and mechanosensitive signaling activities. In an analogous fashion, proteins associating with SUN proteins may alter or adapt the anchoring of the LINC complex for specific tasks. It is likely that more proteins will be identified that contribute to the known functions of the LINC complex and suggest new functions. Indeed, in the only comprehensive screen for KASH protein-interacting proteins, myriad new potential interactors of C. elegans UNC-83 were identified in a yeast two-hybrid screen. Understanding how LINC complex function is specified or modified by its associated proteins will also enhance our understanding of how alterations in its protein components contribute to disease and whether certain sets of proteins function in a LINC complex pathway. Several diseases and syndromes are caused by mutations in genes encoding core LINC complex components, underscoring the medical importance of such studies.
A conserved regulatory mechanism mediates the convergent evolution of plant shoot lateral organs

Land plant shoot structures evolved a diversity of lateral organs as morphological adaptations to the terrestrial environment, with lateral organs arising independently in different lineages. Vascular plants and bryophytes (basally diverging land plants) develop lateral organs from meristems of sporophytes and gametophytes, respectively. Understanding the mechanisms of lateral organ development among divergent plant lineages is crucial for understanding the evolutionary process of morphological diversification of land plants. However, our current knowledge of lateral organ differentiation mechanisms comes almost entirely from studies of seed plants, and thus it remains unclear how these lateral structures evolved and whether common regulatory mechanisms control the development of analogous lateral organs. Here, we performed a mutant screen in the liverwort Marchantia polymorpha, a bryophyte, which produces gametophyte axes with nonphotosynthetic scalelike lateral organs. We found that an Arabidopsis LIGHT-DEPENDENT SHORT HYPOCOTYLS 1 and Oryza G1 (ALOG) family protein, named M. polymorpha LATERAL ORGAN SUPPRESSOR 1 (MpLOS1), regulates meristem maintenance and lateral organ development in Marchantia. A mutation in MpLOS1, which is preferentially expressed in lateral organs, induces lateral organs with misspecified identity and increased cell number and, furthermore, causes defects in apical meristem maintenance. Remarkably, MpLOS1 expression rescued the elongated spikelet phenotype of rice plants mutant for a MpLOS1 homolog. This suggests that ALOG genes regulate the development of lateral organs in both gametophyte and sporophyte shoots by repressing cell divisions. We propose that the recruitment of ALOG-mediated growth repression was in part responsible for the convergent evolution of independently evolved lateral organs among highly divergent plant lineages, contributing to the morphological diversification of land plants.

Introduction

During 470 million years of evolution, the body plans of land plants diversified independently among the gametophyte and sporophyte life stages of different plant groups. In extant bryophytes, early-diverging land plants, the gametophyte is the dominant phase of the life cycle [1,2]. The gametophyte comprises an apical-basal axis with an apical stem cell and forms structures in which gametes develop (antheridiophores and archegoniophores). In contrast, the sporophyte is dominant in extant vascular plants. The sporophyte comprises an axial system (shoots or stems growing along apical-basal axes) that develops from an apical meristem and forms structures in which haploid spores develop. Therefore, in different plant lineages, gametophytes and sporophytes develop axial systems that are produced by apical meristems [3-5]. Extant bryophytes and vascular plants develop lateral organs on gametophytes and sporophytes, respectively. Apical meristems maintain stem cell activity at their center and iteratively generate lateral organs at the meristem periphery. The spatiotemporal differences in cell division and expansion in lateral organs contribute to the morphological diversity of shoot structures in land plants [6-11]. The liverwort Marchantia polymorpha is a bryophyte that forms a gametophyte axis that undergoes indeterminate planar growth in the form of a flattened mat of tissue, called a thallus.
The thallus exhibits strong dorsoventrality; gemma cups, gemmae, and air chambers develop on the dorsal side, whereas rhizoids and ventral scales are formed on the ventral side (Fig 1A) [12-17]. Ventral scales cover bundles of rhizoids that run along the underside of the thallus and facilitate water and nutrient transport over the ventral surface of the thallus (Fig 1B) [17]. In the leafy liverworts, photosynthetic leaves arise next to a tetrahedral single stem cell (apical cell). By contrast, M. polymorpha does not develop photosynthetic leaves. Instead, ventral scales develop alternately on the left and right sides of the wedge-shaped apical cell on the ventral surface in the apical notch near the growing tip of the thallus. The flattened form, single-cell thickness, and bilateral symmetry of the Marchantia scales resemble the leaves of leafy liverworts (Fig 1C) [16,17]. The ventral scales of M. polymorpha are hypothesized to be homologous to the photosynthetic leaves of the basally diverging leafy liverworts [18,19].

The fossil record indicates that the shoots of the earliest known land plants comprised branching stems without lateral, determinate organs [20,21]. Subsequently, determinate lateral organs, which develop from the sides of apical meristems, evolved. The earliest example of such a lateral organ is the microphyll that developed on the stems of the sporophyte of Baragwanathia longifolia, a lycophyte, which first appears in the fossil record in the late Silurian [22,23]. No lateral organs are known from the gametophytes of early bryophytes from the Silurian or Devonian; these likely arose subsequently and are found in extant bryophytes. The acquisition and modification of different lateral organ types are likely to have been morphological adaptations to the terrestrial environment that increased photosynthetic efficiency, gas exchange, and water transport [7,24-26].

Mechanisms controlling lateral organ development are well described in angiosperms such as rice and Arabidopsis. However, little is known about the mechanisms that regulate lateral organ development in bryophytes. Therefore, we carried out a forward genetic screen for mutants with defective lateral organ development in the liverwort M. polymorpha to define mechanisms that control lateral organ development in this species. Comparing the roles of the genes that control lateral organ development in liverworts and angiosperms allows the identification of mechanisms that were involved in the independent evolution of analogous lateral organs during land plant evolution [7,24-26].

M. polymorpha LATERAL ORGAN SUPPRESSOR 1 specifies lateral organ identity during vegetative growth

We isolated two mutants, vj99 and vj86, that produced abnormal green outgrowths, from a population of 105,000 transfer DNA (T-DNA)-transformed M. polymorpha plants (Fig 1D-1G; S1A and S1B Fig). vj99 and vj86 thalli were hyponastic, bending upward at the thallus margins, unlike wild type (WT) (Fig 1D and 1E; S1C and S1D Fig). A single T-DNA was inserted into the gene Mapoly0028s0118 in both vj99 and vj86, suggesting that defective function of Mapoly0028s0118 was responsible for the green outgrowths (S1E Fig). To test this hypothesis, we generated independent mutations in the Mapoly0028s0118 gene by homologous recombination. Mutants of Mapoly0028s0118 generated by targeted deletion developed phenotypes similar to those of the vj99 and vj86 mutants (S1F and S1G Fig).
To verify that a defect in Mapoly0028s0118 was responsible for the green outgrowths, we transformed mutant vj99 with a genomic fragment that includes the full-length Mapoly0028s0118 gene. Transformation of the Mapoly0028s0118 genomic fragment into vj99 mutants restored WT development, demonstrating that loss of Mapoly0028s0118 function causes the vj99 phenotype (S1H-S1K Fig). Phylogenetic analysis indicated that Mapoly0028s0118 belongs to the Arabidopsis LIGHT-DEPENDENT SHORT HYPOCOTYLS 1 (LSH1) and Oryza G1 (ALOG) protein family (S1L Fig). The proteins in this family contain a DNA-binding domain with weak transcriptional activity [27-29]. We named this gene M. polymorpha LATERAL ORGAN SUPPRESSOR 1 (MpLOS1). In addition to the abnormal green outgrowths, gemma cup spacing was abnormal in the Mplos1-1 (vj99) mutant; the distance between neighboring gemma cups was much shorter than in the WT (Fig 1F and 1G).

To more precisely define the nature of the green outgrowths, we performed a phenotypic analysis of the Mplos1-1 mutant. Outgrowths emerged from the ventral side of the thallus near the thallus margins and extended beyond the thallus margin in the Mplos1-1 mutant (Fig 2A-2D). These outgrowths resembled ventral scales in a number of ways. They developed in pairs near the apical notch (Fig 2E and 2F; S2A and S2B Fig). They were in general a single cell layer thick, although outgrowths located near the apical notch occasionally consisted of several cell layers (Fig 2G-2I). Outgrowths located near the apical notch also tended to pile up on one another at the edge of the ventral surface (Fig 2G and 2I). Furthermore, although outgrowths developed, no ventral scales formed on Mplos1-1 mutants (Fig 2C and 2D), suggesting that the outgrowths are modified ventral scales. Taken together, these data suggested that the outgrowths formed on the ventral thallus of Mplos1-1 mutants are related to ventral scales.

Although similar to ventral scales, these outgrowths differed in a number of characteristics. The abnormal outgrowths formed in Mplos1-1 mutants were greener than typical ventral scales (Fig 2J and 2K). There were more cells in the outgrowths than in WT scales (Fig 2G-2I; S2C-S2F Fig). Moreover, the mutant chloroplasts were larger than those in WT, and there were more thylakoid membranes in the mutants than in WT (Fig 2L and 2M; S2I and S2J Fig). Rhizoids never differentiated in the green outgrowths of the Mplos1-1 mutants, unlike in WT ventral scales (S2G and S2H Fig). Taken together, these observations indicated that MpLOS1 plays crucial roles in specifying lateral organs as ventral scales, in which MpLOS1 inhibits cell division and chloroplast differentiation.

MpLOS1 activity is required for the maintenance of meristem activity

The WT thallus comprises an apical-basal axis produced by the activity of apical stem cells. The thallus undergoes periodic bifurcation, and gemma cups develop along the midline of the dorsal surface. When the WT thallus bifurcates, a notch containing an apical stem cell forms on each of the two new apical-basal axes (Fig 3A). This process, the duplication of apical notches and the subsequent growth of thalli, is termed "branching." Upon branching, adjacent apical notches are initially pushed apart by the growth of tongue-like tissues, called central lobes, and are subsequently separated further, concomitant with the growth of the thallus (Fig 3A) [30].
Gemma cups initiate from dorsal merophytes, clones derived from cells that are cut off from the dorsal face of the apical cell [17], and they are regularly spaced along the dorsal midline of each thallus. Gemma cups were more densely arranged along the dorsal surface of Mplos1-1 mutants than in WT (Fig 1G). This suggested that the mutants had defects in gemma cup differentiation, axis development, or both. To address whether MpLOS1 is involved in bifurcation or gemma cup differentiation, we analyzed the number of apices and gemma cups in Mplos1-1 mutants during cultivation. To count meristems, we imaged expression of the promoter M. polymorpha YUCCA2:β-glucuronidase (proMpYUC2:GUS) construct, which is preferentially expressed in notches [31]. The number of apical notches expressing GUS was not significantly different between WT and Mplos1-1 mutants until day 7 of cultivation, although subsequently fewer GUS-expressing apical notches were detected in the Mplos1-1 mutants than in WT (Fig 3B-3D). This indicates that bifurcation occurs normally, at least in the early stages of development. In contrast, the density of apical notches in Mplos1-1 mutants was higher than in WT at 3 weeks (S3A and S3B Fig). Importantly, there was no clear difference in the number of gemma cups between WT and Mplos1-1 mutants at day 17 of cultivation, although subsequently fewer gemma cups, at higher density, formed in Mplos1-1 mutants (S3B and S3C Fig). These data suggested that the onset of bifurcation, as well as gemma cup differentiation, is not affected, whereas the separation of apical notches is compromised in Mplos1-1 mutants. The lower number of GUS-positive notches and gemma cups in Mplos1-1 mutants after prolonged cultivation may be due to a secondary effect of slow thallus growth or to technical limitations in counting densely clustered apices and small immature gemma cups.

The separation of apical notches depends on the division and expansion of cells between notches, where central lobes develop (Fig 3A). We reasoned that defective cell division in apical notches and/or central lobes in Mplos1-1 mutant thalli would lead to defects in apical notch separation. We analyzed the cell division activity of Mplos1-1 mutant gemmalings during 7 days of cultivation by applying a 3-hour pulse of 5-ethynyl-2′-deoxyuridine (EdU), a thymidine analog that is incorporated into cells during DNA replication (Fig 3E-3G) [32]. In early development (days 1-3), the number of cells labeled by EdU was indistinguishable between the WT and Mplos1-1 mutants (Fig 3G). However, beginning from day 3, the incorporation of EdU was lower in Mplos1-1 mutants than in WT, which suggested that cell division is reduced in Mplos1-1 mutants compared with WT (Fig 3E-3G). To determine whether the cell divisions that lead to the separation of the apical notches during branching occur in the apical notches themselves or within the central lobes after they emerge, we measured EdU incorporation in these areas during WT development. We found that many cells incorporated EdU in the incipient central lobe as it formed in the apical notch region (S3D Fig). In contrast, very little cell division was found within the central lobe once it had emerged (S3E Fig). This indicates that cells in central lobes are mainly supplied from the apical notches, whereas the contribution of cell division within the central lobe to separating duplicated apical notches is low.
Altogether, these data suggest that the rate of cell division in the apical notch is lower in the mutant than in WT, leading to impaired separation of apical notches during branching. We also compared the cellular organization of apical meristems of Mplos1-1 mutants and WT. The WT apical meristem comprised a single wedge-shaped apical cell and surrounding merophytes (groups of clonally related cells resulting from sequential cell divisions in a single derivative of the apical cell of a meristem), in which the lateral merophytes and the apical cell display identical shapes (Fig 3H) [17]. In contrast, there were many wedge-shaped cells in Mplos1-1 mutants, in contrast to the single apical cell of WT (Fig 3I). Whereas the expression of proMpYUC2:GUS was restricted to a small area of the WT apical notch, staining was more dispersed in Mplos1-1 mutants (Fig 3J and 3K). Occasionally (3 out of 20 gemmalings at 14 days of cultivation), apical meristems were aborted in Mplos1-1 mutants (S4I Fig, dotted boxes), a phenomenon not observed in WT under our conditions. These data suggest that MpLOS1 is required for the maintenance of apical meristems. These data also support the hypothesis that Mplos1-1 mutants fail to separate apical notches because of defects in cell proliferation in the apical notches. Gemma cup differentiation as well as bifurcation initiate as in WT, but subsequent defective meristem activity causes defective axis expansion, resulting in the development of a higher density of gemma cups and apical notches in the Mplos1-1 mutant thallus.

MpLOS1 is expressed in lateral organs but not in apical cells

To define the spatial expression pattern of MpLOS1, we established a line that expressed GUS under the control of the 5′ and 3′ regulatory elements that were used in the complementation analysis of Mplos1-1 mutants (S1E Fig). In 4-day-old gemmalings, GUS staining was detected in notches and rhizoids (Fig 4A). Weak signal was observed elsewhere in growing thalli (Fig 4B and 4C). The developing ventral scales in the ventral region of the apical notch stained the strongest (Fig 4B-4F). Staining extended over the entire young ventral scale and the basal region of older ventral scales (Fig 4D and 4E). No signal was detected in the oldest ventral scales (Fig 4D and 4E). The expression of MpLOS1 in ventral scales is consistent with the phenotypic defects seen in these organs in the mutant, further strengthening our hypothesis that MpLOS1 is required for the normal development of ventral scales. We also expressed functional proMpLOS1:enhanced green fluorescent protein (eGFP)-MpLOS1 constructs in Mplos1-1 mutants (S1H, S1I and S1K Fig). No eGFP-MpLOS1 signal was detected in apical cells or lateral merophytes, despite the defects in apical meristem morphology and maintenance in Mplos1-1 mutants (Fig 4G-4I). These findings suggest that MpLOS1 mediates the maintenance of apical meristems non-cell autonomously, although we cannot exclude the possibility that MpLOS1 protein below the level of detection in the apical meristems maintains meristem activity (Fig 4I).

MpLOS1 specifies lateral organ identity during reproductive growth

M. polymorpha produces an umbrella-like gametangiophore (antheridiophore or archegoniophore) that bears antheridia or archegonia during reproductive growth [17]. The gametangiophore is a vertically growing thallus branch [17], and we reasoned that gametangiophore development might be defective in Mplos1-1 mutants.
The antheridial receptacles of male Mplos1-1 plants were smaller than those of WT, and, unlike in the WT, antheridia were frequently exposed (S6A-S6C Fig). The archegonial receptacle of female M. polymorpha is highly lobed, with finger-like structures called digitate rays (Fig 5A). The archegonial receptacle lacks the rows of typical ventral scales that develop on antheridiophores. Instead, a pair of specialized scalelike structures called involucres, which are larger than ventral scales, develop between each digitate ray and enclose the archegonia cluster (Fig 5C and 5D) [17]. In female Mplos1-2 mutants (vj86), large leaf-like structures developed, as in antheridiophores (Fig 5E). Moreover, more than two involucre-like structures differentiated between each digitate ray (Fig 5C-5F). Importantly, these involucre-like structures resembled ventral scales in their arrangement in several rows (Figs 1B, 1C, 5G and 5H). This suggests that loss of MpLOS1 function results in the transformation of involucres into more scalelike structures. GUS staining was also detected in immature involucres, but not in mature involucres, in proMpLOS1:GUS archegoniophores (Fig 5I; S6K Fig), accompanied by staining of all parts of the archegonia, including eggs, collars, and venters (Fig 5I; S6K and S6L Fig) [17]. These data suggest that ventral scales are transformed into involucres in a MpLOS1-dependent manner upon the transition from vegetative to reproductive growth, in which MpLOS1 inhibits the growth of two rows of ventral scales, resulting in the formation of a single pair of involucres.

Protein function of ALOG proteins is conserved between Marchantia and rice

Mutation of long sterile lemma (G1), a member of the ALOG gene family in rice, results in the enlargement of the sterile lemmas, small leafy lateral organs in the rice spikelet (the flower cluster of grass species and basic unit of the inflorescence, consisting of one or more flowers); this is interpreted as a homeotic transformation of a sterile lemma into a lemma [29]. Similarly, Mplos1-2 mutants displayed defects in lateral organ specification that we interpret as the transformation of involucres into ventral scalelike structures. To determine whether the M. polymorpha protein could rescue the homeotic transformation of the rice mutant, we expressed MpLOS1 in rice g1 mutants. Expression of MpLOS1 restored the WT short sterile lemma phenotype (Fig 6A-6C). This suggests that the functions of ALOG proteins have been conserved since the time that M. polymorpha and rice last shared a common ancestor, which likely lacked lateral organs. It further suggests that ALOG family proteins were independently co-opted to specify sporophytic functions in the lineage giving rise to rice and gametophytic functions in the lineage giving rise to liverworts when each originated the evolutionary novelty of lateral organs.

Land plant ALOG proteins regulate lateral organ development and meristem activity

Here, we report the discovery that MpLOS1, a member of the ALOG protein family, controls both lateral organ development and apical meristem activity in M. polymorpha. MpLOS1 represses the growth of different lateral organs, including ventral scales and involucres, and MpLOS1 expression was detected early in the development of these lateral structures. These data indicate that the gene is required for normal lateral organ development. Furthermore, MpLOS1 activity is required for apical meristem maintenance. However, MpLOS1 is not expressed in the apical meristems or surrounding cells.
We propose that MpLOS1 cell-autonomously regulates lateral organ development but non-cell autonomously regulates apical meristem maintenance (Fig 4I). The role of ALOG proteins in meristem maintenance is conserved between monocots and dicots. Oryza sativa TAWAWA1 (OsTAW1) and Solanum lycopersicum TERMINATING FLOWER (SlTMF) (the tomato TAW1 homolog), members of the ALOG gene family in rice and tomato, respectively, repress maturation of meristems during reproductive growth [28,33,34]. Although these angiosperm genes control meristem development, neither the SlTMF, Arabidopsis thaliana LSH3 (AtLSH3) (the Arabidopsis TAW1 homolog), nor OsTAW1 proteins are expressed in apical meristems. Instead, they are expressed at lateral organ boundaries [27,33,35]. Taken together, these data from a diversity of land plants suggest that although ALOG genes act cell-autonomously during the development of lateral organs, they act non-cell autonomously to control meristem development. It remains unclear how this might operate, but there is evidence from angiosperms that lateral organ development is required for meristem maintenance [9,36,37]. Taken together with our discovery that MpLOS1 is required non-cell autonomously for meristem maintenance in M. polymorpha, this means that evolutionarily conserved ALOG family proteins control apical meristems in divergent plant lineages, in which the apical meristems are found in different phases of the life cycle. We propose that this mechanism for controlling shoot meristematic activity was already present in the last common ancestor of Marchantia and rice.

Conserved ALOG proteins negatively regulate lateral organ growth

We discovered that MpLOS1 specifies lateral organ identity by negatively regulating lateral organ outgrowth; involucres are transformed into ventral scalelike structures during reproductive growth in Mplos1-2 mutants (Fig 5G and 5H). The rice homolog, G1, also represses the development of lateral organs to specify the sterile lemmas. Loss-of-function mutations in OsG1 result in the transformation of small sterile lemmas into large lemmas (Fig 6A and 6B) [29]. Similarly, in tomato, Sltmf mutants display an analogous transformation, in which sepals develop leaf characteristics [33]. The conserved function of the rice, tomato, and M. polymorpha LOS1 homologs suggests that the role of ALOG proteins in the repression of lateral organ development is ancient.

This functional conservation among divergent taxa of land plants suggests two alternative hypotheses regarding the evolution of lateral organs. According to the first hypothesis, the common ancestor of liverworts and the seed plants developed lateral organs whose growth was controlled by one or more ALOG genes. Because lateral organs are present in the gametophyte and the sporophyte phase in liverworts and seed plants, respectively, under this scenario the common ancestor may have had lateral organs in both the gametophyte and the sporophyte phase. These structures would then have subsequently diverged morphologically during the course of land plant evolution. However, this hypothesis is not well supported by the fossil record. The earliest polysporangiophyte and tracheophyte fossils we know of, such as Aglaophyton and Cooksonia, possessed simple axes devoid of lateral organs [20,21]. This suggests that their last common ancestor with the liverworts likewise lacked lateral organs.
An alternative hypothesis, which is in better accord with the fossil record, is therefore that the common ancestor of liverworts and the seed plants lacked lateral organs in both the gametophyte and sporophyte phases but possessed an ALOG-mediated mechanism for controlling some other developmental process. The ALOG-dependent growth-repression mechanism was subsequently recruited independently during the evolution of lateral organs in the separate lineages leading to the liverworts and the seed plants. If the last common ancestor did not develop lateral organs, and because ALOG function regulates lateral organ development in both liverworts and seed plants, we suggest that ALOG function was recruited independently during the evolution of lateral organs in different lineages of land plants. The original ALOG function prior to these independent recruitment events may have been to control some aspect of apical meristem activity, because growth by apical meristems is a shared characteristic of land plants [38] and this ALOG function is found in both liverworts and seed plants. The recruitment of ALOG function during the independent evolution of lateral organs thus provides a molecular mechanism for the convergent evolution of growth repression in lateral organs.

ALOG proteins may mediate diversification of lateral organs during plant evolution

It has been suggested that the repressive activity of the OsG1 gene on growth led to the evolution of the rice spikelet [29]. Loss-of-function Osg1 mutations revert the sterile lemma into a larger leafy structure, which has been interpreted as similar to a hypothetical ancestral structure (Fig 6A and 6B) [29]. According to this model, the formation of a pair of lower lemmas subtending the floret (the rice flower surrounded by two bracts: the external lemma and internal palea) was the ancestral state. Then, during the evolution of rice, OsG1 activity was co-opted to repress the development of the lower lemma, which is now much reduced in size in modern rice compared with the ancestral state, resulting in the formation of the sterile lemma. It is formally possible that MpLOS1 may also have played a similar role in the evolution of lateral organs in liverworts. Several liverwort taxa with thalloid forms are suggested to have evolved independently from ancestral leafy liverworts, and leaves are hypothesized to have been transformed into nonphotosynthetic ventral scales with reduced growth during this evolutionary transition [18,19]. We found that MpLOS1 is involved in the specification of lateral organ identities by inhibiting cell division and chloroplast differentiation and that the loss of its function leads to the formation of chlorophyll-containing photosynthetic tissues (Fig 2J and 2K). These green appendages are in fact similar to the green photosynthetic scales formed in the Treubiaceae family of liverworts, whose semithalloid form has been interpreted as an evolutionary transition state between the leafy and thalloid forms [39-41]. Therefore, MpLOS1 function may also be associated with the evolution of the thallose body form by repressing leaf growth in ancestral leafy liverworts, in the same way that OsG1 suppresses lower lemma development during rice spikelet evolution.
It is possible that morphological modification of lateral organs is controlled by spatial and temporal differences in the expression levels of ALOG family genes; this would provide a mechanism for the establishment of morphological diversification in the lateral organs that develop on shoots during land plant evolution.

Conclusion

We demonstrate that MpLOS1, a member of the ALOG gene family, plays a role in integrating meristem activity and lateral organ differentiation in M. polymorpha. MpLOS1 acts by repressing lateral organ growth and is required for meristem maintenance. Because ALOG proteins from angiosperms also repress lateral organ growth and are required for meristem maintenance, and because these functions were rescued by MpLOS1, we conclude that the protein functions of ALOG family proteins are conserved among these taxa and acted in their last common ancestor. We hypothesize that ALOG genes were co-opted to execute the morphological modification of analogous lateral organs during land plant evolution and contributed to the diversification of lateral organs in shoot systems during the course of land plant evolution.

Phylogenetic analysis

Phylogenetic analysis was performed as described by Bowman and colleagues [3]. Protein sequences were collected using the Marchantia genome portal site MarpolBase (http://marchantia.info). Multiple sequence alignments were performed using the MUSCLE program [46] contained in the Geneious software (https://www.geneious.com). Gaps were removed using Strip Alignment Columns in the Geneious package, and phylogenetic analyses were performed using PhyML (http://www.atgc-montpellier.fr/phyml/).

Plasmid construction

To construct the proMpLOS1:MpLOS1 plasmid that complements the Mplos1 mutants, a MpLOS1 genomic fragment with a 10-kb upstream region and a 3-kb downstream region was amplified by PCR using PrimeSTAR GXL polymerase (TaKaRa, Shiga, Japan) and subcloned into pENTR/D-TOPO (Thermo Fisher Scientific, Waltham, Massachusetts, United States), which was subsequently integrated into pMpGWB101 by a Gateway LR reaction [47]. The pENTR/D-TOPO vector carrying the proMpLOS1:MpLOS1 complementation fragment was modified to generate the proMpLOS1:eGFP-MpLOS1 and proMpLOS1:GUS plasmids. A PCR-amplified eGFP coding sequence was inserted in frame with the 5′ end of the MpLOS1 coding sequence by the In-Fusion cloning reaction (TaKaRa, Shiga, Japan) to generate the proMpLOS1:eGFP-MpLOS1 plasmid. The coding sequence of MpLOS1 in the proMpLOS1:MpLOS1 complementation fragment was replaced with a PCR-amplified GUS coding sequence by the In-Fusion cloning reaction to generate the proMpLOS1:GUS plasmid. pENTR/D-TOPO vectors that included the proMpLOS1:eGFP-MpLOS1 or proMpLOS1:GUS fragments were subsequently integrated into pMpGWB101 by the Gateway LR reaction. For overexpression of MpLOS1 in rice g1 mutants, a plasmid carrying pro35S:MpLOS1 was constructed. The MpLOS1 genomic coding sequence was amplified by PCR and subcloned into pENTR/D-TOPO, which was subsequently integrated into pGWB2 by a Gateway LR reaction [48].

Histochemical GUS staining

GUS staining was performed as described by Naramoto and colleagues [49], except that 50 mM sodium phosphate buffer was used. Samples were cleared with 70% ethanol and subsequently mounted in a clearing solution (chloral hydrate:glycerol:water, 8:1:2) for direct microscopic observation, or dehydrated through a graded ethanol series and embedded in paraffin or Technovit 7100 resin for microtome sectioning.
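For readers who prefer the command line to Geneious, the align-strip-tree pipeline described under "Phylogenetic analysis" above could be sketched roughly as follows in Python. This is an illustration, not the authors' exact workflow: the file names are placeholders, the MUSCLE flags assume version 3, the gap-stripping loop stands in for Geneious's Strip Alignment Columns, and PhyML is invoked with its standard amino acid options.

```python
import subprocess
from Bio import AlignIO
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

# 1. Multiple sequence alignment with MUSCLE (v3-style flags assumed).
subprocess.run(["muscle", "-in", "alog_proteins.fasta",
                "-out", "alog_aligned.fasta"], check=True)

# 2. Remove alignment columns containing gaps (stand-in for
#    Geneious's Strip Alignment Columns).
aln = AlignIO.read("alog_aligned.fasta", "fasta")
cols = [i for i in range(aln.get_alignment_length())
        if "-" not in aln[:, i]]
records = [SeqRecord(Seq("".join(rec.seq[i] for i in cols)),
                     id=rec.id, description="") for rec in aln]
AlignIO.write(MultipleSeqAlignment(records),
              "alog_stripped.phy", "phylip-relaxed")

# 3. Maximum-likelihood tree with PhyML on the amino acid alignment.
subprocess.run(["phyml", "--input", "alog_stripped.phy",
                "--datatype", "aa"], check=True)
```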
Plant embedding and sectioning

Plant material was fixed in FAA (45% ethanol:5% formaldehyde:5% acetic acid in water) for embedding in paraffin or Technovit 7100. For paraffin embedding, fixed plant material was dehydrated in a series of ethanol (25%-50%), t-butyl alcohol (10%-75%), and chloroform (20%) solutions and then embedded in Paraplast (McCormick, Baltimore, Maryland, USA). For Technovit 7100 embedding, fixed samples were dehydrated through a graded ethanol series and embedded in Technovit 7100 resin according to the manufacturer's instructions (Heraeus Kulzer, Hanau, Germany). Embedded samples were sectioned on a rotary microtome into series of vertical transverse and longitudinal sections (8 μm thick for paraffin and 4 μm for Technovit sections). The sections were counterstained with neutral red for GUS-stained samples or with toluidine blue for the other samples. Multi-Mount 480 solution (MATSUNAMI, Osaka, Japan) or Entellan new (MERCK, Darmstadt, Germany) was used as the mounting agent to preserve the samples on the slides.

ClearSee treatment and staining of cell walls

Plants were fixed with 4% paraformaldehyde (PFA) in 1× PBS for 1 hour at room temperature under vacuum. Samples were subsequently washed twice with PBS and transferred to ClearSee solution (10% xylitol, 15% sodium deoxycholate, and 25% urea in water) [50]. ClearSee treatment was prolonged until samples became transparent. Cell walls were stained for 1 hour with 0.1% (v/v) calcofluor or with 0.1% (w/v) Direct Red 23 dissolved in ClearSee. Stained samples were washed for at least 30 minutes with ClearSee solution before observation.

EdU uptake experiments

Gemmalings were incubated in 1/2 B5 medium containing 10 μM EdU (Click-iT EdU Alexa Fluor 488 Imaging Kit; Thermo Fisher, Waltham, Massachusetts, USA) for 3 hours. Samples were fixed with 4% PFA in 1× PBS for 1 hour under vacuum and then washed three times in PBS. Coupling of EdU to the Alexa Fluor substrate was performed according to the manufacturer's instructions. Before observation, samples were cleared with ClearSee solution, and cell walls were subsequently stained with Direct Red 23.

Microscopy

Anatomical features were observed with a light microscope (Olympus BX51) equipped with an Olympus DP71 camera, a light sheet microscope (Zeiss Z.1), or a confocal laser scanning microscope (Olympus FV1000 or Zeiss LSM880). For light microscope observations, a PLAPON 2× objective, a UPlanFl 10× objective, or a UPlanFl 20× objective was used. Light sheet microscope observations were conducted using Lightsheet Z.1 detection optics 5× or Clr Plan-Neofluar 20×. For confocal laser scanning microscopy, cell walls stained by calcofluor or by Direct Red 23 were excited at 405 nm or 543 nm, respectively, whereas GFP and Alexa 488-labeled EdU were excited at 488 nm. Samples were mounted in ClearSee solution and observed with silicone oil objectives. 3D reconstruction was performed using Imaris software (BITPLANE, http://www.bitplane.com/). High-resolution images showing ultrastructural details were obtained using an SEM (JEOL JCM-6000Plus NeoScope) and an FESEM (Hitachi SU820).

Data analysis and statistics

Four alleles of the Mplos1 mutants, Mplos1-1, Mplos1-2, Mplos1-3, and Mplos1-4, displayed identical phenotypes in vegetative growth; thus, Mplos1-1 mutants were used as the representative allele unless otherwise described. All experiments were repeated at least three times.
Representative images were used for the preparation of figures. Statistical analysis was conducted using Excel. For comparing two groups, we used the t test to calculate the p-value. The p-values of the relevant experiments are given in the Supporting information (S2 Data). The Materials Design Analysis Reporting checklist is provided in the Supporting information (S1 MDAR Checklist).
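The two-group comparisons described above (performed in Excel by the authors) correspond to a standard two-sample t test, which can be reproduced as in the minimal Python sketch below. The gemma cup counts are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical gemma cup counts per thallus (illustrative, not the study's data).
wt = [8, 9, 7, 10, 9, 8]
mplos1 = [12, 13, 11, 14, 12, 13]

# Two-sample t test, as used for the two-group comparisons in the paper.
t_stat, p_value = stats.ttest_ind(wt, mplos1)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```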
Venom composition and pain-causing toxins of the Australian great carpenter bee Xylocopa aruana

Most species of bee are capable of delivering a defensive sting, which is often painful. A solitary lifestyle is the ancestral state of bees, and most extant species are solitary, but information on bee venoms comes predominantly from studies on eusocial species. In this study we investigated the venom composition of the Australian great carpenter bee, Xylocopa aruana Ritsema, 1876. We show that the venom is relatively simple, composed mainly of one small amphipathic peptide (XYTX1-Xa1a), with lesser amounts of an apamin homologue (XYTX2-Xa2a) and a venom phospholipase-A2 (PLA2). XYTX1-Xa1a is homologous to, and shares a similar mode of action with, melittin and the bombilitins, the major components of the venoms of the eusocial Apis mellifera (Western honeybee) and Bombus spp. (bumblebees), respectively. XYTX1-Xa1a and melittin directly activate mammalian sensory neurons and cause spontaneous pain behaviours in vivo, effects which are potentiated in the presence of venom PLA2. The apamin-like peptide XYTX2-Xa2a was a relatively weak blocker of small-conductance calcium-activated potassium (KCa) channels and, like A. mellifera apamin and mast cell-degranulating peptide, did not contribute to pain behaviours in mice. While the composition and mode of action of the venom of X. aruana are similar to those of A. mellifera, the greater potency, on mammalian sensory neurons, of the major pain-causing component in A. mellifera venom may represent an adaptation to the distinct defensive pressures on eusocial Apidae.

Results

The venom of Xylocopa aruana is simple in composition and similar to that of Apis mellifera

We used a combined transcriptomic and mass spectrometry (MS)-based strategy to generate a full profile of the polypeptide composition of venom from an individual adult female X. aruana (Fig. 1a). RNA extracted from the venom glands was used to generate a venom gland transcriptome. We obtained 24,904,864 demultiplexed paired-end reads from Illumina NextSeq RNA sequencing, which, following adaptor trimming, quality trimming and filtering, and error correction, were assembled de novo using Trinity to yield a total of 43,374 contigs. Venom was collected by squeezing the contents of the venom reservoir and venom duct into water. Liquid chromatography-tandem MS (LC-MS/MS) data from three venom samples (native; reduced and alkylated; and reduced, alkylated, and trypsin-digested) were searched against a database comprising the translated venom gland transcriptome.

Analysis of the venom of X. aruana by LC-MS indicated that it was relatively simple (Fig. 1b). By top-down sequencing of the native and reduced-and-alkylated venom samples we identified two peptides (Fig. 1b; Table 1). A 17-amino acid, cysteine-free peptide with an amidated C-terminus, which we called XYTX1-Xa1a, dominated the venom. The total ion chromatogram of the native venom, shown in Fig. 1b, illustrates the relative abundance of this peptide. Numerous derivatives (e.g., truncated versions of the peptide) were also detected, although at much lower abundance, and are labelled with asterisks in Fig. 1b. We cannot confirm whether these derivatives are present in the natural venom or are an artefact of our venom collection technique.
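Basic statistics of a de novo assembly like the one above (contig count, N50) are routinely computed from the assembler's FASTA output. A minimal Python sketch is given below; the file name is a placeholder, not a file from the study.

```python
# Minimal sketch: contig count and N50 from an assembler's FASTA output.
# The file name is hypothetical.
def fasta_lengths(path):
    lengths, current = [], 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    return lengths

def n50(lengths):
    """Smallest length L such that contigs of length >= L cover
    at least half the total assembly."""
    total, running = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

lengths = fasta_lengths("trinity_contigs.fasta")
print(f"{len(lengths)} contigs, N50 = {n50(lengths)} bp")
```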
In the venom gland transcriptome two near-identical transcripts (probably representing either allelic variants or paralogues) encoded the mature peptide XYTX1-Xa1a, differing only by synonymous substitutions at two sites. Together, these accounted for 93.4% of venom component expression (Fig. 1e,f). The second peptide was 23 amino acids in length with four cysteine residues and an amidated C-terminus. A peak at [M + 4H]4+ = 629.061 (theoretical [M + 4H]4+ = 629.063) with MS/MS spectra corresponding to the monomeric peptide was detected in the native venom sample. No peaks with a mass corresponding to that of the dimeric peptide were detected, indicating that this peptide exists in the venom as a monomer and not as a dimer. This peptide, which we called XYTX2-Xa2a, accounted for 3.0% of venom component expression. Analysis of the native venom sample by matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) MS was consistent with the data obtained by LC-ESI-MS, i.e. four major peaks corresponded to XYTX1-Xa1a (and two derivatives) and XYTX2-Xa2a (Fig. S2).

Several other proteins were detected by bottom-up sequencing of the reduced, alkylated and trypsin-digested venom sample. Of these, a PLA2 was the most highly expressed (3.4% of venom component expression) (Fig. 1c). This was almost identical (96% amino acid identity) to that reported in the venom of X. appendiculata (Uniprot: I7GQA7) and similar to those reported from Bombus and Apis venoms (Fig. S1). The remaining proteins were expressed at much lower levels, together constituting only 0.2% of venom component expression (Fig. 1c, Dataset 1). These proteins include some which have been implicated as toxins (e.g. hyaluronidase) and others that likely serve a role in the production and/or maturation of the peptide toxins (e.g. dipeptidyl peptidase-4, DPP-4). The high proportion of venom gland-derived reads encoding the two peptides XYTX1-Xa1a and XYTX2-Xa2a and the venom PLA2, as well as the assignment of the major peaks of the total ion chromatogram of the native venom, strongly suggested that together these three polypeptides represent the major components of X. aruana venom (Fig. 1).
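The monomer-versus-dimer assignment above rests on simple m/z arithmetic: a peptide observed as an [M + zH]z+ ion appears at m/z = (M + z × m_proton)/z. A minimal sketch of this standard mass-spectrometry bookkeeping (not code from the paper):

```python
PROTON_MASS = 1.007276  # Da, monoisotopic mass of a proton

def mz(monoisotopic_mass: float, charge: int) -> float:
    """m/z of the [M + zH]z+ ion of a neutral species of the given mass."""
    return (monoisotopic_mass + charge * PROTON_MASS) / charge

def neutral_mass(mz_value: float, charge: int) -> float:
    """Invert the relation: neutral monoisotopic mass from an observed m/z."""
    return charge * (mz_value - PROTON_MASS)

monomer_mass = neutral_mass(629.063, 4)  # ~2512.2 Da for XYTX2-Xa2a
print(f"monomer mass: {monomer_mass:.3f} Da")

# A disulfide-linked homodimer would weigh ~2 x monomer - 2 H, so its ions
# would fall at clearly different m/z values, e.g. for the 4+ charge state:
dimer_mass = 2 * monomer_mass - 2 * 1.007825  # subtract two hydrogen atoms
print(f"[dimer + 4H]4+ would appear at m/z {mz(dimer_mass, 4):.3f}")
```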
Pharmacological activity of X. aruana venom peptides. We prepared XYTX1-Xa1a and XYTX2-Xa2a by solid phase peptide synthesis (SPPS). Oxidative folding of the linear XYTX2-Xa2a produced a single major peak which eluted at the same retention time as the native peptide in the venom (Fig. S3). The mature peptide of XYTX1-Xa1a shares a similar primary structure with Xac-1, Xac-2 and melectin. Previous studies of these peptides have indicated amphipathic α-helical structure in membrane-mimicking solvents, degranulation of mast cells, and antimicrobial activity, all of which are consistent with a mode-of-action involving disruption of cell membranes [11][12][13]. We hypothesised that XYTX1-Xa1a would share the same activity. We tested, by whole-cell patch-clamp electrophysiology, the capacity of XYTX1-Xa1a to directly induce leak currents in cell membranes. For these experiments we used HEK293AD cells, which lack appreciable expression of the ion channels found in neurons. At 4 min after application of XYTX1-Xa1a (30 μM) we recorded leak currents at test potentials ranging from −60 to +60 mV in 10-mV increments from a holding potential of 0 mV every 6 s. At +60 mV, we recorded currents of 3.3 ± 0.6 nA (mean ± SEM, n = 6 cells) compared with 0.08 ± 0.02 nA (n = 5 cells) for time-matched negative controls (application of extracellular solution (ECS)) (Fig. 3a-c). These data are consistent with a membrane-disrupting mode of action for XYTX1-Xa1a, similar to that of melittin (Fig. S4).

XYTX2-Xa2a is similar in sequence to Bombus and Apis MCD-peptides and apamin. Apamin is a blocker of the mammalian small conductance calcium-activated potassium (KCa, SK) channel KCa2.2, 20 while Apis MCD peptide is a blocker of Shaker-like voltage-gated potassium (KV) channels KV1.1 and KV1.2. 21 We hypothesised that XYTX2-Xa2a might share similar activity. Thus, we tested XYTX2-Xa2a for activity on human KV1.1, KV1.2 and KV1.3, as well as KCa2.1 and KCa2.2 channels, where it proved only a weak, low-affinity blocker of hKCa2.2 (IC50 = 25.1 ± 3.5 μM; see figure legend below).

XYTX1-Xa1a directly activated mouse dorsal root ganglion (DRG) neurons (Fig. 5a-b,f), while application of XYTX2-Xa2a or venom PLA2 had no direct effect on intracellular calcium levels (Fig. 5d-f). We measured the potency of XYTX1-Xa1a in F11 cells (a mouse neuroblastoma × rat DRG cell line), where the peptide caused an increase of [Ca2+]i with a median effective concentration (EC50) of 5.2 ± 0.7 µM (n = 6) (Fig. 5c). In this assay, melittin was more potent (P = 0.0002, unpaired t-test; n = 6) with an EC50 of 1.2 ± 0.1 µM (n = 6) (Fig. 5c). Previous studies have shown that A. mellifera venom PLA2 potentiates the haemolytic activity of melittin, 22 and more recently it was shown that the PLA2 toxins of spitting cobra venoms potentiate the nociceptive effects of the cobra venom cytotoxins. 23 We hypothesized that the nociceptive effects of XYTX1-Xa1a might also be potentiated by PLA2. Indeed, activation of DRG neurons by XYTX1-Xa1a was increased in the presence of venom PLA2 (1 µM) to 93.2 ± 6.5% (P = 0.0036, versus XYTX1-Xa1a alone, unpaired t-test), which was accompanied by increased cell lysis (Fig. 5d-f). Cell lysis is illustrated in Fig. 5d,e by leakage of dye into the extracellular media. Activation of DRG neurons by XYTX1-Xa1a was not increased by the presence of XYTX2-Xa2a (45.8 ± 5.5% of neurons; P = 0.4096, versus XYTX1-Xa1a-treated, unpaired t-test) (Fig. 5f). These data suggest that XYTX2-Xa2a, apamin and MCD-peptide do not contribute to spontaneous pain (in mammals) associated with envenomation by these bees. We therefore tested whether they might contribute to longer-lasting pain responses, e.g. allodynia, as has been reported for other hymenopteran venom peptides. 26,27 Using an automated Von Frey apparatus, we measured paw-withdrawal threshold to a mechanical stimulus at 1 and 4 h following intraplantar injection of either XYTX2-Xa2a, apamin or MCD-peptide (2 pmol/paw), where we observed no difference to negative control (saline) injection (Fig. 6d).

[Figure legend] Representative whole-cell current traces were recorded for (a) hKV1.1, (b) hKV1.2, and (c) hKV1.3 using the voltage protocols shown above the raw current traces every 15 s in the absence (black, control) and presence of 100 nM XYTX2-Xa2a (orange) and positive control (TEA+ for hKV1.1 and hKV1.3, and charybdotoxin (ChTx) for hKV1.2; blue). (d) hKCa2.1 and (e) hKCa2.2 currents were elicited with voltage ramps to +50 mV from a holding potential of −120 mV every 15 s in the absence (black, control) and presence of XYTX2-Xa2a at the indicated concentration (orange) or apamin as a positive control (blue). The currents were corrected for ohmic leakage and then drawn as a function of test potential (Em). The horizontal dashed line shows the zero current level; the vertical dashed line indicates the expected reversal potential for K+ (−86.5 mV, based on the Nernst equation). (f) Low-affinity, concentration-dependent block of hKCa2.2 channels by XYTX2-Xa2a. Whole-cell hKCa2.2 currents were recorded using voltage ramps as for (e). Remaining current fraction (RCF) was calculated as I/I0, where I0 is the peak current at +50 mV in the absence and I is the peak current at +50 mV in the presence of XYTX2-Xa2a at equilibrium block at concentrations of 0.1, 1, 5, and 10 μM (empty circles), respectively. Points on the linear dose-response curve represent the mean of 4-6 independent measurements. The line was drawn using a linear least squares fit (see Methods for details). The reciprocal of the slope of the best fitted line yielded an IC50 of 25.1 ± 3.5 μM. Data are mean ± SEM.
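Both concentration-response analyses above (the EC50 from the F11 calcium-imaging assay and the IC50 taken as the reciprocal of the slope of the linear RCF fit) can be reproduced with a few lines of fitting code. The sketch below is illustrative only: the data arrays are invented placeholders rather than the study's measurements, and it assumes the linear fit relates fractional block (1 − RCF) to concentration, which is one common reading of "the reciprocal of the slope yielded an IC50".

```python
import numpy as np
from scipy.optimize import curve_fit

# --- Hill fit for an EC50 (cf. the F11 calcium-imaging assay) ---
def hill(c, bottom, top, ec50, n):
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

conc = np.array([0.3, 1, 3, 10, 30])             # µM, placeholder values
resp = np.array([0.05, 0.15, 0.40, 0.80, 0.97])  # normalized response, placeholder
popt, _ = curve_fit(hill, conc, resp, p0=[0, 1, 3, 1])
print(f"EC50 ≈ {popt[2]:.2f} µM")

# --- Linear low-affinity block: IC50 from the reciprocal of the slope ---
# For weak 1:1 block, RCF = 1/(1 + c/IC50) ≈ 1 - c/IC50 when c << IC50, so
# fractional block (1 - RCF) versus c is linear with slope 1/IC50.
c_blk = np.array([0.1, 1, 5, 10])              # µM XYTX2-Xa2a, as in the legend
rcf = np.array([0.995, 0.96, 0.83, 0.71])      # placeholder remaining-current fractions
slope = np.linalg.lstsq(c_blk[:, None], 1 - rcf, rcond=None)[0][0]
print(f"IC50 ≈ {1 / slope:.1f} µM")
```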
Discussion
In this study, we analysed the venom composition and function of the Australian great carpenter bee Xylocopa aruana. While the venom of the eusocial A. mellifera is among the most studied of all venoms, there have been few studies on the venoms of solitary Apidae. One probable reason for this is, due to their solitary lifestyle, a greater difficulty in acquiring multiple specimens and therefore sufficient venom and venom-producing tissue for analysis. However, as demonstrated here, advances in the sensitivity of mass spectrometry and nucleotide sequencing have now made it possible to analyse the complete venom composition of an individual bee. Working with an individual rather than multiple specimens comes with both advantages and disadvantages: one advantage is that our data were not confounded by intra-specific genetic polymorphisms, which can interfere with transcriptome assembly and conclusions on venom complexity. But this could also be viewed as a potential limitation, i.e. the venom composition of our specimen may not be an accurate representation of the species. To resolve this comprehensively would require the individual analysis of multiple specimens. However, we note that studies of the congeneric X. appendiculata 13 and X. violacea, 14 which used multiple individuals, report venom compositions consistent with our findings.

Solitary and eusocial Apidae contend with distinct defensive selection pressures, i.e. Xylocopa spp. sting solely in self-defence, while the eusocial A. mellifera stings in both self-defence and defence of its colony, including against large vertebrates. Defensive adaptations in the eusocial Apidae include alarm pheromones, increased aggression and sting autotomy (in Apis), which likely serve to increase the dose of venom that can be delivered to an aggressor. One might expect that the differing selection pressures between Apis and Xylocopa would also be reflected in differences in venom composition. However, our data suggest that X. aruana and A. mellifera share a similar venom composition and venom mode-of-action. The sole difference we observed was in the potency of the major pain-causing components: melittin is approximately fivefold more potent than XYTX1-Xa1a at activating mammalian sensory neurons. We speculate that the greater potency of melittin, from the eusocial A. mellifera venom, over XYTX1-Xa1a may represent an additional adaptation in response to the distinct defensive selection pressures associated with the transition to eusociality.
While this is consistent with a previous report of a correlation between venom lethal capacity and colony weight in stinging hymenopterans, 19 broader taxon sampling within the Apidae and in other bees, and comparative evaluation of venom potency, will be valuable in further testing this hypothesis. In both X. aruana and A. mellifera, one amphipathic pore-forming peptide is the major venom component and the primary pain-causing agent. Venom PLA2 increases the pain-causing effects of the amphipathic peptide. Such toxin synergy is widely believed to occur in venoms, yet very few examples have been documented to date. Other examples include the toxin "cabals" of cone snail venoms, where several toxins with complementary activity work together to achieve rapid paralysis of the prey. 28 Similarly, PLA2 in cobra venoms potentiates the pain-causing activity of venom cytotoxins. 23 The potentiation, by venom PLA2, of the pain-causing effects of melittin and the melittin-like peptide XYTX1-Xa1a, in the venoms of A. mellifera and X. aruana, respectively, is a third example of toxin synergy. We found that bee venom PLA2 also induced pain behaviours in its own right, and thus can also be considered a pain-causing agent. We did not resolve the mechanism by which this occurs, although it appears to be independent of direct activation of sensory neurons. In contrast to the other major venom components, the apamin-like peptides, which are blockers of potassium channels and make up the final major class of polypeptides in these venoms, did not cause spontaneous pain behaviours or allodynia, and their contribution(s) in the context of defence and pain remains unclear. This study contributes to our understanding of the evolution, chemistry and pharmacology of the venoms of the Apidae.

Methods
Mass spectrometry. A combination of top-down proteomics of native and reduced and alkylated venom, and bottom-up proteomics of reduced, alkylated and trypsin-digested venom was used to examine the venom composition of the individual X. aruana. Two aliquots of venom (10 μg each) were dried by vacuum centrifugation. Gas-phase reduction and alkylation was performed according to the protocol described by Hale et al. 34 100 μL of reduction/alkylation reagent (50% (v/v) ammonium carbonate, 48.75% ACN, 1% 2-iodoethanol, 0.25% triethylphosphine) was added to the lid of each 1.5 mL tube containing dried venom, which was then inverted, closed, and incubated at 37 °C for 90 min. One aliquot of reduced and alkylated venom was then digested by incubating with trypsin (20 ng/μL) overnight at 37 °C, according to the manufacturer's instructions (Sigma-Aldrich, St. Louis, MO, USA). Three venom samples (10 μg each) - native venom; reduced and alkylated venom; and reduced, alkylated and trypsin-digested venom - were analyzed by LC-MS/MS. Samples were separated on a Nexera uHPLC (Shimadzu, Kyoto, Japan) with a Zorbax stable-bond C18 column (2.1 × 100 mm; particle size, 1.8 μm; pore size, 300 Å; Agilent, Santa Clara, CA, USA), using a flow rate of 180 μL/min and a gradient of 1-40% solvent B (90% ACN and 0.1% formic acid (FA)) in 0.1% FA over 25 min, then 40-80% solvent B over 4 min, and analyzed on an AB Sciex 5600 TripleTOF (SCIEX, Framingham, MA, USA; operated with Analyst TTF v1.8) mass spectrometer equipped with a Turbo-V source heated to 550 °C.
MS survey scans were acquired at 300 to 1800 mass/charge ratio (m/z) over 250 ms, and the 20 most intense ions with a charge of +2 to +5 and an intensity of at least 120 counts were selected for MS/MS. Precursor ions within the unit-mass inclusion window (± 0.7 Da) and isotopes within 2 Da were excluded from MS/MS, with scans acquired at 80 to 1400 m/z over 100 ms and optimized for high resolution. Using ProteinPilot v5.0 (SCIEX), MS/MS spectra were searched against the translated venom apparatus transcriptome (MS and MS/MS tolerance of 0.05 and 0.1 Da, respectively). False discovery rate analyses were generated by the ProteinPilot default method, which uses a decoy database. Transcripts encoding venom components were then manually examined using the Map-to-Reference tool of Geneious v10.2.6, 35 where two paralogues of XYTX1-Xa1a were reassembled. These were then reincorporated back into the complete transcriptome, estimation of transcript abundance was repeated, and a second, final ProteinPilot search performed. Peptides identified by ProteinPilot were validated by comparison of experimentally derived MS/MS peaks against a theoretical peak list generated using MS-Product in ProteinProspector v5.22.1 (http://prospector.ucsf.edu/prospector/cgi-bin/msform.cgi?form=msproduct). LC-MS was used to compare the elution times of oxidised synthetic XYTX2-Xa2a and native XYTX2-Xa2a in the venom. 10 μg of native venom was separated on a Nexera uHPLC with a Zorbax stable-bond C18 column, using a flow rate of 180 μL/min and a gradient of 1-40% solvent B (90% ACN and 0.05% TFA) over 18 min, and analyzed on an AB Sciex 5600 TripleTOF mass spectrometer. 1 nmol of oxidised synthetic XYTX2-Xa2a was analysed under the same conditions. The elution times of the extracted ion chromatograms (XIC) of 629.0627 ± 0.05 m/z (the theoretical (M + 4H)4+ ion of XYTX2-Xa2a) were compared. Melittin and bee venom PLA2 were purchased from Sigma-Aldrich (St. Louis, MO, USA), and apamin and MCD-peptide were purchased from Alomone Labs (Jerusalem, Israel).

Whole-cell voltage-clamp electrophysiology. HEK293AD cells (American Type Culture Collection) were cultured as previously described. 36 Cells were maintained in DMEM supplemented with 10% heat-inactivated FBS, 2 mM l-glutamine, pyridoxine and 110 mg/L sodium pyruvate. Whole-cell patch-clamp experiments were performed using a QPatch 16X automated electrophysiology platform (Sophion Bioscience). The extracellular solution contained the following: 70 mM NaCl, 70 mM choline chloride, 4 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM HEPES, and 10 mM glucose (pH 7.4 with NaOH; 305 mosmol). The intracellular solution contained the following: 140 mM CsF, 1 mM EGTA/5 mM CsOH, 10 mM HEPES, and 10 mM NaCl (pH 7.3 with CsOH; 320 mosmol). From a holding potential of 0 mV each recorded cell was subjected to a series of 50-ms voltage pulses that ranged from −60 to +60 mV in 10-mV increments. Recordings were made prior to and 4 min after the addition of either ECS (negative control) or XYTX1-Xa1a (10 µM). Data are mean ± SEM of 5-6 experiments and fitted to a simple linear regression. Chinese Hamster Ovary (CHO) cells (American Type Culture Collection) were grown in DMEM-high glucose supplemented with 10% FBS, 2 mM l-glutamine, 100 U/mL penicillin-G, and 100 μg/mL streptomycin (Invitrogen) at 37 °C in a 5% CO2 and 95% air humidified atmosphere. Cells were passaged twice per week following a 7-min incubation in PBS containing 0.2 g EDTA/L (Invitrogen).
hKV1.1, hKV1.2, hKCa2.1, and hKCa2.2 channels were transiently expressed in CHO cells using Lipofectamine 2000 (Invitrogen, Carlsbad, CA), following the manufacturer's protocol, and were cultured under standard conditions. For recording hKV1.1, hKV1.2, and hKCa2.1 currents, GFP-tagged ion channel vectors were used. The hKCa2.2 channel plasmid was transiently co-transfected with a plasmid encoding the green fluorescent protein (GFP) at a molar ratio of 1:10. Transfected cells were washed twice with 2 mL of ECS (see below) and replated onto 35-mm polystyrene cell culture dishes (Cellstar, Greiner Bio-One). Currents were recorded 24 to 48 h after transfection. GFP-positive transfectants were identified with a Nikon Eclipse TS100 fluorescence microscope (Nikon, Tokyo, Japan) using bandpass filters of 455-495 nm and 515-555 nm for excitation and emission, respectively, and were used for current recordings (>70% success rate for co-transfection). hKV1.3 currents were recorded on activated lymphocytes 3 to 4 days after activation. Human venous blood was obtained from anonymized healthy donors. Peripheral blood mononuclear cells were isolated by Histopaque-1077 (Sigma-Aldrich Hungary, Budapest, Hungary) density gradient centrifugation. Cells obtained were resuspended in RPMI 1640 medium containing 10% fetal calf serum (FCS, Sigma-Aldrich), 100 μg/mL penicillin, 100 μg/mL streptomycin, and 2 mM l-glutamine, seeded in a 24-well culture plate at a density of 5-6 × 10^5 cells/mL, and grown in a 5% CO2 incubator at 37 °C for 3-5 days. Phytohemagglutinin A (Sigma-Aldrich) was added to the medium at 10 μg/mL to amplify KV1.3 expression. Cells were washed gently twice with 2 mL of ECS (see below) for the patch-clamp experiments. The standard whole-cell patch-clamp method 37 was used to record ionic currents. Micropipettes were pulled in four stages using a Flaming Brown automatic pipette puller (Sutter Instruments, San Rafael, CA) from borosilicate standard-wall-with-filament aluminum-silicate glass (Harvard Apparatus Co., Holliston, MA), with tip diameters between 0.5 and 1 μm, and heat polished to a tip resistance typically ranging from 2 to 8 MΩ. All measurements were carried out using an Axopatch 200B amplifier connected to a personal computer with Axon Digidata 1550A data acquisition hardware (Molecular Devices Inc., Sunnyvale, CA). In general, the holding potential was −120 mV. Records were discarded when leak at the holding potential was more than 10% of the peak current at the test potential. Experiments were done at room temperature, ranging between 20 and 24 °C. Data were analysed using GraphPad Prism 8 (GraphPad, CA, USA) and the pClamp 10.5 software package (Molecular Devices Inc., Sunnyvale, CA). Before analysis, whole-cell current traces were corrected for ohmic leakage and were digitally filtered with a three-point boxcar smoothing filter. For hKCa2.1-2 the reversal potential for K+ was determined, and only those currents were analyzed for which the reversal potential fell into the range of the theoretical reversal potential ± 5 mV (−86.5 ± 5 mV). Peptides were dissolved in the ECS supplemented with 0.1 mg/mL BSA (bovine serum albumin). Bath perfusion around the measured cell with different extracellular solutions was achieved using a gravity-flow micro-perfusion system at a rate of 0.5 mL/min. Excess fluid was removed continuously.
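The quality criterion above (accepting only currents whose reversal potential lies within ±5 mV of the theoretical value) follows from the Nernst equation, E_K = (RT/zF) ln([K+]out/[K+]in). The snippet below reproduces a value near −86.5 mV under illustrative assumptions: 4 mM K+ outside, as in the ECS described above, and ~120 mM K+ inside at 22 °C. The intracellular concentration and temperature are our assumptions, not figures taken from the paper.

```python
import math

R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant

def nernst(c_out_mM: float, c_in_mM: float, z: int = 1, temp_c: float = 22.0) -> float:
    """Equilibrium (reversal) potential in mV for an ion at the given
    extracellular/intracellular concentrations and temperature."""
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Illustrative assumption: 4 mM K+ outside, 120 mM K+ inside, room temperature
print(f"E_K ≈ {nernst(4.0, 120.0):.1f} mV")  # ≈ -86.5 mV
```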
For measurements of currents on hKV1.1-3, voltage steps to +50 mV were applied from a holding potential of −120 mV every 15 s and the peak current was measured. hKCa2.1-2 currents were elicited every 15 s with voltage ramps to +50 mV from a holding potential of −120 mV. The remaining current fraction (RCF) at a given molar concentration was calculated as I/I0, where I0 is the peak current at +50 mV in the absence and I is the peak current at +50 mV in the presence of XYTX2-Xa2a at equilibrium block at that concentration.

Calcium imaging. For calcium imaging, cells were first incubated with assay solution alone or containing venom PLA2 (1 µM) or XYTX2-Xa2a (1 µM), then at 1 min with XYTX1-Xa1a (in assay solution ± venom PLA2 (1 µM) or XYTX2-Xa2a (1 µM)), and monitored for 2 min before the solution was replaced with assay solution and then KCl (30 mM; positive control). Experiments involving the use of mouse tissue were approved by the University of Queensland Animal Ethics Committee (UQ AEC; approval number TRI/IMB/093/17). F11 cells (mouse neuroblastoma × DRG neuron hybrid; European Collection of Authenticated Cell Cultures) were cultured as previously described. 36 Cells were maintained in Ham's F12 media supplemented with 10% FBS, 100 µM hypoxanthine, 0.4 µM aminopterin, and 16 µM thymidine (Hybri-Max, Sigma-Aldrich). 384-well imaging plates (Corning, Lowell, MA, USA) were seeded 24 h prior to calcium imaging, resulting in ~90% confluence at the time of imaging. Cells were loaded for 30 min at 37 °C with Calcium 4 assay component A in physiological salt solution (PSS; 140 mM NaCl, 11.5 mM d-glucose, 5.9 mM KCl, 1.4 mM MgCl2, 1.2 mM NaH2PO4, 5 mM NaHCO3, 1.8 mM CaCl2, 10 mM HEPES) according to the manufacturer's instructions (Molecular Devices, Sunnyvale, CA). Ca2+ responses were measured using a FLIPR TETRA fluorescent plate reader equipped with a CCD camera (Ex: 470 to 490 nm, Em: 515 to 575 nm) (Molecular Devices, Sunnyvale, CA). Signals were read every second for 10 s before, and 300 s after, the addition of peptide (in PSS supplemented with 0.1% BSA).

Pain behaviour experiments. Male adult (6 weeks old) C57BL/6J mice were used for behavioural experiments. To facilitate injections, mice were briefly anesthetized using 2.5% isoflurane. Each peptide, diluted in saline containing 0.1% bovine serum albumin (BSA), was administered in a volume of 20 µL into the hind paw by shallow intraplantar injection. Negative control animals were injected with saline containing 0.1% BSA. Following injection, spontaneous pain behaviour events were counted from video recordings by a researcher blinded to the treatments. Mechanical paw withdrawal thresholds were measured 1 and 4 h following injection using an automated Von Frey apparatus (MouseMet; Topcat Metrology). For calcium imaging experiments of mouse DRG neurons and F11 cells, treatment groups were compared using unpaired t-tests. For analysis of spontaneous pain, sums of pain behaviour counts over 30 min were compared between treatment groups using one-way ANOVA with Tukey's multiple comparisons test. Statistical significance was defined as P < 0.05. All data are presented as mean ± SEM.

Data availability. Prepropeptide sequences of XYTX1-Xa1a, XYTX2-Xa2a and the X. aruana venom PLA2 have been deposited with GenBank under accessions ON586842, ON586843 and ON586844, respectively. RNA-seq reads have been deposited in the NCBI Sequence Read Archive under accession SRR22306546. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository. 38
Adherence and uptake of Francisella into host cells

Francisella tularensis is a highly virulent bacterial pathogen that is easily aerosolized and has a low infectious dose. As an intracellular pathogen, entry of Francisella into host cells is critical for its survival and virulence. However, the initial steps of attachment and internalization of Francisella into host cells are not well characterized, and little is known about bacterial factors that promote these processes. This review highlights our current understanding of Francisella attachment and internalization into host cells. In particular, we emphasize the host cell types Francisella has been shown to interact with, as well as specific receptors and signaling processes involved in the internalization process. This review will shed light on gaps in our current understanding and future areas of investigation.

F. novicida, which causes disease in immunocompromised individuals, is also used as a model for virulent Francisella species because it is virulent in mice, and shares a high degree of DNA and protein sequence similarity with the virulent species. 2 F. tularensis is primarily an intracellular pathogen. Once intracellular Francisella escapes from the phagosome, it replicates in the cytosol until the host cell lyses, allowing the released bacteria to infect other cells. 1 Although evidence for an extracellular phase of Francisella has emerged, 3 the intracellular phase is thought to be dominant during infection. Indeed, mutants that are unable to survive and replicate intracellularly are typically attenuated for virulence. [4][5][6] The processes of attachment and internalization into host cells are key steps necessary for Francisella to reach its intracellular niche and cause disease. It has been demonstrated both in vitro and in vivo that Francisella infects a wide variety of cell types. This review will discuss potential mechanisms by which Francisella attaches to host cells, as well as specific receptor interactions and processes that mediate internalization of the bacteria. A better understanding of these initial interactions could provide novel targets for therapies that could reduce virulence and thus disease.

Host Cells Supporting Francisella Infection
F. tularensis is a zoonotic bacterium that can infect a wide variety of species, ranging from arthropod vectors to many species of mammals. 1,2 F. tularensis can also replicate in vitro within a variety of cell types, including phagocytic cells such as macrophages, neutrophils, 7 dendritic cells, 8 the murine macrophage-like cell lines J774A.1 9-11 and RAW264.7, [12][13][14] and the human monocytic cell line THP-1.
10,15 Francisella uptake and replication also occurs in non-phagocytic cells such as murine 16 and human lung epithelial cell lines, 17,18 hepatocyte cell lines, 5,19 and fibroblasts. 20 Although F. tularensis replicates within all of these different cell lines, there are some differences in the ability of different cell types to interact with and support F. tularensis growth. For example, F. novicida and LVS associate with and are taken up by human monocyte-derived macrophages (HMDMs) in greater numbers than by human monocytes or J774A.1 cells. 21,22 Lindemann et al. found that the degree of attachment of LVS to several different epithelial cell lines (HEp-2, human bronchial epithelial [HBE], and A549 cells) was similar. 23 However, Lo et al. systematically examined uptake and growth of F. novicida in nine different epithelial cell lines using colony counts and immunofluorescent techniques, and found that while all tested cell lines supported replication, there were differences in the degrees of internalization, rates of replication, and bacterial loads. 24 Replication within host non-phagocytic cells may be sufficient for virulence, as a mutant in Francisella Schu S4 that was unable to replicate within macrophages maintained the ability to replicate within epithelial cells in vitro, and retained virulence in a mouse respiratory model. 20 Francisella LVS has also been observed within purified human erythrocytes, 25 which could represent a mechanism of dissemination within the host and possibly transmission to other hosts via arthropod vector.

Francisella infection of host cells has also been investigated in vivo. Hall et al. examined infected cell types in the lungs of mice over the course of a respiratory infection with Schu S4, LVS, and F. novicida by flow cytometry. 26 Twenty-four hours post-infection, the primary cell type infected by all three bacteria was alveolar macrophages, though the percentage of these cells was different for the various strains; over 70% of infected cells isolated from Schu S4 and LVS infected lungs were alveolar macrophages, whereas only 51% of the cells in F. novicida infected lungs were of this cell type. CD11b +/+ macrophages, dendritic cells, and Type II alveolar epithelial cells were also infected during this time period. Interestingly, nearly one fourth of all cells infected by F. novicida at 24 h post-infection were neutrophils, whereas only 0.4% of LVS infected cells were neutrophils, and no Schu S4-infected neutrophils were detected at this time point. However, by three days post-infection roughly 50% of the infected cells in both LVS and Schu S4 infected lungs were neutrophils, and in F. novicida-infected mice this percentage had risen to 80%. These differences in infected cell populations and their timing may be due to differences in the inflammatory responses elicited by F. novicida compared with LVS or Schu S4. 26

Arthropods including ticks and mosquitoes are natural vectors for F. tularensis. Thus, it is not too surprising that F. tularensis replicates well within Drosophila-derived cell lines. 27,28 F. tularensis also infects and replicates within whole Drosophila, which could potentially provide a model to study aspects of the vector stage of Francisella. Drosophila flies have been used to screen for virulence factors 29 and to investigate interactions with the insect innate immune system. 30 Non-mammalian hosts, such as Acanthamoeba castellanii 31,32 and Hartmannella vermiformis, 33 also support F. tularensis replication, suggesting that protozoans could serve as a reservoir, or perhaps a mode of waterborne transmission.

Attachment to Host Cells
As the first step in the process of bacterial uptake, attachment to the surface of host cells can influence the internalization process itself, as well as downstream signaling events, through specific adhesin-receptor interactions. Although several host cell receptors have been identified, the bacterial factors that contribute to Francisella attachment are not well characterized (Fig. 1). One potential adhesin is a Type IV pilus, which contributes to host cell attachment and virulence of a number of different bacterial species. 34,35 Schu S4, LVS, and F. novicida all encode a Type IV pilus, though there are some differences. 36 LVS has deletions or frameshift mutations in 2 out of 6 genes encoding pilin-like proteins, and in pilT, which in other bacteria encodes the retraction motor. 37 Pili-like structures have been observed on F. novicida, LVS, and Schu S4, 37-39 but the functional role of these structures in attachment and virulence appears to differ among the various subspecies. LVS deletion mutants in pilF and pilT, which encode orthologs of two proteins required for Type IV pilus fiber assembly and disassembly, exhibited significantly decreased attachment to both macrophage and epithelial cell lines. 40 However, deletion of genes encoding pilin-like components pilE4, pilE5, or pilE6 in Schu S4 had no effect on attachment to J774A.1 macrophage-like cells or virulence, and in an LVS background actually increased attachment. 39 Forslund et al. also determined that pilA, another gene encoding a pilin-related protein in a virulent Type B strain, was required for full virulence, but not attachment to HeLa cells. 41 In F. novicida pilF is required for a Type II secretion system. 38

Francisella outer membrane protein FsaP (FTL_1658) has also been shown to contribute to host cell attachment. 17 Melillo et al. identified FsaP as a prominent outer membrane protein by surface biotinylation of LVS. Expression of FsaP in E. coli resulted in an 8-fold enhancement of attachment to A549 lung epithelial cells. While expressed in all three major Francisella ssp., FsaP does not localize to the outer membrane in F. novicida. This may be due to an amino acid variation at the signal peptide cleavage site that may prevent cleavage of the signal peptide and proper protein localization. However, there is also some discrepancy with the localization of FsaP; Zarrella et al. detected FsaP in the inner membrane during fractionation of LVS. 42 FsaP is upregulated in a Type A strain isolated from infected mice, 43 but its significance or contribution to virulence has not yet been elucidated. FsaP has some similarity to FimV from Pseudomonas aeruginosa, which is involved in assembly of the Type IV secretin, 44 and it also contains a conserved LysM domain, which in other proteins has been shown to bind peptidoglycan. 45 Surface-expressed Francisella elongation factor-Tu (EF-Tu) has been identified as mediating attachment to THP-1 cells. 15 EF-Tu was found to interact with host cell nucleolin by pull-down assays. Both anti-EF-Tu antibodies and pseudopeptide HB-19, which binds irreversibly to nucleolin, blocked LVS attachment to THP-1 cells. Although EF-Tu is generally thought to be a cytoplasmic protein, there is a growing list of bacteria that utilize surface-expressed EF-Tu to bind host cells or factors. [46][47][48]

Internalization into Host Cells
Francisella has a low infectious dose of 25 colony forming units or less. 1 This low infectious dose suggests that Francisella can efficiently enter and replicate within host cells. However, Francisella internalization into host cells in vitro is fairly inefficient. Opsonization with complete serum or complement enhances internalization into host cells at least 10-fold. 8,49,50 Schu S4, LVS, and F. novicida are resistant to complement-mediated killing through several mechanisms. 51 LVS has been shown to bind Factor H, a host regulator that downregulates activation of the complement alternative pathway. 52 Additionally, Schu S4, LVS, and F. novicida can all cleave C3b into the inactive form C3bi much more efficiently than a complement-sensitive LVS phase variant, implicating this cleavage in protection from complement-mediated killing. 51 Deposition of C3bi on a bacterial surface does not lead to the recruitment of the membrane attack complex and cell lysis, but retains the ability to opsonize and enhance phagocytosis of the bacteria.

Host receptors mediating internalization
Several host cell receptors have been implicated in the internalization of Francisella (Fig. 1). The predominant receptor mediating uptake depends on whether Francisella is opsonized or unopsonized. In the absence of opsonins, Francisella uptake by macrophages appears to be primarily mediated by the mannose receptor (MR), as bone marrow-derived macrophages from MR knockout mice internalize significantly less unopsonized Schu S4 than macrophages from wild-type mice. 50 Additionally, blocking MR with specific antibodies or soluble mannan significantly inhibits internalization of both F. novicida and LVS by monocytes. 21,22 However, deletion of the mannose receptor does not completely eliminate Francisella internalization, 21,22,50 implicating other host molecules in the internalization of unopsonized Francisella.

Uptake of opsonized Francisella can be mediated by different cell surface receptors depending on the opsonin. Uptake of antibody-opsonized Francisella occurs almost exclusively through the Fc receptor (FcγR), because uptake of Schu S4 exposed to opsonizing antibodies is completely ablated in macrophages from FcγR knockout mice. 50 Uptake of antibody-opsonized F. novicida also uses the FcγR, as fewer opsonized bacteria are internalized by monocyte-derived macrophages that are depleted of Fc receptors compared with control macrophages. 22 Uptake of opsonized Schu S4 through the FcγR is associated with superoxide production, delayed escape from the phagosome, and decreased cytosolic growth compared with unopsonized bacteria, 53 suggesting that this route of entry would be associated with bacterial destruction in the presence of a humoral response.

Several host receptors have demonstrated roles in the internalization of serum-opsonized Francisella. The macrophage scavenger receptor class A (SRA) has been shown to contribute to the uptake of opsonized Francisella. 49,50 This is somewhat unusual because in other gram-negative and some gram-positive bacteria it has been shown that bacteria directly bind to SRA through interactions with LPS or lipoteichoic acid, and that this binding is not influenced by the presence of serum. 54,55 Treatment of J774A.1 cells with SRA agonists or blocking antibodies significantly decreases the uptake of serum-opsonized LVS, and macrophages isolated from SRA knockout mice have a reduced ability to internalize LVS or Schu S4 compared with wild-type. 49,50 In addition, expression of SRA in HEK293T cells significantly enhances the attachment of LVS, though not internalization, suggesting that additional receptors or signals are required for uptake. Francisella is also opsonized in the presence of SP-A, a surfactant found abundantly in the lungs. Pre-opsonization of F. novicida with SP-A results in increased association with monocyte-derived macrophages. 22 Multiple complement receptors have been shown to have roles in the uptake of serum-opsonized F. tularensis. Macrophage knockouts for the complement receptor 3 (CR3) internalize significantly less serum-opsonized Schu S4 than wild-type macrophages. 21,50,51 Complement receptor 4 (CR4) plays a role in the uptake of serum-opsonized LVS by immature dendritic cells 8 and monocyte-derived macrophages, 7 whereas complement receptor 1 (CR1) and CR3 are critical for uptake of opsonized LVS and Schu S4 by neutrophils. 7 Entry into host cells via the complement receptor could represent a mechanism for delaying immune responses and enhancing Francisella virulence. Typically, ligation of CR3 on monocytes does not result in production of reactive oxygen species or stimulation of inflammatory responses. 56,57 Additionally, uptake of Schu S4 by CR3 seems to dampen signaling through Toll-like receptor 2, leading to a less robust proinflammatory response. 58,59 Geier and Celli found that when opsonized Schu S4 entered bone marrow-derived macrophages there was delayed maturation and escape from phagosomes, as well as delayed cytosolic replication compared with entry of unopsonized bacteria. 53 These results suggest that opsonization may represent a restrictive route of entry for Schu S4. However, more controlled replication could potentially be beneficial in delaying proinflammatory responses; Dai et al. showed that entry of Schu S4 into human monocyte-derived macrophages via CR3 limits proinflammatory cytokine responses. 59

Mechanism of phagocytosis
Uptake of Francisella is dependent on actin polymerization and microtubules in both phagocytic and nonphagocytic cells. 23,60,61 However, Francisella can use different mechanisms of uptake depending on conditions and cell type. Clemens et al. used electron microscopy to observe phagocytosis of a Type A clinical isolate, RCI, and LVS by human monocyte-derived macrophages (HMDMs) and THP-1 human monocytic cells. 62 This study found that a majority of bacteria appeared to be taken up by spacious asymmetrical pseudopod loops. This was the primary mechanism used for the uptake of both live and killed bacteria, suggesting that it is dependent on a pre-formed factor on the bacterial cell surface. Interestingly, they found that treatment of bacteria with 1% periodic acid to degrade surface carbohydrates resulted in bacterial uptake via conventional phagocytosis, implicating surface components such as LPS or bacterial capsule in this asymmetrical looping phagocytosis. In support of this hypothesis, they found that, in the presence of serum that had been depleted of C7 to avoid issues with increased serum sensitivity, LVS O-antigen (O-Ag) mutants were taken up by aberrant tight pseudopod loop structures that had multiple, "onion-skin"-like layers. 63 These aberrant loops were only observed when O-Ag mutant bacteria were serum-opsonized, as both unopsonized wild-type and O-Ag mutant LVS were taken up by similar appearing spacious asymmetrical loop structures.

In their system Clemens et al. rarely (around 5%) observed LVS internalization via a ruffling mechanism resembling macropinocytosis by HMDMs and THP-1 cells. 62 Macropinocytosis is the non-specific uptake of extracellular fluid due to the interaction and fusion of plasma membrane extensions. 64 This process is not constitutive, but rather transiently turned on by specific activating ligands. 65 Using microarray analysis, Bradburne et al. observed that genes regulating macropinocytosis were highly upregulated during the initial period of infection of A549 human lung epithelial cells, and have proposed that LVS uses macropinocytosis as a mechanism for uptake into epithelial cells. 18 In support of this mechanism they showed that uptake of LVS was inhibited by amiloride, an inhibitor of macropinocytosis, and that infection with LVS allowed A549 cells to internalize FITC-dextran, which is known to be taken up by macropinocytosis. 66 This uptake was not seen when FITC-dextran was added to A549 cells alone, indicating that LVS infection facilitated uptake of FITC-dextran by macropinocytotic pathways. Further research is necessary to determine if specific bacterial ligands trigger this process.

Lai et al. identified several hypercytotoxic mutants in F. novicida that were able to kill host cells more quickly than wild-type bacteria, but were less virulent in mice. 11 These mutants also exhibited enhanced uptake; they were taken up by macrophages at higher rates than wild-type bacteria, even when the cells were pretreated with the actin polymerization inhibitor cytochalasin D. 11 Some of these mutants lacked O-Ag, and also lacked or had defects in the LPS core. However, other mutants that did not have any LPS-related defect exhibited a similar phenotype. These authors speculate that these mutants all have surface alterations that change the mechanism of uptake, or expose a surface molecule that enhances or signals uptake. Enhanced uptake of LPS mutants in other gram-negative bacteria has been reported, so this could represent a common mechanism shared with other bacteria. 11

Role of lipid rafts in F. tularensis uptake
Plasma membrane microdomains, such as lipid rafts, are targeted by a number of intracellular pathogens as a mechanism for invasion. 67 Lipid rafts provide a highly concentrated region of receptors and signaling molecules that can interact with pathogens, granting them entrance into host cells. 68 Using fluorescent microscopy, it has been shown that GFP-expressing LVS colocalized with filipin III, a fluorescent agent that binds cholesterol. 12 Depletion of lipid rafts from the plasma membrane of J774A.1 cells also decreased LVS uptake, implicating lipid rafts as hot spots of Francisella internalization. This internalization was associated with glycophosphatidylinositol (GPI)-anchored proteins located in the lipid rafts, as removal of GPI results in significantly decreased intracellular LVS. The role of lipid rafts in internalization of Francisella draws attention to the role of caveolin, a host protein usually associated with pits in cholesterol-rich microdomains of the host cell plasma membrane. 69 Immunofluorescent imaging shows colocalization between caveolin and GFP-expressing LVS, and disruption of caveolin with cholera toxin prevents LVS entry into macrophages. 12 Interestingly, Law et al. failed to see involvement of caveolin in uptake of F. novicida in hepatocytes, 19 but they did see involvement of clathrin-coated pits in this process. Differences in clathrin and caveolin involvement in uptake may be due to differences in cell types analyzed and Francisella subspecies.

Cell signaling pathways that influence Francisella uptake and attachment
Francisella interaction with host cells activates signaling pathways that promote uptake of the bacteria, and limit the production of proinflammatory cytokines. The specifics of this interaction are dependent on the subspecies or species of Francisella, growth conditions, 42,70 cell types (including mouse or human), 59 and whether the bacteria have been opsonized. 53,59 Using HMDMs, Dai et al. found that C3 opsonization of Schu S4, and uptake via CR3, was critical for Schu S4-mediated suppression of proinflammatory cytokine responses. 59 HMDMs incubated with Schu S4 opsonized with C3-depleted serum or unopsonized bacteria produced relatively high levels of proinflammatory cytokines compared with C3 opsonized bacteria. The immune suppression induced by opsonized Schu S4 and CR3 ligation in these cells was characterized by early inhibition of ERK1/2, p38 MAPK, and NFκB activation, and activation of Lyn kinase, Akt, and MKP-1. Lyn kinase was shown to have a critical role, because siRNA knockdown of Lyn kinase decreased Schu S4 uptake via CR3, and despite this decrease, these cells produced higher levels of proinflammatory cytokines. Lyn kinase has been shown to have a role in the phagocytosis of Pseudomonas aeruginosa and subsequent inflammatory cytokine suppression, 59,71 as well as in the negative regulation of TLR2 signaling. 72 TLR2 is the major pattern recognition receptor that recognizes Schu S4, [73][74][75][76] as, unlike many other LPS molecules, Francisella LPS is not recognized by TLR4. Dai et al. also found evidence that cross-talk between CR3 and TLR2 controls both uptake and immune suppression. siRNA knockdown of TLR2 resulted in a 40% decrease in attachment and uptake of serum-opsonized Schu S4; proinflammatory cytokines were also decreased, supporting the role of TLR2 in mediating proinflammatory responses, and cross-talk between CR3 and TLR2. Thus, their data suggest that CR3-mediated uptake of Schu S4 serves to dampen TLR2-mediated proinflammatory responses.

In an LVS model of infection of mouse bone marrow-derived macrophages, Medina et al. identified a role for phosphatidylinositol 3-kinase (PI3K)/Akt activation in limiting the TLR2-mediated proinflammatory responses. 77 Consistent with Dai et al., they identified a link between ERK1/2 and p38 MAPK activation and proinflammatory cytokine expression in LVS-infected mouse bone marrow-derived macrophages; inhibition of PI3K activity resulted in activation of ERK1/2 and p38 MAPK and increased TNF-α and IL-6 expression. However, while activation of p38 MAPK and cytokine expression was TLR2-dependent, ERK1/2 activation was MyD88-dependent, but TLR2-independent, suggesting that there are additional pathways that contribute to immune suppression. Dai et al. were unable to recapitulate their results in Schu S4-infected mouse bone marrow-derived macrophages. Instead they found that ERK1/2 activation was enhanced in response to interaction with C3-opsonized Schu S4 when compared with unopsonized Schu S4. Medina et al. only examined PI3K activity in host cells infected with unopsonized LVS, so it is not known if they would have observed a similar activation with opsonized LVS. However, together these results suggest that there are differences in the signaling responses of mouse and human cells to Francisella infections.

In contrast to Schu S4 and LVS, Parsa et al. found that activation of the ERK1/2 and Syk signaling pathway was key to the uptake of F. novicida, as ERK overexpression in RAW 264.7 cells enhanced bacterial internalization. 61 Additionally, inhibitors of ERK1/2, as well as of an upstream signaling activator, Syk, led to decreased bacterial internalization. 61 While uptake of F. novicida was significantly reduced in cells with inhibited ERK1/2, this reduction was not to the level seen in cytochalasin D-treated cells, implicating additional signaling pathways in this process. These investigators used unopsonized bacteria with both human and mouse-derived macrophage cell lines, but found that these two cell types responded similarly to F. novicida infection. Dai et al. also observed that F. novicida stimulated higher levels of proinflammatory cytokines in HMDMs but was not influenced by serum opsonization. 12,59,60 The inability of F. novicida to suppress initial proinflammatory responses likely contributes to its attenuation in humans, but F. novicida is still fully virulent in mice despite a seemingly limited ability to dampen proinflammatory responses. The bacterial factors and host mechanisms that account for the increased sensitivity of mice are unclear and still need to be defined.

Conclusion
Francisella has developed efficient means of attaching to and internalizing into a wide variety of host cells. As access to the intracellular environment is a crucial niche for Francisella within the host, a better understanding of initial interactions with host cells should reveal key components required for full virulence, as well as downstream consequences of the bacterial-host interaction. Targeting steps in these processes may also identify novel targets for therapies to treat infection. The various subspecies of F. tularensis vary considerably in their virulence potential, and yet, with the exception of the Type IV pili, few differences in the mechanisms of attachment and internalization into host cells have been delineated. Like many bacteria, F. tularensis has multiple adhesins and uses a variety of receptors to gain access to the intracellular space. However, because of the incomplete knowledge regarding specific adhesins and the consequences of specific adhesin-receptor interactions, subspecies differences that contribute to virulence differences have yet to be identified or completely understood. However, the expanding knowledge of adhesin-receptor interactions allows for greater understanding of how these interactions impact virulence. For example, uptake via specific receptors can influence the induction of proinflammatory cytokines. Serum-opsonized Schu S4 induces low levels of proinflammatory cytokine production in human monocyte-derived macrophages compared with unopsonized Schu S4 or F. novicida. 59 Furthermore, siRNA knockdown of CR3 results in higher proinflammatory cytokine production, indicating a dependence on C3-CR3-mediated uptake in suppressing proinflammatory responses. Understanding the specific bacterial factors that engage host cells and influence the internalization process is critical for fully understanding how Francisella is able to suppress and evade the immune response.

Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Treatment persistence and switching patterns of ABP 501 in European patients with inflammatory bowel disease

Background: Approval of the adalimumab (ADA) biosimilar ABP 501 for inflammatory bowel disease (IBD) indications was based on the principle of extrapolation, without indication-specific clinical trial data.
Objectives: To evaluate the real-world treatment patterns of ABP 501 in patients with IBD.
Design: Retrospective analysis of pharmacy claims data from Germany and France.
Methods: Continuously insured adult IBD patients who initiated ABP 501 between October 2018 and March 2020 were included. Treatment persistence, adherence, and post-ABP 501 switching patterns were evaluated for two mutually exclusive groups: ADA-naïve patients (i.e. no baseline use of ADA products) and ADA-experienced patients (i.e. previously treated with ADA products).
Results: A total of 3362 German patients and 733 French patients were included, with 54.4% and 65.3% being ADA-naïve patients, respectively. Median persistence (95% CI) on ABP 501 was 10.9 months (9.8-11.6) in ADA-naïve patients and 14.2 months (12.7-15.2) in ADA-experienced patients in Germany; for the French cohort, ADA-naïve and -experienced patients had median persistence of 12.8 months (10.2-14.7) and 11.5 months (8.8-14.4), respectively. During the first 12 months after ABP 501 initiation, 53.7% of German patients and 51.0% of French patients were adherent to the therapy. About 20% of patients in both countries switched from ABP 501 to another targeted therapy. In the German cohort, ADA-naïve patients most frequently switched to non-tumor necrosis factor inhibitor biologics, but ADA-experienced patients most commonly switched to the reference product (RP); in the French cohort, patients most often switched to the RP regardless of prior exposure to ADA products.
Conclusion: About 50% of patients persisted on and were adherent to ABP 501 therapy during the first 12 months after treatment initiation in two large European countries. Post-ABP 501 switching patterns varied between countries, indicating diversified treatment practices warranting further research on reason(s) for switching and potential overall treatment outcomes.

Introduction
Inflammatory bowel disease (IBD) mainly refers to two chronic, immune-mediated inflammatory disorders that primarily affect the intestinal tract: Crohn's disease (CD) and ulcerative colitis (UC). 1 Both disorders can be debilitating and significantly impair health-related quality of life (HR QoL) and work productivity. 2,3 Anti-tumor necrosis factor (TNF) biologics, including adalimumab (ADA) and infliximab, have been shown to significantly improve HR QoL and reduce the need for hospitalization and surgery for patients with IBD 4,5 and are the mainstay for treating patients with moderate-to-severe CD or UC.
6,7 Biosimilars, biologic agents highly similar to the licensed reference product (RP, also known as the originator), can provide additional treatment options for patients. ABP 501 [AMGEVITA® (EU) or AMJEVITA™ (United States); adalimumab-atto, Amgen Inc., Thousand Oaks, CA, USA] is the first ADA biosimilar approved by the European Medicines Agency and the US Food and Drug Administration for the treatment of certain immune-mediated inflammatory diseases including IBD. It has been marketed in the European Union since October 2018 and in the United States starting January 2023. Biosimilarity between ABP 501 and the ADA RP was demonstrated based on the 'totality-of-the-evidence', which includes comprehensive analytical and preclinical assessments, a phase I pharmacokinetics equivalence study in healthy volunteers, and two comparative, randomized, double-blind clinical trials in patients with rheumatoid arthritis and patients with psoriasis. 8-10 Approval of IBD indications for ABP 501, similar to approvals of other anti-TNF biosimilars for the treatment of IBD to date, was based on the principle of 'extrapolation' without IBD-specific clinical trials. [13][14][15] Yet, barriers to biosimilar utilization remain. 16,17 A systematic review evaluating studies mainly from Europe and the United States with data collected between 2013 and 2017 revealed that approximately two-thirds of physicians had concerns regarding biosimilars; indication extrapolation and the lack of clinical trial data in IBD were among the most commonly reported reasons for concern. 18 It is possible that these concerns may even impact the discontinuation of biosimilars after utilization. [21][22][23] Although familiarity with and acceptance of biosimilars have significantly increased among IBD specialists over the past years, 24 additional real-world evidence on ADA biosimilars from European countries can provide valuable information upon US market entry to continuously address any potential barriers to utilization. Therefore, in this retrospective study, we aimed to evaluate the real-world treatment patterns of ABP 501 among patients with IBD in Germany and France.

Methods
Study design and data source. This was a retrospective cohort analysis using the IQVIA German and French pharmacy claims (longitudinal prescription data, LRx) databases, which cover data up to 30 April 2021 (at data lock). The study design schema is presented in Figure 1. The reporting of this study conforms to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement. 25 IQVIA LRx is a longitudinal pharmacy administrative database that gathers data from retail pharmacies. An anonymized unique patient ID is assigned to each patient, which enables longitudinal follow-up of these patients and the evaluation of the dynamic treatment pathway over time through medication reimbursed by health insurance and dispensed in retail pharmacies. The German LRx database was created in 2008 and has national coverage of ~84% of Germany's statutory health insurance patients, with most of the federal states having coverage of 80% or higher. 26 [30][31][32][33]

Study population. Patients (18 years or older) with documented evidence of IBD who received at least one prescription of ABP 501 between October 2018 (market availability) and March 2020 were included in this analysis. Patients were also required to have at least 365 days of continuous observation of overall pharmacy records in the LRx database both before and after initiation of ABP 501, to allow the evaluation of their baseline characteristics and medication use and treatment patterns during the follow-up period, respectively. Diagnosis of IBD was not directly documented in the IQVIA LRx pharmacy claims database. For German LRx, the diagnosis was imputed using machine-learning models developed based on the patient's treatment/prescription histories and validated in IQVIA electronic medical records databases (Supplemental Material 1). For French LRx, the diagnosis of IBD was imputed based on a rule-based approach (Supplemental Material 2).
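As an illustration of the inclusion logic described above (first ABP 501 fill within the October 2018 to March 2020 window, plus 365 days of observable records on either side of the index date), here is a minimal pandas sketch. All table and column names and the toy records are invented for illustration; they are not the IQVIA LRx schema.

```python
import pandas as pd

# Toy claims table; column names are invented for illustration.
rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "drug":       ["ABP501", "ABP501", "ABP501", "ADA_RP"],
    "fill_date":  pd.to_datetime(["2019-02-01", "2019-04-01", "2018-09-01", "2019-05-01"]),
})
obs = pd.DataFrame({   # first/last date each patient is observable in the database
    "patient_id": [1, 2],
    "first_seen": pd.to_datetime(["2017-01-01", "2018-06-01"]),
    "last_seen":  pd.to_datetime(["2021-04-30", "2019-12-31"]),
})

window = (pd.Timestamp("2018-10-01"), pd.Timestamp("2020-03-31"))

# Index date: first ABP 501 fill, required to fall inside the inclusion window
first_fill = (rx[rx.drug == "ABP501"]
              .groupby("patient_id").fill_date.min().rename("index_date"))
cohort = obs.join(first_fill, on="patient_id")
cohort = cohort[cohort.index_date.between(*window)]

# Require >= 365 days of continuous observation before and after the index date
ok = ((cohort.index_date - cohort.first_seen).dt.days >= 365) & \
     ((cohort.last_seen - cohort.index_date).dt.days >= 365)
print(cohort[ok].patient_id.tolist())  # patients meeting the criteria
```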
Study population Patients (18 years or older) with documented evidence of IBD who received at least one prescription of ABP 501 between October 2018 (market availability) and March 2020 were included in this analysis. Patients were also required to have at least 365 days of continuous observation of overall pharmacy records in the LRx database both before and after initiation of ABP 501, to allow evaluation of their baseline characteristics and medication use, and of treatment patterns during the follow-up period, respectively.

Diagnosis of IBD was not directly documented in the IQVIA LRx pharmacy claims database. For German LRx, the diagnosis was imputed using machine-learning models developed from patients' treatment/prescription histories and validated in IQVIA electronic medical records databases (Supplemental Material 1). For French LRx, the diagnosis of IBD was imputed based on a rule-based approach (Supplemental Material 2).

Study outcomes and variables Treatment persistence. Treatment persistence was measured using Kaplan-Meier analysis, which evaluated the cumulative probability of ABP 501 continuation during the follow-up period. Discontinuation of ABP 501 therapy occurred when no additional ABP 501 prescription was detected within the predefined allowable treatment gap of up to 120 days from the end of supply of the previous prescription, or when patients switched to another targeted therapy during ABP 501 supply or within the predefined gap from the end of the previous prescription. Patients who reached the end of their observation period (e.g. end of record or end of database coverage) were classified as 'censored'. A sensitivity analysis using a predefined treatment gap of up to 90 days was conducted to evaluate treatment persistence.

Treatment adherence. Adherence was measured using the medication possession ratio (MPR), calculated as the total prescription duration of all ABP 501 prescriptions dispensed within 365 days of the initiation date (numerator) divided by 365 days (denominator). The MPR was truncated at 1.0 to prevent the population average from being falsely inflated. Patients were considered adherent to ABP 501 if the MPR was ⩾80% of the days covered. (A schematic calculation of persistence and adherence is sketched after this section.)

Statistical analysis This study was descriptive in nature. No a priori hypotheses were tested and no statistical comparisons were conducted between groups. Data analyses were conducted in each country using the country-specific pharmacy claims (LRx) data. Individual data from Germany and France were not pooled. Summary statistics, including mean and standard deviation, were calculated for continuous variables. Frequencies and percentages were reported for categorical variables. Missing data were not imputed. A threshold of 10 patients was required for presenting aggregated results in the French LRx analysis; below that threshold, results were shown as '<10' due to country-specific data privacy protection guidelines. The analyses were carried out using the statistical software SAS Version 9.4 (SAS Institute, Cary, NC, USA).
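To make the persistence and adherence definitions concrete, the sketch below shows how the 120-day gap rule and the MPR could be computed from per-patient dispensing records. This is an illustration only: the record layout, dates, and helper names are hypothetical, and the original analyses were performed in SAS on the LRx data.

```python
from datetime import date, timedelta

# Hypothetical dispensing records for one patient: (fill_date, days_supplied),
# sorted by fill date. This layout is illustrative, not the LRx schema.
fills = [
    (date(2019, 1, 10), 56),
    (date(2019, 3, 12), 56),
    (date(2019, 10, 1), 56),  # a >120-day gap precedes this fill
]

GAP_DAYS = 120          # predefined allowable treatment gap (primary analysis)
FOLLOW_UP_DAYS = 365    # fixed adherence window after initiation

def discontinuation_date(fills, gap_days=GAP_DAYS):
    """Date on which therapy counts as discontinued under the gap rule,
    or None if the patient persists through the last observed fill."""
    for (fill, supply), (next_fill, _) in zip(fills, fills[1:]):
        end_of_supply = fill + timedelta(days=supply)
        if (next_fill - end_of_supply).days > gap_days:
            # No refill within the allowable gap: discontinued when the gap closes.
            return end_of_supply + timedelta(days=gap_days)
    return None

def mpr(fills, follow_up_days=FOLLOW_UP_DAYS):
    """Medication possession ratio over a fixed window from initiation,
    truncated at 1.0 as in the study definition (overlapping supplies
    are not de-duplicated in this simplified version)."""
    start = fills[0][0]
    window_end = start + timedelta(days=follow_up_days)
    covered = sum(min(supply, (window_end - fill).days)
                  for fill, supply in fills if fill < window_end)
    return min(covered / follow_up_days, 1.0)

print(discontinuation_date(fills))   # 2019-09-04 under the 120-day rule
print(f"MPR = {mpr(fills):.2f}")     # MPR = 0.46 -> not adherent (< 0.8)
```

Calling the same function with gap_days=90 reproduces the sensitivity analysis, and a switch to another targeted therapy would be handled analogously by treating the switch date as the discontinuation date.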
Patient characteristics and baseline medication use German patient population. A total of 3362 German patients with IBD were included in the final analysis, consisting of 1828 (54.4%) ADA-naïve and 1534 (45.6%) ADA-experienced patients.

Sensitivity analysis. Sensitivity analysis for treatment persistence was conducted using a permissible treatment gap of up to 90 days.

Discussion The introduction of biosimilars has offered more affordable treatment options to patients, but initial clinician hesitancy in prescribing and patients' reluctance to accept biosimilar treatment pose barriers to utilization in the field of gastroenterology.35 This may be because approval of IBD indications was based on data extrapolated from comparative clinical trials in other disease type(s), such as rheumatoid arthritis and psoriasis. In recent years, increased knowledge of the rigorous approval pathway for biosimilars and supportive randomized controlled trials,36,37 and real-world efficacy and safety data (mainly from infliximab biosimilars),11-14,38-40 have helped increase confidence and comfort with the use of biosimilars in clinical gastroenterology. A 2015 study examining 118 survey responses of IBD-treating clinicians across many European countries reported that familiarity and confidence in the use of biosimilars had improved substantially, with 19.5% stating a lack of confidence in using biosimilars in 2015, down from 63% in 2013.24,41 However, continuous provision of real-world evidence on biosimilar use among patients with IBD remains important to help address concerns and build confidence for both healthcare providers and patients.12-15 Therefore, in the current study, we leveraged the nationally representative pharmacy claims databases of the German and French populations to evaluate treatment persistence, adherence, and switching patterns of ABP 501 among patients with IBD. Despite the differences between the United States and European healthcare delivery systems, such real-world experience from European countries could inform the US medical community upon ADA biosimilars entering the US market starting in January 2023.

Overall, we found that slightly more than half of the study population persisted on and remained adherent to ABP 501 therapy 12 months after treatment initiation in both the German and French cohorts. It is important to note that many factors can impact treatment persistence. Previous studies showed that patients with CD appeared to stay on their initial biological treatment longer than patients with UC.42,46 Prior and concomitant medication use across studied patient populations may also affect persistence results. For example, use of ADA with steroids has been reported to be associated with an increased risk of non-persistence.42,44 When interpreting our results for the biosimilar ABP 501 relative to data on the ADA originator published before the market availability of biosimilars, some additional facts need to be taken into consideration, including the availability of more treatment options and a patient cohort that includes those already treated with the originator before receiving biosimilars. Finally, our study duration included the COVID-19 pandemic period, which has been reported to have significantly impacted clinical practice and treatment persistence.47
When analyzing treatment persistence stratified by prior use of ADA products, we observed a larger proportion of persistent users among ADA-experienced patients than among ADA-naïve patients in the German patient population, consistent with a previous line of evidence from a real-world study of ABP 501.11 In part, this may be because ADA-experienced patients are more likely to be stable on, responsive to, and tolerant of ADA therapy, in turn leading to better persistence than in patients who were new starters of ADA therapy. However, it is important to note that baseline clinical characteristics of patients were not available in the LRx database, and baseline medication use differed between ADA-naïve and ADA-experienced patients in our cohort, both of which could have an impact on treatment persistence and adherence. In our French cohort, we did not observe substantial differences in ABP 501 persistence between ADA-naïve and -experienced patients. This could be because the initiation of biologics, including biosimilars, in France is restricted to hospital-based specialists, a setting in which patients are more closely managed and followed when they initiate a new medicine.

Regarding initial switching patterns after ABP 501 therapy, we observed an interesting finding: the German cohort showed different switching patterns for ADA-naïve and ADA-experienced patients. Of the patients who switched from ABP 501 to another targeted therapy, ADA-naïve patients most commonly switched to non-TNFi biological treatments, whereas ADA-experienced patients most frequently switched back to the ADA RP. This may be attributable to the nocebo effect, documented as a more negative effect of an intervention induced by negative perceptions or expectations,48 as this pattern was not observed in the ADA-naïve patients, who had not previously been treated with the RP. A recent systematic review assessed more than 30 studies in which patients were switched from infliximab RP to an infliximab biosimilar. It found that median discontinuation rates were 14.7% in open-label studies, compared with 6.95% in double-blind trials, supporting the idea that the nocebo effect may influence biosimilar acceptance in patients.49 Interestingly, this difference in switching patterns was not observed in French patients, who most often switched to the ADA RP regardless of prior exposure to ADA products. One reason might be the timing of the availability of non-TNFi drugs in the French market. In Germany, most of the patients who switched moved from ABP 501 to either an IL12/23 inhibitor or an integrin antagonist. In France, however, these two classes of drugs were largely unavailable during the study period (e.g. vedolizumab, the only drug of the integrin antagonist class, was first available in French retail pharmacies in March 2021; ustekinumab, the only IL12/23 inhibitor in France, had its UC indication endorsed for reimbursement by French health authorities in July 2020).

Figure 2. Kaplan-Meier curves of treatment persistence of biosimilar ABP 501 among German (a) and French (b) patients with inflammatory bowel disease.

Table 1a. Baseline characteristics of German patients with IBD, stratified by prior exposure to an ADA RP or a biosimilar.
ADA-naïve patients, n = 1828; ADA-experienced patients, n = 1534. a n = 1297 were switched to ABP 501 from the ADA RP and n = 237 were switched from other ADA biosimilars. b The percentages of females and males are based on 2917 patients with sex data available. c Categories are not mutually exclusive; patients were possibly treated with more than one category of drugs. ADA, adalimumab; IBD, inflammatory bowel disease; JAKi, Janus kinase inhibitor; RP, reference product; SD, standard deviation; TNFi, tumor necrosis factor inhibitor.

Table 1b. Baseline characteristics of French patients with IBD, stratified by prior exposure to an ADA RP or a biosimilar. a n = 246 were switched to ABP 501 from the ADA RP and n < 10 were switched from other ADA biosimilars. b The percentages of females and males are based on 729 patients with sex data available. c 'Hospital-based prescribers' refers to treating physicians practicing in a hospital setting, including those specializing in gastroenterology or internal medicine. d Categories are not mutually exclusive; patients were possibly treated with more than one category of drugs. ADA, adalimumab; IBD, inflammatory bowel disease; JAKi, Janus kinase inhibitor; RP, reference product; SD, standard deviation; TNFi, tumor necrosis factor inhibitor.

The adherence rate to ABP 501 treatment (defined as MPR ⩾ 80%) during the first 12 months after treatment initiation was 53.7% overall, 52.0% in ADA-naïve patients, and 55.7% in ADA-experienced patients. In the French patient population, median persistence was 12.8 months (95% CI: 10.2-14.7) in ADA-naïve patients and 11.5 months (95% CI: 8.8-14.4) in ADA-experienced patients [Figure 2(b)]. At the end of the 12 months after treatment initiation, 50.6% (95% CI: 46.9-54.2) of all French patients remained on ABP 501 therapy [Figure 2(b)].

Table 2a. Switch rates and patterns among German patients with IBD who switched from biosimilar ABP 501 to another targeted therapy during the first 12 months after initiating ABP 501.

Table 2b. Switch rates and patterns among French patients with IBD who switched from biosimilar ABP 501 to another targeted therapy during the first 12 months after initiating ABP 501.

6. Siegel CA, Yang F, Eslava S, et al. Treatment pathways leading to biologic therapies for ulcerative colitis and Crohn's disease in the United States. Clin Transl Gastroenterol 2020; 11: e00128.
7. Colombel JF and Armuzzi A. The emerging treatment landscape of inflammatory bowel disease: role of innovator biologics and biosimilars. EMJ Gastroenterol 2018; 7: 50-57.
8. Markus R, McBride HJ, Ramchandani M, et al. A review of the totality of evidence supporting the development of the first adalimumab biosimilar ABP 501. Adv Ther 2019; 36: 1833-1850.
9. Papp K, Bachelez H, Costanzo A, et al. Clinical similarity of the biosimilar ABP 501 compared with adalimumab after single transition: long-term results from a randomized controlled, double-blind, 52-week, phase III trial in patients with moderate-to-severe plaque psoriasis. Br J Dermatol 2017; 177: 1562-1574.
10. Papp K, Bachelez H, Costanzo A, et al. Clinical similarity of biosimilar ABP 501 to adalimumab in the treatment of patients with moderate to severe plaque psoriasis: a randomized, double-blind, multicenter, phase III study. J Am Acad Dermatol 2017; 76: 1093-1102.
Safety and Viability of Totally Tubeless Ambulatory Percutaneous Nephrolithotomy (APCNL) in the Fast Paced World

Received: October 10, 2017. Revised: March 10, 2018. Accepted: March 20, 2018.

Abstract: Background: Percutaneous nephrolithotomy (PCNL) is the gold standard for endoscopic management of large renal stones. Various modifications have been made to reduce the morbidity of this procedure. Ambulatory PCNL (APCNL) defines PCNL as a day-care procedure, avoiding an overnight hospital stay, i.e. a stay of less than 24 hours. The totally tubeless approach allows faster recovery without the need for a double J stent or nephrostomy tube. This study aimed at exploring the feasibility and safety of APCNL in selected patients. It also aimed at improving the procedure to facilitate early recovery and discharge of patients within 24 hours.

INTRODUCTION The formation of a surgical percutaneous tract to access the anatomy of the pelvicalyceal system of the kidney paved the way for innovation and urological advances in the procedure known as percutaneous nephrolithotomy (PCNL). The contemporary approach is superior to the open approach in terms of morbidity, convalescence, and cost, thereby substituting open removal of large complex calculi.

PCNL has been the gold standard for larger stones. There has been a constant debate on the indications for PCNL vs retrograde intrarenal surgery (RIRS) vs extracorporeal shockwave lithotripsy (ESWL) for stones between 1 cm and 2 cm [1]. In practice, however, the decision depends on the anatomy of the pelvicalyceal system, the availability of instruments, the cost of the procedure, the preference of the patient, and the stone-free rates.

In the present era, newer technologies have emerged to reduce the morbidity of PCNL. Smaller instruments have led to the miniaturisation of traditional PCNL into mini-PCNL, ultra-mini PCNL and micro-PCNL [2]. Similarly, the traditional 'tubed' PCNL, characterized by the presence of a nephrostomy tube/double J stent and urinary catheter, has been replaced in selected cases by totally tubeless PCNL. Totally tubeless PCNL denotes the absence of both the nephrostomy tube and the double J stent.

The technique has evolved from its archaic methods in order to improve patient outcomes. Among this conglomeration of advances, ambulatory PCNL (APCNL) distinguishes itself from its predecessor, which required a prolonged hospital stay of 4-6 days owing to the practice of inserting a nephrostomy tube at the conclusion of the procedure for faster healing of the nephrostomy tract, promoting haemostasis, preventing urinary extravasation and draining infection [3].

This study was conducted primarily to look at the feasibility of ambulatory PCNL. It also aimed at exploring the various factors which influence the outcome of APCNL in selected patients.
MATERIALS AND METHODS This was a prospective study conducted at our tertiary hospital from April 2016 to March 2017. Of the 682 PCNLs performed during this period, 12 patients were selected for totally tubeless ambulatory PCNL (APCNL). All procedures were performed by a single experienced surgeon. The standard protocol was followed, which included careful history-taking and examination of the patients. Investigations in the form of a complete blood picture, renal function tests with electrolytes, coagulation profile, and urine microscopy with culture and sensitivity were performed. Imaging included ultrasound of the kidneys, ureters and bladder (KUB), X-ray KUB, contrast-enhanced computed tomography (CECT), intravenous urogram (IVU) and retrograde pyelogram (RGP), tailored as per requirement.

The inclusion criteria comprised well-informed patients with no associated clinical co-morbidities; residence within a radius of 15 km from the hospital; visualised stone size <2 cm on imaging such as X-ray/CT/IVU; no prior renal or ureteric surgery; and CT or RGP revealing normal pelvicalyceal anatomy (Figs. 1, 2).

Preoperatively, preanaesthetic evaluation was done on an outpatient basis; factors such as a body mass index less than 30, an American Society of Anesthesiologists (ASA) score less than 3, absence of comorbidities, and mental soundness were considered. Patients and their families were given an explanation, in a language known to them, of the procedure, postoperative care, discharge and follow-up. They were told about postoperative analgesia, the antibiotic regimen and the possible need for a re-procedure in the form of double J stenting. It was ensured that all the patients had sterile urine.

All patients underwent totally tubeless PCNL, i.e. without nephrostomy, DJ stent or catheter. A preoperative antibiotic was administered an hour before the procedure. All patients received general anaesthesia with endotracheal intubation. They underwent cystoscopy and retrograde pyelogram with 5F ureteric catheter insertion. A 14F urinary Foley catheter was placed. Patients were placed in the prone position. Under fluoroscopy, puncture of the desired calyx was made using an 18G needle by the bull's-eye technique. A Schuller metallic guide wire was inserted and the central rod was placed. A flexible-tip 0.035-inch safety guide wire was secured. Using Alken coaxial metallic dilators, the tract was dilated to between 26F and 30F, and an Amplatz™ sheath of similar size was placed. The Amplatz size/nephrostomy tract was selected as per the dilatation of the calyx on imaging. Every tract was dilated up to the calyx, not to the infundibulum, to avoid infundibular injury and bleeding. Care was taken to avoid more than two punctures. Nephroscopy was done using a 20.8F mini-perc nephroscope or a 26F regular nephroscope. The stone was removed intact, or fragmented into two pieces and removed. Nephroscopy was performed to inspect the calyces and pelvis for any bleeding, perforation or fragments. If there was no bleeding or perforation, the procedure was concluded without placement of a nephrostomy tube or double J stent. The tract was inspected for active bleeding while withdrawing the nephroscope. The skin and the PCNL tract were infiltrated with local anaesthetic (0.25% bupivacaine). The skin incision was closed with a 2-0 Ethilon suture. Intramuscular aceclofenac injection or an oral acetaminophen 325 mg plus tramadol 37.5 mg combination was given, depending on pain in the postoperative period. Patients were followed up after 2 weeks.
RESULTS Twelve patients (n=12) underwent totally tubeless percutaneous nephrolithotomy. Of the 12 patients, n=8 (66.6%) were males and n=4 (33.3%) were females. The mean age was 44 years, with the youngest being 29 years and the oldest 55 years. The standard preanaesthetic protocol was followed. All were evaluated preoperatively on an outpatient basis and were later given an appointment on a planned date. All the patients were admitted on the day of the procedure. They underwent prone PCNL under general anaesthesia. The American Society of Anesthesiologists score was 1 in n=4 (33.3%) patients, 2 in n=7 (58.3%) patients and 3 in n=1 (8.3%) patient; hence more than 90% of the patients had a favourable ASA score (Table 1). All patients had a single stone. Laterality was equal, with 50% on the left and right each. None had a stone in the upper calyx. The majority, n=7 (58.3%), had a stone in the lower calyx, followed by n=3 (25.0%) in the pelvis and n=2 (16.6%) in the middle calyx (Table 2). CT imaging showed normal pelvicalyceal anatomy; there were no perinephric events or anomalies. Of the 12 patients, n=9 (75.0%) had radio-opaque stones, shown on both X-ray and CT KUB, and n=3 (25.0%) had radiolucent stones. After placing the 5F ureteric catheter and performing RGP with visualisation of the pelvicalyceal system and the stone, and dilatation of the system, patients were turned prone. All of them (n=12) had a single puncture using the 18G puncture needle. All had infracostal punctures, with n=10 (83.3%) inferior calyceal and n=2 (16.6%) middle calyceal punctures. The punctures were clear and were confirmed by the free flow of urine; there was little blood staining. The mean fluoroscopy time was 5 minutes. The Amplatz sizes were as follows: 26F in n=3 (25%), 28F in n=4 (33.3%) and 30F in n=5 (41.7%). The mean operative time was 44.55 minutes (37 minutes to 54 minutes) from cystoscopy to withdrawal of the Amplatz sheath. In n=4 cases stones were removed intact, and in the remaining n=8 cases stones were fragmented into two pieces using a pneumatic lithotripter and removed. The amount of saline used for irrigation was less than 300 ml. None of the patients had any pelvicalyceal bleeding or perforation; hence the procedure was totally tubeless PCNL without a nephrostomy tube or double J stent. All the patients received local anaesthesia at the puncture site with 10 ml of 0.25% bupivacaine. Skin was closed with 2-0 Ethilon, which was removed on postoperative day 7 in the outpatient department (Table 3). Post procedure, patients were taken to the post-operative recovery room, where vitals were checked and saline nebulisation was given to aid recovery from endotracheal intubation. Once patients had recovered and the urine was clear, the Foley catheter was removed. There was a negligible drop in post-operative haemoglobin, which was less than 0.1% (Table 4). Patients were encouraged to mobilise early and hence were transferred from the bed to a relaxing chair. The mean hospital stay was 20 hours and 39 minutes. Before discharge, an ultrasound KUB was done to look for any perinephric collection, clots in the kidneys, or residual fragments, and was normal in all. Patients were counselled regarding possible complications such as haematuria, retention of urine, loin pain, fever, chills, voiding symptoms and abdominal distension. Oral analgesia in the form of tramadol 37.5 mg plus acetaminophen 325 mg was given on demand, at intervals of no less than 8 hours. All the patients required post-operative analgesia; the least was 1 tablet, taken by n=3 patients, and 8 tablets was the maximum, taken by n=1 patient (Table 5).
All the patients were discharged on the same operative evening, once they were ambulant and the urine was clear. Nevertheless, n=2 discharged patients visited the emergency department the next day, at 36 hours and 48 hours respectively, due to loin pain. USG KUB, CT KUB and urine microscopy were done, and were normal. They were given an intramuscular aceclofenac 75 mg injection, with which the pain subsided. Counselling regarding immediate follow-up in case of haematuria, pain or urinary tract infection was given; however, none returned with the aforementioned complaints.

DISCUSSION The formation of a surgical percutaneous tract to access the anatomy of the pelvicalyceal system of the kidney was described by Fernstrom and Johansson (1976) and staged by Wickham (1979). They started with percutaneous nephrostomy under local anaesthesia, followed by serial dilatation of the tract over the next few days, with subsequent stone removal under general anaesthesia using a rigid 30° cystoscope. This paved the way for innovation and urological advances in the procedure known as percutaneous nephrolithotomy (PCNL) [4-6].

The technique has evolved from its archaic methods in order to achieve better patient outcomes. The contemporary approach is superior to the open approach in terms of morbidity, convalescence, and cost, thereby substituting open removal of large complex calculi.

The traditional school of thought was inclined towards standard PCNL, where nephrostomy tubes provided haemostasis along the tract, avoided urinary extravasation and maintained adequate drainage from the kidney, with double J stents used for urinary drainage down to the bladder [7]. However, based on the concept that the purpose of the tube is only to maintain adequate drainage of the kidney, a 'tubeless' approach was developed by placing a ureteral stent or catheter to provide drainage after PCNL in lieu of a nephrostomy tube.

The totally tubeless approach was first reported by Wickham and co-workers, who stated that 'provided the kidney is stone-free, the collecting system remains intact and there is not excessive bleeding, there is no need of nephrostomy tube'. This approach was built upon by Aghamir et al., with patient inclusion criteria comprising anomalies such as horseshoe kidneys, rotational anomalies and ectopic kidneys [8]. The findings from that study highlighted the lower analgesic requirements, lesser need for prolonged hospitalisation and quicker return to normal day-to-day life. Totally tubeless PCNL has outcomes similar to standard PCNL in terms of stone-free rate, without increasing complications in selected cases. Tubeless PCNL is a safe and effective procedure and is associated with shorter hospitalization and lower analgesic requirements [9]. In selected patients it can be totally tubeless: three tubes less.

Two case series published in 2010 emphasize that, with sound clinical judgement in patient selection, good outcomes are achievable; this extends to a case report on tubeless PCNL for bilateral struvite stones by Beiko et al. and Shahrour et al. Our prospective study provides quality evidence in support of totally tubeless APCNL in the same manner. In the present study, all the patients (n=12) underwent totally tubeless PCNL without placement of a nephrostomy tube, double J stent or ureteric catheter. None of these patients had any intraoperative bleeding or injury to the pelvicalyceal system, hence avoiding the tubes.
Appropriate patient selection forms a crucial basis for attaining good outcomes and ensuring the safety of patients in ambulatory PCNL. Residence within a radius of 15 km, along with compliance and reliability, were vital facets of the study. We ensured that the patients understood and approved the proposed care. Intraoperatively, all patients underwent formation of a single tract with a single puncture and had no intraoperative impediments such as excessive haemorrhage or perforation. On fluoroscopy, all were deemed to have no stone remnants. In terms of the postoperative criteria, the patients were haemodynamically stable, as evidenced by normal haemoglobin values.

The first research on APCNL was by Singh et al., who published 10 cases performed under spinal anaesthesia [10]; their patients were kept under observation for 40 hours. Correspondingly, in another study by Bellman et al., the median hospital stay was 0.6 day, or 14.4 hours [11]. Our study demonstrated a median hospital stay of less than 24 hours, namely 20 hours and 39 minutes.

The study by Beiko et al. included three patients with tubeless PCNL, with a mean operating time of 70 minutes; they placed ureteric catheters, which were removed postoperatively. In our study the number of patients was 12 and all underwent totally tubeless PCNL without any tubes. Our mean operating time was shorter than in their study (44.5 minutes vs 70 minutes); however, the hospital stay was shorter in the study by Beiko et al. [12].

Since patients are discharged within 24 hours, the probability of patients revisiting the emergency room with complications is high. The usual complications in a patient with totally tubeless PCNL are features of urinary tract infection (UTI), haematuria, sepsis, urinary retention and loin pain. The study by Shahrour et al. had two complications: a multiresistant UTI with Escherichia coli, which required higher intravenous antibiotics, and a deep vein thrombosis in a second patient, who received low molecular weight heparin [13]. Our series had no post-operative complications, but two patients returned to the emergency room with loin pain. These patients were evaluated with ultrasound/CT KUB and urine microscopy, which were normal; hence they were treated with analgesics.

CONCLUSION Treatment of renal stones is individualized. Stones up to 2 cm, when in the lower pole, can be treated by ESWL, RIRS or PCNL. RIRS involves higher cost, may need a more-than-one-stage procedure, and entails loss of working days. Any procedure that gives the benefit of one-stage clearance and early return to work, with the extra advantage of lower costs, will be suitable for patients if selected judiciously. Totally tubeless PCNL, as done in our patients, provides the maximum benefits of minimally invasive treatment with lower postoperative pain, early discharge, early return to work and no additional procedure (stent removal/check ureteroscopy) after one month, hence lowering the total costs. Making totally tubeless PCNL ambulatory adds the benefits of reduced hospital stay, decreased cost of healthcare and early recovery. However, this group of patients should be properly advised and counselled to use a nearby healthcare facility for analgesics or catheterization in case of emergency.

Fig. (1). X-ray kidney ureter and bladder (KUB) showing a 1.5 cm radiopaque shadow at the level of the L1/L2 vertebrae in the area of the right kidney.
Novel post-transcriptional and post-translational regulation of pro-apoptotic protein BOK and anti-apoptotic protein Mcl-1 determine the fate of breast cancer cells to survive or die

Deregulation of apoptosis is central to cancer progression and a major obstacle to effective treatment. The Bcl-2 gene family members play important roles in the regulation of apoptosis and are frequently altered in cancers. One such member is the pro-apoptotic protein Bcl-2-related Ovarian Killer (BOK). Despite its critical role in apoptosis, the regulation of BOK expression is poorly understood in cancers. Here, we discovered that miR-296-5p regulates BOK expression by binding to its 3'-UTR in breast cancers. Interestingly, miR-296-5p also regulates the expression of the anti-apoptotic protein myeloid cell leukemia 1 (Mcl-1), which is highly expressed in breast cancers. Our results reveal that Mcl-1 and BOK constitute a regulatory feedback loop, as ectopic BOK expression induces Mcl-1, whereas silencing of Mcl-1 results in reduced BOK levels in breast cancer cells. In addition, we show that silencing of Mcl-1, but not BOK, reduced the long-term growth of breast cancer cells. Silencing of both Mcl-1 and BOK rescued the effect of Mcl-1 silencing on breast cancer cell growth, suggesting that BOK is important for attenuating cell growth in the absence of Mcl-1. Depletion of BOK suppressed caspase-3 activation in the presence of paclitaxel and in turn protected cells from paclitaxel-induced apoptosis. Furthermore, we demonstrate that glycogen synthase kinase 3 (GSK3) α/β interacts with BOK and regulates its level post-translationally in breast cancer cells. Taken together, our results suggest that fine-tuning of the levels of the pro-apoptotic protein BOK and the anti-apoptotic protein Mcl-1 may decide the fate of cancer cells to either undergo apoptosis or proliferate.

INTRODUCTION Apoptosis is an evolutionarily conserved process that is critical for the maintenance of tissue homeostasis in multicellular organisms [1]. Apoptosis also plays an important role during embryonic development [2,3]. In addition, alteration of apoptotic pathways is associated with several pathological conditions, including cancers [4-6]. For example, evasion of apoptosis is central to cancer growth and progression, and defects in apoptotic pathways result in resistance to chemotherapy drugs [7,8]. There are two major apoptotic pathways: the extrinsic pathway, which involves transmembrane death receptor-mediated interactions, and the intrinsic pathway, which involves mitochondria-dependent events. The Bcl-2 gene family members are key regulators of the intrinsic apoptotic pathway and include pro- and anti-apoptotic proteins [9,10]. Examples of anti-apoptotic proteins include Bcl-2, Bcl-xL, and Mcl-1, which are important for maintaining mitochondrial integrity, while BAX, BAK, BAD and BOK are pro-apoptotic proteins that facilitate the disruption of the mitochondria and the release of cytochrome c, an apoptogenic factor from the intermembrane space that is crucial for the activation of caspase-3 and caspase-7, which execute the final steps of programmed cell death [11]. Bcl-2 gene family members are known to be important contributors to tumorigenesis and are therefore considered promising therapeutic targets [12]. For example, Bcl-2 and Mcl-1 are highly expressed in cancers, promote cancer cell survival and are associated with poor therapeutic outcomes [13,14].
In addition, increased levels of Bcl-2 or Mcl-1 promote the accumulation of apoptosis-resistant neoplastic cells and also help cancer cells evade immune surveillance [15]. Similarly, loss of pro-apoptotic proteins such as BAX or BAK is frequently observed in cancers [16]. Despite their proven role in tumorigenesis and years of investigation, we are far from completely understanding the mechanism(s) by which anti-apoptotic and pro-apoptotic proteins are regulated in cancers. This is especially important given that the dynamic equilibrium of pro-apoptotic and anti-apoptotic proteins is crucial for maintaining cellular homeostasis, and disruption of this balance is one of the ways cancer cells can evade apoptosis.

In this report, we address the regulation of the pro-apoptotic protein BOK in breast cancers. BOK is localized in the mitochondria, endoplasmic reticulum (ER), Golgi, and nucleus [17]. The pro-apoptotic function of BOK has been shown to be largely dependent on the presence of BAX or BAK. In addition, BOK is reported to interact with the anti-apoptotic protein Mcl-1, and BOK-induced apoptosis is shown to be suppressed by Mcl-1 [17,18]. Moreover, recent evidence shows that BOK is frequently deleted in cancers [19]. Our results reveal that BOK expression in breast cancer is regulated at the post-transcriptional as well as post-translational levels. We show that miR-296-5p regulates BOK expression by binding to its 3' untranslated region (UTR). Interestingly, we show that miR-296-5p also regulates Mcl-1 expression by binding to its 3'-UTR. Furthermore, we report that BOK and Mcl-1 may constitute a feedback loop to regulate each other's stability and function. More notably, knockdown of BOK attenuates paclitaxel-induced caspase-3 activation and subsequently apoptosis. In addition to miR-296-5p, we demonstrate that glycogen synthase kinases 3 α/β (GSK3α/β) regulate BOK expression by physically interacting with BOK in breast cancer cells. These findings suggest that the levels of pro-apoptotic and anti-apoptotic signals are tightly regulated, and any alteration in the relative ratio and function of pro- and anti-apoptotic proteins can predispose a normal developmental event to malignant transformation.

Decreased BOK levels in breast cancers To begin to address the importance of BOK in cancers, we first investigated BOK expression levels in breast cancers. Meta-analysis of The Cancer Genome Atlas (TCGA) data set for breast cancers (https://gdc.cancer.gov/) [20] showed significantly lower levels of BOK in tumor tissue specimens compared to normal controls (Figure 1A). Next, we addressed the clinical significance of lower BOK expression in breast cancers. Kaplan-Meier analysis revealed that higher BOK expression positively correlated with overall survival as well as relapse-free survival of breast cancer patients (Figures 1B-1C).

miR-296-5p regulates both the pro-apoptotic protein BOK and the anti-apoptotic protein Mcl-1 Since miRNAs have been shown to regulate the expression of several genes involved in apoptosis, we predicted that miRNAs might play an important role in regulating BOK expression in breast cancer. Using a bioinformatics approach, we identified 39 miRNAs that are predicted to target BOK. Of those, miR-296-5p was predicted by at least four algorithms, including TargetScan, miRDB, miRanda and mirTarget2, to bind to multiple sites in the 3'-UTR of BOK (the seed-matching idea behind such predictions is sketched below). Before we addressed miR-296-5p-dependent regulation of BOK, we determined the role of miR-296-5p in breast cancer.
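As an aside on the prediction step, the sketch below illustrates the canonical seed-matching rule that target-prediction tools such as TargetScan build upon: scanning a 3'-UTR for perfect matches to the reverse complement of the miRNA seed (nucleotides 2-8). Both sequences here are invented placeholders rather than the actual miR-296-5p or BOK sequences, and real algorithms layer conservation and context scoring on top of this simple scan.

```python
# Minimal seed-match scan, assuming a 7mer-m8 site definition: a perfect
# match in the UTR to the reverse complement of miRNA bases 2-8.
# Sequences below are placeholders, not the real miR-296-5p or BOK 3'-UTR.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based UTR positions of perfect 7mer-m8 seed matches."""
    seed = mirna[1:8]  # miRNA bases 2-8 (0-indexed positions 1..7)
    target = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

mirna = "AGGGCCCCAAUUCCGGUUAAG"          # placeholder miRNA (RNA alphabet)
utr = "AUGGGGGGCCCUAAGGGGGGCCCUCC"       # placeholder 3'-UTR fragment

print(seed_sites(mirna, utr))            # -> [4, 16]: two candidate sites
```

The multiplicity of hits returned by this kind of scan is what underlies statements such as "multiple predicted binding sites in the BOK 3'-UTR".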
Consistent with previous reports [21,22], our results revealed that miR-296-5p acts as a tumor suppressor, as miR-296-5p inhibited the long-term viability, migration and invasion of breast cancer cells (Figure 2). Next, we tested whether BOK is indeed a bona fide target of miR-296-5p by assessing BOK levels in MDA-MB-231 and MDA-MB-468 breast cancer cells ectopically expressing miR-296-5p or anti-miR-296-5p (a miR-296-5p inhibitor). Overexpression of miR-296-5p resulted in significantly decreased BOK mRNA and protein levels in breast cancer cells compared to mock or untransfected controls (Figures 3A-3D and Supplementary Figure 1). Conversely, inhibition of miR-296-5p using anti-miR-296-5p led to increased levels of BOK (Figures 3E-3H). Next, we tested whether miR-296-5p regulates BOK expression by binding to its 3'-UTR. To examine this, we cloned two predicted miR-296-5p binding seed regions in the BOK 3'-UTR (seed 1: nucleotides 299 to 322, and seed 2: nucleotides 980 to 1005) and their respective mutant variants downstream of the luciferase gene in the pGL3-promoter vector and measured luciferase activity (Figure 3I). Co-transfection of the BOK 3'-UTR constructs and miR-296-5p resulted in decreased luciferase activity compared to mock-transfected breast cancer cells (Figure 3J). In contrast, expression of the mutant BOK 3'-UTR constructs was unaffected by ectopic miR-296-5p expression (Figure 3J), further supporting the notion that BOK is a direct target of miR-296-5p.

As the pro-apoptotic function of BOK is shown to be dependent on BAX or BAK [23], we wondered whether miR-296-5p, in addition to BOK, regulates other Bcl-2 family proteins. Our bioinformatics analysis using target prediction algorithms revealed putative miR-296-5p binding sites in the BID, BIM, BAK, PUMA, Mcl-1, Bcl-2 and Bcl-xL 3'-UTRs (data not shown). Overexpression of miR-296-5p resulted in significantly decreased levels of Mcl-1, Bcl-2, and Bcl-xL (Figures 4A-4B and Supplementary Figure 2); however, the levels of the pro-apoptotic proteins BID, BIM, BAK, BAX, or PUMA did not change (Supplementary Figure 3). These findings are interesting given that Mcl-1, Bcl-2, and Bcl-xL are anti-apoptotic proteins. To determine whether anti-apoptotic proteins are directly targeted by miR-296-5p, we focused on Mcl-1, as it has been reported to interact with BOK. Moreover, our bioinformatics analysis revealed a perfect miR-296-5p seed sequence match within the Mcl-1 3'-UTR (Figure 4C). Co-transfection of the Mcl-1 3'-UTR luciferase construct and miR-296-5p led to significantly reduced luciferase activity compared to mock-transfected breast cancer cells (Figure 4D). Consistent with that, the mutant Mcl-1 3'-UTR (with the seed sequence mutated) luciferase construct did not show any significant change in luciferase levels in the presence of miR-296-5p (Figure 4D). We chose MDA-MB-231 and MDA-MB-468 cells for determining miR-296-5p function as well as the regulation of BOK and Mcl-1, as these cell lines have lower levels of miR-296-5p and higher levels of BOK and Mcl-1 compared to other breast cancer cell lines (Supplementary Figures 4A-4B). Taken together, our data suggest that miR-296-5p regulates the expression of both pro-apoptotic and anti-apoptotic proteins in breast cancers.

Surprisingly, knockdown of both BOK and Mcl-1 increased the levels of Bcl-xL, Bcl-2, BIM, and BAX when compared to either BOK or Mcl-1 knockdown alone.
These findings indicated that pro- and anti-apoptotic proteins regulate each other's expression in cancer cells. To understand the functional relevance of the cross-talk between pro- and anti-apoptotic proteins, we performed long-term viability assays in breast cancer cells depleted of both BOK and Mcl-1. Our results showed that knockdown of BOK had no effect on long-term viability, while silencing of Mcl-1 led to a significantly reduced number of colonies in MDA-MB-231, MDA-MB-468, and MCF7 breast cancer cells (Figure 5D). However, the number of MCF7 colonies was significantly reduced in both BOK- and Mcl-1-silenced cells (Figure 5D). The reduced long-term survival of BOK-silenced MCF7 cells could be due to the higher levels of miR-296-5p in MCF7 cells compared to MDA-MB-231 and MDA-MB-468 cells. In addition, the presence of functional p53, along with the positive ER status of MCF7 cells [25], could affect their long-term survival under BOK-depleted conditions. In accordance with this, ERα is known to activate p53 by blocking MDM2 inhibition of p53 [26]. Therefore, it is possible that the presence of ERα may amplify p53 stability, and increased activation of p53 may sensitize cells to apoptotic signaling in BOK-silenced MCF7 cells. Furthermore, differences in the expression of Bcl-2 family proteins and the lack of functional caspase-3 in MCF7 cells [27] may also contribute to the reduced long-term viability of BOK-silenced MCF7 cells compared to MDA-MB-231 or MDA-MB-468 cells. Nevertheless, depletion of both BOK and Mcl-1 rescued the growth-inhibitory effect of Mcl-1 silencing (Figure 5D). Next, we assessed whether the compensatory effects of BOK and Mcl-1 silencing on breast cancer growth have a similar effect on apoptosis. To address this, miR-296-5p-, BOK- or Mcl-1-knockdown cells were treated with paclitaxel, a chemotherapy drug that stabilizes microtubules and induces apoptosis [28] via caspase-3 activation [29]. Breast cancer cells treated with paclitaxel showed caspase-3 activation compared to controls (Figures 6A, 6B). Notably, breast cancer cells transfected with either miR-296-5p or siRNA against BOK showed significant suppression of activated caspase-3 levels in the presence of paclitaxel compared to vehicle treatment (Figures 6C-6F). In contrast, cleaved caspase-3 levels were significantly elevated in Mcl-1-specific knockdown cells in the presence of paclitaxel when compared to vehicle-treated cells (Figures 6G-6H). To further substantiate these results, we performed annexin V-FITC/PI staining on miR-296-5p- or BOK siRNA-transfected breast cancer cells. Consistent with our earlier results, miR-296-5p or BOK silencing did not induce apoptosis. Furthermore, in the presence of paclitaxel treatment, miR-296-5p and BOK silencing suppressed paclitaxel-induced apoptosis compared to scramble-transfected breast cancer cells treated with paclitaxel (Supplementary Figure 6). These results suggested that miR-296-5p could protect cancer cells from paclitaxel-induced apoptosis via BOK.

(Figure legend) Cells were co-transfected with the renilla luciferase vector and firefly luciferase constructs containing either pGL3-wt-BOK or the pGL3-BOK seed sequence mutants (seed 1, seed 2) in the presence and absence of miR-296-5p. Firefly luciferase activity for each sample was normalized to renilla luciferase activity. Data represent the mean ± SD of three independent experiments.
*, p<0.05; **, p<0.01.

GSK3α/β regulates BOK expression in breast cancer cells Having shown that BOK is a critical regulator that plays an important role in determining whether cancer cells die or survive, we wondered whether there are other means by which BOK expression may be regulated in breast cancers. One possibility is post-translational modification, as a recent report demonstrated that BOK protein levels can be regulated via the ubiquitin degradation pathway [24]. To begin to study this, we generated breast cancer cell lines stably expressing BOK. Interestingly, we observed elevated BOK mRNA but not BOK protein in our stable breast cancer cells. This result further supported the notion that BOK expression is regulated at the post-translational level. As protein phosphorylation is understood to regulate the expression of a multitude of proteins, we investigated whether BOK protein is phosphorylated by protein kinases. Using an in silico approach (http://www.cbs.dtu.dk/services/NetPhos/), we identified multiple sites where BOK can potentially be phosphorylated by kinases, including protein kinase A (PKA), protein kinase C (PKC), and glycogen synthase kinase 3 (GSK3) (Supplementary Figure 7). For further analysis, we focused on GSK3, as it has been shown to be associated with mitochondrial apoptotic signaling [30]. Moreover, GSK3 is known to phosphorylate other Bcl-2 members such as BAX [31]. The GSK3 gene family consists of GSK3α and GSK3β, each of which has distinct roles, but they are also known to compensate for each other's function [32]. Immunoprecipitation using antibodies against the myc-tag or BOK identified GSK3α/β as bona fide BOK-interacting proteins (Figure 7A). Next, we directly tested whether GSK3α/β regulates BOK expression. Pharmacological inhibition of GSK3 with CHIR99021, or silencing of GSK3α/β using siRNAs, resulted in elevated BOK protein levels in breast cancer cells (Figures 7B-7E). However, BOK mRNA levels did not show any significant change (data not shown), further confirming that GSK3 regulates BOK expression at the post-translational level. Future experiments using deletion constructs will identify the potential sites in the BOK protein that are phosphorylated by GSK3 or other kinases. Nevertheless, to our knowledge this is the first report to show that BOK expression is regulated at the post-translational level by GSK3.

miR-296-5p and its target gene expression in breast cancers We determined whether miR-296-5p expression correlated with BOK and Mcl-1 expression levels in breast cancer patients. Meta-analysis of the TCGA data set revealed that, like BOK, the levels of both miR-296 and Mcl-1 were significantly lower in breast cancer tissues compared to normal adjacent controls (Supplementary Figures 8A-8B). It is worth noting that the available TCGA data set does not distinguish between miR-296-5p and miR-296-3p. The lower level of Mcl-1 in breast cancer specimens was unexpected, given that Mcl-1 expression was previously shown to correlate with tumor grade in breast cancer patients [33]. Since Ding et al. compared Mcl-1 expression between tumors (not between controls and tumors), it is possible that, even though Mcl-1 expression may be associated with the aggressiveness of a subset of tumors, Mcl-1 expression overall is lower in breast cancers compared to normal controls. It is also possible that the Mcl-1 expression patterns at the RNA and protein levels differ in breast cancers, as Ding et al. used immunohistochemical analysis to score Mcl-1 expression.
Next, we addressed the clinical significance of miR-296-5p and Mcl-1 in breast cancer patients. Kaplan-Meier analysis using the METABRIC and TCGA data sets showed that lower miR-296-5p (or miR-296 for TCGA) expression correlated with higher overall survival of breast cancer patients (Supplementary Figures 9A and 9B). This was an unexpected finding, given that miR-296-5p acts as a tumor suppressor in breast cancers. It is possible that differential regulation of BOK and Mcl-1 by miR-296-5p may determine whether pro- or anti-apoptotic functions prevail and accordingly affect the survival of breast cancer patients. In accordance with that, higher Mcl-1 expression correlated with lower overall survival and relapse-free survival (Supplementary Figures 10A-10B).

DISCUSSION The balance between pro-proliferation and pro-cell death signals determines the fate of cancer cells to survive/grow or die. The intrinsic apoptotic pathway, and in particular the Bcl-2 family members, play a pivotal role in regulating this balance. For example, increased expression of anti-apoptotic Bcl-2 proteins that block the action of the pro-apoptotic effectors (BAX and BAK) has been associated with cancer cell progression and resistance to pro-apoptotic signals [34]. Similarly, enhancing the signal of pro-apoptotic Bcl-2 homology domain 3 (BH3) proteins was the basis for the FDA approval of BH3 mimetics for treating chronic lymphocytic leukemia (CLL). In addition, albeit paradoxically, pro-apoptotic signaling has been proposed to promote tumorigenesis [35]. These observations underline the importance of understanding the mechanisms by which the expression of anti- and pro-apoptotic proteins may be regulated, so that optimal therapeutic strategies for killing cancer cells can be developed. In the present study, we focused on the regulation of BOK, one of the least studied and most poorly understood pro-apoptotic Bcl-2 members. We demonstrated for the first time that BOK is regulated by miR-296-5p in breast cancers. Interestingly, we show that anti-apoptotic Mcl-1 is also regulated by miR-296-5p. Given that miR-296-5p is expressed at lower levels and acts as a tumor suppressor in breast cancers, our results suggest that the relative levels of BOK and Mcl-1, as well as of other pro- and anti-apoptotic proteins, may be critical for the evasion of apoptosis and continued proliferation of breast cancer cells. This is likely because miR-296-5p may have a differential regulatory effect on BOK and Mcl-1, with BOK (in comparison to Mcl-1) being more sensitive to changes in the levels of miR-296-5p. Indeed, our results reveal that there are at least sixteen putative miR-296-5p binding sites in the BOK 3'-UTR compared to two in the Mcl-1 3'-UTR. It is also possible that the absence of miR-296-5p could result in increased (or unchanged) levels of other anti-apoptotic Bcl-2 proteins that bind to and neutralize the functions of pro-apoptotic proteins in breast cancer. Supporting this, we show that Bcl-2 and Bcl-xL are putative targets of miR-296-5p, and anti-apoptotic Bcl-2 proteins are known to inhibit mitochondrial outer membrane permeabilization (MOMP) by directly binding the pro-apoptotic BAX and BAK [36,37]. These findings suggest that miR-296-5p may act as a key regulator that fine-tunes both pro- and anti-apoptotic signals. Consistent with that, our results showed that miR-296-5p, though it acts as a tumor suppressor, protects breast cancer cells from paclitaxel-induced apoptosis.
It is possible that under acute pro-apoptotic pressure (such as paclitaxel) miR-296-5p may preferentially attenuate BOK expression, leading to resistance to drug-induced cell death and, consequently, cell survival. Indeed, previous studies have shown that loss of BOK promotes resistance to ER stress-induced apoptosis in vivo [38]. Furthermore, similar to miR-296-5p, the tumor suppressor miR-34c has also been reported to protect cancer cells from chemotherapy drug-induced apoptosis [39]. In addition to miR-296-5p, our study shows that BOK expression may be regulated via functional interaction with Mcl-1. Although BOK was initially identified as an Mcl-1-interacting protein, there was no evidence demonstrating a regulatory or functional interaction between these two proteins. This study is the first to show that BOK and Mcl-1 regulate each other's expression and function.

Our results indicate that post-transcriptional regulation by miR-296-5p and post-translational regulation by GSK3 of pro-apoptotic and anti-apoptotic proteins is critical for determining the fate of cancer cells to survive or undergo apoptosis. Furthermore, our results indicate that the expression of pro-apoptotic (BOK) and anti-apoptotic (Mcl-1) proteins is tightly regulated, and the relative ratio of these proteins is crucial for maintaining normal cellular homeostasis.

Our results showing rescue of the growth-inhibitory effect of Mcl-1 knockdown in BOK- and Mcl-1-depleted breast cancer cells suggest that BOK and Mcl-1 may be in a complementary feedback loop. It is possible that simultaneous loss of BOK and Mcl-1 may prompt other pro-survival genes to compensate for their loss. Indeed, the levels of several pro-survival proteins were found to be elevated when breast cancer cells were depleted of both Mcl-1 and BOK compared to silencing of either of them alone. Furthermore, since Mcl-1 is highly expressed in several cancers, our results support the notion that increased levels of Mcl-1 may block BOK pro-apoptotic activity by interacting with and sequestering BOK away from the mitochondria. Our data indicate that post-translational modification may be another mechanism by which BOK expression is regulated. Our study is the first to show that BOK is a bona fide target of GSK3α/β. GSK3β has been shown to regulate the levels and function of its targets by phosphorylating them and consequently either setting them up for degradation via ubiquitination and proteolysis [40,41] or stabilizing their activities [42]. Therefore, it is likely that GSK3α/β-dependent phosphorylation of BOK may be one of the important mechanisms that control BOK expression and function in cancer cells. Supporting this, we identified several potential GSK3 phosphorylation sites on the BOK protein and showed that inhibition of GSK3α/β led to increased expression of BOK in breast cancer cells. Furthermore, a recent report demonstrated that BOK can be regulated by a ubiquitin/proteasome-dependent pathway [24]. Given that GSK3 can act as a tumor promoter or suppressor and is reported to target other Bcl-2 family members including Bcl-2, Mcl-1 and BAX, it is plausible that stabilization of anti-apoptotic proteins (such as BCL2L12A) and degradation of pro-apoptotic proteins (such as BOK) by GSK3 may induce tumor growth, while GSK3-mediated stabilization of pro-apoptotic proteins (such as BAX) and degradation of anti-apoptotic proteins (such as Mcl-1) may lead to tumor suppression.
In summary, our study unveils novel mechanisms by which the levels and activities of the pro-apoptotic protein BOK and the anti-apoptotic protein Mcl-1 are regulated. Furthermore, our study attests that the regulation of pro- and anti-apoptotic Bcl-2 family proteins is intertwined, and a fine balance of their levels, activities, or both is critical for determining whether cancer cells proliferate or undergo programmed cell death.

Expression analysis in breast cancer specimens and survival analysis Meta-analysis for BOK expression was performed on a public-domain gene expression dataset from The Cancer Genome Atlas Research Network: http://cancergenome.nih.gov/. Kaplan-Meier survival analyses for the disease outcomes were performed using the online database (www.kmplot.com), and patient percentile cutoffs were auto-selected based on the computed best-performing thresholds. The p-value distributions of each comparison of cancer vs normal adjacent tissue obtained from differential gene expression analysis were considered to check for possible size effects.

RNA extraction and quantitative real-time PCR Total RNA was isolated with the Qiagen RNA extraction kit according to the manufacturer's protocol. One microgram of total RNA was used for the reverse transcription reaction using the iScript Reverse Transcription kit (Bio-Rad, Hercules, CA) according to the manufacturer's protocol. The expression levels of BOK, Mcl-1, and 18S (housekeeping gene) were analyzed using SYBR Green Master mix (Qiagen, Valencia, CA) and gene-specific primers on an Applied Biosystems (ABI) Prism 7900 Fast Sequence Detection System (ThermoFisher Scientific Inc., Foster City, CA), using denaturation at 95°C for 15 minutes followed by 40 cycles of 94°C for 15 seconds, 60°C for 30 seconds, and 72°C for 30 seconds. Real-time PCR was done in triplicate for each run. Fold change was calculated using the equation 2^(-ΔΔCt) (a worked example is sketched after the Methods below).

Cell transfection Prior to transfection, cells were seeded in 10% FBS medium with antibiotics and incubated overnight at 37°C in 5% CO2. Subsequently, the medium was replaced with medium containing 10% FBS without antibiotics. The transfection complex was prepared with RNAiMAX for siRNA transfection or Lipofectamine 2000 for plasmid transfection in 1X Opti-MEM according to the manufacturer's recommendations. Breast cancer cells were transfected with 75 nM of pre-miR-296-5p (Ambion, USA), 100 nM of anti-miR-296-5p, or mock (containing transfection reagent in 1X Opti-MEM medium), or left as untransfected controls. For gene-specific knockdown, breast cancer cells were transfected with 75 nM of siRNA (Sigma-Aldrich), scramble siRNA, or mock.

Western blot Cells were rinsed with 1X PBS and lysed in RIPA buffer; the lysate was incubated on ice for 30 minutes and centrifuged at 4°C for 10 minutes at maximum speed. After centrifugation, the supernatant was transferred into a new 1.5 mL microfuge tube. The protein concentration was quantified using 1X Bradford assay (Bio-Rad, Hercules, CA). Fifty micrograms of protein were denatured in sample buffer, separated by SDS-PAGE and transferred to PVDF membranes (Millipore, MS). Membranes were blocked with 5% milk, washed, and incubated with the appropriate primary antibodies.
Cell transfection

Prior to transfection, cells were seeded in 10% FBS medium with antibiotics and incubated overnight at 37°C in 5% CO2. Subsequently, the medium was replaced with medium containing 10% FBS without antibiotics. The transfection complex was prepared with RNAiMAX for siRNA transfection or Lipofectamine 2000 for plasmid transfection in 1X Opti-MEM following the manufacturer's recommendations. Breast cancer cells were transfected with 75 nM of pre-miR-296-5p (Ambion, USA), 100 nM of anti-miR-296-5p, mock (transfection reagent in 1X Opti-MEM medium), or left un-transfected as control. For gene-specific knockdown, breast cancer cells were transfected with 75 nM of siRNA (Sigma-Aldrich), scramble siRNA, or mock.

Western blot

Cells were rinsed with 1X PBS and lysed in RIPA buffer; the lysate was incubated on ice for 30 minutes and centrifuged at 4°C for 10 minutes at maximum speed. After centrifugation, the supernatant was transferred into a new 1.5 mL microfuge tube. Protein concentration was quantified using the 1X Bradford assay (Bio-Rad, Hercules, CA). Fifty micrograms of protein were denatured in sample buffer, separated by SDS-PAGE, and transferred to PVDF membranes (Millipore, Billerica, MA). Membranes were blocked with 5% milk, washed, and incubated with appropriate primary antibodies. Membranes were then incubated with horseradish-peroxidase-conjugated secondary antibodies for 1 hour at room temperature, washed three times for 15 minutes, and developed using an ECL chemiluminescence kit (Millipore, Billerica, MA). Polyclonal rabbit anti-BOK (cat.# SAB1300048) was purchased from Sigma-Aldrich, St. Louis, MO. Mouse anti-GAPDH (cat.# sc-32233) and anti-caspase-3 (cat.# sc-7148) were purchased from Santa Cruz Biotechnology. The pro-apoptosis Bcl-2 family antibody sampler (cat.# 9942), anti-Bcl-2 (cat.# 15071), anti-Mcl-1 (cat.# 4572), and anti-Bcl-xL (cat.# 2764) were purchased from Cell Signaling Technology.

Colony formation assay

Breast cancer cells were transfected with miR-296-5p, anti-miR-296-5p, or mock and subjected to the colony formation assay as described previously [43]. Briefly, 800 MDA-MB-231 cells or 2,000 MDA-MB-468 or MCF7 cells were plated in 2 mL high-glucose 1X DMEM supplemented with 10% FBS and 1% P/S, incubated at 37°C and 5% CO2 for 8 days, and colonies were then stained with crystal violet dye (0.5% crystal violet, 20% methanol).

Migration and invasion

Breast cancer cells transfected with mock, miR-296-5p, or anti-miR-296-5p were subjected to migration and invasion assays as described previously [43].

Plasmids

The BOK cDNA plasmid was obtained from the DNASU Plasmid Repository (Phoenix, AZ). The cDNA was excised from the pDNR-Dual vector using SalI and HindIII restriction enzymes and subcloned into the pGEM vector to maintain the orientation. The pGEM-BOK construct was digested with BamHI and HindIII and cloned into the pCMV6 mammalian expression vector.

Luciferase assays

MDA-MB-231 cells were transfected using Lipofectamine 2000 (ThermoFisher Scientific, Grand Island, NY) according to the manufacturer's specification. A total of 50,000 cells were seeded per well in six-well plates and incubated overnight. The cells were then co-transfected overnight with a Renilla luciferase vector (pRL-null) and a firefly luciferase vector containing pGL3-wt-BOK, pGL3-mut-1-BOK, pGL3-mut-2-BOK, pGL3, pGL3-wt-Mcl-1, or pGL3-mut-Mcl-1, and incubated in fresh complete medium for an additional 48 hours after transfection. The transfected cells were then transfected with miR-296-5p or mock and incubated for an additional 24 hours. Next, cells were harvested with 1X passive lysis buffer (Promega, Madison, WI) and luciferase activities were read using a GLOMAX 20/20 luminometer (Promega, Madison, WI).

Apoptosis analysis

MDA-MB-231 cells were seeded in 12-well tissue culture plates (5 × 10^5 cells/well). On the second day, the cells were transfected with miR-296-5p, scramble, or siRNA against BOK or Mcl-1 and incubated overnight. The medium was changed and the cells were incubated for an additional 24 hours. The cells were then treated with 12.5 nM paclitaxel or vehicle control for 72 hours. After treatment, cells floating in the medium were collected and adherent cells were detached with 0.05% trypsin; the culture medium containing FBS and the floating cells was then added back to inactivate the trypsin. After gentle pipetting, the cells were centrifuged for 5 min at 1,500 × g. The supernatant was removed and the cells were stained with annexin V-FITC and PI according to the manufacturer's instructions. Untreated cells were used as the control for the double staining. The cells were analyzed immediately after staining using a FACScan flow cytometer and FlowJo 9.0 software. For each measurement, at least 20,000 cells were counted.
Statistical analysis

Results are presented as means ± SD. Statistical comparisons between two groups of data were made using two-way ANOVA. A p-value < 0.05 is denoted as *, < 0.01 as **, and < 0.001 as ***.

ACKNOWLEDGMENTS

We thank Hung-I Harry Chen for TCGA data analysis. Onyeagucha BC and Guzman RM are supported by a Cancer Prevention & Research Institute of Texas (CPRIT) Training Grant (RP140105). Rao MK is supported by NIH (NCI) Grant R01CA179120-01A1. Chen Y and Rao MK are supported by NCI P30
Peculiarities of urban youth interests' realization in social conflicts

The article is devoted to the problem of urban youth interests' realization in the sphere of social conflicts: a critical situation determined, on the one hand, by the overall logic of socio-cultural development and, on the other, by the specifics of youth's participation in the processes of social interaction. According to the research, conflict is an obligatory element of functioning social interaction. In relation to youth, it is a condition that ensures its socialization and identification. Based on the interpretation of questionnaire survey results, the features of behavioral attitudes of young citizens at the main stages of conflict genesis were identified: the emergence of an obstacle to the implementation of an interest, the reaction to this obstacle, the choice of conflict as a way to resolve a difficult life situation, the definition of a behavioral strategy in the conflict, and its implementation. The results show that social conflict is considered by youth not only as a means of overcoming obstacles that arise in the implementation of their interests, but also as a way to attract attention to their problems and present personal ideas.

Introduction

Social conflicts are necessary elements that involve young people in the process of relationships that develop between young people and "adult" communities, and among young people themselves. These relations are universal and act as necessary moments in the life of young people, regardless of their status and other characteristics. In analyzing a social system, conflicts can be considered as social attractors that focus various, often contradictory, processes occurring at the macro and micro levels of social organization. The ordinariness of social conflict for young people makes it necessary to define behavioral patterns that not only allow them to resolve conflict with minimal losses, but also give them the chance to use the opportunities arising in the course of conflict interaction for the implementation of specific interests. It is worth noting that conflicts are often initiated by youth itself and are considered by it as a completely acceptable, if not optimal, way of self-realization. Of course, the high level of social and subcultural differentiation of youth implies the presence of distinct conflict behaviors in its individual groups. Therefore, the main idea of this research is to identify the specific features of interests' realization in social conflicts peculiar to urban youth, the group which includes most young residents of Russia.

Literature review

Conceptually, the research is primarily based on the works of the German sociologist G. Simmel, who is often regarded as the founder of conflict studies [1]. His central idea was society itself as a natural space for human interaction. As G. Simmel emphasizes, the struggle between people and social groups is one of the most important factors of social interaction. Following this idea, conflict is the result of contradictions between forms and individuals; it is a consequence of the internal aggression of people. The conflict situation, according to G. Simmel, is not intended to solve the contradiction but to detect it, and appears as a form of life dynamics ("conflict feeds on the energy of life"), without which only simple life in nature is possible.
Therefore, culture, life and conflict are inseparable concepts: culture and conflict coexist at the expense of each other and cannot be separated. It is obvious that any social conflict, firstly, can be productively analyzed only in the context of the socio-cultural environment (here, the city under analysis), which defines a set of imperatives as part of its content; secondly, it should be measured by the ratio of the processes of social conjunction and disjunction as its organic (constructive or destructive) component, which determines whether its vector is oriented toward creation or destruction. At the same time, thirdly, the conflict is an expression of the subjective dispositions of actors: young people destroying imperatives and changing the vector of the conflict. Nevertheless, until the 1960s the prevailing scientific view understood society as an equilibrium and conflict as a violation of this equilibrium, usually treated as a social dysfunction. By the early 1960s, following A. Royce, the majority of scholars analyzed conflict "as the interpretation of heterogeneous societies and understood the conflict as a phenomenon originally set and, therefore, typical for all social interactions" [2]. The so-called "conflict model of society" was developed by the German sociologist R. Dahrendorf in the late 1950s. In his research, the author considered conflict a natural state of society and analyzed its absence as an anomaly. According to R. Dahrendorf, the conceptualization of the modern phenomenon of youth conflicts is reflected in the idea that conflict is not only an attribute but also a source of positive changes in society. Conflicts are inevitable, universal, and need to be resolved [3]. Taking these positions into account, it can be stated that young people in any social sphere, despite their seemingly hassle-free integration into society, will always be a potentially conflicted group, perhaps even more conflicted than their older counterparts. Quite banal, but nevertheless necessary for our analysis, is the statement that the positive-functional model of conflict is part of the culture. This idea spread primarily after the publication of the ideas of L. Coser. According to the author, "conflict is not always dysfunctional for the relationship within which it occurs; often, conflict is necessary to achieve connections within the system". It contributes to the formation and subsequent preservation of the society's identity and the definition of its borders. The researcher focused on the functional significance of intra-social conflicts, referring to conflicts arising between different groups that contribute to the formation and preservation of their social unity [4]. These conclusions allow us to interpret youth conflicts as primarily (but not always) functionally significant for society. Interpreting them solely as undesirable and destructive elements can create significant barriers to social dynamics, preventing the renewal of society. For the conceptualization of youth conflict, the theoretical developments of recent years are of great interest. Special attention has been paid to understanding the phenomenon of social conflict as a specific interaction of active social participants [5][6][7][8]. Of particular interest in this regard is the institutional approach to social conflict, which dates back to the works of R. Dahrendorf and L. Coser.
In the Russian tradition, it is represented by the research of A. G. Zdravomyslov, who considered conflict a norm of interpersonal relations. In this approach, the study of social conflict involves an analysis of the underlying causes of its occurrence; the key roles are played by the choice of means that can be used for regulation, the study of the sources of destructive conflicts associated with violence, the ways of institutionalizing conflicts, the understanding of mutual transitions between levels of social organization in the process of conflict deployment, and the identification of the role of the personal component [9]. In our research, we define youth social conflict as a critical (extreme) situation determined, firstly, by the general logic of the development of the socio-cultural sphere and, secondly, by the specifics of youth participation in the processes of social interaction, and having consequences that are significant for the participants. The extreme nature of the conflict is manifested in the fact that it dialectically combines functional and dysfunctional characteristics. Moreover, participants face challenges that require significant strain; a stressful environment of interaction is formed, and there are obviously many risks representing the likelihood that the conflicting parties will not achieve their goals.

Methodology

The genesis of youth social conflict is rigidly linked with the process of forming and implementing the specific interests of young people as a special socio-demographic group. Their subjectivity is seen in the ability to make independent decisions reflecting youth's own identity and to carry out autonomous actions in accordance with them, while influencing the behavior of counterparts. In this regard, the problem of analyzing the content and structure of youth interests becomes very important, since in implementing them young people inevitably face obstacles caused by objective reasons. In many cases, youth conflict is an attempt, through social action, to overcome the opposition and resolve the contradictions. This problem is particularly important for urban youth, a group whose life activity is carried out in the specific conditions of the urban environment, which is characterized by a number of features that have a direct impact on the implementation of youth interests in social conflicts. These include the following characteristics:

- a high level of subcultural differentiation both in the youth environment and among the "adult" urban population. Acting as a special "value-based local world", a subculture is opposed to the basic "mother culture" (for example, "socialist", "liberal", "Christian", etc.). It includes "individual and collective stereotypes of behavior and activity embodied in specific sign-symbolic manifestations, social codes, forms of consciousness and structures of personal identity" [10]; youth and "adult" subcultures compete with each other within the urban space [11];

- the intensity of interpersonal and inter-group contacts, a natural consequence of the compactness of living. This circumstance is not the cause of conflicts, but it significantly affects their specific course, since it leads to an increase in the number of participants and, consequently, to an increase in the scale of conflict interaction;

- the mutual alienation of people. According to the social psychologist S. Milgram,
this is of great importance, as urban behavioral standards are characterized by a complete disregard for the needs, interests and requirements of those who are not considered directly related to the satisfaction of one's personal needs [12];

- the dynamism of socio-cultural development. Urban metabolism is characterized by a high rate of material, energy and spiritual exchange due to the concentration of subjects of social action, and is shaped by the mass use of technology within the urban space. Moreover, the intensity of social contacts allows innovative artifacts to be produced, discussed and implemented [13]. One consequence of this social dynamics, however, is the inevitability of conflicts between the older and younger generations, with young people usually acting as initiators and stimulators of innovative solutions [14];

- syncretism, expressed in the close relationship of the various spheres of urban life and in the formation on this basis of urbanized socio-biotechnical systems of a hybrid nature. This network formation turns the city into a networked metabolic organism, literally suspended on networks feeding it with energy and resources coming from outside. A person who lives in the city does not only conflict or negotiate, unite or differentiate. A city is a multi-dimensional space that is not limited by territory or by the movement of people, information, or goods; it performs a permanent transformation of matter, energy and waste. Taken together, these factors change the forms of the social organization of urban life [15].

The research is based on the results of the empirical sociological study "Realization of the interests of young people in social conflicts" by A. E. Ushamirsky (conducted from September to December 2019 using a questionnaire survey among young people aged 14 to 29 in the Belgorod, Volgograd and Rostov regions, Russia). The sample is quota-based (quota attributes: place of residence, gender, age). The sample population was 1000 respondents, 77.1% of whom live in cities.

Results and discussion

The authors' research position is that youth conflict is initiated by youth itself as a result of the awareness of an inability to realize their personal and social group interests due to the presence of obstacles perceived as unsolvable. In the minds of young people, these obstacles are associated (adequately or not) with opposition from specific actors, either in the youth environment itself or among the older generations. At the same time, the assessment of the scale of obstacles is determined by the characteristics of young people's lives in an urban environment. The logic of the genesis of youth conflict can be presented in several stages: the emergence of an obstacle during the implementation of an interest; the response to this obstacle, which can be the result of rational reflection, emotional in nature, or a combination of both; the choice of conflict as a way of resolving a difficult situation; and the definition of a strategy of behavior in the conflict and its implementation. The empirical study shows that young people living in cities are significantly more likely than those whose life activities take place in a non-urban environment to face obstacles in the implementation of their interests.
The answers to the question "Do you face obstacles in the course of implementing your interests?" showed that 15% of residents of regional centers and 16% of residents of cities of regional subordination stated that this happens often; the answer "rarely" was chosen by 72% and 61% of residents, respectively. Young people in towns and rural localities answered differently: "often" was chosen by 5% and 6% of respondents, and "rarely" by 78% and 69%, respectively. The resulting distribution of responses gives, in our opinion, grounds for several conclusions.

Firstly, the vast majority of urban youth does not believe that the process of realizing their interests involves constantly overcoming many obstacles. In this context, it is quite obvious that youth has a predominantly optimistic perception of social reality. However, it is possible that this optimistic position does not adequately reflect all the complexities of young people's socialization and their achievement of life success. Young people, most likely due to the specifics of their age and lack of life experience, often do not see many of the difficulties they will have to face in the course of implementing their interests. Paradoxically, despite young people's potential desire for innovation in their lives and an objective predisposition to creativity, their outlook is similar to that of the ordinary citizen, marked by "vague optimism". Describing the consciousness of the common man, C. W. Mills concluded: "the common person is not able to detach himself in order to observe objectively, to judge objectively that will fall within the sphere of his life experience. The life experience is accompanied not by an internal dialogue, which we call reflection, but rather with a kind of unconscious, constantly repeating monologue...it is imbued with a vague optimism that keeps the person up and which is only occasionally disturbed by small and future disappointments" [16].

Secondly, from the point of view of young people, a modern Russian city creates far more obstacles to the realization of youth interests than a non-urbanized or low-urbanized environment. This contradiction can be explained by the understanding that the interests of urban youth are more varied than those of rural youth and require more significant resource potential; the intensity of communications in the course of their implementation therefore inevitably produces rivals and other sources of conflict.

Urban youth tries to reflect on the nature of the obstacles that arise in the process of realizing their interests. In regional centers, they most often connect these obstacles with pressure from their inner circle, which has other beliefs and interests (45%), contradictions with parents who do not approve of their interests (35%), and an indifferent attitude towards young people on the part of state agencies whose competence is to solve issues related to the implementation of their interests (31%). The research showed a slightly different picture in the cities of regional subordination. Here, the main obstacles young people point to are the bureaucratization of the organizations responsible for working with youth, their lack of understanding of young people's interests, the desire to impose on young people views that are far from real life, the limitation of resources, and the application of other restrictive measures (40%).
Further, 26% of respondents pointed to pressure from close people having other beliefs and interests, 25% considered contradictions with parents who do not approve of young people's interests the main factor, and 31% stressed the indifference of government agencies whose competence is to solve issues related to the implementation of their interests. For urban-type settlements (towns), the configuration of the main obstacles is as follows: pressure from close people having other beliefs and interests, 44%; opposition from parents who do not approve of their interests, 34%; bureaucratization of the organizations responsible for working with young people, lack of understanding of the interests of young people, the desire to impose on young people views that are far from real life, limiting their resources and applying other restrictive measures, 27%. In rural areas the results were as follows: pressure from the immediate environment having other beliefs and interests, 64%; contradictions with parents disapproving of their interests, 30%; inadequate representation of the true interests of young people in the media, 21% (Fig. 1).

Fig. 1. Distribution of the survey participants' responses to the question "What obstacles do you face in the course of implementing your interests?"

Based on the received distribution of responses, it can be argued that urban youth, although to a lesser extent than rural youth, experiences pressure from the external environment in the implementation of its interests, which it regards as attempts to impose alien beliefs and interests. Consequently, respondents see the nature of obstacles primarily in the value-semantic sphere. Obstacles are, in fact, interpreted as a consequence of the unwillingness of others to understand young people and accept their attitudes and orientations. And even though residents of cities of regional subordination (unlike residents of large ones) pay special attention to the non-complementary position of the bureaucracy towards them, their claim against it is expressed precisely in the rejection of officials' desire to impose on young people views far from real life, to limit their resources and to apply other restrictive measures. It is worth noting that the specific way obstacles arising in the process of realizing the interests of young people are reflected already contains the potential for conflict with the "adult" environment. Of course, this is not the generational conflict that researchers often write about [17], but a clear mutual misunderstanding between them. Urban youth strives to be as independent as possible, viewing any claim on its independence as grounds for opposing counterparts. However, according to the data received, the lack of understanding with older people and external pressure on young people are still lower in a large city than in rural areas (a small city, paradoxically, shows a different picture). Among the villagers, 42% associate conflicts with their elders with opposite views on life, and 55% with the desire of the older generation to impose its own rules on the young. Distancing from the "adult world" and active opposition, without which conflict is impossible, are likely minimized due to the psycho-emotional way in which urban young people perceive obstacles to their interests.
A relatively large number of them, in a conflict situation, either try to adapt to the situation (16% in large cities and 11% in small ones) or begin to doubt the possibility of implementing their interests (52% and 38%, respectively). Only 30% of respondents in regional centers and 47% in cities of regional subordination chose the answer "I value my interests and see my goal in their implementation, regardless of obstacles". It should be noted that in urban settlements this figure is 48%, and in rural areas it rises to 61%. The increased pessimism of urban youth is consistent with the fact that its representatives most often point to the presence of obstacles to achieving their interests. Obviously, the modern city creates additional life difficulties for young people; with limited resources, most situations of difficulty are not resolved in the course of open confrontation (which is precisely what conflict provides), but grow into a life problem whose solution is postponed for an indefinite period.

The choice of conflict as a way to realize their own interests is motivated for urban youth by various reasons, which differ markedly between regional centers and small cities. In regional centers, young people most often point out that conflict can affect the change in the authorities' attitude towards young people (56%) and opens up opportunities for changing ideas of their interests, as well as for forming new interests (42%). Fewer supporters chose the answers that conflict makes it possible to openly declare their interests and draw attention to them (37%) and that it creates an opportunity for young people to unite (31%). In small towns, these figures were 26%, 30%, 52% and 36%, respectively. In rural areas the answers were 30%, 30%, 61% and 18%, respectively (Fig. 2).

Fig. 2. Distribution of responses on the motives for choosing conflict (response options: conflict may affect the change of the authorities' attitude towards young people; gives opportunities to declare interests and to form new interests; provides an opportunity to declare one's interests and draw attention to them; creates an opportunity for young people to unite).

The data presented show that, at least in large cities, conflict is taken by young people primarily as a way of appealing to the authorities. This is most likely due to two main reasons. Firstly, the preservation of paternalistic attitudes in the youth consciousness, which are formed in the process of education and upbringing and do not disappear even in the conditions of social networking, which should stimulate self-organization and self-government. It is obvious that the network factor of social action in real life, characterized by the American author H. Rheingold as the "swarm effect", is not sufficiently realized [18]. Secondly, the desire to consider conflict as a way to attract the attention of the authorities indicates that young people are disillusioned with the opportunities provided by other, less extreme, technologies of influence (elections, mass media). In this regard, young people in small towns and rural areas are likely to consider the authorities more accessible and are less likely to use conflict to attract their attention. It is known that young people can choose different strategies during a conflict. The research showed that young people also differ significantly on this question (see Table 1).
Analyzing this information, we can say that young people in cities (especially in regional centers) more often choose a strategy of conflict avoidance, while the share of its supporters declines in low-urbanized and non-urbanized environments, where young people more often prefer a strategy of cooperation. In our view, this is understandable for two reasons. First of all, as already noted, young people in large cities are characterized by an exaggeration of the importance of the obstacles that arise in the course of implementing their interests, as well as by a negative psychological and emotional reaction to their presence. Secondly, high social integration is typical of rural areas, and it creates many opportunities for constructive interaction [19]. The urban environment is more disintegrated, and the mutual separation between people is more evident within it, which cannot yet be compensated by attempts to stimulate the development of civil formations.

Conclusion

Despite the fact that the results of the study cannot be extrapolated to all Russian cities, each of which is characterized by specific living conditions, they allow us to present some general trends in the implementation of young people's interests in conditions of social conflict. It is obvious that urban youth is more concerned than rural youth about the scale of the obstacles arising in the course of trying to realize their interests. This may be the result of an inadequate perception of social realities, but in any case such a position requires attention from state and municipal authorities and civil society institutions. This specific perception of social reality increases social pessimism and distrust of the authorities among young people, making social dialogue more difficult at a time when its relevance is intensifying significantly in the context of intensive information and communication processes. Moreover, although modern technical and technological resources create many opportunities for constructive interaction in solving the problems of the urban environment, urban youth is not satisfied with the level of mutual understanding between it and the older generations. In conclusion, social conflict is considered by a large part of young people not only as a means of overcoming obstacles that arise in the implementation of their interests, but also as a way to attract attention to their problems and express themselves. However, awareness of all the complexities that arise in the context of conflict encourages young people to focus on a strategy of avoiding conflicts, which does not contribute to the discussion and solution of problems but usually leads to their preservation.
A mathematical model for facility location in banking industry

Introduction

During the past few years, there has been growing interest in private banking development in Iran. Most banks try to find appropriate places with lower expenses in order to generate more revenue (Craig, 1984; Al-Hanbali, 2003). There are many studies associated with the facility location of banks. Aldajani and Alfares (2009) considered the problem of determining the optimum number and locations of banking automatic teller machines (ATMs); the objective was to minimize the total number of ATMs needed to cover all customer demands within a given geographical area. Almossawi (2001) concentrated on determining the bank selection criteria in Bahrain. The examination depended on 30 selection factors extracted from the relevant literature, personal experience, and interviews with bank officials and college students. They reported that the chief factors determining college students' bank selection include the bank's reputation, availability of parking space near the bank, friendliness of bank personnel, and availability and location of automated teller machines (ATMs). Kaynak and Harcar (2005) showed the application of geodemographic segmentation to the service industry using commercial banking as a case example. They reported substantial differences between customers of local and national US banks in their evaluation of the relative importance of bank service charges and overall confidence in the bank. Miliotis et al. (2002) showed how demand-covering models can be combined with geographical information systems (GIS) to detect the optimal location of bank branches, taking into account the different factors that characterize local conditions within the demand area.

The proposed study

The proposed model plans to locate bank branches and ATMs among 58 candidate locations. Let P be the population of the province and P_r the population of each alternative location, and let m_j be the defuzzified coefficient associated with the population of each alternative. For each alternative we define a utility parameter a_j (Eq. (1)). Let x_j be a binary variable, which is one when a facility is located in place j and zero otherwise. Let c_j and d_j be the cost of each square meter of land dedicated to place j and the distance of the facility from the center of service, respectively.

The objective functions

There are two objective functions associated with the proposed study. The first, Eq. (2), is associated with the placement of bank branches. In addition, the study minimizes the cost of allocating ATMs to particular places, where d_j is the distance from each service facility.

Constraints

The first constraint covers the required portion of the population, where α is the level of uncertainty. The next constraint considers the level of wealth distribution, where a_j^2 is associated with wealth distribution and is defined in terms of r_j and F_j, the defuzzified ratio of wealth and the availability of money in each location, respectively. Depending on the position of each alternative, ã_j^i is a triangular fuzzy number, defined by its lower, modal, and upper values (Zadeh, 1965; Opricovic & Tzeng, 2003). We consider a constraint similar to Eq. (7) for infrastructure. Finally, we consider constraints similar to Eqs. (7) and (8) for the level of easy access to facilities.
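Because the model's equations survive the extraction only in fragments, the following Python sketch illustrates only the general shape of such a binary facility-location problem: a combined land-cost-plus-distance objective under a population-coverage constraint. All data values, and the equal weighting of the two cost terms, are hypothetical illustrations rather than the paper's formulation.

```python
# Minimal sketch of a binary facility-location model in the spirit of the
# proposed study: minimize land cost plus distance subject to covering a
# required share of the population. All numbers below are hypothetical.
from itertools import combinations

cost = [5.0, 3.0, 4.0, 6.0]        # c_j: land cost at candidate site j
dist = [1.2, 2.5, 0.8, 1.9]        # d_j: distance from the service center
pop_share = [0.4, 0.2, 0.3, 0.1]   # a_j: population/utility share of site j
alpha = 0.6                        # required coverage level

best, best_obj = None, float("inf")
n = len(cost)
for k in range(1, n + 1):
    for subset in combinations(range(n), k):       # all x_j assignments
        coverage = sum(pop_share[j] for j in subset)
        if coverage < alpha:                       # coverage constraint
            continue
        obj = sum(cost[j] + dist[j] for j in subset)  # combined objective
        if obj < best_obj:
            best, best_obj = subset, obj

print("open sites:", best, "objective:", best_obj)
```

For the 58 candidate sites of the actual study, such brute-force enumeration would be infeasible, which is presumably why the authors solve the model with a mixed-integer solver instead.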
The results

The proposed model, formulated as a mixed-integer nonlinear program, was coded in the WinQSB software package. Our survey covered 4 cities located in the province of Semnan, Iran, and 58 alternative locations were identified for facility establishment.

Conclusion

During the past few years, there has been a growing trend in the banking industry in Iran due to massive deregulation, which has facilitated the emergence of private banks. In this paper, we have presented a mathematical model to determine the optimal locations of bank branches and ATMs in four cities in the province of Semnan, Iran. The proposed study implements fuzzy logic to handle the uncertainty associated with input parameters. The preliminary results indicate that the proposed approach is capable of locating new alternatives effectively.

Fig. 2. The results of facility location
Photovoltaic models parameter estimation via an enhanced Rao-1 algorithm

The accuracy of the unknown parameters determines the accuracy of the photovoltaic (PV) models that occupy an important position in PV power generation systems. Due to the complexity of the equivalent equations of PV models, estimating their parameters is still an arduous task. In order to estimate the unknown parameters of PV models accurately and reliably, an enhanced Rao-1 algorithm is proposed in this paper. The main enhancements are: i) a repaired evolution operator is presented; ii) to prevent the Rao-1 algorithm from falling into a local optimum, a new evolution operator is developed; iii) a population size linear reduction strategy is employed so that the population size changes adaptively with the evolutionary process. To verify the validity of the ERao-1 algorithm, we embark on a study of parameter estimation for three different PV models. Experimental results show that the proposed ERao-1 algorithm performs better than existing parameter estimation algorithms in terms of accuracy and reliability, especially for the double diode model with RMSE 9.8248E-04 and the three diode model with RMSE 9.8257E-04 for the R.T.C France silicon cell, and 2.4251E-03 for the three diode model of the Photowatt-PWP201 cell. In addition, the fitting curves of the simulated and measured data confirm the accuracy of the estimated parameters.

Introduction

Currently, renewable energy is receiving more and more attention due to promising features such as cleanliness, absence of pollution, and wide availability, advantages that traditional energy sources cannot match [1]. Commonly used renewable energy sources include solar energy, wind energy, nuclear energy, tidal energy, and geothermal energy [2]. Among these, solar and wind energy are considered the most promising sources because they are widely available; compared with wind energy, solar energy plants are also easier to install. With the development of photovoltaic (PV) technology, solar energy has drawn more and more attention.

For a PV system, choosing an accurate model is very important and of great significance for the evaluation of solar cell performance. To this end, three PV models, namely the single diode model (SDM) [3], the double diode model (DDM) [4], the three diode model (TDM), and their variants for the PV module (PVM) [5], have been proposed for PV systems. The SDM and single PVM are composed of a photo-generated current source, a diode, an equivalent series resistance, and an equivalent parallel resistance, while the DDM and TDM contain two and three diodes, respectively. However, there is no free lunch: these PV models have unknown parameters that need to be estimated. For example, five unknown parameters, the photo-generated current (I_pg), diode reverse saturation current (I_rs), series resistance (R_se), shunt resistance (R_sh), and non-physical diode ideality factor (n), need to be identified in the SDM and single PVM. These parameters are critical to PV models and helpful for the design and optimization of solar cells. Therefore, regardless of the PV model used in the PV system, these unknown parameters must be accurately identified, and designing an effective PV parameter estimation algorithm is becoming more and more urgent. Over the past few years, researchers have come up with a variety of methods for the parameter estimation of PV models.
Broadly, these methods can be divided into three categories according to their characteristics: analytical methods, deterministic methods, and heuristic methods. The analytical method is simple and fast; it reduces the complexity of the problem by analyzing the equivalent equations of PV models on the basis of certain assumptions. Its advantage is simplicity and a modest consumption of computing resources; however, its accuracy depends heavily on the correctness of those assumptions. Deterministic methods, such as the Newton-Raphson method [6] and the Lambert W-function method [7], do not require pre-assumptions but need an initial guess to be provided in advance. Such methods are susceptible to the initial estimates; if the initial estimates are not good enough, the estimated unknown parameters can easily have low precision. In addition, deterministic methods place strict requirements on the optimization objective, which should be continuous, convex, and differentiable [5]. Unfortunately, these requirements are often difficult to meet for the equivalent equations of PV models.

In order to alleviate the shortcomings of the above methods, researchers have employed and developed many heuristic methods to estimate the unknown parameters of PV models. The heuristic method is a trial-and-error-based method inspired by phenomena in the natural environment. This approach is simple to implement and, more importantly, requires no assumptions and does not depend on the characteristics of the problem; it also imposes no additional requirements on the optimization objective. Therefore, in recent years, more and more heuristic methods and their improved variants, such as the simulated annealing algorithm (SA) [8], pattern search (PS) [9], particle swarm optimization (PSO) [10], differential evolution (DE) [11,12], hybrid teaching-learning-based optimization and differential evolution (ATLDE) [13], whale optimization algorithm (WOA) [14], generalized oppositional teaching-learning-based optimization (GOTLBO) [15], improved JAYA algorithm (IJAYA) [16], multiple learning backtracking search algorithm (MLBSA) [17], hybrid teaching-learning-based optimization and artificial bee colony (TLABC) [18], performance-guided JAYA algorithm (PGJAYA) [19], and improved teaching-learning-based optimization (ITLBO) [20], have been developed to identify the unknown parameters of PV models. Figure 1 shows an overview of some recent and popular algorithms for PV model parameter estimation, among which PSO, DE, TLBO, JAYA, and hybrid methods are the most used. In addition, many new heuristic methods have been developed to solve this problem in recent years. There is no doubt that these methods have achieved better results than previous ones; however, there is still room for improvement in the accuracy and reliability of the estimates.

Figure 2. Equivalent circuit diagram of various PV models [12].

In summary, the main contributions of this paper can be summarized as follows:
• An enhanced Rao-1 algorithm is developed for the parameter estimation of PV models, where a repaired evolution operator is proposed to reduce the randomness of the original Rao-1 algorithm.
• A new evolution operator is designed to prevent the Rao-1 algorithm from falling into a local optimum.
• A population size linear reduction strategy is employed so that the population size changes adaptively with the evolutionary process.
• The performance of the ERao-1 algorithm is demonstrated by estimating the unknown parameters of various PV models.
The remainder of this paper is organized as follows. The definitions of the PV models and the optimization objectives are described in Section 2. In Section 3, the original Rao-1 algorithm is briefly introduced. Section 4 gives a detailed description of the proposed ERao-1 algorithm, and Section 5 reports the experimental results. Lastly, Section 6 concludes the paper.

Definition of PV models and optimization objectives

As described in the Introduction, there are three commonly used PV models, namely the SDM, DDM, and TDM. In this section, the definitions of these PV models and the optimization objective are introduced.

SDM

The equivalent circuit diagram of the SDM is given in Figure 2(a), in which there are a photo-generated current source I_pg, a diode D, and two resistances R_se and R_sh. According to [26], the output current I_L of this model is calculated as follows:

$$I_L = I_{pg} - I_d - I_{sh} \tag{2.1}$$

where I_d denotes the diode current and I_sh the current through the shunt resistor. These two currents can be worked out by using the Shockley equation and Kirchhoff's voltage law:

$$I_d = I_{rs}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n k T}\right) - 1\right] \tag{2.2}$$

$$I_{sh} = \frac{V_L + I_L R_{se}}{R_{sh}} \tag{2.3}$$

where V_L denotes the output voltage of the model, I_rs represents the diode reverse saturation current, n is the diode ideality factor, q is the electron charge (1.60217646 × 10^-19 C), k is the Boltzmann constant (1.3806503 × 10^-23 J/K), and T is the current temperature, converted into Kelvin when calculating. On the basis of Eqs. (2.1)-(2.3), the output current I_L can also be written as

$$I_L = I_{pg} - I_{rs}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n k T}\right) - 1\right] - \frac{V_L + I_L R_{se}}{R_{sh}} \tag{2.4}$$

where five unknown parameters, I_pg, I_rs, R_se, R_sh, and n, need to be estimated.

DDM

For this model, the output current I_L can be formulated as [26]:

$$I_L = I_{pg} - I_{d1} - I_{d2} - I_{sh} \tag{2.5}$$

where I_d1 and I_d2 represent the currents of the first and second diodes, respectively:

$$I_{d1} = I_{rs1}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n_1 k T}\right) - 1\right] \tag{2.6}$$

$$I_{d2} = I_{rs2}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n_2 k T}\right) - 1\right] \tag{2.7}$$

where I_rs1 is the diffusion current, I_rs2 denotes the saturation current, n_1 is the first ideality coefficient of the non-physical diode, and n_2 the second. On the basis of Eqs. (2.5)-(2.7), the output current I_L in the DDM can be represented as

$$I_L = I_{pg} - I_{rs1}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n_1 k T}\right) - 1\right] - I_{rs2}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n_2 k T}\right) - 1\right] - \frac{V_L + I_L R_{se}}{R_{sh}} \tag{2.8}$$

where seven unknown parameters, I_pg, I_rs1, I_rs2, R_se, R_sh, n_1, and n_2, need to be estimated.

TDM

Similar to the DDM, there is another model named the three diode model (TDM). The output current of the TDM is calculated by adding a third diode term to Eq. (2.8):

$$I_L = I_{pg} - \sum_{m=1}^{3} I_{rs\,m}\left[\exp\left(\frac{q(V_L + I_L R_{se})}{n_m k T}\right) - 1\right] - \frac{V_L + I_L R_{se}}{R_{sh}} \tag{2.9}$$

where I_rs3 and n_3 represent the saturation current and ideality coefficient of the third diode, respectively. As can be seen, there are nine unknown parameters to be identified.

PVM

As shown in Figure 2(c), the single PVM is similar to the SDM. The difference is that there are N_s diodes in series and N_p diodes in parallel in the PVM. For the single PVM, the output current I_L is calculated as follows [13,14]:

$$I_L = N_p I_{pg} - N_p I_{rs}\left[\exp\left(\frac{q(V_L/N_s + I_L R_{se}/N_p)}{n k T}\right) - 1\right] - \frac{V_L/N_s + I_L R_{se}/N_p}{R_{sh}/N_p} \tag{2.10}$$

where five unknown parameters, as in the SDM, need to be estimated. In addition, the double-diode- and three-diode-based PVMs are similar to the DDM and TDM. Note that N_s and N_p are set to 1 for the single, double, and three diode models, except for the PVM.

Optimization objectives

When employing a heuristic method to estimate the unknown parameters of PV models, the optimization objective needs to be developed first. In general, for PV model optimization, we need to minimize the error between the experimentally measured output current and the output current obtained through simulation.
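As a concrete illustration of how Eq. (2.4) is evaluated in practice, the following minimal Python sketch computes the simulated SDM current and the RMSE objective formalized just below, in Eq. (2.15). It assumes the convention, common in PV parameter-estimation studies, of substituting the measured current on the right-hand side so that no implicit solve is required; the default temperature corresponds to the 33°C of the R.T.C France data used in the experiments, and the function names are ours.

```python
# Minimal sketch of evaluating the single diode model, Eq. (2.4).
import numpy as np

Q = 1.60217646e-19   # electron charge (C)
K = 1.3806503e-23    # Boltzmann constant (J/K)

def sdm_current(v_meas, i_meas, x, T=306.15):
    """Simulated SDM current; x = (I_pg, I_rs, R_se, R_sh, n), T in Kelvin."""
    i_pg, i_rs, r_se, r_sh, n = x
    vt = n * K * T / Q                    # thermal-voltage-like factor nkT/q
    v_d = v_meas + i_meas * r_se          # effective junction voltage
    return i_pg - i_rs * (np.exp(v_d / vt) - 1.0) - v_d / r_sh

def rmse(v_meas, i_meas, x):
    """RMSE between measured and simulated current over all data points."""
    residual = sdm_current(v_meas, i_meas, x) - i_meas   # f of Eq. (2.11)
    return np.sqrt(np.mean(residual ** 2))
```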
Thus, the error function f(·) between the measured and simulated current data should be defined first:

• SDM: $f_{SDM}(V_L, I_L, \mathbf{x}) = I_M^{SDM} - I_L$ (2.11)
• DDM: $f_{DDM}(V_L, I_L, \mathbf{x}) = I_M^{DDM} - I_L$ (2.12)
• SPVM: $f_{SPVM}(V_L, I_L, \mathbf{x}) = I_M^{SPVM} - I_L$ (2.13)

where x is a vector containing the unknown parameters to be estimated, and I_M is the simulated current, calculated by substituting the values of the unknown parameters estimated by the proposed ERao-1 algorithm into Eqs. (2.4), (2.8), (2.9), and (2.10). Then, in order to better reflect the overall error between the measured and simulated current data, the root mean square error (RMSE) is employed as the objective function in this paper, as in much of the published literature [26,27,9,28,29,30,4,31,14,16,19,32,17,15,33,18,20,34,35]:

$$\mathrm{RMSE}(\mathbf{x}) = \sqrt{\frac{1}{N}\sum_{k=1}^{N} f\left(V_{L,k}, I_{L,k}, \mathbf{x}\right)^2} \tag{2.15}$$

where N represents the number of data points used in the experiment. From Eq. (2.15), it can be concluded that the smaller the RMSE value, the more accurate the estimated parameters are.

Rao-1 algorithm

The Rao algorithm, a simple but effective heuristic algorithm, was proposed by Rao in 2020 [21]. It comprises three sub-algorithms called Rao-1, Rao-2, and Rao-3. Unlike Rao-2 and Rao-3, the structure of Rao-1 is simple and converges quickly, so the Rao-1 algorithm has attracted much attention. The Rao-1 algorithm has three core operations, population initialization, the evolution operator, and selection, which are briefly introduced below.

Population initialization

Assume that there are np individuals in a population P, where each individual can be seen as a D-dimensional vector and considered a candidate solution. During population initialization, each individual is initialized within a given search space; for example, the j-th dimension of the i-th individual is initialized as

$$x_{i,j} = a_j + r \cdot (b_j - a_j) \tag{3.1}$$

where a_j and b_j represent the lower and upper bounds of the j-th dimension, respectively, and r is a random number between 0 and 1.

Evolution operator

The evolution operator is the core operator of the Rao-1 algorithm, used to generate promising offspring. The idea of this operator comes from the learning experience in which the worst individual in the population learns from the best individual. For the i-th individual, the evolution operator is formulated as

$$x'_{i,j} = x_{i,j} + r_j\,(x_{best,j} - x_{worst,j}) \tag{3.2}$$

where x'_i denotes the i-th individual after the evolution operator, r_j is a random number between 0 and 1 for the j-th dimension, and x_best and x_worst are the best and worst individuals in the population P, determined by sorting the objective function values in ascending order. The generated x'_i is then checked against the search space: if x'_i oversteps the search bounds in the j-th dimension, that dimension of x'_i is re-initialized using Eq. (3.1).

Selection

After the evolution operator, each newly generated x'_i is evaluated by calculating its fitness value (the objective function value in this paper). Thereafter, the algorithm decides whether the original individual x_i or the new individual x'_i survives into the next generation. The Rao-1 algorithm adopts a one-to-one greedy choice strategy:

$$x_i = \begin{cases} x'_i, & \text{if } f(x'_i) \le f(x_i) \\ x_i, & \text{otherwise} \end{cases} \tag{3.3}$$

where the x_i retained after the selection operator is used as the individual in the next-generation population.
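A minimal Python sketch of one Rao-1 generation, combining Eqs. (3.1)-(3.3), might look as follows; `func` stands for the objective (for instance the RMSE above), `lower` and `upper` are per-dimension bound arrays, and the per-dimension random numbers mirror the r_j of Eq. (3.2). The names are illustrative, not the paper's.

```python
import numpy as np

def rao1_generation(pop, fit, func, lower, upper, rng):
    """One Rao-1 generation over population `pop` with fitness values `fit`."""
    best = pop[np.argmin(fit)].copy()
    worst = pop[np.argmax(fit)].copy()
    for i in range(len(pop)):
        r = rng.random(pop.shape[1])               # r_j per dimension
        trial = pop[i] + r * (best - worst)        # Eq. (3.2)
        out = (trial < lower) | (trial > upper)    # bound-violation mask
        # Eq. (3.1): re-initialize violated dimensions inside the bounds
        trial[out] = lower[out] + rng.random(out.sum()) * (upper[out] - lower[out])
        f_trial = func(trial)
        if f_trial <= fit[i]:                      # greedy selection, Eq. (3.3)
            pop[i], fit[i] = trial, f_trial
    return pop, fit
```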
Motivations

Despite the success enjoyed by the Rao-1 algorithm, three points are worth further study. Firstly, the evolution operator in the Rao-1 algorithm has too much randomness. From Eq. (3.2), it can be seen that each dimension of the newly generated individual is produced from the corresponding dimensions of the best and worst individuals with its own random number. The higher the dimension of the problem, the greater the randomness across dimensions; because of random perturbations in the other dimensions, an improvement in one dimension may still result in poor overall performance. Secondly, the Rao-1 algorithm has only the single evolution operator of Eq. (3.2), i.e., it uses only the best and worst individuals to guide the search, which may prevent the algorithm from balancing exploration and exploitation well. Thirdly, although the Rao-1 algorithm is parameter-free compared with many heuristic methods, there is still one common parameter, the population size np, that must be set by the user.

Taking the above points into consideration, an enhanced Rao-1 algorithm referred to as ERao-1 is proposed. In ERao-1, a repaired evolution operator is presented to reduce the randomness of the Rao-1 algorithm; a new evolution operator is then developed to prevent the Rao-1 algorithm from falling into a local optimum; finally, a population size linear reduction strategy is employed to adaptively adjust the population size as evolution proceeds. These improvements are described in detail in the following sections.

Repaired evolution operator

To reduce the randomness of the Rao-1 algorithm, a repaired evolution operator is presented in this subsection. In the repaired evolution operator, a single random number r is applied to all dimensions rather than a separate random number for each dimension:

$$x'_i = x_i + r\,(x_{best} - x_{worst}) \tag{4.1}$$

where r is a random number in the interval from 0 to 1. From Eq. (4.1), it is straightforward to see that the new individual x'_i is generated by one random number r through the differential vector of x_best and x_worst. In this way, a large amount of the randomness in the Rao-1 algorithm can be avoided, thus speeding up the convergence of the algorithm.

New evolution operator

As mentioned earlier, there is only a single evolution operator in the Rao-1 algorithm, which may not achieve a good compromise between exploitation and exploration. In addition, Eq. (3.2) uses only the best and worst individuals to guide the search, which may leave the information of the other individuals in the population under-utilized and make the algorithm easily trapped in a local optimum. Inspired by the mutation in DE [36,37], a new evolution operator is proposed:

$$x'_i = x_i + r_1\,(x_{best} - x_{worst}) + r_2\,(x_p - x_q) \tag{4.2}$$

where r_1 and r_2 are two random numbers between 0 and 1, and x_p and x_q are two individuals randomly selected from the current population P with f(x_p) ≤ f(x_q), noting that p ≠ q ≠ i. From Eq. (4.2), it can be seen that another differential vector, (x_p − x_q), is used to guide the search. Two benefits are obtained from the new evolution operator: on the one hand, it makes better use of the information of the other individuals in the population; on the other hand, it helps the algorithm avoid falling into a local optimum. However, it should be pointed out that this new evolution operator may slow down the convergence speed. Thus, to alleviate this, the repaired evolution operator and the new evolution operator are invoked adaptively according to the individuals' objective function values:

$$x'_i = \begin{cases} \text{generated by Eq. (4.2)}, & \text{if } x_i \text{ is among the better-ranked individuals} \\ \text{generated by Eq. (4.1)}, & \text{otherwise} \end{cases} \tag{4.3}$$

where ind(i) represents the rank of the i-th individual's objective function value when sorted in ascending order. From Eq. (4.3), it is clear that the better individuals adopt the new evolution operator to prevent falling into a local optimum, while the worse individuals take the repaired evolution operator to accelerate convergence. In this way, a good tradeoff is made between the exploitation and exploration abilities.
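A minimal Python sketch of the two operators and the rank-based switch might look as follows. The exact rank threshold of Eq. (4.3) does not survive the extraction, so the half-population split used here is an assumption consistent with the text's better/worse distinction, and the function names are ours.

```python
import numpy as np

def erao1_trial(pop, fit, i, rng):
    """Trial vector for individual i via Eq. (4.1) or Eq. (4.2)."""
    order = np.argsort(fit)                     # ascending objective values
    best, worst = pop[order[0]], pop[order[-1]]
    rank = int(np.where(order == i)[0][0])      # ind(i), 0 = best
    if rank < len(pop) // 2:                    # ASSUMED split: better half
        # Eq. (4.2): add a second differential vector (x_p - x_q)
        p, q = rng.choice([j for j in range(len(pop)) if j != i], 2,
                          replace=False)
        if fit[p] > fit[q]:                     # enforce f(x_p) <= f(x_q)
            p, q = q, p
        return (pop[i] + rng.random() * (best - worst)
                       + rng.random() * (pop[p] - pop[q]))
    # Worse half: repaired operator, Eq. (4.1), one r for all dimensions
    return pop[i] + rng.random() * (best - worst)
```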
Population size linear reduction strategy

In order to make the Rao-1 algorithm parameter-free, a population size linear reduction strategy is employed, in which the population size np decreases linearly over the course of evolution. Given that the current population size is np_G, the next-generation population size np_{G+1} is updated as

$$np_{G+1} = \mathrm{round}\left[np_{max} - \frac{np_{max} - np_{min}}{MFEs}\cdot FEs\right] \tag{4.4}$$

where np_max and np_min denote the maximum and minimum population sizes, respectively, while FEs and MFEs represent the number of function evaluations used so far and the allowed maximum number of function evaluations, respectively.

Overview of proposed ERao-1 algorithm

The three main improvements have been described above; based on them, the enhanced Rao-1 algorithm, ERao-1, is proposed. Algorithm 1 provides the pseudocode of the proposed ERao-1 algorithm, from which it can be observed that ERao-1 introduces no new parameters and retains a simple structure. Note that the repaired evolution operator and new evolution operator are used in lines 9-12 of Algorithm 1, while the population size linear reduction strategy is used in line 17. As for algorithm complexity, the Rao-1 algorithm's complexity is O(G_max · np · D), where G_max is the maximum number of generations, calculated as G_max = MFEs/np. Since the proposed ERao-1 algorithm introduces no additional computational burden, its complexity is also O(G_max · np · D).
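Putting the pieces together, a compact sketch of the ERao-1 main loop, reusing the erao1_trial helper above, could be organized as follows. For brevity this sketch clips bound violations instead of re-initializing them, and the default budget mirrors the 30,000 evaluations used in the experiments below; these simplifications and all names are ours, not the paper's Algorithm 1.

```python
import numpy as np

def erao1(func, lower, upper, np_max=50, np_min=4, max_fes=30_000, seed=1):
    """Minimize `func` over the box [lower, upper] (NumPy arrays)."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = lower + rng.random((np_max, dim)) * (upper - lower)  # Eq. (3.1)
    fit = np.array([func(x) for x in pop])
    fes = np_max
    while fes < max_fes:
        for i in range(len(pop)):
            trial = np.clip(erao1_trial(pop, fit, i, rng), lower, upper)
            f_trial = func(trial)
            fes += 1
            if f_trial <= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f_trial
        # Eq. (4.4): shrink the population linearly with the used budget
        np_next = int(max(np_min, round(np_max - (np_max - np_min) * fes / max_fes)))
        if np_next < len(pop):
            keep = np.argsort(fit)[:np_next]                   # drop the worst
            pop, fit = pop[keep], fit[keep]
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```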
Results on the SDM of the R.T.C France silicon cell

As mentioned beforehand, five unknown parameters need to be estimated in the SDM. Table 3 reports the comparison results of ERao-1 and its competitors, including the best objective function value (RMSE) and the corresponding estimated parameter values; the best RMSE value is marked in bold. From Table 3, it can be observed that ERao-1, ITLBO, PGJAYA, TLABC, MLBSA, and GOTLBO obtain the same best RMSE value (9.8602E-04), followed by IJAYA (9.8626E-04) and Rao-1 (1.1478E-03). Although the gap between IJAYA and ERao-1 is very small, i.e., 2.4E-07, it is still meaningful: as mentioned in subsection 2.5, the smaller the RMSE value, the more accurate the estimated parameters. Besides, since the true parameter values are not available, any reduction in the objective function value suggests that the estimated parameter values are closer to the true ones.

To further examine the accuracy of the parameters estimated by the ERao-1 algorithm, we substitute them into Eq. (2.4) to compute the simulated current data, and then draw the fitting curves between the simulated data and the measured data, shown in Figure 3. From Figure 3(a), it can be seen that the simulated current data provided by ERao-1 fit the measured current data well. The simulated power data and the measured power data also show good consistency, as can be observed in Figure 3(b). Based on these findings, it can be concluded that the parameters estimated by ERao-1 are very accurate for the SDM.

Results on the DDM of the R.T.C France silicon cell

For the DDM, seven unknown parameters, I_pg, I_rs1, I_rs2, R_se, R_sh, n_1, and n_2, must be estimated. Compared with the SDM, the DDM thus has two additional unknown parameters, I_rs2 and n_2, which means that the dimension of the problem has increased; it is noteworthy that the increase in dimension adds to the complexity of the problem. Table 4 provides the parameters estimated by ERao-1 and the other compared algorithms. From this table, it is evident that only the ERao-1 algorithm achieves the best RMSE value (9.8248E-04) on this model, followed by ITLBO (9.8250E-04), PGJAYA (9.8286E-04), TLABC (9.8317E-04), GOTLBO (9.8401E-04), IJAYA (9.8655E-04), and Rao-1 (1.2087E-03). According to these findings, the performance of most well-established parameter estimation algorithms is affected by the increase in problem dimension, while the proposed algorithm retains good performance. Moreover, it is remarkable that the performance of ERao-1 is again significantly superior to that of the original Rao-1 algorithm on the DDM. In addition, similarly to the SDM, the simulated current data are calculated by substituting the parameters estimated by the ERao-1 algorithm into Eq. (2.8) for the DDM. Figure 4 plots the fitting curves between the simulated data and the measured data. From this figure, it is evident that the simulated data are highly consistent with the measured data, both for the current data and for the power data. Therefore, it can be concluded that the parameters estimated by ERao-1 are quite accurate for the DDM.

Results on the TDM of the R.T.C France silicon cell

Table 5 reports the results on the TDM of the R.T.C France silicon cell, where it can be observed that ERao-1 achieves the best performance (9.8257E-04) on this model, followed by ITLBO (9.8260E-04), MLBSA (9.8286E-04), PGJAYA (9.8351E-04), IJAYA (9.8451E-04), GOTLBO (9.8562E-04), and TLABC (9.8622E-04). Note that although the dimension of the TDM increases to nine, the proposed algorithm still provides the smallest RMSE value. Besides, from Figure 5, it is clear that the simulated data obtained by ERao-1 are likewise consistent with the measured data.
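Tying the two sketches above together, a hypothetical end-to-end run on the R.T.C France cell could look as follows. The data file name and layout are assumptions, and the search bounds shown are the ranges commonly used for this cell in the literature, standing in for the actual values of Table 1; I_rs is searched in microamperes, another common convention.

```python
import numpy as np

# 26 measured (V_L, I_L) pairs of the R.T.C France cell (assumed file layout)
v_meas, i_meas = np.loadtxt("rtc_france.txt", unpack=True)

# Assumed search ranges: I_pg (A), I_rs (uA), R_se (ohm), R_sh (ohm), n
lb = np.array([0.0, 0.0, 0.0,   0.0, 1.0])
ub = np.array([1.0, 1.0, 0.5, 100.0, 2.0])

def objective(x):
    i_pg, i_rs_ua, r_se, r_sh, n = x
    # convert I_rs from microamperes before evaluating the RMSE
    return rmse((i_pg, i_rs_ua * 1e-6, r_se, r_sh, n), v_meas, i_meas)

x_best, f_best = erao1(objective, lb, ub, mfes=30000)
print("best RMSE: %.6e" % f_best)   # ~9.8602e-04 is the value reported in Table 3
```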
Results on the single PVM of the Photowatt-PWP201

The comparison results of ERao-1, GOTLBO, IJAYA, MLBSA, TLABC, PGJAYA, ITLBO, and Rao-1 are reported in Table 6, where it can be observed that all algorithms provide the best RMSE value (2.4251E-03) except IJAYA (2.4254E-03) and Rao-1 (2.4418E-03). It can be concluded that the enhanced ERao-1 algorithm performs better than IJAYA, and comparably to the other algorithms, i.e., GOTLBO, MLBSA, TLABC, PGJAYA, and ITLBO, on the single PVM. In addition, Figure 6 draws the fitting curves between the simulated current (power) data and the measured current (power) data. From Figure 6, it can be seen that the simulated current (power) data provided by ERao-1 agree well with the measured current (power) data, which also demonstrates that ERao-1 can provide quite accurate parameter values for the single PVM.

Results on the double and triple PVM of the Photowatt-PWP201

The results on the double PVM of the Photowatt-PWP201 are provided in Table 7. From this table, GOTLBO, PGJAYA, ITLBO, and the proposed ERao-1 algorithm can provide the smallest RMSE value (2.4251E-03). MLBSA and TLABC achieved the best results on the single PVM but do not achieve the best performance on this model. In addition, it is worth mentioning that the optimal value for the double PVM of the Photowatt-PWP201 is almost identical to that for the single PVM. Lastly, the fitting curves of ERao-1 shown in Figure 7 also fit the measured data very well, both for the voltage-current and for the voltage-power characteristics.

For the triple PVM of the Photowatt-PWP201, there are nine unknown parameters, I_pg, I_rs1, I_rs2, I_rs3, R_se, R_sh, n_1, n_2, and n_3, that need to be estimated. The comparison results of the proposed ERao-1 and the other state-of-the-art algorithms are given in Table 8, where it is evident that only MLBSA and ERao-1 are able to achieve the best RMSE value (2.4251E-03), followed by ITLBO (2.4252E-03), PGJAYA (2.4253E-03), IJAYA (2.4254E-03), GOTLBO (2.4257E-03), TLABC (2.4329E-03), and Rao-1 (2.4905E-03). In particular, when compared with the original Rao-1 algorithm, the proposed ERao-1 shows a significant performance advantage. Further, the fitting curves of ERao-1 plotted in Figure 8 also prove the accuracy of its identified parameters.

Statistical results analysis

In subsections 5.1-5.2, we analyzed the best RMSE values of ERao-1 and the other well-established approaches. Based on the above analysis, we can find that some algorithms, such as GOTLBO, MLBSA, TLABC, PGJAYA, and ITLBO, can reach the same best result as ERao-1, especially on the SDM and the single PVM. In this context, in order to further show the superiority of the proposed ERao-1 algorithm, a statistical analysis is conducted covering the minimum RMSE (Min), maximum RMSE (Max), average RMSE (Ave), standard deviation (Std), function evaluations required to reach the optimal solution (FEs), CPU time over 30 independent runs, and two non-parametric statistical tests, namely the Wilcoxon signed-rank test and the Friedman Aligned test. Note that all algorithms are run independently 30 times in the experimental environment described above, and the non-parametric statistical tests are carried out with the KEEL tool [38]. In addition, the convergence curves of all compared algorithms on the different PV models are plotted.
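For reference, the two non-parametric tests can also be reproduced with standard SciPy calls (the paper itself uses the KEEL tool). In the sketch below, the per-algorithm result files are assumptions about where the 30 final RMSE values of each algorithm have been stored; note that wilcoxon requires at least one nonzero paired difference.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

names = ["ERao-1", "GOTLBO", "IJAYA", "MLBSA", "TLABC", "PGJAYA", "ITLBO", "Rao-1"]
runs = {n: np.loadtxt(f"rmse_{n}.txt") for n in names}   # 30 values per algorithm

# Pairwise Wilcoxon signed-rank tests of ERao-1 against each competitor.
for n in names[1:]:
    stat, p = wilcoxon(runs["ERao-1"], runs[n])
    mark = "+" if p < 0.05 else "~"     # the "+"/"similar" convention of Tables 9 and 10
    print(f"ERao-1 vs {n}: p = {p:.4g} ({mark})")

# Friedman test over all algorithms; a smaller average rank is better.
stat, p = friedmanchisquare(*runs.values())
print(f"Friedman: statistic = {stat:.3f}, p = {p:.4g}")
```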
Table 9 and Table 10 report the statistical results, including the Min, Max, Ave, Std, FEs, CPU time, and the Wilcoxon signed-rank test results, where R+ (R-) is the sum of ranks over the optimization objectives on which ERao-1 outperforms (loses to) its competitors, the p-value decides whether the null hypothesis should be rejected at the significance level alpha = 5%, and "+" and "≈" indicate that ERao-1 performs significantly better than, or similarly to, its competitors. From these tables, it can be seen that:

• As can be seen from Table 9, for the SDM of the R.T.C France silicon cell, most algorithms obtain the best Min except IJAYA and Rao-1, while only ERao-1 and ITLBO are capable of achieving the best Max and Ave; the Ave in particular reflects the accuracy of an algorithm. In addition, from the perspective of the Std, ERao-1 and ITLBO have an obvious advantage over the other algorithms, which indicates that the proposed ERao-1 and ITLBO have excellent reliability on the SDM. For the DDM and TDM, only the ERao-1 proposed in this paper obtains the best RMSE results on Min, Max, Ave, and Std, while the algorithms that achieved good results on the SDM do not obtain the best results on these two models; this indicates that the proposed ERao-1 algorithm remains effective on the DDM and TDM. On the other hand, it also shows that as the number of estimated unknown parameters increases, the performance of most algorithms deteriorates. Besides, in view of the FEs, the proposed ERao-1 consumes the fewest FEs to find the optimal RMSE value on all models.

• From Table 10, it can be observed that ERao-1 and ITLBO achieve the best performance on the single PVM in terms of the Min, Max, Ave, and Std, but ERao-1 uses only FEs = 4432 to find the best RMSE. For the double and triple PVM, ERao-1 achieves the best results on almost all indicators, except the CPU time.

• With respect to the Wilcoxon signed-rank test results on the R.T.C France silicon cell and the Photowatt-PWP201 module, regardless of the model, it can be observed that ERao-1 performs better than GOTLBO, IJAYA, MLBSA, PGJAYA, and Rao-1, and similarly to ITLBO. In particular, for the double and triple PVM of the Photowatt-PWP201, ERao-1 significantly outperforms all of its competitors.

• As for the CPU time, it can be clearly seen that the original Rao-1 algorithm consumes the least time, while the proposed algorithm adds a certain amount of overhead. This is acceptable to a certain extent, because the accuracy improves considerably.

In addition, Figure 9 provides the Friedman test results on the different PV models. It is worthwhile to mention that the smaller the average Friedman ranking, the better the performance. From Figure 9, it can be seen that the average ranking of ERao-1 is significantly smaller than those of GOTLBO, IJAYA, MLBSA, TLABC, PGJAYA, and Rao-1 on the SDM, DDM, and TDM. Although the difference between the average rankings of ITLBO and ERao-1 is not obvious on the SDM, ERao-1's average ranking is significantly smaller than ITLBO's on the DDM and TDM. Finally, Figure 10 plots the convergence curves of ERao-1 and its competitors, where it can be seen that ERao-1 converges faster on the SDM, DDM, and TDM of the R.T.C France silicon cell. In particular, for the DDM, IJAYA, GOTLBO, and MLBSA converge quickly in the early stage, but ERao-1 converges significantly faster to the optimum value after FEs = 10000. For the Photowatt-PWP201 module, ERao-1 obviously converges quickly on the single PVM.
Note that although ERao-1 converges more slowly than its competitors in the early stage on the other two models, it can be clearly seen that in the later stage ERao-1 converges to a more accurate RMSE value.

Compared with other reported results

In this section, the results of ERao-1 are compared with some recent works, such as the enhanced Lévy flight bat algorithm (ELBA) [39], classified perturbation mutation PSO (CPMPSO) [40], niche-based PSO with parallel computing (NPSOPC) [41], improved equilibrium optimizer (IEO) [42], enhanced JAYA (EJAYA) [43], enhanced adaptive butterfly optimization algorithm (EABOA) [44], shuffled frog leaping with memory pool (SFLBS) [45], modified teaching-learning based optimization (MTLBO) [46], novel hybrid differential evolution and artificial bee colony (nDEBCO) [47], modified Rao-1 (MRao-1) [25], comprehensive learning JAYA (CLJAYA) [48], and backtracking search algorithm with competitive learning (CBSA) [49]. The comparison results are given in Tables 11-13. Note that since there are too few studies on the TDM of the R.T.C France silicon cell and on the double and triple PVM of the Photowatt-PWP201, those results are not compared here. From these tables, it can be observed that:

• For the SDM, almost all compared algorithms can provide the best RMSE (9.8602E-04), except NPSOPC, while only EJAYA and the proposed ERao-1 consume the fewest FEs to obtain the best solution.

• In terms of the DDM, it can be seen that most of the algorithms proposed in 2021 can obtain the best result. However, when the allowed maximum FEs is taken into consideration, it is evident that only the proposed ERao-1 reaches the best RMSE within the smallest number of FEs.

In summary, the proposed ERao-1 can not only obtain an optimal solution comparable to those of these well-established parameter estimation algorithms, but also uses the fewest FEs.

Conclusions

Aiming at the shortcomings of the original Rao-1 algorithm, this paper designs an enhanced Rao-1 algorithm, ERao-1 for short, to accurately and reliably estimate the unknown parameters of PV models. ERao-1 contains three main improvements. First, a repaired evolution operator is presented to reduce the randomness of the original operator. Second, a new evolution operator is proposed to make full use of the population information and to keep the algorithm from falling into local optima. Third, a population size linear reduction strategy is employed to adjust the population size adaptively. ERao-1 is evaluated by estimating the unknown parameters of three different PV models. Experimental results demonstrate that the enhanced Rao-1 algorithm not only achieves the best RMSE values (9.8602E-04 and 2.4251E-03) for the SDM and the single PVM, but also obtains the best result (9.8248E-04) for the DDM, which its competitors cannot reach. In addition, the statistical results also prove the competitive performance of ERao-1 in terms of the standard deviation and the number of function evaluations consumed to reach the optimal RMSE. Moreover, the two non-parametric statistical tests and the convergence curves also illustrate the superiority of the proposed ERao-1 algorithm. Although ERao-1 is not optimal in terms of CPU time, it is acceptable to sacrifice a little time to obtain more accurate solutions. Finally, it is worth noting that ERao-1 is only suitable for solving single-objective unconstrained optimization problems; if constraints are present, constraint-handling techniques need to be introduced.
In future works, ERao-1 will be applied to more complex optimization problems, such as maximum power point tracking in PV systems [50], optimal power flow in power systems [51,52], non-linear equation optimization problems [53], and so on.
Selling Concept: Strategy for Improving the Marketability of Nigerian World Heritage Sites

The Nigerian World Heritage Sites are experiencing a low turn-out of tourists, visitors, researchers and Nigerian dignitaries. The sites are hardly known outside their immediate areas, and their international recognition and importance mean little to many. This paper therefore examines the root causes of Nigerians' lukewarm attitude towards these sites. Through systematic and purposive sampling techniques, 150 respondents in 10 villages of 5 selected Local Government Areas were sampled. A structured interview schedule, administered through the heritage site managers and their staff, was used in the collection of data. For clarification of responses, unstructured interviews and observation of physical conditions and language were used to support the interview schedule. The result of the analysis showed a positive correlation between awareness level and people visiting the sites: awareness of the existence and new status of these two sites was low or non-existent. Among the five competing marketing philosophies, the application of the selling concept, comprising advertisement, promotion, personal selling and public relations, was found to be the most appropriate for creating awareness. The paper concludes by recommending other marketing principles that can ensure the maximum use of these two sites, thereby fulfilling the real purpose of their new status and bringing in foreign exchange.

Introduction

World Heritage Sites are places of significant historic and cultural value throughout the world. They are carefully selected for preservation by the World Heritage Committee, an inter-governmental organization responsible for cataloguing and protecting World Heritage Sites, which operates under the direction of the United Nations Educational, Scientific and Cultural Organization (UNESCO). World Heritage Sites represent areas that are particularly ingenious and deserve to be taken into consideration in the search for solutions to today's challenges (Eborieme, 2008: 22-27). Collectively, the rich diversity of African heritage contributes a unique wealth to World Heritage (Adedayo, 2004: 61). The study of these heritage sites makes it possible to better understand today's world and to better prepare for the future (Adediran, 2008: 49-58). Communities are therefore encouraged to preserve and valorize this heritage, which represents the core of their common identity. Tidjani-Serpos (2006: 7) argues that these heritage sites are supposed to contribute every day to the quality of life of the Nigerian and African communities. Elong-Mbassi (2006: 5) opines therefore that Africans need to pay attention and give interest to their cultural heritage sites. Enhancing the cultural and heritage values within Nigeria and Africa would reinforce the cultural dimension and would undoubtedly upgrade the living conditions of Nigerians, especially in and around where these sites are (Osuagwu 2008). Nigeria currently has two World Heritage Sites on this prestigious list. The two World Heritage Sites are located in Sukur, in Madagali Local Government Area of Adamawa State, and Osogbo, in Olorunda Local Government Council of Osun State (Plates 1-3). They were approved for this prestigious list in 1999 and 2005, respectively. Yusuf (2008) is of the view that "very few Nigerians or Africans are aware of the World Heritage Convention and Heritage Sites." He emphasizes that there is a "split" which, as is the case for other
sectors of the economy and society, puts Africa at risk of being marginalized. He further laments that many local people, the so-called stakeholders, have not yet taken stock of the existence and potential offered by the heritage sites. These cultural heritage sites are presently facing major challenges linked to human development.

Despite the undeniable qualities that earned these sites a listing on the prestigious World Heritage List, the sites are not being patronized as expected. Ten years of annual visitor reports (2002-2011) from the Education Department of the National Commission for Museums and Monuments show that visitorship is either stagnant or in decline. Yorke and Jones (2006: 94) pointed out that "the promotion of Museums and Heritage Sites services would enhance the stimulation of excessive demand." The application of marketing principles such as the selling concept has repeatedly been ignored by successive heritage managers because of fears about the carrying capacity of the two World Heritage Sites. However, Yorke and Jones (2006: 94) were of the opinion that fear of the consequences of a marketing orientation is no justification for ignoring it. The duo explain that what such heritage managers fail to realize is that the proper application of marketing thought and techniques could lead either to the provision of those sought-after developments or, better still, to an increased number of satisfied customers/segments without any increase in existing resources. This is why this paper presents a marketing model, the selling concept, as a way of arresting this decline and stimulating the patronage of these two sites. The selling concept was propounded by Kotler and Keller (2012: 17-18) and is a time-tested tool for making products that prospective consumers shun enticing enough to patronize. The Management Plans of the Sukur Cultural Landscape 2012-2016 and the Osun Osogbo Cultural Landscape have, as one of their objectives, proposals to carry out promotional activities effectively as a means of creating awareness within and beyond the World Heritage Sites. While it is the responsibility of the World Heritage Committee to set general objectives for declared sites, the onus rests on the country that owns the sites to set strategies for how the sites will be maximally utilized by its citizenry and the world at large.

A strategy is a plan intended to achieve a particular purpose (Hornby 2005). It is a well-known fact that many organizations have revolutionized their functions through the application of marketing and marketing principles. Marketing is a social and managerial process by which individuals and groups obtain what they need and want through creating and exchanging products and value with others (Kotler and Keller 2012). Well-developed marketing strategies and tactics are usually put in place as a means of achieving customers' take-up of a service.

The two World Heritage Sites in Nigeria are not being patronized as they should be. A preliminary investigation reveals a plethora of reasons, which include a lack of awareness of their existence.
The Research Objective

The specific objective of the study was to investigate the snobbish attitude of tourists, local and foreign alike, towards the two hard-won World Heritage Sites in Nigeria. It aimed at determining the factors hindering tourists and local and foreign visitors from patronizing and savouring the various wonderful products that make the sites worthy of inclusion on the prestigious World Heritage List.

Research Study

The research focused on the two World Heritage Sites (WHS) declared in favour of Nigeria: the Sukur Cultural Landscape, located in Madagali Local Government Council in Adamawa State, and the Osun-Osogbo Cultural Landscape, located in Osogbo Local Government Council in Osun State, both in Nigeria. The two sites are managed by the National Commission for Museums and Monuments (NCMM), an agency of the Federal Government of Nigeria. In the management of the two sites, there are five levels of responsibility: the stakeholders (Local Governments and adjoining villages in five Local Governments), the site management, the education unit, the site guides and the site guards.

The Sample Response

A random sample was drawn systematically for each site, at each level, using the sites' annual visitor reports or statistics and other publications on the management structure of the sites as the sample frame. A total of one hundred and fifty (150) questionnaires were distributed using the site guides, education officers and managers; one hundred and five (105) usable responses were obtained. Table 1 gives the details of the composition of the sample and the responses. For clarification of responses among the stakeholders, unstructured interviews and observation of physical conditions and language were used to support the interview. The response rate from the stakeholders and the five adjoining local governments was 63% for Sukur and 77% for Osogbo, and the mean of 70% was considered satisfactory. In several cases, failure to respond was explained by the ignorance of the people at the two sites. At the site guides level, the response was observed to be higher than that of the stakeholders, probably because the guides are educated and know the value of the research. In Sukur WHS 70% responded, while in Osogbo WHS 80% responded, giving a mean of 75% for the two sites. This percentage was also considered very satisfactory.

The Education Officers' level, with an 80% response from Sukur WHS and 90% from Osogbo WHS and a mean of 85%, was the most satisfactory. This is probably to be expected because they educate visitors to the sites on the importance of the two sites to humanity.

At the site guards level, the responses from the two sites were the lowest, with Sukur WHS having a 50% response while Osogbo WHS had 60%. Notwithstanding these low returns, among others, the returns were considered satisfactory. The low responses are to be expected because these people are illiterate and could only understand and answer the questionnaires through interpreters; they have little or no knowledge of what the research was aiming to achieve. At the site management level, the response, with 60% from Sukur WHS and 70% from Osogbo WHS and a mean of 65%, was surprisingly lower than expected.
The Questionnaires

The questionnaire consisted of twenty-five statements inviting responses on a five-point Likert scale. The questions were divided into four parts representing the four components of the selling concept, namely advertisement, sales promotion, personal selling and public relations, as tools for creating awareness of the two World Heritage Sites. The four sections deal with: the possible impact of advertisement in drawing the attention of tourists, students, children and researchers to the sites; the effect of supporting advertisement with sales promotion; the development of interactive relationships through personal selling; and the building of an image for the two World Heritage Sites through public relations.

Reservations on the Interpretation of Results

There was evidence of misinterpretation of the questionnaires by some illiterate stakeholders; this will surely either slightly increase or decrease the response percentages recorded. Also, management responses were biased because they were trying to paint a positive picture of the two sites in terms of visitors' patronage.

Possible Impact of Advertisements in Creating Awareness for the Two World Heritage Sites

The stakeholders, or 74% of respondents from the Sukur World Heritage Site, scored any form of advertisement of the site low. They were unequivocal that advertisement in all its components (radio, television, newspapers, flyers, outdoor billboards, etc.) was non-existent; 18% were undecided, while 8% felt there was enough advertisement.

In their ranking of the advertisement components, Sukur WHS respondents ranked radio highest (80%) as the most effective means of creating awareness, with television next at 18% and other components at 2%. Among Osun WHS respondents, 78% viewed advertisement as non-existent, 12% were undecided, and 10% were of the view that there was enough advertisement. Respondents from the Osun World Heritage Site scored radio advertisement very high (85%), while television was scored 14% and other advertisement components 1%.

Sales Promotion

The majority of respondents from the Sukur World Heritage Site scored sales promotion generally low, with only 28% saying that the sales promotional tools were enough. There were 12% who were undecided about the availability of sales promotional tools; however, an overwhelming majority, 66%, opined that the tools were neither used nor available. In ranking the sales promotional tools, most of the respondents (78%) ranked consumer promotion highest, while sales-force promotion was ranked next (20%) and trade promotion last (2%).

In the Osun World Heritage Site, the majority of respondents also scored sales promotion low, with 72% saying that there was no sales promotion put in place by the management of the site; 18% were undecided, while 10% were of the view that sales promotion was enough to create the necessary awareness. In ranking the sales promotion components, 74% ranked consumer promotion highest, while 18% ranked sales-force promotion best and 8% ranked trade promotion best.
Personal Selling

Respondents at the WHS were overwhelming in their submission that the two sites are extremely alienated from the communities where they are located. As a result of this, 78% of the respondents from Sukur WHS scored personal selling very low, 14% of the respondents were undecided, while 8% scored personal selling high. On the other hand, 71% of Sukur WHS respondents wished that a strong relationship be developed between the stakeholders, the visitors and the management of the site as a way of creating and sustaining awareness of the site; 18% of respondents were undecided, while 11% knew nothing about the issue at stake.

In the Osun World Heritage Site, 91% of the respondents scored personal selling low, 7% were undecided, while 2% were not bothered about what was going on at the site. However, 80% of respondents subscribed to having a strong relationship developed between the stakeholders, the visitors and the site management, while 18% were undecided and 2% were unaware of what the site stands for globally.

Building an Image for the Sites through Public Relations

There was general agreement that the two WHS required the development of a good and strong image around them. In the Sukur World Heritage Site, 75% of the respondents were of the view that the management of the site had not built a good enough image for the site to ensure continuous visits by visitors. There were 21% of respondents who felt the image of the site could be bettered by the management of the site, while 4% did not know what the management of the site was doing to enhance the image of the site.

In the Osogbo World Heritage Site, 68% of the respondents were of the view that the management of the site was not doing enough to develop a strong image for the site. There were 20% of respondents who viewed the site as popular enough, basing their judgment on the high turnout of people during the Osun-Osogbo Annual Festival; however, 12% of respondents strongly disagreed that the site is popular only for its fetish activities.

Discussion of Results

Findings revealed that the two sites were largely under-publicized to Nigerians and the world at large. The general apathy towards the sites is traceable to a lack of awareness of the existence and new statuses of the two World Heritage Sites. For example, radio advertisement is considered to have a far-reaching effect because of the wide coverage of its signals. It was also argued that more than 20 million Nigerians have mobile phones with radios on them (Nigerian Communication Commission, 2011). Also, bush pocket radios are cheap and preponderant among rural and urban dwellers. The respondents wondered why the management of these WHS was not taking advantage of this very important, omnipresent advertising tool. Radio also has the advantage of being listened to by more people than television; the purchase of a television set is prohibitively expensive, as is advertising on television. As for newspapers, the majority of the stakeholders were observed to be illiterate, so newspapers were out of the question.

The respondents were quick to say that sales promotion measures such as rebates on group visits and patronage rewards were not in existence. They wished that discounts on tickets, allowances for loyal clients and vigorous sales-force promotion be put in place.

On personal selling, the respondents were of the view that, as things stand, the management of the sites has succeeded in alienating the communities from their sites, making them elitist. They were against the restricted movements in the two World Heritage Sites.
The findings on public relations show that the image of the two World Heritage Sites is poor and needs to be 'polished'. For example, the Christians and Moslems in Osogbo and its environs view the Osogbo World Heritage Site as an idol-worshipping centre; they saw fetish activities as the core function of the site. In the same vein, the Sukur World Heritage Site was seen as being occupied by fetish people, who are derogatorily called pagans. The Christians and Moslems hardly visit the two World Heritage Sites on these grounds.

In line with this study, Hawkins, Best and Cooney (2001: 306-307) explain that, in creating awareness for products using a media strategy, "the proper approach is to determine to which media the consumers in the target market are frequently exposed, and then place the advertising messages in those media". This view was corroborated by a Ford executive (1981: 307), who said that "we must look increasingly for matching media that will enable us best to reach carefully targeted emerging markets. The rifle approach rather than the old shotgun approach".

In summary, the selling concept and its components (adverts, promotion and public relations) must be properly evaluated to determine which of them will suit a particular target market in the creation of the necessary awareness. Since media such as radio and television are well patronized, Hawkins et al. (2001) are saying that the heritage managers of these sites, as the marketers, must find the media that the target market is interested in and place appropriate advertising messages in those media.

Suggestions and Conclusion

In order to move the awareness creation for Nigeria's two sites to the next level, the management of the sites must be proactive by providing brochures and strategically placing them, free of charge, at all land borders, seaports and airports for foreigners coming into Nigeria.

Apart from this, striking features of the two sites should be displayed on large pictorial posters at all our ports to catch the attention of would-be visitors from outside the country. Large designated billboards should be made by the National Commission for Museums and Monuments to promote the two sites. Handbills should be mass-produced and distributed at motor parks, schools, universities and churches to attract visitors.

The Commission should liaise with the Ministry of Education to ensure that visits to museums and our World Heritage Sites are included in the curriculum of schools at primary and secondary levels. CDs, DVDs and tapes of happenings at the two sites should be played at all our ports and should be available for sale.

Plate 1: Map of Nigeria showing World Heritage Sites at Sukur in Adamawa State and Osun Osogbo in Osun State.
Plate 2: Landscape of World Heritage Site at Sukur, Adamawa State.
Plate 3: Landscape of World Heritage Site at Osogbo, Osun State.
Table 1: Composition of Sample and Responses. Source: Survey 2013; table adapted from Watson (1982).
The Factors of Local Energy Transition in the Seoul Metropolitan Government: The Case of Mini-PV Plants

As a way of enhancing urban sustainability, Seoul Special City, the capital of South Korea, has shown strong enthusiasm for urban energy transition by tackling climate change and expanding renewable energy. The Seoul Metropolitan Government (SMG) has pursued the "One Less Nuclear Power Plant (OLNPP)" strategy since April 2012, and specific policy measures, including a mini-photovoltaic (PV) plant program, were introduced to facilitate the energy transition. However, varying degrees of success were achieved by the 25 district-level local governments (Gu) with mini-PV plant programs. This study explored the reason why those local governments showed different levels of performance despite the strong will of the municipal government (SMG) to implement urban energy transition through the mini-PV plant program. The tested hypotheses were based on capacity, political context, public awareness and geographical diffusion. The findings indicated that institutional capacity, financial dependence, political orientation and public perception positively affected the performance of mini-PV plant installation at each district level. In particular, the political will of each district mayor played an important role in the implementation of the policy.

Introduction

Recent consensus on the need to tackle climate change through greenhouse gas (GHG) mitigation, together with the Fukushima Daiichi nuclear accident, has justified the call for a faster transition from fossil fuels and nuclear power to renewable energies [1-6]. Most countries have devised and implemented national energy plans to accelerate the mitigation of carbon emissions and the introduction of renewable energies [7]. Attempts at more sustainable development and energy transition have also been made at the city level [8,9]. Cities have the potential to play an important role in realizing a low-carbon economy, as they are large energy consumers and GHG emitters, as well as direct regulators of energy production and distribution [10-16]. Therefore, cities can be forerunners in energy transition by installing renewable energy equipment, enforcing requirements for energy-efficient buildings, improving public transportation infrastructure and encouraging carpooling and biking [17]. Indeed, a number of municipalities, including Barcelona, London, New York City and Tokyo, have put initiatives into place to contribute to the mitigation of GHGs and to transition urban areas to green energy [13,18].
South Korea has established various policies on GHG reduction and urban energy transition at both the national and the local level [19]. As the eighth largest energy consumer, with a high dependence on imported fossil fuels (in 2014, the share of fossil fuels in South Korea's primary energy supply and electricity generation reached 83.2% and 65.8%, respectively, and dependence on primary energy sources from overseas was 95.2% [20]), and the seventh biggest CO2 emitter in the world as of 2014 [20,21], South Korea has faced the dual challenges of increasing energy self-sufficiency and decreasing carbon emissions. Growing concern about climate change and the rise in oil prices during the mid-2000s led South Korean governments to the realization that a new paradigm was necessary to replace the existing model of unsustainable rapid industrial growth. The "Green Growth Strategy" was announced in 2008 as a new national vision comprising the following benchmarks: adopting policies reflecting adaptation to climate change, the achievement of energy independence, the creation of new engines for economic growth and the enhancement of South Korea's international status [22]. Under the strong political momentum of green growth, South Korean governments accelerated measures for sustainable development and introduced key projects including a national GHG inventory system, energy efficiency of lighting equipment, the greening of buildings, renewable portfolio standards (RPS) and an emission trading scheme [22].

The vision of green growth was designed mainly by the central government in a top-down manner, but it also encouraged local governments to develop their own energy and climate policies. Seoul Special City, the capital city of South Korea, was the first municipal government to present a city-level vision of the low-carbon economy. The Seoul Metropolitan Government (SMG) established a municipal climate fund in 2007, the first in South Korea, and established the "2030 Seoul Low-carbon Green Growth Master Plan" in 2009, which targeted a 40% reduction in GHG emissions by 2030 relative to 1990 levels [23] (pp. 62-63). A representative case is the "One Less Nuclear Power Plant (OLNPP)" project, which SMG launched in April 2012. Influenced by the Fukushima nuclear disaster and growing concern about climate change, SMG set a goal to increase the city's self-reliance on electricity and declared a pledge to pursue energy transition under the new slogan of OLNPP [24]. OLNPP is a municipal energy strategy to cope with climate change, environmental disasters and the energy crisis, either by saving, or by producing from other sources, the amount of energy equivalent to one nuclear power plant, i.e., two million TOE (tons of oil equivalent). This strategy would be supported by increased energy efficiency, energy savings and renewable energy production [25].
OLNPP is unique and noteworthy in that it has taken a stance different from the policy framework of the national government. The Green Growth Strategy and OLNPP shared the same objective of reducing GHGs in the energy sector, but they focused on different means to achieve this goal. Lee Myung-bak, the former South Korean president (2008-2012) and an ardent proponent of green growth, regarded nuclear power generation as a new major industry and as an effective way to achieve affordable low-carbon generation of electricity [22]. On the other hand, the goal of OLNPP has been to reduce reliance on fossil fuels and nuclear energy while emphasizing the importance of renewable energy and energy savings [26,27]. After the political will to achieve green growth weakened under the new Park Geun-hye administration, OLNPP filled the void by pushing ahead with its plans. The change in the national growth priority from "green growth" to "creative economy" and the fall of oil and gas prices resulted in decreased central-government support for renewable energy [19] and the reinforcement of a centralized energy system based on fossil fuels and nuclear energy [25]. In contrast, SMG set more ambitious goals when it launched the second phase of its program in August 2014, after exceeding expectations by achieving the first-phase goal of saving two million TOE by the end of 2014 [26,27]. The OLNPP and other municipal-level strategies for urban energy transition drew attention from the international community, which led to awards such as the 2013 Climate Action Leadership Award from the World Green Building Council (WGBC), the City Climate Leadership Awards 2014 from the C40 and Siemens, and the Global Earth Hour Capital 2015 award from the World Wide Fund for Nature (WWF) and the International Council for Local Environmental Initiatives (ICLEI) [26].

OLNPP instituted various sub-level programs that essentially required public awareness and citizen participation, such as Eco Mileage, an incentive scheme for citizens' energy savings, and the mini-PV plant program, which supports the installation of small residential-scale photovoltaic (PV) panels. SMG relied on Gu (district) offices to promote its policies and educate citizens in order to maximize the effects of the green policies. SMG also implemented an incentive program to award prizes to Gu governments showing high performance [28]. While the momentum of energy transition was strong at the SMG level, performance at the local district (Gu) level showed varying degrees of success. For example, the accumulated number of apartment-type mini-PV plants installed in each Gu ranged from 47 to 1322 as of June 2016.
Several studies have pointed out the importance of the voluntary compliance of local governments in climate change policy and the leverage of individual cities to decide their level of support [10,14,17,29]. However, these studies generally focused on programs developed voluntarily by local governments and implemented regardless of the involvement of either the state or the national government. Furthermore, many studies addressing policy diffusion between the federal government and state governments tended to focus on the mechanisms that upper-level governments used to make lower-level governments adopt a certain policy [30-33], but rarely paid attention to how lower-level governments conformed to federal or state policy. While most of the earlier studies on OLNPP and its sub-programs [23-25,34,35] suggested that the capacity and participation of Gu governments should be increased for the success of OLNPP [25], only a few of them paid sufficient attention to the variations among the Gu governments. In this context, this study explores more explicitly the reason why Gu governments showed varying levels of performance in implementing OLNPP and installing mini-PV plants despite the strong initiative from the municipal government.

Seoul is a megacity with a population of approximately 9.9 million as of 2015 [36], equivalent to 19.4% of the entire South Korean population. Its annual budget and collected local tax revenues were 36.3 trillion KRW and 15.4 trillion KRW as of 2014, respectively [37]. Considering that the population and financial size of this single city surpass those of most other major cities and provinces (Table 1), the case of SMG and its district governments has relevant implications for the relationship between the city and local neighborhoods, as well as between state and municipal governments. This paper consists of the following sections: the second section builds a series of hypotheses about the factors inducing local governments to engage in energy transition, based on the literature review and theoretical considerations; the third section examines the case of OLNPP and its sub-program, the mini-PV plant program, in SMG; the fourth section tests the hypotheses raised in the second section and explains the factors of energy transition in SMG. The conclusion summarizes the key findings of this study and discusses its implications.
Theoretical Background and Hypotheses

Policy diffusion studies assess and explain the adoption and rejection of similar policies among different governments, and the reason why those governments show different levels of readiness in accepting specific policies. Policy diffusion can be divided into two categories: vertical diffusion (from the state government to local governments, and vice versa) and horizontal diffusion (from a local government to other local governments, or from a nation to other nations). Whereas a policy can spread horizontally through competition, emulation and learning, vertical diffusion often occurs through coercion [30-33]. According to the vertical policy diffusion literature, upper-level governments can affect the policy adoption of lower-level governments by "providing resources to help overcome obstacles that prevent innovation" [30] (p. 403) in the form of coercion. Coercion involves various tools, such as official mandates, military conflict, funding and legitimization. The national government often forces states and municipalities to comply with its core policies, gives financial incentives to implement new policy [30,31], legitimizes the new policy and hushes dissenting voices through advocacy campaigns [30]. However, lower-level governments do not always conform to the pressure from upper-level governments. The theory of vertical policy diffusion does not clearly answer the question of what causes the varying levels of conformity of local governments in spite of the coercion of upper-level governments. Therefore, this study tries to explain the response of Gu governments in SMG by using factors stated in the literature on horizontal diffusion and local climate change policy. Local contexts and characteristics wield strong influence over climate change policy adoption and diffusion in cities [10,14,38]. Several factors contribute to the varying performance of energy transition and climate change policy at the local government level.

First of all, capacity is an important factor in determining the level of performance. Administrative and financial capacity have been the traditional determinants of the commitment of local governments to federal policy. For a policy vision to be translated into real action, institutions and human and financial resources must be secured. The size of the organization and budget availability are key resources of governments; local governments with many employees and high financial capacity are more likely to adopt innovative policies, including climate change and low-carbon projects [39,40]. Capacity constraints, including insufficient funding and manpower, inexperienced staff and high staff turnover, are the usual difficulties for state and local governments in implementing federal environmental programs at the state and local levels [17,41-43]. Since climate policy and low-carbon policy are relatively new policy areas, local governments often lack proper organizations and institutions [38] and information on the possible opportunities and benefits that they can obtain from new projects [44,45]. Furthermore, the insufficient budgets of local governments may make officials hesitant about allocating budget to new climate and low-carbon projects [17]. Therefore, local governments need to establish a new institutional framework and create a new team of experts to be responsible for the new duties [17,38].
Earlier studies showed that wealthy communities with high household incomes and per capita revenues were more likely to invest in and conduct renewable energy programs [10,14,46]. On the other hand, if a policy contained financial aid from the national or state governments, local governments with low financial capacity and low fiscal self-reliance tended to adopt the policy to get the subsidy and to revive the local economy [19,40]. Based on this literature, the following three hypotheses were tested in this study:

Hypothesis 1 (H1). Gu governments having high administrative capacities will show higher performance. (Performance includes both aspects of effectiveness and efficiency, since it is evaluated not only by achievement level, but also by cost and speed. This paper, however, focuses on the effectiveness aspect of performance: performance basically refers to how well the Gu government conforms to the plan of SMG, and it is measured by the number of installed mini-PV plants.)

Hypothesis 2 (H2). Gu governments having high financial capacities will show higher performance.

Hypothesis 3 (H3). Gu governments having low financial capacities will implement a policy relying on a subsidy from SMG.

Burch [38], however, noted that factors other than capacity also exist, pointing to Canadian cities that showed varying levels of performance in spite of sufficient financial, human and technical resources. Recent studies have focused on political, cultural and economic contexts as major factors behind diverging performance in environmental and energy policies.

Political context includes the leadership and partisanship of the municipal and local electorate. Effective political leadership and policy entrepreneurs are requisites for the successful diffusion and adoption of any new policy [17,38,47,48]. Although leadership and policy entrepreneurs can emerge at multiple levels, the most important driver is the political orientation of top leaders, such as governors and mayors, because they have the strong potential to set visions and agendas for their provinces and cities [34,45,49]. For instance, former Denver mayor Wellington Webb, a proponent of economic growth coupled with environmentalism, instituted various environmental programs and strong initiatives in order to establish the environmental leadership of the city. Lee et al.
[34] attributed the success of SMG's OLNPP to the firm actions of the mayor. The effect of partisanship is still debated in the literature, but it has been argued that liberal states and communities with democratic political orientations tend to welcome government intervention to deal with air pollution [50-52], to be more active on local climate action [10,14] and to have more households heated with solar energy [46]. For instance, Daley and Garand [42] found no significant correlation between patterns of state hazardous waste policies and political variables, including party control of state institutions and liberal citizenries. Rather, other partisanship issues, such as whether the leaders of upper-level and lower-level governments belong to the same party and whether the top leaders and the legislature share the same partisanship, may matter more. Inter-party conflicts are commonly found in the Congress [53], and a difference in partisanship may be a factor in refusing the policy of upper-level governments, especially under a divided-government setting [54]. Based on these political contexts, the following hypotheses can be tested in the case of SMG:

Hypothesis 4 (H4). Gu governments with district mayors exhibiting strong political resolve will show higher performance.

Hypothesis 5 (H5). Gu governments affiliated with a progressive party will show higher performance.

Hypothesis 6 (H6). Gu governments with mayors from different parties than the SMG mayor will show low performance.

Public awareness relates to psychological factors, such as the preferences, acceptance and beliefs of organizations and individual citizens [38]. Because a low-carbon economy is built only on the basis of every actor's concerted efforts at multiple levels, citizens' interest and participation are crucial to the successful implementation of climate change and low-carbon policies [55]. Recognition of the seriousness of climate change and the associated risks may attract citizens' interest and increase willingness to support climate policy [56,57]. Since risk perception can be understood through cultural rationality, which is geared toward personal and familiar experiences [58] (p. 132), demonstrable impacts of climate change and visible danger can trigger relevant policies [38]. Millard-Ball [59] also shows that citizen participation in furthering environmental issues enables local governments to adopt climate and low-carbon policies. On the other hand, cultural and psychological factors may function as barriers that hinder the successful implementation of policy. Citizens may be reluctant to commit themselves to those policies because they do not share enthusiasm about renewable energy [44] or think that individual action would not have a visible consequence [38]. To evaluate this evidence from the literature, the following hypothesis is tested:

Hypothesis 7 (H7). Gu governments with a high public awareness of climate change and low-carbon policies will show higher performance.

Finally, geographical diffusion has been found in several policy cases. In the USA, state governments tend to adopt or tighten regulations if adjacent states have imposed stringent regulations [39]. For instance, city mayors whose neighbor cities participated in the U.S.
Mayors' Climate Protection Agreement (MCPA) were more likely to join the MCPA. This horizontal diffusion may be a result of communication and information sharing between officials in neighboring cities, or of increased public pressure resulting from heightened regional awareness of the initiative [14] (p. 54). On the other hand, Shipan and Volden [32] argue that viewing policy diffusion as geographic clustering may be misleading and outdated. They explain that the phenomenon of geographically neighboring states adopting the same policy arises not from simple geographic proximity, but from their political, economic and demographic similarities. Furthermore, the assumption that communication between neighboring local governments or states will be more frequent is outdated, given the development of communication technology and transportation. In this context, it is worth examining whether the classic view of geographic diffusion is relevant to the Gu governments in Seoul:

Hypothesis 8 (H8). Gu governments whose neighbors show high performance will also show high performance.

Economic factors are also important. When manufacturing is a major industry supporting the local economy, the government may not be able to push strong policies to mitigate GHGs, due to pressure from the industrial sector as well as concern about the effects of such policies on the local economy [14]. Some local governments, however, sometimes decide to invest in climate technology and renewable energy projects to develop the emerging market; they expect those projects to create new businesses and jobs and to build their capacity to dominate future markets [12]. In this paper, no hypothesis on the economic context was tested, as the case study of the mini-PV plant program mainly concerns the residential sector and is therefore not well suited to measuring the economic preferences of the commercial and industrial sectors.
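As an illustration of how H1-H8 could be examined jointly at the district level, the sketch below regresses accumulated mini-PV installations on proxies for the hypothesized factors. This is a hedged sketch, not the study's actual estimation code: the dataset and column names are assumptions, and H4 (the mayor's political will) is hard to proxy quantitatively and is omitted here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical district-level dataset: one row per Gu (25 districts)
df = pd.read_csv("gu_mini_pv.csv")

y = df["installed_pv"]                    # accumulated mini-PV installations
X = df[["staff_size",                     # H1: administrative capacity
        "fiscal_self_reliance",           # H2/H3: financial capacity/dependence
        "mayor_progressive",              # H5: progressive-party mayor (dummy)
        "same_party_as_smg",              # H6: alignment with the SMG mayor (dummy)
        "awareness_index",                # H7: public awareness proxy
        "neighbor_avg_pv"]]               # H8: mean installations of adjacent Gus
X = sm.add_constant(X)

print(sm.OLS(y, X).fit().summary())       # coefficient signs/p-values speak to each hypothesis
```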
Case: One Less Nuclear Power Plant and the Mini-PV Plant Program

Seoul is a typical metropolitan city that consumes a far greater amount of energy than it produces and emits a substantial amount of GHGs. In 2011, shortly before OLNPP was launched, Seoul consumed 46,903 GWh of electricity, 10.3% of the national electricity consumption, while generating only 1384 GWh, and 90.9% of its total annual GHG emissions (49 million tCO2) came from the energy sector (Figure 1) [27]. Under these conditions, a series of events occurred in the early 2010s that propelled SMG to recognize problems such as low energy self-sufficiency and to devise a policy to change this situation by using more sustainable energy. The direct trigger was the power outage of 15 September 2011, which rang an alarm bell by causing shortages on both the supply and demand sides that precipitated a potential energy crisis. The major blackout was a warning signal that the level of energy self-sufficiency should be increased to ensure the energy security of Seoul [27,34]. Moreover, the conflict between the central government and the residents of Miryang, a rural area in the southern part of South Korea, over the construction of a high-voltage transmission tower gave the issue of Seoul's self-sufficiency an ethical importance [24,34]. Many transmission towers and grid lines were needed to supply electricity to the major cities. The Miryang residents, however, resisted the government's decision on the grounds that the electromagnetic waves from high-tension wires and towers would be harmful to them. This event highlighted the ethical issue of sacrificing rural residents for the sake of the electricity supply to big cities such as Seoul [24]. Furthermore, the Fukushima nuclear accident in March 2011 served as a wake-up call alerting cities to the danger of nuclear energy and emphasizing the need for renewable energy.

At that time, South Korea depended on nuclear energy for 31% (154,723 GWh) of its total national electricity generation [60], and the central government had a plan to expand nuclear power plants to cope with increasing energy demands [61]. The Fukushima Daiichi nuclear disaster and the consequent declarations of nuclear power phase-out from a few countries encouraged Seoul to adopt the vision of increasing energy self-sufficiency with more sustainable energy [34]. In April 2012, SMG finally launched OLNPP, which aims to reduce energy demand and increase renewable energy sources in order to mitigate GHGs and to leave a desirable environment to the next generation [27].
Under OLNPP Phase 1 (2012-2014), SMG aimed to replace two million TOE (8760 GWh) of energy, the equivalent of the electricity generated by one nuclear power plant, with renewable energy production, more efficient energy use and energy savings by 2014, thereby reducing dependence on conventional energy sources, including nuclear energy. According to the 'Comprehensive Plan for OLNPP (2012-2014)', which SMG announced in April 2012, the OLNPP policy was divided into three areas (renewable energy production, energy efficiency and energy saving) and consisted of six policy categories (expand production of renewable energy; implement the building retrofit program (BRP); establish an environmentally-friendly and high-efficiency transportation system; create jobs in the energy industry; shift to a low-energy urban spatial structure; and create a civic culture promoting energy conservation), as well as 78 sub-programs [27,60]. Key sub-programs in the three areas are presented in Table 2.
As of June 2014, a total of 1.33 trillion KRW (247.3 billion KRW of municipal funds, 48.7 billion KRW of national funds and 1.04 trillion KRW of private capital) had been spent implementing OLNPP [61]. SMG achieved its two million TOE goal, recording 2.04 million TOE in the first half of 2014 (Table 3). This performance is all the more remarkable when compared with the whole nation and other major cities. Whereas national electricity consumption rose by approximately 2.4% from 2012 (466,593 GWh) to 2014 (477,592 GWh), Seoul registered a decrease of approximately 4.7%, from 47,234 GWh to 45,019 GWh [62]. Other major cities, except for Busan, showed increases or only marginal decreases in electricity consumption (Figure 2).

SMG began discussions on OLNPP Phase 2 in January 2014, anticipating that it would exceed its goals, and set a new vision, "Seoul, an Energy Self-Reliant City", with a new goal of achieving 20% self-sufficiency in electricity by 2020. SMG revealed its ambition to realize energy self-reliance, energy sharing and energy participation through institutional reform and social structure innovation [63]. In comparison to Phase 1, OLNPP Phase 2 tended to place more emphasis on citizen participation. For instance, SMG focused on promoting large-scale projects for renewable energy production during Phase 1, but declared a small-scale, participatory, decentralized production system in Phase 2 [63]. As of 2014, the share of residential electricity consumption in Seoul was higher than the national level: Seoul recorded 28.64% (12,892 GWh out of 45,019 GWh), whereas the national level was 13.12% (62,675 GWh out of 477,592 GWh) [62]. Moreover, apartments are the most popular type of residence (44.8%) in Seoul [64]. Considering such housing types and the characteristics of electricity consumption in Seoul, demand management of residential buildings, particularly apartments, was of considerable importance [24]. Furthermore, the large-scale renewable energy facilities installed during Phase 1 did not contribute significantly to the improvement of the self-sufficiency of Seoul. For these reasons, SMG proposed the installation of small-scale PV equipment in Phase 2 [63].
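The percentage changes quoted above follow directly from the reported consumption figures; a quick check in Python (all numbers taken from the text):

```python
# Electricity consumption figures quoted in the text, in GWh.
national_2012, national_2014 = 466_593, 477_592
seoul_2012, seoul_2014 = 47_234, 45_019

def pct_change(before: float, after: float) -> float:
    """Relative change in percent between two readings."""
    return 100.0 * (after - before) / before

print(f"national: {pct_change(national_2012, national_2014):+.1f}%")  # about +2.4%
print(f"Seoul:    {pct_change(seoul_2012, seoul_2014):+.1f}%")        # about -4.7%
```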
SMG has been aggressively implementing the mini-PV plant program for apartments, as can be seen from the fact that SMG designated the program as one of the ten core projects of OLNPP Phase 2 [63]. The mini-PV plant program covers part of the installation cost for those who want to install small-scale PV panels on apartment balconies. The amount of the subsidy varies with the capacity of the module: residents can receive 1500 KRW/W for modules below 200 W, 1000 KRW/W for 200-500 W modules and 500 KRW/W for 500-1000 W modules. SMG began this program with a pilot of approximately 2500 households in 2014 and planned to distribute 10,000 mini-PV plants every year, thereby building 40,000 plants by 2018 [24,63]. The keen interest and participation of district governments and citizens are essential to achieving these goals, but the performance records show that not every Gu government and citizen shared SMG's enthusiasm. This paper explores the factors that caused those differences.

Data Collection

As the mini-PV plant program does not have a long history, it is difficult to construct solid datasets based on observations. Since statistical inference through regression analysis on insufficient data may lack real-world significance, this study tries to explain the gaps in policy implementation among Gu governments without regression. Instead, descriptive statistics, graphs and close examination of qualitative data, such as transcripts of the SMG and Gu councils, administrative documents and newspaper articles, are used to understand and visualize the performance differences. The four factors stated in the hypotheses (capacity, political context, public awareness and geographical diffusion) are measured with several indicators collected from various sources (Table 4). Performance, the dependent variable, is measured by the number of installed mini-PV plants in each Gu. Capacity comprises institutional capacity and financial capacity. The number of relevant government employees and the existence of ordinances on climate change and low-carbon green growth reflect institutional capacity, while the amount of collected local tax, tax per household and financial independence indicate the financial capacity of the Gu government. Political context includes political inclination, indicated by the party orientation of the district mayor and the district council members, and the political will of the district mayor, inferred from council transcripts and ICLEI membership. Public awareness measures citizens' perceptions of climate change, trust in governments and energy-saving lifestyles. Finally, geographical diffusion is presented in the maps. The most recent available data were collected: performance, geographical data and ICLEI membership as of June 2016, and the other indicators as of December 2014.
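To make the tiered subsidy scheme described above concrete, the following minimal Python sketch computes the payment for a given module capacity. How the boundary values (exactly 200 W or 500 W) are assigned is an assumption here, since the text gives the ranges without specifying the tier of the boundaries.

```python
def mini_pv_subsidy(capacity_w: int) -> int:
    """Subsidy in KRW for a balcony PV module of the given capacity (W),
    per the tiered rates described above; boundary assignment assumed."""
    if capacity_w < 200:
        rate = 1500   # KRW per watt, modules below 200 W
    elif capacity_w <= 500:
        rate = 1000   # 200-500 W modules
    elif capacity_w <= 1000:
        rate = 500    # 500-1000 W modules
    else:
        raise ValueError("modules above 1000 W are not covered")
    return rate * capacity_w

# Example: a 250 W module would receive 1000 KRW/W x 250 W = 250,000 KRW.
print(mini_pv_subsidy(250))
```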
Performance Gap among Gu Governments

SMG's ambition of mobilizing active participation from all district-level (Gu) governments in the mini-PV plant program met with varied outcomes. As of June 2016, 7176 mini-PV plants for apartments had been installed in Seoul. In Yongsan-gu, a mere 47 plants were installed, while 1322 plants were installed in Nowon-gu (Figure 3a). Comparing raw installation counts across Gu districts can be criticized, however; the absolute number may not be appropriate for comparing levels of policy implementation, since a Gu with more apartments may also have more mini-PV plants. Thus, this study used the proportion of apartment units with installed mini-PV plants out of all apartment units in each Gu, as presented in Figure 3b. The proportion also varies by Gu: Nowon-gu has the highest proportion, 0.75%, and Gangnam-gu the lowest, 0.04%. As these very low proportions show, the mini-PV plant program is a fledgling program that aims to expand in absolute terms in order to achieve SMG's main goal of 40,000 plants by 2018.
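The normalization argument above can be illustrated with a short sketch. The plant counts are the June 2016 figures quoted in the text; the apartment-unit counts are assumptions for illustration only (Nowon-gu's is back-computed from its reported 0.75% proportion).

```python
import pandas as pd

df = pd.DataFrame({
    "gu":              ["Nowon-gu", "Yongsan-gu"],
    "plants":          [1322, 47],          # installed mini-PV plants (June 2016)
    "apartment_units": [176_000, 30_000],   # assumed housing stock, illustration only
})

# Normalizing by housing stock is what makes Gu of different sizes comparable.
df["proportion_pct"] = 100 * df["plants"] / df["apartment_units"]
print(df.sort_values("proportion_pct", ascending=False))
```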
Capacity

The number of government employees working in the climate change or renewable energy teams and the existence of Gu-level ordinances on climate change and low-carbon green growth were used as administrative capacity indicators, while the amount of local tax collected, the tax per household and the financial independence of each Gu were used to gauge the financial capacity of Gu governments. Table 5 presents correlation coefficients between the performance of mini-PV plant installation and the capacity indicators of Gu governments.

A significant correlation exists between the ordinance on climate change and mini-PV plant installation. Figure 4 compares the number of mini-PV plants installed in Gu governments that have an ordinance on climate change with those that do not. The average number of mini-PV plants installed in the eight Gu governments with a climate change ordinance, 503.4, is much higher than in the other 17 Gu governments, where the average was 185.2 plants. Another ordinance covering the dissemination of renewable energy, the ordinance on low-carbon green growth, does not show a significant correlation with the number of installed mini-PV plants. However, it is noteworthy that, among the 17 Gu governments without a climate change ordinance, the three lowest-ranked (Jung-gu (J), Gangnam-gu (GN) and Yongsan-gu (YS)) have not even established an ordinance on low-carbon green growth, while some Gu governments showing relatively high performance, such as Seongbuk-gu (SB) and Dongdaemun-gu (DDM), do have one. This pattern suggests that district-level governments possessing the relevant institutional and legal frameworks are more likely to implement a particular policy successfully.
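The two comparisons in this subsection (an indicator-performance correlation as in Table 5, and the ordinance group means of Figure 4) can be sketched as follows. The data here are synthetic stand-ins; the real analysis would use the indicators collected in Table 4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
plants = rng.integers(40, 1400, size=25)   # installed mini-PV plants per Gu
staff = rng.integers(4, 17, size=25)       # environment-team headcount per Gu
has_ordinance = rng.random(25) < 8 / 25    # 8 of the 25 Gu have an ordinance

# Pearson correlation between a capacity indicator and performance.
r, p = stats.pearsonr(staff, plants)
print(f"staff vs. plants: r = {r:.2f}, p = {p:.3f}")

# Group means, as in the climate change ordinance comparison.
print("mean with ordinance:   ", plants[has_ordinance].mean())
print("mean without ordinance:", plants[~has_ordinance].mean())
```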
The correlation between human resources, measured by the number of employees on the environmental team, and the number of mini-PV plants is not clear. Gu governments assigned 4-16 people to the teams dealing with climate change and renewable energy issues. The patterns vary: small staff and high performance (Songpa-gu (SP) and Mapo-gu (MP)), large staff and high performance (Nowon-gu (NW) and Gangdong-gu (GD)), small staff and low performance (Gwangjin-gu (GJ) and Yongsan-gu (YS)) and large staff and low performance (Gangnam-gu (GN) and Seocho-gu (SC)).

Figure 5a presents the relationship between the number of mini-PV plants installed and tax amounts per household, which indicate the income level of residents and the wealth of the Gu governments, with red lines marking the median value of each variable. While the most affluent Gu governments, such as Jung-gu (J), Gangnam-gu (GN), Jongno-gu (JN) and Seocho-gu (SC), show very low performance in mini-PV plant installation, Nowon-gu (NW), Seongbuk-gu (SB) and Dobong-gu (DB), whose tax amounts per household are below the median, show very high performance. For financial independence, the correlation coefficient and the scatter plot show that Gu governments with low financial independence from SMG tended to install more mini-PV plants. This supports Hypothesis 3: Gu governments with low financial capacities are more likely to implement a policy that relies on a subsidy from SMG.

Political Contexts

Table 6 presents correlation coefficients between mini-PV plant installation and political factors. Political factors are divided into two categories: the characteristics of the district mayors of the Gu governments and the political composition of the electorate. First, earlier studies in the U.S.
pointed out that states and communities with Democratic political orientations were more active in local climate action and renewable energy programs [10,14,46]. In the mini-PV plant case, Hypothesis 5, that Gu governments supporting a progressive party will show higher performance, is partly supported by the data. The current mayor of Seoul was affiliated with the Minjoo (Democratic) Party, an opposition party. According to Table 6, Gu governments with the same political orientation as the mayor showed higher performance in mini-PV plant installation. Political orientation was defined by the majority party of the Gu government council. However, Gu governments with a higher proportion of Minjoo Party council members did not always show higher performance. From this result, it can be argued that the important driver of mini-PV plant program implementation was not the overall conservative or progressive political atmosphere of the electorate, but shared party affiliation among the district mayor, the Gu government council and the SMG mayor. The correlation between mini-PV plant installation and the party affiliation of Gu district mayors can be explained in the same vein. Figure 6 presents the performance of the five Gu governments whose mayors belong to a different party than the mayor of Seoul. Their average number of installed mini-PV plants is 152, which is lower than the average of 320.8 for the other 20 governments. This result suggests that district mayors from a different party than the SMG mayor may not give full cooperation to SMG's policy.

The political orientation of the district mayor, however, does not explain everything, as Songpa-gu (SP) shows relatively high performance although its mayor belongs to the Saenuri Party. Rather, the willingness of Gu district mayors themselves seems to exert more influence on policy implementation. Indeed, Songpa-gu has been implementing its own energy sharing plant since 2009, investing money in solar plants and using the profits to assist low-income people. Taken together, these facts demonstrate the strong will of the previous district mayor. One indicator of district mayors' awareness of climate change and their political will to handle the issue is membership in ICLEI (Local Governments for Sustainability), the global network of cities committed to becoming more sustainable and resilient communities [67]. Since ICLEI requires a commitment
to sustainable development and climate change mitigation, and its members must pay annual membership fees, it is difficult to join ICLEI without the district mayor's decision. In Figure 6, the seven Gu governments with ICLEI membership show relatively high performance in mini-PV plant installation: they installed an average of 448.4 plants, whereas the other 18 governments installed only 224.2 plants on average.
Public Awareness

Public awareness covers the perceptions and practices of individual citizens. Even if SMG and the Gu governments hoped to expand mini-PV plants aggressively, it was difficult to achieve the goal without residents' consent, because the mini-PV plants are installed in their homes. From this perspective, public awareness and perception of climate change and renewable energy, as well as understanding of the mini-PV plant program, are regarded as important factors determining the success of the program. Table 7 presents the correlations between mini-PV plant installation and cultural factors. Public perception was measured by self-reported scores on the perception of natural and nuclear disasters and on trust in governments. Energy-saving practices were also measured by self-reported scores on energy savings, such as electricity and water savings [68].

The indicators showed significant correlations for trust in governments and for energy savings. As shown in Figure 7a,b, Gu governments whose residents have more trust in governments and more energy-saving practices are more likely to install mini-PV plants. It is not surprising that citizens who trust their governments show high acceptance of and compliance with those governments' policies, and that citizens who want to save energy install renewable energy equipment to conserve finite resources and reduce electricity bills. Perceptions of natural and nuclear disasters do not show a significant correlation with the number of installed mini-PV plants, except in the interesting case of Nowon-gu (NW). The X axis of Figure 7c is the average score for the perception of natural and nuclear disasters. Nowon-gu recorded the highest scores in both surveys and the highest performance in mini-PV plant installation. In November 2011, a citizen discovered by chance that the asphalt of Wolgye-dong in Nowon-gu was contaminated with radiation, and Nowon-gu removed and repaved the contaminated asphalt. This incident led the mayor of Nowon-gu to focus on energy transition [24]. It may also have influenced the citizens: residents of Nowon-gu, alarmed by the radioactive contamination of their neighborhoods after viewing images of the Fukushima Daiichi nuclear disaster, may have become more interested in renewable energy issues.
The survey conducted by Baek and Yun [24] offers further insight into public awareness. Residents of Nowon-gu installed the mini-PV plants because of the subsidies, concerns about climate change and resource depletion, and anxiety about nuclear power plants. On the other hand, residents who did not install the mini-PV plants responded that they would not consider installation because of the cost, unsuitable locations and poor appearance. Furthermore, a public official of Gangnam-gu stated that building owners were reluctant to apply for mini-PV plant installation because of the cost and the appearance of the building [70]. These survey results reveal that cultural and psychological factors play important roles in the strategy of engaging citizens to participate in the mini-PV plant program.
Geographical Diffusion

The distribution of mini-PV plants by Gu is visualized in Figure 8 to determine the presence or absence of geographical policy diffusion. Figure 8a,b shows, respectively, the equal interval map and the natural break map based on the number of mini-PV plants installed in each Gu. As the figures show, it is difficult to assert that Gu governments whose neighbors have large numbers of installed mini-PV plants also have many plants. For example, the number of plants installed in Jungnang-gu (JNG) is below average, although its neighbors, Nowon-gu (NW), Seongbuk-gu (SB) and Dongdaemun-gu (DDM), have relatively high installation levels. Figure 8d,e shows the equal interval and natural break maps based on the proportion of apartment units with installed mini-PV plants. It is still unclear, however, whether spatial autocorrelation exists, even though some clustering patterns are clearer than those in Figure 8a,b. Although surrounded by neighbors showing a high proportion, such as Dongdaemun-gu (DDM), Gangdong-gu (GD), Yangcheon-gu (YC) and Mapo-gu (MP), Gwangjin-gu (GJ) and Gangseo-gu (GS) show a low proportion of mini-PV installation.

Moran's I, the most commonly-used statistical test for spatial autocorrelation in univariate map patterns [71] (p. 187), is insignificant at the 95% confidence level in both cases. Moran's I for the number of installed mini-PV plants is 0.071, with a p-value of 0.17 under 999 permutations; Moran's I for the proportion is 0.143, with a p-value of 0.10 under 999 permutations. No evidence was found to reject the null hypothesis of zero spatial autocorrelation. Therefore, the hypothesis that Gu governments with high-performing neighbors will themselves show high performance is not supported for mini-PV plant installation. Moran scatter plots are presented in Figure 8c,f.
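The Moran's I permutation test reported above can be reproduced with a short routine. The sketch below is a minimal NumPy implementation; it assumes a contiguity weight matrix w for the 25 Gu is already available (in practice it would be derived from the district boundaries, for example with a queen-contiguity rule), and it mirrors the 999-permutation pseudo p-values quoted in the text.

```python
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I of values x under spatial weights w,
    where w[i, j] > 0 when areas i and j are neighbors (diagonal zero)."""
    z = x - x.mean()
    return len(x) / w.sum() * (z @ w @ z) / (z @ z)

def moran_permutation_test(x, w, n_perm=999, seed=0):
    """Observed Moran's I and a one-sided pseudo p-value obtained by
    randomly reassigning values to areas, as in the 999-permutation
    tests reported in the text."""
    rng = np.random.default_rng(seed)
    observed = morans_i(x, w)
    sims = np.array([morans_i(rng.permutation(x), w) for _ in range(n_perm)])
    p_sim = (np.sum(sims >= observed) + 1) / (n_perm + 1)
    return observed, p_sim
```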
Discussion

The overall results of the hypothesis tests are summarized in Table 8. Among the capacity factors, organizational capacity measured by human resources did not correlate significantly with the performance of mini-PV plant installation, whereas institutional capacity and a legal framework appear to have aided successful policy implementation. Gu governments with low financial independence were more likely to implement the mini-PV plant program, which came with an SMG subsidy, while Gu governments with high financial capacities showed rather lower performance. Political factors were important for successful policy implementation. Gu governments whose district mayor and Gu council majority belonged to the same Minjoo Party as the SMG mayor showed relatively higher performance, and the strong political motivation of district mayors was found to be a major driver of mini-PV plant installation. Public awareness, including risk perception of climate change and nuclear disaster, energy-saving practice and trust in government, was also a necessary condition. Finally, the performance of mini-PV plant installation showed no spatial autocorrelation, indicating an absence of geographical diffusion.

Table 8. Test results for each hypothesis.

Capacity
- H1. Gu governments having high administrative capacities will show higher performance. (Partly supported)
- H2. Gu governments having high financial capacities will show higher performance. (Not supported)
- H3. Gu governments having low financial capacities are more likely to implement a policy relying on a subsidy from SMG. (Supported)

Political context
- H4. Gu governments with district mayors exhibiting a strong political resolve will show higher performance. (Supported)
- H5. Gu governments supporting progressive parties will show higher performance. (Partly supported)
- H6. Gu governments whose mayor belongs to a different party from the SMG mayor will show low performance. (Supported)

Public awareness
- H7. Gu governments where citizens exhibit a high public awareness of climate change and the low-carbon concept will show higher performance. (Supported)

Geographical diffusion
- H8. Gu governments whose neighbors show high performance will also show high performance. (Not supported)

The crucial factors for successful mini-PV installation were administrative capacity, financial capacity and the political will of each district mayor. First, the most obvious correlation was between having an ordinance on climate change and mini-PV plant installation. Considering that the provisions of a climate change ordinance typically state the local government's responsibility to work toward climate change mitigation and adaptation, officials and council members of Gu governments with such an ordinance may have a better understanding of the importance of GHG reduction and renewable energy, and may have regarded the mini-PV plant program as a good opportunity to act on their ordinances.
Second, an interesting finding was the correlation between financial capacity and the number of mini-PV plants installed. Many studies have found a positive relationship between community wealth and the adoption of renewable energy programs [10,14,46], but mini-PV plant installation produced a somewhat negative result. The richest Gu governments showed very low installation performance, while Gu governments with tax amounts per household below the median showed relatively high levels of installation. This raises the possibility that additional factors affect the adoption and performance of renewable energy programs. One such factor is the financial independence of Gu governments. The analysis showed that Gu governments with low financial independence from SMG tended to install more mini-PV plants, which can be interpreted as financially dependent Gu governments being more likely to comply with SMG's policy. Financial aid from SMG was a good way to fund and implement new programs for Gu governments that had no budget surplus to allocate to them. Even if a Gu government recognized the seriousness of climate change and the importance of renewable energy, investing a considerable sum in renewable energy programs would face resistance, because many other programs the government must pursue, including welfare and education, also require continuous funding. In this situation, a subsidy from SMG allowed Gu governments to implement renewable energy programs by securing an external funding source. Furthermore, Gu governments had an incentive to enhance their performance, since Gu governments with high performance that exhausted their allotted financial support could more easily obtain larger subsidies the following year.
Third, as shown by the case of Songpa-gu briefly introduced in Section 4.3, the will of the district mayor was an important factor in conforming to SMG's policy and pursuing energy transition in the Gu government. The district mayor who best exemplified a strong commitment to PV generation was the mayor of Nowon-gu (NW), a fervent supporter of renewable energy, solar energy in particular. For example, he encouraged officials of the Gu office to join the "Solar and Wind Generation Cooperative Union", so that about half of the union's members were public officials [35]. Moreover, when this union planned to build a solar PV plant in the parking lot of the Gu office and he found that a provision of the existing energy ordinance was a barrier to construction, he amended the ordinance to facilitate the plant's construction. By contrast, the Gangnam Solar Generation Cooperative Union, facing the same problem, finally gave up its original site because the Gangnam-gu government adhered to the existing energy ordinance [35]. Nowon-gu was the first Gu government to implement a mini-PV plant program, ahead of SMG's policy: it allocated 120 million KRW to installing mini-PV plants for 400 households in 2014 [24]. After SMG's implementation, Nowon-gu distributed flyers and posters promoting the mini-PV plant program and held information sessions for residents. Moreover, if more than 10 households applied together for mini-PV plant installation, the Gu government gave them an additional subsidy of 50 thousand KRW. Emphasizing renewable energy dissemination despite an insufficient budget and the strong objection of a Gu council member from a different party [72-74] can be read as the realization of the district mayor's motivation. Similarly, Guro-gu (GR) and Yangcheon-gu (YC), which gave additional subsidies in 2015, showed higher performance.

On the other hand, geographical diffusion has not yet occurred. As Shipan and Volden [32] stated, geographical proximity no longer confers the advantage in communication and information sharing that it did in the past. Moreover, the actual economic or welfare benefits that residents received from installing mini-PV plants have not been assessed, as the mini-PV plant program is still in its initial stage. This means that the time is not yet ripe for horizontal policy diffusion through competition and learning among Gu governments.

Future research may include the demographic and socioeconomic characteristics of the residents of each Gu. Figure 9 shows that mini-PV plant installation has moderately positive relationships with the number of elderly people and with home ownership. Demographic factors are often considered in research on public awareness, but are rarely involved in policy diffusion research. Considering the mechanism by which demographic characteristics may shape public awareness, and public awareness may in turn affect policy adoption, demographic factors may be worthy of attention. Home ownership is a particularly interesting factor. The trend that Gu governments with more homeowners perform better on mini-PV plant installation suggests that some people who favor renewable energy and would like to install mini-PV plants do not do so because they lease, rather than own, their homes.
Conclusions

This paper identified gaps in policy implementation and performance among Gu governments under SMG in the case of the mini-PV plant program, and sought to explain the causes of these differences by testing eight hypotheses. Among the outcomes of these tests, an interesting phenomenon was found with respect to financial capacity, which distinguishes the SMG case from cases in other countries. It was also found that some factors that policy diffusion theory rarely considers, such as home ownership and the number of elderly residents, may correlate with the expansion of renewable energy.

As the mini-PV plant program is still young, the findings of this study are inevitably limited. In addition, the lack of full data and the short history of the program imposed methodological limitations and a reliance on descriptive statistics. Future, carefully executed studies should monitor the program and re-evaluate it quantitatively and robustly as data accumulate, to supplement the current findings.

Nevertheless, this exploration of the initial stage of local energy transition in SMG is important for gauging the future progress of OLNPP and other sustainable development measures in SMG. Furthermore, analyzing the relationship between city- and district-level governments, which has attracted little attention in policy diffusion research, may open further opportunities for future research. Shipan and Volden [32] argue that policy diffusion is also affected by the visibility of the actual policy: if the effects of a policy are clear and highly observable, it will spread much faster. To achieve SMG's goal of spreading energy transition through renewable energy dissemination, the effects and benefits of mini-PV plants need to be fully shared among the residents and mayors of Gu governments. Finally, institutional improvements to lower the barriers that impede mini-PV plant installation by tenants, such as enabling reinstallation after moving, should also be considered as a future task.
Figure 3. (a) The number of installed mini-PV plants by Gu district; (b) the proportion of apartment units where mini-PV plants were installed out of all apartment units in the Gu districts.

Figure 4. Comparison of the number of mini-PV plants: Gu governments that have established ordinances on climate change versus those that have not.

Figure 5. Scatter plots of the number of mini-PV plants installed and the financial capacity of the Gu governments: (a) tax amounts per household; (b) financial independence.

Figure 6. Comparison of the performance of mini-PV plant installation under different political conditions: (a) the number of installed mini-PV plants; (b) the proportion of apartment units where mini-PV plants were installed.

Figure 7. Scatter plots of the number of mini-PV plants installed and cultural factors of Gu governments: (a) trust in governments; (b) energy-saving practice; (c) risk perception.
Figure 8. Geographical distribution of mini-PV plant installation: (a) equal interval map of the number of mini-PV plants installed; (b) natural break map of the number of mini-PV plants installed; (c) Moran scatter plot of the number of mini-PV plants installed; (d) equal interval map of the proportion of apartment units where mini-PV plants were installed; (e) natural break map of the proportion of apartment units where mini-PV plants were installed; (f) Moran scatter plot of the proportion of apartment units where mini-PV plants were installed.

Figure 9. Scatter plots of the number of mini-PV plants installed and demographic and socioeconomic factors of Gu governments: (a) the elderly; (b) households owning a house.

Table 1. Population (2015) and financial indicators (2014) of major cities and provinces in South Korea.

Table 2. Key sub-programs of "One Less Nuclear Power Plant (OLNPP)" Phase 1 (including Eco Mileage, the Energy Guardian Angel Corps, the Good Shops program and car sharing).

Table 4. Data collection. ICLEI, International Council for Local Environment Initiative.

Table 5. Correlation coefficients between the performance of mini-PV plant installation and capacity indicators.

Table 6. Correlation coefficients between the performance of mini-PV plant installation and political indicators.

Table 7. Correlation coefficients between the performance of mini-PV plant installation and perceptional indicators (columns: the number of mini-PV plants installed; the proportion of mini-PV plants installed). ** p < 0.05, * p < 0.10.

Table 8. Test results for each hypothesis.
Novel prostate cancer immunotherapy with a DNA-encoded anti-prostate-specific membrane antigen monoclonal antibody

Prostate-specific membrane antigen (PSMA) is expressed at high levels on malignant prostate cells and is likely an important therapeutic target for the treatment of prostate carcinoma. Current immunotherapy approaches to target PSMA include peptide, cell, vector or DNA-based vaccines, as well as passive administration of PSMA-specific monoclonal antibodies (mAbs). Conventional mAb immunotherapy has numerous logistical and practical limitations, including high production costs and a requirement for frequent dosing due to the short serum half-life of mAbs. In this report, we describe a novel strategy of antibody-based immunotherapy against prostate carcinoma that utilizes synthetic DNA plasmids encoding a therapeutic human mAb that targets PSMA. Electroporation-enhanced intramuscular injection of the DNA-encoded mAb (DMAb) plasmid into mice led to the production of functional and durable levels of the anti-PSMA antibody. The anti-PSMA antibody produced in vivo controlled tumor growth and prolonged survival in a mouse model, likely through an antibody-dependent cellular cytotoxicity (ADCC) effect mediated with the aid of NK cells. Further study of this novel approach for the treatment of human prostate disease and other malignant conditions is warranted.

Introduction

Prostate cancer is the second most frequently diagnosed cancer and the sixth deadliest cancer in males worldwide [1-3]. In the USA, prostate cancer is the most commonly diagnosed cancer in males over the age of 50 years and ranks as the second deadliest cancer in males [4,5]. Traditional treatments for prostate cancer include prostatectomy, radiation therapy, chemotherapy and hormone deprivation therapy [5]. These treatments can impair patients' quality of life, and thus new approaches to combating prostate cancer are warranted [4]. Several groups are exploring methods for harnessing the immune system to recognize and kill prostate cancer cells [2]. One such effort has led to Sipuleucel-T, a licensed, autologous cellular immunotherapy for the treatment of asymptomatic or minimally symptomatic metastatic castrate-resistant prostate cancer [6]. Additional immunotherapies for prostate cancer now under development include a number of vaccine candidates, as well as approaches using targeted monoclonal antibodies (mAbs) [7].

Prostate-specific membrane antigen (PSMA) is expressed at levels many-fold higher on prostate cells than on cells of other tissues, and it is considered an important clinical biomarker of prostate cancer [8-10]. Levels of PSMA are further elevated on prostate cancer cells, and studies indicate a strong correlation between increased PSMA expression and prostate cancer progression [4,5]. PSMA expression can also be elevated on other malignant cells, including those of urologic origin (i.e., kidney and bladder), suggesting that this glycoprotein may play a role in their oncogenic progression as well [11]. In other solid tumors, including colon, ovarian, breast and kidney cancers, elevated PSMA expression has been observed on tumor neovasculature but not on normal vasculature, suggesting a role for PSMA in angiogenesis [12]. Unlike prostate-specific antigen (PSA), PSMA is a membrane protein, which makes it an attractive target for the development of mAbs for diagnostic and therapeutic purposes [13].
Several therapeutic anti-PSMA mAbs have been developed, and many of these have been used in radioimmunotherapy to target cytotoxic radionucleotides specifically to PSMA-expressing cells [5]. Some anti-PSMA mAbs, such as clone 2C9, have been shown to mediate a therapeutic effect by promoting an antibody-dependent cellular cytotoxicity (ADCC) effect that kills prostate cancer cells [5,14].

DNA plasmids have been used for over 25 years as a non-viral method of in vivo gene delivery, and they have been studied extensively as a platform for vaccines and gene therapy. Recently, our group has explored developing synthetic DNA plasmids as a means of delivering the genes of mAbs that neutralize infectious agents. We have reported that constructs expressing DNA-encoded monoclonal antibodies (DMAbs) can direct in vivo production of functional levels of antibodies targeting human immunodeficiency, dengue, and chikungunya viruses in mice [15-17]. Such an approach has several advantages over both conventional protein-based mAbs and viral vector-based delivery of antibody genes, including: (1) lower production costs; (2) the ability to generate durable, high levels of in vivo antibody production without gene integration; and (3) the ability to administer repeated doses, owing to the non-immunogenic nature of DNA plasmids. While early applications of DNA plasmid technology suffered from poor in vivo transgene production, recent enhancements in the design of DNA vectors, together with new delivery methods including adaptive in vivo electroporation (EP), have boosted transgene expression to potent levels in clinical vaccine studies without compromising safety [18].

This study describes the first application of enhanced synthetic DNA plasmid technology to deliver DNA directing the in vivo production of a human mAb for cancer immunotherapy. We designed a novel construct encoding a therapeutic anti-PSMA mAb, and we show that this plasmid expresses DMAb in vitro and in vivo in mice after EP-enhanced intramuscular delivery. The antibodies generated in vivo retain their ability to bind specifically to PSMA, and they possess ADCC activity. Finally, we show that this anti-PSMA DMAb can control the growth of a PSMA-positive tumor in a mouse model, likely through engagement of NK cells.

PSMA-DMAb plasmid construction and expression confirmation

To construct the PSMA-DMAb, the genes of both the variable heavy (VH) and variable light (VL) fragments of a human anti-PSMA mAb were examined, optimized and constructed through the use of synthetic oligonucleotides, with several modifications to improve expression, as previously described [15]. DNA was formulated in water for subsequent administration into mice. An empty pVax1 expression vector was used as a negative control. 293T cells were transfected with the PSMA-DMAb plasmid, and binding of PSMA-DMAb to recombinant human PSMA was confirmed by Western blot analysis. Briefly, recombinant PSMA protein (R&D Systems) was run on an SDS-PAGE gel and transferred to an Immobilon-PVDF membrane (EMD Millipore). Membranes were blocked for 1 h in blocking buffer (Li-Cor Biosciences) and then incubated for 1 h with either a commercial anti-PSMA mAb (R&D Systems), pooled day 14 sera from PSMA-DMAb plasmid-injected mice, or supernatants from PSMA-DMAb plasmid-transfected 293T cells. Membranes were washed and then incubated for 1 h with a goat anti-human IgG 680RD antibody (Li-Cor Biosciences) and washed.
Protein bands were visualized by scanning membranes with a Li-Cor Odyssey CLx scanner [19]. Mice, plasmid administration, and IgG quantification Animal experiments were conducted in accordance with the University of Pennsylvania Animal Care and Use Committee guidelines. B6.Cg-Foxn1nu/J (C57BL/6 nude) and C57BL/6 (both from Jackson Laboratory) mice were administered 100 µg of PSMA-DMAb or pVax1 plasmid in a single 50 µl intramuscular injection into the quadriceps, followed by in vivo electroporation [15]. For quantifying human immunoglobulin G1 (IgG1) levels, ELISA plates were coated with 1 µg/well of goat anti-human IgG-Fc fragment antibody (Bethyl) overnight at 4 °C. The following day, plates were washed with phosphate-buffered saline with 0.1% Tween-20 (PBS-T), blocked with 10% FBS in PBS-T for 2 h at room temperature, washed, incubated for 1 h at room temperature with the respective samples diluted in 1% FBS in PBS-T, washed, and incubated for 1 h at room temperature with HRP-conjugated goat anti-human kappa light chain antibody (Bethyl). SIGMAFAST OPD (Sigma-Aldrich) solution was added to the wells, and plates were kept in the dark for at least 10 min for color to develop. The enzymatic reaction was stopped with 1 N H2SO4, and plates were read at 450 nm. A standard curve was generated using purified human IgG/kappa (Bethyl) [15]. Binding ELISA to evaluate antibody affinity followed a similar procedure, except plates were coated overnight with recombinant human PSMA and an HRP-conjugated goat anti-human IgG (H + L) antibody (Bethyl) was used as the secondary antibody. Flow cytometry analysis To detect cell surface PSMA, tubes of 1.0 × 10^6 LNCaP or TRAMP-C2 cells were washed with phosphate-buffered saline (PBS), stained with live/dead fixable violet dead cell stain (Life Technologies) for 15 min, and then washed twice with FACS buffer (PBS + 1% FBS). Cells were next incubated for 30 min at room temperature with a 1:4 dilution of day 14 sera from PSMA-DMAb plasmid-injected mice and then washed. Finally, cells were incubated in the dark for 30 min with a 1:100 dilution of PE-conjugated anti-human Fc IgG (Biolegend), followed by a final wash with FACS buffer. Samples were resuspended in 1× stabilizing fixative (BD) and analyzed the following day on an LSR18 flow cytometer (BD Biosciences). To assess the effects of PSMA-DMAb sera on LNCaP cell death, FACS analysis was performed on cells gated on low forward and side scatter and stained with Annexin-V FITC and PI (Thermo Fisher) following the kit protocol. Indirect immunofluorescence and immunohistochemistry assay Formalin-fixed paraffin-embedded (FFPE) human tumor tissue sections (UMass Cancer Center Tissue and Tumor Bank, MA, USA) were deparaffinized with xylene and rehydrated. Antigen retrieval was performed using a 1× working solution of citrate buffer, pH 6.0 (Sigma-Aldrich), at 100 °C for 15 min. Tissue sections were blocked with 1× PBS containing 5% normal goat serum (Cell Signaling Technology) and 0.3% Triton X-100 in a humid chamber. Tissues were washed in 1× PBS and incubated with pooled day 14 sera from PSMA-DMAb plasmid-administered mice diluted 1:100 in antibody diluent. Tissues were washed in 1× PBS and incubated with a 1:500 dilution of Alexa Fluor 488-conjugated goat anti-human IgG (H + L) secondary antibody (Thermo Fisher Scientific) in antibody diluent for 1 h. Cell nuclei were counterstained with Hoechst reagent (Sigma-Aldrich).
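Since IgG concentrations in the quantification ELISA are read off a standard curve of purified human IgG/kappa, the interpolation step can be sketched in a few lines of code. This is a minimal illustration, not the authors' analysis pipeline: the standard concentrations and absorbances below are hypothetical, and a simple log-linear interpolation stands in for the four-parameter logistic fits often used in practice.

```python
import numpy as np

# Hypothetical standard curve: known IgG standards (ng/ml) and their OD450 readings.
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500])    # ng/ml
std_od   = np.array([0.05, 0.09, 0.17, 0.33, 0.62, 1.10, 1.85])  # OD450

def od_to_conc(od, dilution_factor=1.0):
    """Interpolate an OD450 reading onto the standard curve (log-linear),
    then correct for the sample's dilution factor."""
    # Interpolate in log-concentration space for a straighter fit.
    log_conc = np.interp(od, std_od, np.log10(std_conc))
    return (10 ** log_conc) * dilution_factor

# Example: a serum sample diluted 1:100 reading OD450 = 0.45
print(f"{od_to_conc(0.45, dilution_factor=100):.0f} ng/ml")  # ~8.3 µg/ml of IgG
```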
Images were acquired using the Leica TCS SP8 confocal laser scanning microscope at the cell and developmental biology microscopy core, University of Pennsylvania, PA, USA. Paraffin-embedded mouse prostate tissue was deparaffinized and subjected to antigen retrieval. Slides were then fixed with acetone and washed with PBS, and sections were blocked using normal goat serum, stained with human PSMA antibody, and then incubated with a biotinylated goat anti-mouse antibody, with the immunohistochemical procedure completed according to the manufacturer's instructions (Vector Labs). Antibody-dependent cell-mediated cytotoxicity assay ADCC activity of PSMA-DMAb was examined using Promega's ADCC Reporter Bioassay Kit. Briefly, target LNCaP cells were incubated for 6 h at 37 °C with the engineered Jurkat effector cells and pooled day 14 sera from PSMA-DMAb plasmid-injected mice. Luciferase activity was measured by luminescence to determine ADCC activity as recommended by the manufacturer. All sera samples were tested in triplicate. Tumor challenge For tumor implantation, C57BL/6 male mice were injected subcutaneously with 1 × 10^6 TRAMP-C2 cells in the right hind flank. The experimental mice were divided into treatment groups (n = 10). Animals were monitored for tumor growth. As tumors became detectable, electronic calipers were used to measure the length and width of the tumor, and tumor volumes were calculated from these measurements. Under the University of Pennsylvania Animal Care and Use Committee guidelines, mice were sacrificed when tumor diameter reached 2 cm or when tumors became ulcerated. Survival differences between groups were analyzed by Student's t test; p < 0.05 was considered significant. In vivo NK cell depletion Mice were treated for NK cell depletion on day −1 (before tumor challenge) and on days +2 and +4 after tumor inoculation with intravenous injection of 100 μl (25 μg) of either control IgG or anti-asialo GM1 IgG (Wako Chemicals, Richmond, VA, USA) diluted in PBS. Cells were stained with anti-NK1.1 and anti-CD3 monoclonal antibodies and analyzed by flow cytometry to verify depletion of the CD3−/NK1.1+ (NK) cell population in the anti-asialo GM1-treated animals. Statistical analysis The GraphPad Prism 6 (GraphPad Software, Inc.) program was used for statistical analysis of the data. The data from ELISA assays are expressed as mean ± SD and are representative of at least three different experiments. Comparisons between individual data points were made using Student's t test. p values < 0.05 were considered statistically significant. Construction and in vitro characterization of the PSMA-DMAb plasmid Human PSMA is a type II integral membrane glycoprotein that is highly expressed in prostate secretory-acinar epithelium as well as in several extra-prostatic tissues, and it possesses 86% identity and 91% similarity to mouse PSMA [20]. A plasmid capable of directing in vivo antibody production was designed by (1) creating a cassette consisting of the full-length coding sequences for the variable heavy (VH) and light (VL) immunoglobulin (Ig) chains from the published sequence of an anti-PSMA mAb, driven by a CMV promoter; (2) optimizing the cassette sequence to improve its expression; and (3) cloning the cassette into a pVax1 plasmid (Fig. 1a). Antibodies targeting PSMA produced from this optimized DNA plasmid will henceforth be referred to as PSMA-DMAb.
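The text does not reproduce the volume equation itself; a common choice for caliper measurements of subcutaneous tumors is the modified ellipsoid formula V = (length × width²)/2, which the sketch below assumes, together with hypothetical caliper readings and the Student's t test described above.

```python
from scipy import stats

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Modified ellipsoid formula commonly used for caliper measurements
    (an assumption here; the paper's exact equation is not reproduced)."""
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical day-35 caliper readings (length, width) in mm for each group.
pvax1 = [tumor_volume(l, w) for l, w in [(14, 11), (16, 12), (13, 10), (15, 12)]]
dmab  = [tumor_volume(l, w) for l, w in [(8, 6), (9, 7), (7, 6), (10, 8)]]

# Two-sample Student's t test on tumor volumes, mirroring the statistics section.
t_stat, p_value = stats.ttest_ind(pvax1, dmab)
print(f"mean pVax1 = {sum(pvax1)/len(pvax1):.0f} mm^3, "
      f"mean DMAb = {sum(dmab)/len(dmab):.0f} mm^3, p = {p_value:.4f}")
```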
To confirm that the plasmid directs production of fully assembled IgG, human embryonic kidney 293T cells were transfected with either empty pVax1 or PSMA-DMAb plasmid. Supernatants collected from cells at 48 h post-transfection were assayed by ELISA to quantify total human IgG levels. A concentration of nearly 800 ng/ml of human IgG was measured in supernatants of PSMA-DMAb plasmid-transfected cells (Fig. 1b). A binding ELISA performed on the same supernatants indicated that the IgG produced from PSMA-DMAb plasmid-transfected cells bound to recombinant human PSMA with high affinity (Fig. 1c). Western blot analysis further confirmed the specificity of PSMA-DMAb plasmid-derived antibodies for binding to recombinant human PSMA protein (Fig. 1d). These results indicate that the PSMA-DMAb plasmid can direct the production of PSMA-specific antibodies in vitro. PSMA-DMAb plasmid administration generates PSMA-specific antibodies in vivo The ability of the PSMA-DMAb plasmid to direct antibody production in vivo was evaluated in both immune-deficient B6.Cg-Foxn1nu/J (C57BL/6 nude) and immune-competent C57BL/6J mice. Groups of five mice received a single 100 μg injection of PSMA-DMAb plasmid intramuscularly in their quadriceps muscle, followed by EP for enhanced delivery [16]. Injected mice were bled at various time points post-injection to obtain sera that were evaluated by ELISA to quantitate human IgG levels. Human IgG became detectable in sera of injected mice beginning on day 5 post-injection, with peak levels achieved at day 14 post-injection in both C57BL/6 nude (1.17 ± 0.41 μg/ml, Fig. 2a) and C57BL/6 (0.82 ± 0.11 μg/ml, Fig. 2b) mice. While elevated human IgG levels persisted in C57BL/6 nude mice beyond 50 days, the levels in C57BL/6 mice dropped to baseline values by day 35 post-injection, likely due to the mouse anti-human antibody response [21,22]. Serum collected at day 14 post-injection from PSMA-DMAb plasmid-injected C57BL/6 nude mice was evaluated by ELISA (Fig. 2c) and Western blot (Fig. 2d) to assess the affinity and specificity of serum IgG for recombinant human PSMA. Both assays show that the IgG in day 14 sera recognized human PSMA with high affinity and specificity, but not an irrelevant HIV envelope protein, suggesting that the IgG are properly folded and functional PSMA-DMAb. In vivo distribution of PSMA-DMAb in prostate tissue was studied in mice by harvesting tissues 7 days post-plasmid injection and performing ELISA and immunohistochemistry for IgG quantification. Prostate tissue from mice administered the PSMA-DMAb plasmid exhibited higher levels of human IgG than prostate tissue from empty pVax1 plasmid-injected mice, as measured by ELISA of tissue homogenates (Fig. 2e). Further, prostate tissues were evaluated by immunohistochemistry staining with an anti-human Fc antibody. A strong immunostaining signal was detected on the cell membranes and within the prostate for the PSMA-DMAb plasmid-injected mice, but not for pVax1-treated controls (Fig. 2f). Together, these findings demonstrate that the PSMA-DMAb plasmid can direct the production of robust levels of PSMA-specific human IgG in vivo. In vivo generated PSMA-DMAbs bind to PSMA on prostate cancer cells We next evaluated the ability of PSMA-DMAb in mouse sera to bind PSMA on tumor cells and tissues.
Two PSMA-expressing prostate cancer cell lines were chosen for the initial studies: (1) LNCaP cells, derived from human prostate adenocarcinoma; and (2) transgenic adenocarcinoma mouse prostate (TRAMP)-C2 cells, derived from a heterogeneous 32-week tumor grown in the TRAMP mouse model. Both cell lines were incubated sequentially with day 14 sera from pVax1 or PSMA-DMAb plasmid-injected C57BL/6 nude mice, followed by a fluorescently labeled anti-human IgG secondary antibody. Histograms (Fig. 3a) and mean fluorescence intensity (MFI) values (Fig. 3b) obtained from flow cytometry analysis of stained cells show that in vivo produced PSMA-DMAbs bind to both PSMA-positive tumor cell lines. No staining was observed on PSMA-negative PC3 cells (data not shown). In addition to normal and cancerous prostate cells, several studies have reported PSMA expression on a wide variety of tumors, especially on tumor neovasculature [23,24]. Immunofluorescence assays were used to evaluate the ability of PSMA-DMAb to bind to PSMA expressed on tissue sections of human bladder and kidney tumors (Fig. 4). The results show that PSMA-DMAb was able to stain cells in the bladder and kidney tumor tissue sections, but not cells in normal ovarian tissue, confirming previous reports of PSMA expression in these tumors [25]. Furthermore, the staining illustrates the distribution of PSMA within these tumor tissues. PSMA-DMAbs possess antibody-dependent cell-mediated cytotoxicity activity The biological activity of PSMA-DMAb was next evaluated using an antibody-dependent cell-mediated cytotoxicity (ADCC) mechanism-of-action assay [26,27]. The assay involves incubating PSMA-expressing LNCaP cells with effector cells for 6 h in the presence of different concentrations of serum from pVax1 or PSMA-DMAb plasmid-injected mice. The effector cells are Jurkat cells that stably express the high-affinity V158 FcγRIIIa and a firefly luciferase gene driven by a nuclear factor of activated T cells (NFAT) response element [28]. The assay readout is based on activation of gene transcription in effector cells, as measured by firefly luciferase production. As indicated in Fig. 5a, day 14 serum from PSMA-DMAb plasmid-injected mice mediates an ADCC effect. As a second demonstration of the biological activity of PSMA-DMAb, flow cytometry was used to measure apoptosis and necrosis of LNCaP cells that were co-cultured with human PBMCs in the presence of sera from pVax1 or PSMA-DMAb plasmid-injected mice. The results (Fig. 5b) show that there was a statistically significant increase in apoptosis (Q3 section of the histogram) as well as necrosis (Q2 section of the histogram) for LNCaP cells co-cultured with human PBMCs in the presence of PSMA-DMAb in comparison to control pVax1 sera. Combined, these findings show that the synthetic PSMA-DMAb can bind Fc receptors and mediate an ADCC effect on tumor cells [29]. PSMA-DMAb represses tumor growth in a TRAMP-C2 tumor challenge mouse model In vivo functional activity of PSMA-DMAb was assessed using a TRAMP-C2 tumor challenge mouse model [30]. For this assay, C57BL/6 mice were subcutaneously implanted with 1 × 10^6 TRAMP-C2 tumor cells and then injected 1 week later with 100 μg of either pVax1 or PSMA-DMAb plasmid by intramuscular injection with EP [30]. Mice were followed for up to 56 days, with regular measurements of tumor size made on each mouse during this period (Fig. 6a). Tumors in the pVax1-treated mice began to grow at day 7-10 post-implantation, while tumors were not detectable in PSMA-DMAb-treated mice until days 15-17.
Rapid tumor growth was noted for the control group (pVax1), but the PSMA-DMAb-treated group exhibited an obvious suppression of tumor growth due to antibody-mediated tumor-protective immunity. Over the course of the 56-day observation period, there was a statistically significant reduction in average tumor volumes (p = 0.0201) (Fig. 6b) and a significant improvement in survival (p = 0.0280) in mice receiving the PSMA-DMAb construct compared to the control mice (Fig. 6c). It is likely that this effect might be further enhanced in the absence of the mouse anti-human antibody response. Visual inspection of the tumors (Fig. 6d) that developed in each group revealed that the tumors in the PSMA-DMAb group were impacted early and remained small and subdermal, while tumors in the pVax1 control group protruded out of the skin and became ulcerated. The anti-tumor activity of many therapeutic antibodies, including ADCC and antibody-dependent cellular phagocytosis (ADCP), is dependent on the interaction of the IgG-Fc domain with Fc gamma receptors (FcγRs) on effector cells. Natural killer cells express high levels of FcγRs; therefore, we also examined the contribution of NK cells to the observed effects of PSMA-DMAb on tumor growth. Previous studies have reported that human IgG can bind to all activating mouse FcγRs and can induce ADCC/ADCP with mouse NK cells and mouse macrophages [29]. Groups of mice were treated with either control IgG or the NK cell-depleting anti-AGM1 IgG antibody and then implanted with TRAMP-C2 cells. One week later, mice were given a single injection of either pVax1 or PSMA-DMAb plasmid and were subsequently evaluated for tumor growth for up to 56 days. There was a rapid onset of tumor development, accelerated tumor growth, and decreased survival in PSMA-DMAb-immunized, NK cell-depleted mice (p = 0.0019, Fig. 6e), but not in those pretreated with the control IgG. Taken together, these data demonstrate that PSMA-DMAb can exert a profound therapeutic effect on a PSMA-expressing tumor in vivo, supporting the possible application of this therapy for the treatment of prostate cancer. Discussion The work presented here describes the construction and characterization of a novel DNA plasmid-based delivery system that can be used to generate protective levels of a therapeutic mAb in vivo. A DNA plasmid encoding the VH and VL segments of a human anti-PSMA mAb was constructed and demonstrated to direct the expression of full-length, antigen-specific IgG in vitro and in vivo following electroporation-enhanced injection into the muscles of mice. PSMA is highly expressed on prostate carcinoma as well as other tumor cells, and it is considered an attractive target for antibody-based therapy due to its expression on the cell surface. PSMA-DMAb in the serum of mice injected with PSMA-DMAb plasmid was able to bind to PSMA on the surface of the TRAMP-C2 and LNCaP prostate tumor cell lines and to sections of bladder and kidney tumors. Serum antibody levels of 1-2 μg/ml were achieved in mice injected with the PSMA-DMAb plasmid by day 14 post-administration, and the antibody remained detectable in the sera for several weeks. Importantly, PSMA-DMAb retained the ability to recognize PSMA on the surface of implanted tumor cells and to mediate a potent anti-tumor response in vivo, due at least in part to its interaction with NK cells to mediate ADCC/ADCP of tumor cells. Fig. 5 (caption, in part): a ADCC reporter assay; a no-antibody control and a recombinant PSMA mAb positive control were included, and luciferase activity was measured; representative of two independent experiments. b Flow cytometric analysis of LNCaP cell death; day 14 sera were incubated with LNCaP cells with or without human PBMCs, cells were stained with Annexin-V and propidium iodide (PI) according to the manufacturer's assay specifications, and gated FACS panels are shown for pVax1 control (10 μg), 1 or 10 μg PSMA-DMAb, and non-treated cells; representative of two independent experiments. Fig. 6 (caption, in part): b Tumor volumes were measured weekly for up to 56 days post-tumor administration. c Kaplan-Meier curves (n = 10) showing survival of mice in the pVax1 and PSMA-DMAb groups. d Representative mice with TRAMP-C2 tumors from each group at day 56. e Kaplan-Meier curves (n = 10) showing the effect of NK cell depletion on PSMA-DMAb-mediated survival. ADCC has been hypothesized to be the major mechanism mediating the anti-tumor activity of mAbs targeting diverse malignancies [7]. Several mAbs targeting tumor-specific antigens or immunomodulatory molecules are in use or under development for cancer immunotherapy regimens, but there are impediments to their widespread use [7,31]. One of the primary impediments involves the cost of the treatment regimen, stemming from the laborious, time-consuming manufacturing and purification processes associated with making these protein-based drugs [2,7,14]. Additionally, multiple infusions of mAbs are often required to attain and maintain their efficacy, which imposes further cost and logistical constraints on patients [31]. Given these challenges, alternative approaches to generate and deliver mAbs are important. Gene-based administration approaches are focused on delivering the genes encoding protective antibodies so that the antibodies can be generated in vivo in a sustained manner. Several groups have developed viral vectors for delivery of mAb genes and have shown that these vectors can be used to drive production of mAbs in vivo [2,13]. However, viral vector delivery of genes carries its own challenges, such as high development and distribution costs, the potential for neutralization of gene delivery, and the inability to re-dose patients because of immune responses generated against the viral vector. In this regard, the DNA plasmid-based delivery system described here possesses many uniquely advantageous features for use as a patient treatment. Primary among these is the potential for significantly lower costs, stemming from the lower manufacturing costs of DNA plasmids as well as lower distribution costs, because DNA is more stable and simple to produce. Synthetic DNA vectors delivered into muscle or skin with the aid of adaptive electroporation can produce high and durable levels of in vivo transgene expression without integration, and there are abundant clinical data that speak to their favorable safety profile [18]. Since DNA plasmids are non-immunogenic, multiple administrations of the same or different plasmids can be contemplated for delivery. This feature is particularly important if serum antibody levels decrease or another antibody treatment is required. This is the first report describing the use of a DNA plasmid-based delivery system to direct in vivo generation of a therapeutic mAb that targets a relevant oncology target, PSMA.
It is also the first report to illustrate functional engagement of host NK-cell immune clearance by a DNA-vectored mAb. Due to the flexibility of this platform, combinations of DMAb plasmids with other anticancer treatments or immunotherapy agents are important to consider. Further study of this approach for neoplastic disease is warranted.
Transcriptome Analysis and Development of SSR Molecular Markers in Glycyrrhiza uralensis Fisch. Licorice is an important traditional Chinese medicine with clinical and industrial applications. Genetic resources of licorice are insufficient for analysis of molecular biology and genetic function; as such, transcriptome sequencing must be conducted for functional characterization and development of molecular markers. In this study, transcriptome sequencing on the Illumina HiSeq 2500 sequencing platform generated a total of 5.41 Gb of clean data. De novo assembly yielded a total of 46,641 unigenes. Comparison analysis using BLAST showed that 29,614 unigenes had conserved annotations. Further study revealed 773 genes related to the biosynthesis of secondary metabolites of licorice, 40 genes involved in biosynthesis of the terpenoid backbone, and 16 genes associated with biosynthesis of glycyrrhizic acid. Analysis of the 11,702 unigenes larger than 1 Kb identified 7,032 simple sequence repeats (SSRs). Sixty-four of 69 randomly designed and synthesized SSR primer pairs were successfully amplified, and 33 pairs of primers were polymorphic in Glycyrrhiza uralensis Fisch., Glycyrrhiza inflata Bat., Glycyrrhiza glabra L., and Glycyrrhiza pallidiflora Maxim. This study not only presents molecular biology data for licorice but also provides a basis for genetic diversity research and molecular marker-assisted breeding of licorice. Introduction Licorice is an important herbal medicine because of its high medicinal value and applications in the light and food industries. In the Pharmacopoeia of the People's Republic of China [1], three species of Glycyrrhiza, namely, G. uralensis Fisch., G. inflata Bat., and G. glabra L., are listed as authentic medicinal licorice. Licorice is mainly distributed in China, especially in northeast and north China, as well as in northwest arid, semi-arid, and desert regions. The active ingredients of licorice include saponins, polysaccharides, flavonoids, and triterpenes; among these, total saponins, including glycyrrhizic acid, glycyrrhetinic acid, and neoisoliquiritin, exert pharmacological effects, such as protection against hepatotoxicity and anti-inflammatory activity [2][3][4]. Although licorice has been widely investigated in chemical and pharmacological fields, the metabolic pathways of the active ingredients of this plant have rarely been studied. In particular, insufficient genome and transcriptome sequencing data complicate research on the metabolic pathway of glycyrrhizic acid [5,6]. At the genome level, RNA sequencing can be used for gene screening analysis to detect gene expression and differences [7,8]. This technology has been widely utilized for studies on medicinal plants, such as Polygonum cuspidatum, because of its high throughput, repeatability, wide detection range, and quantitative accuracy [9,10]. In this study, the latest HiSeq 2500 platform was used for licorice transcriptome sequencing to more completely utilize licorice genes and germplasm resources. The resulting sequence data were assembled and annotated, and genes related to the biosynthesis of glycyrrhizic acid and other secondary metabolites were identified. This research not only examines the biosynthesis of glycyrrhizic acid and related secondary metabolites but also provides a basis for gene annotation and discovery. In addition, a large number of simple sequence repeat (SSR) molecular markers were predicted and developed for licorice.
These markers can be used for future studies on gene mapping, linkage map development, genetic diversity analysis, and marker-assisted selection breeding of G. uralensis. Plant materials Plant material was collected from a fresh, healthy, 4-year-old G. uralensis plant grown in a field in Beijing, China (the Beijing University of Chinese Medicine Endangered Medicinal Plant Research and Testing Base). Roots, stems, and leaves were immediately stored in liquid nitrogen for analysis. In addition, Glycyrrhiza uralensis Fisch., Glycyrrhiza inflata Bat., Glycyrrhiza glabra L., and Glycyrrhiza pallidiflora Maxim. were chosen to detect polymorphism of the primer pairs. RNA isolation and transcriptome sequencing Total RNA was extracted from the roots, stems, and leaves by using an Ed lai kit (Ed Biological Technology Co., Ltd., Beijing; article number: RN40). Nanodrop, Qubit 2.0, and Agilent 2100 were used to determine RNA purity, concentration, and integrity, respectively. mRNA was purified and enriched from the total RNA by using poly(T) low-adsorption magnetic beads. mRNA was fragmented at high temperature to obtain fragments of suitable length. First- and second-strand cDNA were then synthesized, and the cDNA was purified. Finally, the resulting cDNA from the mixture of roots, stems, and leaves (3:1:1) was used to construct the transcriptome sequence library. The cDNA library was enriched through PCR and subjected to Illumina HiSeq 2500 high-throughput sequencing. The transcriptome datasets are available in the NCBI Sequence Read Archive (SRA) under accession number SRX1295883. De novo sequence assembly The cDNA library was sequenced using the Illumina HiSeq 2500 system. Raw image data from sequencing were transformed by base calling into raw sequence data and defined as raw reads. Clean data were generated from the raw data through data processing, including removal of low-quality reads and adapter sequences. The clean reads were subjected to de novo assembly using the Trinity software to recover full-length transcripts across a broad range of expression levels; this technique presents sensitivity similar to genome alignment methods [11]. Transcriptome assembly was then conducted. Development and detection of SSR molecular markers for licorice Unigenes larger than 1 Kb were subjected to SSR analysis by using the MISA software (http://pgrc.ipk-gatersleben.de/misa/misa.html). Search criteria included the minimum number of repetitions for mono-, di-, tri-, tetra-, penta-, and hexa-nucleotide motifs, with repetition times of 10, 6, 5, 4, 3, 3, and 2. Primers for each SSR were designed using the Primer 6 software. A total of 69 primer pairs were obtained and used for amplification. Detailed information about the designed primers is shown in S1 Table. DNA for PCR amplification was extracted from different samples through the cetrimonium bromide (CTAB) method [12]. PCR amplification was conducted as follows: denaturation at 94°C for 3 min, followed by 35 cycles of 94°C for 40 s, 55°C-60°C for 30 s, and 72°C for 60 s. Final extension was performed at 72°C for 5 min. PCR products were analyzed through electrophoresis on 2.5% agarose gels. RNA sequencing and assembly of licorice transcriptome To obtain transcriptome information of licorice, we extracted total RNA from roots, stems, and leaves mixed at a 3:1:1 ratio and sequenced it on the HiSeq 2500. A total of 26,766,870 raw sequencing reads were generated. After removing the adaptors and low-quality data, we obtained 5.41 Gb of clean reads with 45.22% GC content and 0.05% N.
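The MISA-style search can be approximated with a short regular-expression scan. The sketch below is a simplified illustration of the search criteria described above (minimum repeat counts per motif length), not the MISA tool itself; the thresholds follow the mono- through hexa-nucleotide values listed in the text.

```python
import re

# Minimum repeat counts per motif length (mono..hexa), following the
# thresholds listed in the methods (10, 6, 5, 4, 3, 3).
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 4, 5: 3, 6: 3}

def find_ssrs(seq: str):
    """Return (motif, repeat_count, start) for perfect SSRs in a sequence.
    A simplified stand-in for MISA's perfect-SSR search.
    Note: unlike MISA, this sketch does not collapse redundant motifs
    (e.g. an AG x 8 stretch is also reported as AGAG x 4)."""
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # e.g. for motif_len=2, min_rep=6 the pattern is ([ACGT]{2})\1{5,}
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            repeats = len(m.group(0)) // motif_len
            hits.append((motif, repeats, m.start()))
    return hits

# Toy sequence with a di-, a tri-, and a mono-nucleotide repeat.
print(find_ssrs("GGG" + "AG" * 8 + "TTT" + "CTT" * 6 + "A" * 12))
```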
The base quality value Q30 reached 86.59%, which indicates satisfactory sequencing quality of the licorice samples. The obtained data were used for further analysis. Sample data were merged and assembled using the Trinity software. By using the overlapping information in high-quality reads, we obtained 3,114,638 contigs with an average length of 51.00 nt and an N50 length of 48 nt (Table 1). The contigs were clustered according to the paired-end information and the similarity of the contigs. The clustering yielded 87,242 transcripts with an average length of 1,121.92 nt in the assembled part (Table 1). Further assembly generated 46,641 unigenes (total length of 36,725,337 nt and average length of 787.40 nt) (Table 1). Transcripts in the 1,000-2,000 nt size range were the most abundant, accounting for 25.11% of all transcripts, whereas unigenes with lengths of 200-300 nt were the most abundant, accounting for 32.78% of all unigenes; the frequency distributions of transcripts and unigenes are shown in Fig 1 and Fig 2, respectively. Annotation of licorice functional genes To identify gene function and GO classification in licorice, we annotated the unigenes through BLAST searches against the non-redundant databases (Nr/Nt), with a significance threshold of an e-value of 1 × 10^-5. A total of 29,614 licorice unigenes were annotated through BLAST sequence comparison analysis. GO is used to classify gene function and describe the functional attributes of genes and gene products in an organism. Among the 29,614 annotated licorice unigenes, 29,389 received Nr annotations; of these, 22,244 were further annotated with GO information. The GO classification system comprises three large categories: molecular function, biological process, and cellular component, which can be further divided into 58 small categories (Fig 3). Among all unigenes with GO annotations, 45.59% belong to Biological Process, 22.70% to Cellular Component, and 27.71% to Molecular Function. In Biological Process, oxidation-reduction process (GO: 0055114) accounts for the largest proportion, followed by regulation of transcription, DNA-templated (GO: 0006355), and protein phosphorylation (GO: 0006468). In Cellular Component, nucleus (GO: 0005634) accounts for the largest proportion, followed by plasma membrane (GO: 0005886) and integral component of membrane (GO: 0016021). In Molecular Function, ATP binding (GO: 0005524) accounts for the largest proportion, followed by zinc-ion binding (GO: 0008270) and DNA binding (GO: 0003677). A total of 8,458 of the 29,389 Nr-annotated unigenes were annotated in the COG database. Among the 25 COG categories, general function prediction (2,313 unigenes) accounts for the largest proportion, followed by replication, recombination and repair (1,108), and transcription (1,068). About 3.97% (336) of the COG-annotated unigenes present unknown function (Fig 4) and are regarded as unique genes of licorice. The KEGG database is employed to analyze gene products in the metabolic pathways of cells and determine their functions. A total of 6,451 unigenes were matched against the KEGG database and assigned to 178 metabolic pathways. Ribosome (448) contains the most unigenes, followed by plant hormone signal transduction (248), oxidative phosphorylation (221), RNA transport (182), and starch and sucrose metabolism (180). The distribution of pathways containing more than 50 genes, based on the KEGG database, is shown in Fig 5.
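N50 is the standard summary statistic quoted above for contigs, transcripts, and unigenes: the length at which sequences of that length or longer contain at least half of the total assembled bases. A minimal computation from a list of sequence lengths might look like the following (the example lengths are illustrative, not the licorice data):

```python
def n50(lengths):
    """N50: the length L such that sequences of length >= L
    together cover at least half of the total assembled bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# Illustrative toy assembly, not the licorice dataset.
print(n50([200, 300, 500, 700, 1400, 2900]))  # -> 1400
```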
Main metabolism-related genes of licorice Pharmacological research on licorice has mainly focused on glycyrrhizic and glycyrrhetinic acids, flavonoids, and polysaccharides [13]. Glycyrrhizic acid is an oleanane-type pentacyclic triterpene compound synthesized from mevalonic acid (MVA). Condensation of three molecules of acetyl-CoA forms 3-hydroxy-3-methylglutaryl-CoA, which is converted to MVA under the catalysis of HMG-CoA reductase. Subsequent phosphorylation, decarboxylation, and dehydration of the compound generate isopentenyl diphosphate (IPP), which is an isomer of dimethylallyl diphosphate (DMAPP). The combination of IPP and DMAPP forms geranyl pyrophosphate (GPP), whereas the combination of GPP and IPP forms farnesyl diphosphate (FPP). Two FPP molecules connected end-to-end form squalene, which is oxidized to 2,3-oxidosqualene and cyclized to β-amyrin under the action of the β-AS enzyme. The triterpenoid skeleton is formed under the action of triterpenoid cyclase, and a series of further reactions, such as the addition of oxygen, forms glycyrrhizic acid [14]. Analysis of the KEGG pathways showed that 40 kinds of enzymes are involved in terpenoid backbone biosynthesis. Moreover, annotation of the 29,614 genes revealed that 16 genes participate in the synthesis of 11 kinds of enzymes, such as β-amyrin synthase (Table 2). Nevertheless, the corresponding gene sequences of nine types of enzymes were not found, including genes corresponding to the biosynthesis of IPP and DMAPP (EC 1.1.1.88). Development and SSR locus analysis A total of 11,702 unigenes with a length of more than 1 Kb were found in the licorice transcriptome, with a total length of 22,739,272 bp. To develop new molecular markers, we used the MISA software (http://pgrc.ipk-gatersleben.de/misa/misa.html) to identify potential microsatellites defined as mono- to hexa-nucleotide motifs. A total of 7,032 potential SSR loci were detected. The frequency of SSRs is 60.10%, and the average distribution distance is 3,234 bp. These SSR loci were distributed among the unigenes, and some unigenes contained more than one SSR locus. The SSR locus numbers of mono-, di-, tri-, tetra-, penta-, and hexa-nucleotide repeats are 3,394, 1,692, 1,814, 101, 19, and 12, respectively (Table 3). A total of 1,681 SSR sites were randomly selected from the SSR-containing sequences to design SSR primers with the Primer 6.0 software. Sixty-nine SSR primer pairs were randomly designed and synthesized. Sixty-four pairs were successfully used for PCR amplification of genomic DNA (Fig 6), whereas the five remaining pairs failed to generate PCR products at the same annealing temperatures. PCR products from 53 primer pairs presented the expected sizes, and those from 11 pairs were larger than expected, which could be because the products contain introns. The 64 working primer pairs were tested for polymorphism in Glycyrrhiza uralensis Fisch., Glycyrrhiza inflata Bat., Glycyrrhiza glabra L., and Glycyrrhiza pallidiflora Maxim. by 2.5% agarose electrophoresis analysis; the results showed that 33 pairs of primers were polymorphic among the species. Licorice RNA-seq technology Many technologies have been used to analyze and quantify the transcriptomes of model and non-model organisms, such as Arabidopsis, rice, radish (Raphanus sativus L.), and Haloxylon ammodendron; as such, these techniques are vital to elucidate the complexity of growth and development of organisms. For medicinal plants, organ formation and development are controlled by complex interactions among genetic and environmental factors.
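The headline SSR statistics above are easy to sanity-check: the per-motif counts should sum to the total number of loci, the frequency is loci per unigene, and the average distribution distance is total unigene length divided by the number of loci. A quick check using only the numbers reported in the text:

```python
# Numbers reported in the text / Table 3.
motif_counts = {"mono": 3394, "di": 1692, "tri": 1814,
                "tetra": 101, "penta": 19, "hexa": 12}
n_unigenes = 11_702           # unigenes > 1 Kb
total_length_bp = 22_739_272  # combined length of those unigenes

total_ssrs = sum(motif_counts.values())
print(total_ssrs)                                 # 7032, matching the reported total
print(f"{total_ssrs / n_unigenes:.2%}")           # ~60.1% SSR frequency, as reported
print(f"{total_length_bp / total_ssrs:.0f} bp")   # ~3234 bp average spacing
```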
The transcriptome data in publicly available libraries are insufficient to describe the complex mechanisms of gene expression, as well as the genetic characteristics of species. Therefore, new-generation high-throughput sequencing technologies have been used as a powerful and cost-efficient tool for research on non-model organisms [9,15]. In this experiment, we used RNA-Seq technology and obtained 5.41 Gb of clean data and 46,641 unigenes from the assembly of the clean data (Table 1). The N50 length of the unigenes is 1,395 nt, with an average length of 787.40 nt. The results are comparable with the unigenes obtained in recently published transcriptome analyses of other plant species, such as H. ammodendron (N50 = 1,354 bp, average length = 728 bp) [15], Reaumuria soongorica (N50 = 1,109 bp, average length = 677 bp) [16], and radish (Raphanus sativus L.) (N50 = 1,256 bp, average length = 820 bp) [17]. Longer unigenes may have been obtained because of the Trinity software, which is a powerful package for de novo assembly and generates an increased number of full-length transcripts [11]. In this study, 29,614 unigenes were functionally annotated, whereas 17,027 (36.51%) did not obtain functional annotations. Some unannotated unigenes may correspond to proteins of known function but were not matched because they are relatively short, lack conserved functional sequences, or contain missing parts; moreover, some unigenes consist of non-coding RNA. In this regard, some sequences were not functionally annotated because of the insufficient lengths of unigenes and the limited public information in databases. Licorice genes related to the isoprenoid biosynthesis pathway The isoprenoid biosynthesis pathway can synthesize kinetin, gibberellic acid, carotenoid, chlorophyll, sterol, monoterpene, terpene, and dolichol secondary metabolites [18]. The triterpene compound glycyrrhizic acid is synthesized in the isoprene metabolic pathway. In this study, through transcriptome sequencing and KEGG database annotation, 40 kinds of enzymes were found to be involved in terpenoid backbone biosynthesis, and 16 genes participate in the mevalonate pathway; moreover, 11 enzymes are encoded by the 29,614 annotated genes of licorice. Nevertheless, nine genes related to enzyme synthesis were not found. Li [5] studied the gene expression of 5-year-old wild licorice and found 18 kinds of enzymes involved in licorice saponin synthesis; of these enzymes, 16 participate in the MVA pathway (e.g., mevalonate kinase, EC 2.7.1.36) or the MEP pathway (e.g., DXP synthase, EC 2.2.1.7), whereas two enzymes could not be related to the annotated genes. This result is likely related to the sequencing of samples of different ages and periods; transcriptome sequencing also reflects the different periods at which sample genes are expressed [19]. Nine kinds of enzymes in the synthesis of licorice saponins were not found. The content of glycyrrhizin differs among growth periods, but the conclusion remains controversial. Liu [20] found that cultivated licorice of different ages presents varied contents of glycyrrhizin, total flavones, and polysaccharides; moreover, the highest content of glycyrrhizin was observed in the third year of cultivation. Sun [19] showed that the content of glycyrrhizin was higher after 4 years of cultivation.
In addition, the quantity of synthesized saponin differs between wild and cultivated Glycyrrhiza; licorice grows faster under cultivated conditions than under wild conditions, resulting in higher primary metabolite contents. Although secondary metabolites are major components of traditional Chinese medicinal materials, their accumulation is related to adversity stress, which is thus beneficial for the accumulation of licorice saponin [5]. Therefore, we aim to design and conduct detailed tests and analyses using licorice material of different periods and ages in the future. Characteristics of SSR molecular markers In the analysis of SSR polymorphism loci in the 11,702 unigenes of the licorice transcriptome longer than 1 Kb using the MISA software, a total of 7,032 SSR loci were detected, with a frequency of 60.10% and an average distribution distance of 3,234 bp. Among the licorice SSR loci, the most frequent repeat type is the mono-nucleotide, with 3,394 (48.27%), followed by tri- and di-nucleotide repeats, with 1,814 (25.80%) and 1,692 (24.06%), respectively. This distribution frequency differs from those of most plant genomes, such as field pea, faba bean, and autotetraploid alfalfa, in which the most abundant repeat motif is the tri-nucleotide (57.7%, 61.7%, and 61.19%, respectively) [10,21]; in Sesamum, the most abundant repeat motifs are di-nucleotides [22]. Autotetraploid alfalfa, field pea, faba bean, and P. cuspidatum have no reported mono-nucleotide repeat sequences, which could be due to the different standards used in SSR search criteria [9,10,21,22]. In this study, we included mono-nucleotide repeat motifs in the licorice search; in the process, a condition where the mono-nucleotide repeat is dominant was generated, which decreases the relative proportion of other nucleotide repeats. In this study, the occurrence frequency of tri-nucleotide repeats (25.80%) is higher than that of di-nucleotide repeats (24.06%). Studies on P. cuspidatum [9], autotetraploid alfalfa [10], Asteraceae (Mikania micrantha) [23], Asteraceae (Chrysanthemum nankingense) [24], and radish (R. sativus L.) [17] reached similar conclusions. The di-nucleotide repeats of other species, namely, rubber tree [25], Sesamum [22], and blunt snout bream [26], have higher frequencies than their tri-nucleotide repeats. This finding may be due to the different genetics of different species and the standards used for SSR searches. Among the licorice di-nucleotide repeat motifs, AG/CT appeared most frequently, accounting for 15.84% of all SSRs. This result is consistent with those in Sesamum [22] and radish [17]. In plants, the presence of CT repeat sequences in 5′ UTRs is probably related to reverse transcription and has a significant role in gene regulation [27]. By contrast, among the licorice tri-nucleotide repeat motifs, AAG/CTT appeared most frequently, accounting for 6.23% of the total SSRs. This result is consistent with those in Sesamum [22] and radish [17]; conversely, in rubber tree [25] and Asteraceae (C. nankingense) [24], AAG/TTC and CCA/GGT were the most abundant, respectively, which could be due to differences in the frequency of protein-coding motifs among species. Among the 69 primer pairs, 64 (92.75%) were amplified successfully. The PCR success rate was similar to that in Sesamum [22], lower than that in rubber tree [25], and higher than that reported in a previous study [10]. These results suggest that the quality of the assembled unigenes was high, and the SSRs identified in our study could be used for future analyses. Supporting Information S1 Table.
Signaling Pathways That Regulate Normal and Aberrant Red Blood Cell Development Blood cell development is regulated through intrinsic gene regulation and local factors, including the microenvironment and cytokines. The differentiation of hematopoietic stem and progenitor cells (HSPCs) into mature erythrocytes is dependent on these cytokines binding to and stimulating their cognate receptors and the signaling cascades they initiate. Many of these pathways include kinases that can diversify signals by phosphorylating multiple substrates and amplify signals by phosphorylating multiple copies of each substrate. Indeed, synthesis of many of these cytokines is regulated by a number of signaling pathways, including phosphoinositide 3-kinase (PI3K)-, extracellular signal-related kinase (ERK)-, and p38 kinase-dependent pathways. Therefore, kinases act both upstream and downstream of the erythropoiesis-regulating cytokines. While many of the cytokines are well characterized, the nuanced members of the network of kinases responsible for appropriate induction of, and response to, these cytokines remain poorly defined. Here, we will examine the kinase signaling cascades required for erythropoiesis and emphasize the importance, complexity, enormous amount remaining to be characterized, and therapeutic potential that will accompany our comprehensive understanding of the erythroid kinome in both healthy and diseased states. Introduction Aberrant kinase activation contributes to the pathogenesis of many diseases, including hematopoietic disorders, and pharmacologic inhibition of these kinases has become a major therapeutic strategy in the management of these diseases [1][2][3][4][5][6]. Indeed, the first protein-targeting therapy (imatinib mesylate, STI-571, or Gleevec®) was designed to inhibit the kinase activity of the fusion gene Bcr-Abl in chronic myeloid leukemia [7]. As of 31 March 2021, there are 65 small-molecule kinase inhibitors approved by the Food and Drug Administration (Drugs@FDA). Diamond Blackfan Anemia (DBA) results from genetic mutations in one of at least 20 different ribosomal genes [8]. These mutations are carried by every cell of the patient, and it is poorly understood why they impact erythropoiesis so severely and with such specificity [9]. Pure red blood cell aplasia is due to a restriction of the earliest committed erythroid progenitors, manifesting as reduced erythrocytes [10]. Ribosomal stress due to ribosomal insufficiency, increased p53 activity [11,12], and reduced GATA1 transcription [13][14][15] have been linked to the disease phenotype; however, the impact on kinase signaling cascades has not been thoroughly investigated. Similarly, many other anemias that stem from genetic mutations may involve disrupted kinase signaling that has not been closely examined. Basic signaling in red blood cell development has been well characterized for a number of years, but recent revelations using conditional knockout animals and more sophisticated techniques have shown that the linear cascades attributed to the cytokines that drive erythropoiesis, stimulating mostly redundant signaling molecules, do not account for the physiological processes these cytokines independently regulate. Nor is it clear how these basic pathways are disrupted or hijacked in disease states. The cell cycle is also regulated by a large number of kinases, and cell division is highly regulated during erythropoiesis [16][17][18].
There is evidence that cell cycle progression and differentiation are uncoupled in DBA, which is yet another area in which deregulated kinases may be impacting erythropoiesis [19]. Metabolism is another kinase-driven pathway disrupted in anemias [20]. In this review, we summarize kinase-signaling cascades initiated during early erythropoiesis, with a particular focus on early erythroid progenitors that are impacted in DBA. Many of these kinase-signaling cascades are well characterized and include the Janus kinase/signal transducer and activator of transcription (JAK/STAT), PI3K/Akt, and mitogen-activated protein kinase (MAPK) pathways. In many diseases, it is not the direct deregulation of one of these kinases but rather the aberrant deregulation of another kinase that negatively feeds back into the signals initiated by the hematopoietic cytokines [20][21][22][23][24][25][26][27]. While such kinases phosphorylate normally unphosphorylated substrates to trigger pathogenic signals, they also frequently disrupt homeostatic pathways. Perhaps the most characterized of these is the activation of the cellular Abelson tyrosine kinase (c-Abl) associated with the Philadelphia chromosome fusion event in chronic myeloid leukemia (CML) [28]. Of particular relevance to ribosomopathies, the relatively under-characterized atypical MAPK family member Nemo-like kinase (NLK) has been shown to be chronically activated in early erythroid progenitors and contributes to the disease phenotype [21][22][23]. Our understanding of how NLK deregulates erythropoiesis is rudimentary but most likely provides only an initial insight into the disrupted kinase network responsible for controlled, healthy red blood cell production. As with Bcr-Abl, kinases are readily druggable pharmacological targets and are the most successfully targeted class of proteins in medicinal chemistry [6]. Evidence is beginning to emerge that disruption of kinase homeostasis in the erythroid progenitors of DBA patients contributes to disease pathogenesis [21][22][23]. NLK has overlapping substrate specificity with a number of MAPK members activated in early erythropoiesis, and suppression of NLK expression [21][22][23] or activity [21] improves erythroid expansion. Some substrates have been identified in these cells, but understanding the novel pathways initiated, as well as how NLK disrupts existing pathways, is likely to shed significant insight into our understanding of DBA. As our investigation of kinase signaling in DBA matures, a myriad of other deregulated kinases will likely be identified. Understanding the impacts of these events will undoubtedly improve our understanding of both normal and diseased erythropoiesis. Cytokines, Cognate Receptors and Signaling Pathways Regulating Early Erythropoiesis A number of cytokines, and the kinase signaling cascades they stimulate, drive erythropoiesis, particularly early erythropoiesis. Each of the cytokines impacts progenitors at different stages of differentiation, but there is significant overlap, especially between SCF/IL-3/IL-6 and EPO/SCF. The cytokines can also synergize with or antagonize one another at different stages of differentiation. Through over-expression or recombinant expression of ligands and receptors in knockout mice, it has been determined that each of these cytokines regulates distinct cellular responses, despite significant overlap in the characterized signaling cascades they stimulate (see Figure 1).
Stem Cell Factor (SCF) is a dimeric cytokine [6] that binds to the extracellular domain of, and activates, the receptor tyrosine kinase c-Kit [3]. Signaling through c-Kit is crucial for normal hematopoiesis and a range of other processes, including pigmentation, fertility, gut movement, and aspects of the nervous system [3]. Deregulation of c-Kit signaling is linked to cancer and allergy [7]. In hematopoietic cells, c-Kit expression is detected in stem and early progenitor populations [8] and lost during differentiation in all lineages except mast and dendritic cells [3]. Activation of c-Kit is initiated by dimerization of two c-Kit receptors, brought about by simultaneous binding to the two molecules of the SCF dimer [9]. Dimerization leads to conformational changes that enable the kinase domain of each monomer [6] and trans-phosphorylation of multiple tyrosine residues in both monomers [10]. Phosphorylated c-Kit receptors recruit adaptor proteins, including Grb7, Grb10, APS, Lnk, CrkI, CrkII, and CrkL, which, in turn, recruit kinases that become activated [3]. Most notable of these is the p85 subunit of PI3K [10]. Binding of the p85 subunit causes a conformational change that activates the kinase domain of the p110 subunit, and PI3K signaling is initiated [12]. PI3K phosphorylates phospholipids, specifically converting phosphatidylinositol 4,5-bisphosphate (PIP2) to phosphatidylinositol 3,4,5-trisphosphate (PIP3); through a series of downstream events, this leads to the activation of Akt, which then stimulates a myriad of protein kinase-signaling cascades [12]. The MAPK family of kinases is another large and impactful group of signaling molecules activated upon c-Kit stimulation [3]. ERK1/2 are the most characterized of these, and this pathway is initiated when the small GTPase RAS has its associated guanosine diphosphate (GDP) exchanged for a guanosine triphosphate (GTP) by a guanine nucleotide exchange factor [13,14]. A number of the adaptors that associate with phosphorylated c-Kit can serve this function (e.g., SOS, Vav, Grb2), but the process is tightly regulated and involves a complex of factors [13]. Once bound to GTP, RAS remains active until it hydrolyzes the GTP back to GDP. Active RAS binds the serine/threonine kinase RAF and recruits it to the plasma membrane, where it, in turn, becomes phosphorylated and activated [14]. Activated RAF proteins then amplify the signal by phosphorylating multiple MAPK kinase proteins (including Mek1/2), which are themselves kinases and go on to phosphorylate multiple ERK-like proteins [15]. Many of the substrates are transcription factors in the nucleus [16], but some are powerful signaling molecules, such as ribosomal S6 kinases (RSKs) [16]. There are numerous MAPK family members that signal using a similar template to that of ERK1/2, including the more characterized p38, JNK, and ERK5 signals. All these pathways are activated by SCF, but it is highly probable that less characterized members of the family are similarly stimulated [3]. How all these somewhat redundant pathways interact with one another remains under-defined, but it is highly regulated and complex. A summary of SCF signaling is presented in Figure 2. Despite regulating vastly distinct physiological responses in erythropoiesis, SCF, IL-3, IL-6, and EPO share a large number of classical signaling cascades. The different physiological responses are probably due to nuanced regulation of these cascades with signal-specific adaptors, substrates, binding partners, and accessory signaling molecules. The molecular landscape in which a signaling cascade is activated contributes as much to the physiological response as the signal itself.
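The amplification logic of the tiered RAF → MEK → ERK structure described above can be made concrete with a toy calculation: if each active kinase phosphorylates several copies of its downstream substrate, the number of active molecules grows multiplicatively per tier. This is a deliberately simplified, hypothetical model (fixed per-tier fan-out, no phosphatases, feedback, or saturation), intended only to illustrate the amplification argument, not to describe measured stoichiometry.

```python
def cascade_amplification(initial_signals: int, fanout_per_tier: list[int]) -> int:
    """Toy model: each active kinase activates `fanout` copies of the
    next tier's substrate. Ignores phosphatases, feedback, and saturation."""
    active = initial_signals
    for tier, fanout in enumerate(fanout_per_tier, start=1):
        active *= fanout
        print(f"tier {tier}: {active} active molecules")
    return active

# One ligand-bound receptor dimer feeding a three-tier RAF -> MEK -> ERK
# cascade with hypothetical fan-outs of 5, 10, and 10.
cascade_amplification(1, [5, 10, 10])  # -> 500 active ERK-like molecules
```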
Phospholipase C is also activated downstream of c-Kit [3]. These enzymes hydrolyze the polar head groups of PIP2 to generate diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (IP3) [29]. These are powerful signaling molecules, but as they are not kinases, they will not be covered here in detail. The c-Kit receptor has also been shown to interact with other receptors involved in hematopoiesis, including receptors for EPO, IL-3, IL-7 and granulocyte/macrophage colony stimulating factor (GM-CSF). In some cases, signaling from one cytokine upregulates others [3]. Interleukin-3 (IL-3) belongs to a family of cytokines believed to be important in regulating the growth and development of cells of both the hematopoietic and immune systems. In comparison with other hematopoietic growth factors, IL-3 preferentially supports the proliferation and differentiation of progenitors at early stages of hematopoietic development [30]. IL-3, IL-5, and GM-CSF bind receptors that are members of the gp140 family, and IL-3 acts on the most immature marrow progenitors [31][32][33]. IL-3 is capable of inducing the growth and differentiation of multi-potential hematopoietic stem cells, neutrophils, eosinophils, megakaryocytes, macrophages, lymphoid, and erythroid cells [34]. The activated IL-3 receptor (IL-3R) complex consists of two subunits, a 60-70 kDa alpha subunit and a 130-140 kDa beta subunit, bound to a 20-26 kDa IL-3 monomer [35].
While no classical tyrosine kinase domains have been identified, evidence suggests tyrosine phosphorylation is at least partially required for signaling [36]. Similar to SCF, IL-3 stimulates the PI3K, Src, and MAPK families of kinases [35]. Additionally, IL-3 activates JAK-STAT signals. There are four recognized members of the JAK (Janus kinases) family; JAK-2 appears to be primarily responsible for hematopoietic signaling, although JAK-1 and TYK-2 have been implicated [36][37][38]. JAK proteins associate with the intracellular domain of IL-3R and, upon receptor binding to the IL-3 ligand, JAK proteins are activated. Activated JAKs phosphorylate a number of tyrosine residues on IL-3R that, in turn, serve as docking sites for other signaling molecules, including STATs [33]. Multiple STATs are activated in hematopoiesis, with STAT-1, -3, -5, and -6 the most characterized [34]. While JAKs are important for STAT activation, other signaling molecules, such as Src and MAPK family members, are crucial for complete regulatory control [32,38]. Although IL-3 stimulates the same PI3K and RAS/MAPK pathways as SCF, the adaptor molecules and activation mechanisms are different. For example, the adaptor Shc binds to the beta-receptor, is rapidly phosphorylated, and becomes associated with Grb2 and SOS. These then exchange GDP for GTP on RAS to activate MAPK signaling [39][40][41]. While the highly characterized ERK1/2, JNK, and p38 pathways are activated, more detailed analysis will likely reveal regulatory differences between IL-3-mediated and other cytokine-mediated activation of these pathways. Similar to RAS activation, alternative adaptors link IL-3R activation to PI3K stimulation. The p85 adaptor subunit links IL-3R to PI3K and to downstream signaling such as Akt and p70S6K [34]. A summary of IL-3 signaling is presented in Figure 2. Interleukin-6 (IL-6)-IL-6, IL-11, LIF, and OSM are several members of an important family of mediators involved in the acute-phase response to injury and infection but are also critical to hematopoiesis [42]. Similar to other cytokines, downstream signaling includes the JAK-STAT and MAPK pathways [43]. While there must be significant overlap with other cytokine signals, the use of different adaptors and regulators likely provides some novel and crucial input required for differentiation. IL-6 signaling is summarized in Figure 2. Erythropoietin is first produced in the neural crest cells to stimulate yolk sac erythropoiesis [44,45], but production switches to the fetal liver [46] and, primarily, the kidney after birth [47,48]. The mature form of this glycoprotein is 163 amino acids with three potential N-linked glycosylation sites [49]. The erythropoietin receptor, or EPOR, is a single-pass transmembrane protein with no recognizable tyrosine kinase domain [50]. Similar to IL-3 and IL-6, ligand-bound EPOR stimulates JAK-STATs, but also PI3K and MAPKs [50]. Despite EPO signaling through similar pathways to the other cytokines, erythroid commitment and red blood cell production are heavily reliant upon this cytokine. The classically defined EPO signaling cascade is summarized in Figure 2. EPO and EPOR null mouse embryos both die early in embryogenesis with a distinct lack of terminal erythroid differentiation, with EPO being critical for promoting the proliferation, survival, and appropriate timing of terminal maturation of primitive erythroid precursors [44]. The receptor itself (EPOR) is upregulated immediately prior to erythroid commitment, and progenitors become responsive to the cytokine [44].
In renal disease, anemia results from the failure of the diseased kidneys to produce adequate amounts of EPO. Administration of recombinant EPO or EPO-stimulating agents increases red blood cell production and relieves anemia in these patients [51]. EPO is also used to stimulate erythropoiesis in many other anemias, even when blood EPO levels are normal, including anemias associated with malignancies (due either to neoplastic bone marrow infiltration or to chemotherapy-related myelosuppression), the anemias of myelodysplastic syndromes and AIDS, the anemia of chronic inflammatory diseases, prematurity, and bone marrow transplantation [51]. In ribosomopathies such as DBA, blood levels of EPO are frequently elevated, yet progenitors are unresponsive to it and no increase in red blood cells occurs [52,53]. This further emphasizes that the disruption in this disease occurs in a very early erythroid progenitor that precedes EPO sensitivity. As with many cytokine signaling pathways that stimulate apparently similar kinase cascades, the particular nuances of EPO kinase cascade activation are essential for healthy erythropoiesis and are frequently disrupted in disease states, including ribosomopathies. Understanding the specific signaling regulation of this cytokine will likely have significant benefits for human health in the future. Kinase roles in erythroid and non-erythroid myeloid differentiation; the Src kinase family. Once cells are committed to the myeloid lineage, the emphasis on cytokine signaling is reduced, particularly in erythroid progenitors. Non-erythroid myeloid cells require thrombopoietin, GM-CSF, granulocyte colony-stimulating factor (G-CSF), and macrophage colony-stimulating factor (M-CSF) to stimulate progenitors towards appropriate differentiation. In erythroblasts, intracellular kinases contribute to the process [54]. A family of tyrosine kinases critically regulating myeloid lineages is the Src family. The family consists of ten members, and expression can be very cell-type dependent. The expression of a number of members (e.g., Lck, Hck, Lyn, Fgr, and Blk) is largely or entirely restricted to hematopoietic cells [55]. The kinases become activated after conformational changes that occur upon binding phosphorylated tyrosine residues on the c-Kit receptor [56]. The activation of specific Src family members is complex, highly regulated, and not completely defined, but relationships with various adaptors and other kinases are critical [3]. The extent, timing, and duration of activation of these kinases are essential for healthy hematopoiesis, as the phosphorylation of downstream substrates interconnects with almost every critical cellular process [56]. Another way Src family members can influence different cell types in different ways is by altering subcellular localization. They can be cytosolic or associated with membranes, including plasma, perinuclear, and endosomal membranes, each localization being attributed to different physiological roles [55]. Perhaps the most clinically relevant Src-related kinase is the Abelson kinase (Abl). In CML, the Abl gene is translocated to the BCR gene located on chromosome 22 [57]. Inhibition of this deregulated kinase by imatinib mesylate, and subsequent derivatives, has saved many lives and emphasizes the potential of understanding kinase signaling in healthy and diseased myelopoiesis. The potential is not limited to non-erythroid myeloid progenitors, as Src family kinases are also critical for erythropoiesis.
An example is Fyn kinase, a modulator of EPO signaling and stress erythropoiesis [58]. Other Cytokines and Signaling Pathways The microenvironment in the bone marrow is dynamic, with a complex array of soluble and cell-bound ligands that stimulate early erythroid progenitors in a myriad of ways [52]. Our understanding of the niche has increased drastically over recent years, but vast gaps remain in our understanding of the kinase signaling that regulates, and is regulated by, these cellular interactions. We have discussed the most characterized kinase cascades, but our current understanding of the adaptors, substrates, and binding complexes associated with the activated kinases remains rudimentary. Additionally, it is apparent how critical regulated kinase signaling is to erythropoiesis. Just as apparent is how poorly we understand the mechanistic interactions regulating this network of signals. While it is a challenging task, the potential benefits of comprehensively understanding how these kinases interact are significant. As we move forward, we must not think of signaling molecules as on or off, or as belonging to discrete pathways, but rather as interconnected pathways enhanced or suppressed to varying degrees by a multitude of signals. Perhaps just as intriguing is the concept that different signals switch the proteins associated with active kinases, resulting in modified cellular localization of the activated kinases. Kinases and kinase signaling are nuanced and not binary. Pyruvate kinase-Central to red blood cell production is glycolysis, and the conversion of phosphoenolpyruvate to pyruvate yields 50% of the ATP required for erythropoiesis [61]. The enzyme that catalyzes this reaction is pyruvate kinase. Mutations in this gene give rise to pyruvate kinase deficiency, which is characterized by hemolysis and non-spherocytic anemia [20,24]. Although not strictly a signaling kinase, this emphasizes the broad role kinases play in erythropoiesis. Dual-specificity tyrosine-regulated kinase-3 (DYRK3)-The expression of DYRK3 is limited to erythroid progenitor cells and the testis. Studies in mice suggest that the activation of this kinase during induced anemia contributes to reduced erythropoiesis. One possible mechanism of action is that DYRK3 inhibits the NFAT (nuclear factor of activated T cells) transcriptional response pathway to modulate the essential erythroid transcription factor Klf1 [25]. Mammalian target of rapamycin (mTOR)-Ribosomopathies, and the reduced ribosome function associated with them, clearly indicate the importance of protein translation in erythropoiesis. A critical regulator of this process is mTOR [62]. DBA models [63] and patients [64] can be significantly improved upon treatment with leucine, a stimulator of mTOR, further implicating the importance of this master kinase in regulating erythroid development. Inhibitors of mTOR have also been demonstrated to negatively impact erythropoiesis [64], although inhibitors could also improve anemia in some conditions [26,65]. As this kinase is regulated by a complex array of factors, it may contribute in multiple ways under different conditions. The regulatory subunit of mTOR, RAPTOR, is phosphorylated by activated NLK in DBA models [21], suggesting that NLK may contribute to mTOR deregulation in erythroid progenitors. Heme-regulated eIF2α kinase-This kinase is also known as the heme-regulated inhibitor (HRI) and is activated by the heme deficiency that occurs in microcytic hypochromic anemia [66].
The activated kinase phosphorylates eIF2α to inhibit the translation of certain mRNAs (especially globin) and enhance the translation of others (such as ATF4). This pathway also represses mTORC1 and impacts mitochondrial function [66]. JAK2V617F-This is a point mutation of the JAK2 gene that results in myeloproliferative disorders with a polycythemia-like phenotype, increased erythropoietin-independent red blood cell production, and splenomegaly [67]. While most aberrantly activated kinases that contribute to disrupted erythropoiesis in human disease do not belong to the signaling pathways of the classical hematopoietic cytokines, this mutation of the JAK2 kinase is a rare example of the direct disruption of such kinases in human disease. This mutation contributes to 100% of cases of polycythemia vera, along with many cases of essential thrombocythemia and primary myelofibrosis [67]. ERK/SAPK/JNK/p38-Within the MAPK family of kinases are subfamilies, including ERK1/2, ERK5, p38, JNK, and SAPK. Additional atypical and less conserved members are also present [68]. Of these, ERK1/2, p38α, and JNK1 are the best characterized, while most other members remain poorly characterized. In hematopoiesis, ERK1/2, p38α, and JNK1 are activated in response to erythropoietic cytokines [3,34,48], but it is highly likely that regulation of other MAPK family members occurs as well and has simply not yet been characterized. It has been revealed that p38α and JNK1 restrain erythropoiesis [69], and p38 is required for the production of erythropoietin by bone marrow cells [4]. Deregulation of the p38 pathway is partially responsible for apoptosis in Fanconi anemia [27]. Early work suggests erythroid proliferation is dependent on ERK1/2 signaling, while differentiation is mediated primarily through p38/JNK signaling [70]. Deregulated signaling of the SAPK family of kinases is also linked to disrupted hematopoiesis in Fanconi anemia [71]. Although correctly regulated MAPK signaling is evidently essential to erythropoiesis, our limited understanding of the complexities of these signaling networks in healthy and diseased states of erythropoiesis impairs our ability to utilize them as therapeutic targets. A better understanding of the full complement of MAPK family members should rectify this. Nemo-Like Kinase (NLK)-The atypical MAPK family member NLK is chronically hyperactivated in DBA and contributes to disease pathogenesis [21][22][23]. NLK expression is moderate in HSCs and is downregulated in non-erythroid progenitors by the upregulation of miR-181 [21,72]. While this miRNA critically regulates the differentiation of MEPs into megakaryocytes [73], it also leads to degradation of NLK mRNA [21][22][23][72]. NLK null mice die late in pregnancy or just after birth due to compromised lung development. Depending on the mouse strain, the hematopoietic system ranges from unaffected to severely compromised [74]. The bone marrow niche is also disrupted, with reduced fat tissue and stromal cells. NLK regulates the Wnt pathway but can also influence STAT3 and interferon signaling [74]. While NLK expression appears dispensable for normal erythroid development, in DBA NLK becomes activated [21][22][23]. The upstream regulators are, as yet, uncharacterized, but are dependent on the increased p53 expression associated with the disease [21]. As a MAPK member, NLK shares many substrates with other MAP kinases, albeit with differing kinetics [69].
The exact substrates responsible for the erythroid defects are not fully characterized, but the degradation that accompanies c-Myb phosphorylation and the inhibition of the translation regulator mTORC1 are likely effectors [21]. The activation of NLK occurs in all DBA genetic mutations examined to date (although testing has not been exhaustive) and may therefore constitute a common therapeutic target in DBA patients, irrespective of the genetic mutation carried. Unfortunately, NLK suppression does not completely rescue erythroid expansion, but it does improve erythropoiesis by 2-6-fold, depending on the in vitro model tested [21][22][23]. As p53 is upregulated in a number of anemias, NLK or other MAPK family kinases may be similarly activated in these disorders. This list is far from comprehensive, and advancements in our understanding of the regulatory mechanisms of erythropoiesis will certainly reveal a growing list of deregulated kinases that may serve as therapeutic targets to reduce the burden of anemic diseases. Summary We have learned a tremendous amount about the major signaling molecules and cascades initiated by classical hematopoietic cytokines such as SCF, IL-3, IL-6, and EPO. However, there is significant overlap in the kinase cascades initiated by each, despite each having very distinct and dramatic physiological impacts on erythropoiesis. To understand how these cytokines impact erythropoiesis, we need to delve deeper and better understand the complexities of kinase signaling in erythroid progenitors. As so many other cellular processes critical to erythroid expansion are also regulated by kinases, we also need to understand the interplay between these kinases and how deregulation of one impacts the entire network. Only once we have a better understanding of kinase signaling in normal erythropoiesis will we be able to translate this understanding into clinical benefit in ribosomopathies such as DBA and other anemias. Kinases offer hope as therapeutic targets in a wide range of human diseases. As erythropoiesis is so dependent on kinase signaling cascades, understanding how these cascades are influenced in the disease state will, almost without doubt, reveal therapeutic strategies to improve patient outcomes. In particular, kinases that are deregulated in multiple genetic blood disorders, for example NLK being activated in ribosomopathies [21][22][23], offer particular value as common targets across multiple genetic disorders. The challenge will be understanding the deep, interconnected network that has co-evolved over many years into a highly integrated and regulated system, and how the manipulation of one kinase will impact the network as a whole. The potential reward of understanding such complexities could greatly benefit human health.
Depression and Objectively Measured Physical Activity: A Systematic Review and Meta-Analysis Depression is a major contributor to the overall global burden of disease, with a high prevalence and relapse rate. Several factors have been considered in order to reduce the depression burden. Among them, physical activity (PA) has shown a potential protective role. However, the evidence is contrasting, probably because of differences in PA measurement. The aim of this systematic review with meta-analysis is to assess the association between objectively measured PA and incident and prevalent depression. The systematic review was conducted according to methods recommended by the Cochrane Collaboration and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Relevant papers published through 31 August 2019 were identified by searching the electronic databases PubMed/MEDLINE, Excerpta Medica dataBASE (Embase), PsycINFO, Scopus, Web of Science (WoS), and the Cochrane Library. All analyses were conducted using ProMeta3. Finally, 42 studies met the inclusion criteria. The overall effect size (ES) of depression for the highest vs. the lowest level of PA was −1.16 [(95% CI = −1.41; −0.91), p-value < 0.001] based on 37,408 participants. The results of the meta-analysis showed a potential protective effect of PA on prevalent and incident depression. Introduction Depression is one of the major leading causes of disability worldwide, affecting approximately 400 million people [1], with 9% of men and 17% of women experiencing depressive symptoms at least once in their life. Mainly due to social prejudices, depression continues to be frequently under-diagnosed and inadequately treated [2]. Depression can have several negative consequences, being characterized by sad mood and/or loss of interest, affecting thoughts, feelings, and behaviors. Materials and Methods We conducted this systematic review according to the methods recommended by the Cochrane Collaboration [16] and the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guidelines [17], and documented the process and results in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [18]. The review protocol has been registered on PROSPERO [19], the International Prospective Register of Systematic Reviews funded by the National Institute for Health Research (https://www.crd.york.ac.uk/prospero/). Information Sources and Search Strategy Studies were identified by searching the electronic databases PubMed/MEDLINE, Embase, Scopus, Web of Science (WoS), PsycINFO and the Cochrane Library. We combined a search strategy of free-text terms and exploded MeSH headings for the topics of depression, physical activity, objective measurement, and type of study. The strategy was first developed in PubMed/MEDLINE and then adapted for use in the other databases (Supplementary Table S1). Studies conducted on human subjects and published in English through 31 August 2019 were included. Inclusion and Exclusion Criteria We considered studies that investigated the relation between objectively measured physical activity and depression, both as a continuous and as a binary variable. Adult participants of both sexes were considered. As done before [20,21], both population-based and hospital-based studies were included.
Among hospital-based studies, inpatients, day-hospital, and outpatient subjects were included, while emergency care records were excluded as they were considered non-representative. All experimental and observational study designs were included apart from case reports. Narrative and systematic reviews, letters to the editor and book chapters were excluded. Table 1 shows a detailed description of the inclusion/exclusion criteria according to the Population, Exposure, Outcomes and Study design framework. Study Selection and Data Extraction Identified studies were independently reviewed for eligibility by two pairs of authors (VG, LB, MM, SC) in a two-step process: a first screening was performed based on title and abstract, while full texts were retrieved for the second screening. At both stages, disagreements between reviewers were resolved by consensus. Data were independently extracted by three authors (LB, MM, SC) and supervised by a senior author (VG) using an ad-hoc developed data extraction spreadsheet. The data extraction spreadsheet was piloted on 10 randomly selected papers and modified accordingly. As done before [23][24][25], both qualitative and quantitative data were extracted from the original studies. Qualitative data recorded included the following items: name of first author and year of publication, country where the study was conducted and period during which the study was performed, device used to measure PA, and tool used for depression diagnosis. Moreover, characteristics of the subjects were recorded (e.g., age, gender, comorbidities). Quantitative data extracted included: sample size, number of participants lost (attrition), duration of PA measurement, distribution of depressed participants in the sample, level of PA performed, and the results estimating the association between objectively measured PA and depression. Quality Evaluation The quality of the included publications was independently assessed by two authors using the Newcastle-Ottawa Scale [26] for observational studies and the Cochrane Collaboration tool for trials [27]. Meta-Analysis We pooled individual study data using ProMeta3® (Internovi, Milano, Italy) software. Due to heterogeneity, a random-effects meta-analysis was employed. In order to reduce the heterogeneity, two sensitivity analyses were conducted, considering the following items: (i) study design, (ii) participants' comorbidities. Moreover, a subgroup analysis by gender was conducted in order to estimate potential differences in effect between the two groups. We assessed publication bias with visual inspection of a funnel plot [27] and the Begg [28] and Egger [29] tests. Literature Search A total of 4279 articles were retrieved. After preliminary screening, 670 articles were excluded as duplicates, 409 as not original papers (reviews, letters to the editor, editorials, protocols, etc.), and 2796 as covering a different topic. After title and abstract screening, a total of 192 full-text articles were consulted, while at the end of the screening process only 41 were included in the systematic review. As it was not possible to extract data from one study, it was not included in the quantitative evaluation [67]. Figure 1 shows the selection process. Two studies reported separate data for men and women [49,54] and for this reason they were considered separately, resulting in 42 datasets being included in the meta-analysis.
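Since the pooling itself was performed in ProMeta3, whose internals are not shown, a minimal sketch of the random-effects computation described in the Meta-Analysis section may be helpful. The snippet below implements the widely used DerSimonian-Laird estimator in Python; it is an illustration only, the effect sizes and standard errors are invented placeholders rather than values from the included studies, and ProMeta3 may use a different estimator internally.

```python
import numpy as np
from scipy import stats

def dersimonian_laird(es, se):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects
    model; also return Cochran's Q-based I2 heterogeneity statistic."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    w = 1.0 / se**2                          # inverse-variance (fixed) weights
    fixed = np.sum(w * es) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (es - fixed)**2)          # Cochran's Q
    df = len(es) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    pooled = np.sum(w_re * es) / np.sum(w_re)
    se_p = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    p = 2 * stats.norm.sf(abs(pooled) / se_p)
    ci = (pooled - 1.96 * se_p, pooled + 1.96 * se_p)
    return pooled, ci, p, i2

# Placeholder log-odds-ratio effect sizes and standard errors (illustrative only).
effect_sizes = [-1.4, -0.9, -1.2, -0.6, -1.8]
standard_errors = [0.30, 0.25, 0.40, 0.20, 0.35]
pooled, ci, p, i2 = dersimonian_laird(effect_sizes, standard_errors)
print(f"pooled ES = {pooled:.2f}, 95% CI = ({ci[0]:.2f}; {ci[1]:.2f}), "
      f"p = {p:.3g}, I2 = {i2:.0f}%")
```

The same function returns I2, the statistic referred to in the heterogeneity discussion below; values above 90% indicate that most of the observed dispersion reflects true between-study differences rather than sampling error.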
The characteristics of the included studies are reported in Table 2. The majority of the studies were conducted in Europe (n = 18, 43%) and North America (n = 12, 29%). The first study assessing objectively measured PA and depression was published in 2004 [68]. The smallest sample size included in a study was 23 participants [70], whereas the largest was 16,415 participants [62]. Twenty-six of the 42 datasets were cross-sectional (62%), eight were trials (19%), six were cohort studies (14%), and one was a case-control study (2%). The quality assessment of the trials is reported in Supplementary Table S2. Thirty-two datasets (76%) used an accelerometer as the measurement device, while nine datasets (21%) used a pedometer. In almost all studies, participants were asked to wear the device for 7 days, and even in cohort studies PA was measured only at baseline. With regard to depression, heterogeneous tools were used to make the diagnosis, such as the Hospital Anxiety and Depression Scale (HADS), the Patient Health Questionnaire-9 (PHQ-9), the Beck Depression Inventory (BDI-II) and the Center for Epidemiologic Studies Depression Scale (CESD). The HADS was used most often (n = 11), followed by the PHQ-9 questionnaire (n = 9); however, almost all studies used a validated tool. At the same time, the results were expressed using different measures, for instance odds ratio (OR), relative risk (RR), β coefficient (β) and Spearman's rho (r). Sensitivity Analysis by Participants' Comorbidities The sub-group analysis considering only the general population (without diseases) included 21 datasets, and the pooled ES was −1.
Discussion The current systematic review with meta-analysis, which included 43 studies in the qualitative evaluation and 42 in the quantitative analysis, provided data on the association between objectively measured PA and the risk of depression. Since some studies reported data separately by gender, a total of 42 datasets were considered. To better understand the strength of the association between objectively measured PA and depression, a sub-group analysis by participants' comorbidity was conducted. When studies assessing the association among participants with comorbidities were considered, the ESs were not statistically significant (apart from COPD participants). However, prescription of adapted PA among participants affected by co-morbidities should be considered [71]. On the contrary, when only studies of the general population (otherwise healthy people) were considered, the pooled ES was statistically significant, indicating an inverse association between objectively measured PA and depression (more PA was associated with a lower risk of depression). A subgroup analysis by gender was conducted as well, showing a protective effect of PA only for women. However, this result should be considered carefully, since only three studies assessed PA and depression in men alone, reducing the sample size. These results are extremely important considering that depression is one of the leading causes of disability worldwide [1]. In the last fifty years, considerable attention has been paid to the physical health of depressed individuals. This may be because physical exercise seems to improve several biomarkers implicated in depression (e.g., impaired neuroplasticity, autonomic and immune imbalances) [9]. In in-vivo models, physical activity showed a serotoninergic effect similar to that of some antidepressant medications [8]. Moreover, PA has demonstrated an effect on inflammatory processes through regulation of the hypothalamic-pituitary-adrenal axis, which is involved in the development of depression [9]. Additionally, higher levels of brain-derived neurotrophic factor have been found after physical exercise [10]. Lastly, the level of PA directly affects the upper limit of oxygen uptake, which depends on the capacity of the cardiorespiratory system to transport oxygen to the organs, including the brain. Lower oxygenation of the brain may result in chronic cerebral ischemia and, if the affected areas are involved in mood regulation, this may increase the risk of depression [12]. In the last decades, several studies have shown that a healthy lifestyle, in particular the intensity and duration of physical activity [72,73], is important in the prevention and treatment of depression [7]. In our analysis we could not assess the relation between severity of depression and intensity of PA, as in most of the included primary studies the severity of depression was not reported and PA intensity was expressed using different methods. The results from our review confirm the beneficial effect of PA on depression, especially for participants without comorbidities. In this regard, health education campaigns aimed at promoting PA should be fostered [74][75][76], especially because approximately 40% of the adult population worldwide is insufficiently physically active [77]. However, in order to better interpret our results, another important aspect should be considered: even though several sub-group analyses were conducted, the value of heterogeneity remained consistently high.
Although a sensitivity analysis including only datasets with otherwise healthy people was conducted, the I2 remained extremely high. However, an I2 value higher than 90% means that the observed variability is due to true heterogeneity among studies rather than to sampling error [78]. Moreover, the primary papers expressed the level of PA using different units of measure, and the results were reported in different formats. Even though the pooled ES was estimated from log ORs, allowing comparability, this underlying heterogeneity might have affected the assessment of the I2 [79]. Another potential explanation for the heterogeneity could be the different durations of measurement, the devices used, and the questionnaires adopted to diagnose depression. Furthermore, a variety of confounding variables were selected in the original studies and, in order to control for them, we pooled the models with the highest level of adjustment. Limitations and Strengths The main limitation of this systematic review is the high I2 value, which might reduce the generalizability of our results. Most studies are observational and based on cross-sectional analysis. Nevertheless, we performed sensitivity analyses including only trials and longitudinal studies, increasing the robustness of our results. Due to the high heterogeneity in reporting the level of PA performed by participants in the original studies, it was not possible to identify a recommended level of PA. The inability to estimate an association between severity of depression and PA is another important limitation. The main strengths of this review are its systematic nature and its comprehensive inclusion of the scientific evidence published so far in the main medical-scientific databases. Furthermore, the pooled ES was large and statistically significant, based on 37,408 participants, and sub-group analyses were conducted based on participants' comorbidity and study design. In the primary studies, the diagnosis of depression was consistently based on DSM criteria and was established by trained investigators using validated assessment scales, mainly with interrater reliability. Conclusions To conclude, the results of this systematic review and meta-analysis clearly show a statistically significant protective effect of objectively measured PA on prevalent and incident depression. Increased PA is associated with a lower risk of depression. The advantages of our study are several. Firstly, this study offers a systematic overview of previous studies assessing objectively measured PA and depression. Secondly, this study highlights the usefulness of objectively measured PA compared to self-reported PA. Objectively measured PA is not only more precise in estimating the duration, total amount, and intensity of PA, but it can also better capture the strength of the association with some diseases, such as depression. Thirdly, this study shows the importance of promoting physical activity, inasmuch as it can help to reduce the high burden of depression in our society. Lastly, our findings are relevant for both policy makers and clinicians, as physical activity is one of the cheapest non-pharmacological treatments that might be prescribed to the general population, with a potentially major public health impact. Physical activity is important across ages and should be integrated into daily life.
Constraints and opportunities to upgrading Uganda's rice markets: A value chain approach Most of Uganda's rice is produced by smallholder farmers with the purpose of marketing for family income. However, a poorly developed market system is a major problem for rice producers. Based in Namutumba district (Eastern Uganda), the study involved both structured interviews with several stakeholders and focus group discussions with three farmer groups and three rice miller groups, each comprising ten people. Using a value chain approach, the study analyzes constraints and upgrading opportunities along the marketing channels. Low rice quality, attributed to poor postharvest practices whereby foreign matter mixes with paddy during drying, is a major challenge. High energy costs, amounting to 69% of milling costs for electricity-operated machines and 89% for diesel-operated machines, lower farmers' incomes. The small volumes of rice supplied by individual farmers to the market also weaken their bargaining power. In addition, there is mistrust between farmers and millers, since the latter can only recover up to 70% of financial credit advanced to the former. The above challenges are compounded by limited market support activities by development partners. Strengthening group cohesion through horizontal coordination, improving relationships between chain actors at different chain nodes through vertical coordination, and rural electrification are some of the possible considerations. Key words: Uganda, rice market, upgrading, value chain, farmers, rice millers. INTRODUCTION Unlike most food crops, which are grown to satisfy household consumption and food security requirements, rice is consumed more in urban areas, where it is one of the major foodstuffs in homes, schools, hospitals and prisons (Ahmed, 2012). Rice is grown almost throughout the country but mainly in Eastern and Western Uganda due to the availability of lowlands with high moisture content throughout the growing season. However, these (Eastern and Western) regions' lack of market access is the most significant explanation for their food insecurity (McKinney, 2009). In the same regard, Odogola (2006) observed that 70% of the rice farmers in Kamwenge district (Western Uganda) and 48% of their counterparts in Iganga district (Eastern Uganda) have poor marketing systems. The main problems cited as constituents of poor market access include: lack of market information, poor road networks, small paddy quantities, low quality paddy and inadequate postharvest handling skills (Odogola, 2006). With the larger share of locally produced rice ending up in domestic markets, access to a well-functioning market is imperative to improve the livelihoods of smallholder farmers, who are the majority of rice producers in the country. There is a need for an efficient and effective linkage between the rural producers and the urban consumers. This linkage can be well understood through the concept of the value chain. Kaplinsky and Morris (2000) refer to a value chain as the full range of activities required to bring a product or service from conception, through different stages of production, to delivery to final consumers and final disposal after use.
There are many rice value chain studies which have been conducted in Uganda by government agencies such as the Ministry of Agriculture, Animal Industry and Fisheries (MAAIF, 2009) and the Plan for Modernization of Agriculture (PMA, 2009), plus several bilateral and donor organizations, either directly or through consulting agencies (Trias, 2012; USAID, 2008; Kilimo Trust, 2012; ACF, 2014). In most of the above studies, the emphasis has been placed on market structure mapping and gross margin analysis, with less focus on upgrading opportunities, even though upgrading is a prerequisite for market access by rice producers. Mitchell et al. (2011) define upgrading as a means of acquiring the technological, institutional and market capabilities that allow resource-poor rural communities to improve their competitiveness and move into higher-value activities. The purpose of this study was to assess the rice value chains in Uganda in the context of upgrading by: (i) Studying various marketing channels; (ii) Identifying constraints and opportunities along the marketing channels, and (iii) Analyzing the upgrading strategies. Upgrading of rice value chains enables farmers to earn higher prices as well as helping consumers access high quality rice at relatively lower prices (JICA, 2013). The concept of value chain analysis The concept of the value chain can be traced back to the 1960s, when French scientists developed the filiere approach for studying contract farming and vertical integration in agriculture (Mitchell et al., 2009; UNIDO, 2009). They later applied it to export commodity production of cotton, rubber, coffee and cocoa in France's former African colonies. The emphasis of this approach was on analyzing how the local production system was linked to the processing industry, trade, export and final consumption (Nang'ole et al., 2011). At the time, the focus of the filiere approach was on production and commercialization, without the elements of governance, transformation and value addition (UNIDO, 2009). In the 1970s, a related concept, 'sub-sector analysis', was developed, which involved studying the networks and relationships linking suppliers, processors, transporters and traders in ways that connect producers and enterprises to final consumers of goods and services (Nang'ole et al., 2011). A sub-sector thus involves a set of activities, actors and the rules governing those activities. The term value chain was first used and popularized by Michael Porter (1985), who sought to assess the contributions of various primary and supportive firm activities to the overall added value of its business. The primary activities include inbound logistics, operations, outbound logistics, marketing, sales and service, which can directly add value to the production of goods and services (Nang'ole et al., 2009). On the other hand, support activities include procurement, human resources management, technology development and firm infrastructure, which are necessary for the effectiveness and success of the firm (UNIDO, 2009). Porter's approach was aimed at highlighting actual and potential areas of competitive advantage and the interdependences and linkages between vertically arrayed actors in the creation of value for the firm (Rich et al., 2009). The weakness of Porter's approach to the value chain is that it restricts analysis to the firm level without considering upstream and downstream activities beyond the company (Fasse et al., 2009).
The concept of the Global Commodity Chain was developed by Gereffi and Korzeniewicz (1994), who applied it to development issues. Whereas Porter's approach focused on within-firm linkages of several activities, the Global Commodity Chain was modified to focus on inter-firm linkages while emphasizing the governance structure between several actors. Gereffi identified four elements: (i) Input-output structure; (ii) Territorial (international) structure; (iii) Institutional framework, and (iv) Governance structure (Nang'ole et al., 2009; Fasse et al., 2009). Another modification of the Global Commodity Chain, the 'Global Value Chain', was coined in the early 2000s by Kaplinsky and Morris (2002). They defined a value chain as the full range of activities required to bring a product or service from conception, through different stages of production, to delivery to final consumers and final disposal after use. Kaplinsky and Morris (2002) distinguish the value chain from the supply chain by emphasizing the relationships and linkages both within and between actors at each stage of production. According to Rich et al. (2009), this has the considerable merit of highlighting the constraints and opportunities at and between stages of the chain and can thus be used to develop integrative policy recommendations that target chain inefficiencies and address distributional issues. More recently, the concept of value chain analysis seems to have become synonymous with market analysis, as it involves the role of policies, institutions and laws in shaping markets (Nang'ole et al., 2011). However, the relevance of the Global Value Chain approach in developing countries is questionable, as it emphasizes vertical integration with a focus on international markets, leaving behind many smallholder farmers who depend on local and regional markets (Riisgaard, 2009; Tran et al., 2013; Mitchell and Coles, 2011; Trienekens, 2011). Integrating horizontal and vertical coordination is a requirement for developing the value chains of rural farmers (Mitchell and Coles, 2011). Also, agricultural value chains are buyer-driven, meaning buyers have more power in deciding what to produce (Mitchell et al., 2009). To reduce the power of buyers, developing-country chain actors need to upgrade by building the technological and managerial capacity that allows them to participate effectively in value chains (UNIDO, 2009). Value chain upgrading is therefore one of the main focuses in developing countries (Trienekens, 2011).
Upgrading in value chains Upgrading is a key contribution of value chain analysis with regard to understanding how the incomes of poor people can be augmented. It refers to acquiring the technological, institutional and market capabilities that allow firms or communities to improve their competitiveness and move into higher-value activities (Mitchell et al., 2009). The purpose of upgrading is to enhance the rewards and/or reduce the risks to actors in production and marketing. If the anticipated gain in rewards or reduction in risk is not realized, the actor may choose to revert to previous or lesser functions. Such a scenario is referred to as downgrading and is the opposite of upgrading (Khiem et al., 2010). Different upgrading strategies have been suggested in various studies (Kaplinsky and Morris, 2002; Mitchell et al., 2009; Mitchell and Coles, 2011; Trienekens, 2011) to help develop the value chains of developing countries. Such strategies are briefly explained as follows: Horizontal coordination One of the main obstacles facing small-scale enterprises in developing countries is the very fact that they are small-scale. Horizontal coordination is the process of firms (which can be as small as individual actors) collaborating within a functional node (for example, input supply, production, processing, trading or retailing) to achieve a strategic balance between competition and collaboration (Mitchell and Coles, 2011). The purpose of horizontal coordination is to address shared constraints, interests and entry barriers associated with scale. These include high transaction costs, low output volumes and poor quality output, weak negotiating power, lack of capital, and the management of common property resources. According to Mitchell et al. (2009), horizontal coordination is often the first step in a sequence of interventions that ultimately result in access to the market, and is a prerequisite for other forms of upgrading. In developing countries, horizontal coordination takes the form of producer associations or cooperatives (Trienekens, 2011). Vertical coordination This is the process of strengthening relationships between functional nodes of the value chain, involving a shift away from one-off spot transactions toward longer-term business connections, for instance contract farming (Mitchell et al., 2009; Mitchell and Coles, 2011). In practice, vertical coordination is often a slow and difficult process because it involves building trust relations between the buyer and the seller. As such, it rarely takes place in isolation from other upgrading strategies. More formal contracts are often associated with higher performance requirements, such as higher-quality products, larger volumes and delivery schedules that are more frequent and reliable. Overcoming the barriers associated with these requirements may necessitate a preliminary step of horizontal coordination (Mitchell and Coles, 2011).
Functional upgrading This is also referred to as vertical integration; it involves changing the mix of functions performed by actors in the value chain. This can be achieved through the addition of new activities by an individual or firm, for instance agricultural producers starting to process some of their output to add value, or starting to produce inputs by themselves. In some instances, the individual or firm may decide to delete some activities (downgrading) if deemed necessary. The resulting distribution of functions among actors in the chain should maximize its efficiency and competitiveness by attaining the optimal level of specialization versus integration (Mitchell et al., 2009; Mitchell and Coles, 2011). Integrating functions vertically offers the possibility of transforming raw materials into new products and thereby increasing the proportion of value captured. Trienekens (2011) identifies functional upgrading as a key issue in developing-country value chains, as most exports are in raw material form. Process upgrading This involves improving value chain efficiency by increasing output volumes or reducing costs per unit of output. Examples include improving agronomy to enhance yields, resulting in higher sales, own consumption, or both. This may be the result of improved planting techniques, planting materials or investments such as irrigation infrastructure and technologies which reduce postharvest losses (Mitchell et al., 2009). Process upgrading focuses on the one hand on upgrading the product and on the other hand on the optimization of production and distribution processes. The latter includes the introduction of new technologies such as automated production and packaging lines, cooling installations and modern transportation technology, as well as improved communication facilities in the supply chain such as internet connections, GPS systems or the intensive use of mobile phones in production and transportation planning (Trienekens, 2011). Product upgrading This involves introducing new products or improving old products faster than rivals. It involves changing new product development processes, both within individual links in the value chain and in the relationships between different chain links (Kaplinsky and Morris, 2000). Along the same lines, Mitchell and Coles (2011) define product upgrading as making better products that hold greater value and fetch higher prices. One of the most common and intransigent barriers for the rural poor is that their output fails to meet market specifications, both in terms of quality and volume. Raising product quality and increasing the efficiency of production are critical prerequisites to accessing and competing successfully and beneficially in markets (Mitchell and Coles, 2011). Process and product upgrading are closely related because improving product quality often involves improvements to the production process (Mitchell et al., 2009). Inter-chain upgrading This is where chain actors introduce value-adding processes from other chains to offer new products or services, for instance a farmer who enters into tourism activities (Trienekens, 2011). The new value chain is usually more profitable than the previous one, for example shifting from growing traditional commodities to high-quality export horticulture. Unfortunately, the upgrading process often presents significant barriers to entry for the poor and vulnerable seeking to access the more lucrative value chain (Mitchell et al., 2009).
Upgrading of the enabling environment Although not an upgrading strategy in a strict sense, the competitiveness of the enabling environment for value chains is a major contributing factor to the success of value chain operations. Improvements to the support services and the institutional, legal and policy frameworks in which value chains operate are often a productive area in which development agencies can intervene to improve the functioning of a chain (Mitchell et al., 2009; Mitchell and Coles, 2011). Such things as standards and certification, and rules and regulations regarding contracts, must be in place for successful upgrading in value chains to take place. MATERIALS AND METHODS The study was carried out in the eastern district of Namutumba. Carved out of Iganga district in 2006, Namutumba is located at coordinates 00°51'N, 34°41'E along Tirinyi road (the Mbale-Iganga highway). It occupies a total area of 802 km², of which 138 km² is covered by water bodies. Administratively, the district is divided into six subcounties: Namutumba, Magada, Bulange, Nsinze, Ivukula and Kibale. Given its abundant swamps and proximity to Lake Victoria, the climate is tropical with small seasonal variations in temperature (22-27°C) and rainfall (900 to 1150 mm). As of 2011, the population estimate was 213,000 people, of whom 51.5% were female. Smallholder subsistence farmers comprise 84% of the population. They engage in rearing livestock such as chicken, cattle and goats, and in growing crops such as rice, cassava, groundnuts, millet and coffee. Namutumba, together with the nearby districts of Iganga, Pallisa, Tororo, Butaleja, Bugiri and Busia, forms the main rice growing region of Uganda. The district is easily accessible due to its location along the highway. Nsinze subcounty was purposively selected since most of the rice value chain activities take place there. It has many rice farmers and rural millers, and the nearby Busembatya trading center hosts a lot of rice milling and trading transactions, which makes it a link between rice farmers and urban traders. The researchers first conducted desktop research to gain a basic idea about rice farming as a business in the study area. This was followed by discussions with key informants, who included the chairperson of the farmers' forum, a representative from the National Agricultural Advisory Services (NAADS), local council leaders and farmer group leaders. Focus group discussions were then carried out with 3 farmer groups, each containing 10 people. Each group was representative of a single parish. In addition, discussions with 3 groups of rice millers were conducted. One group of rice millers was in the rural farming area of Nsinze subcounty, while the other two groups were in Busembatya trading center. This was necessary since millers from the rural villages had different characteristics from those of town millers. For the purpose of cross-checking the information obtained from group discussions, 15 farmers and 5 rice millers were selected for individual interviews. The major processing company in the region, which is involved in purchasing rice paddy from farmers and traders, was interviewed to gather data on processing. Analysis was done in the context of value chain upgrading as suggested by Trienekens (2011), with the help of descriptive statistics, tables, figures and gross margins.
Overview of value chain actors According to the group discussions, rice farmers own about 2 ha per household. Table 1 summarizes landholding and land under rice cultivation as captured from the individual household interviews. The average landholding is 2.2 ha, which is consistent with what was reported in the group discussions. The actual landholding, however, varied significantly, from 0.8 ha for the smallest farmer to 4.0 ha for the largest. In contrast, the average landholding in the region, as reported in the agricultural census of 2008, is about 0.8 ha per household. This implies that rice farmers own, on average, more land than their non-rice-farming counterparts. The average cultivated land was 2.0 ha, of which 36.7% was under rice. The average rice yield was 2.7 tons/ha. This yield was achieved using seed from the previous harvest and without fertilizer application or irrigation. Chemical herbicide for striga weed was, however, applied. Rice millers in the survey area can be categorized into two groups: (i) Rural village millers (hereafter referred to as 'village millers'), who are located in deeper villages where rice farming mostly takes place, and (ii) Rural town millers (hereafter referred to as 'town millers'), who operate from the trading centers. Using the results of the group discussions, Table 2 compares these two categories of rice millers. The village millers are relatively new (2 years old) in the business and use diesel as their power source. The milling capacity of their machines is low (3.2 tons/day). Despite their proximity to farms, they receive relatively low volumes of paddy, ranging from 0.3 tons/day to 1.3 tons/day depending on the season. Due to the high diesel price, they charge a relatively higher milling fee (100,000 Ush/ton). On the other hand, the town millers have accumulated relatively more experience, having spent 5 years on average in the milling business. They use electricity as their source of power, and the milling capacity of their machines is quite large (18 tons/day). Although the quantity of paddy they receive is larger, it is well below the amount required to run their milling machines at capacity. Because they are far from farmers and electricity is cheaper than diesel, their milling charges are relatively low. Interviews with the manager of the processing company revealed that it was started by an individual entrepreneur with the support of government and other donors in Jinja town (in 2006). The company has a large milling machine with a milling capacity of 2 tons per hour and a mechanical dryer with a capacity of 5 tons per hour. It currently supports 10,000 clients across the country, with some as far away as Western Kenya. The clients are mainly smallholder farmers, who bring paddy by themselves when they are from the Busoga sub-region (where the company is located) or are offered a transport service (when from elsewhere). Besides farmers, some 300 traders also bring paddy. At the company premises there are several services, which include drying, milling, branding, storing and marketing. Milling is of high quality, as all foreign matter and unfilled grains are separated from the paddy before milling. A commission for these services is charged to clients after the milled rice is sold.
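The milling economics described above lend themselves to a short worked example. The sketch below, in Python, uses only figures quoted in the text (the 100,000 Ush/ton village-miller fee, the 3.2 tons/day machine capacity, the 0.3-1.3 tons/day seasonal throughput, and the 89% diesel energy-cost share reported in the abstract); treating that energy share as a fraction of fee income is a simplifying assumption made purely for illustration, since the paper states it as a share of milling costs.

```python
def miller_day(fee_per_ton, energy_share, throughput, capacity):
    """Estimate a rural miller's daily fee revenue (Ush), the portion
    consumed by energy, and the capacity utilization of the machine."""
    revenue = fee_per_ton * throughput
    # Simplifying assumption: the quoted energy share is applied to fee
    # income; the paper reports it as a share of milling cost.
    energy_cost = energy_share * revenue
    utilization = throughput / capacity
    return revenue, energy_cost, utilization

# Village (diesel) miller figures from the text: 100,000 Ush/ton fee,
# 89% energy share, 3.2 tons/day capacity, 0.3-1.3 tons/day received.
for tons in (0.3, 1.3):
    rev, energy, util = miller_day(100_000, 0.89, tons, 3.2)
    print(f"{tons} t/day: revenue {rev:,.0f} Ush, "
          f"energy ~{energy:,.0f} Ush, utilization {util:.0%}")
```

Even at the high-season throughput the machine runs at roughly 40% of capacity, and energy absorbs most of the fee income, which is consistent with the paper's case for rural electrification as an upgrading opportunity.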
Rice market structure

Figure 1 illustrates the rice market structure in the study area. Most of the dried paddy is taken by individual farmers to rural rice millers for milling. The remaining paddy is either taken by individual farmers to the medium-scale processor (Upland Rice Millers) or sold to paddy traders, who in turn take it to the processing company. The processing company works with up to 300 traders who source paddy from all parts of Uganda and other East African regions such as Western Kenya and Northern Tanzania. The paddy taken to the rural rice millers is sold immediately after milling to waiting buyers. The buyers are mostly village assemblers, who bulk the rice before selling it to wholesaling traders from urban areas such as Iganga, Jinja and Kampala.

The paddy taken to the processing company is dried to the required standard (14% moisture content), milled, graded and branded before it is sold. Grading is based on the percentage of broken rice, as all foreign matter is removed by the machine during milling. The graded rice is then branded according to the varietal features of the milled rice: (i) 'Kayiso' for lowland long and narrow grains; (ii) 'Upland' for NERICA varieties; and (iii) 'Super' for lowland short, thick, sticky and aromatic grains. These brands carry meaning. For example, 'Kayiso' literally means needle-shaped and comes from indigenous Ugandan varieties. Owing to their promotion since 2003, NERICA cultivars are the most popular upland rice varieties in Uganda; to this end, the words 'NERICA' and 'Upland' are often used interchangeably by farmers and consumers. The 'Super' brand is associated with superior cooking qualities. The branded rice is sold either to distributors (wholesalers, retailers or exporters) or to final consumers (individuals, public and private institutions).

Besides rice, the processing company also produces byproducts such as bran (for livestock and poultry feed) and husks, which are currently used as organic fertilizer in maize fields, although plans are underway to use them for fuel.
Limited market support

The rice sector in Namutumba district boasts a good network of governmental and non-governmental organizations. Table 3 indicates the different organizations rendering support to farmers and the value chain activities supported. With help from the Japan International Cooperation Agency (JICA), the Uganda National Agricultural Research Organization (NARO) is constantly engaged in the development of new rice cultivars and agricultural technologies. Organizations such as the Sasakawa Africa Association (SAA), Africa 2000 Network (A2N) and the National Agricultural Advisory Services (NAADS) are putting great effort into the dissemination of rice farming technology and extension. The support, however, does not go beyond the farm level, as shown by the interventions of the various support organizations in the survey area. Apart from the East African regional organization Kilimo Trust, which supports marketing initiatives through its private partnerships, there is minimal assistance in this area. Through the program 'Development of Inclusive Markets in Agriculture and Trade (DIMAT)', a partnership with Upland Rice Millers Ltd, Kilimo Trust is expected to reach 3,000 rice farmers in the area of rice marketing. The outcome of this rice marketing partnership is yet to be seen, however, as the program is still new and not yet rolled out. Bulk marketing, which was promoted by SAA, could not be sustained after the closure of the project, although farmers viewed it positively. During the project period, farmers did not actively participate in the bulk marketing project. Instead, they would pack their rice and wait for the group leaders, under SAA facilitation, to come with a truck and take the rice for milling. As a result, participants acquired no skills during the project, and this led to the collapse of the initiative following the project's closure. The rest of the organizations have concentrated on production, with little assistance in postharvest handling and marketing. This contributes to the low quality of rice produced by farmers. More support focused on quality improvement is required.

Mistrust between farmers and millers

In terms of financial credit, only one rice miller (a former carpenter) was able to access credit from a microfinance institution (Pride Microfinance Ltd). Most millers used their own savings or borrowed from friends for their startup capital. Limited financial support is one of the reasons for low-quality rice, owing to poor drying facilities. Efforts by millers to extend financial credit to farmers have been futile due to failures in recovery. This has created mistrust between millers and farmers, derailing hopes of future credit offers.

Table 4 highlights credit recovery by millers. All the millers who advanced financial credit to individual farmers recovered at most 70% of the total amount, with the rest defaulted. Since the buying and selling of rice take place at the milling machine, an informal agreement is formulated whereby farmers are expected to mill their rice at the lender's premises and repay the credit after milling, either in cash or in kind. If applied appropriately, this arrangement is fair to farmers, since interest is sometimes not factored into the recovery amount, as millers anticipate a steady supply of paddy for the smooth flow of their business.
Unfortunately, more often than not, farmers fail to honor the agreement after harvesting and mill their rice elsewhere due to misallocation of the credit funds. However, one miller who gave credit to a group of farmers was successful and recovered 100% of the amount. Notably, this miller did not offer cash but instead provided tarpaulins in kind, which were valued in cash for the purpose of repayment. Based on this model, it is recommended that credit be offered in the form of tarpaulins to farmer groups through their leaders.

Price formation mechanism

Figure 2 is a sketch of the price formation mechanism. Through interactions with other farmers or rice millers, by phone or face-to-face, farmers get to know the likely rice price range for a particular day before taking their rice for milling. Village assemblers, for their part, also come to the miller with fair knowledge of the prevailing price after phone consultations with other buyers. Since most traders come from distant locations and are interested in large volumes, they do not purchase directly from farmers but buy from the village assemblers who bulk the rice. The price is determined through negotiation between the farmer and the village assemblers. It depends on the perceived quality, as judged by the amount of broken rice and the presence of foreign matter. Since there are no quality standards, these judgments are made in comparison with other available rice. Other factors that influence the price on a given day include the number of traders, the volume of rice, and the bargaining power of the particular farmer.

Village assemblers hold the market power

Farmers in the survey area grow various varieties of rice, which can be branded as Super, Kayiso or Upland in the wholesale and retail markets. Unfortunately, at the farm gate it is sold as a single category, no matter how distinct it may appear. During drying, different varieties are usually mixed, either voluntarily by farmers due to limited space or involuntarily by birds when the varieties are spread separately but adjacent to each other. Because the rice farmers offer to the market is mixed, village assemblers pay the price of the lowest-quality brand even when it constitutes only a minor share of the farmer's rice. Farmers in the survey area grow mainly NERICA as a result of previous assistance from the Sasakawa Africa Association. However, their rice has been bought at a price comparable to that of Kayiso instead of Upland, which is the true brand for NERICA rice varieties. After reaching the wholesale market, traders sell it as Upland without adding any value. Given that prices for the various brands differ, farmers lose money in this process. Table 5 gives rice prices at different marketing levels. Kayiso rice is the cheapest, followed by Upland rice, with the Super brand being the most expensive at both wholesale and retail. This implies that traders hold power and influence in rice markets at the expense of farmers. Farmers will need to be better coordinated and practice appropriate postharvest procedures if they are to benefit from higher prices for their rice. It is worth noting, however, that whereas the wholesale and retail prices were quoted from the nearby market, most of the rice produced in the survey area is procured and taken to Kampala by traders.
Market constraints to farmers

Striga weed is the most severe problem at the production stage. The weed causes many unfilled grains and consequently a low milling recovery. It also increases labor costs, as it is cumbersome to eradicate and thus necessitates agricultural chemicals. The weed is more destructive to some rice cultivars than to others. NERICA 4, the most widely grown cultivar in the survey area, is highly susceptible and can cause significant crop losses. NERICA 10, a newly introduced variety in Uganda, is however resistant to striga weed (Rodenburg et al., 2015). Given that it also gives higher yields, switching from NERICA 4 to NERICA 10 is a viable option.

The most market-related challenge for rice farmers is the lack of drying facilities. Paddy is dried on bare ground and, as a result, ends up mixed with a lot of foreign matter. Coupled with poor moisture control, this leads to low milling quality. Farmers' failure to dry different rice varieties separately further lowers their potential income. There is a need for postharvest-oriented training with emphasis on drying. Training alone, without investment in basic drying facilities such as tarpaulins and moisture meters, may not be of much help. Given that farmers lack the financial ability to invest in drying facilities, collaboration with other value chain actors, mainly millers, is needed.

Challenges facing small-scale millers

Small-scale rice milling is done by diesel-operated machines in the villages and electricity-operated machines in the towns. To minimize defaults, the power company advised rice millers to form clusters through which they would be connected to electricity. However, the initiative was not successful, as several members operated without paying the fees. The end result was frequent disconnection from the electric power grid due to defaulting cluster members. The faithful members who committed themselves to paying their fees have not been spared: they are struggling to service the debt of their defaulting counterparts so that they can sustain their businesses. Since the clusters seem to have failed, allocating each miller an individual electricity meter is worth trying. The cost of power is also high, constituting 69 and 86% of total costs for town millers and village millers, respectively. Table 6 summarizes the challenges facing rural rice millers. Besides the aforementioned power-related challenges, the amount of paddy available keeps fluctuating; during the off season this problem worsens, forcing some millers out of business. Low paddy quality also affects the milling machines, necessitating frequent servicing. Despite all these challenges, small-scale rice milling remains worth conducting due to its profitability. Table 7 shows the daily profits accrued by village and town rural millers, calculated as the difference between income and operating costs: 54,800 and 90,400 Ush per day for village and town millers, respectively. On a monthly basis the average profit of village millers translates into 1.6 million Ush, which is more than tenfold the average household income in Eastern Uganda (155,500 Ush).
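The daily-to-monthly conversion above can be checked with a few lines of Python. The profit and income figures come from the text; the 30 operating days per month is an assumption.

```python
# Reproduce the profit arithmetic reported above (figures in Ush).
village_daily = 54_800          # village millers, from Table 7
town_daily = 90_400             # town millers, from Table 7
days_per_month = 30             # assumed operating days per month

village_monthly = village_daily * days_per_month
avg_household_income = 155_500  # average monthly household income, Eastern Uganda

print(f"Village miller monthly profit: {village_monthly:,} Ush")              # 1,644,000 (~1.6 million)
print(f"Multiple of household income: {village_monthly / avg_household_income:.1f}x")  # ~10.6x
```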
Constraints and coping strategies of the processing company

The company regularly evaluates its activities and designs new strategies to overcome current and future challenges. Table 8 shows such innovations according to the different processing functions. Originally, paddy was sun-dried on tarpaulins. Due to a number of challenges associated with sun drying, a high-capacity mechanical dryer was purchased. Shortage of power supply due to load shedding, however, emerged as a fresh challenge. The company is now planning to start generating power from rice hulls as a backup source. The economic viability of this option needs to be assessed before the company starts the initiative; since many sugar companies in the country now produce their electricity from bagasse, there is genuine optimism. The increasing number of paddy-supplying clients had put pressure on the available storage space. Many paddy and rice bags were crammed together, which in turn created a conducive environment for disease and pest outbreaks. In response, the company set up a modern and spacious (3,000 tons) warehouse, which has significantly improved storage quality. There is a periodic shortage (15%) in the amount of paddy received, especially from February to May each year. The company has tried to overcome this challenge by importing from Kenya in the short term, while partnerships with other organizations are being signed to increase local rice production and end paddy importation. It is believed that such arrangements will avail more paddy than the milling capacity of the current machine. To prepare for this anticipated challenge, plans are underway to install a higher-capacity milling machine. Currently, milling yields four grades of rice: (i) A (100% wholly milled rice), (ii) B (up to 30% broken rice), (iii) C (31-70% broken rice) and (iv) D (more than 70% broken rice). The rice price decreases from grade A to D, with A being the most expensive. Due to poor postharvest handling, farmers' rice is dominated by grade C, which commands a lower price in the market and consequently yields low income. To help farmers earn more, the company has been marketing three grades (A, B-C and D) by mixing grades B and C. However, this has come at the expense of product quality, which erodes consumer trust.

The company is now encouraging farmers to bring freshly harvested paddy so that it can be dried on the premises. At the same time, it is equipping its laboratory with chemicals and instruments for various quality tests, to reduce the percentage of broken rice and improve milling quality. Super rice, the most demanded brand, is in limited supply, since its varieties do not grow well in most Ugandan soils. Most of the paddy for Super rice is currently imported from Tanzania, which has resulted in higher prices that average consumers cannot afford. Countrywide soil testing has been carried out, and soils in the Soroti area have been identified as ideal for Super rice cultivars. Through public-private partnerships, efforts to promote the production of Super rice in Soroti are under consideration.
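The grade thresholds above map directly onto a simple classification rule. The sketch below encodes them in Python, with the percentage of broken grains as the only input (a simplification, since foreign matter is removed before grading).

```python
def rice_grade(pct_broken: float) -> str:
    """Classify milled rice by percentage of broken grains, per the grades above."""
    if pct_broken == 0:
        return "A"          # 100% wholly milled rice
    elif pct_broken <= 30:
        return "B"          # up to 30% broken
    elif pct_broken <= 70:
        return "C"          # 31-70% broken
    else:
        return "D"          # more than 70% broken

# Grade C dominates poorly handled farm rice:
print(rice_grade(45))  # 'C'
```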
Horizontal coordination

To a small extent, the farmers were organized into farmer groups. In reality, however, the groups seemed non-existent, as no activity apart from training was carried out as a group. Initially, input purchase and paddy marketing were done collectively through the groups with the help of the Sasakawa Africa Association. Trucks, often coordinated by Sasakawa, would move from member to member gathering the weighed paddy and taking it for milling before it was sold to major buyers. Members would then be paid in proportion to their paddy. This process ensured higher selling prices and lower marketing (mainly transportation) costs; in this way, farmers earned more than if they had sold individually. Since members played a passive role in the marketing activities, they did not acquire the skills required for the initiative to be sustainable. Consequently, collective marketing collapsed after the completion of the Sasakawa project in the area. For the rural town millers, the only coordination was sharing power through clusters. Failure by some members to meet their obligation of contributing to the utility charges has led to accumulated debt, resulting in frequent disconnection from the power grid. The electricity company should conduct substantial training on cluster benefits and management; meanwhile, downgrading to individual electricity meters in the short term is worth considering.

Vertical coordination

Even though there is no formal relationship between the different chain actors, they occasionally coordinate. Rural rice millers have tried lending money to farmers to help with rice production. Because they do this informally, recovery of the credit has been difficult; as a result, they have cut off such arrangements due to a loss of trust in farmers. The medium-scale processing company has contracted traders to help collect paddy from farmers. In collaboration with other development partners, the company is also hiring agricultural specialists to train farmers in modern rice production and postharvest technologies. In addition, farmers are provided with drying and storage services on the company premises.

Functional upgrading

Previously, farmers would sell their paddy to village collectors who moved from farmer to farmer. This trend has recently changed, as most of the paddy is now taken by farmers for milling before selling. This can be viewed as a form of functional upgrading, since farmers are taking up the role of paddy traders. The processing company, which used to sell rice bran to livestock and poultry feed manufacturers, has started making the feeds itself before selling. The company is also in the process of turning rice husks into a power supply source, to be used as a backup during electricity load shedding. Plans to add diversified products such as chips, cakes, flour and wholegrain cereals are underway.

Process upgrading

To improve productivity in the rain-fed rice farming system, farmers in the survey area adopted the cultivation of NERICA 4, which requires less water.
Unfortunately, this cultivar is susceptible to striga weed, which causes significant yield losses. Switching to NERICA 10, which is higher yielding and resistant to the weed, would be a worthwhile venture. Poor drying of paddy results in poor milling quality. Most rural millers keep tarpaulins at their premises to help dry the paddy to the required moisture content before milling. However, they do not possess moisture meters for verifying the recommended moisture content; to obtain optimally dried paddy, they will need to purchase them. The medium-scale processing company has installed a mechanical drier, which is more efficient for paddy drying.

Product upgrading

This form of upgrading remains the most challenging for rural farmers and millers. Paddy is usually sun-dried on bare ground, leading to quality deterioration of the milled rice. In some cases the paddy becomes mixed with metal objects such as nails, which damage the milling machines. The viable solution is drying on tarpaulins, but rural rice millers do not have enough financial capacity to support the farmers. To dry 2.0 tons of paddy (the average output per farmer), 4 pieces of tarpaulin worth 200,000 Uganda shillings are required. This implies that rice millers would need to make a considerable investment, beyond their capability, to support farmers. The medium-scale processing company has a mechanical dryer that ensures optimum moisture content and minimizes foreign matter in the paddy. It also has a destoner incorporated into the milling machine, which removes stones and other foreign matter from the paddy before milling. The newly constructed, spacious warehouse provides good aeration, which prevents disease and pest infestation during storage. The quality standard of the rice, however, is still questionable, as it is not yet certified by the national certification body.

Inter-chain upgrading

During paddy shortages, rural rice millers divert to milling maize into flour; in that way, they are able to smooth their income throughout the year. In the same way, paddy traders venture into maize and coffee trading during paddy shortages.

Upgrading of the business environment

This has been observed in agreements and partnerships between the processing company and other development agencies in the area. One such partner is Kilimo Trust, which aims at improved market opportunities for smallholder farmers.

Conclusion

The Ugandan rice value chain is long, with many actors who hold varying degrees of power and influence. There are many smallholder farmers who produce rice either individually or in groups. However, marketing is mostly done on an individual basis, which significantly reduces farmers' power. Given that most rice millers provide milling services for a fee rather than buying rice themselves, market power remains with the rice assemblers, who purchase rice from farmers and sell it to wholesalers. Farmers tend to have low bargaining power due to the small volumes of rice they individually supply to the market. For farmers to raise their bargaining power, there is a need for horizontal coordination so that they aggregate their produce before selling. Currently, many farmers have joined groups aimed at joint production, and the formation of these groups has been facilitated by several development organizations. However, marketing receives less attention and is supported by few agencies. More marketing support, in terms of group formation, trust and management skills, is required.

In the liberalized rice sector of Uganda, bargaining power alone is not enough to improve farmers' incomes.
High rice milling costs will need to come down for farmers to improve the profitability of rice farming. Although both diesel and electricity costs are high, farmers can save significantly if they mill their rice using electricity-operated machines. Similarly, rice millers make a better profit with an electricity-operated machine than with a diesel-operated one. A program of rural electrification would benefit all stakeholders and could play a major role in improving the competitiveness of rice produced by Ugandan farmers.

Lower costs contribute to competitiveness only to a certain extent; the rest must come from high quality. Unfortunately, the quality of Ugandan rice is still low due to poor postharvest handling and simple milling machines without cleaning and grading capabilities. The most critical stage of postharvest handling is drying, where foreign matter mixes with the paddy spread on bare ground, leading to further quality deterioration. If farmers were trustworthy, they could obtain advance financial credit from millers to invest in basic drying equipment, such as moisture meters and tarpaulins, to improve the quality of their rice. However, farmers' failure to repay credit has led to mistrust between them and their lenders and, as a result, has hampered any credit advancement. This necessitates strengthening the linkages between the different chain actors through vertical coordination. Vertical coordination is essential in building relationships and trust among actors across the chain, which can result in a win-win scenario for all participants.

Since the predominantly grown rice variety (NERICA 4) is susceptible to parasitic weeds, farmers lose potential income through yield losses and quality reduction. It is therefore advisable that the various stakeholders engage in sensitizing farmers about available weed-resistant and high-yielding varieties such as NERICA 10. This paper has explored the challenges affecting the Ugandan rice sector and highlighted low rice quality as one of the major constraints; more research on how to improve quality is recommended.

Table 1. Household landholding and rice cultivation.
Table 2. Characteristics of rice millers.
Table 3. Interventions by support organizations. *Project activities in the survey area completed. **Still in pilot phase. Source: Farmers survey (September to October, 2013).
Table 4. Credit recovery by millers.
Table 6. Constraints to rice milling.
Table 7. Daily gross margin by rice millers.
Table 8. Constraints to rice processing.
Attention-deficit/hyperactivity disorder, delay discounting, and risky financial behaviors: A preliminary analysis of self-report data

Delay discounting (often referred to as hyperbolic discounting in the financial literature) is defined by a consistent preference for smaller, immediate rewards over larger, delayed rewards, and by the failure of future consequences to curtail current consummatory behaviors. Previous research demonstrates (1) excessive delay discounting among individuals with attention-deficit/hyperactivity disorder (ADHD), (2) common neural substrates of delay discounting and hyperactive-impulsive symptoms of ADHD, and (3) associations between delay discounting and both debt burden and high interest rate borrowing. This study extends prior research by examining associations between ADHD symptoms, delay discounting, and an array of previously unevaluated financial outcomes among 544 individuals (mean age 35 years). Controlling for age, income, sex, education, and substance use, ADHD symptoms were associated with delay discounting, late credit card payments, credit card balances, use of pawn services, personal debt, and employment histories (less time spent at more jobs). Consistent with neural models of reward processing and associative learning, more of these relations were attributable to hyperactive-impulsive symptoms than inattentive symptoms. Implications for financial decision-making and directions for future research are discussed.

Introduction

By definition, attention-deficit/hyperactivity disorder (ADHD) is an impairing psychiatric condition. [1] Those with the hyperactive/impulsive and combined presentations in particular experience strong preferences for immediate over delayed rewards, [2,3] difficulty inhibiting intemperate behaviors, [4] and prospective vulnerability to increasingly intractable comportment outcomes across development. [5] They also experience academic underachievement and grade retention compared with healthy, age-matched peers. [6] Such outcomes evoke social rejection and stigmatization, resulting in considerable intrapersonal distress. [7,8]

[…] pawn shop use, and payday lending. [28] Pawn shops provide short-term loans that are typically repaid within 30 days and require collateral. A lendee might bring in jewelry, which the pawn shop keeps in exchange for cash of significantly less value. The lendee either pays the loan back at a high interest rate (up to 240% APR in some U.S. states) or the pawn shop sells the collateral for profit. In contrast, payday loans are a high-interest-rate form of short-term lending that usually does not require collateral. Loans are typically due within 30 days, when the lendee can choose to repay in full or carry the loan forward with an interest-only payment. Annual percentage rates reach up to 700% in some U.S. states. Interestingly, when payday borrowers are prompted to think about rather than discount future interest rates, their use of payday loans declines. [29] This suggests that payday borrowers ignore future consequences and focus on current transactions, consistent with delay discounting/hyperbolic discounting interpretations.
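For readers unfamiliar with the hyperbolic discounting model invoked here, a common one-parameter form is V = A / (1 + kD), where A is the reward amount, D the delay, and k an individual discount rate. The Python sketch below is illustrative only; the k values are assumptions, not estimates from this study.

```python
# Hyperbolic discount function V = A / (1 + k*D); k values are illustrative.

def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# A steep discounter (large k) values $240 in a year at less than $120 today,
# and so would take the smaller immediate reward:
for k in (0.001, 0.01):  # per-day discount rates (assumed)
    print(f"k={k}: ${discounted_value(240, 365, k):.2f}")  # ~$175.82 and ~$51.61
```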
We evaluate correspondences between ADHD symptoms, measures of delay discounting, and several financial outcomes, including credit card use (carrying balances, late payments, interest rates), amount of savings, and use of extremely high interest rate borrowing (payday loans, pawn shops). Well-replicated findings of preference for immediate over long-term rewards suggest that hyperactive-impulsive symptoms of ADHD should predict present bias, more credit card use on less favorable terms, less savings, and more use of extremely high interest lending. As described below, we controlled statistically for a number of possible confounds, including age, education level, income, sex, and substance use. The latter exerts strong effects on financial outcomes and is associated with both delay discounting and ADHD (see above).

Method

Participants

Procedures were reviewed and approved by the University of Florida Institutional Review Board. Informed consent was obtained online before participants answered survey questions. All 544 participants (mean age 35.3 years, 46% women) were recruited through Amazon's Mechanical Turk (MTurk), an online labor market comprising over 100,000 workers in over 100 countries. MTurk is commonly used by social scientists, who post surveys and experiments for workers to choose from. Participation was restricted to respondents located in the U.S. (91.2% native English speakers). Consistent with standard recommendations, [30] we also restricted participation to those whose prior session completion rates were 97% or higher. The average completion time was 7.69 min (SD = 2.44; range = 2.82-90.48; median = 7.25; excluding the extreme 5% had no effect on the results). Consistent with prevailing MTurk compensation norms, we paid respondents $0.50 for participating. Data quality from MTurk compares favorably with American college samples. [31] Further details are reported elsewhere. [30,32]

Online questionnaire

Demographics. The following demographic information was collected: (1) age (years); (2) sex; (3) English as native language (yes, no); (4) level of education (middle school through doctoral); (5) annual income (<$10,000 to >$120,000); (6) currently employed (yes, no); (7) longest time with a single employer (years); and (8) number of different employers in the past five years.

ADHD symptoms. Respondents were asked to endorse or reject, as self-descriptors, each of the 18 symptoms of ADHD in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition. [1] These include 9 symptoms of hyperactivity/impulsivity and 9 symptoms of inattention. Symptoms were summed to form hyperactivity/impulsivity, inattention, and combined scores. This approach yields internal consistencies, as assessed by coefficient α, in the .9 range. [33]

Delay discounting/Present bias. Present bias was evaluated by asking participants which option they preferred given the following choices: (1) $120 in 1 week vs. $120 in 1 year; (2) $120 in 1 week vs. $137 in 1 year; (3) $120 in 1 week vs. $154 in 1 year; (4) $120 in 1 week vs. $171 in 1 year; (5) $120 in 1 week vs. $189 in 1 year; (6) $120 in 1 week vs. $206 in 1 year; (7) $120 in 1 week vs. $223 in 1 year; and (8) $120 in 1 week vs. $240 in 1 year. Present bias scores (1-8) were assigned based on the highest of these options on which respondents chose the smaller present reward rather than the larger future reward. Similar measures are common in the literature. [34]
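The scoring rule just described can be made concrete with a short sketch. The response pattern below is hypothetical; the eight future amounts are taken from the item list above.

```python
# Present-bias scoring: the score is the highest item (1-8) on which the
# respondent still chose the smaller, immediate $120 reward.

FUTURE_AMOUNTS = [120, 137, 154, 171, 189, 206, 223, 240]  # vs. $120 in 1 week

def present_bias_score(chose_immediate: list) -> int:
    """chose_immediate[i] is True if $120-now was chosen on item i+1."""
    score = 0
    for i, immediate in enumerate(chose_immediate):
        if immediate:
            score = i + 1
    return score

# Hypothetical respondent who switches to the delayed reward once it reaches $189:
responses = [True, True, True, True, False, False, False, False]
print(present_bias_score(responses))  # 4
```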
Self-control. Convergent validity of associations between impulsivity and financial outcomes was evaluated using the Self Control Scale (SCS). [35] Internal consistencies of the SCS also approach .9. High SCS scores are associated with positive psychological adjustment in a number of domains, whereas low scores are associated with impulse control problems, debt burden, and compulsive buying behaviors. [36] Including the SCS enabled us to evaluate financial outcomes across a broader range of individual differences in self-control than symptoms of ADHD capture.

Substance use. Substance use was evaluated using the National Institute on Drug Abuse Quick Screen. [37] The measure yields excellent sensitivity (100%) and good specificity (73.5%) in clinical settings. Respondents were asked about frequency of alcohol, tobacco, non-medical prescription drug, and illegal drug use. Ratings were rendered on 5-point scales (1 = never to 5 = daily), which were summed across substances to derive a total score.

Financial outcomes. Financial outcomes were assessed with a series of questions, including: (1) do you have a credit card (yes, no); (2) if so, how often are you late on credit card payments (never, yearly, every couple of months, monthly); (3) do you carry a credit card balance (yes, no); (4) if so, at what interest rate (don't know, 5-10%, 10-15%, >15%); (5) what fraction of your income do you save (0%, <10%, 10-30%, over 30%); (6) have you used payday loans in the past five years (never, once or twice, several times, almost monthly); (7) if so, what is the average loan amount (<$300, $300-$500, $500-$1000, >$1000); (8) have you used pawn shop services in the past five years (never, once or twice, several times, almost monthly); (9) if so, what is the total value of items you've pawned in the past five years (<$1000, $1000-$5000, $5000-$10,000, >$10,000); and (10) what is your approximate current debt in dollars?

Results

Demographic characteristics and variable correlations are presented in Table 1. Although specifics can be gleaned therein, a few findings warrant elaboration. First, ADHD scores were associated negatively with age, years with current employer, and self-control, yet positively with number of jobs held in the past five years, credit card balances carried, credit card late payments, use of pawn services, substance use, and present bias. All of these findings are consistent with previous research and/or a priori expectations, as outlined above. In addition, present bias was associated with credit card late payments, credit card interest rates, payday loan amounts, lower incomes, and less education. Finally, most financial outcomes were correlated with one another, and (inversely) with self-control.

We executed a series of multiple linear regressions (MLRs) to assess relations between ADHD scores and employment histories, financial outcomes, use of extremely high interest rate borrowing, self-control, and present bias, controlling for age, income, sex, education, and substance use. Language was not covaried, given that the vast majority of participants were native English speakers. In MLR, coefficients for each variable reflect their significance when entered into the regression equation last. They therefore assess the independent effect of each variable, over and above all others in the equation. Thus, there was no need for stepwise entry, which is appropriate for variable sets. [38] In the first set of MLRs, ADHD total scores were evaluated. Results are presented in Table 2.
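Before turning to those results, the regression setup described above can be sketched with statsmodels. The data frame here is synthetic and the column names are hypothetical stand-ins for the survey items; the point is only to show a covariate-adjusted MLR of the kind reported in Table 2.

```python
# Covariate-adjusted multiple linear regression, as described above.
# Synthetic data; column names are hypothetical stand-ins for the survey items.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 544  # sample size of the study
df = pd.DataFrame({
    "late_payments": rng.integers(0, 4, n),    # 0=never ... 3=monthly
    "adhd_total":    rng.integers(0, 19, n),   # 0-18 endorsed symptoms
    "age":           rng.integers(18, 70, n),
    "income":        rng.integers(1, 12, n),   # ordinal income bracket
    "sex":           rng.integers(0, 2, n),
    "education":     rng.integers(1, 8, n),
    "substance_use": rng.integers(4, 21, n),   # summed NIDA Quick Screen ratings
})

# Each coefficient reflects a variable's effect over and above all others:
model = smf.ols(
    "late_payments ~ adhd_total + age + income + sex + education + substance_use",
    data=df,
).fit()
print(model.summary())
```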
ADHD symptoms were associated with late credit card payments, carrying credit card balances, number of jobs held, use of pawn services, debt, poor self-control, and present bias, over and above the effects of age, income, sex, education, and substance use. Of note, even though substance use was associated with almost all of the financial variables assessed, ADHD scores provided independent prediction of most of these outcomes. Next, we re-ran the MLRs with hyperactive/impulsive and inattentive symptoms entered separately. Results are presented in Table 3. As outlined above, this analysis was important given strong evidence that (1) the inattentive presentation is distinct etiologically from the hyperactive-impulsive and combined presentations, [39][40][41][42][43] and (2) only the hyperactive/impulsive presentation is characterized by frontostriatal neural dysfunction, which underlies delay discounting and risky financial decision making (see extended discussion above). As expected, 5 of 8 significant effects linking ADHD to outcome variables were attributable independently to hyperactive/impulsive symptoms, whereas only 1 of 8 was attributable independently to inattentive symptoms. In fact, the only variable that was associated independently with inattention was self-control, which accounted for variance in both hyperactive/impulsive symptoms (b = -.406) and inattentive symptoms (b = -.131).

Discussion

We evaluated correspondences between self-reported ADHD symptoms, delay/hyperbolic discounting, and financial outcomes among a large sample of adults, assessing both hyperactive-impulsive and inattentive symptoms, present bias, savings, debt, use of very high interest rate borrowing, and late payments, while controlling for a host of extraneous influences. Debt burden, late credit card payments, and high interest rate borrowing (e.g., payday loans, pawnshop use) were associated with one another. Many of these outcomes were also associated with present bias and hyperactive-impulsive symptoms of ADHD, although effect sizes were generally modest. Nevertheless, the findings are likely meaningful given that (1) they were significant over and above the effects of substance use, income, and education, all of which are associated strongly with both ADHD and delay discounting; and (2) they were more specific to hyperactive-impulsive symptoms, consistent with hypotheses derived from neural accounts of both ADHD and delay discounting.

In contrast to the specific associations between financial outcomes and hyperactive-impulsive symptoms, present bias was associated with total ADHD scores, but not independently with either inattentive or hyperactive-impulsive symptoms. This remained the case when covariates were removed from the model. With only hyperactive-impulsive and inattentive symptoms entered as predictors of present bias, the regression equation was significant, F(2,543) = 3.27, p < .04, even though neither hyperactivity-impulsivity nor inattention provided independent prediction (both bs ≤ .08, both ps ≥ .12). Thus, although relations between financial outcomes and ADHD were specific to hyperactive-impulsive symptoms, relations between present bias and ADHD were not. In contrast, the strong relation between ADHD symptoms and self-control scores (r = -.53, p < .001) derived from both hyperactive-impulsive and inattentive symptoms (both bs ≤ -.13, both ps ≤ .001). Collectively, the differential associations among variables indicate that our measures of ADHD symptoms, present bias, and self-control were not redundant.
The effect size we observed for the relation between present bias and ADHD symptoms was small, η = .11. This effect size is about half that reported in a recent meta-analysis of 21 studies (N = 3913) of delay discounting in ADHD. [44] There are two likely explanations for this discrepancy. First, all of the studies in the meta-analysis were case-control designs, in which those with formal diagnoses of ADHD were compared with controls. Second, although delay/hyperbolic discounting is assessed in many studies the same way it was measured here, computerized tasks are also common, and these may yield larger effect sizes because participants usually respond across greater numbers of trials. For example, in one recent study, participants were presented with 91 discounted choices, [21] as opposed to 8 in our study; their effect size was three times what we observed.

The biggest limitations of this study stem from convenience sampling using very brief measures of ADHD, present bias, and financial outcomes. All the data we collected were self-report, and therefore may suffer from systematic response biases and halo effects. Anonymity softens such effects, and many participants reported significant substance use, which suggests a reasonable level of candor. Nevertheless, future research should include more detailed assessments of ADHD symptoms, more extensive indices of delay/hyperbolic discounting, and more objective measures of financial outcomes. Indeed, only 248 of the 409 participants who carried credit card balances were able to report their interest rates. Given issues with inattention among those with ADHD, such lack of awareness of financial details is more likely, which may have resulted in under-reporting and smaller effect sizes.

These limitations notwithstanding, our findings suggest that relations between hyperactivity-impulsivity and important financial outcomes generalize to broader samples. We view our results as an important step toward improving our understanding of the sources of delay discounting and their implications for financial wellbeing. Our findings connect hyperactive-impulsive ADHD symptoms, which mark pursuit of short-term rewards and associated delay discounting, to real financial decisions and outcomes. Furthermore, our results demonstrate that hyperactivity-impulsivity, not inattention, is the main driver of these effects.

Author Contributions

Conceptualization: TPB IBD AS.
Sustainable enhancement of biogas and methane yield of macroalgae biomass using different pretreatment techniques: A mini-review

Macroalgae can be grown without the use of fertilizer, fresh water, or arable land. These qualities support their use for biofuel production, freeing up land for food crops and other traditional energy sources. Macroalgae have been investigated as a biogas feedstock to substitute for fossil fuel burning and its attendant effects on the ecosystem. The microstructural arrangement of macroalgae biomass, however, restricts its conversion to biogas. Therefore, pretreatment before anaerobic digestion is needed to enhance its availability to microbial degradation and thereby increase biogas yield. Pretreatment for substrate catalysis is vital to recovering eco-friendly and economical energy from macroalgae. This study summarizes the state of the art of the various pretreatment methods employed to enhance the anaerobic digestion of macroalgae biomass. These methods are categorized as thermal, biological, chemical, nanoparticle additive, mechanical, and combined, and the merits and challenges associated with each are considered. The study shows that all the pretreatment methods considered can improve biogas yield if the appropriate method is selected for the type of macroalgae species. Pilot-scale studies that would assist in assessing their feasibility for full-scale implementation are still missing.

Introduction

The global increase in population and the industrial revolution have led to a rise in energy demand. The major source of this energy is fossil fuels, which supply about 80% of total consumption. Due to this complete reliance on fossil fuels, their sources are being depleted and greenhouse gas emissions have increased, contributing to global warming. 1 Environmental pollution has also been identified as a major challenge resulting from the excessive burning of fossil fuels during energy production. Because of these challenges, the United Nations (UN) formulated Sustainable Development Goal 7 (SDG7), which focuses on reliable, affordable, sustainable, and modern energy for all by 2030, from sources such as bioenergy, wind, solar, and geothermal. 2 National governments of the countries under the UN have keyed into this goal and are looking for low-carbon energy sources to replace fossil fuels in their energy mix. Bioenergy production from biomass has been considered a promising means of producing renewable and sustainable energy because biomass is available, does not compete with food sources, and offers a means of waste management. 3 The amount of carbon dioxide present in biogas is almost the same as the quantity of carbon utilized by the plant during photosynthesis, which is one of the merits that categorize energy from biomass as carbon neutral. 4 Biogas is a type of biofuel composed primarily of methane, with traces of hydrogen sulfide, hydrogen, ammonia, water vapor, and other gases; it comprises 30-40% carbon dioxide and 60-70% methane. 5 Biogas can be generated from different organic wastes, such as agricultural residues, municipal solid waste, wastewater sludge, and algal biomass, through anaerobic digestion (AD), a biological and chemical process that occurs without oxygen, as well as through gasification and hydrothermal liquefaction.
6 In recent times, attention has shifted to the potential of algal biomass for biofuels and high-value chemical production. The third-generation feedstock for biorefineries, which can be transformed into various biobased products such as biofuels, includes the plant-like organisms known as algae. 7 Biogas, biodiesel, bioethanol, biohydrogen, bio-oil, and biobutanol are some of the biorefinery products of algae. 8 Algae can be classified into two major types by cellularity: microalgae and macroalgae. Microalgae are the unicellular algal class and can live individually or in colonies. Multicellular algal species are referred to as macroalgae and are commonly called seaweeds due to their ability to grow profusely. 9 Macroalgae are eukaryotic organisms that can be categorized into three taxonomic groups based on the composition of their photosynthetic pigments: green (Chlorophyta), red (Rhodophyta), and brown (Phaeophyta) algae. 10 The storage components of macroalgae (lipids and carbohydrates) and their low lignin content make macroalgae a promising substrate for biogas generation. 9 Several studies have investigated macroalgae as a biogas feedstock and adjudged it an appropriate substrate. The advantages of macroalgae as a biogas feedstock reported in previous studies include: (i) their photosynthetic capacity is high compared with terrestrial feedstocks, indicating high biogas production; (ii) they can be grown in brackish, saline, or waste water and need less water for development than terrestrial feedstocks; (iii) they can be grown on non-arable soils and do not put pressure on the food supply; and (iv) they can lower carbon dioxide emissions because they can be cultivated in environments with high carbon dioxide. 11,12 Despite these advantages, commercial biogas production from macroalgae is not common. New approaches to cultivation, harvesting, and downstream processing are needed to encourage biogas production from macroalgae. Macroalgae are among the most affordable and available substrates for biogas generation, and the process supports efficient waste management while serving as a principal source of bioenergy. 13 Despite their potential as a biogas feedstock, macroalgae consist of complex structural arrangements that resist AD, a characteristic referred to as recalcitrance. 14 As lignocellulosic materials, macroalgae contain cellulose, hemicellulose, and lignin that are interwoven, preventing effective breakdown. 15 Therefore, techniques are needed to reduce macroalgal resistance and improve the accessibility of macroalgae during AD. The cellulose conversion process requires pretreatment, which is essential to disassemble the feedstock's rigid structure and make cellulose more accessible to the enzymes that produce fermentable sugars from carbohydrate polymers.
16 Pretreatment alters the structural arrangement of macroalgae at all levels, improving morphological characteristics and the hydrolysis stage. Nevertheless, there is currently no clear favorite among these technologies in terms of cost-effectiveness and efficiency, and the considerations for ideal macroalgae pretreatment are rarely highlighted. Such findings are essential for the effective and profitable use of the various macroalgae that are easily accessible and inexpensive. This review discusses the foremost biogas production pathways using macroalgae and highlights the challenges associated with the digestion rate. The utilization of various pretreatment techniques to improve the biogas yield of macroalgae is discussed, including their respective merits and demerits. Lastly, recent developments in pretreatment methods and their application to macroalgae are reviewed, and recommendations that could make biogas production from macroalgae acceptable at the industrial scale are presented. Literature was sourced from several databases (Springer Nature, ScienceDirect, PubMed, and Scopus) and other free repositories (Google Scholar) using "macroalgae", "anaerobic digestion", "pretreatment methods", "biogas", and "methane" as keywords.

Characteristics of macroalgae

Macroalgae are categorized into three taxonomic groups based on their photosynthetic pigments: red (Rhodophyta), green (Chlorophyta), and brown (Phaeophyta) algae. 10 These types of macroalgae have distinctly different structural and biochemical compositions. 17 Red algae (phylum Rhodophyta) are a group of red-colored algae with sulfated polysaccharides in the cell wall, phycobilin pigments, floridean starch, unstacked thylakoids, and chloroplasts without endoplasmic reticulum, but no flagella. 18 The red color of these macroalgae is due to phycoerythrin and phycobilin proteins, which mask the other photosynthetic pigments and allow red algae to absorb green and blue wavelengths. Most red macroalgae are found in marine environments and are therefore usually referred to as red seaweeds; only a small percentage (about 3%) occur in freshwater. Due to their multicellular thalli, red algae have relatively more complex structural arrangements than other macroalgae, consisting of cellulose fibrils embedded in a gelatinous matrix of galactans, carrageenans, and agar. Thicker cell walls with extra layers of calcium carbonate have been noted on the outside of some members of the Corallinaceae family. 19 This type of cell wall hinders the availability of organic contents during the biodigestion of red macroalgae. Hypnea valentiae, an example of a red macroalga, contains 11.8-13% carbohydrates, 9.6-11.6% lipids, and 11.8-12.6% proteins. 20 Palmaria palmata consists of 39.4% carbohydrates, 3.3% lipids, and 22.9% protein, 21 while Acanthophora spicifera, another red macroalga, consists of 11.6-13.2% carbohydrates, 10-12% lipids, and 12-13.2% protein. 22 It can be observed that the biochemical characteristics of these representative red algae vary with species, indicating that the effect of pretreatment techniques on red algae may not be universal.

Green algae are a heterogeneous assembly of organisms from two different lineages (Streptophyta and Chlorophyta), currently comprising 12 classes. They are filamentous with plant-like habits, usually found in freshwater and terrestrial environments, and play crucial ecological roles.
23 Green macroalgae have strong cell walls built from cellulose and other polysaccharides that resist the microorganisms of anaerobic digestion. Both the single-celled, multinucleated seaweed Caulerpa sp., which can grow to around 3 m long, and solitary green algal cells, which are typically larger than those of cyanobacteria and can have diameters of around 1 μm, are found in the ocean. 24 A typical example of a green alga is Ulva reticulata, which consists of 33.3% carbohydrates, 2.5% lipids, and 6.9% protein. 22 In contrast, Codium decorticatum has a different biochemical composition of 50.6% carbohydrates, 9% lipids, and 6.1% protein, owing to species differences. 25 In another study, Halimeda macroloba, another species of green algae, was observed to contain 32.6% carbohydrates, 9.9% lipids, and 5.4% proteins. 26 This variation in the biochemical makeup of different green macroalgae species will significantly influence the pretreatment process and the subsequent biogas yield. Brown algae, or Phaeophyceae, are differentiated by chloroplasts with four surrounding membranes, thylakoids in stacks of three, fucoxanthin masking chlorophyll-a and -c, alginates as the wall matrix component, and laminarin as the photosynthetic reserve. 27 They are mainly thalloid and filamentous algae, almost exclusively marine, with very few found in freshwater, and they attach to substrata such as rock. Brown algae have no planktonic species. 23 The biochemical contents of brown algae differ with species. For instance, Saccharina japonica contains 51% carbohydrates, 1% lipids, and 8% proteins, 28 whereas Laminaria digitata comprises 46.6% carbohydrates, 1% lipids, and 12.9% proteins. 21 The biochemical composition of Stoechospermum marginatum clearly differs from the others, with 33.6% carbohydrates, 10.9% lipids, and 3.9% proteins. 25 This variation in biochemical characteristics indicates that pretreatment techniques will influence the microstructural arrangement and biogas yield of different brown algae differently.

Anaerobic digestion of macroalgae

Anaerobic digestion is a biological and chemical process that degrades organic carbon into organic acids and, ultimately, biogas, a renewable energy carrier. The process consists of four stages, namely hydrolysis, acidogenesis, acetogenesis, and methanogenesis, as illustrated in Figure 1. 29 During hydrolysis, large organic polymers such as fats, carbohydrates, and proteins disintegrate into smaller molecules such as fatty acids, sugars, and amino acids. This stage is considered rate-limiting because it dictates the feedstock's biodegradation rate. It is a vital stage in which complex organic substrates are hydrolyzed into digestible molecules by the catalytic activities of anaerobic microbes.
30 During the acidogenesis stage, the products of hydrolysis are broken down further: in the acidic environment of the digester, anaerobic bacteria produce CO2, NH3, H2S, H2, short-chain volatile fatty acids (VFAs), organic acids, and other by-products in trace amounts. In the third stage of biodigestion, acetogenesis, the acidogenesis products are catabolized into H2, CO2, and acetic acid, reducing the substrate to the level utilized for methane release by the methanogenic bacteria. The methanogenesis stage is the last phase of digestion, where the acetogenesis products are converted into biomethane by the methanogenic bacteria. 31 At this stage, two different pathways can produce methane. In the hydrogenotrophic route, the carbon dioxide formed in the earlier stages is converted to methane and water (equation 1), while the primary mechanism of methane generation is the acetoclastic route, which involves acetic acid (equation 2):

CO2 + 4H2 → CH4 + 2H2O (1)

CH3COOH → CH4 + CO2 (2)

These reactions yield methane and carbon dioxide, the principal constituents of biogas. 29 Carbon dioxide released through the anaerobic digestion process can be converted to CaCO3 using carbonic anhydrase bacteria, 32-34 and the synthesized CaCO3 can be used as filler material, in the paper and steel industries, and as construction material. 35

The biogas production capacity of macroalgae was first investigated in the 1970s, when different studies revealed their biogas potential. 36,37 Macroalgae have been observed to be a suitable substrate for biogas production due to their very low or absent lignin content, low lipid and high carbohydrate contents, and high carbon-to-nitrogen ratio (above 30, depending on the harvesting period). 9,38 Different means of biogas production from macroalgae have been tested, including (i) gasification, (ii) hydrothermal liquefaction, and (iii) anaerobic digestion. 9 The biogas production potential of different macroalgae has been examined using anaerobic digestion, and they have been found to have a superior prospect for biogas and methane yield. The green macroalga Ulva sp. was digested at mesophilic temperature (35 °C) in a batch digester, and a methane release of 132 ± 4 mL CH4/g volatile solids (VS) was recorded. 38 In another study, beach-cast seaweed was digested in a lab-scale batch reactor under mesophilic conditions (37 ± 2 °C), and a methane yield of 106 ± 1 mL/g VS added was recorded after 35 days of retention time. 39 The red macroalga Palmaria palmata was investigated for its biogas production potential in a batch digester under mesophilic conditions, yielding 308 ± 9 mL g−1 VS. 40 Different species of macroalgae were studied for their biogas production potential by Tedesco et al., including Pelvetia canaliculata, Fucus serratus, Gracilaria gracilis, Fucus vesiculosus Linnaeus, and Laminaria digitata. 41 The experiment was run for 21 days under mesophilic conditions (37 °C). Biogas yields of 159.3 ± 24, 64.2 ± 21.1, 81.8 ± 32.5, 71.5 ± 4.9, and 103.3 ± 19.8 mL/g VS were recorded for Pelvetia canaliculata, Fucus serratus, Gracilaria gracilis, Fucus vesiculosus Linnaeus, and Laminaria digitata, respectively.
The biogas production capacity of macroalgae was first investigated in the 1970s, when different studies revealed their biogas potential. 36,37 Macroalgae have been observed to be a suitable substrate for biogas production owing to their very low (or absent) lignin content, low lipid content, high carbohydrate content, and high carbon-to-nitrogen ratio (above 30, depending on the harvesting period). 9,38 Different means of biogas production from macroalgae have been tested, including (i) gasification, (ii) hydrothermal liquefaction, and (iii) anaerobic digestion. 9 The biogas production potential of different macroalgae has been examined using the anaerobic digestion process, and they were reported to hold superior prospects for biogas and methane yield. The green macroalga Ulva sp. was digested anaerobically at mesophilic temperature (35 °C) in a batch digester, and a methane release of 132 ± 4 mL CH4/g volatile solids (VS) was recorded. 38 In another study, beach-cast seaweed was digested in a lab-scale batch reactor under mesophilic conditions (37 ± 2 °C), and a methane yield of 106 ± 1 mL/g VS added was recorded after a 35-day retention time. 39 The red macroalga Palmaria palmata was investigated for its biogas production potential in a batch digester under mesophilic conditions, and 308 ± 9 mL/g VS was obtained. 40 Different species of macroalgae were studied for their biogas production potential by Tedesco et al., namely Pelvetia canaliculata, Fucus serratus, Gracilaria gracilis, Fucus vesiculosus Linnaeus, and Laminaria digitata. 41 The experiment ran for 21 days under mesophilic conditions (37 °C), and biogas yields of 159.3 ± 24, 64.2 ± 21.1, 81.8 ± 32.5, 71.5 ± 4.9, and 103.3 ± 19.8 mL/g VS were recorded for the five species, respectively. 41 It can be noticed from these biogas-potential studies that there is clear variation in the biogas and methane yield potential of different macroalgae species. This high variation can be linked to the cell wall characteristics and macromolecular composition of the different species; the effect of macromolecular content on anaerobic digestion capacity depends on the relative abundance of the various organic constituents in the macroalgal cells. The organic compound composition can be used to determine the theoretical methane capacity of the feedstock stoichiometrically (a worked example is sketched at the end of this subsection). Ulva sp. was reported to have a high theoretical methane potential, with total carbohydrates of 33.2 ± 0.8% of total solids (%TS, wet weight), followed by proteins at 11.4 ± 0.5 %TS and lipids at 1.8 ± 0.05 %TS. 38 Reports have shown that inducing the accumulation of a specific macromolecule in macroalgae can increase the methane yield. 42 The methane potential of macroalgae has been observed to lie in the range of 204 to 380 mL/g VS; Table 1 presents the specific methane potential of different macroalgae as reported in the literature. Despite producing higher methane yields than animal waste and sewage sludge (247-293 mL/g VS), lignocellulosic feedstock (101-258 mL/g VS), sugar crops (241 mL/g VS), and rice straw (281 mL/g VS), macroalgae release less than 50% of their theoretical methane capacity. 43 The low biodigestibility of macroalgae can be traced to polysaccharides that are not easily digested, high sulfur content, polyphenols, salinity, and a low carbon-to-nitrogen ratio. 11,44 Regarding cell wall properties, macroalgae consist of organic compounds with low digestibility and/or bioavailability, such as hemicellulose and cellulose. The hard cell walls inhibit methane release, since the organic matter needed for methane production is embedded in the cytoplasm and is not readily available to the anaerobic microorganisms. Various factors, such as lignin percentage, surface area, degree of polymerization, crystallinity, and solubility, determine the digestion efficiency of macroalgae. The primary objective of pretreatment is to degrade the lignin-polysaccharide bonds and open the feedstock to bacterial activity. Generally, pretreatment aims to ease microbial access, avoid degradation or loss of carbohydrates, reduce the generation of inhibitory compounds, lower the environmental impact, and keep the process economical. 45 Different pretreatment methods have been tested to degrade the macroalgal cell wall and make the hemicellulose and cellulose accessible. 15,38,39
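As a concrete illustration of the stoichiometric estimate mentioned above, the following minimal Python sketch computes a theoretical methane potential from a measured macromolecular composition. The specific yields used (roughly 415, 496, and 1014 mL CH4/g for carbohydrates, proteins, and lipids) are commonly quoted literature values taken here as assumptions, not figures from the studies reviewed, and the function name is our own.

```python
# Minimal sketch: theoretical methane potential from macromolecular composition.
# The specific yields (mL CH4 per g of each macromolecule) are commonly quoted
# literature values used as assumptions, not figures from the reviewed studies.
SPECIFIC_YIELD_ML_PER_G = {
    "carbohydrate": 415.0,
    "protein": 496.0,
    "lipid": 1014.0,
}

def theoretical_methane(composition_pct_ts):
    """Theoretical methane potential (mL CH4 per g TS) from a composition
    given as a dict of macromolecule -> percent of total solids."""
    return sum(
        SPECIFIC_YIELD_ML_PER_G[macro] * pct / 100.0
        for macro, pct in composition_pct_ts.items()
    )

# Ulva sp. composition quoted above: 33.2 %TS carbohydrates,
# 11.4 %TS proteins, 1.8 %TS lipids.
ulva = {"carbohydrate": 33.2, "protein": 11.4, "lipid": 1.8}
print(f"Theoretical CH4 for Ulva sp.: {theoretical_methane(ulva):.0f} mL/g TS")
# Prints ~213 mL/g TS; the 132 mL CH4/g VS measured for Ulva sp. above
# illustrates how far real digestions fall below the theoretical ceiling
# (note the TS vs VS basis difference when comparing directly).
```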
Macroalgae pretreatment

Feedstock pretreatment before anaerobic digestion has been categorized into mechanical/physical, biological, chemical, thermal, thermochemical, and combined pretreatment methods. 45,56,57 Despite the successes recorded in feedstock pretreatment, some demerits have been observed: the high energy inputs, particularly for thermal and mechanical methods; the release of inhibitory materials and the corrosion and environmental challenges associated with chemical pretreatment; the release of inhibitory compounds that lower the digestion rate; and the exorbitant cost of enzymes for biological pretreatment. 45

Biological pretreatment methods

Biological pretreatment techniques are an alternative reported to be environmentally friendly and to need little or no energy. 58 This approach employs microorganisms, enzymes, and consortia to improve macroalgae biodigestion, thereby increasing biogas and methane yields: microorganisms are introduced to the feedstock to degrade the cell wall. Compared with other pretreatment methods, biological pretreatment is environmentally benign, requires little or no energy, and does not produce inhibitory compounds. Different hemicellulolytic and cellulolytic microorganisms can be employed for this process. 59 Cell wall compounds of macroalgae, such as hemicellulose and cellulose, are converted by hydrolytic enzymes into compounds of lower molecular weight, and these molecules are more readily accessible to methanogenic bacteria. Enzyme dose, treatment time, and temperature influence biological pretreatment; during the process, temperature and pH are set to the optimum conditions for the particular enzyme in use. However, the variation in macroalgal cell wall composition and arrangement, enzyme-substrate specificity, and the economics of enzyme production are among the main limitations that require special attention before this method can be fully accepted in the biogas industry. Fungal pretreatment employs various fungi, such as soft-rot, white-rot, and brown-rot fungi, to degrade biogas feedstock; the process needs minimal energy and chemicals, with little release of inhibitory compounds. White-rot fungi can release enzymes with high hydrolytic capacity to degrade feedstock cell walls, including laccase, lignin peroxidase, and manganese peroxidase. 60 During biomethane optimization, the green macroalga Ulva sp. was pretreated with Aspergillus fumigatus SL1 at a concentration of 7 mL/100 g TS for 2-8 days at a temperature of 50 °C, with an initial pH of 5 and a moisture ratio of 1:3. The pretreated Ulva sp. was then digested by solid-state fermentation in a batch digester at mesophilic temperature (35 °C) for 8 days. An optimum methane yield of 153 ± 3 mL CH4/g VS with an anaerobic degradation rate of 57% was recorded, and methane release improved by 15.91% compared with the control experiment. 38 Mexican Caribbean macroalgae consortia were pretreated with the fungal strain Bm-2 (Trametes hirsuta) before anaerobic digestion: the consortium was inoculated in 5 mL of a mycelial suspension of T. hirsuta for 6 days at 35 °C, shaken at 150 rpm on an orbital shaker. The pretreated substrate was digested under mesophilic conditions (38 °C) for 29 days in a batch digester agitated once daily, and a methane yield of 104 L CH4/kg VS was observed, representing a 20% improvement over the untreated substrate. 61 One of the quickest biological pretreatments is the use of enzymes, which can act within a very short period since enzymes are much smaller than microbes; they possess good solubility, mobility, and interaction with feedstocks. Enzyme preparations containing exoglucanase, β-glucosidase, and endoglucanase can be used for biological pretreatment. 62 Acceptable as enzyme pretreatment is, it often requires other steps, such as sterilization, and the process is not always economical because of the high cost of enzymes. 63
As a form of biological pretreatment, 1 mL of Viscamyl™ Flow cellulase enzyme was added to 1 L of Sargassum fulvellum macroalgae at pH 4.5 for 24 hours, and the substrate was digested over a 25-day retention time in a batch digester at 38 ± 0.5 °C. The results showed that enzymatic pretreatment of the macroalgae lowered the methane released by 9.49% compared with the control experiment. 13 Seaweed was pretreated biologically using L. digitata cellulase, and the biogas yield decreased by 1%; but when 2.5% citric acid was added to the cellulase, a 6% increase in biogas yield was observed. 64 This indicates that the combined pretreatment of cellulase and citric acid improved the biogas released compared with cellulase alone, possibly because of a chain effect whereby the hydrolysis of one compound improves the bioaccessibility of another, which may then be further hydrolyzed. Some bacteria with high hydrolytic capacity have been examined for the biological pretreatment of biogas feedstock and observed to enhance the biogas yield: organisms such as Pseudomonas, Salmonella, and Escherichia coli have been tested as pretreatment agents for different biogas feedstocks and adjudged suitable for biological pretreatment. 65 Another biological pretreatment method is microaerobic treatment. Recent research has observed that applying a small quantity of oxygen (or air) during pretreatment or digestion can improve biogas and methane yields. Under microaerobic conditions, hydrolytic populations of the class Clostridia, phylum Firmicutes, and order Clostridiales, associated with the hydrolysis stage of anaerobic digestion, develop. The process can double the oxytolerant (acid-resistant) bacteria and Methanobacterium (archaea that release methane as a metabolic by-product), and the biogas enhancement can be linked to these changes in the microbial community under microaeration. 66 The amount of oxygen introduced to the anaerobic digestion process is vital, because inappropriate oxygen addition disturbs the activity of methanogenic bacteria and reduces the methane yield. 67 During biological pretreatment, exogenous microbes can also be added to the microbial community, a method called bioaugmentation, which introduces a specific microorganism into the anaerobic digester to improve a particular stage of digestion. 68 The process can be used to improve reactor start-up, accelerate the anaerobic digestion process, or enhance the digestion capacity of a consortium. One of its major merits is that it does not need other pretreatment methods, thereby simplifying the technique and leaving room to develop other cost-effective methods. 62 Despite the potential of bioaugmentation to improve accessibility to enzymatic and microbial attack, its capacity has not been fully established: the digestible portions of the carbohydrate are largely covered by lignin at the initial stage, lowering their accessibility to enzymatic and microbial attack. Ensiling pretreatment is a biological technique in which wet feedstock is stored before digestion. 69
During this process, soluble carbohydrates are transformed into butyric, acetic, lactic, and propionic acids through the activity of microorganisms. The pH falls below 4, which inhibits the growth of undesirable microorganisms while the conversion is favored. 70 It has been observed that ensiling can enhance methane release if appropriate conditions are selected. 69 One major advantage of the technique is that feedstock is available throughout the year, without waiting for the harvest season; the major challenge is that about 40% of the methane potential can be lost if the silage is not handled properly. Palmaria palmata, screw-pressed and, for comparison, wilted and chopped, was ensiled with and without Safesil silage additive for 90 days. The effluent volumes produced during ensiling were 26-49% of fresh weight, containing 16-34% of the silage dry matter. 14 The influence of ensiling pretreatment was also investigated on Sargassum muticum before anaerobic digestion, and it was discovered that the pretreatment had no significant effect on the biogas released. 71 It can be deduced from these results that biological pretreatment has differing impacts on the biogas and methane released from macroalgae, depending mainly on the cell wall, temperature, pH, time, and enzyme dose. Our search results show that investigations into the biological pretreatment of macroalgae are still limited, implying a need for more research in this field to harness the strength of macroalgae as a biogas substrate using this environmentally benign pretreatment method, producing green, sustainable energy and helping to decarbonize the world. Table 2 summarizes the impacts of biological pretreatments on macroalgae biogas and methane yields; our findings show that studies on the biological pretreatment of macroalgae are limited. It can be observed from Table 2 that enzymatic pretreatment of L. digitata with alginate lyase at 37 °C for 24 hours reduced the methane released by 13%. 72 This can be linked to the ability of the pretreatment to break down the recalcitrant (alginate-rich) fraction of the feedstock and make it easily accessible to the digesting microorganisms: the hydrolysis stage improved so much that it overloaded the digester, increasing the VFAs and altering the pH of the process. Alteration of the pH beyond the acceptable range (6-8) was harmful to the activity of the methanogenic bacteria and reduced the gas yield. 55,73
It was discovered that the majority of algal biological pretreatments have focused on microalgae; the few works of literature accessed were mainly on the application of fungi and enzymes. The use of bacteria, ensiling, microaerobic treatment, and bioaugmentation is still missing from the literature. Some of these methods have been reported to be effective when tested on other biogas feedstocks, and they also need to be investigated on macroalgal biomass. Among the methods that have been tested, coverage is still limited to very few macroalgae species, with no information on the influence of biological pretreatment on a good number of species. Information on the biogas yield of biologically pretreated macroalgae when digesters other than the batch type, and conditions other than mesophilic, are used is also still limited. Research has shown that digester type and process conditions determine the efficiency of pretreatment methods; there is therefore a need to consider digester types and conditions before concluding which combination of pretreatment, digester type, and process condition is most suitable. Most of the biological pretreatment methods investigated so far remain at the laboratory scale, which does not yet fulfill the intended purpose of the research; further study at the industrial scale is required. A recent study by Caxiano et al. 7 analyzed the scale impacts of creating a Sargassum muticum seaweed biorefinery using a thorough process-modeling approach. It was observed that electricity production from the biogas yield was not economically attractive because of the low methane yield without pretreatment, but using the digestate from the process as organic fertilizer was more economical in the alternative scenario. 7 The subsequent anaerobic digestion might achieve noticeably higher biogas and methane yields if the biological pretreatment were chosen appropriately for a given feedstock and its application optimized. Because the processes are still maturing and not yet cost-effective, full-scale biological pretreatment of macroalgae is not yet commonly used, but owing to its intrinsic benefits it remains a promising technique for increasing the biogas and methane yield of macroalgae. More work in this area could result in biological pretreatment that is effective, affordable, and safe for the environment.

Chemical pretreatment

The chemical pretreatment method is popular in biogas production because of its effectiveness and its ability to enhance the biodegradability of complex substrates. 75 Different chemicals can be used for the chemical pretreatment of lignocellulosic feedstock, and the efficiency of the process depends on the microstructural arrangement of the substrate. 76 Alkali and acid pretreatments are the primary chemical techniques that have been studied extensively for macroalgae; in some cases, chemical pretreatment is combined with heat for better efficiency. Alkali and acid reagents are mostly used to solubilize polymers and enhance the accessibility of organic compounds to methanogenic activity. 13
The small quantity of chemicals remaining in the pretreated substrate may help control the pH reduction during the acidogenesis stage of anaerobic digestion. Nonetheless, it should be borne in mind that the solubilized compounds can produce by-products toxic to methanogenesis. Acid pretreatment is used chiefly for biogas feedstock, in either dilute or concentrated form, and has been tested at high temperatures and together with other pretreatment methods, such as steam explosion. 77 This method can solubilize lignin and hemicellulose when a strong acid is used, in which case acid recovery is needed; when dilute acid is used, lignin is redistributed rather than solubilized, and the pH must be neutralized before anaerobic digestion. 78 One major disadvantage of acid pretreatment is the release of inhibitory materials such as aldehydes, phenolic acids, furfural, and 5-hydroxymethylfurfural, which lower the biogas and methane produced. 45 Because of the toxic and corrosive nature of some of the acids, digesters with high resistance to these properties are required for the anaerobic digestion of acid-pretreated feedstock. Lignin removal by alkaline pretreatment has been very effective, although the percentage of cellulose remains high. 76 This method enhances the available surface area of the feedstock through fiber swelling and lower crystallinity; it also degrades the bonds between lignin and carbohydrate and disrupts the lignin structure. 79 The influence of acid pretreatment using dilute H2SO4 was tested on Nizimuddinia zanardini macroalgae before anaerobic digestion: the substrate was pretreated with 7.0% w/w H2SO4 for 30, 45, and 60 minutes of exposure, at solid loadings of 5 and 10% w/v, at a temperature of 121 °C. After anaerobic fermentation, the biogas yield improved from 170 to 200 m3 per ton of dried Nizimuddinia zanardini, an increase of about 17.65%. 80 Macrocystis pyrifera was pretreated with hydrochloric acid (HCl) and methanol to enhance the biogas yield: the feedstock was treated at 50 °C for 30 minutes using 0.1 mol/L HCl and 1% formaldehyde, respectively, and an enhanced hydrolysis rate was observed. 81 When 6% oxalic acid was used to pretreat seaweed before anaerobic digestion, the biogas released improved by 90.36%. 82 Strong alkali pretreatment can produce phenolic compounds capable of inhibiting the anaerobic digestion process. 83 The alkali pretreatment technique is cost-effective but requires considerable water to eliminate salt from the feedstock, making the downstream process uneconomical. The application of oxidizing agents such as H2O2, ozone, FeCl3, or air to disintegrate the solid structure of the feedstock and enhance biodigestion is a chemical technique called oxidative pretreatment; the process aims to partially degrade hemicellulose and delignify the feedstock. 84 In this process, the feedstock can be soaked in water in which an oxidizing agent such as peracetic acid or hydrogen peroxide has been dissolved. The method's effectiveness depends on the concentration of the oxidizing agent, the pretreatment duration, and the feedstock structure. 45
Ozonolysis pretreatment of biogas feedstock targets lignin reduction with ozone and mostly degrades lignin alone, with no major influence on hemicellulose and cellulose. Compared with other chemical pretreatment techniques, it can be performed at ambient pressure and temperature. The process is eco-friendly and does not release toxic compounds, nor does it interfere with downstream processes such as yeast fermentation. 85 Ozonation pretreatment of Ulva lactuca for biogas enhancement was carried out using a porous glass sparger at a flow rate of 8.3 mg O3/minute; a high dose of ozone for 15 and 30 minutes released optimum yields of 498.75 and 492 mL/g VS, respectively. 86 The use of sulfite to lower the recalcitrance of biogas feedstock, referred to as SPORL pretreatment, has been tested on some biogas feedstocks and reported to be effective. It can be carried out in two stages: in the first phase, calcium or magnesium sulfite is used to treat the feedstock and eliminate the hemicellulose and lignin fractions; the second stage requires a mechanical disk miller to reduce the particle size of the pretreated substrate. 87 The cellulose-to-glucose conversion rate is very high, with good lignin elimination and recovery, and by integrating the process into existing mills for biogas production it could handle a variety of feedstocks in high quantities for industrial output. 58 Organic solvent (organosolv) pretreatment is a chemical technique that uses organic solvents such as methanol, acetone, or glycols, alone or combined, with or without an inorganic catalyst, at high temperature. 88 Glycerol organosolv pretreatment of biomass was observed to deconstruct the feedstock selectively, effectively improving the hydrolysis stage; the process enriched the lignin with reactive β-O-4 linkages and aliphatic groups. 89 The nature of the feedstock and catalyst dictates the process temperature. This pretreatment method eliminates lignin and generates hemicellulose and cellulose syrups of C5 and C6 sugars. 90 During this pretreatment, the organic solvent alters the intramolecular bonds and aids enzyme accessibility during anaerobic digestion. The system requires recovery and reuse of the solvent to some extent, and this level of recovery and reuse determines the economics of the method. The degree of crystallinity, the degree of polymerization, the fiber length, and other properties of the pretreated feedstock are determined by the type of catalyst used, the treatment time, the solvent concentration, and the temperature. 91 Carbon dioxide explosion (supercritical fluid) pretreatment of biogas feedstock is a chemical process in which the feedstock is subjected to supercritical carbon dioxide, with the gas behaving like a solvent. Supercritical CO2 is released into a high-pressure vessel containing the feedstock, which is then heated to the required temperature and held there for 15-20 minutes. 92 The pressurized gas released onto the feedstock disintegrates its structure and enhances the accessible surface area. 93
This technique is most suitable for feedstock with high moisture content, since hydrolysis is optimal when the moisture content is high; this makes the process suitable for macroalgae. The method is promising and economical because only low temperatures are required, CO2 is cheap, and no inhibitory compounds are generated. Nevertheless, the cost of a pretreatment reactor that can cope with the required pressure is a major drawback. 94 Another important chemical pretreatment method is ammonia fiber explosion (AFEX), which uses liquid ammonia to pretreat biogas feedstock and is also called soaking in aqueous ammonia (SAA) or ammonia recycle percolation (ARP). Recent multidisciplinary research has identified ionic liquids, deep eutectic solvents (DES), and natural deep eutectic solvents (NADES) as potential candidates for biogas feedstock pretreatment. 45 Ionic liquids are composed primarily of ions (cations and anions) and have low melting temperatures (less than 100 °C), low vapor pressures, high polarities, and high thermal stabilities. 95 These solvents compete with the feedstock's hydrogen bonds and disintegrate its network; if an appropriate ionic solvent is selected and applied to the feedstock, up to 80% biodegradation can be achieved. 96 DES have properties similar to ionic liquids: they are fluids of about two or three harmless, cheap substances that can self-associate, primarily through hydrogen-bond interactions. DES and ionic liquids are similar in physical properties and behavior, but DES differ in that their constituents are not totally ionic and can be produced from non-ionic materials. 97 Some natural products, such as choline, urea, sugars, amino acids, and certain organic compounds, have been incorporated into DES and ionic liquids; solvents that can be produced naturally in this way are called NADES. Compared with ionic liquids, these solvents are easy to synthesize, biodegradable, economical, non-toxic, and harmless salts that can be liberated from biomass. 56 Adding 3-hydroxy-2-naphthoic acid (3H2NA) during chemical pretreatment was reported to improve enzymatic digestibility by inhibiting repolymerization and adding carboxylic groups to lignin, thereby enhancing the efficiency of the digestion process. 98 It can be seen from the literature consulted that chemical pretreatment can enhance the biogas yield of macroalgae, and the result can be further improved through a thermochemical process. Thermochemical pretreatment of P. palmata was carried out using 0.04 g/g TS NaOH and 0.02 and 0.04 g/g total solids (g/g TS) HCl at 160 °C for 30 minutes before anaerobic digestion. No meaningful influence on the methane yield was noticed, 40 which contradicts what was reported when a similar process was tested on lignocellulosic feedstock. 76
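To make the dosing conventions in this subsection concrete, the sketch below converts the two kinds of dose specification quoted above, grams of reagent per gram of total solids (as in the P. palmata study) and percent w/w acid (as in the Nizimuddinia zanardini study, read here as acid mass per dry-solid mass, one possible interpretation), into reagent mass for a hypothetical 1 L batch. The function names, the interpretation of "% w/w", and the batch size are ours.

```python
# Minimal sketch of the reagent quantities implied by the two dose conventions
# quoted in this subsection. Interpretations, names, and batch size are ours.

def dose_per_g_ts(ts_g, dose_g_per_g_ts):
    """Reagent mass (g) when the dose is given as g reagent per g total solids."""
    return ts_g * dose_g_per_g_ts

def dose_w_w(ts_g, pct_w_w):
    """Reagent mass (g) reading '% w/w' as grams of reagent per 100 g of dry
    solids -- one possible reading; some papers mean % of total slurry mass."""
    return ts_g * pct_w_w / 100.0

ts = 100.0  # hypothetical 1 L batch at 10% w/v solids -> 100 g TS
print(dose_per_g_ts(ts, 0.04))  # 0.04 g NaOH/g TS (P. palmata study) -> 4.0 g
print(dose_w_w(ts, 7.0))        # 7.0% w/w H2SO4 (N. zanardini study) -> 7.0 g
```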
In summary, reports on chemical pretreatment are limited and often contradictory. Table 3 presents the effects of some chemical pretreatments on macroalgae biogas yield. It can be inferred from Table 3 that chemical pretreatment of macroalgae has been limited mostly to alkali and acid. Most of the chemical pretreatment methods that have been tested on lignocellulosic materials and waste-activated sludge and adjudged suitable are still missing from macroalgae pretreatment. Findings show that the most recent chemical pretreatment methods, such as ionic liquids, DES, and NADES, which are of natural origin, non-toxic, economical, easy to synthesize, and easily degradable, have not yet been investigated on macroalgae. Establishing the strength of these methods as potential pretreatment techniques for macroalgae would renew interest in adopting macroalgae as a biogas feedstock, since some of the required materials are natural and the process is non-toxic, with few toxic or artificial inputs. Our findings in Table 3 indicate that the anaerobic digestion processes were carried out in batch digesters and at mesophilic temperatures, which necessitates further research considering other types of digesters and temperatures. It can also be observed from the table that the same concentration of different chemicals on the same feedstock does not have the same influence on biogas yield: for instance, when 1% v/v (volume/volume) NaOH and HCl were tested on L. digitata, the biogas yield improved by 1237.5% and 2375%, respectively. 82 Varying percentages of the same chemical on the same feedstock also produced different improvements, as observed when 1 and 6% v/v NaOH were tested on L. digitata. 82 Evidently, each macroalga has its own optimum chemical pretreatment conditions for optimum biogas yield; more research into the chemical pretreatment of macroalgae is required to identify appropriate conditions for improved biogas production for the many macroalgae still missing from the literature.

Mechanical/physical pretreatment

Mechanical pretreatment of macroalgae is one of the most common methods; it breaks cells through physical force. The method increases the feedstock's available surface area, releases the complex sugars for enzymatic hydrolysis, and enhances the biogas yield. 15 It is less dependent on the macroalgae species but has a greater tendency to contaminate lipid products than the chemical pretreatment method. 102 Its main demerit is the high energy consumption, which may make the process uneconomical. 103 Mechanical pretreatment employs different means of breaking the recalcitrant structure of macroalgae to improve the biogas and methane yield. Milling or grinding can be used to reduce the crystallinity of the macroalgae: a feedstock size of 10-30 mm can be attained through chipping, while milling and grinding can produce smaller particle sizes of around 0.2 mm. 104 Extrusion is another mechanical means of feedstock pretreatment, in which the feedstock is subjected to shear force, heat, and compression, resulting in physical destruction and chemical alteration. 45
Before anaerobic digestion, Fucus vesiculosus and Fucus serratus macroalgae were pretreated by particle size reduction. The cumulative methane released after a 20-day retention time was 122 mL/g VS added, a significant improvement (p = 0.042 at the 95% level) over the untreated macroalgae. 105 Laminaria spp. macroalgae were pretreated by beating to a minimum gap of 76 µm with a milling time of 10 minutes, and the total methane released was enhanced from 328 ± 9 to 335 NmL/g VS added, a 2.13% improvement. 106 Milling of Laminaria spp. to 1 and 2 mm particle sizes gave cumulative methane yields of 241 ± 4 and 260 ± 15 mL/g VS, respectively; compared with the control experiment, the methane yield was reduced by 26.52% and 20.73% for the 1 and 2 mm particle sizes. 106 This shows that particle sizes of 1 and 2 mm reduce the cumulative methane yield, which can be traced to the over-production of VFAs from the smaller particles: an unbalanced VFA pool alters the pH of the process, and a pH below or above 6-7 negatively affects the activity of the methanogenic bacteria and, in turn, the methane yield. 107 The ultrasound pretreatment method is a mechanical technique that alters the microbial cell arrangement and opens the cellular material to methanogenic activity. It consists of rapid compression and decompression cycles of sonic waves; continuous cycling produces cavitation, generating liquid-vapor regions within the cell known as microbubbles. 108 Ultrasound pretreatment encourages macroalgal cell wall breakdown and organic matter solubilization; nevertheless, the improvement depends on the macroalgae species and treatment conditions, and the effectiveness relies mainly on the energy level, frequency, and exposure time. 109 Ultrasound pretreatment of Fucus vesiculosus and Fucus serratus before anaerobic digestion significantly improved the methane yield: the feedstock was pretreated in an ultrasonicator at 110 V with exposure times of 10, 15, and 20 minutes, and the optimum methane yield was a 67% improvement over particle size reduction. 105 The pretreatment method that applies pressure relatively uniformly over all areas of the substrate to alter its microstructure is referred to as high hydrostatic pressure (HHP). 110 The application of rays generated from radioisotopes (cobalt-60 or cesium-137) to disintegrate the recalcitrant structure of feedstock for biogas enhancement is called gamma-ray irradiation; ionizing radiation can quickly penetrate the feedstock, modify the microstructural arrangement, and dislocate the crystalline regions of the cellulose. 111 Another ray-based means of mechanical pretreatment is microwave irradiation: short waves of electromagnetic energy with frequencies ranging from 300 MHz to 300 GHz and wavelengths between 1 mm and 1 m. 112 This method lyses cell walls and cellulosic crystallinity and enhances the accessible surface area. 113
Microwave heating results from the rapidly oscillating electric field acting on a dielectric or polar feedstock, which produces heat through the frictional forces of molecular movement. The increase in kinetic energy produces boiling water; the quantum energy supplied by microwave irradiation cannot break chemical bonds, but hydrogen bonds can be disrupted. In this process, dielectric polarization and induction heating alter the secondary and tertiary structure of proteins. 114 As with ultrasound pretreatment, the major controllable parameters governing the effectiveness of microwave pretreatment are treatment time and output power. Microwave torrefaction of biomass has been simulated to determine a reactor design that allows feedstock to be heated and processed evenly. The simulated temperature profile produced three different heating rates before 300 °C was reached: 78.3 °C/minute (50-120 °C), 30.6 °C/minute (121-250 °C), and 105 °C/minute (250-300 °C). This has contributed to the study of enhanced microwave heating in feedstock torrefaction. 115 Microwave pretreatment of Fucus vesiculosus and Fucus serratus during anaerobic digestion was observed to enhance the methane yield from 5.93 to 30.49 mL/g VS added. 105 In a related study, Laminaria spp. macroalgae were pretreated in a 560-W microwave until the liquid phase was attained and then allowed to boil for 30 seconds. The microwave-pretreated feedstock produced a better start-up yield than the untreated feedstock, but the cumulative methane yields were 244 ± 11 versus 328 ± 9 NmL/g VS added. 106 This implies that the pretreatment method reduced the cumulative methane yield by 25.61%, necessitating further research to identify suitable conditions for the microwave pretreatment of Laminaria spp. The results also indicate that, in contrast to the beating approach, which caused the pH of the process to rise during anaerobic digestion, the microwave pretreatment had no discernible impact on the pH before and after digestion (7.48 ± 0.02). 106 To disintegrate the rigid structure of feedstock, accelerated beams of electrons have been used to dislodge lignin, cellulose, and hemicellulose, a method referred to as electron beam (EB) irradiation. Radicals created during this process move around freely, breaking cross-link bonds or causing chain scission, decrystallization, and/or a lower degree of polymerization. 116 The main merit of irradiation is its ability to penetrate the feedstock easily, while its major demerit is the high electricity consumption, which may make the process uneconomical. The use of the pulsed electric field (PEF) as a pretreatment technique in anaerobic digestion has been studied on some biogas feedstocks. The method exposes the cellulose in the substrate by expanding the spaces in the cell membrane, improving its availability to the microorganisms that convert it to its constituent sugars. 60 During PEF pretreatment, the substrate is exposed to abrupt high-voltage pulses of around 5-20 kV/cm for a very short duration (nanoseconds to milliseconds). One merit of this technique is that it needs little energy because of the short pulse duration (about 100 µs) and can be applied at ambient temperature; in the same vein, PEF equipment is easy to design, since it has no moving parts. 117
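Since the comparisons in this section are expressed as percentage changes relative to a control, the small helper below reproduces the figures quoted above. The function is our own; the yields are those reported in the cited Laminaria spp. studies.

```python
# Minimal helper for the percentage changes quoted throughout this section.
# The function is ours; the yields below are those reported in the cited studies.

def pct_change(control, treated):
    """Percent change of `treated` relative to `control` (negative = reduction)."""
    return 100.0 * (treated - control) / control

print(f"{pct_change(328, 335):+.2f}%")  # Laminaria spp., beating:      +2.13%
print(f"{pct_change(328, 241):+.2f}%")  # Laminaria spp., 1 mm milling: -26.52%
print(f"{pct_change(328, 244):+.2f}%")  # Laminaria spp., microwave:    -25.61%
```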
The impacts of mechanical pretreatment methods on some macroalgae are presented in Table 4. It can be noticed from the table that no single particle size is universal across the macroalgae species considered. For instance, when a <1 mm particle size was tested on L. digitata and S. latissima, the biogas yield improved by 19.74% and 22.82%, respectively, 82 but when the same <1 mm particle size was tested on Laminaria spp., the biogas released was reduced by 26.5%. 106 It can also be seen from the table that when the same Laminaria spp. was pretreated by beating for 15 minutes at 40 °C, the biogas yield increased by 47% under the same anaerobic conditions. 41 Table 4 shows that some pretreatments of macroalgae reduced the gas yield compared with the untreated feedstock, indicating that an inappropriate selection of pretreatment methods or conditions can harm the process. It can be inferred that particle size reduction beyond a certain limit accelerates the hydrolysis stage beyond what the digester can use for biogas production; this hyper-hydrolysis causes over-accumulation of VFAs and changes the pH of the process. 118 Over-accumulation of VFAs causes digester imbalance and pH alteration, significantly influencing the activity of methane-releasing bacteria and lowering the biogas yield. 119 Various mechanical pretreatments have varying influences on the same macroalgal biomass, and the effect of a given mechanical pretreatment on macroalgal biomass is governed by the microstructural arrangement of the substrate. The methods already investigated require further research in which the same feedstock is subjected to all of the mechanical pretreatments mentioned here, to identify the one with the optimal biogas and methane yield. Most of the methods investigated so far used batch digesters at mesophilic temperature; further investigations should examine other digester types and conditions (thermophilic and psychrophilic) to establish the most suitable digester and process temperature. Our findings reveal that some of the more recent mechanical pretreatment methods, such as HHP, EB irradiation, pulsed electric field, and high-pressure homogenization, have not been investigated on macroalgal biomass; these need to be researched to ascertain their ability to improve the biogas and methane yield of macroalgae, which remain underutilized because of their recalcitrant properties.

Thermal pretreatment

Thermal pretreatment techniques subject macroalgae to high temperatures to break down their recalcitrant properties. They have been applied to improve the degradation of particulate organic matter at temperatures of 50-270 °C. At high temperature, lignin and hemicellulose begin to solubilize, and the branching groups of the feedstock reveal its structural composition. The optimum pretreatment temperature and time depend mainly on the microstructural arrangement of the feedstock. 16 For instance, sewage sludge pretreatment above 180 °C was reported to generate recalcitrant compounds that reduce the feedstock's anaerobic digestibility. 125 Lignocellulosic feedstocks were observed to begin to solubilize at temperatures above 150-180 °C, and temperatures higher than 250 °C should be avoided when considering the thermal pretreatment of macroalgal biomass. 126
Thermal pretreatment applies heat to the feedstock by various means and is categorized into different types depending on the mode of heat application. The process in which biogas feedstock is subjected to compressed hot water at 170-230 °C and around 5 MPa, similar to steam pretreatment, is called liquid hot water pretreatment. It hydrolyzes hemicellulose and eliminates lignin, thus exposing the cellulose to enzymatic activity, and the process temperature can be controlled to minimize the formation of inhibitory compounds that hinder the biogas yield. 127 The steam explosion pretreatment method is a thermal treatment in which the feedstock is treated with steam at a particular temperature and pressure. In this method, the pretreatment pressure rises with increasing temperature, particularly above 160 °C; the pressure can be released rapidly or gradually, and this change back to atmospheric conditions alters the structural arrangement of the feedstock. The method has been adjudged an economical pretreatment technique for feedstock degradation, although once a certain percentage of the xylan is degraded, the likelihood of releasing inhibitory compounds is high. 128 To enhance the steam explosion technique, the addition of a certain volume of acid or alkali is advised. Recent developments in thermal pretreatment have tested hydrothermal pretreatment for biogas feedstock. The method effectively penetrates the feedstock, hydrates the cellulose, eliminates the hemicellulose, and partially removes the lignin. 129 It can eliminate the greater portion of the hemicellulose and a certain percentage of the lignin by degrading them into soluble fractions and breaking down the recalcitrant structure, and it needs neither added chemicals nor materials with high corrosion resistance. Pyrolysis pretreatment is a thermal breakdown process that, when heat is applied without oxygen, converts organic matter into carbon-rich biochar, condensable liquids such as bio-oil, and non-condensable volatiles such as gases. The process depends on parameters such as temperature, feedstock composition, exposure time, particle size, heating rate, pressure, and moisture content. 130 The main constituents of the feedstock do not degrade evenly during pyrolysis; the rate and extent of degradation depend on the process conditions. Hemicellulose is mostly the first constituent to degrade, next is cellulose, while lignin degrades at a higher temperature. Under this process, the glycosidic bonds that hold the glucose units of cellulose are easily broken at high temperatures, lowering the degree of polymerization of the feedstock. The major demerits of this pretreatment method are the release of furans and levoglucosan, which inhibit the activity of methanogenic bacteria during anaerobic digestion. 131 Findings show little or no literature on applying pyrolysis as a pretreatment method to macroalgae. The use of oxygen or air combined with water or hydrogen peroxide at temperatures above 120 °C for about 30 minutes is referred to as wet oxidation. 129
This technique has been tested on soil remediation and wastewater, and it was observed to be suitable for the pretreatment of lignin-rich feedstocks. 132 The efficiency of the process is influenced by the feedstock's structural arrangement, the reaction time, the oxygen pressure, and the temperature. The technique's main problem, and the reason it is not promoted at commercial scale, is that pure oxygen is highly flammable and hydrogen peroxide is extremely expensive. 58 The influence of thermal pretreatment methods on the biogas yield of macroalgae is shown in Table 5. Literature on the thermal pretreatment of macroalgae is scarce, and findings show more studies on the thermal pretreatment of microalgae than of macroalgae. The existing literature indicates that the influence of thermal pretreatment varies with the method and the specific feedstock; the three major factors that determine the effectiveness of thermal pretreatment are temperature, exposure time, and the microstructural arrangement of the biomass. The biogas yield of S. latissima was reported to be 268 mL/g VS added when steam explosion was applied for 10 minutes, whereas 260 mL/g VS added was observed when the same steam explosion was applied to the same feedstock at 160 °C for the same 10 minutes. 133 This shows that the lower temperature was more effective than the higher temperature in the steam explosion of this particular feedstock. On the contrary, Enteromorpha pretreated in an autoclave released the optimum biogas yield at the higher temperature: autoclave pretreatment for 30 minutes was investigated at 120 and 80 °C, and the biogas yields were 600 and 450 mL, respectively. 8 This indicates that each species of macroalgae has its own specific temperature and time, which are not universal; conventional heating was likewise noticed to influence different species of macroalgae differently, with no general optimum conditions. Applications of thermal pretreatment methods to many macroalgae are still missing from the literature, and further research on them is needed. The recent interest in biogas production from macroalgae calls for more research to establish a standard in which the optimal treatment conditions (temperature and time) of the various thermal pretreatment techniques are reported for individual macroalgae, making this information readily available to researchers and industries in this sector.

Nanoparticle pretreatment

Multidisciplinary studies in nanomaterials science and technology have established that nanoparticles can alter the structural arrangement of biogas feedstocks and enhance the methanogenic bacteria. 136 The catalytic activity of enzymes can be improved by using nanoscale materials that immobilize them, known as nanocatalysts. 137 Enzyme immobilization with cross-linking molecules produces a spacer that minimizes steric hindrance between the solid support and the enzyme, improving the flexibility of the immobilized enzymes. 138 It has been observed that some nanoparticles can adsorb onto and/or react with cell membranes and disintegrate them. These materials can enhance the efficiency of immobilized enzymes by creating sufficient surface area for enzyme attachment and improving the enzyme loading rate on the feedstock particles. 139
The application of nanobiocatalysts in biogas feedstock pretreatment is a promising means of feedstock hydrolysis that could transform the research area. It was observed that nanoparticles of ZnO origin can destroy bacterial cell membranes, 140 and another study reported that membrane breakdown or cell death resulted from the physical piercing of CeO2 nanoparticles and the oxidizing strength of dissolved Ce4+ on the outer membranes of microbes in the anaerobic digestion process. 141 Nanoparticles such as ZnO, CuO, CeO2, Fe3O4, Fe2O3, TiO2, and MgO have been tested as nanoadditives to enhance biogas and methane release during anaerobic digestion. 142,143 Strong nanoparticles of Ag origin have been noticed to disturb nitrifying bacteria, with up to an 86.3% inhibition rate, 144 and higher dosages of ZnO nanoparticles were noticed to hinder the hydrolysis, acidification, and methanogenesis stages of the anaerobic digestion process. Fe nanoparticles as nanoadditives have been observed to efficiently lower the amount of H2S in the gas yield, enhance the methane yield, and, in some situations, reduce the lag period. 136,142 The direct interspecies electron transfer capability of Fe2O3 has the potential to significantly enhance the methanogenesis step of the anaerobic digestion process. The majority of the reduced electron carriers are thought to be transformed into carbon dioxide, while the syntrophic route and methanogen electron exchange are thought to constitute the interspecies electron transfers. 143,145 In this process, accessible materials, whether naturally occurring or artificially produced, can be used for electron transfer. Mineral-type Fe2O3 nanoparticles can behave as electron conduits between electron acceptors and donors, hastening methane production from reduced electron carriers and carbon dioxide. 143 The nanoparticles in this category function similarly to enzymes in a series of biological catalytic processes. 146 Inappropriate use of some of these nanoparticles during anaerobic digestion has been observed to release strong inhibitory compounds and lower the biogas yield. 147 Some of these nanoparticles have been tested in the pretreatment of wastewater and waste-activated sludge, with little interest in lignocellulosic materials. The extensive use of nanoparticles in commercial and consumer goods has generated fears about their expected environmental influence; consequently, the impacts of different nanoparticle additives on the biogas and methane yield of macroalgae have not been studied intensively. When 5 mg/L of Fe3O4 nanoparticle additive was tested on Ulva intestinalis Linnaeus during anaerobic digestion at mesophilic temperature (37 °C) for 42 days, biogas production improved from 44.14 to 154 mL/g VS, a 248.89% increase. 99 In a related study, the biogas yield of Enteromorpha algal biomass was enhanced with a nanoparticle additive: 10 mg/L of Fe3O4 (<100 nm) was added during digestion at mesophilic temperature (37 °C) for 108 hours, and the biogas release increased from 212 to 289 mL, a 36.32% improvement between treated and untreated Enteromorpha. 148
It can be deduced that different species of macroalgae require varying quantities of nanoparticle additives for optimum biogas and methane yields. Our literature search shows very limited study of nanoparticles as a treatment method during the anaerobic digestion of macroalgal biomass: the method has been tested widely on anaerobic sludge, with little interest in macroalgae pretreatment. Reports show that different nanoparticle additives affect different feedstocks differently, 149 and the particle size and concentration of the nanoparticles have been observed to determine their effectiveness. There is therefore a need to encourage more studies applying nanoparticle additives in the anaerobic digestion of macroalgae, since they have been tested and adjudged suitable for other feedstocks.

Combined pretreatment

Combining two or more pretreatment techniques has been tested and adjudged to produce better results. The combination can span two categories or remain within one: for instance, alkaline and enzymatic pretreatment can be combined; likewise, particle size reduction and ultrasound, which belong to the same mechanical category, can be combined as a single pretreatment. 45 Combined pretreatment methods have been reported to produce better biogas and methane yields than single pretreatment methods. 55 Despite the ability of this technique to improve the efficiency of the anaerobic digestion process, the number of pretreatments used determines the cost of pretreatment, which might make the process uneconomical and unable to compete with fossil fuels. Combined pretreatment of Fucus vesiculosus and Fucus serratus macroalgae using ultrasound and microwave was reported to improve the methane yield from 122 to 260 mL/g VS added (a 113.11% increase), the highest yield compared with the results of the single pretreatment methods. 105 Thermochemical pretreatment of F. vesiculosus was tested using 0.2 mol/L HCl at 80 °C with 90 minutes of exposure and was reported to increase enzymatic hydrolysis and methane yield to 121 mL/g VS added after a 22-day retention period, a 39% improvement over the untreated feedstock. Replacing the HCl with less acidic flue gas condensate showed poorer effectiveness but still enhanced the methane yield by 24%. Conversely, acids at concentrations below 0.1 M were noticed to influence biodigestion negatively and lower the methane yield. 150 Table 6 presents some combined pretreatment methods that have been investigated on macroalgal biomass. Mechano-chemical and mechano-biological pretreatments were tested on Sargassum spp.: the mechano-chemical pretreatment enhanced the methane yield by 7.19%, but the mechano-biological treatment reduced the methane yield by 23%. 124 Mechanical pretreatment was combined with a nanoparticle additive during the pretreatment of Ulva intestinalis Linnaeus, and the methane yield was found to improve by 366.7%. 99
This indicates that not all combined pretreatments can improve the methane yield of all macroalgae; rather, specific species have combination conditions that favor biogas production. A combination of four pretreatment techniques was tested on Sargassum spp., and an improved methane yield was observed: the combined mechanical-thermal-chemical-biological pretreatment enhanced the methane yield by 72.91%, higher than the individual pretreatment methods. 124 Combined pretreatment shows bright promise and should be encouraged for further study as a candidate for improving the biogas yield of macroalgae. Nevertheless, it must be considered that the number of pretreatment methods applied determines the cost of the process, which can make it too costly to compete with fossil fuels; it is therefore essential to look for simple, affordable, and efficient approaches that are both sustainable and cost-effective.

Comparison of pretreatment methods

Feedstock pretreatment before anaerobic digestion has been observed to be effective in enhancing feedstock degradation and biogas yield when appropriately selected. 76,107,151 Nevertheless, the transfer of most of these methods from the laboratory to the commercial scale is restricted by various technical, environmental, and economic challenges, 43,152 and the efficiency of each pretreatment technique has been found to be influenced by the feedstock's microstructural configuration. Comparing pretreatment methods is very difficult because much of the literature reports on different macroalgae under different anaerobic digestion conditions; nevertheless, a few authors have considered different pretreatment methods on the same macroalgae species. For example, Nemr et al. investigated Fe3O4 nanoadditives at 5, 10, and 20 mg/L, microwave pretreatment for 2 and 4 minutes, ultrasonication for 10, 15, and 30 minutes, ozonation for 10, 15, and 30 minutes, and combinations of these on Ulva intestinalis Linnaeus. Considering the individual yields, ultrasound pretreatment for 10 minutes produced the optimum biogas yield of 179 mL/g VS, while the overall optimum yield of 206 mL/g VS was obtained when 5 mg/L Fe3O4 was combined with microwave pretreatment. 99 Compared with microwave pretreatment alone, which released 84 mL/g VS, adding Fe3O4 can cushion the effects of inhibitory compounds that might be released during the microwave pretreatment. Ramirez 82 compared mechanical and chemical pretreatment methods on L. digitata under the same anaerobic conditions and observed that the biogas yield varied with the pretreatment method: L. digitata milled to <1 mm before anaerobic digestion gave 273 ± 1.34 mL/g VS, while the same feedstock pretreated with 6% v/v NaOH produced 186 ± 0.56 mL/g VS. 82 This variation in biogas yield from the same feedstock digested under the same conditions can be traced to the toxic nature of NaOH and its tendency to produce inhibitory compounds during the pretreatment process. 45 Combined pretreatments of Sargassum spp., mechanical with chemical and mechanical with biological, have also been compared: particle size reduction to >1 mm combined with peroxide released 240.32 ± 3.04 L CH4/kg VS, while particle size reduction to >1 mm combined with T. hirsuta produced 172.57 ± 0.56 L CH4/kg VS.
When these results were compared with the untreated feedstock, it was observed that mechanical pretreatment with peroxide improved the methane yield by 7.19%, whereas the combination of mechanical and biological pretreatment reduced the methane yield by 23.03%. 124 This difference in influence can be linked to the ability of T. hirsuta to solubilize the feedstock beyond what is useful, thereby losing to the pretreatment process some of the organic matter meant for methane production. The impacts of thermal, chemical, and thermochemical pretreatment methods were tested on P. palmata, and it was observed that thermal pretreatment at temperatures of 20, 70, 85, and 12 °C did not significantly affect the methane yield. The addition of 0.04 g NaOH/g TS at temperatures of 20, 70, and 85 °C improved the methane yield by 11-13%, but when NaOH and HCl were added at 160 °C, the methane yield was reduced by 8.44% and 12.99%, respectively. 40 Thermal pretreatment of P. palmata between 180 and 200 °C was noticed to lower the methane yield, which can be traced to the formation of inhibitory compounds in the liquid fraction. It can be observed that the thermochemical pretreatments performed better than the thermal pretreatment method on the same feedstock, although the cost of thermochemical pretreatment would presumably be higher. The influence of four different pretreatment methods, mechanical (particle size reduction), microwave (600 W, 2 minutes), ultrasound (110 V, 15 minutes), and combined microwave and ultrasound, was investigated on the methane yield of Fucus vesiculosus. The cumulative methane yields after a 20-day retention time were 905, 2598, 2644, and 2920 mL for the mechanical, microwave, ultrasound, and combined pretreatment methods, respectively. 105 All the pretreatment techniques considered had varying degrees of impact on the reported methane yield, which can be traced to each method's ability to break down the cell wall for easy access by microorganisms and to the level of inhibitory compounds released during pretreatment. Apart from the combined pretreatment, ultrasound can be seen to perform best, followed by microwave and mechanical pretreatment, although all the treatment methods enhanced the methane yield compared with the control experiment. Before the best pretreatment among the techniques investigated can be identified for this feedstock, however, the net energy balance of each pretreatment method must be determined (a minimal energy-balance sketch is given at the end of this section). In general, comparisons of pretreatment techniques are more accurate when the same macroalgae and the same anaerobic digestion process are used, since pretreatment influences are species-specific and not easy to extrapolate. Furthermore, microstructural analysis of pretreated feedstock using microscopy would help establish the impact of individual pretreatment methods on the cell structure. Microalgae are another algal group that can be used for biogas production. They are unicellular organisms with complicated, strong cell walls consisting mainly of cellulose, hemicellulose/xylan, and chitin arranged in several layers. These compounds have low biodegradability, whereas the intracellular content consists principally of lipids (20-30%) and proteins (50-60%). 153,154
Microalgae are another algal group that can be used for biogas production. They are unicellular organisms with complicated and strong cell walls consisting mainly of cellulose, hemicellulose/xylan, and chitin arranged in several layers. These compounds have low biodegradability, whereas the intracellular content consists principally of lipids (20-30%) and proteins (50-60%).153,154 Macroalgae, conversely, comprise complex structures like those of terrestrial plants, in which hemicellulose and cellulose form a crystalline structure that does not degrade easily. In microalgae, pretreatment aims to break down the cell walls to make the feedstock more accessible for enzymatic hydrolysis, whereas in macroalgae it aims to increase the specific surface area and disrupt the crystalline structure.155 Compared with non-pretreated feedstock, pretreatment procedures have been shown to increase process effectiveness for both micro- and macroalgae while lowering the overall cost.156 Because of the variation in the structural configuration of micro- and macroalgae biomass, no pretreatment technique has a universal influence. Much of the available literature on ultrasound and microwave pretreatment concerns microalgae, while comparable data for macroalgae remain scarce. Breaking down the cell wall is certainly easier in simpler feedstocks composed of unicellular microorganisms like microalgae, but it is difficult in macroalgae.155 Mechanical pretreatment is the primary technique for macroalgae, since producing smaller particle sizes is the best means of enhancing digestibility in the case of lignocellulosic feedstocks, whereas for microalgae thermal pretreatment produced the optimum methane yield with lower energy input.157,158

Limitations to macroalgae pretreatment methods

The effectiveness of macroalgae pretreatment techniques depends on the microstructural arrangement of the macroalgae, and it is difficult to ascertain which particular technique(s) will give the highest yields. Biological pretreatment of macroalgae releases the fewest inhibitory materials, and its restrictive effects on the subsequent phases of anaerobic digestion are generally very low compared with chemical and physicochemical methods. The method has several advantages but also demerits such as extended treatment time, the specific conditions required for microorganism growth, loss of carbohydrates, and the larger space required.159 The high price of enzymes/fungi/bacteria is a major obstacle to the economic sustainability of this process in biogas generation.160 The main challenges of chemical pretreatment are the high price of chemicals, additional steps such as neutralization, and the requirement for corrosion-resistant digesters.159 The release of inhibitory compounds that hinder methane production by reducing conversion effectiveness during the hydrolysis stage of macroalgae is a further challenge that requires study. To improve the efficiency of chemical pretreatment of macroalgae, the level of inhibitory compounds released should be lowered by using lower chemical concentrations and by combining chemical pretreatment with other methods to reduce the cost of the process. Alkali pretreatment can reduce lignin content effectively, and the small quantity of alkali reagent left on the feedstock after washing with water assists pH neutralization during the acidogenesis stage of anaerobic digestion. This makes alkali pretreatment more productive for subsequent anaerobic digestion than acid pretreatment techniques.161
Alkali pretreatment is usually considered economically unattractive but may be utilized for lignin-rich biomass that otherwise cannot be degraded. Concentrated acid is highly effective in cellulose hydrolysis; nevertheless, it requires high energy input and cost, and special equipment is required for digester construction because of the concentrated acid's toxicity. Dilute acid is more economical for lignocellulosic feedstock pretreatment because it can hydrolyze up to 100% of the hemicellulose to its component sugars.162 With organic solvent pretreatment of macroalgae, every constituent of the macroalgae can be recovered, but the process releases a large amount of downstream residue and requires specialized equipment, which are among its challenges.163 Ionic liquids are effective over a wider range of uses under extreme conditions compared with organic solvents; nonetheless, the high price of the ionic liquids and the need for recycling are shortcomings of the process.164 Reports on chemical pretreatment of macroalgae are limited and often contradictory, but combining chemical and thermal pretreatment has been observed to improve biomass digestibility. To date, there is no energy assessment report on biological or chemical pretreatment of macroalgae for biogas production. For these pretreatment techniques, the enzyme or chemical dose, treatment time, and temperature must be considered when determining the pretreatment expenses; the cost of the enzyme or chemical is the primary factor determining the process's feasibility. Generally, these pretreatment techniques require little energy, but they usually produce only modest improvements in biogas yield.

Particle size reduction can be assumed to be applicable to most macroalgae pretreatment. Despite several reports that mechanical pretreatment significantly enhances methane yield, one of its major limitations is its inability to degrade the cell wall of macroalgae, a principal hindrance to carbohydrate accessibility for anaerobic microorganisms. Particle size reduction of macroalgae feedstock to around 1-2 mm is judged sufficient to eliminate restrictions at the hydrolysis stage. Still, the process is costly, and about 33% of the energy needed for the overall process is expended on size reduction.165 Sustaining mechanical pretreatment is expensive due to the high energy demand, and another disadvantage is the high cost of machine maintenance: the machinery is susceptible to inert materials such as metal pieces or stones that can easily damage the facility. To improve the efficiency of mechanical pretreatment of macroalgae, means of reducing the energy required for milling and grinding need to be devised. During thermal pretreatment of macroalgae, chemicals are not usually needed, so reagent costs do not apply. The process does, however, require high temperatures and can release inhibitory compounds such as phenolic acids, 5-hydroxymethylfurfural, and furfural, lowering the methane yield. The technique is most attractive where excess heat from a nearby power plant or factory can be used to reduce the energy cost of heating.166 Because of the high energy requirement, the energy balance of thermal pretreatment needs to be assessed to ascertain its efficiency; a minimal sketch of such a balance follows.
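As a rough illustration of the kind of net energy balance called for here, the sketch below compares the energy in the additional methane attributable to pretreatment with the energy invested in the pretreatment itself. All input figures are hypothetical; 35.8 MJ/m³ is a standard lower heating value for methane.

# Minimal net-energy-balance sketch for a pretreatment step (illustrative only).
LHV_CH4 = 35.8  # MJ per m^3 methane (lower heating value)

def net_energy_MJ_per_tVS(extra_yield_L_per_kgVS: float,
                          pretreat_energy_MJ_per_tVS: float,
                          conversion_eff: float = 0.9) -> float:
    # L CH4/kg VS is numerically equal to m^3 CH4/t VS.
    energy_out = extra_yield_L_per_kgVS * LHV_CH4 * conversion_eff
    return energy_out - pretreat_energy_MJ_per_tVS

# Example: a thermal step adding 40 L CH4/kg VS at a heat demand of
# 1200 MJ/t VS (both figures assumed).
print(f"net gain: {net_energy_MJ_per_tVS(40.0, 1200.0):.0f} MJ/t VS")

A positive result means the extra methane more than pays for the pretreatment energy; a negative result flags the situation described above, where small yield improvements cannot justify the energy input.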
Many nanoparticles have been observed to enhance the biogas released. One of the major challenges with this technique is the production cost of nanocatalysts compared with conventional biocatalysts. The process can also require photo-digestion reactors containing visible-light photoactive metal oxides to increase the volume of hydrogen produced and improve the methane yield, which is a major obstacle to nanoparticle additives.167 Some of the merits and challenges of pretreatment techniques are presented in Table 7.

[Table 7, fragments recovered from the text flow (columns: S/N, Pretreatment technique, Merits, Challenges). Unattributed merits: (xvi) very effective and selective for high-lignin feedstock; (xvii) effective in lignin removal together with some percentage of hemicellulose; (xviii) uses a compatible, biodegradable green solvent; (xix) eliminates lignin and lowers the cellulose portion; (xx) sugars are not degraded because the solvent employed is green; (xxi) appropriate for mobile macroalgae pretreatment; (vi) most suitable for substrates with a higher percentage of lignin. Unattributed challenges: (vi) higher energy needed; (vii) minor effect on cellulose; (viii) high tendency to release inhibitory compounds at elevated temperature; (ix) not recommended for industrial-scale application owing to the explosive properties of oxygen and the cost of hydrogen peroxide. Nanoparticles, merits: (i) adequate surface-area-to-volume ratio; (ii) great selectivity, specificity, and catalytic activity; (iii) environmentally benign process; (iv) low possibility of releasing inhibitory compounds; challenges: (i) high investment cost; (ii) poor stability and reusability; (iii) toxicity of some nanoparticles; (iv) some nanoparticles have characteristics harmful to anaerobic digestion bacteria. Combined, merit: (i) more effective than single application of the constituent methods; challenges: (i) not economical in most cases; (ii) mostly a complicated process.]

Discussion

Using macroalgae as a biogas feedstock during desalination could significantly lower greenhouse gas release, and anaerobic digestion of macroalgae as a renewable energy source could be a wise choice for nations with insufficient freshwater resources. Additionally, the predicted decrease in fossil fuel usage through the anaerobic digestion of macroalgae would significantly reduce the environmental impact of fossil fuels. However, certain difficulties with the anaerobic digestion of macroalgae still result in poor biogas output and quality. Despite the advantages macroalgae have over other biofuel feedstocks, significant obstacles must be solved before industrial-scale production is reached.168 Because of the large concentration of high-molecular-weight organic compounds and the relative stiffness of the cell walls, which impedes the hydrolysis process, macroalgae anaerobic digestion experiences a significant bottleneck.169 This cell wall can be broken down with pretreatment techniques, making the feedstock accessible to the microorganisms, reducing the retention time, and enhancing biogas production. According to the literature accessed, different biological, chemical, mechanical/physical, thermal, nanoparticle-additive, and combined pretreatment techniques have lately been tested to solve the problem of poor biodegradability of macroalgae during anaerobic digestion. Each of these methods has its merits and demerits, and pretreatment methods must be matched appropriately with macroalgae feedstocks to fulfil the aim of pretreatment. When the appropriate methods are selected based on the structural arrangement, the process will improve macroalgae's biogas and methane yield. An important factor to be considered in macroalgae pretreatment is the energy required for the process; in some cases, the methods that required low energy produced only small enhancements in the breakdown of the feedstock and the biogas released compared with the techniques that needed higher input, although this situation is peculiar to only some cases. Improved degradation of macroalgae and reduction of recalcitrance produce more biogas, yet some methods improve the percentage degradation while having an insignificant influence, or none, on the biogas released. Massive investment is required to set up some of these pretreatment methods, and the improvement in biogas released may not be commensurate with the investment cost. There have been several recent reports on macroalgae pretreatment before anaerobic digestion, but a wide area remains to be covered, especially regarding economic feasibility at the industrial level. The release of inhibitory and toxic compounds is another considerable limitation observed in most of the literature reviewed. Inhibitory and toxic compounds released during pretreatment lower the activity of biogas-producing microorganisms, reduce the biogas released, and make the process uneconomical. This is a serious concern in macroalgae pretreatment, since some of the merits of pretreatment methods are eroded during digestion by the harmful influence of these compounds on methanogenic bacteria. A pretreatment method applied to particular macroalgae was noticed not to have the same effect on other macroalgae, because various macroalgae feedstocks respond differently to the same pretreatment technique. For macroalgae feedstocks with high degradation rates, pretreatment may be neither economically feasible nor improve the net energy. Although efficient techniques can be selected from the information available for some macroalgae species, others do not meet the efficiency and economic needs of industry.

Conclusion and recommendations

Physicochemical characteristics of macroalgae show them to be an excellent potential feedstock for biogas production, but their microstructural arrangement limits their conversion to biogas owing to the unavailability of organic matter for the microorganisms' use. Enhancing biogas from macroalgae will contribute significantly to sustainable development by reducing greenhouse gas emissions, providing economic management of seaweeds, and reducing reliance on energy from nonrenewable resources. Utilizing macroalgae biomass instead of traditional substrates (agricultural residues, energy and starchy crops, etc.)
could provide a cost-effective and readily available sugar source for the transportation sector and other energy uses. The microstructural arrangement of macroalgae presents technological hindrances because of its resistance to bioavailability, and pretreatment of this recalcitrant substrate is important for an efficient production process. Pretreatment of macroalgae biomass before anaerobic digestion has been observed to enhance the biogas and methane yield of this economical feedstock. Macroalgae pretreatment and alteration of the structural arrangement are the principal factors influencing the hydrolysis stage. The considerations used in selecting the pretreatment technique must reflect the characteristics of the different macroalgae species, since these dictate the availability of the feedstock at the hydrolysis stage and the subsequent sugar release for biogas production. Pretreatment method selection is therefore a vital decision in biogas production from macroalgae, and it is crucial to establish the fundamentals of the different processes so that the most efficient technique can be chosen with regard to the microstructural arrangement of the feedstock and the hydrolysis microorganisms. Studies on the pretreatment of macroalgae are limited compared with other organic feedstocks such as sewage sludge and other lignocellulosic materials (terrestrial residues). Most studies on macroalgae pretreatment have focused on biomethane potential, with effectiveness evaluated on the basis of feedstock solubilization and enhancement in methane yield; the influence of pretreatment techniques on cell wall arrangement has not yet been examined. As previously mentioned, the significant challenges of macroalgae pretreatment are energy cost and the release of inhibitory materials that lower the efficiency of the downstream bioprocesses of biogas production. To increase the techno-economic feasibility of utilizing macroalgae biomass as biogas feedstock, the idea of incorporating biorefineries, where more than one bioproduct is produced on the same platform, could be promising. Producing biodiesel, bioethanol, biohydrogen, or bio-oil in the same refinery with biogas would reduce the pretreatment cost significantly and add other valuable products to the process; the waste generated from these bioprocesses can serve as feedstock for biogas production without requiring pretreatment again. This would also significantly promote large-scale macroalgae biomass pretreatment. The majority of the techniques available in the literature were examined at the laboratory scale and might not produce the same efficiency at the industrial scale. Therefore, future studies need to focus on understanding pretreatment mechanisms and on pilot-scale investigation to validate the application of pretreatment technology in industry and examine its scalability for the commercial conversion of macroalgae into biogas and methane. It is recommended that pretreatment input parameters be specified by applying a multiobjective optimization method such that optimum biogas yield can be generated with a positive energy balance. Various methods can be applied to solve such multiple-response problems; desirability techniques have been applied in various engineering fields and are recommended because of their simplicity, software availability, and flexibility for individual responses (a minimal sketch follows the figure caption below).

Figure 1. Flow chart of biogas and methane release from macroalgae.
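A minimal sketch of the Derringer-type desirability calculation recommended above; the responses, ranges and candidate values are hypothetical.

import math

def d_larger_is_better(y, low, high, r=1.0):
    """Individual desirability in [0, 1] for a response to be maximized."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** r

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    return math.prod(ds) ** (1.0 / len(ds))

# Two responses per candidate pretreatment: methane yield (mL/g VS) and
# net energy balance (MJ/t VS); all numbers assumed for illustration.
candidates = {"thermal": (320, 80), "ultrasonic": (350, -50), "combined": (380, 20)}
for name, (ch4_yield, net_energy) in candidates.items():
    D = overall_desirability([d_larger_is_better(ch4_yield, 250, 400),
                              d_larger_is_better(net_energy, -100, 150)])
    print(f"{name}: D = {D:.2f}")

The candidate with the highest overall desirability D balances both objectives, rather than maximizing biogas yield at any energy cost.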
[Table 7, additional fragments. Merits: (xxiii) produces a substrate with minimal lignin residue, reducing unwanted enzyme absorption; (xxiv) the solvent used can be recovered and reused; (xxv) eliminates a specific amount of lignin and hemicellulose, lowering the substrate's crystallinity; (xxvi) needs low-pressure solvent equipment; (xxvii) needs only a modest reaction condition. Challenges: (xvi) high cost of equipment and reagents; (xvii) substantial investment costs; (xviii) toxic organic solvents are difficult to handle; (xix) releases high contents of inhibitory compounds; (xx) the process releases inhibitory compounds; (xxi) poor biodegradability rate; (xxii) a toxic process; (xxiii) liquid synthesis and purification are complex processes; (xxiv) high cost of investment.]

Table 1. Biogas and methane potential of different macroalgae.
Table 2. Influence of different biological pretreatments on biogas and methane yield of macroalgae.
Table 3. Impacts of some chemical pretreatments on biogas and methane yield of macroalgae.
Table 4. Effects of some mechanical pretreatments on biogas and methane yield of macroalgae.
Table 5. Effects of various thermal pretreatments on biogas and methane yield of macroalgae.
Table 6. Effects of different combined pretreatments on biogas and methane yield of macroalgae.
Sublethal Effects of Poly(Amidoamine) Dendrimers in Rainbow Trout Hepatocytes

Abstract

The purpose of this study was to examine the toxicity of drug vectors, poly(amidoamine) (PAMAM) dendrimers, to rainbow trout hepatocytes. Primary cultures of rainbow trout hepatocytes were exposed to concentrations of G2, G4 and G5 PAMAM dendrimers and of a representative antibiotic found in municipal effluents, minocycline, for 48 h at 15 °C. After the exposure period, cells were harvested for the assessment of viability, heat shock protein 70 (HSP70) level and glutathione S-transferase (GST) activity. The results revealed that the PAMAM dendrimers were toxic to rainbow trout hepatocytes, with the G4 and G5 PAMAM dendrimers being 5 times more toxic than the G2 PAMAM dendrimer. In addition, the G4 and G5 PAMAM dendrimers increased HSP70 levels, while the G2 PAMAM dendrimer systematically reduced those levels. The G5 PAMAM dendrimer alone was able to induce GST activity, which is indicative of oxidative stress. Minocycline was found to be toxic to rainbow trout hepatocytes at high concentrations (>90 µg/mL) which are not likely to occur in municipal effluents. The antibiotic also systematically reduced HSP70 levels and GST activity. In conclusion, PAMAM dendrimers are cytotoxic to rainbow trout hepatocytes, but acute toxicity occurs at concentrations not expected to be found in hospital and municipal effluents. The sublethal effects of these dendrimers on HSP70 levels and GST activity suggest that chronic effects could also occur.

Introduction

Nanotechnology has undergone exponential development which has reached many sectors of our economy. NMs have found many applications, from electronic devices, paints/dyes, cosmetics and personal products, to biomedical uses such as imaging and drug and gene delivery strategies. Any product at the nanoscale with at least one dimension between 1 and 100 nanometers (nm) is considered a NM. Compounds produced at the nanoscale offer new and interesting emerging properties with tremendous potential for commercial applications. For example, the use of nanoparticles or nano-vectors can permit enhanced delivery of a given drug within the body and can target drug release to specific sites in the body. However, the increasing use of NMs has raised concerns about the inadvertent release of such products into the environment and potential impacts on aquatic ecosystems [1]. The toxicity of nanomaterials arises from the cumulative effects of four basic properties associated with colloids: 1) the leaching of low-molecular-weight molecules or ions, 2) the geometry (size and shape) of the NMs including their aggregates, 3) the surface properties (reactivity), and 4) the vector effect. The last property has been extensively studied in connection with the development of drug, gene and peptide delivery systems in therapeutics. Some NMs have the ability to interact with xenobiotics (drugs) and can increase their bioavailability and toxicity by promoting their internalization in tissues/cells [2]. For example, the cytotoxicity of Adriamycin to the Chinese hamster cell line DC3F increased when it was associated with cyanoacrylate nanoparticles. In addition, an Adriamycin-resistant hamster cell line became more sensitive to the drug when it was associated with cyanoacrylate nanoparticles, which provides evidence of a vector effect. From an environmental risk assessment perspective, it is important to gain a better understanding of the toxicity associated with NMs used as drug delivery "devices" before seeking to determine the vector effect in contaminated environments.

The development and use of poly(amidoamine) (PAMAM) dendrimers for targeted and enhanced drug and gene delivery have been extensively examined [3,4]. The interest in these dendritic NMs stems from their structural properties, including uniformity, size, shape, monodispersity and functionalized surfaces [5]. Dendrimers are composed of an initiator amine core (-NH2) with attached amidoamine units that are radially distributed around the core (Figure 1). Each successive branching that forms a surface layer is termed a generation (G). Full-generation dendrimers (G1, G2, G3, etc.) have cationic amine-terminated groups at physiological pH, while half-generation dendrimers (G2.5, G3.5) have anionic carboxylic moieties at physiological pH. Finally, each successive generation has twice the number of terminal groups and an increased diameter. Cationic dendrimers have been shown to exhibit cytotoxicity and haemolysing properties which are dependent on size and surface charge (Zeta potential) [6]. It appears that dendrimers produce small "nanoholes" or "nanopores" in membranes, which can perturb membrane potential, integrity and permeability. Thus, the toxicity of dendrimers could be due to their surface properties in addition to their vector properties.
Studies on the toxicity of PAMAM dendrimers to non-target species are relatively scarce, and the environmental risk of these NMs is not well understood at the present time. Hence, examining the cytotoxicity of PAMAM dendrimers at both the lethal and sublethal levels in fish hepatocytes is relevant to understanding the potential toxicity of these compounds in aquatic ecosystems. G4 PAMAM dendrimers were found to decrease growth and larval development in zebrafish embryos [7]. In an earlier study, G4 PAMAM dendrimers were associated with reduced algal survival, enhanced oxygen production and stimulation of photosystem II reaction centre activity [8], which points to the formation of reactive oxygen species and oxidative stress. Depending on their size and shape, nanoparticles may induce interactions in the protein space domain leading to protein denaturation. The heat shock proteins of the 70 kDa family (HSP70) are stress proteins that are involved in stabilizing protein conformation [9]. This process is clearly energy-demanding, since these chaperone proteins require ATP to function. For example, it was estimated that one heat shock protein requires up to 100 moles of ATP to re-fold denatured rhodanese protein [10]. Heat shock proteins were also shown to respond to oxidative stress [9]. Rainbow trout yearlings exposed to cadmium-based quantum dots and to dissolved cadmium showed increased HSP70 levels and oxidative damage [11]. However, correction of HSP70 levels against oxidative stress markers (the oxidized proportion of metallothioneins or lipid peroxidation) failed to remove the inducing effects of the quantum dots, suggesting that interactions other than oxidative stress were at play. Oxidative stress and xenobiotic conjugation can be conveniently monitored on the basis of glutathione S-transferase (GST) activity. GST requires reduced glutathione (GSH) in order to function, which can be a limiting factor during oxidative stress. The formation of oxygen adducts on molecules during oxidative stress can also be neutralized by conjugation with GSH. For example, GST activity was used as a marker of oxidative stress in marine mussels exposed to cadmium-based quantum dots [12]. Exposure to 10 µg/L cadmium-based quantum dots increased oxidative stress and GST activity, while dissolved cadmium at the same concentration failed to induce GST activity. Given that PAMAM dendrimers are likely to be released into the environment in wastewater effluent containing many pollutants such as antibiotics, the toxicity of a representative antibiotic to fish liver cells is relevant. Tetracyclines such as minocycline are commonly found in hospital and municipal wastewaters [13]. Minocycline levels were found to range from non-detectable to 530 µg/L in hospital effluents and from 95 to 920 µg/L in wastewater treatment plant effluents. In addition, these compounds are continuously released into the environment from municipal effluents. This could lead to accumulation in non-target organisms if exposure to such compounds exceeds their capacity to eliminate them.

The purpose of this study was to investigate the cellular toxicity of G2, G4 and G5 PAMAM dendrimers and of minocycline in rainbow trout hepatocytes. Cytotoxicity, the levels of stress proteins (HSP70) and GST activity were determined in order to evaluate the toxicity and mechanisms of action of these NMs in fish hepatocytes.
Preparation and exposure of rainbow trout hepatocytes

Second-, fourth- and fifth-generation PAMAM dendrimers were purchased from Sigma Chemical Company (Ontario, Canada). They were diluted in High Quality water at 200 mg/mL for dynamic light scattering (DLS) analysis, to measure particle size distribution and Zeta potential, and for hepatocyte exposure. The analysis was done using a DLS instrument with a gel electromobility option (Wyatt Instrument Mobius, 532-nm laser). Zeta potential was determined from gel mobility data as described in Domingos et al., 2013 [14]. The measurements were made at 1 mg/mL under identical conditions in High Quality water. The analytical performance of the instrument was validated with NIST polystyrene standard beads (42 nm diameter) and a Zeta potential standard solution (Otsuka mobility standard, lot No. 302013). Primary cultures of rainbow trout (Oncorhynchus mykiss) hepatocytes were prepared using a perfusion method with saline citrate and albumin [15]. Briefly, young-of-the-year (8- to 10-cm fork length) rainbow trout (3 livers pooled) were used. After the trout were anesthetized with 25 mg/L tricaine buffered to pH 7.4 with 1 M NaHCO3, the excised livers were perfused with 10 mM citrate in 125 mM NaCl, pH 7.2, at 4 °C until the liver tissue acquired a light brown coloration. The livers were then minced and placed in 10 mL of citrate perfusion media containing 0.5% bovine serum albumin. The suspension was stirred slowly with a magnetic stirring bar at 20-40 rpm for 30 min at room temperature. After this period, the suspension was passed through a cell extraction sieve (40-µm mesh, Sigma Chemical Company) and the cells were washed in phosphate-buffered saline (PBS: 140 mM NaCl, 5 mM KH2PO4, 5 mM NaHCO3, 1 mM glucose, pH 7.4) containing 0.1% bovine serum albumin, followed by centrifugation (200 × g for 5 min) and resuspension 3 to 4 times until a clear supernatant (free of debris) was obtained. A portion of the cell suspension was stained with 0.004% trypan blue in PBS for the determination of cell concentration and viability. The cells were counted and viability was determined (live cells remain transparent and dead ones are blue) using a hemocytometer under a microscope at 200× magnification. Hepatocytes were plated in 48-well microplates at a density of 0.5 × 10⁶ viable cells/mL (6 replicate wells per treatment) in Liebovitz (L-15) cell culture media containing 10 mM HEPES-NaOH, pH 7.4, 50 units penicillin, 50 µg/mL streptomycin and 0.1 µg/mL amphotericin B. The cells were exposed to increasing concentrations of G2, G4 and G5 PAMAM dendrimers and to minocycline at 1.6, 8, 40 and 200 µg/mL for 48 h at 15 °C in a saturated-humidity atmosphere. At the end of the exposure period, the microplates were centrifuged at 250 × g for 3 to 5 min and the exposure medium was removed by aspiration. Cells were suspended in PBS (without albumin) for cell density and viability assessments. Relative cell density was determined by measuring the absorbance at 600 nm.
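The trypan blue count described above follows standard hemocytometer arithmetic (cells/mL = mean count per large square × dilution factor × 10⁴, since each 1-mm square holds 0.1 µL). A minimal sketch with made-up counts:

# Hemocytometer count sketch; all counts and the dilution factor are assumed.
live_counts = [105, 98, 110, 103]   # unstained (live) cells per large square
dead_counts = [6, 5, 7, 6]          # blue (dead) cells per large square
dilution = 1.0                      # trypan blue dilution factor

mean_live = sum(live_counts) / len(live_counts)
mean_dead = sum(dead_counts) / len(dead_counts)

conc = mean_live * dilution * 1e4                        # viable cells/mL
viability = 100.0 * mean_live / (mean_live + mean_dead)  # percent viable

print(f"{conc:.2e} viable cells/mL, {viability:.1f}% viability")
# Such a suspension would then be adjusted to the plating density of
# 0.5e6 viable cells/mL used in the exposures.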
Cell viability assessment

Hepatocyte viability was determined by the fluorescein dye retention assay as described elsewhere [15]. A portion (20 µL) of the cell suspension was mixed with 180 µL of 10 µM fluorescein diacetate in PBS containing 1 mM glucose and kept in dark-coloured microplates for 20 min at 20 °C. The microplate was centrifuged at 250 × g for 5 min and the supernatant removed. The cells were then resuspended in 200 µL of phosphate-buffered saline, and fluorescence was measured at 485 nm excitation and 520 nm emission using a microplate reader (Chameleon II, Bioscience, USA). A positive control (100% mortality) was prepared by adding cells to separate wells containing 20% DMSO to completely permeabilize the cells. The data were normalized to controls and expressed as a fold change (reduction) in fluorescence.

HSP70 levels were determined using an enzyme-linked immunoassay as described earlier (Louis et al., 2010) [11]. The hepatocytes were first homogenized using a Teflon-pestle tissue/cell grinder (4 passes at 4 °C) and centrifuged at 12,000 × g for 20 min at 4 °C. The supernatant (S12) was diluted to 1 µg total protein in 50 mM sodium carbonate buffer at pH 9.6. Total protein was determined using the Coomassie brilliant blue protein binding assay with bovine serum albumin for calibration [16]. The material was added to high-binding microplate wells (Immulon-4 microplate) and held overnight at 4 °C. Afterwards, the wells were rinsed twice with 200 µL of PBS and incubated with PBS containing 1% albumin for 30 min at 20 °C to block the remaining sites. The wells were washed with 200 µL of PBS, and 100 µL of HSP72 polyclonal antibody (recombinant human HSP72 IgG SPA-812; Stressgen, USA) diluted 1:1,000 in PBS containing 0.5% albumin was added to each well. The wells were incubated at 37 °C for 60 min. The wells were then washed 3 times in PBS, and 100 µL of the secondary antibody (rabbit anti-IgG linked with peroxidase) diluted 1:5,000 in PBS containing 0.5% albumin was added and incubated for 30 min at 20 °C. The wells were washed 3 times in PBS (200 µL), and peroxidase activity was determined with 1 µM luminol and 10 µM hydrogen peroxide. Luminescence was measured at the initial mixing and monitored for up to 20 min using a luminescence microplate reader (Chameleon II, Bioscience, USA). The data were expressed as peroxidase activity (increase in luminescence)/min. GST activity was determined in the S12 fraction of the supernatants using the colorimetric assay procedure with reduced GSH and 1-chloro-2,4-dinitrobenzene as co-substrates [17]. The data were expressed as the rate of increase in absorbance at 340 nm/(min × mg protein).

Data analysis

The hepatocytes were exposed in n = 6 replicates for each concentration of the tested compounds. The toxicity of the PAMAM dendrimers and of minocycline was expressed in terms of toxicity thresholds, corresponding to the geometric mean of the lowest significant effect concentration (LSEC) and the no-effect concentration (NEC): TT = (LSEC × NEC)^(1/2). The data were checked for homogeneity of variance and normality using Levene's test and the Shapiro-Wilk test, respectively. Analysis of variance was performed, and critical differences were determined using Dunnett's t test. Correlation analysis was also performed using Pearson's product-moment procedure, and the tests were carried out using the Statistica software package (version 8).
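The toxicity-threshold definition lends itself to a one-line check. In the sketch below, the LSEC/NEC pairs are inferred from the tested concentration series (1.6, 8, 40 and 200 µg/mL) together with the thresholds reported in the Results; they are assumptions for illustration, not values stated in the Methods.

import math

def toxicity_threshold(lsec: float, nec: float) -> float:
    """Geometric mean of the lowest significant effect and no-effect concentrations."""
    return math.sqrt(lsec * nec)

# Minocycline: significant viability drop at 200 µg/mL, no effect at 40 µg/mL,
# reproducing the reported threshold of ~90 µg/mL.
print(f"minocycline TT = {toxicity_threshold(200, 40):.0f} µg/mL")   # 89
# G4/G5 dendrimers: LSEC = 8 and NEC = 1.6 give the reported ~3.6 µg/mL.
print(f"G4/G5 TT = {toxicity_threshold(8, 1.6):.1f} µg/mL")          # 3.6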
Results and Discussion

The dendrimers tested were the G2, G4 and G5 PAMAM dendrimers, which are built from amidoamine branches around a diamine core, as described in Figure 1. The G2, G4 and G5 dendrimers have theoretical diameters of 2.9, 4.5 and 5.4 nm, respectively (Table 1). Although the size of these dendrimers did not change much, the number of functional amine groups (-NH3+ at physiological pH) at their surface increased from 16 to 128 between the G2 and G5 dendrimers. This change was accompanied by an increase in molecular weight, such that an equivalent 20 µg/mL solution corresponded to 6, 1.4 and 0.7 µM for the G2, G4 and G5 PAMAM dendrimers, respectively. Compared with the same mass of minocycline, the dendrimer molar concentrations were at least one order of magnitude lower, i.e., minocycline was in molar excess relative to the PAMAM dendrimers. Given the pKa values (5 and 9.5) of the 2 amine groups of minocycline, the molecule can be assumed to be cationic at physiological pH, as is the case for the dendrimers. This is consistent with what is known about these types of drug vectors, which have high surface-area chemistries permitting interaction of drugs at the surface. However, dendrimers can induce pore formation, permitting higher diffusion of contaminants into cells [18]. These properties complicate the classical risk assessment paradigm, because nanoparticles could change the bioavailability of contaminants in the environment.

The effects of PAMAM dendrimers of increasing size were examined in rainbow trout hepatocytes (Figure 2). G4 and G5 PAMAM dendrimers were found to be more toxic than G2 PAMAM dendrimers, with toxicity thresholds of 3.6 µg/mL compared to 20 µg/mL. Minocycline was the least toxic test substance; it caused a significant drop in cell viability at 200 µg/mL, giving a toxicity threshold of 90 µg/mL. This is in keeping with other studies which showed that dendrimer toxicity is size- and surface charge-dependent [6,19]. The haemolysing potential and the cytotoxicity observed in erythrocytes increased with higher-generation PAMAM dendrimers (G5 and G6). However, the initially positive Zeta potential value in water dropped to a negative value in cell culture media, which points to an interaction with cell culture media components. Increasing cationic charge at the surface of PAMAM dendrimers was proportionally toxic to Daphnia magna and rainbow trout gonad (RTG-2) cell lines [20]. Toxicity was also related to the Zeta potential of G4 to G6 PAMAM dendrimers in the culture media, indicating that toxicity was related to the surface properties (i.e., number of surface groups) of the nanoparticle. Although the Zeta potential decreased in aquarium water, there was no indication of aggregate formation, and the dendrimers were shown to influence innate immunity in zebrafish embryos exposed to G3 and G4 PAMAM dendrimers [21].
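The mass-to-molar conversion quoted above can be reproduced as follows. The molecular weights are nominal literature values for ethylenediamine-core PAMAM dendrimers and are assumptions, not data reported in this paper.

nominal_mw = {"G2": 3256.0, "G4": 14215.0, "G5": 28826.0}  # g/mol, assumed

def mass_to_molar_uM(conc_ug_per_mL: float, mw_g_per_mol: float) -> float:
    # µg/mL equals mg/L; (mg/L) / (g/mol) gives mmol/m^3, i.e. µmol/L.
    return conc_ug_per_mL / mw_g_per_mol * 1000.0

for gen, mw in nominal_mw.items():
    print(f"{gen}: 20 µg/mL = {mass_to_molar_uM(20.0, mw):.1f} µM")
# -> roughly 6.1, 1.4 and 0.7 µM, matching the values given in the text.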
The sublethal effects of the PAMAM dendrimers were also examined by monitoring HSP70 levels and GST activity (Figures 3 and 4). For the protein chaperone HSP70, the dendrimers tended to decrease HSP70 levels, with the exception of the G4 dendrimer, which increased them. The decrease in HSP70 levels occurred at the lowest tested concentrations of the G2 and G5 PAMAM dendrimers. The lowest concentration of minocycline reduced HSP70 levels, although with less potency than the dendrimers; the decrease in HSP70 levels was dampened at 8 and 40 µg/mL. The activity of GST, a marker enzyme for oxidative stress and xenobiotic conjugation, increased in response to the lowest concentration of the G5 PAMAM dendrimer and decreased at the higher dendrimer concentrations. The G2 and G4 dendrimers and minocycline reduced GST activity, but the G4 PAMAM dendrimer was more potent than the other dendrimers (G2 and G5) in reducing GST activity. Based on correlation analysis, the decrease in HSP70 levels and GST activity was mostly associated with decreased cell viability. Cell viability was significantly correlated with HSP70 (r = 0.52; p < 0.01) and GST activity (r = 0.45; p < 0.05) for the G2 PAMAM dendrimer. For the G4 PAMAM dendrimer, cell viability was also correlated with HSP70 (r = -0.5; p < 0.01) and GST activity (r = 0.78; p < 0.001). GST activity was significantly correlated with HSP70 only after correcting against loss of cell viability (residuals), at r = 0.38 (p < 0.05), which suggests that oxidative stress was involved in HSP70 expression, at least in part. For the G5 PAMAM dendrimer, cell viability was significantly correlated with HSP70 level (r = -0.45; p = 0.01) and GST activity (r = 0.61; p < 0.001).

In the case of minocycline, cell viability was only correlated with GST activity (r = 0.46; p = 0.01). GST activity (corrected against cell viability) and HSP70 levels were significantly correlated at r = -0.66 (p < 0.001). Minocycline is recognized as having antioxidant properties in addition to bactericidal activity, as shown by reduced lipid peroxidation in brain tissues [22]. In another study, GST induction by cypermethrin was prevented by minocycline in peripheral red blood cells in the rat [23].
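The "corrected against loss of cell viability (residuals)" step used in these correlations can be sketched as follows: regress GST activity on viability, then correlate the residuals with HSP70. The arrays are placeholders, and scipy stands in for the Statistica procedures actually used.

import numpy as np
from scipy import stats

viability = np.array([1.00, 0.95, 0.80, 0.60, 0.96, 0.85, 0.70, 0.55])  # assumed
gst       = np.array([1.10, 1.05, 0.85, 0.60, 1.00, 0.90, 0.70, 0.50])  # assumed
hsp70     = np.array([1.00, 1.10, 1.30, 1.60, 1.05, 1.20, 1.45, 1.70])  # assumed

# Raw association, dominated by the common loss of viability.
r_raw, p_raw = stats.pearsonr(gst, hsp70)

# Remove the linear viability component from GST and re-test.
slope, intercept = np.polyfit(viability, gst, deg=1)
gst_residuals = gst - (slope * viability + intercept)
r_corr, p_corr = stats.pearsonr(gst_residuals, hsp70)

print(f"raw:      r = {r_raw:.2f} (p = {p_raw:.3f})")
print(f"residual: r = {r_corr:.2f} (p = {p_corr:.3f})")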
To the best of our knowledge, this is the first report on the influence of PAMAM dendrimers on HSP70 levels. Decreased expression of HSP70 could render cells less able to defend against changes in protein scaffolding induced by nanoparticles. For example, strong expression of HSP70 occurred in trout hepatocytes exposed to aged cadmium-based quantum dots [24]. Induction of HSP70 was the strongest response to these nanoparticles and involved metallothioneins and labile zinc in cells, suggesting that the release of toxic cadmium ions was at play, at least in part. It is noteworthy that an increase in HSP70 was also associated with oxidative stress [25]. GST activity is a marker of oxidative stress as well as of the conjugation of polar compounds. GST activity was marginally negatively correlated with HSP70 levels (r = -0.31; p = 0.09) for the G4 PAMAM dendrimer, the only dendrimer that induced HSP70. Decreased GST activity could also result from the depletion of reduced GSH in cells undergoing oxidative stress [26]. Induction of HSP70 was also associated with oxidative stress in freshwater mussels exposed to zinc oxide nanoparticles [27] and in zebrafish embryos exposed to C60 fullerene [28]. In a study involving G4, G5 and G6 PAMAM dendrimers, increased production of reactive oxygen radicals and an increase in genotoxicity were observed in fish hepatocellular carcinoma cell lines [29]. The increase in HSP70 levels and the concomitant decrease in GST activity in hepatocytes exposed to G4 and G5 PAMAM dendrimers may be attributable to oxidative stress. This is consistent with the decrease in HSP70 levels in hepatocytes exposed to minocycline (known to act as an antioxidant), which was not related to decreased cell viability. In conclusion, PAMAM dendrimers were toxic to rainbow trout hepatocytes, with the G4 and G5 dendrimers being 5 times more toxic than the G2 dendrimer. The G4 and G5 PAMAM dendrimers increased the levels of HSP70, while the G2 dendrimer systematically reduced those levels. Only the G5 dendrimer was able to induce GST activity indicative of oxidative stress. Minocycline was less toxic to rainbow trout hepatocytes than the dendrimers and systematically reduced the levels of HSP70 and GST activity.

Figure 1: Molecular structure of PAMAM dendrimers and minocycline.

Figure 2: Change in cell viability in trout hepatocytes exposed to PAMAM dendrimers and minocycline. Rainbow trout hepatocytes were exposed to G2, G4 and G5 PAMAM dendrimers and minocycline for 48 h at 15 °C. The star (*) symbol indicates a significant difference from the controls at α < 0.05.

Figure 3: Change in heat shock proteins in trout hepatocytes exposed to dendrimers of increasing size. Trout hepatocytes were exposed to increasing concentrations of G2, G4 and G5 PAMAM dendrimers and minocycline for 48 h at 15 °C. The star (*) symbol indicates significance at α = 0.05.

Figure 4: Change in GST activity of trout hepatocytes exposed to dendrimers of increasing size. Rainbow trout hepatocytes were exposed to G2, G4 and G5 PAMAM dendrimers and minocycline for 48 h at 15 °C. The star (*) symbol indicates a significant difference from the controls at α < 0.05.
Gene Expression and Thiopurine Metabolite Profiling in Inflammatory Bowel Disease – Novel Clues to Drug Targets and Disease Mechanisms?

Background and Aims: Thiopurines are effective to induce and maintain remission in inflammatory bowel disease (IBD). The methyl thioinosine monophosphate (meTIMP)/6-thioguanine nucleotide (6-TGN) concentration ratio has been associated with drug efficacy. Here we explored the molecular basis of differences in metabolite profiles and their relation to disease activity.

Methods: Transcriptional profiles in blood samples from an exploratory IBD-patient cohort (n = 21) with a normal thiopurine S-methyltransferase phenotype and meTIMP/6-TGN ratios >20, 10.0-14.0 and ≤4, respectively, were assessed by hybridization to microarrays. Results were further evaluated with RT qPCR in an expanded patient cohort (n = 54). Additionally, 30 purine/thiopurine-related genes were analysed separately.

Results: Among 17 genes identified by microarray screening, there were none with a known relationship to pathways of purines/thiopurines. For nine of them, a correlation between expression level and the concentration of meTIMP, 6-TGN and/or the meTIMP/6-TGN ratio was confirmed in the expanded cohort. Nine of the purine/thiopurine-related genes were identified in the expanded cohort to correlate with meTIMP, 6-TGN and/or the meTIMP/6-TGN ratio. However, only small differences in gene expression levels were noticed across the three metabolite profiles. The expression levels of four genes identified by microarray screening (PLCB2, HVCN1, CTSS, and DEF8) and one purine/thiopurine-related gene (NME6) correlated significantly with the clinical activity of Crohn's disease. Additionally, 16 of the genes from the expanded patient cohort interacted in networks with candidate IBD susceptibility genes.

Conclusions: Seventeen of the 18 genes which correlated with thiopurine metabolite levels also correlated with disease activity or participated in networks with candidate IBD susceptibility genes involved in processes such as purine metabolism, cytokine signaling, and the functioning of invariant natural killer T cells, T cells and B cells. Therefore, we conclude that the identified genes are to a large extent related to drug targets and disease mechanisms of IBD.

Introduction

Ulcerative colitis (UC) and Crohn's disease (CD) are chronic, remitting and progressive inflammatory bowel diseases (IBD). Primarily, 5-aminosalicylic acid (5-ASA) and glucocorticosteroids are used in the treatment. Glucocorticosteroid-dependent or refractory patients are eligible for immunomodulatory therapy with the purine analogues azathioprine or 6-mercaptopurine, methotrexate and/or anti-TNF antibodies [1,2,3]. Azathioprine and 6-mercaptopurine are prodrugs, converted in vivo to active metabolites via a complex metabolism [4] (Figure S1). Two main metabolite groups are produced: the phosphorylated thioguanine nucleotides (6-TGNs), which comprise thioguanosine mono-, di- and triphosphates, and the methylated thioinosine phosphates, measured as meTIMP [5]. Both metabolite groups contribute to the immunomodulatory effects in different ways [4,6,7,8,9]. Up to 30% of IBD patients discontinue thiopurine therapy due to adverse events or refractoriness [1,10,11]. A cut-off at >230-260 pmol 6-TGN/8×10⁸ red blood cells (RBC) has been proposed as a lower limit for clinical efficacy [12], but controversy exists regarding its utility, since an important overlap exists between patients in remission and those with active disease.
A high concentration of the other major metabolite, meTIMP, has mainly been associated with hepatotoxicity [13] and myelotoxicity [11]. Thus, a high meTIMP/6-TGN concentration ratio indicates an increased risk of both therapy failure and adverse events [14,15,16]. The xanthine oxidase inhibitor allopurinol in combination with a reduced dose of thiopurine (25-33% of the original dose) may be considered in patients with this unfavourable metabolite profile. The combination therapy switches the metabolism towards enhanced 6-TGN production and has been shown to be safe and effective in IBD [15,17]. It also reduces glucocorticosteroid requirements and hepatic as well as non-hepatic side effects [15,17,18]. The use of 5-ASA has also been suggested as an alternative to manipulate the metabolite profile by inhibition of TPMT [19,20,21]. However, the effect of 5-ASA on 6-TGN in vivo varies between studies [11,13,20,22,23,24,25,26], and it does not seem to change the concentration of meTIMP [23,24]. The use of 5-ASA to modulate the metabolite profile has not been implemented in clinical practice in the same way as allopurinol.

The underlying mechanism explaining why a proportion of patients preferentially metabolize azathioprine (AZA) and 6-mercaptopurine (6-MP) to meTIMP is currently unknown. Interindividual variation in metabolite profiles and drug response may be explained by differences in the activities of drug-metabolizing enzymes, and their correlation with corresponding transcript or protein levels. A well-known cause of adverse reactions to thiopurines is reduced or absent thiopurine S-methyltransferase (TPMT) activity, where patients with low TPMT activity accumulate myelotoxic concentrations of 6-TGNs if treated with standard doses [27]. However, not all interindividual differences in thiopurine metabolism and response can be attributed to variations in TPMT [14,15,28,29,30], since a large number of enzymes are involved. Blockage of inosine 5′-monophosphate dehydrogenase (IMPDH) activity could, based on its position in the metabolic pathway of thiopurines (Figure S1), restrict the formation of 6-TGNs and therefore explain a high meTIMP/6-TGN concentration ratio. Indeed, IMPDH activity was inversely correlated with the concentration of meTIMP in our previous work; however, no correlation with the concentration of 6-TGN was observed [25,31]. Inosine triphosphatase (ITPase) is involved in a metabolic loop in which 6-TIMP is reconverted from 6-thioinosine triphosphate (Figure S1). ITPase deficiency may contribute to the metabolite profile by increasing the concentration of methylated metabolites (methyl thioinosine triphosphate) [32]. Furthermore, most nucleoside analogues enter and exit cells via nucleoside transporters [33]. Impaired function, as well as up- or down-regulation of transport proteins with different specificities for thiopurine metabolites, probably affects the metabolite profiles, as may variations in the activities of intracellular nucleotidases and kinases.

The aim of this study was to explore the molecular basis of differences in metabolite profiles. We performed a whole-genome expression analysis in blood samples from thiopurine-treated patients with IBD and related the findings to metabolite concentrations and clinical patient characteristics.

Ethics Statement

The study was approved by the Ethics Committee at Linköping University, Sweden, dnr 03-260 and M58-06. Written informed consent was obtained from all patients.
Patients

An explorative cohort of IBD patients (n = 21) with normal TPMT activity (>8.9 U/mL pRBC; units per mL packed RBC) was selected based on differences in their metabolite profiles. We defined a high metabolite concentration ratio as meTIMP/6-TGN >20, based on metabolite determinations in our laboratory and on studies of deviant metabolism by others [14,15,34]. In our experience, this ratio corresponds to the 75th percentile of the meTIMP/6-TGN concentration ratios (median 12) amongst 1220 patients with IBD on thiopurine therapy and TPMT activity in the normal range. Ten patients with meTIMP/6-TGN concentration ratios >20 (R20) were included in a microarray analysis, as were four patients with metabolite ratios corresponding to the median metabolite ratio (range 10.0-14.0, Median) and seven patients displaying a profile with a metabolite ratio ≤4 and 6-TGN ≥100 pmol/8×10⁸ RBC (R4). The cut-off ≥100 pmol 6-TGN/8×10⁸ RBC was employed to ensure acceptable analytical precision. All patients had been on long-term medication with an unchanged thiopurine dose for at least 2 weeks prior to blood sampling. Metabolite profiles were stable as judged from historical records, with at least two observations of the same kind available in 19/21 patients. Patients who had received a blood transfusion within 4 months were not included. Disease activity (score ≥5 indicating active disease) was assessed with the Walmsley index for UC (n = 10) and the Harvey-Bradshaw index for CD (n = 10), and patient characteristics were noted (Table S1). The data from the microarray analysis were further validated by reverse transcription quantitative PCR (RT qPCR) in an expanded patient cohort (n = 54), including the initial study population with the exception of one sample with an insufficient amount of RNA (Table S2).

Isolation of RNA from Peripheral Blood Samples

Blood samples were collected in PreAnalytiX PAXgene™ blood RNA tubes (Becton Dickinson, Franklin Lakes, NJ). RNA was isolated using the PreAnalytiX PAXgene™ blood RNA kit (Qiagen, Hilden, Germany) on the day of sampling, according to the manufacturer's instructions. RNA concentration was assessed with a Nanodrop® ND-1000 spectrophotometer (Nanodrop Technologies, Wilmington, DE) and RNA integrity with a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA).

Microarray Analysis

RNA samples were treated with the SnX™ globin depletion reagent [37] and analysed by AROS Applied Biotechnology (Aarhus, Denmark) with the Affymetrix GeneChip Human Genome U133 Plus 2.0 array (Affymetrix, Santa Clara, CA), representing the entire human genome with more than 38,500 well-characterized genes.

RT qPCR

Real-time PCR was performed with the FAST 7500 real-time PCR system and reagents from Applied Biosystems (Foster City, CA) with 5-10 ng cDNA per reaction in a final volume of 10 µL. Thirty-two potential reference genes (TaqMan® Express Human Endogenous Control Fast Plate, Applied Biosystems) were evaluated for low sample-to-sample variation using the NormFinder algorithm [38] and cDNA from six patients. The mRNA expression (CT, threshold cycle) of each target gene (Table S3) was normalized against the expression level of the selected reference genes (GUSB, YWHAZ, and MRPL19) with GenEx Professional software version 4.3.8 (MultiD Analysis AB, Göteborg, Sweden) to obtain a delta-CT (dCT). The relative expression was determined for each gene in relation to the sample with the lowest expression (highest CT).
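A minimal sketch of the dCT normalization described above, under the common assumption of 100% amplification efficiency (expression halves for each unit increase in CT); all CT values below are placeholders.

import numpy as np

ct_target = np.array([24.1, 25.3, 23.8, 26.0])   # target-gene CT, one per sample
ct_refs = np.array([[20.0, 21.5, 19.0],          # reference CTs per sample
                    [20.4, 21.9, 19.5],          # (GUSB, YWHAZ, MRPL19)
                    [19.8, 21.2, 18.9],
                    [20.9, 22.4, 20.1]])

# dCT: target CT minus the mean reference CT of the same sample.
dct = ct_target - ct_refs.mean(axis=1)

# Relative expression scaled to the sample with the lowest expression
# (highest dCT), as done in the paper.
rel_expr = 2.0 ** (dct.max() - dct)
print(np.round(rel_expr, 2))   # lowest-expression sample -> 1.0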
By exploring the Pharmacogenomics Knowledge Base (http://www.pharmgkb.org/), the KEGG pathway of purine metabolism (http://www.genome.jp/kegg/pathway.html#nucleotide) and the thiopurine literature, 30 genes with a proven or potential relationship to the mechanisms, metabolism or transmembrane transport of purines/thiopurines were selected and analysed separately by RT qPCR (Table S3).

Data Analysis

Analysis of microarray data. The image files (cel format) were imported into the GeneSpring GX 11 software (Agilent Technologies, Santa Clara, CA). Data were background-corrected and normalized by the robust multiarray analysis algorithm [39]. In order to identify new candidate genes differentially expressed over metabolite profiles, low-intensity signals were removed by stringent filtering, retaining all data with a signal intensity greater than the 75th percentile of all intensities. Thereafter, only intensities above the 20th percentile in 100% of the samples of at least 1 of the 3 metabolite profiles were retained, leaving 7325 probe sets for further analysis. Genes differentially expressed between metabolite profiles were identified by analysis of variance (ANOVA) without correction for multiple testing and with the Student-Newman-Keuls post hoc test at a level of statistical significance (P-value) of 0.001. The 7325 probe sets were further included in a Spearman rank order correlation analysis against the individual dose-normalized metabolite concentrations and the meTIMP/6-TGN concentration ratio. The results were considered statistically significant if P<0.001.

Pathway analyses. In order to detect differences between metabolite profiles, gene set enrichment analysis (GSEA) was performed on the normalized data set, using the Broad Institute GSEA software, version 3.0 [40,41]. Patients with a meTIMP/6-TGN concentration ratio >20 were compared with patients with ratios ≤4 using the C2 molecular signatures database (MSigDB) of pathways of the Kyoto Encyclopedia of Genes and Genomes (KEGG), containing 186 gene sets (414 pathways). The interactions between gene products identified in this study and genes present at susceptibility loci identified for CD, UC and IBD [42] were evaluated with the Search Tool for the Retrieval of Interacting Genes/Proteins database, STRING, version 8.3 [43]. All prioritized candidate susceptibility genes were extracted from Table S2 of Jostins et al. [42], whereas for susceptibility loci with no prioritized genes, all genes were extracted. Candidate susceptibility genes with erroneous interactions with genes identified in this work were manually removed from the network.

Statistics. Dose-normalized metabolite concentrations (pmol metabolite per mg azathioprine) were used when investigating relationships between gene expression and metabolite concentrations. 6-mercaptopurine doses were converted to azathioprine doses, assuming a conversion factor of 2.08 [44]. Correlations between variables were evaluated using the Spearman rank order correlation coefficient, Rs. Median (range) values are given. For group comparisons of continuous variables, the Mann-Whitney U-test or the Kruskal-Wallis test was used. For categorical variables, Fisher's exact test was used. Two-sided testing was used and results were considered statistically significant if P<0.05.
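The two-stage probe-set filter described under "Analysis of microarray data" can be sketched as below. The exact per-probe form of the first criterion (here: any sample above the overall 75th percentile) is an interpretation of the text, and the matrix is random stand-in data sized like a U133 Plus 2.0 experiment with the 21 cohort samples.

import numpy as np

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=6, sigma=1, size=(54675, 21))  # probes x samples
groups = {"R20": slice(0, 10), "Median": slice(10, 14), "R4": slice(14, 21)}

# Stage 1: signal above the 75th percentile of all intensities.
keep1 = (intensities > np.percentile(intensities, 75)).any(axis=1)

# Stage 2: above the 20th percentile in 100% of the samples of at least
# one of the three metabolite-profile groups.
p20 = np.percentile(intensities, 20)
keep2 = np.zeros(intensities.shape[0], dtype=bool)
for sl in groups.values():
    keep2 |= (intensities[:, sl] > p20).all(axis=1)

print(f"{(keep1 & keep2).sum()} probe sets retained")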
Multiple linear regression analyses, using backward stepwise removal or inclusion of variables, were applied to assess the relationship between log-transformed dose-normalized metabolite concentrations or meTIMP/6-TGN concentration ratios as dependent variables, and gene expression levels expressed as dC_T values and the use of concomitant drugs as independent variables. A P-to-enter of 0.15 and a P-to-exit of 0.15 were adopted. Log transformation was necessary to normalise the distribution of the dependent variables.

Genes Identified by Microarray Screening - ANOVA

Four genes that were differentially expressed over metabolite profiles were identified in the explorative patient cohort using the stringent filtering of microarray data (P<0.001): FAM46A, SLX1A, TGOLN2 and UBE2A. These were included in the analyses with RT-qPCR (Table S3).

Genes Identified by Microarray Screening - Spearman Rank Order Correlation Analyses with Metabolite Concentrations

Among the significant genes identified by means of Spearman rank order correlation analyses of the 7325 probe sets, the top five or six genes with the most significant correlations with the individual metabolite concentrations, or the meTIMP/6-TGN concentration ratio, were selected for further evaluation. In total, fourteen candidate genes were identified by Spearman rank correlation analyses, one of which overlapped with the four genes identified by ANOVA (UBE2A). No genes with a known association with the metabolic pathways of thiopurine drugs or purines were identified in the microarray screen (employing a P-value <0.001).

RT-qPCR Data vs. Microarray Data - General Screening

Altogether, 17 genes identified by microarray screening were taken to RT-qPCR in the expanded patient cohort (n = 54) (Table S3). The relative gene expression levels of nine genes correlated with the concentration of meTIMP, 6-TGN or the meTIMP/6-TGN concentration ratio (Table 1, Table S5; exemplified in Figure 1). Four genes (CD1D, DEF8, HVCN1, and TUSC2) correlated positively with 6-TGN, and all except HVCN1 correlated negatively with the meTIMP/6-TGN concentration ratio. In the microarray data, DEF8 and HVCN1 displayed a significant positive correlation with 6-TGN, whereas the other two genes displayed a significant negative correlation with the meTIMP/6-TGN concentration ratio. CTSS, FAM156A, GNB4, and PLCB2 were negatively correlated with both the concentration of meTIMP and the meTIMP/6-TGN concentration ratio. In the microarray data, FAM156A and GNB4 displayed a significant inverse correlation with meTIMP, whereas the other two genes displayed a significant inverse correlation with the meTIMP/6-TGN concentration ratio. LAP3 correlated negatively only with the meTIMP/6-TGN concentration ratio, both in the expanded cohort and in the microarray screen. Substantial interindividual variation in the gene expression levels was observed, with a considerable overlap in expression levels between the three metabolite profiles (Table 2). Nevertheless, the gene expression levels of CD1D, CTSS, FAM156A, GNB4, and PLCB2 were lower in patients with meTIMP/6-TGN concentration ratios >20 than in those with a median metabolite ratio (Median). Of these five differences, four were also noticed when comparing patients with high metabolite ratios (>20) with those with low metabolite ratios (≤4) (Table 2).
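The backward stepwise procedure described at the start of this section can be sketched in a few lines with statsmodels, using the stated P-to-exit of 0.15. The data, variable names and effect sizes below are simulated stand-ins; the software actually used in the study is not specified here.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(y, X, p_exit=0.15):
    # Repeatedly drop the predictor with the largest p-value above p_exit.
    # y: log-transformed dose-normalized metabolite concentration;
    # X: dC_T values and concomitant-drug indicators (names hypothetical).
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_exit:
            return fit                 # all remaining terms qualify
        cols.remove(worst)             # drop the least significant term
    return None

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(54, 4)),
                 columns=["dCT_NT5E", "dCT_TPMT", "dCT_LAP3", "on_5ASA"])
y = 0.8 * X["dCT_NT5E"] + rng.normal(scale=0.5, size=54)
model = backward_stepwise(y, X)
print(model.params if model is not None else "no terms retained")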
Genes Related to the Metabolic Pathway of Thiopurines

Thirty genes potentially associated with the mechanism, metabolism or transmembrane transport of thiopurines or purines were analysed with RT-qPCR (Table S3). When the relative gene expression levels were evaluated, nine genes correlated with the concentration of meTIMP, 6-TGN and/or the meTIMP/6-TGN concentration ratio (Table 3, Table S6; exemplified in Figure 2). A positive correlation was observed between the gene expression level of RAC1 and the concentration of 6-TGN, whereas HPRT1 correlated negatively with this metabolite (both P = 0.04). Two genes, XDH and NT5C1B, were not expressed in peripheral blood as judged by the RT-qPCR data. Patients with meTIMP/6-TGN ratios >20 showed lower expression levels of MGST2, TPMT and NME6, but higher expression levels of IMPDH2 and NT5E, than patients with metabolite ratios ≤4. However, a large overlap in gene expression levels over the three metabolite profiles was observed (Table 4). The TPMT activity in RBC did not correlate with the mRNA expression of TPMT (R_s = 0.10, P = 0.45).

The GSEA showed no significant enrichment of genes of any pathway in any phenotype (different metabolite profiles) when a false discovery rate of 0.25 and 1000 permutations of each phenotype were applied (data not shown). However, RAC1, PLCB2 and GNB4 overlapped with the KEGG chemokine signaling pathway (P = 4.6×10^-5), and RAC1 and PLCB2 with the KEGG Wnt signaling pathway (P = 0.002). Sixteen out of 18 genes (89%) which correlated significantly with meTIMP and/or 6-TGN or the meTIMP/6-TGN concentration ratio in the expanded patient cohort (n = 54) interacted in networks with 41 genes present at 6, 4 or 21 susceptibility loci identified for CD, UC or IBD [42], respectively, as judged by the STRING analysis (Figure 3; FAM156A and HVCN1 were not present in any network). All the network-identified candidate susceptibility genes belonged to the genes prioritized by Jostins et al. [42], with the exception of four genes at two susceptibility loci without prioritized genes (CTH, ADK, CAMK2G and PLAU). Genes selected for a potential association with the metabolism (IMPDH2, HPRT1, TPMT, NT5E, ENTPD1 and NME6) or transmembrane transport (SLC29A2) of thiopurines or purines connected with candidate susceptibility genes linked to the KEGG pathway of purine metabolism, except MGST2, which linked to candidate susceptibility genes involved in the KEGG pathway of glutathione metabolism (Figure 3, and data not shown). Genes identified by microarray screening were linked to the following KEGG pathways through interactions with candidate susceptibility genes: an inter-pathway connection between 'cysteine and methionine metabolism' and 'glutathione metabolism' (LAP3); inositol phosphate metabolism and signaling through phosphatidylinositol, calcium, chemokine or Wnt (PLCB2); chemokine and Wnt signaling (RAC1); chemokine signaling (GNB4) (Figure 3, and data not shown). For four of the genes (CD1D, CTSS, TUSC2, and DEF8), the STRING analysis identified interactions solely based on text mining and indicated involvement in the functioning of invariant natural killer T cells, T cells and B cells, cytokine and chemokine signaling, anti-proliferative and apoptotic effects, and inositol phosphate metabolism (Figure 3, and data not shown).
Active Disease vs. Remission

The gene expression levels of CTSS, DEF8, HVCN1, NME6, and PLCB2 were higher in remission than in active disease (P≤0.04), whereas a decreased expression of IMPDH2 was associated with remission (P = 0.008). Eight of nine patients with active disease had CD. Considering CD patients only, the results were essentially the same, with inverse relationships between disease activity and the gene expression levels of CTSS (R_s = -0. ...). The measured concentrations of meTIMP and 6-TGN were no different in patients in remission (n = 43) compared with those with active disease (n = 9; P≥0.13).

Regression Analyses

All genes that displayed an individual correlation with one of the metabolites or the meTIMP/6-TGN concentration ratio in the expanded patient cohort were, together with concomitant therapy with 5-ASA or glucocorticosteroids (distribution between metabolite profiles, P = 0.11 and 0.006, respectively), assessed using multiple linear regression analyses. The models were essentially the same using untransformed data (data not shown).

5-ASA and Glucocorticosteroid Therapy

The gene expression level of TPMT was higher among patients ...; however, there was a large overlap in gene expression levels between the two groups (data not shown).

Discussion

An aberrant thiopurine metabolism with preferential formation of the methylated metabolites has been related to lesser efficacy and an increased risk of adverse events [11,14,15,16]. At present it is unknown why up to 25% of patients display such a metabolite profile. In order to explore the molecular basis of differences in thiopurine metabolite concentrations, we adopted a broad approach using whole-genome expression analysis in blood samples from patients with distinct metabolite profiles. Based on the microarray screening we selected seventeen candidate genes for further analysis using RT-qPCR in the expanded patient cohort, and nine of these genes demonstrated significant correlations with meTIMP and/or 6-TGN concentrations. However, among these genes there was none with a known or suspected relationship to purine/thiopurine metabolism, transport or drug effects. Furthermore, although large interindividual differences in gene expression levels were observed (Table 2 and Table 4), only small differences were noticed between the three metabolite profiles (R20, Median and R4). Thus, we did not find a strong influence of gene expression on the three distinct thiopurine metabolite profiles. Thirty genes potentially associated with the mechanism, metabolism or transmembrane transport of thiopurines or purines were explored by means of RT-qPCR in the expanded patient cohort (irrespective of the microarray result).
The expression levels of nine genes correlated significantly with meTIMP and/or 6-TGN concentrations. All of these genes except RAC1 putatively affect the metabolism or transport of thiopurines or purines. RAC1 encodes a small GTPase and is involved in signaling from the T-cell receptor in activated CD4+ cells. Its downstream targets promote cellular survival. The 6-TGN metabolite 6-TGTP has the potential to induce apoptosis in these cells by blocking the Rac1 protein [7]. Rac1 also facilitates the interaction between antigen-presenting cells and effector cells, a process disturbed by 6-TGTP [45]. Multiple regression analyses, using gene expression levels and concomitant therapy with 5-ASA or corticosteroids as predictors, explained at most 46% of the variation in the dependent metabolite variables. Four of the five genes included in the regression models had no established prior relationship with the metabolic scheme of thiopurines, although NT5E has been suggested to facilitate cellular uptake of thiopurine metabolites by extracellular de-phosphorylation [46]. Possibly, the thiopurine metabolite profile depends on several interacting genes, each with an individually small effect. However, it is also possible that the metabolite profile mainly reflects post-transcriptional regulation. In clinical practice, thiopurine metabolites are measured in RBC, whereas gene expression levels were measured in the target cells of therapy. The use of RBC as a surrogate compartment for the target cells of therapy (the mononuclear cells) may obscure the ability to establish relationships between gene expression levels and metabolite concentrations.
Site-specific chromatin immunoprecipitation: a selective method to individually analyze neighboring transcription factor binding sites in vivo

Background

Transcription factors (TFs) and their binding sites (TFBSs) play a central role in the regulation of gene expression. It is therefore vital to know how the allocation pattern of TFBSs affects the functioning of any particular gene in vivo. A widely used method to analyze TFBSs in vivo is chromatin immunoprecipitation (ChIP). However, this method in its present state does not enable the individual investigation of densely arranged TFBSs, due to the underlying unspecific DNA fragmentation technique. This study describes a site-specific ChIP which combines the benefits of both EMSA and in vivo footprinting in only one assay, thereby allowing the individual detection and analysis of single binding motifs.

Findings

The standard ChIP protocol was modified by replacing the conventional DNA fragmentation, i.e. via sonication or undirected enzymatic digestion (by MNase), with a sequence-specific enzymatic digestion step. This alteration enables the specific immunoprecipitation and individual examination of occupied sites, even in a complex system of adjacent binding motifs in vivo. Immunoprecipitated chromatin was analyzed by PCR using two primer sets - one for the specific detection of precipitated TFBSs and one for the validation of the completeness of the enzyme digestion step. The method was established exemplarily for Sp1 TFBSs within the egfr promoter region. Using this site-specific ChIP, we were able to confirm four previously described Sp1 binding sites within the egfr promoter region to be occupied by Sp1 in vivo. Despite the dense arrangement of the Sp1 TFBSs, the improved ChIP method was able to individually examine the allocation of all adjacent Sp1 TFBSs at once. The broad applicability of this site-specific ChIP could be demonstrated by analyzing these Sp1 motifs in both osteosarcoma cells and kidney carcinoma tissue.

Conclusions

The ChIP technology is a powerful tool for investigating transcription factors in vivo, especially in cancer biology. The established site-specific enzyme digestion enables a reliable and individual detection option for densely arranged binding motifs in vivo not provided by e.g. EMSA or in vivo footprinting. Given the important function of transcription factors in neoplastic mechanisms, our method enables a broad diversity of application options for clinical studies.

Background

Transcription factors (TFs) are core elements of transcriptional regulation and also play an important role in the systems biology of cancer, which is characterized by changes in the expression levels of certain genes [1]. A complex of more than 20 TF molecules is involved in RNA polymerase II initiation of transcription in the promoter region for the majority of genes [2]. The activity of the transcription machinery is based on the arrangement and the occupancy of transcription factor binding sites (TFBSs) along the 5'-region of the gene. Because of this dense arrangement and the necessity to analyze the individual occupancy of the TFBSs to establish regulation models, there is a strong demand for methods that enable this type of individual analysis. So far, several methods have been developed for the identification and analysis of TFBSs. A commonly used method to verify TFBSs is the electrophoretic mobility shift assay (EMSA) [3].
Indeed, even though the gel mobility shift analysis provides a fast and easy identification of which nucleotides are required for TF binding, it does not work under in vivo conditions [4]. On the other hand, the method of in vivo footprinting [5] enables the investigation of protein binding in living cells, but this technique is only capable of identifying DNA regions that are bound by protein, being unable to identify which protein is responsible for the observed footprint [4,6]. In contrast, chromatin immunoprecipitation (ChIP) offers a distinct advantage over EMSA and in vivo footprinting, since the ChIP technique not only specifies which nucleotides are bound, but also identifies the interacting protein(s) in the context of in vivo samples [7]. In this context we use the term in vivo to refer to any experiments performed on living cells, whether within or outside a whole organism (sometimes referred to as ex vivo). Specific modifications of the ChIP assay exist to enable the analysis of mammalian tissues, thereby allowing the detection of differences in the interaction of transcription factors and promoter regions of genes in normal and neoplastic tissues [8,9]. However, the standard ChIP has its limitations. The applied fragmentation techniques (sonication or enzymatic DNA restriction by MNase digestion) are unspecific. The individual analysis of neighboring TFBSs is therefore limited, since the standard ChIP technique does not provide DNA cleavage at specific positions flanking a sole binding motif (see Figure 1). Approaches using restriction enzyme digestion instead of the standard methods to fragment chromatin in ChIP, in order to restrict analysis to particular gene regions or transcription factor binding sites, have been previously described [10,11]. Nevertheless, the enzyme-based DNA fragmentation used in these procedures is applied only after immunoprecipitation or in combination with sonication. Hence, these ChIP variations provide neither an individual analysis of closely neighboring TFBSs nor a differentiation between occupied and non-occupied sites. Accordingly, considering the currently available methodology and the complexity of the protein-DNA interaction within transcriptionally active gene regions, the individual analysis of neighboring TFBSs is still a challenging task. Thus, the aim of this study was to develop and optimize a ChIP technique for the specific and individual analysis of neighboring TFBSs at once, in both cell culture and tissue material.

Methodological design

In order to support the individual analysis of adjacent TFBSs, we have developed an improved ChIP assay by replacing the traditionally used random fragmentation step with a site-specific enzyme digestion. This site-specific ChIP allows the immunoprecipitation and enrichment of DNA fragments containing only one TFBS, thus permitting to distinguish between adjacent occupied and non-occupied binding sequences in vivo. The principle of the site-specific ChIP and its advantages over similar procedures like EMSA and in vivo footprinting is exemplified in Figure 1. The site-specific ChIP was established in the challenging sequence environment of the Epidermal Growth Factor Receptor gene (egfr), focusing on the well-described transcription factor Sp1 (for the experimental design see Figure 2). The egfr contains a GC-rich promoter region in which Sp1 has been previously described to bind to four sites [12,13].
We reanalyzed and verified these Sp1 binding sites using the TFBS prediction program MatInspector V7.4.8 [14,15]. A listing of the results obtained by the MatInspector analysis is shown in Table 1. The binding motifs are located, as expected, at closely neighboring positions between -471 and -88 bp upstream of the egfr translational start codon (Table 1 and grey oval disks, Figure 2). Three endonucleases were utilized for the site-specific restriction: BfaI, RsaI and SacI. The immunoprecipitated sequences were analyzed by PCR using two sets of primer assays (Table 2). The first set consisted of specific primer assays targeting each binding site and was designed to measure only the specific signal of occupied TFBSs (grey bars, Figure 2). For verification purposes we designed a second set of primer assays to control the specific enzyme digestion by PCR, each assay spanning one enzyme cutting site (black bars, Figure 2). In the case of a successful enzyme digestion, these primer combinations should not allow the detection of the targeted sequence. The procedure was tested with the EGFR-expressing osteosarcoma cell line HOS and samples of kidney carcinoma tissue. For verification purposes, the Sp1-directed site-specific ChIP was also tested on the osteosarcoma cell lines MNNG, U2-OS and SJ-SA-1 (data not shown).

Analysis of the allocation of Sp1 binding sites in EGFR-expressing cells

The occupation of the four known Sp1 sites was clearly confirmed in all analyzed osteosarcoma cell lines by application of the site-specific ChIP, as shown in Figure 3 for the case of the HOS cell line. All Sp1-binding-site-targeting primer assays produced amplicons of the expected lengths when DNA immunoprecipitated with anti-Sp1 antibody was used as template (white arrows, Figure 3A). Moreover, PCR analysis of the input control and the anti-IgG ChIP-DNA was negative and confirmed the specificity of the site-specific ChIP (Figure 3A). The control using GAPDH primers resulted in amplification of the targeted region only when the input control was used as template, showing the accuracy of the site-specific ChIP (Figure 3B).

[Figure 2: Experimental design of the site-specific ChIP. Sp1 TFBSs within the egfr promoter region are shown as grey oval disks with positions relative to the translational start codon (ATG); enzyme cutting sites of RsaI, BfaI and SacI are depicted by dashed lines; grey bars denote the products of the primers detecting occupied Sp1 TFBSs (a, b, c, d), and black bars the regions targeted by the enzyme digestion control primers, whose signals indicate a failed fragmentation.]
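The recognition sequences of the three endonucleases (BfaI: CTAG; RsaI: GTAC; SacI: GAGCTC) make the fragmentation logic easy to check in silico. The minimal Python sketch below uses a hypothetical promoter stretch and motif positions, and simplifies the cut position to the start of each recognition site (real enzymes cut within or after the site); a usable design yields exactly one TFBS per fragment.

import re

# Recognition sequences of the three endonucleases used in the assay.
ENZYMES = {"BfaI": "CTAG", "RsaI": "GTAC", "SacI": "GAGCTC"}

def fragments(seq):
    # Split seq at every recognition site; cut position simplified to the
    # start of the site for illustration.
    cut_points = sorted({m.start()
                         for site in ENZYMES.values()
                         for m in re.finditer(site, seq)})
    bounds = [0] + cut_points + [len(seq)]
    return list(zip(bounds, bounds[1:]))

def sites_per_fragment(seq, tfbs_positions):
    # Count how many predicted TFBS positions fall on each fragment.
    return [sum(a <= p < b for p in tfbs_positions)
            for a, b in fragments(seq)]

# Hypothetical promoter stretch and Sp1 motif positions, for illustration only.
promoter = "AAGGGGCGGGGCTAGTTGGGCGGGGTACCCGGGCGGGAGCTCAGGGGCGGGT"
sp1_positions = [2, 18, 30, 44]
print(sites_per_fragment(promoter, sp1_positions))  # -> [1, 1, 1, 1]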
[Table 1: MatInspector predictions for the four Sp1 TFBSs in the egfr promoter, listing position relative to the translational start codon (ATG), core similarity, matrix similarity and matched sequence. The maximum core similarity of 1.0 is only reached when the most conserved bases of a matrix match exactly; a "good" matrix match usually has a similarity > 0.80; the "core sequence" comprises the (usually 4-5) most conserved positions of the matrix [14,15].]

[Figure 3: Site-specific ChIP on cell culture extracts - individual detection of Sp1 TFBSs within the egfr promoter region. Chromatin from lysed nuclei of formaldehyde-fixed HOS cells was fragmented by specific enzyme digestion and immunoprecipitated with normal rabbit IgG (negative control) or polyclonal Sp1 antiserum; non-immunoprecipitated chromatin served as total input control. DNA was analyzed by PCR with the Sp1-TFBS-targeting or restriction-site-flanking primers of Figure 2; primers amplifying a region within the 3'-UTR of the GAPDH gene served as negative control. The egfr promoter region with all investigated Sp1 TFBSs and enzyme cleavage sites is shown in the center of the image.]

The control of the enzymatic digestion using primers flanking the enzyme cutting positions showed no amplification of the targeted sequence when DNA immunoprecipitated with anti-Sp1 antibody was used as template, proving that the DNA was cleaved at the intended sites (Figure 3C). In contrast, the same reactions using uncleaved HOS DNA from the same lysate as template produced the expected amplicons encompassing the enzyme cutting positions (Figure 3C).

Application of Sp1-directed site-specific ChIP on EGFR-expressing tissue samples

The application of the modified site-specific ChIP to kidney carcinoma tissue revealed results comparable to the experiments with cells, being equally successful (Figure 4). Interactions of Sp1 with all four analyzed binding sites within the egfr promoter could be individually detected using the site-specific ChIP technique in combination with the Sp1-TFBS-targeting PCR (Figure 4A). The enzyme digestion control confirmed the specificity and precision of the enzymatic fragmentation (Figure 4C).

Discussion

Even in the complex sequence environment of the egfr, the site-specific ChIP proved to be an adequate and effective method for the individual analysis of TFBSs in vivo. The site-specific ChIP provides an improvement over the standard ChIP method and further techniques for TFBS analysis. It unites the benefits of both EMSA and in vivo DNase I footprinting - the specific detection and localization of neighboring occupied binding sites in vivo - in one assay. Hence, the usually performed verification of the obtained results by a second method is unnecessary (Figure 1). In combination with the enzyme digestion control by PCR, using primers targeting regions spanning each enzyme cutting site, the site-specific ChIP may be generally applied for the investigation of any transcription factor recognition site along the human genome.
Completeness in a molecular sense can only be achieved by analysis methods which detect in situ all possible realised combinations of truly existing binding sites. No modern technology (including next-generation sequencing) can assure that up to now, so our approach reasonably approximates this completeness.

[Figure 4: Verification of the Sp1-TFBS-targeting site-specific ChIP on tissue extracts. Formaldehyde-fixed kidney carcinoma samples were processed as described in the legend to Figure 3, with the addition of uncleaved kidney DNA from whole-cell lysate (uc-Kidney; lanes 21, 24, 27, 30, 33); white arrows depict the bound Sp1 binding sites.]

However, in combination with a preceding consensus analysis using an adequate algorithm (like MatInspector), our site-specific ChIP is an effective method for the selective identification of complex TFBS structures. For all restriction sites we used restriction enzymes (BfaI, RsaI and SacI) which exhibit exactly the same digestion and buffering conditions (incubation at 37°C and 100% activity in the following restriction buffer: 20 mM Tris-acetate, 50 mM potassium acetate, 10 mM magnesium acetate, 1 mM dithiothreitol, pH 7.9 at 25°C). The concern that TFBSs might not be flanked by naturally occurring restriction enzyme sites, or might not be separated by an appropriate distance from each other, seems in our experience to describe a rare event; at least in the case of the egfr, there were no complications. The suitability of the used anti-Sp1 antibody for application in the ChIP procedure has been checked and assured by the supplier (Merck Millipore Co., Billerica, MA). The utilized buffers are common to all antibody-based ChIP assays. By adjusting sample preparation and chromatin isolation procedures, the technique is also applicable to tissue material, enabling a broad diversity of application options for clinical and molecular studies. The site-specific ChIP is not dedicated to high-throughput screening approaches but instead supports the functional analysis of the complex regulation scheme of a single gene in a systems biology view, e.g. the interaction pattern between occupied TFBSs. The focus of this work is to develop a method for (a) the detection of new binding sites in combination with a preceding consensus analysis, and (b) the individual examination of single TFBS allocation in a complex system of neighboring binding motifs of the same type. Hence, the site-specific ChIP technique does not provide an assessment of whether the TF binding is associated with gene transcription, which is a common limitation of all ChIP variations. For answering the question of functional relevance, further methods have to be employed, e.g. site-directed mutagenesis or deletion analysis of TFBSs and promoter activity investigation in vivo. So, the main benefit of the site-specific ChIP lies in the investigation of specific regulatory regions in greater detail.

Conclusions

The site-specific ChIP, which uses an endonuclease-based, TFBS-specific DNA fragmentation followed by a PCR-based enzyme digestion control, opens new possibilities for the functional investigation of complex neighboring TFBS systems of genes, even within GC-rich regions.
In combination with methods for TFBS prediction [16-18], ChIP techniques and/or sequencing, it is a specific and sensitive tool for the detailed characterization of the activity of neighboring binding motifs at once. The site-specific ChIP enables the individual and reliable analysis of known and predicted binding motifs in vivo, on both cultured cells and mammalian tissue material. Hence, our method enables the detection of differences in the interaction of transcription factors and promoter regions of genes in normal and neoplastic tissues, thereby opening new possibilities for the investigation of the transcriptional regulation of genes involved in cancer biology.

Site-specific chromatin immunoprecipitation

For the cell line experiments we used 10^7 osteosarcoma cells which were fixed with a 1% formaldehyde solution. The nuclei were isolated using cell lysis buffer, separated by centrifugation and resuspended in an adequate standard restriction buffer (New England Biolabs Inc., Ipswich, MA). Chromatin was fragmented by subjecting the nuclei to restriction enzyme digestion according to Kang et al. [19], including the following modifications. A simultaneous application of three restriction endonucleases - BfaI, RsaI and SacI (New England Biolabs Inc., Ipswich, MA) - was performed to cleave the DNA at positions flanking each Sp1 binding site within egfr intron 1. A complete DNA digestion was achieved by chromatin treatment with 200 U of each enzyme for 4 h at 37°C and a further 100 U of each enzyme for an additional 16 h at 37°C. The nuclei were then incubated with a 200 U aliquot of each enzyme and 200 U of RNase for 2 h at 37°C. Completion of restriction enzyme fragmentation was verified by electrophoretic separation on a 1.5% agarose gel. The optimization of the specific enzyme-digestion-based DNA fragmentation is shown in Additional file 1: Figure S1. Chromatin was isolated using nuclei lysis buffer. The lysate was diluted 10-fold in ChIP dilution buffer, and equal aliquots of cleaved chromatin (equivalent to 2 million cells) from a single cell lysate were used for immunoprecipitation with antibodies against Sp1 and IgG (mock IP), as well as for a control of the amount of input DNA used in precipitations (input control). 3 µg of the antibodies against Sp1 (Merck Millipore Co., Billerica, MA) and IgG (Sigma-Aldrich Co., St. Louis, MO) were used. The antibody was captured with protein A/G agarose beads (Santa Cruz Biotechnology Inc., Santa Cruz, CA). After washing the bead-antibody-chromatin complexes under stringent conditions, reverse crosslinking and purification of ChIP DNA were performed using Chelex-100 according to Nelson et al. [20]. DNA from the non-antibody whole-cell control supernatant was isolated as described in the literature [20] and processed in the same way as the IP samples. All steps not mentioned here were performed according to established standards [20]. The tissue samples were processed as follows: 0.03 g of frozen kidney carcinoma tissue was chopped into small pieces and thawed in freshly prepared PBS containing 1% formaldehyde for crosslinking. The tissue was homogenized to avoid cell clumps, and the ChIP was performed as described above for the cell culture experiments.

PCR-based detection of immunoprecipitated TFBSs

The detection of the immunoprecipitated Sp1 binding sites was done by PCR.
For each sample (DNA extracted from either the input chromatin, the normal rabbit IgG or the anti-Sp1-immunoprecipitated chromatin), 2.5 µL of ChIP-DNA template, 5 pmol of Sp1-binding-site-specific primer, 1 U of Taq DNA polymerase, 1× PCR reaction buffer II, 400 µM of each dNTP (Applied Biosystems) and 1.3 mM MgCl2 were mixed for PCR amplification in a 25 µL reaction volume. A defined amount of 4 ng lymphocyte DNA was used for the internal control of each PCR reaction approach. The PCR was started with an initial denaturation at 95°C for 9 min, followed by a secondary denaturation at 98°C for 1 min; next came 40 cycles of denaturation at 98°C for 10 s and annealing at 65°C for 2 min, with a single product extension step at 72°C for 7 min. The PCR products were separated on a 2% agarose gel and visualized by ethidium bromide staining.

PCR-based controls of the site-specific ChIP

The control of the completeness of the enzyme digestion was also performed by PCR as described, carried out on DNA templates prepared from the Sp1-immunoprecipitated chromatin sample and the whole-cell sample (input control), as well as uncleaved HOS cell DNA from the same lysate (positive control). For the negative control, primers spanning an unrelated genomic region within the 3'-UTR of the GAPDH gene were used, yielding a 97-bp product.

Availability and requirements

Project name: Site-specific chromatin immunoprecipitation: A selective method to individually analyze neighboring transcription factor binding sites in vivo
Project home page: none
Operating systems: none
Programming language: none
Other requirements: MatInspector V7.4.8 (http://www.genomatix.de)
License: none
Any restriction to use by non-academics: none
From FAIR to RHIC, hyper clusters and an effective strange EoS for QCD

Two major aspects of strange particle physics at the upcoming FAIR and NICA facilities and the RHIC low energy scan will be discussed. A new distinct production mechanism for hypernuclei will be presented, namely the production abundances for hypernuclei from Λ's absorbed in the spectator matter in peripheral heavy ion collisions. As strangeness is not uniformly distributed in the fireball of a heavy ion collision, the properties of the equation of state depend on the local strangeness fraction. Similarly, inside neutron stars strangeness is not conserved, and lattice studies of the properties of finite-density QCD usually rely on an expansion of thermodynamic quantities at zero strange chemical potential, hence at non-zero strange densities. We will therefore discuss recent investigations of the EoS of strange QCD and present results from an effective EoS of QCD that includes the correct asymptotic degrees of freedom and a deconfinement and chiral phase transition.

Introduction

The objective of the low energy heavy ion collider programs, at the RHIC facility on Long Island and the planned projects NICA in Dubna and FAIR near the GSI facility, is to find evidence for the onset of a deconfined phase [1,2]. At the highest RHIC energies, experiments [3,4,5,6] have already confirmed a collective behavior of the created system, signaling a change in the fundamental degrees of freedom. Lattice QCD calculations indeed expect a deconfinement crossover to occur in systems created at RHIC. As theoretical predictions on the thermodynamics of finite-density QCD are difficult (see e.g. [7,8,9,10]), one hopes to experimentally confirm a possible first-order phase transition, and consequently the existence of a critical endpoint, by mapping out the phase diagram of QCD in small steps. Hadronic bulk observables which are usually connected to the onset of deconfinement are the particle flow and its anisotropies, as well as particle yields and ratios [11-26]. It has often been proposed that, e.g., the equilibration of strangeness would be an indication for the onset of a deconfined phase, although this idea is still under heavy debate [27,28,29,30,31].
Two main aspects of strangeness physics, closely connected to the equilibration of strangeness and the hyperon interactions, are the formation of nuclear clusters with strange content and the bulk properties of very dense nuclear matter with finite strangeness content.

Hypernuclei

Exotic forms of deeply bound objects with strangeness have been proposed [32] as states of matter, consisting either of baryons or of quarks. The H di-baryon was predicted by Jaffe [33] and, later, many more bound di-baryon states with strangeness were proposed using quark potentials [34,35] or the Skyrme model [36]. However, the non-observation of multi-quark bags, e.g. strangelets, is still one of the open problems of intermediate and high energy physics. On the hadronic side, hypernuclei have been known to exist and to be produced in heavy ion collisions for a long time [37,38,39,40]. Metastable exotic multi-hypernuclear objects (MEMOs) as well as purely hyperonic systems of Λ's and Ξ's were introduced in [41,42] as the hadronic counterparts to multi-strange quark bags [43,44]. A motivation of hypernuclear physics is that it offers a direct experimental way to study hyperon-nucleon (YN) and hyperon-hyperon (YY) interactions (Y = Λ, Σ, Ξ, Ω). The nucleus serves as a laboratory offering the unique opportunity to study basic properties of hyperons and their interactions.

Hypernuclei production in the spectator fragments

In this work we will focus on the production of hypernuclei in high energy collisions of Au+Au ions. In such systems strangeness is produced abundantly and is likely to form clusters of different sizes. We can discriminate two distinct mechanisms for hypercluster formation in heavy ion collisions. The first is the formation of hypernuclei in the hot and dense fireball of the most central heavy ion collisions, where the general assumption is that hypernuclei are formed at or shortly after the hadronisation/chemical freeze-out of the produced hadrons. In this work we focus on a different production mechanism: the absorption of hyperons in the spectator fragments of non-central heavy ion collisions. In this scenario we are interested in hyperons which propagate with velocities close to the initial velocities of the nuclei, i.e., in the vicinity of the nuclear spectators. To calculate the absorption rate we employed the Ultra-relativistic Quantum Molecular Dynamics model (UrQMD v2.3) [45,46] and the intra-nuclear cascade model (DCM) developed in Dubna [47], so as to estimate the model dependence of the predictions. The hyperons produced in the hot and dense stage of a heavy ion collision can be absorbed by the spectators if their kinetic energy in the rest frame of the residual nucleus is lower than the attractive potential energy, i.e., the hyperon potential given in [48], parametrized with α = 57.5 MeV and β = 0.522. The local nucleon density ρ at the hyperon's position is calculated within the hadronic transport models; the details of the computation and more results on the properties of the absorbed hyperons can be found in [49]. Figure 1 shows the resulting probabilities for the formation of a conventional and a strange spectator residual (top panels), and their mean mass numbers (bottom panels), versus the number of captured Λ hyperons (H), calculated with the DCM and UrQMD models for p+Au and Au+Au collisions at an energy of 2 GeV per nucleon (left panels) and 20 GeV per nucleon (right panels). One clearly observes that the production of heavy multi-hypernuclei is possible at FAIR.
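The capture criterion itself is straightforward to state in code. The Python sketch below boosts a hyperon's lab-frame four-momentum into the spectator rest frame and compares the kinetic energy there with an assumed potential depth; since the density-dependent parametrization of [48] is not reproduced above, the depth is left as a free parameter, with 30 MeV (a typical Λ single-particle well depth at saturation density) used only as a placeholder.

import math

M_LAMBDA = 1.1157  # Lambda mass in GeV

def ekin_in_spectator_frame(p_lab, beta_spec, m=M_LAMBDA):
    # Kinetic energy (GeV) of a hyperon with longitudinal lab momentum p_lab
    # after a 1D boost into a spectator moving with velocity beta_spec along
    # the beam axis (transverse momentum neglected for illustration).
    e_lab = math.hypot(p_lab, m)                  # sqrt(p^2 + m^2)
    gamma = 1.0 / math.sqrt(1.0 - beta_spec**2)
    e_spec = gamma * (e_lab - beta_spec * p_lab)  # boosted energy
    return e_spec - m

def is_captured(p_lab, beta_spec, u_depth=0.030):
    # Capture if the kinetic energy in the spectator rest frame is below the
    # attractive potential depth u_depth (GeV); a placeholder for the
    # density-dependent potential of the source.
    return ekin_in_spectator_frame(p_lab, beta_spec) < u_depth

# A hyperon nearly co-moving with the spectator is easily captured:
print(is_captured(p_lab=2.10, beta_spec=0.88))  # -> True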
The strange equation of state

The strange EoS is of particular interest for the understanding of several aspects of QCD:

1. As has been shown in [50], the net strangeness distribution in the phase space of a heavy ion collision can fluctuate, although the total net strangeness is zero. To dynamically treat such a system, the equation of state for ρ_s ≠ 0 needs to be evaluated.

2. Compact stars are very dense and long-lived objects. Due to β-equilibrium inside the star, net-strangeness conservation is violated by the weak interaction.

3. Lattice QCD results at finite µ_B are often evaluated through a Taylor expansion in µ_B at µ_B = µ_S = 0. A vanishing strangeness chemical potential induces a non-vanishing net strangeness, which means that the equation of state of net-strange matter is calculated.

First investigations of the strange equation of state were done in [51], where one usually considered a first-order transition from a hadron to a quark phase. In our study we employ the recently developed SU(3)_f parity doublet model for hadronic matter and its extension to quark degrees of freedom. In this approach an explicit mass term for baryons is possible, and the signature of chiral symmetry restoration is the degeneracy of the baryons and their respective parity partners. An effective quark and gluon contribution is added via a PNJL-like approach [52,53]. This model uses the Polyakov loop Φ, defined in the usual way, as the order parameter for deconfinement. The model allows for a smooth transition from a hadronic to a quark-dominated system, where the order parameters and thermodynamic quantities are in reasonable agreement with recent lattice data. For a detailed description of the parity model and comparisons with the lattice we refer to [54].

Figure 2 presents our results for the order parameter of the chiral phase transition as a function of µ_B and µ_S at fixed temperature. The red lines indicate paths of constant f_s = ρ_s/ρ_B, the strangeness per baryon fraction. At the temperature T = 56 MeV, the critical endpoint of the chiral phase transition was located at µ_B^CEP ≈ 1150 MeV. We observe that for increasing f_s the change in the order parameter becomes steeper, and the value of T_CEP increases slightly, to T_CEP = 68 MeV for f_s = 0.5. For a gas of deconfined quarks there is a strong correlation between baryon number and strangeness. In a hadronic medium such a correlation is usually not trivial, as strangeness can be found in both mesons and baryons. These considerations led to the idea that the so-called strangeness-baryon correlation is sensitive to the deconfinement and/or chiral phase transition [55]. On the other hand, the strangeness to baryon ratio f_s should also be sensitive to any phase transition at finite baryon densities. On the lattice such quantities are usually calculated as functions of the expansion coefficients. The information that can be extracted from these quantities is exemplified in Figure 3. Here we show c_BS as a function of temperature for µ_B/T = 3 and µ_S = 0. One can observe a distinct peak at T ≈ 150 MeV ⇒ µ_B = 450 MeV, which can be identified with the crossover transition of the chiral condensate. Such a behavior of c_BS has been predicted and has also been shown to exist in lattice data [56]. At higher temperatures the strangeness to baryon correlation approaches unity, which closely resembles the behavior of the quark and gluon fraction λ = e_{Quarks+Gluons}/e_{Tot} of the system.
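For reference, the baryon-strangeness correlation coefficient used above is conventionally defined through the fluctuations and co-fluctuations of the conserved charges; the source does not reproduce the formula, so the standard definition (Koch, Majumder & Randrup) is quoted here:

C_{BS} \;=\; -3\,\frac{\langle N_B N_S\rangle - \langle N_B\rangle\langle N_S\rangle}{\langle N_S^2\rangle - \langle N_S\rangle^2} \;=\; -3\,\frac{\chi_{BS}}{\chi_{SS}}

In an ideal quark gas every unit of strangeness is carried by a quark with baryon number ∓1/3, so C_BS → 1, consistent with the approach to unity noted above; in a hadron gas strangeness carried by mesons dilutes the correlation.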
In comparison, Figure 3 also shows the temperature dependence of f_s at µ_S/T = 1 and µ_B = 0. This quantity is even more sensitive to the quark-gluon fraction than c_BS, while it seems not very sensitive to the chiral phase transition.

Summary

We presented results on the production of hypernuclear systems in high energy collisions of heavy ions. In particular, we have investigated the production of hyperons in peripheral relativistic heavy ion collisions and their capture by the attractive potential of spectator residues. The absorption rate of hyperons in the excited spectators is shown to be quite substantial. This opens the possibility to study the phase transition in nuclear matter with a strangeness admixture and to reveal information about the properties of hypernuclei, their binding energies, and, finally, YN and YY interactions. In the second part of this work we discussed properties of the phase diagram at finite net-strange density within an SU(3) parity doublet model. We find that the location of the critical endpoint shifts to a slightly higher temperature for finite net strangeness, the situation implicitly probed by lattice expansions at µ_S = 0. In particular, the strangeness-baryon correlation factor c_BS and the strangeness per baryon fraction f_s both prove to be sensitive to the deconfined fraction of the system, while c_BS also shows a distinct peak at the chiral crossover at finite chemical potential.

This work was supported by the Hessian LOEWE initiative Helmholtz International Center for FAIR, EMMI, and used computational resources provided by the (L)CSC at Frankfurt.
Quality evaluation of Sojae Semen Praeparatum by HPLC combined with HS-GC-MS

Sojae Semen Praeparatum is a popular fermented legume product in China, with a delicious flavour and health benefits. However, the quality control methods for Sojae Semen Praeparatum are currently incomplete, and there are no standards defining its degree of fermentation. In this study, we introduced colour, acid value, ethanol-soluble extractives and the content of six flavonoid components to evaluate the quality of Sojae Semen Praeparatum comprehensively. Multiple linear regression was used to streamline the 11 evaluation indicators to 4 and to confirm the evaluating feasibility of the four indicators. The degree of fermentation and the odour of Sojae Semen Praeparatum were analyzed by headspace gas chromatography-mass spectrometry, and two types of odours, 'pungent' and 'unpleasant', could distinguish over-fermented Sojae Semen Praeparatum. Our research developed fermentation specifications and quality standards for Sojae Semen Praeparatum.

Introduction

The popularity of fermented foods has surged in recent years due to their health benefits. Postulated mechanisms for the health effects of fermented foods include the potential probiotic effects of microorganisms, the conversion of bioactive peptides, biogenic amines, and phenolic compounds into bioactive compounds during fermentation, and the reduction of anti-nutrients. Fermented legume products are widely consumed globally [1]. Through fermentation, the taste, appearance, nutrient digestibility, nutritional value, texture and shelf life of beans are improved [2], while protease inhibitors, lectins, oligosaccharides and phytates (non-nutritional compounds) present in bean seeds are reduced [3]. In addition, the fermentation of legumes leads to an increase in phenolic compounds in legume seeds [4]. Studies have shown that fermented legumes exhibit anti-diabetic and anti-cancer properties by acting as antioxidants and modulating enzymes such as acetylcholinesterase, glucosidase and amylase [5-7]. Fermented legume products first originated in China, and to this day Chinese people still consume many fermented legume products in their diets, such as pickled tofu, soybean paste and Sojae Semen Praeparatum [8,9]. As a fermented soy product, it can be used as a seasoning or cooked as a dish. Sojae Semen Praeparatum is a fermented processed product of the mature seeds of Glycine max (L.) Merr., with a strong fresh flavour [10]. Sojae Semen Praeparatum can be further processed into flavoured Douchi, which has gained popularity all over the world. Sojae Semen Praeparatum needs to be fully fermented to fulfil its health functions. It is considered both a food and a medicine by the Chinese Traditional Medicine Administration [11], with a high concentration of polyphenols, mainly isoflavones and isoflavone glycosides, which show hepatoprotective and antioxidant properties [12]. Many studies have shown that Sojae Semen Praeparatum improves depression-like behaviour in chronically stressed rats [13,14]. However, its quality varies greatly due to the quality of the raw materials and the different fermentation conditions. The existing quantitative means of evaluating the quality of Sojae Semen Praeparatum are mostly based on the total content of two flavonoid compounds, daidzein and genistein [15,16]. However, it was found that most fully fermented, semi-fermented and over-fermented Sojae Semen Praeparatum could meet this requirement.
Its fragrance, which may be due to pyrazine compounds [17], is another traditional quality indicator. Studies on the odour-presenting compounds of Sojae Semen Praeparatum are scarce, especially regarding the flavour constituents of Sojae Semen Praeparatum with a bad odour. The odour of Sojae Semen Praeparatum is related to the degree of fermentation: it becomes more intense as fermentation proceeds, and excessive fermentation can produce a bad odour. However, there are no volatile-compound data for distinguishing complete from excessive fermentation. Laboratory studies have found that during controlled fermentation of the same batch of raw material, the flavonoid content, acid value, ethanol-soluble extractives content and redness of Sojae Semen Praeparatum increase, while the flavonoid glycoside content and brightness decrease. The odour changes gradually during fermentation. It is therefore hypothesized that the above indicators may be significant for evaluating the quality of Sojae Semen Praeparatum purchased from the market. In this study, the above characterization data of 36 batches of commercial and laboratory-prepared Sojae Semen Praeparatum were analyzed. The indicators suitable for the quality assessment of Sojae Semen Praeparatum were selected by multiple linear regression. The flavour components related to the degree of fermentation were detected by headspace gas chromatography-mass spectrometry (HS-GC-MS).

Source of Sojae Semen Praeparatum samples

Glycine max (L.) Merr. was purchased from Anguo Herbal Market; Artemisia annua L. (lot no. C427210202) and Morus alba L. (lot no. C449210402) were purchased from Anguo Shenhao Pharmaceutical Co., Ltd. All were identified by Yanlin Chen, a researcher from China Traditional Chinese Medicine Co. Information on the 33 batches of Sojae Semen Praeparatum samples from markets and companies is given in Table S1. All Sojae Semen Praeparatum samples were stored under seal until analysis.

Colour determination

The colour was determined according to previously published methods [18]. Briefly, Sojae Semen Praeparatum powder was passed through a 60-mesh sieve before determination of brightness (L), redness (a) and yellowness (b).

Acid value

The acid value was determined referring to a previously published method [19]. Briefly, 1.5 g of Sojae Semen Praeparatum (sieved through 60 mesh) was placed in a 250 mL conical flask, 50 mL of an ethanol-petroleum ether (1:1) mixture was added, and the flask was left for 30 min. The mixture was filtered, the residue was washed twice with 20 mL of the ethanol-petroleum ether (1:1) mixture, and the filtrates were combined. After adding 5 drops of phenolphthalein indicator, the solution was titrated with potassium hydroxide solution until the pink colour persisted for 30 s without fading.

Ethanol-soluble extractives

Sojae Semen Praeparatum was crushed (sieved through 60 mesh) and 2 g was placed in a conical flask; 100 mL of 70% ethanol was added, and the flask was sealed and weighed. After standing for 1 h, the mixture was refluxed for 1 h, cooled, and made up to the original weight with 70% ethanol. A 25 mL aliquot of the filtrate was placed in an evaporating dish, evaporated in a water bath and dried at 105 °C for 3 h, then cooled in a desiccator for 30 min and weighed precisely. The content of ethanol-soluble extractives (%) was calculated on a dried-product basis.
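Both determinations reduce to simple arithmetic. The Python sketch below uses the conventional acid-value definition (mg KOH consumed per g of sample; molar mass of KOH 56.11 g/mol, not spelled out in the text above) and the fourfold aliquot scaling implied by evaporating 25 mL of a 100 mL extract; all numeric inputs are illustrative.

KOH_MOLAR_MASS = 56.11  # g/mol

def acid_value(v_koh_ml, c_koh_mol_l, sample_g):
    # Conventional acid value: mg KOH consumed per g of sample, assuming
    # titration to the persistent-pink phenolphthalein endpoint.
    return v_koh_ml * c_koh_mol_l * KOH_MOLAR_MASS / sample_g

def extractives_percent(residue_g, sample_g, aliquot_ml=25.0, total_ml=100.0):
    # Ethanol-soluble extractives (%): dried residue of an aliquot, scaled
    # to the whole extract and expressed per g of (dry) sample.
    return residue_g * (total_ml / aliquot_ml) / sample_g * 100.0

# Illustrative numbers, not measured values:
print(acid_value(v_koh_ml=2.4, c_koh_mol_l=0.1, sample_g=1.5))   # ~9.0 mg/g
print(extractives_percent(residue_g=0.12, sample_g=2.0))          # 24.0 %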
Content of six flavonoid components

The method used referred to the literature [20]. Briefly, chromatographic separation was performed on an Agilent Zorbax column, following the conditions of [20].

Volatile components analysis

4 g of Sojae Semen Praeparatum was placed in a 20 mL headspace vial, and the mass spectra of the peaks were searched against the NIST14 library based on the GC-MS total ion current chromatogram. A match score ≥80 and a reverse match index ≥700 were required to identify a volatile component; in addition, the retention index (RI) was used for characterization. The detailed HS sampling, GC separation, and temperature ramp can be found in the supplementary materials.

Multivariate statistical analysis

The data were statistically analyzed and plotted using GraphPad Prism 9 and the R language. Data processing was carried out in RStudio. Principal component analysis (PCA) and stacked plots were plotted using the ggplot2 package. Correlation analysis was plotted using the corrplot package. Multiple linear regression was diagnosed using the gvlma package and plotted using ggpubr and ggplot2.

Content of 6 flavonoids of Sojae Semen Praeparatum from different sources

As quality control components, daidzin, glycitin, genistin, daidzein, glycitein and genistein were detected. The most significant changes in Sojae Semen Praeparatum during fermentation were a gradual decrease in flavonoid glycosides and a gradual increase in the free flavonoids (aglycones). This reflects the hydrolysis of flavonoid glycosides by microorganisms and suggests that fully fermented Sojae Semen Praeparatum should contain fewer flavonoid glycosides and more free flavonoids. The Chinese Pharmacopoeia (2020 edition) stipulates that the total amount of daidzein and genistein in Sojae Semen Praeparatum should not be less than 0.4 mg/g. Here, we referred to this standard and added three flavonoid glycosides and one further flavonoid as quality control components (Table S2, Fig. 1A). Ten Sojae Semen Praeparatum samples from the market (out of a total of 16 samples) had less than 0.4 mg/g of total daidzein and genistein and were considered inferior. All batches of Sojae Semen Praeparatum from the company and the laboratory met the standard. The flavonoid glycoside content of Sojae Semen Praeparatum from the market was significantly higher (p < 0.05), and the aglycone content lower (p < 0.001), than in the samples from the company and laboratory, indicating that most samples from the market were not fully fermented. Some Sojae Semen Praeparatum contained a significant amount of undecomposed glycosides and had not reached a fully fermented state, even though the aglycone content met pharmacopoeial standards.

Acid value, colour, and ethanol-soluble extractives of Sojae Semen Praeparatum from different sources

The increase in ethanol-soluble extractives and acid value is due to the breakdown of macromolecules such as protein, cellulose and fat during the fermentation of Sojae Semen Praeparatum [21,22]. Along with the accumulation of enzymatic products [23], Sojae Semen Praeparatum slowly deepens in colour until the section is brownish-black. The acid value, redness (a) and ethanol-soluble extractives content of Sojae Semen Praeparatum from the market were significantly lower (all p < 0.001), and L was significantly higher (p < 0.001), than in the samples from the company and laboratory; the yellowness (b) did not differ significantly between samples (p > 0.05) (Table S2, Fig. 1B-D).
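The pharmacopoeial criterion and the fermentation argument above translate directly into a screening rule. The sketch below flags batches whose total daidzein plus genistein falls below 0.4 mg/g and additionally reports the glycoside fraction as a crude fermentation indicator; the fraction threshold is hypothetical, chosen only for illustration.

PHARMACOPOEIA_MIN = 0.4  # mg/g, total daidzein + genistein (ChP 2020)

def screen_batch(flav, glycoside_frac_max=0.5):
    # flav: dict of the six components in mg/g. Returns (acceptable,
    # glycoside_fraction); a high glycoside fraction hints at incomplete
    # fermentation even when the pharmacopoeial criterion is met.
    compliant = flav["daidzein"] + flav["genistein"] >= PHARMACOPOEIA_MIN
    glycosides = flav["daidzin"] + flav["glycitin"] + flav["genistin"]
    aglycones = flav["daidzein"] + flav["glycitein"] + flav["genistein"]
    frac = glycosides / (glycosides + aglycones)
    return compliant and frac <= glycoside_frac_max, frac

# Illustrative batch: meets the 0.4 mg/g rule but is glycoside-rich,
# i.e. probably under-fermented.
batch = {"daidzin": 0.9, "glycitin": 0.2, "genistin": 0.8,
         "daidzein": 0.3, "glycitein": 0.1, "genistein": 0.2}
ok, frac = screen_batch(batch)
print(ok, round(frac, 2))  # -> False 0.76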
The incompletely fermented samples of Sojae Semen Praeparatum from the market, although qualified in terms of the index flavonoids, had significantly lower acid value, ethanol-soluble extractives and redness than the samples from the companies, indicating that the above indicators characterize the degree of fermentation of Sojae Semen Praeparatum well. Identification of Sojae Semen Praeparatum quality based on flavonoids content, acid value, colour and ethanol-soluble extractives To better evaluate the quality of Sojae Semen Praeparatum, principal component analysis (PCA) was performed on the above 11 indicators. The data were scaled before analysis. The top three principal components, each with an eigenvalue greater than 1, cumulatively explained 81.89% of the total variation. PC1 explained 55.28% of the variation and was significantly negatively correlated with flavonoid content and L, and significantly positively correlated with glycoside content, acid value, a, and ethanol-soluble extractives; it can be considered an indicator of fermentation degree. PC2 explained 16.16% of the variation and was positively correlated with flavonoid content, acid value, and ethanol-soluble extractives; it can be considered an indicator of raw material quality. PC3 explained 10.45% of the variation and was mainly positively correlated with b; it can be considered an indicator of yellowness (Tables 1 and 2). Using the composite score of the above three principal components, the five highest-scoring batches of Sojae Semen Praeparatum were LP_1, C_14, LP_3, LP_2 and C_2, indicating that the laboratory-prepared Sojae Semen Praeparatum was more fully fermented and of better quality in the multi-indicator evaluation. Different components are converted at different rates during fermentation; for example, the acid value of Sojae Semen Praeparatum still increases and the colour continues to deepen after the flavonoid glycosides have been completely converted into aglycones. Therefore, three flavonoid aglycones, three flavonoid glycosides, colour, acid value and ethanol-soluble extractives content were selected to evaluate the quality of Sojae Semen Praeparatum in a comprehensive manner. On this basis, Sojae Semen Praeparatum can be classified into 4 classes (Fig. 2A and B). In class A, the flavonoid glycosides are completely converted into aglycones, indicating complete fermentation, with a higher acid value, ethanol-soluble extractives content and redness, and a lower brightness. In class B, the flavonoid glycosides are also completely converted, but the acid value and ethanol-soluble extractives content are lower, which may be related to the fermentation conditions and the quality of the raw materials. The high flavonoid glycoside content, light colour, and low acid value and ethanol-soluble extractives content in class C indicate incomplete fermentation. The low content of flavonoids and their glycosides, together with low acid value and ethanol-soluble extractives, in class D indicates poor raw material quality and incomplete fermentation. Most Sojae Semen Praeparatum samples from the same company fall into the same class, and most company products are in classes A and B.
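A minimal sketch of the composite PCA scoring described above is given below (components with eigenvalues greater than 1 are retained and batches are ranked by a variance-weighted score). The file and column layout are hypothetical, and the study itself carried out this analysis in R rather than Python:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical table: one row per batch, the 11 quality indicators as columns.
df = pd.read_csv("ssp_indicators.csv", index_col="batch")  # assumed file name

X = StandardScaler().fit_transform(df.values)  # scale the data before PCA
pca = PCA().fit(X)

keep = pca.explained_variance_ > 1.0           # Kaiser criterion: eigenvalue > 1
scores = pca.transform(X)[:, keep]
weights = pca.explained_variance_ratio_[keep]  # e.g., 55.28%, 16.16%, 10.45%

# Composite score: variance-ratio-weighted sum of the retained PC scores.
composite = pd.Series(scores @ weights, index=df.index)
print(composite.sort_values(ascending=False).head(5))  # five best batches
```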
There was a significant negative correlation between the contents of the three glycosides and the corresponding aglycone components; glycoside content was negatively correlated with acid value and a and positively correlated with L; aglycone content was positively correlated with acid value, a and ethanol-soluble extractives and negatively correlated with L; and b was not correlated with the other indicators (Fig. 2C and D). To reduce the cost of evaluation, a multiple linear regression model was developed using the above 11 indicators as independent variables and the grade as the dependent variable (classes A, B, C and D denoted by 1, 2, 3 and 4, respectively) (Fig. S1A). With this model, daidzein, genistin, L and ethanol-soluble extractives (ESE) were found to serve as simplified indicators for evaluating the quality of Sojae Semen Praeparatum, with an accuracy of 77.82% (Table S3). Daidzein and ethanol-soluble extractives entered the model as negative indicators, i.e., with negative coefficients on the class number, whereas genistin and L entered with positive coefficients (Fig. S2). Genistin thus represented the glycoside components, daidzein represented the aglycone components, L represented the colour, and ethanol-soluble extractives represented both the acid value and the extractives content. The simplification of the indicators still allowed Sojae Semen Praeparatum to be classified into 4 classes, and the data distribution was more compact (Fig. 2E and F), indicating that the 4 simplified indicators were a good representation of the 11 indicators for the quality evaluation of Sojae Semen Praeparatum (Table S4). Correlation between volatile components and the degree of fermentation of Sojae Semen Praeparatum We consider a good-quality Sojae Semen Praeparatum to be fully fermented rather than over-fermented. However, the previous results do not allow a full determination of the degree of fermentation. Volatile components (VOCs) are produced during fermentation, and some samples in class A showed undesirable odours in the sensory evaluation. Thus, HS-GC-MS was used for further detection of VOCs. The 22 VOCs can be classified as pungent, unpleasant, food-like, special, and odourless (Fig. 3A). The volatile content of Sojae Semen Praeparatum varied considerably between companies and batches, with the overall VOC content of the company samples being higher than that of the market samples (Table S7), suggesting that adequate fermentation may intensify the odour of Sojae Semen Praeparatum. We considered the samples in class A to be fully fermented products. However, more samples in class A contained a greater abundance of VOCs with pungent, unpleasant odours (Fig. 3B). This suggests that although the class A samples showed a good degree of fermentation, some of them were over-fermented and produced undesirable odours. This is also consistent with the sensory evaluation results, where the samples with the highest stench scores were all from class A (Table S7). Those from classes C and D showed a lighter odour, which may be because these Sojae Semen Praeparatum had been stored for a long time before being purchased, causing their odour to dissipate. Overall, we found that better-quality Sojae Semen Praeparatum generally had a stronger odour and the lower grades a lighter odour, which may be related to the degree of fermentation (p < 0.05, R = 0.382) (Fig. 4).
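Stepping back to the regression step above, here is a minimal sketch of fitting the grade (encoded 1-4) on the four simplified indicators and rounding the prediction to the nearest class to estimate the accuracy; the column names and data file are hypothetical, and the study used R (with gvlma diagnostics) rather than Python:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("ssp_indicators.csv")        # assumed file name
X = df[["daidzein", "genistin", "L", "ESE"]]  # the four simplified indicators
y = df["grade"]                               # classes A-D encoded as 1-4

model = LinearRegression().fit(X, y)
pred = np.clip(np.rint(model.predict(X)), 1, 4)  # round to the nearest class
print("coefficients:", dict(zip(X.columns, model.coef_.round(3))))
print(f"in-sample accuracy: {(pred == y).mean():.2%}")  # ~78% in the study
```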
Volatile fatty acids have a strong odour. In an anaerobic, low-acid environment, the bacterium Clostridium butyricum decomposes carbohydrates into butanol and short-chain volatile fatty acids such as acetic acid and butyric acid. Butyric acid has a pungent and unpleasant smell, and a very dilute solution has a sweaty smell; it is one of the sources of odour in Sojae Semen Praeparatum. 2-Methylbutyric acid and isovaleric acid have an irritating smell at high concentrations and can be used in edible flavours at low concentrations. They are also a source of odour in Sojae Semen Praeparatum, probably because they share a metabolic pathway similar to that of n-butyric acid, or because the superposition of different flavours amplifies the odour. Dimethyl disulphide and trimethylamine are both products of the microbial fermentation of proteins and have a strong odour. Post-fermentation is the key stage for the production of flavour substances in Sojae Semen Praeparatum, and probes could be added to detect the levels of pungent and unpleasant odour compounds such as n-butyric acid, dimethyl disulphide, and trimethylamine in order to better control fermentation. Conclusion The quality of fermented foods is influenced by raw materials and process conditions. This study provides an effective attempt to evaluate the quality of Sojae Semen Praeparatum. Eleven marker indicators were chosen to identify the grade of Sojae Semen Praeparatum. Using multivariate statistics, four representative indicators were selected, which could divide Sojae Semen Praeparatum into 4 quality categories with a correct rate of over 75%. To address the over-fermentation of Sojae Semen Praeparatum, we used HS-GC-MS to identify its VOCs. Among the categories of odours, we found two, 'pungent' and 'unpleasant', that could be associated with over-fermentation. Our study provides a reference for the development of more detailed fermentation specifications and quality standards for Sojae Semen Praeparatum. Author contribution statement Jiaqi Xie: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Yibo Wang: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data. Rongrong Zhong; Zhenshuang Yuan: Performed the experiments; Analyzed and interpreted the data. Jie Du; Jianmei Huang: Conceived and designed the experiments. Data availability statement Data will be made available on request. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2023-07-30T15:15:47.666Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "73d91f327c2ad1d90ba345b7f6a086fd2a64387b", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0bab2e5c3ea404a854c1e87addf42fbf71e091d5", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
235352936
pes2o/s2orc
v3-fos-license
The absence of superconductivity in the next-to-leading order Ginzburg-Landau functional for Bardeen-Cooper-Schrieffer superconductor Shortly after the Gor'kov microscopic derivation of the Ginzburg-Landau (GL) model via a small order parameter expansion in the Bardeen-Cooper-Schrieffer theory of superconductivity, the derivation was carried to next-to-leading order in that parameter and its spatial derivatives. The aim was to obtain a generalized GL free energy that approximates the microscopic model better. Since the 1960s, multiple works have claimed or implicitly assumed that this extended GL model corresponds to a free energy and has solutions in the form of local minima describing superconductivity, such as vortex solutions. In contrast to this, we prove that this extended GL functional does not represent a free energy, since it does not have any solutions in the form of minima. Accordingly, it cannot be used to describe superconducting states. I. Introduction The Ginzburg-Landau (GL) model of superconductivity has been and continues to be an extremely useful tool. Retaining the important degrees of freedom, it allows one to describe and analyze inhomogeneous states at large length scales where no analytical and no numerical solutions of microscopic models are available. Gor'kov provided a microscopic derivation of the GL functional from the Bardeen-Cooper-Schrieffer (BCS) model (Ref. 1). Namely, he showed that the GL model emerges by introducing the complex order-parameter field ψ(r), which is proportional to the superconducting gap function ∆(r), and performing a leading-order expansion in the small amplitude and small gradients of this order parameter. Almost immediately afterward, the expansion was carried to the next-to-leading order (Refs. 2-7). In what follows, we refer to this as the extended GL model. The aim of the extended GL model is to approximate the microscopic theory more accurately. In this regard, it is worth mentioning the issue of vortex interaction in the regime of the GL parameter near the Bogomolny point, κ ≈ 1/√2. In the standard GL model the vortices do not interact when κ = 1/√2. However, the solution obtained in the Eilenberger model shows that there are microscopic corrections that lead to a weak non-monotonic interaction extending up to κ ≲ 1.1 at T → 0 (Refs. 8-10) and vanishing close to the critical temperature, T → T_c, consistently with the standard picture that in the GL model vortices interact attractively for κ < 1/√2 and repulsively for κ > 1/√2. The Eilenberger equations are a microscopic model that retains more degrees of freedom of the BCS theory than the GL model and does not rely on an expansion of the order parameter. In addition, the validity of the Eilenberger equations is not restricted to the vicinity of the critical temperature. In this regard, the question was raised whether an extension of the GL model may bring the results closer to those obtained in the quasi-classical Eilenberger formalism. However, the problem that arises in the next-to-leading order expansion of the BCS theory is the alternation of the signs of the coefficients, which in turn means that the GL functional deduced from the expansion-derived equations is unbounded from below. To verify this, it suffices to consider |ψ| → ∞ in Ref. 5 or, equivalently, |∆| → ∞ in Ref. 11. The fact that there are no global minima does not necessarily mean the absence of solutions in the form of local minima.
With this in mind, in previous studies the extended GL functional was interpreted as a free energy, and various energy-based arguments were suggested to estimate length scales and vortex properties (Refs. 5, 7, 11-20). Here we prove that the assumption of the existence of metastable states is incorrect. Therefore, within the GL formalism expanded to the next-to-leading order there is no superconductivity, since there are no stable solutions corresponding to the superconducting state, i.e., the system exhibits neither the Meissner effect nor vortices. II. The model Consider the next-to-leading order Ginzburg-Landau family of models for a BCS superconductor (Refs. 2-5, 7, 11-13, 15, 17, 18), Eqs. (1)-(4), where f is considered as the free energy density. In our notation, the positive coefficients C_{1,2,3,4} denote, in whatever notation, the coefficients of the standard GL expression, while the positive coefficients p_{1,...,6} denote the additional contributions derived from the BCS model as higher-order expansion terms. The microscopically derived coefficients can, for example, be found in Refs. 5 and 11. Without loss of generality, we choose an additive constant in such a way that the potential term, Eq. (5), gives zero at its minimum. Therefore, for example, in the limit p_1 → 0, Eq. (5) gives the standard GL density. The constants p_k are material-dependent and temperature-dependent. It is important that, as long as one stays within the standard BCS theory, these constants do not vanish, including at the BCS critical temperature, T_c. Let us estimate the minimum value of p_4. In the limit of a clean superconductor, the bound (7) immediately follows from Refs. 11 and 16; by taking into account the BCS coherence length, one obtains (8), and in a similar way one obtains (9). III. Exact proof of the absence of minima, by Legendre condition The proof that there are no minima, neither local nor global, irrespective of how close one is to the critical temperature, follows from the fact that (1) does not pass the Legendre test (Ref. 21) for any field ψ(r), while such a test is a necessary condition for the existence of a minimum of a functional. Indeed, taking into account that f depends on Re ψ and its spatial derivatives up to second order, we immediately obtain that the corresponding Legendre quantity, ∂²f/∂(∂²_x Re ψ)², is negative everywhere, Eq. (10), which means that there are no field configurations that would provide a minimum for the functional. Note that for a density of the kind (1) the minimization problem is naturally posed on a domain with boundary Γ, whereas ψ is unconstrained in the formulation of the minimization problem. However, we can also generalize our conclusion to the case of a constraint in the form of a given ψ on Γ. In any case, for any ψ on Γ, Eq. (10) proves that there is no minimum. The reason for the absence of a minimum is that for any given field configuration there are always various perturbations, arbitrarily small in amplitude and gradients, which lower the "energy". It is important to note that it does not matter at all whether the Euler-Lagrange equations are satisfied. IV. Destabilizing infinitesimally small perturbations The results of the Legendre test guarantee the absence of a minimum, and hence the existence of infinitely weak perturbations that destabilize any given initial state. The scenario of instability consists in a perturbation for which the energy contribution from the second derivative dominates over the rest of the contributions.
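Before turning to the paper's explicit examples, the mechanism can be illustrated numerically with a toy one-dimensional density f = α|ψ|² + (β/2)|ψ|⁴ + C|ψ′|² − p|ψ″|², whose negative-sign fourth-order gradient term mimics the failed Legendre condition, probed with a fixed-modulus, phase-modulated perturbation ψ = ψ₀ exp[i(φ + a sin(2πx/L))]. All coefficients and the perturbation family are illustrative assumptions, not the paper's Eqs. (1) and (12)-(14); the point is only that once the period L is short enough, the period-averaged density decreases monotonically and without bound in a:

```python
import numpy as np

alpha, beta, C, p = -1.0, 1.0, 1.0, 0.1   # illustrative coefficients only
psi0 = np.sqrt(-alpha / beta)             # minimum of the potential part

def avg_density(a: float, L: float, n: int = 4096) -> float:
    """Period-averaged toy density for psi = psi0*exp(i*a*sin(2*pi*x/L))."""
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2.0 * np.pi / L
    psi = psi0 * np.exp(1j * a * np.sin(k * x))
    d1 = 1j * a * k * np.cos(k * x) * psi                 # analytic psi'
    d2 = (-1j * a * k**2 * np.sin(k * x)
          - (a * k * np.cos(k * x))**2) * psi             # analytic psi''
    f = (alpha * np.abs(psi)**2 + 0.5 * beta * np.abs(psi)**4
         + C * np.abs(d1)**2 - p * np.abs(d2)**2)
    return float(f.mean())

L = np.pi * np.sqrt(p / C)   # half the destabilizing threshold 2*pi*sqrt(p/C)
for a in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"a = {a:3.1f}   <f> = {avg_density(a, L):12.2f}")  # decreases without bound
```

Arbitrarily small a already lowers the average "energy", and the decrease is monotone and unbounded, which is precisely the blow-up scenario the failed Legendre test predicts.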
Here we give explicit examples of a couple of infinitesimal perturbations that lead to a runaway instability even of the ψ(r) = const configuration. Note that these perturbations are not unique. A. An example of a perturbation in the form of a periodic function Consider a family of fields, Eq. (12), depending on the coordinate x and an additional scalar parameter a ≥ 0, in such a way that the limit a = 0 corresponds to a uniform superconducting state (if such a state exists); φ denotes an arbitrary constant phase shift. The modulus is chosen so that it corresponds to a local minimum of the Landau model without gradient terms. Substituting (12) into (1), we find the average density (13). Choosing a period in the range (14), i.e., a sufficiently short period L, ensures that the average density of f becomes a monotonically decreasing, unbounded function of the parameter a ∈ [0, ∞). That is, an arbitrarily small perturbation destabilizes the assumed state of superconductivity and results in a blow-up; see Fig. 1. The perturbation scale, L, does not play a significant role, since effective theories and free energy functionals require stability with respect to infinitesimal perturbations of any scale. Note that the range (14) never vanishes: see (7)-(9). B. An example of a localized destabilizing perturbation Consider the family of fields given by Eq. (15). Fig. 2: Infinitesimally weak perturbation driving the system into a runaway catastrophe. (a) Total "energy" as a function of the shape parameter a. (b) Spatial field distributions for the points marked in (a), in accordance with Eq. (15). Substituting (15) into E = ∫ f dx and applying an analysis similar to that used for (13), we obtain the range for l entailing a blow-up (see Fig. 2). V. Discussion The perturbations described in the previous section do not exhaust their full diversity. Hence, for example, if one translates (1) into a numerical minimization algorithm, then such an algorithm must lead to a blow-up, while the scenarios can differ considerably between specific implementations and depend on the initial guesses for the fields. A theory that admits stable or metastable states should be robust against an infinitely weak point-like perturbation. The length scale at which a system recovers a uniform state defines the coherence length. The example from Sec. IV B shows that this principle is violated here, and the system has no coherence length. In this regard, it is useful to discuss separately the approach to (1) using expansions near T_c, i.e., by assuming that the parameter τ = 1 − T/T_c is small. For example, in Ref. 5 the applicability of the standard GL scaling is postulated for (1), and the variables are replaced by rescaled ones as follows: ψ → √τ ψ, r → r/√τ, and A → √τ A. After dropping the common factor τ², this effectively changes expression (1) only in that all the coefficients p_i are converted according to p_i → τ · const_i. It is then assumed that, due to the smallness of τ, the terms that, in our notation, are multiplied by p_i play the role of corrections. However, this is erroneous, because a prefactor tending to zero does not necessarily nullify the total contribution of the corresponding term.
Indeed, if ψ aims to provide a minimum for the model in question, then lim_{τ→0} τ ∫ |D²ψ|² dr ≠ 0, (17) because, in accordance with the Legendre criterion, there is always a degree of freedom for ψ to reshape and decrease the "energy" monotonically, thereby making the value of ∫ |D²ψ|² dr arbitrarily large and, in particular, larger than τ⁻¹. In turn, this implies that here one cannot classify the gradient terms by powers of τ. To summarize, we conclude that the limit p_{1,2,...,6} → 0 is a singular limit, and it does not gradually converge to the case when all the corresponding terms are absent. Thus, in the model under consideration, there is no passage to the limit of the standard GL model. Consequently, the conclusions obtained when considering (1) as the density of a free energy (see, for example, Refs. 17-20) are false irrespective of the proximity to the critical temperature. That statement also applies to spurious multiband versions (Ref. 18), which have similar gradient terms but are derived from an erroneous expansion of the multiband BCS model, yielding claimed phase diagrams that are in principal disagreement with the phase diagrams obtained in the microscopic multiband Eilenberger theory (Refs. 23 and 24; see the discussion of these errors in Sec. 4.10 of Ref. 25). VI. Conclusion We considered the extended Ginzburg-Landau functional derived by a next-to-leading order expansion in the order parameter and its gradients. We demonstrated that the truncation of the corresponding expansion of the BCS theory in the form (1) leads to ill-posedness and a runaway catastrophe, and that the expression (1) does not represent a free energy density. This is in contrast to multiple previous studies that assumed that the above expression represents a free energy density (Refs. 17, 26, 27) and claimed the existence of minima (Refs. 11, 15). We note that our conclusions do not apply to spin-imbalanced superconductors, where higher-order generalizations of the Ginzburg-Landau functional have energy-minimizing solutions both for pair-density-wave states (Refs. 28-32) and for homogeneous states (Ref. 33). Our result leads to the natural question of whether keeping more terms (i.e., next-next-to-leading order and maybe even higher orders) in the GL functional would lead to a well-posed problem and, potentially, to a better approximation of the BCS theory than the standard GL theory, or whether such a correction lies beyond all orders. The answer to this question is beyond the scope of this work.
2021-06-07T01:16:03.827Z
2021-06-04T00:00:00.000
{ "year": 2021, "sha1": "3ef8b7d8bc11e7efeaeed950e3db959315cc6eed", "oa_license": "CCBY", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0063874", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "3ef8b7d8bc11e7efeaeed950e3db959315cc6eed", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225261799
pes2o/s2orc
v3-fos-license
Smart Control Strategies for Primary Frequency Regulation through Electric Vehicles: A Battery Degradation Perspective Nowadays, due to the decreasing use of traditional generators in favor of renewable energy sources, power grids are facing a reduction of system inertia and primary frequency regulation capability. Such an issue is exacerbated by the continuously increasing number of electric vehicles (EVs), which calls for novel approaches to grid operations management. However, far from being only an issue, the growth of EVs may turn out to be a solution to several power system challenges. In this context, a crucial role is played by the so-called vehicle-to-grid (V2G) mode of operation, which has the potential to provide ancillary services to the power grid, such as peak clipping, load shifting, and frequency regulation. More in detail, EVs have recently started to be effectively used for one of the most traditional frequency regulation approaches: the so-called frequency droop control (FDC). This is a primary frequency regulation action, currently obtained by adjusting the active power of generators in the main grid. Due to the decommissioning of traditional power plants, EVs are thus recognized as particularly valuable resources, since they can respond to frequency deviation signals by charging or discharging their batteries. Against this background, we address the frequency regulation of a power grid model including loads, traditional generators, and several EVs. The latter independently participate in the grid optimization process, providing the grid with ancillary services, namely the FDC. We propose two novel control strategies for the optimal control of the batteries of EVs during the frequency regulation service. On the one hand, the control strategies ensure the re-balancing of the power and the stabilization of the frequency of the main grid. On the other hand, the approaches are able to satisfy different types of needs of the EVs during the charging process. Differently from the related literature, where the EV perspective is generally oriented toward achieving the optimal charge level, the proposed approaches aim at minimizing the degradation of the battery devices. Finally, the proposed strategies are compared with other state-of-the-art V2G control approaches. The results of numerical experiments using a realistic power grid model show the effectiveness of the proposed strategies under actual operating conditions. Introduction Nowadays, power systems around the world rely on fossil fuel-based energy generation, which has caused severe environmental problems. In this context, renewable energy sources (RESs) are progressively driving the transition to the production of low-carbon energy [1]. At the same time, the connection to the main grid of large-scale wind power systems and distributed photovoltaic (PV) panels in place of traditional synchronous generators (SGs) raises major frequency regulation problems for the power systems [2,3]. Indeed, expensive ancillary plants must operate in order to satisfy the changing power demand, with a significant environmental impact. Ensuring the stability of the grid frequency is a crucial problem, and it is strictly associated with the balancing of power generation and demand. Stability is ensured by several frequency regulation actions, divided into primary, secondary, and tertiary. Currently, the inertia of SGs is exploited in primary frequency regulation (PFR), whose purpose is to balance the demand and the supply of electricity within seconds.
Secondary frequency regulation (SFR) is usually performed in a centralized manner; namely, a central unit restores the system nominal frequency by changing the generators' output within 10-15 minutes. Tertiary frequency regulation (TFR) consists of an economic dispatching of energy aimed at adapting the outputs of the generators so as to minimize operating costs [4,5]. Nevertheless, the increased penetration of RESs leads to a deterioration of the power system inertia [6]. Moreover, the variations caused by these sources increase the frequency deviations. For instance, Italy's total energy demand was 191.73 TWh from January 2019 to December 2019. However, over such a period, forecasts overestimated and underestimated the real demand by cumulative amounts of 3.95 TWh and 0.85 TWh, respectively [7]. In particular, in Figure 1a we show the discrepancy between the forecasts and the real demand on a specific day, while in Figure 1b we show the hourly gap during the whole year. Therefore, to support power grid operations, the installation of energy storage systems (ESSs) is globally becoming ever more frequent [8]. In particular, battery energy storage systems (BESSs) are effectively used for frequency regulation and voltage support activities [9]. Moreover, other innovative applications, such as load shifting, peak shaving, and renewable capacity firming, are at an early adoption stage. In this context, electric vehicles (EVs) are considered key elements in supporting grid operations [10]. Owing to their reduction of greenhouse gas emissions, EVs were first promoted as an essential technology for sustainable urban mobility and logistics [11,12]. Several research studies focus on the so-called concept of vehicle-to-grid (V2G), where electric vehicle batteries (EVBs) are employed as valuable resources for the power grid, i.e., EVs are connected to power systems when not in use. As a result, together with the progress of Information and Communication Technology [13,14], V2G is enabling a highly distributed, fast-acting means of control for power systems [10]. It has to be highlighted that vehicles are parked for the majority of their usage time, i.e., 95% of the time on average [15]; thus, their batteries could be used as additional power storage systems connected to the main grid, maximizing the advantages both for the grid operator and for the EVs' owners. On the one hand, for power system operators, it is possible to ensure higher supply reliability and electricity quality. On the other hand, the possible gains deriving from the retail of the accumulated electricity or from ancillary services, such as PFR, may reduce the effective life-cycle costs of EVs [16]. Utility companies are increasingly considering their customers as a promising solution to make the system more responsive to intermittent renewables, thanks to their storage and, possibly, generation capability. There is a considerable amount of literature on the use of ESSs to provide ancillary services to the grid [17]. For instance, in [18], the authors propose a model able to improve the grid voltage and frequency responses by employing ESSs. Following this wave, in recent years a growing body of literature has examined the concept of V2G, focusing on the application of EVs to load frequency regulation [19-21]. Several works concentrate on the benefits that EVs' owners and power system operators can achieve by means of V2G approaches [22,23].
For instance, the authors of [24] propose an optimal dispatching strategy for a V2G aggregator, which aims at meeting the driving demand of the EVs' owners while maximizing the economic benefits of the aggregator due to the participation of the EVs in supplementary frequency regulation. Most of the existing works rely on centralized control approaches [17,25]; however, such methods are mainly valuable for the SFR [26]. Among the frequency regulation actions, the PFR is the most interesting for EVs. Indeed, the frequency signal is accessible at any connection point available to the EVs. For instance, the authors of [27] present a decentralized V2G control scheme that allows the participation of EVs in the PFR while taking into account the charging demands of the EVs' owners. In addition, many decentralized or distributed approaches have been proposed for the PFR in view of the increasing presence of intermittent wind power generation [28,29]. For instance, the authors of [23] present a method for the PFR that consists in the optimal sizing of an ESS based on a lead-acid battery. Differently, in [30] the PFR is guaranteed by following a standard droop characteristic, aiming at reestablishing the state of charge (SoC) of the system as soon as the grid frequency is within acceptable boundaries. In [31], the authors present an optimal scheduling algorithm that aims at determining the bidding capacity in each operational period so as to obtain the maximum advantage while guaranteeing a stable SoC interval. A control of the scheduled charging power in V2G, based on historical frequency deviations, is proposed in [32] in order to satisfy the charging demand. In [33], the authors analyze the stability of an IEEE 39-bus system with 30% V2G penetration after critical contingencies, and compare different strategies aimed at guaranteeing ancillary services through EVs; moreover, in [34], the authors propose an optimal strategy for the charging and discharging of EVs so as to increase the frequency stability of a microgrid. Furthermore, various studies focus on non-conventional techniques. For instance, in [35], a new modified general type-2 fuzzy proportional-integral controller was proposed to minimize the system's frequency deviations against load disturbances. In [36], a novel fuzzy logic controller based on genetic algorithms was defined for the control of the frequency, providing improved performance with respect to both traditional and more recent control methods. The above literature review shows that, when participating in the PFR service, the EV perspective is generally oriented toward achieving the optimal charge level. To the best of the authors' knowledge, no studies address frequency control strategies that are able to satisfy different types of needs of the EVs during the charging process. Few works consider the effects of degradation in the battery, particularly in EVBs [37,38]. For instance, in [39], the authors present a degradation model for ESSs that is able to assess the impact of the PFR over a period of 1.5 years. The lifetime of an ESS providing PFR is estimated in [40], where the impact of different strategies on the lifetime of the batteries is evaluated and discussed. Lastly, in [41], a feasibility study of V2G frequency regulation from an economic perspective, taking battery wear into account, is presented.
However, all of the cited works [37-41] are mainly feasibility and assessment studies that provide an estimation of the impact of the frequency regulation strategies on the battery lifetime. As a result, there is a lack of studies on EV control approaches focused on battery degradation control rather than on frequency control alone. To fill this gap, in this paper we first propose a battery degradation model that discriminates between discharge cycles of different depths. Subsequently, we propose two novel control strategies for the optimal control of the batteries of EVs during the frequency regulation service. On the one hand, the proposed control strategies re-balance the power and stabilize the frequency of the main grid. On the other hand, the proposed approaches ensure a graceful degradation of the EVBs during the charging process. The contributions of this work can thus be summarized as follows. • We propose two novel frequency control strategies that aim at minimizing the EV battery degradation. Differently from the existing contributions, which only address the needs of the frequency regulation service, our approach incorporates a battery degradation model while ensuring the stabilization of the grid frequency. • We propose a profitability analysis to correlate the profit obtained by the EV user from participating in the frequency regulation service with the cost incurred through battery degradation. Hence, we compare the proposed frequency control strategies with other related techniques in terms of the energy exchanged with the main grid and the degradation of the battery. The results obtained through numerical experiments based on a realistic power system model show the better performance of the proposed mechanisms under actual operating conditions with respect to the reference strategies. The remainder of this paper is organized as follows. Section 2 recalls the basic concepts related to frequency regulation in power grids. Section 3 describes the distribution network architecture and the battery model of the EVs under study. Section 4 introduces the novel control algorithms in the V2G context. Section 5 provides the description of the simulated control architecture and the experimental results on a realistic case study. Finally, Section 6 concludes this paper. Preliminaries on Frequency Regulation In power networks, the load frequency must be maintained at the nominal value even as demand and supply vary. In most cases, power grid regulations require that all power stations with a capacity over a certain threshold keep a sufficient damping capability to increase their output in case of a reduction of the system frequency. In detail, each generating unit must provide additional active power (up to its nominal capacity) during under-frequency events and decrease its active power output (droop-type behavior) during over-frequency events. In the absence of any contingencies, the frequency of the grid must be kept within a non-critical deadband at any operating point. However, the non-critical deadband can vary among different countries due to different power quality regulations. Frequency control systems are mostly classified into three categories, namely the PFR, the SFR, and the TFR. Figure 2 shows the sequential application of these frequency control actions in the scenario of a sudden generation loss. The PFR is usually called droop control. It is a completely distributed regulation system and operates on a timescale of a few seconds.
The PFR can only stabilize the frequency; however, it is not able to restore the frequency to its nominal value. Conversely, the SFR operates on a larger timescale, up to minutes, and adjusts the setpoints of the generators' governors in a control area in a centralized way to bring the frequency of the grid back to the nominal value while restoring the inter-area power flows to their prefixed values. Lastly, the TFR, usually named economic dispatch or optimal power flow, works on a timescale of minutes up to hours and regulates the grid by modifying the output levels of all the power stations. When an event that modifies the frequency occurs, the inertia of the rotating masses of the synchronous generators responds immediately to the change of frequency: in fact, the power imbalance is compensated by the kinetic energy stored in the rotating masses of the synchronous machines within 0-10 s. The PFR comes into operation at this stage; in detail, the controllers of the generators are activated to stabilize the frequency at a new steady-state point. Indeed, inertia is essential for power system operations: with a different inertia value, the frequency varies at a different rate. Therefore, the PFR control system has a different impact when it operates to stabilize the frequency drop. When the frequency diverges from its nominal value, the kinetic energy stored in the rotating masses of the generators (e.g., flywheels) is released. Let us define the rotational energy of the g-th generator in the grid as E_g = (1/2) J_g (2π f_g)², where J_g is the moment of inertia of the SG and f_g its rotating frequency. Moreover, let us define the inertia constant H_g (i.e., the period during which the SG can provide its nominal power by releasing kinetic energy alone) of the g-th SG as H_g = E_g / S_g, where S_g is the rated power of the g-th SG. Moreover, it is useful to introduce the well-known swing equation [42], which describes the change in the rotational frequency f_g of an SG induced by a power variation. By disregarding the power losses and aggregating several SGs, we can write the so-called swing equation of a power grid composed of n generators and d loads as (2 H S_G / f_ref) df/dt = P_G − P_L − D (f − f_ref), where f_ref is the reference frequency, P_G the power supplied by the generators, P_L the total load, f the frequency of the center of inertia, H the system inertia constant, S_G the total nominal power of the generators, D the inertia (damping) of the load, and H = Σ_g H_g S_g / S_G. However, the swing equation is usually linearized around the reference frequency due to the low load-frequency disturbances. By denoting the variations of the frequency, of the power supplied by the generators, and of the total load, respectively, as ∆f = f − f_ref, ∆P_G, and ∆P_L, we thus get (2 H S_G / f_ref) d∆f/dt = ∆P_G − ∆P_L − D ∆f. For instance, after an unexpected failure on the generation side, in the transient stage the power is taken from the system inertia, which makes the frequency decline at the rate of change of frequency; the latter is inversely proportional to the system inertia. The PFR control restores the power equilibrium a few seconds after the power imbalance occurs. More in detail, at steady state, the activation of the primary reserve is governed by a specific proportional speed-droop law, ∆P_g = −(S_g / K) (∆f / f_ref), where ∆P_g is the change in the power output of the g-th SG until the primary reserve is completely used, S_g is the rated power of the g-th SG, and K is the permanent droop constant. It is evident that the frequency rate of change is inversely related to the total inertia of the system; therefore, a weaker inertia leads to a poorer frequency response capability.
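As an illustration of the linearized swing dynamics with primary droop action, the small simulation sketch below (not from the paper; the numerical values are illustrative, with H, D, and the droop borrowed from the case-study parameters reported later) applies a step load increase and shows the frequency settling at a steady-state offset rather than returning to nominal, which is exactly why the PFR alone cannot restore the nominal frequency:

```python
import numpy as np

f_ref = 50.0   # Hz, nominal frequency
H     = 4.0    # s, inertia constant
D     = 0.9    # p.u., load damping
K     = 0.07   # p.u., permanent droop (speed regulation)
dP_L  = 0.05   # p.u., step load increase

dt, T, df = 0.001, 20.0, 0.0   # df: per-unit frequency deviation
for _ in np.arange(0.0, T, dt):
    dP_G = -df / K                                 # instantaneous droop response
    df += dt * (dP_G - dP_L - D * df) / (2.0 * H)  # linearized swing equation

theory = -dP_L / (1.0 / K + D)                     # steady-state deviation, p.u.
print(f"simulated offset: {df * f_ref:+.3f} Hz, theory: {theory * f_ref:+.3f} Hz")
```

The steady-state offset −∆P_L/(1/K + D) vanishes only for an infinite droop gain, so a secondary (integral) action is required to bring ∆f back to zero.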
At present, the inertia of synchronous generators plays a key role in reducing the variations of frequency, responding immediately by releasing kinetic energy. Conversely, photovoltaic (PV) generators and wind turbines are coupled to the power grid through a power electronics interface. Thus, intrinsically, they cannot provide an inertial response. Hence, with the growing diffusion of renewable generation in the grid, the overall system has a weaker inertial response capability than traditional grids. Therefore, traditional frequency regulation controllers may not properly counterbalance disturbance events. In this context, EVBs may deeply contribute to the PFR, mitigating the impact of RESs. EV Battery Model Let us now model the EVs in the V2G mode of operation. We consider a set N of EVs with cardinality N. From the perspective of the system dynamics, without loss of generality, we model each corresponding EVB by a first-order discrete-time system. Hence, it is useful to define, for each i-th EVB with i ∈ N, the charging and discharging inefficiencies 0 < β^c_i ≤ 1 and β^d_i ≥ 1, the maximum capacity C_batt, and the maximum charging rate P^max_i. The charge level of the EVB at the current slot equals the charge level at the previous slot corrected by the energy storage profile, which is proportional to the charging or discharging inefficiency. The SoC dynamics of the i-th EVB is thus computed as SoC_i(k+1) = SoC_i(k) + E_i(k) ∆t / C_batt, where E_i(k) = β^c_i P_i(k) if P_i(k) ≥ 0 (charging) and E_i(k) = β^d_i P_i(k) if P_i(k) < 0 (discharging), with ∆t the slot duration; here E_i(k) and P_i(k) are the powers upstream and downstream of the inverter, respectively. Moreover, we include a constraint on the maximum power, |P_i(k)| ≤ P^max_i. Understandably, the battery efficiency can vary during the charging and discharging phases; in fact, the inverter may have a different efficiency at different charge/discharge values. However, for the sake of simplicity, we assume that the efficiencies are constant. All of the various battery technologies suffer from degradation in terms of capacity decrease and resistance increase. Even though the literature in this field is still insufficient, many influencing factors have been identified for battery degradation, which can be broadly classified into calendar aging and operational aging factors. The first category refers to the natural degradation of the battery, whose most important factor is the temperature. In our work, we neglect these degradation effects, because they are not directly dependent on the control strategy. The second category refers to the degradation effects caused by operational factors. The actual operation of a battery determines most of its degradation; thus, it is crucial to define an accurate control strategy that takes this effect into account. In particular, the category of operational aging factors includes the state of charge and the depth of discharge (DoD). The DoD is the most important stress factor, and the battery degradation is a highly nonlinear function of both the SoC and the DoD. In the related literature, the definitions of the DoD are various and contradictory. Thus, in this paper, we define the DoD with reference to a full cycle consisting of one equal discharging and charging process. Because EVs are mostly equipped with common Li-ion batteries, we further refer to the life cycle, which is defined as the maximum number of charge-discharge cycles until the capacity of the battery falls below a specific threshold. The life degradation is usually defined as a percentage reduction of the battery life cycle.
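A minimal sketch of the EVB model just described, with the discrete-time SoC update, the asymmetric inverter inefficiencies, and a DoD-dependent cycle-degradation term, is reported below; the exponential degradation parameters a and b are illustrative placeholders, not the fitted values of the experimental model in [44]:

```python
import math
from dataclasses import dataclass

@dataclass
class EVBattery:
    c_batt: float = 40.0   # kWh, maximum capacity
    p_max: float = 10.0    # kW, maximum charging/discharging rate
    beta_c: float = 0.95   # charging inefficiency (0 < beta_c <= 1)
    beta_d: float = 1.05   # discharging inefficiency (beta_d >= 1)
    soc: float = 0.5       # state of charge in [0, 1]

    def step(self, p_kw: float, dt_h: float) -> None:
        """One time slot: p_kw > 0 charges the battery, p_kw < 0 discharges it."""
        p = max(-self.p_max, min(self.p_max, p_kw))       # power constraint
        e = (self.beta_c if p >= 0 else self.beta_d) * p  # upstream of inverter
        self.soc = min(1.0, max(0.0, self.soc + e * dt_h / self.c_batt))

def cycle_degradation(soc_high: float, soc_low: float,
                      a: float = 2e-5, b: float = 4.0) -> float:
    """Fraction of battery life consumed by one cycle between two SoC values.
    Exponential dependence on the DoD: deeper cycles cost disproportionately
    more than shallow ones, as in Figure 3."""
    return a * math.exp(b * (soc_high - soc_low))
```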
Accordingly, the economic loss due to the degradation in a time slot, under the operating conditions, is defined as C_deg = E_b · δ, where E_b is the substitution cost of the battery and δ the degradation percentage of the battery incurred in the slot. In the literature, the maximum number of cycles is calculated by performing several tests with a specific battery model. The test is performed by discharging and charging the battery from the maximum capacity down to a specific state of charge. We refer to this as a standard DoD cycle (e.g., a standard DoD of 0.3 refers to a cycle varying in a SoC interval from 100% to 70%). However, in a realistic application, the battery rarely follows a standard DoD cycle. The models available in the literature fit the experimental data with exponential, quadratic, or logarithmic functions. Given the highly nonlinear dynamics, most of the works model the degradation as a linear or quadratic function of the charged or discharged power. Other works neglect the difference between a standard cycle and a generic one (e.g., a cycle between 100% and 70% is assumed to be equal to a cycle between 50% and 20%) [43]. A more accurate model is based on an exponential function that better fits the experimental data. For instance, employing the experimental model proposed in [44], we define the deterioration that the battery undergoes during a charge or discharge between two SoC values as an exponential function of the cycle endpoints. Figure 3 shows the behavior of this relation, where a prominent difference between distinct DoD cycles is evident. V2G for Load Frequency Regulation The V2G control approaches can reduce the frequency deviation from the nominal value by appropriately modulating the charge/discharge profiles of the involved EVBs. Each battery, given the frequency deviation signal, modifies its profile proportionally. In this context, the V2G approach to frequency droop regulation mirrors that of the conventional SGs. However, the EVs' primary goal is to satisfy the owners' needs, whilst frequency regulation is solely a secondary service that the EVs may provide. In fact, two important aspects must be analyzed in parallel: the battery charging process and the regulation service. For instance, when the load increases suddenly or some generation faults occur (i.e., when the system frequency decreases), the battery management system may increase the power injection into the main grid or decrease the load absorption of the EV required during the battery charging phase. Conversely, when the residual charge of the EV battery is not sufficient, the primary goal is to reach a charging level sufficient for the next trip. In general, the regulation service and the charging process influence each other: it is not straightforward to deal with the two aspects simultaneously. In the sequel, several control approaches for the management of EVBs during the PFR are presented: we preliminarily recall three main state-of-the-art control strategies, namely the Elementary Control (ElCo) [45], the Balance Control (BaCo) [32], and the Smart Charging Control (SmChCo) [32]; subsequently, we propose two novel strategies aimed at reducing the impact of the PFR service on the batteries' lifetime: the Bounded Control (BoCo) and the Low Degradation Control (LoDeCo). Elementary Control (ElCo) The most natural approach for batteries involved in frequency regulation mechanisms is to replicate the control scheme of the SGs.
However, while the SGs change their output during the regulation process, their output must always remain positive (i.e., the SGs can only increase or reduce their generation), whereas batteries may also invert the power flow. Therefore, let us define the ElCo as in [45] by the droop relations P_i(k) = K^c_i ∆f(k) for ∆f(k) ≥ 0 and P_i(k) = K^d_i ∆f(k) for ∆f(k) < 0, saturated at ±P^max_i, where ∆f(k) is the frequency deviation at time k, while K^c_i and K^d_i are the constant coefficients representing the EV charging/discharging droop. In general, we have K^c_i = K^d_i, i.e., we assume the same droop for the charging and discharging phases. Note that, given the battery power flow limits, the last two saturation conditions in (11) are added in the ElCo. In Figure 4a, we show the V2G power flow with respect to the system's frequency. This strategy is particularly valuable from the grid stabilization point of view. In fact, by applying this approach, the batteries follow exactly the frequency deviation, without considering any degradation effects or impact on the SoC. Therefore, when the EV is unplugged, the SoC may take any value, different from both the initial value and the desired final SoC. Balance Control (BaCo) The authors in [32] propose the BaCo approach for battery management in the PFR. This approach aims at keeping the SoC around a predetermined value. The power exchanged by the i-th battery at time k has the same droop form as in the ElCo, where ∆f(k) is the frequency deviation at time k, while K^c_i(k) and K^d_i(k) are now time-varying coefficients representing the EV charging/discharging droop at time k. In this strategy, the exchanged power is thus not only a function of the frequency deviation. In fact, K^c_i(k) and K^d_i(k) depend on the SoC of the battery, decreasing toward zero as the SoC departs from the predetermined value; the coefficients SoC^low_i, SoC^max_i, SoC^high_i and SoC^min_i define the values within which the SoC must be maintained, while K^max is the maximum droop. By applying this approach, the battery always follows the frequency deviation; however, if this would drive the SoC far from the predetermined value, the response of the battery is weaker. The BaCo keeps the SoC around a predetermined value; however, it can increase or reduce the SoC by a proper selection of the parameters in (13) and (14). For instance, in Figure 4b we show the values of the parameters that keep the SoC level around 50%. In particular, the figure shows how the charging and discharging droops vary with respect to the EVB's SoC. Smart Charging Control (SmChCo) The authors in [32] also propose the SmChCo approach. The SmChCo is a fast-charging control technique that is able to charge the battery while participating in the PFR service. The SmChCo is composed of two parts: the frequency droop regulation and the battery charging. In fact, half of the maximum V2G frequency droop is used to respond to the frequency deviation, while the other half is used to charge the battery. If the frequency deviation falls below a given threshold ∆f^min, the maximum discharge policy is immediately applied to support the main grid frequency. In Figure 4c, we show the V2G power flow with respect to the system's frequency. From the figure, it is evident that this approach is different from the ElCo, because the battery keeps charging even if the frequency deviation is negative. This approach provides fast charging; however, it does not offer an efficient stabilization of the power grid. Bounded Control (BoCo) We propose a novel approach by modifying the ElCo strategy for the sake of reducing the batteries' degradation.
In particular, we introduce two thresholds on the SoC that stop the PFR service. The BoCo retains the droop form of the ElCo, but the charging and discharging droops K^c_i(k) and K^d_i(k) are set to zero once the battery SoC exceeds the corresponding threshold. From Figure 5a, it is apparent that the droop drops to zero both in the charging and in the discharging case when the two SoC thresholds are exceeded. Similarly to the ElCo, the BoCo approach is particularly valuable from the grid stabilization point of view. In fact, by applying this approach, the batteries follow exactly the frequency deviation. However, differently from the ElCo, the batteries are forced to work at operating points distant from the extreme SoC values that lead to higher degradation, with a beneficial effect on battery life. Low Degradation Control (LoDeCo) We now propose a novel control approach that can support the grid in the PFR while minimizing the degradation effects on the EVBs. The power exchanged by the i-th battery at time k again has the droop form of the ElCo, but the charging/discharging droops K^c_i(k) and K^d_i(k) are modulated by the distance of the SoC from SoC^best_i, a characteristic parameter whose value is calculated from the battery degradation function, and by τ_i, a coefficient indicating how conservative the approach is. This approach aims at decreasing the droop gain when the battery SoC is low, as shown in Figure 5b; in fact, when the battery is poorly charged, employing it for a high-DoD cycle will profoundly deteriorate its future performance. From the figure, it is evident that this approach is completely different from the BaCo: when the SoC is low, the BaCo charges the battery at its maximum rate, causing a high degradation of the EVB. The coefficient τ_i ranges from 0 to 1. When τ_i equals 0, the approach matches the ElCo, providing no advantage in terms of degradation. Conversely, when τ_i equals 1, the proposed approach provides the highest protection against degradation. Figure 5c shows the impact of different values of τ_i. The choice of the values of both SoC^best_i and τ_i has a strong impact on the control performance; in fact, a low value of τ_i or an improper selection of SoC^best_i may reduce the control performance. Therefore, these values must be selected according to the degradation function: in particular, for the previously presented nonlinear degradation function, setting SoC^best_i = 0.9 and τ_i = 1 is satisfactory. Case Study Different numerical experiments are conducted on a single-area power grid composed of an SG, an aggregate load, and an intermittent RES, as in Figure 6. The SG is characterized by the following parameters: the governor time constant (Tg) is set to 0.25 s, the turbine time constant (Tt) is set to 0.5 s, the inertia (H) is 4 s, the governor speed regulation (R) is 0.07 p.u., and the integral gain (Ki) is 4. The generator dead band is set to [−0.1, 0.1] Hz. Moreover, we consider a non-constant aggregate load (i.e., the power consumption varies with the frequency); in particular, we employ a rate D = 0.9, i.e., the load varies by 0.9% for a frequency variation of 1%. The system is used to simulate the variation of the frequency that results from variable loads and RES: the variation is recreated through random time series. For the sake of analyzing the influence of the PFR service on the EVBs' lifetime, a one-year simulation is considered.
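Before examining the results, the comparative sketch below makes the droop laws concrete for the ElCo, the BoCo, and the LoDeCo; since Eqs. (11)-(20) are not reproduced in this text, the saturation and modulation forms are illustrative assumptions consistent with the verbal descriptions and with Figures 4 and 5:

```python
def elco(df: float, k: float, p_max: float) -> float:
    """Elementary Control: pure droop, saturated at the power limits."""
    return max(-p_max, min(p_max, k * df))

def boco(df: float, soc: float, k: float, p_max: float,
         soc_min: float = 0.2, soc_max: float = 0.9) -> float:
    """Bounded Control: ElCo droop, but the service stops beyond the thresholds."""
    if (df > 0 and soc >= soc_max) or (df < 0 and soc <= soc_min):
        return 0.0   # refuse to charge a full battery or drain an empty one
    return elco(df, k, p_max)

def lodeco(df: float, soc: float, k: float, p_max: float,
           soc_best: float = 0.9, tau: float = 1.0) -> float:
    """Low Degradation Control: the droop gain shrinks with the distance of the
    SoC from SoC_best, avoiding deep (high-DoD) cycles of a poorly charged battery."""
    gain = max(0.0, k * (1.0 - tau * abs(soc - soc_best)))
    return max(-p_max, min(p_max, gain * df))

# Under-frequency event (df = -0.2 Hz) with a half-charged battery: a negative
# power means the EV discharges toward the grid.
print(elco(-0.2, 20.0, 10.0),
      boco(-0.2, 0.5, 20.0, 10.0),
      lodeco(-0.2, 0.5, 20.0, 10.0))
```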
For each presented control strategy, the SoC profile is divided into cycles completed between two distinct values of the SoC. As a first outcome, the number of cycles performed by the battery between two distinct values of the SoC is presented in Figure 7 for all of the considered strategies. Figure 7 shows that, differently from the other approaches, the novel proposed strategy (i.e., the LoDeCo) has an operating area close to the maximum SoC. Furthermore, in Figure 8 we show the SoC profile for each control strategy. As expected, from Figure 8a-c we note that the ElCo, BoCo, and BaCo strategies follow the frequency deviation without any strict control on the SoC. Moreover, from Figure 8d, the highly cyclic pattern of the SmChCo caused by its parameter selection is evident. Indeed, with the SmChCo, the batteries are always charged unless the frequency deviation reaches a minimum value. However, when a high concentration of RES is present in the main grid, high frequency deviations occur, and therefore the parameter should be carefully chosen. In our simulations, we assume ∆f^min = −0.1 Hz which, although small, is not sufficient to ensure proper operation. Conversely, Figure 8e further confirms that the SoC profile of the LoDeCo settles around the maximum value. In addition, we compare the different strategies from the EVB degradation perspective. Indeed, the batteries are subject to different aging conditions depending on the employed control strategy, resulting in different lifetime estimates. By employing the degradation model described in Section 3, we first calculate the total degradation resulting from each strategy and the total exchanged energy. Table 1 shows the obtained results: it can be noticed that the SmChCo has the highest exchanged energy, imposing the most significant degradation on the battery. Conversely, the proposed LoDeCo yields the lowest degradation while still ensuring a rather high exchanged energy. Moreover, we analyze the impact of the proposed control strategies on the system frequency. In particular, in Figure 9 the system frequency profile is shown for the different strategies and for the case when no batteries are deployed. It is evident that the profile related to the ElCo is the closest to the reference value of the system frequency, thus resulting in the best stabilizing effect. Conversely, because the SmChCo is not at all focused on frequency regulation, the corresponding profile is the farthest from the reference value, even with respect to the case when no batteries are deployed. The remaining strategies (i.e., the BoCo, BaCo, and LoDeCo) have intermediate profiles, resulting in a satisfactory grid stabilizing effect. Finally, we observe that the price paid for the frequency regulation may profoundly influence the results of the control strategy: the use of different price coefficients may lead to different findings. Therefore, let us assume constant pricing, i.e., the price paid for the PFR service does not depend on time or other parameters. With this assumption, we can easily correlate the price coefficient with the owners' profit. By assuming a cost of 200 $/kWh for the battery and calculating the resulting degradation cost, we show in Figure 10 the overall profit when the different strategies are applied. From the figure, it is evident that the LoDeCo is the first strategy to become profitable, at a coefficient of 0.26 $/kWh; however, it becomes less advantageous than the other strategies at a price coefficient of 1 $/kWh.
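The profitability comparison reduces, per strategy, to the linear relation profit = price coefficient × exchanged energy − battery cost × degradation, so the break-even price coefficient follows directly. A small sketch of this computation is given below; the per-strategy energy and degradation figures are placeholders, not the values of Table 1:

```python
BATTERY_COST = 200.0  # $/kWh substitution cost, as assumed in the text

def breakeven_price(energy_kwh: float, life_fraction: float,
                    battery_kwh: float) -> float:
    """Price coefficient ($/kWh) at which PFR revenue equals degradation cost."""
    return BATTERY_COST * battery_kwh * life_fraction / energy_kwh

# Placeholder yearly figures for a 40 kWh battery: (energy exchanged, life used).
for name, energy, frac in [("ElCo", 3000.0, 0.12),
                           ("SmChCo", 5000.0, 0.30),
                           ("LoDeCo", 2500.0, 0.08)]:
    print(f"{name:7s} break-even at {breakeven_price(energy, frac, 40.0):.2f} $/kWh")
```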
Conclusions In this work, we propose two novel control strategies for the optimal control of electric vehicle batteries (EVBs) during the primary frequency regulation (PFR) service. By optimally integrating EVBs in an isolated power system, the approaches principally aim at minimizing the degradation of the batteries while profitably participating in the PFR. This work has a twofold contribution. From a theoretical perspective, it contributes to the literature on frequency droop control by EVBs, which lacks studies identifying the optimal strategy that satisfies the needs of both the regulation service and battery lifetime. From a practical point of view, the control strategies provide both the power system operator and the electric vehicles' owners with effective mechanisms for charging the batteries while stabilizing the grid frequency. Numerical experiments using a realistic power system model, including comparison with other state-of-the-art methodologies, show the effectiveness of the proposed strategies under realistic operating conditions. An additional merit of the developed approaches is their scalability to different grid sizes, since the computational complexity does not increase with the number of EVBs. Nonetheless, this study is not without limitations, which still need to be investigated in future works. In particular, the main limitation of the proposed strategies lies in the non-cooperative computation of the frequency control actions. Cooperation among EVBs could be encouraged to improve system-wide performance, avoiding the uncoordinated behavior of individual EVBs and thus increasing the effectiveness of the vehicle-to-grid approach. Hence, the frequency control strategies may preferably be performed within a cooperative distributed framework. Therefore, our future work will mainly be devoted to extending the defined mechanisms to a cooperative distributed setting. Conflicts of Interest: The authors declare no conflict of interest.
2020-09-10T10:21:58.393Z
2020-09-03T00:00:00.000
{ "year": 2020, "sha1": "ac30b3d4e251c138bdfb95a450ae791579a4e9d5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/13/17/4586/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ff89f69f061020021efa447ff955484a80504be9", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
1591105
pes2o/s2orc
v3-fos-license
‘Knowing what matters in diabetes: healthier below 7’: results of the campaign’s first 10 years (part 1): participants with known type 2 diabetes Introduction During the ‘Knowing what matters in diabetes: healthier below 7’ diabetes campaign, more than 30 000 randomly participating individuals underwent an occasional, voluntary diabetes risk check between 2005 and 2014. Methods This campaign aimed to inform individuals in Germany about diabetes mellitus and its complications, the established risk factors for development of type 2 diabetes (T2D), their prevalence and management in the real-life population, the quality of risk factor control and actual disease management in participants with a history of established diabetes mellitus [people with diabetes (PWD)]. Besides demographic characteristics (e.g. sex, age) and anamnestic information (antihypertensive treatment, history of elevated plasma glucose levels, genetic disposition), risk factor assessment included BMI, waist circumference, and lifestyle (physical activity, nutritional habits). The requested information was complemented by direct measurements of blood pressure (BP) (routine), plasma glucose, and HbA1c (voluntary). Between 2005 and 2014, more than 31 000 individuals participated in 45 single campaigns in numerous German cities. Here, we report on the results of the subgroup of participants with known diabetes mellitus. Results Among the 26 522 individuals with a completed questionnaire participating in the years 2006–2014, 21 055 participants (79.4%) did not have a history of diabetes and 5098 individuals (19.2%) reported being diagnosed with T2D, 369 (1.4%) with type 1 diabetes. The proportion of participants with T2D increased markedly over the years, from 13.3% (2006) to 21.7% (2014). The age group older than 64 years was the largest within this subgroup (67.3%), with 48.4% men and 51.6% women. Overweight or obesity was found in 78% of the PWD, and 69.2% had a waist circumference above the critical values. More than 40% of individuals with T2D had no regular physical exercise and more than 15% had unfavorable nutritional habits. In all, 69.9% of participants with T2D had elevated BP as assessed during the campaign or reported treatment with antihypertensive drugs at any time. On average, almost half of PWD (46.3%) had an HbA1c above 7.0%; a significant trend toward higher values over the 10-year period was observed. Conclusion The analysis of PWD participating in the ‘Knowing what matters in diabetes: healthier below 7’ campaign showed that, despite huge efforts in the past, important aspects of the progression and complications of T2D are still not well controlled. This includes lifestyle habits as well as pharmaceutical treatment. Although the participants in this study cannot be considered a representative sample of the German population, and occasional measurements without standardization further limit firm conclusions, the BP, plasma glucose, and HbA1c results indicate that a major proportion of PWD have insufficient metabolic and BP control. The marked increase in the proportion of T2D among all participants over time is consistent with the increasing prevalence of T2D found in many other countries worldwide in recent decades. Our findings underline the importance of an optimized therapy for further improvement of disease management in those already diagnosed with this common chronic, progressive disease.
Introduction The prevalence and incidence of type 2 diabetes (T2D) are increasing steadily worldwide. With approximately six million individuals diagnosed with T2D [1], Germany is among the countries with the highest prevalence of diabetes in Europe [2], not considering the probably significant number of individuals still undiagnosed.
The primary aim in the treatment of patients with manifest diabetes is to reduce disease-associated complications, including macrovascular (coronary heart disease, myocardial infarction, stroke, peripheral occlusive disease) and microvascular disease (retinopathy, nephropathy) as well as neuropathy and diabetic foot syndrome, and in general to improve quality of life. Early diagnosis of diabetes, lifestyle changes, and efficient, individualized therapies initiated early in the course of the disease can prevent subsequent complications that make diabetes one of the most expensive chronic diseases in Germany [3]. There is an ongoing scientific discussion on the targets of glycemic control (fasting or postprandial glucose or HbA 1c ) and the optimal strategy for glycemic control in individuals with diabetes. Although an individual approach should be followed, there is a relatively broad consensus to recommend an HbA 1c below 7.0% (53 mmol/mol) for the prevention of diabetes-associated complications. The guidelines of the American Diabetes Association recommend an HbA 1c below 7.0% to reduce microvascular complications [4]. A consensus paper of the European Association for the Study of Diabetes and the American Diabetes Association calls for an individualized approach to metabolic control and treatment, taking into account patient age, duration of disease, comorbidities, and life expectancy [5]. Besides the promotion of prevention and early detection of diabetes, health care systems still strive to improve disease management. Furthermore, the recommendations include an improvement of blood pressure (BP) and lipid management, as these interventions have proven to be very efficient in reducing cardiovascular risk [6]. Background of the campaign In 2005, the campaign 'To know what counts in diabetes: stay healthy below 7' was initiated to contribute toward activities addressing unresolved problems in diabetes. Carried out with many different partner organizations, the campaign aimed to raise awareness of the problem, to identify those at high risk, and to inform those who are already affected by T2D about the different aspects of risk factor management and measures to optimize their disease management. More than 31 000 individuals have participated voluntarily since the start of the campaign by completing standardized questionnaires and undergoing specific investigations as described in the methodology section. This paper summarizes the findings of risk factor management in patients who already had a diagnosis of T2D (because of the low number, individuals with type 1 diabetes are not considered). Data analysis included detection of possible trends over time. Methodology Between 2005 and 2014, during the 'Knowing what matters in diabetes: healthier below 7' campaign, 45 single campaigns were organized in shopping centers in several German cities. During these action days, center visitors were offered information on the metabolic disorder diabetes and had the opportunity to have their own individual diabetes risk determined by experts.
Participants reporting to have diabetes (type 1 or 2) were offered a check of their metabolic status, including measurements of plasma glucose and HbA 1c as an indicator of long-term metabolic control and quality of treatment. The following data were collected from all participants using the modified FINDRISK questionnaire developed by Lindström and colleagues [7,8], including weight (kg). BP was assessed according to the ESH/ESC Guideline for the management of arterial hypertension [10]. Hypertension was diagnosed when antihypertensive medication was taken or when random BP was more than 140/85 mmHg. Participants could also have their plasma glucose levels checked voluntarily. Participants already diagnosed with T2D (as well as those without a known history of diabetes but with a moderate or high diabetes risk score in the FINDRISK, ≥ 15) were offered determination of their HbA 1c value as one of the most important diagnostic criteria, with a value over 7.0% indicating poorly controlled diabetes (or a high probability of diabetes in nondiabetic participants). In this paper, only the data collected from participants with known T2D are presented. Biometric evaluation All data collected between 2005 and 2014 were checked for completeness and plausibility. Implausible data were excluded from the statistical analysis. Missing data were not replaced. For data analysis, descriptive methods were used. For quantitative parameters, the following statistical features were determined: mean, standard deviation, median, 25th and 75th percentiles, minimum and maximum values, and 95% confidence intervals, as well as the P values. For qualitative parameters, the absolute and relative frequencies were calculated and the results were presented as histograms or stacked diagrams. On the basis of the year of collection, the data were categorized into 10 annual slices (2005-2014). The statistical procedures and the resulting P values were used exclusively for exploratory description of the results, without having any confirmatory nature. The level of significance was generally set to 0.05, with α adjustment according to Bonferroni. The results of all 10 single years were examined, thus enabling the detection of changes and trends in patient characteristics as well as frequency and severity of risk factors and final outcomes over time. As the questionnaire used in 2005 did not include differentiation between the types of diabetes and offered no possibility to establish whether diabetes was already diagnosed or not, the year 2005 was not included in any table or graph using these separate categories or in the following analysis. Biometric evaluation was performed using the IBM SPSS Statistics 20 statistical software (IBM, Armonk, New York, USA). Results In total, 31 085 questionnaires were collected during 45 single campaigns conducted in 25 cities all over Germany. Whereas in 2006 13.3% of participants were people with diabetes (PWD), in 2014 21.7% had T2D; thus, the proportion of individuals with T2D increased markedly over time (Fig. 1). Age Participants older than 64 years of age were the largest group of T2D patients, with an average frequency of 67.3% (ranging from 48.8% in 2013 to 71.6% in 2008) during the 10-year period. Only 3.2% of the participants with T2D belonged to the youngest age category (< 45 years); 7.4% of the participants were between 45 and 54 years of age and 22.1% were between 55 and 64 years of age. The ratio between the age groups remained constant over the years.
Sex In all, 51.6% of T2D patients were women and 48.4% were men. The sex ratios remained constant over the years. BMI More than one-third (34.4%) of all the T2D participants were obese, 43.6% were overweight, and only 22.5% of the diabetics had a BMI in the normal range (Fig. 2). The median BMI was 28.1 kg/m 2 , whereas it was 26.0 in the entire population (n = 30 119). Whereas the proportion of individuals with low BMI (< 25 kg/m 2 ) was significantly higher in the entire population studied than among individuals with T2D, it was the opposite for high BMI (> 30 kg/m 2 ; Table 1). In contrast to the entire population studied, there was no trend toward increasing BMI values over time (Fig. 3: relative frequencies of BMI values in patients with type 2 diabetes over time). Waist circumference Waist circumference values were above the critical values (> 102 cm in men and > 88 cm in women) in 69.2% of T2D participants, without changes over time. Lifestyle: exercise and nutrition More than 40% (40.7%) of the T2D participants reported that they did not exercise regularly, whereas 16.2% reported that they did not eat fruits, vegetables, and whole grain bread on a daily basis. Similar to nondiabetic participants, a trend toward less favorable nutritional habits was found over the years (P < 0.001). Diabetes in the family Overall, 55.9% of T2D participants reported the presence of this disease in first-degree and/or second-degree relatives, whereas this proportion was 39.8% among nondiabetics. Blood pressure and antihypertensive medications Compared with 42% of all participants, almost 73% of the participants with T2D reported that they are taking, or have taken at any time in the past, antihypertensive medication; these PWD should be considered known hypertensives as they were, or had been, treated with antihypertensive medication. The mean systolic BP in PWD was 149.9 mmHg (n = 4763) and the mean diastolic BP was 85.8 mmHg (n = 4760). No trend toward improvement in BP control over the years was observed. When considering the threshold values of 140 mmHg (systolic) and 85/90 mmHg (diastolic) for differentiation of participants with versus without hypertension, 69.9% of participants with T2D had manifest hypertension compared with 57.8% (15 933 out of 27 589) of all participants (Table 2), which is consistent with the proportion defined by intake of antihypertensives. HbA 1c A total of 4170 HbA 1c measurements were performed in PWD (about 82% of participating PWD had their HbA 1c measured during the campaign). Almost half of the participants with already diagnosed T2D had HbA 1c values greater than 7.0%, which indicates suboptimal plasma glucose control. Furthermore, one-tenth of these participants had HbA 1c values between 8.0 and 9.0% and ∼ 4% had HbA 1c values above 9.0% (Table 3). These proportions were comparable in men and women. Over the years, a temporal trend toward higher HbA 1c values was observed in T2D patients (P < 0.001; Fig. 4). Of all participants, 369 (1.4%) reported type 1 diabetes; in this report, we have only analyzed those with known T2D. Discussion and conclusion In these 10 years, the proportion of participants with known diabetes mellitus increased markedly from 14.3% in 2006 to 23.4% in 2014. This increase may at least in part be attributed to increasing awareness of the campaign among individuals with known or suspected diabetes, and thus may not reflect a factual increase in the prevalence of diabetes of the same magnitude.
However, there are several reports on a similar increase in the prevalence of T2D in the general population, including such different countries as Sweden [11], Portugal [12], and Iran [13], and even in children [14]. Consequently, the prevalence of prediabetes was also reported to increase, for example, in the UK from 2003 to 2011 [15]. The results of the 'Knowing what matters in diabetes: healthier below 7' campaign also confirm earlier findings by the Robert Koch-Institute, which show an increasing prevalence of diabetes mellitus in the German adult population between 2003 and 2009 [16]. This increase can at least partially be explained by an increasing prevalence of risk factors such as an unhealthy lifestyle, and consequently the prevalence of overweight and obesity. Lifestyle risk factors As expected, a large proportion of PWD showed established risk factors. Overweight or obesity was observed in more than three quarters (78%) of the participants, with 44% being overweight and 34% being obese. Also, 69.2% had a waist circumference above 88/102 cm, which, together with plasma glucose, BP, and serum lipids (the latter not investigated in this study), is indicative of metabolic syndrome [17]. Lifestyle management remains suboptimal, as 41% of the PWD admitted that they do not exercise on a daily basis; furthermore, about 15% did not follow a diet rich in fiber. Considering that there is usually an underreporting/reporting bias, the proportion of patients actually adhering to a diet and therapeutic lifestyle change should be expected to be even much lower. HbA 1c Importantly, desirable glycemic control, as indicated by an actually measured HbA 1c , turned out to be suboptimal in almost 50% of the patients, as 46.3% had an HbA 1c of 7.0% or more and more than 10% had an HbA 1c of 8.0% or more, the latter not fulfilling even less stringent criteria for glycemic control [6]. This finding is in accordance with other observations, which indicate that a considerable proportion of patients with diabetes mellitus do not achieve their targets [18]. The recent report of the ARIC (Atherosclerosis Risk in Communities) study, however, indicated somewhat better metabolic control in that population, as about 72% had an HbA 1c below 7% [19]. Blood pressure control Furthermore, many intervention studies [20,21] have clearly shown that T2D patients benefit from good BP control. Therefore, the guidelines for the management of arterial hypertension [9] set a target of less than 140 mmHg for systolic BP and less than 85 mmHg for diastolic BP. In this analysis, only 30% of the participants had adequate BP control. BP management in PWD is often not adequate; many studies indicate suboptimal BP control [22,23]. As an exception, the ARIC population showed BP less than 140/90 mmHg among 73% of PWD [19]. Limitations However, the limitations of the campaign have to be considered. As the data collection took place in the context of several single campaigns in German cities, there is an evident selection bias: individuals were from geographically restricted regions in the vicinity of campaign sites that cannot be considered representative of the entire German population. Also, participants might represent a positive selection of individuals with known T2D, as these might have been more interested in obtaining additional information about their disease and volunteered to participate in the metabolic testing.
Certainly, therefore, the data are not representative of the entire German population and our results are not comparable with the results of population-based studies, in which the frequency and distribution of diabetes risk factors have been determined [24][25][26]. However, even if one takes this into account, it is important to note that even in this somewhat 'preselected' group, measures of individual lifestyle and the assessed risk factor management were shown to be suboptimal in most individuals with T2D. Thus, the results of the present study provide a view of the current risk factor management and treatment quality in already diagnosed T2D in Germany. Our results show that activities such as the 'Knowing what matters in diabetes: healthier below 7' campaign are necessary and important, as they can increase awareness and draw attention to a widespread disease such as diabetes mellitus and its risk management. The campaign showed that those with already diagnosed diabetes have a marked deficit in good cardiometabolic risk factor control; therefore, reminding them of the importance of good and comprehensive risk management, such as good plasma glucose and BP control and, most importantly, improvement of their individual lifestyle, could contribute toward better management of diabetes. The results of this campaign should have implications and consequences for all parties involved in diabetes care: general practitioners, diabetes specialists, and clinicians as well as health insurance companies. Individuals with diabetes should be aware that they benefit from a more comprehensive risk factor management including good glycemic control but also strict BP and lipid management. However, most importantly, a better lifestyle is required. Overall, with respect to individuals with known T2D, our campaign showed that these individuals often show unsatisfactory risk factor control. It has to be pointed out that these results have been found in a highly developed Western country with a sophisticated medical infrastructure, providing free access to medical care and a multitude of elaborate patient education programs. This is why it is very important that programs such as the 'Knowing what matters in diabetes: healthier below 7' campaign increase public awareness of this disease and its complications, as well as the importance of broad risk factor management.
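For illustration, the threshold rules applied throughout this analysis can be summarized in a few lines of code. The cut-offs below (BMI 25/30 kg/m², waist circumference 102/88 cm, BP 140/85 mmHg or antihypertensive medication, HbA1c 7.0%) are those quoted in the text; the function name and record layout are hypothetical conveniences of this sketch.

```python
def classify_participant(bmi, waist_cm, sex, systolic, diastolic,
                         on_antihypertensives, hba1c=None):
    """Threshold rules as quoted in the text; names and layout are illustrative."""
    flags = {}
    # BMI categories (kg/m^2)
    flags["weight"] = ("obese" if bmi > 30 else
                       "overweight" if bmi >= 25 else "normal")
    # Critical waist circumference: > 102 cm (men), > 88 cm (women)
    flags["waist_elevated"] = waist_cm > (102 if sex == "M" else 88)
    # Hypertension: antihypertensive medication or random BP > 140/85 mmHg
    flags["hypertensive"] = (on_antihypertensives or
                             systolic > 140 or diastolic > 85)
    # HbA1c > 7.0% indicates poorly controlled diabetes
    if hba1c is not None:
        flags["poor_glycemic_control"] = hba1c > 7.0
    return flags

# Example participant with values near the reported cohort means
print(classify_participant(bmi=28.1, waist_cm=104, sex="M",
                           systolic=149, diastolic=86,
                           on_antihypertensives=False, hba1c=7.4))
```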
2018-04-03T02:19:29.647Z
2016-01-04T00:00:00.000
{ "year": 2016, "sha1": "dab274799e505adb86eac382d1b74bd6c859245c", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5367495?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "dab274799e505adb86eac382d1b74bd6c859245c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
132941739
pes2o/s2orc
v3-fos-license
Analysis of air quality and nighttime light for Indian urban regions Indian urban regions suffer severe air pollution issues. A 2014 study by WHO highlighted that out of the 20 cities globally with the worst air quality, 13 lie in India. Although insufficient ground monitoring data and incomplete air pollution source characterization impede putting policy measures in place to tackle this issue, remote sensing and GIS can overcome this hurdle to some extent. To find out how much of this hazard is due to economic growth, past research has tried to make use of socio-economic growth indicators like GDP, population or urban area to establish their correlation with air quality in urban centres. Since nightlight has been found to correlate well with economic conditions at the national and city level, an attempt has been made to analyse it together with air quality levels to find regions with a high contribution of anthropogenic emissions. Nighttime light activity was observed through the Day/Night Band (DNB) of the VIIRS sensor, while the air quality levels were obtained for ANG and AOD (using the MODIS sensor) and SO2 and NO2 (using the OMI sensor). We have classified the Indian landmass into 4 air-quality and DNB classes: LowLight-HighPollution, HighLight-HighPollution, LowLight-LowPollution and HighLight-LowPollution for each air quality species using June 2014 data. It was found that around half of urban regions show high AOD and ANG values. On the other hand, almost all urban regions exhibit high SO2 and NO2 values. Introduction Indian cities have some of the worst air pollution levels in the world, as was pointed out by the World Health Organization (WHO) in 2014 [1]. In the absence of sufficient temporal and spatial data from ground monitoring sensors, satellite imagery can be used to study pollution thanks to its long-running data legacy. Socio-economic growth is one of the drivers of pollution; however, it varies with the composition of the economy [2] or social development indicators like literacy, population, etc. [3]. In a previous study [4], satellite-data-derived NO 2 and aerosol optical depth levels for 16 Indian cities were found to have high correlation with population (0.75) and population density (0.56), respectively. As a new approach, and in the absence of province-level socio-economic data, studies are trying to analyze socio-economic growth by using nightlight data [5]. High correlations of nightlight with gross domestic product (GDP) (0.88) and motor-vehicle count (0.91) have been reported at the national level by Katayama and Takeuchi [6] on a global scale. For India, province- or city-level data of gross regional product (GRP) is not available barring a few economic centers. Moreover, in developing countries it is often common for a lot of economic activity to be centered around informal or unregulated industries whose estimates do not show up in national or state-level GDP values [7]. Such a study, which incorporates the informal economic sector too by using nighttime light as a proxy for economic activity to study air quality and human activity, is being carried out for the first time for India and its urban centers. The objective is to analyze air quality species for the Indian landmass and urban regions on the basis of economic activity. The specific aim is to classify the air quality species, angstrom exponent (ANG), aerosol optical depth (AOD), SO 2 and NO 2 (collectively referred to as AQ hence), and nighttime light data into four classes based on histogram-deduced threshold values.
Thus, the percentage of pixels falling within each class shall reveal which AQ species is most related to economic-activity-induced human emissions. Nighttime light data. To study nightlight radiance, the monthly composite of the day/night band (DNB) product [8] of the Visible Infrared Imaging Radiometer Suite (VIIRS) dataset for June 2014 was used. It consists of light only from persistent sources. This data has not been filtered for forest fires, volcanic activity, northern lights or any other activity that may generate light from natural sources. Its spatial resolution is approximately 0.75 km. Land cover map. A MODIS-based global land cover map with a 0.5 km resolution, developed by Broxton et al. [9] on the basis of 10 years (2001-2010) of Collection 5.1 'MCD12Q1' land cover type data, was used. Figure 1 lays out the flowchart of data selection, processing and analysis used to obtain the results. Classification by economic activity and air quality Since economic activity levels can be judged fairly well from the nightlight observed from space, first the Indian administrative region was subset from all the data sources using the shapefiles developed by Hijmans [10]. Since no forest fires were reported during June 2014, the DNB image was used without any corrections. Further, as the DNB monthly composites have higher resolution, the DNB image was downscaled to the coarser resolutions of MODIS and OMI by mean aggregation, henceforth referred to as DNB MODIS and DNB OMI respectively. The downscaling factors used were 20 and 60, respectively. As the objective is to classify regions by socio-economic activity and air quality, we first needed to bin the data into low and high values. For this, binned frequency histograms for each AQ parameter and for the DNB map values were plotted. The histograms were then thresholded either at a valley or a peak of the trend plot. Through these thresholds, it is expected that urban and non-urban regions can be identified. The HL regions signify locations of non-agricultural economic activity. They may not necessarily be urban regions but may include factories or industrial areas often situated outside urban limits. Similarly, LL regions include mountainous, forest, desert and other regions with no or very little human activity. Since all villages do not share the same economic development characteristics, they may fall in an LL or an HL region. In the next part of the processing, an 'urban and built-up' class map was prepared from the land cover map data. Since the class mask was at higher resolution than MODIS and OMI, it was downscaled using a kernel block of size 20 and 60, respectively. Urban regions in various Indian towns and small cities are often not as large as a MODIS or OMI pixel. To preserve those urban regions in the downgraded urban cover mask, if at least ¼ of the pixels in the kernel block belonged to the urban class, the whole block was marked as urban. Using this urban mask, the pixels of urban regions were classified to examine their characteristics (a code sketch of these processing steps is given after the threshold discussion below). Results and discussion After subsetting the data for the Indian boundary region, the DNB image was downscaled to the resolutions of MODIS and OMI, respectively. To obtain the classification thresholds, the histograms shown in Figure 3 were generated. The histogram plots were not found to be bimodal, which otherwise would have allowed easier identification of thresholds. Instead, other inflection-point values of the plots were checked for their suitability as threshold values. For ANG, SO 2 and NO 2 the first valley comes quite early in the plot (at 0.05, 40 (×1000 DU) and 800 microgram/m 2 , respectively).
However, considering that these values are physically too low to serve as thresholds, the next inflection points in the histogram plots were chosen as thresholds. DNB MODIS showed sharper trends in the lower mid-range of 0.15 to 0.75 nanoWatt/cm 2 sr compared to DNB OMI . This was expected, since DNB OMI was obtained by degrading the original DNB image to a greater extent than DNB MODIS . A threshold of 0.3 nanoWatt/cm 2 sr was chosen because a lower inflection value from the plot could not distinguish the human inhabitance of regions based on light. For determining the AOD threshold, the inflection values were adjudged to be either too high or too low compared to their physical suitability. So, we tried to find an approximate AOD equivalent of particulate matter (PM2.5 and PM10) based on the WHO 24-hour mean air-quality guidelines (25 µg/m 3 and 50 µg/m 3 , respectively) [11], using the AOD-PM regression relationships developed for the cities of New Delhi [12] and Agra [13]. The histograms in Figure 3 follow a left-skewed curve, implying that the bulk of pixels have low values. However, for NO 2 , the bell-shaped curve is right-skewed. Thus it can be inferred that higher NO 2 values are more common than lower values. Using the combination of the threshold values of the AQ parameters and DNB (as inferred from the plots in Figure 3) and following the classification scheme mentioned earlier, classified maps for India were developed. These maps are shown in Figure 4. From the classification maps in Figure 4, it was found that most of the LLHP and HLHP regions lie in the mainland northern regions. The eastern coast of southern India shows dominance of the HLHP class, while the western region shows the existence of LLLP regions. A region on the north-western edge of India shows HLHP for AOD, SO 2 and NO 2 yet shows HLLP for ANG values. Since low ANG values point to the existence of mineral dust, it can be deduced that those regions are highly industrialized and face severe pollution not only from man-made sources (as can be seen from the NO 2 and SO 2 maps) but also from a high amount of mineral dust. The small islands of HLHP within the HLLP belt of the ANG-DNB classification map are actually the cities of Ludhiana, Amritsar and New Delhi. Amongst cities, they tend to have some of the highest ANG levels in the country [4]. To explore which classes the urban regions belong to, these classification maps were overlaid with the land cover map. The comparison of urban and non-urban pixel values is presented in Figure 5. The urban pixels can easily be spotted due to their high DNB values. However, considering the DNB threshold, which is set at 0.3, there are considerable locations which, despite not being urban or built-up, still have high DNB and AQ values. They are likely to be the suburban areas around the cities. In our analysis, urban pixels represented only 0.7% of all pixels, which is considerably less than the reported urban area (2.35% of the Indian landmass) for the year 2008 by the Department of Land Resources [14]. Thus there is under-representation of urban regions in the land cover map. Also, the identity of the original urban-mask pixels has been diminished due to resampling and downgrading. In Table 1, the classification result for urban regions is presented. As discussed earlier, only high-nightlight (HL) region pixels were identified as urban. Incidentally, the percentage of pixels identified as HLHP and HLLP is exactly the same for the ANG-DNB and AOD-DNB maps. It remains to be checked whether they are indeed also the same pixels.
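As anticipated above, the following Python sketch reproduces the three core processing steps: block-mean downscaling, the ≥¼-urban kernel rule, and the two-way thresholding into the four DNB-AQ classes. The 0.3 nanoWatt/cm² sr DNB threshold is the one discussed in the text; the AOD threshold of 0.5, the synthetic rasters and the array shapes are placeholders of this sketch, not values from the paper.

```python
import numpy as np

def downscale_mean(img, factor):
    """Block-mean aggregation (factor 20 -> MODIS grid, 60 -> OMI grid)."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def downscale_urban_mask(mask, factor, min_frac=0.25):
    """A coarse block is kept as urban if >= 1/4 of its pixels are urban."""
    return downscale_mean(mask.astype(float), factor) >= min_frac

def classify(dnb, aq, dnb_thresh=0.3, aq_thresh=0.5):
    """0 = LLLP, 1 = LLHP, 2 = HLLP, 3 = HLHP."""
    return 2 * (dnb >= dnb_thresh).astype(int) + (aq >= aq_thresh).astype(int)

# Toy demonstration on synthetic rasters at the original DNB resolution.
rng = np.random.default_rng(1)
dnb = downscale_mean(rng.gamma(1.0, 0.3, size=(200, 200)), 20)
aod = downscale_mean(rng.gamma(2.0, 0.3, size=(200, 200)), 20)
urban = downscale_urban_mask(rng.random((200, 200)) < 0.25, 20)  # toy urban density
classes = classify(dnb, aod)
for c, name in enumerate(["LLLP", "LLHP", "HLLP", "HLHP"]):
    share = (classes[urban] == c).mean() if urban.any() else float("nan")
    print(f"{name}: {share:.2f} of urban pixels")
```

Note that expressing the urban rule through the same block-mean helper works because the mean of a 0/1 mask over a block equals the urban pixel fraction in that block.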
The problem of SO 2 is comparatively much more pronounced in HL regions, while high NO 2 pollution seems to exist almost only in HL regions. The analysis can also be used to predict disease outbreaks likely to be borne out of the different polluting species. As a next step, a more recent land cover map should be used to generate the urban cover mask and analyze this in detail. If sufficient urban pixels can be detected, then thresholding could be done on the urban pixels to assess the classification of urban regions. Also, combining the result with a population density raster can indicate the number of people at health risk due to anthropogenic pollution. To study a long-term trend, the workflow of this analysis can be adapted to the Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) datasets. DMSP-OLS has a long-running legacy and can be used with MODIS and OMI data, which are available since 2001. As a final step, if this analysis is performed for multiple periods using the DMSP-OLS and VIIRS-DNB datasets, how cities transition from one AQ-DNB class to another can be studied. Conclusion In this study, the Indian landmass was classified into regions with 4 different levels of air quality and economic activity by thresholding their histograms into 'high' and 'low'. By using an urban land cover mask, this process was also applied to urban areas, where it was found that all urban areas lie in high-light regions. NO 2 levels were found to be significantly high for about 90% of the urban area, while SO 2 levels were found to be high for about 74% of the urban area. The ANG and AOD classifications exhibited similar behavior, with the same percentage of pixels being identified as high (about 55%). Using the same methodology, we wish to process DMSP-OLS data and study how the classification behavior of urban regions has changed in the past 15 years. This can help locate regions where air quality has changed under the influence of their economic activities.
2019-04-26T14:21:39.435Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "0b39735c1618a1a7dbb3572c055d82ba450879c9", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/37/1/012077", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b772b194ad754c0414a63430b81135cc80081891", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Geography" ] }
14934995
pes2o/s2orc
v3-fos-license
Association of the DIO2 Gene Single Nucleotide Polymorphisms with Recurrent Depressive Disorder* Genetic factors may play a role in the etiology of depressive disorder. The type 2 iodothyronine deiodinase gene (DIO2), encoding the enzyme catalyzing the conversion of T4 to T3, is suggested to play a role in recurrent depressive disorder (rDD). The current study investigates whether specific single nucleotide polymorphisms (SNPs) of the DIO2 gene, Thr92Ala (T/C; rs225014) or ORFa-Gly3Asp (C/T; rs12885300), correlate with the risk for recurrent depression. Genotypes for these two SNPs were determined in 179 patients meeting the ICD-10 criteria for rDD (rDD group) and in 152 healthy individuals (control group) using a polymerase chain reaction (PCR)-based method. A specific variant of the DIO2 gene, namely the CC genotype of the Thr92Ala polymorphism, was more frequently found in healthy subjects than in patients with depression, which suggests that it could potentially serve as a marker of a lower risk for recurrent depressive disorder. The distribution of four haplotypes was also significantly different between the two study groups, with the TC (Thr-Gly) haplotype more frequently detected in patients with depression. In conclusion, data generated from this study suggest for the first time that the DIO2 gene may play a role in the etiology of the disease, and thus should be further investigated. INTRODUCTION Depressive disorder is one of the most common psychiatric diseases (Whiteford et al., 2013). The existing evidence suggests a heterogenic etiology with a possible genetic background (Belmaker & Agam, 2008). One of the hypotheses postulates a deregulation of the hypothalamic-pituitary-thyroid (HPT) axis in depression. In adults, thyroid diseases can lead to various clinical manifestations (Bauer, 2008). For example, hypothyroidism causes fatigue and impairment of psychomotor speed, attention, concentration, and memory (Samuels, 2008; Bonnin et al., 2010; Almandoz & Gharib, 2012). Hypothyroidism is also associated with bipolar affective disorders, depression, or loss of cognitive functions, especially in the elderly (Bonnin et al., 2010; Bauer et al., 2002; Fountoulakis et al., 2006; Bunevicius & Prange, 2010). The limbic system, where thyroid hormone (TH) receptors play a particularly essential role, is implicated in the pathogenesis of depression (Murray et al., 2011; Williams, 2008). Changes in TH levels associated with depression include an increase in thyroxine (T4) concentrations and elevated levels of reverse triiodothyronine (rT3) in the cerebrospinal fluid (CSF) (Kirkegaard & Faber, 1991), as well as elevated levels of circulating T4 (Williams, 2008) and lower levels of circulating T3 (Stipcević et al., 2008).
TH levels may also be affected by pharmacological treatment, including antidepressants (Bauer et al., 2008). In the rat, treatment with various antidepressants results primarily in changes of local D2 activities and, to a lesser extent, of D3 activities (Eravci et al., 2000). Additionally, treatment with antidepressants results in an increase of T3 in the myelin fraction of homogenates of the amygdala, an essential structure implicated in emotion and fear regulation (Pinna et al., 2003). Various hormones of the thyroid axis, including T3, have been used to treat depression as monotherapy or, more commonly, in combination with standard antidepressants (Cooper-Kazaz et al., 2009; Joffe, 2011). Interestingly, treatment with antidepressants, including a selective serotonin reuptake inhibitor, results in the induction of D2 (Baumgartner et al., 1994). The D2 protein is mainly expressed in glial cells of various regions of the central nervous system (CNS) and plays an important role in mediating TH action both during CNS development and in the adult brain (Bauer et al., 2008). It is also suggested that D2 protects the thyroid status of the brain under conditions of TH deficiency (Galton et al., 2007). There are results showing that single nucleotide polymorphisms (SNPs) in the DIO2 gene may be associated with TH levels (Peeters et al., 2005) and that there is an association between SNPs within the DIO2 gene, the D2 protein level and its enzymatic activity. For example, the T allele of the Thr92Ala (T/C) variant was found to be related to higher D2 activity/TH levels, while the C allele of the ORFa-Gly3Asp (C/T) polymorphism resulted in lower D2 activity/TH levels (Bauer et al., 2008; Peeters et al., 2005; Canani et al., 2005). DIO2 gene polymorphisms are linked to bipolar disorder, mental retardation and well-being (Guo et al., 2004; He et al., 2009; Panicker et al., 2009). Considering the possible changes in TH levels in depression, and the role of the D2 enzyme in maintaining active TH levels, the current study examined a potential genetic contribution of the DIO2 gene to the etiology of recurrent depressive disorders (rDD). The aim of the study was to investigate whether two common SNPs in the gene, Thr92Ala and ORFa-Gly3Asp, linked to expression/stability of the D2 protein (Peeters et al., 2005; Canani et al., 2005), are associated with rDD. Subjects. The study enrolled 179 patients diagnosed with and treated for rDD (110 females, 61.45%, and 69 males, 38.55%). The diagnosis was established according to ICD-10 (1992) criteria (F33.0-F33.8). A medical history was obtained and assessed using the standardized Composite International Diagnostic Interview (CIDI) form (Patten, 1999). The Hamilton Depression Rating Scale (HDRS) was used to assess the level of depressive symptoms. Next, the number of depressive episodes, the duration of disease and the age of the patient at disease onset were recorded for each individual. The control group (CG) consisted of 152 healthy subjects (87 females, 57.24%, and 65 males, 42.76%) with a negative family history for psychiatric disorders. Healthy controls were community volunteers enrolled in the study on the basis of a CIDI psychiatric interview (Patten, 1999).
Individuals (both patients and CG) with other psychiatric diagnoses within the axis I and II disorders were excluded from the current study. Severe or chronic diseases with confirmed inflammatory or autoimmune etiology served as an additional exclusion criterion. All study subjects (patients and CG) were unrelated individuals from central Poland. To avoid a population stratification effect, genotypes were determined only in individuals of Polish origin, i.e., all four grandparents identified themselves to be of Polish origin. The study protocol had earlier been approved by the Local Bioethics Committee. Genotyping of SNPs. Peripheral blood was collected and genomic DNA was extracted using a commercial isolation kit according to the manufacturer's protocol (A&A Biotechnology, Gdańsk, Poland). The rs225014 SNP is a polymorphism at nucleotide 674 of the D2 sequence predicting a threonine (Thr) to alanine (Ala) substitution at codon 92 (Thr92Ala). The rs12885300 C/T polymorphism is in the most upstream short open reading frame (ORFa-Gly3Asp) of the 5' untranslated region of DIO2. The region containing the Thr92Ala and ORFa-Gly3Asp polymorphisms was amplified by a PCR-based method as described by He and coworkers (2009) and Dora and coworkers (2010), with some modifications. Statistical analysis. The results are reported as percentages (%) or means with standard deviations (± S.D.). In order to determine the association between SNPs within the DIO2 gene and recurrent depressive disorder (rDD), the χ² test was used. A post-hoc power analysis was performed with the use of the non-central χ² distribution. The analysis of association was based on the 95% confidence interval (CI) for the disease odds ratio (OR dis ), calculated with the use of a logistic regression model including sex and age as covariates. Deviations from Hardy-Weinberg equilibrium were determined by comparison of observed genotype prevalence rates with the expected ones. The Hardy-Weinberg equilibrium for genotype frequencies in the rDD group was calculated using χ² tests. In all the analyses, p ≤ 0.05 was accepted as the level of statistical significance. RESULTS No significant differences were found between rDD patients and CG with respect to gender (p > 0.05). Groups were gender matched (p = 0.44) but varied significantly with respect to the age distribution (p < 0.0001). The mean age was higher for rDD patients than for CG: 48.5 ± 10.8 years vs. 31.7 ± 9.1 years (Table 1). No significant difference in the distribution of demographic and clinical characteristics for different genotypes was observed, except for the age difference for the Thr92Ala polymorphism between rDD patients and the control group (Kruskal-Wallis test; p = 0.048). The observed differences in age and gender distribution between groups and certain genotypes necessitated age and sex adjustment using logistic regression. A distribution comparison between rDD patients and the control group with respect to genotypes/alleles of the Thr92Ala and ORFa-Gly3Asp polymorphisms did not reveal a significant difference except for the CC genotype of the Thr92Ala polymorphism. The CC genotype of the Thr92Ala polymorphism was significantly less frequent in rDD patients than in the controls. A summary of results for genotype and allele frequencies within the examined SNPs is presented in Table 2. The results of the DIO2 gene haplotype analysis indicate a significant difference in the distribution of haplotypes for the combined SNPs between rDD patients and controls (Table 3).
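For readers wishing to reproduce this type of analysis, the sketch below implements the two statistical checks described above: a χ² goodness-of-fit test for Hardy-Weinberg equilibrium and, in outline, the age/sex-adjusted odds ratio via logistic regression. The genotype counts are placeholders rather than the study's data, and the statsmodels call is one standard way to obtain the adjusted OR, not necessarily the exact procedure used here.

```python
import numpy as np
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                      # frequency of allele A
    expected = np.array([p**2, 2*p*(1 - p), (1 - p)**2]) * n
    observed = np.array([n_AA, n_Aa, n_aa])
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)  # df = 3 classes - 1 - 1 estimated allele freq.

# Placeholder genotype counts (not the study's data):
print(hwe_chi2(n_AA=95, n_Aa=70, n_aa=14))

# The age/sex-adjusted odds ratio could be obtained with statsmodels, e.g.:
#   import statsmodels.api as sm
#   m = sm.Logit(df["case"], sm.add_constant(df[["cc", "age", "sex"]])).fit()
#   np.exp(m.params["cc"]), np.exp(m.conf_int().loc["cc"])   # OR and 95% CI
```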
DISCUSSION This study is the first to test the hypothesis that DIO2 polymorphisms are linked to depressive disorders. Our results suggest that common variation in the DIO2 gene may be linked to a lower risk for rDD. The CC genotype is present in a small proportion of the population, approximately 4% of our study population. In the group of patients we found only one person with the CC genotype. Interestingly, a relatively low prevalence of the CC genotype (16%) was reported by Panicker et al. (2009) for a larger group from Bristol, United Kingdom. Similarly, Babenko and coworkers (2012) observed an 8% presence of Ala/Ala (CC) homozygotes in the Caucasian population. In a cohort consisting of 946 subjects, Zevenbergen et al. (2014) also found the CC genotype to be less frequent (16.6%). Differences in Thr92Ala genotype frequencies were noted between the Caucasian population and two other, Asian and Indian, populations (He et al., 2009; Nair et al., 2012). Ethnic and geographical differences are also suggested by Guerra et al. (2013), who investigated the Thr92Ala polymorphism in the Brazilian population. By contrast to the above effect, neither the allele nor the genotype frequencies of the ORFa-Gly3Asp polymorphism were significantly different between the two study groups. However, the frequency distribution of ORFa-Gly3Asp genotypes in our study remains in agreement with previous observations for the Caucasian population (Peeters et al., 2005; de Jong et al., 2007; Hoftijzer et al., 2011). The obtained results show that the TC (Thr-Gly) haplotype is statistically more frequently detected in rDD patients. The ORFa-Gly3Asp is located in the 5' untranslated region and is a functional variant related to the enzyme activity. Our finding that the TC haplotype increases rDD risk may possibly support both the view that depressive disorder is a low-T3 syndrome (Baumgartner et al., 1998; Premachandra et al., 2006) and the suggestion of supplementing antidepressant therapy with TH, especially in some cases. DIO2 gene polymorphisms have been linked to psychiatric disorders. For example, the DIO2 gene is linked with bipolar disorder in the Asian population (He et al., 2009). Activity of D2 in the brain may be a determinant of well-being and neurocognitive function (Panicker et al., 2009). It is known that the CC genotype of the Thr92Ala polymorphism is relevant to TH metabolism, as it is linked to a greater therapeutic improvement in hypothyroid patients on T4/T3 combination hormone replacement as compared with T4 monotherapy. It is also associated with poorer psychological well-being, as measured by the General Health Questionnaire (GHQ) score (Panicker et al., 2009).
The possible effect of the DIO2 gene polymorphism on TH metabolism could be related to the effect of a small amino acid substitution in the D2 protein, thus influencing its level and/or activity. Studies on subclinical hypothyroidism show that subtle changes in TH bioavailability may have clear effects on well-being and neurocognitive function (Cooper, 2001; Toft, 2001). Previous studies have shown that the Thr92Ala polymorphism is related to minimal effects on circulating levels of TH (Canani et al., 2005; Torlontano et al., 2008), whereas the ORFa-Gly3Asp polymorphism was associated with lower serum T4 and free T4 but unaltered TSH and T3 levels (Peeters et al., 2005; Canani et al., 2005; Peltsverger et al., 2012). It was also postulated that the Thr92Ala substitution may result in an instability loop in the D2 protein. Such instability may affect ubiquitination, prompting D2 protein degradation and impairing the ability of the D2 enzyme to increase its activity in the presence of low T4 levels, reducing the ability to maintain homeostasis and increasing dependence on serum T3 as a source of T3 in the brain (Dentice et al., 2005). Thus, the contribution of DIO2 polymorphisms to the etiology of depressive disorders could be related to increased D2 activity and subsequently to TH levels in the brain. The finding that the CC genotype may act as a protective factor appears to be contradictory to previous observations of low TH levels in patients with depression (Rao et al., 1996). The results can also broaden the discussion on the involvement of hyperthyroidism in depressive disorder. Hyperthyroidism is accompanied by some psychiatric symptoms, including depressive symptoms (Tylor, 1975). The possible protective role of the CC genotype may be related to lower levels of TH, as the C allele is associated with decreased thyroid-stimulating hormone (TSH)-stimulated release of T3 and lower levels of TSH (Butler et al., 2010; Zevenbergen et al., 2014). On the other hand, it is possible that the Thr92Ala variant may have no specific function or may be in linkage equilibrium with other variants. Given the involvement of the inflammatory process in the etiology of depression (Anisman, 2011; Maes et al., 2011) and taking into consideration more recent data suggesting a role of iodothyronine deiodinases in the inflammation process (Boelen et al., 2011), another potential explanation for the obtained results could be proposed. A significant increase in both DIO2 expression and D2 levels was observed after lipopolysaccharide (LPS) induction in lung injury, while the Ala (C) allele was considered a protective factor in sepsis and severe lung injury (Ma et al., 2011). Further, a markedly higher number of D2-protein-positive cells is characteristically found in osteoarthritis (Bos et al., 2012). Thus, an increase in D2 activity would be characteristic of the inflammation process. Similar to the above-mentioned diseases, depression is characterized by an increased number of leukocytes and pro-inflammatory cytokine levels in the peripheral blood (Anisman, 2011; Maes et al., 2011). In addition, LPS, which is associated with increased levels of both DIO2 and D2, was shown to be able to induce depression (Maes & Leunis, 2008). Thus, our results, showing a significantly higher frequency of the CC genotype of Thr92Ala in healthy subjects with no inflammatory component involved, which potentially could be linked to lower D2 activity, would be in agreement with the above
hypothesis. There are several limitations associated with our study. First, a modest study cohort was used to determine the genetic variants associated with rDD. It is acknowledged that a relatively larger study cohort would be required in order to confirm the trends observed in the current study. Second, the current study focuses on two functionally known polymorphisms, and other DIO2 polymorphisms should be investigated in order to fully characterize the potential involvement of the DIO2 gene in the etiology of depressive disorder. Further studies involving a larger cohort, characterization of TH levels, as well as markers of the inflammation process should provide a better understanding of the role of DIO2 polymorphisms in depression. CONCLUSIONS DIO2 polymorphisms may constitute potential risk factors in depression. Depression is not simply a thyroid-related illness but should rather be considered a complex process with an inflammatory component involved. Table 1. Descriptive statistics for demographic and clinical data in rDD patients and the CG for different genotypes of the Thr92Ala and ORFa-Gly3Asp polymorphisms. p stands for the p-value of the appropriate statistical test: χ² or Kruskal-Wallis ANOVA. rDD, recurrent depressive disorder; CG, control group; p, level of statistical significance; F, female; M, male; ±, standard deviation; HDRS, Hamilton Depression Rating Scale. Table 2. Comparison of genotype and allele frequencies of the Thr92Ala and ORFa-Gly3Asp polymorphisms in rDD patients and the control group. Odds ratios (OR) with 95% confidence intervals (95% CI) were calculated by the method of logistic regression with age and sex adjustment. Table 3. The DIO2 gene haplotype analysis for rDD patients and CG. rDD, recurrent depressive disorder; CG, control group; p, level of statistical significance; χ², chi-square statistic; df, degrees of freedom; %, percentages; 95% CI, 95% confidence interval; OR, odds ratio.
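As a complement to the adjusted analysis summarized in Table 2, a crude (unadjusted) odds ratio with a Woolf 95% confidence interval can be computed directly from a 2×2 genotype-by-group table, as sketched below; the counts shown are placeholders rather than the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR for exposure (e.g., CC genotype) vs. outcome, with Woolf 95% CI.
    a: exposed cases, b: unexposed cases, c: exposed controls, d: unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)            # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Placeholder 2x2 counts (not the study's data): CC carriers among cases/controls
print(odds_ratio_ci(a=1, b=178, c=6, d=146))
```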
Is there a standard for surgical therapy of hepatocellular carcinoma in healthy and cirrhotic liver? A comparison of eight guidelines

Background and aims: Liver resection (LR) and transplantation are the most reliable treatments for hepatocellular carcinoma (HCC). The aim was to compare different guidelines regarding the indications for resection and transplantation for HCC with and without underlying cirrhosis.

Methods: We compared the following guidelines published after 1 January 2010: American (American Association for the Study of Liver Diseases (AASLD)), Spanish (Sociedad Espanola de Oncologia Medica (SEOM)), European (European Association for the Study of the Liver-European Organization for Research and Treatment of Cancer (EASL-EORTC) and European Society for Medical Oncology-European Society of Digestive Oncology (ESMO-ESDO)), Asian (Asian Pacific Association for the Study of the Liver (APASL)), Japanese (Japan Society of Hepatology (JSH)), Italian (Associazione Italiana Oncologia Medica (AIOM)) and German (S3) guidelines.

Results: All guidelines recommend resection as the therapy of choice in healthy liver. Guidelines based on the Barcelona Clinic Liver Cancer staging system recommend resection for single HCC <2 cm in Child-Pugh A cirrhosis and for HCC ≤5 cm with normal bilirubin and portal pressure, whereas transplantation is recommended for multiple tumours within the Milan criteria and for single tumours ≤5 cm with advanced liver dysfunction. Patients with HCC and Child-Pugh C cirrhosis are not candidates for transplantation. The JSH guidelines recommend LR for patients with Child-Pugh A/B cirrhosis and HCC without tumour size restriction; the APASL guidelines in general exclude patients with Child-Pugh A cirrhosis from transplantation. In patients with Child-Pugh B cirrhosis, transplantation is the second-line therapy for patients within the Milan criteria if resection is not possible. The German and Italian guidelines recommend transplantation for all patients within the Milan criteria.

Conclusions: Whereas resection is the standard therapy for HCC in healthy liver, a standard regarding the indications for LR and transplantation for HCC in cirrhotic liver does not exist, although nearly all guidelines claim to be evidence based. Surprisingly, despite the European guidelines, Germany and Italy use their own national guidelines, which partially differ from the European ones. Possible solutions to these problems are discussed.

Article summary
▸ Surgery is the only curative option for hepatocellular carcinoma (HCC).
▸ Several guidelines exist that provide recommendations regarding the indications for resection and transplantation.
▸ Although nearly all guidelines claim to be evidence based, we find consensus only with regard to the indications for liver resection and transplantation for HCC in healthy liver; a standard for the treatment of HCC with underlying liver cirrhosis does not exist.
▸ Traditional guidelines are based on efficacy but not yet effectiveness data.
▸ Only when outcomes, conditions, patient characteristics and interventions are described transparently will it be possible to discuss possible reasons for different guidelines in different countries.
▸ Progress in the development of guidelines will be made when the reasons that explain the differences in the existing guidelines can be identified.
▸ Promising prognostic factors considering tumour biology as well as liver function tests should be included in future guidelines.

INTRODUCTION
Hepatocellular carcinoma (HCC) is the 5th most common cancer and the 3rd leading cause of cancer-related deaths worldwide. 1 2 Surgical resection and transplantation are the most reliable treatments for local control and are to date the only potentially curative treatments. Surgical resection is the treatment of choice in patients without cirrhosis, who account for 5% of cases in Western countries and for about 40% in Asia, as these patients tolerate major resections with low morbidity. 3 When HCC is diagnosed in cirrhotic liver, the indication for liver resection (LR) should be given carefully. 4 5 The 5-year survival after resection can exceed 50%. 3 Early diagnosis and accurate evaluation of preoperative liver function allow the identification of those patients in whom resection would carry a higher probability of postoperative liver failure. 3 Next to the Child-Pugh classification, the assessment of the presence of portal hypertension also plays a central role in the identification of candidates for surgical resection. Studies have shown that a normal bilirubin concentration and a hepatic vein pressure gradient <10 mm Hg are the best predictors of excellent outcomes after surgery, with almost no risk of postoperative liver failure. 4 6 These selected patients may achieve a 5-year survival of more than 70%, 3 7 whereas a 5-year survival of <50% is to be expected in patients with portal hypertension. The 5-year survival of patients with an elevated bilirubin value and portal hypertension and/or multifocal disease is <30%, regardless of their Child-Pugh stage. 4 8 Different guidelines exist for the same problem. By presenting similarities and differences between the guidelines of different countries regarding the indications for LR and liver transplantation (LT), as well as the recommendations regarding expansion of the transplant criteria, bridging and downstaging therapies and living-donor LT (LDLT), the aim of this work is to evaluate, interpret and present solutions for the problems encountered.

MATERIAL AND METHODS
Systematic literature search
To generate a standardised basis for the systematic literature search, uniform comparison criteria were established within the guideline group. The criteria for selection were: guidelines should be in English, German, Italian or Spanish and published after 1 January 2010, to ensure that outdated guidelines were excluded. The guidelines should be generated by expert groups of internationally recognised organisations and based on evidence-based publications. If evidence-based guidelines were not provided, we included consensus-based guidelines. Tumour classification may or may not be based on the Barcelona Clinic Liver Cancer (BCLC) staging system, 9 10 which links the staging of HCC in cirrhosis with treatment modalities. We performed a systematic search with Ovid, screening the Medline, Cochrane and PubMed databases. Table 1 lists the results in English. Our key words for the search were "guidelines hepatocellular carcinoma" and "guidelines HCC". On the websites of medical institutions we found additional results in the respective native languages (table 2). We conducted our online search on 21 April 2016.

Selection of the guidelines
Two authors (GM and MK) screened the results manually and independently by looking at the titles and abstracts. If the inclusion criteria were met, the manuscript was analysed.
The search of the databases in English retrieved the following guidelines: American (AASLD), 3 Asian (APASL), 11 Hong Kong, 12 Japanese (JSH), 13 14 European (EASL-EORTC 15 and ESMO-ESDO 16) and Spanish (SEOM). 17 The Spanish guidelines and the evidence-based Japanese guidelines are synopses. The entire version of the Spanish guidelines in the original language is freely accessible only to members of the Spanish Society of Medical Oncology (http://www.seom.org). As we could not access the entire version, we included the synopsis in our analysis. The article on the updated version of the evidence-based Japanese guidelines provided, in its introduction, a link to the homepage of the Japanese Society of Hepatology, where the entire new version of the guidelines was freely accessible in English (http://www.jsh.or.jp/English/). We excluded the consensus-based Japanese guidelines, as evidence-based guidelines were also found, and the Hong Kong consensus recommendations, because of a rather small population and because of the inclusion of the APASL guidelines. The search of the medical institutions in table 2 found the following regional guidelines: Italian 18 and German 19 in the original language, and the full version of the evidence-based Japanese guidelines. These were all included. We finally included in our analysis a total of five international and three regional guidelines, which are listed in table 3.

Comparison of the guidelines
We compared the guidelines included in table 3 regarding the indications for LR and LT in patients with and without cirrhosis. Additionally, we analysed the recommendations regarding expansion of the transplantation criteria beyond Milan, bridging therapies for patients on the waiting list for transplantation, downstaging of patients initially beyond the Milan criteria, and LDLT. In order to assess the treatment recommendations of the different guidelines in a comparable way, we generated uniform comparison criteria. To present this comparison clearly, we chose a colour coding: when no, or no clear, treatment recommendation was given, a black circle was assigned. When a treatment option was recommended as first-line therapy, a green circle was assigned. A yellow circle was assigned when a therapy can be carried out or is recommended as second-line therapy. Rejection of a therapy was symbolised by a red-circled white dot. In this way, we could summarise the treatment options and their associated recommendations clearly in tabular form.
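For readers who wish to reproduce such a comparison table, the colour-coded scheme just described maps naturally onto a small lookup structure. The sketch below is our own illustration, not part of the original methods; the enum and variable names are ours, and the example row simply encodes the finding (reported below and in Figure 1) that LR is first-line in all analysed guidelines for resectable HCC without cirrhosis.

```python
from enum import Enum

class Rec(Enum):
    """Colour codes used in the guideline comparison figures."""
    NONE_GIVEN = "black circle"          # no, or no clear, recommendation
    FIRST_LINE = "green circle"          # recommended as first-line therapy
    OPTION = "yellow circle"             # may be carried out / second-line
    REJECTED = "red-circled white dot"   # therapy explicitly rejected

GUIDELINES = ["AASLD", "SEOM", "EASL-EORTC", "ESMO-ESDO",
              "APASL", "JSH", "AIOM", "German S3"]

# Example row: liver resection for resectable HCC without cirrhosis is
# first-line in all eight analysed guidelines.
lr_without_cirrhosis = {g: Rec.FIRST_LINE for g in GUIDELINES}

print(lr_without_cirrhosis["AASLD"].value)   # -> "green circle"
```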
Surgical treatment of HCC without cirrhosis
In all analysed guidelines, surgical resection is the treatment of choice for resectable HCC in the absence of cirrhosis. In several guidelines, a more precise indication is given. According to the Spanish guidelines, 17 LR is preferred in patients with early-stage HCC who have no cirrhosis and an anticipated liver remnant of at least 20%. 4 According to the ESMO-ESDO guidelines, 16 resection is the recommended treatment in patients without advanced fibrosis, as long as an R0 resection can be carried out without causing postoperative liver failure due to a too small liver remnant. 20 The German S3-guidelines 19 define the criteria of non-resectability as follows: non-resectable extrahepatic tumour manifestation, the patient's general comorbidities, tumour infiltration of all three liver veins, and a too small liver remnant. 3 21 Re-resection in case of recurrence also appears feasible, as 5-year survival rates of up to 80% can be achieved as long as no extrahepatic tumour manifestation is found. 22 Adequate postoperative liver function and portal hypertension need to be taken into account when judging functional resectability. In healthy liver, a minimum of 25-30% of liver parenchyma is needed to prevent the risk of postoperative liver failure. 19 Only the Italian 18 and German S3 19 guidelines address LT in the absence of cirrhosis; according to the German S3-guidelines, LT without cirrhosis should only be considered in the specific case of local recurrence of fibrolamellar carcinoma in the absence of lymph node metastases. 23 Figure 1 illustrates the comparison of the guidelines using the assigned colour codes as described in the methods section.

Surgical treatment of HCC with cirrhosis
The American, 3 Spanish (SEOM) 17 and European (EASL-EORTC 15 and ESMO-ESDO 16) guidelines base the therapy of HCC on the BCLC. For this reason, their indications for resection and transplantation are mostly similar. The Asian, 11 Italian 18 and German 19 guidelines base the treatment on the Child-Pugh score. The Japanese evidence-based guidelines base the treatment algorithm on three major factors: liver function (Child-Pugh score) and the number and size of tumours.

Child-Pugh class A
According to the AASLD guidelines, 3 resection is the first-line therapy for patients who have a single lesion, irrespective of size, with preserved liver function, normal bilirubin and a hepatic vein pressure gradient <10 mm Hg. Increased bilirubin, significant portal hypertension or minor fluid retention requiring diuretic therapy excludes resection even in Child A cirrhosis, and LT is indicated if the patient is within the Milan criteria (BCLC-A). In case of multinodular HCC within the Milan criteria, LT is the first-line therapy; resection is not indicated. For multinodular tumours outside the transplant criteria, neither resection nor transplantation is indicated. The SEOM guidelines 17 recommend LR for patients with solitary or limited multifocal HCC (stages BCLC-0 and BCLC-A), with no major vascular invasion or extrahepatic spread, no portal hypertension (defined as a hepatic venous pressure gradient <11 mm Hg or platelet count >100 000), adequate liver reserve and an anticipated liver remnant of at least 30-40%. 4 Patients within the Milan criteria could be considered for LT from either a dead or a living donor. 17 Similarly, according to the EASL-EORTC guidelines, 15 resection is the first-line therapy option for patients with solitary tumours and well-preserved liver function, defined as normal bilirubin with either a hepatic venous pressure gradient <10 mm Hg or a platelet count ≥100 000. LT is the first treatment choice for patients with small multinodular tumours meeting the Milan criteria (≤3 nodules ≤3 cm) or those with single tumours ≤5 cm and advanced liver dysfunction. In case of recurrence, the patient is reassessed with BCLC and treated accordingly. According to the ESMO-ESDO guidelines, 16 in case of cirrhosis, resection is effective and safe (postoperative mortality <5%) in early BCLC stages (0 and A), provided that one is dealing with a single lesion, a good performance status and no clinically important portal hypertension. 24 25
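As a compact illustration of the BCLC-based decision rules just summarised, the following sketch encodes the Child-Pugh A branch using the EASL-EORTC thresholds quoted above. It is ours, not an algorithm published in any of the guidelines; it deliberately omits performance status, vascular invasion and extrahepatic spread, and the function name is a hypothetical one.

```python
def child_a_recommendation(n_tumours: int, max_size_cm: float,
                           normal_bilirubin: bool, hvpg_mmHg: float,
                           platelets_per_uL: int) -> str:
    """Rough triage for HCC in Child-Pugh A cirrhosis, following the
    BCLC-based guideline wording above (EASL-EORTC thresholds)."""
    # Well-preserved liver function: normal bilirubin plus either a low
    # hepatic venous pressure gradient or an adequate platelet count.
    preserved = normal_bilirubin and (hvpg_mmHg < 10 or platelets_per_uL >= 100_000)
    # Milan criteria: single tumour <=5 cm, or up to 3 nodules each <=3 cm.
    within_milan = ((n_tumours == 1 and max_size_cm <= 5.0) or
                    (n_tumours <= 3 and max_size_cm <= 3.0))
    if n_tumours == 1 and preserved:
        return "resection (first-line)"
    if within_milan:
        return "liver transplantation (first-line)"
    return "neither resection nor transplantation indicated"

# A solitary 4 cm tumour, normal bilirubin, HVPG 8 mm Hg -> resection.
print(child_a_recommendation(1, 4.0, True, 8.0, 150_000))
```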
LR is the first-line curative treatment of solitary or multifocal HCC confined to the liver, anatomically resectable and with satisfactory liver function reserve, according to the APASL guidelines. 11 Definite contraindications for resection are distant metastasis, main portal vein thrombosis and inferior vena cava thrombosis. In case of non-resectable HCC within the Milan criteria in Child A cirrhosis, local ablation is recommended. According to the evidence-based Japanese guidelines, LR is indicated for HCC if there are three or fewer tumours and all are limited to the liver; there is no restriction on tumour size. 8 26 It is suggested that patients with tumour invasion of the portal vein be considered for surgery if the tumour has not progressed beyond the first-order branches. In fact, portal vein invasion is consistently reported as the most powerful prognostic factor for HCC. 27-29 No transplantation is indicated at this stage of cirrhosis. According to the Italian 18 and German 19 guidelines, LT is the treatment of choice for patients with Child-Pugh A cirrhosis within the Milan criteria. 3 30 According to the Italian guidelines, hepatic resection can be performed in Child-Pugh A patients within the Milan criteria who are not eligible for transplantation (age, comorbidities). The best survival results are achieved in patients with good performance status, without comorbidities and with single tumours. For a single tumour of 2-3 cm, the 5-year survival is 60-70% and the perioperative mortality is about 2-3%. 31-34 Portal hypertension (portal-hepatic gradient >12 mm Hg or platelet count <100 000/mL with splenomegaly or oesophageal varices) is associated with poor prognosis, but does not exclude resection in well-selected patients. 8 In case of unifocal HCC beyond the Milan criteria with regard to size (>5 cm), surgical resection is the main indication, if feasible and if the liver remnant is large enough. According to the German S3-guidelines, 19 patients with Child A cirrhosis who are not suitable for transplantation can be resected or treated with radiofrequency ablation (RFA) according to tumour size and number. Adequate postoperative liver function and portal hypertension need to be taken into account when judging functional resectability. In Child A cirrhosis, a minimum of 40% of liver parenchyma is needed to minimise the risk of postoperative liver failure. 19 Figure 2 shows the comparison of the indications for LR and LT using the assigned colour codes as described in the methods section.

Child-Pugh class B
According to the BCLC-based guidelines and the JSH guidelines, the treatment recommendations for HCC in Child-Pugh B cirrhosis are identical to those for Child-Pugh A cirrhosis, as described in the previous section. In particular, the SEOM guidelines 17 recommend LR for patients with solitary or limited multifocal HCC (stages BCLC-0 and BCLC-A), with no major vascular invasion or extrahepatic spread, no portal hypertension (defined as a hepatic venous pressure gradient <11 mm Hg or platelet count >100 000), adequate liver reserve and an anticipated liver remnant of at least 30-40%. Anatomical resections are recommended. Patients within the Milan criteria could be considered for LT (from either a dead or a living donor). 4 LR is the first-line curative treatment of solitary or multifocal HCC confined to the liver, anatomically resectable and with satisfactory liver function reserve, according to the APASL guidelines. 11
LT can be offered to patients within the Milan criteria when resection is not possible. According to the Italian and German guidelines, LT is the treatment of choice for patients with Child-Pugh B cirrhosis within the Milan criteria. 3 35 According to the Italian guidelines, for patients with Child B cirrhosis who are not eligible for transplantation, LR represents an option in case of a single tumour that can be removed with a limited resection, in particular for patients without clinically manifest portal hypertension. According to the German S3-guidelines, 19 patients with Child B cirrhosis who are not suitable for transplantation can be resected or treated with RFA according to tumour size and number. Figure 3 shows the comparison of the indications for LR and LT.

Child-Pugh class C
According to the AASLD, 3 SEOM 17 and EASL-EORTC 15 guidelines, a Child-Pugh C score defines end-stage disease; neither transplantation nor resection is recommended. According to the ESMO-ESDO guidelines, 16 patients with poor liver synthetic function and tumour extent within the Milan criteria should not be denied the possibility of LT and are therefore not classified as terminal stage. According to the APASL guidelines, LT provides the best curative treatment within the Milan criteria in association with Child C cirrhosis and without radiological evidence of venous invasion or distant metastasis. In Japan, transplantation is recommended at this stage of cirrhosis for patients with HCC within the Milan criteria and age ≤65, if disease control is not possible using other treatment methods. Tumour diameter, tumour number, tumour marker levels, extent of vascular invasion and degree of tumour differentiation are strong predictors of recurrence. According to the AIOM 18 and German S3 19 guidelines, LT is the treatment of choice for patients with Child-Pugh C cirrhosis within the Milan criteria. For Child C cirrhosis, no LR is recommended according to the Italian guidelines.

Expansion of the criteria beyond Milan
The AASLD, 3 APASL, 11 evidence-based Japanese and German 19 guidelines do not recommend expansion of the listing criteria beyond the standard Milan criteria. The ESMO-ESDO guidelines 16 give no statement in this regard. According to the SEOM guidelines, 17 patients with tumour characteristics slightly beyond the Milan criteria and without microvascular invasion may be considered for LT; however, this indication requires prospective validation. The EASL-EORTC guidelines 15 state that the extension of tumour limit criteria for LT for HCC has not been established. Modest expansion of the Milan criteria applying the 'up-to-seven' criteria (new Milan criteria: HCC with seven as the sum of the size of the largest tumour (in cm) and the number of tumours), proposed by Mazzaferro et al in 2009, 36 achieves competitive outcomes in patients without microvascular invasion, and thus this indication requires prospective validation. In Italy, expansion of the criteria was proposed, but the probability that a patient beyond Milan is transplanted is very low. LT for patients beyond Milan cannot be recommended according to the German S3-guidelines.

Bridging therapy for liver transplant candidates already on the waiting list
Generally, all guidelines recommend bridging therapy if the waiting-list time exceeds 6 months.
According to the ESMO-ESDO guidelines, 16 in case of a long anticipated waiting time (>6 months), patients may be offered resection, local ablation or transarterial chemoembolisation in order to minimise the risk of tumour progression and to offer a 'bridge' to transplant. In Italy, bridging therapies are also allowed under progression while on the waiting list. According to the German S3-guidelines, 19 bridging is recommended when a long waiting time until transplantation is expected. According to the APASL guidelines, 11 bridging therapy using local ablation or chemoembolisation may reduce the dropout rate with long waiting times of more than 6 months. According to the EASL-EORTC guidelines, 15 patients already on the waiting list with tumour progression beyond the Milan criteria and liver-only disease should be placed on hold until downstaging by local ablation or chemoembolisation is achieved and maintained for a period of at least 3 months. In the SEOM 17 and JSH guidelines, no recommendation is given about bridging therapy for patients on the waiting list.

Downstaging of patients beyond Milan criteria
According to the SEOM guidelines, 17 downstaging cannot be recommended. According to the EASL-EORTC guidelines, 15 downstaging policies for HCCs exceeding conventional criteria cannot be recommended and should be explored in the context of prospective studies aimed at survival and disease-progression end points. According to the APASL guidelines, 11 downstaging therapy using local ablation or chemoembolisation may reduce the dropout rate with long waiting times of more than 6 months, but there is no proven benefit in long-term survival, nor for downstaging to allow an expanded indication. The Japanese evidence-based guidelines state that there is insufficient scientific evidence to support tumour downstaging prior to LT to improve HCC prognosis. The role of transplantation after downstaging is not established in Italy because of a lack of high-quality evidence; on the basis of the available data, it is reasonable that patients slightly beyond Milan and in good general condition can receive a consultation for possible transplantation. According to the German S3-guidelines, 19 downstaging can be considered in order to achieve the Milan criteria. The AASLD 3 and ESMO-ESDO 16 guidelines offer no recommendation regarding downstaging.

Living-donor LT
According to the AASLD guidelines, 3 LDLT is a reasonable approach if the waiting time exceeds 7 months, taking into account the risk of dropout while waiting (4% per month), the expected survival of the recipient (70% at 5 years) and the risk for the donor (0.3-0.5% mortality). 37 This procedure should only be performed by expert surgeons. According to the SEOM guidelines, 17 patients within the Milan criteria could be considered for LT from either a dead or a living donor, achieving a 5-year overall survival of more than 70% and a 5-year recurrence rate of <10%. 30 According to the EASL-EORTC guidelines, 15 LDLT is an alternative option in patients with a waiting-list time exceeding 6-7 months. It is not recommended for any extended indications, except in the context of research studies, and should be restricted to centres of excellence in hepatic surgery. According to the APASL guidelines, 11 LDLT is theoretically the preferable choice for patients with HCC, because the waiting-list time is significantly reduced. However, the risk of donor hepatectomy (0.3-0.5% mortality) and recipient complications (20-40%) need to be considered in offering such treatment.
LDLT is the main type of transplantation performed in Japan and does not involve a waiting list. In Italy, it represents only 0.6% of all transplantations. According to the German S3-guidelines, 19 LDLT is an option for patients in whom tumour progression is likely while on the waiting list, with the risk of dropout. By using LDLT, waiting time can be avoided and thus tumour progression can be prevented. Additionally, it relieves the limited pool of deceased-donor organs. As the potential risk of complications for the donor in experienced centres is relatively low, this possibility should be evaluated in the absence of an appropriate postmortem donor and, therefore, a long anticipated waiting time. Morbidity and mortality after LDLT are comparable with those of recipients of postmortem LT. The ESMO-ESDO guidelines 16 give no recommendation regarding living-donor LT. Figure 5 shows the comparison of the recommendations regarding expansion of the criteria beyond Milan, bridging therapy, downstaging and LDLT.

DISCUSSION
There is no worldwide consensus on the recommendations for surgical treatment of HCC, although the evidence is the same. Relative homogeneity in indications exists for the countries using the BCLC classification, with the exception of patients within the Milan criteria and Child-Pugh C cirrhosis. These patients are classified as having end-stage disease according to the AASLD, 3 SEOM 17 and EASL-EORTC 15 guidelines and are consequently excluded from transplantation; the suggested therapeutic option is best supportive care. The ESMO-ESDO guidelines 16 allow transplantation for HCC within Milan in Child-Pugh C cirrhosis, and these patients are not classified as end stage. It is remarkable that the two European guidelines differ on such an important point, in one case excluding Child-Pugh C patients from transplantation (EASL-EORTC) and in the other allowing it (ESMO-ESDO). The Italian 18 and German 19 guidelines recommend transplantation for Child-Pugh C patients within the Milan criteria. The question of the effective usefulness of European guidelines remains open when European countries such as Germany and Italy use their own national guidelines. Moreover, there is no homogeneity between the European guidelines themselves. Spain also has its own guidelines, which are accessible only to members of the Spanish Society of Medical Oncology. Another critical point where misunderstanding can arise is the treatment of single tumours between 2 and 5 cm in liver cirrhosis Child-Pugh A/B according to the EASL-EORTC and SEOM guidelines. In fact, both rely on the updated BCLC staging system (2011). Whereas the original BCLC classification, on which the AASLD guidelines rely, clearly states that the first-line treatment option is LR if no portal hypertension and no elevated bilirubin are present, and that LT is indicated only in case of advanced liver dysfunction, the EASL-EORTC and SEOM guidelines are unclear on this point. While the text of both guidelines suggests treatment according to the original BCLC classification, with LR as first-line therapy, the graphical representation of the treatment algorithm in both guidelines suggests that first-line therapy for such tumours is transplantation and not resection. Since the graphical representation implies that patients with early-stage disease (BCLC A) are not candidates for LR, we interpreted the guidelines according to the text and not according to the figure.
However, this possible double interpretation needs to be mentioned, and future guidelines should state the therapeutic strategy for these tumours without ambiguity. The major difference between the treatment algorithm used in Japan and the BCLC system is the indication for hepatectomy for HCC with ≤3 lesions and a diameter ≤3 cm in Child-Pugh A/B cirrhosis. The BCLC system recommends LT or RFA for HCC with two or three nodules and a diameter ≤3 cm. In contrast, the treatment algorithm in Japan recommends hepatectomy for HCC with ≤3 lesions if liver function is good, regardless of tumour size. According to the Japanese guidelines, as well as the Italian, German and European ESMO-ESDO guidelines, first-line therapy for patients with Child-Pugh C cirrhosis and HCC within the Milan criteria is transplantation. In Japan, the majority of transplantations are LDLT, whereas in Italy only 0.6% of patients treated with transplantation receive LDLT. In general, cultural attitudes in Asia regarding life, death, ethics and religion have greatly influenced attitudes towards organ transplantation from deceased donors. In highly specialised centres, survival after LDLT is comparable with survival after postmortem transplantation (70% at 5 years). Although donor morbidity and mortality are low, a reported mortality between 0.3% and 0.5% does not appear readily acceptable. 38 Donor safety is paramount and has been a topic of much discussion in the transplant community worldwide. The donor risk appears to be low overall, with a favourable long-term quality of life. The latest trend has been a gradual shift from right-lobe grafts to left-lobe grafts to reduce donor risk, provided that the left lobe can provide adequate liver volume for the recipient. 39 Significantly low morbidity and mortality rates of donor patients are reported by high-volume centres in Asia, owing to high caseloads and standardised perioperative and postoperative treatment. Also, as already published in 2007 on US data, it appears that LDLT at experienced centres results in the best long-term survival compared with all other groups. 37 Moreover, LDLT offers the advantage over deceased donation of a clinically more stable recipient and an optimal time of transplantation, avoiding long waiting times. 19 As a result of the high dropout rate for patients with HCC, the priority of liver graft allocation has been reconsidered worldwide. First, waiting-list priority was determined primarily by liver disease severity based on the Model for End-Stage Liver Disease (MELD) score, in order to reduce the rate of death on the waiting list. 40 Second, patients with HCC who fulfilled the Milan criteria were registered with an adjusted score and were subsequently assigned additional scores at regular intervals to reflect their risk of dropout as a result of tumour progression. With such priority listing, timely access to transplant livers for patients with HCC has improved in the USA. 41 However, introduction of the MELD score to reduce death on the waiting list did not achieve positive results in all countries. Especially in Germany, because of the lack of donor organs, only high MELD scores, partly in the high 30s, result in allocation of liver grafts. Although waiting-list mortality was decreased, this basically means transplanting patients in Child C status with a high risk of poor outcome, and increased morbidity and mortality of these extremely sick patients is common. 42
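For orientation, the laboratory MELD score referred to above is computed from serum creatinine, bilirubin and INR. The classical formula is reproduced here from general knowledge of the allocation literature, not from any of the compared guideline texts:

```latex
\mathrm{MELD} = 9.57\,\ln\!\big(\mathrm{creatinine}\ [\mathrm{mg/dL}]\big)
              + 3.78\,\ln\!\big(\mathrm{bilirubin}\ [\mathrm{mg/dL}]\big)
              + 11.2\,\ln(\mathrm{INR}) + 6.43
```

In allocation practice, laboratory values below 1.0 are conventionally rounded up to 1.0, creatinine is capped at 4.0 mg/dL, and the total score is capped at 40; exact rules vary by allocation system, which is part of why the same score can have different consequences in different countries.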
The treatment algorithm of the Japanese evidence-based guidelines includes grade of liver damage, tumour number and tumour diameter. Extrahepatic disease and vascular spread are not included in the algorithm, in contrast with the AASLD and APASL guidelines. This was explained by the need to keep the treatment algorithm simple, by the few data available to recommend a particular treatment option for HCC with vascular invasion, and by the fact that extrahepatic HCC at the time of initial diagnosis was considered rare in daily practice in Japan. 13 Interestingly, extrahepatic spread and vascular invasion are included in the treatment algorithm of the consensus-based Japanese guidelines, 14 whereas neoplastic invasion of the bile ducts plays no role in any of the guidelines so far. Interestingly, only in the evidence-based Japanese guidelines is an age limit (≤65 years) set for transplantability. In several countries, such as Germany or Italy, no patients >65 years are routinely transplanted, although no age limit is expressed in the respective guidelines. Quantitative liver function tests allow a more precise assessment of postoperative morbidity and mortality. The relevance of quantitative liver function tests has so far found consideration only in the guidelines of the JSH. In the JSH-HCC guidelines, the indocyanine green (ICG) test as an indicator of liver function is considered indispensable for surgical decision-making, but is not routinely performed before non-surgical treatments like RFA or transarterial chemoembolisation (TACE). As several publications demonstrate the usefulness of ICG clearance alone 43 or in combination with other parameters 44 or imaging-based liver function tests 45 as a predictor of postoperative death, extended liver surgery has been made safer so as to avoid postoperative liver failure. In addition to ICG clearance, the LiMAx test has been found valuable for quantifying liver function. 46 Perioperative morbidity and mortality were reduced after implementing LiMAx algorithms in LRs, 47 and after LT the LiMAx score was predictive of postoperative liver failure. However, so far LiMAx has not been recognised in any of the guidelines for treatment of HCC or LT. Also, new innovations in liver surgery such as portal vein embolisation, 48 two-stage hepatectomy, 49 Associating Liver Partition and Portal Vein Ligation for Staged Hepatectomy (ALPPS) 50 or partial ALPPS 51 allow extensive LR with acceptable morbidity and mortality, even when transplantation is no longer an option because of tumour load. However, extended LR can only be performed in healthy liver, thus again leaving transplantation as the only potentially curative treatment. Another critical point, which is not addressed in any of the guidelines, is tumour biology. Tumour biology and immunological and genetic tumour-specific treatments are gaining more and more impact on diagnosis, interdisciplinary treatment and outcome. One promising field regarding the risk of acute rejection after LT is gene expression profiling. Thude et al 52 demonstrated that genotyping liver recipients for specific genetic polymorphisms might be useful to stratify liver transplant recipients according to the risk of acute liver transplant rejection. Mazzaferro, 53 who introduced the Milan criteria in the field of LT, recently published an article about an adaptive approach to selection and allocation in LT for HCC.
He proposes to maximise 'all tumour and therapy heterogeneities in a model that utilizes variations in HCC presentation and response to treatment as adjusting factors to reconcile selection and allocation logistics, with the ultimate aim of increasing the benefit, effectiveness, and justice of transplantation for cancer'. Unfortunately, as stated before, no current guideline considers these important developments in individualised and specific treatment. In conclusion, whereas we find a consensus on HCC treatment in healthy livers, the analysed international recommendations for the treatment of HCC in cirrhotic livers show several variations, although nearly all guidelines claim to be evidence based. Moreover, promising prognostic factors considering tumour biology as well as liver function tests should be included in future guidelines. One possible explanation for the inhomogeneity among the guidelines included in our analysis might be cultural differences, as well as variation in healthcare systems. Progress in the development of guidelines will be made when the reasons that explain the differences in the existing guidelines can be identified. These reasons can be identified when the burden and risks that have to be accepted, and the outcomes, that is, the achieved survival and quality of life, can be assessed. 54 Meaningful assessments require two essential conditions. First, the conditions under which 'costs' and 'consequences' are compared have to reflect the situation of day-to-day clinical practice; second, the conditions have to be standardised. The traditional method for comparative assessment of clinical outcomes is the randomised controlled trial. Such trials measure effects under ideal study conditions, that is, efficacy, but not effects that can be detected under real-world conditions, that is, effectiveness. Traditional guidelines are based on efficacy but not yet effectiveness data. Methods that compare effectiveness under real-world conditions have only recently been proposed. 55 Some of these methods include risk stratification, which means that only patients with similar risks (high, low or intermediate) can be compared, and the baseline risks of each patient have to be related to each of the outcomes that will be assessed. These assessments under real-life conditions can be completed in any community hospital and will be important as a basis for clinical guidelines. When outcomes, conditions, patient characteristics and interventions are described transparently, it will be possible to discuss the possible reasons for different guidelines in different countries.
Differential effects of vision upon the accuracy and precision of vestibular-evoked balance responses

Key points
Effective balance control requires the transformation of vestibular signals from head- to foot-centred coordinates in order to move the body in an appropriate direction. This transformation process has previously been studied by analysing the directional accuracy of the averaged sway response to multiple electrical vestibular stimuli (EVS). Here we studied the trial-by-trial variability of EVS responses to measure any changes in directional precision that may be masked by the averaging process. We found that vision increased directional variability without influencing the mean sway direction, demonstrating that response accuracy and precision are dissociable. These results emphasise the importance of single-trial analysis in determining the efficacy of vestibular control of balance.

Abstract
Vestibular information must be transformed from head- to foot-centred coordinates for balance control. This transformation process has previously been investigated using electrical vestibular stimulation (EVS), which evokes a sway response fixed in head coordinates. The craniocentric nature of the response has been demonstrated by analysing average responses to multiple stimuli. This approach misses any trial-by-trial variability that would reflect poor balance control. Here we performed single-trial analysis to measure this directional variability (precision), and compared this to mean performance (accuracy). We determined the effect of vision upon both parameters. Standing volunteers adopted various head orientations (0, ±30 and ±60 deg yaw) while EVS-evoked response direction was determined from ground reaction force vectors. As previously reported, mean force direction was orientated towards the anodal ear and rotated in line with head yaw. Although vision caused a ∼50% reduction in response magnitude, it had no influence on the direction of the mean sway response, indicating that accuracy was unaffected. However, individual-trial analysis revealed up to 30% increases in directional variability with the eyes open. This increase was inversely correlated with the size of the force response. The paradoxical observation that vision reduces the precision of the balance response may be explained by a multi-sensory integration process. As additional veridical sensory information becomes available, this lessens the relative contribution of vestibular input, causing a simultaneous reduction in both the magnitude and the precision of the response to EVS. Our novel approach demonstrates the importance of single-trial analysis in revealing the efficacy of vestibular reflexes.

Introduction
Because the vestibular system is locked within the skull, the signals it provides must be transformed from head- to foot-centred coordinates for balance control (Lund & Broberg, 1983; Hlavacka & Njiokiktjien, 1985; Pastor et al. 1993; Fitzpatrick & Day, 2004; Mian & Day, 2009). For example, when leftward head motion is detected while facing forwards, a compensatory body movement to the right would be the appropriate response to maintain balance. However, if the head is turned 90 deg rightward, the same pattern of vestibular afferent feedback would require a backward body movement. This coordinate transformation process requires an accurate sense of head-on-feet proprioception (Dalton et al. 2017; Reynolds, 2017).
Any breakdown in this process would compromise the efficacy of the vestibulo-spinal reflex, which may increase fall risk. The efficacy of the coordinate transformation process can be investigated using electrical vestibular stimulation (EVS) (Fitzpatrick & Day, 2004). EVS modulates the activity of vestibular afferents, leading to a false sensation of body sway towards the cathode electrode. This evokes a compensatory sway response towards the anodal ear. This response is fixed in head coordinates, such that turning the head in yaw produces an equal rotation of the evoked sway direction. Previous studies have demonstrated the craniocentric nature of the EVS response by measuring the direction of the evoked body sway and/or ground reaction force vector at different head angles (Lund & Broberg, 1983; Mian & Day, 2009, 2014). Response direction is typically calculated by averaging sway responses to multiple EVS pulses of direct current, known as galvanic vestibular stimulation (GVS) (Inglis et al. 1995; Welgampola et al. 2013). More recently, the transformation process has been investigated using stochastic vestibular stimulation (SVS) (Dakin et al. 2007; Mian & Day, 2009). This involves application of a continuous, randomly varying current lasting up to minutes. SVS offers advantages over GVS, including a greater signal-to-noise ratio and the ability to analyse the response in the frequency domain. GVS, by contrast, allows precise determination of response latency in the time domain (e.g. Nashner & Wolfson, 1974; Britton et al. 1993). For both SVS and GVS, previous analyses have involved studying the conglomerate response to stimulation over time. For GVS, this consists of the average response to multiple stimuli. For SVS, cross-correlations between stimulus and response time series are calculated for all possible directions over a prolonged period (≥30 s). The direction that produces the largest correlation value is then deemed to be the response direction. Both analysis techniques miss any transient or trial-by-trial variations in the direction of the sway response. These variations may be important for understanding the efficacy of balance control under more ethological circumstances. If we suffer a fall due to a transient error in transforming vestibular input into motor output, an accurate average response is of little consolation. In other words, it is important to measure the precision, as well as the accuracy, of the vestibular-evoked sway response. Here we address this gap in the literature by measuring variability in the direction of the sway response to GVS and SVS. We ask two related questions. First, is the precision of the vestibular-evoked sway response dissociable from its accuracy? Second, how are both parameters affected by vision? We hypothesise that closing the eyes will produce more variable (less precise) sway responses, while accuracy will be unaffected. Our rationale for this prediction is that the absence of vision will negatively affect head-on-body proprioception, and thus the ability to transform vestibular input into motor output for balance (Dalton et al. 2017; Reynolds, 2017). In fact, our results refute this hypothesis: closing the eyes produced less variable responses. This occurred for both GVS and SVS, but was more clearly demonstrated using the latter technique. We discuss this unexpected finding in the context of a multisensory integration process. Accuracy, however, was unaffected by vision, confirming that precision and accuracy are indeed dissociable.
Methods

Ethical approval
The experiment was approved by the local ethical review committee at the University of Birmingham, and was performed in accordance with the Declaration of Helsinki, except for registration in a database. Informed written consent to participate was obtained from all participants.

Protocol
Participants stood in the centre of a force plate, unshod, with feet together and hands held relaxed in front of them for the duration of each 100 s stimulation period (Fig. 1). Prior to each trial, participants were instructed to face one of five visual targets (±60, ±30 and 0 deg) located at eye level. This could be achieved through a combination of neck and trunk rotation until a head-mounted laser crosshair became aligned with the target 1 m away. EVS was delivered using carbon rubber electrodes (46 × 37 mm) in a bipolar binaural configuration. The two electrodes were coated in conductive gel and secured over the mastoid processes. For SVS, the stimulus waveform was generated by passing white noise through a low-pass filter (5 Hz; 6th-order Butterworth) and then scaling to give a root mean square value of 0.6 mA and a peak amplitude of ±2 mA. Each target angle (−60, −30, 0, +30 and +60 deg) and stimulation condition (GVS and SVS) was performed separately with eyes open and closed, giving a total of 20 conditions. Trial order was randomised and participants were allowed seated rest between trials.

Data acquisition
Head orientation was sampled at 50 Hz in the form of Euler angles using a Fastrak sensor attached to a welding helmet frame (Polhemus Inc., Colchester, VT, USA). Sensor yaw was used to calculate head direction (i.e. rotation about the vertical axis). Any offset in yaw or roll angle between head orientation and sensor orientation was measured using a second sensor attached to a stereotactic frame, and subsequently subtracted. A slight head-up pitch position was maintained throughout each trial to ensure that Reid's plane (the line between the inferior orbit and external auditory meatus) was horizontal, thus optimising the response to the virtual signal of roll evoked by vestibular stimulation (Fitzpatrick & Day, 2004). The evoked sway response was recorded in the form of ground reaction forces at 1 kHz using a Kistler 9281B force platform (Kistler Instrumente AG, Winterthur, Switzerland).

GVS analysis
Baseline force prior to stimulus onset was first removed from both mediolateral (F_x) and anteroposterior (F_y) force traces. Prior to individual-trial analysis, we first averaged the F_x and F_y traces across all trials within each condition. The time of the peak average force vector was then measured, and a window of ±200 ms either side of this time point was subsequently used to analyse each individual trial. The magnitude and direction (atan F_x/F_y) of the peak force vector within this time window were measured separately for all trials. This resulted in 20 individual trial directions for each condition, from which we could calculate the mean direction (i.e. accuracy) and its variance (i.e. precision) using circular statistics (see below). Response direction was referenced to head orientation, as measured by the Fastrak sensor. After inverting anode-left trials, there was no significant effect of polarity upon response magnitude (mean ± SD; AL 1.65 ± 1.01, AR 1.62 ± 1.02, T_89 = 0.39, P = 0.70) or direction (F_1,178 = 0.92, P > 0.34). Hence, both polarities were combined.

SVS analysis
Analysis of SVS-evoked shear force is depicted in the bottom half of Fig. 1.
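The SVS waveform construction described in the Protocol above (white noise, 5 Hz 6th-order Butterworth low-pass, 0.6 mA RMS, ±2 mA peak) can be sketched in a few lines. This is our minimal reconstruction rather than the authors' code; the sampling rate, random seed and the clip-to-peak handling of the ±2 mA limit are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def make_svs(duration_s=100.0, fs=1000.0, rms_mA=0.6, peak_mA=2.0, seed=0):
    """Generate an SVS-like waveform: white noise -> 5 Hz low-pass
    (6th-order Butterworth) -> scale to the target RMS."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(duration_s * fs))
    b, a = butter(6, 5.0, btype="low", fs=fs)      # 5 Hz cut-off
    x = filtfilt(b, a, noise)                      # zero-phase filtering
    x *= rms_mA / np.sqrt(np.mean(x**2))           # set RMS to 0.6 mA
    return np.clip(x, -peak_mA, peak_mA)           # enforce +/-2 mA limit

svs = make_svs()
print(round(float(np.sqrt(np.mean(svs**2))), 2))   # ~0.6 (mA)
```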
We used a modified version of the technique described by Mian & Day (2009), whereby the cross-correlation between the SVS stimulus and shear force is calculated. The component of the force vector is first determined for each degree of a circle (±180 deg) to produce 360 separate force traces F_ROTθ, using the following formula:

F_ROTθ(s) = F_x(s)·sin(θ) + F_y(s)·cos(θ),

where s is the sample. The SVS-force cross-correlation is then calculated for each trace, and the angle that results in the largest cross-correlation value is deemed to be the response direction. Initially, we performed this analysis using the entire 100 s stimulation period. This was used to calculate the timing of the peak cross-correlation response. To study response variance, we then split the data into segments and performed the same analysis again, determining peak correlation values at the time point derived from the full 100 s. We experimented with segments of differing lengths (1, 5, 10 and 20 s) and settled upon 5 s because it offered the greatest potential for detecting changes in variance between conditions (see Fig. 9 in Results). As for the GVS analysis, response direction was referenced to head orientation. To determine response magnitude for the SVS data, we measured the peak of the SVS-force cross-correlation (units of mA·N) and normalised it by dividing by the peak of the SVS-SVS autocorrelation (units of mA²). This resulted in a measure of gain that is independent of segment length (units of N mA⁻¹).

Circular statistical techniques
For both GVS and SVS, response direction is represented by angular data. Therefore, circular statistical techniques were implemented using the CircStat toolbox for Matlab (Berens, 2009). Angular conventions are represented in Fig. 2, which depicts a representative subject's responses to GVS during the head-forward/eyes-open condition. To calculate mean directions, the individual angles (α_1, α_2, ..., α_n) were first transformed to unit vectors in two dimensions (r_1, r_2, ..., r_n) by demanding that the circle had a radius of 1. Thus, the magnitudes of the individual subject responses did not affect the analysis of mean response direction. The rectangular coordinates of each unit vector were then calculated by applying trigonometric functions, where the sine and cosine of the angle give the x-coordinate and y-coordinate, respectively:

r_i = (sin α_i, cos α_i).

The vectors (r_1, r_2, ..., r_n) were then averaged to calculate the mean resultant vector r̄:

r̄ = (1/n) Σ_{i=1..n} r_i.

To compute the mean angular direction α̅, r̄ is transformed using the four-quadrant inverse tangent function. Angular deviation was calculated as a measure of response variance, as it is equivalent to the standard deviation in linear statistics (Batschelet, 1981):

angular deviation = √(2(1 − R)),

where R is the length of the mean resultant vector.
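The circular statistics just described translate directly into code. Below is a minimal NumPy sketch of the mean-direction and angular-deviation computations (the CircStat toolbox cited above is Matlab; this re-implementation and its function name are ours).

```python
import numpy as np

def circ_mean_and_angular_deviation(alpha_rad):
    """Mean direction and angular deviation (Batschelet, 1981) of angles.

    Angles follow the paper's convention alpha = atan(Fx/Fy), i.e. measured
    from the anterior (y) axis, so x = sin(alpha) and y = cos(alpha)."""
    alpha = np.asarray(alpha_rad, dtype=float)
    x_bar = np.mean(np.sin(alpha))         # mean x-coordinate of unit vectors
    y_bar = np.mean(np.cos(alpha))         # mean y-coordinate of unit vectors
    mean_dir = np.arctan2(x_bar, y_bar)    # four-quadrant inverse tangent
    R = np.hypot(x_bar, y_bar)             # length of mean resultant vector
    ang_dev = np.sqrt(2.0 * (1.0 - R))     # angular deviation, radians
    return mean_dir, ang_dev

# Example: 20 trial directions clustered around +90 deg.
trials = np.deg2rad(90 + 10 * np.random.default_rng(1).standard_normal(20))
mu, ad = circ_mean_and_angular_deviation(trials)
print(np.rad2deg(mu), np.rad2deg(ad))      # mean ~90 deg, deviation ~10 deg
```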
Statistical analysis
A 2 × 5 repeated-measures ANOVA (SPSS general linear model) was used to compare angular deviation and response magnitude across visual conditions and head orientations (visual condition: eyes open, eyes closed; head orientation: ±60, ±30, 0 deg). In all cases where significant Mauchly's tests indicated violation of the assumption of equal variances, the degrees of freedom were corrected using the Greenhouse-Geisser technique. Response accuracy was determined by a linear fit between response direction and head direction. We also performed correlations between response magnitude and variance. To do this, we determined the response 'error' for each trial, measured as the angular difference between the individual trial direction and the mean direction. Pearson correlations were used to determine the significance of the magnitude-error relationship for each condition and each participant (see Fig. 8 below). For all statistical tests, significance was set at P < 0.05. Mean angle and angular deviation/standard deviation [α̅ ± AD (SD)] are reported in the text and figures.

Results
Figure 3 shows raw data from a representative subject, recorded with the head facing forwards. GVS evoked a polarity-specific response, predominantly in the mediolateral direction (Fig. 3A and B). SVS evoked a response in the same direction, as can be seen in the SVS-force cross-correlation (Fig. 3C and D). For both GVS and SVS, this subject's responses were larger with the eyes closed.

Assessing response direction
The effect of head orientation on the direction of the evoked force vector is depicted in Fig. 4. For all conditions, the mean force response (dashed line) is directed approximately 90 deg to head orientation (continuous line). As the head is turned between ±60 deg, the force vector turns by a similar amount for both GVS and SVS stimuli. The direction of the mean force vector was used to determine response accuracy. In contrast, response precision was determined by analysing the within-subject variability of vector angles taken from individual trials/segments. This variability is depicted by the shaded areas in Fig. 4, which show the angular deviation (the circular equivalent of the standard deviation). For SVS, each 100 s stimulation period was split into 20 segments of 5 s.

Response accuracy
The effect of head orientation upon mean response direction is shown in further detail in Fig. 5. GVS-evoked responses exhibited greater between-subject variability than those produced by SVS stimuli (GVS: SD = 26.21 deg; SVS: SD = 13.56 deg). Furthermore, 3 of 12 subjects showed no significant correlation between head orientation and response direction for GVS stimuli (eyes closed: R² < 0.56; eyes open: R² < 0.48; P > 0.05). These subjects were removed from subsequent analysis and presentation of GVS responses (although their inclusion did not affect the outcome of any statistical analysis). In contrast, this relationship was significant for all subjects when using SVS stimuli (eyes closed: R² > 0.90; eyes open: R² > 0.85; P < 0.01). One subject was removed due to a malfunction of the Fastrak sensor system used to record head orientation. For both GVS and SVS, there was a significant linear relationship between head orientation and response direction (GVS: R² = 0.88, P = 0.03; SVS: R² = 0.95, P < 0.01). However, there was no effect of vision upon this relationship for either stimulus type (ANOVA main effect of vision: not significant).

Response precision
Individual trial/segment analysis was used to determine the variability of the evoked force vector (Fig. 6). There was a significant increase in angular deviation with the eyes open, both for GVS (11% increase, all head orientations combined; F_1,8 = 15.16, P < 0.01) and for SVS (31% increase, all head orientations combined; F_1,10 = 26.86, P < 0.01), indicating that vision actually reduced precision.

Response magnitude
For GVS and SVS stimuli, response magnitude was determined by the peak force and the stimulus-response gain, respectively (Fig. 7). With the eyes closed, response magnitude was approximately doubled, both for GVS and for SVS (GVS: F_1,8 = 65.74, P < 0.01; SVS: F_1,10 = 30.32, P < 0.01). There was no effect of head orientation upon response magnitude (Fig. 7B).
Relationship between precision and magnitude
To investigate the relationship between response precision and magnitude, we calculated both the absolute error and the magnitude of each force vector for individual trials. Absolute error was calculated as the angular difference of individual force vectors from the mean vector for each condition (Fig. 8A). There was a tendency for larger responses to exhibit lower error (Fig. 8B). This relationship was more consistent for the SVS response, where 9 of 11 participants exhibited a significant inverse correlation between these parameters for both eyes-open and eyes-closed conditions (Fig. 8D). For GVS, 4 of 9 participants produced a significant inverse correlation for both conditions (Fig. 8C).

Effect of SVS segment length upon response precision
The analysis of SVS responses reported above was obtained by splitting each 100 s stimulation period into twenty 5 s segments. Figure 9 shows the effect of altering segment length on directional variance for a forward-facing orientation. Angular deviation systematically declines as segment length is increased. This may simply be due to the differing numbers of data samples produced by varying segment length. However, the values are consistently higher for the eyes-open condition (F_4,44 = 318, P < 0.01). The largest percentage difference between visual conditions occurred for the 5 s segment length (25% increase; mean ± SD, eyes closed: 24.08 ± 9.53 deg, eyes open: 34.67 ± 13.34 deg).

Figure 10. Simulating effects of response magnitude upon directional variance. A GVS-evoked force response was generated from averaged empirical data. This archetypal response was then summed with random noise to simulate baseline force variations. The peak response was used to calculate the direction of the resulting force vector for multiple artificial trials, allowing angular deviation to be calculated. Response magnitude and baseline noise were then independently varied to determine the effect upon angular deviation.

Simulating changes in precision
The above results suggest that vision increases the variability of the vestibular-evoked balance response. However, there was an associated reduction in response magnitude with vision. It is therefore possible that the change in variability is a direct consequence of this change in magnitude, rather than of, for example, sensory reweighting (Fig. 8). To address this possibility, we generated artificial GVS responses in which we could systematically modify response magnitude and observe the effect upon angular deviation (Fig. 10). Initial values of response magnitude and baseline noise were set to match the values observed empirically during the eyes-closed GVS condition. We then decreased response magnitude by 42% to replicate the effect of opening the eyes. This caused a 39% increase in angular deviation, suggesting that the change in variance is indeed directly linked to response magnitude. However, this ignores variations in baseline force, which might affect response variance. Analysis of the empirical data shows that baseline force variability decreases by 44% with the eyes open (Fig. 11).

Figure 11. Baseline force variability. The standard deviation of the force data was calculated during a 1 s pre-stimulus window for all GVS trials. There was a significant effect of vision upon baseline variability (F_1,7 = 35.54, P = 0.001), but no effect of head angle or force direction (F_x vs F_y) (P ≥ 0.296).
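The Fig. 10 simulation lends itself to a compact sketch. The version below is our own Monte Carlo reconstruction under stated assumptions: a Gaussian-bump archetypal response in place of the averaged empirical response, white baseline noise, and illustrative scale factors (0.58 and 0.56 mirror the 42% and 44% reductions reported above).

```python
import numpy as np

def simulate_angular_deviation(resp_scale=1.0, noise_sd=0.2,
                               n_trials=2000, seed=0):
    """Monte Carlo sketch of the Fig. 10 simulation: an archetypal force
    response (assumed Gaussian bump along +x) plus baseline noise; the
    peak force vector's angle is measured on each artificial trial."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, 500)
    template = np.exp(-0.5 * ((t - 0.5) / 0.1) ** 2)   # assumed response shape
    angles = np.empty(n_trials)
    for i in range(n_trials):
        fx = resp_scale * template + noise_sd * rng.standard_normal(t.size)
        fy = noise_sd * rng.standard_normal(t.size)
        k = np.argmax(np.hypot(fx, fy))                # peak force vector
        angles[i] = np.arctan2(fx[k], fy[k])           # atan(Fx/Fy) convention
    R = np.hypot(np.mean(np.sin(angles)), np.mean(np.cos(angles)))
    return np.rad2deg(np.sqrt(2 * (1 - R)))            # angular deviation, deg

base = simulate_angular_deviation(resp_scale=1.0, noise_sd=0.2)
smaller = simulate_angular_deviation(resp_scale=0.58, noise_sd=0.2)        # magnitude -42%
both = simulate_angular_deviation(resp_scale=0.58, noise_sd=0.2 * 0.56)    # noise also -44%
print(base, smaller, both)   # expect: smaller > base, while both ~ base
```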
When we simulated this change alone (maintaining a fixed response magnitude), it caused a 27% decrease in angular deviation, opposing the effect of response magnitude. When we simultaneously implemented the 42% decrease in response magnitude and the 44% decrease in baseline force variability, the net effect was a 0.4% increase in response variability (Fig. 12). This compares to the empirically observed change of 11%. Hence, our simulation suggests that the observed changes in precision are not purely due to changes in response magnitude or baseline variability per se.

Figure 12. Comparison of empirical versus model data. A, the empirically observed effects of vision upon response and baseline force magnitude were simultaneously implemented in the simulation. B, angular deviation was calculated for comparison against empirical data. C, there was minimal effect of these interventions upon the simulated angular deviation results. This contrasts with the 11% increase in angular deviation observed empirically when the eyes were opened.

Discussion
Our results confirm the craniocentric nature of the vestibular-evoked sway response (Lund & Broberg, 1983; Hlavacka & Njiokiktjien, 1985; Pastor et al. 1993; Mian & Day, 2009). EVS stimuli evoked a ground reaction force directed towards the anodal ear, rotating in line with head orientation. The novel aspect of our study was to analyse the variability of this response in addition to its mean direction. When subjects opened their eyes, mean sway direction was unaffected. However, response variability increased, reflecting a reduction in precision. This demonstrates that the accuracy and precision of vestibular-motor transformations for balance are dissociable. This raises the possibility that a person might exhibit poor balance control at any given instant, while appearing to sway accurately on average. The averaging process may therefore mask any deficits in vestibular control of balance. We used two different methods of vestibular stimulation. The GVS stimulus consisted of a short-lasting square-wave pulse of direct current, allowing us to measure the direction of the vestibular response at a fixed instant in time. By measuring responses to multiple pulses, variability was readily ascertained. In contrast, SVS involved a continuous, long-lasting and randomly varying current. To determine variability in this case, we quantified response direction over multiple segments of time ranging from 1 to 20 s, using the cross-correlation method described by Mian & Day (2009). We settled upon a segment length of 5 s because it showed the clearest distinction between visual conditions. Despite the difference in techniques, both GVS and SVS produced essentially the same result: vision had no influence upon the direction of the mean response, while variability increased with the eyes open. However, the practicality of the two techniques differed. When using GVS, 3 of 12 subjects exhibited no clear relationship between head angle and response direction, and were thus excluded from further analysis. In contrast, this relationship was significant for all subjects when using SVS. Furthermore, the distinction between visual conditions was clearer in the SVS response, which exhibited a 31% increase in angular deviation with the eyes open, versus 11% for GVS. This is supported by previous work demonstrating greater signal-to-noise ratios for SVS-evoked sway responses (Dakin et al. 2007; Reynolds, 2011).
Of course, such differences may be partly attributable to the chosen stimulus parameters (Dakin et al. 2010). Varying the amplitude, number and frequency content of the stimulus current could conceivably alter angular deviation in ways we have not investigated here. Nevertheless, the qualitative similarity in results, regardless of the precise stimulus parameters, supports our assertion that vision increases the directional variability of the vestibular-evoked sway response.

The observed effect of vision refutes our original hypothesis. We had reasoned that the sense of head-on-feet orientation would improve with vision. This would enhance the coordinate transformation of vestibular input into motor output for balance (Dalton et al. 2017; Reynolds, 2017). In contrast to our prediction, however, directional variability increased with the eyes open. How could vision reduce the precision of vestibular control of balance in this way? The answer to this apparent paradox may be sensory reweighting. We found that evoked force responses were ≈50% smaller with the eyes open. This concurs with previous findings showing that GVS-evoked sway responses become smaller as additional veridical sensory information becomes available. This has been demonstrated for tactile (Britton et al. 1993; Smith et al. 2017) and proprioceptive modalities, as well as for vision (Day & Guerraz, 2007). The CNS must combine these sometimes divergent sources of information to compute a single estimate of the state of the body. This process has been likened to electoral proportional representation, with each sensory modality providing a vote towards the overall estimate of body orientation. Hence, the relative contribution of any given modality will depend upon how much alternative sensory representation is available. The reduction in EVS-evoked sway size with vision may therefore reflect down-weighting of vestibular information.

We also found a negative correlation between response magnitude and directional variability. We confirmed that this correlation was not due to inherent effects of noise in the force plate sensors (data not shown). Instead, it suggests that reduced precision is a direct consequence of the down-weighting process. In other words, the CNS's estimate of sway direction at any given time is less influenced by vestibular input. Hence there will be a greater influence of veridical visual cues upon sway direction.

Alternatively, it is possible that the changes in precision we observed are not directly attributable to sensory reweighting. The reduction in response magnitude could conceivably increase the variability of the sway force vectors via changes in signal-to-noise ratio. Specifically, a fixed level of random noise on the shear force signals (Fx and Fy) would evoke greater angular changes for a smaller versus larger force vector. In this case, altered precision would not be caused by sensory reweighting per se. However, the results of our simple model suggest that this is not the case (Fig. 12). When we recreated the observed reduction in response magnitude, it did cause an increase in angular deviation. However, when we simultaneously implemented the empirically observed reduction in baseline force variability, angular deviation remained constant. This suggests that the effects of vision upon the precision of the vestibular-evoked postural response are not mediated purely by changes in signal-to-noise ratio.
It is important to emphasise that the reduced directional precision that we observed with the eyes open does not reflect impaired balance control overall. Quite the opposite: in the absence of vestibular stimulation, baseline sway was 44% lower with the eyes open. Nevertheless, the analysis that we report here does offer a new method for analysing the efficacy of vestibular control of balance. Any increase in response variability in the absence of any other changes would indeed reflect impaired transformation of vestibular input. Furthermore, as our data demonstrate, it is possible for such changes to occur even when mean response direction remains accurate. This may be important for revealing potential contributions of vestibulo-motor dysfunction towards increased fall risk caused by age, sensory loss or neurological disease. Analysis of averaged responses may mask such deficits.

In summary, we observed a clear dissociation between the directional accuracy and precision of vestibular-evoked balance responses. The directional variability of the EVS-evoked sway response increased with the eyes open, while its mean direction was unaffected by vision. This paradoxical finding suggests that additional veridical sensory information leads to the down-weighting of vestibular input for balance, resulting in an apparently less precise response.
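For readers who wish to reproduce the logic of the simulation in Fig. 10, here is a minimal Monte Carlo sketch under stated assumptions: it reduces the archetypal response to its peak force vector and uses invented units and parameter values, whereas the published simulation summed noise with an averaged empirical waveform. Only the qualitative behaviour, that smaller responses and larger baseline noise both inflate angular deviation, is what the sketch demonstrates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_angular_deviation(response_gain=1.0, baseline_sd=1.0,
                               peak_force=(0.0, 10.0), n_trials=10_000):
    """Monte Carlo estimate of directional variability (deg).

    An archetypal peak force vector (Fx, Fy) is scaled by response_gain,
    Gaussian baseline noise (SD = baseline_sd, same force units) is added
    to each component, and the angular deviation of the resulting
    directions is returned. All numbers are illustrative assumptions.
    """
    fx = response_gain * peak_force[0] + rng.normal(0, baseline_sd, n_trials)
    fy = response_gain * peak_force[1] + rng.normal(0, baseline_sd, n_trials)
    angles = np.arctan2(fy, fx)
    C, S = np.cos(angles).mean(), np.sin(angles).mean()
    R = np.hypot(C, S)                      # mean resultant length
    return np.degrees(np.sqrt(2.0 * (1.0 - R)))

eyes_closed  = simulate_angular_deviation()
smaller_resp = simulate_angular_deviation(response_gain=0.58)  # 42% smaller response
both_changes = simulate_angular_deviation(response_gain=0.58,
                                          baseline_sd=0.56)    # plus 44% less noise
print(eyes_closed, smaller_resp, both_changes)
```

Shrinking the response alone inflates the angular deviation, while shrinking the baseline noise by a similar proportion largely cancels that inflation, mirroring the near-zero net change reported for the combined manipulation.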
2018-04-03T05:06:44.876Z
2018-04-16T00:00:00.000
{ "year": 2018, "sha1": "d3dd50cea3828cad561385553f28f9604f7405d4", "oa_license": "CCBY", "oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.1113/JP275645", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d3dd50cea3828cad561385553f28f9604f7405d4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Mathematics", "Chemistry", "Medicine" ] }
149567657
pes2o/s2orc
v3-fos-license
The potential role of urbanization in the resistance to organophosphate insecticide in Culex pipiens pipiens from Tunisia

Objective
To examine the effects of urbanization on the resistance status of field populations of Culex pipiens pipiens to organophosphate insecticide.

Methods
Bioassays and biochemical assays were conducted on Tunisian field populations of Culex pipiens pipiens collected in four areas differing in their degree of urbanization. Late third- and early fourth-instar larvae were used for bioassays with chlorpyrifos, and adult mosquitoes for biochemical assays including esterase and acetylcholinesterase (AChE) activities.

Results
The distribution of resistance ratios in this study appears to be influenced by the degree of urbanization. The highest resistance was recorded in the population from the most urbanized area in Tunisia, whereas the lowest resistance was found in relatively natural areas. Both metabolic and target-site mechanisms were involved in the recorded resistance.

Conclusion
This is the first study in Tunisia showing evidence of the impact of urbanization on the resistance level in Culex pipiens pipiens. Proper management of the polluted breeding sites in the country and effective regulation of water bodies receiving effluent from commercial and domestic activities appear to be critical for managing insecticide resistance.

Introduction
The Culex pipiens mosquito has been strongly suspected as the most likely vector in the West Nile virus outbreaks that affected Tunisia in 1997, 2003, 2007, 2010, 2011 and 2012 1. Vector control by insecticides is the main tool to prevent these diseases, and organophosphate insecticides are among the most effective mosquito larvicides used in many places. Unfortunately, the massive use of insecticides during the malaria eradication program between 1967 and 1978 has led to the development of strong resistance worldwide, including in Culex pipiens from Tunisia 2. This situation has become a serious problem in Tunisia, where very high resistance to the organophosphate chlorpyrifos (> 10,000-fold) was described in Culex pipiens pipiens 2.

The mechanisms that enable insects to resist the action of insecticides can differ between populations and fall into four distinct categories: metabolic resistance, target-site resistance, reduced penetration and behavioral avoidance. In the case of organophosphates, insensitive acetylcholinesterase (AChE1) and detoxification enzyme systems including esterases, oxidases (CYP450) and glutathione S-transferases (GSTs) have frequently been reported 2,3,4. Among the factors likely to influence insecticide resistance in mosquitoes, urbanization has been strongly implicated but rarely studied in detail. As Culex mosquitoes adapt to the polluted environment of urban areas, transmission of pathogens may increase. The effect of this adaptation on the mosquitoes' tolerance to insecticides used in vector control is unknown. A detailed knowledge of the biology of urban vectors, including the processes and mechanisms by which these vectors adapt to pollutants as well as to the many insecticides, is needed to plan and implement urban vector control strategies. Historically, urbanization has always been closely linked to economic development, which can lead to an increase in vector-borne diseases 5,6. Many problems have emerged as a result of urbanization, including environmental pollution, crowding, and the destruction of natural ecology.
Changes in environmental conditions as a result of urbanization may directly and/or indirectly affect the ecology of mosquitoes, e.g., larval habitat availability and suitability, development, and survivorship, facilitating the invasion and establishment of mosquito populations in proximity to their hosts and therefore leading to an uncontrolled use of insecticides 7. Previous studies showed that such human practices may represent an additional selective pressure favoring insecticide resistance in urban areas 8. On the other hand, many anthropogenic pollutants in water bodies are associated with urbanization and may put indirect pressure on the resistance of mosquitoes to chemical insecticides. These urban pollutants are often not toxic to mosquitoes but may rapidly affect their resistance to different insecticides, mainly by inducing detoxification enzyme activities 9-12. As a result, knowledge of the resistance status of vectors against organophosphates, the mechanisms involved, and the factors that influence resistance has become important. In this context, it is important to note that most previous studies have focused on resistance levels and associated mechanisms; much less effort has been devoted to studying the influence of urbanization on mosquito resistance. This study was therefore carried out to assess the resistance status of Culex pipiens pipiens to organophosphate insecticide in four areas differing in their degree of urbanization, the possible mechanisms involved, and the environmental factors associated with its distribution. Our objective was to investigate the factors facilitating the vector's adaptation to settings differing in their degree of urbanization, with a view to developing an integrated strategy for successful vector control in urban settings. Indeed, a top-down approach based on a limited or inadequate understanding of mosquito ecology, evolution, and urban social ecology will fail.

Materials and methods

Mosquitoes
Four populations of Culex pipiens pipiens were collected in four areas differing in their degree of urbanization (anthropogenic, i.e., densely populated urban area; semi-anthropogenic, i.e., moderately populated urban area; semi-natural, i.e., weakly populated rural area; and natural, i.e., rural area without human population) (Figure 1). The characteristics of the study areas, including insecticide usage, are given in Table 1. Data were collected from both the ministries of health and agriculture and during individual interviews with residents of the collection sites. A susceptible strain named S-Lab 13 was used to calculate the resistance ratios of field populations. Two resistant strains named SA2 (A2-B2) and SA5 (A5-B5) were used as references in starch gel electrophoresis 14.

Bioassays
Bioassays were performed on late third- and early fourth-instar larvae according to the standard methods of Raymond et al 15, using ethanol solutions of the organophosphate chlorpyrifos and the carbamate propoxur under standard laboratory conditions (25 ± 1°C and 70 ± 5% RH). Chlorpyrifos bioassays included 5 concentrations providing between 0 and 100% mortality and 5 replicates per concentration, on sets of 20 late 3rd- and early 4th-instar larvae in a total volume of 100 ml of water containing 1 ml of an ethanol solution of the tested insecticide. Controls consisted of a series of five beakers to which only 1 ml of ethanol was added.
Standard sublethal doses of 0.08 mg/l DEF (S,S,S-tributyl phosphorotrithioate) or 2.5 mg/l PB (piperonyl butoxide) were added to all synergized treatments 4 hours before the addition of the insecticide, to estimate the role of detoxification enzymes in the recorded resistance. Dead larvae were counted 24 hours after treatment; larvae that did not move when touched with a thin needle were considered dead. The carbamate propoxur bioassays included a single dose (1 mg/l) and five replicates, to detect the involvement of a mechanism of resistance common to both insecticides. This concentration kills all susceptible mosquitoes.

Biochemical assays
The identification of the different esterases was performed using starch electrophoresis according to the methods of Pasteur et al 16. Detected esterases were identified by comparing their electrophoretic mobility to that of known over-produced esterases.

AChE activity
The enzymatic assay followed the standard protocol of Bourguet et al 17 to measure the susceptibility of AChE1 to propoxur and detect the presence of AChE1S and AChE1R.

Data analysis
Data were analyzed using the log-probit program of Raymond et al 18, based on Finney 19 (1971). Values of LC50, LC95, 95% confidence limits and slopes were computed. The resistance ratio at LC50 (RR50 = LC50 of field population / LC50 of susceptible strain) and the synergism ratio at LC50 (SR50 = LC50 in absence of synergist / LC50 in presence of synergist) were calculated.

Results
Details of the log-dosage probit-mortality analysis are shown in Table 2. Resistance ratios ranged from 1.8 to 8929. The highest and lowest resistance ratios were observed in sample 1 (anthropogenic site) and sample 4 (natural site), respectively. The resistance ratio values of the samples collected from the semi-anthropogenic and semi-natural sites were 163 and 75, respectively. The highest resistance was recorded in the population from the most urbanized area in Tunisia, whereas the lowest resistance was found in relatively natural areas. As shown in Table 2, the use of synergists indicated that detoxification enzymes were not involved in the recorded resistance of the studied samples. However, five esterases of high activity were observed in the studied field samples, except for sample 4, using starch electrophoresis (Table 3): the esterase C1, encoded by the Est-1 locus, and four esterases encoded by the Ester super-locus: A1, A2-B2, A4-B4 (or A5-B5, which has the same electrophoretic mobility) and B12. The high level of chlorpyrifos resistance observed in resistant populations was correlated with propoxur resistance, indicating an insensitive AChE1. The frequencies of the resistant genotypes were 0.83, 0.85 and 0.44 for samples 1, 2 and 3, respectively. These findings may be due to higher insecticide selection pressure in anthropogenic areas than in the rest of the study sites. [-]: empty cells were due to the loss of some populations.
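As a technical aside, the log-probit analysis and resistance-ratio calculation described under Data analysis can be illustrated with a short sketch. This is our reimplementation for illustration, not the Raymond et al program; the dose-mortality counts and the S-Lab LC50 below are invented.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical chlorpyrifos bioassay: 5 concentrations (mg/l),
# 100 larvae per concentration (5 replicates x 20 larvae)
dose = np.array([0.001, 0.003, 0.01, 0.03, 0.1])
n    = np.array([100, 100, 100, 100, 100])
dead = np.array([8, 27, 55, 81, 97])

# Probit regression of mortality on log10(dose), as in Finney's method
X = sm.add_constant(np.log10(dose))
model = sm.GLM(np.column_stack([dead, n - dead]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()
intercept, slope = fit.params

# At 50% mortality the probit is 0, so log10(LC50) = -intercept / slope
lc50 = 10 ** (-intercept / slope)
print(f"slope = {slope:.2f}, LC50 = {lc50:.4f} mg/l")

# Resistance ratio against a susceptible reference strain (assumed LC50)
lc50_slab = 0.0005  # hypothetical S-Lab LC50, mg/l
print(f"RR50 = {lc50 / lc50_slab:.0f}")
```

The synergism ratio SR50 follows the same pattern: fit the synergized and unsynergized series separately and divide the two LC50 estimates.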
Discussion
In Tunisia, Culex pipiens pipiens is an important member of the Culex pipiens complex and acts as an important vector of the West Nile virus that recently affected the country 1. For these reasons, it was necessary to address the insecticide resistance problem. Here, we undertook a comprehensive study of insecticide resistance in Culex pipiens pipiens mosquitoes from four areas differing in their degree of urbanization. It is important to note that the general characteristics of the study areas showed that insecticide usage varied across the different ecological settings (anthropogenic, semi-anthropogenic, semi-natural and natural sites). The distribution of resistance ratios of Tunisian Culex pipiens pipiens in this study appears to be influenced by the degree of urbanization. Indeed, the highest resistance was recorded in the population from the most urbanized area in Tunisia, whereas the lowest resistance was found in relatively natural areas. The characteristics of the study areas suggest that agricultural and domestic use of insecticides may be the major cause of resistance in urban areas 8. However, mosquitoes collected from the semi-natural area were resistant despite the absence of both public health and agricultural applications, so these applications cannot fully explain the recorded resistance. In this preliminary assessment, it is clear that urban populations, which are exposed to higher levels of anthropogenic pollutants, exhibit stronger signals of selection. These observations call for a critical look at what needs to be done to manage the polluted breeding sites in the country and to regulate water bodies receiving effluent from commercial and domestic activities. The impact of urban pollutants on insecticide resistance in mosquitoes has been confirmed in previous studies 10,11. In contrast to agricultural pest control, urban pollutants and the uncontrolled use of insecticides for personal protection appear to be strongly involved, although Essandoh et al 20 suggested an important impact of agricultural pesticide use on organophosphate resistance. Other studies found Anopheles mosquitoes to be susceptible to organophosphates even where these were detected in large quantities in their breeding sites. These findings are in agreement with those of our study, where mosquitoes collected from the natural site were found susceptible despite the occasional use of insecticides in this area. In addition to public health and agricultural applications, several unknown chemicals or insecticides in polluted breeding sites can affect and select for multiple resistance mechanisms in mosquitoes, which can confer substantial resistance to new insecticides in the country. In this context, it is important to note that both metabolic and target-site mechanisms were identified as being involved in the recorded resistance of the studied field populations. An insensitive acetylcholinesterase (AChE1) and detoxification esterases were detected in resistant samples, and these mechanisms were positively associated with chlorpyrifos resistance. These findings are consistent with previous investigations relating both mechanisms to resistance to organophosphate insecticides 2-4,21-23. Thus, monitoring the resistance mechanisms may help the surveillance of organophosphate resistance in Culex pipiens pipiens. An interesting result from this study was the detection of the impact of urbanization on insecticide resistance in Culex pipiens pipiens. Resistance differed widely between the anthropogenic, semi-anthropogenic, semi-natural and natural sites. Data on the distribution of resistance in the various studied areas will be of great importance for developing efficient mosquito control strategies.

Conclusion
This is the first study in Tunisia showing evidence of the impact of urbanization on the resistance level in Culex pipiens pipiens.
Besides public health and agricultural applications, anthropogenic pollutants may be an important cause of resistance in mosquitoes. Proper management of the polluted breeding sites in the country and effective regulation of water bodies receiving effluent from commercial and domestic activities appear to be critical for managing insecticide resistance. In this context, it is important to mention the necessity of alternative effective vector control methods, including larval resource reduction and biological control, as well as new chemical insecticides.

Acknowledgements
We are very grateful to S Ouanes for technical assistance, A Ben Haj Ayed and I Mkada for help in bioassays, S Saïdi and the Tunisian hygienist technicians for help in mosquito collecting, and M Nedhif and M Rebhi for their kind interest and help.
2019-05-12T13:27:45.028Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "b79bf1101bd6acf44877e90d33bea9260ebd30ec", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/ahs/article/download/185644/174951", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b79bf1101bd6acf44877e90d33bea9260ebd30ec", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
269398780
pes2o/s2orc
v3-fos-license
Are We Always Right? Evaluation of the Performance and Knowledge of the Passive Leg Raise Test in Detecting Volume Responsiveness in Critical Care Patients: A National German Survey

Background: In hemodynamically unstable patients, the passive leg raise (PLR) test is recommended for use as a self-fluid challenge for predicting preload responsiveness. However, to interpret the hemodynamic effects and reliability of the PLR, the method of performing it is of the utmost importance. Our aim was to determine the current practice of the correct application and interpretation of the PLR in intensive care patients.

Methods: After ethical approval, we designed a cross-sectional online survey with a short, user-friendly online questionnaire. From a random sample drawn from the 1903 hospitals in Germany, 182 hospitals with different levels of care were invited via an email containing a link to the questionnaire. The online survey was conducted between December 2021 and January 2022. Critical care physicians from different medical disciplines were surveyed. We evaluated the correct points of concern for the PLR, including indication, contraindication, choice of initial position, how to interpret and apply the changes in cardiac output, and the limitations of the PLR.

Results: A total of 292 respondents participated in the online survey, and 283/292 (97%) of the respondents completed the full survey. Of these, 132/283 (47%) were consultants and 119/283 (42%) worked at a university medical center. The question about the performance of the PLR was answered correctly by 72/283 (25%) of the participants. The limitations of the PLR, such as intra-abdominal hypertension, were correctly selected by 150/283 (53%) of the participants. The correct effect size (increase in stroke volume ≥ 10%) was identified by 217/283 (77%) of the participants.

Conclusions: Our results suggest a considerable disparity between the contemporary practice of the correct application and interpretation of the PLR at German ICUs and the practice recommendations from recently published data.

Introduction
Fluid administration is used as a first-line therapy to maintain organ perfusion in patients with acute circulatory failure [1-4]. Although early fluid administration is beneficial compared with delayed fluid administration, the optimal amount of fluid required for an individual patient varies. The term "optimal" refers to the amount of fluid that restores blood flow to the end organs and, at the same time, does not impair end-organ perfusion. In patients with signs of acute circulatory failure, including systemic arterial hypotension, tissue hypoperfusion associated with organ dysfunction, and hyperlactatemia, fluid administration has historically been guided by static indices, such as intravascular pressures (e.g., central venous pressure and pulmonary artery occlusion pressure) and the cardiac output or stroke volume (measured via echocardiography or transpulmonary thermodilution) [2,4]. Given its moderate predictive power, central venous pressure is more useful for gauging the potential risk of further fluid administration than as an accurate predictor of fluid responsiveness in critical care patients [5,6].
In contrast, dynamic measures, such as the respiratory-dependent diameter of the inferior vena cava (IVC), stroke volume variation (SVV), and the passive leg raise (PLR), have been shown to predict fluid responsiveness in critically ill patients more accurately than static parameters, such as central venous and mean arterial pressure [7-11]. The use of the IVC diameter during respiratory changes is limited in obese patients, those undergoing laparoscopic surgery, and those with a poor echocardiographic subcostal window [12,13]. However, the use of the IVC for decisions about fluid responsiveness should be considered if certain technical and clinical conditions are met, i.e., in patients under mechanically controlled ventilation with a tidal volume of ≥8 mL/kg, an intra-abdominal pressure of <12 mmHg in non-obese patients, and in patients without acute cor pulmonale or severe right ventricular dysfunction. Stroke volume variation results from heart-lung interactions and is a sensitive indicator of preload responsiveness. However, a regular heart rhythm and controlled mechanical ventilation with a tidal volume of more than 8 mL/kg of predicted body weight (PBW) are required to accurately predict volume responsiveness [14]. The PLR enables a reversible volume challenge that is proportional to body size and does not result in volume overload in non-fluid-responsive patients. Furthermore, the advantages of the PLR include the ability to perform the test at the bedside, and it remains reliable in several conditions in which dynamic measures of preload responsiveness based on the respiratory variations of stroke volume are limited, such as spontaneous breathing, arrhythmias, a tidal volume of <8 mL/kg (PBW), and low lung compliance [15,16]. Monnet et al. described five rules for performing the PLR and for the correct application and interpretation of its hemodynamic effects [17]. First, one starts with the patient in the correct position, semi-recumbent (trunk at 45 degrees), followed by lowering the trunk and raising the legs. Second, the hemodynamic effects of the PLR should be assessed solely through the direct measurement of cardiac output (CO) and not by measuring decreased heart frequency or increased mean arterial pressure [15-17]. Third, it is recommended to use real-time techniques such as echocardiography or arterial pulse contour analysis, with an effect size of ≥10% of aortic blood flow. Fourth, the measurement of cardiac output should be repeated after the PLR, when the patient has been returned to the semi-recumbent position. Fifth, confounding factors such as pain or coughing when performing the PLR should be avoided, as these can provoke adrenergic stimulation, resulting in a mistaken interpretation of cardiac output measurements [15-17].

Adherence to existing recommendations for performing the PLR correctly, and the actual implementation of the PLR in German intensive care units (ICUs), have not been studied thus far. We hypothesized that there is high uncertainty regarding the application of the PLR in critical care patients and the interpretation of its hemodynamic effects. Our cross-sectional survey aimed to assess the current practice of the PLR and to identify possible differences in the experiences of the respondents.

Ethical Considerations
The local ethics committee (Ethical Committee N° 2019-14744) of the medical association of Rhineland-Palatinate State (Chair Dr. A.
Wagner) approved the study in 2021. The study was registered with ClinicalTrials.gov under register number NCT05882240. Data were fully anonymized before the researchers accessed them. A consent form was provided at the beginning of the survey and was circulated among all of the participants via email. The requirement of a written informed consent form was waived by the ethics committee. The study followed the guidelines of the Declaration of Helsinki. We excluded incomplete data reports. Survey links that had been opened without the provision of replies were also excluded from the study.

Study Design and Questionnaire
This study was a qualitative and quantitative analysis of an anonymous cross-sectional mixed-method survey (LimeSurvey®, version 5.3.19, Hamburg, Germany) conducted in accordance with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines [18]; it was conducted online from December 2021 to January 2022.

This was a closed-access survey consisting of 7 introductory questions and 4 questions on the preparation, interpretation, and limitations of the PLR, covering 3 pages. The questions were designed as multiple-choice and multiple-selection questions. The items' order of appearance was not randomized. Completeness checks were carried out before submission, and the selection of at least one response option was enforced. The survey was targeted to be completed within 10 min. Participants were informed in advance about the approximate duration, data storage, data management, and the purpose of the study. The demographic variables included the level of hospital care (first- or second-level hospital or university hospital), the number of ICU beds, the case mix of the unit (predominantly surgical, medical, mixed, or specialist ICUs, such as for cardiac surgery), as well as the level of training, years of experience, and any additional specialties of the survey participants. Furthermore, in a subgroup analysis, we evaluated the differences between non-cardiac and cardiac surgery ICUs regarding the measurement of the correct PLR effect size.
To identify and explore the current practice in performing the PLR in ICU patients, the survey participants were asked for the correct sequence of patient positions for increasing the test's sensitivity, followed by questions about the procedure for conducting measurements of the hemodynamic effects and the correct effect size for detecting fluid responsiveness. Finally, participants were also asked about the limitations of the PLR test. To assess adherence to and knowledge of the correct PLR practice, we created a score for each important point of PLR measurement. A minimum score of 0 (no knowledge of the PLR practice and an incorrect statement of the hemodynamic effect size) and a maximum score of 6 (correct PLR measurement) could be achieved (see Appendix A). The questions on the implementation, target value, and effect size of the PLR were scored with one point per correct answer. The question on the limitations of the PLR was awarded three points. The register of the German Interdisciplinary Association for Intensive Care and Emergency Medicine comprised 1340 sites of the 1903 hospitals in Germany reporting their capacity for intensive care beds on a daily basis. The invitation for our online survey was sent to leading ICU physicians of 182 hospitals with different levels of care and reached a response rate of >10%. An email was distributed to each contact identified through the aforementioned process (one per institution). The email introduced the nature and the purpose of the survey and invited the contact to complete the survey on behalf of their department. The survey was not advertised publicly. After two weeks, we sent a reminder via email to reach a response rate of >10%.

Data Analysis
All collected data were analyzed using GraphPad Prism 9.0h (GraphPad® Software, version 9.0 for Mac, La Jolla, CA, USA). Descriptive statistics were calculated by providing absolute numbers and percentages of background and demographic variables for all questions relating to the knowledge and current practice of the PLR. The Shapiro-Wilk test was used to examine the distribution of each variable. Normally distributed variables are presented as the mean and standard deviation (SD), while non-normally distributed variables are presented as the median and interquartile range (IQR). Where applicable, contingency tables were produced and analyzed using Fisher's exact test. Statistical significance was set at a p-value of <0.05.

Demographics of the Survey Participants
Questionnaires were sent to 182 hospitals with different levels of care. We received 292 responses, and 283/292 (97%) complete survey responses were obtained. We excluded nine surveys with incomplete answers on the preparation and interpretation of the PLR in critically ill patients from the analysis (5/9 (55%) of the survey links had been opened without the provision of replies, and 4/9 (45%) of the participants closed the survey after the first question). A total of 46/283 (16%) of the respondents worked at a first-level hospital, 118/283 (42%) at a second-level hospital, and 119/283 (42%) at a tertiary (university) hospital. Of these, 44/283 (16%) worked in mixed ICUs that admitted medical and surgical patients (Table 1). The distribution of intensive care unit characteristics and levels of training of the survey respondents is shown in Table 1. Values are presented as absolute numbers and relative proportions (%).
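A minimal sketch of how such a 0-6 score could be computed is shown below. The exact rubric is given in Appendix A (not reproduced here), so the field names and the assumption that the limitations question contributes up to three points are our own reconstruction.

```python
from dataclasses import dataclass

@dataclass
class PLRAnswers:
    """One respondent's answers. All fields are our illustrative
    reconstruction of the Appendix A rubric, which is not shown here."""
    correct_position_sequence: bool   # semi-recumbent start, trunk down, legs up
    correct_target_variable: bool     # direct CO/VTI measurement, not BP or HR
    correct_effect_size: bool         # stroke volume increase >= 10%
    n_correct_limitations: int        # correctly selected limitation items, 0..3

def plr_score(a: PLRAnswers) -> int:
    """Score 0-6: one point each for position, target variable and effect
    size; up to three points for the limitations question (assumption)."""
    score = sum([a.correct_position_sequence,
                 a.correct_target_variable,
                 a.correct_effect_size])
    score += min(max(a.n_correct_limitations, 0), 3)
    return score

# Example: correct position and effect size, wrong target, 2 limitations -> 4
print(plr_score(PLRAnswers(True, False, True, 2)))
```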
Preparation of the Patient Position
A total of 72/283 (25%) of the respondents would start a PLR test with the patient in a semi-recumbent position (with the trunk at 45 degrees), then lower the trunk and raise the legs. However, 198/283 (70%) would start with an initial leg raise (both legs) at 45 degrees, and 5/283 (2%) would start with an initial raise of one leg. Additionally, 7/283 (3%) would start with a flat position followed by the leg raise.

Indices of Fluid Responsiveness
A total of 23/283 (8%) of the respondents stated that they would look for a change in the left ventricular velocity time integral (VTI) at the left ventricular outflow tract, as measured via echocardiography, when conducting the PLR. Other indices are shown in Table 2. Otherwise, most of the participants looked primarily for a change in the systolic pressure (p < 0.0001).

Effect Size
A total of 217/283 (77%) of the survey respondents knew the correct effect size for the increase in stroke volume (≥10%). Table 3a shows the other reported effect sizes. Anesthesiologists most often gave the correct effect size (124 respondents, 80%), followed by cardiac surgeons (38/50, 76%) and internal medicine physicians (26/42, 62%). Table 3b shows the distribution of the correct effect size with respect to the number of ICU beds. Values are presented as absolute numbers and relative proportions (%).
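As a concrete illustration of the ≥10% rule discussed above (our sketch, not part of the survey instrument), the PLR read-out reduces to a comparison of a flow variable before and during the manoeuvre:

```python
def plr_fluid_responsive(sv_baseline: float, sv_during_plr: float,
                         threshold: float = 0.10) -> bool:
    """Return True if the PLR-induced relative increase in stroke volume
    (or another flow variable such as the LVOT VTI) meets the >=10%
    threshold. Inputs should come from real-time CO monitoring or
    echocardiography; blood pressure surrogates are not valid here."""
    if sv_baseline <= 0:
        raise ValueError("baseline stroke volume must be positive")
    return (sv_during_plr - sv_baseline) / sv_baseline >= threshold

# Illustrative example: VTI rises from 18.0 cm to 20.5 cm during the PLR
print(plr_fluid_responsive(18.0, 20.5))  # True: +13.9%, likely preload responsive
```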
To successfully predict fluid responsiveness, one can use the change in the preload, on the one hand, as well as the measurement of the subsequent changes in physiological variables, such as cardiac output or a derivate-like pulse pressure, on the other hand.The outcome variables of several studies differed from the "flow" variables, meaning the cardiac output or the cardiac index as an assessment of the cardiac output value based on the patient's size, stroke volume (index), or aortic blood flow."Pressure" variables, such as pulse pressure, which quantifies the changes in arterial pulse pressure, or systolic pressure, define the difference between the maximum and minimum values of systolic blood pressure following a single positive-pressure breath.Most of the published studies included patients with circulatory failure (mostly sepsis) and had different ventilator modes ranging from mechanically controlled ventilation to spontaneous breathing; most of the included patients had sinusoidal rhythm [15,[19][20][21][22][23][24][25][26][27][28].The measurement techniques varied from transthoracic echocardiography [15,19,20,23,[29][30][31], calibrated pulse contour analysis [22,[24][25][26][32][33][34][35], and esophageal Doppler analysis [17,19,27] to bioreactance [36,37].A meta-analysis of 23 clinical trials showed a higher diagnostic performance of changes in flow variables in the PLR (sensitivity of 85% [95% CI, 78-90] and specificity of 92% [95% CI, 87-94]) compared with the PLR-induced changes in pressure variables (sensitivity of 58% [95% CI, 44-70] and specificity of 83% [95% CI, 68-92]; p < 0.001) [38].On the other hand, 50% of ICU patients with acute circulatory failure do not respond to fluid administration, and excessive fluid administration can increase the risk of complications [39].Therefore, it is very important to measure flow variables (e.g., VTI/stroke volume or cardiac output) in critical care patients.Studies have shown that preferences or familiarity with any measured technique for fluid challenges outside clinical research do not exist.Interestingly, our survey showed consultants' high familiarity with echocardiography for the prediction of fluid response, but there was no such familiarity in trainees.This might be used as a point of approach for the better teaching of trainees in ICUs.A total of 23/283 (8%) of the respondents identified an increase in stroke volume via an LVOT VTI of ≥10% as predictive of volume responsiveness.However, there are published limitations: (i) VTI measurements are not continuous, and echocardiographic examinations are always dependent on ICU physicians' experience.(ii) The smallest change in the VTI between two measurements is considered significant and is not attributable to the variability in examinations; even a stroke volume of 11% might be found when the test is performed by the same examiner.(iii) Although this is close to the threshold value required for the PLR (stroke volume increase of at least ≥10%), it is still less precise than a continuous CO measurement using calibrated pulse contour analysis [40] and requires continuous ultrasound examination by an experienced physician, which is not always feasible in clinical practice. 
A total of 41/283 (15%) of the participants considered CO measurement via pulmonary thermodilution using a pulmonary artery catheter (PAC) to be an appropriate monitoring method for predicting volume responsiveness following the PLR. However, this is also a discontinuous method, and the effects of the PLR can be missed if measurements are taken at the incorrect time (as measured with the bolus thermodilution method). Even the modern PAC, which measures CO semi-continuously, only displays an average of measurements from the preceding 3-5 min and thus lacks sufficient temporal resolution to accurately assess the PLR [14]. A small, yet-to-be-validated study found that a PLR-induced increase in end-tidal carbon dioxide (EtCO2) of ≥5% predicted a fluid-induced increase in the CI of ≥15% with 71% sensitivity (95% CI = 48-89%) and 100% specificity (95% CI = 82-100%). The authors concluded that the changes in EtCO2 induced by a PLR test predicted fluid responsiveness reliably, while the changes in arterial pulse pressure did not [35]. A total of 150/283 (53%) of the participants answered correctly regarding the limitations of the PLR. One study demonstrated that raised intra-abdominal pressure (20 ± 6 mmHg compared with 4 ± 3 mmHg) was responsible for some false negatives in the PLR test [41]. As intra-abdominal hypertension with impaired lung compliance caused by cephalic displacement of the diaphragm is a frequently encountered situation in ICU patients, this limitation gives physicians the opportunity to remember the essential points regarding the PLR.

Further findings corroborated the implementation of pressure variables, such as the pulse pressure variation (PPV). A study from France demonstrated that although 87/145 (60%) of the physicians were familiar with the conditions for measuring the PPV, none were able to accurately interpret the results [42]. In that study, 75% of the participants did not perform the PLR in the previously described 45° semi-recumbent position and, instead, raised one or both legs. This approach risks false-positive outcomes due to the activation of the sympathetic nervous system, potentially leading to a subsequent increase in CO [17]. Several requirements limit the accuracy and precision of the diagnostic performance of the PLR in critically ill patients, such as mechanically controlled ventilation versus spontaneous breathing with the corresponding tidal volumes, and a regular heart rhythm. Other factors, such as the current central volume status and the administration of propofol and norepinephrine, influence the degree of preload dependency and, subsequently, the effect of the PLR [24,43,44]. Intra-abdominal hypertension can possibly provoke increased resistance to venous return. De Backer et al. described PPV as a reliable predictor of fluid responsiveness in mechanically ventilated patients with a tidal volume of at least 8 mL/kg PBW [45]. However, an important drawback of the PPV is that it is inaccurate when using a low-tidal-volume strategy, which is a common lung-protective ventilation strategy in ICU patients. A low tidal volume causes less intrathoracic pressure variation and can make PPV-based predictions of fluid responsiveness falsely negative. Mallat et al.
demonstrated in an observational study a poor predictive performance of PPV, which might be explained by a low tidal volume (median 7.1 mL/kg ideal body weight) [46]. Furthermore, they concluded that PLR-induced changes in PPV accurately predict fluid responsiveness, with a small grey zone, in intensive care patients under mechanical ventilation. A meta-analysis of 23 trials showed a similar diagnostic performance of the PLR when comparing spontaneously breathing patients with patients undergoing mechanically controlled ventilation [38]. In addition, no difference was observed when the PLR was performed starting from the recommended semi-recumbent position compared with starting in the supine position. Interestingly, the majority of the patients included in the meta-analysis had sinus rhythm, so no comparison between regular heart rhythm and arrhythmias could be made [38].

Our study found that the largest group of respondents (99/283 (35%)) considered an increase in systolic blood pressure an appropriate measure of effect size. However, systolic blood pressure correlates only weakly with CO, because blood pressure is determined by the product of the cardiac output and the systemic vascular resistance. It is a clinical reality that not all ICU patients undergoing a PLR test will have access to continuous CO monitoring or any advanced cardiovascular monitoring. In such cases, the pulse pressure (systolic minus diastolic blood pressure) should be used for assessment rather than the systolic blood pressure [47], although this correlation was not established in other studies [48]. Additionally, increases in diastolic or mean arterial blood pressure did not show a sufficient relationship with the increase in CO in these studies.

Limitations
Our trial has several limitations. First, although the selection of hospitals was randomized, we had no control over who and how many staff members from a single hospital participated in the survey. This method could have introduced a selection bias. Second, we could not definitively rule out that some participants took the survey more than once or that several physicians from one ICU responded. Third, due to a concentration of ICUs that primarily treated surgical patients, there may have been an overrepresentation of this group. Fourth, this study had a limited sample size and the naturally rigid structure of an online survey, including a moderate response rate that led to a semi-representative sample. It should also be considered that ICUs with a very high workload were possibly unable to respond to our survey due to a lack of time and resources. Finally, the findings cannot be extrapolated to a national level, as the number of respondents was not conclusive.

Conclusions
The PLR test is considered a standard diagnostic procedure for assessing volume responsiveness according to many guidelines. Our study demonstrated that neither the execution nor the interpretation of this seemingly simple test is performed with sufficient accuracy in German intensive care units. There is potential for quality improvement through the education and practical training of ICU physicians. Furthermore, in this context, we should encourage physicians to engage in scientific and medical reading, create local hemodynamic protocols, and establish standard operating procedures in ICUs.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki; the local ethics committee (Ethical Committee N° 2019-14744) of the medical association of Rhineland-Palatinate State (Chair Dr. A. Wagner) approved the study on 4 October 2021.

Informed Consent Statement: Written informed consent has been obtained from the patient(s) to publish this paper.

Table 1. Intensive care unit characteristics and levels of training of survey respondents.
Table 2. Respondents' indices for correct measurements. Values are presented as absolute numbers and relative proportions (%).
Table 3. (a) Effect sizes for the PLR reported by survey respondents. (b) Distribution of the correct effect size with respect to the number of ICU beds.
Table 4. Frequency distribution of the points regarding the responses of the participants. Values are presented as absolute numbers and relative proportions (%).
Table 5. Limitations of the PLR stated by the survey participants. Values are presented as absolute numbers and relative proportions (%).
2024-04-27T15:18:16.236Z
2024-04-25T00:00:00.000
{ "year": 2024, "sha1": "e31bad40d9ce17722d19e1661d6ae45e9d663390", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d4c1eb60a68651d129c86fd98aa889c04a711b39", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
3698279
pes2o/s2orc
v3-fos-license
The clinical picture of cachexia: a mosaic of different parameters (experience of 503 patients)

Background: Despite our growing knowledge about the pathomechanisms of cancer cachexia, a complete clinical picture of the cachectic patient is still missing. Our objective was to evaluate the clinical characteristics of cancer patients with and without cachexia to obtain the whole picture of a cachectic patient.

Methods: Cancer patients of the University Clinic "Klinikum rechts der Isar" with gastrointestinal, gynecological, hematopoietic, lung and some other tumors were offered the possibility to take part in a treatment concept including a nutrition intervention and an individual training program according to their capability. We now report on the first 503 patients at the time of inclusion in the program between March 2011 and October 2015. We describe clinical characteristics such as physical activity, quality of life, clinical data and food intake.

Results: Of 503 patients with cancer, 131 patients (26.0%) were identified as cachectic and 369 (73.4%) as non-cachectic. In the ergometry test, cachexia was associated with a 23% reduction in absolute performance (108 W for non-cachectic versus 83 W for cachectic patients) and a 12% reduction in relative performance (1.53 W/kg for non-cachectic versus 1.34 W/kg for cachectic patients). 75.6% of non-cachectic and 54.3% of cachectic patients still received curative treatment.

Conclusion: Cancer cachectic patients have multiple symptoms, such as anemia, impaired kidney function and impaired liver function with elements of mild cholestasis, lower performance and a poorer quality of life in the EORTC questionnaire. Our study reveals biochemical and clinical features specific to cancer cachectic patients.

Background
Ongoing cachexia represents a significant factor affecting the quality of life and prognosis of cancer patients. Cachexia is present in up to 40% of patients with early-stage gastrointestinal cancers and may be involved in up to 80% of cancer deaths. However, it is still difficult to identify cachectic patients, as 40-60% of cancer patients are overweight or obese, even in advanced cancer [1]. But what do we know about the clinical features of the cachectic patient? Cachectic patients usually, but not always, demonstrate a lower body mass index (BMI), which is associated with an increased risk of tumor progression [2,3]. At the same time, other groups report that BMI is not a prognostic factor for cancer cachexia in a cohort of patients comprising 17% obese, 35% overweight, 36% normal-weight, and 12% underweight persons [4]. Cancer cachectic patients experience numerous complications, including reduced effectiveness of chemotherapy [5,6], reduced mobility, and reduced functionality of muscle-dependent systems, such as the respiratory and cardiovascular systems, leading to decreased quality of life and survival [7-9]. Especially in the older population, the clinical features of cancer cachexia are key predictors of one-year mortality [10]. There is a strong correlation between decreased quality-of-life scores and decreased physical activity, which is strongly related to weight loss [11]. It was demonstrated that cachectic patients present lower protein, albumin, and hemoglobin levels [12]. Notably, cachexia is not an incurable situation.
The important message is that weight-losing patients with unresectable pancreatic cancer can attenuate their weight loss after eight weeks of intensive nutrition intervention, and weight stabilization is associated with prolonged survival and improved quality of life [13]. However, despite our growing knowledge about the pathomechanisms of this symptom complex, a whole picture of the cachectic patient is still missing. Some studies aim to define diagnostic criteria of cancer cachexia [14]. Usually, diagnostic tools for cachexia include loss of weight and lean body mass, fatigue, anorexia, reduced physical performance (for example, total activity or 6-min walk distance) and biochemical abnormalities of C-reactive protein (CRP), albumin, and protein. The existing concepts for the therapy of cachexia focus either on nutrition or on physical activity. We therefore founded a nutrition and exercise center for cancer patients, in which we are focusing on the definition of the cachectic patient and on a combined treatment of cancer cachexia with numerous therapy options. Our aim was to evaluate clinical characteristics such as physical activity, quality of life, clinical data and food intake in patients with and without cachexia to obtain the whole picture of a cachectic patient.

Patients
From March 2011, cancer patients of the University Clinic "Klinikum rechts der Isar" with gastrointestinal (GI), gynecological, hematopoietic, lung and some other tumors were offered the possibility to take part in the treatment concept, including a nutrition intervention and an individual training program according to their capability. We now report on the first 503 patients at the time of inclusion in the program. All parameters, like physical capability, daily calorie intake or selected lab values, were documented in a prospectively designed database. The exact definition of cachexia is a debatable issue in the medical literature (reviewed in [15]). We used the definition of malnutrition proposed by the ESPEN (European Society for Clinical Nutrition and Metabolism) Consensus Statement, with the following criteria [16]: weight loss (unintentional) of >10% over an indefinite period of time, or >5% over the last three months, combined with either
- BMI <20 kg/m² if <70 years of age, or <22 kg/m² if >70 years of age, or
- FFMI (fat-free mass index) <15 kg/m² in women and <17 kg/m² in men.
Our definition of cachexia was also according to Fearon and co-workers [17] and is used by other researchers [18]. Here, patients are defined as having cachexia either when they show a weight loss of 5% during the last six months, or a weight loss of 2-5% in combination with a BMI < 20, or a weight loss of 2-5% together with the presence of sarcopenia. Sarcopenia was defined according to a report of the European Working Group on Sarcopenia in Older People (EWGSOP), using the first criterion (low muscle mass) plus either the second criterion (low muscle strength) or the third criterion (low physical performance) [19,20].
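A minimal sketch of this operational definition, assuming the Fearon-based criteria as stated above (the variable names and the strictness of the boundaries at exactly 5% or 2% are our assumptions, since the paper does not state them explicitly), might look as follows:

```python
def is_cachectic(weight_loss_6m_pct: float, bmi: float,
                 sarcopenia: bool) -> bool:
    """Operationalization of the cachexia definition described above:
    weight loss > 5% over the last six months, or weight loss of 2-5%
    combined with BMI < 20 or with the presence of sarcopenia."""
    if weight_loss_6m_pct > 5.0:
        return True
    if 2.0 <= weight_loss_6m_pct <= 5.0 and (bmi < 20.0 or sarcopenia):
        return True
    return False

# Illustrative examples
print(is_cachectic(7.0, bmi=24.0, sarcopenia=False))  # True: >5% weight loss
print(is_cachectic(3.0, bmi=19.0, sarcopenia=False))  # True: 2-5% loss + BMI < 20
print(is_cachectic(3.0, bmi=23.0, sarcopenia=False))  # False
```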
Performance Endurance capacity, maximal power output (POmax) and peak oxygen uptake (VO2peak) were measured as described [21] in a submaximal incremental exercise test on a computer-controlled bicycle ergometer. A stepwise incremental exercise protocol was applied, starting at 25 or 50 watts with increments of 25 watts every three minutes until volitional exhaustion or medical reasons for exercise termination were reached. The exercise was terminated prematurely in the case of significant ECG abnormalities, severe dyspnea or an excessive blood pressure increase to more than 230 mmHg systolic and/or more than 110 mmHg diastolic. Lung function Spirometry provided a measurement of the forced vital capacity (FVC) and the forced expiratory volume at the end of the first second of forced expiration (FEV1). Quality of life and mental health Health-related quality of life (HRQoL) is an important parameter that can predict survival. It was assessed with the 36-Item Short Form Health Survey (SF-36) and the EORTC QLQ-C30. The EORTC QLQ-C30 is a cancer-specific HRQoL measure that captures patients' functional status in several domains (physical, psychological, and social), their global health status/quality of life (QoL), and symptom severity, whereas the SF-36 is a generic measure [22,23]. Mental health The Hospital Anxiety and Depression Scale (HADS) was used for identifying distress. There are two subscales: depression (HADS-D) and anxiety (HADS-A). The optimal cut-off point is ⩾8 for the identification of possible cases and ⩾11 for probable cases on both subscales, with a sensitivity and specificity of 0.80 on average [24]. With a score of ⩾13, it is possible to detect 76% of the cases among cancer patients with a specificity of 0.60, whereas 95% of the cases can be detected with a score of ⩾6 (specificity 0.21) [24]. Nutritional risk screening (NRS) A diet record was kept to register food intake (number of meals, calorie intake per day, number and kind of additional nutrition) as described [25]. Role of the funding source The study was in part supported by Nutricia. Statistical analysis Results are expressed as median values. Statistical analyses were performed using the SPSS (version 23, SPSS Inc., Chicago) software package. Two-sided tests and a significance level of 0.05 were used. Values were compared by the Mann-Whitney U test for independent samples. Results The parameters of the patients are noted in Table 1. One hundred thirty-one patients (26.0%) were classified as cachectic and 369 (73.4%) as non-cachectic (Fig. 1). In 3 patients (0.6%) this information was not available. As expected, cachectic patients showed pronounced weight loss and lower values for BMI, nutrition score and Karnofsky index (Table 2). 54.3% of cachectic patients still received curative treatment (Fig. 2). Laboratory variables Anemia parameters In our study, hemoglobin, erythrocytes and hematocrit were significantly (p < 0.001) lower in cachectic patients (Table 3). Excluding patients who received chemotherapy at or prior to the time of evaluation, the significant difference (p = 0.015) in hemoglobin level is still present (13.2 ± 1.3 g/dl for non-cachectic patients and 12.5 ± 1.5 g/dl for cachectic patients). Serum albumin and protein values Serum albumin and serum protein were significantly decreased (p < 0.001) in cancer patients with cachexia (Table 3).
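The group comparisons reported here were run in SPSS with the Mann-Whitney U test; the following Python sketch reproduces an equivalent two-sided comparison on invented hemoglobin values, not the study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical hemoglobin values (g/dl); the real analysis used the cohort data in SPSS
hb_non_cachectic = [13.5, 13.0, 12.8, 14.1, 13.2, 12.9]
hb_cachectic = [12.1, 11.8, 12.6, 11.5, 12.3, 12.0]

# Two-sided test at the 0.05 significance level, as in the study
stat, p = mannwhitneyu(hb_non_cachectic, hb_cachectic, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```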
Kidney function Both the median (0.8 mg/dl for non-cachexia and 0.8 mg/dl for cachexia, Table 3) and the mean (0.85 ± 0.24 mg/dl for non-cachexia and 0.79 ± 0.19 mg/dl for cachexia) serum creatinine values were significantly lower in the cachexia group (p = 0.042 for medians and p = 0.009 for means). Urinary creatinine, as well as urinary values for IgG, alpha-1-microglobulin and protein, were significantly higher in cachectic patients (Table 4). Liver function and parameters of protein synthesis Two cholestasis enzymes, alkaline phosphatase (ALP) and gamma-glutamyl transpeptidase (GGT), were significantly increased in cancer patients with cachexia (Table 3). The parameters of hepatocyte integrity, aspartate aminotransferase (AST) and alanine aminotransferase (ALT), were not changed. Markers of liver synthesis function, cholinesterase (CHE), serum albumin and serum protein, were significantly decreased (p < 0.001). In total, 187 patients (37% of all study participants) received chemotherapy at the moment of inclusion in this study: 63 (33.7%) in the cachexia group (this information was not available in 2 patients) and 124 (66.3%) in the non-cachexia group (this information was not available in 4 patients). A significant correlation was seen between ALP and current chemotherapy (r = 0.258, p < 0.001), GGT and current chemotherapy (r = 0.205, p < 0.001), as well as CHE and current chemotherapy (r = −0.182, p < 0.001). 66 (50.4%) cachectic and 245 (66.4%) non-cachectic patients did not receive chemotherapy at the moment of inclusion in this study. In this group there is still a significant difference between cachexia and non-cachexia regarding ALP (p < 0.001), CHE (p < 0.001), the Quick value (p < 0.05) and serum albumin (p < 0.001), but not in the case of GGT (p = 0.154). Physical performance and lung function Two parameters of endurance capacity (absolute and relative performance) were significantly lower in cachectic patients (Table 2). FEV1 and VC were not significantly decreased (p = 0.616 and p = 0.688, respectively), whereas relative VC was significantly lower in cachectic patients (Table 2). Quality of life, mental health and food intake There were significant differences between cachectic and non-cachectic patients regarding the Global Health Score, Physical Functioning Score, Role Functioning Score, Social Functioning Score, Fatigue Score, Nausea & Vomiting Score, Appetite Loss Score and Diarrhoea Score (p < 0.001). Food intake Cachectic patients understand the problem of weight loss and take more meals per day than patients without cachexia (Table 5). Cancer patients with cachexia sometimes receive more calories compared to cancer patients without cachexia (Table 6). 12.5% of cachectic patients already receive parenteral nutrition. A summary of the clinical parameters of the cachectic cancer patient is shown in Fig. 3. Discussion Our study demonstrated that cancer cachectic patients have multiple symptoms such as anemia, impaired kidney function and impaired liver function along with elements of mild cholestasis. Cachectic patients have low levels of protein and albumin. As a result, they have significantly more extracellular water and less intracellular water compared to patients without cachexia. This means that not only low calorie intake but also low oncotic pressure due to low protein plays an important role in the weight loss of cachectic patients. In parallel to the protein deficiency, cachectic patients have lower performance parameters.
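The enzyme-versus-chemotherapy correlations above (e.g., r = 0.258 for ALP) pair a binary grouping variable with a continuous one; in that setting Pearson's r reduces to the point-biserial correlation. A hedged sketch with invented values, purely to show the computation:

```python
from scipy.stats import pointbiserialr

# Hypothetical data: 1 = chemotherapy at inclusion, 0 = none; ALP in U/l
on_chemo = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
alp_u_l = [130, 145, 80, 95, 150, 88, 92, 140, 85, 90]

r, p = pointbiserialr(on_chemo, alp_u_l)
print(f"r = {r:.3f}, p = {p:.4f}")
```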
The low levels of serum albumin, hematocrit, and fibrinogen are well known in cachectic patients but probably not specific. Furthermore, the performance status of cachectic patients, measured by ergometry, was significantly reduced, accompanied by a poorer quality of life in the EORTC questionnaire (Fig. 3). Fearon and co-workers described a population of 170 advanced pancreatic cancer cachectic patients using the Karnofsky Performance Score, grip strength, dietary intake, quality-of-life assessment with EuroQol EQ-5D and QLQ-C30, CRP, and CA19-9, but they mostly concentrated on evaluating whether a 3-factor profile incorporating weight loss, low food intake, and systemic inflammation might relate better to a patient's overall prognosis than weight loss alone [14]. Wallengren and co-workers reported on 405 patients regarding cachexia criteria such as body mass index (BMI), weight loss, fatigue, Karnofsky performance score, physical function measured on a treadmill, low handgrip strength, lean tissue depletion (DXA or arm muscle circumference), quality of life measured by QLQ-C30 and abnormal biochemistry (inflammation, anemia, or low serum albumin) [26]. The biggest data set, with 8160 patients, was reported by Martin and co-workers [3], but the authors focused mainly on BMI and percentage weight loss in relation to overall survival in order to develop a grading system. Takayama and co-workers analyzed 406 stage IV NSCLC patients using handgrip strength, quality of life, the Karnofsky Performance Scale, biochemical parameters (white blood cell count, hemoglobin, protein, albumin, triglycerides, calcium, CRP, and insulin-like growth factor-1) and survival [27]. In the study of Thoresen and co-workers, 77 patients with advanced colorectal carcinoma were described using clinical parameters such as energy intake, the skeletal muscle mass cross-sectional area, the Subjective Global Assessment (SGA, a tool for assessing nutritional status), protein, albumin and CRP [18]. Laboratory variables Anemia parameters In our study population, the median hemoglobin was 12 g/dl and the mean hemoglobin was 11.8 ± 1.5 g/dl. Our data regarding anemia in cachectic patients are confirmed by other groups. It was additionally reported, using univariate Cox proportional hazard regression, that hemoglobin was significantly associated with mortality risk [28]. According to the cachexia score of Argiles and co-workers [29], a tool for staging cachectic patients, hemoglobin in cachectic patients should be below 12 g/dl. Serum albumin and protein values Although we observed hypoalbuminemia and hypoproteinemia in cachectic patients, these changes were not severe. Additionally, we observed that the calcium level in cachectic patients was lower than in non-cachectic patients. Taking into consideration that half of circulating calcium ions are bound to albumin, this effect probably resulted from hypoalbuminemia. Reasons for hypoalbuminemia are usually decreased synthesis, increased degradation, or an increased transcapillary escape rate [30]. We hypothesize that the primary mechanism was decreased synthesis, which is supported by the decreased liver synthesis function measured via cholinesterase (Table 3). At the same time, increased degradation was not observed because urinary albumin was unchanged (Table 4).
According to the Consensus Statement of the European Society for Clinical Nutrition and Metabolism (ESPEN), visceral proteins like serum albumin, whose concentrations are good indicators of disease severity and outcome, should not be used for either screening or diagnosis of malnutrition because of their low nutritional specificity [16]. Kidney function In our study there was a significant difference in serum creatinine between the cachectic and non-cachectic groups, which is confirmed by data of another working group [31] demonstrating that serum creatinine can be a biomarker of skeletal muscle mass in chronic kidney disease. The urinary excretion of enzymes, in particular N-acetyl-beta-D-glucosaminidase (NAG) and alpha-1-microglobulin, non-invasive parameters of renal tubular function, was significantly higher in cachectic patients. Impaired liver function in cachexia Two cholestasis markers, ALP and GGT, were raised in isolation in cachectic patients, with normal bilirubin. Though non-liver causes of this elevation, such as bone metastases, hyperparathyroidism, renal impairment and Paget's disease, are possible, the combination of the two markers makes a hepatic origin more likely. One possible explanation is a hepatotoxic effect of the chemotherapy, suggested by the correlation between ALP, GGT, CHE and chemotherapy at the time of inclusion. However, the differences in ALP, GGT and CHE between chemotherapy patients and chemotherapy-naive patients were not significant in our study. To our knowledge, elevated cholestasis markers and decreased liver synthesis parameters have not been described in cancer cachexia until now. This elevation was mild but present in cachectic patients both under chemotherapy and without chemotherapy. Only for cardiac cachexia has it been demonstrated that 60% of cachectic patients present with abnormal cholestatic parameters [32]. Some authors proposed an important role of liver enzymes in cancer cachexia (reviewed in [33,34]), in which a flow of amino acids from skeletal muscle to the liver occurs and serves gluconeogenesis and acute-phase protein synthesis. It has been suggested that an interaction between the tumor, peripheral blood mononuclear cells, and the liver may play a central role in the development and regulation of cachexia [35]. The important role of the liver in cancer cachexia was proposed by Lieffers and co-workers [36]. They hypothesized that a viscerally driven cachexia syndrome in patients with colorectal cancer originates from an increase in the mass of high-metabolic-rate tissues, such as the liver and spleen. Inflammation parameters (CRP) in cachexia Increased CRP is supposed to be a valid laboratory and clinical marker in cachexia [5,14,37,38]. Fearon and co-workers proposed that the inclusion of a marker of systemic inflammation (e.g., CRP) in a cachexia stratification system could account for patients with real loss of function who also perceive themselves to have reduced function [14]. Though we saw a significant difference in CRP values between cachectic and non-cachectic patients, this difference (0.1 mg/dl versus 0.2 mg/dl) is too nonspecific to provide additional information to the clinician when other accessible markers, such as serum hemoglobin or cholinesterase, are considered.
In spite of some prognostic scores for the assessment and treatment of cancer cachexia, like the Glasgow Prognostic Score (GPS) [39] or the cachexia score (CASCO) [29], which are based on CRP and albumin values, we agree with Utech and co-workers, who suggest that inflammatory markers may not necessarily improve our ability to predict survival when cancer staging, serum albumin, and weight loss history are available [28]. Additionally, we think that CRP is not necessarily a characteristic parameter in cancer cachexia because it is not routinely measured in clinical practice; in Germany it is usually measured only if indicated. Physical performance Two parameters of endurance (capacity performance and relative performance) were significantly lower in cachectic patients. The dramatic change in cachexia was the 23% reduction in capacity performance (108 Watt for non-cachectic and 83 Watt for cachectic patients [40]). These data are of special importance because, for the EORTC QLQ-C30, the general health and functioning scales as well as the symptom scales (dyspnea and appetite loss), and, for the SF-36, role-emotional, general health, energy/vitality, and social functioning, significantly predicted survival [23]. Fearon and co-workers report that weight loss alone (≥10%) did not define a population that differed in self-reported functional aspects of quality of life [14]. With our present study, we were able to demonstrate slight but significant changes in quality of life in cachectic patients without using CRP as a diagnostic parameter for cachexia. This could be explained by the different patient populations (pancreatic cancer patients who were not considered suitable to receive systemic chemotherapy in the study of Fearon and co-workers, and patients with mixed cancers in our population). Food intake A reduction in food intake is assumed to be common in patients with progressive cancer and cachexia. Dysphagia, nausea, xerostomia and changes in taste and smell may lead to diminished food intake and thereby insufficient energy intake (reviewed in [1]). Our data show that weight loss did not depend on calorie intake, because cachectic patients know their problem and eat appropriately after a medical recommendation. Additionally, doctors recognize the problem of under-nutrition and prescribe parenteral nutrition (in 12.3% of patients in our cohort of cachectic patients). Tsoli and colleagues confirm our result in a murine model and report that not only reduced food intake but also dysregulated expression of transcription factors that control lipid metabolism and thermogenesis in brown adipose tissue leads to weight loss during the development of cachexia [41]. So, despite a comparable number of meals per day, patients with cachexia had a reduced calorie intake. Limitations One potential limitation of this study was the observational design, so there may be bias inherent in who ultimately was referred to our nutrition-exercise center or decided to participate in the study. In total, 187 (37%) patients received chemotherapy at the moment of inclusion in this study. This fact could influence the characteristics of the patients. The patients are also inhomogeneous regarding the underlying tumor entities. Conclusion Our study reveals specific biochemical and clinical features of cancer cachectic patients. A strength of our study is that it was conducted on large study groups (369 patients without cachexia and 131 patients with cachexia). We were able to demonstrate that the problem of cachectic patients is not calorie intake but protein turnover and possibly a disorder of fat metabolism.
Therefore we postulate that cachectic patients should be treated as high-risk patients and propose that, after the diagnosis of cachexia, patients should be presented to a cachexia team including a "leading doctor" (for example, a surgeon, oncologist or internist who supervises the treatment), a nutritional specialist, a clinical pharmacist, a sports scientist and a psychiatrist.
USING PHYTOREMEDIATION AND BIOREMEDIATION FOR PROTECTION OF SOIL NEAR A GRAVEYARD The aim of the present research was to assess the usefulness of Basket willow (Salix viminalis) for phytoremediation and bioremediation of a sorption subsoil contaminated with pesticides. Studies on the purification of a sorption material consisting of soil and composting sewage sludge were conducted under pot experiment conditions. The study design included a control pot along with 3 other pots polluted with pesticides. The vegetation season lasted from spring until late autumn 2015. After acclimatization, a mixture of chloroorganic pesticides was added into the 3 experimental pots. After harvest, it was found that pesticide contents in the sorption subsoil (from 0.0017 to 0.0087 mg kg DM) were much higher than in the control soil (from 0.0005 to 0.0027 mg kg DM). The achieved results initially indicate that Basket willow (Salix viminalis) can be used for the reclamation of soils contaminated with pesticides, particularly for prolonging the vitality of the sorption barrier around a pesticide burial area. In future, this would allow applying a sorption screen around the pesticide burial area, which reduces pesticide migration into the environment, while the grown energetic plants, through phytoremediation, would prolong the sorbent vitality and remove pesticides from the above-ground parts by means of combustion. INTRODUCTION Waste dumps with outdated and useless plant protection means are the most serious threat to the natural environment that agricultural chemization could cause in Poland. In the case of corrosion and damage of a pesticide burial site construction, a continuous supply of contaminants to open waters occurs and will occur for many years [Biegańska 2013; Ignatowicz 2008, 2015]. Therefore, there is a need to search for methods to reduce pesticide migration to the environment and to incorporate new concepts. Thus it is purposeful to perform studies on the application of the sorption process on selected natural and waste materials as a shield against the penetration of pesticides and metals (as pesticide constituents) into the environment, and to reduce their migration from other pesticide burial sites and stores [Ignatowicz 2008, 2015]. Phytoremediation with energetic plants was an additional element that should limit contaminant migration. The success of phytoremediation depends mainly on a properly selected plant species [Antonkiewicz 2006; Borkowska 2003; Parzych 2016]. Desirable features making it possible to apply a given plant are: fast growth, production of large amounts of biomass in a short time, a developed root system, high tolerance to pollution, a great ability to accumulate toxins in above-ground parts, and resistance to diseases, pests, and weather conditions. All the above requirements are met by energetic plants, a representative of which is Jerusalem artichoke. This species does not require special soil conditions; thus its cultivation may be performed on chemically contaminated areas where the production of consumption plants is not necessary. Jerusalem artichoke is utilized for energetic purposes as a fuel, and for chipboard and compost production.
Soil fungi are an additional factor in pesticide degradation. Two reasons for the high activity of soil fungi can be described. The first is their higher durability under vegetation conditions in comparison with other soil microorganisms, and the second is their production of enzymes induced by organic compounds in the soil. Soil microbes with higher activity for pesticide degradation are: Penicillium, Aspergillus, Fusarium and Trichoderma [Różański 1992; Ignatowicz 2015]. The present study was aimed at evaluating the usefulness of Basket willow (Salix viminalis) for phytoremediation and bioremediation of a sorption subsoil (consisting of soil and composting sewage sludge) contaminated with pesticides. In future, this would allow applying a sorption screen around the pesticide burial area, which reduces pesticide migration into the environment, while the grown energetic plants, through phytoremediation, would prolong the sorbent vitality and remove the pesticides accumulated in above-ground parts by means of combustion. MATERIAL AND METHODS Investigations on the phytoremediation of the sorption material were conducted under pot experiment conditions. The experimental design included 4 objects: a control pot and 3 other pots containing soil amended with pesticides. The initial studies [Ignatowicz 2008, 2009; Ignatowicz et al. 2015] confirmed the usefulness of a soil mixture collected from the pesticide burial area and composting sewage sludge (Table 1) for making a sorption shield around that site. Basket willow (Salix viminalis) was grown in 4 pots of 0.3 m² area and 90 dm³ capacity filled with the above mixtures (Figure 1). The vegetation period lasted from spring until late autumn 2015. After acclimatization, mixtures of chemically pure chloroorganic pesticides (HCH, DDT) were continuously added (imitating a surface supply) to the 3 experimental pots. During the whole experimental period, 5 mg of each active substance per pot was administered. After harvest, samples of the soil and of the above- and underground parts of the plants were collected, and their pesticide concentrations were determined. RESULTS AND DISCUSSION The achieved results confirm the observations made by Borkowska [2003] and Styk [1984], who found that Jerusalem artichoke and Basket willow (Salix viminalis), as perennial multipurpose species, are characterized by great yielding potential despite poor soil and climatic requirements. Plants set in the first experimental year (as several-year-old seedlings from a plantation) revealed high yields of above-ground parts. The opportunity to obtain high yields allows proposing Jerusalem artichoke as one of the species useful for the reclamation of chemically degraded areas, particularly for the phytoremediation of pesticides from the sorption barrier. The studies of Lunney et al. [2004] compared the ability of five plant varieties to mobilize and phytoremediate DDT and its metabolites. The potential and limitations of phytoremediation for the removal of pesticides from the environment have been reviewed by Chaudhry et al. [2002].
High growth of stems and leaves was observed in the first year of the experiment. The heavy yielding of plants such as Salix viminalis confirms its usefulness for the recultivation of chemically degraded soil, especially for the phytoremediation of pesticides from the sorption barrier. The studies of Antonkiewicz and Jasiewicz [2006] revealed a high yielding potential of Jerusalem artichoke on soil with varied heavy metal contamination, which proves its great resistance and fast adaptation to polluted soils. Our own studies were also confirmed by Borkowska [2003] and Xia [2006], who observed more abundant yields of Jerusalem artichoke on subsoil amended with sewage sludge than on mineral soil. This applied both to plant height and to biomass yield (Figure 1). Besides its high yield-forming potential, Basket willow (Salix viminalis) also shows a great ability to take up pesticides from the subsoil (Table 2). Much higher levels of absorbed pesticides were recorded in soil mixed with composting sewage sludge (0.0017-0.0087 mg kg DM) than in native soil (0.0005-0.0027 mg kg DM). A similar dependence was observed in samples of the above-ground parts of Basket willow (Salix viminalis). Both the leaves and the stems of plants cultivated on the sorption subsoil accumulated more pesticides. Higher toxin concentrations were detected in stems (DDT 0.0087, HCH 0.0029 mg kg DM) than in leaves (DDT 0.0027, HCH 0.0017 mg kg DM), regardless of the subsoil on which Basket willow (Salix viminalis) was cultivated. During our own research, 20 fungi species were isolated (Table 3). The results were close to those of Wagner [2004] and Mietkiewicz [1997], who determined fungi in pesticide wastes. Penicillium and Trichoderma dominated as the genera responsible for pesticide degradation in soil, along with Chrysosporium, Wardomyces and Oidiodendron [Ignatowicz 2015; Różański 1992]. CONCLUSION The achieved results allow concluding that Basket willow (Salix viminalis) can be used for the phytoremediation of soils contaminated with pesticides, and particularly to prolong the vitality of the sorption barrier around a pesticide burial area. The more abundant yields of Basket willow (Salix viminalis) on the subsoil amended with composting sewage sludge than on mineral soil allow predicting large amounts of biomass for energetic purposes, thus removing the accumulated pesticides by means of combustion. Table 1. Characteristics of the composting sewage sludge. Table 2. Mean concentrations of pesticides in Basket willow (Salix viminalis). Table 3. Fungi species in the sorption solum.
Clustering Analysis of Risk Divergence of China Government's Debts It is a difficult time for the world's economies while the impact of COVID-19 is ongoing. A possible worldwide sovereign debt crisis could emerge in the short term, owing to supply chain blockages caused by the slowdown in many countries. China, having the second largest economy in the world, is crucial for the stability and sustainability of the economic recovery. China has enjoyed long-term growth since 2000; nevertheless, a large amount of that growth has been contributed by government debt, which was spent on infrastructure. The accumulation of debts is a potential risk to the future growth of China. This research evaluates the central government and local government debts with a series of indicators. The weights of the indicators are determined by the objective CRITIC approach. The results confirm that the central government debt of China is on the edge of risk, while the risk of local government debt is already at a concerning level of danger. The local government risk is 50% higher than the central government's risk. Moreover, the K-means clustering algorithm performed on data collected from various provinces suggests that the local government debts of China follow a pattern of geographical distribution; that is, the closer to the coast, the lower the risk, which is in accordance with the pattern of labor flows. Laborers are attracted by the job opportunities that lie in the well-developed regions of China. This is confirmed by a cross-check with the wage growth data. This indicates that the less developed areas of China rely more heavily on debt-financed investment stimulation, which could lead to stagnation, because the yield of investment follows diminishing marginal returns and the relative lack of labor weakens potential economic growth. Introduction Government debt has been a crucial factor in the stability of governments and the world economy. Usually, when a nation is not able to pay its debts, it chooses to default on them, absent more desperate actions such as waging war. Default on debt will drastically increase borrowing costs, because creditors require higher interest rates to compensate for the possibility of receiving nothing. The higher debt service costs will further decrease the nation's fiscal expenditure, especially on investments, which will crush potential economic growth. That is what we observed in most debt crises, such as the cases of Latin America and the 2008 sovereign debt turmoil in Europe. Furthermore, the debt crisis does not harm only the defaulting country: the debt risk is contagious at the regional level, if it is not a global one [1]. China had enjoyed long-term economic growth over the last decades. However, the potential growth rate of China has fallen below 6%, compared with the earlier double-digit growth. Since the second decade of the 21st century, China has depended much more on investments, and a large share of the investments comes from the government (including local governments). Yet investment follows the law of diminishing returns, making it less economically efficient. Thus, government debts started to accumulate fast. The government debt of China (including local governments) is more than 38 trillion Chinese Yuan. Compared to the Gross Domestic Product (GDP) of about 100 trillion Yuan, the debt may not be an immediate and direct threat. Its fast growth, however, makes the debt a potential risk as the world economy was disrupted by the COVID-19 pandemic.
Meanwhile, China has now become the largest creditor in the world [2]. This suggests that its fiscal situation will decide the debt relief decisions on countries with heavy debt burdens. The sovereign debt problem is not a new one; from a historical perspective, it is rare for no country to default on its debts [3]. Though foreign debt default usually causes international quarrels and conflict, the risk of domestic debt should not be underestimated [4]. Also, when exports take a large share of a nation's economic engine, the sovereign debt is vulnerable to international shocks [5]. For a large economy like China, the foreign debt to total debt ratio is now much lower compared to the 1980s, when the country required immediate investment to stimulate economic development. Note that China has gone through an economic transformation that significantly reduced its dependency on exporting, while domestic consumption has started to rise. Research about sovereign debt focuses on the risks it brings to other political and financial systems. When a market is rich in liquidity, the stock market is easily endangered by debt risk [6]. Next comes the bond market, which is directly linked with government debt. Since the largest part of government bonds is traded in the domestic bond market, and based on the study of spreads of local currency bonds, the domestic debt risk is lower than that of foreign debts [7]. However, according to the study of the European debt crisis, the domestic banks of a debt-stressed country would buy in more bonds, which does not really ease the issue but buys more time for the nation to deal with the debt crisis [8]. Therefore, close risk monitoring and evaluation of government debt is necessary. Mao et al. found that a debt crisis can turn into a financial crisis through the domestic commercial banks, just as the debt crisis in Europe can be considered an outcome of the 2008 financial crisis [9]. How to regulate the financial system is also an important topic discussed in debt research. From the experience of the European debt crisis, the right way would be to set more firewalls between financial products. Research indicates that in post-debt-crisis European countries, CDS and bonds in their markets have lower statistical cointegration [10]. Yet the fundamental action would be reforming the fiscal situation. The mainstream of the field considers that the optimal fiscal policy would be procyclical actions [11]. Meanwhile, contractionary fiscal policy is also advocated by many researchers who believe that a high debt to GDP ratio will force the interest rate to go up, based on model studies [12]. Moreover, Croce et al. (2021) [13] believe that cutting the balance of debt will increase output and welfare levels. Nevertheless, in general, scholars are aware of an incoming debt crisis. They call for more rigid fiscal rules while many economies are disrupted by the COVID-19 pandemic [14]. After all, both procyclical and countercyclical fiscal policies require rigorous review and implementation. This research is an investigation of China's government debts, including both the central and the local. Methods such as K-means clustering are used on real datasets to obtain the findings.
Following are the major contributions of the research conducted in this paper: (1) we evaluate the debt risks for both the central government and the local governments of China; (2) the methods of evaluation, including K-means clustering, are implemented over real datasets to obtain the findings; (3) we conclude that the central government debt of China is on the edge of risk, while the risk of local government debt is already in danger; (4) the local government risk is 50% higher than the central government's risk, and the local government debts of China follow a pattern of geographical distribution. The structure of the remaining part of this paper is as follows. Section 2 describes the details of the research methodology. In Section 3, we discuss the outcomes of our results, which are based on a clustering method, that is, the K-means clustering approach. Section 4 describes our research findings. Finally, Section 5 concludes this paper along with several directions for future research. Research Methodology Various risk indicators are used to reach an accurate and precise evaluation. Below we describe the different indicators for both the central and the local governments, how these indicators are computed, and what kind of risk they reflect. Finally, we briefly describe the K-means clustering technique. Central Government Indicators. The central government debt risk is divided into 6 risk indicators (R11 to R16) to reach an accurate and precise evaluation. Below we describe how these indicators are organized and what kind of risk they reflect; a small computational sketch follows. R11 is calculated by dividing the fiscal deficit by GDP, which evaluates the fiscal deficit's share in the overall economic activity of a year. That can also be interpreted as how much a nation's economy relies on the fiscal deficit. For this indicator, a lower value means a lower risk. R12 is the bond issuing volume divided by fiscal expenditure, evaluating how much a government's expenditure relies on borrowing. Again, lower is better. Usually, the benchmark of R12 is set to 25%. However, considering that the bonds of China's local governments are endorsed by the central government (the market and everybody strongly believe so), the central government's benchmark for the bond issuing volume to fiscal expenditure ratio R12 is set to 15%. For local governments, the same risk indicator will be evaluated using R12. R13 checks the pressure from debt service by dividing the debt service by fiscal income. The higher the R13, the higher the risk, because the government must use a larger share of its income to pay debts and interest. R14 is constructed in the same manner as R11; it is the ratio of bond issuing volume to GDP, which represents how much the running of the economy relies on government borrowing in a year. The lower, the better, as well. According to the rule of thumb, the benchmark line is set at 3%. There is no strict theoretical explanation for that number. Researchers and studies choose the "3%", which could be influenced by the "Treaty of Maastricht" from Europe. The Treaty established economic and fiscal standards for those countries wishing to join the European Union (EU). For instance, a country cannot have a high inflation rate or a fiscal deficit higher than 3% of its GDP. Although China has enacted proactive fiscal policies since 2008, its R14 has been well controlled under 3% (the benchmark was breached only twice). Meanwhile, many major nations have rates higher than 3%.
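A minimal sketch of how these ratio indicators can be computed from raw fiscal aggregates. The function and the figures below are illustrative assumptions, not the paper's data; all monetary inputs are assumed to be in the same unit, and R15/R16, which are introduced in the next paragraphs, follow the same pattern and are included for completeness.

```python
def debt_indicators(gdp, fiscal_deficit, fiscal_expenditure, fiscal_income,
                    bond_issuance, debt_service, deposits_outstanding, debt_balance):
    """Compute the ratio indicators R11-R16 described in the text."""
    return {
        "R11": fiscal_deficit / gdp,                   # deficit share of GDP
        "R12": bond_issuance / fiscal_expenditure,     # spending reliance on borrowing
        "R13": debt_service / fiscal_income,           # debt-service pressure
        "R14": bond_issuance / gdp,                    # borrowing share of GDP
        "R15": bond_issuance / deposits_outstanding,   # funding availability
        "R16": debt_balance / gdp,                     # debt stock to GDP
    }

# Illustrative figures in trillion Yuan (invented, not the paper's data)
print(debt_indicators(gdp=100, fiscal_deficit=3.8, fiscal_expenditure=24,
                      fiscal_income=19, bond_issuance=4, debt_service=2,
                      deposits_outstanding=200, debt_balance=17))
```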
Considering the impact of the COVID-19 pandemic, many nations rely on borrowing much more than before. The benchmark of R14 nevertheless sticks to 3%. R15 is constructed by dividing the bond issuing volume by deposits outstanding. A government bond is a kind of borrowing; thus, creditors who provide cash funding are required for the equation. Deposits outstanding is the money that can purchase bonds. A low R15 ratio indicates that more money is available to make the purchase. Though interest rates can evaluate the level of money shortage, their rapid fluctuations make them difficult to use as a reliable indicator; interest rates are influenced by many other markets, such as stocks and real estate. The bond issuing volume to deposits outstanding ratio is a more objective indicator of the risk of borrowing. Furthermore, R16 is a crucial indicator measuring the balance of a nation's debt against its GDP. For developed countries, R16 currently stands near 100% or even higher. For instance, since Japan's bubble economy burst in the late 1980s, the Japanese government has relied heavily on borrowing (yet the stimulation effect was not so satisfying). GDP per capita stood still for a long while; thus the saying of "the lost decade/decades" arose. Nowadays, the debt to GDP ratio of Japan is larger than 270%, which reflects that R16 is a significant indicator (refer to Table 1 for the various indicators). The debt to GDP ratio R16 is also an important fiscal criterion for joining the EU, with the benchmark set at 60%. In this research, the benchmark is lifted to 70%, because nations around the world have higher debt to GDP ratios than a decade ago. For instance, the United States now reaches 128%. Similarly, the foreign debt to GDP ratio of Japan is over 92%. Traditional developed countries in Europe also suffer from high debt levels: Germany 78%, the Netherlands 75%, the UK 102%, France 114%, Spain 118%, Italy 160%, Belgium 122%, and Greece (which has been criticized a lot by EU countries) 213%. Meanwhile, Ireland, a promising country that attracts many foreign companies to set up their headquarters, also has a high debt ratio of 90%. It can be concluded that the benchmark set by the EU has been breached by almost all EU nations. Therefore, in this research, we lift the benchmark value of R16 to 70%, which will not underestimate the potential risk for China, which has a much lower debt ratio. Considering that the international financial market has been functioning well so far, portfolios will invest more in Chinese government bonds while others are too risky. The central government debt risk indicators are shown in Table 1. Local Government Indicators. The indicators introduced above in Section 2.1 sum up the central government debt risk evaluation; they are organized into Table 1. For the local government debt risk, the indicators are constructed in the same manner as in the central government evaluation. R21 to R24 are identical to R11 to R14, while R25 is constructed in the same way as R16. They are highlighted in Table 2. As shown in Tables 1 and 2, all indicators have their own weights. These were calculated by the CRITIC method (Criteria Importance Through Intercriteria Correlation), which can efficiently handle the issue of indicators sharing the same elements. R11, R14, and R16 all contain the element of GDP; thus GDP has influence on the values of these indicators. That is why CRITIC was introduced: it can measure information entropies (while evaluating the level of correlations) and calculate the redundancy of indicators.
So, when a few indicators are correlated at a certain level (making them provide less information), their weights will be trimmed in the CRITIC method. Again, R11, R14, and R16 all contain the element of GDP, but they also evaluate the fiscal deficit, bond issuing volume, and debt balance, which implies that these indicators cannot be further simplified. By introducing CRITIC, the problem of correlated indicators can be balanced, and the final weights can be well aligned with the objective. Below we describe how the weights are calculated by the CRITIC method [15]. The CRITIC Method. Say there are $n$ samples, and each sample is regulated by $p$ indicators; this can be denoted as a matrix $A = (u_{ij})_{n \times p}$ in equation (1), where $u_{ij}$ is the value of indicator $j$ for sample $i$. All indicators need to be nondimensionalized by max-best or min-best normalization (the larger the better, or the smaller the better). Considering that this research focuses on risk evaluation with the indicators introduced in Tables 1 and 2, the indicator values are handled with min-best normalization by equation (2), $u_{ij}' = \frac{\max_i u_{ij} - u_{ij}}{\max_i u_{ij} - \min_i u_{ij}}$. For simplicity, $u_{ij}'$ after equation (2) will still be denoted as $u_{ij}$. Then, the standard deviations of the indicators $S_j$ are evaluated by equation (3), $S_j = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(u_{ij} - \bar{u}_j)^2}$, where $\bar{u}_j = \frac{1}{n}\sum_{i=1}^{n} u_{ij}$. The conflicts of the indicators $R_j$ are calculated by equation (4), $R_j = \sum_{i=1}^{p}(1 - r_{ij})$, where $r_{ij}$ is the correlation coefficient between indicators $i$ and $j$. The larger the $r_{ij}$, the more redundancy in indicators $i$ and $j$, which means they provide less information, and their weight should be lower among the overall indicators. The information entropies $C_j$ are calculated by equation (5), $C_j = S_j \times R_j$, and the final weights of the indicators are generated by equation (6), $w_j = C_j / \sum_{j=1}^{p} C_j$.
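Before moving on to the clustering step, here is a compact NumPy sketch of the CRITIC weighting in equations (2)-(6). The input matrix is hypothetical (rows are samples, columns are indicators), and all indicators are treated as min-best, as in this risk setting.

```python
import numpy as np

def critic_weights(U: np.ndarray) -> np.ndarray:
    """CRITIC weights for a samples-by-indicators matrix, all min-best."""
    # Equation (2): min-best normalization
    u = (U.max(axis=0) - U) / (U.max(axis=0) - U.min(axis=0))
    s = u.std(axis=0)                      # equation (3): standard deviations
    r = np.corrcoef(u, rowvar=False)       # pairwise correlation coefficients
    conflict = (1.0 - r).sum(axis=0)       # equation (4): conflict measure
    c = s * conflict                       # equation (5): information content
    return c / c.sum()                     # equation (6): normalized weights

# Invented indicator values for three years and three indicators
U = np.array([[0.028, 0.15, 0.10],
              [0.030, 0.18, 0.12],
              [0.025, 0.22, 0.17]])
print(critic_weights(U))  # weights sum to 1
```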
K-Means Clustering Method. Before entering the discussion of the actual results, the K-means analysis method needs to be introduced for the risk evaluation of China's local governments, to reach a detailed investigation of the risk differences among Chinese provinces. There are two advantages of applying the K-means clustering approach. First, it is an efficient clustering algorithm. Second, when the clusters are highly dense with nonsignificant differences, K-means can produce good clustering results. Below we describe how the clusters are determined by the K-means method [16]. For a dataset $X = \{x_1, x_2, x_3, \ldots, x_n\}$, there are $n$ $d$-dimensional data points, with $x_i \in \mathbb{R}^d$. The goal is to partition the data into $K$ clusters. First, the algorithm divides the dataset into $K$ subsets, $C = \{c_i, i = 1, 2, \ldots, K\}$, where each subset has a clustering center $u_i$. Then, $J(c_k)$ is the sum of the distances of the data points from the center, defined in equation (7) using the Euclidean distance, $J(c_k) = \sum_{x_i \in c_k} \lVert x_i - u_k \rVert^2$. The overall goal is to minimize the sum of all $J(c_k)$ in equation (8), $J(C) = \sum_{k=1}^{K} J(c_k)$, and the mean square error $E$ is used for evaluation in equation (9), $E = \sum_{k=1}^{K} \sum_{p \in c_k} \lVert p - m_k \rVert^2$, where $p$ is a data point and $m_k$ is the clustering center of cluster $c_k$. The actual K-means clustering algorithm starts with a dataset of $n$ points. Then, $k$ points are randomly chosen as clustering centers $m_i$ $(i = 1, 2, 3, \ldots, k)$, followed by calculating the distance $d(p, m_i)$ of each point $p$ to each center, defined in equation (10) as $d(i, j) = \sqrt{\sum_{t=1}^{n}(x_{it} - x_{jt})^2}$, where $i = (x_{i1}, x_{i2}, \ldots, x_{in})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jn})$ are $n$-dimensional data points. For each point $p$, the distance to each cluster is calculated, and the minimum distance $d(p, m_i)$ decides which cluster $p$ belongs to. After all points have been evaluated, the cluster centers $m_i$ are recalculated by equation (11), $m_k = \frac{1}{N_k}\sum_{p \in c_k} p$, where $m_k$ is the center of the $k$th cluster and $N_k$ is the number of data points in cluster $k$. Data points are assigned to the most similar clusters. The process iterates until $E$ of equation (9) ceases to decrease, which indicates that an optimized clustering has been achieved. Results and Outcomes For the central government's debt risk, the R11 to R16 indicators are calculated and organized in Table 3. These outcomes are based on data from the National Bureau of Statistics and the Ministry of Finance, China. Take R16 as an example: it is the ratio of the balance of national debt to GDP, and the benchmark is 70%. The R16 value of 2019, for example, is 0.17. Thus, the R16 risk is 0.17/0.7 = 0.24, which is not an abrupt threat; the corresponding weighted risk reported is 0.109. Regarding the risk values of the indicators (before weighting), a value of 0.8 to 1 can be interpreted as the risk represented by the indicator being exposed to danger. A risk value between 0.5 and 0.8 can be regarded as a median threat, meaning that the risk is about to cause problems and needs to be handled with careful measures. A value between 0.2 and 0.5 is a minor threat, which requires some sort of intervention. Finally, 0 to 0.2 can be considered risk free, and no immediate actions are required except observation and monitoring. The overall risk of the national government is interpreted in the same fashion as the risk indicators above. The values of the national debt's risk indicators are presented in Table 3, and the overall national debt risk values (and weighted risk indicators) are organized in Table 4 (source: data organized from the National Bureau of Statistics of China). For the local government debt risks, each province's indicators from R21 to R25 are calculated in the same manner as in the national debt risk analysis. However, the results of every province would be too much to discuss and present here. Therefore, only the overall debt risks (from the years 2015 to 2019) of each province are demonstrated in Table 5. According to their risk levels, the provinces are clustered by the K-means method (with K = 5) to reach a better understanding of the geographical distribution pattern of the government debt risk. One more thing about the data on local government debts needs to be explained: the data start from 2015 rather than earlier. This is because the local government debt was previously not in the form of local government bonds, which makes it difficult to estimate the overall balance of debt. The data from 2015 onwards are more precise, because by that time the local government debts (and previous debts of all kinds of forms) had already been reviewed and made available in the form of bonds. The debt risks of the local governments and their clustering, using the K-means method, are shown in Table 6.
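The provincial clustering just described (K = 5 on the overall risk values) can be reproduced with a standard K-means implementation; the sketch below uses scikit-learn on invented one-dimensional risk scores, not the actual Table 5 values.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical overall debt risk values for ten provinces, shape (n, 1)
risk = np.array([[0.8], [0.9], [1.3], [1.4], [2.1], [2.3], [2.9],
                 [1.0], [1.6], [2.6]])

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(risk)
for label in range(5):
    members = risk[km.labels_ == label].ravel()
    center = km.cluster_centers_[label, 0]
    print(f"cluster {label}: center = {center:.2f}, members = {members}")
```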
Discussion According to the results, as shown in Table 4, the overall national debt risk shows a clear increase in 2015. From 2010 to 2014, the risk stabilized in the interval from 0.4 to 0.5. Within two years, the risk value doubled and reached 1, which means exposure to immediate danger. The weighted values of the national debt's major indicators are shown in Figure 1. The changes in the risk indicator values reveal the core threats. The major contributor to the national debt risk is R12. It rises from 0.1 to 0.3 between 2014 and 2016, which indicates that the government relies much more on debt to maintain daily functioning. Also, R11 shares the same pattern, indicating that the fiscal deficit to GDP ratio grew rapidly. However, R16 remains at a low and safe level. The balance of debt to GDP stays around 30%, providing a certain room for future borrowing. Compared with major developed countries' R16 (more than 100%), China has a much lower risk, which would attract domestic and foreign investors to Chinese government bonds and may buy more time for China to modify its debt structure. For the local governments of China, the risk is much more severe. Figure 2 shows the debt risks of all provinces according to the results in Table 5. It can be seen that most local governments are in debt risk danger, in particular those having a risk value larger than 1. The overall local government debt risk is the sum of all the provinces' weighted risk values, where the weights of the provinces are determined by their shares in the national GDP. The local governments' overall debt risk started at 1.3 in 2015; then, in 2016, it grew to a dangerous level of 1.8. It then dropped fast in 2017 and has stabilized at 1.5 in recent years. However, the risk shows a strong upward trend, indicating that the local governments are in debt trouble, and it is difficult to turn the flow. It may require more investment in the infrastructure of poor provinces to stimulate the economy, but that means more funding will be needed, especially when most provinces are already in deficit and rely on borrowing. For the provinces of the first cluster in Table 6, the average risk values are around 1, which indicates that they are already exposed to danger. For them, further monitoring is required. Following is the second cluster, with an average risk value of 1.3. These provinces are in debt trouble, and only strong financial and economic actions on their part could turn the flow. In particular, for Tianjin, the value jumped up fast from 0.7 to 1.9 in less than five years, and the situation could get worse. Then, clusters 3 and 4 have rather high average risk values of more than 2. It can be considered that these provinces cannot drive themselves out of the debt mire on their own (neither by further borrowing nor by economic measures), especially Guizhou, Qinghai, and Tibet. However, the situation is not that severe, because these provinces have much lower populations compared to other regions, making bailout actions from the central government possible and affordable. Meanwhile, Guizhou's debt issue is improving because Guizhou uses its geographical advantages (high altitude with low temperatures) well to attract investment from the cloud-computing industry. The future finances of Guizhou would perform better with more tax income. The debt risk of the local governments shows a strong geographical pattern. The provinces with the lowest risk lie on the east coast of China, while the provinces in the second cluster (except Tianjin) lie in the middle region. Similarly, the provinces in the third cluster (except Guangxi) are all in the west and north-east regions. The debt risk of the local governments becomes lower from the west to the east of China. The pattern arises because the provinces on the coast have enjoyed a long term of investment due to their convenience of transportation and supply chains, which booms the economy. Moreover, the study on China's population mobility data also records this trend [17].
Better economic efficiency allows government borrowing to turn into high-quality future income. This may ease the accumulation of local government debts. The provinces in the middle and the west have not had the same financial boost as the eastern provinces in the past. This is similar to the conclusion of the research on less developed countries in the EU, which suffer more from the debt problem [18]. Yet this is not a reason to stop or cut down the borrowing of the relatively poor provinces. The structural economic problem needs to be tackled to ensure balanced development around the country. One important problem is the divergence of wages across the provinces. Table 5 lists the wage changes compared to the average changes. For instance, Zhejiang's wage growth compared to the average change is −0.71, which indicates that the wage in Zhejiang province grows more slowly than in other provinces. A slower wage growth provides a comparative advantage in economic growth, which is reflected in the potential risks of the local government debts; it means better job opportunities and population inflow to the province. The results of the other provinces in the table can be interpreted in the same manner (source: data organized from the National Bureau of Statistics of China). For the provinces having the lowest debt risks (cluster 1), wage growth is slower, with an average of −0.315, while the figure for the provinces in clusters 2 and 3 is around 0.05, which is slightly higher than the average level. The provinces in cluster 4 have a faster wage growth of 0.175. The wage divergence of the provinces correlates with the debt risks. Mobility and relocation of the population result in wage divergence, which eventually reshapes the economic performance of different regions. To ensure balanced development and to control the debt risk, the factors of wage and population need to be investigated further. Conclusions and Future Work This research investigates the current situation of the debt risk problem, covering both the national government and the local governments of China. At this moment, COVID-19 has been spreading over the world for two years, strongly disrupting the world economy. The IMF and other institutions are concerned about the potential impacts of a sovereign debt crisis and have raised the warning line of the debt to GDP ratio to 90%, which we believe may ease or conciliate the market to prevent panic selling of government bonds. Assuming the world's economy will not reach a pleasing level in the short term, sovereign debt risk requires close attention to prevent an upcoming potential debt crisis like the European one. We evaluated the risk of the central government of China, showing that the risk has an increasing trend and reaches the critical level. However, China carried out many direct fiscal expenditure cuts, which is effective in covering the risk. Moreover, the debt to GDP ratio is still at a low level compared to developed countries, which buys more time for China to deal with the debt problems. We estimate that the national debt is not facing an immediate threat or risk. According to the general opinion, the debt to GDP ratio does not have to be kept at a low level [19]. For the local governments of China, our evaluations indicated that almost all provinces breached the critical level of debt risk. A few well-developed provinces are free from urgent risk, while the others rely on borrowing to maintain their debt services. Furthermore, they depend on Beijing funds, making them less willing to improve their fiscal situations [20,21].
Due to the fiscal transfer payment system, the Beijing fund is considered a rich resource, and the local bodies get used to "sleeping on" it, which increases economic welfare [22]. As the risk evaluation indicated, the overall debt will accumulate until the central government can no longer cover it. Assuming that the central government is in a good position regarding borrowing, the debt risks of the local bodies can be handled with the right moves. In 2021, the central bank of China tightened the money supply to inefficient industries, which is a good start to turn the flow. We observed that the debt risk divergence of the local governments matches the wage divergence, much like in the studies on the European debt crisis. Future research will focus on concrete actions for China's balanced development policy. By filling the economic gap between provinces, the population mobility situation and wage divergence would certainly change, which could alter the trend of the local debt risks. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this study.
Periprocedural Edoxaban Management and Clinical Outcomes in Patients Undergoing Transcatheter Cardiovascular Procedures in the EMIT-AF/VTE Program Annually, 10% of patients with atrial fibrillation (AF) or venous thromboembolism (VTE) treated with non-vitamin K oral anticoagulants undergo diagnostic or therapeutic procedures. This subanalysis of the multicenter, prospective, observational Edoxaban Management in Diagnostic and Therapeutic Procedures real-world registry included patients in Europe and Asia with AF or VTE who underwent transcatheter cardiovascular (CV) procedures. Edoxaban interruption and clinical outcomes were assessed for all arterial or venous access procedures and stratified by bleeding risk. Overall, 2695 procedures were reported; 755 (28.0%) were transcatheter CV procedures, of which 373 (49.4%) were arterial access and 382 (50.6%) were venous access procedures. Patients with arterial versus venous access procedures had significantly higher bleeding and stroke and thromboembolism risk scores (P < 0.0001 for both) and underwent procedures that were more frequently classified as higher European Heart Rhythm Association bleeding risk. Edoxaban was interrupted in 59.5% (222) of arterial versus 42.4% (162) of venous access procedures, mostly either only preprocedurally or both pre- and postprocedurally. The combined incidence of clinically relevant ischemic or bleeding events and deaths was low (0.8 events/100 procedures). This subanalysis showed that, while edoxaban was interrupted in approximately half of all interventions, ischemic events and major bleeding were low, suggesting that transcatheter CV procedures can be performed safely in high-risk patients with AF or VTE. Patient and procedural factors should be considered to personalize the decision on edoxaban management around the time of a transcatheter CV procedure. Clinical trial registration number: NCT02950168, NCT02951039 Introduction The majority of patients with AF require long-term treatment with antithrombotic agents to reduce the risk of stroke [6][7][8][9]. Of the millions of patients who are chronically treated with oral anticoagulants, approximately 10% require diagnostic or therapeutic invasive procedures annually [10,11]. For these patients, interruption of oral anticoagulation can raise the risk of ischemic events, particularly in those with a high CHA2DS2-VASc (congestive heart failure, hypertension, age ≥75 years [doubled], diabetes, stroke [doubled], vascular disease, age 65-74 years, and sex category [female]) score [12]. Furthermore, the risk of major bleeding (MB) is elevated in patients with predisposing factors and in procedures with a high MB risk, as reflected by scores and classifications such as the one devised by the European Heart Rhythm Association (EHRA) [13]. This poses the questions of whether, when, for how long, and for which procedures to interrupt chronic oral anticoagulation therapy. Although these questions are frequently asked in the clinical setting, very limited evidence-based recommendations exist to guide periprocedural management of anticoagulants. Among the most frequently performed procedures are those requiring arterial and/or venous access, such as coronary angiographies, percutaneous coronary interventions, valvular procedures, and ablations [10,14,15].
10,14,15 No clear evidence exists for the optimal time to interrupt or reinstate oral anticoagulation, balancing the reduction of MB against the potential higher risk of ischemic events, notably, thromboembolic stroke. It is also unclear which criteria physicians use to determine their periprocedural oral anticoagulation strategy and whether they would decide to use other antithrombotic agents during that interruption. Therefore, the above questions were addressed by investigating the real-world clinical practice of edoxaban management in a subgroup of the Edoxaban Management in Diagnostic and Therapeutic Procedures real-world registry in patients with AF or VTE (EMIT-AF/VTE) 14 with a focus on effectiveness and safety parameters. The objective of the present work was to analyze the periprocedural management of edoxaban and clinical outcomes in patients with AF or VTE undergoing unselected diagnostic or therapeutic transcatheter cardiovascular (CV) procedures in daily practice. Study Design EMIT-AF/VTE (NCT02950168, NCT02951039) is a multicenter, prospective, observational program conducted in Europe and Asia in accordance with the Declaration of Helsinki and with the approval of local Institutional Review Boards. All participants provided written informed consent prior to enrollment. Periprocedural management of edoxaban was at the discretion of the investigators. The protocol design and overall results of the Global EMIT-AF/VTE program were previously published. 14,16 Patient Recruitment Enrollment commenced in December 2016 and was completed in April 2020 for the countries included in the Global EMIT-AF/VTE program. Patients were recruited from Belgium, Germany, Italy, the Netherlands, Portugal, South Korea, Spain, Taiwan, Thailand, and the United Kingdom. Eligible patients were adults with AF or VTE who were treated with edoxaban according to the local labels and were not enrolled concurrently in any interventional study. This subanalysis of the Global EMIT-AF/VTE program included patients who underwent transcatheter CV procedures. Outcome Parameters The time and duration of edoxaban interruption relative to the procedure, as reported by the physician, and the dosing of edoxaban were recorded. Data on edoxaban interruption were collected from 5 days prior to the procedure to 30 days after; ischemic and bleeding events were collected from the day of the procedure to 30 days thereafter. Events were defined in accordance with the International Society on Thrombosis and Haemostasis. 17 The primary outcome parameter was the composite of ischemic stroke, transient ischemic attack (TIA), acute coronary syndrome (ACS), any systemic embolic event (SEE), and the rate of MB. Secondary outcomes included ACS, ischemic stroke, TIA, SEE, deep vein thrombosis, pulmonary embolism, as well as CV and all-cause mortality. Other outcome parameters included the incidences of clinically relevant nonmajor bleeding (CRNMB), hemorrhagic stroke, and all strokes combined. All incidences of MB, CRNMB, ACS, and acute thromboembolic events were reviewed and unanimously adjudicated by the Steering Committee. Bleeding had to commence during or after the procedure to be classified as a procedural complication.
Observations The observation period of the study started 5 days before and ended 30 days after the procedure. Details of edoxaban treatment and clinical outcomes were documented for the 30-day period after each procedure. "No interruption" of edoxaban therapy was defined as edoxaban administration on each day of the observation period. Any interruption of edoxaban treatment was recorded as the number of days without administration of edoxaban. Any dose skipped before or on the day of the procedure was categorized as preprocedural. Heparin bridging was defined as any heparin use during the time period ranging from the day before the procedure until the day after the procedure. Details of edoxaban treatment, use of concurrent antiplatelet therapy (APT), type of diagnostic/therapeutic procedure, periprocedural EHRA bleeding risk, 13 modified HAS-BLED (hypertension, abnormal renal/liver function, stroke, bleeding history or predisposition, elderly, drugs/alcohol concomitantly), 18 and CHA2DS2-VASc scores were recorded at baseline. 12 Statistical Analysis Data quality checks were performed on a regular basis to ensure that reported data were accurate and complete and that the conduct of the study complied with the observational plan and regulatory requirements. 16 Binary, categorical, and ordinal parameters were summarized as absolute and percentage numbers. Numerical data were described by descriptive statistics. Comparisons of baseline demographics and clinical characteristics between patients with arterial versus venous procedures were performed using Fisher's exact test. Edoxaban interruption, clinical outcomes, and APT were summarized for all vascular access procedures, by access route (arterial or venous) and by EHRA procedural bleeding risk. The frequency and duration of edoxaban interruption were assessed at the following time points: preprocedure only, postprocedure only, and both pre- and postprocedure. Clinical event rates are presented as the number of events per 100 procedures. Statistical analyses were performed using SAS® version 9.3 or higher (SAS Institute, Cary, NC, USA).
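As an aside, the two summary statistics named above are straightforward to reproduce. The Python sketch below uses hypothetical counts (the registry's per-cell tabulations are not given here); only the 6-events-in-755-procedures figure echoes the text's ~0.8 events/100 procedures.

```python
# Minimal sketch: Fisher's exact test on a hypothetical 2x2 table
# (rows = arterial/venous access, columns = characteristic yes/no),
# plus the events-per-100-procedures rate used throughout the results.
from scipy.stats import fisher_exact

table = [[120, 253],   # arterial (hypothetical counts)
         [60, 322]]    # venous (hypothetical counts)
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.4g}")

def events_per_100(n_events: int, n_procedures: int) -> float:
    """Clinical event rate expressed as events per 100 procedures."""
    return 100.0 * n_events / n_procedures

print(f"{events_per_100(6, 755):.1f} events/100 procedures")  # ~0.8
```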
Results To date in the Global EMIT-AF/VTE program, 2695 procedures in 1952 patients were reported, of which 755 were transcatheter CV procedures, 373 (49.4%) through arterial and 382 (50.6%) via venous access (Supplemental Figure S1). Most arterial access procedures were coronary angiographies (66.4%); venous access procedures were primarily electrophysiologic interventions or studies (72.5%; Figure 1). Most patients (657 [86.9%]) underwent a single CV procedure; 43 patients (5.6%) underwent 2 procedures, whereas 3 patients (0.4%) underwent 3 or more. At baseline, almost all CV risk factors (ie, arterial hypertension, coronary heart disease, diabetes, or heart failure) were considerably more frequent or more pronounced in patients with procedures through arterial versus venous access routes (P ≤ 0.0001 for all). When combined, this led to significantly higher mean ± standard deviation CHA2DS2-VASc (3.4 ± 1.5 vs 2.2 ± 1.5) and modified HAS-BLED scores (1.9 ± 1.0 vs 1.3 ± 0.9; P < 0.0001 for both) in patients who underwent arterial compared with venous access procedures (Table 1). Arterial access procedures were more frequently classified in the low or high bleeding risk categories than venous access procedures (53.6% vs 7.4%). Patients who underwent procedures with a higher EHRA bleeding risk level had higher mean CHA2DS2-VASc and HAS-BLED scores compared with patients who underwent procedures with a lower bleeding risk level (low or minor risk). For the overall population, CHA2DS2-VASc and HAS-BLED scores showed a significant (P < 0.0001), though moderate, correlation (r = 0.46). More patients with arterial than venous access procedures fulfilled at least one edoxaban dose reduction criterion and, subsequently, received edoxaban 30 mg once daily (P = 0.0003). Edoxaban was interrupted in approximately half of all procedures, with a somewhat higher percentage of interruptions occurring for arterial (222/373 [59.5%]) versus venous access procedures (162/382 [42.4%]). In patients with a high bleeding and/or stroke risk, as reflected by patient characteristics (ie, CHA2DS2-VASc and HAS-BLED scores) and the EHRA procedural bleeding risk classification (ie, more arterial procedures were classified as high-risk procedures), edoxaban was interrupted more frequently and for a longer period (Figures 2 and 3). As a consistent pattern between arterial and venous access procedures, edoxaban was interrupted primarily only preprocedurally (253/754 [33.6%]) or both pre- and postprocedurally (98/754 [13.0%]), whereas postprocedural-only interruption was rare (32/754 [4.2%]; Supplemental Table S1). One patient had 1 arterial access procedure and 1 venous access procedure on the same day. These procedures were counted once each in arterial and venous procedures; however, they are counted only once in all vascular procedures. APT was more frequent for arterial versus venous procedures on the day of the procedure and during follow-up (24% vs <3%). During the observation period, APT use spiked around the day of the procedure and was higher post- than preprocedurally. APT was more common in arterial and venous procedures for low (41.8% and 24.1%) versus minor (30.5% and 5.1%) bleeding risk (Supplemental Figure S2). For both arterial and venous access procedures, single APT was more common than dual APT at all time points assessed (Supplemental Figure S3).
Clinically relevant ischemic or bleeding events occurred in 0.8 per 100 procedures (Table 2). Ischemic and bleeding events occurred more frequently in venous access procedures compared with arterial access procedures. In aggregate, while patients were on edoxaban (<72 h since the last dose), 2 MB events and 1 ischemic stroke occurred. One of the MB events occurred during the observation period and was related to a polypectomy done during that time frame. No MB or ischemic events were reported in patients off edoxaban. Six clinically relevant events (ie, 2 MB, 2 CRNMB, 1 ischemic stroke, and 1 all-cause death) occurred on Days 0 to 8 of the procedure (Table 3). The only death occurred on Day 19 after the procedure, with a very unlikely causal relation to the index angiography procedure. Discussion In this large, international, prospective, real-world subanalysis of the Global EMIT-AF/VTE program in patients undergoing transcatheter CV procedures, physicians interrupted edoxaban in about half of all interventions. Interruption was more frequent in higher bleeding risk procedures and in patients with a higher bleeding risk score. APT was more common in arterial versus venous procedures and was most often used on the day of the procedure or postprocedurally. Despite relatively high ischemic and bleeding risks, patients undergoing transcatheter CV procedures had a very low risk of ischemic and MB events or deaths over an observation period of 30 days, indicating adequate periprocedural management of edoxaban in these patients in routine clinical practice. Understanding which factors were involved in the choice and timing of interruption can further help to improve periprocedural management. Factors associated with a higher likelihood of interruption were higher risks for stroke and bleeding based on patient characteristics (CHA2DS2-VASc and HAS-BLED scores) and procedural risk (EHRA classification). 13 Arterial versus venous access procedures were more likely to be interrupted. This finding may have been due partly to a greater risk of ischemia and MB in patients undergoing arterial access procedures. Arterial access procedures, such as coronary angiography or percutaneous coronary interventions, may carry an increased bleeding risk in patients on chronic oral anticoagulation. 19 Procedures in patients with higher risks for ischemic events, as described by the CHA2DS2-VASc score, were more frequently interrupted in this registry compared with procedures in patients with lower ischemic risks. At first glance, this seems counterintuitive. However, as the CHA2DS2-VASc and HAS-BLED scores show a significant correlation in this registry, it appears reasonable to hypothesize that the higher ischemic risk itself was unlikely to be the reason for edoxaban interruption; rather, it was a reflection of a somewhat higher risk for MB. 22,23 When physicians interrupted anticoagulant treatment, they did so before or on the day of the procedure in the majority (92%) of cases, whereas postprocedural-only interruptions were rare (8%). These latter cases may have been more complex procedures in which physicians deemed the risk of subsequent bleeding higher. Major bleeding was very rare. The 2 MB events (0.26% for the entire cohort) occurred in 1 patient on the day following a transcatheter aortic valve implantation (TAVI) but before a scheduled colonoscopy with polypectomy, and in 1 patient on the day of an electrophysiologic study. There were no clinical sequelae from these 2 MB or the 2 CRNMB events.
Although the number of adverse events was low, and despite a higher frequency of anticoagulation interruption, patients who underwent arterial access procedures had a higher rate of MB or CRNMB events compared with those who underwent venous access route procedures (0.8 vs 0.3 events per 100 procedures). More patients with arterial (24%) than venous (<3%) access procedures were administered oral APT during the observation period, driven to a large extent by percutaneous coronary interventions. However, excluding the patient who underwent TAVI, no patient with an ischemic, MB, or CRNMB event was on oral APT. One death of unknown cause occurred on Day 19 after an uneventful coronary angiography and was considered very unlikely to be a direct consequence of the procedure. To the authors' knowledge, this is the first analysis assessing periprocedural management and outcomes for patients with AF or VTE receiving NOACs who underwent transcatheter CV procedures in real-world clinical practice. Though tempting, the juxtaposition of the current study with a subanalysis of the Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48 (ENGAGE AF-TIMI 48) randomized, controlled trial that included a wide variety of surgical procedures warrants caution. 24 Those patients had a higher number of and more severe CV and bleeding risk factors on the one hand, but longer (up to 10 days) interruptions of edoxaban on the other. 24 Ischemic and MB event rates were higher in the ENGAGE AF-TIMI 48 trial than in the current EMIT subanalysis. However, only 10% were percutaneous coronary interventions, and the EHRA classification was not reported. In the Global EMIT registry, which included all types of diagnostic procedures, the overall rates of thromboembolic, CRNMB, and MB events were low (0.6%, 0.7%, and 0.4%, respectively). 14 The Perioperative Anticoagulation Use for Surgery Evaluation (PAUSE) study assessed perioperative management of apixaban, dabigatran, and rivaroxaban in patients with AF undergoing 13 different types of elective surgery or procedures and found that the overall 30-day perioperative rates of MB were 1.35% in the apixaban cohort, 0.90% in the dabigatran cohort, and 1.85% in the rivaroxaban cohort. 11 The overall rates of CRNMB and minor bleeding were 1.67% and 4.3% with apixaban, 1.95% and 5.69% with dabigatran, and 2.4% and 5.73% with rivaroxaban. 11 In comparison, the rates of MB, CRNMB, and minor bleeding in patients receiving edoxaban in the current study were 0.26%, 0.26%, and 1.06%, respectively. The lower MB rate in the current study may be due to the inclusion of a more specific population, in which only patients undergoing transcatheter CV procedures were included, whereas the PAUSE study included primarily cardiothoracic and gastrointestinal procedures. 11 Additionally, 33.5% of patients in the PAUSE study underwent high bleeding risk procedures compared with 0.4% of patients in the current analysis. Notably, the PAUSE study used a predefined interruption protocol for surgeries with different bleeding risks, while in the current analysis, periprocedural management of edoxaban was at the discretion of the physicians. 11
Two main limitations of this study are the lack of a comparator arm and the investigation of only one NOAC. Given the low event rates, either a larger registry or a randomized trial would be desirable, along with an attempt to differentiate between different oral anticoagulants. However, these approaches require a much greater sample size and longer duration, which may not be feasible from a timing and economic perspective. 25 For the same reasons, no attempt was undertaken to evaluate the impact of geographical location or ethnicity. Furthermore, edoxaban management was not standardized, as it was at the discretion of the investigator. To account for the challenges of data collection, patients were provided with memory aids to support data recall. Data elements were reviewed at the patient level, and all incidents of MB, CRNMB, ACS, and acute thromboembolic events were centrally adjudicated. Conclusions In patients on chronic oral anticoagulation with edoxaban, transcatheter CV procedures can be performed safely. Periprocedurally, a brief interruption of edoxaban in high-risk patients was not associated with high risks of ischemic events. Additionally, continuing edoxaban in selected patients was also not associated with a high risk of bleeding events. Patient and procedural factors should be considered to personalize the decision of edoxaban management around the time of a transcatheter CV procedure.
Figure 2. Type (A) and mean ± SD duration (B) of edoxaban interruption by EHRA risk score. a One patient had 1 arterial and 1 venous access procedure on the same day. These procedures were counted once each in arterial and venous procedures, respectively; however, they are counted only once in all vascular procedures. Mean ± SD interruption duration excludes procedures without edoxaban interruption. EHRA, European Heart Rhythm Association; NA, not applicable; SD, standard deviation.
Table 1 footnotes: Data shown as n (%) unless otherwise noted. Among all patients at baseline, missingness ("unknown" or no data reported) was <3% for all individual variables reported in the baseline characteristics table except for CrCl (12.5%) and the modified HAS-BLED score (30.7%). In the latter case, missing values were primarily attributable to missingness in alcohol abuse (22.9%), a subcomponent of the HAS-BLED score. a n = 0 patients with venous access procedures were classified as high EHRA risk. b P-values were calculated comparing all arterial versus all venous procedures using Fisher's exact test. c Baseline data for patients who underwent multiple procedures with the same vascular access were counted once overall and in the highest EHRA risk group. Baseline data for 10 patients who underwent multiple procedures with arterial and venous access were counted in both groups. d Age at time of enrollment. CHA2DS2-VASc, congestive heart failure, hypertension, age ≥75 years (doubled), diabetes, stroke (doubled), vascular disease, age 65-74 years, and sex category (female).
Table 2. Clinical Outcomes for Arterial and Venous Access Procedures. Data shown as n (events/number of procedures × 100). One patient had an arterial and a venous access procedure on the same day but did not experience a clinical event. Cause of death unknown. CRNMB, clinically relevant nonmajor bleeding; NSTEMI, non-ST-segment elevation myocardial infarction; STEMI, ST-segment elevation myocardial infarction. a International Society on Thrombosis and Haemostasis definition. b STEMI, NSTEMI, unstable angina.
2024-06-18T06:17:06.764Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "31254e5debcdd9707c86a51e49c87152fbafaad3", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "1a669cfb9f1b290b4ecc27935876cc14ce3a9ad4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222141069
pes2o/s2orc
v3-fos-license
Single-Pixel Pattern Recognition with Coherent Nonlinear Optics We propose and experimentally demonstrate a nonlinear-optics approach to pattern recognition with single-pixel imaging and a deep neural network. It employs mode-selective image up-conversion to project a raw image onto a set of coherent spatial modes, whereby its signature features are extracted nonlinear-optically. With 40 projection modes, the classification accuracy reaches a high value of 99.49% for the MNIST handwritten digit images, and up to 95.32% even when they are mixed with strong noise. Our experiment harnesses rich coherent processes in nonlinear optics for efficient machine learning, with potential applications in online classification of large-size images, fast lidar data analyses, complex pattern recognition, and so on. Recently, optical neural networks have emerged as distinct candidates for solving complex problems by leveraging their large data parallelism, high connectivity, low power consumption, and other advantages inherent in optical circuits and architectures [19-22]. In this pursuit, optical Fourier transformation, diffraction, interference, and spatial filtering have been utilized for optical pattern restoration and recognition [23-25], phase retrieval, and information processing [26,27]. While most developments use only linear optics [28-32], nonlinear optical effects have been shown to effectively assist the phase retrieval of ultrashort pulses [15,16], polaritonic neuromorphic computing [33], and the classification of ordered and disordered phases of the Ising model [34]. As nonlinear optics can realize even richer and more complex functions than its linear counterpart, nonlinear optical machine learning (NOML) promises another level of data processing capability and efficiency. In this Letter, we demonstrate a new nonlinear-optics paradigm for efficient and robust machine learning. It is based on mode-selective image conversion (MSIC) by spatially modulated sum-frequency generation, an exceptional nonlinear optical technique that we recently demonstrated for photon-efficient classification with super-resolution [35] and mode-selective detection through turbulent media [36,37]. MSIC is implemented by applying a spatially structured pump beam to drive the frequency conversion in a nonlinear crystal, so that only a single signal mode of certain prescribed phase coherence is converted efficiently. All other modes, even if they spatially, temporally, and spectrally overlap with the signal mode, are not converted or are converted with a much lower efficiency. It thus realizes a distinct tool to sort optical spatial modes according to their phase coherence signatures. Unlike linear optical approaches, MSIC does not involve any direct modulation of the signal, thus eliminating the otherwise inevitable modulation loss or noise [38]. Also, it allows much more flexible and capable operations by engineering the nonlinear dynamics in the crystal with the assistance of spatial dispersion [37]. Here, we incorporate MSIC and a deep neural network (DNN) for pattern recognition of high-resolution images. In this hybrid architecture, the information in each input image is first processed and extracted by MSIC, through frequency upconversion by pump beams in 40 different Laguerre-Gaussian (LG) modes. For each mode, the converted power is measured and used as the input to the DNN for subsequent machine learning.
In this design, MSIC pre-processes the input images, extracting information contained in both the amplitude and phase spatial profiles of the images, and condenses them into the upconversion efficiencies of 40 modes. It reduces a large amount of pixel-wise information to only 40 signatures, thus substantially downsizing the required DNN while enabling efficient processing of high-resolution images. As a benchmark, our experiment achieves an accuracy of 99.49% for recognizing the MNIST (Modified National Institute of Standards and Technology) handwritten digit images, in good agreement with our theoretical value of 99.83%. Even when strong noise is added to each image, whose resulting mean signal-to-noise ratio (SNR) is about −11.2 dB for the image, an accuracy of 95.32% can still be reached. The experimental setup is shown in Fig. 1. A mode-locked laser generates optical pulses with ∼300 fs full width at half maximum (FWHM) and a 50 MHz repetition rate. Two inline narrowband wavelength division multiplexers (WDMs, bandwidth ∼0.8 nm) select two separate wavelengths, one at 1558 nm as the pump and the other at 1545 nm as the signal. The two pulse trains are then each amplified using an Erbium-doped fiber amplifier. They are then aligned to be in horizontal polarization using free-space optics and expanded to a 2.8-mm beam waist (FWHM) for the pump and 2.6 mm for the signal. Then, the signal is directed to a reflective, liquid-crystal spatial light modulator (SLM1 in Fig. 1) at a 55° incidence angle, while the pump goes to another modulator of the same model (SLM2 in Fig. 1) but at a 50° angle [36]. The SLMs are Santec SLM-100 and have the same specs: pixel pitch ∼10.4 × 10.4 µm, active area ∼1.4 × 1 cm. In our experiment, the MNIST digit images are uploaded onto SLM1 as phase mask patterns. To prepare the pump in a sequence of LG modes, their helical phase patterns are expressed on SLM2 in the form $\Theta(r, \phi) = -l\phi + \pi\,\theta\!\left(-L_p^{|l|}(2r^2/w_0^2)\right)$, where $\Theta$ is wrapped between 0 and $2\pi$, $\theta$ is the unit step function, $\{L_p^{|l|}\}$ are the generalized Laguerre polynomials with azimuthal mode index $l$ and radial index $p$, $w_0$ is the beam waist, and $\phi = \arctan(y/x)$ is the azimuthal coordinate (see [37] for more information). The SLM response time is typically 500 ms, which varies with the structural complexity of the phase patterns. The pump and signal beams are then merged at a beam splitter (BS) and focused (f = 200 mm) inside a temperature-stabilized periodically poled lithium niobate (PPLN) crystal with a poling period of 19.36 µm and a total length of 1 cm (5 mol.% MgO-doped PPLN, from HC Photonics) for MSIC. The normalized conversion efficiency of the crystal, assuming optimal focusing inside the crystal, is ∼1%/W/cm. The generated SF light is coupled into a single-mode fiber and detected by a power meter (Thorlabs PM-100D with sensor S130C) through a MATLAB interface. The SF power readings are fed into the DNN after normalization (see the following paragraphs), which consists of one convolutional layer, five fully connected layers, and one output layer with 10 neurons for the 10 different classes, as shown in the right panel of Fig. 1. There are 16 filters in the convolutional layer with a kernel size of 2, and the five fully connected layers have 512, 256, 128, 64, and 32 units, respectively.
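To make the mask construction concrete, the following Python sketch generates one such wrapped LG phase pattern. The grid size is an illustrative assumption; the pixel pitch and pump beam waist reuse the values quoted above.

```python
# A minimal numpy sketch of the wrapped LG pump phase pattern
# Theta(r, phi) = -l*phi + pi * step(-L_p^{|l|}(2 r^2 / w0^2)).
import numpy as np
from scipy.special import genlaguerre

def lg_phase_mask(l, p, w0=2.8e-3, n=512, pitch=10.4e-6):
    ax = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(ax, ax)
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    lag = genlaguerre(p, abs(l))(2 * r**2 / w0**2)
    theta = -l * phi + np.pi * (lag < 0)   # unit step where L_p^{|l|} < 0
    return np.mod(theta, 2 * np.pi)        # wrap between 0 and 2*pi

mask = lg_phase_mask(l=2, p=3)  # one of the l in [-2, 2], p in [0, 7] modes
```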
The nonlinear activation function rectified linear unit [ReLU(x) = max(0, x)] is used in each connection between hidden layers because of its faster convergence than other nonlinear functions [39,40]. The categorical cross entropy is selected as the Loss function to evaluate the performance of the DNN, $\mathrm{Loss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} y_{ik}\log \hat{y}_{ik}$, where $y_{ik}$ is the true output, $\hat{y}_{ik}$ is the predicted output, $N$ is the total number of samples, and $K = 10$ is the number of classes. To minimize the Loss function, the adaptive moment estimation (ADAM) gradient descent algorithm is used as the optimizer in the training process [41]. Then the softmax function is adopted to normalize the values in the output layer so that they represent the probability of each class, as $P(j) = e^{x_j}/\sum_{k=1}^{K} e^{x_k}$, where $j = 1, \dots, K$ and $x$ is the probability vector without normalization. Finally, the predicted classification for each input is picked as the one with the highest probability, i.e., the "winner" of all classes, and the accuracy score of the whole database is the fraction of correctly classified samples. To determine the optimal set of pump LG modes, we start by simulating the performance with 110 LG modes with l ∈ [−5, 5] and p ∈ [0, 9], as the SF generation efficiency is negligible for other modes. Then, based on the simulated classification accuracy for different combinations of LG modes, we choose a subset of the 40 most effective modes with l ∈ [−2, 2] and p ∈ [0, 7] as the pump modes for our experiments. For the benchmark evaluation, we select the first 600 handwritten images of each digit in the MNIST database as our dataset. The resolution of those images is first increased by the Image Resize function in Matlab from 28×28 to 400×400 pixels to match the SLM pitch and signal beam diameter. After collecting SF power readings for all images, the data is shuffled and separated into training (4800 images) and testing (1200 images) sets. With the training set, for each LG mode of the pump, the power readings are normalized across all digits so that the different features extracted by those modes contribute more or less equally to the machine learning. Afterwards, the 40 normalized readings for each image are mapped onto a 4×10 matrix for feeding into the DNN for training on an NVIDIA Quadro T2000 graphics processing unit (GPU). Prior to the experiments, we first perform some simulation studies to understand the nonlinear optical process and overall system behavior. Figure 2(a) plots the simulated accuracy as a function of the epoch number during the learning process. It shows that the accuracy scores for both the training and testing datasets converge stably to high values. At the end, a recognition accuracy of 100% is achieved for the training and 99.83% for the testing dataset. Figure 2(b) presents the final normalized confusion matrix for the testing set, with nearly perfect classification for all digits. This indicates that our model is well trained without over-fitting or under-fitting. In the current experiment, the overall system speed is limited by the SLM response time (∼0.5 s), due to which a pause time of 1.5 s is set in the experiments to ensure a fully uploaded mask. Therefore, it takes days to measure all MNIST images. While the system is stable over short times, over the span of days there are significant mechanical fluctuations, beam drifts, and ambient noise in our table-top, free-space setting. All lead to measurement errors and biases that reduce the recognition accuracy.
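For concreteness, a minimal Keras sketch of a network with the layout and training objective described above is given below. The arrangement of the 4×10 input as a single-channel map and the use of a 2D convolution are assumptions; the text specifies only the layer sizes, activation, loss, and optimizer.

```python
# Minimal sketch of the described DNN: one Conv layer (16 filters, kernel 2),
# five fully connected ReLU layers (512/256/128/64/32), softmax over 10
# classes, ADAM optimizer, categorical cross-entropy loss.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4, 10, 1)),        # 40 readings as a 4x10 map
    layers.Conv2D(16, kernel_size=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),  # P(j) = exp(x_j)/sum_k exp(x_k)
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```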
To assess those effects, we adopt two data collection procedures. The first uses a sequential run, where the 600 images for each digit are measured in a group before moving to the next digit. The second runs in a loop, with the images measured in groups of 10, each containing one image for every digit. Figure 3 presents the normalized confusion matrix of the recognition accuracy for each digit. The sequential run reaches an accuracy score of 99.49%, which approaches the theoretical value of 99.83% obtained in our simulation. Such excellent agreement validates our method and the underlying optics. With the loop run, on the other hand, the accuracy drops to 94.25%. This may be caused by the experimental instabilities and noise during the prolonged data taking, whereby the signature features for each digit are disrupted and blurred. In the future, this issue can be overcome by designing a more compact and enclosed system. Also, the low-speed SLMs can be replaced with fast digital micromirror devices (DMDs) to substantially shorten the data taking cycle. To further benchmark the present hybrid pattern recognition technique, we next test the classification of the handwritten images with added random noise. Figure 4 shows an example of such random noise as a phase mask added to a resized MNIST image. Each noise spot in the noise mask is a cluster of 10×10 pixels, which gives stronger noise than smaller clusters would. The noise clusters each have random phase values uniformly distributed in [0, 2π), and together they take up 40% of the total pixels. To quantify the resulting signal to noise, we define an SNR in decibels as $\mathrm{SNR} = 10\log_{10}(\sigma_s^2/\sigma_n^2)$, where $\sigma_s^2$ and $\sigma_n^2$ are the phase variances of a digit image and the noise mask, respectively. As shown in Fig. 4, the SNR in the current experiment varies between −19.2 dB and −5.3 dB, with a mean value of −11.2 dB. Figure 5 presents the normalized confusion matrices of the MNIST database with added noise. With the sequential run, the accuracy reaches 95.32%, with the low accuracy values all located near the diagonal. This indicates that the noisy digit images are prone to being mis-classified as their neighboring digits. By contrast, the recognition accuracy is only 79.05% in the loop run case, marking a larger drop than in the noise-free case of Fig. 3. This indicates that the noisy digit images become more susceptible to the experimental instabilities and ambient interference, as their signature features are blurred by the added noise. In summary, we have demonstrated a hybrid machine learning system incorporating nonlinear optics and a deep neural network. It uses mode-selective image conversion (realized through a coherent nonlinear optical process in a $\chi^{(2)}$ crystal) to extract the signature features of images for subsequent machine learning. The nonlinear optics step utilizes all-to-all connections and speed-of-light processing of large-volume image data with pixel counts over hundreds of thousands, which would otherwise be overwhelming for typical electronic digital processors. Using 40 pump modes for the information extraction, we experimentally demonstrate handwritten digit classification at a high accuracy of 99.49%, close to the theoretical result of 99.83%. Even when the images are mixed with significant noise, giving a mean signal-to-noise ratio of −11.2 dB, the classification accuracy can still exceed 95%.
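As a side note on the noise benchmark described above, the following Python sketch builds such a clustered phase-noise mask and evaluates the SNR definition. The image size and the choice of one random phase value per 10×10 cluster are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_cluster_noise(phase_img, cluster=10, fill=0.40):
    """Overlay 10x10-pixel clusters of uniform random phase in [0, 2*pi)
    over ~40% of a phase image (one random value per cluster, assumed)."""
    noisy = phase_img.copy()
    ny, nx = phase_img.shape[0] // cluster, phase_img.shape[1] // cluster
    picks = rng.choice(ny * nx, size=int(fill * ny * nx), replace=False)
    for k in picks:
        i, j = (k // nx) * cluster, (k % nx) * cluster
        noisy[i:i + cluster, j:j + cluster] = rng.uniform(0, 2 * np.pi)
    return noisy

def snr_db(signal_phase, noise_phase):
    """SNR = 10*log10(sigma_s^2 / sigma_n^2), in decibels."""
    return 10 * np.log10(np.var(signal_phase) / np.var(noise_phase))
```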
Our results indicate the viability and potential advantages of introducing coherent nonlinear optics to machine learning and artificial intelligence. In future work, we hope to replace the current SLMs with much higher speed devices such as digital micromirror devices, from which an orders-of-magnitude speedup is expected. Also, the current setup faces challenges due to free-space optical fluctuations and ambient noise, which shall be addressed by an improved, enclosed design. Disclosures. The authors declare no conflicts of interest.
2020-10-07T01:00:46.155Z
2020-10-05T00:00:00.000
{ "year": 2020, "sha1": "31a27b8905524717d8c85523e23c7dd0f4ab1783", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2010.02273", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ba3f8470864b52fc9c397444f99f17730de1dfde", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Engineering", "Medicine", "Computer Science" ] }
236583326
pes2o/s2orc
v3-fos-license
ErbB Receptor Stimulation Is Required for Mouse Colon Adenoma Organoids to Form Crypts The majority of colon adenomas harbor genetic mutations in the APC gene. APC mutation leads to changes in Wnt signalling and cell-cell adhesion: as a consequence, intestinal crypt budding increases and the excess crypts accumulate to form adenomas that progress to colon cancer. When cultured with Wnt, R-spondin, EGF, Noggin, myofibroblast conditioned medium and Matrigel, crypts from normal mouse colon mucosa form crypt-producing organoids and can be passaged continuously. Under the same culture and passage conditions, crypts isolated from colon adenomas derived from Apcmin/+ mice typically grow as spheroidal cysts and do not produce crypts. The adenoma organoid growth requires EGF, but not Wnt, R-spondin or Noggin. However, when mouse colon adenoma spheroids are grown for more than 10 days in the presence of EGF, crypt formation occurs. EGF, EREG, β-cellulin, Neuregulin-1 or AREG are sufficient for initiating crypt formation; however, neuregulin-1 is more potent than the other EGF-family members. EGFR and ErbB2 inhibitors both prevent crypt formation in adenoma cultures. Either EGFR:ErbB2 or ErbB3:ErbB2 signalling is sufficient to initiate adenoma crypt budding and elongation. ErbB2 inhibitors may provide a therapeutic avenue for controlling and ablating colon adenomas. Introduction The intestine is a highly regenerative organ with cells being replaced continuously. The epithelium surface is made up of a sheet of epithelial cells folding into glandular-like crypts. Under normal homeostatic conditions, regeneration is maintained by the production of new cells from actively cycling stem cells located at the base of each crypt as well as by the production of new crypts. During developmental growth or repair of the intestinal epithelium, new crypts are produced by crypt budding 1,2, where the new crypt is initiated at the base of an existing crypt and elongates as the upper rim migrates to the top of the crypt 3-5. Aberrant crypt budding has been implicated as a potential mechanism responsible for colorectal hyperplastic polyps 6 and linked to adenoma formation 7. These observations have been corroborated in the intestinal and colonic epithelium of Apc min/+ mice during polyp formation 8. Intestinal epithelial cells can be cultured in vitro to produce multicellular three-dimensional structures called organoids 9-11. The development of small intestine 11 and colon 9,10,12 organoid cultures allows the direct study of both intra-crypt cell production and crypt budding in organoids. Organoids can be used to measure the effects of factors regulating crypt initiation and survival or intra-crypt cell production in normal, adenomatous or cancerous colon mucosa 9,10,12. Previous reports show that factors such as epimorphin 13, bone morphogenetic proteins (BMPs) 14,15, Wnt 16,17, Epidermal Growth Factor (EGF) 18, TGFβ 19 and Hedgehog 16 can affect the formation of the villus-crypt structure in the small intestine. Whereas mouse small intestinal organoid growth requires R-Spondin, Noggin and EGF, but not Wnt, colon organoids require the addition of Wnt 12. We have previously reported our results with mouse colon organoids looking at the effect of the structural environment and biochemical factors 9,10 on the development of crypts in wildtype mouse colon organoids.
Our previous study suggests that crypt formation in mouse colon organoids also requires the conditioned medium derived from mouse myo-fibroblast cells (WEHI-YH2) 20. In contrast, mouse colon adenomatous organoids grow and passage weekly as spheres or cysts and have not been previously reported to form crypts in vitro 21,22. In this study, we report that whilst colon adenoma cells from Apc min/+ mice initially form spheres, when cultured 10 or more days without splitting the cultures, in the presence of EGF or selected members of the ErbB family of ligands 23,24, adenomatous crypts will form. Our results provide insights into the mechanisms initiating crypt production and the aberrant growth of adenomas and colon cancers. Crypt formation is induced by EGF stimulation in long term colon adenoma cultures The normal mouse colon is lined with a single layer of polarised epithelial cells that form the regular array of crypts which create the flat luminal surface (Fig. 1a). Colon crypts were labelled with phalloidin to visualise F-actin (luminal-specific) and E-cadherin (baso-lateral specific). The high resolution 3D imaging reveals the highly organised crypt structures and apical/basolateral polarity (Fig. 1a, Sub Fig. 1a,b). In contrast to the normal colonic epithelial mucosa, adenomatous colonic polyps from Apc min/+ mice display irregular crypt structure and disorganised packing of the crypts, i.e. the crypts no longer stack perpendicular to the mucosa surface (Fig. 1b, Sub Fig. 1c,d). The crypts are still tightly packed, but the luminal axes are no longer aligned. Individual adenomatous crypts retain the apical/basolateral polarity of the epithelial cells (Fig. 1b). Cells from normal colon mucosa form colonospheres in matrigel cultures (with the full complement of growth factors) and from Day 7 the colonospheres begin to produce crypt buds (Fig. 1c). As previously shown, the addition of myofibroblast conditioned media (WEHI-YH2) is required for optimal crypt production 9. Most budding crypt organoids produce multiple crypt structures formed from epithelial cells with typical apical/basolateral polarity (Fig. 1c and inset, Fig. 1f). In contrast, cells from Apc min/+ colon adenomas require only the addition of EGF in the basic culture media to proliferate and form spheroid (cyst-like) structures (Fig. 1d). The Apc min/+ adenoma organoids grow similarly with or without the addition of Wnt, RSpondin and Noggin 22. The spheroids are readily visible by Day 6 and continue to grow but do not form crypt-like structures when cultured and passaged for up to 10 days (Fig. 1d). Adenoma cultures can be passaged and grown in long term culture, but require passaging after 7-10 days and continue to grow as spheroids. Under these conditions, in contrast to the organoids from normal colon, Apc min/+ adenoma organoids do not appear to produce crypts. This was surprising given that adenoma tissue contains tightly packed and irregular crypt structures (see Fig. 1b) and suggested that the culture conditions did not provide the appropriate micro-environment to form adenomatous crypts. In order to investigate adenoma organoid growth further, the Apc min/+ adenoma cultures were allowed to grow for more than 7 days without mechanical disruption and in the presence of EGF, but in the absence of R-spondin and Wnt3a.
Under these conditions, the organoids produced crypt-like structures from Day 12 (Fig. 1d). Budding crypt structures were evident in a significant proportion of the organoids, similar to the normal colon organoids (Fig. 1c, d). To ensure that the crypt formation phenomenon presented by the current colon adenoma line was not an isolated event (i.e. specific to a particular organoid line), adenoma cell lines derived from separate Apc min/+ adenomas were tested and similar crypt-like structures were observed when cultured in the presence of EGF for up to 20 days (adenoma line #B13 shown in Supplementary Fig. 2). Time-lapse imaging of another crypt-forming organoid suggests that the crypt-like structures can occasionally start to appear after 7 days (Supplementary Movies S1 and S2). Apc min/+ colon adenoma organoids require EGF for both growth of the colonospheres and crypt-like structure production. The adenoma colonospheres grew poorly without EGF (Fig. 1e, h). The Apc min/+ adenoma organoid crypt structures have a similar morphology to cultures of normal colon organoids, with similar dimensions (e.g. diameters of ~40 µm 5), but produce many more crypt structures per organoid (Fig. 1c, d, f, g, i). High resolution 3D imaging of an immunostained Apc min/+ adenoma organoid with multiple crypts reveals the structural organisation of the crypts with strong peripheral staining of E-cadherin and β-catenin (Fig. 1i). Effect of EGF on the formation of colon adenoma organoid crypts The EGF signalling pathway is activated in the intestinal epithelial stem cell niche 25 [...] and imaged daily over 21 days. Organoids remained as spheroids at 0.005 ng/mL EGF but produced crypts between 0.05-5.0 ng/mL EGF (Fig. 2a-b). The proportion of crypt-containing organoids was scored and is presented in Fig. 2b. In the absence or at lower concentrations of EGF the organoids remain as spherical cysts (Fig. 2a, b). Crypt formation in colon adenoma cultures increased with increasing EGF concentration, from 0.05 ng/mL to 20 ng/mL (Fig. 2b), and was significantly increased above 0.5 ng/mL (Fig. 2b). Different morphological shapes of colon adenoma organoids. The colon Apc min/+ adenoma organoids have four distinctive morphologies: Spheroids (see Fig. 3a up to day 8), Large body organoids with crypt extensions (Fig. 3a), Lenticular organoids (Fig. 3b) and Hyper-budding organoids (Fig. 3c). The Large body extension organoids had crypts forming after the organoid had grown to a significant size. Crypt extensions then appeared from the main body, leading to the gradual regression/reduction of the size of the main organoid body. Lenticular organoids started to form crypt extensions as they grew, reshaping the organoid into a series of interconnected tubular crypts. These extensions could be very long (> 100 µm, twice the maximum mean crypt width of a normal murine crypt 5), with the occasional occurrence of secondary branches (Fig. 3b). Finally, Hyper-budding organoids produced large numbers of small crypt extensions, formed at the same time, on the surface of the organoid (Fig. 3c). The timing of crypt extension formation was different for each class of morphology, and varied between day 8 and day 14. For example, the Hyper-budding organoids shown in Fig. 3c formed budding structures on day 11 and day 15, respectively.
The elongation rate of crypts in adenoma organoids is independent of EGF concentration. We analysed the number and length of the adenoma crypt extensions over a range of EGF concentrations (Fig. 4a). There was little crypt production at low concentrations of EGF (5 pg/mL) but the number of crypts increased dramatically at 0.5 ng/mL (Fig. 4a). The number of crypt extensions per organoid increases with EGF concentration, peaking at 0.5 ng/mL. The crypt lengths over the culture period of 10-25 days at three EGF concentrations (50, 5, and 0.5 ng/mL) were measured (Fig. 4b-d). The mean length of crypts at 50 ng/mL EGF appears to be constant (~50 µm) between day 14 and day 24 (Fig. 4b). In contrast, at the lower concentrations of 5 ng/mL and 0.5 ng/mL EGF, the crypt lengths continued to increase throughout the time course (Fig. 4c, d), at rates of 2.24 µm/day and 2.03 µm/day, respectively (Fig. 4e; see the fitting sketch after this section). Collectively, these data show that a minimal concentration of EGF is required for crypt production; at intermediate EGF concentrations (i.e. between 0.5 and 5 ng/mL), the crypts extend at a rate of ~2 µm/day. At higher concentrations of EGF, crypt length rapidly reaches ~50 µm and does not extend any further. Effect of EGF-family ligands and the ErbB signaling pathway in colon adenoma crypt formation. Members of the epidermal growth factor receptor family include EGFR, ErbB2 (also known as HER2), ErbB3/HER3, and ErbB4/HER4. EGFR and ErbB2 have been associated with the growth of many human cancers, including colorectal cancer 28,29 (Fig. 5a). Ligand binding to the extracellular domain of EGFR, ErbB3 or ErbB4 induces the receptors to form oligomers and consequently activates the intracellular kinase 30. ErbB2 does not bind a ligand but is in a conformation which allows it to bind to other EGFR family members; when the co-receptor is in the ligand-activated conformation, the ErbB2 kinase is activated 31. Interestingly, the intracellular domain of ErbB3 has no measurable kinase activity of its own, but when the ligand-bound form of ErbB3 combines with ErbB2, the heterodimer (or higher order aggregates of ErbB3 and ErbB2) activates the ErbB2 kinase activity.
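The fitting sketch referenced above: an elongation rate such as ~2 µm/day can be estimated from length-versus-time data by a linear least-squares fit. All values below are hypothetical illustrations, not the study's measurements.

```python
import numpy as np

days = np.array([10, 12, 14, 16, 18, 20, 22, 24])        # hypothetical
lengths_um = np.array([22, 26, 31, 35, 39, 43, 48, 52])  # hypothetical
rate_um_per_day, intercept = np.polyfit(days, lengths_um, 1)
print(f"crypt elongation rate ~ {rate_um_per_day:.2f} um/day")
```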
Heregulin-β1 stimulates 215 crypt formation more potently than EGF (Fig.5b, Supplementary Fig. 2). This result 216 indicates that a pathway stimulated by ErbB3/ErbB1, ErbB4/ErbB1, ErbB3/ErbB2 or 217 of their potencies for inducing colon adenoma crypt formation (Fig.5b). 223 The mean number of spheroids and organoids with crypts were also scored for each 224 of the EGF-like ligands (Fig.5c). NRG-β1 and BTC appeared to be the most potent in 225 terms of organoid formation with a consistently higher mean organoids formed, 226 particulary each ErbB family member in crypt formation. 237 261 This study demonstrates that EGFR/ErbB family signalling is required for crypt 262 production in organoids produced by mouse colon adenoma stem cells. Although 263 excess crypt formation appears to be the defining feature of adenomas, the current 264 paradigms for intestinal repair 37 , colon adenoma and colon cancer focuses on excess 265 intra-crypt cell proliferation 38 . Cell production within normal colon crypts occurs at a 266 rapid rate, indeed the rate of cell production in the intestines 39 (including the colon) 267 exceeds all other tissues. The renewal of the colon epithelial cell occurs rapidly and 268 continuously within the crypt, however under normal homeostatic conditions, crypt 269 production is a rare event: less than 1 in 200 crypts in the mouse colon are in the 270 process of budding 2,40 and in the normal human colon less than 1 in 2000 crypts are 271 producing Victoria, Australia), preparation and culturing of the WEHI-Ad67 colon adenoma 361 organoid line was described previously 9 . Using the same technique, we prepared 362 another colon adenoma organoid line: WEHI-AdB13 using a polyp from a separate 363 Apc +/min mouse. The colon adenoma organoid lines were passaged weekly (details as harvesting and preparation: The colon adenoma organoids were harvested from the 374 96 well plates, collected and pipetted with a 26G needle for 5-10 times, visually 375 checking the size of the resulting fragments after each round of pipetting (~10 cell 376 fragments are ideal to work with). The fragments were resuspended with DMEM/F12 377 and centrifuged at 8500 × g for 5 minutes. The supernatant fluid was discarded, and 378 the pellet resuspended with 1mL DMEM/F12. The suspension was centrifuged at 379 10000 × g for 3 minutes and supernatant fluid discarded. After resuspension of the 380 pellet with 150µL basic growth medium, an aliquot of the fragment suspension was 381 removed for counting using a haemocytometer (Hausser Scientific). 382 Based on the number of wells to be plated and required seeding density per well, the 383 volume and fragment concentrations of the seeding mixture were determined as 384 described in Sub Fig. 3a near the bottom of the well (Sub Fig. 3c). After dispensing the adenoma cells, the plate 397 was centrifuged for 1 minute at 450 RCF (Eppendorf 5810 R centrifuge) and incubated 398 at 37˚C for 1 hour (Sub Fig. 3d). 399 Reagent preparation and application: The respective EGF-like ligand stocks were 400 prepared in basic growth medium while the ErbB receptor inhibitors were made up in 401 basic growth medium containing recombinant mouse EGF (0.5 ng/mL, PeproTech, 402 #315-09). The study used a 5-point titration of the ligands using 1 in 10 dilutions. The 403 reagents were pipetted into the designated wells of a 96-well deep well plate 404 (Eppendorf cat#951033529) with layout as per described in Sub Fig. 3 Content Analysis version 5.11). 
For both systems, each experiment was followed and imaged daily for up to 3 weeks. Each dataset, which represents all the wells at a specific timepoint, was processed using a customized Fiji script 53 to extract each individual well's image stack, before digitizing, saving as tiff images and organizing into folders for analysis. Computational image analysis for organoid imaging assay. For analysis, a customised Fiji script was used to batch process and organise all the images in each experiment; this included steps to focus the multiple images at different focal planes for each well, background and debris removal, as well as image segmentation to extract organoids and for feature selection. Specific details of each processing step are briefly described below. The individual well's Z-stack bright-field images were flattened using ImageJ/Fiji software 53. The projection of z-stack images onto a single in-focus image was carried out using the Stack Focuser plugin (https://imagej.nih.gov/ij/plugins/stack-focuser.html) (parameters: select=10 variance=0.000 edge select_only) and exported as high-quality tiff files. Well exterior, background and debris removal, as well as organoid selection and feature extraction, were achieved using sequential steps of the following operations: conversion to 8-bit, "Auto Local Threshold" (method=Phansalkar, radius=150, parameter_1=0 parameter_2=0 white stack), set Black Background, "Convert to Mask" (method=Default background=Default calculate black list), "Find Edges" as stack, "Invert" stack, set Foreground Color as white, clear boundary (in steps), "Dilate" as stack, "Fill Holes" as stack and "Remove Outliers" (radius=7 threshold=50 which=Dark stack). Analyses of object sizes were performed using the "Analyze Particles" function (size=1000-Infinity circularity=0.30-1.00). The script was used to process all well images and to threshold, segment and count the individual organoids. The physical parameters of each organoid, including shape, size, intensity and circularity, were recorded. These features for all organoids identified per well/image were exported as text files (.csv) for analysis and tabulation of the parameters of interest (e.g. mean organoid size). Overlaid images with selection masks were exported for visual curation and validation. The results were curated to remove false positives and to add missing organoids. Whole mount staining and imaging. Colon tissue (normal and adenoma) was cleared and stained as described in ref 54. Briefly, the tissues were fixed, cleared and stained with rat monoclonal anti-E-cadherin (Clone ECCD-2, Thermo Fisher Scientific Cat #13-1900 (1:250)), goat anti-Rat IgG (H+L) Cross-Adsorbed Secondary Antibody, Alexa Fluor 488 (Thermo Fisher Scientific Cat #A-11006 (1:500)), Rhodamine Phalloidin (Molecular Probes/Invitrogen (1:200)) and DAPI (Thermo Fisher Scientific Cat# 62248 (1:1000)). For the adenoma tissues, 3D image stacks were acquired on a Leica SP8 Resonance Scanning Confocal microscope using a 20x objective over a 100 µm depth at 0.5 µm per section. For normal tissues, 3D image stacks were acquired on an Olympus IX-81 microscope with an Olympus FV1000 Spectral Confocal attachment and a 20x objective over ~55 µm depth at 1 µm per section.
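The Fiji macro operations listed above can be approximated outside ImageJ. The Python/scikit-image sketch below mirrors the local thresholding, hole filling, small-object removal, and circularity-filtered particle analysis only loosely; its parameter values are assumptions, not the script's exact settings.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, measure, morphology

def segment_organoids(img, block_size=301, min_area=1000):
    """Rough analogue of the Fiji pipeline: local threshold -> mask ->
    fill holes -> drop small objects -> measure shape/size/intensity."""
    mask = img > filters.threshold_local(img, block_size)
    mask = ndimage.binary_fill_holes(mask)              # "Fill Holes"
    mask = morphology.remove_small_objects(mask, min_area)
    labels = measure.label(mask)
    props = measure.regionprops(labels, intensity_image=img)
    # Keep roughly circular objects, as in "Analyze Particles"
    # (circularity = 4*pi*area/perimeter^2, range 0.30-1.00).
    return [p for p in props
            if 0.30 <= 4 * np.pi * p.area / p.perimeter ** 2 <= 1.00]
```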
All acquired image data were processed and rendered using the Imaris software package (Bitplane, Zürich, Switzerland).
2021-08-02T00:05:56.749Z
2021-05-10T00:00:00.000
{ "year": 2021, "sha1": "95a8370e59b9489ae057659a52d9905934cdae43", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-478160/v1.pdf?c=1620669069000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "4cf944335cd14231d09cbe23da7892863a54fe56", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
4616454
pes2o/s2orc
v3-fos-license
Etiology, ethics, and outcomes of chronic kidney disease in neonates Objectives: To report the epidemiology of chronic kidney disease (CKD) in neonates at a single tertiary center and the outcomes of renal replacement therapy (RRT) in these patients, and to discuss ethical considerations regarding RRT in this population. Methods: In this retrospective study, we reviewed clinical data from all neonates with evidence of CKD who were followed up at King Abdulaziz University, Jeddah, Kingdom of Saudi Arabia between 2005 and 2015. Follow-up serum creatinine levels were recorded every 6 months. Results: A total of 181 neonates presented with CKD. Their mean age at the time of presentation was 11.1 days (95% confidence interval [CI]: 9.5-12.8) and the mean creatinine level was 106.5 µmol/L (95% CI: 91.3-121.7). Congenital anomalies of the kidneys and urinary tract (CAKUT) were the underlying causes of CKD in 84.5% of the neonates. Mortality was high, particularly in the first 6 months (10%), and reached 16% by 4 years of follow-up. At the time of the last follow-up, 42 (41%) neonates had hypertension and 27 (26.5%) had significant proteinuria. Five patients received dialysis in the neonatal period and another 6 were commenced on dialysis later. Conclusion: Congenital anomalies of the kidneys and urinary tract are the most common etiology in neonates with CKD. Chronic kidney disease in neonates is associated with high morbidity and mortality rates. Information about the epidemiology of chronic kidney disease (CKD) in neonates and infants is limited worldwide. 1,2 These patients require special attention, as the characteristic features of CKD in these patients differ from those of children older than 2 years. Furthermore, the need for dialysis among neonates with severe CKD should be considered carefully from medical and ethical points of view. 2 Congenital anomalies of the kidneys and urinary tract (CAKUT), particularly obstructive anomalies and renal dysplasia, are the most frequent underlying causes of CKD during the neonatal period. 3,4 Chronic kidney disease in neonates requiring renal replacement therapy (RRT) is very rare. In a survey of patients on chronic dialysis in 32 countries, conducted between 2000 and 2011, only 264 neonates had received RRT. 4 Delivery of RRT is influenced by the uncertain outcomes of chronic dialysis in neonates. 4 As a result, there is significant variation in the attitudes of pediatric nephrologists caring for infants with end-stage kidney disease (ESKD). 5 In addition, the economic impact of RRT leads to major regional variation in its use globally. For example, more neonates with ESKD receive RRT in countries with a higher gross national income than in countries with a lower gross national income, 5 which is likely influenced by the high cost of providing RRT to young children. 6,7 The challenges faced by pediatric nephrologists in developing countries are even greater due to limited facilities and cultural issues. However, there is a paucity of studies about CKD in neonates in these countries. In this study, we report the epidemiology of neonatal CKD and the outcomes of RRT in these patients at our institution. We also discuss cultural and ethical issues regarding the provision, or lack thereof, of life-sustaining RRT treatment for neonates. Methods. Compliance with ethical standards. The Ethics Research Committee at the Faculty of Medicine of King Abdulaziz University (KAU), Jeddah, Kingdom of Saudi Arabia granted permission to perform this study.
The need to obtain consent from the participants was waived, as we studied anonymized subjects retrospectively and did not perform any interventions. The study was performed according to the ethical principles of the Declaration of Helsinki.

Study population and data collection. Data of all full-term neonates, either born at or referred to King Abdulaziz University Hospital, with evidence of CKD between January 2005 and December 2015 were evaluated. Low-birth-weight neonates (premature or small for gestational age) and neonates with acute kidney injury that had resolved were excluded. Data were collected retrospectively from the patients' medical records and included age at the time of presentation, gender, neonate length at presentation, etiology of CKD, associated comorbidities, blood pressure (BP), serial serum creatinine level, hemoglobin level, urine protein level, and mortality. For subjects with stage 5 CKD (ESKD), we recorded details about RRT, including the reasons for it not being performed in neonates who did not receive RRT. Subjects aged 1-28 days were considered to be in the neonatal period. Hypertension was defined as a systolic blood pressure at or above the 95th percentile for gestational age, post-conceptional age, and birth weight, measured on 3 separate occasions. 8,9 Babies with normal BP readings on antihypertensive medication were considered hypertensive. We defined neonatal CKD as an abnormality in renal structure or function that manifested in the neonatal period and was long term or expected to last for more than 3 months. Significant proteinuria was defined as an early morning urine protein level of 30 mg/dL or greater. 10 We determined the estimated glomerular filtration rate (eGFR) using the Schwartz formula for infants. 11 The related studies cited throughout this study were obtained from online databases such as WebMD and PubMed.

Statistical analysis. We used STATA software (StataCorp 2011, Release 12; College Station, TX: StataCorp LLC) for all analyses. The proportion and mean for dichotomous and continuous variables, respectively, were used to describe the patients' characteristics. The 4-year survival was estimated using a Kaplan-Meier curve. The impact of the severity of disease and RRT on the 4-year mortality was estimated using a Cox proportional hazards regression model. A multivariate regression analysis was performed to control for potential confounding factors, including baseline age, gender, and hypertension, which were determined based on an a priori theoretical assumption using directed acyclic graphs. All patients, including censored patients, with a determined follow-up duration were included in the survival analysis. Statistical significance was determined at a p-value below 0.05, and data are presented with 95% confidence intervals (CIs).

Disclosure. The authors have no conflict of interest, and the work was not supported or funded by any drug company. This project was funded by the Scientific Deanship of King Abdulaziz University, Jeddah, Kingdom of Saudi Arabia (Grant Number G-385-140-38).
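As a rough illustration of the statistical pipeline described in the Methods, the sketch below combines the infant Schwartz eGFR estimate with a Kaplan-Meier curve and a Cox proportional hazards model in Python using the lifelines library. The Schwartz constant k = 0.45 for full-term infants, the unit conversion, the column names, and the toy cohort values are all assumptions for illustration; the authors used STATA, not Python.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def schwartz_egfr_infant(length_cm, creatinine_umol_l, k=0.45):
    """Schwartz estimate for full-term infants (k ~ 0.45, an assumed value).

    eGFR (mL/min/1.73 m^2) = k * length (cm) / serum creatinine (mg/dL);
    creatinine is converted from umol/L (1 mg/dL ~ 88.4 umol/L).
    """
    return k * length_cm / (creatinine_umol_l / 88.4)

# Hypothetical cohort rows: follow-up in months, death indicator, covariates.
df = pd.DataFrame({
    "followup_months": [48, 4, 48, 12, 30, 48, 2, 48],
    "died":            [0,  1,  0,  1,  0,  0,  1,  0],
    "age_days":        [5, 12, 20,  3, 15,  9,  2, 25],
    "male":            [1,  1,  0,  1,  0,  1,  1,  0],
    "hypertension":    [0,  1,  0,  1,  1,  0,  1,  0],
})

# 4-year survival, Kaplan-Meier (censored patients are handled automatically).
kmf = KaplanMeierFitter()
kmf.fit(df["followup_months"], event_observed=df["died"])

# Cox proportional hazards model adjusted for the a priori confounders.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])  # adjusted hazard ratios with p-values
```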
Results. One hundred eighty-one neonates fulfilled our inclusion criteria and were included in the study. They represented 14.25% of our pediatric CKD population (1,270 children with CKD). Their baseline demographic and disease characteristics are summarized in Table 1. There was a predominance of male patients, and obstructive uropathy was the main underlying etiology of CKD. Data on consanguinity were available for 77 children, of whom 53 (69%) had a history of consanguinity between the parents. Figure 1 demonstrates patient survival over the course of 4 years, with considerable mortality within the first 6 months of life (10%). Although the baseline creatinine level did not influence the 4-year mortality (hazard ratio [HR]: 1.004; 95% CI: 1.001-1.007; adjusted for age, gender, and baseline hypertension), it significantly influenced the risk of long-term renal impairment (the 4-year creatinine level increased by 0.7 µmol/L for each 1 µmol/L increase in the baseline creatinine level; 95% CI: 0.54-0.90; adjusted for age, gender, and baseline hypertension [Figure 2]). Similarly, the baseline eGFR did not significantly influence the 4-year mortality (HR: 0.98; adjusted for age, gender, and baseline hypertension; 95% CI: 0.95-1.00) (Figure 3). Five of 39 neonates with a serum creatinine level greater than 133 µmol/L received RRT. An additional 6 children underwent dialysis later in infancy or childhood. The parents of 3 patients refused to allow their child to undergo dialysis because of social issues or because a family member had had a bad experience with dialysis; moreover, we advised against dialysis in 6 patients with comorbidities. The indication and decision to perform RRT significantly influenced neonatal mortality (Figure 4). The need for dialysis was based on the serum creatinine level, eGFR, and symptoms such as fluid overload. Details on the patients who received RRT during the neonatal period or later in infancy, and on those who did not receive RRT despite having ESKD, are summarized in Table 2. At the time of the last follow-up, 42 (41%) children had hypertension and 27 (26.5%) had significant proteinuria. Twenty-two children (12.1%) died at a mean (SD) age of 8.8 (11.2) months, median (range) 4.3 (0.14-42) months. The underlying causes of death were variable; however, chest infection and septicemia were the main underlying etiologies of death in the 6 patients who had congenital nephrotic syndrome. Progression and complications of ESKD leading to failure of other organ systems were the cause of death in the remaining children.

Discussion. We reported the epidemiology of CKD in neonates presenting to our institution over the course of 10 years. Congenital anomalies of the kidneys and urinary tract was the main underlying cause of CKD in our patient group, consistent with the results of other international reports. 3,4 A combined database study representing 40 countries and 264 patients showed that the most frequent underlying etiology of neonatal ESKD was CAKUT in 55% of patients, followed by cystic kidney disease in 13%, cortical necrosis in 11%, and congenital nephrotic syndrome in 6%. 4 The incidence of neonatal ESKD was reported to be about 7.1 per million of the age-related population, and the estimated incidence of neonatal CKD is 1 in 10,000 live births. 12 In our cohort, we observed a high percentage of consanguinity, which is similar to that seen in the rest of our country. 13 In our cohort, only a small percentage of neonates required RRT, and this treatment was offered to only a few children in the neonatal period. Some parents refused to allow their child to undergo dialysis for various reasons, while others were advised against dialysis. The contraindications for dialysis included associated anomalies, a poor social background, and parental inability to perform peritoneal dialysis at home. Apart from the presence of anuria, there are no clear indications for the initiation of dialysis in neonates with CKD. Many infants, especially those with CAKUT, have ongoing urine production, and their condition may be managed without dialysis until they have recovered from respiratory complications, which are common in neonates with CAKUT.
Some patients may show a surprising improvement in kidney function, as RRT can be discontinued in 10% of neonates who have been commenced on dialysis. 4 There is no international agreement on who should be responsible for clinical decision-making regarding the initiation of dialysis therapy in pediatric patients who are unable to provide consent. The United States Presidential Commission for the Study of Bioethics 14 has divided ethical issues about providing treatment into 3 categories. First, for clearly beneficial treatments, the guidelines state that treatment is mandatory, even without parental consent. Second, for clearly futile treatments, the guidelines state that treatment should not be provided, even if parents insist upon treatment. Finally, when outcomes are ambiguous or uncertain, the parents should assume final responsibility regarding the most appropriate course of action. However, the French Neonatology Society 15,16 recommends that, although parents must be involved in the decision-making process, any important decision affecting the patient's life is an individual medical responsibility. In our institution, we provided RRT to neonates in whom RRT was not advised by physicians due to the presence of other comorbidities but whose parents insisted that all possible treatment options be tried. However, their outcome was poor. Some Islamic scholars 17 have advised (fatwa) that life-sustaining treatment such as resuscitation should be withheld or withdrawn if recovery of the heart and lungs is deemed unlikely in the opinion of 3 trustworthy specialist doctors. Similarly, the Royal College of Paediatrics and Child Health in London 18 states that it is ethically acceptable to withhold a treatment that is not in a child's best interest. This is the case in many neonates with CKD requiring RRT, as this therapy could prolong their suffering with no improvement in the outcome. This scenario is particularly true when the infrastructure is not optimal for providing this expensive treatment. In this case, withdrawal (or withholding or limiting) of treatment could be ethical because a greater benefit could be achieved by directing the resources elsewhere. 19 It is also important to consider social issues when making decisions about the use of RRT in neonates. For example, clinicians should consider whether parents are able to provide peritoneal dialysis for neonatal patients at home prior to recommending such sophisticated treatment. Serious medical abnormalities are the most important factor affecting decision-making when providing RRT to neonates with ESKD.
Other factors include anticipated future morbidity, the family's right to decide, the doctor's or medical staff's right to decide, family socio-economic status, oliguria, and budget constraints. 16 In this study, we used only peritoneal dialysis (PD) to initiate RRT, as we do not have the facilities to perform hemodialysis or renal transplantation in neonates at our institution. Peritoneal dialysis was used in 91.7% of the 264 neonates registered to receive RRT in the European Society of Pediatric Nephrology (ESPN)/European Renal Association-European Dialysis and Transplant Association (ERA-EDTA), International Pediatric Peritoneal Dialysis Network (IPPN), Australia and New Zealand Dialysis and Transplant Registry (ANZDATA), and Japanese RRT registries. 4 Recently, data have shown that the survival of neonates receiving RRT improves when appropriate infrastructure and expertise are available. The survival of neonates and infants who received RRT early in life in the form of PD was 78.6% and 84.6%, respectively, 3 years after dialysis initiation. 20 Neonatal RRT is associated with significant morbidity and mortality. A recent study 21 reported high rates of complications, including peritonitis (83%), malpositioned catheters (72%), and leaks (55%), after PD in neonates. An alternative option for long-term dialysis in neonates is hemodialysis. Poolack et al 22 performed hemodialysis in 7 neonates and reported that 86% (6 children) survived to 18 months of age, but there was considerable morbidity. The results described here are consistent with those of a previous study demonstrating a high mortality rate in neonates receiving RRT (65.3%), with death most likely to occur within the first month of life (56%) and the majority (94%) occurring within the first year. 20 Furthermore, among long-term survivors, 44% were severely developmentally delayed and 22% were moderately developmentally delayed. Higher survival rates were reported in the combined registries (the ESPN/ERA-EDTA, IPPN, ANZDATA, and Japanese RRT registry): 81% at 2 years and 76% at 5 years. 4 Nevertheless, considerable morbidities, including growth retardation (63%), anemia (55%), and hypertension (57%), were reported at 2 years of age. 4 Similar to our report, another study found that the decision to begin RRT in neonates is difficult due to a lack of surgical experience and of evidence-based guidelines. 21 The outcome of this treatment remains uncertain, and there are many ethical issues regarding the benefit of treatment in neonates with comorbidities and restricted resources. Over the last 20 years, many institutions in developed countries have begun to offer RRT routinely to neonates. However, this treatment is viewed as optional, rather than mandatory. 23,24 This viewpoint is even more common in developing countries, as both medical staff and caregivers feel that RRT is a demanding treatment with uncertain or poor outcomes. This explains the high percentage of parental refusal to start dialysis in our study. A similar situation of parents refusing therapy could occur even with optimal infrastructure and expertise, leading to an ethical dilemma. 25,26

Study limitation. The main limitation of our study is that it was a single-center, retrospective study in a tertiary pediatric nephrology center. In conclusion, CAKUT is the most common etiology of CKD in neonates. Chronic kidney disease is associated with high morbidity and mortality in these patients.
Further studies are required to justify offering complicated and expensive therapy to vulnerable children in developing countries and to create guidelines that help professionals make decisions about treatment.
Island and Page curve for one-sided asymptotically flat black hole

A great breakthrough in solving the black hole information paradox took place when the semiclassical island rule for the entanglement entropy of Hawking radiation was proposed in recent years. Up to now, most papers which discuss the island rule for asymptotically flat black holes with $D \ge 4$ focus on eternal black holes. In this paper, we take one step further by discussing the island of the "in" vacuum state, which describes a one-sided asymptotically flat black hole formed by gravitational collapse in $D \ge 4$. We find that an island $I$ emerges at late time and saves the entropy bound, and that the boundary of the island $\partial I$ depends on the position of the cutoff surface. When the cutoff surface is far from the horizon, $\partial I$ is inside and near the horizon. When the cutoff surface is set near the horizon, $\partial I$ is outside and near the horizon. This is different from the case of the eternal black hole, in which $\partial I$ is always outside the horizon no matter whether the cutoff surface is far from or near the horizon. We will see that different states manifestly affect $S_{\text{ent}}$ in the island formula when the cutoff surface is far from the horizon and thus give a different result for the Page time.

I. INTRODUCTION

The black hole information paradox has been a long-standing debate for more than 40 years, since Hawking discovered that information may be lost in the evaporation of a black hole [1]. Hawking's calculation implies that the von Neumann entropy of Hawking radiation will increase monotonically. On the other hand, quantum mechanics requires black hole evaporation to be unitary; thus the von Neumann entropy of Hawking radiation should obey the Page curve if information is preserved during black hole evaporation [2]. Since then, producing the Page curve in a gravitational calculation has been a key step towards solving the information paradox. In [3][4][5], the island rule was proposed to calculate the Page curve of Hawking radiation (see e.g. [6] for a review). The island rule states that the fine-grained entropy of the black hole is given by Eq. (1.1) and the fine-grained entropy of the Hawking radiation by Eq. (1.2), where R denotes the region outside the cutoff surface A that collects the Hawking radiation, I is called the island and denotes a codimension-one hypersurface which penetrates into the interior of the black hole, and ∂I is the codimension-two boundary of I (see fig. (1)). ∂I is chosen to be the quantum extremal surface (QES) [8] that extremizes the generalized entropy [9][10][11]

$$S_{\text{gen}} = \frac{\text{Area}(\partial I)}{4G_N} + S_{\text{ent}}, \qquad (1.3)$$

where the first term is the area term from the Ryu-Takayanagi formula [12,13], and the second term is the coarse-grained entropy of the matter fields. If there is more than one extremal surface, the global minimum should be taken. This is the meaning of "min" and "ext" in Eqs. (1.1) and (1.2). The island formula can be derived by using the replica trick to construct replica wormholes [14,15].

Fig. 1 [7]: (a) The observer on the cutoff surface A (green dashed line) collects Hawking radiation in the region R. The island (red line) penetrates into the interior of the black hole; ∂I is the quantum extremal surface. I ∪ B ∪ R is a Cauchy slice. (b) Without an island, we can equivalently fix r_I = 0 and t_I as some constant.

Up to now, as far as we know, most of the papers that discuss the Schwarzschild black hole [28][29][30][31][32] and the Reissner-Nordström black hole [33,34] are concerned only with eternal black holes and static vacua [36]. In [28], the authors discuss a one-sided dynamical black hole, but they implicitly assume the Hartle-Hawking state [37], which is more appropriate to an eternal black hole.
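The explicit right-hand sides of Eqs. (1.1) and (1.2) did not survive in the text above. As a hedged reconstruction from the standard island-rule literature [3-5], written in this paper's notation (the choice of B, the region between ∂I and the cutoff surface, as the argument of S_ent in (1.1) is our assumption, based on Sec. II), they presumably read

$$S(\text{BH}) = \min_{\partial I}\, \text{ext}_{\partial I} \left[ \frac{\text{Area}(\partial I)}{4G_N} + S_{\text{ent}}(B) \right], \qquad (1.1)$$

$$S(R) = \min_{\partial I}\, \text{ext}_{\partial I} \left[ \frac{\text{Area}(\partial I)}{4G_N} + S_{\text{ent}}(R \cup I) \right]. \qquad (1.2)$$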
The Hartle-Hawking state, or Hartle-Hawking vacuum, is the unique state which is regular in the whole Kruskal extension of the Schwarzschild black hole and is invariant under Schwarzschild time translation. It represents a black hole in thermal equilibrium, with an outgoing Hawking flux reaching future null infinity J+ and an equal incoming thermal flux coming in from past null infinity J−. In this paper, we will discuss the "in" vacuum state, which describes a one-sided dynamical asymptotically flat black hole formed from the collapse of a spherical null shell (see fig. (1)) [7]. It can be approximated by the Unruh state in the late time limit. The "in" vacuum state and the Unruh state can both describe a black hole formed by gravitational collapse, since they both contain no incoming flux from J−. Hence, it is generally believed that, compared with the Hartle-Hawking state, the "in" vacuum state is more appropriate to a one-sided dynamical black hole. We will see that different states manifestly affect S_ent in the island formula and thus give different results for the Page time. In addition, different from the Hartle-Hawking state for the eternal black hole,¹ due to the absence of incoming flux, the s-wave approximation is valid for the "in" vacuum state of a one-sided black hole formed from the collapse of a spherical null shell when the cutoff surface A is far from the horizon (see Appendix A). The use of the "in" vacuum state is the key difference between our paper and the other papers that use the Hartle-Hawking state [28][29][30][31][33][34][35][36].

This paper is organized as follows. In Sec. II, the entanglement entropy S_ent is discussed and we will see its dependence on the state. In Sec. III, we discuss the case in which the cutoff surface is far from the horizon and find that the boundary of the island ∂I is inside the black hole horizon, and we produce the Page curve. In Sec. IV, we discuss the case in which the cutoff surface is near the horizon and find that the boundary of the island ∂I is outside the black hole horizon, and we again produce the Page curve. In Sec. V, we discuss the case of a higher dimensional (D > 4) black hole formed by the collapse of a spherical null shell and find results similar to the 4D case. The conclusion and discussion are in Sec. VI.

II. ENTANGLEMENT ENTROPY IN BLACK HOLE BACKGROUND

In this paper, we will mainly focus on the island of the "in" vacuum state [7] in the background of the Schwarzschild black hole. I ∪ B ∪ R is a Cauchy slice on which the quantum state is pure, since we consider a vacuum state, which is pure. Then S_ent(I ∪ B) = S_ent(R), and thus S(BH) = S(R), as shown in Eqs. (1.1) and (1.2). In the following, we will mainly focus on S_ent(B) for simplicity.

A. Cutoff surface far from horizon

When the cutoff surface A is far from the horizon, it is reasonable to consider only 2D massless scalar fields, i.e., the s-wave approximation is valid (see Appendix A). Then the matter part S_ent is approximately given by the entanglement entropy of massless scalar fields in 2D spacetime, Eq. (2.2) [22], where c is the central charge and d is the distance between A and ∂I³ in the flat metric ds² = −dx⁺dx⁻. Eq. (2.2) can be understood as follows. The entanglement entropy in the vacuum state of a conformal field theory (CFT) [39][40][41] in flat spacetime ds² = −dx⁺dx⁻ is given by Eq. (2.6). Fields in Minkowski spacetime can be quantized as in Eq. (2.7), and the Minkowski vacuum state is defined with respect to the coordinates (x⁺, x⁻) (i.e., with respect to the modes (4πω)^(−1/2) e^(−iωx⁺) and (4πω)^(−1/2) e^(−iωx⁻)), such that the corresponding annihilation operators annihilate it, Eq. (2.8).

Footnote 1: As shown in Appendix A, the s-wave approximation for the eternal black hole in the Hartle-Hawking state is questionable, since there are then ingoing modes in addition to the outgoing modes. We would like to thank the anonymous referee for pointing out this issue.
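The explicit expressions for Eqs. (2.2) and (2.6) were lost above. As a hedged reconstruction, the forms standard in the island literature (e.g., [22, 28, 29]), which match the surrounding description, are

$$S_{\text{ent}} = \frac{c}{6}\,\log d^2(A, \partial I), \qquad (2.2)$$

with, in the flat metric $ds^2 = -dx^+ dx^-$,

$$d^2(A, I) = \big[x^+(A) - x^+(I)\big]\,\big[x^-(I) - x^-(A)\big],$$

and, in a conformally flat metric $ds^2 = -e^{2\rho}\, dx^+ dx^-$, the distance acquires the conformal factors at the two endpoints,

$$d^2(A, I) = e^{\rho(A)}\, e^{\rho(I)}\, \big[x^+(A) - x^+(I)\big]\,\big[x^-(I) - x^-(A)\big],$$

up to ordering and sign conventions for spacelike separation; the UV cutoff is omitted, as noted in footnote 2 below.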
Footnote 2: In Eq. (2.2), we omit the UV cutoff parameter, since it can be absorbed in the renormalization of the Newton constant G_N [4,22,28,29,38].

Footnote 3: Throughout the paper, we denote the coordinates of ∂I by using the subscript I, such as (u_I, v_I).

That is to say, the vacuum expectation value (VEV) of the normal-ordered stress tensor in the coordinates (x⁺, x⁻) vanishes, ⟨0|:T_{x±x±}:|0⟩ = 0. In two-dimensional gravity, ds² = −e^{2ρ(x⁺,x⁻)} dx⁺dx⁻, the VEV of the normal-ordered stress tensor ⟨Ψ|:T_{x±x±}:|Ψ⟩ defines a set of functions t±,⁴ through the anomalous transformation of the stress tensor, where

$$\{f; x\} = \frac{f'''}{f'} - \frac{3}{2}\left(\frac{f''}{f'}\right)^2$$

is the Schwarzian derivative. Obviously, t± is a state-dependent and coordinate-dependent function.

Footnote 4: Throughout the paper, we assume cG_N ≪ r_h^{D−2} for a D-dimensional macroscopic black hole, so that we can neglect backreaction effects for simplicity.

B. Cutoff surface near horizon

When the length scale of the region is sufficiently small compared with the length scale of the curvature, the entropy of the matter fields can be approximated by the one in the vacuum state in flat spacetime, no matter which spacetime and state we consider. Thus the renormalized entanglement entropy of the matter term is now given by [29,42]

$$S_{\text{ent}} = -\frac{\kappa c\, \text{Area}}{L^2}, \qquad (2.14)$$

where κ is a constant and L is the geodesic distance between the boundary of the island ∂I and the cutoff surface A. "Area" in (2.14) is given by the area of the cutoff surface, which is approximately equal to the area of ∂I. Since we assume that both ∂I and the cutoff surface are near the horizon, L can be approximated as in Eq. (2.15) [29]. We would like to point out that Eq. (2.15) is approximately valid when L is sufficiently small with respect to the length scale of the curvature.

III. CUTOFF SURFACE FAR FROM HORIZON

Now let us consider the detailed calculation in four dimensions. As suggested in the above section, the (approximate) expression for the entanglement entropy of the matter term depends on the position of the cutoff surface. In this section we first consider the case in which the cutoff surface is far from the horizon.

A. With island

We consider a one-sided black hole formed by a spherical null shell collapsing at v = v₀. In the "in" region v < v₀ we have the Minkowski metric (3.1), while in the "out" region v > v₀ we have the Schwarzschild metric. To have a smooth metric at the null shell, we have the connecting condition of [7]; notice that this is only valid outside the horizon. On the other hand, in terms of the Kruskal coordinates (3.7)-(3.8), at late times we obtain Eq. (3.11), where v_H = v₀ − 2r_h. Since both U and u_in are smoothly defined inside and outside the horizon, Eq. (3.11) is also valid inside the horizon (at least in the vicinity of the horizon),⁵ and then we obtain Eq. (3.12). Now ds² = −e^{2ρ(u_in,v)} du_in dv covers the whole spacetime. In the "in" region v < v₀, e^{2ρ(u_in,v)} = 1, and in the "out" region e^{2ρ(u_in,v)} is related to the conformal factor e^{2ρ(U,V)} in Kruskal coordinates via a coordinate transformation, where we have used Eq. (3.12).

Now let us turn to the generalized entropy of the system, whose area term (3.16) is given by the area of the boundary of the island ∂I. Since we are considering the case in which the cutoff surface A is far from the horizon, the s-wave approximation is valid, and the matter part is given by Eq. (2.2). As explained previously, we now need to consider the "in" vacuum state |in⟩ of the Minkowski region v < v₀. |in⟩ is defined with respect to the coordinates (v, u_in) (i.e., with respect to the modes (4πω)^(−1/2) e^(−iωv) and (4πω)^(−1/2) e^(−iωu_in)); thus the VEV of the normal-ordered stress tensor in the coordinates (v, u_in) vanishes, ⟨in|:T_{u_in u_in}:|in⟩ = ⟨in|:T_{vv}:|in⟩ = 0 [7].
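For reference, the Kruskal coordinates that Eqs. (3.7)-(3.8) introduce for the Schwarzschild exterior and interior take, in their standard form (a hedged reconstruction; the normalization convention is our assumption),

$$U = -\frac{1}{\kappa}\, e^{-\kappa u}, \qquad V = \frac{1}{\kappa}\, e^{\kappa v} \qquad \text{(outside horizon)},$$

$$U = +\frac{1}{\kappa}\, e^{-\kappa u}, \qquad V = \frac{1}{\kappa}\, e^{\kappa v} \qquad \text{(inside horizon)},$$

with surface gravity $\kappa = 1/(2 r_h)$, $u = t - r_*$, $v = t + r_*$, and $r_*$ the tortoise coordinate. In these coordinates the exterior metric is conformally flat, $ds^2 = -e^{2\rho(U,V)}\, dU\, dV + r^2 d\Omega^2$.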
Then t± = 0 for the vacuum state |vac⟩ = |in⟩ in the (v, u_in) coordinates; thus Eq. (2.2) applies, with d the distance between A and ∂I in the flat metric ds² = −du_in dv.

Actually, up to now, in almost all papers that discuss islands in Schwarzschild spacetimes [28][29][30][31] or for the Reissner-Nordström black hole [33,34], the Hartle-Hawking state |H⟩ [37] is taken into account.⁶ |H⟩ is defined with respect to the Kruskal coordinates (V, U) (i.e., with respect to the modes (4πω)^(−1/2) e^(−iωV) and (4πω)^(−1/2) e^(−iωU)) [7]. Then, for Eq. (2.2), one obtains Eqs. (3.20) and (3.21); in [28-31, 33, 34] the authors used Eqs. (3.20) and (3.21). This is the key difference between our paper and the others. To obtain the second line of (3.23) we have also used the relation (3.25), where r*_A and r*_I can be converted to Kruskal coordinates by (3.7) and (3.8), respectively.⁷ We assume ∂I is near the horizon (which will be confirmed in Eq. (3.32)); then r*_I → −∞ and thus U_I → 0. Expanding S_gen to first order in U_I, we obtain Eq. (3.26), whose solution gives U_I. Since U_A < 0, we find U_I > 0, which means ∂I is inside the horizon. Moreover, Eq. (3.32) shows that ∂I is near the horizon, and this confirms our assumption.

Footnote 7: In (3.25), we implicitly assume r_I < r_h. For the case r_I > r_h, we can use r_I ≈ r_h(1 + e^{r*_I/r_h − 1}), and r*_I can be converted to Kruskal coordinates by Eq. (3.7); we then obtain the same expression for S_gen as (3.23). Thus the calculation in this subsection is valid both for the case that ∂I is inside the horizon and for the case that it is outside. We will finally find that ∂I is inside the horizon when the cutoff surface is far from the horizon.

Plugging these solutions back into Eq. (3.26), we obtain the island entropy, where in the last line we have taken into account that cG_N ≪ r_h². Thus, with the island configuration, at late times the entropy of Hawking radiation is bounded by the black hole Bekenstein-Hawking entropy, which decreases monotonically due to black hole evaporation.

B. Without island

If there is no island, we can equivalently fix r_I = 0 and t_I as some constant. Then the area term (3.16) vanishes. We only have the field term, Eq. (2.2), with u_in(I) = t_I and e^{2ρ(u_in,v)(I)} = 1, since now r_I = 0 is in the Minkowski region (3.1). As a consequence we obtain Eq. (3.39), where in the last line we have taken the late time limit t_A ≫ r_A (≫ r_h).⁸ In a word, without the island configuration, at late times the radiation entropy grows linearly with time, which is consistent with Hawking's result. Comparing Eq. (3.39) with the island result, we obtain the Page time. Our result for the Page time is about twice the Page time calculated in [28]. This is due to the fact that we choose the "in" vacuum state |in⟩, which has t± = 0 in the (v, u_in) coordinates, while in [28], although the authors also considered a dynamical black hole, they still started from the Hartle-Hawking state |H⟩, which has t± = 0 in the (V, U) coordinates and is thermal with respect to the (v, u_in) coordinates. S_ent of the vacuum state |in⟩ is smaller than S_ent of the thermal state |H⟩; thus we arrive at the Page time later.

IV. CUTOFF SURFACE NEAR HORIZON

A. With island

In this subsection, we consider the case in which the cutoff surface is near the horizon, and we follow logic similar to that of [29]. The authors in [29] considered the Hartle-Hawking state in the eternal Schwarzschild black hole, while we consider the "in" vacuum state in a one-sided black hole formed by the collapse of a null shell. Despite this difference, when the length scale of the region is sufficiently small compared with the length scale of the curvature, the entropy of the matter fields can be approximated by the one in the vacuum state in flat spacetime, no matter which spacetime and state we consider [29]. In this case we still have the same area term, but the entropy of the fields is now given by Eq. (2.14).⁹
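Before carrying out the expansion that follows, the extremization can be illustrated numerically. The sketch below locates the extremum of a near-horizon generalized entropy of the schematic form S_gen = Area(∂I)/4G_N − κ c Area/L², cf. Eq. (2.14), on the t_I = t_A slice, using a near-horizon geodesic length L ≈ 2 r_h (√α − √β) for r_A = r_h(1+α), r_I = r_h(1+β). This form of L, and all parameter values, are illustrative assumptions rather than the paper's exact expressions.

```python
import numpy as np
from scipy.optimize import brentq

# Toy near-horizon QES search (Sec. IV structure), with r_A = r_h(1+alpha),
# r_I = r_h(1+beta), L ~ 2 r_h (sqrt(alpha) - sqrt(beta)); all values assumed.
G_N, c, kappa, r_h, alpha = 1.0e-3, 1.0, 1.0, 1.0, 0.1

def dSgen_dbeta(beta):
    # d/dbeta of [pi r_h^2 (1+beta)^2 / G_N  -  pi kappa c (1+beta)^2 / gap^2],
    # where the matter piece comes from S_ent = -kappa c Area / L^2.
    gap = np.sqrt(alpha) - np.sqrt(beta)
    area_term = (2.0 * np.pi * r_h**2 / G_N) * (1.0 + beta)
    matter_term = np.pi * kappa * c * (
        2.0 * (1.0 + beta) / gap**2
        + (1.0 + beta)**2 / (np.sqrt(beta) * gap**3)
    )
    return area_term - matter_term

beta_star = brentq(dSgen_dbeta, 1e-14, 0.02)   # extremum just outside the horizon
print(f"beta* ~ {beta_star:.3e}")              # tiny, since c*G_N << r_h^2
```

With these numbers, β* comes out tiny, reproducing the qualitative statement that ∂I sits just outside and very near the horizon when cG_N ≪ r_h².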
Since we assume that both ∂I and the cutoff surface are near the horizon, L can be approximated as in [29], and we obtain the generalized entropy (4.5). Since we assume that both ∂I and the cutoff surface are near and outside the horizon,¹⁰ we can make the replacement r_A = r_h(1 + α) and r_I = r_h(1 + β), where β < α ≪ 1. Extremizing (4.5) over t_I, we obtain an equation whose solutions come in two families labeled by an integer; dropping the complex solutions, we are left with t_I = t_A. This confirms the assumption t_I = t_A made in [29]. Plugging t_I = t_A back into Eq. (4.5) and expanding to first order in β, we obtain an equation with solution (4.12); schematically, β is proportional to c²G_N²κ² times an α-dependent factor. This matches the result in [29] and confirms the assumption that ∂I is near and outside the horizon, owing to the fact that c²G_N² ≪ r_h⁴. Plugging Eq. (4.12) into (4.10) and expanding the result to zeroth order in G_N, we obtain the island entropy, where in the last line we have taken into account that cG_N ≪ r_h². Thus, for the case in which the cutoff surface is near the horizon, with the island configuration, at late times the entropy of Hawking radiation is bounded by the black hole Bekenstein-Hawking entropy, which decreases monotonically due to black hole evaporation.

B. Without island

In the case without an island, we still need to use Eq. (2.2) to calculate S_ent, due to the fact that r_I = 0 is far from the near-horizon cutoff surface for a macroscopic black hole (r_h ≫ ℓ_p). Then the result for S_gen without an island for a near-horizon cutoff surface is the same as Eq. (3.39). Comparing with (4.17), we find the same Page time.

V. HIGHER DIMENSIONS

To examine the results obtained in previous sections more comprehensively, in this section we consider a one-sided black hole in D (D > 4) dimensional spacetime formed by a null shell collapsing at v = v₀. Similarly, in the "in" region v < v₀ we have the Minkowski metric (5.1), and in the "out" region v > v₀ we have the Schwarzschild metric (5.2). We have defined r_h in higher dimensions accordingly,¹¹ where Pr denotes the principal value of the integral. The Kruskal coordinates, with characteristic scale 2r_h, are given outside and inside the horizon by Eqs. (5.5) and (5.6), and (5.2) is converted to conformally flat form.

Footnote 11: Ω_{D−2} is the area of a unit (D−2)-dimensional sphere S^{D−2}.

A. Connecting condition

To have a smooth metric at the null shell, the metric on both sides of the null shell needs to be the same. Thus we have the connecting condition of [7], and r(v₀, u) is given implicitly outside the horizon; substituting into Eq. (5.12) and expanding, we obtain (5.14), and from (5.14) we obtain (5.15). However, the above formula is only valid outside the horizon, because of the relations that hold only there. Similar to the 4D case, since both U and u_in are smoothly defined inside and outside the horizon, Eq. (5.21) is also valid inside the horizon (at least in the vicinity of the horizon). This is explicitly shown in Appendix C.

B. Cutoff surface far from horizon with island

Again the generalized entropy is given by the sum of the area term (5.23) and the matter term (5.24), and, based on logic similar to the 4D case, we can also make the s-wave approximation for S_ent in higher dimensions. In the matter term, (e^{2ρ_A})_{t±=0} should be in the form of Eq. (5.21), while (e^{2ρ_I})_{t±=0} should be in the form of Eq. (B.21). In what follows we assume cG_N ≪ r_h^{D−2}. Putting everything together, we obtain the generalized entropy; next we need to convert all variables to Kruskal coordinates. For r_A ≫ r_h, define a dimensionless constant a = (r_A/r_h)^{D−3} ≫ 1. We then expand r*_>(A) around a = ∞, and r*_>(A) can be converted to Kruskal coordinates via Eqs. (5.3) and (5.5).
On the other hand, since r_I is close to the horizon, we can expand it around the horizon, i.e., r_I = r_h(1 − ε) with ε ≪ 1; then r*_<(I) can be converted to Kruskal coordinates via Eqs. (5.3) and (5.6).¹⁴ Finally, we obtain the generalized entropy (5.36).¹⁵

Footnote 14: In (5.36), we implicitly assume r_I < r_h; for r_I > r_h an analogous expansion gives the same expression for S_gen as (5.36). Thus the calculation in this subsection is valid both for ∂I inside and outside the horizon. We will finally find that ∂I is inside the horizon when the cutoff surface is far from the horizon.

Footnote 15: In the D → 4 limit, Eq. (5.36) is slightly different from Eq. (3.23). This is because we take the approximation (5.30) and because, in the D → 4 limit, Eq. (5.17) has an additional proportionality constant compared with (3.11). But this does not change the physical content.

We assume ∂I is near the horizon, expand S_gen to first order in U_I, and extremize the result over U_I to obtain Eq. (5.38). Likewise, since we assume ∂I is near the horizon, we can expand S_gen to first order in U_I and extremize the result over V_I, where we have taken into account that ln(V_A/V_I) ≫ 1. Then, together with Eq. (5.38), we obtain the solution. As in the 4D case, U_A < 0 and thus U_I > 0, which means ∂I is inside the horizon. Moreover, using cG_N ≪ r_h^{D−2}, ∂I is near the horizon, and in the last line of the resulting entropy we have again taken into account that cG_N ≪ r_h^{D−2}. Thus, with the island configuration, at late times the entropy of Hawking radiation is bounded by the black hole Bekenstein-Hawking entropy, which decreases monotonically due to black hole evaporation.

C. Without island

As in the 4D case, if there is no island, we can equivalently fix r_I = 0 and t_I as some constant. Then the area term (5.23) vanishes and we only have the field term (5.24), with u_in(I) = t_I and e^{2ρ(u_in,v)(I)} = 1, since now r_I = 0 is in the Minkowski region (5.1). We then obtain Eq. (5.48), where in the last line we take the late time limit t_A ≫ r_A (≫ r_h).¹⁶ Thus, without the island configuration, at late times the radiation entropy grows linearly with time, which is consistent with Hawking's result. Comparing Eq. (5.48) with Eq. (5.45), we have S(R) = min(S_gen) ≈ min(S_ent, S_BH).

D. Cutoff surface near horizon

We now consider the case in which the cutoff surface is near the horizon.¹⁷ In this case we still have the same area term, but the entropy of the fields is now given by the higher-dimensional analogue of Eq. (2.14), where L is the geodesic distance between ∂I and the cutoff surface. Since we assume both ∂I and the cutoff surface are near the horizon, L can be approximated by Eq. (5.53), where in the last line we have made the replacement r_A = r_h(1 + α) and r_I = r_h(1 + β) and expanded L to first order in α and zeroth order in β.¹⁸ This is appropriate since we assume that both ∂I and A are near the horizon; we will justify this assumption later. Extremizing (5.54) over t_I, we obtain an equation with solutions (5.56). Since β < α, cosh⁻¹(β/α) is imaginary; dropping the complex solution, we are left with t_I = t_A, which again confirms the assumption t_I = t_A of [29] in the higher dimensional case. Plugging t_I = t_A back into Eq. (5.54), we obtain (5.58).

Footnote 18: In [29], the authors further assume t_A = t_I, and Eq. (5.53) then reduces to a form which matches the result in [29]. This confirms that the geodesic distance can be approximated via L² ≈ e^{ρ_A} e^{ρ_I} d(A, I)² when A and ∂I are nearby. In this paper we do not assume t_A = t_I; rather, we deduce it in Eq. (5.56).

Extremizing (5.58) over β, we obtain a solution which matches the result in [29] and confirms the assumption that ∂I is near and outside the horizon, owing to the fact that cG_N ≪ r_h^{D−2}. Thus, for the case in which the cutoff surface is near the horizon, with the island configuration, at late times the entropy of Hawking radiation is bounded by the black hole Bekenstein-Hawking entropy, which decreases monotonically due to black hole evaporation.
Without island: in this case, we still need to use Eq. (5.24) to calculate S_ent, due to the fact that r_I = 0 is far from the near-horizon cutoff surface for a macroscopic black hole. Then the result for S_gen without an island for a near-horizon cutoff surface is the same as Eq. (5.48). Comparing with (5.64), we find the same Page time.

VI. CONCLUSION AND DISCUSSION

In this paper, unlike other papers that discuss the island rule for eternal black holes, we take a further step towards a more realistic black hole formed from gravitational collapse. Based on this motivation, we choose the "in" vacuum state |in⟩, which describes the vacuum of the Minkowski region and has a thermal Hawking flux in the Schwarzschild region. Due to the absence of incoming flux, the s-wave approximation is valid for the "in" vacuum state of a one-sided black hole formed from the collapse of a spherical null shell when the cutoff surface A is far from the horizon. The use of the "in" vacuum state is the key difference between our paper and the other papers that use the Hartle-Hawking state [28][29][30][31][33][34][35][36]. We find that, when the cutoff surface is far from the horizon, an island emerges at late times with its boundary ∂I inside and in the vicinity of the horizon, and it saves the entropy bound; the Page time is about twice that of the Hartle-Hawking state. On the other hand, when the cutoff surface is near the horizon, an island emerges at late times with its boundary ∂I outside and in the vicinity of the horizon, and it saves the entropy bound; the Page time is again about twice that of the Hartle-Hawking state. For the D > 4 dimensional case, we find similar results. Our result differs from the case of the eternal black hole, in which the boundary of the island ∂I always emerges outside the horizon, no matter whether the cutoff surface is far from or near the horizon. Different states manifestly affect S_ent in the island formula when the cutoff surface is far from the horizon, and thus yield different results for the Page time and different positions of the island boundary ∂I.
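To make the Page-time statement in the conclusion concrete, here is a minimal numerical sketch. It assumes a linear no-island growth of the radiation entropy and an island plateau at the Bekenstein-Hawking entropy; the slopes are hypothetical normalizations chosen only to encode the paper's conclusion that the "in"-vacuum entropy production rate is roughly half that of the Hartle-Hawking state, so that the Page time doubles.

```python
import numpy as np

# Illustrative units: r_h = 1, with c * G_N << r_h^2 as assumed in the text.
G_N, c, r_h = 1.0e-3, 1.0, 1.0
S_BH = np.pi * r_h**2 / G_N              # 4D Bekenstein-Hawking entropy, Area/4G_N

def page_curve(t, slope):
    """S(R) = min(no-island linear growth, island plateau at S_BH)."""
    return np.minimum(slope * c * t / r_h, S_BH)

# Hypothetical slopes: "in" vacuum at half the Hartle-Hawking rate.
slope_hh, slope_in = 1.0 / 6.0, 1.0 / 12.0

t_page_hh = S_BH * r_h / (slope_hh * c)  # crossing point of the two branches
t_page_in = S_BH * r_h / (slope_in * c)
print(f"Page time ratio (in / Hartle-Hawking): {t_page_in / t_page_hh:.1f}")  # 2.0

t = np.linspace(0.0, 1.5 * t_page_in, 400)
S_in, S_hh = page_curve(t, slope_in), page_curve(t, slope_hh)  # two Page curves
```

The crossing of the linear branch with the plateau is all that fixes the Page time here; the factor of two comes entirely from the assumed slope ratio.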
Crystalline lens changes in porcine eyes with implanted phakic IOL (ICL) with a central hole

Background: We calculated the smallest diameter of a hole in the center of the optic at which the optical character of a phakic IOL (ICL) may be maintained. The changes induced in the aqueous humor dynamics and the pathology of cataract development with such a hole were investigated. Methods: A simulation was performed using ZEMAX software to calculate the hole diameter that makes it possible to maintain a stable optical character of a phakic IOL. After a hole of the calculated diameter was trepanned in the center of the optic of the ICL, the latter was implanted into one eye of a 5-month-old minipig, and an unperforated ICL into the other. The postoperative course was observed for 3 months. Then, Evans blue was injected into the vitreous body under general anesthesia to stain the anterior capsule of the crystalline lens. Within 30 min, the eye was enucleated and the tissues removed were fixed. Results: The MTF of the ICL perforated with a 1.0-mm hole in the center of the optic resembled that of the unperforated ICL. In all cases with non-perforated ICLs, subcapsular turbidity developed, but no staining by Evans blue was observed in the anterior capsule. In contrast, the anterior capsules of the eyes fitted with ICLs with a 1.0-mm hole were stained but exhibited no turbidity. Conclusion: An ICL with a central hole of diameter 1.0 mm in the optic is optically similar to an unperforated ICL. The size of the hole influenced the aqueous humor dynamics and increased the aqueous humor perfusion volume over the entire anterior surface of the crystalline lens. The possibility of preventing cataracts was therefore suggested.

Introduction. The use of a phakic IOL for eyes with high refractive errors that cannot be corrected with refractive surgery has become increasingly popular. Implantation of a phakic IOL involves no optical invasion, because the optical region of the cornea is not incised, and, since the maneuver is reversible, lens exchange is possible if a refractive error or a change of refraction occurs. The ICL (Staar Surgical, Monrovia, CA, USA) is a posterior-chamber type of phakic IOL that can be used to correct a wide range of refractive errors (-20D to +20D). It can be implanted into the eye through a corneal incision of only 3.2 mm. A current issue of concern, however, is secondary cataract as a postoperative complication [1,3,4,11,13,17]. According to the reported results of ICL clinical trials in the US, the incidence of secondary cataract is 2.1% within 1 year and 2.7% within 3 years after surgery [8,16]. However, the reasons for this are not yet understood. Fujisawa et al. implanted several types of ICL into the eyes of pigs, and reported changes in aqueous humor dynamics and in the crystalline lens. Implantation of ICLs of the ordinary type into porcine eyes was followed in all cases by turbidity in the form of an anterior subcapsular cataract. Similar, but milder, changes were seen after the implantation of all ICLs with four holes around the optic. On the other hand, an ICL with a 3.0-mm hole in the center of the optic improved the aqueous humor circulation, and no turbidity was reported in any instance beneath the anterior capsule [2]. Although this approach removed the risk of turbidity in the anterior capsule, such a large hole in the optic compromises lens function.
We therefore calculated the diameter of the central optic hole that would not affect the optical character of the lens, and conducted an experiment in which such perforated lenses were implanted into porcine eyes in order to observe the changes in aqueous humor dynamics and the occurrence of secondary cataract.

Calculation of hole diameter. To calculate the diameter of the hole in the center of the optic that does not affect the properties of the ICL, a simulation was performed using ZEMAX software (ZEMAX Development Corporation, Bellevue, WA, USA). As a basic specification of the eyeball model, the pupil diameter was set at 3.0 mm using the Gullstrand model eye. Then, the modulation transfer function (MTF) was set to the maximum level occurring at the location of the retinal image. Four lenses of the same size as the optical diameter of the ICL were added to this eyeball model, one with no hole at all, and the others with holes of 2 mm, 1.5 mm, and 1.0 mm, respectively. We then compared the contrast at a spatial frequency of 100 cycles/mm to evaluate ocular performance [9].

Implantation experiment. The study was performed on 12 eyes of six minipigs (Göttingen strain) aged 5 months. Each minipig was raised in an animal room 6 m² in area and maintained with 12-h light-dark cycles. The animals were fed twice a day and given water freely. After about 1 month, an ICL perforated at the center of its optic with a trepan, of the diameter that caused no problems in the computer simulation, was implanted into one eye; an unperforated ICL was implanted into the other eye. After implantation of the lenses, the animals were raised in the same room for 3 months, during which their progress was observed and the anterior ocular segment was photographed; the eyeballs were then extirpated. Regarding the ICL, a minus lens 13.0 mm in diameter was used. Although there were individual differences, we took the average of the actual white-to-white distances and used it to decide the lens size.

Implantation method. Endotracheal intubation was performed after anesthesia was induced with an infusion of pentobarbital sodium, and inhalation anesthesia was administered using nitrous oxide. A temporal keratotomy 3.2 mm in length was performed, and the anterior chamber and a cartridge for ICL implantation were filled with a viscoelastic substance. The ICL was loaded into this substance, and the cartridge was then slid into a special injector. The cartridge was inserted from its front end into the wound, and the injector was operated with great care to avoid contact of the cartridge with the crystalline lens, so that the ICL entered the anterior chamber. The four haptics were positioned behind the iris; throughout, we took care to avoid any contact with the crystalline lens. After the haptics were securely in place, the viscoelastic substance was removed. Finally, the corneal wound was sutured with 10-0 nylon.

Follow-up. Postoperatively, instillation of 0.5% levofloxacin, 0.1% diclofenac sodium, and 0.5% betamethasone sodium phosphate was carried out four times a day for 1 week. The anterior ocular segment was photographed 1 week, 1 month, 2 months, and 3 months after surgery, and the course was observed.
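The ZEMAX model itself is proprietary, but the qualitative effect of the central hole on the MTF used in the calculation above can be sketched with elementary Fourier optics, treating the perforated central zone as an effective obscuration of the 3.0-mm pupil. This is a deliberately crude stand-in for the full simulation (light passing through the hole is not actually blocked, only uncorrected by the ICL); the grid, the frequency bin reported, and the obscuration approximation are all assumptions.

```python
import numpy as np

# Incoherent diffraction-limited MTF = normalized autocorrelation of the pupil.
# The central hole of the ICL optic is modeled, crudely, as a central obscuration.
N, half_width_mm = 1024, 4.0
x = np.linspace(-half_width_mm, half_width_mm, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

def radial_mtf(hole_diameter_mm, pupil_diameter_mm=3.0):
    pupil = ((R <= pupil_diameter_mm / 2) & (R > hole_diameter_mm / 2)).astype(float)
    psf = np.abs(np.fft.fft2(pupil)) ** 2        # incoherent PSF (up to scale)
    otf = np.abs(np.fft.fft2(psf))               # OTF via Wiener-Khinchin
    otf /= otf[0, 0]                             # normalize so MTF(0) = 1
    return np.fft.fftshift(otf)[N // 2, N // 2:] # radial cut along +x

for hole in (0.0, 1.0, 1.5, 2.0):
    m = radial_mtf(hole)
    print(f"hole {hole:.1f} mm: contrast at an arbitrary mid-band bin = {m[40]:.3f}")
```

The smaller the hole, the closer the computed curve stays to the unperforated case, mirroring the trend the ZEMAX simulation reports.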
Pigment injection and extirpation of the eyeball. To determine the dynamics of the aqueous humor before extirpation of the eyeballs, 10 μl of 20% Evans blue pigment (molecular weight 960.8) dissolved in physiological saline was injected into the anterior part of the vitreous body of each minipig eye from the ciliary ring, using an Ito microsyringe with a 31-gauge needle under general anesthesia. After 30 min, during which the pigment circulated in the aqueous humor, the eyeball was extirpated and fixed with 4% glutaraldehyde-0.1 M phosphate buffer. After extirpation, the minipigs were treated according to the ARVO resolution regarding animals used in research.

Fixing the eyeball. After the extirpated minipig eyeballs were fixed with 4% glutaraldehyde-0.1 M phosphate buffer, each eyeball was cut in half, and the ICL and crystalline lens were removed. At this time, the cut surface of the eyeball, the anterior and posterior surfaces of the extirpated crystalline lens, and the ICL were observed under a stereoscopic microscope. In addition, the eyeballs were left in fixative for 3 days to fix the eye tissues thoroughly. The extirpated crystalline lens was cut in half again, and the cut surface was observed under the stereoscopic microscope. The specimen was fixed with 1% osmic acid-phosphate buffer for 4 h after washing with 0.1 M phosphate buffer. After being dehydrated with an ethanol series, it was further dehydrated three times with propylene oxide and embedded in Quetol 812. Semithin sections were stained with toluidine blue and observed under a light microscope. Thin sections were stained with uranyl acetate and lead citrate and examined by electron microscopy.

Calculation of crystalline lens turbidity area. For the turbid part of the anterior surface of the crystalline lens observed under the stereoscopic microscope, an image of the anterior surface of the porcine crystalline lens was cropped using Adobe Photoshop (an image-editing program) according to the size of the ICL optic, and the cropped image was converted to black and white. A border was then drawn with the computer mouse immediately around the area of turbidity, the image was pixelized, and the number of pixels in the turbid part (which was black) was counted, making it possible to calculate the ratio between the turbid area and the cropped optical area. We calculated the proportions of the optic area represented by turbidity for the unperforated and perforated lenses, and used Student's t-test to compare these proportions.

Hole diameter. The simulation results obtained using the optical design software ZEMAX are shown in Fig. 1, in which the ordinate represents contrast and the abscissa spatial frequency. Fig. 1d shows the simulation in which the lens has no hole; the contrast is 0.45 at a spatial frequency of 100 cycles/mm. As Fig. 1a shows, with the 2-mm hole the contrast is 0.15 (rate of decrease, 67%) at 100 cycles/mm, and in Fig. 1b, with the 1.5-mm hole, it is 0.23 (rate of decrease, 49%) at 100 cycles/mm. These rates of decrease were much greater than those seen with an unperforated lens. On the other hand, with the 1.0-mm hole the contrast was 0.39 (rate of decrease, 13%) at 100 cycles/mm, a much smaller decrease (Fig. 1c). According to the ZEMAX simulation, the degrees of contrast depicted in Figs. 1c and 1d were similar. On the basis of this result, a 1.0-mm hole was made in the center of the ICL optic using a trepan, and the implantation experiment was subsequently conducted (Fig. 2).

Fig. 2 A hole of 1.0 mm diameter was prepared in the center of the optic using a trepan.
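The turbidity-area measurement just described is easy to reproduce programmatically. Below is a minimal sketch, assuming an 8-bit grayscale image of the anterior lens surface already cropped to the optic, with turbid pixels dark after thresholding; the threshold, the file path, and the intermediate per-eye values are hypothetical (only the 4.7-31.3% range is taken from the Results), and SciPy's two-sample t-test stands in for the Student's t-test used in the paper.

```python
import numpy as np
from PIL import Image
from scipy import stats

def turbid_area_ratio(path, threshold=64):
    """Fraction of the cropped optic area occupied by turbid (dark) pixels."""
    gray = np.asarray(Image.open(path).convert("L"))  # 8-bit grayscale
    return float((gray < threshold).mean())           # black pixels / all pixels

# Hypothetical per-eye turbid-area percentages for the two groups.
unperforated = [4.7, 11.2, 17.9, 24.6, 31.3]
perforated = [0.0, 0.0, 0.0, 0.0, 0.0]

t_stat, p_value = stats.ttest_ind(unperforated, perforated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")         # significant at p < 0.05
```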
Crystalline lens turbidity area. In the slit-lamp microscope images 1 month after ICL implantation, both the unperforated and the perforated lenses were free of turbidity in the crystalline lens (Fig. 3a,d). However, 3 months after the ICL was implanted, the unperforated ICL had induced turbidity on the anterior surface of the crystalline lens, while the perforated ICL showed no turbidity (Fig. 3b,e). After eyeball extirpation, stereoscopic microscope examination showed light turbidity consistent with the turbidity on the anterior surface of the crystalline lens observed using a slit-lamp microscope for the unperforated ICL, but no turbidity or any other abnormality was found with the perforated ICL (Fig. 3c,f). The proportion of each turbid area is shown in Table 1. The cases shown in gray were excluded from the review because there were postoperative complications. All cases in which unperforated ICLs were implanted had turbidity covering 4.7-31.3% of the area of the anterior crystalline lens, located in its center, while all cases in the group with perforated ICLs had no turbidity whatsoever; there was a statistically significant difference in the area of turbidity between the two groups.

Fig. 3 a Slit-lamp microscope image 1 month after an unperforated ICL was implanted. The crystalline lens has no turbidity. b Slit-lamp microscope image 3 months after an unperforated ICL was implanted. The anterior surface of the crystalline lens shows turbidity. c Stereoscopic photomicrograph of the anterior surface of the crystalline lens in an eye with an unperforated ICL. The anterior surface of the crystalline lens exhibits slight turbidity. d Slit-lamp microscope image 1 month after a perforated ICL was implanted. The crystalline lens shows no turbidity. e Slit-lamp microscope image 3 months after a perforated ICL was implanted. The crystalline lens shows no turbidity. f Stereoscopic photomicrograph of the anterior surface of the crystalline lens in an eye fitted with a perforated ICL. The anterior surface of the crystalline lens is not turbid.

Table 1 Fractions of the total anterior surface of the crystalline lens occupied by turbid material in eyes bearing perforated and unperforated ICLs. t-test, P<0.05*. The cases with gray shading were excluded from the review because there were postoperative complications (2 cases of severe postoperative inflammation, 1 of displacement of the haptics). The turbid areas show a statistically significant difference.

Staining of the capsular bag by pigment injection. Pigment was injected into the anterior part of the vitreous body to compare the degree of staining on the anterior surface of the crystalline lens. With the unperforated ICL, the posterior capsule of the crystalline lens was strongly stained, but the anterior capsule only slightly (Fig. 4a). On the other hand, with the perforated ICL, the posterior capsule of the crystalline lens was strongly stained and its anterior capsule was also comparatively well stained (Fig. 4b).

Fig. 4 a Stereoscopic photomicrograph of the anterior surface of the crystalline lens in an eye containing an unperforated ICL. The anterior surface of the crystalline lens was slightly stained by Evans blue. b Stereoscopic photomicrograph of the anterior surface of the crystalline lens in an eye containing a perforated ICL. The anterior surface of the crystalline lens was relatively well stained by Evans blue.

Histopathological findings. Light-microscopic observation of the equator of the crystalline lens in the eyes into which an unperforated ICL had been implanted showed a mixture of light cells and dark cells among the crystalline lens epithelial cells (Fig. 5a). Electron-microscopic observation of these epithelial cells showed no remarkable changes in the cytoplasm of either light or dark cells; however, enlarged cisterns of the granular endoplasmic reticulum were observed in the dark cells (Fig. 5b, 6). Examination at high magnification showed accumulations of stringy material in the cisterns (Fig. 6). In the dark cells, unlike the light cells, there were many filaments in the cytoplasm. In the stella lentis iridica, light cells and dark cells were mixed, as they were at the equator, but the proportion of dark cells was greater than in the latter region (Fig. 7a). Electron-microscopic examination showed no structural abnormality in the organelles of the epithelial cells, but the contrast between light cells and dark cells was marked, and enlarged cisterns were present in the granular endoplasmic reticulum of the dark cells (Fig. 7b). In the layers of fiber cells under the epithelial cells, considerable amounts of granular material suggesting organellar denaturation were observed (Fig. 7d). The cortical fibers in the stellae lentis hyaloidea of both the eyes with perforated and those with unperforated lenses exhibited no abnormality (Fig. 7c). In the eyes containing an ICL with a central hole, there were few dark epithelial cells at the equator of the crystalline lens (Fig. 5c). Electron-microscopic observation showed no structural abnormality in the organelles of the epithelial cells of the crystalline lens and almost none of the enlarged cisterns in the granular endoplasmic reticulum that were observed in the eyes with an ICL without a hole (Fig. 5d).

Fig. 5 b Electron-microscope image of epithelial cells at the equator of the crystalline lens in an eye with an unperforated ICL. Neither light nor dark cells exhibit any structural abnormality in their organelles; however, in the granular endoplasmic reticulum of the dark cells there are many enlarged cisterns. c Photomicrograph of epithelial cells at the equator of the crystalline lens in an eye with a perforated ICL. Few dark cells can be seen. d Electron-microscope image of epithelial cells at the equator of the crystalline lens in an eye bearing a perforated ICL. There is no structural abnormality in the organelles.

Fig. 6 Electron-microscope image of dark cells at the crystalline lens equator in an eye bearing an ICL without a hole. Enlarged cisterns in the granular endoplasmic reticulum are present, and it can be seen that they contain a stringy material. Many fibrous bodies can be seen in the cytoplasm.

Fig. 7 a Photomicrograph of epithelial cells in the stella lentis iridica in an eye with an unperforated ICL. There is a mixture of light and dark cells, and the cells appear to have a markedly irregular morphology. b Electron-microscope image of epithelial cells in the stella lentis iridica in an eye containing an unperforated ICL. The organelles show no structural abnormality, but there is a distinct contrast between light and dark cells, and the latter have many enlarged cisterns in the granular endoplasmic reticulum. c Photomicrograph of the stella lentis hyaloidea in an eye bearing an unperforated ICL. No particular abnormality can be seen. d Electron-microscope image of subepithelial fiber cells in the stella lentis iridica in an eye fitted with an unperforated ICL. The cells vary in size and contain granular material.
In the stella lentis iridica, as well as at the equator, scarcely any dark cells were seen, and the rest of the crystalline lens had a normal structure (Fig. 8a). In the stellae lentis hyaloidea of eyes with perforated and with unperforated lenses, the cortical fibers showed no abnormality (Fig. 8b).

Fig. 8 a Photomicrograph of epithelial cells in the stella lentis iridica in an eye containing a perforated ICL. Small dark cells are present. b Photomicrograph of the stella lentis hyaloidea in an eye fitted with a perforated ICL. No particular abnormality can be seen.

Discussion. Fujisawa et al. have reported that ICLs with a 3.0-mm hole in the center of the optic improved the aqueous humor circulation, and that there was no turbidity in any instance beneath the anterior capsule, but they did not examine the optical character of the lens [2]. It is a matter of course that a 3.0-mm hole should affect the optical character of the lens. We therefore examined the clinical application of the ICL with a central optic hole: we calculated the maximum diameter of the central optic hole that would not affect the optical character of the lens, and conducted an experiment in which such perforated lenses were implanted into porcine eyes in order to observe the changes in aqueous humor dynamics and to see whether a secondary cataract would occur. To calculate a hole diameter that would not affect the optical character of an ICL, a computer simulation was performed using the commercial optical design software ZEMAX, and the results were compared with those of ICLs without holes. With a 1.0-mm-diameter perforated ICL, the rate of decrease in contrast was small, approaching the case of unperforated ICLs. This result suggests that, to maintain good image formation, it is desirable for the central hole in the ICL optic to be 1.0 mm or less in size. In this experiment, a trepan of the minimum diameter (1.0 mm) was selected, and a hole was made in the center of the ICL optic to implant the ICL.
One focus of this study was an optical simulation model, with the eventual aim of investigating the usefulness of perforating an ICL for implantation into a human eye without aggravating aberration or degrading the quality of the retinal image. The perforation is intended to correct the aqueous humor circulation dynamics currently encountered behind an unperforated ICL. The next stage of this research will seek to demonstrate in humans that the degree of aberration and the quality of the retinal image achieved with a perforated lens are acceptable. In addition, it is necessary to develop a new design in which the lens does not become displaced, because displacement would probably prevent it from correctly fulfilling its function. All cases of unperforated ICL had turbidity with an area ratio of 4.7-31.3% in the center of the anterior surface of the crystalline lens. In contrast, all cases of perforated ICL, with a 1.0-mm hole in the center of the optical part, were entirely free of turbidity. Fujisawa et al. have discussed possible reasons for the far higher speed and frequency of cataract development in minipig eyes than in human eyes, and reported that turbidity with an area ratio of 15 to 21% developed in all pigs in which an unperforated ICL of lens diameter 13.0 mm was implanted into the eye. They have also reported that, for an ICL with a 3.0-mm hole in the center of the ICL optic, no eye showed turbidity, because the aqueous humor perfusion on the anterior surface of the crystalline lens increased, resulting in an adequate provision of the substances needed for the metabolic activity of the crystalline lens [2]. The present experiment yielded findings similar to those in the report by Fujisawa et al. In other words, unperforated ICLs cause cataracts, but placing a hole in the center of the optic appears to prevent the development of a secondary cataract.
In addition, it was found that, even if the hole diameter is only 1.0 mm, cataracts can be prevented. The mechanism of cataract prevention is considered to be related to the aqueous humor circulation. When pigment was injected into the anterior part of the vitreous body and staining of the anterior capsule of the crystalline lens was assessed, the anterior capsule was only slightly stained in eyes with an unperforated ICL, while in eyes with a perforated ICL it was relatively well stained. This slightly stained state is thought to result from changes in the aqueous humor circulation dynamics caused by implantation of the unperforated ICL, and from inadequate perfusion of the aqueous humor through the space between the anterior surface of the crystalline lens and the ICL. In contrast, it is considered that the 1.0-mm hole in the center of the ICL optic allowed aqueous humor to perfuse from the posterior side onto the anterior surface of the crystalline lens and out through the hole into the anterior chamber, resulting in staining of the anterior capsule. This indicates that the aqueous humor spreads out between the anterior surface of the crystalline lens and the ICL. In this experiment, periodic examinations were conducted using the slit-lamp microscope, and each time the anterior ocular segment was photographed and recorded (Fig. 9a, b). In all cases, whether the eye contained an unperforated or a perforated ICL, the distance between the anterior surface of the crystalline lens and the ICL (vaulting) was maintained at about 1/4 to 1/3 of the corneal thickness. Since the thickness of the cornea of the minipigs used in this experiment was about 800 μm, this corresponds to roughly 200-270 μm of vaulting, whereas at least 150 μm of vaulting was needed. In humans, one report states that the vaulting should be 0.15 mm or more [3], and thus the present vaulting was considered adequate. In addition, as a rough qualitative observation, at examination time the eyes with perforated ICLs appeared to have narrower vaulting than those with unperforated ICLs. In the latter eyes, the vaulting was slightly enlarged because of stagnation of aqueous humor between the crystalline lens and the ICL, whereas in eyes with a perforated ICL the space was narrowed because the aqueous humor flowed out via the hole in the optic. These results suggest the possibility that a small difference in vaulting occurs. Usually, in the crystalline lens, a state of high K⁺ and low Na⁺ is maintained by Na-K ATPase. It has been reported that this concentration difference generates an electrochemical potential of 24 mV at the epithelial side, and that ions and nutrients such as amino acids are incorporated into the crystalline lens [10]. In the Nakano mouse (cac mouse), which is well known as a hereditary cataract model, it has been reported that, since its cataracts result from the inhibition of Na-K ATPase,

Fig. 8 a Photomicrograph of epithelial cells in the stella lentis iridica in an eye containing a perforated ICL. Small dark cells are present. b Photomicrograph of the stella lentis hyaloidea in an eye fitted with a perforated ICL. No particular abnormality can be seen.

Fig. 9 a Slit-lamp microscope image of an eye bearing an unperforated ICL. The ICL is not in contact with the crystalline lens, and the vaulting measures about 1/4 to 1/3 of the thickness of the cornea. b Slit-lamp microscope image of an eye containing a perforated ICL. The ICL is not in contact with the crystalline lens, and the vaulting measures about 1/4 to 1/3 of the thickness of the cornea.
swelling, disruption, and vacuolation of the fiber cells in the stellae lentis iridica and hyaloidea occur [6]. In addition, in relation to changes in the crystalline lens due to trauma, it has been reported that when trauma occurred, reduplication of the crystalline lens also took place [7,18]. In this study, the cataracts caused by implantation of unperforated ICLs were always anterior subcapsular cataracts. The histopathological findings were a mixture of light and dark epithelial cells on the crystalline lens and many enlarged cisterns in the granular endoplasmic reticulum. The disturbance extended not only to the epithelial cells but also to the fiber cells. Normal fiber cells have almost the same shape and size, while the fiber cells in the crystalline lens of an eye fitted with an unperforated ICL varied in size and contained much granular material. In short, we consider that the tissue abnormalities that we observed are similar to the abnormalities in cataracts resulting from the inhibition of Na-K ATPase, rather than to those in cataracts due to trauma. This is thought to be because the presence of an unperforated ICL suppressed adequate perfusion of the aqueous humor, and the metabolic activity in the epithelial cells of the crystalline lens was disturbed, which resulted in a lack of normal protein synthesis and in changes in the epithelial cells. Fujisawa et al. have reported that cataracts are caused by degeneration of the epithelial cells of the crystalline lens and by the consequent enlargement of the cisterns of the granular endoplasmic reticulum [2]. Although these changes have reportedly been caused by prolonged circulatory disturbance of the aqueous humor, no such dramatic changes were observed in this study. However, the abnormality in the granular endoplasmic reticulum is consistent in type, though not in amount, and we consider that it is involved in some metabolic disturbance. Turbidity occurred in the stella lentis iridica only, but histopathological examination showed the same abnormality in the epithelial cells at the equator of the crystalline lens, although it was milder than that in the stella lentis iridica. It has been shown that in a normal monkey eye, ³⁵S-L-cysteine is absorbed from near the equator of the crystalline lens, especially the germinative zone, into the crystalline lens, and is transported to the stellae lentis iridica and hyaloidea [15]. In another report, it was shown that when iodoacetic acid, which is harmful to the crystalline lens, was administered intraperitoneally to rats, it was transferred via the blood from the ciliary body into the posterior chamber and was absorbed by the epithelial cells at the equator of the crystalline lens [14]. In short, the equator of the crystalline lens plays an important role in the absorption of nutrients. Given this, it is possible that the metabolic disturbance was accelerated by changes in the epithelial cells at the equator. On the other hand, for the stella lentis hyaloidea, neither the eyes with unperforated ICLs nor those with perforated ICLs had any histopathological abnormality. This is a very interesting finding when considered together with cell growth factors.
One of the cell growth factors, transforming growth factor β (TGF-β), is present in the anterior chamber and the vitreous body, and promotes or inhibits cell growth depending on the types and quantities of the cells affected. Although it has been reported that anterior subcapsular cataract is induced by TGF-β [5,12], it mainly modulates cell differentiation. TGF-β absorbed from the posterior capsular side is transferred to the stella lentis iridica, where it acts on the epithelium of the crystalline lens, causing the epithelial cells to differentiate into fiber cells. In this experiment, when either a perforated or an unperforated ICL was implanted, there was no histopathological abnormality in the stella lentis hyaloidea, where the absorption and transport of TGF-β occur, but unperforated ICLs caused mild differentiation abnormalities in the stella lentis iridica and at the equator. It may be that, although TGF-β absorption from the stella lentis hyaloidea was normal, inhibition of TGF-β absorption from the anterior chamber led to this mild differentiation abnormality. This study indicated that, in eyes bearing unperforated ICLs, the epithelial cells of the stella lentis iridica and the equator of the crystalline lens consisted of a mixture of light and dark cells, and suggested that the cisterns of the granular endoplasmic reticulum of these dark cells became enlarged as a result of disturbances of normal protein synthesis. Although the fiber cells showed a wide variation in size, and although degenerated organelles were found here and there, there was no abnormality in the stella lentis hyaloidea. On the other hand, in eyes in which perforated ICLs had been implanted, there was little abnormality in the stella lentis iridica or hyaloidea, or at the equator. In short, there is no need for a hole of as much as 3 mm in the center of the ICL optic, because a hole of only 1.0 mm in diameter adequately increased the aqueous humor perfusion volume on the anterior surface of the crystalline lens, resulting in the prevention of cataract. In addition, it was found that a hole of 1.0 mm in diameter in the center of the optic had no optical effect on vision.
On the effect specificity of accessory gland products transferred by the love-dart of land snails

Sexual selection favours the evolution of male bioactive substances transferred during mating to enhance male reproductive success by affecting female physiology. These effects are mainly well documented for separate-sexed species. In simultaneous hermaphrodites, one of the most peculiar examples of transfer of such substances is via stabbing a so-called love-dart in land snails. This calcareous stylet delivers mucous products produced by accessory glands into the mate's haemolymph. In Cornu aspersum, this mucus temporarily causes two changes in the recipient. First, the spermatophore uptake into the spermatophore-receiving organ, called the diverticulum, is probably favoured by contractions of this organ. Second, the amount of stored sperm increases through contractions of the copulatory canal, which close off the tract leading to the sperm digesting organ. However, it has yet to be determined whether these effects are similar across species, which would imply a common strategy of the dart in increasing male reproductive success. We performed a cross-reactivity test to compare the in vitro response of the diverticulum and copulatory canal of C. aspersum (Helicidae) to its own and other species' mucus (seven helicids and one bradybaenid). We found that the contractions in the diverticulum were only induced by dart mucus of certain species, while the copulatory canal responded equally to all but one species' mucus tested. In addition, we report a newly-discovered effect causing the shortening of the diverticulum, which is also only caused by dart mucus of certain species. The advantage seems to be a reduction of the distance to the sperm storage organ. All these findings are the first to shed light on the evolution of the different functions of accessory gland products in dart-bearing species. These functions may be achieved via common physiological changes caused by the substances contained in the dart mucus, since the responses evoked were similar across species' mucus. Moreover, while these substances can act similarly in separate-sexed species and in simultaneous hermaphrodites, differences may occur in their evolution between the two sexual systems.

Background
The transfer of male bioactive substances during mating is ubiquitous [1,2]. So far these substances are especially well characterised in the genus Drosophila, being highly divergent and species-specific [3][4][5][6][7][8][9]. The evolution of these substances is favoured by sexual selection since they enhance male reproductive success by affecting the behaviour or physiology of the female. Changes in the female body, such as inducing egg laying [10] and decreasing female willingness to remate [11], are common examples of such male manipulation. However, these effects might contrast with the interests of females over reproduction, leading to sexual conflict [2,12]. In internal fertilizers, male manipulation is achieved in two ways. The bioactive substances are either transferred along with the sperm in the seminal fluid or separately [13]. A way to employ the latter strategy is to inject the substances through the partner's skin [14]. This occurs both in separate-sexed and hermaphroditic species (such as scorpions, salamanders, earthworms and sea slugs; [13]). One of the most prominent examples is the love-dart of simultaneously hermaphroditic land snails (gastropods), which has received growing attention recently (e.g. [15][16][17]).
Simultaneously hermaphroditic snails are male and female at the same time, and the love-dart is a stylet-like structure that is mainly made of a crystalline form of calcium carbonate and housed in a muscular dart sac [18]. The morphology of darts depends on the species, and so does the number of darts possessed (reviewed by [19]). During courtship, the dart is pushed out of the sac (referred to as dart shooting behaviour), exits via the genital pore and pierces the right flank of the partner's body wall. When the dart is expelled, it is coated with mucus produced by accessory glands that are located adjacent to the dart sac [18]. Once this mucus is introduced into the haemolymph of the partner, it is distributed throughout the circulatory system [20]. For Cornu aspersum (Helicidae), which is the only species investigated in vitro in this respect, the mucus has been reported to cause temporary changes in two female reproductive organs of the partner [21]. The first occurs in the diverticulum, a blind-ended duct receiving the spermatophore, which is connected to the tract leading to the bursa copulatrix, the sperm digesting organ (Fig. 1). Depending on the species, the diverticulum can be present or not [22]. When absent, the spermatophore of the mate is received either in the bursa copulatrix tract or in the oviduct [22]. The mucus has been shown to increase the number of contractions in the diverticulum, probably allowing an easier uptake of the spermatophore [21]. The second change occurs in the copulatory canal (called the pedunculus of the bursa copulatrix by [23]), which bifurcates into the diverticulum and the bursa copulatrix tract, connecting them to the atrium, placed behind the genital pore (Fig. 1). Under the influence of mucus, contractions of the copulatory canal make the entrance to the bursa copulatrix tract less accessible, permitting more sperm to avoid digestion and reach the sperm storage organ [21]. As a result, the successful dart shooter more than doubles its paternity in the partner's eggs [24]. The above changes represent the better known ways of affecting the partner's physiology via the action of the mucus. Other examples are found in Euhadra quaesita (Bradybaenidae), whose dart mucus suppresses the partner's willingness to remate and induces egg laying in small individuals [25]. All these findings suggest a common function of the dart in enhancing male reproductive success across families. Whether this function is achieved via common physiological changes in different species remains unknown.

To gain insight into the evolution of male manipulative function across species, we investigated whether the effects induced by mucus carried on the love-dart in land snails are conserved or species-specific. We did so by assessing the response to dart mucus using a cross-reactivity test. This test is important to assess the diversity in effects of mucous products in related species, which would be missed if mucus were tested only on the species itself. Thus, the mucus of the species tested might not cause the same reaction when applied to their own reproductive organs.

Fig. 1 The part of the reproductive system of C. aspersum that was used in this study. This includes the genital pore, where the partner's spermatophore enters; the atrium; the copulatory canal; the bursa copulatrix, which is responsible for sperm digestion; and the diverticulum, which receives the spermatophore. This is the preparation used for each experimental trial. The three black squares (±2 mm²) of electrical tape glued onto the diverticulum, copulatory canal and atrium were used as markers. The position of these markers was recorded with a webcam, then tracked with the DLTdv5 marker tracking software to measure the response of the preparation to each species' mucous extracts added to the saline bath (see Additional file 1: Movie 1). Not illustrated: the small dish containing 2 ml saline solution where the preparation was placed, and the pins on both sides along the length of the diverticulum, in the Sylgard base of the small dish, to make the measurements comparable.
This may be due to counter-adaptation to the bioactive substance taking place, making the female organs less reactive, which usually occurs in response to sexual conflict [26]. It should be noted that with our approach we cannot distinguish whether the observed effects are caused by similar substances; mucous products can be different between species but still maintain the same manipulative function [9,27,28]. Hence, for this study, we used seven species of the Helicidae and one of the Bradybaenidae family to test whether the response of the female reproductive tract of C. aspersum (Helicidae) to dart mucus is the same. This was done by comparing the in vitro reaction of C. aspersum's diverticulum and copulatory canal to mucous accessory gland products of these different species. We also present the new technique developed to quantify the physiological response and its intensity. In addition, we report a newly-discovered effect of the mucus on the diverticulum.

Methods

Study species
Snails of the species Cepaea nemoralis, Cepaea hortensis and Arianta arbustorum were collected between June and September 2013 in Almere, the Netherlands. Cornu aspersum, Helix pomatia, Theba pisana and Eobania vermiculata were obtained from the snail farm EuroHelix Chierasco, Italy, in September 2013. The species Helix lucorum and Fruticicola fruticum were acquired from Thessaloniki, Greece, in October 2013. These species belong to the Helicidae family, besides the bradybaenid F. fruticum. All species were kept at 20 °C with a reversed photoperiod of L:16 h D:8 h at 60 % humidity. Adult snails were kept individually in plastic boxes lined at the bottom with moist paper (Helix pomatia and Helix lucorum: 17.5 cm × 11 cm × 13 cm; the smaller species: 11.5 cm × 11.5 cm × 5 cm), and isolated for at least two weeks to prevent them from mating and consequently emptying their mucous glands. The snails were cleaned and fed with lettuce and snail feed as a source of calcium twice a week (the Chase mix; R. Chase, personal communication: 50 % chicken feed for growing chickens and 50 % grain mix of calcium carbonate 18 %, soya protein 10 %, wheat flour 20 %, wheat bran 10 %, corn flour 16 %, barley flour 16 %, ground sunflower seeds 6 %, calcium phosphate 2 %, ground vitamin mix 1 %, methyl paraben to retard mould 1 %).

Cross-reactivity test
On each experimental day, one C. aspersum snail was anesthetized with 50 mM MgCl₂ and the genital atrium, diverticulum, bursa copulatrix and its tract were dissected out. Besides the atrium, these organs were also the targets of stimulation by C. aspersum mucus [21] (the atrium, leading up to the genital pore, was included in case it was activated by mucus of other species). These organs, hereafter jointly referred to as the "preparation", were kept in a small dish containing 2 ml of saline solution at pH 7.8 (control saline [29]), equivalent to the amount of haemolymph of C. aspersum [30].
In order to have comparable measurements of the diverticulum's movements, several pins were placed on both its sides in the Sylgard base of the small dish (see also [21]). Subsequently, three squares of black electrical tape of approximately 2 mm² were glued onto the diverticulum, copulatory canal and atrium with tissue adhesive (TA5®). These black squares will be referred to as "markers" (Fig. 1). Each mucous gland extract was obtained by dissecting the mucous glands associated with the dart sac out of one anesthetized snail and crushing them with a plastic pestle in 0.5 ml saline solution. At the beginning of each experimental day, before testing the different types of mucous extracts, the preparation had a 30 min. adjustment period in the saline. Then, portions of gland extracts were tested by addition to the saline bath of the preparation. Every portion contained 2.2 mg of mucus, which represents a biologically relevant dose since it is equivalent to the amount of mucus carried by the dart of C. aspersum (this amount was calculated from the difference between the dart's wet weight before and after dart shooting [21]). The mucous extract of C. aspersum was used as a positive control once at the beginning and once at the end of each experimental day to check whether there was variation in response over time for each organ. The mucous extracts of five other species were chosen randomly, as was the order in which they were tested between the two controls. This means that not all the species were tested for each preparation, which created different sample sizes between species. To prevent pseudoreplication, each extract was used only once. For each trial, i.e. each mucus tested, the control activity of the preparation was recorded for 10 min. with a webcam (Logitech® HD Pro Webcam c920). Subsequently, a mucous extract was added and allowed to take effect for 5 min. before recording the response activity for another 10 min. [21]. Between trials the preparation was washed three times with saline and allowed to rest for 5 min. in new saline solution. In total, each preparation was used for 3.5 h, comparable to the amount of time employed by Koene and Chase [21] in their experiments with the preparations. The videos recorded were analysed with the DLTdv5 software [31], which was set to auto-track the position of the markers for each frame. The resulting output coordinates were used to obtain graphs of the displacement of the markers every 5 sec., where the difference between time points was calculated as the Euclidean distance between two points. The measurements obtained from these graphs are: the number of contractions induced by mucus, calculated as the difference between the numbers of contractions counted in the response period and the control period; the intensity of contractions, to assess the potency of the extracts, measured as the maximum displacement reached in each trial (the maximum displacement of the diverticulum can be approximately 1.5 cm, equal to the distance between the pins placed along the sides of the diverticulum, and approximately 1 cm for the copulatory canal); and the percentage of times each organ reacted to the different types of mucus, based on the instances in which the number of contractions in the response period was higher than that in the control period. Some additional criteria were applied in order to count relevant contractions (see also [21]): first, a threshold of 25 % difference was fixed between the minimum and maximum point for each trial; second, contractions that lasted more than 3 min. were not counted; third, peaks of contractions had to be separated by at least 15 sec.
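To make the counting procedure concrete, the following is a minimal sketch of how the criteria above could be applied to the tracked coordinates. It is our illustration, not the analysis script actually used; the function name, the use of SciPy's peak finder and the way the 25 % threshold is anchored to the trial's displacement range are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def count_contractions(xy, dt=5.0):
    """Count contraction peaks in one 10-min. recording of a tracked marker.

    xy : (N, 2) array of marker coordinates sampled every `dt` seconds,
         e.g. taken from the DLTdv5 auto-tracking output.
    """
    # Displacement between successive time points (Euclidean distance).
    disp = np.linalg.norm(np.diff(xy, axis=0), axis=1)

    # Criterion 1: only count excursions above 25 % of the trial's
    # min-to-max displacement range.
    height = disp.min() + 0.25 * (disp.max() - disp.min())
    # Criterion 2: discard events lasting longer than 3 min.
    max_width = int(round(3 * 60 / dt))
    # Criterion 3: peaks must be separated by at least 15 sec.
    min_sep = max(1, int(round(15.0 / dt)))

    peaks, _ = find_peaks(disp, height=height,
                          distance=min_sep, width=(1, max_width))
    return len(peaks)

# Mucus-induced contractions = response-period count minus control count:
# n_induced = count_contractions(xy_response) - count_contractions(xy_control)
```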
Figure 2 shows an example of a video track used in our analyses, and Movie 1 (see Additional file 1) shows an example of the recorded activity of the preparations.

Fig. 2 Example of a video track. The x-axis shows the duration of each trial, where a control period is recorded as well as a response period after a 5 min. pause in which the mucous extract was added. The y-axis indicates a measure of organ displacements (it can be approximately 1.5 cm for the diverticulum and approximately 1 cm for the copulatory canal). This measure is based on the coordinate output of the marker tracking software, expressed in millions, and shown on the y-axis as values divided by 10⁶. The displacements are counted as contractions and marked with asterisks. In this case the mucus induced one more contraction compared to the control. The maximum intensity reached by the contractions is shown by a dotted line. To count only relevant displacements, a threshold of 25 % difference was fixed between the minimum and maximum point for each trial (dotted line).

Shortening effect on diverticulum
Although it was not the main focus of the current study, while performing the experiment we observed an effect of the mucus never described before: within seconds of the addition of certain species' mucous gland extract, the diverticulum became shorter (see Additional file 2: Movie 2). This effect remained consistent throughout the response period. Hence, two pictures of the preparation were taken with the webcam, once after the control period (when the mucous extract had not yet been added) and once after the response period (15 min. later). These pictures were used to assess the length of the diverticulum with ImageJ by measuring its length from the tip to the branching of the bursa copulatrix tract. Three length measurements of the diverticulum were made on each side of the organ, and the average of these measurements was used. Percentages of length reduction caused by each species' mucus were also calculated.

Phylogeny
To address the phylogenetic relationship between the species used in this study, the maximum likelihood (ML) method was applied using partial 28S nuclear gene sequences (sequences from [32]). Discus rotundatus was chosen as the outgroup, and Euhadra sandai was added as a second species (besides Fruticicola fruticum) from the Bradybaenidae family. The ML tree was built with MEGA5 following the protocol of Hall [33]. The sequences were aligned with MUSCLE, resulting in 726 reliably aligned nucleotide positions. The Tamura 3-parameter model with a gamma distribution was the best substitution model, obtained by ranking the models by the lowest Bayesian Information Criterion (BIC). Partial deletion of gaps/missing data was applied and, to estimate the reliability of the tree, 1,000 bootstraps were performed. Note that a full reconstruction of the phylogeny of land snails goes beyond our purpose here; our intention is to use the phylogenetic information to test for phylogenetic signal in our data. This would indicate whether the mucus of closely-related species evokes similar responses.

Statistical analyses
Data from the cross-reactivity test were analyzed as follows. Since the atrium showed very low reaction, also in response to other species' mucus, it was excluded from further analyses as a non-meaningful response.
Data from experimental days were left out of the analysis when either the diverticulum or the copulatory canal did not respond to any species' mucus (this happened twice for each organ; these preparations responded normally to the control). To account for dependency in our data, we included potential order effects by testing, within a preparation, whether the species' mucus tested before affected the response to a specific species' mucus. We did so using a Generalized Linear Model (GLM) with a Poisson distribution and log link function for the number of contractions, and a normal distribution and log link function for the intensity of contractions. In these analyses we included Species and Previous species tested as factors and compared the models with or without the second factor based on the Akaike criterion.
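As an illustrative sketch of this model comparison (hypothetical column names and toy data, not the authors' script; statsmodels' formula interface is assumed), the Poisson GLM for the number of contractions could be compared with and without the order-effect factor as follows:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical tidy table: one row per trial, with the contraction count,
# the mucus donor species, and the species tested just before it within
# the same preparation ('none' for the first trial of a day).
df = pd.DataFrame({
    "n_contr":      [2, 0, 3, 1, 4, 1, 2, 0],
    "species":      ["Ca", "Hl", "Ev", "Ca", "Ev", "Hl", "Ca", "Hl"],
    "prev_species": ["none", "Ca", "Hl", "Ev", "Ca", "Ev", "Hl", "Ca"],
})

# Poisson GLM (log link is the default) with and without the order factor.
base  = smf.glm("n_contr ~ species", data=df,
                family=sm.families.Poisson()).fit()
order = smf.glm("n_contr ~ species + prev_species", data=df,
                family=sm.families.Poisson()).fit()

# The model with the lower AIC is retained; if adding 'prev_species' does
# not improve the AIC, the trials can be treated as independent.
print(f"AIC without order term: {base.aic:.1f}, with order term: {order.aic:.1f}")
```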
To check whether the physiological preparations differed in their response (number of contractions and their intensity for both the diverticulum and the copulatory canal) across experimental days, we performed a Kruskal-Wallis test to compare days. To test whether for each organ there was variation in the number and intensity of contractions between the two time points at which the positive control was tested, we performed a Wilcoxon test for paired observations. For the other mucus types, we tested whether there was a difference in response depending on testing order using a Kruskal-Wallis test. To test whether the number and intensity of contractions in response to different mucus types diverged from the response to C. aspersum mucus, multiple comparisons were performed with the Steel method with control [34], because these data were not normally distributed. This test compares each species to the control species and is thus the non-parametric version of Dunnett's test with control. The percentage of times each organ reacted to the different types of mucus was compared to the average percentage response of the two positive controls with a Chi-square test. Data on the shortening effect were repeated measures, since the diverticulum length was measured at two time points. Since not all groups were normally distributed, and in order to include them all in the same statistical test, we log-transformed these data. Then, we performed a mixed ANOVA to test the effect of time on the length of the diverticulum. This test is regularly used to compare means between two or more independent variables of which one is a repeated measure [35]. Our two independent variables were Time and Species, of which Time was the repeated measure. In case of a significant interaction, we calculated the simple effect with Fisher's LSD adjustment to reveal for which species the factor Time had a significant effect. This test makes pairwise comparisons of the variable Time for each species. Finally, to test for dependency in our data, we performed a one-way ANOVA on diverticulum length at time 0 to assess whether this organ regained its original length after each trial. With the phylogenetic reconstruction, we tested for a phylogenetic signal in the response to mucus (number and intensity of contractions for both the diverticulum and the copulatory canal) and in the shortening effect (actual shortening in mm) by using Blomberg's K with the phylosig function in the R package phytools [36]. K indicates the degree to which a trait shows the phylogenetic signal predicted under Brownian evolution (K = 0 means that there is no phylogenetic signal, K < 1 means that closely-related species weakly resemble each other, and K > 1 indicates that closely-related species strongly resemble each other) [37]. To obtain p-values of K we used 1,000 randomizations.

Results

Cross-reactivity test
The mean number ± SD of contractions induced by the mucus of the two C. aspersum positive controls, per 10 min. recording, is 1.98 ± 2.23 for the diverticulum (N = 58), 1.28 ± 1.78 for the copulatory canal (N = 58) and 0.57 ± 1.25 for the genital atrium (N = 58). The mucus type tested, i.e. the factor Species, had a significant effect on the response of the diverticulum (GLM: number of contractions, d.f. = 8, p = 0.005; intensity, d.f. = 8, p < 0.001) but not of the copulatory canal (GLM: number of contractions, d.f. = 8, p = 0.134; intensity, d.f. = 8, p = 0.094), indicating that the variance in the response of the latter was similar between species. Moreover, these analyses showed that our data points were independent of each other, since the response to a species' mucus was not influenced by the mucus type tested before (factor Previous species tested); this was the case neither for the diverticulum nor for the copulatory canal. The number of contractions of the diverticulum in response to the positive control differed between the two time points, with the second control being higher, while its intensity showed a trend in the same direction (control 1 = 2.4 × 10⁶ ± 4.6 × 10⁶, control 2 = 4.0 × 10⁶ ± 5.2 × 10⁶; Wilcoxon: Z = −1.894, p = 0.058). However, no such pattern could be found in the testing order of the other mucus types (Kruskal-Wallis: H = 2.086, p = 0.72). To overcome the variation of the diverticulum response between the two time points, the response of the preparation to each mucus type (sample sizes are indicated in Fig. 3) was expressed as a relative response: the response minus the mean response of the two controls on that experimental day. The relative number of contractions of the diverticulum as well as the relative intensity differed in three species, H. lucorum, H. pomatia and F. fruticum (Steel method, F. fruticum: Z = −4.08, N = 19, p = 0.0004), with all of them being lower than the positive control (Fig. 3a, b). Except for C. hortensis (Steel method: Z = −4.21, N = 18, p = 0.0002), the relative number of contractions of the copulatory canal induced by the mucus types did not differ from C. aspersum mucus (Fig. 3c). For the relative intensity of the contractions, only H. lucorum and F. fruticum had lower intensity than C. aspersum (Steel method: Z = −4.50, N = 20, p < 0.001; Z = −2.72, N = 20, p = 0.0456, respectively) (Fig. 3d). The percentages of times the diverticulum and copulatory canal responded to each type of mucus are shown in Fig. 4. Only H. lucorum and F. fruticum diverticula reacted significantly fewer times compared to the control (χ²(1) = 9.08, p = 0.003; χ²(1) = 4.97, p = 0.026, respectively), and no differences between the percentages of response of the copulatory canal were found.

Fig. 3 Graphs illustrating the response of C. aspersum to each species' mucus. a Relative number of contractions of the diverticulum. b Relative intensity of contractions of the diverticulum. c Relative number of contractions of the copulatory canal. d Relative intensity of contractions of the copulatory canal. Mean ± SE is given for each graph. The graphs are obtained by subtracting the mean response of the control species from the response to each species' mucous extract for every experimental day. Note that the intensity is shown here as a percentage of displacement compared to the control species (e.g. lower percentages indicate a relatively lower response); the control species has 100 % response, and 0 % response is based on coordinates of preparations showing zero response. The symbol 1 indicates that C. aspersum is the baseline, hence its bar mean is zero. Numbers in parentheses indicate the sample size and the asterisks show the significance according to the Steel method (*p < 0.05, **p < 0.01, ***p < 0.001). For ease of comparison, species are listed according to their order of appearance on the phylogenetic tree.
Shortening effect on diverticulum
The diverticulum length was significantly reduced by the mucus of several species, among them E. vermiculata (N = 18, p < 0.001) (Fig. 5). The strongest length reduction was induced by the mucus of E. vermiculata, which made the diverticulum of C. aspersum approximately 20 % shorter (Table 1).

Table 1 The asterisk (*) refers to the two positive controls and their effect range.

After each trial, the diverticulum regained its original length (ANOVA: F = 0.601, d.f. = 9, p = 0.795), permitting the measurements to have the same baseline and the data points to represent similar measurements.

Phylogeny
In order to assess the phylogenetic relationship between the species tested, we performed a phylogenetic reconstruction analysis that resulted in a tree (Fig. 4) that is in agreement with the latest Helicoidea phylogeny [38].

Fig. 4 Summary of the results and phylogeny of the species used in this study. The ML phylogenetic tree is shown on the left side. The phylogeny is based on 726 nucleotide sites using the Tamura 3-parameter model with a gamma distribution. Bootstrap values above 50 % support are given at the nodes (1,000 replications). The length of the branches refers to the estimated number of changes occurring between nodes (see scale bar). Discus rotundatus is the outgroup and Euhadra sandai was chosen to support the Bradybaenidae. On the right side, a summary table of the results for both the diverticulum and the copulatory canal is shown (each row corresponds to a species used in this study). The first column of each organ shows whether it is present (Yes), absent (No) or facultative (Yes/No). The second column shows the percentages of response to the mucous gland extracts of each species. For the focal species Cornu aspersum, the average response of the two time points at which the mucus was tested is given (once at the beginning and once at the end of each experimental day). The asterisks refer to a significant Chi-square test when that species was compared to the focal species. The last two columns refer to the results shown in Fig. 3, indicating whether the response was similar to that of the focal species (Yes) or not (No).
In this way the female increases the distance to the sperm storage organ and the diverticulum is further elongated in species that possess darts with a greater surface area (potentially holding more mucus) [32]. As a result, the presence and length of the diverticulum depends on the species, whereas the copulatory canal is always a standard component of the reproductive system. This implies that the significantly lower reaction to the mucus of F. fruticum (Bradybaenidae) might be due to this species lacking a diverticulum and receiving the spermatophore in the vaginal duct instead [42]. Within the Helicidae all species tested possess a diverticulum. Although in C. hortensis, C. nemoralis and H. lucorum it is very short, and in H. pomatia either it is short or absent, the length showing The asterisk (*) refers to the two positive controls and their effect range a geographical trend [43]. Interestingly, the response of the diverticulum was significantly lower only for the two closely-related Helix species, with short or no diverticula. Thus, the substance causing the contraction of the diverticulum might have evolved only in the Helicidae since it is the only family possessing such an organ. The two Helix species may either never have had such a substance, have gained and subsequently lost it, produce a lower concentration of it, or have evolved really different proteins to achieve the same effects in their own species. Alternatively, the substance targeting this organ evolved in ancestors of the Helicidae but is not shared by the Bradybaenidae. Clearly, an extended study including more species of both families would clarify this point. For the copulatory canal, our video recording method allows us to describe the potential muscular mechanism induced by the mucus in more detail, even if the biochemical mechanism underlying this response is still unknown. Namely, we clearly show that the entrance to the bursa copulatrix tract is closed off by waves of contractions of the copulatory canal rather than via a permanent closure of the tract. This is supported by the fact that these contractions were visible once every 10 minutes on average. With one exception within the Helicidae, our results indicate that the above-described reaction of the copulatory canal did not differ when the mucous extract of C. aspersum or one of the other species was applied. The non-specificity of this response is supported by two helicid species, C. aspersum and T. pisana, that share the recently discovered love-dart allohormone (LDA) that is the peptide contained in the mucous glands responsible for the contraction of the copulatory canal [44]. In addition, this effect of the copulatory canal may occur also in the Bradybaenidae family: when Euhadra peliomphala is injected with conspecific mucus, the bursa copulatrix tract of this species becomes inaccessible [16]. Euhadra peliomphala is a bradybaenid, just like F. fruticum, the only non-helicid in our study. This indicates that the function of dart mucus in closing off the tract leading to the bursa copulatrix may be a common strategy. Worth noting is that within the Helicidae, besides the two Helix species, the lowest percentage of copulatory canal response is found for A. arbustorum, a species for which dart shooting is facultative [45] and sperm storage does not increase in snails hit by the dart [46]. This suggests that the copulatory canal might not be strongly influenced by mucus in this species. 
Overall, for both organs, we obtained a lower percentage of times the preparations responded to the mucous gland extract of C. aspersum than reported by Koene and Chase [21]. This lower responsiveness might be due to the markers glued on the surface of the preparation affecting its activity. However, the markers did not prevent the preparation from moving, since waves of contractions were seen to spread through the entire length of the organs. The low activity might instead reflect an overestimation by Koene and Chase in quantifying the response, since our recorded videos showed that the diverticulum could be displaced by either the contracting copulatory canal or the atrium, and vice versa (something that was hard to disentangle based on their recording method). In addition, a possible explanation for the difference in the response measured for the diverticulum at the two time points is that the acclimation period of the preparation in the saline solution immediately after dissection was not long enough for the diverticulum to adjust to the new solution. Consequently, this is likely to have influenced its stability (e.g. [47]). However, this does not affect the purpose of our study, since a relative response was used in our analyses and the responses of the preparations were all measured under the same experimental conditions. It should be noted that in our tests, substances other than dart mucus were not tested. We decided not to include these here for several reasons. Firstly, Koene and Chase [21] used control extracts to test for general effects of muscle, connective tissue and mucous extracts, their mucus control being the pedal gland that releases mucus onto the snail's foot during locomotion. That study demonstrated that the dart mucus was the most reliable substance to induce the contractions, but that the mucus of the pedal gland also evoked a similar effect in terms of number of contractions, even though the intensity was not measured. As they concluded, this indicates either that the active substance is a general constituent of mucus or that the two extracts cause a similar effect via different mechanisms. In this context, one can wonder how likely it is that mucus from an entirely different source enters the haemolymph via dart shooting. It is known that the love-dart is exclusively used to deliver mucous products into the mating partner (e.g. snails with excised mucous glands transfer a dry dart [20]). However, mucus present on the body wall might enter along with the mucus-covered dart, albeit in insignificant quantities considering the small area of body wall that is wounded. Secondly, even if mucus from other sources also evokes a similar effect, this only indicates that the active component may also be used elsewhere in the body, in a different context than dart shooting. This is strongly suggested by the fact that the LDA precursor (see above) resembles buccalin precursors, which are known to be involved in the modulation of muscle contractions in molluscs [44]. Moreover, the response induced in different parts of the body may also vary in strength. For example, in the grasshopper Melanoplus sanguinipes all male tissue extracts (e.g. brain, ventral nerve cord, haemolymph, accessory glands) induced contractions of the female oviduct with different degrees of response, the highest caused by the accessory gland complex [47].
Note that Koene and Chase [21] did not measure whether the intensity of contractions induced by pedal gland extracts was similar to that induced by dart mucus, hence a difference in intensity cannot be excluded. Thirdly, previous work on the focal species (C. aspersum) has repeatedly reported that the mucous products delivered by the dart into the mating partner's haemolymph are the ones responsible for the physiological changes that ultimately increase the paternity of the dart user, for example compared to controls with saline solution (e.g. [24,48]). Given that most of the later studies did not include additional controls for nonspecific mucus effects, future work should aim to test such controls whenever relevant (e.g. when finding a new effect caused by dart mucus). Finally, with our study we now go beyond a single-species approach: we compare the response of C. aspersum to its own dart mucus with its response to that of other species in order to assess how specific the induced physiological changes are between different dart-bearing species. The proper way to test this is to contrast the known response (to mucus of the own/focal species) with the response to mucus from other species. Ideally, future research would include more recipient species in order to extend our understanding of the specificity of such male accessory gland products. While the reactions of the diverticulum and copulatory canal were similar to those described by Koene and Chase [21], since contractions of both organs were easily observed simultaneously, a new response causing length reduction of the diverticulum is presented here for the first time. There is an advantage to producing such an effect, since there are two possibilities for sperm to escape the sperm digesting organ and reach the oviduct: either elongating the spermatophore or shortening the organ in which it is received [43]. A longer spermatophore tail, for example, protrudes into the vagina. Thus, sperm can safely exit the spermatophore through the tail [49]. Interestingly, the mucus of Eobania vermiculata causes the strongest shortening of C. aspersum's diverticulum (almost 20 % of the total length). Among the species tested, E. vermiculata is the only one to possess a relatively long diverticulum with respect to its spermatophore-producing organs [50]. To overcome the difficulties that this long diverticulum imposes on the male, since the distance to the sperm storage organ increases, E. vermiculata could potentially benefit the most from the evolution of such a shortening effect. However, whether this advantage occurs within this species remains to be tested. Worth noting is that F. fruticum, belonging to a family without a diverticulum, does not cause the shortening effect, strengthening our idea that substances targeting this organ may be exclusive to the Helicidae. The other species for which this reaction was significant might not benefit as much as E. vermiculata, since they induce a much lower length reduction and because they have short diverticula and longer spermatophores. However, their response indicates that this effect is not species-specific. Overall, the extent of the measured responses to each species' mucus is not explained by phylogenetic relatedness, i.e. closely-related species do not resemble each other more in their response than less closely-related species, which contrasts with what would have been expected.
Different degrees of manipulation between closely-related species could be expected if, under sexually antagonistic co-evolution, the female function of a species recently evolved a counter-adaptation (e.g. an increased threshold at which male substances are effective) and, as a consequence, the male function modified such manipulation to achieve its effect again (e.g. an increased quantity of the substance). Based on the current findings, we suggest that the responses to mucus carried by the love-dart in helicid snails are due to a mixture of bioactive substances, each targeting different organs and causing different effects. These substances induce responses that are similar between species, making the effect of mucus not species-specific. To strengthen this evidence, more effects need to be identified in other species, as well as the peptides and proteins causing them. The lack of identification of mucous products (besides the LDA) prevents us from explaining whether the responses observed are due to similar or divergent substances possessed by the species tested. Even when the substances are different, they can still have the same function [9,27,28]. This can, for example, be due to proteins with different amino acid sequences but similar tertiary structures [51]. Male manipulative substances can be diverse, but their ultimate aim remains altering female behaviour or physiology. As a result, male substances in simultaneous hermaphrodites can also act similarly to those in separate-sexed species. For example, land snails bearing love-darts cause a delay in remating of the partner [25], which is similar to what a Drosophila male does to females [11]. Comparable to the action of love-dart mucus reported here, studies on insects show that male products can also directly induce contractions of the female reproductive system (e.g. Melanoplus sanguinipes [47]; Locusta migratoria [52]). However, differences may occur in the evolution of such substances between different sexual systems. For instance, males can evolve manipulative substances that resemble female neuroendocrine molecules used for regulating processes of the reproductive system [53]. Males of separate-sexed species can evolve these female-like molecules through different mechanisms than simultaneous hermaphrodites, either via mutations or via the activation of silenced genes (reviewed in [54]). In contrast, the male function of simultaneous hermaphrodites has the great advantage of also possessing female genes within the same individual. However, these genes still need to be expressed in male tissues before being exploited by the male function [55].
The working relationship between internal and external auditors and the moral courage of internal auditors: Tunisian evidence

Purpose – This paper aims to examine the association between the working relationship between internal and external auditors and the moral courage of internal auditors to report management fraud in the Tunisian setting.
Design/methodology/approach – Data are gathered from 163 internal auditors working in Tunisian companies, and a partial least squares-structural equation model (PLS-SEM) is used to test the hypothesis regarding the effect of the cooperation between internal and external auditors on internal auditors' moral courage.
Findings – The results of this study provide strong empirical support for the positive impact of the working relationship between internal and external auditors on internal auditors' moral courage to report management fraud and unethical behaviors.
Practical implications – The reported results increase the awareness of Tunisian regulators of the need to enact regulations that strengthen the collaboration between internal and external auditors, in order to promote internal auditors' moral courage and thereby limit fraud and improve organizational performance in the Tunisian setting.
Originality/value – This paper fills one of the major research gaps in the internal audit and moral courage research streams by revealing that the courageous behavior of internal auditors can be fostered by specific means efficacy, such as the working relationship between internal and external auditors.

Introduction
Recent financial scandals have emphasized the role played by internal auditors in reporting fraud and wrongdoing (Khelil, Hussainey, & Noubbigh, 2018; Khelil, 2022; Eulerich et al., 2021; Christ et al., 2021). However, the decision to report detected fraud depends on certain virtues and personal characteristics of internal auditors. In this regard, Khelil, Hussainey, and Noubbigh (2016) and Khelil, Akrout, Hussainey, and Noubbigh (2018) posit that moral courage represents a key factor that can stimulate internal auditors' intentions to report fraud and unethical behaviors. Given this importance, it becomes crucial to understand the main factors that may enhance the courageous behaviors of internal auditors.
Extant literature dealing with the psychological characteristics of internal auditors has yet to fully uncover the developmental processes of internal auditors' moral courage (Khelil et al., 2016; Khelil, Akrout et al., 2018; Khelil, Hussainey et al., 2018). This is particularly true in emerging economies, where the internal auditing profession is still trying to find its feet as a profession with unregulated rights and duties (Khelil & Khlif, 2022). Therefore, this study aims to fill in this research gap by examining the effect of the working relationship between internal and external auditors on internal auditors' moral courage. It is worth noting that the corpus of studies on the external auditor's evaluation of internal audit function (IAF) quality represents the most highly developed research area and the most recurrent topic within auditing research (Behrend & Eulerich, 2019). However, research on the positive effect of internal-external auditor working relationships remains scarce (Behrend & Eulerich, 2019; Alzeban & Gwilliam, 2014). Alzeban and Gwilliam (2014) advocate that the cooperation and coordination between internal and external auditors have long been viewed as important to the audit's benefits for the organization and external stakeholders. Examples of such coordination and cooperation include joint planning and the exchange of information, opinions and reports to facilitate higher-quality audits.

Concerning our research objective, and to the best of our knowledge, only the study of Khelil, Hussainey et al. (2018) has qualitatively addressed the significant role of the working relationship between internal and external auditors in promoting moral courage among internal auditors. Internal auditors interviewed by Khelil, Hussainey et al. (2018) explain that strong collaboration with their external counterparts makes them feel more confident and supported. The present study extends this qualitative work by conducting an empirical inquiry into the effect of the working relationship between these two groups on internal auditors' moral courage. Thus, the question raised by this study is: "what is the effect of the working relationship between internal and external auditors on internal auditors' moral courage?" Based on the social cognitive theory developed by Bandura and the efficacy model of Eden (2001), this paper hypothesizes that strong collaboration between internal and external auditors leads to increased moral courage of internal auditors. Under Eden's (2001) model of internal-external efficacy, the working relationship between internal and external auditors may represent a specific means efficacy [1] that can foster the moral courage of internal auditors in Tunisian companies (Khelil, Hussainey et al., 2018).

The choice of the Tunisian context is motivated by the previous conclusions of Khelil and Khlif (2022) that the fear of negative consequences is the main reason for the failure of internal auditors to perform their tasks effectively. Moreover, previous literature has advocated that Tunisia represents an excellent example of a MENA country where internal auditors can play a critical role in combating corruption and protecting firms from wrongdoing (Murphy & Albu, 2018; Khelil, 2022; Khelil & Khlif, 2022).
A total of 163 questionnaires were administered to internal auditors working in Tunisian companies, and partial least squares-structural equation modeling (PLS-SEM) was used to test the hypothesis regarding the effect of the cooperation between internal and external auditors on internal auditors' moral courage. Although the advantage of utilizing SEM has been recognized in several previous studies, SEM is still underused in accounting and auditing research compared with associated disciplines such as management and information systems (Hampton, 2015).

The findings of this study show that the working relationship between internal and external auditors is positively and significantly associated with internal auditors' moral courage. Therefore, the collaboration between these two groups represents a key factor in enhancing the courageous behaviors of internal auditors to report illegal wrongdoings. Thus, Tunisian regulators should enact regulations that strengthen this collaborative relationship to enhance the courageous behaviors of Tunisian internal auditors, as they still operate within a jurisdictional void with no formal rules regulating their roles and duties (Khelil & Khlif, 2022).

Noting that Tunisia is adopting an approach to enhance transparency and good corporate governance, revealing what encourages internal auditors to break their silence and behave ethically can help achieve this goal. Indeed, the auditing literature supports the view that an ethical internal audit function can improve corporate governance by reporting financial irregularities, reducing administrative corruption and deterring employee theft (Gramling, Maletta, Schneider, & Church, 2004; Asiedu & Deffor, 2017; Khelil, 2022).

It is believed that telling the truth by reporting management fraud has various beneficial effects not only for organizations but for society as a whole (Balafoutas, Czermak, Eulerich, & Fornwagner, 2020; Khelil, 2022). Indeed, previous evidence has shown that reporting wrongdoing protects the interests of several stakeholders, including employees, consumers, minority investors and citizens, since it allows the company to continue its activity and avoid bankruptcy (Miceli, Near, & Schwenk, 1991; Harbour & Kisfalvi, 2014).

It should be noted that our results can be relevant not only for the Tunisian setting but also for the international context, and particularly for MENA [2] countries, as they share close cultural and institutional characteristics (Al-Akra et al., 2016; Khelil, 2022).

The remainder of the paper is organized as follows. Section 2 presents the social cognitive theory. Section 3 reviews relevant literature and develops the hypothesis. Section 4 discusses the research methodology. The interpretation of the results is provided in section 5 and discussed in section 6. Section 7 concludes the paper.

Social cognitive theory
Social cognitive theory, also known as the theory of efficacy beliefs and developed by Bandura (1986), treats efficacy beliefs as a crucial concept that guides people and motivates their actions (Bandura, 1997; Eden, 2001; Eden, Ganzach, Flumin-Granat, & Zigman, 2010). According to Bandura (2000), perceived efficacy plays a key role in human functioning because it directly influences behavior, expectations, aspirations and goals, outcomes, and the perception of opportunities and impediments in the social environment.
The social cognitive theory has been extended by Eden (1996, 2001) through the development of a model of internal-external efficacy. According to this model, Eden (2001) suggests that behavior and task achievement are enhanced not only by internal efficacy (notably self-efficacy) but also by external efficacy, which covers means efficacy in addition to collective efficacy (Eden et al., 2010; Yaakobi & Weisberg, 2020).

Defining means efficacy as "the individual's belief in the utility of the means available to him or her for performing the job . . . The individual attaches utility to a myriad of means that may facilitate (or hinder) performance" (Eden, 1996, p. 4), Eden argues that both researchers and practitioners should consider means efficacy, as they do self-efficacy, to be a crucial component of motivation (Agars & Kottke, 2021). Efficacious means include implements (e.g. computers, equipment and software), bureaucratic tools (e.g. processes, procedures) and persons (e.g. supervisors, coworkers and followers) (Eden, 2001; Eden et al., 2010; Hannah, Sweeney, & Lester, 2010).

Building on the works of Eden (1996, 2001), Agars and Kottke (2021) have drawn a distinction between general and specific means efficacy. The authors define "specific means efficacy" as an individual's assessment of the particular resources (e.g. information, time, specific persons, tools and software) required to achieve a specific identified task, and contrast it with "general organizational means efficacy". Based on these predictions, the working relationship between internal and external auditors may represent a specific means efficacy. Such a specific means efficacy has been identified by Khelil, Hussainey et al. (2018) as a determinant of internal auditors' moral courage.

Literature review and hypothesis development
3.1 Internal auditing regulation in Tunisia
To reinforce corporate governance procedures and financial transparency in Tunisia, the guide to good practice for the governance of Tunisian companies (2012) requires the creation of an internal auditing function and an audit committee (Khelil, 2022). However, unlike external auditors, who operate under a clear jurisdictional framework in the Tunisian code of commercial companies (in terms of duties, rights and appointments), Tunisian internal auditors function within a jurisdictional void with no formal rules regulating their duties and roles. The Internal Audit Tunisian Association (IATA or IIA Tunisia), founded in 1981, is affiliated with the International Institute of Internal Auditors (IIA).

The IATA has the status of a nonprofit organization (Decree-Law No. 88 for the year 2011) and has neither disciplinary power nor dismissal authority. Its principal objectives are to bring together all Tunisian internal auditors and to diffuse international standards and best practices related to internal auditing in public and private companies (Khelil & Khlif, 2022; Khelil, 2022).
Internal auditor's moral courage and ethical behavior
The IIA is the primary professional organization that sets standards for auditing practice, and it encourages the reporting of sensitive information both internally and externally. Indeed, truthfulness is rooted in the definitions of internal auditing, since they highlight the importance of independence and objectivity in supporting truthfulness. In the same context, Standard 1120 states that "Internal auditors must have an impartial, unbiased attitude and avoid any conflict of interest" (IIA, 2009; Standard 1120).

In addition to being bound by duties that prevent them from behaving in their self-interest, the code of ethics requires internal auditors not to be influenced by their own interests and to remain unbiased in order to tell the truth (Norman et al., 2010). The IIA Code of Ethics (IIA, 2009) emphasizes several cardinal principles that internal auditors are expected to uphold, together with rules of conduct that specify the norms of behavior expected of internal auditors. Cardinal principles such as integrity and objectivity are understood to be applied and upheld by internal auditors so that they remain unbiased and truthful.

Although the professional and ethical standards of the internal auditing function are designed so that internal auditors act as truth-tellers in organizational contexts, internal auditors still face ethical conflicts (Roussy, 2012) when the disclosure of audit results can have negative effects on their careers (Khelil & Khlif, 2022). Balafoutas et al. (2020) recognize the particular importance of objective reporting in the internal audit field due to conflicts of interest. The authors explain that conflicts of interest arise when an internal auditor has a personal or competing professional interest, making independent and objective decision-making by the auditor hardly possible.

The view of Balafoutas et al. (2020) is supported by Khelil and Khlif (2022), who claim that serving different customers (e.g. managers, informal groups in society and audit committees) with conflicting expectations puts internal auditors under pressure and urges them to follow a strategy of trade-offs between commercial and professional values. Khelil and Khlif (2022) reveal that the fear of negative consequences is the main reason for internal auditors' intention to prioritize managers' interests over those of other stakeholders. This conclusion emphasizes the significant role of moral courage in breaking the silence of internal auditors so that they behave ethically by telling the truth about management fraud.

Indeed, in addition to being considered an important tool for overcoming enormous psychological pressures (e.g. fear), previous studies (e.g. Koerner, 2014; Sekerka et al., 2009) support the view that moral courage serves as an instrument that promotes ethical behavior by overcoming ethical conflict and moral pressures. According to Mansur, Sobral, and Islam (2020), moral courage represents a fundamental basis for genuine ethical behavior by reflecting moral standards within one's moral self. Similarly, Comer and Sekerka (2018) explain that moral courage enables individuals to "be" ethical persons and, then, to "act" morally.

Building on Thorne's (1998) model, adapted from Rest's four-component model of ethical decision-making (see Figure 1), Armstrong et al. (2003) consider moral courage an instrumental virtue that enables individuals to move from ethical intention to ethical behavior.
In this regard, moral courage is defined as a virtue and moral competence that "compels or allows an individual to do what he or she believes is right, despite fear of social or economic consequences" (Peterson & Seligman, 2004, p. 216). As a result, it contributes to consistency between moral intentions and behavior (Solomon & Brown, 1992). Roussy (2012) suggests that courage is an essential value in addition to integrity, and that these two features go hand in hand. For instance, Everett and Tremblay (2014) find that moral courage was the virtue that motivated Cynthia Cooper (WorldCom's ex-Vice President of internal audits) to report WorldCom's fraud and fostered her resilience in the face of adversity, threat and risk.

The relationship between internal and external auditors has been widely addressed by professional standards (Institute of Internal Auditors [IIA], 2009; American Institute of Certified Public Accountants [AICPA], 2008; Public Company Accounting Oversight Board [PCAOB], 2013, 2007; International Standards for the Professional Practice of Internal Auditing [ISPPIA]). These standards suggest that working relationships between the respective audit parties should include the sharing of information and the coordination of activities, which consequently assists internal auditors in achieving their objectives and providing better service to the organization (Alzeban & Gwilliam, 2014). Moreover, the information provided by the internal auditor to his/her external counterpart assists in delivering a higher-quality audit opinion, possibly with greater resource efficiency (Behrend & Eulerich, 2019; Alzeban & Gwilliam, 2014).

Hypothesis development
According to the review of Behrend and Eulerich (2019), the stream of research examining the external auditor's evaluation of IAF quality represents the most developed research area and one of the most enduring topics within auditing research. However, research on the positive effects of internal-external auditor working relationships remains scarce (Behrend & Eulerich, 2019; Alzeban & Gwilliam, 2014).

Recent evidence shows a positive relationship between external and internal audit cooperation and the strength of the internal audit function (Pike, Chui, Martin, & Olvera, 2016; Mat Zain, Subramaniam, & Stewart, 2006; Brody, Golen, & Reckers, 1998; Maletta, 1993). O'Leary and Stewart (2007) maintain that, by working together, the relationship between internal and external auditors should be one of cooperation and support to improve overall audit quality. In their analysis, O'Leary and Stewart (2007) identify the external auditor as a crucial component of corporate governance with the ability to affect the internal auditor's decision-making. Their results report a significant external auditor impact on both the likelihood judgment and the ethical assessment. Likewise, Brody (2012) argues that good communication is a necessary condition for good cooperation. The author adds that such communication may increase the likelihood of fraud detection (Calderon & Green, 1994), foster openness and engender greater trust (Mat Zain et al., 2006). Conversely, communication barriers between external and internal auditors can have a significantly negative impact on an audit's efficiency.

From an empirical standpoint, Alzeban and Gwilliam (2014) find a positive association between the working relationship between internal and external auditors and the effectiveness of the internal auditing function. Similar results are reported by Pike et al. (2016), who demonstrate that coordination between external and internal auditors enhances efficiency in the evaluation of internal controls and improves organizations' compliance with SOX (2002)-related regulations.

Concerning our research objective, and to the best of our knowledge, only the study of Khelil, Hussainey et al.
(2018) has qualitatively addressed the significant role of the working relationship between internal and external auditors in promoting moral courage among internal auditors. Khelil, Hussainey et al. (2018) provide evidence that collaboration between internal and external auditors is a key determinant of internal auditors' moral courage, since interviewed internal auditors consider external auditors "a window for the disclosure" and "an indirect and intelligent disclosure" (Khelil, Hussainey et al., 2018, p. 329). In other words, internal auditors will have more incentive to report wrongdoing when collaborating with external auditors during their audit mission.

Based on the above discussion and the assumptions of social cognitive theory, the following hypothesis is tested.

H1. The moral courage of internal auditors is positively related to a strong working relationship between them and external auditors.

Research method
4.1 Data collection
The data were collected from Tunisian firms that have an internal auditing function and in which the owners are not involved in the management of the companies. Three copies of the questionnaire were administered (face-to-face and electronically) to 72 listed firms and 6 unlisted firms in both the financial and nonfinancial sectors. By doing so, the final sample included 234 potential respondents.

Our questionnaire (Appendix) consisted of two parts. The first part gathered basic demographic information about the internal auditor (including gender, age, work experience, training level, certification and tenure). The second part measured the level of internal auditors' moral courage and the level of cooperation between internal and external auditors in the given company. The survey measures included in the questionnaire were translated from English to French by a translation specialist so that they would be understandable to internal auditors, who are more familiar with the French language. The questions were then independently back-translated into English by a second translator to ensure that the meaning of each statement was preserved (Brislin, 1980). To ensure the understandability of our questionnaire, five internal auditors were consulted, and based on their suggestions the structure and understandability of the questionnaire were improved.

The data collection lasted 10 weeks (in 2021) and allowed us to receive usable answers from 163 internal auditors. Our final sample is composed of 72 internal auditors working in the financial sector and 91 working in the nonfinancial sector. The respondents include 104 men and 59 women with an average age of 33.17 years. The participants had between 2 and 33 years of professional experience. In addition, more than half of the respondents (57%) had a master's degree in accounting and auditing, and approximately 9% of them had an international certification related to internal auditing (CISA, CIA or DPAI).

4.2 Variable measurement
4.2.1 Dependent variable: moral courage (COURAGE). Following Khelil, Akrout et al. (2018), this paper used the four-item moral courage scale developed by Hannah and Avolio (2010) to measure the moral courage of internal auditors. Hannah and Avolio's (2010) moral courage scale has shown high reliability and construct validity in previous empirical studies (Schaubroeck et al., 2012; Hannah et al., 2013; Khelil, Akrout et al., 2018). Participants were asked to answer the following question: "How do you act when confronted by frauds committed by your manager?"
based on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (fully agree).

4.2.2 Independent variable: relationship between internal and external auditors (RELEX). The relationship between internal and external auditors was assessed using the same proxies as Alzeban and Gwilliam (2014). Participants were asked to rate their level of agreement (from "1 = strongly disagree" to "5 = fully agree") with statements measuring: their attitude towards external auditors; discussion of the audit plan and of mutual interests; frequency of meetings; sharing of working papers; external auditors' reliance on the work of the internal audit; and management's promotion of the relationship between internal and external auditors.

Descriptive statistics
Descriptive statistics are presented in Table 1. The mean value of moral courage amounts to 4.131 and ranges from 1 to 5. The value is close to 5, indicating that internal auditors enjoy a high level of moral courage in the Tunisian setting.

The average relationship between internal and external auditors amounts to 4.112 and varies from 1 to 5. It indicates that the collaboration between Tunisian internal auditors and their external counterparts is strong.

Measurement model analysis
The reliability of the measurement model in PLS is assessed based on indicator reliability and internal consistency reliability, while its validity is evaluated based on convergent validity and discriminant validity (Khelil et al., 2018; Lisi, 2018).

The factor loading indicator is used to assess reliability. According to the common rule of thumb, only items with factor loadings exceeding 0.700 should be retained in the model to ensure internal consistency reliability (composite reliability > 0.700) and convergent validity (average variance extracted, AVE > 0.500) (Hair, Hult, Ringle, & Sarstedt, 2014; Hajli & Lin, 2016). Table 2 reveals that all the factor loadings in our model are greater than 0.700. Furthermore, the satisfactory reliability of the constructs is supported, as all composite reliabilities are greater than 0.700 (Hajli & Lin, 2016; Lisi, 2018). On this basis, no item was deleted from our measurement model. The values of Cronbach's alpha, which exceed 0.600, also confirm the constructs' reliability (Murphy & Davidshofer, 1988; Khelil, Akrout et al., 2018). The convergent validity of the constructs, evaluated based on the AVE values presented in Table 2, is satisfactory (the AVE for each variable exceeds 0.500) (Khelil et al., 2018; Lisi, 2018).

The discriminant validity of the measurement model was assessed in the last step. As shown in Table 3, the conditions for discriminant validity are satisfied in the model (Hair et al., 2014). Therefore, structural equation modeling could proceed to test our hypothesis linking the moral courage of internal auditors to the level of collaboration between internal and external auditors.
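To make the reliability and validity criteria above concrete, the following minimal sketch computes composite reliability, average variance extracted and Cronbach's alpha using the standard formulas that the thresholds (CR > 0.700, AVE > 0.500, alpha > 0.600) refer to. The loadings and item scores are hypothetical placeholders, not the study's data.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each standardized item's error variance is 1 - loading^2
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def cronbach_alpha(scores):
    # scores: (n_respondents x n_items) matrix of Likert responses
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    return k / (k - 1) * (1.0 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

# Hypothetical standardized loadings for a four-item construct such as COURAGE
loadings = [0.82, 0.79, 0.85, 0.77]
print(f"CR  = {composite_reliability(loadings):.3f}")       # retain construct if > 0.700
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # convergent validity if > 0.500

# Hypothetical 5-point Likert responses from six respondents
scores = [[4, 5, 4, 4], [5, 5, 5, 4], [3, 4, 3, 3], [4, 4, 5, 4], [2, 3, 2, 3], [5, 4, 5, 5]]
print(f"alpha = {cronbach_alpha(scores):.3f}")              # reliability if > 0.600
```

Discriminant validity of the kind reported in Table 3 then amounts to checking that the square root of each construct's AVE exceeds that construct's correlations with the other constructs.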
Structural model analysis: the test of the hypothesis
The assessment criteria for the structural model are the level of significance of the path coefficients produced by PLS and the measure of R². Hair et al. (2014) and Khelil, Akrout et al. (2018) suggest that the main target constructs' level of R² should be high, as the objective of the prediction-oriented PLS-SEM approach is to explain the variance of the endogenous latent variables. The standardized path coefficient, t-statistics and R² are shown in Table 4 and, graphically, in Figure 2. As reported in Table 4, our model has good explanatory power (R² = 0.771). The coefficient for the hypothesized path is statistically significant (p = 0.000). This provides strong support for our proposed hypothesis.
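In PLS-SEM, the significance of a path coefficient such as the one above is typically assessed by bootstrapping. The sketch below illustrates the idea on simulated composite scores; it uses a simple standardized bivariate slope as a stand-in for the full PLS estimation, and every number in it is hypothetical rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated construct-level composite scores for 163 respondents (hypothetical)
n = 163
relex = rng.normal(4.1, 0.6, n)                   # relationship with external auditors
courage = 0.8 * relex + rng.normal(0.0, 0.4, n)   # moral courage, built to correlate

def std_path(x, y):
    # Standardized slope; equals the Pearson correlation in the bivariate case
    return np.corrcoef(x, y)[0, 1]

observed = std_path(relex, courage)

# Percentile bootstrap: resample respondents with replacement and re-estimate
boot = np.array([
    std_path(relex[idx], courage[idx])
    for idx in (rng.integers(0, n, n) for _ in range(5000))
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"path = {observed:.3f}, 95% bootstrap CI = [{ci_low:.3f}, {ci_high:.3f}]")
# The hypothesized effect is supported when the interval excludes zero
```

A full PLS bootstrap would re-estimate the entire measurement and structural model on each resample; the bivariate stand-in above only captures the logic of the percentile interval behind the t-statistics reported alongside Table 4.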
Discussion
The results show a positive and significant effect of the working relationship between internal and external auditors on internal auditors' moral courage. This finding corroborates the previous results of Khelil, Hussainey et al. (2018), who describe the cooperation between internal and external auditors as a tool that enhances the moral courage of internal auditors in Tunisian organizations.

Our result is also in line with that reported by Brody (2012), who argues that good communication and cooperation between these two groups can increase the detection of fraud, enhance openness and engender greater trust.

The finding of this study can be explained by the interpretation of O'Leary and Stewart (2007), who identify the external auditor as a crucial component of corporate governance that can affect an internal auditor's decision-making by influencing both his/her judgment and ethical assessment.

In sum, the study confirms the previous evidence on the working relationship between external and internal auditors and the strength of the internal audit function (Pike et al., 2016; Mat Zain et al., 2006). It also confirms the pertinence of the recommendations of international legislation that encourage coordination between internal and external auditors (PCAOB, 2013, 2007). Furthermore, the result supports the assumptions of social cognitive theory, suggesting that means efficacy represents a crucial component of employees' motivation since it influences their behaviors, expectations, aspirations and goals, outcomes and perception of opportunities in the social environment (Eden et al., 2010; Agars & Kottke, 2021).

Based on the above discussion, it is believed that the audit committee should review the internal auditors' coordination with their external counterparts and encourage a cooperative relationship between both parties in the Tunisian setting. This collaboration may also have a beneficial effect on external auditors, as internal auditors may provide them with confidential and sensitive information that plays a critical role in assessing audit risk and planning audit procedures.

Conclusion, contributions and future research perspectives
This study investigates how the collaboration between internal and external auditors may influence internal auditors' moral courage with respect to reporting fraud and wrongdoing in the Tunisian setting. Based on 163 questionnaires collected from internal auditors working in Tunisian companies and using PLS-SEM, this study provides empirical evidence of the significant effect of the working relationship between internal and external auditors on internal auditors' moral courage.

This paper makes noteworthy contributions to both the internal audit and the moral courage literature. It fills one of the major research gaps in these research streams by revealing that the courageous behavior of internal auditors can be fostered by specific means efficacy such as the working relationship between internal and external auditors. In addition to being based on auditing and accounting research, this study relies on extant literature from other disciplines (ethics, psychology and social cognitive theory), which allows it to contribute to the existing audit research field.

From a methodological standpoint, this study contributes to accounting and audit research by using SEM to explore this relationship empirically in an emerging economy.

Since the audit committee does not represent a truly effective control mechanism in either emerging or developed economies (Roussy, 2012; Oussii, Klibi, & Ouertani, 2019; Khelil & Khlif, 2022), it is believed that the result of this study can offer an alternative solution for regulators and standard setters: implementing rules that foster cooperation between external and internal auditors so as to reduce internal auditors' fear and encourage the reporting of accurate information.

Moreover, noting that Tunisia is adopting an approach to enhance transparency and good corporate governance, revealing what encourages internal auditors to break their silence and behave ethically can help reach this goal. For instance, previous empirical evidence (e.g. Gramling et al., 2004; Asiedu & Deffor, 2017) suggests that an ethical internal audit function can improve corporate governance by reporting financial irregularities, reducing administrative corruption and deterring employee theft. By doing so, the internal audit department may foster the firm's financial well-being and organizational efficiency.

As with other studies using survey data, this empirical inquiry may suffer from self-report measures (assessments were made only by internal auditors), which may introduce bias into the measurement of the working relationship between internal and external auditors.

Given that both internal auditing activity and professional moral courage involve normative elements and cultural differences, the present study opens the door to further empirical investigations that examine the effect of the working relationship between internal and external auditors on internal auditors' moral courage in other emerging economies and that survey both internal and external auditors to assess the degree of collaboration between the two parties.

Figure 1. Thorne's (1998) integrated model of ethical decision-making
Table 1. Descriptive statistics (COURAGE/MC: moral courage; RELEX: relationship between internal and external auditors)
Table 2 note: CR: composite reliability
Table 3 note: diagonal elements are the square roots of AVEs; off-diagonal elements are the correlations between constructs
Table 4 note: SPC: standardized path coefficient
2023-01-12T16:11:32.220Z
2023-01-12T00:00:00.000
{ "year": 2023, "sha1": "2a93a6b1e49b11284f5b59773f345d19768013d5", "oa_license": "CCBY", "oa_url": "https://www.emerald.com/insight/content/doi/10.1108/AGJSR-07-2022-0121/full/pdf?title=the-working-relationship-between-internal-and-external-auditors-and-the-moral-courage-of-internal-auditors-tunisian-evidence", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9ff07c68989dd0c32f28300b6e8571f4fb053b8a", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
202558286
pes2o/s2orc
v3-fos-license
21-Gene Recurrence Score and Adjuvant Chemotherapy Decision for Breast Cancer Patients with Positive Lymph Nodes

The 21-gene recurrence score (RS) assay is prognostic and predictive for hormone receptor (HR)+/HER2-/node- breast cancer (BC) patients. However, its clinical value in node-positive patients has not been elucidated. HR+/HER2-/pN1 patients operated on at the Comprehensive Breast Health Center, Shanghai Ruijin Hospital from January 2014 to December 2018, with available RS results, were retrospectively included. Clinico-pathological characteristics were compared. Adjuvant chemotherapy recommendations pre-/post-RS assay and actual usage were analyzed. A total of 303 patients were included, with 59, 178, and 66 patients having RS < 18, 18-30, and ≥ 31, respectively. Age (P < 0.001), comorbidity (P = 0.013), and RS category (P < 0.001) were independently associated with the chemotherapy recommendation. Compared with low-RS patients, those with intermediate (OR 6.58, 95% CI 2.37-18.31, P < 0.001) or high (OR 54.14, 95% CI 3.77-776.54, P = 0.003) RS were more likely to be recommended chemotherapy. RS independently influenced the chemotherapy decision in the postmenopausal population as well. The chemotherapy recommendation changed for 9.57% of patients after the RS assay. The patient adherence rate to the chemotherapy recommendation was 94.72% (287/303). The 21-gene RS independently influenced the chemotherapy recommendation in pN1 BC patients, providing additional information to guide chemotherapy decisions, with a relatively good treatment adherence rate.

Introduction
Breast cancer (BC) is the most common malignant tumor in women worldwide. According to the latest global epidemiological cancer survey, an estimated 2.1 million new BC cases would be diagnosed in 2018, representing 25% of all cancer cases among women. BC is estimated to be responsible for 626,700 deaths, accounting for 6.6% of all cancer deaths 1 . As an essential part of systemic treatment, standard adjuvant chemotherapy reduces breast cancer mortality by about one third compared with no chemotherapy 2 . Patients with a high absolute risk of disease recurrence and death gain the most absolute benefit from chemotherapy, independent of classical clinico-pathological characteristics including age, hormone receptor (HR) status or node involvement 2-4 .

The 21-gene recurrence score (RS) is the most frequently applied multigene assay in clinical practice for providing individualized information beyond routine clinico-pathological features; it can predict chemotherapy benefit and guide adjuvant treatment decisions in HR-positive, human epidermal growth factor receptor 2 (HER2)-negative, node-negative BC patients. The assay is designed to measure the expression of 21 genes, including 16 cancer-related genes and 5 endogenous references, in formalin-fixed paraffin-embedded (FFPE) breast tumors using quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) methods 5,6 . According to the results of the prospective TAILORx trial 5,7 , the 2018 NCCN Clinical Practice Guidelines in Oncology for Breast Cancer suggest sparing selected low-risk patients from adjuvant chemotherapy 8 . Meanwhile, based on the NSABP B-20 trial, chemotherapy is still recommended for high-risk patients, since a 27.6% absolute decrease in the 10-year distant recurrence rate was reported in high-risk N0 patients receiving chemotherapy 9 .
While the current guidelines suggest the routine use of 21-gene RS testing in node-negative patients, results from several clinical trials have extended its application to patients with 1-3 histologically proven involved axillary lymph nodes (ALNs) 10-12 . Retrospective analysis of the phase III SWOG S8814 trial demonstrated that the 21-gene RS was prognostic in postmenopausal BC patients with HR-positive, HER2-negative, node-positive disease. Patients with low-risk RS derived little benefit from adjuvant chemotherapy, while high-RS patients could receive more benefit from chemotherapy 10 . Data from ECOG E2197 showed that the continuous RS was a highly significant independent predictor of recurrence in node-positive patients 11 . Similarly, the WGS Plan B trial also found an excellent survival outcome in node-positive, RS < 11 low-risk patients treated with endocrine therapy alone, indicating a satisfactory prognostic value of RS 12,13 . Based on these findings, the 2018 NCCN Guidelines suggested considering RS assay testing in selected patients with HR-positive, HER2-negative, pN1mi or pN1 disease, so as to guide the adjuvant treatment choice 8 .

Nevertheless, the impact of 21-gene RS results on the adjuvant chemotherapy decision has not been fully understood in BC patients with positive ALNs. In the current study, we aim to evaluate whether the 21-gene RS can influence the adjuvant chemotherapy choice for patients with HR-positive, HER2-negative, pN1 BC, and to further analyze the adherence rate to adjuvant chemotherapy after 21-gene RS testing in clinical practice.

Patients and Methods
Study population. BC patients who met the following eligibility criteria were included in the study: (1) female gender; and (2) HR-positive, HER2-negative pN1 disease operated on at our center between January 2014 and December 2018, with available 21-gene RS results.

Histo-pathologic analysis. Tumor histo-pathologic analysis was performed by experienced pathologists in the Department of Pathology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China. The methods and criteria for immunohistochemistry (IHC) assessment of estrogen receptor (ER), progesterone receptor (PR), HER2 and Ki-67 were as described in our previous reports 14,15 . The cutoff for ER expression was set at 50% because the St. Gallen International Expert Consensus on the Primary Therapy of Early Breast Cancer 2009 suggested that tumor cell staining for HR ≥ 50% indicates highly endocrine-responsive tumors 16 . HER2 negativity was identified according to the 2018 ASCO/CAP guidelines, comprising IHC HER2 0, IHC HER2 1+, and IHC HER2 2+ with fluorescence in situ hybridization HER2 non-amplified 17 .

Evaluation of 21-gene RS. Detailed information on the 21-gene RS evaluation was described in our previous work 19 . RNA was extracted from three 10-μm unstained sections of FFPE breast tumor tissue, prepared by experienced pathologists in the Department of Pathology, using the RNeasy FFPE RNA kit (Qiagen, 73504, Germany). Reverse transcription was performed using the Omniscript RT kit (Qiagen, 205111, Germany). Quantitative RT-PCR was accomplished using Premix Ex Taq™ (TaKaRa Bio, RR390A) in an Applied Biosystems 7500 Real-Time PCR System (Foster City, CA). Gene expression was measured in triplicate and normalized to five endogenous reference genes. Gene-specific normalized cycle threshold values were applied to calculate the RS. For patients with multifocal disease, the highest RS was recorded.
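As a rough illustration of the reference-gene normalization step described above, the sketch below converts raw cycle-threshold (Ct) values into reference-normalized expression and combines them into an unscaled score. The gene names, Ct values and weights are placeholders for illustration only; the actual 21-gene coefficients and scaling belong to the published assay algorithm and are not reproduced here.

```python
import numpy as np

# Hypothetical mean Ct values from triplicate qRT-PCR (lower Ct = higher expression)
reference_ct = {"REF1": 24.1, "REF2": 23.8, "REF3": 24.5, "REF4": 24.0, "REF5": 23.9}
gene_ct = {"GENE_A": 22.0, "GENE_B": 30.1, "GENE_C": 25.4}  # placeholder gene panel

# Normalize each cancer-related gene to the mean of the five endogenous references:
# higher normalized value = higher relative expression
ref_mean = np.mean(list(reference_ct.values()))
normalized = {gene: ref_mean - ct for gene, ct in gene_ct.items()}

# The published algorithm combines gene-group scores with fixed weights and rescales
# the result to the 0-100 range; the weights and scale factor below are placeholders.
weights = {"GENE_A": 1.0, "GENE_B": -0.5, "GENE_C": 0.3}
unscaled = sum(weights[g] * normalized[g] for g in gene_ct)
rs = int(np.clip(round(10 * unscaled), 0, 100))
print(f"recurrence score (illustrative) = {rs}")
```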
Treatment decision. Treatment choices pre- and post-RS were both decided through a two-round multidisciplinary team (MDT) meeting including surgical oncologists, medical oncologists, radiation oncologists, pathologists, BC specialized nurses, and other related specialists. After the completion of the histo-pathologic analysis, a first-round MDT would be held to give an initial recommendation for the adjuvant treatment regimen based on the patient's clinico-pathological features. For patients who needed additional information to guide the treatment choice, the MDT would recommend a 21-gene RS test. After receiving the RS result, a final treatment recommendation would be made through a second-round MDT based on traditional clinico-pathological features and the RS. Frequently suggested chemotherapy regimens included EC-T, 4 cycles of epirubicin 90 mg/m² and cyclophosphamide 600 mg/m² every 21 days followed by 4 cycles of docetaxel 100 mg/m² every 21 days or 12 cycles of weekly paclitaxel 80 mg/m²; TC*4, 4 cycles of docetaxel 75 mg/m² plus cyclophosphamide 600 mg/m² every 21 days; and TC*6, 6 cycles of docetaxel 75 mg/m² plus cyclophosphamide 600 mg/m² every 21 days. Actual chemotherapy usage and regimen were confirmed during follow-up, which was accomplished by the BC specialized nurses in our center.

Statistical analysis. The 21-gene RS was calculated from the reference-normalized formula. Since the optimal RS cutoff in node-positive patients remains unknown, we adopted two classifications here. The classic classification divided patients into three risk groups: low RS (< 18), intermediate RS (18-30), and high RS (≥ 31). Another, more specific classification was also presented (RS < 11, 11-17, 18-25, 26-30, and ≥ 31) based on the classic and new TAILORx trial category classifications. The Charlson Comorbidity Index was applied to evaluate patient comorbidity. Categorical variables were analyzed using the Chi-square test or Fisher's exact test. Multivariate logistic regression was used to identify the impact factors for the treatment recommendation. The change in treatment recommendation before and after the 21-gene RS result was calculated as the difference between the pre- and post-RS chemotherapy recommendations. Disease-free survival (DFS) was calculated from definitive surgery to the first proven local regional recurrence, distant metastasis, contralateral BC, second malignancy or death from any cause. Kaplan-Meier curves were applied to compare DFS between RS groups. Data were analyzed using IBM SPSS statistics software version 23 (SPSS, Inc., Chicago, IL). Two-sided P values < 0.05 were considered statistically significant.

Ethical approval. This study was reviewed and approved by the independent Ethical Committees of Ruijin Hospital, Shanghai Jiao Tong University School of Medicine. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from each patient.
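To illustrate the two cutoff schemes described in the statistical analysis and the kind of categorical testing used, the sketch below maps RS values to both classifications and runs a Chi-square test on a contingency table of RS category versus chemotherapy recommendation. The table counts are hypothetical, loosely echoing the cohort's group sizes rather than the actual study data.

```python
from scipy.stats import chi2_contingency

def classic_category(rs):
    # Classic three-tier classification: < 18 low, 18-30 intermediate, >= 31 high
    if rs < 18:
        return "low"
    return "intermediate" if rs <= 30 else "high"

def tailorx_category(rs):
    # Finer five-tier classification: < 11, 11-17, 18-25, 26-30, >= 31
    for upper, label in [(11, "<11"), (18, "11-17"), (26, "18-25"), (31, "26-30")]:
        if rs < upper:
            return label
    return ">=31"

print([classic_category(rs) for rs in (9, 24, 40)])  # ['low', 'intermediate', 'high']
print([tailorx_category(rs) for rs in (9, 24, 40)])  # ['<11', '18-25', '>=31']

# Hypothetical contingency table: rows = classic RS category,
# columns = (chemotherapy recommended, not recommended)
table = [[38, 21],   # low (n = 59)
         [160, 18],  # intermediate (n = 178)
         [65, 1]]    # high (n = 66)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```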
Results
Baseline characteristics. Overall, 303 women were enrolled in this study (Fig. 1). The baseline characteristics of the participants are presented in Table 1. The mean age was 59.41 ± 12.02 (range 30-89) years. The Charlson Comorbidity Index was 0, 1, and ≥ 2 in 184, 81, and 38 patients, respectively. Invasive ductal carcinoma (IDC) was diagnosed in 279 of the 303 patients, while the others had invasive lobular carcinoma, mucinous carcinoma, or mixed carcinoma. Grade I-II tumors were found in 73.27% (222/303) of patients. There were 19.80%, 52.48%, 22.44%, and 5.28% of patients with micro-metastases, one, two, and three positive ALN(s), respectively. All patients had ER-positive disease, of whom only 7 had ER staining in less than 50% of BC cells. PR was less than 20% in 81 patients, and 19 were PR-negative. One hundred and eighty-nine (62.38%) patients had Ki-67 ≥ 14%. Univariate analysis (Supplementary Table S1) and multinomial logistic regression (Supplementary Table S2) showed that the overall distributions of grade (P = 0.009), ER status (P = 0.009), and PR status (P < 0.001) differed significantly among the low, intermediate, and high risk groups. The chemotherapy recommendation rate increased with the RS category (Fig. 2B). Univariate and multivariate analysis revealed that age (P < 0.001), comorbidity (P = 0.013), and 21-gene RS (P < 0.001) were independent impact factors for the chemotherapy recommendation in patients with pN1 BC (Table 2). Compared with patients < 50 years old, elderly patients > 70 years old were less likely to be recommended chemotherapy (OR 0.01, 95% CI 0.00-0.11, P = 0.001). Those with a comorbidity score of 2 or more were less likely to be recommended chemotherapy than those without any comorbidity (OR 0.08, 95% CI 0.02-0.46, P = 0.004). Patients with intermediate RS (OR 6.58, 95% CI 2.37-18.31, P < 0.001) or high RS (OR 54.14, 95% CI 3.77-776.54, P = 0.003) were more likely to be recommended chemotherapy than those with low RS.

When stratified by menopausal status, 3 of 75 premenopausal patients omitted chemotherapy (Supplementary Table S3). For postmenopausal pN1 patients, older age (P < 0.001), more comorbidities (P = 0.016), and higher 21-gene RS (P = 0.001) were independent impact factors for the chemotherapy recommendation.

Change in chemotherapy recommendation before and after the 21-gene RS assay. The distribution of pre- and post-RS chemotherapy recommendations is presented in Fig. 3. Overall, the physicians' treatment recommendation changed for 9.57% (29/303; Table 4) of patients. The most apparent alteration was found in the low RS group, with 6 (10.17%) patients changing from chemotherapy to no chemotherapy, and 2 (3.39%) patients changing in the reverse direction. Eighteen (10.11%) intermediate-RS patients switched to receiving chemotherapy; their median RS was 24.5 (range 18.0-29.0). Two patients in the high risk group were changed to receive chemotherapy, while another patient was recommended to omit chemotherapy after the MDT, since this patient was 82 years old with a medical history of hypertension, type 2 diabetes, and severe cerebral infarction.

Supplementary Table S5 compares the clinico-pathological features of patients with and without a chemotherapy recommendation alteration. Tumor grade, tumor size, number of positive lymph nodes, Ki-67 level and molecular subtype were significantly associated with treatment recommendation change in the univariate model. Multivariate analysis showed that patients with greater tumor size (> 2 cm vs ≤ 2 cm, OR 0.36, 95% CI 0.15-0.89, P = 0.026; Supplementary Table S6) or 2 positive lymph nodes (2 vs micro-metastasis, OR 0.07, 95% CI 0.01-0.58, P = 0.014) were less likely to undergo a treatment recommendation change.
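The odds ratios above (Table 2) come from a multivariate logistic regression of the chemotherapy recommendation on age, comorbidity and RS category. The sketch below reproduces that style of analysis on simulated data (all values are hypothetical); the odds ratios are the exponentiated model coefficients, with the youngest, no-comorbidity and low-RS groups as reference levels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 303

# Simulated covariates loosely mirroring the cohort (hypothetical data)
df = pd.DataFrame({
    "age_group": rng.choice(["<50", "50-70", ">70"], n, p=[0.30, 0.55, 0.15]),
    "comorbidity": rng.choice(["0", "1", ">=2"], n, p=[0.61, 0.27, 0.12]),
    "rs_cat": rng.choice(["low", "intermediate", "high"], n, p=[0.19, 0.59, 0.22]),
})

# Simulate a recommendation that depends on the covariates
lin = (0.5
       + 1.9 * (df["rs_cat"] == "intermediate") + 3.0 * (df["rs_cat"] == "high")
       - 2.5 * (df["age_group"] == ">70") - 1.5 * (df["comorbidity"] == ">=2"))
df["chemo"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

model = smf.logit(
    "chemo ~ C(age_group, Treatment('<50')) + C(comorbidity, Treatment('0'))"
    " + C(rs_cat, Treatment('low'))",
    data=df,
).fit(disp=0)
print(np.exp(model.params).round(2))      # odds ratios
print(np.exp(model.conf_int()).round(2))  # 95% confidence intervals for the ORs
```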
Table 5 shows the chemotherapy regimens recommended before and after the 21-gene RS result. In the low RS group, TC*4 was the most frequently recommended regimen both before (47.46%) and after (40.68%) RS assay testing. For patients with intermediate risk, EC-T was suggested in 33.15% of patients both pre- and post-assay, while the recommendation of TC*4 increased from 36.52% to 47.19% after receiving the RS result. Moreover, in the high risk group, EC-T was proposed to 57.58% of patients before and 63.64% after obtaining the 21-gene RS results.

Actual adjuvant chemotherapy usage and disease outcomes. With respect to the actual adjuvant chemotherapy usage, 16 patients did not follow the treatment recommendation. After a median follow-up of 21.17 (range 1.38 to 55.43) months, 11 (3.63%) DFS events were observed, including 4 local regional recurrences, 5 distant metastases, 1 second primary malignancy, and 1 death. Detailed information on the 11 patients with DFS events is summarized in Supplementary Table S4. There was no significant DFS difference among the risk groups (P = 0.225).

Discussion
In this study, we included 303 HR+/HER2- pN1 BC patients with 21-gene RS records. The distribution of RS was 19.47%, 58.75%, and 21.78% for the low, intermediate, and high risk groups, respectively. Chemotherapy was recommended for 258 patients after the MDT meeting. We found that age, comorbidity, and 21-gene RS were independently associated with the chemotherapy recommendation in the whole population and in postmenopausal patients. The treatment recommendation changed for 9.57% of patients after the RS results. The overall adherence rate of actual chemotherapy usage to the MDT decision was 94.72% (287/303). To our knowledge, this is the first and largest study in a Chinese population to focus on the 21-gene RS and the adjuvant chemotherapy decision in ALN-positive BC patients.

Earlier data from our center revealed that among node-negative and node-positive BC patients receiving 21-gene RS tests, 26.1%, 49.3% and 24.6% were categorized into the low, intermediate and high RS groups, respectively 19 . Tumor grade, PR status, and Ki-67 were significantly associated with the RS category in the whole population 19 , which was consistent with evidence from other centers 20,21 . In the current study, grade, ER status, and PR status were identified as independent factors associated with RS in pN1 patients.

We found that the RS category was independently associated with the chemotherapy recommendation in pN1 patients in our study, which is consistent with previous findings 22-25 . Similarly, in the WGS Plan B trial, the 5-year DFS in RS < 11 N+ patients treated with endocrine therapy alone was 94% after a median follow-up of 55 months 12,13 . Nevertheless, given the relatively short follow-up for ER-positive disease, and an increase in DFS events of 5.6% at 5 years compared with the 3-year results, we cannot spare all RS < 11 N+ patients from chemotherapy. The optimal cutoff in node-positive patients remains unclear. Actually, the two classifications (the classic classification of 18-30 and the new TAILORx classification of 11-25) are both in use in clinical practice. Based on previous evidence, the 18-30 classification is more frequently adopted for prognostic analysis (for example, in the transATAC trial), while the 11-25 classification is more frequently applied when studying the predictive value.
We carried out a Chi-square test to compare the two different classifications; the Chi-square value was 22.6 when applying the new classification, compared with 42.2 when applying the classic classification. In addition, we found that the chemotherapy recommendation rates were similar between patients with RS < 11 and RS 11-17 (65.00% vs 56.41%, P = 0.525, Fig. 2B), but much lower than in those with RS ≥ 18 (P < 0.001). As a result, we adopted the 18-30 cutoffs in the current study. The ongoing prospective randomized phase III RxPONDER trial is designed to study the efficacy of adjuvant endocrine therapy with or without chemotherapy in HR-positive, HER2-negative patients with 1-3 positive ALNs and RS ≤ 25. Its results are awaited to evaluate the interaction of RS and chemotherapy benefit in pN1 patients, and to estimate a clinically meaningful cutoff point for the chemotherapy recommendation in this subgroup 26 .

The influence of the 21-gene RS on chemotherapy usage has been noted before. According to data from other centers, the change in adjuvant treatment recommendations for node-positive patients after the 21-gene RS assay was 21%-39%, in the direction of exemption from chemotherapy 23 . In our study, only 9.57% of patients received inconsistent adjuvant treatment recommendations before and after the 21-gene RS assay, and more patients in the intermediate risk group were recommended chemotherapy after the RS results. One possible reason is that, in previous studies from other centers, the chemotherapy recommendations both pre- and post-21-gene RS assay were made by an individual clinician or through only a one-round MDT, which might be more strongly influenced by the RS results. In our clinical practice, however, the chemotherapy recommendations pre- and post-21-gene RS assay were both made through two-round MDT meetings. Our previous study of MDT found that MDT changed the traditional single-disciplinary treatment mode and showed significant advantages in providing better treatment options for patients 31 . Such an effect of the MDT decision might thus dilute the impact of the 21-gene RS on the chemotherapy recommendation.

Previous studies have indicated that the 21-gene RS assay provides additional prognostic information beyond clinico-pathological features. For example, Wang et al. analyzed data from 4059 T1-2N1M0 patients with ER-positive, HER2-negative disease with available 21-gene RS results from the SEER database. Their study indicated that the RS risk categories were positively associated with the pathological prognostic stages (P < 0.001) based on the 8th edition of the American Joint Committee on Cancer (AJCC) staging system. The RS risk category was an independent prognostic factor for BCSS and overall survival 32 . Similarly, another SEER database-based study found that the RS result was a strong predictor of BCSS for patients with micro-metastases or 1-3 positive ALNs (P < 0.001) 29 . In addition, other retrospective studies demonstrated that the RS result was an independent impact factor for local regional recurrence (hazard ratio = 2.59, 95% CI 1.28-5.26, P = 0.008) 33 and distant recurrence (hazard ratio = 3.47, 95% CI 1.64-7.38, P = 0.002) 25 in ER-positive, node-positive patients. Our study found that after a median follow-up of 21.17 months, 11 (3.63%) DFS events were observed in these node-positive patients. Due to the relatively short follow-up period and the small number of DFS events, we cannot assess the prognostic value of the 21-gene RS in ALN-positive BC patients, which warrants further evaluation.
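As a sketch of how a disease-free survival comparison across RS groups can be carried out, the code below fits Kaplan-Meier curves and applies a multi-group log-rank test using the lifelines library. The follow-up times, event flags and group labels are simulated placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(2)
n = 303

# Hypothetical DFS data: follow-up in months, event indicator, RS risk group
df = pd.DataFrame({
    "months": rng.uniform(1.4, 55.4, n),
    "event": (rng.random(n) < 0.04).astype(int),   # few DFS events, as observed
    "group": rng.choice(["low", "intermediate", "high"], n, p=[0.19, 0.59, 0.22]),
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["event"], label=name)
    # Kaplan-Meier DFS estimate at the end of follow-up for this group
    print(name, kmf.survival_function_.iloc[-1, 0])

result = multivariate_logrank_test(df["months"], df["group"], df["event"])
print(f"log-rank p = {result.p_value:.3f}")
```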
Several limitations existed in the current study. To begin with, 245 (44.38%) pN1 patients did not receive a 21-gene RS test, since at the time of diagnosis there were not enough data to support the application of the 21-gene RS in node-positive patients, which might introduce bias. Secondly, the RS distribution differed from previous data from other clinical trials such as SWOG S8814 and transATAC. This might be explained by differences in clinico-pathological features between the enrolled cohorts. For example, in the SWOG S8814 trial, 35.7%, 52.9% and 11.4% of patients had grade 1, 2 and 3 tumors, compared with 5.6%, 67.7% and 26.7% in our cohort. Moreover, given that there was limited evidence on the application of the 21-gene RS in node-positive patients, we recommended 21-gene RS testing for N+ patients increasingly but not routinely, and only after the publication of the SWOG S8814 and WGS Plan B trials; thus, the follow-up was too short to clarify the prognostic effect of the 21-gene RS, which requires continued follow-up.

Conclusions
In conclusion, the 21-gene RS category independently influenced the chemotherapy recommendation in HR+/HER2- BC patients with 1-3 positive ALNs. The chemotherapy recommendation changed for 9.57% of patients after the 21-gene RS results, as the RS provided relatively limited additional information to guide the adjuvant treatment decision in these patients before the publication of the RxPONDER trial. After 21-gene RS testing, ALN-positive patients showed good adherence to the MDT decision. Further analysis is warranted to clarify the prognostic and chemotherapy-predictive value of the 21-gene RS in pN1 breast cancer patients.

Data Availability
The datasets analysed during the current study are available from the corresponding authors on reasonable request.
2019-09-12T16:20:43.702Z
2019-09-11T00:00:00.000
{ "year": 2019, "sha1": "0dfeea2a43dc4a84724a8458599bc8b70a7edb29", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-49644-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0dfeea2a43dc4a84724a8458599bc8b70a7edb29", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17928054
pes2o/s2orc
v3-fos-license
Intraoperative Hemorrhage and Postoperative Sequelae after Intraoral Vertical Ramus Osteotomy to Treat Mandibular Prognathism

Objective. To investigate the factors affecting intraoperative hemorrhage and postoperative sequelae after orthognathic surgery. Materials and Methods. Eighty patients with mandibular prognathism underwent surgical mandibular setback with intraoral vertical ramus osteotomy (IVRO). The correlation of blood loss volume and postoperative VAS with gender, age, and operating time was assessed using the t-test and the Spearman rank correlation coefficient. The correlation of the magnitude of mandibular setback with the presence of TMJ clicking symptoms and lip sensation was also assessed. Results. The mean operating time and blood loss volume for men and women were 249.52 min and 229.39 min, and 104.03 mL and 86.12 mL, respectively. The mean VAS in men and women was 3.21 and 2.93, and 1.79 and 1.32, on the first and second postoperative days. There was no gender difference in operating time, blood loss, VAS, TMJ symptoms, or lip numbness. The magnitude of mandibular setback was not correlated with immediate or long-term postoperative lip numbness. Conclusion. There are no gender differences in intraoperative hemorrhage or postoperative sequelae (pain, lip numbness, and TMJ symptoms). In addition, neither symptom was significantly correlated with the amount of mandibular setback.

Introduction
Orthognathic surgery, which includes several procedures of varying complexity requiring high surgical skill, demands that surgeons consider multiple variables, such as the patient's overall physical condition, operating time, intraoperative hemorrhage, postoperative pain, and potential postoperative sequelae and complications. Not only are surgeons cautious about intraoperative hemorrhage and subsequent transfusion [1-4]; patients are also concerned and may question the safety and risks of blood transfusion. Postoperative pain and related sequelae are among the most highly studied topics in modern medicine. Inadequate postoperative analgesia can provoke negative emotions in patients and decrease their satisfaction with postoperative quality of life. Orthognathic surgery involves relocation of the mandible, and potential postoperative complications must be realistically discussed by surgeons and patients, such as whether the temporomandibular joint will exhibit clicking symptoms, whether mandibular mobility and mouth opening will recover, and whether abnormal lip sensation will occur.

For the correction of mandibular prognathism, several improvements in orthognathic surgery have emerged, with sagittal split ramus osteotomy (SSRO) and intraoral vertical ramus osteotomy (IVRO) being among the most popular of the current techniques. These two surgical methods differ in operating time, hemorrhage volume, and the risk of postoperative sequelae and complications. While most studies focus on SSRO, very few reports discuss IVRO. Therefore, the present study aims to investigate the intraoperative hemorrhage and postoperative sequelae occurring in patients who undergo IVRO to treat mandibular prognathism.
Materials and Methods
A total of 80 patients with mandibular prognathism who were hospitalized at the Department of Dentistry and Maxillofacial Surgery and met the following criteria were enrolled: (1) absence of facial asymmetry, (2) absence of facial injury or other congenital facial deformities, (3) undergoing bilateral IVRO with no additional procedures, (4) a 6-week maxillomandibular fixation period, and (5) at least 6 months of follow-up. The surgical duration and hemorrhage volume were recorded in all 80 patients; the change between pre- and postoperative lip sensation was recorded in 69 patients (42 women and 27 men); postoperative pain as Visual Analogue Scale (VAS) scores during hospitalization was recorded in 47 patients (28 women and 19 men); and the changes between pre- and postoperative maximum mouth opening (MMO) and temporomandibular joint (TMJ) clicking symptoms were recorded in 32 patients (22 women and 10 men).

All surgeries were performed under hypotensive anesthesia. The mean arterial pressure (MAP) was maintained at 60 mmHg to minimize hemorrhage. During hospitalization, an intravenous nonsteroidal anti-inflammatory drug (Aspegic, 0.5 g) was prescribed for pain control at 6-hour intervals. The operating time, volume of intraoperative hemorrhage, and the changes between pre- and postoperative blood components, pain VAS (0-10 cm), postoperative lip sensation, TMJ clicking, and MMO were recorded during hospitalization.

The correlation of blood loss volume and postoperative VAS with gender, age, operating time, and change in blood components was assessed using the t-test and the Spearman rank correlation coefficient. The correlation of the magnitude of mandibular setback with the presence of TMJ clicking symptoms and lip sensation was also assessed by Chi-square and logistic regression tests.
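The gender comparisons and correlations described above can be reproduced with standard SciPy routines. The sketch below uses simulated per-patient records (the group sizes and the induced correlation are hypothetical, loosely echoing the reported means) to mirror the t-test in Table 1 and the Spearman analysis in Table 3.

```python
import numpy as np
from scipy.stats import ttest_ind, spearmanr

rng = np.random.default_rng(3)

# Hypothetical per-patient values (mL and min), roughly echoing the reported means
blood_loss_men = rng.normal(104.0, 35.0, 33)
blood_loss_women = rng.normal(86.0, 30.0, 47)
# Operating time simulated to correlate with hemorrhage in men, as reported
op_time_men = 180.0 + 0.65 * blood_loss_men + rng.normal(0.0, 20.0, 33)

# Two-sample t-test for a gender difference in blood loss (as in Table 1)
t_stat, p_val = ttest_ind(blood_loss_men, blood_loss_women)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Spearman rank correlation between hemorrhage and operating time in men (Table 3)
rho, p_rho = spearmanr(blood_loss_men, op_time_men)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```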
Results
In the analysis by sex, there was no significant difference among the patients in age, operative time, or blood loss volume (Table 1). The mean operating time and blood loss volume for men and women were 249.52 min and 229.39 min, and 104.03 mL and 86.12 mL, respectively, indicating the absence of any gender variation. The mean magnitude of mandibular setback and the change in blood components were significantly greater in men than in women.

The mean VAS in men and women was 3.21 and 2.93, and 1.79 and 1.32, on the first and second postoperative days, respectively, suggesting that the VAS did not differ significantly between genders (Table 2). The VAS was significantly lower on the second day than on the first day in both genders.

For patients exhibiting TMJ clicking preoperatively, the incidence in the right and left joints was 2/10 and 2/10 in men, and 5/22 and 4/22 in women, respectively. For patients exhibiting TMJ clicking postoperatively, the incidence in the right and left TMJs was 1/10 and 2/10 in men, and 1/22 and 3/22 in women, respectively. There was no gender difference in TMJ symptoms (Table 2). The preoperative MMO did not differ between males and females, whereas the postoperative MMO of males was significantly greater than that of females. Moreover, the postoperative MMO of females was significantly smaller than their preoperative MMO. The amount of left and right mandibular setback was 11.11 mm and 11.27 mm in men, and 9.01 mm and 9.60 mm in women, respectively (Table 1). The mandibular setback was significantly greater in men than in women.

The incidence of patients reporting lower lip numbness immediately after surgery was 5/27 and 3/27 on the right and left sides in men, respectively, and 2/42 and 2/42 on the right and left sides in women, respectively (Table 2). At the final follow-up examination, only two male patients still experienced right-sided lower lip numbness, whereas all the female patients had recovered sensation on both sides of the lower lip.

Spearman rank correlation coefficient analysis was used to assess the correlation of blood loss volume with age, operating time, magnitude of mandibular setback, and change in blood components. Hemorrhage and operating time were positively correlated in men only (Table 3). The correlation analysis of the VAS on postoperative days 1 and 2 during hospitalization revealed that the decrease in postoperative Hct (%) and the day 2 VAS were positively correlated in women and negatively correlated in men (Table 4). When comparing pre- and postoperative TMJ symptoms, our results showed a significant reduction on the right side (Table 5). The magnitude of mandibular setback was not correlated with immediate or long-term postoperative lip numbness or abnormal sensation in male or female patients (Table 6).

Discussion
Orthognathic surgery is performed on the maxillofacial region, which is extensively vascularized, and certain intraoral maneuvers may offer only a limited field of vision. Thus, operative techniques must be highly accurate because hemostasis can be difficult to achieve. A study by Moenning et al. [1] revealed that the mean volume of hemorrhage was 176.6 mL in 171 bilateral SSRO patients, none of whom required transfusion. Similarly, our IVRO patients did not receive transfusion, and the mean volume of hemorrhage was 93 mL (women, 86.12 mL; men, 104.03 mL). Compared with SSRO [1], our IVRO patients had the advantage of lower intraoperative hemorrhage.

Ueki et al. [3] reported a significant positive correlation between intraoperative hemorrhage and operative time, whereas Böttger et al. [4] found that the correlation was low. As shown in Table 3, our study revealed a difference between the genders, with blood loss volume and operating time showing a positive correlation in men and no correlation in women. We also investigated the relationship of blood loss volume and operating time with the mandibular setback amount and did not find any significant correlation in men or women. Therefore, we inferred that vascularity and perfusion were more abundant in men than in women, causing greater hemorrhage and a longer operating time to achieve hemostasis in male patients.

Many methods [5-7] are currently employed to minimize intraoperative hemorrhage and avoid the need for transfusion, among which hypotensive anesthesia is a well-established and effective technique that, in some reports, can reduce blood loss by 40% in orthognathic surgery. Hypotensive anesthesia can reduce blood flow, improve visibility in the operative field, and increase the efficacy of surgery and hemostasis, shortening the operating time. The volume of hemorrhage is also reduced, which further lowers the chance of requiring blood transfusion. As suggested by current data [8,9], a MAP between 50 and 65 mmHg is safe in young patients because it does not interfere with perfusion of the brain, heart, kidney, and liver.
However, close physiological monitoring during surgery and good communication between the surgeon and anesthesiologist are critical to ensure safety during hypotensive anesthesia. Pain is a complex response involving central neuron-glial interactions during neural transduction [10]. Postoperative pain following orthognathic surgery is caused not only by the operation itself but also by the surgical inflammatory reaction, surrounding muscle stiffness, and contraction of peripheral soft tissues. All of these events can induce changes in pain perception in the central nervous system [10]. Our study revealed that postoperative pain did not differ between men and women, as both genders showed a significantly lower VAS on day 2 than on day 1 postoperatively. However, when the relationship between age and pain was evaluated, the data showed that older men had a greater reduction in VAS on day 2 postoperatively, probably because men may exhibit greater cognition and tolerance for pain as they age. Niederhagen et al. [11] reported that postoperative pain was highly correlated with operating time. However, our study found that postoperative pain had no significant correlation with operating time, intraoperative hemorrhage, or the amount of mandibular setback. Evans et al. [12] studied 45 patients who underwent orthognathic surgery and concluded that postoperative pain was generally not severe enough to justify administering strong opioid analgesics. Postoperative administration of nonsteroidal anti-inflammatory drugs (NSAIDs) has been widely demonstrated to effectively alleviate pain and reduce morphine dosage. We also found that NSAIDs were sufficient to control postoperative pain. We evaluated the patients' postoperative VAS scores and found them comparable to preoperative orthodontic pain VAS scores. This finding can greatly facilitate preoperative communication between surgeons and patients and allow surgeons to inform patients about postoperative pain, reducing the anxiety and pressure associated with surgery. The method of orthognathic surgery can also affect the recovery of mandibular mobility and the time required for recovery. Some studies [13,14] show that intraoperative internal fixation of the mandible or postoperative intermaxillary fixation may affect future mandibular mobility. SSRO employs internal fixation but not intermaxillary fixation; therefore, mandibular mobility can be resumed immediately postoperatively. By contrast, IVRO uses intermaxillary fixation but not internal fixation; therefore, the mandible is immobilized for 6 weeks postoperatively. MMO measurement is a simple and easy way to assess the postoperative recovery of mandibular mobility. Boyd et al. [13] reported that SSRO and IVRO do not differ in MMO recovery. The MMO was significantly correlated with intermaxillary fixation during the 6 months after surgery, a correlation that disappeared 1 year postoperatively. The preoperative MMO did not differ between our male and female patients, but women had a significantly lower MMO than men at the final follow-up examination. There was no difference between the preoperative and final MMO in the male patients. The final MMO in female patients was significantly lower than the preoperative MMO by approximately 3 mm.
We suspect that the male patients may have paid greater attention during the mouth opening exercise, while the female patients may have considered it acceptable not to reach the MMO target of 45 mm because normal speech and eating do not require the maximum mouth opening. How orthognathic surgery affects TMD is still under debate [15][16][17]; some scholars believe it can alleviate TMD symptoms, whereas others report that orthognathic surgery may induce or worsen TMD symptoms. TMJ clicking is the most common TMD symptom in patients. There was no gender variation in the patients exhibiting TMJ clicking before surgery and at the final examination. However, after surgery, women had significantly decreased TMJ clicking on the right side. The number of women with right-sided TMJ clicking decreased from five to one. The four women with left-sided TMJ clicking initially improved, with no symptoms postoperatively, while an additional three women developed symptoms after surgery. The number of men with right-sided TMJ clicking decreased from two to one, and two men with left-sided TMJ clicking remained symptomatic after surgery. Based on these findings, it remains unclear whether orthognathic surgery improves or induces TMJ clicking symptoms. We also found that the amount of mandibular setback was not correlated with postoperative TMJ clicking in either gender. Some researchers [18][19][20] reported that SSRO carries a higher risk of neurosensory damage to the lower lip than IVRO. The currently accepted reasons for this difference include peeling of the inner periosteum of the ramus, exposure of the inferior alveolar nerve during SSRO splitting, crushing of the nerves during bone segment fixation, and postoperative edema. The distal segment may press against the nerves during backward replacement. Rigid (screw) and nonrigid (wire) fixation methods can be used in SSRO to fix the two bone segments. Lemke et al. [21] reported that rigid fixation caused greater mental nerve numbness than wire fixation in SSRO patients examined by brush-stroke testing. This finding indicates that rigid fixation compresses the nerves more strongly than wire fixation. By contrast, because IVRO does not split the bones or fix the bone segments, there is a lower risk of injuring the mandibular canal or compressing the inferior alveolar nerve, and less stretching of the vascular/nerve bundles caused by distal segment movement. Among our patients, approximately 9% (12/138) showed lower lip numbness postoperatively, and only 2% still reported numbness, with reduced symptoms, at the final follow-up. In addition, there was no gender difference in the incidence of lower lip numbness immediately after surgery or at the final follow-up. When investigating whether the amount of mandibular setback increased the incidence of postoperative lower lip numbness, we did not find any correlation, indicating that accurately determining the bone cutting position and carefully overlapping the two bone segments can reduce stretching of the inferior alveolar nerve bundle and decrease the occurrence of lower lip numbness. Conclusion Even with a larger amount of mandibular setback, our patients presented lower intraoperative hemorrhage without blood transfusion. There were no gender differences in intraoperative hemorrhage or postoperative sequelae (pain, lip numbness, and TMJ symptoms). In addition, neither symptom was significantly correlated with the amount of mandibular setback.
The 6-month postoperative MMO of both males and females returned to the normal range. Therefore, IVRO is a reliable technique with minimal intraoperative and postoperative sequelae.
2018-04-03T01:15:20.384Z
2015-10-12T00:00:00.000
{ "year": 2015, "sha1": "87a75e7ab20ab48c3e74497d7120c705e0ef7918", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2015/318270.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a72e7a4d2f2f33586f4491b99c1281708124339d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
227947506
pes2o/s2orc
v3-fos-license
Magnesium Contact Ions Stabilize the Tertiary Structure of Transfer RNA: Electrostatics Mapped by Two-Dimensional Infrared Spectra and Theoretical Simulations Ions interacting with hydrated RNA play a central role in defining its secondary and tertiary structure. While spatial arrangements of ions, water molecules, and phosphate groups have been inferred from X-ray studies, the role of electrostatic and other noncovalent interactions in stabilizing compact folded RNA structures is not fully understood at the molecular level. Here, we demonstrate that contact ion pairs of magnesium (Mg2+) and phosphate groups embedded in local water shells stabilize the tertiary equilibrium structure of transfer RNA (tRNA). Employing dialyzed tRNAPhe from yeast and tRNA from Escherichia coli, we follow the population of Mg2+ sites close to phosphate groups of the ribose-phosphodiester backbone step by step, combining linear and nonlinear infrared spectroscopy of phosphate vibrations with molecular dynamics simulations and ab initio vibrational frequency calculations. The formation of up to six Mg2+/phosphate contact pairs per tRNA and local field-induced reorientations of water molecules balance the phosphate–phosphate repulsion in nonhelical parts of tRNA, thus stabilizing the folded structure electrostatically. Such geometries display limited sub-picosecond fluctuations in the arrangement of water molecules and ion residence times longer than 1 μs. At higher Mg2+ excess, the number of contact ion pairs per tRNA saturates around 6 and weakly interacting ions prevail. Our results suggest a predominance of contact ion pairs over long-range coupling of the ion atmosphere and the biomolecule in defining and stabilizing the tertiary structure of tRNA. INTRODUCTION Electrostatic interactions play a determining role for the secondary and tertiary structures of RNA in the native aqueous environment. The formation of stable macromolecular conformers requires balance of repulsive and attractive electric interactions between the charged and/or polar constituents of the macromolecular structure and its surroundings. 1−3 In particular, repulsive interactions between the negatively charged phosphate groups in the ribose-phosphodiester backbone need to be compensated by positively charged ions and water molecules. Effective shielding of the high negative charge density of RNA is essential for stabilizing the equilibrium structures in their hydration shell and minimizing their total free energy. Spatial arrangements of positively charged (counter)ions around hydrated RNA, that is, the cations Na + , K + , Ca 2+ , Mn 2+ , or Mg 2+ , have been a subject of calculations based on macroscopic polyelectrolyte theory, 4,5 the (nonlinear) Poisson−Boltzmann (PB) equation, 6,7 and molecular dynamics (MD) simulations, which include the electrostatic interaction potential at the molecular level. 8−11 Such treatments predict a pronounced spatial gradient of the cation concentration, induced by the attractive electrostatic interaction with the negatively charged backbone. This (counter)ion condensation leads to a comparably high ionic concentration close to the backbone, which decreases with radial distance on a length scale of typically 20 Å. Some 70% of positive ions reside within the first 5−6 water layers around the biomolecule. 
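For orientation, the ion-distribution calculations referred to above solve a mean-field problem of the following generic form; this is the standard nonlinear Poisson−Boltzmann equation written in our own notation, not an expression quoted from refs 4−11:

\[
\nabla \cdot \left[ \varepsilon(\mathbf{r})\, \nabla \phi(\mathbf{r}) \right]
= -\rho_{\mathrm{RNA}}(\mathbf{r})
- \sum_i z_i e\, c_i^{0} \exp\!\left( -\frac{z_i e\, \phi(\mathbf{r})}{k_B T} \right),
\]

where φ is the electrostatic potential, ε(r) the local permittivity, ρ_RNA the fixed charge density of the biomolecule, and c_i^0 the bulk concentration of ion species i with valence z_i. The Boltzmann factor c_i(r) = c_i^0 exp(−z_i e φ/k_B T) produces the steep counterion enrichment near the negatively charged backbone described above.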
Results of small-angle X-ray scattering studies of short DNA double strands are in qualitative agreement with calculated ion density profiles, without, however, characterizing specific ion sites and/or hydration geometries. 12 There is a variety of molecular geometries in which the ion ensemble and the embedding water shell interact with the high negative charge density of the RNA backbone. Contact geometries, that is, cations in touch with one or several phosphate groups of the backbone, are characterized by a particularly strong attractive interaction at the expense of a partial desolvation of the ion. Comparably long ion residence times of up to microseconds have been reported. 13,14 Contact geometries have been identified in high-resolution X-ray diffraction studies of RNA 15,16 and characterized dynamically by vibrational spectroscopy and theoretical analysis of model systems. 17,18 In contrast, ions separated by several water layers experience a substantially weaker interaction with the backbone. They are part of the so-called "diffuse ion atmosphere" and maintain a diffusive mobility. Because of thermally induced fluctuations in ion position, the ion atmosphere has remained elusive in X-ray diffraction but nevertheless exerts a fluctuating electric force on the hydrated RNA structure. In addition to the cations, the dipolar water molecules of the hydration shell make a significant contribution to the overall electrostatic potential. 19,20 They represent sources of electric fields with a strength of up to 100 MV/cm and simultaneously screen attractive and repulsive electrostatic interactions. The positions and orientations of water molecules adapt to the total Coulomb force they experience. At the same time, water molecules undergo fluctuations on time scales between 50 fs and several tens of picoseconds. The role of this complex molecular ensemble in stabilizing secondary and tertiary structures of RNA, as well as the relevant many-body interactions, is not understood at the molecular level. Even the spatial range of electric forces and the local hydration geometries of RNA in the presence of ions are barely characterized. Such issues call for experimental probes which map specific interaction sites and their local dynamics. In this article, we address the fundamental role of Mg 2+ ions in stabilizing the prototypical equilibrium structure of transfer RNA (tRNA), a central player in translation steps of protein synthesis. Depending on the specific species, tRNA contains 75−90 nucleotides arranged in a folded cloverleaf structure. 16,21,22 The structure of phenylalanine tRNA from yeast (tRNA Phe ) has been determined by X-ray diffraction with a high spatial resolution of better than 2 Å (ref 16) and is shown in Figure 1a. It consists of the acceptor stem, the TΨC loop, the D loop, the variable loop, and the anticodon loop. The acceptor stem contains a single-strand 3′-end (top right of Figure 1a) which protrudes from its double-strand part and serves to attach the amino acid phenylalanine for protein synthesis. The anticodon loop contains the specific base sequence (O 2 ′-methyl-guanosine−adenosine−adenosine) for reading out complementary messenger RNA. The different loops are connected to double-strand stem regions with paired nucleobases. Other tRNA structures differ in the total number and sequence of standard and nonstandard nucleobases and preserve a folded cloverleaf tertiary structure. 23
X-ray diffraction with a spatial resolution better than 2 Å (ref 16) has identified 11 binding sites of divalent metal ions, the majority of which are Mg 2+ ions close to positions of phosphate groups in the folded backbone. Of particular interest are the positions M1, M3, and M7 (Figure 1a) in the vicinity of the D loop, where the bending of the backbone results in a separation of neighboring (PO 2 ) − oxygens below 4 Å, that is, substantially less than in the A-helical parts of the structure. X-ray diffraction data suggest that such sites are preferentially populated by Mg 2+ ions with the (PO 2 ) − oxygens being part of the first solvation shell of the ion or separated by a single water layer. Such assignments have been challenged by findings of the nonlinear PB model, where binding of Mg 2+ to yeast tRNA Phe has been interpreted on the basis of a single class of ions that retain a complete water shell and stabilize the RNA structure by long-range electrostatic interactions. 24 In our experiments, we study local interactions between Mg 2+ ions and phosphate groups in the backbones of tRNA Phe and, for comparison, tRNA from Escherichia coli (E.c. tRNA). tRNA Phe is chosen because of its well-characterized structure, which serves as the starting point for theoretical modeling of electrostatics at the molecular level. E.c. tRNA represents a mixture of different tRNA structures and serves for benchmarking the tRNA Phe results in a wider range of structures. In all systems, local interactions are directly probed via their impact on vibrations of the phosphate groups. Asymmetric (PO 2 ) − stretching vibrations ν AS (PO 2 ) − thus serve as sensitive noninvasive probes of the local interaction potentials and allow mapping of the local dynamics. Samples of dialyzed tRNA Phe and dialyzed E.c. tRNA with a specific Mg 2+ concentration in a range from zero to approximately 15 Mg 2+ ions per tRNA are employed to follow the formation of contact geometries step-by-step with the help of linear and femtosecond nonlinear infrared spectroscopy. Such results are analyzed with the help of extensive theoretical calculations. MATERIALS AND METHODS Dialyzed tRNA samples are prepared with a defined Mg 2+ content and characterized by fluorescence titration and vibrational spectroscopy. The formation of contact ion pairs with tRNA phosphate groups is followed by linear and nonlinear 2D infrared spectroscopy of the asymmetric phosphate stretching vibration and analyzed by quantum mechanics/molecular mechanics (QM/MM) calculations. A detailed description of materials and methods is given in Supporting Information. RESULTS The tRNA samples are prepared in an aqueous buffer solution with tRNA Phe and E.c. tRNA, respectively (both from Aldrich). The E.c. tRNA sample represents a mixture of tRNAs with different anticodon units and amino acid acceptor arms (cf. Figure 1a). The samples of millimolar tRNA concentration are repeatedly dialyzed by following the procedures described in ref 25. In this way, the magnesium content in the tRNA sample is reduced to less than one Mg 2+ ion per tRNA entity on average. To this aqueous reference solution, defined amounts of a solution of MgCl 2 in water are added in order to generate a well-defined content of Mg 2+ ions in the sample. The Mg 2+ ions interacting with tRNA Phe and E.c. tRNA need to be distinguished from the Mg 2+ ions fully solvated in the water environment.
To this end, the fraction of Mg 2+ ions interacting with the tRNAs is determined with the help of the fluorescence titration method outlined in ref 25. Linear infrared absorption spectra in the range of the asymmetric (PO 2 ) − stretching vibrations ν AS (PO 2 ) − of the tRNA Phe backbone are summarized in Figure 2. Figure 2a shows the infrared bands consisting of two strong components with maxima at 1220 and 1241 cm −1 and a shoulder around 1270 cm −1 . Upon addition of Mg 2+ ions, the infrared absorption undergoes systematic changes, that is, a decrease of absorption on the two strong components and an increase of absorption between 1250 and 1300 cm −1 . To display this behavior more clearly, the absorbance difference of the Mg 2+ -containing samples and the sample without Mg 2+ content was calculated. The resulting spectra in Figure 2b clearly exhibit a differential absorption band around 1270 cm −1 , which rises in proportion to the Mg 2+ concentration with minor changes of line shape. The vibrational spectra of E.c. tRNA behave in a very similar way (not shown). As will be discussed in detail below, the absorption band at 1270 cm −1 is induced by the formation of contact ion pairs (CIPs) of Mg 2+ ions with phosphate groups. In Figure 1c, the peak value of differential absorbance at 1270 cm −1 normalized to the peak absorbance A ref at 1240 cm −1 for R = 0 is plotted as a function of the ratio R of total Mg 2+ to tRNA concentration for both tRNA Phe (solid squares) and E.c. tRNA (open squares). Normalization to A ref makes comparable data that were taken with slightly different tRNA concentrations and sample thicknesses. In the range from R = 0 to 2, the differential absorbance increases by some 0.01. The linear extrapolation of this absorbance increase to higher ratios R is shown as a thick black line. However, the experimental values for both tRNA Phe and E.c. tRNA (symbols) display a more gradual rise which is much weaker than the increase of interacting Mg 2+ ions plotted in Figure 1b. This discrepancy shows that only a fraction of the Mg 2+ ions interacting with tRNA contribute to this particular absorption band, that is, are accommodated as CIPs with tRNA phosphate groups. The measurements of linear infrared absorption spectra were complemented by extensive two-dimensional infrared (2D-IR) experiments in order to separate and characterize the different types of ν AS (PO 2 ) − excitations, including their ultrafast dynamics, in depth. Figure 3 summarizes 2D-IR spectra for (a−e) dialyzed tRNA Phe at different concentration ratios R = c(Mg 2+ )/c(tRNA Phe ) and (f) E.c. tRNA for R = c(Mg 2+ )/c(E.c. tRNA) = 15. The absorptive 2D signal given as the real part of the sum of the rephasing and non-rephasing signal is shown as a function of excitation frequency ν 1 (ordinate) and detection frequency ν 3 (abscissa). The yellow-red contours represent the 2D signals on the v = 0 to 1 transitions of the different vibrations, caused by bleaching of the v = 0 ground state and stimulated emission from the v = 1 state. Three components around 1220, 1245, and 1270 cm −1 are clearly discerned in the v = 0 to 1 2D signals and the cuts of the tRNA Phe spectra along a diagonal line running parallel to ν 3 = ν 1 through the maximum of the 2D signal at ∼1250 cm −1 (Figure 3g). Compared to the linear absorption spectra (Figure 2a), the relative amplitudes of the three components changed markedly, with a pronounced enhancement of the contribution around 1270 cm −1 . The origin of this behavior will be discussed below.
All line shapes are elongated along the diagonal, a fact reflecting inhomogeneous broadening due to a distribution of vibrational frequencies of phosphate groups with a different local environment. Cuts of the 2D spectra along the antidiagonal direction are presented in Supporting Information and reveal a smaller antidiagonal width of the 2D signal contours around 1270 cm −1 than of those at lower detection frequencies. The 2D-IR spectra of E.c. tRNA at R = 15 and at lower values of R (not shown) display a very similar behavior. Results of femtosecond pump−probe experiments with tRNA Phe are presented in Supporting Information. In the absence of Mg 2+ ions, such measurements give lifetimes of the v = 1 state of the ν AS (PO 2 ) − vibrations of 290 ± 30 fs, similar to decay times observed with other DNA and RNA structures. 19,27,28 Upon addition of Mg 2+ , one observes a slowing down of the overall decay at probe frequencies above 1260 cm −1 . This behavior is accounted for by a biexponential signal decay with time constants of 290 and 700 fs (cf. Supporting Information). The structural models used in the vibrational frequency calculations (Figures 4a,b and S9) were constructed by considering two adjacent phosphate groups and the three bridging ribose moieties in the QM region. Additionally, the first solvation shell of phosphate groups was considered in the QM region, containing first-shell water molecules, contact ions, and waters in the first solvation shell of the ions. The QM region of the vibrational frequency simulations comprises, depending on the particular hydration geometry, 52 sugar-phosphate backbone atoms, 7−18 water molecules, and 0−2 ions (73−108 QM atoms, 682−929 atomic basis functions; see Supporting Information). Figure 4a compares the simulated linear infrared absorption spectrum of tRNA Phe in the frequency range of the ν AS (PO 2 ) − vibrations to the experimental spectrum of undialyzed tRNA Phe in water. We find excellent agreement in the frequency position of ν AS (PO 2 ) − covering a range from ∼1180 to 1290 cm −1 , while some deviation in the intensity in the different frequency ranges is recognized. A characteristic feature of the experimental linear infrared absorption spectra of both tRNA Phe and E.c. tRNA is the increase in absorption between 1250 and 1300 cm −1 upon addition of Mg 2+ ions. To characterize the molecular geometries of tRNA Phe that contribute to this spectral range, we have analyzed the contributions of CIPs (blue lines), solvent-shared ion pairs (SSIPs) of (PO 2 ) − groups with Mg 2+ ions (red lines), and all other (PO 2 ) − groups (black) to the vibrational density of states (DOS, inset Figure 4a). We find a predominant contribution from CIPs in the frequency range ν AS (PO 2 ) − = 1247−1285 cm −1 , mimicking the experimental observation. A blue-shift of the vibrational frequency requires the integration of one of the (PO 2 ) − oxygens in the essentially octahedral first solvation layer around the Mg 2+ ion, similar to what has been observed in model systems. 17,18,28 Because of the short Mg 2+ −oxygen distance of approximately 2.1 Å, the vibrational excitation probes the repulsive part of the interaction potential and, thus, a blue-shift arises. There is a single CIP with absorption at a much lower frequency ν AS (PO 2 ) − = 1219 cm −1 , due to the particular geometric structure of this CIP. We have further analyzed the effective electrostatic potential at the surface of tRNA Phe (Figure 4c).
For the helical domains of the acceptor stem and the anticodon region, we find the typical negative surface potential on the order of −40k B T/e ≈ −1.0 V (k B T: thermal energy at a temperature T = 298 K, e: elementary charge), due to the negatively charged (PO 2 ) − groups and in qualitative agreement with findings for double-stranded RNA. 7 However, in the crowded regions of tRNA Phe (D and TΨC loop), the negative electrostatic potential due to the high charge density of (PO 2 ) − oxygens is fully compensated by the presence of a small number of immobilized (contact) Mg 2+ ions, locally inducing a net positive effective surface potential (cf. Figure 1a: M1, M3, M7, M2, M8). The contact interactions of Mg 2+ ions with (PO 2 ) − groups thus (over)compensate the repulsive Coulomb interaction and stabilize the tertiary structure of tRNA. Similarly, the low electrostatic surface potential in the anticodon region arises from the compensation of negative (PO 2 ) − charges in the presence of the Mg 2+ ion together with particularly high solvent accessibility (Figures 1a and 4c,d: position M5). DISCUSSION The combination of dialysis and linear infrared spectroscopy gives insights into interaction patterns between Mg 2+ ions and phosphate groups in the backbone of tRNA. Starting from tRNA Phe and E.c. tRNA samples with negligible magnesium content, the number of interacting Mg 2+ ions rises linearly with the concentration ratio R = c(Mg 2+ )/c(tRNA), as shown in Figure 1b. Up to R ≈ 7, all added Mg 2+ ions interact with the tRNAs. At higher Mg 2+ concentrations, only a fraction of the ions interacts with the tRNAs, leading to the deviation from a linear behavior in Figure 1b. There are no indications of cooperativity of the Mg 2+ uptake in the concentration range shown in Figure 1, a conclusion in line with previous dialysis studies at lower tRNA concentrations. 25 The linear infrared absorption spectra of tRNA Phe (Figure 2) exhibit two strong components with maxima at 1220 and 1241 cm −1 and the shoulder at 1270 cm −1 . The component around 1220 cm −1 is due to phosphate groups fully exposed to water with separate hydration shells consisting of up to 6 water molecules and a prototypical tetrahedral hydrogen-bond arrangement around the (PO 2 ) − oxygens. 27 The absorption around 1241 cm −1 is due to ν AS (PO 2 ) − vibrations of phosphate groups with an under-coordination in the number of water molecules, including "ordered" hydration environments consisting of chain-like arrangements of water molecules. The absorption around 1270 cm −1 , a prominent component of the differential absorption spectrum of Figure 2b, is a hallmark of CIP formation, as is evident from the theoretical calculations and previous work on model systems. 17,18 The CIP infrared absorption around 1270 cm −1 rises with the Mg 2+ concentration (Figure 2b). Its peak value saturates as a function of the concentration ratio R (Figure 1c) but at much lower Mg 2+ concentrations than the number of Mg 2+ ions interacting with tRNA Phe and E.c. tRNA (Figure 1b). The differential absorbance at 1270 cm −1 (Figure 1c) reaches a value of up to 3% of the peak absorbance of tRNA at 1240 cm −1 . Assuming a similar molar extinction coefficient of the ν AS (PO 2 ) − vibrations of CIPs and phosphate groups without a Mg 2+ ion nearby, one estimates a minimum number of 3 CIPs per tRNA molecule. On the other hand, the relative strengths of the 2D-IR signals at 1240 and 1270 cm −1 (cf.
Supporting Information, Table S1) suggest the existence of 6 ± 2 CIPs per tRNA for R = 15. We consider the latter number an upper limit of the number of CIPs per tRNA. The CIPs are expected to be formed at sites with a high negative charge density from phosphate groups, such as the sites M3, M7/M8, M1, and M2 (Figures 1a and 4d). The discrete number of Mg 2+ ions inverts the sign of the effective electrostatic surface potential, thus stabilizing the tertiary tRNA structures locally. Our experimental and theoretical results provide clear evidence for the existence of CIPs in the equilibrium structures of tRNA Phe and E.c. tRNA. Such CIPs represent the "strongly interacting ion species" which has been discussed in the literature. 25 Their impact on the electrostatic potential at the crowded sites of tRNA (M1−M2, M3, M7−M8) is much stronger than the contribution from long-range electric fields originating from the distant outer ion atmosphere and from contact ion pairs with Na + (cf. Supporting Information). This fact shows that CIPs play a prominent role in stabilizing the tertiary folded cloverleaf structure of tRNA. It should be noted that the water molecules around phosphate groups without Mg 2+ ions make a major contribution to the electrostatic potential (cf. Figure S11b). Our results are in contrast to predictions from PB treatments, which claim that Mg 2+ ions solvated in the outer ion atmosphere are the structure-stabilizing constituents. 24 The surface electrostatic potentials derived in ref 24 are substantially lower (∼20%) than the potentials shown in Figure 4c. Such small potentials fail to account for the electric field-dependent frequency positions of the ν AS (PO 2 ) − vibrations. 28,29 PB theory neglects the direct contribution of water molecules to the electrostatic potential and uses the static dielectric constant of water to scale the bare Coulomb interaction potential. Given the subtle balance of attractive and repulsive molecular interactions in this complex many-body system of fluctuating charges, these two approximations appear inappropriate. The 2D-IR spectra presented in Figure 3 give information on dynamics at the molecular scale and on interactions between the different charged and polar constituents of hydrated tRNA. The 2D-IR spectra display strong overlapping diagonal peaks (yellow-red contours) around detection frequencies ν 3 = 1220 and 1245 cm −1 , which are complemented by a shoulder-like feature around 1270 cm −1 , the strength of which rises with the Mg 2+ ion concentration. There are no cross-peaks in any of the 2D-IR spectra, that is, vibrational couplings between the different diagonal components are minor. This fact is a clear indication that the different diagonal contributions originate from phosphate groups which are mainly uncoupled and embedded in different local environments. For a quantitative analysis of the line shapes in the 2D-IR spectra of tRNA Phe , we performed simulations based on a density matrix approach for describing the nonlinear vibrational response. 30 This treatment includes four vibrational transitions centered at 1220, 1245, 1270, and 1280 cm −1 (cf. Figure 2). The frequency fluctuation correlation function (FFCF) of the aqueous environment is accounted for by a Kubo ansatz with two exponential terms of 300 fs and 50 ps decay time.
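For reference, the Kubo ansatz used in such line-shape simulations can be written explicitly. The form below is the standard textbook expression in our own notation; the fluctuation amplitudes Δ1 and Δ2 are not quoted in the text and are left symbolic:

\[
C(t) = \langle \delta\omega(t)\, \delta\omega(0) \rangle
= \Delta_1^2\, e^{-t/\tau_1} + \Delta_2^2\, e^{-t/\tau_2},
\qquad \tau_1 = 300~\mathrm{fs}, \quad \tau_2 = 50~\mathrm{ps},
\]

from which the line-broadening function entering the linear and 2D response follows as

\[
g(t) = \int_0^t \! dt' \int_0^{t'} \! dt''\, C(t'')
= \sum_{i=1,2} \Delta_i^2 \tau_i^2 \left( e^{-t/\tau_i} + \frac{t}{\tau_i} - 1 \right).
\]

In the motional-narrowing limit (Δ_i τ_i ≪ 1) a Kubo term contributes a Lorentzian width, whereas slow fluctuations produce the elongation of the 2D contours along the diagonal seen in Figure 3.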
The simulated line shapes include lifetime broadenings which are calculated with vibrational lifetimes of 290 fs for the 1220 and 1245 cm −1 components and 700 fs for the 1270 and 1280 cm −1 contributions. A comparison of experimental and simulated spectra is presented in Supporting Information (Figures S4 and S5) and shows good agreement in the overall line shapes. Of particular interest is the 2D-IR signal around ν 3 = 1270 cm −1 , which is due to CIPs and much more pronounced than the linear absorption at 1270 cm −1 in the ν(PO 2 ) − absorption spectrum (Figure 2). The higher relative amplitude in the 2D-IR spectra is mainly caused by (i) the longer vibrational lifetime of the 1270 cm −1 excitations in comparison to those at 1220 and 1245 cm −1 (700 vs 290 fs) and (ii) the reduced amplitude of the fast fluctuation component in the FFCF. At the population time T = 300 fs at which the 2D-IR spectra of Figure 3 were recorded, the 1220 and 1245 cm −1 signals have decayed to some 35% of their maximum value, while the 1270 cm −1 contribution is at 70% of its initial value. The reduced amplitude of the fast decay in the FFCF points to a more rigid hydration structure in CIP environments, partly due to the strong impact of the local electric fields on the orientation of water molecules. Such experimental observations are corroborated by results from MD simulations that analyze the tRNA Phe first solvation shell. The hydrogen-bond angular distribution of hydration geometries in the first solvation shell around phosphate groups (Figure S10) is bimodal, arising from direct hydrogen bonds with the oxygen atoms of the phosphate group and from water molecules that occupy the first solvation shell of Mg 2+ ions in CIPs. The predominant contribution to the hydration geometries of water molecules in the first solvation shell of Mg 2+ ions is characterized by vibrational frequencies ν(PO 2 ) − > 1255 cm −1 in the simulated linear infrared absorption spectra (Figures 4a and S9). The results are, thus, indicative of a more rigid hydration shell around CIPs, as manifested in the smaller fluctuation amplitudes in the 2D spectra, where a substantially smaller width of antidiagonal 2D cuts at 1270 cm −1 is observed compared to 1245 cm −1 . CONCLUSIONS In conclusion, we have studied the electrostatic properties of tRNA Phe and E.c. tRNA embedded in an aqueous environment which contains Mg 2+ ions. A combination of dialysis, fluorescence spectroscopy, linear infrared spectroscopy, femtosecond 2D-IR spectroscopy, MD simulations, and ab initio calculations of the tRNA Phe sugar−phosphate vibrational frequencies gives evidence of a prominent role of Mg 2+ −phosphate contact ion pairs in stabilizing the folded tertiary structure of tRNA. The formation of contact ion pairs is manifested in a blue-shift of the infrared transition of the asymmetric (PO 2 ) − stretching vibration to frequencies around 1270 cm −1 , a behavior present in both the linear and the 2D-IR spectra. Up to six contact ion pairs are formed per tRNA, predominantly at positions with a high negative charge density from phosphate groups. Addition of a Mg 2+ ion to such distinguished sites results in stabilization of the folded tertiary structure because of strong attractive electrostatic interactions. The Mg 2+ contact sites found in the present work agree with results from X-ray diffraction studies.
The double-helical parts of the folded tRNA structures display an overall negative surface potential which is compensated for by water molecules as well as by Mg 2+ and other cations separated by one or more water layers from the tRNA. Our results underline the need for probing electric fields at the local molecular level and for atomistic simulations of the local interaction geometries. They demonstrate the predominance of local over long-range electrostatic interactions in defining the tertiary RNA structure. Supporting Information: materials and methods, results of pump−probe experiments, simulation of 2D-IR spectra, and theory results (PDF).
2020-12-09T14:06:46.217Z
2020-12-07T00:00:00.000
{ "year": 2020, "sha1": "d8e9bd832e991092708e95ebc688c6dabf0d1b40", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpcb.0c08966", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "9829a1f8fb9b8a97566cc3eafe8e028a254e4546", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Physics", "Medicine", "Chemistry" ] }
39451561
pes2o/s2orc
v3-fos-license
The Potential That Electronic Nicotine Delivery Systems Can be a Disruptive Technology: Results From a National Survey Introduction: This study evaluates the reasons for use and acceptance of Electronic Nicotine Delivery Systems (ENDS) among current and former cigarette smokers to assess if ENDS may become a satisfying alternative to cigarettes. Methods: Data are from a national probability sample of 5717 US adults, surveyed June–November 2014. The survey contained questions on awareness, usage, and reasons for use of traditional and novel tobacco products. The analytic sample was current and former smokers who ever used ENDS (n = 729) and was divided into four mutually exclusive categories. Among the 585 current smokers, 337 were no longer using ENDS (“E-Cig Rejecters”), and 248 were continuing to use both ENDS and cigarettes (“E-Cig Dual Users”). Among 144 former cigarette smokers, 101 were non-recent users of ENDS (“Quit All Products”), and 43 were continuing to use ENDS exclusively (“Switchers”). Results: Former smokers (the “Switchers”) report finding ENDS a satisfying alternative to regular cigarettes, with only 15.8% (95% confidence interval [CI] 4.4–27.1) rating ENDS as less enjoyable than regular cigarettes. However, greater than fivefold more current smokers did not find them satisfying and stopped using them (77.3%; 95% CI 72.1–82.4 of “E-Cig Rejecters” rated ENDS as less enjoyable). Being less harmful was the most highly rated reason for continuing to use ENDS among “Switchers.” Most (80.9%) “Switchers” reported that ENDS helped them quit cigarettes. Conclusion: Since many current smokers who have tried ENDS reject them as a satisfying alternative to regular cigarettes, ENDS will not replace regular cigarettes unless they improve. Implications: Since about one-half of recent former smokers are trying ENDS with about one-fourth continuing to use them, and many reporting that these products have helped them quit regular cigarettes, the potential impact of ENDS on population quit rates deserves continued surveillance. However, since most current smokers who have tried ENDS reject them as a satisfying alternative to regular cigarettes, the potential of ENDS becoming a disruptive technology replacing regular cigarettes remains uncertain. ENDS need to improve as a satisfying alternative or the attractiveness and appeal of the regular cigarette must be degraded to increase the potential of ENDS replacing regular cigarettes. Introduction Over the last 50 years, progress in tobacco control is estimated to have saved approximately 8 million Americans from premature death caused by smoking; however, over the same period, smoking has caused the deaths of more than 20 million Americans. 1,2 Moreover, each year about an additional 500 000 Americans will die prematurely from a smoking-related illness. 1 Recognizing that this persisting deadly toll is caused primarily by the highly engineered, addictive, and lethal cigarette, the public health community has been debating whether the emerging Electronic Nicotine Delivery Systems (ENDS) could become a disruptive technology that would compete with combusted tobacco products as satisfying and efficient alternative sources of nicotine and thus disrupt the entire tobacco product market. 
[3][4][5][6][7][8][9][10][11][12] Unlike sustaining innovations, which improve an existing product, disruptive innovations fundamentally differ from existing technologies, usually by being less complicated, more accessible, and less expensive (eg, "…the Kodak moment when, with the rise of digital processes, photographic film manufacturers were left with an obsolete technology [p. 653]"). [12][13][14] Thus, for the cigarette smoker who is unable or unwilling to discontinue using nicotine, an alternative product which provides "nicotine without the hazards associated with smoking (p. 654)" can be a disruptive innovation if it becomes widely adopted. 12 While the efficiency of nicotine delivery is a key product characteristic in defining ENDS as a potential disruptive innovation, 3,5,12,[15][16][17][18][19][20] research has shown that perceptions about the potential health benefits of ENDS are a primary predictor of use among cigarette smokers. [21][22][23][24][25][26][27] Most of the evidence about ENDS users' reasons for use and satisfaction has focused on differences between ever users and current users and was based on use of earlier or first-generation models of e-cigarettes (ie, from 2010 to 2013). [21][22][23][24][25] However, ENDS products are evolving and improving beyond the initial "cig-alike" designs. [15][16][17][18][19][20]28 Recently, Finney-Rutten and colleagues reported that the use of ENDS by current smokers in 2014 was motivated by potential health benefits and related to higher intentions to quit or reduce the amount smoked. 27 Another recent survey in the United States (May, 2014) by Rass and colleagues reported that among dual users of ENDS and tobacco cigarettes, the primary reasons for using ENDS were harm reduction and smoking cessation. 26 However, neither these studies, 26,27 nor other national surveys of ENDS use, [29][30][31] have evaluated whether recent former smokers in the United States are finding ENDS a satisfying alternative to the regular cigarette. Preliminary findings from the nationally representative US National Health Interview Survey (NHIS) of 36 697 civilian adults aged 18 and over living in US households in 2014 reported that almost half of current smokers had ever used ENDS (ie, ever tried, even just one time), with 15.9% currently using ENDS (ie, at least once during the past 30 days), and 22% of recent former smokers (quit within the past year) currently using ENDS. [32][33][34] Additionally, NHIS estimates of current smoking prevalence have shown recent declines. 35 These declining smoking rates plus the high rates of ENDS use among recent former smokers are consistent with a potential emerging pattern of tobacco product market disruption. However, the NHIS does not include data on attitudes and reasons for use of ENDS needed to evaluate whether recent former smokers are finding ENDS a satisfying alternative to the regular cigarette. Our 2014 Tobacco Products and Risk Perception Survey observed rates of ENDS use similar to those reported in the 2014 NHIS; 51.1% of current smokers reported ever using ENDS, with 20.7% currently using ENDS, and 25.9% of recent former smokers (quit within the past year) using ENDS.
34 Since the probability sample used in the Tobacco Products and Risk Perception Survey provides representative estimates of non-institutionalized US adults and includes more detailed data on use of ENDS, we examined attitudes and reasons for use of ENDS among four groups of recent former and current smokers to assess the potential that ENDS could become a disruptive technology that replaces combusted tobacco products in the United States. 3,6,8,9,12 Procedure and Sample This study used data from the 2014 Tobacco Products and Risk Perceptions Survey conducted June-November, 2014 by the Georgia State University (GSU) Tobacco Center of Regulatory Science (TCORS). This survey is an annual, cross-sectional survey of a probability sample drawn from GfK's KnowledgePanel, an online web panel designed to be representative of non-institutionalized US adults; the survey sample includes a representative oversample of pre-identified cigarette smokers selected with probabilities proportional to size (PPS) after application of the panel demographic post-stratification weight. Overall, we invited 7991 KnowledgePanel members to participate in the survey: 7061 members for the general population sample, of which 74.3% completed the screener survey and qualified for the main survey; and 930 members for the smoker augment sample, of which 697 completed the screener and 599 (74.9%) qualified for the main survey by confirming their current smoking status. Thus, from the 7991 KnowledgePanel members invited to participate in the survey, we obtained a sample of 5833 qualified participants who completed the survey. After excluding 116 cases for refusing to answer more than one-half of the survey questions, the final sample was 5717 cases, yielding a final-stage completion rate of 74.4% and a qualification rate of 98.2%. The sample of interest for the present study consisted of 729 current and former smokers who reported ever use of ENDS. This study was approved by the GSU Institutional Review Board. 34 More details on the survey sample, weights, and missing data are provided in the Supplementary Material. Cigarette Smoking Status Smoking status was assessed using two items: "have you smoked at least 100 cigarettes in your lifetime" and "when was the last time you smoked a cigarette, even one or two puffs?" Respondents who reported not having smoked at least 100 cigarettes in their lives were classified as "never-smokers." Of the remaining respondents, those who reported having smoked at least 100 cigarettes and reported currently smoking cigarettes "every day" or "some days" were classified as "current smokers," and those who responded "not at all" were classified as "former smokers." Former smokers who reported current use of any other combustible tobacco product (eg, little cigars or cigarillos, large cigars, and/or hookah) were excluded from the analysis. Use of ENDS Awareness and use of ENDS were assessed by asking respondents if they had heard of the product before taking the survey and, if so, whether they had ever tried the product, even just one time. Prior to the questions assessing awareness and use of these products, respondents were shown descriptions and images of ENDS. The description for ENDS used the descriptor "e-cigarette" to broadly include ENDS products. Those respondents who indicated they had tried one or more of the products were asked whether they had used each of the products at least once during the past 30 days. Respondents reporting past 30-day use were considered current users.
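The smoking-status coding described above is a simple decision rule; a minimal illustrative sketch follows (the function and variable names are ours, not the study's codebook):

def classify_smoking_status(smoked_100_cigarettes: bool, current_frequency: str) -> str:
    # Never-smokers: fewer than 100 lifetime cigarettes.
    if not smoked_100_cigarettes:
        return "never-smoker"
    # Current smokers: 100+ lifetime cigarettes and currently smoking
    # "every day" or "some days".
    if current_frequency in ("every day", "some days"):
        return "current smoker"
    # Former smokers: 100+ lifetime cigarettes and currently smoking "not at all".
    return "former smoker"

# Example: a respondent with 100+ lifetime cigarettes who now smokes "not at all".
print(classify_smoking_status(True, "not at all"))  # former smoker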
Weaver et al. 34 previously reported the patterns of use of ENDS and other traditional tobacco products in this 2014 online sample of 5717 US adults. Among the overall weighted sample, 16.6% (95% confidence interval [CI]: 15.6-17.6, unweighted n = 1349) were current cigarette smokers and 27.6% (95% CI: 26.3-28.8, unweighted n = 1554) were former cigarette smokers. Among the current smokers, 51.1% (unweighted n = 585) reported ever use of ENDS, and 20.7% (unweighted n = 248) reported use of ENDS in the past 30 days. Among the former smokers, 13.1% (unweighted n = 164) reported ever use of ENDS, and 3.8% (unweighted n = 52) reported use of ENDS in the past 30 days. Twenty of these 164 former cigarette smokers were excluded from the analysis due to missing data (n = 5) or due to current use of a combustible tobacco product (eg, little cigars or cigarillos, hookahs, or large cigars; n = 15), yielding a sample of 144, among whom 101 were non-recent users of ENDS and 43 were continuing to use ENDS exclusively. ENDS and Smoker User-Groups The selected sample (unweighted n = 729) of current and former smokers reporting ever use of ENDS was classified into four mutually exclusive groups based on their current use of ENDS: E-Cig Rejecters, E-Cig Dual Users, Quit All Products, and Switchers. Current cigarette smokers who had ever used ENDS but who were no longer using them were classified as E-Cig Rejecters (n = 337). E-Cig Dual Users (n = 248) were current cigarette smokers who were also currently using ENDS. "Quit All Products" (n = 101) were defined as former smokers who had tried ENDS but were no longer using them or any combustible tobacco product. Switchers (n = 43) were former smokers who reported use of ENDS in the past 30 days but reported no current use of any other combustible tobacco product (eg, little cigars or cigarillos, hookahs, or large cigars). Supplementary Table S1 provides a summary of these four mutually exclusive groups. Attitudes, Affect, and Reasons for Use of ENDS Respondents were asked a series of questions about their attitudes toward ENDS, reasons for use, and perceptions of how ENDS compared with smoking regular cigarettes. To measure affect toward ENDS, respondents were asked: "If you were to use an e-cigarette, would it make you feel…?" Similarly, they were asked: "How tense or relaxed would using an e-cigarette make you feel?" Current and former smokers who had ever used ENDS were asked: "How would you compare the experience of using e-cigarettes to smoking regular cigarettes?" (Responses included: "E-cigarettes are more enjoyable," "About the same," or "E-cigarettes are less enjoyable"). To assess reasons for using ENDS, respondents were asked: "For each reason listed, please indicate how important it is to you in your use of e-cigarettes." Reasons were presented in random order: 1. I could use them in places where regular cigarette smoking isn't allowed 2. E-cigarettes are less harmful to me than regular cigarettes 3. E-cigarettes are less harmful to those around me than regular cigarettes 4. E-cigarettes could help me quit smoking regular cigarettes 5. E-cigarettes could help me reduce the number of regular cigarettes I smoke 6. Using an e-cigarette feels like smoking a regular cigarette 7. E-cigarettes are more acceptable than regular cigarettes 8. To satisfy my curiosity 9. They come in flavors I like 10.
People who are important to me use e-cigarettes. ENDS and Quitting Regular Cigarettes Former smokers who had ever tried ENDS were asked: "Did using e-cigarettes help you quit smoking regular cigarettes?" (Responses included: "Yes," "No," and "I don't know") and "How likely are you to go back to smoking regular cigarettes in the future, now that you've used e-cigarettes?" (Responses included: 1 = "Very unlikely" to 5 = "Very likely"). All respondents were also asked: "In your opinion, are cigarette smokers who also use e-cigarettes more likely to quit smoking regular cigarettes, less likely to quit, or equally likely to quit smoking regular cigarettes." Respondent Characteristics Demographic and other respondent characteristics data were obtained from profile surveys administered by GfK to all KnowledgePanel panelists. Respondent characteristics included self-reported sex, age, race/ethnicity, educational attainment, annual household income, US Census region, perceived health status, sexual orientation, and presence of a child (under 18 years) in the home. Statistical Analysis We used SAS 9.4 to obtain design-based (weighted) point estimates and 95% CIs. Bivariate associations among variables were tested using Rao-Scott χ 2 tests, 36 and between-group differences were tested using multinomial logistic regression, or weighted F or t-tests. Prior to conducting these analyses, we assessed the extent and ignorability of missing data for "ever use" and past 30-day use questions for the tobacco products. On the basis of these checks, respondents with missing data were excluded from further analyses under the data-supported assumption that missingness is ignorable and completely at random (See Weaver et al. 34 Supplementary Material for an expanded summary of the missing data.). Results Table 1 reports the population demographics of the selected sample (n = 729) of current and former smokers reporting ever use of ENDS and the four mutually exclusive groups based on their current use of ENDS: E-Cig Rejecters, E-Cig Dual Users, Quit All Products, and Switchers. In multinomial logistic regression analyses, the four groups were similar in sociodemographic distributions for sex, age, race/ethnicity, household income, perceived health status, sexual orientation, and presence of children under 18 in the home, although E-Cig Rejecters and E-Cig Dual Users were less likely than Switchers to have a college degree (P = .002 and P = .044, respectively). E-Cig Rejecters also were less likely than Switchers to live in the West region of the United States (P = .0235). Weighted frequencies by population demographics are provided in Supplementary Table S2. Figure 1 displays the mean rating of importance for each of the 10 reasons for using ENDS across the four groups. Details on statistical testing of these ratings across the four groups are provided in Supplementary Table S3. Switchers rated ENDS "were less harmful to me than regular cigarettes" and "could help me quit smoking regular cigarettes" as highly important reasons for using ENDS, with ratings significantly higher than those of the E-Cig Rejecters and the Quit All Products groups (P < .05).
Rating that "ENDS were less harmful for others around me than regular cigarettes," and that the ENDS could help users reduce cigarette consumption also were significantly more important reasons among the Switchers than the Quit All Products (P < .05). Similarly, ratings for the reason that ENDS "are more acceptable than regular cigarettes" were significantly higher among the Switchers than the Quit All Products (P < .05). Ratings on the importance of the reason "To satisfy my curiosity" overall were somewhat lower and did not differ between the four groups. The ratings between groups were significantly different for "They come in flavors I like" (P < .0001) with this reason being rated higher among Switchers than the Quit All Products (P = .003). This is consistent with 74% (95% CI: 60. As displayed in Figure 2, the four groups varied significantly when asked to imagine how they would feel using an ENDS, from very bad to very good (Rao-Scott χ 2 = 107.06, P < .0001) and from very tense to very relaxed (Rao- 39.9% (95% CI: 28.6-51.1), respectively, imagined that they would feel somewhat or very relaxed using an ENDS. Figure 3 displays the differences in opinions about "How would you compare the experience of using E-cigarettes to smoking regular cigarettes" among the four groups (Rao-Scott χ 2 = 111. 79 Affect towards using Electronic Nicotine Delivery System. Note. Depicted by the darker shade bars are the percent that reported feeling "very good" or "somewhat good" to the question: "Please imagine how you would feel using an e-cigarette. If you were to use an e-cigarette, would it make you feel…. " Depicted by the lighter shade bars are the percent that reported feeling "very relaxed" or "somewhat relaxed" to the question: "How tense or relaxed would using an e-cigarette make you feel. " (95% CI: 4.4-27.1) rated ENDS as less enjoyable than smoking regular cigarettes. Overall, 87.8% (95% CI: 85.0-90.6) endorsed the opinion that cigarette smokers who also use ENDS are equally or more likely to quit smoking regular cigarettes. Among the Switchers, 80.9% (95% CI: 68.3-93.6, weighted n = 1 396 753) reported that using ENDS helped them quit smoking regular cigarettes, whereas only 25.4% (95% CI: 16.0-34.8, weighted n = 1 037 727) of the Quit All Products reported this. Among these approximately 1 million Quit All Products former smokers, an estimated 460 000 quit smoking regular cigarettes within the past year (Supplementary Table S5 provides more detailed data on self-reported time since smoking last cigarette among Quit All Product and Switchers groups). Among the approximately 1.4 million Switchers who reported that using ENDS helped them quit smoking regular cigarettes, an estimated about 920 000 quit within the past year. Although this cross-sectional survey did not have extensive questions on their smoking cessation process, these data suggest that about 2.4 million former smokers perceived that the use of ENDS may have helped in quitting use of regular cigarettes, with about 1.4 (460K + 920K) million having quit in the past year. No former smokers who had used ENDS but then had quit all nicotine products (Quit All Products) rated the likelihood of going back to smoking regular cigarettes as somewhat or very likely. Among the Switchers, 2.0% (95 CI: 0.0-6.1) rated relapse back to regular smoking as very likely, and 5.2% (95% CI: 0.0-12.4) rated it as somewhat likely. 
Discussion ENDS need to improve as a satisfying alternative or the attractiveness and appeal of the regular cigarette must be degraded to increase the potential of ENDS developing into a disruptive technology that most smokers may adopt in place of the cigarette. 3,[5][6][7]9 About one-fourth of recent former smokers (the Switchers) reported finding ENDS a satisfying alternative to regular cigarettes (almost 85% rating ENDS as equally or more enjoyable than cigarettes). However, more than fivefold more current smokers (9 443 788 ÷ 1 787 584 = 5.28) did not find them satisfying and stopped using them ("E-Cig Rejecters"). Consistent with other recent research, 26,27 we found that 6.8 million "E-Cig Dual Users" had more mixed perceptions about how satisfying ENDS were and continued to smoke regular cigarettes along with using ENDS. These varying patterns of acceptance and rejection of ENDS as a satisfying alternative to regular cigarettes provide some insights into the factors about ENDS that will be most important in evaluating their potential to develop into truly disruptive innovations beyond efficiency in nicotine delivery (ie, documented health benefits, public acceptability, and flavors). 3,[5][6][7]10,12,[22][23][24][25][26][27]37 Evidence from the United Kingdom has emphasized the need for "clear information on the relative harm of cigarettes and e-cigarettes," especially among smokers trying to quit who are seeking a less harmful source of nicotine. 4,17,[38][39][40][41][42] The comparative costs of ENDS versus regular cigarettes also could be very important in wider adoption of ENDS as an alternative. 27,43,44 Additionally, the E-Cig Dual Users and Switchers were more likely to use ENDS products other than the basic E-cigarette, suggesting that continuing product innovation could increase users' satisfaction (Supplementary Table S4). To encourage a potentially positive pattern of ENDS replacing cigarettes, it can be argued that efforts are needed from the public health community to reduce the appeal and attractiveness of the cigarette and other combusted tobacco products; namely, decreasing the product, promotion, placement, and price advantage of these more lethal combusted tobacco products. 1,45 The passage of the 2009 Family Smoking Prevention and Tobacco Control Act (Tobacco Control Act) provides the FDA with authority to regulate cigarettes and smokeless tobacco, including the authority to adopt product standards, such as reducing nicotine levels 1,37,[46][47][48] or requiring the smoke to have a higher pH, 46 and graphic warning labels to reduce the "promotional" appeal. 1,49-51 Educational campaigns 1,52,53 and sales restrictions using "time-place-manner" local authority, 1,54-56 plain packaging, 57 raising the minimum age of legal access to 21, 58,59 and raising the average retail "price" of combusted tobacco products have also been recommended. 1,43,44,60 A range of potential regulatory options for ENDS have been reviewed 10,17,42,55,[61][62][63][64][65][66][67] ; however, ENDS include a wide variety of electronic cigarettes (disposable and rechargeable) and tank-style systems that can be modified, thus forming a very heterogeneous group of products 4,5,15 whose safety and potential harm and benefit are difficult to evaluate.
5,10,17,61,68 Although the 2009 Tobacco Control Act did not originally cover ENDS, 69 once the proposed deeming rule is finalized, the FDA may have the authority to regulate ENDS, creating premarket approval requirements and product standards appropriate for the protection of public health. How the population health standard in the 2009 Tobacco Control Act will be implemented in regulating ENDS as a disruptive technology has been widely debated. [3][4][5][8][9][10]42,61,63,[70][71][72] A positive impact on adult cessation rates would be important, 73,74 but the overall impact on population health would also depend on the impact of ENDS on initiation among youth and young adult never smokers. 62,73,75,76 Our 2014 data estimate that about 2.4 million US adults were helped in quitting regular cigarettes by using ENDS, with 1.4 million of these former smokers quitting in 2014. However, we cannot assess how many of these former smokers would have quit without using e-cigarettes, nor how many "dual users" may have been delayed from quitting cigarettes. Hence, the impact (if any) of these 2.4 million former smokers on national quit rates cannot be determined by this study.

Limitations

This study has multiple limitations. First, the use of the internet panel may raise concerns about sample representativeness, especially if the panel has been used in prior tobacco research. Mitigating this concern, however, is internal research by GfK that suggests minimal panel conditioning from participation in prior tobacco research. 77 Second, the data are based upon self-report, and biochemical verification of cigarette smoking and use of other products could not be conducted. While the validity of self-reported cigarette smoking has been confirmed, 1,78 the accuracy of self-report of other products, particularly novel products, has not been evaluated and remains uncertain. Third, the rapidly changing nature of ENDS makes accurate questionnaire descriptions and terminology difficult to define. Fourth, the 2014 Tobacco Products and Risk Perception Survey had limited assessments of smoking cessation behaviors among ENDS users. Fifth, the cross-sectional design of this study makes it very difficult to assess the actual impact on population quit rates of these self-reports of using ENDS to quit, or how much dual use may be delaying smoking cessation.

Conclusion

The potential of ENDS to have positive impacts on population health remains uncertain. 1,41,42,61,62,[73][74][75]79,80 Since about one-half of recent former smokers have tried ENDS, with about one-fourth continuing to use them, and many report that these products helped them quit regular cigarettes, the potential impact of ENDS on population quit rates deserves continued surveillance. However, since most current smokers who have tried ENDS reject them as a satisfying alternative to regular cigarettes, the potential of ENDS becoming a disruptive technology replacing regular cigarettes remains uncertain. If the level of acceptance of ENDS among some recent former smokers (almost 85% of the Switchers rating ENDS as equally or more enjoyable than cigarettes) could be achieved among all current smokers who are trying ENDS, the potential of ENDS becoming a disruptive technology replacing regular cigarettes would dramatically increase. This outcome could become more likely if ENDS products continue to improve or the attractiveness and appeal of regular cigarettes is degraded.
Supplementary Material

Supplementary Material and Tables S1-S5 can be found online at http://www.ntr.oxfordjournals.org

Funding

Research reported in this publication was supported by grant number P50DA036128 from the NIH/NIDA and FDA Center for Tobacco Products (CTP). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the FDA.
2018-04-03T05:39:13.830Z
2016-05-03T00:00:00.000
{ "year": 2016, "sha1": "a01385927ea967cea59993dc68d8a95d7c6148fc", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/ntr/article-pdf/18/10/1989/19523535/ntw102.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a01385927ea967cea59993dc68d8a95d7c6148fc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
240288660
pes2o/s2orc
v3-fos-license
Stand-alone device for IoT applications

Internet of Things (IoT) is a digital world of connected and talking devices, providing room for countless and diverse smart applications. This paper proposes one such IoT-enabled stand-alone device with numerous capabilities: (i) interaction with the user, (ii) required application selection, (iii) data sensing, (iv) data publishing, and (v) decision making and actuation. The algorithm allows the user to pick an application and input specific data for calibration, which on completion enables the device for further working. To verify and test its capability, a smart home garden environment is created using this device together with temperature, humidity, and soil moisture sensors and actuators. As real-time communication is inevitable for an IoT application, the sensor data is published to a Mosquitto MQTT broker to permit real-time remote access. The decision taken by the device is sent to actuators via a relay; thus a continuous monitoring process is achieved. Results are obtained for the application which prove the device's suitability for IoT applications.

I. INTRODUCTION

The rapid growth in digitization has made the Internet a basic necessity and made the world digital, where people have limited time and lower accuracy when compared to machines. If we had systems that knew everything about different things, based on gathered data and without any human assistance, life would be much easier. It would become very simple to keep track of and count everything, and to reduce waste, loss, and cost to a great extent; we would know when a thing needs to be repaired, replaced, etc. A future evolution of the Internet, the Internet of Things (IoT) [1], can be described as a system of inter-related computing devices, objects, things, animals, people, etc. that possess the capability to share information over a network without human involvement. The Internet has been expanding every day, which gives rise to new opportunities for creating a link between the real physical world and the virtual world, and this collaboration makes both computing and networking more efficient, adaptable, secure, and reliable for e-planes, vehicle collision avoidance, WSNs, device monitoring, e-health, and other such applications. The evolution of IoT has arisen from the convergence of various micro-electromechanical systems, micro-services, wireless technologies, and the Internet, which has allowed the random and unstructured data generated by machines to be analyzed for driving improvements. IoT comprises sensors, communication infrastructure, processing units, decision-making algorithms, and the action-invoking unit. Each object is unique and can be accessed over the Internet. The sensors gather information and communicate it over the Internet to the processing unit. Finally, the processing unit output is passed to the decision-making algorithm for further action. Internet growth has boosted the technical power of the devices in users' hands, and it has connected billions of such objects all over the globe. The devices differ in dimensions, processing capabilities, and computational complexities, and support distinct applications. This pushes the conventional Internet to amalgamate with the sophisticated and smart IoT. It has the potential to connect devices and embed intelligence in the designed systems, making them capable of processing sensor information smartly and taking practical, neat decisions without human involvement.
IoT can thus give rise to a wide range of useful and competent applications that humans had never envisioned before. It gives an opportunity to establish a wide range of applications, and many such applications have been successfully put into use. A few include wearables, smart grid, smart infrastructure, connected car, smart retail, smart supply chain, industrial internet, etc. These applications have been explored, and some existing IoT products are mentioned in Table I. These things communicate with each other and with the server using various protocols at different layers, as given below:
• Infrastructure: 6LoWPAN, IPv4/IPv6, RPL.
• Identification: EPC, uCode, IPv6, URIs.
Now being called the language of the IoT, MQ Telemetry Transport (MQTT) [2] is an obvious candidate for any standard aiming to bring a common network service layer to the IoT architecture. In this paper the device-server communication is done using the MQTT protocol due to its simplicity and low latency for data transfer. The paper is organized as follows: Section II introduces the stand-alone device, explains its algorithm, and explores its capabilities, while its results for a home gardening application are shown in Section III. Finally, the paper is concluded in Section IV, and references are provided at the end.

II. STAND-ALONE DEVICE AND ALGORITHM

This section introduces the proposed stand-alone device and the algorithm. The device capabilities, seen in Figure 1, include intelligence using which it gets data from the sensors and uses current and previous data history to predict and act accordingly for numerous applications. It also allows tracking of sensor data using the MQTT protocol. The system architecture using the proposed device is shown in Figure 2, and the device algorithm is given by Algorithm 1. Figure 2 shows that multiple sensors are connected to the device, an ESP8266, which operates as per the proposed Algorithm 1; the data is published to the broker and can be remotely observed on a mobile, dashboard, etc. The device runs an HTML webpage where the user can provide inputs as per the application environment, and it has sensing and decision-making capabilities using sensors and actuators, respectively. The GPIO pins on the device connect multiple sensors and actuators as per requirement, and application-specific calibration can be done. The algorithm on the device runs multiple routines that are responsible for its efficient working, and its pseudo code is given in Algorithm 1. Here, S: set of sensors for fetching physical data, δ: application-specific decision, α: actuators required. The algorithm first runs in a user-device communication mode for a pre-defined time interval. In this mode the device takes inputs from the user, if any, and saves them temporarily onto the device for further usage. After time-out it disables this mode and starts working normally, where the necessary computations are done in a one-time device setup routine. After successful calibration and setup, the device then starts sensing the data using the attached sensors and continuously looks for actuation if needed. It also enables the user to track and monitor the sensor data anytime and from anywhere. This is achieved by uploading the data to an MQTT server, through which the user is aware of the present scenario and can manually control the actuators if needed. As seen in step 7 of Algorithm 1, the device establishes a connection with Mosquitto, an open-source MQTT broker, for publishing the sensor data.
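As an illustration of these steps of Algorithm 1, the following Python sketch mirrors the sense-publish-decide-actuate loop on a desktop host using the standard paho-mqtt client. The broker address, topic names, and the read_sensors/decide/actuate helpers are hypothetical stand-ins, not the paper's firmware; the actual device is an ESP8266 running its own code.

# Minimal sketch of the sense -> publish -> decide -> actuate loop of
# Algorithm 1. read_sensors(), decide(), and actuate() are placeholders;
# the broker address and topic names are assumptions.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # assumed Mosquitto broker address

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()       # handle network traffic in a background thread

def read_sensors():
    # Placeholder: would read temperature, humidity, and soil moisture
    # from the GPIO-attached sensors on the real device.
    return {"temperature": 24.1, "humidity": 61.0, "soil_moisture": 17.5}

def decide(sample):
    # Placeholder decision (delta): irrigate when the soil is too dry.
    return sample["soil_moisture"] < 20.0

def actuate(pump_on):
    # Placeholder: would drive the relay controlling the water pump (alpha).
    print("pump", "ON" if pump_on else "OFF")

while True:
    sample = read_sensors()
    for name, value in sample.items():
        client.publish(f"garden/{name}", json.dumps(value))
    actuate(decide(sample))
    time.sleep(60)   # sampling interval, chosen arbitrarily here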
MQTT, being light-weight, energy efficient, and considerably faster than HTTP, is preferred here in order to reduce the power requirement and latency. The data is sent to this Mosquitto broker, where the user continuously tracks it and statistics are also generated. If an unsuccessful connection occurs it might create data loss, so to prevent this a compressed backup is also stored offline on the device. This increases the device validity, which implies that it is capable of adapting to changes, thus making it efficient for future use without the need for recalibration. The sensor data is used for actuation, and so a decision δ based on the sensed data alerts the actuators α, thus maintaining a continuous monitoring loop. The device connects to relays that can be used to connect to multiple application-specific actuators α. Possible sensors for some smart applications are mentioned in Table II. The proposed device can be used for these applications, and connecting the corresponding sensors to the GPIO pins completes the working. As mentioned earlier, the device hosts an HTML page, and Figure 4(a) shows the three options available in user-device communication mode: (i) network configuration, (ii) network information, and (iii) application. The device scans for all nearby available networks and can be configured to connect to any of these using the network configuration button. On selection, it displays the connected network and its details, and the nearby networks. The final network information is then available on the network information page. The user inputs are obtained using the application button. To test the efficiency and working of the proposed device, a smart home gardening application environment is created, and S and α are chosen accordingly. Section III introduces the application and presents the respective results obtained from the same.
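Before moving to the application, the offline fallback mentioned above (compress the reading and store it locally when the broker cannot be reached) can be made concrete. The Python sketch below shows one possible shape of that logic; the file name, record format, and zlib compression are our assumptions, not the paper's design.

# Sketch of the offline fallback: if a publish fails, compress the record
# and append it to local storage so the reading is not lost and can be
# replayed later.
import json
import time
import zlib
import paho.mqtt.client as mqtt

BACKUP_FILE = "sensor_backlog.bin"   # hypothetical local backup file

def publish_or_backup(client, topic, value):
    payload = json.dumps({"t": time.time(), "topic": topic, "value": value})
    try:
        info = client.publish(topic, payload, qos=1)
        ok = (info.rc == mqtt.MQTT_ERR_SUCCESS)
    except Exception:
        ok = False
    if not ok:
        blob = zlib.compress(payload.encode())
        with open(BACKUP_FILE, "ab") as f:
            # length-prefix each compressed record so the backlog can be
            # parsed back record by record when the connection returns
            f.write(len(blob).to_bytes(4, "big") + blob)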
III. APPLICATION AND RESULTS

The proposed stand-alone device is tested for its efficacy using a smart home gardening application and some of the sensors mentioned in the smart farming portion of Table II. The device is used for a smart farming application, and apart from its basic capabilities, the above-mentioned functions are achieved using it. It can be used to track the growth and determine the irrigation (IN) needs of crops planted in backyard gardens, indoor or organic farming, vertical gardens, fields, etc. To achieve this, a temperature and humidity sensor for ambient data and a soil moisture sensor for getting the soil moisture level are connected to the device. In this case the user inputs taken are: the crop, the type of soil to be planted in, and the date of crop plantation. These data altogether are used to compute the IN requirements and an irrigation schedule. The device is then connected to an actuator, a water pump in this case, to supply water when required. A setup consisting of sensors and actuators is needed for the application, and the arrangement is as shown in Figure 3.
Fig. 3. Application setup: (a) crop under test, (b) soil moisture sensor: gets the soil water content, (c) temperature and humidity sensor: fetches the ambient data, (d) the controller: controls the sensors and actuators and works as per Algorithm 1.
A detailed study of the working of this stand-alone device for a smart-farming irrigation-based application has been carried out [9]. An application option on the webpage, as shown in Figure 4(a), is available which on selection generates fields for user input consisting of crop, plantation date, and soil type. A list of crops, seen in Figure 4(e), is available, from which a selection is made for the crop that needs to be planted. In a similar fashion, dates and soil type are selected, and these are stored onto the device for further usage, as seen in Figure 4(f). Based on these selections and a history of weather data, some crop-related parameters are selected and computed that help in determining the IN needs and the IN schedule. The device is intelligent enough to tackle abrupt climatic conditions and accordingly update the IN needs. Some sample results can be seen in Figure 5. Figure 5(a) shows the number of days in every growth stage and the starting dates for the same, and Figure 5(b) gives the IN depth, water runtime, and dates (highlighted). Using this data provided by the device, the user can grow plants whenever needed. As mentioned earlier, the data fetched via the sensors connected to the device can be remotely monitored using an MQTT-based server, generally referred to as a broker, and Figure 6 shows the corresponding results. The Mosquitto broker status is seen in Figure 6(a), which shows the received and sent data information, the number of connections to the broker, and published messages information. Figure 6(b) shows information about the subscribed topics, while the subscribed topic-wise sensor data values and timings are seen in Figure 6(c). This data can also be viewed from an app on a smartphone, as shown in Figure 7. Here, the MQTT dashboard is first connected to the host Mosquitto broker, which in this case is an IP address rather than a host name. After a successful connection is established, it subscribes to the topics hosted by the broker, i.e., the temperature, humidity, and soil moisture topics (in this application). This keeps the user updated with the latest data and actuator status. An additional advantage is that if the device fails it will not be able to send updates; thus the user knows that it has failed or stopped working and can rectify the issues with it. The device could also be used for another smart application just by connecting the proper sensors and output tools and providing the application-specific user inputs.

IV. CONCLUSION

A stand-alone device with (i) user interaction, HTML webpage hosting, and application-specific user requirements, (ii) data sensing, decision making, and actuation, and (iii) data publishing to server capabilities was proposed in this paper. It allows remote access of sensor data and actuator control if needed, and thus eliminates the need for manual operation. For verification purposes a home gardening environment was created using the proposed device along with some farming-related sensors. The overall irrigation requirement and schedule obtained for various plants using the device have also been mentioned. The device shows flexibility to be used for multiple smart applications like smart home, organic farming, indoor gardening and organics, etc. Thus, with a suitable combination of sensors, the device could be used for various other applications as well.
2021-11-01T01:15:54.290Z
2021-09-02T00:00:00.000
{ "year": 2021, "sha1": "abc9f410b323bd2bf9263905791341ded3308d4a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "abc9f410b323bd2bf9263905791341ded3308d4a", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
258334536
pes2o/s2orc
v3-fos-license
Cross-reactivities and cross-neutralization of different envelope glycoproteins E2 antibodies against different genotypes of classical swine fever virus

Classical swine fever (CSF) is a highly contagious swine disease caused by the classical swine fever virus (CSFV), wreaking havoc on global swine production. The virus is divided into three genotypes, each comprising 4-7 sub-genotypes. The major envelope glycoprotein E2 of CSFV plays an essential role in cell attachment, eliciting immune responses, and vaccine development. In this study, to examine the cross-reaction and cross-neutralizing activities of antibodies against different genotypes (G) of E2 glycoproteins, ectodomains of G1.1, G2.1, G2.1d, and G3.4 CSFV E2 glycoproteins from a mammalian cell expression system were generated. The cross-reactivities of a panel of immunofluorescence assay-characterized sera derived from pigs with/without a commercial live attenuated G1.1 vaccination against the different genotypes of E2 glycoproteins were detected by ELISA. Our results showed that serum against the LPCV cross-reacted with all genotypes of E2 glycoproteins. To evaluate cross-neutralizing activities, hyperimmune sera from mice immunized with the different CSFV E2 glycoproteins were also generated. The results showed that mouse anti-E2 hyperimmune sera exhibited better neutralizing abilities against homologous CSFV than heterologous viruses. In conclusion, the results provide information on the cross-reactivity of antibodies against different genogroups of CSFV E2 glycoproteins and suggest the importance of developing multivalent subunit vaccines for complete protection against CSF.

Introduction

Classical swine fever (CSF) is a highly contagious World Organization for Animal Health (WOAH) notifiable disease that causes significant economic losses. Devastating CSF has been reported in Central and South America, Europe, Asia, and Africa (1-5). Even though areas can be declared CSF-free, the re-emergence of CSF and the emergence of new sub-genotypes of classical swine fever virus (CSFV) have been reported (6). In Japan, outbreaks of G2.1d CSFV re-emerged in pig farms and wild boars in Gifu City in 2018, after 26 years of CSF-free status (6-8), indicating the difficulty of eradicating the disease. Clinical signs of CSF are determined by the virulence of the viral strain and the age, health condition, and immune responses of pigs, and can be divided into peracute, acute, subacute, chronic, and subclinical (3,9,10). The common pathological findings in the acute phase are hemorrhage and petechiae in multiple organs with necrotizing tonsillitis and enteritis (11,12). The most prominent histopathological changes in chronic CSF are lymphoid depletion and lymph node necrosis (13). Subclinical CSF, resembling a persistent infection, is caused by transplacental transmission during mid-gestation periods (14,15). Infected piglets can be asymptomatic but persistently shed the virus, becoming a source of virus (11,16). Classical swine fever virus, belonging to the family Flaviviridae and genus Pestivirus, is a single positive-strand RNA virus. CSFV carries a genome of ~12.3 kbp, encoding one continuous open reading frame (ORF) flanked by non-translated regions (NTRs) on both sides.
The ORF encodes a polypeptide precursor of approximately 3,898 amino acids (aa) that can be cleaved into 12 mature proteins, including four structural proteins, namely the nucleocapsid protein (C) and the envelope glycoproteins (E) Erns, E1, and E2, and eight non-structural (NS) proteins, namely the N-terminal protease (Npro), p7, NS2, NS3, NS4A, NS4B, NS5A, and NS5B (17-19). Among the CSFV proteins, the E2 protein is the most immunogenic and is essential for inducing neutralizing antibodies and protecting against lethal challenge (20). It has been demonstrated that the removal of certain glycosylation sites of the E2 protein significantly reduced the immunogenicity of the protein and increased its virulence (21,22). There are four immunogenic domains at the C-terminus of the E2 protein, which can be divided into a less conserved B/C domain (690-779 aa) and a conserved A/D domain (780-859 aa). Several linear epitopes have been identified in these domains (23), such as 772LFDGTNP778 at the tail of domain B/C (24) and 829TAVSPTTLR837, recognized by the monoclonal antibody (mAb) WH303 (25). At the N-terminus of the B/C domain, four residues at positions 709P, 713E, 725G, and 738I/V have been identified as important for antigen-antibody interactions (26). Substitutions can cause dramatic topology changes and might abolish antibody binding (27). It has been shown that specific glycosylation, or the lack of it, in the E2 glycoprotein through point mutation and deglycosylation of the highly virulent Shimen strain at position 986 could result in a lower virulence (22). In that study, deglycosylation of the E2 protein at the 986NYA988 glycosylation site resulted in a decrease in E2 dimerization, which affected viral interactions with cell surface attachment factors, virion stability, and virus replication (22,28,29). Classical swine fever virus can be divided into three genotypes (G1, G2, and G3). Each genotype comprises four to seven sub-genotypes according to the 5'NTR and E2 sequences (17,18,30,31). Among the different genotypes, the nucleotide sequence identities range from 80 to 86%. Within the same genotype, there is 86-91% similarity among the various sub-genotypes (18). Only the original reference strain, G1, has been reported in North America. The G2 CSFV emerged in Europe in the 1980s. The G3 CSFV has only been identified in Asia (11,32). Regarding the historical distribution of sub-genotypes, the G1.1 CSFVs were identified in Argentina, Brazil, Colombia, and Mexico. The G1.3 strains were identified in Honduras and Guatemala. The G1.2 and G1.4 strains were identified in Cuba (32-35). Currently, genotype 2, originating in Central Europe, is the predominant strain. G2.1 CSFV is a moderately virulent genotype compared with the high-virulence G1 strains. The G2.1, 2.2, and 2.3 CSFV strains have been reported in Nepal, China, Japan, Korea, and the Middle East. G3 CSFV has only been reported in Asia, with G3.2 isolated in Korea between 1988 and 1999 (36), G3.3 in Thailand between 1988 and 1996 (1), and G3.4 in Japan and Taiwan (37). In Taiwan, the G3.4 strain was gradually replaced by the G2.1 CSFV. This was suggested to be due to the superior replication and infectivity of the G2 virus compared with the G3 CSFV (1). However, the mechanism responsible for genotype switching has not been completely investigated.
To study the cross-reaction and cross-neutralizing activities of antibodies against different genotypes of E2 glycoproteins, ectodomains of G1.1, G2.1, G2.1d, and G3.4 CSFV E2 glycoproteins derived from the HEK293 mammalian expression system were generated to mimic the integrity of the E2 glycoproteins. These E2 glycoprotein-based in-house ELISAs were developed to evaluate the cross-reactivity of a panel of immunofluorescence assay (IFA)-characterized sera derived from Lapinized Philippines Coronel strain live attenuated vaccine (LPCV)-immunized pigs. The performance of these ELISAs was compared with that of a commercial CSFV ELISA. Hyperimmune mouse serum against these CSFV E2 glycoproteins was generated to detect the neutralizing activity against different genogroups of CSFV. The plasmids obtained were transfected into HEK 293 cells with PolyJet (SignaGen Laboratories, Frederick, MD, United States) and selected by culturing in DMEM high-glucose culture medium (Gibco, USA) containing 1.5% geneticin (G418) (Gibco) and 10% FBS (Gibco). Once stable cell lines were developed, the cells were used for protein expression.

Immunocytochemistry staining for E2 detection

The expression of E2 glycoproteins was detected by fixing the cells in a 96-well plate with 80% acetone (Avantor, PA, United States) for 20 min on ice. After air-drying and washing with 200 μL of PBS, 100 μL of anti-V5 antibody (Invitrogen; 1:1,000 dilution) was added to each well and incubated at room temperature (RT) for 1 h. Each well was washed six times using 200 μL PBS. In each well, 100 μL of Dako REAL EnVision anti-rabbit/mouse horseradish peroxidase (HRP)-conjugated antibody (Dako, CA, United States; 1:10 dilution) was added and incubated at RT for 1 h. The signals were detected using 3,3′-diaminobenzidine (DAB) (Dako) following the manufacturer's instructions. Results were evaluated using an inverted light microscope.

Sodium dodecyl sulfate-polyacrylamide gel electrophoresis and western blot

The E2 glycoproteins were mixed with NuPAGE LDS sample buffer (Thermo Fisher Scientific, Waltham, MA, United States). For the denatured samples, NuPAGE Sample Reducing agent (Thermo Fisher Scientific) was added and incubated at 95°C for 10 min. The samples were then separated by SDS-PAGE using a Bio-Rad Mini-PROTEAN electrophoresis system (Bio-Rad, Hercules, CA, United States) with a 10% separating gel and 17% stacking gel, following the manufacturer's recommendations. The proteins were transferred to a polyvinylidene fluoride (PVDF) membrane (Bio-Rad) and blocked with 5% skim milk (Becton, Dickinson and Company, MD, United States) in tris-buffered saline with polysorbate 20 (Tween 20) (TBS-T) (Genestar, Beijing, China) at RT for 1 h, followed by 1 h of WH303 (APHA Scientific, United Kingdom; 1:1,000 dilution) or anti-V5 (Novex, Invitrogen; 1:5,000 dilution), and 1 h of goat anti-mouse HRP-conjugated secondary antibody (Jackson ImmunoResearch, PA, United States; 1:10,000 dilution), with three washes of TBS-T between each incubation. The results were visualized using Clarity Western ECL Blotting Substrates (Bio-Rad) and a ChemiDoc XRS+ Imaging System (Bio-Rad).

Protein affinity-based purification

The collected expression medium was filtered through a 0.22 μm filter to remove any cell debris. The filtered expression medium was then incubated at 4°C overnight with HisPur cobalt resin (10 mL/1 L) (Thermo Fisher Scientific). The resin was collected in a column and washed with 10 resin-bed volumes of sodium-phosphate-based wash buffer.
The proteins were eluted by passing five resin-bed volumes of 300 mM imidazole elution buffer through the column. The eluates were concentrated using Amicon Ultra-15 10 kDa concentration tubes (Millipore, Merck, Ireland). The concentration was determined by measuring the UV absorbance at 280 nm using a Take3 plate on a BioTek microplate reader (Cytation 7, Agilent, Santa Clara, CA, United States).

Indirect immunofluorescent assay of swine serum antibody

PK-15 cells were seeded in a flat-bottom 96-well plate at 80% confluence and infected with the attenuated LPCV (AHRI) virus at a multiplicity of infection of 10. After 72 h of inoculation, the cells were fixed by adding 100 μL of 10% formaldehyde, incubated at RT for 1 h, and air-dried. One hundred microliters of 10% goat serum (Dako) was used as a blocking buffer and incubated at RT for 1 h. Sera collected from pigs submitted to the Veterinary Medicine Diagnostic Center at the School of Veterinary Medicine, National Taiwan University, for diagnostic needs, with or without LPCV immunization history, were diluted 80-fold and incubated at RT for 1 h. After washing with PBS six times, fluorescein isothiocyanate (FITC)-conjugated AffiniPure goat anti-swine IgG antibody (Jackson ImmunoResearch; 1:100 dilution) was applied to the microplates for 1 h at RT. After washing with PBS, the cells were mounted with a mounting medium containing DAPI (Abcam, Cambridge, United Kingdom). Fluorescence was observed using an inverted fluorescence microscope.

Commercial and in-house CSFV enzyme-linked immunosorbent assays

A CSFV antibody ELISA kit (BioChek, Berkshire, UK) was used to detect CSFV antibodies in swine serum, following the manufacturer's recommendations. For the different in-house CSFV E2 ELISAs, 100 μL of purified E2 proteins diluted to 1 ng/μL in coating buffer (KPL, SeraCare, Milford, United States) was added onto 96-well plates, following the manufacturer's instructions, and incubated overnight.

Schematic plasmid map of the recombinant CSFV E2 construct: sequences of E2, modified by truncation of the transmembrane domains and addition of the human tissue plasminogen activator (tPA) sequence at the 5′ end, with two restriction enzyme sites, NotI and BamHI, at the 3′ and 5′ ends of the sequences, respectively, were cloned onto the expression vector.

Mice immunization

Twelve eight-week-old BALB/c mice were randomly separated into three groups. Each group was administered 50 μg of G1.1, 2.1d, or 3.4 CSFV E2 proteins in 0.2 mL of Montanide Gel 01 (Seppic, France) intraperitoneally and boosted with the same dosage at 14, 28, 42, and 56 days post-immunization (dpi). Hyperimmune mouse serum samples against the different E2 proteins were collected retro-orbitally at 70 dpi, per the Institutional Animal Care and Use Committee (IACUC) guidelines. All procedures involving animals were performed following the regulations and with the permission of the IACUC protocol (No. A10008) at the Animal Health Research Institute (AHRI, Council of Agriculture, Executive Yuan, Taiwan).

Translational alignment and statistical analysis

Translational alignment of all four E2 sequences was carried out using Geneious 9 (Version 9.1.8). Data were analyzed using GraphPad Prism software (version 8.4.0) (GraphPad Software Inc., San Diego, CA, United States), and differences were considered significant by p-value (*p < 0.05; **p < 0.01).

Expression and detection of different CSFV E2 glycoproteins

After G418 selection, the expression of each CSFV E2 glycoprotein was successfully detected in HEK293 cells using an anti-V5 antibody.
In each CSFV E2 plasmid-transfected cell line, more than 90% of cells were stained positive by ICC (Figures 2A,B). The collected expression medium yielded 3-4.5 mg of E2 glycoprotein/L after purification. After protein purification from the supernatant of these CSFV E2 glycoprotein-expressing stable cell lines, proteins migrated at 100 kDa under non-reducing conditions and were suspected to be homodimers. Under reducing conditions, proteins migrated at 50 kDa, corresponding to the predicted size of the E2 monomer, as confirmed using an anti-V5 antibody (Figure 2C) and the anti-CSFV E2-specific antibody WH303 (Figure 2D).

Cross-reactivity of LPCV-induced antibody responses in pigs against different genotypes of CSFV E2 proteins

To investigate the cross-reactivity of the LPC-induced porcine IgG against different genotypes of CSFV E2 proteins, a panel of 177 porcine serum samples from farms with and without an LPC-vaccination history was used. The binding activity of the porcine sera against CSFV was first evaluated using IFA on LPC virus-infected PK-15 cells. Under fluorescence microscopic examination, a total of 78 serum samples were positive for IgG against LPC-infected PK-15 cells; ninety-nine sera were negative. Using these IFA-characterized porcine sera, the cross-reactivity against the different genotypes of CSFV E2 proteins was investigated by ELISA (Figure 3). The S/P ratio of the commercially available ELISA was calculated following the manufacturer's recommendations. Using the mean value of the negative samples plus two standard deviations (mean + 2SD) as the cut-off, the in-house CSFV G1.1 E2-based ELISA had a cut-off O.D. of 0.71; the in-house CSFV G2.1a E2-based ELISA had a cut-off value of 0.71; the in-house CSFV G2.1d E2 glycoprotein-based ELISA had a cut-off value of 0.70; and the in-house CSFV G3.4 E2 glycoprotein-based ELISA had a cut-off value of 0.66. After evaluating the 177 serum samples with the four in-house E2 glycoprotein-based ELISAs, all glycoproteins showed sensitivity and specificity comparable to the commercially available CSFV E2 ELISA (Table 1).
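The mean + 2SD cut-off rule just described is easy to make concrete. The following Python sketch computes a cut-off from a panel of negative-control readings and classifies test samples against it; the OD values are invented for illustration and stand in for the IFA-negative sera.

# Sketch of the mean + 2SD cut-off rule used for the in-house ELISAs.
import numpy as np

negative_od = np.array([0.41, 0.52, 0.47, 0.55, 0.49])  # IFA-negative sera, OD405
cutoff = negative_od.mean() + 2 * negative_od.std(ddof=1)

samples = {"pig_01": 0.95, "pig_02": 0.38, "pig_03": 0.72}  # hypothetical sera
for name, od in samples.items():
    call = "positive" if od > cutoff else "negative"
    print(f"{name}: OD405={od:.2f} -> {call} (cutoff={cutoff:.2f})")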
Virus cross-neutralizing test for different CSFV E2-immunized mice

After immunization, elevated anti-CSFV E2 IgG levels were detected in the sera of the different CSFV E2-immunized mice. There were no differences in the IgG-binding abilities of these sera against homologous and heterologous CSFV E2 proteins (Figure 4). Neutralizing antibody (NA) activity in the sera of CSFV E2-immunized mice was generally better against the homologous genotype virus than against the heterologous viruses (Figure 5). Mice immunized with CSFV G1. ...

The deduced amino acid alignments of different E2 sequences

The alignment result revealed several variations in known epitope regions of the E2 glycoproteins (Figure 6). As compared to the G1. ...

Discussion

Historically, CSF has been controlled by extensive vaccination and complete stamp-out programs. However, outbreaks of G2.1d CSF occurred in a large number of C-strain-vaccinated pig farms in China, highlighting the importance of investigating the cross-reaction and cross-neutralizing activity of immune responses against different genotypes of CSFVs (23,43). In this study, ectodomains of G1.1, G2.1a, G2.1d, and G3.4 CSFV E2 glycoproteins were successfully generated in a mammalian expression system. Antibodies derived from the G1.1 LPCV-immunized animals were demonstrated to recognize all genotypes of E2 glycoproteins. We also demonstrated that different E2 antibodies exhibited better neutralizing abilities against homologous CSFV than heterologous viruses. The results provide information on the cross-reactivity of antibodies against different genogroups of CSFV E2 glycoproteins and suggest the importance of developing multivalent E2 subunit vaccines for CSF protection.

Figure 3. Comparison of the cross-reactivities of sera derived from pigs with/without LPC vaccine immunization among the commercial CSFV E2 ELISA and the four different in-house E2 glycoprotein-based ELISAs. Results of the commercial CSFV E2 ELISA and the in-house E2-based ELISAs are represented as the S/P ratio and OD405, respectively. Black round dots in the right column are IFA-confirmed negative sera and dots in the left column are IFA-confirmed positive sera. Cut-off values, calculated according to the manufacturer's guideline or as the mean value of the negative samples plus two standard deviations (mean + 2SD) for the in-house ELISAs, are presented as red dotted lines.

In this study, the high cross-reactivity of LPC-induced porcine IgG against different genotypes of CSFV E2 proteins was confirmed by ELISA. Our results also indicated that the HEK293 cell-derived E2 glycoprotein-based ELISAs developed here exhibited sensitivity and specificity comparable to the commercially available CSFV E2 ELISA. Several CSFV E2- and Erns-based ELISAs have been developed to evaluate CSFV exposure and immunity in animals, with E2-based assays being the most widely adopted and commercially successful (48-50). Similar to our results, the indirect ELISA based on the Shimen strain (G1.1) E2 expressed by lentivirus-infected Chinese hamster ovary (CHO) cells has been reported to have 92.9% agreement with the viral neutralizing test and 92.2% agreement with the IDEXX blocking ELISA (48). In a Spodoptera frugiperda (SF21) cell expression system, E2-based ELISAs derived from the Brescia (G1.2), Paderborn (G2.1), and Kanagawa (G3.4) strains were demonstrated to have high antibody-binding activities against hyperimmune serum from swine immunized with the homologous strain (51). However, the E2 protein-based ELISA derived from an E. coli expression system had a relative sensitivity of 90.2% and a relative specificity of 55.3% compared with the IDEXX blocking ELISA kit, with an overall concordance rate of 80.3% (52). The low specificity of the E. coli-expressed E2 protein-based ELISA calls the accuracy of that method into question. Regarding the cross-protectivity of CSFV vaccines, it has been shown that the C-strain vaccine and the LPC G1-based vaccines could prevent the circulation of most G1 CSFVs in the world and reduce the incidences of G3.2 CSFV in Korea, G3.3 CSFV in Thailand, and G3.4 CSFV in Japan and Taiwan (36,37,53). The tissue-adapted version of the C-strain vaccine, the Riemser vaccine, has also been demonstrated to provide complete protection against G2.1, G3.3, and G2.1c CSFVs (8,38). However, we have demonstrated herein that hyperimmune serum from CSFV E2 glycoprotein-immunized mice exhibited better neutralizing abilities against homologous CSFV than heterologous viruses.

Figure 4. Cross-reactivity of IgG before and after G1.1, 2.1d, and 3.4 E2 immunization in mice against different E2 glycoproteins, detected by ELISA. Sera samples were collected retro-orbitally before immunization and at 70 days post immunization with the different E2 glycoproteins, and anti-E2 antibody levels were measured by ELISA coated with G1.1, G2.1a, G2.1d, and G3.4 E2 and read at OD 405 nm.
The hollow icons represent serum samples collected prior to immunization and the solid icons samples at 70 days post immunization. Circles are against the G1.1 in-house E2-based ELISA, squares against the G2.1a in-house E2-based ELISA, triangles against the G2.1d in-house E2-based ELISA, and diamonds against the G3.4 in-house E2-based ELISA.

Figure 5. Comparison of the neutralizing antibody titers of different mice sera against different CSFV genotypes. Mice sera collected 70 days after immunization with G1.1, G2.1d, or G3.4 glycoproteins were inactivated at 56°C, incubated with 100 TCID50 of the LPC/AHRI strain (G1.1), TD/96/TWN strain (G2.1a), or 94.4/IL/94/TWN strain (G3.4) of CSFV for 1 h at 37°C, and then used to infect PK-15 cells. The highest dilution able to protect 50 percent of the cells from infection was recorded. *p < 0.05; **p < 0.01.
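The endpoint readout described in this caption, the highest serum dilution still protecting at least half of the cells, can be sketched as follows. The well counts are invented for illustration, and reading "50 percent protection" as "at least 50% of replicate wells protected" is our interpretation of the criterion, not the paper's stated protocol.

# Sketch of an endpoint neutralizing-titer readout: the highest serum
# dilution at which >= 50% of replicate wells remain protected.
# reciprocal dilution -> (protected wells, total replicate wells)
wells = {8: (4, 4), 16: (4, 4), 32: (3, 4), 64: (2, 4), 128: (0, 4)}

titer = None
for dilution in sorted(wells):
    protected, total = wells[dilution]
    if protected / total >= 0.5:
        titer = dilution          # keep the highest dilution still protective
print(f"NA titer: 1:{titer}")     # here 1:64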
Notably, sera derived from mice immunized with the LPC strain E2 (G1.1) had a lower neutralizing antibody titer against G2.1 and G3.4 CSFVs. This is consistent with previous findings (23,45). Our results might also explain, at least partially, occasional cases of infection with the new G2.1b and G2.1d sub-genotypes of CSFV in a large number of C-strain-vaccinated pig farms in China (54,55), and the finding that C-strain-based vaccination could provide clinical but not pathological and virological protection against the G2.1d CSFV emerging in China (39). According to our results, we speculate that the antibody induced by a monovalent G1.1 E2 subunit vaccine might not be able to completely neutralize heterologous viruses. Since the above-mentioned vaccines are LAVs, further animal experiments evaluating the cross-protectivity of different genotype E2 proteins against different genotype CSFVs are needed to understand the immune efficacy and protectivity of E2 subunit vaccines.

To investigate potential mutations responsible for the reduction in the neutralizing ability of E2 antibodies against heterologous CSFVs, amino acid sequence alignment was performed. Several amino acid substitutions (D705N, L709P, G713E, N723S, and S779A) were noted in the G2.1a and G2.1d E2 sequences, located in antigenic domains reported as responsible for a decrease in the neutralizing ability against heterologous strains (23,26). Importantly, these mutations were demonstrated in the present study to lead to conformational changes in the antigenic epitope domain covering 773FDGTNP778 of the E2 protein, as predicted by SWISS-MODEL, compared with the CSFV G1.1 E2 protein (Supplementary Figure 1). This domain is a conserved linear B-cell epitope composed of three essential residues, 773F, 775G, and 778P, with 774D and 777N contributing to most of the epitope activity. Replacement of these residues has been demonstrated to abolish or remarkably reduce the reactivity of the epitope (56). We propose that the substitutions and structural alteration of the epitope domains of the E2 protein might be responsible for the lower neutralizing ability of G1.1 LPC strain-induced antibodies against the G2.1a and G2.1d CSFVs. Further investigations, such as mutagenesis assays of E2 proteins, to map critical mutations responsible for viral neutralization are also needed.

Live attenuated vaccines have been widely used to control CSFV. Among the currently used vaccines to combat CSFV, LAVs are the most common, with worldwide adaptation to region-specific strains or genotypes. However, LAVs have several disadvantages, including susceptibility to maternal-derived antibody titer interference (57), lack of DIVA (differentiating infected from vaccinated animals) capability, the requirement for low-temperature transportation, and the possibility of virulence reversion (58). Combined with the progress in molecular biology and insight into the pathogenesis of CSFV infections, methods to distinguish vaccinated and clinically infected pigs can be developed using subunit E2 DIVA vaccines. However, variable results of vaccination-challenge experiments and transmission studies on E2 subunit vaccines (59,60) suggest a limited capacity of monovalent CSFV E2 subunit vaccines to provide sterilizing immunity against heterologous field CSFV strains in pigs (59-61). For CSFV subunit vaccine development, multivalent subunit vaccines may be essential, given the reduction in neutralizing ability against heterologous CSFV observed in the present study. Using the HEK293 mammalian expression system provides unique opportunities for E2 proteins to undergo complex folding and post-translational modifications. The four CSFV E2 proteins generated in this study, covering G1-G3 CSFV, carrying proper mammalian glycosylation, and able to elicit neutralizing antibodies against G1-G3 CSFVs, could be candidates for a multivalent CSFV subunit vaccine.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The animal study was reviewed and approved under IACUC protocol (No. A10008) at the Animal Health Research Institute.

Author contributions

W-TC and H-ML contributed to the data acquisition, analysis, and interpretation. W-TC drafted and revised the manuscript. C-YC, M-CD, Y-LH, Y-CC, and H-WC contributed to the conception, design, and data acquisition. H-WC contributed to the conception and design, revised the manuscript critically for important intellectual content, and approved the final version to be published. All authors agreed to be accountable for all aspects of this work, ensuring that questions related to the accuracy and integrity of any part of this work were appropriately investigated and resolved.
2023-04-27T13:24:13.815Z
2023-04-27T00:00:00.000
{ "year": 2023, "sha1": "e059972b4cc2e264d5194716bd88bc993b0e7c44", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "e059972b4cc2e264d5194716bd88bc993b0e7c44", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
202029300
pes2o/s2orc
v3-fos-license
Dynamic changes in DNA methylation during embryonic and postnatal development of an altricial wild bird

Abstract

DNA methylation could shape phenotypic responses to environmental cues and underlie developmental plasticity. Environmentally induced changes in DNA methylation during development can give rise to stable phenotypic traits and thus affect fitness. In the laboratory, it has been shown that the vertebrate methylome undergoes dynamic reprogramming during development, creating a critical window for environmentally induced epigenetic modifications. Studies of DNA methylation in the wild are lacking, yet are essential for understanding how genes and the environment interact to affect phenotypic development and ultimately fitness. Furthermore, our knowledge of the establishment of methylation patterns during development in birds is limited. We quantified genome-wide DNA methylation at various stages of embryonic and postnatal development in an altricial passerine bird, the great tit Parus major. While there was no change in global DNA methylation in embryonic tissue during the second half of embryonic development, a twofold increase in DNA methylation in blood occurred between 6 and 15 days posthatch. Though not directly comparable, DNA methylation levels were higher in the blood of nestlings than in embryonic tissue at any stage of prenatal development. This provides the first evidence that DNA methylation undergoes global change during development in a wild bird, supporting the hypothesis that methylation mediates phenotypic development. Furthermore, the plasticity of DNA methylation demonstrated during late postnatal development in the present study suggests a wide window during which DNA methylation could be sensitive to environmental influences. This is particularly important for our understanding of the mechanisms by which early-life conditions influence later-life performance. While we found no evidence for differences in genome-wide methylation in relation to habitat of origin, environmental variation is likely to be an important driver of variation in methylation at specific loci.

Furthermore, it has been suggested that the epigenome is most sensitive to environmental influences during development (Jirtle & Skinner, 2007), giving rise to stable behavioral or physiological traits (Waterland & Jirtle, 2003; Weaver et al., 2004), which can subsequently affect fitness (Rubenstein et al., 2016). Epigenetic modifications could thus promote phenotypic plasticity and facilitate rapid adaptation (Jablonka & Raz, 2009), which may be especially important in changing or novel environments (Hu & Barrett, 2017). It is well understood that the early-life environment plays a fundamental role in shaping the phenotype, which can lead to permanent changes in physiology, behavior, and morphology (Monaghan, 2008). Epigenetic mechanisms are a prime candidate for mediating this developmental plasticity and linking early-life conditions with later-life effects (Gluckman, Hanson, Buklijas, Low, & Beedle, 2009; Gluckman, Hanson, Spencer, & Bateson, 2005). DNA methylation, the most widely studied epigenetic mark, plays a crucial role in regulating gene expression and genomic stability (Klose & Bird, 2006; Weber et al., 2007). It has been shown in a number of vertebrates, in the laboratory, that the methylome undergoes reprogramming during development.
Genome-wide demethylation is followed by de novo methylation in the early stages of embryogenesis (Kafri et al., 1992; Li, Guo, Zhang, Gao, & Guo, 2014; Mhanni & McGowan, 2004; Monk, Boubelik, & Lehnert, 1987; Stancheva, El-Maarri, Walter, Niveleau, & Meehan, 2002; Usui et al., 2009). However, the extent and schedule of reprogramming vary widely, and the plasticity of the methylome also extends into later stages of development (Gluckman et al., 2009; Simmons, Stringfellow, Glover, Wagle, & Clinton, 2013). The established patterns of methylation are subsequently maintained through mitotic cell division and can thus create permanent changes in gene expression and, consequently, stable phenotypic traits (Jablonka & Raz, 2009; Weaver et al., 2004). This cycle of epigenetic reprogramming appears to be critical in directing embryonic development and determining patterns of DNA methylation in somatic cells (Bird, 2002). Indeed, dysregulation of methylation during development can lead to imprinting disorders (Waterland, Lin, Smith, & Jirtle, 2006) or developmental arrest (Li, Bestor, & Jaenisch, 1992). Studies of variation in DNA methylation in free-living vertebrates in an ecological context are lacking (Bentz, Sirman, Wada, Navara, & Hood, 2016; Laine et al., 2016; but see Lea et al., 2016; Liebl, Schrey, Richards, & Martin, 2013; Riyahi, Sanchez-Delgado, Calafell, Monk, & Senar, 2015; Viitaniemi et al., 2019), especially during the critical period of development (but see Rubenstein et al., 2016; Sheldon, Schrey, Ragsdale, & Griffith, 2018). Studies of epigenetics in the wild are essential for understanding the fitness consequences of environmentally induced epigenetic modifications, and they could offer valuable insight into the resilience of populations to environmental change. Dynamic genome-wide changes in DNA methylation have been demonstrated during embryonic and postnatal development in the precocial domestic chicken, revealing temporal and tissue-specific changes in methylation patterns (Li et al., 2014; Usui et al., 2009). However, it is important to note that these two studies examined selected tissues and developmental stages, and the picture in the chicken is far from complete. Since altricial birds are at a much earlier stage of development when they hatch, compared with precocial birds, it is reasonable to expect that the establishment of their methylome could differ. To the best of our knowledge, no study has attempted to describe the establishment of the methylome in an altricial bird, though methylation has been shown to be sensitive to environmental factors during development (Rubenstein et al., 2016; Sheldon et al., 2018). In the present study, we quantify changes in global DNA methylation during development in the great tit Parus major. DNA methylation was quantified in embryonic tissue at embryonic days 1, 3, 6, and 12, and in the blood of nestlings at 6 and 15 days posthatch. The great tit is a small passerine bird with altricial development and is a model species in evolutionary ecology. Using embryos and nestlings originating from both urban and rural environments, we also investigated the potential for environmentally induced variation in genome-wide DNA methylation levels.

| Study system and field data collection

Great tits produce a clutch of typically 6-9 eggs, which hatch following an incubation period of c. 13-14 days.
The altricial nestlings hatch naked, with eyes closed, and require warmth and food to be provided by the parents; nestlings remain in the nest until fledging at 17-19 days. Eggs (n = 89) were collected from great tit nests (n = 56) during 21 April-6 May 2014. Eggs originated from an urban population (n = 44) in the city of Malmö (population: 300,000) and a rural population (n = 45) in the forest of Vombs fure, located 32 km from Malmö and with <5 inhabitants/km2 in surrounding areas. The third and fourth eggs in the laying sequence were collected on the day of laying and stored for up to 5 days (median ± SD = 2.0 ± 0.95) at 12°C and in darkness, prior to artificial incubation (see below). Nests were followed through to fledging of young. Repeated blood samples were collected from 27 nestlings (nests = 26; 12 urban, 14 rural) at 6 days (n = 26) and 15 days (n = 23) posthatch from a random subset of the nests from which eggs were collected.

| Egg incubation procedure

Eggs were randomly assigned to one of four incubation treatments on embryonic day 0; eggs were incubated in multiples of 24 hr and sacrificed at either embryonic day 1, 3, 6, or 12 (from here on denoted as E1, E3, E6, and E12, respectively). While methylation has a genetic component, large variation is likely to be introduced by differential maternal allocation of hormones and nutrients among eggs in the clutch. We therefore opted to allocate two eggs per clutch to the same treatment group, as opposed to allocating four eggs per clutch across all incubation groups, and simultaneously minimized impacts at the individual and population level. Eggs across all treatment groups were representative of the range of egg-collection dates and storage times. Incubators (Ruvmax) were kept indoors in a dark, climate-controlled room. A divider rotated around a central axis, pushing eggs around the egg plate and ensuring that eggs were moved every few minutes. Upon removal from the incubators, eggs were transferred to a freezer at −50°C. Egg mass was recorded before and after incubation. Most eggs were incubated in incubator A; eleven eggs were incubated in a second, identical incubator B. Conditions within incubators were maintained at 37.04 ± 0.98 and 36.84 ± 1.32°C and 68.3 ± 3.16% and 69.2 ± 3.16% relative humidity in incubators A and B, respectively. Eight eggs failed to develop and were discarded, leaving 81 eggs (53 nests) for analysis.

| Quantification of genome-wide DNA methylation

Eggs were dissected to isolate embryos, which were subsequently washed with PBS and homogenized in 200 μl PBS using a TissueLyser (Qiagen). DNA was isolated from homogenized embryonic tissue and from 4 μl whole blood using NucleoSpin Tissue and Blood kits (Macherey-Nagel), respectively, according to the manufacturer's protocol. DNA quality was assessed using ultraviolet spectrophotometry (NanoDrop, Thermo Scientific). DNA methylation was quantified by enzyme-linked immunosorbent assay (EpiGentek MethylFlash P-1030) according to the manufacturer's protocol, with an input of 25-100 ng DNA. The range of methylation levels detected among samples was wider than the range of standards available, thus demanding variable DNA input to ensure detection (embryonic tissue: 50-100 ng; nestling blood: 25 ng). The assay volume was always constant, and methylation levels were corrected for the mass of DNA.
Samples from the same individual were run on the same plate, but because many D15 nestling samples fell above the detectable range, a number had to be re-run at a lower concentration on a separate plate. Samples from different stages were randomly distributed among plates. DNA samples from E1 and E3 were of lower quality (260/280: mean ± SE = 1.08 ± 0.03) compared with all other developmental stages (mean ± SE = 1.99 ± 0.01); this could be due to lipid contamination, as it proved difficult to remove all traces of yolk. Furthermore, all embryos from E1 and E3 fell below detectable levels and could therefore only be assigned the mean detection level of 0.65%, calculated as the limit of the blank (LoB; the mean of the blank + 1.645 SD). For these reasons, E1 and E3 were excluded from statistical analyses. Average intra-assay and inter-assay variation (mean ± SE) were 8.3 ± 0.3% and 30 ± 7.8%, respectively.

| Statistical analyses

All analyses were performed using lmerTest (Kuznetsova, Brockhoff, & Christensen, 2017) in R 3.2.4 (R Core Team, 2013). Linear mixed models (LMMs) were fitted to logit-transformed proportions of methylated DNA; models were fitted separately to data from embryonic tissue and nestlings. Starting LMMs included the fixed effects of developmental stage (two-level factor: 1 [referring to E6 and D6 in embryos and nestlings, respectively] or 2 [referring to E12 and D15]), habitat (two-level factor: urban or rural), and the interaction between developmental stage and habitat. To control for additional potential sources of variation in DNA methylation of embryonic tissue, primarily as a result of variation in maternal investment, the full LMM also included the fixed effects of laying sequence (two-level factor: 3rd or 4th), initial egg mass (covariate), and egg-storage time (five-level factor: 1-5 days). The LMM fitted to embryonic data included the random effects of nest identity and assay plate (1-6; to control for inter-plate variability); the LMM fitted to nestling data included the random effects of individual identity nested within nest identity, and assay plate (as above). Fixed-effect terms were eliminated one by one if p > .05 when comparing a reduced model to the original model in likelihood ratio tests; the effects of stage and habitat were retained regardless of significance in a hypothesis-led approach. The significance of parameter estimates was estimated using conditional F-tests based on the Satterthwaite approximation for the denominator degrees of freedom.
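Two of the quantities described above are simple to compute and worth making concrete: the limit of the blank used as the assay's detection floor, and the logit transform applied to the methylation proportions before model fitting. The Python sketch below illustrates both; the blank readings are invented for illustration, and the paper's own analysis was done in R with lmerTest.

# Sketch of the LoB (mean of blank + 1.645 SD) and the logit transform
# applied to methylation proportions before the mixed-model analysis.
import numpy as np
from scipy.special import logit

blank = np.array([0.31, 0.42, 0.38, 0.35])   # % methylation in blank wells (invented)
lob = blank.mean() + 1.645 * blank.std(ddof=1)
print(f"LoB = {lob:.2f}% methylation")       # samples below this are 'undetectable'

meth = np.array([0.9, 1.7, 2.4, 4.1]) / 100  # proportions of methylated DNA
y = logit(meth)                              # response variable used in the LMMs
print(y)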
| DISCUSSION

The results demonstrate that, similar to other vertebrates (Li et al., 2014; Monk et al., 1987; Stancheva et al., 2002), altricial birds undergo some level of change in DNA methylation during development. In mammals, genome-wide demethylation is followed by de novo methylation during the early stages of embryogenesis (Kafri et al., 1992; Monk et al., 1987). This is followed by tissue- and cell-specific methylation and demethylation in later stages of development, as shown in mammals (Chen & Riggs, 2011; Simmons et al., 2013; Song et al., 2009).

We must also be cautious in drawing any conclusions concerning changes in methylation between early and late embryonic development, due to low DNA quality and undetectable levels of methylation at embryonic days 1 and 3. It is uncertain how the assay is affected by DNA quality, though we believe that the assay would still bind sufficient DNA to yield a detectable signal if methylation levels were within the detectable range. It therefore seems likely that levels of methylation in early embryonic development are lower compared with later stages, but exactly how low they are, and how they change between early embryonic stages, remains inconclusive. Given the evidence from other studies for tissue-specific differences in DNA methylation patterns, we must be cautious in concluding that genome-wide methylation increases between the end of embryonic development and the posthatch chick, since these samples originate from different tissues. However, the twofold increase in DNA methylation in blood between 6 and 15 days posthatch confirms that changes in methylation continue to occur throughout postnatal development. Indeed, in mammals, it has been shown that the plasticity of the epigenome extends into postnatal development and until at least weaning (Song et al., 2009; Waterland et al., 2006). Consistent with other evidence (Sheldon et al., 2018), it is more likely that environmental variation would induce both hypo- and hyper-methylation at select loci. It thus remains possible that gene-specific, environmentally induced changes in methylation patterns could occur between urban- and rural-reared nestlings, despite the fact that we found no evidence for differences in genome-wide DNA methylation. Developmental conditions are likely to be important in modulating epigenetic variation in the wild (Rubenstein et al., 2016), and much could be learned from studies of how variation in environmental factors, such as pollution, food quality and availability, and ambient temperature, affect DNA methylation patterns. Further studies should also seek to determine the stability of the established genome-wide methylation patterns across an individual's lifespan, which is a precondition of methylation acting as a regulator of developmental programming.

ACKNOWLEDGMENTS

Thanks to the City of Malmö and Sydvatten AB for permission to work in the respective field sites of Malmö and Vomb; and to Andreas Nord, Johan Nilsson, Alejandra Toledo, and Cyndi Birberg Murua for support in the field and laboratory.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

AUTHOR CONTRIBUTIONS

CI conceived the study; HW & CI designed the study; HW & PS carried out the study; HW performed laboratory and statistical analyses; HW drafted the manuscript with input from CI & PS.

ETHICAL APPROVAL

All procedures were carried out in accordance with national and European legislation, and approval was granted by the Malmö-Lund Animal Research Ethics Committee (M454 12:1). Eggs were collected under licence from Naturvårdsverket (NV-01657-14).
2019-09-09T21:21:47.843Z
2019-08-17T00:00:00.000
{ "year": 2019, "sha1": "d87688206ab419c8f89888cb31ec1c782bbab8df", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.5480", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9670ec7319ded1c3aec088cd93f4b1f2622603ba", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
263242599
pes2o/s2orc
v3-fos-license
Estrogens, Estrogen Receptors and Tumor Microenvironment in Ovarian Cancer

Ovarian cancer is one of the most common cancers in women and one of the most concerning issues in gynecological oncology in recent years. It is postulated that many factors may contribute to the development of ovarian cancer, including hormonal imbalance. Estrogens are a group of hormones that play an important role in both physiological and pathological processes. In ovarian cancer, they may regulate proliferation, invasiveness and epithelial to mesenchymal transition. Estrogen signaling also takes part in the regulation of the biology of the tumor microenvironment. This review summarizes the information connected with estrogen receptors, estrogens and their association with the tumor microenvironment. Moreover, this review also includes information about changes in estrogen receptor expression upon exposure to various environmental chemicals.

Introduction

Ovarian cancer (OC) is one of the most common cancers in women in developing countries. In recent years, it has been postulated that OC is one of the most concerning issues in gynecological oncology, causing more than 200,000 deaths worldwide in 2020 [1]. Due to late diagnosis, treatment is challenging and the 5-year survival rate is still under 50%. As with every cancer, when OC is diagnosed early, most women may be treated effectively using standard therapeutic approaches. Unfortunately, once OC spreads to the pelvic and abdominal organs and/or beyond the peritoneal cavity, treatment becomes much more difficult. OCs are a molecularly diverse group of cancers. They may be classified based on their origin into epithelial, stromal and germ cell tumors. Most ovarian neoplasms are of an epithelial origin. Another classification, based on histology and carcinogenesis, distinguishes high-grade and low-grade ovarian neoplasms. The stage of ovarian cancer can also be described on the FIGO scale, where stage I is the least advanced disease and stage IV the most advanced [2]. Treatment of ovarian cancer mostly depends on surgery and chemotherapy. Recently, it has been shown that the combined use of 4-Hydroxy-Tamoxifen (4-OHT) and Gatipotuzumab yields better outcomes and may increase the efficacy of treatment [3]. This discovery seems important, since the authors noticed that there is an association between TA-MUC1 and estrogen receptors (ERs) [3]. Although many research studies have been carried out on ovarian cancer, its detailed molecular mechanisms have not yet been revealed; in particular, the expression of GPER1 in ovarian cancer tissues remains unclear. To achieve a better understanding, and ultimately better treatment, deeper knowledge of OC is necessary, especially regarding both classical and non-classical ERs and the tumor microenvironment (TME). Therefore, the purpose of this review is to summarize the recent literature concerning ERs, their involvement in the TME and the progression of OC.

GPER1 is present in the endoplasmic reticulum; however, its presence has also been observed in the plasma membrane [25]. It has been shown that GPER1 binds estradiol with high affinity. After binding, GPER1 is responsible for the rapid activation of numerous signaling pathways. It promotes the production of cAMP and activation of the epidermal growth factor receptor (EGFR), and it affects signaling pathways like PI3K/Akt and ERK/MAPK [25,26].
Ovarian Cancer Proliferation, EMT and Cell Invasiveness

ERα was reported to modulate the expression of genes associated with cell proliferation and tumor growth in epithelial ovarian cancer [27]. In turn, Bossard et al. showed that mice with increased expression of ERβ have reduced tumor growth [28]. Moreover, increased expression of ERβ in BG-1 cells significantly decreased cell proliferation stimulated with estradiol [28]. Similar results were obtained in other cell lines, like breast and prostate cancer, where transfection of ERβ decreased motility and invasion of cells [27]. Therefore, it is believed that ERα has pro-cancerous abilities, while ERβ is anti-cancerous, which underlines that this disproportion may have a crucial role in ovarian carcinogenesis [28][29][30]. It is well known that estrogens stimulate proliferation, while anti-estrogen drugs abolish the proliferation of ovarian cancer both in vitro and in vivo [27].
Their ability to enhance the proliferation of neoplastic cells may occur along molecular pathways in a receptor-dependent or receptor-independent manner [31]. The first way is strongly associated with the ERα receptor. After estrogen binds ERα, a signal cascade is triggered, which causes increased transcription of genes associated with cancer progression such as c-fos, c-myc, growth factors and cyclins that regulate cell cycle progression [31]. Migration of ovarian cancer cells and epithelial to mesenchymal transition (EMT) may also be stimulated by estrogens acting via ERα, through decreased expression of E-cadherin and increased expression of the EMT-related transcription factors Snail and Slug [32]. It was also postulated that estrogens may affect adhesion to extracellular matrix proteins via ERα [30]. Increased expression of ERβ resulted in a decreased number of cells in the S phase and an increase in G2/M in a BG-1 cell line. Regulation of cell cycle progression was also seen through cyclin D1 and A2 modulation. Moreover, ERβ was found to modulate total retinoblastoma (Rb), its phosphorylated form, pAkt, cyclin D1 and A2 [28]. Also, ERβ was proposed to modulate the expression and activity of ERα; therefore, it may be worth exploiting clinically [28]. In another study, in a different cell line, the antitumoral effect of ERβ was, however, independent of ERα and estradiol [28]. The discrepancies between the cell lines may be due, among other things, to the different mutations that occur in them. In OVCAR3 and OAW-42 cells, the use of four different ERβ agonists resulted in decreased proliferation [33]. Consistent with these results, the knockdown of ERβ stimulated the growth of OAW-42 cells [33]. Schüler-Toprak et al. also showed that the action of ERβ agonists is related to β-catenin (CTNNB1) and amyloid β precursor protein (APP) in OAW-42 cells. In SKOV3 cells, increased expression of ERβ resulted in decreased growth and migration [34]. The authors observed that these effects were associated with the modulation of the cyclin-dependent kinase inhibitor p21 (WAF1), cyclin A2 transcripts and fibulin 1c [34]. Banerjee et al. showed that activation of ERβ with a newly developed agonist (OSU-ERb-12) abolishes the growth, migration and invasiveness of ovarian cancer cells [35]. Because EMT is strongly associated with the migration and invasion of cancer cells, observations concerning the roles of ERα and ERβ in EMT have also been made. It was shown that increased expression of ERβ results in increased E-cadherin (E-cad) and decreased Snail expression [35]. At the same time, it was shown that ERβ agonists may downregulate stemness markers like SOX2, Oct4 and Nanog. Non-genomic effects that may stimulate cell proliferation rely on binding to GPER, thereby inducing extracellular-signal-regulated kinase (ERK), phosphoinositide 3-kinase (PI3K) and epidermal growth factor receptor (EGFR) signaling [31]. Modulation of GPER1 has been shown to affect ovarian cancer cell growth. Yan et al. showed that the selective GPER1 agonist (G-1) at a dose of 10 nM increases migration and invasiveness of an ERα-negative cell line by promoting the production and activation of MMP-9 [36]. Knockdown of GPER1 was consistent with these results, producing a reduction in migration and invasion [36]. Changes in invasion, proliferation and migration also affected another cell line, SKOV3, where GPER1 knockdown resulted in their decrease and was also associated with changes in the expression and activity of MMP-2 and MMP-9 [37].
Additionally, Yan et al. presented that GPER1 may modulate the expression of ERα and ERβ [37]. In turn, it was also shown that use of G-1 may decrease proliferation and induce G2/M cell cycle arrest [22]. It was presented that G-1 promotes the activation of mitotic-promoting factor (MPF) and phosphorylation of nuclear mitotic apparatus protein 1 (NuMA) [38]. Treatment with G-1 also causes a decrease in the number of cells in the G1 phase, an increase in prophase, and a decline in metaphase, anaphase, telophase and cytokinesis. Wang et al. showed that G-1 inhibits the proliferation of SKOV3 and IGROV-1 cell lines in a dose-dependent manner and that it disrupts the morphology of these cells [38]. The differences between these works may be associated with the different doses used for the experiments.

Interaction of Environmental Chemicals, Estrogen Receptors and Ovarian Cancer Proliferation

Some substances present in the environment, such as pollutants and food or cosmetic additives, may affect hormone-dependent tissues owing to their structural and functional similarity to naturally occurring estrogens. Substances like these are called xenoestrogens and may directly and/or indirectly bind to ERs and thus change the biology of cells. Some of these compounds are of natural origin, others synthetic. Genistein belongs to the isoflavone subgroup of the flavonoid family; at physiological concentrations, it activates the nuclear estrogen receptors ERα and ERβ and affects TGFβ signaling pathways [39]. Genistein is also the most common natural substance with estrogen activity. Chan et al. showed that genistein and daidzein (another member of the isoflavone family) suppress the proliferation, motility and invasiveness of ovarian cancer cells via modulation of the expression of ERβ [40]. Moreover, they also observed that changes in the behavior of cells were associated with increased expression of p21 and E-cad, and with decreased expression of vimentin (VIM). Modulation of PI3K/AKT signaling was also described [40]. A similar observation has been made for resveratrol, a naturally occurring phytoestrogen, with downregulated expression of ERα, IGF-1R, p-IRS-1, p-Akt1/2/3 and cyclin D1 [41]. Sang et al. showed that bisphenol A (BPA) induces the proliferation of ovarian cancer cells via the regulation of matrix metalloproteinase-2 (MMP-2), matrix metalloproteinase-9 (MMP-9) and intercellular cell adhesion molecule-1 (ICAM-1), but the addition of an ERα inhibitor abolished this effect, suggesting that BPA promotes ovarian cancer cell proliferation via the ERα signaling pathway [42]. The same authors further showed that BPA-stimulated proliferation, migration, invasion and adhesion of OVCAR3 cells depend on the activity of ERα [42]. Hwang et al. showed that genistein can reduce BPA-stimulated proliferation in BG-1 cells [43]. They further showed that the mechanism is associated with the regulation of cell cycle progression [41]. Liu et al. observed that histamine at a dose of 50 ng/mL induces the proliferation of OVCAR3 cells after 48 h by regulating the expression of both ERα and ERβ [44]. The same team also observed that apigenin, a natural flavonoid found in many plant species, inhibits histamine-induced proliferation via the PI3K/AKT/mTOR pathway [44]. Similar results were obtained in other cell lines, like cervical cancer or breast cancer [44,45].
The PI3K/AKT/mTOR pathway is also involved in the regulation of apoptosis and autophagy by Tanshinone I (Tan-I), an extract from the Chinese medicinal herb Salvia miltiorrhiza Bunge [46]. In our study, we observed that the estrogenic mycotoxin alternariol (AOH) stimulates apoptosis in ovarian cancer cells via ERα and also modulates migration, proliferation and invasion [47]. Nevertheless, in neither study was the effect on the expression of ERs shown. Aconitine, a substance produced by plants, has also been described in the context of ovarian cancer. Wang et al. presented that aconitine decreased cell viability, colony formation and migration of A2780 cells. The same team also showed that treatment with aconitine induces DNA damage and apoptosis in cells via regulation of the expression of ERβ and of other factors connected with estrogen signaling, like vascular endothelial growth factor (VEGF) and hypoxia-inducible factor 1α (HIF1α) [48]. Ataei et al. showed that cadmium chloride induces the proliferation of ovarian cancer cells by affecting the expression of ERs and subsequent activation of the ERK1/2/MAPK pathway and the c-jun, c-fos and foxo3a transcription factors. It was also shown that cadmium acting via ERs may affect the expression of progesterone receptors [49]. Some compounds were also investigated with respect to non-classical estrogenic pathways. Hoffmann et al. showed that tetrabromobisphenol A (TBBPA) stimulates the proliferation of OVCAR3 and KGN cells via the GPR30 pathway [50]. The addition of a GPER1 antagonist reversed this effect [50]. Summary information regarding the regulation of ER expression is presented in Table 1.

Tumor Microenvironment (TME) in the Progression of Ovarian Cancer

In recent years, the non-cancerous cells constituting the TME have come to be regarded as critical mediators of tumor progression. Moreover, the importance of the interplay between tumor cells, stromal cells, immune cells and extracellular molecules in the TME is emphasized, as it has a profound effect on antitumor immunity and immunotherapeutic response [52]. The importance of the TME is underlined by the fact that it is being pursued as a therapeutic target in cancer treatment and attracts great interest in both research and clinical trials [53]. Nevertheless, one of the main difficulties in targeting the TME in cancer therapy is that host cells or non-cellular components of the TME may have different associations with tumor cells [53]. Therefore, further research on the TME in cancers is necessary, because the current lack of in-depth knowledge about these cells and their interactions contributes to therapeutic failure. Each cancer differs in the cellular composition of its TME; however, certain cell types are always present: cancer-associated fibroblasts (CAFs), tumor-associated macrophages (TAMs), myeloid-derived suppressor cells (MDSCs), T and B lymphocytes, natural killer (NK) cells and endothelial cells [52]. CAFs, TAMs and MDSCs have a crucial role in cancer cell proliferation. Estrogen signaling is also known to play an important role in regulating the immune response [6], and this role is also visible in the TME. Both ERs and aromatase, which is a key enzyme in the production of estrogens, are expressed in cells that belong to the TME [54]. For example, expression of ERα and ERβ was observed in CAFs and TAMs in the local TME of ovarian cancer [54].
The interaction of cells in the organism is mostly tissue-specific and depends on many factors. However, based on the literature survey, many of the effects stimulated by compartments of the TME are mediated via PI3K/MAPK/Akt pathways, which are well-known ER-mediated pathways. Epithelial ovarian cancer (EOC) is somewhat unique among solid tumors in the context of the TME, since tumor cells migrate from the primary tumor to create malignant ascites in the peritoneal cavity [55]. Malignant ascites also has its own TME, and ascites-derived tumor cells occur as single floating cells or as spheroids [55]. It is generally known that microRNAs (miRNAs) are short, non-coding RNAs that regulate gene expression [56,57]. Their dysregulation has been observed in most types of cancer, including ovarian cancer. Recently, it has been proposed that they may also influence the TME [56,57].

Cancer-Associated Fibroblasts (CAFs)

CAFs are the predominant stromal cells of the TME. Zhang et al. observed that CAFs were more abundant in EOC than in benign tumors [52]. Moreover, CAFs isolated from EOC lesions were able to increase the invasion and migration of ovarian cancer cells [58,59]. Interestingly, Jin et al. showed that collapsin response mediator protein-2 (CRMP2) participates in these modulations through activation of the hypoxia-inducible factor (HIF)-1α-glycolysis signaling pathway [60]. CAFs release substances such as chemokines and growth factors, which then stimulate pathways associated with tumor growth and progression [61]. Thongchot et al. showed that interleukin-8 (IL-8) released from CAFs increases the migration of ovarian cancer cells [62]. Similarly, fibroblast growth factor-1 (FGF-1) released from CAFs increased the growth of SKOV3 cells via the FGFR4/MAPK/ERK pathway [63]. In turn, Zhang et al. showed that CAFs induce EMT in OC cells via the Wnt/β-catenin pathway. Interestingly, it was also shown that progranulin (PGRN) stimulates the proliferation and invasion of ovarian cancer cells indirectly, via CAFs [64]. A similar observation has been made for TGF-β [65]. Wu et al. observed that collagen type XI alpha 1 (COL11A1) is upregulated in CAFs [66]. Moreover, they also noticed that its modulation may have an important role in the biology of OC, resulting in decreased invasiveness and tumor formation, suggesting that COL11A1 may in future play a key role in the treatment of OC with elevated levels of TGF-β3 [66]. Yue et al. noticed that CAFs increase the metastatic character of ovarian cancer cells via the PI3K/Akt pathway [67]. Additionally, increased expression of E2-responsive genes was observed in CAFs compared to normal fibroblasts [68,69]. Kim et al. presented that expression of GLIS1 (Glis Family Zinc Finger 1) is increased in CAFs at both the gene and protein levels. Moreover, knockdown of GLIS1 decreased migration and invasion of ovarian cancer cells, suggesting that this factor may have an important role in the progression of OC [70]. It was observed that miRNAs in CAFs may affect their reprogramming. In the ovarian cancer microenvironment, and more specifically in the context of CAFs, miR-214 and miR-155 have been described at low and high expression, respectively [71]. It was further demonstrated that by disrupting their expression, it was possible to reduce the growth and metastasis of ovarian cancer through loss of the CAF-like phenotype, which seems to be very important for ovarian cancer treatment [71].
Tumor-Associated Macrophages (TAMs)

It is generally known that macrophages are phagocytic cells that regulate the immune response. Nevertheless, it has also been shown that they may regulate the invasion and metastasis of cancer cells, mainly because they can create an inflammatory environment, which may stimulate mutations, growth, proliferation, metastasis and other processes [72]. In primary OC as well as in ascites, the main population of immune cells consists of macrophages. They may originate from embryonic yolk sacs and bone-marrow-derived monocytes. When macrophages (M0) are recruited to the TME, they differentiate, depending on the stimuli, into the M1 or M2 subtype [73]. M1 macrophages inhibit the progression of cancer via the secretion of cytokines like IL-12, TNFα or IFNγ [73]. In turn, M2 macrophages stimulate the proliferation of cancer cells via the secretion of matrix metalloproteinases, IL-4, IL-5, IL-6 and other factors like VEGF [73]. In the OC microenvironment, TAMs mainly occur as the M2 phenotype, highly expressing scavenger receptor class B (CD163) and the mannose receptor (MR, CD204) [55]. Beyond the chemokines and cytokines that are well known to modulate M0 macrophage polarization, increasing evidence in recent years indicates that microRNAs also play an important role. Ying et al. showed that miR-222-3p secreted by OC cells stimulates the polarization of M0 cells into M2 via the SOCS3/STAT3 pathway [74]. MiRNA-940, miRNA-200b, miRNA-181c-5p and miRNA-141-3p have also been found to promote the M2-like phenotype [75][76][77][78]. In turn, Jiang et al. showed that miR-217 inhibits polarization into M2 cells by affecting IL-6 and the JAK3/STAT3 signaling pathway [79]. Both cancer cells and cells belonging to the TME secrete substances that mediate their mutual interaction. Earlier, we indicated substances secreted by cancer cells that affect the polarization of macrophages. Nevertheless, polarized macrophages also secrete substances that can in turn affect cancer cells. It has been shown that M2 macrophages increase the proliferation and progression of OC [80]. Steitz et al. showed that TAMs derived from ascites promote invasion of high-grade serous carcinoma (HGSC) via transforming growth factor beta-induced (TGFBI) protein and tenascin C (TNC) [80]. Zeng et al. presented that M2-like macrophages secrete epidermal growth factor (EGF) and thus affect EGFR-ERK signaling, leading to the progression of OC [81]. Ke et al. showed that TAMs increase the invasion of OC cells via the TLR signaling pathway and its downstream nuclear factor NFκB and mitogen-activated protein (MAP) kinases [82]. A co-culture of SKOV3 cells with macrophages resulted in increased migration and invasion [83,84]. It also increased the expression of NFκB, CXCL16 and CXCR6 and affected the PI3K/Akt signaling pathway [83].

Myeloid-Derived Suppressor Cells (MDSCs)

MDSCs are described as a heterogeneous population of cells of myeloid origin in various stages of differentiation, lacking mature myeloid markers [85]. It was observed that MDSCs are present in the peripheral blood of women with EOC [86]. Under normal conditions, immature myeloid cells (IMCs) differentiate into mature forms like granulocytes, macrophages or dendritic cells [85]. Pathological conditions result in an expansion of IMCs whose differentiation is blocked, turning them into MDSCs. Interestingly, it was shown that obesity also has an impact on MDSCs and ovarian cancer [87]. Yang et al.
showed that in obese mice, the proportion of MDSCs in peripheral blood was higher than in healthy mice [87]. MDSCs present higher expression of immune suppressive factors like arginase 1 (ARG1) and inducible nitric oxide synthase (iNOS). They are also characterized by increased surface expression of CD33 and increased production of nitric oxide (NO) and reactive oxygen species (ROS) [85,88]. MDSCs have immune suppressive functions, directly affecting T and NK cells and also stimulating angiogenesis, proliferation, invasion and metastasis of the tumor [89][90][91]. Cui et al. showed that MDSCs increased cancer stemness via inhibition of T cell activation and by affecting the expression of miRNA101 and the corepressor gene C-terminal binding protein-2 (CtBP2) [91]. Li et al. also observed that MDSCs enhance the stemness of EOC cells [86]. The authors also verified genes that were modulated during a co-culture of MDSCs and SKOV3 cells, observing that colony-stimulating factor 2 (CSF2), intercellular adhesion molecule 1 (ICAM1), baculoviral IAP repeat-containing 3 (BIRC3), TNFα-induced protein 3 (TNFAIP3) and interleukin-32 (IL-32) increased significantly [86]. Zheng et al. showed that upregulation of miRNA-211 decreased MDSC proliferation and affected pathways like NF-κB, PI3K/Akt and STAT3 [92]. Taki et al. presented that knockdown of Snail in mouse OC cells resulted in inhibited growth and a decreased number of MDSCs. Snail was also described as a factor that recruits MDSCs in the OC environment via regulation of the expression of interleukin-8 receptor beta (CXCR2) ligands [93]. Vascular endothelial growth factor (VEGF) has also been proposed as a factor that recruits MDSCs [94]. Nevertheless, use of anti-VEGF therapy increases granulocyte-monocyte colony-stimulating factor (GM-CSF) expression and thus recruits MDSCs; therefore, Horikawa et al. suggested that targeting GM-CSF may help overcome resistance to anti-VEGF therapy [95]. The summarized effects of TAMs, CAFs and MDSCs are presented in Figure 3.
[Figure 3. Schema presenting the interaction of the compartments of the TME that stimulate ovarian cancer. TAMs: Tumor-Associated Macrophages; MDSCs: Myeloid-Derived Suppressor Cells; CAFs: Cancer-Associated Fibroblasts. The graphical illustration was prepared using images from Servier Medical Art by Servier. Minor modifications were made (e.g., the color of the stock images) (https://smart.servier.com/smart_image/, accessed on 21 August 2023).]

Conclusions

In recent years, there has been growing evidence that estrogens play a key role in the formation of all kinds of hormone-dependent cancers, such as breast, prostate or ovarian cancer. From this review of the literature, it is clear that estrogens and estrogen-like compounds, acting through the ERs, regulate various cellular processes such as proliferation, EMT, invasiveness, differentiation and inflammation in ovarian cancer cells. All of these receptors appear to play an important role in ovarian carcinogenesis, and maintaining their proportions in a physiological state seems necessary for health. Moreover, estrogens also appear necessary for the regulation of the TME, because many estrogen-related pathways are disrupted in the TME and thus contribute to its pro-tumor functions. In conclusion, estrogens and estrogen-like compounds play an important role in ovarian carcinogenesis through direct or indirect involvement in the molecular mechanisms stimulating the growth and proliferation of cancer cells. Further studies are needed to elucidate all molecular mechanisms of estrogen signaling in ovarian cancer cells and the components of the TME, given their crucial role in cancer progression and therapy. Enriched knowledge about estrogens, estrogen receptors and the tumor microenvironment may encourage the search for new therapeutic options for patients.

Limitations of the Review

In this review of the tumor microenvironment (TME), we focused on cells associated with ovarian cancer proliferation; nevertheless, it should be emphasized that more cell types are included in the TME and they may also be worth reviewing.
2023-09-30T15:08:33.901Z
2023-09-28T00:00:00.000
{ "year": 2023, "sha1": "b1e0827a24a8d161335fd53c4f02cc647302c0df", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/19/14673/pdf?version=1695880736", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f45b8f6da6d971e7ed65fa4be18d98457615cfce", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
119349405
pes2o/s2orc
v3-fos-license
Krull Dimension for Differential Graded Algebras

We introduce a naive notion of a system of parameters for a homologically finite complex over a commutative noetherian local ring, and compare it to the system of parameters defined by Christensen. We show that these notions differ in general, but that they agree when the complex in question is a DG R-algebra. In this case we also show that the Krull dimension defined in terms of the lengths of such systems of parameters agrees with Krull dimensions defined in terms of certain chains of prime ideals.

Introduction

In this paper, R is a commutative noetherian ring with identity. The term "R-complex" is short for "chain complex of (unital) R-modules" indexed homologically. The infimum of an R-complex X is $\inf(X) := \inf\{i \in \mathbb{Z} \mid H_i(X) \neq 0\}$, and X is homologically finite if the total homology module $\bigoplus_{i \in \mathbb{Z}} H_i(X)$ is finitely generated. The Koszul complex over R on a sequence $x = x_1, \ldots, x_n \in R$ is denoted $K^R(x)$.

Foxby [4] defines the Krull dimension of an R-complex X as
$$\dim_R(X) := \sup\{\dim(R/\mathfrak{p}) - \inf(X_\mathfrak{p}) \mid \mathfrak{p} \in \operatorname{Supp}_R(X)\}$$
where $\operatorname{Supp}_R(X) := \bigcup_{i \in \mathbb{Z}} \operatorname{Supp}_R(H_i(X))$. If M is a finitely generated R-module, then $\dim_R(M)$ is the usual Krull dimension of M, given in terms of chains of prime ideals in $\operatorname{Supp}_R(M)$. If $\inf(X) > -\infty$, then $\dim_R(X) \geq -\inf(X)$.

When R is local, it is natural to seek a notion of a system of parameters for homologically finite R-complexes. One such notion comes from Christensen [2], starting with the following version of minimal prime ideals for complexes. Let X be an R-complex such that $\inf(X) > -\infty$. A prime ideal $\mathfrak{p} \in \operatorname{Spec}(R)$ is an anchor prime for X if $\dim_{R_\mathfrak{p}}(X_\mathfrak{p}) = -\inf(X_\mathfrak{p})$. Let $\operatorname{Anc}_R(X)$ denote the set of anchor primes for X. Assuming that $(R, \mathfrak{m})$ is local and X is homologically finite, a system of parameters for X is a sequence $x = x_1, \ldots, x_d \in \mathfrak{m}$ such that $\mathfrak{m} \in \operatorname{Anc}_R(K^R(x) \otimes_R X)$ and $d = \dim_R(X) + \inf(X)$. Christensen [2, Theorem 2.9] shows that X has a system of parameters in this setting.

The point of this paper is to explore the following different (possibly more naive) versions of these notions. A length system of parameters for X is a length sequence $x_1, \ldots, x_m \in \mathfrak{m}$ for X such that $m = \operatorname{ldim}_R(X) + \inf(X)$.

Remark 1.2. Assume that $(R, \mathfrak{m})$ is local, and let X be a homologically finite R-complex. Any generating sequence for an $\mathfrak{m}$-primary ideal of R is a length sequence for X, so X admits a length system of parameters, and $\dim(R) \geq \operatorname{ldim}_R(X) + \inf(X)$. We have $\operatorname{ldim}_R(X) \geq -\inf(X)$, with equality holding if and only if each $H_i(X)$ has finite length. Lemma 3.3 shows that $\operatorname{ldim}_R(X) \geq \dim_R(X)$. It is straightforward to show that one can have strict inequality here; see Example 3.4. On the other hand, the main result of this paper shows that this cannot occur when X is a DG R-algebra. It is stated next and proved in 3.6. See Section 2 for background on DG algebras.

Theorem 1.3. Let A be a homologically finite positively graded commutative local noetherian DG $A_0$-algebra such that $(A_0, \mathfrak{m}_0)$ is local noetherian.
(a) Given a sequence $x \in \mathfrak{m}_0$, the following conditions are equivalent: (i) x is a system of parameters for A; (ii) x is a system of parameters for $H_0(A)$; and (iii) x is a length system of parameters for A.
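Before turning to DG algebras, the following worked illustration of Foxby's dimension may be helpful. It is ours, not the paper's: the module M and the shift n are assumptions introduced purely for the example, and the computation follows directly from the definition displayed above.

% Worked illustration (ours, not the paper's): let M be a nonzero finitely
% generated R-module, viewed as a complex concentrated in degree n, so that
% inf of every localization at a supporting prime equals n. The formula gives
\[
  \dim_R(\Sigma^n M)
    = \sup\{\dim(R/\mathfrak{p}) - n \mid \mathfrak{p} \in \operatorname{Supp}_R(M)\}
    = \dim_R(M) - n .
\]
% Taking n = 0 recovers the classical Krull dimension of M, and the general
% bound dim_R(X) >= -inf(X) is attained precisely when dim_R(M) = 0.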
DG Algebras and DG Krull Dimension

We begin this section with a summary of terminology from [1,3].

Notation 2.1. Given an R-complex X, write $|x| = i$ when $x \in X_i$.

Definition 2.2. A positively graded commutative differential graded R-algebra (DG R-algebra for short) is an R-complex A equipped with a binary operation $(a, b) \mapsto ab$ satisfying the following properties: associative: for all $a, b, c \in A$ we have $(ab)c = a(bc)$; distributive: for all $a, b, c \in A$ such that $|a| = |b|$ we have $(a + b)c = ac + bc$ and $c(a + b) = ca + cb$; unital: there is an element $1 \in A_0$ such that for all $a \in A$ we have $1a = a$; graded commutative: for all $a, b \in A$ we have $ba = (-1)^{|a||b|}ab \in A_{|a|+|b|}$, and $a^2 = 0$ when $|a|$ is odd; positively graded: $A_i = 0$ for $i < 0$; and Leibniz Rule: $\partial(ab) = \partial(a)b + (-1)^{|a|}a\,\partial(b)$.

Given a DG R-algebra A, the underlying algebra is the graded commutative R-algebra obtained by forgetting the differential. We say that A is noetherian if $H_0(A)$ is noetherian and $H_i(A)$ is finitely generated for all $i \geq 0$. We say that A is local if it is noetherian, R is local, and the ring $H_0(A)$ is a local R-algebra.

Example 2.3. Given a sequence $x = x_1, \ldots, x_n \in R$, the Koszul complex $K^R(x)$ is a noetherian DG R-algebra under the wedge product; it is generated over $K_0 = R$ by $K_1$. If $(R, \mathfrak{m})$ is local and $x \in \mathfrak{m}$, then $K^R(x)$ is a local DG R-algebra.

Definition 2.4. Let A be a DG R-algebra. A differential graded module over A (DG A-module for short) is an R-complex M equipped with a binary operation $(a, m) \mapsto am$ satisfying the following properties: associative: for all $a, b \in A$ and $m \in M$ we have $(ab)m = a(bm)$; distributive: for all $a, b \in A$ and $m, n \in M$ such that $|a| = |b|$ and $|m| = |n|$, we have $(a + b)m = am + bm$ and $a(m + n) = am + an$; unital: for all $m \in M$ we have $1m = m$; graded: for all $a \in A$ and $m \in M$ we have $am \in M_{|a|+|m|}$; and Leibniz Rule: $\partial^M(am) = \partial^A(a)m + (-1)^{|a|}a\,\partial^M(m)$.

Remark 2.6. Let A be a DG R-algebra. The following facts are straightforward to verify. An ideal $I \subseteq H_0(A)$ is prime if and only if $I^A \subseteq A$ is DG prime. The operation $I \mapsto I^A$, considered as a map from the set of (prime) ideals of $H_0(A)$ to the set of DG (prime) ideals of A, is injective and respects containments. In particular, one has $\operatorname{DGdim}(A) \geq \dim(H_0(A))$. In the case where A is bounded, it follows that every element $a \in A$ of non-zero degree is nilpotent, so the above argument applies.

The following example shows that the assumptions on A (generated in odd degrees or bounded) are necessary in Proposition 2.7.

Example 2.9. Let k be a field, and let $A = k[X]$ denote the polynomial ring in one indeterminate X of degree 2. This is a DG k-algebra, using the trivial differential. The ideals 0 and $A_+ = (X)A$ are DG prime. (Moreover, $\operatorname{DGSpec}(A)$ is precisely the set of graded prime ideals of A.) In particular, we have $\operatorname{DGdim}(A) = 1 > 0 = \dim(k) = \dim(H_0(A))$ since $H_0(A) = k$.

For our main theorem, we require some DG localization. Given a DG A-module M and a multiplicatively closed subset $U \subseteq A$, define a relation on $M \times U$ by declaring $(m, u) \sim (n, v)$ when there is an element $w \in U$ such that $w(vm - (-1)^{|u||v|}un) = 0$; this relation is an equivalence relation.

Proof. Symmetry and transitivity are tedious but straightforward to verify. There is a tiny subtlety with reflexivity. To check that $(m, u) \sim (m, u)$, we need to consider two cases. The case where $|u|$ is even is straightforward. For the case where $|u|$ is odd, it follows that $u^2 = 0$, so we have $u(um - (-1)^{|u||u|}um) = 0$.

We define the DG localization $U^{-1}M$ using the quotient rule; when $|u|$ is even this reads
$$\partial_{U^{-1}M}(m/u) = \frac{u\,\partial_M(m) - \partial_A(u)m}{u^2}.$$

Proposition 2.13. Let A be a DG R-algebra, and let $U \subseteq A$ be multiplicatively closed. Let M be a DG A-module (e.g., M = A). (a) Using the above definition, $U^{-1}A$ is a DG R-algebra, not necessarily positively graded, and $U^{-1}M$ is a DG $U^{-1}A$-module.

Proof. Note that if U contains an element u of odd degree, then everything is trivial: the fact that u has odd degree implies that $u^2 = 0$, so for all $m/v \in U^{-1}M$ we have $m/v = (u^2m)/(u^2v) = 0$. Thus, for the remainder of this proof, we assume that U does not contain any elements of odd degree.
It is straightforward to show that the addition and multiplication rules for $U^{-1}A$ and $U^{-1}M$ are well-defined and satisfy the standard axioms (associative, etc.). We show that the differential $\partial_{U^{-1}M}$ is well-defined. (The special case M = A then follows.) To this end, let $m/u = n/v$ in $U^{-1}M$. Since $|u|$ and $|v|$ are even, it follows that there is an element $w \in U$ such that $w(vm - un) = 0$. Applying $\partial_M$ to this equation, we have the first equality in the next display: The second and third equalities follow from the Leibniz rule, since $|u|$, $|v|$, and $|w|$ are even. The fact that $w(vm - un) = 0$ implies that $w\,\partial_A(w)(vm - un) = 0$. Thus, if we multiply the above display by uvw, we have the first equality in the next display: The second and fourth equalities follow from the fact that $|u|$, $|v|$, and $|w|$ are even. The third equality follows from the condition $w(vm - un) = 0$. This explains the second equality in the next display. The first two steps are by definition. The third step follows from the Leibniz rule for M, and the fourth step uses the fact that $\partial_M\partial_M = 0 = \partial_A\partial_A$. The fifth step is straightforward cancellation. For the sixth step, note that the fact that $|u|$ is even implies that $|\partial_A(u)|$ is odd, so the element $\partial_A(u) \in A$ is square-zero. The Leibniz rule for $U^{-1}M$ (and hence for $U^{-1}A$) is straightforward.

Dimension and Systems of Parameters

Before proving Theorem 1.3, we require a few more preliminaries.

Lemma 3.1. Let X be a homologically bounded below R-complex, and let $\mathfrak{m} \subset R$ be a maximal ideal. If $\operatorname{Supp}_R(X) = \{\mathfrak{m}\}$, e.g., if $X \not\simeq 0$ and each $H_i(X)$ has finite length over R, then $\mathfrak{m} \in \operatorname{Anc}_R(X)$.

Proof. By definition, we have
$$\dim_{R_\mathfrak{m}}(X_\mathfrak{m}) = \dim(R_\mathfrak{m}/\mathfrak{m}R_\mathfrak{m}) - \inf(X_\mathfrak{m}) = -\inf(X_\mathfrak{m})$$
as desired.

Here is an example showing that the converse of the previous result fails.

Example 3.2. Let k be a field and set $R = k[[T]]$ with $\mathfrak{m} = TR$. Consider the following complex, which is concentrated in degrees 0 and 1: Since $H_1(X) \cong R$, we have $\operatorname{Supp}_R(X) = \operatorname{Spec}(R)$. And we compute: So, we have $\mathfrak{m} \in \operatorname{Anc}_R(X)$ and $\operatorname{Supp}_R(X) \neq \{\mathfrak{m}\}$.

Lemma 3.3. Assume that $(R, \mathfrak{m})$ is local, and fix a homologically finite R-complex X. Each length system of parameters x for X satisfies $\mathfrak{m} \in \operatorname{Anc}_R(K^R(x) \otimes_R X)$. In particular, we have $\operatorname{ldim}_R(X) \geq \dim_R(X)$.

Proof. Let $x = x_1, \ldots, x_m \in \mathfrak{m}$ be a length system of parameters for X. By definition of $\dim_R(X)$, it suffices to show that $\mathfrak{m} \in \operatorname{Anc}_R(K^R(x) \otimes_R X)$. Since x is a length system of parameters for X, we know that each $H_i(K^R(x) \otimes_R X)$ has finite length, so we have $\mathfrak{m} \in \operatorname{Anc}_R(K^R(x) \otimes_R X)$ by Lemma 3.1.

Example 3.2 shows that equality can fail in the previous result, as we see next.

Example 3.4. Let k be a field and set $R = k[[T]]$ with $\mathfrak{m} = TR$. Consider the following complex, which is concentrated in degrees 0 and 1: We have already seen that $\dim_R(X) = 0$. Since $H_1(X) \cong R$ does not have finite length, we have $\operatorname{ldim}_R(X) > 0 = \dim_R(X)$. (More specifically, it is straightforward to show that $\operatorname{ldim}_R(X) = 1$.)

Theorem 1.3 from the introduction follows from the next result; see 3.6.

Proposition 3.5. Let A be a homologically finite positively graded commutative local noetherian DG $A_0$-algebra such that $(A_0, \mathfrak{m}_0)$ is local noetherian. (a) Given a system of parameters $x \in \mathfrak{m}_0$ for $H_0(A)$, each $H_i(K^{A_0}(x) \otimes_{A_0} A)$ has finite length over $A_0$. In particular, we have $\dim(H_0(A)) \geq \operatorname{ldim}_{A_0}(A)$. (b) Given a system of parameters $x \in \mathfrak{m}_0$ for A, the ring $H_0(A)/(x)H_0(A)$ is artinian. In particular, we have $\dim_{A_0}(A) \geq \dim(H_0(A))$.

Proof. (a) Let $x \in \mathfrak{m}_0$ be a system of parameters for $H_0(A)$.
It follows that $K^{A_0}(x) \otimes_{A_0} A$ is a homologically finite local noetherian DG $A_0$-algebra such that $(A_0, \mathfrak{m}_0)$ is local noetherian. Furthermore, the ring $H_0(K^{A_0}(x) \otimes_{A_0} A) \cong H_0(A)/(x)H_0(A)$ is local and artinian. Since each $H_i(K^{A_0}(x) \otimes_{A_0} A)$ is a finitely generated module over this artinian ring, each has finite length over $A_0$.

(b) Let $x \in \mathfrak{m}_0$ be a system of parameters for A. By definition, this implies that $\mathfrak{m}_0 \in \operatorname{Anc}_{A_0}(K^{A_0}(x) \otimes_{A_0} A)$. This explains the second equality in the next display: The first equality is by the isomorphism $H_0(K^{A_0}(x) \otimes_{A_0} A) \cong H_0(A)/(x)H_0(A)$ and Nakayama's Lemma, and the third one is by definition. Combining the claim with the previous paragraph, we have
2013-05-30T23:47:41.000Z
2012-10-26T00:00:00.000
{ "year": 2013, "sha1": "8c3566ba0ddbe74eb4e77219ba385525a4ff748e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1210.7270.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8c3566ba0ddbe74eb4e77219ba385525a4ff748e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
214612957
pes2o/s2orc
v3-fos-license
Pollen Alters Amino Acid Levels in the Honey Bee Brain and This Relationship Changes With Age and Parasitic Stress

Pollen nutrition is necessary for proper growth and development of adult honey bees. Yet, it is unclear how pollen affects the honey bee brain and behavior. We investigated whether pollen affects amino acids in the brains of caged, nurse-aged bees, and what the behavioral consequences might be. We also tested whether parasitic stress altered this relationship by analyzing bees infected with the prevalent stressor, Nosema ceranae. Levels of 18 amino acids in individual honey bee brains were measured using Gas Chromatography – Mass Spectrometry at two different ages (Day 7 and Day 11). We then employed the proboscis extension reflex to test odor learning and memory. We found that the honey bee brain was highly responsive to pollen. Many amino acids in the brain were elevated and were present at higher concentration with age. The majority of these amino acids were non-essential. Without pollen, levels of amino acids remained consistent, or declined. Nosema-infected bees showed a different profile. Infection altered amino acid levels in a pollen-dependent manner. The majority of amino acids were lower when pollen was given, but higher when pollen was deprived. Odor learning and memory was not affected by feeding pollen to uninfected bees, but pollen did improve performance in Nosema-infected bees. These results suggest that pollen in early adulthood continues to shape amino acid levels in the brain with age, which may affect neural circuitry and behavior over time. Parasitic stress by N. ceranae modifies this relationship, revealing an interaction between infection, pollen nutrition, and behavior.

INTRODUCTION

The honey bee brain has evolved a remarkable complexity. The bee brain is a small and compact structure that contains approximately 960,000 neurons and is similar in size to a sesame seed (Giurfa, 2003). Despite the small size, honey bees exhibit sophisticated capabilities akin to higher-order cognition (Giurfa, 2003). Bees see the world in color, distinguish a range of odors, perceive shapes and patterns, and deftly navigate their terrain (Srinivasan, 2010). These capabilities likely evolved for honey bees to survive in dynamic environments in which food sources ebb and flow. Honey bees must rely on complex sensory and motor systems to forage for pollen and nectar, communicate the location and value of these resources, distinguish between hive odors to nurse developing larvae, defend their colonies from intruders, and perform many other nuanced behaviors to ensure colony survival (Pahl et al., 2010). To fuel these activities, adequate nutrition is necessary for growth and development (De Groot, 1953). Pollen is the primary source for amino acids and lipids, while nectar and honeydew are the natural carbohydrate sources. The consumption of these resources depends upon the age of the bee. Pollen is consumed primarily during the first 3 to 5 days of an adult worker's life (Hagedorn, 1967). Nectar, on the other hand, is transformed into honey inside the hive and is consumed throughout the bee's lifespan (De Groot, 1953;Haydak, 1970). Though a honey bee is considered an adult after emergence, there is substantial growth that occurs during the first 6 days (De Groot, 1953). There is an increase in body weight, and total protein content increases by 25-50% (Haydak, 1934).
To support this growth, honey bees consume considerable amounts of pollen; consumption may begin as early as 1-2 h post-emergence and reaches a maximum by Day 5 (Hagedorn, 1967). The hypopharyngeal glands, fat body, and other internal organs develop simultaneously. If pollen is not available, growth is stunted. Bees experience a loss of weight, diminished protein content, and a reduced lifespan (Haydak, 1970). As honey bees age, they perform a series of age-dependent behaviors, known as polyethism. The youngest adults perform tasks such as comb cleaning, nursing developing larvae or tending the queen. Bees will then switch to "out of hive" tasks such as guarding and foraging. The honey bee brain shows predictable patterns of change related to transitions in behavior (Withers et al., 1993;Mercer, 2001). For instance, the mushroom body, which supports olfactory memory, navigation, and behavioral choice, increases in volume in foraging bees and precocious foragers (Fahrbach and Robinson, 1996;Fahrbach et al., 2003). In the antennal lobe, the primary neuropil for olfactory processing, individual glomeruli change in volume and number of synapses with the onset of foraging (Brown et al., 2004). Interestingly, the plastic nature of the adult honey bee brain is not due to neurogenesis (Fahrbach et al., 1995). Instead, it is presumed that structural plasticity is dependent upon changes to existing neurons and networks (Farris et al., 2001;Mercer, 2001). These networks are modified by neurotransmitters, neuromodulators, and neurohormones that elevate in concentration with the onset of foraging (Wagener-Hulme et al., 1999;Schulz et al., 2002a,b). Therefore, if pollen is necessary for growth and development of the bee, pollen may also be necessary for optimal brain development and behavior. Amino acid levels in the brain may provide clues that connect pollen nutrition with neurodevelopment. All nutrients to an extent influence brain maturation, but protein, at least in humans and other mammals, appears most critical to the development of neurological functions (Morgane et al., 2002). Amino acids provide not only the building blocks of proteins and polypeptides, but play direct and indirect roles in neurotransmission. For example, amino acids can be precursors for neurotransmitters or be neurotransmitters themselves. They are also precursors of enzymes, neurohormones and neuropeptides. Less is known about how the honey bee brain uses amino acids from pollen. We know that the honey bee brain is plastic with age and undergoes structural and chemical changes, which involve amino acids. We also know that, similar to other animals, the bee brain has metabolic needs that are distinct from the rest of the animal, and this metabolic capacity is higher in nurses than in foragers (Ament et al., 2008;Alaux et al., 2011). Yet, it is unclear whether amino acids from pollen affect the levels of amino acids in the brain, and whether levels change in the honey bee with age. Given that pollen is consumed in the first 5 days of adulthood, we hypothesized that pollen would have a lasting effect on amino acid levels with age and affect neural circuitry such that behavioral consequences could be measured. This hypothesis was tested using nurse-aged bees at Day 7 and Day 11. Nutrition in nurses is well-studied, with known effects on nurse physiology and behavior (Crailsheim and Stolberg, 1989;Pernal and Currie, 2003;DeGrandi-Hoffman et al., 2010;Corby-Harris et al., 2014).
In this experiment, pollen was made available to some nurses, and unavailable to others. We compared levels of individual amino acids in the whole brain of bees from both groups and tested olfactory learning and memory to see if there was a relationship between diet and cognition. This hypothesis was further expanded upon by examining bees with parasitic stress from Nosema ceranae. N. ceranae is a mid-gut parasite known to disrupt both energy metabolism (Higes et al., 2007;Holt et al., 2013;Vidau et al., 2014;Kurze et al., 2016) and olfactory-guided behavior. In testing infected bees, we sought to gain a better understanding of how stress, either nutritional or parasitic, affects the relationship between a pollen diet and brain function. With these parameters, we assessed how pollen influences amino acid composition in the brain, its impact on olfactory learning and memory, and how the brain may change with stress.

Animals

Brood frames were collected from colonies at the Carl Hayden Bee Research Center apiaries in Tucson, Arizona between October and November 2016. Frames, from three or more hives, were taken from European colonies, Apis mellifera L., headed by queens from Pendell Apiaries (Stoneyford, CA, United States). Newly emerged bees were obtained from sealed brood frames kept overnight in a temperature-controlled dark environmental room (32-34°C, 30-40% relative humidity). The emerged bees were separated into cages according to four treatment groups: (1) pollen-fed bees (+P), (2) pollen-deprived bees (−P), (3) pollen-fed bees + Nosema (+P+N), and (4) pollen-deprived bees + Nosema (−P+N). Bees were kept for 11 days in a Binder BD (E2) incubator (Binder GmbH, Tuttlingen, Germany) at 31.7°C with 40% relative humidity, under constant dark conditions.

Feeding

Bee-collected corbicular pollen was gathered from Tucson, Arizona during the months of September and early October 2016 and kept at 20°C until fed to bees. Each cage of 60 bees was given an insert with 3 g of ground pollen, a 50% sugar solution and water ad libitum. Sugar, water and pollen were changed at 7 days. Cages were checked daily and dead bees were removed. Fresh pollen was analyzed for amino acid content using Gas Chromatography - Mass Spectrometry (Figure 1). Pollen species were identified via ITS sequencing as part of another study and included Pectis sp. and Corethrogyne sp. from the daisy family, and Ambrosia sp. (ragweed).

Nosema Inoculum, Spore Counts, and Species Detection

Spores to create the Nosema inoculum were collected from bees found at the entrance of an infected hive the day before, or the day of, the inoculation. Infected bee abdomens were crushed with a mortar and pestle in water and the Nosema inoculum was prepared using methods described in Fries et al. (2013). Individual bees were placed into an Eppendorf tube cut with a hole, which was large enough for a proboscis to extend through to feed. Bees in Nosema treatment groups (+N bees) were hand-fed a 2 µL inoculum containing 100,000 spores in a 50% cane sugar solution. Bees in control treatment groups were handled identically and fed 2 µL of 50% cane sugar solution. Bees were removed from individual tubes and placed into cages (60 bees per cage, two cages per treatment). Pollen, sugar and water were kept from cages for 1 h after feeding to prevent trophallaxis. The effect of pollen on spore proliferation was determined using Day 7 bees from individuals used during the behavioral experiments.
Twenty bees per treatment group were individually analyzed for spore intensity at Day 7 (Total = 80). Sample sizes were smaller on Day 11 as bees succumbed to the infection (N = 26). Nosema spore numbers per bee were determined by removing the abdomen, grinding it with a mortar and pestle in 1 mL of water, and placing a 10 µL sample on a hemocytometer (Fries et al., 2013). Five squares were counted, and spore number was estimated as described in Fries et al. (2013). To determine the Nosema species used in this study, abdomens from two infected bees from the Day 11 + pollen group (16,583,333 and 6,150,000 spores counted) were used to extract Nosema DNA. The methods for the molecular detection of Nosema species, including PCR parameters, were detailed in Fries et al. (2013). In brief, abdomens were crushed in 1 mL of DI water, microcentrifuged and the supernatant was removed. The sample pellets were re-suspended in 100 µL. The samples were briefly bead-beat to rupture Nosema cell walls.

[FIGURE 1 | Amino acid (AA) composition of bee-collected pollen and whole bee brains ± pollen. (A) Amino acid composition of multi-floral, bee-collected pollen fed to bees. Pollen was collected during the fall season in Tucson, Arizona in 2016. (B) Amino acid composition of whole brain homogenates measured from bees dissected on Day 7 and Day 11 post-eclosion. Bees were fed pollen (top, +pollen) or deprived of pollen (bottom, −pollen). Arrow direction denotes an increase (up) or decrease (down) in mean amino acid levels with pollen deprivation. For example, −pollen Day 7 bees had lower average cysteine levels compared with +pollen Day 7 bees. Asterisks denote a significant change in amino acid levels from Day 7 to Day 11 within the same feeding group (e.g., Day 7 +pollen vs. Day 11 +pollen); one asterisk (*) denotes a significant increase from Day 7 to Day 11, while two asterisks (**) denote a decrease.]

Amino Acid Analysis of Brain Tissue and Pollen

Bees, on Day 7 and Day 11, were flash frozen in liquid nitrogen (N = 15 bees per treatment, N = 60 total). Bees were kept at −80°C until whole brains (including antennal and optic lobes) could be dissected. Each brain was rapidly dissected and weighed using a Sartorius CP2P microscale (Sartorius, Göttingen, Germany). Each brain was then frozen in liquid nitrogen and stored at −80°C until chemical analyses. The details of the amino acid analyses on individual brains are described in Gage et al. (2018). In brief, conventional acid hydrolysis with chloroformate derivatization was used to quantify all amino acids, with the exceptions of tryptophan, cysteine, and arginine. Tryptophan was recovered using base hydrolysis with chloroformate derivatization, while cysteine and arginine were quantified with sodium azide acid hydrolysis followed by phenylisothiocyanate derivatization. Using conventional acid hydrolysis, asparagine and glutamine were hydrolyzed to their acidic forms (Fountoulakis and Lahm, 1998). As a result, asparagine and glutamine levels were measured as concentrations of aspartic acid and glutamic acid. Cysteine, too, was quantified in its oxidative form, cysteic acid. Conventional acid- and base-hydrolyzed samples were analyzed using the EZ:Faast Amino Acid Analysis Kit for Protein Hydrolyzates by Gas Chromatography - Mass Spectrometry (Phenomenex, Torrance, CA, United States).
The re-dissolved chloroformate derivatives were analyzed by EI GC-MS on an Agilent 7890A gas chromatograph coupled with a 5975C EI mass spectrometer detector. Sodium azide hydrolyzed samples were analyzed using a modified method from Elkin and Wasynczuk (1987). The re-dissolved phenylthiocarbamyl derivatives were analyzed by reverse-phase HPLC-PDA on a Thermo Scientific Spectra System coupled with a Finnigan Surveyor PDA Plus Detector. The methods used to measure amino acids from pollen were published in DeGrandi-Hoffman et al. (2018). In brief, fresh pollen was ground, and six samples of 10 mg each were homogenized using a mortar and pestle with liquid nitrogen. Each 10 mg sample was sealed under nitrogen gas in a crimp vial and digested with 500 µL of 6 M HCl with 4% thioglycolic acid at 70°C for 24 h. To remove acid residues, 50 µL of the acid hydrolysate was filtered and dried in a Savant 2200 SpeedVac (Thermo Scientific, Inc.). The material was then re-solvated, derivatized, and separated following the EZ:Faast kit protocol (Phenomenex). Amino acid composition in pollen (Figure 1) was determined using the hydrolysis and derivatization steps described above.

Odor Learning and Memory
Learning and memory experiments took place in November and early December 2016. The night before associative-learning tests, sugar was removed from the cages between 5 and 6 PM. The next morning, bees were restrained in a 1 mL pipette tip cut so that the body was held in place while the neck remained free to rotate. Wax was applied around the opening for further restraint around the thorax. Bees were tested for the proboscis extension reflex (PER) by applying a wooden applicator soaked in 50% cane sugar solution to the tips of the antennae. Bees were not allowed to lick. Only bees exhibiting a rapid, full PER proceeded to the study. Bees that were 7 days old were assessed for associative odor learning in a forward-paired conditioning paradigm. Clove oil (diluted 1:1000 in mineral oil; Sigma) was placed on filter paper in a 10 µL volume and inserted into a 0.5 mL glass syringe. The syringe was placed one inch from the bee and connected to a solenoid-controlled air stream. The solenoid was powered by an Interval Generator 1830 (W.P. Instruments, Sarasota, FL, United States) to deliver a 5-s odor pulse (7 km/h). Vacuum suction was applied continuously. Three seconds into the pulse, a wooden applicator soaked with a 50% sugar solution was presented to the antennae, and the bee was allowed to lick for 1 s. This sequence was repeated for three trials spaced 10 min apart; three odor conditioning trials were found to be the minimum number needed for long-term memory (Menzel et al., 2001). All experiments were performed between 9 AM and 1 PM under red light. These experiments took place over 4 days using 145 bees (30-39 per group). Animals were tested for memory of the conditioned odor 10 min after training: bees were presented with a 5-s odor pulse and observed for proboscis extension immediately following the odor.

Statistical Analysis
JMP 12.0 was used for all statistics. Amino acid concentrations were normalized using a log10 transformation, and equal variances were found between groups. A three-factor ANOVA (pollen, infection, age) was applied to each amino acid individually, followed by a Tukey-HSD post hoc test. The effect of pollen on spore count was determined by log10-transforming the data and applying a t-test. Odor learning and memory results were analyzed using non-parametric Wilcoxon/Kruskal-Wallis tests. To ascertain differences among treatment groups, post hoc Wilcoxon Each Pair tests were applied. Error bars denote standard error, and significance was determined at the α = 0.05 level.
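The analyses above were run in JMP. For readers working outside JMP, a rough Python equivalent is sketched below; this is an illustrative sketch only, not the authors' code, and the file and column names (conc, pollen, infection, age, spores, per, treatment) are hypothetical placeholders for the study's data layout.

# Rough Python analogue of the JMP analyses (illustrative sketch only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

# Three-factor ANOVA (pollen, infection, age) on log10-transformed
# concentrations, run for one amino acid at a time.
df = pd.read_csv("brain_amino_acids.csv")        # hypothetical file
df["log_conc"] = np.log10(df["conc"])
model = smf.ols("log_conc ~ pollen * infection * age", data=df).fit()
print(anova_lm(model, typ=2))
# Tukey-HSD post hoc comparisons could follow, e.g. with
# statsmodels.stats.multicomp.pairwise_tukeyhsd on the group labels.

# Effect of pollen on spore counts: log10 transform, then a t-test.
sp = pd.read_csv("spore_counts.csv")             # hypothetical file
fed = np.log10(sp.loc[sp["pollen"] == "+P", "spores"])
unfed = np.log10(sp.loc[sp["pollen"] == "-P", "spores"])
print(stats.ttest_ind(fed, unfed))

# PER behavior: non-parametric Kruskal-Wallis across the four groups.
beh = pd.read_csv("per_responses.csv")           # hypothetical file
groups = [g["per"].values for _, g in beh.groupby("treatment")]
print(stats.kruskal(*groups))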
RESULTS

Pollen Modulates Amino Acid Levels in the Honey Bee Brain
The first question asked was how the honey bee brain responds to pollen. Age-matched pollen-fed (+P) bees were compared with pollen-deprived (−P) bees at Day 7 and Day 11. At Day 7, concentrations of five amino acids differed significantly between +P and −P bees: cysteine, isoleucine, lysine, threonine, and tryptophan (Figures 1, 2 and Tables 1, 2). Cysteine, a non-essential amino acid, fluctuated most with pollen; it was higher in the brains of +P bees than −P bees by 54% [F(7,89) = 13.15, p ≤ 0.0001]. Isoleucine, an essential amino acid, was also significantly higher in +P bees, but with a smaller difference (4%) [F(7,100) = 11.15, p = 0.04]. Interestingly, isoleucine was the only essential amino acid found at a higher concentration with pollen. Lysine, threonine, and tryptophan were lower with pollen by 10, 39, and 8%, respectively. For means and statistics, see Tables 1, 2: 'Uninfected with Pollen Day 7.'

A different profile emerged in Day 11 bees (Table 1). Pollen elevated levels of the majority of amino acids tested, non-essential amino acids in particular. These compounds included alanine, arginine, asparagine, cysteine, glutamine, glycine, proline, and tyrosine (Tables 1, 2). Cysteine levels fluctuated most with pollen (39%), while alanine (5%) varied least. The essential amino acids histidine (10%), isoleucine (4%), and leucine (7%) were also elevated with pollen. Other essential amino acids, however, maintained similar levels: concentrations of lysine, methionine, phenylalanine, threonine, and valine did not change significantly with pollen, varying between 4 and 8%. For means and statistics, see Tables 1, 2: 'Uninfected with Pollen Day 11.'

Pollen also influenced how amino acid concentrations changed with age (Figures 1, 2 and Tables 1, 2). Ten amino acids increased in concentration from Day 7 to Day 11 (denoted as Δ%). The majority of these amino acids were non-essential. Amino acids such as alanine, asparagine, glutamine, glycine, proline, serine, and tyrosine rose in concentration from Day 7 to Day 11, increasing on average from 5 to 60% (Table 1). The largest changes were in serine (+60%) and glutamine (+30%) levels, while the smallest were in alanine (+5%) and glycine (+8%) levels. Essential compounds, such as histidine (+24%), threonine (+89%), and tryptophan (+10%), also followed this pattern. However, most essential amino acids maintained similar levels with age; amino acids like isoleucine, leucine, lysine, methionine, and phenylalanine varied as little as 0-5%. Valine, a notable exception, declined with age by 34%. For means and statistics, see Tables 1, 2: 'Uninfected with Age + Pollen.'

In contrast, −P bees showed little change in amino acid levels with age (Figures 1, 2 and Tables 1, 2). Serine, in fact, was the only compound found at a higher level (+32%) [F(7,93) = 10.25, p = 0.003].

FIGURE 2 | Amino acid levels in the honey bee brain with age: ± pollen, ± Nosema ceranae. The blue bars represent the mean concentration of amino acids present in bees at Day 7. The orange bars represent the mean difference in concentration in bees at Day 11 (Δ Day 11). In other words, blue bars represent the mean concentration at Day 7, and orange bars denote the change with age by Day 11. Negative changes with age (e.g., valine) are represented by orange bars to the left; they can be interpreted as the mean concentration difference from Day 7. For example, valine levels in pollen-fed bees were approximately 2.7 ng/mg at Day 7 and approximately 1.8 ng/mg at Day 11. Asterisks (*) denote significant changes from Day 7 to Day 11 (p ≤ 0.05, three-way ANOVA, Tukey-HSD). Each group represents the average of 15 bees analyzed individually for 18 amino acids, N = 120.
TABLE 1 | Numbers indicate the average amino acid concentration (log10 ng/mg ± SEM). These values are reported for uninfected bees (top) and Nosema-infected bees (bottom). Day 7 and Day 11 refer to the age of the bee when brains were dissected, and values are separated according to whether bees were pollen-fed (+ Pollen) or unfed (− Pollen). 'Effect of Pollen' denotes whether a pollen diet significantly affected an amino acid concentration; for example, 'lower' indicates a drop in the average amino acid concentration with a pollen diet. 'Change from Day 7 to Day 11' indicates the percentage change from the Day 7 average amino acid concentration to the Day 11 average. Asterisks denote a significant change according to the following scale: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001, three-way ANOVA, Tukey-HSD.

Pollen Elevates Nosema ceranae Spore Count
Nosema ceranae was determined to be the species of our Nosema inoculum based upon size and consistency with a N. ceranae positive control provided by the ARS-Beltsville laboratory. Nosema apis was not detected. Pollen-fed +Nosema bees (+P +N) had higher spore numbers than unfed +Nosema bees (−P +N) [t(38) = 5.31, p ≤ 0.0001] (Figure 3A). The uninfected bee abdomens in both the +P and −P groups contained zero spores (N = 40).

Nosema-Infected Bees Have a Different Relationship With Amino Acid Levels and Pollen
We next asked whether similar relationships with pollen and age were found in Nosema-infected bees (+N). We compared +N bees that were pollen-fed (+P +N) with those that were pollen-deprived (−P +N) at Day 7 and Day 11. Both similar and different patterns emerged. At Day 7, pollen lowered the levels of 10 of 18 amino acids in +N bees (Figures 1, 2 and Table 1); these included both non-essential and essential compounds. The non-essential amino acids alanine, asparagine, glutamine, glycine, and tyrosine were lowered with pollen by 5-16%. The essential amino acids isoleucine, leucine, methionine, phenylalanine, and tryptophan were lowered with pollen by 5-8%. By Day 11, however, pollen did not affect a single amino acid in +N bees; this Day 11 result was the largest difference in +N bees. Valine appeared to rise with pollen at Day 11, but the effect was not significant [F(7,97) = 10.43, p = 0.07]. For means and statistics, see Tables 1, 2: 'Infected with Pollen.'

TABLE 2 | A three-way ANOVA with a Tukey post hoc test was applied to each amino acid to determine how levels change in the brain with pollen and age in uninfected and infected bees. For example, 'Uninfected with Pollen' denotes whether a pollen diet in uninfected bees altered an amino acid concentration and is separated according to Day 7 or Day 11 measurements. 'Uninfected with Age' denotes whether an amino acid varied significantly from Day 7 to Day 11 and is separated according to pollen diet. *Asterisks highlight p < 0.05.
Pollen also influenced how amino acid concentrations changed with age in +N bees. When fed pollen, 11 amino acids increased in concentration from Day 7 to Day 11, the majority of them non-essential (Figure 2 and Tables 1, 2). These included alanine, arginine, asparagine, cysteine, glutamine, glycine, proline, and tyrosine, which rose with age by 6-28%. Asparagine and glutamine levels fluctuated most, at 28%, while alanine and glycine fluctuated least, at 6%. Three essential amino acids also followed this pattern: histidine (+18%), methionine (+6%), and tryptophan (+12%). Valine was an exception, declining in concentration from Day 7 to Day 11 by 49%. For means and statistics, see Tables 1, 2: 'Infected with Age + Pollen.'

Without pollen, amino acid levels remained mostly consistent or declined in −P +N bees (Figure 1 and Table 1). Only three amino acids were elevated from Day 7 to Day 11: asparagine (+13%), glutamine (+13%), and proline (+6%). Isoleucine (−7%), phenylalanine (−7%), and valine (−81%) were lower with age. For means and statistics, see Tables 1, 2: 'Infected with Age − Pollen.'

Nosema Infection and Pollen Show an Interactive Effect in the Brain
We then wondered whether pollen differentially affected amino acids in the brain with infection. We compared Nosema-infected bees (+N) with uninfected bees (−N) when age-matched and diet-matched (Table 3). We also examined the interaction between pollen and infection overall. Two main trends emerged: (1) with pollen, Nosema infection lowered levels of amino acids compared with −N bees; and (2) without pollen, Nosema infection raised levels of amino acids compared with −N bees. As a result, several amino acids showed an interaction effect between pollen and Nosema infection. A significant interaction term between Nosema infection and pollen was found in 12 of 18 amino acids. In 10 of these compounds, a similar relationship was observed (Figure 3): when pollen was fed to +N bees, levels of many amino acids in the brain were lower; when pollen was withheld, levels of these same amino acids were higher. The amino acids showing this pattern were a combination of non-essential and essential compounds. The former included alanine (Infection × Pollen, p ≤ 0.0001), asparagine (p ≤ 0.0001), glutamine (p ≤ 0.0001), glycine (p ≤ 0.0001), proline (p ≤ 0.0001), and tyrosine (p = 0.0014). The latter included isoleucine (p ≤ 0.0001), leucine (p ≤ 0.0001), methionine (p = 0.0006), and phenylalanine (p = 0.003). Cysteine and valine also showed an interaction between infection and pollen, though of a slightly different form. Cysteine levels rose in +P −N bees compared with −P −N bees, but in Nosema-infected bees, levels remained high regardless of diet (Infection × Pollen, p ≤ 0.0001). Valine showed the opposite: levels rose in +P +N bees compared with −P +N bees, but valine was not affected by diet in uninfected bees (Infection × Pollen, p = 0.027).

Nosema-Infected Bees With Lower Spore Numbers Differ From Nutritionally-Stressed Bees
We found that average spore numbers were higher in bees fed pollen (Figure 3). With this in mind, we wondered whether bees with a low N. ceranae spore count (−P +N) differed from uninfected, nutritionally-stressed bees (−P −N).
At Day 7, four amino acids differed between −P +N bees and −P −N bees, including both essential and non-essential compounds: cysteine, isoleucine, phenylalanine, and serine. In all four compounds, −P +N bees had higher levels (Table 3). At Day 11, five amino acids differed between −P +N bees and −P −N bees. Four of these were non-essential amino acids: asparagine, cysteine, glutamine, and glycine. Again, in all four of these compounds, Nosema-infected bees had higher levels (Table 3). Interestingly, valine continued to be the exception in this comparison; valine levels were lower in Nosema-infected bees.

TABLE 3 | Certain amino acids were found to be significantly affected by Nosema infection when brains were dissected at Day 7 and Day 11. These results changed based on whether a pollen diet was provided. 'The Effect of Infection' denotes whether infected bees had a higher or lower amino acid concentration compared with uninfected bees. Asterisks denote a significant change: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001, three-way ANOVA, Tukey-HSD.

Does Nosema Infection Resemble Nutritional Stress?
We wondered whether amino acid levels could provide insight into whether Nosema infection is similar to nutritional stress. To evaluate whether Nosema infection in pollen-fed bees resembled pollen deprivation, we compared +P +N bees with −P −N bees at Day 11. We found that the addition of pollen in infected bees resulted in few changes compared with pollen-deprived bees: three of 18 amino acids differed. These were arginine [F(7,91) = 4.99, p = 0.0043], cysteine [F(7,89) = 13.15, p ≤ 0.0001], and tryptophan [F(7,100) = 7.78, p = 0.039]. In all three, levels were higher in +P +N bees.

FIGURE 4 | The effects of pollen and Nosema ceranae on odor learning and memory at Day 7. Proboscis extension response (PER) was measured among four groups of bees: uninfected bees ± pollen (N = 39, 38) and N. ceranae-infected bees ± pollen (N = 30, 38). The PER response of each group was tested as bees were trained to associate an odor with a reward (Trial Learning, A). Memory retrieval (B) of the conditioned stimulus was measured 10 min after trial learning by presenting the odor alone. Significance (*) was determined using separate Wilcoxon tests for Trial 2, Trial 3, and Memory Retrieval. Groups connected by the same letter are not significantly different.

DISCUSSION

Amino Acids With Age, Pollen, and Behavior
We gained insight into how the honey bee brain uses amino acids from pollen (Figure 5). Pollen is consumed primarily during the first 5 days of a honey bee's life, and this constitutes the majority of the protein a bee will ever consume. Yet the honey bee brain remains plastic and adaptive to behavioral change throughout adulthood. Our results showed that a pollen diet affects amino acid levels in the brain after this period of consumption. Pollen affected the brains of nurse-age bees at Day 7 and Day 11. Pollen also increased levels of amino acids with age: the majority of amino acids were present at a higher concentration at Day 11 compared with Day 7. In a previous study, this effect continued through Day 15. These results suggest that pollen consumed in early adulthood continues to affect neurodevelopment over time, which may affect later behaviors. The majority of amino acids affected by pollen were non-essential.
In other words, pollen enhanced amino acids that can be made by the body, while many essential amino acids that must come from the diet maintained similar levels. De Groot (1953) determined that the 'essential' or 'non-essential' classification of amino acids in the honey bee follows closely that of other animals. He observed that many of the non-essential compounds were 'stimulatory' for honey bee growth: they were not essential for growth (measured as dry weight and total-body nitrogen content), but provided an enhancement, or 'stimulatory effect,' when supplemented. De Groot's observation may also apply to the honey bee brain. Many of the non-essential amino acids affected by pollen (e.g., ala, arg, asp, cys, glu, gly, pro, tyr) have direct or indirect roles in the nervous system. These include effects on neurotransmission, functioning and development, or action as 'neuro-protectants' against inflammation and oxidative stress (Wu, 2009). Cysteine, for example, was the amino acid whose concentration was most affected by pollen in our study. Cysteine is a limited resource in most insects and is necessary for glutathione production, an antioxidant found to neutralize reactive oxygen species and support immune function (McMillan et al., 2017). Tyrosine, another amino acid found to change with pollen and infection, is directly involved in protein phosphorylation, nitrosation, and sulfation, and is a precursor for dopamine, a neurotransmitter in the honey bee brain involved in behaviors like learning and memory (Agarwal et al., 2011).

The results we observed led us to consider how nutrients from pollen affect neural circuits. It is well known that the honey bee brain is plastic with age, and behavior-regulating areas like the antennal lobe and the mushroom body appear to undergo continual optimization to support the behaviors required at a given time (Mercer, 2001). These changes are nutritionally expensive. We know from studies in humans and other mammals that prenatal or early-life nutrition can impact neural circuitry (Morgane et al., 2002; Nyaradi et al., 2013); deficits in nutrition can impair processes like learning and memory that depend upon synapse formation and retraction. Our results suggest pollen consumption by honey bees may be similar. It may be that increasing levels of amino acids with age support the growth and adaptability of the honey bee brain. It remains to be addressed whether pollen leads to structural changes in areas like the antennal lobe, but our results indicate that early-life nutrition in honey bees, similar to humans and other mammals, may have consequences for neurodevelopment, healthy aging, and possibly behavior.

Olfactory learning and memory is an example of a complex neurobiological task that can be measured in the laboratory to assess the relationship between diet and behavior. We did not see an effect of pollen feeding on odor learning and memory in Day 7 bees. These bees performed poorly, and there was no difference found either in the association of an odor with a reward or in its recall.

FIGURE 5 | Summary figure. (Top) Bees fed pollen showed few changes in amino acids (AA), and no difference was found in olfactory learning and memory performance on Day 7. By Day 11, pollen-fed bees show higher levels of AA in the brain, especially non-essential amino acids. (Bottom) Nosema-infected bees show a different profile with pollen. At Day 7, several AA in the brain are affected by pollen, in addition to enhanced olfactory learning and memory behavior and spore intensity. By Day 11, pollen does not affect AA.
This may be because only a handful of amino acids were affected by pollen at Day 7 (cys, iso, lys, thr, trp), or because this age may not support optimal odor learning and memory ability; Day 11 or a later age may have been a better choice. We chose Day 7, prior to our chemical analyses, reasoning that the shortened lifespan of pollen-deprived bees (especially those that were infected) could make it problematic to assess behavior at Day 11. There are reports, however, demonstrating that diet does influence olfactory learning and memory. Arien et al. (2015) found that fatty acids present in pollen, namely omega-3 fatty acids, were necessary for learning and memory performance in foragers. The role of amino acids in olfactory learning and memory remains to be tested.

Essential Amino Acids
A surprise in this study was the consistency and low variance of many essential amino acids in the brains of bees fed pollen. Essential amino acids such as isoleucine, leucine, lysine, methionine, and phenylalanine clustered tightly in our data set and varied with age between 0 and 5%. Only isoleucine and leucine were significantly higher with pollen on Day 7 or Day 11. Isoleucine, methionine, and phenylalanine serve in the synthesis of other amino acids found to vary significantly with pollen, such as glutamine, alanine, cysteine, and tyrosine (Wu, 2009). Lysine is directly involved in nitric oxide synthesis, and nitric oxide is a neurotransmitter known to affect memory in bees and moths (Muller, 1996; Gage et al., 2013; Gage and Nighorn, 2014); leucine regulates protein turnover through cellular mTOR signaling and gene expression (Wu, 2009). Given these roles, the low variation in essential amino acids may not indicate a lack of effect; it could mean these compounds exert their effects within a narrow concentration range. For example, lysine levels marginally increased with age in bees fed pollen, while lysine levels dropped significantly in pollen-deprived bees (Figure 2). This finding suggests that a marginal increase in essential amino acids may be sufficient. These results can generate hypotheses for future studies on how essential amino acids might be used in adult brain development and behavior.

Amino Acids Under Nutritional Stress
One of the most nutrient-sensitive tasks is nursing, which entails the ability to deliver brood food to developing larvae. The hypopharyngeal glands, which produce this food, are reduced in size and function under nutritional stress (Moeller, 1969, 1971; Alaux et al., 2011; Corby-Harris et al., 2016), and irregular nurse behavior develops (Schmickl and Crailsheim, 2001, 2002). Starved nurses make fewer trips to the brood area (Schmickl and Crailsheim, 2002) and may even develop into precocious foragers (Schulz et al., 1998; Toth and Robinson, 2005). We found that pollen deprivation changed the amino acid profile in the nurse brain, which may underpin the irregular worker task behaviors observed in nurse bees under nutritional stress. While pollen-fed bees experienced elevated levels of amino acids with age, levels remained consistent in pollen-deprived bees. Tryptophan may be an important amino acid to examine further. Tryptophan is required for hypopharyngeal gland growth (Fengkui et al., 2015), and we saw tryptophan in the brain rise with age in pollen-fed bees; in pollen-deprived bees, however, brain tryptophan remained consistent.
Tryptophan is the precursor for serotonin, a neuromodulator and hormone that rises in the brain with age and affects nutrient intake, digestion, and feeding behavior in bees (French et al., 2014). Tryptophan may therefore be useful as a supplement (Fengkui et al., 2015). It would be insightful to connect individual amino acids with hormonal signaling in the brain in order to understand the connection between hypopharyngeal gland physiology and irregular nurse behavior.

Effects of Parasitic Stress From Nosema ceranae
Clues were gained as to how pollen affects the brain in Nosema-infected individuals. We saw that pollen feeding affected amino acid levels at Day 7, and that levels were lower in infected bees fed a pollen diet; a prior study yielded similar results. However, this lowering of amino acid levels was found only with a pollen diet, rather than from infection alone. Pollen-deprived bees with N. ceranae showed significantly higher levels of amino acids. In fact, two-thirds of the amino acids tested showed an interaction effect between pollen and infection.

Evidence from RNA-seq data indicates a link between diet and N. ceranae infection. Infection in nurses affected genes involved in amino acid production in the fat body. The infection appeared to affect the tricarboxylic acid cycle (TCA cycle: ame00020), which generates energy from proteins, fats, and carbohydrates. This effect appears to be N. ceranae-specific, as pollen deprivation alone does not prompt changes in TCA genes (Corby-Harris et al., 2014). Specifically, genes involved in the synthesis of valine, leucine, and isoleucine were highlighted, amino acids that were also affected in this study. The DeGrandi-Hoffman study also found that the expression levels of 399 genes were differentially affected by N. ceranae and pollen type; these genes were involved in the TCA cycle, pyruvate metabolism (ame00620), and metabolic pathways (ame01100), which encompass numerous amino acid processes. Other studies have also pointed toward a connection between protein metabolism and Nosema sp. (Holt et al., 2013; Vidau et al., 2014; Glavinic et al., 2017; Li et al., 2019). In addition, Nosema has been shown to affect gene expression in the brain (McDonnell et al., 2013; Doublet et al., 2016). The number of differentially expressed genes in response to infection was small (57 and 13; McDonnell et al., 2013; Doublet et al., 2016), with no significant functional groups identified, but the affected genes included enzymes related to immune and antioxidant activity.

One may also consider the parasitic nature of N. ceranae (Vidau et al., 2014; Mayack et al., 2015; Gage et al., 2018). There are host-parasite examples in which parasites use nutrients from their hosts to fuel replication (Zuzarte-Luís and Mota, 2018). For example, Plasmodium blood-stage parasites can sense the level of host nutrient intake and modulate replication and virulence based on nutrient availability (Mancio-Silva et al., 2017). We found that a pollen diet significantly increased the number of N. ceranae spores within the honey bee midgut, similar to other studies (Zheng et al., 2014; Fleming et al., 2015; Jack et al., 2016). This suggests a similar host-parasite dynamic may exist between A. mellifera and N. ceranae. If we contemplate a dispersal strategy for N. ceranae, it may be beneficial for the parasite to replicate inside a healthy host, which can forage and drift, and to reduce replication in a starved host with a shorter lifespan.
We saw curious effects of infection on amino acids in the brain that may support this host-parasite dynamic. These results were most clear when comparing unfed bees: we found that Nosema-infected bees, even those with a lower spore count, differed from nutritionally-stressed bees. Eight amino acids differed between these groups, and all but one (valine) were present at higher concentrations in infected bees (Table 2). One might guess that parasitic and nutritional stress in combination would result in the lowest levels of brain amino acids, but we did not find this to be the case. This combination of stressors appeared to boost amino acid levels in the brain, which may support the idea of 'nutrient sensing' by N. ceranae. In addition, pollen-fed infected bees more closely resembled nutritionally-stressed bees: comparing groups at Day 11 (+P +N vs. −P −N), we found only three differences in amino acids, arginine, cysteine, and tryptophan, which were higher in infected bees. Still, infected bees live longer with pollen, and protein content helps (Pasquale et al., 2013). These results may support the hypothesis that N. ceranae senses nutrient levels in the host and modifies its replication and virulence so that it weakens the host but does not kill it.

Cysteine may be especially beneficial to infected bees. Our results showed generally higher levels of cysteine in Nosema-infected bees, and brain levels of cysteine varied most with pollen in all bees. Given cysteine's role in glutathione production, a process known to boost immunity and neutralize reactive oxygen species, a cysteine supplement may be of benefit in fighting infection.

The interconnectedness of pollen and infection was demonstrated when testing olfactory learning and memory. Unlike in uninfected bees, a pollen diet improved learning and memory performance in infected bees. We saw 10 of 18 amino acids affected by pollen in infected bees, which may explain the learning and memory effect. Pollen-fed infected bees exhibited better responses during trial learning and in memory retrieval compared with both pollen-deprived infected bees and uninfected bees. It is unclear what this means, but there are a few possibilities. We saw this effect previously, in caged bees, and hypothesized that it could reflect the accelerated maturation, or precocious foraging, known to occur with infection (Goblirsch et al., 2013). Nurses tend to perform less well than foragers in the PER test (Ray and Ferneyhough, 1997), which may suggest that heightened performance equates to accelerated maturation; in this paradigm, a pollen diet appears necessary for the effect. There are also examples of humans, rodents, and bumblebees displaying enhanced memory performance under stress (Schwabe et al., 2012; Muth et al., 2015). Circulating levels of stress hormones enhance memory-encoding processes (Schwabe et al., 2012), and this may be the case in infected bees fed pollen. Alternatively, better performance may reflect that infected bees are more motivated and sensitive to sugar: infection increases sucrose sensitivity (Mayack and Naug, 2009), and this effect might improve performance in the PER tests. However, infected bees generally showed better performance overall, and the unfed infected bees performed marginally better than uninfected bees; this result suggests sucrose sensitivity may not fully explain the enhanced learning and memory performance.

There are also limitations of this study to consider. The most notable is that these effects were observed in caged bees, without the social setting of the hive.
This study also focused on amino acids from pollen, and it is likely that lipids, vitamins, sugars, and minerals also contributed to spore proliferation and behavior. Moreover, we decided to mimic more natural Nosema-infection conditions by delivering spores to bees in infected gut contents; purified spores were not used, and minute amounts of pollen grains may have been delivered along with the microorganisms found in the gut contents. With these considerations, we propose that the honey bee brain is highly responsive to pollen and that a pollen diet continues to affect amino acids in the brain after consumption. Stress, either parasitic or nutritional, changes this relationship. A better understanding of how pollen affects the brain of the honey bee provides insight into honey bee longevity, improvements to nutritional supplements, and methods to understand and prevent the harmful effects of stress.

DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.

AUTHOR CONTRIBUTIONS
SG and GD-H conceived the study and wrote the manuscript. SG, NJ, SC, and MC performed the experiments and analyzed the data. All authors read and commented on the final version.

FUNDING
This work was supported with internal funds from the United States Department of Agriculture.
Development of Mobile Mapping System for 3D Road Asset Inventory

Asset management is an important component of an infrastructure project, and a significant cost is involved in maintaining and updating asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit, and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. Using the derived sensor models, geo-referenced images and 3D point clouds are produced. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually, using techniques implementing RANSAC plane fitting and edge extraction algorithms. Finally, the scope of such extraction techniques is discussed, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory.

Introduction
The life cycle of civil engineering infrastructure projects is generally made up of three phases: design and planning, construction, and maintenance. The use and maintenance phase has the longest duration and involves most of the project cost. Construction engineering projects, such as the development of roads, bridges, parks, and other utilities, are expected to remain in a usable state for several years; however, damage and deterioration are unavoidable, and the amount of money spent on repair or replacement of the assets is very high. In order to make the process of maintenance and improvement more manageable, it is necessary to document the assets and inventories in an appropriate format. Recent advancements in digital maps and GIS (Geographic Information System) technology have improved the efficiency of asset and inventory management, and GPS (Global Positioning System) is widely used to locate and map the positions of assets. The most important task in the development of an asset management system is the collection of accurate spatial data and its related attributes. Manual data collection using field data loggers with GPS and traditional survey methods is common for asset inventory data collection. Though this methodology provides accurate results, it demands considerable time and manpower. Several remote sensing products, such as aerial imagery, terrestrial photographs, and high-accuracy laser point clouds, also contribute to the development of an asset inventory database. This paper focuses on the development and use of a mobile mapping system for documenting transportation asset inventories. A typical mobile mapping system is equipped with a laser scanner, panoramic cameras, a GPS-IMU (Inertial Measurement Unit) positioning setup, an on-board computer, and a storage device for data logging. Airborne LiDAR data and aerial images are highly suitable for extracting features such as building footprints, roads, and vegetation cover.
Mapping of infrastructure facilities, however, may require a much higher level of detail, including accurate positions of road signs and signals, curb heights and pavement widths, the availability of ramps on sidewalks, and so on. A mobile mapping system tends to be more advantageous in these scenarios. For most cartography applications, recent products with up-to-date data are preferred over highly accurate but outdated data. Because field techniques are largely manual and subjective, a technology that is both economical and expedient is called for. With the introduction of low-cost, portable laser scanners to the market, most mobile mapping systems are now equipped with laser scanners in addition to video cameras; the combination of geo-referenced pictures and a three-dimensional point cloud provides better scope for extracting information from the scene.

The paper is divided into the following sections. The first section discusses the background for the study, including a detailed description of existing methods of data collection and literature references on advancements in applications of mobile mapping systems in the domain of asset management. The second section describes the methodology, covering the task of building the system, the different components and their communication networks, the co-registration of multi-sensor data, basic processing of the data, and the integration of multi-sensor data into a single GIS system. The remaining sections present specific details of the experiments conducted, a discussion of the proposed framework, and concluding remarks.

Background
Spatial data about roadside asset inventories and other municipal utilities are generally collected using traditional land survey methods. Two types of information are recorded: the inventory itself (structures and road signs) and the condition of the inventories [1]. Field survey methods use GPS and total stations to record the locations of various assets; GPS can easily be used to map street furniture such as lamp posts, sign boards, etc. [2]. Though manual survey methods are highly accurate in two-dimensional space, it is difficult to obtain the third dimension of points with high accuracy, and these methods are time consuming and cumbersome as well [3]. There are several other data management and data integrity challenges associated with using traditional survey methods for replenishing an asset inventory database. The data might not be consistent, since a number of manual tasks are involved [4]. There are also cases where the data are incoherently spread across different systems, in different formats, depending on the survey crew and the temporal aspects of data collection. There is a definite need to integrate the inventory data into a single system with a common database structure in order to add the attribute data that is relevant during supervision, maintenance, and replacement jobs. A good comparison of manual and mobile mapping data collection techniques is given in [5]. The introduction of field data collectors to the market has improved the consistency of survey data and has also resulted in a less cumbersome and more systematic way of data collection. Field data collectors are devices with built-in GPS that collect and store details of features; these devices log the location and attribute data directly to a server or to built-in storage [6-8]. Mobile GIS technology is also increasingly used for asset inventory data collection.
Mobile applications with spatial analysis capabilities, which can collect data about features using the phone's internal GPS, are increasingly available on the market. Though this process of data collection is direct and simple, the accuracy of a phone's GPS is low and features are only coarsely mapped. When asset inventory data is collected using field survey methods, most of the time and resources are spent on field work; the office work is limited to migrating data from the data collector to the servers and entering attribute data [5]. With field data collectors, it is not always possible to verify details without visiting the field again, although newer instruments are equipped with cameras that aid in reviewing and verifying the collected data. Though these field data collection techniques are widely used, the procedures are labor-intensive and time-consuming, and three-dimensional modelling of the assets remains considerably difficult. These limitations motivated technological advancements and eventually led to the development of the mobile mapping system, which directly collects 3D data of the vicinity.

The concept of a mobile mapping system, which integrates multiple sensors on a moving platform, has a history dating back to the 1970s. A photo-logging instrument was attached to a vehicle to collect images of transportation asset inventories. Though satellite positioning technology like GPS was unavailable at that time, a combination of accelerometers, gyroscopes, and odometers was used to determine the vehicle's course and direction, and the photos were eventually geo-referenced based on the recorded vehicle positions [9]. After the introduction of GPS, a combination of GPS, IMU, and other sensors was used for positioning, and video cameras with time-stamped frames were used to record images. Urban development and growth in transportation infrastructure encouraged the development of more efficient and improved mobile mapping techniques. Further, precise positioning using GPS in kinematic mode made direct geo-referencing possible, and the combination of direct geo-referencing concepts with digital imaging technology led to lower costs, better accuracy, and increased flexibility of MMS [10].

Cameras capture images of the scene. Though it is possible to obtain the third dimension by using multiple overlapping stereo images, the process is indirect and tedious. Laser scanners, on the other hand, determine the three-dimensional coordinates of each point in the scene directly by calculating the range and direction of the laser pulse. The rise of LiDAR technology brought compact, light-weight, and inexpensive laser scanners to the market that can easily be mounted on a mobile mapping system; integrating laser scanners into such a system thus provides the coordinates of points in three-dimensional space. Fusing these geo-referenced multi-sensor data provides a better opportunity to find solutions for specific problems in the geospatial domain [11]. Mobile mapping systems reduce the data collection time significantly. Literature on the applications of mobile mapping systems in the field of transportation is reviewed in detail in [12]. With an appropriate setup of the sensors, the efficiency of data collection can be maximized. In addition to imaging cameras and laser scanners, depending on the application, the system can also be equipped with thermal cameras, hyperspectral scanners, or ground-penetrating radar.
The laser scanner typically rotates in a 2-D plane and collects data over 360 degrees. A vertical inclination of the laser scanner leads to better coverage, as it collects points even from overhead structures perpendicular to the vehicle's movement [13]. For approaches aiming to detect road markings and features on the road surface, it is advantageous to orient the cameras/laser scanners facing downward [14]. A pair of stereo cameras in place of ordinary cameras for capturing asset information improves the accuracy of the third dimension [15] and makes it possible to measure feature points directly from images. The major limitation of this approach is inaccurate measurement of relative distances between objects, which can be attributed to the various challenges that lead to inaccuracies in calibrating the cameras. In addition to vehicle-based systems, the mobile mapping system can also be mounted on different platforms, such as backpacks, UAVs, boats, carts, or balloons, depending on the application. There are light-weight laser scanners that help build a system weighing less than 30 pounds, paving the way for multi-modal data collection [16]. The positioning components (GNSS and IMU) were used to assess the quality of newly proposed wheelchair tires in a novel study [17]; assessment of pavement condition turns out to be a byproduct of the system setup, making it a viable technology for asset condition assessment.

Survey-grade mobile mapping systems are typically equipped with a commercial laser scanner, panoramic cameras, GPS-INS, dead-reckoning devices such as odometers, and dedicated post-processing software. An average absolute accuracy of 0.5 m is expected from most commercial mobile mapping systems [18]. An important aspect to consider while migrating from traditional survey methods to mobile mapping technology is the high initial cost of the system. Several approaches [19-21] describe the development of a low-cost mobile mapping system with laser scanners and cameras on board. It should be noted that a low-cost mobile mapping system provides data accurate enough for tree canopy detection [22]. The accuracy required of the positioning components of a mobile mapping system is much lower than that of airborne LiDAR; hence, the overall cost can be reduced by choosing a less accurate GPS-IMU system.

Though mobile mapping systems reduce the data collection time tremendously, an equivalent amount of time and resources is generally spent on extracting the required information from the collected data. Thus, most of the time and resources for creating an asset inventory are spent digitizing features and entering attribute details. Due to the intensive office work involved in data digitization, research on semi-automatic and automatic methods of feature extraction from mobile mapping data has gained importance. Different algorithms and methodologies utilizing multi-sensor data (LiDAR point clouds and images) have been proposed for road infrastructure modelling, and several methodologies have been proposed for automatic/semi-automatic detection of urban scene features from mobile mapping data. The most prominent features include road centerlines, road boundaries, lanes, medians, sidewalks, curbs, poles, signboards, and roadside vegetation/trees. Depending on the type of scene and the sensors used for data collection, some algorithms and concepts are commonly used to automate feature extraction. RANSAC, the RANdom SAmple Consensus method of plane detection, is used to detect and classify road planes from LiDAR data [23]. A method of classifying urban features using multi-level segmentation and then connecting coherent planar components is explained in [24], which also describes region-growing, mean-shift, and connected-component algorithms. An innovative method of detecting curbstones from high-density LiDAR data showed that LiDAR point clouds from a mobile platform provide better extraction accuracy than airborne LiDAR data [25]. Experiments show that automated detection of road markings can be achieved by applying a range-dependent thresholding function to rasterized LiDAR data. Several other approaches for detecting road corridor features from multi-sensor data are given in [26-30].
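To make the RANSAC idea concrete, a minimal NumPy sketch of plane detection is given below. It illustrates the general technique rather than the implementation of [23]; the iteration count and the 5 cm inlier threshold are assumed values that would be tuned to the data.

# Minimal RANSAC plane fit for extracting a dominant plane (e.g., the road
# surface) from a LiDAR point cloud. Illustrative sketch; parameters are
# assumptions, not values from the cited work.
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.05, seed=None):
    """points: (N, 3) array. Returns ((unit normal, d), inlier mask) for the
    plane n.x + d = 0 with the most inliers found in n_iter trials."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal @ p1
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Usage: road points are the inliers of the dominant near-horizontal plane.
# plane, mask = ransac_plane(xyz)   # xyz: points from the mobile LiDAR
# road_points = xyz[mask]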
An ideal solution for asset inventory data collection would be a fast, cost-effective technology providing an easy method of data collection. Though most existing mobile mapping systems cater to current requirements, they involve a high cost, and time-consuming manual feature extraction adds to the disadvantage. This paper describes the process of building a cost-effective mobile mapping system using mobile LiDAR and cameras for data collection, and discusses various semi-automatic and automatic feature extraction techniques for detecting transportation assets.

Methodology
Transportation assets mainly consist of roads and pavements. However, other assets, such as sign boards, signal lights, poles, and the electric, telecommunication, and water utilities found above and below the roads and pavements, are also major components of transportation assets. These assets help improve the efficiency of the roadway network, thereby adding to the safety of transportation [31]. Additionally, it should be possible to monitor the assets regularly, which would help detect overgrown trees, the condition of roads/pavements, and roadside vegetation changes. Mobile mapping data can also be used for specific purposes, such as determining the presence of ramps on sidewalks and adherence to other ADA (Americans with Disabilities Act) policies. Since it is important to collect and maintain a detailed inventory of these assets in an appropriate format, the mobile mapping system, which collects images and three-dimensional point cloud data of the scene, can be used to document the position and condition of the above-mentioned assets.

Required Sensors
The primary components of the mobile mapping system built for transportation asset inventory data collection include:
• One or more cameras (Nikon D3200, D3300): the cameras capture pictures/video frames of the scene, providing asset managers with digital pictures portraying the condition of the assets.
• One or more laser scanners (Velodyne HDL-32E): the LiDAR records 3D point data of the vicinity in the mapping frame, which helps create a 3D model of the scene and extract features.
• One or more GPS receivers, gyros, and accelerometers (Geodetics Geo-iNav): the integrated GPS-IMU solution improves the frequency of position recording from 1 Hz (GPS alone) to 125 Hz. The integrated navigation solution includes the position as well as the orientation of the vehicle (X, Y, Z, heading, roll, and pitch).
• On-board computer (Brix): the software controlling the sensors is installed on this computer. It is important to Wi-Fi-enable the computer so that it can be connected to remotely from a laptop/tablet.
• External storage device (Samsung 1 TB Solid State Drive, SSD): the laser scanner can record up to 700,000 points per second, and the navigation data is also quite voluminous. Since the data-recording rate is high, it is necessary to use a solid-state drive, which provides high-speed data logging (a rough estimate of the required rate is sketched below).
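A back-of-envelope estimate of the logging rate illustrates why a high-speed drive is needed. The 26 bytes per point assumed below (three double-precision coordinates plus intensity and a timestamp) is a guess at a plausible logging format, not a Velodyne specification.

# Rough logging-rate estimate motivating the SSD choice (assumed record size).
POINTS_PER_SEC = 700_000            # HDL-32E maximum point rate
BYTES_PER_POINT = 26                # assumption: 3 x 8-byte floats + extras
rate_mb_s = POINTS_PER_SEC * BYTES_PER_POINT / 1e6
per_hour_gb = rate_mb_s * 3600 / 1000
print(f"~{rate_mb_s:.0f} MB/s, ~{per_hour_gb:.0f} GB per hour of driving")
# -> ~18 MB/s and ~66 GB/h, before camera video is added

Even under these conservative assumptions, sustained writes of tens of megabytes per second over an hour-long run quickly justify a solid-state drive.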
Velodyne HDL-32E
The Velodyne laser scanner is widely used for mapping applications. There are a few survey-grade laser scanners, such as the Optech Lynx and Riegl VQ, which come with accurate, high-precision GPS-IMU solutions. While these scanners are very expensive, there are also several inexpensive laser scanners cheaper than the Velodyne; the Hokuyo and ProtoX2D are examples. These scanners have just one laser diode and are mostly used for obstacle detection in robotics applications; they have a small range and insufficient accuracy for mapping. The Velodyne has 32 laser channels covering a vertical swath from +10.67° to −30.67°, and the LiDAR head can rotate 360 degrees in the two-dimensional plane. The sensor operates in the infrared band at a wavelength of 905 nm, and is light-weight and compact. The HDL-32E collects up to 700,000 points per second, with the time-of-flight method used to determine the range of these points. The sensor typically has a range of 1 m to 70 m; i.e., points in the scene within 70 m of the sensor are measured. These attributes make the Velodyne an obvious choice for developing a low-cost mapping solution. The points are stored as frames, each frame corresponding to one 360-degree rotation of the laser scanner in the 2D plane.

Nikon Cameras: D3200 and D3300
Cameras have long been deployed in asset mapping. Charge Coupled Device (CCD) cameras were used initially and were soon replaced by SLR cameras. The large CCD arrays and pixel size of DSLR cameras, compared with point-and-shoot digital cameras, make them more apt for mobile mapping applications. The Nikon D3200 has a fixed focal length of 35 mm and an angle of view corresponding to 1.5 times the format equivalent of the focal length. The Nikon D3300 has a variable focal length of 15 mm to 55 mm; its angle of view also corresponds to 1.5 times the format equivalent of the focal length. Both cameras capture either 30 or 50 frames per second in video mode, depending on the chosen resolution. Each video lasts 20 min and requires about 1.3 GB of memory. Technologically advanced cameras, such as the GoPro, are also widely discussed for their role in mobile mapping; however, the GoPro's fish-eye lens is prone to distortions that decrease the quality of the cartographic product.

Geodetics Geo-iNav
The Geo-iNav instrument from Geodetics provides an integrated GPS-IMU solution. It consists of a GPS that provides positions with a horizontal accuracy of 5 cm and a vertical accuracy of 10 cm in L1/L2 real-time kinematic mode, and a horizontal accuracy of 1.5 m and a vertical accuracy of 2.5 m in L1-only mode. For a mapping application, the expected absolute accuracy of feature points is <20 cm; hence, it is necessary to deploy an IMU with a gyro bias of less than 3°/h. The Geo-iNav unit weighs 567 grams and has a volume of 550 cubic cm, which makes it portable and apt for mobile mapping applications. It provides a navigation solution at a frequency of 125 Hz.
The navigation and geodetic data provided by the Geo-iNav include the navigation solution (position, velocity, and attitude) and raw GPS/IMU data that can be used for post-processing.

Assembly of Sensors
Since the mobile mapping system could be mounted on a variety of mobile platforms, from trucks to boats, it is essential to create a robust frame on which the sensors can be assembled. The first important aspect to consider is the stability of the platform: the IMU, GPS antenna, cameras, and laser scanner are required to remain in a stable position throughout the data collection. Additional care should also be taken to ensure that the IMU does not receive undesired vibrations from the vehicle's internal mechanics. The second aspect to consider is the position of the GPS antenna; it is important to mount the GPS antenna in a position where the signals from the GPS satellites are not occluded. The orientation of the laser scanner and cameras also plays a major role in the efficiency of data collection. By changing the vertical orientation of the laser scanner, it is possible to collect points from the roads as well as from overhead structures. The laser scanner is mounted on top of an arm that can rotate about the horizontal axis, as shown in Figure 1. In addition to the sensors and navigation instruments, the mobile mapping system also needs a laptop/tablet computer that connects to the Brix through a Wi-Fi or wired network and allows the operator to control the sensors. Once the components are put together on the platform, the sensors must be calibrated before they are deployed.
Amongst the systematic errors that affect the data, the error due to boresight misalignment is very significant. Calibration is the process of compensating for the misalignment between the different sensors belonging to the system [32][33][34]. In general, the misalignment involves both translation and rotation components, measured between the center of each of the other sensors and the IMU center. Translation and rotation between the following pairs are measured.

1. GPS antenna center to the center of the IMU

The origin and axes of the IMU are marked on its box. Since the centers of the IMU and the GPS antenna are physically separated and identifiable, the distance between them can be manually measured along the X, Y and Z directions of the IMU, as shown in Figure 2. The GPS antenna has no orientation of its own, so only translation components (dx, dy, dz) are involved. The body frame of the sensor platform has its positive X-axis aligned with the direction of travel of the vehicle, its positive Y-axis extending out to the right-hand side, and its positive Z-axis pointing down. In the current setup, the negative X-axis of the IMU is aligned along the forward direction of the body frame, the Z-axis of the IMU lies along the vertical axis of the body frame but points up, and the Y-axes of the body frame and the IMU frame are aligned. Given these mounting parameters, the GPS antenna is located in the IMU's positive X direction; hence the displacement between the IMU origin and the GPS antenna along the IMU's X-axis must be subtracted from the sensor measurements. Thus, while computing the boresight misalignment parameters, the value of dx is negative; for similar reasons, dy and dz are positive. These translation values are used by the IMU while integrating the GPS-IMU data. The measurement should be accurate to 10 cm or better for the post-processing software to accurately determine the offset.
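A minimal sketch of how these signed lever-arm values would be applied is given below; the numerical values are hypothetical, and only their signs follow the mounting just described.

```python
import numpy as np

# Hypothetical lever arm (metres) in IMU axes: dx negative because the
# antenna sits on the IMU's +X side while the IMU's -X axis points along
# the vehicle's forward direction; dy and dz positive for similar reasons.
lever_arm_imu = np.array([-0.35, 0.12, 0.40])  # tape-measured to <10 cm

def antenna_to_imu_centre(p_antenna_nav, R_nav_from_imu):
    """Reduce an antenna position to the IMU origin.

    p_antenna_nav  -- antenna position in the navigation frame
    R_nav_from_imu -- attitude matrix rotating IMU axes into the nav frame
    """
    # The lever arm is expressed in IMU axes, so rotate it into the
    # navigation frame before subtracting it from the antenna position.
    return p_antenna_nav - R_nav_from_imu @ lever_arm_imu
```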
2. LiDAR (origin from which the laser pulse is triggered) to the center of the IMU

The origin and axes of the IMU are marked on its box; it is, on the other hand, difficult to manually identify the axes of the laser scanner. Hence, the translation and rotation between the two origins are determined using a Terrestrial Laser Scanner (TLS). The TLS and the mobile mapping unit are placed on a levelled plane, and both the mobile laser scanner and the TLS scan the vicinity. The mobile scanner is fine-scanned by the TLS so that the IMU markings are clearly visible in the point cloud. Points belonging to the IMU center and IMU axes are picked from the TLS point cloud, and the origin and axes of the TLS point cloud are shifted to the IMU center and axes of the mobile mapping system. Since the scale component is fixed, a 3-D Helmert rigid-body transformation is applied [35]. The translation and rotation between the point clouds from the mobile laser scanner and the TLS (centered at the IMU) are then measured using the Iterative Closest Point (ICP) methodology introduced by [36,37]; CloudCompare [38], an open-source implementation of ICP, is used. The resulting translation and rotation values define the boresight misalignment between the IMU and the laser scanner.

3. Camera calibration [39][40][41]

a. Interior orientation: Interior orientation is the part of camera calibration in which measurements relating to the camera, such as the perspective center and focal length, are determined. It also involves finding the scaling and skew factors and the lens distortion. The interior orientation is performed under lab conditions by taking pictures of a regular grid from different angles; by transforming image pixels to real-world lengths, the metrics of imaging can be determined.

b. Exterior orientation: Exterior orientation parameters change for every picture. They describe the position and orientation of the camera with respect to a coordinate system while each photo is captured. Space resection is a conventional method used to determine exterior orientation parameters; it involves measuring ground control points and digitizing them on overlapping images to determine the camera position (X, Y, Z) and orientation (omega, phi, kappa).
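Several of the steps above ultimately reduce to estimating a rotation and translation between two sets of matched points: the fixed-scale Helmert transformation of item 2, and the inner step of each ICP iteration. Below is a minimal sketch of the closed-form Kabsch/Horn SVD solution to that subproblem; it is a generic implementation, not the one inside CloudCompare.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ~ dst_i (Kabsch/Horn, SVD).

    src, dst -- (N, 3) arrays of matched points; scale is fixed at 1, as in
    the 3-D Helmert rigid-body transformation without its scale parameter.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In the TLS step, src could be points picked on the mobile scanner's cloud and dst the same points in the TLS cloud re-centered on the IMU; R and t are then the boresight rotation and translation. ICP iterates exactly this solve with re-matched correspondences.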
Co-Registration of Multi-Sensor Data

The mobile mapping system consists of two cameras, a laser scanner and a GPS-IMU instrument. Once the data collection is complete, the trajectory needs to be smoothed and the data from the different sensors synchronized. The trajectory appears rugged mostly at places where there is a GPS signal outage; we smooth it by fitting a piecewise polynomial function.

Trajectory Interpolation

The frequency of the Geo-iNav is 125 Hz: the combined GPS-IMU solution records a position every 1/125th of a second. The Velodyne laser scanner, however, collects up to 700,000 points per second, so to geo-reference the points we would ideally need a position/orientation of the vehicle approximately every 1/20,000th of a second. Since the frequency of the laser scanner is much higher than that of the INS, the trajectory curve is interpolated using a piecewise polynomial function in the shape of a cubic spline (B-spline) with the timestamp (t) as its parameter, as shown in Equation (1). A piecewise cubic spline is preferred over linear interpolation because it provides better continuity and is also considered the best-fitting curve for a road trajectory [42,43].

Time Synchronization

Coordinates of points on the trajectory, images and LiDAR point clouds are uniquely identified by their timestamps. The GPS time from the Geo-iNav is taken as the standard, and the clocks of all other sensors are corrected to GPS time. The Velodyne laser scanner records two timestamps: a system time in microseconds and a GPS time in seconds. The system time is the number of microseconds elapsed since the beginning of the hour in UTC (Universal Time Coordinated); the offset between the Velodyne's system time and GPS time is therefore corrected using the GPS timestamp recorded by the Velodyne itself. A video file has the following properties that can be used to timestamp the image frames:

1. Modified date (timestamp), recorded when the video is stored to the SD card (taken as the finishing time of the video)
2. Duration of the video
3. Frames per second (fps)

The SLR cameras record the system time when the videos are saved to the SD card, which is taken as the finishing time of a video. The beginning timestamp of the video is calculated from the duration and the finishing time, and using the fps value, a timestamp for every image frame is interpolated. These timestamps are based on the camera's system time, which is prone to an offset from GPS time. To determine this offset, a picture of the GPS log (latitude, longitude, altitude, GPS week/second) on the visualization screen is captured before the data collection; the difference between the camera's system time and the GPS time in the captured picture gives the offset between the two clocks. The timestamps of the video frames are corrected for this measured offset.
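A sketch of both steps follows, using SciPy's cubic-spline interpolation for the pose channels and the finish-time/duration/fps arithmetic for the video frames; the function and array names, and the handling of the camera offset, are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def build_pose_splines(t_nav, channels):
    """One cubic spline per pose channel, parameterized by GPS time.

    t_nav    -- 125 Hz Geo-iNav timestamps (seconds of GPS time)
    channels -- dict of equal-length arrays, e.g. x, y, z, heading, roll,
                pitch; heading should be unwrapped (np.unwrap) beforehand
                so the spline does not interpolate across the 0/360 jump.
    """
    return {k: CubicSpline(t_nav, v) for k, v in channels.items()}

def pose_at(splines, t):
    """Pose at an arbitrary laser-return or image timestamp."""
    return {k: float(s(t)) for k, s in splines.items()}

def frame_timestamps(finish_gps_s, duration_s, fps, camera_offset_s):
    """GPS timestamps for every video frame, working back from the file's
    modified date (the assumed finishing time) and correcting the camera
    clock by the offset measured against the photographed GPS display."""
    n_frames = int(round(duration_s * fps))
    start = finish_gps_s - duration_s
    return start + np.arange(n_frames) / fps - camera_offset_s
```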
Direct Geo-Referencing

Direct geo-referencing is the process of determining the position and orientation of the laser scanner and cameras of the mobile mapping system from the navigation data of the GPS-IMU device. The LiDAR and the cameras capture points in the scene external to the platform; the position and orientation of these sensors, determined through direct geo-referencing, allow the coordinates of the captured points to be expressed in the mapping coordinate system [44].

The point clouds were registered by applying the interpolated rotation values (heading, roll, pitch) obtained from the INS, the corresponding X, Y, Z position from the GPS-IMU integrated navigation data, and the boresight rotation and translation between the IMU and the laser scanner, as shown in Equation (2):

P_G = p_ECEF + R_lG · R_ba · (R_sb · P_s + dT_sb)    (2)

where:
P_G is the coordinate of the captured point in the ECEF system;
p_ECEF is the GNSS sensor position in the global ECEF system;
R_lG is the rotation matrix from the local frame to the global WGS (ECEF) frame,

R_lG = | -sin(long_i)   -sin(lat_i)·cos(long_i)   cos(lat_i)·cos(long_i) |
       |  cos(long_i)   -sin(lat_i)·sin(long_i)   cos(lat_i)·sin(long_i) |
       |  0              cos(lat_i)                sin(lat_i)            |

with long_i, lat_i the geodetic coordinates of p_ECEF;
R_ba is the rotation matrix from the body frame to the local frame;
R_sb is the rotation matrix from the scanner frame to the IMU (b-frame), i.e., the boresight rotation matrix;
P_s is the coordinate of the point in the scanner frame (as recorded by the laser scanner); and
dT_sb is the offset between the scanner frame and the IMU center (b-frame), i.e., the boresight translation.

In order to improve the accuracy of direct geo-referencing, ground control points (GCPs) are collected and used to rectify any errors in the geo-referenced point cloud. It is important to choose control points that are clearly identifiable on the point cloud as well as on the images, and the points ought to be well distributed over the area where data are collected. The survey is carried out using GPS and total stations, and the coordinates are recorded in the WGS system. The measured points are identified on the registered point clouds, which are then corrected by applying a 3-D Helmert rigid-body transformation using the GCPs as parameters.

Geo-Referencing Images

The mobile mapping system captures images/video frames continuously, and a scene is captured by multiple images with significant overlap between frames. Hence, the first step in geo-referencing is to connect consecutive images using common points (tie points) [45]. The traditional aero-triangulation (AT) methodology is used to orient and connect the images. The indirect method of exterior orientation, which uses ground control points to geo-reference the images, is considered very accurate and is commonly used for registering images; once AT is performed, the measured values of the control points are used to geo-reference the images. With more accurate GPS-IMU navigation sensors, direct geo-referencing of images using the position of the sensor and the calibration parameters is increasingly used on several mobile platforms, such as UAVs and mobile mapping systems. To use the direct method of geo-referencing, it is important to accurately determine the boresight misalignment parameters, that is, the separation of the sensor (camera) origin from the origin of the IMU. Since it is unlikely that this separation can be measured with centimeter-level accuracy, a combination of direct and indirect geo-referencing techniques can be used in this case: once bundle block adjustment is performed on the set of images using the GPS-IMU position data, the GCPs can be used to correct for the errors in calibration [46,47].
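Both the point-cloud and image geo-referencing above hinge on Equation (2). A sketch of it in code form follows, assuming the local frame is the usual east-north-up frame at the antenna's geodetic position; variable names mirror the symbols above.

```python
import numpy as np

def R_lG(long_rad, lat_rad):
    """Rotation from the local (east-north-up) frame to ECEF at (long, lat)."""
    sl, cl = np.sin(long_rad), np.cos(long_rad)
    sp, cp = np.sin(lat_rad), np.cos(lat_rad)
    return np.array([[-sl, -sp * cl, cp * cl],
                     [ cl, -sp * sl, cp * sl],
                     [0.0,       cp,      sp]])

def georeference(P_s, p_ecef, long_rad, lat_rad, R_ba, R_sb, dT_sb):
    """Equation (2): scanner-frame point -> global ECEF coordinate.

    P_s    -- point as recorded by the laser scanner (scanner frame)
    p_ecef -- interpolated GNSS/IMU position in ECEF
    R_ba   -- attitude: body frame -> local frame (from heading/roll/pitch)
    R_sb   -- boresight rotation: scanner -> IMU (b-frame)
    dT_sb  -- boresight translation: scanner origin -> IMU center
    """
    p_body = R_sb @ P_s + dT_sb           # into the IMU/body frame
    p_local = R_ba @ p_body               # into the local level frame
    return p_ecef + R_lG(long_rad, lat_rad) @ p_local
```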
Quality Check Methodology

It is important to assure the mapping quality of the collected data before populating the asset inventory database or using the data for feature extraction. The quality measures for the survey data involve determining the relative and absolute accuracies. A particular feature might be recorded in multiple image or point cloud frames, due either to overlapping frames or to multi-path (repeat-trajectory) data.

Relative accuracy is the measure of drift between the same feature measured on different frames. The major causes of inconsistency between consecutive frames are systematic or random errors of the sensors involved; positioning error will show up in the relative accuracy of multi-path data. Features such as sign boards, pavement boundaries and building corners can be used to quantify relative errors. Absolute accuracy is the closeness of the collected data to the true position of the features. To check the absolute accuracy of the collected data, a combination of the GCPs used for geo-referencing and a new set of GCPs is used: the coordinates of points from the geo-referenced point cloud and images are checked against the GCPs collected through field survey, and the root mean square deviation between the two sets of values determines the absolute accuracy of the laser scanning data and images. Once the quality of the data is assured, the data are processed so that outliers are removed and only the relevant data are used for the experiment.

Filtering of Data

Laser scanning data may contain noise and erroneous returns, and the first step is to filter these erroneous points. The different laser scanner errors and the ways they affect data quality are explained by [48]. Sparse returns that fall outside the typical range of the laser scanner may be considered noise and can be removed from the data; unrealistic elevation values might also result from inconsonant returns or incorrect range calculation. Noisy points are filtered from the data using a basic range threshold: points outside the range are treated as outliers and removed. Secondly, it is important to narrow the dataset down to the regions of interest and eliminate unrelated parts of the data. A transportation asset inventory database would typically contain roads and roadside features. To localize the point cloud, the road centerline and road width are extracted from the TIGER (Topologically Integrated Geographic Encoding and Referencing) dataset [49]: the TIGER shapefile gives the centerline of the roads, and its attribute data hold the type of each road, which provides a fair idea of the road width. The extent is determined by adding a buffer to the prescribed road width in order to accommodate the roadside landscape, pavement and other features of interest. Localizing the point cloud to contain only the features of interest is very helpful in avoiding misclassification of features and also significantly reduces the processing and computation time.

Automatic Bare-Earth for Cross Section

The raw LiDAR data consist of returns from all features in the vicinity, both ground and non-ground. In a road scene there are ground points, such as the road surface, sidewalk and median, as well as points from non-ground features such as trees, electric lines, cars, poles, sign boards and other roadside structures. The most straightforward way to separate ground points from non-ground points is to exploit their planarity: almost all ground points lie on well-defined planes (the roads and pavements, for example, are typically planar). The RANSAC method determines the best-fit plane for a set of points; run repeatedly on the outliers of each iteration, it yields a number of significant planes that fit the point cloud. The points belonging to these planes are classified as ground points (bare earth), and the outliers of the final RANSAC iteration are the non-ground points, as sketched below.
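A minimal sketch of the RANSAC plane fit just described; the iteration count and the 5 cm inlier distance are illustrative choices, not values from the study.

```python
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.05, seed=0):
    """Fit the dominant plane to an (N, 3) cloud; return an inlier mask.

    dist_thresh is the point-to-plane distance (metres) below which a
    point counts as belonging to the plane.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        tri = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        if np.linalg.norm(n) < 1e-9:          # collinear sample, retry
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - tri[0]) @ n) < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Running ransac_plane repeatedly on the outliers of each pass, and keeping
# every plane with a substantial inlier count, accumulates the ground
# (bare-earth) points; whatever survives the final pass is non-ground.
```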
Creation of a GIS Database for Asset Management

The concept of using GIS technology for asset management is well established. GIS provides a concrete mapping between spatial data and its corresponding attribute details, which helps in making well-informed decisions about infrastructure management and maintenance [31]. Displaying an asset with its location on a scaled map provides better visualization and aids decision making, and applying spatial relationship constraints such as connectivity, containment and adjacency helps maintain data integrity and consistency. A number of commercial software tools with rich Computerized Maintenance Management System (CMMS) capabilities have been popular in the GIS industry for almost a decade; from electric and water utilities to transportation and public safety, GIS asset management systems are being implemented for almost every utility domain. Most of these frameworks have a customized database structure with internal relationships mapped between the different components of the asset management system, and they are largely exclusive to a particular utility.

As discussed in the previous sections, most transportation asset management frameworks use traditional data collection methods based on GPS and other field data collectors. The data are then imported into CAD format and later converted into GIS-compatible formats. The pioneers in asset management software support almost all data formats produced by common field data collection methods; however, the best practice for integrating data from mobile mapping systems directly into a GIS database is still a state-of-the-art research domain. The structure of the database and the spatial relationships between the assets do not depend on the data collection methodology, but a mobile mapping system is not a direct method of data collection: it involves a number of processing tasks, from geo-referencing to feature extraction, before the actual assets of interest are documented. An ideal framework for managing asset data collected with a mobile mapping system should include these processing tasks along with the implementation of a GIS database for the asset inventory. The characteristics of an asset management framework using mobile mapping data are:

- Ability to manage voluminous data efficiently (large point clouds)
- Consistency between multi-sensor data: images, LiDAR point clouds, thermal/hyperspectral data, etc.
- Appropriate versioning of collected datasets
- Implementation of basic automatic feature extraction algorithms and an interface to manually digitize features
- A relational database design to store assets and their spatial relationships
- Visualization, rendering and a fluid user interface

Experiments

The mobile mapping system with the described components is assembled as shown in Figure 1. The GPS antenna and IMU are fixed upright, and the system is mounted on a uniformly levelled surface. The arm of the laser scanner is adjusted, depending on the mounting of the mobile mapping system, so that the maximum number of points from the scene is collected. The system also requires a battery to power the components. Once the system is set up, the boresight misalignment of each sensor with respect to the IMU center is measured.
The center of the IMU and its axes are marked on its cover. The x, y and z misalignment values between the GPS antenna and the IMU center are fed into the IMU processing software so that the integrated navigation solution is automatically corrected for this misalignment. The second set of calibration values, the rotation and translation between the IMU and the laser scanner/cameras, is applied to the data during geo-referencing. The system is powered up and the individual components are tested for functionality; the SSD is also tested for proper data logging and completeness of data.

Once the data collection is complete, the trajectory is post-processed to apply differential corrections and to integrate the GPS-IMU data using "GeoRTD" from Geodetics. The data are converted into appropriate formats for synchronization with the data from the other sensors: the GPS-IMU data, originally in binary format, are converted into ASCII (American Standard Code for Information Interchange); the point cloud data from the laser scanner are converted from PCAP (Packet Capture) to CSV (Comma Separated Values) format; and the videos recorded by the cameras in MP4 (MPEG-4 AVC, Advanced Video Coding) format are converted into individual JPEG (Joint Photographic Experts Group) frames.

The data from the different sensors are synchronized based on the recorded timestamps. The system times of all sensors are measured in the UTC system, and the data are compensated for the 17 s offset between UTC and GPS time. There can also be a drift between the internal system time recorded by the cameras and GPS time; drifts of 24.5 s and 28.5 s were measured between the system times of the two cameras and GPS time, respectively. At the end of this synchronization step, each return from the laser scanner, each frame recorded by the cameras and each position recorded by the GPS-IMU sensors is timestamped in GPS time. A piecewise polynomial spline function is fitted to the X, Y, Z, heading, roll and pitch values with the timestamp as the parameter, so the navigation data corresponding to each point in the laser point clouds and each image can be determined from the spline functions based on their respective timestamps. Additionally, the rotation and translation values corresponding to the boresight misalignment/sensor calibration are applied. Once the data are geo-referenced, the point clouds, initially in the sensor's coordinate system, are converted to a global coordinate system.

Ground control points are required to determine the exterior orientation (EO) parameters of the images and also to check the quality of the geo-referenced point clouds. Points that are well defined on the ground, on the images and on the point clouds are chosen as GCPs, and GPS and total stations are used to measure them. Based on a rough reconnaissance of the area, GPS stations are chosen at locations from which the maximum number of GCPs lie in the vicinity; the GPS instrument is set up at these locations in static mode to record the station coordinates, and a total station is then set up at the same locations to determine the coordinates of the GCPs. The EO parameters of the images are determined using the ground control points in addition to the calibration values and navigation data. The quality of the geo-referenced point clouds is determined by calculating the root mean square deviation between the measured GCPs and the corresponding points on the point cloud.
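The quality figure is straightforward to compute once the check points have been matched; a short sketch with hypothetical residuals of roughly 10 cm follows.

```python
import numpy as np

def absolute_accuracy_rmsd(gcp_xyz, cloud_xyz):
    """Root mean square deviation between surveyed GCPs and the same
    points identified on the geo-referenced cloud ((N, 3) arrays, metres)."""
    d = np.linalg.norm(gcp_xyz - cloud_xyz, axis=1)
    return np.sqrt(np.mean(d ** 2))

# Hypothetical example: three check points with ~10 cm residuals.
gcps  = np.array([[0.00,  0.00, 0.00], [10.0, 5.00, 1.00], [20.0, -3.0, 0.50]])
cloud = np.array([[0.08, -0.05, 0.03], [10.1, 5.06, 0.93], [19.9, -2.9, 0.55]])
print(absolute_accuracy_rmsd(gcps, cloud))  # should stay below the 20 cm target
```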
The data collection was carried out by mounting the system on a truck (Figure 3a) and on a golf cart (Figure 3b). The former was primarily used to collect asset inventories for the City of Dania Beach, a larger area with high-speed vehicles and traffic; the latter was used for data collection within the FAU Boca Raton campus. Since the golf cart is limited to pavements, it is unlikely to cover wider roads in a single run; additionally, golf carts are not equipped with shock absorbers, so undulations in the pavement show up in the navigation data as noise. The vehicles were driven at an average speed of 10 mph in order to obtain good data coverage.

The laser scanning data are stored as frames, each covering a time range of 0.2 s. The density of the point cloud depends on the number of returns. Since the scanner is oriented at an angle of approximately 135 degrees, as shown in Figure 1, the road surface is clearly captured, and there are more returns from the road surface because it is closer to the scanner. In an urban roadside environment, an average of 100,000 points (returns) is collected per frame. Pictures are collected at 30 frames per second by the digital SLR cameras, and frames are extracted from the video at an interval of 0.2 s, the same as the duration of a LiDAR frame.

The LiDAR frames are loaded into the Leica Cyclone environment to digitize the features of interest. The assets of interest for transportation inventory management include sidewalks, medians, guardrails, fencing, miscellaneous concrete structures, lighting, landscape areas, delineators, striping, symbols and messages, crosswalks, stop bars, raised pavement markers, attenuators and highway signs. In addition to the spatial component, the attributes of these assets are also populated. The typical database structure for transportation asset inventory management includes attributes such as condition and dimensions; road signs should additionally be tagged with the symbol represented by the sign. The spatial component of an asset may be a point, line or polygon depending on its geometry, and may also require some customized attributes. Hence, each asset is stored in an individual table with attributes, as shown in Table 1; the Asset Id is unique for every asset (a minimal table sketch follows below). In addition to the manual digitization of assets, there are a few established automatic and semi-automatic feature extraction methods that can be implemented in future.
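As one possible realization of such a per-asset table, the sketch below creates a sign table with Python's built-in sqlite3. The column names beyond those mentioned in the text (Asset Id, symbol, condition, dimensions, point geometry) are hypothetical placeholders, not the actual Table 1 schema.

```python
import sqlite3

con = sqlite3.connect("asset_inventory.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS highway_sign (
        asset_id   TEXT PRIMARY KEY,   -- unique for every asset
        symbol     TEXT,               -- e.g. 'STOP', 'SPEED LIMIT 35'
        condition  TEXT,               -- condition rating
        height_m   REAL,               -- dimensions
        width_m    REAL,
        lon        REAL,               -- point geometry (signs are points)
        lat        REAL,
        elev_m     REAL
    )
""")
con.execute(
    "INSERT OR REPLACE INTO highway_sign VALUES (?,?,?,?,?,?,?,?)",
    ("SGN-0001", "STOP", "good", 0.75, 0.75, -80.1234, 26.3456, 2.1),
)
con.commit()
```

Line features (striping, guardrails) and polygon features (medians, landscape areas) would get analogous tables with their own geometry columns or, in a full GIS, proper geometry types.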
Extraction Methodology

As discussed in the previous section, mobile mapping greatly reduces the data collection time. However, digitizing individual features from the geo-referenced point clouds and images requires a huge amount of human effort and is time-consuming. Since cameras and a laser scanner are both on board, the scope for automatic/semi-automatic feature extraction improves: algorithms based on the geometry of feature points can be combined with image processing algorithms to obtain better results. A number of methods have been proposed by geomatics researchers and data mining scientists for automatic feature detection and extraction; some common methods that can be used to extract different transportation asset inventories are described in this section.

Automatic Road Marking Extraction

The major advantage of a mobile mapping system with multiple sensors on board is that the data from each sensor complement the others and help extract the maximum information about the scene. Implementing a combination of image processing and geometry algorithms on the images and point cloud data helps detect road markings with almost no manual intervention. The edges of the sidewalk detected in the images are used to define the boundary of the road, and the bare-earth LiDAR points within that boundary are considered road points. To ensure clear visibility for drivers, road lane markings have a much higher reflectance than the road surface; the boundaries of the road markings can therefore be extracted from the images [27] as well as by classifying the LiDAR point cloud based on the intensity of the return pulse [14]. An example of such road marking extraction is illustrated in Figure 4. Once the parts of the image corresponding to the road markings are localized (Figure 4a), the image is segmented; the segmented image is a smoothed form of the original with less noise (Figure 4b). Finally, edges are detected in the segmented image using Canny's edge detection algorithm [50], and these edges are processed to extract the road markings (Figure 4c).
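A minimal stand-in for that localize/segment/detect pipeline using OpenCV is sketched below; the file name, region of interest and thresholds are hypothetical, and the blur-plus-threshold step is only a simple proxy for the segmentation used in the study.

```python
import cv2

# Hypothetical frame extracted from the video stream.
img = cv2.imread("frame_000123.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "frame not found"

roi = img[img.shape[0] // 2:, :]            # crude localization: lower half
smooth = cv2.GaussianBlur(roi, (9, 9), 0)   # suppress texture noise
# Lane paint is far brighter than asphalt, so keep only high-intensity pixels.
_, bright = cv2.threshold(smooth, 180, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(bright, 50, 150)          # marking boundaries (cf. Figure 4c)
cv2.imwrite("markings_edges.png", edges)
```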
Road Sign Extraction

Road signs are important components of the transportation asset inventory database. Road signs are non-ground points and do not lie on a definite plane. They are made of aluminum, which is highly reflective and therefore clearly distinguishable in the point cloud; the shape of a sign board can be determined from the boundary of a cluster of high-intensity returns [29,51]. Sign boards have a fixed range of dimensions, height and shape, so by applying height and dimension constraints, points that do not belong to road signs can be eliminated and the precise shape of the sign board obtained (Figure 5a). It is also important to know the symbol on the road sign in order to add attribute information to the extracted spatial data. Since the content of the sign board cannot be well identified from the point cloud, the geo-referenced images are used for this purpose: the location of the sign board is determined on the set of referenced images using the coordinate of the centroid of the sign board extracted from the point cloud (Figure 5b).
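The intensity and height constraints translate directly into array filters; a sketch with illustrative thresholds is given below.

```python
import numpy as np

def sign_candidates(xyz, intensity, ground_z,
                    min_intensity=200, min_h=1.5, max_h=4.0):
    """Candidate sign-board returns from a geo-referenced cloud.

    xyz       -- (N, 3) point coordinates
    intensity -- (N,) return intensities (aluminum reflects strongly)
    ground_z  -- local ground elevation under each point (from bare-earth)
    Thresholds are illustrative, not the study's tuned values.
    """
    height = xyz[:, 2] - ground_z
    mask = (intensity > min_intensity) & (height > min_h) & (height < max_h)
    return xyz[mask]

def board_centroid(cluster_xyz):
    """Centroid of a candidate cluster; projecting this coordinate into the
    nearest geo-referenced image then recovers the symbol on the board."""
    return cluster_xyz.mean(axis=0)
```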
Extraction of Other Assets

Several methods utilizing the geometry of the point cloud and the intensity of the returns are used to extract assets such as poles and lamp posts. In a high-density LiDAR point cloud, poles and lamp posts can be modeled as cylinders with height and volume thresholds [30]; in a sparse point cloud, however, poles and lamp posts appear as upright lines, and a RANSAC line fit can be used to distinguish them [23]. Other features of interest include building faces and roadside landscape. Building faces can be extracted using their planar property, because they are always planes perpendicular to the ground surface [26]. Landscape on the sides of the road typically consists of ground points reflecting green; by combining intensity values from the image pixels with the classified LiDAR ground points, the roadside landscape can be extracted.
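A sketch of such a RANSAC line fit with a verticality test, used to decide whether a candidate cluster is pole-like, is shown below; all thresholds are illustrative.

```python
import numpy as np

def is_pole_like(points, n_iter=100, dist_thresh=0.1,
                 min_verticality=0.95, seed=0):
    """Fit the best-supported 3-D line to a cluster via RANSAC and require
    its direction to be near-vertical (|z-component| close to 1)."""
    rng = np.random.default_rng(seed)
    best_count, best_dir = 0, None
    for _ in range(n_iter):
        p0, p1 = points[rng.choice(len(points), 2, replace=False)]
        d = p1 - p0
        if np.linalg.norm(d) < 1e-6:
            continue
        d = d / np.linalg.norm(d)
        # Point-to-line distance: rejection of (p - p0) from direction d.
        v = points - p0
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        count = int((dist < dist_thresh).sum())
        if count > best_count:
            best_count, best_dir = count, d
    return best_dir is not None and abs(best_dir[2]) > min_verticality
```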
Discussion

Field data collection methods pose a number of challenges, such as long data collection times, increased manual effort, safety constraints and limits on the information collected. Additionally, several data management aspects, such as data transformation, storage and integrity, are not efficiently streamlined when field data collection methods are used. For instance, the workflow currently adopted by the FDOT (Florida Department of Transportation) involves field data collection through traditional survey methods; the data are converted into CAD (Computer-Aided Design) format, and the captured images are converted into an appropriate GIS raster format and used as an overlay on as-built drawings. These multi-step processes are difficult to automate into an organized workflow, and it is also challenging to integrate newly collected data with existing data. Mobile mapping technology overcomes almost all of these limitations. Though the initial cost of the system is high, once the system is built it makes frequent data collection easier and helps procure up-to-date information. The experiment produced a very economical system whose total cost is 65,000 USD; considering the recurring cost of manpower involved in traditional field survey methods, the initial cost of a mobile mapping system providing mapping-grade accuracy turns out to be clearly advantageous. The major challenges in using a mobile mapping system are the processing of voluminous data and the laborious digitization of individual features; these limitations can be overcome with improved data management techniques and automatic feature extraction methodologies. Field data collection with improved positioning instruments and hand-held mapping devices was a huge leap over traditional surveying methods in the last decade; mobile mapping is now the newer mapping technique, owing to its many advantages over its predecessors.

Advantages of a mobile mapping system:

- Less laborious data collection process
- Possibility of frequent data collection for constantly changing scenarios
- Data can be reviewed for condition assessment from the comfort of the office
- Minimal field work involved
- Direct and accurate elevation information from mobile LiDAR data
- Easy to survey inaccessible areas
- Minimal or no hindrance to commuters and traffic

Conclusions

Considering the importance of a detailed asset inventory database and the limitations of traditional surveying methods in building one, there is a need for an economical solution with improved capabilities. To that end, a cost-effective mobile mapping system was built to address the limitations of previous techniques. The geo-referenced data from the mobile mapping system provide a three-dimensional model of the entire scene, giving a direct method to accurately determine the three-dimensional coordinates of road assets in the corridor. Though there is no single well-established methodology for automatic extraction of features from point clouds and images, a number of proven techniques for detecting individual assets have been demonstrated with the experimental data. Additionally, the advantages of the mobile mapping system over other techniques make it an important technology for asset inventory mapping. The current framework includes geo-referencing tools and quality check methods for point clouds and images. Current research involves building a UAV-based mobile mapping system to monitor the condition of overhead road infrastructure; future work will also include developing more automated feature extraction methods to further reduce office processing costs in asset inventory management.