Optical Monitoring of BL Lacertae Object OJ 287: a 40-Day Period?

We present the results of our optical monitoring of the BL Lacertae object OJ 287 during the first half of 2005. The source did not show large-amplitude variations during this period and was in a relatively quiescent state. A possible period of 40 days was derived from its light curves in three BATC wavebands. A bluer-when-brighter chromatism was discovered, which is different from the extremely stable color during the outburst in 1994-96. The different color behaviors imply different variation mechanisms in the two states. We then revisited the optical data on OJ 287 from the OJ-94 project and found a probable period of 40 days in its optical variability during the late-1994 outburst as well. The results suggest that two components contribute to the variability of OJ 287 during its outburst state. The first component is the normal blazar variation. This component has an amplitude similar to that of the quiescent state and may also share a similar periodicity. The second component can be taken as a 'low-frequency modulation' of the first component. It may be induced by the interaction of the assumed binary black holes in the center of this object. The 40-day period may be related to the helical structure of the magnetic field at the base of the jet, or to the orbital motion close to the central primary black hole.

INTRODUCTION

Blazars represent a peculiar subclass of active galactic nuclei (AGNs). The most prominent property of blazars is their strong and rapid variability, which is believed to originate from a relativistic jet that is pointed basically towards the observer. Another characteristic of blazars is their high polarization, with the degree and position angle also being highly variable. Superluminal motions have been observed in a significant fraction of these radio-loud flat-spectrum sources. Blazars can be classified into flat-spectrum radio quasars and BL Lac objects, depending on whether or not they show strong emission lines in their optical spectra.

Ever since the discovery of blazars and their highly variable brightness, efforts to search for periodicity in their variability have never stopped, because a periodicity can put strong constraints on the emission and variation mechanisms. Although most attempts failed to find any periodicity, positive results have been claimed for several objects (e.g., OJ 287, Sillanpää et al. 1988; 3C 345, Webb et al. 1988; 3C 120, Webb et al. 1990; S5 0716+714, Quirrenbach et al. 1991; ON 231, Liu, Xie, & Bai 1995; Mrk 421, Liu, Liu, & Xie 1997; PKS 0735+178, Fan et al. 1997; BL Lac, Fan et al. 1998; Mrk 501, Hayashida et al. 1998; AO 0235+164, Raiteri et al. 2001). However, except for the case of OJ 287, none of the claimed periods has been seen repeatedly, and they appear to be only 'transient periods'.

The BL Lac object OJ 287 is one of the best observed blazars. It is also the only blazar that shows convincing evidence for periodic variations. By good luck and its suitable location on the sky (very close to the ecliptic, where most asteroid and comet searches are made), its optical photometric measurements date back more than 100 years. The most prominent feature in its historical light curves is the cyclic outbursts with an interval of about 12 years, based on which Sillanpää et al. (1988) proposed a binary black hole (BBH) model for this object and predicted that a new outburst would occur in late 1994.
In order to verify the predicted outburst, an international project was organized to monitor OJ 287 in multiple wavebands. This is the OJ-94 project, which covered the time range from fall 1993 to the beginning of 1997. The predicted outburst was observed, with one peak at 1994.8 and another at 1996.0 (Sillanpää et al. 1996a,b). This result confirms the 12-year periodicity in the optical variability of OJ 287. The OJ-94 project found a double-peaked structure and a quite stable color for the major outburst (Sillanpää et al. 1996a,b). Therefore, the "old" BBH model by Sillanpää et al. (1988) had to be modified, since it cannot explain the double-peaked structure. New models include the hit-and-penetration model by Lehto & Valtonen (1996), the precessing disk/jet model by Katz (1997), and the beaming model by Villata et al. (1998). These new models also require a BBH system in the center of OJ 287, and can well explain the 12-year period, the double-peaked structure of the outburst, and/or the stable color (see the review by Sillanpää et al. 1996b). However, radio and polarization observations (Valtaoja et al. 2000; Pursimo et al. 2000) show that the first peak in late 1994 was a thermal flare lacking a radio counterpart, while the second peak in 1995-96 was apparently a flare dominated by synchrotron radiation and accompanied by a radio outburst. Two previous outbursts, in 1971-73 and 1983-84, also had this property. These results cannot be explained by the three new models mentioned above. By incorporating the radio and polarization results, Valtaoja et al. (2000) suggested a new hit-and-penetration model, in which the secondary black hole hits and penetrates the accretion disk of the primary during the pericenter passage, causing a thermal flare visible only in the optical regime. At the same time, the pericenter passage enhances accretion onto the primary black hole, leading to increased jet flow and the formation of shocks down the jet. These become visible as simultaneous radio-optical synchrotron flares and are identified with the second optical peaks. Later, Liu & Wu (2002) derived detailed parameters of the BBH system and estimated the mass of the primary black hole as 4 × 10^8 M_sun.

Alternatively, in order to explain the lack of a simultaneous radio flare in the late-1994 outburst, Marscher (1998) proposed that the late-1994 outburst comes from the base of the jet, near the central engine, while the simultaneous radio-optical flare in 1995-96 occurred in the radio core region, about a parsec down the jet. Since the base of the jet must be utterly opaque to radio emission, the first flare is not observed in the radio regime. He also mentioned that a "duty cycle" of winding up of the magnetic field at the base of the jet would result in major quasi-periodic injections of enhanced flow into the jet (Ouyed, Pudritz, & Stone 1997) and hence the observed periodic outbursts.

In order to re-verify the 12-year period and to evaluate the various models of OJ 287, more intensive monitoring should be carried out, not only during the outburst phases but also in the quiescent states. We monitored OJ 287 in the first half of 2005, about 1.5 years before the predicted next outburst (Valtonen & Lehto 1997; Kidger 2000; Valtaoja et al. 2000; Liu & Wu 2002). The aims are to record the variability in its quiescent state, to prepare a comparison with the variability in its outburst state, and to place more constraints on its physical model.
Here we present our monitoring results and compare them with those of the 1994-96 outburst observed in the OJ-94 project. Section 2 describes our observations and data reduction procedures. The results are presented in §3, and §4 describes our re-analyses of the OJ-94 data. The physical processes responsible for the variability are discussed in §5, and a summary is given in §6.

OBSERVATIONS AND DATA REDUCTION

Our optical monitoring of OJ 287 was carried out on the 60/90 cm Schmidt telescope located at the Xinglong Station of the National Astronomical Observatories of China (NAOC). A Ford Aerospace 2048 × 2048 CCD camera is mounted at its main focus. The CCD has a pixel size of 15 µm and a field of view of 58′ × 58′, resulting in a resolution of 1.″7 pixel⁻¹. The telescope is equipped with a 15-color intermediate-band photometric system covering a wavelength range from 3000 to 10,000 Å. The telescope and the photometric system are mainly used to carry out the Beijing-Arizona-Taiwan-Connecticut (BATC) survey (Zhou 2005). The monitoring covered the time from 2005 January 29 to April 28, or from JD 2,453,400 to 2,453,489. As a result of weather conditions and observations of other targets, there are actually 27 nights' data in total. We used the BATC e, i, and m filters, whose central wavelengths are 4885, 6685, and 8013 Å, respectively. On most nights, we made photometric measurements in only one cycle of the BATC e, i, and m bands, while on a small fraction of nights, more cycles of exposures were made. The exposure times are mostly 240 s in the BATC e and m bands and 150 s in the i band. The observational log and parameters are presented in Tables 1-3.

The procedures of data reduction include positional calibration, bias subtraction, flat-fielding, extraction of instrumental aperture magnitudes, and flux calibration. The average FWHM of stellar images was about 4.″5 during our monitoring, so the radii of the aperture and the sky annulus were adopted as 5, 7, and 10 pixels (or 8.″5, 12″, and 18″), respectively, during the extraction. We used the comparison stars 4, 10, and 11 of Fiorucci & Tosti (1996) for the flux calibration of OJ 287. Their BATC e, i, and m magnitudes were obtained by observing them together with three BATC standard stars, HD 19445, HD 84937, and BD+17d4708, on a photometric night, and are listed in Table 4. Then, by comparing the instrumental magnitudes of the three comparison stars with their standard BATC magnitudes, the instrumental magnitudes of OJ 287 were calibrated into the BATC e, i, and m magnitudes, and the light curves in the three BATC bands were obtained.

Light Curves

The light curves in the three BATC bands are displayed in Figure 1. Here we plot only the nightly-mean magnitudes, since the amplitudes of variations during all individual nights (if there are multiple cycles of exposures, see §2) are mostly less than 0.2 mag. The variations in the three BATC bands are basically consistent with each other. The overall amplitude was about 1.3 mag during the whole monitoring period, and the object was in a relatively quiescent state, as expected. Two cycles of variations show up in the light curves: the object varied from a minimum on JD 2,453,403 to a maximum on JD 2,453,426, and then went back to a new minimum on JD 2,453,449. With a sharp turnover, the object brightened again and reached a second maximum around JD 2,453,474. After that, the object faded again to a third minimum on JD 2,453,489 (see also Tables 1-3).
The time intervals are 46 and 40 days between the successive minima, and 48 days between the two maxima. The average is 44 days, which could be taken as the period of the variations. One may argue that the minimum on JD 2,453,489 may not represent the actual end of the second cycle. But one can see from Figure 1 that the end parts of the light curves have very steep slopes, which implies that the object might get still fainter, but will not take much time to reach the apparent end of the second "cycle". In other words, it appears probable that JD 2,453,489 is close to, if not actually at, the end of the second cycle.

Period of Variability

Visual inspection of the light curves in Figure 1 indicates a period of 44 days in the variations of OJ 287. In order to derive the period quantitatively, we performed a structure function (SF) analysis on the light curves. The SF is frequently used to search for typical timescales and periods in variability (Simonetti, Cordes, & Heeschen 1985). A characteristic timescale in a light curve, defined as the time interval between a maximum and an adjacent minimum or vice versa, is indicated by a maximum of the SF, whereas a period in the light curve causes a minimum of the SF (Smith et al. 1993; Heidt & Wagner 1996). The SF is usually calculated twice using an interpolation algorithm, first starting from the beginning of the time series and proceeding forwards, and second starting from the end and proceeding backwards. This may result in two slightly different SF curves but provides a rough assessment of the errors caused by the interpolation process.

Figure 2 shows the SF of the light curve in the BATC i band. There is a deep minimum at about 44 days, which confirms the 44-day period estimated by the above visual inspection. Besides the minimum at 44 days, there is a secondary minimum around 34 days on the SF curve. It should reflect the time interval between JD 2,453,426 and JD 2,453,460, the two consecutive maxima in the light curves (while the 44-day period mainly reflects the time intervals between consecutive minima). SF curves in the other two BATC bands also show both these 'periods'. In principle, the time intervals between any two consecutive in-phase points in a periodic light curve should be equal to each other and to the period. Here the difference between the two 'periods' may be the result of the unevenly sampled data (for example, the actual second maximum may lie between JD 2,453,460 and JD 2,453,473, where we have no observations) and the relatively short time coverage of our monitoring. The two periods are expected to converge in a longer and more evenly sampled monitoring program. So here we take their mean, ∼40 days, as the actual period of the variations.

Besides the 40-day period reported here and the prominent 12-year period, Fan et al. (2002) have reported a period of 5.53 years in the optical variability of OJ 287. On shorter timescales, Efimov et al. (2002) observed an apparent period of 36.56 days in the rotation of the position angle of the optical polarization. Small fluctuations in intensity with periods of 10-20 min have also been claimed (Carrasco et al. 1985; de Diego & Kidger 1990) but have been (at best) of a transient nature. Our 40-day period is somewhat consistent with the period reported by Efimov et al. (2002), which will be discussed later.
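To make the SF construction concrete, here is a minimal sketch in Python (all data are synthetic; the forwards/backwards interpolation step described above is omitted):

```python
import numpy as np

def structure_function(t, mag, lag_bins):
    """First-order SF: SF(tau) = <[m(t_j) - m(t_i)]^2> over all pairs
    whose time separation falls into each lag bin."""
    dt  = np.abs(t[:, None] - t[None, :])      # all pairwise lags
    dm2 = (mag[:, None] - mag[None, :]) ** 2   # squared magnitude differences
    iu  = np.triu_indices(len(t), k=1)         # count each pair once
    dt, dm2 = dt[iu], dm2[iu]
    sf = np.full(len(lag_bins) - 1, np.nan)
    for k in range(len(lag_bins) - 1):
        sel = (dt >= lag_bins[k]) & (dt < lag_bins[k + 1])
        if sel.any():
            sf[k] = dm2[sel].mean()
    return sf

# Toy example: a noisy 44-day sinusoid sampled on 27 irregular nights
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 89, 27))
m = 14.5 + 0.5 * np.sin(2 * np.pi * t / 44.0) + rng.normal(0, 0.05, t.size)
sf = structure_function(t, m, np.arange(0, 60, 4.0))
# A period shows up as a local minimum of the SF near a lag of ~44 days.
```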
Spectral Behavior

The spectral behavior involved in the variability of blazars can put strong constraints on their variation mechanisms, as demonstrated by Wu et al. (2005). Optical spectral changes with brightness have been investigated for a number of blazars (e.g., Carini et al. 1992; Ghisellini et al. 1997; Speziali & Natali 1998; Romero, Cellone, & Combi 2000; Villata et al. 2002, 2004; Raiteri et al. 2003; Vagnetti, Trevese, & Nesci 2003; Wu et al. 2005). Most authors have reported a bluer-when-brighter chromatism when the objects show fast flares and an achromatic trend for their long-term variability. OJ 287 was found to have an extremely stable color during the 1994-96 outburst (Sillanpää et al. 1996b). Here we investigate its spectral behavior in its relatively quiescent state. Following Raiteri et al. (2003) and Wu et al. (2005), we use the color index to denote the spectral shape, and calculate the color as e − m and the brightness as (e + m)/2 for the BATC intermediate-band photometric system. As in §3.1 and §3.2, the nightly-mean magnitudes were used to calculate the colors and brightness.

Figure 3 displays the color-brightness dependence. The dashed line is the best fit to the points, with the errors in both coordinates taken into account (Press et al. 1992). The Pearson correlation coefficient is 0.504 and the significance level is 0.017. So there is a significant correlation between the brightness and the color index; in other words, there is a clear bluer-when-brighter chromatism. This is consistent with the bluer-when-brighter trend found by Vagnetti, Trevese, & Nesci (2003), but differs from the extremely stable color during the outburst in 1994-96 (Sillanpää et al. 1996b).

The different spectral behaviors between the quiescent and outburst states may indicate different variation mechanisms. In fact, in the quiescent state, the essentially simultaneous optical and radio small flares (e.g., Pursimo et al. 2000; Valtaoja et al. 2000) and the bluer-when-brighter chromatism found in this work support the hypothesis that shocks propagating along the relativistic jet and interacting with the hydrodynamically turbulent plasma and twisted magnetic field are responsible for the variations in the quiescent state (Wagner & Witzel 1995; Marscher 1998). On the other hand, it is very likely that the bulk increase in brightness during the outburst state is the result of the impact of the secondary black hole onto the primary accretion disk and the subsequent enhanced accretion (Valtaoja et al. 2000), as mentioned in §1.
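A minimal sketch of the color-brightness test described above (synthetic nightly means with an injected trend; the error-weighted fit of Press et al. 1992 is replaced by an ordinary least-squares line for brevity):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Toy nightly means: brightness (e + m)/2 varying by ~1.3 mag, with a mild
# bluer-when-brighter trend injected into the color e - m for illustration.
bright = 14.0 + rng.uniform(0, 1.3, 27)                            # (e + m)/2
color  = 0.55 + 0.1 * (bright - 14.6) + rng.normal(0, 0.03, 27)    # e - m

r, p = pearsonr(bright, color)
slope, intercept = np.polyfit(bright, color, 1)
print(f"Pearson r = {r:.3f}, p = {p:.3g}, slope = {slope:.3f}")
# A positive slope of (e - m) against magnitude means bluer when brighter:
# a smaller magnitude is brighter, and a smaller e - m is bluer.
```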
THE 1994-96 OUTBURST REVISITED

After deriving a possible period of 40 days and obtaining the properties of the variability of OJ 287 in its quiescent state, we then searched for further evidence of the period and compared these properties with those in the outburst state. The predicted next outburst is in 2006 (Valtonen & Lehto 1997; Kidger 2000; Valtaoja et al. 2000; Liu & Wu 2002), so we first looked into the data of the outburst in 1994-96, taken from the archive of the OJ-94 project. From a visual inspection of the optical light curves of the outburst during 1994.7-1995.5 (e.g., Fig. 1 in Sillanpää et al. 1996b), we found that some small-amplitude flares occurred at intervals of about 40 days, overlaid on the prominent outburst. This is in excellent agreement with the 40-day period found in our monitoring program. We then analyzed the data in detail.

Our data analyses focused on the Johnson and Cousins V, R, and I bands, which are the most densely sampled wavebands in the OJ-94 project. We first analyzed the light curves from 1994.7 to 1995.5. In order to show the small flares more clearly, we first applied a Fast Fourier Transform (FFT) smoothing to the light curves. To obtain better smoothing results, the light curves were truncated at both ends, where the sampling is very sparse. The smoothed light curves were then subtracted from the original ones, and the 'residual light curves (or variations)' were obtained. The procedure is illustrated in Figure 4. The large panels display the original light curves (pluses) and the smoothed ones (solid lines), while the small panels show the residual light curves. Here we carried out 240-, 100-, and 140-point FFT smoothings of the original V-, R-, and I-band light curves, respectively. In all three small panels, flares can be seen clearly around JD 2,449,638, 2,449,670, 2,449,717, 2,449,752, 2,449,790, and 2,449,832. Except for the first flare, all consecutive flares have intervals of about 40 days. The mean variation amplitude of the flares is about 1.0 mag, similar to that during the quiescent state (see §3.1). Also notable is that there seem to be some sub-flares between the major flares mentioned above, i.e., at around JD 2,449,655, 2,449,700, 2,449,735, 2,449,775, and 2,449,817. They are weaker but somewhat broader at their peaks than the major flares. The time intervals between them are also about 40 days, although the intervals between them and the neighboring major flares are, of course, about 20 days.

In order to derive the period quantitatively, the SFs and z-transformed discrete correlation functions (ZDCFs, Alexander 1997) were calculated (in auto-correlation mode for the ZDCFs) for the residual light curves. The results are displayed in Figure 5. All three SF curves have a deep minimum at about 40 days, and all three ZDCF curves show peaks at 40, 80, and 120 days. Both indicate a period of 40 days. That is to say, the SF and ZDCF analyses confirm the 40-day period found by visual inspection, in excellent agreement with the 40-day period reported in §3.2.

We then investigated the spectral behavior of the residual variations. Figure 6 displays the (∆V − ∆R) versus ∆V (left) and (∆V − ∆I) versus ∆V (right) distributions. As in §3, we used the nightly-mean 'residual magnitudes' to denote the 'brightness' and to calculate the 'color'. There are strong bluer-when-brighter chromatisms in both brightness-color diagrams. The dashed lines are the linear fits to the points. The Pearson correlation coefficients are 0.506 and 0.488, respectively, and the significance levels are 8.07 × 10⁻¹¹ and 1.74 × 10⁻⁹, which indicate very strong correlations between the color and brightness. These bluer-when-brighter chromatisms are again in agreement with the color behavior of OJ 287 in its relatively quiescent state.
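The detrending pipeline can be sketched as follows (a simplified stand-in: the real OJ-94 sampling is uneven, so this toy version interpolates onto a daily grid, low-pass filters with an FFT in place of the point-count FFT smoothing used above, and autocorrelates the residuals instead of computing a ZDCF):

```python
import numpy as np

def fft_lowpass(y, keep):
    """Keep only the `keep` lowest frequency components (plus DC)."""
    Y = np.fft.rfft(y)
    Y[keep:] = 0.0
    return np.fft.irfft(Y, n=len(y))

def residual_acf(t, mag, dt=1.0, keep=4):
    grid = np.arange(t.min(), t.max(), dt)
    y = np.interp(grid, t, mag)          # regularize the sampling
    trend = fft_lowpass(y, keep)         # slow outburst envelope
    resid = y - trend                    # 'residual light curve'
    r = resid - resid.mean()
    acf = np.correlate(r, r, mode="full")[len(r) - 1:]
    return grid, resid, acf / acf[0]

# A ~40-day periodicity in the residuals shows up as autocorrelation peaks
# near lags of 40, 80, and 120 days (cf. the ZDCF peaks in Fig. 5).
```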
After investigating the variations of the first peak in late 1994, we then checked the data for the second peak in 1995-96 and those before and after the two peaks. The second peak and the data after it do not show a similar period, while the light curves before the first peak show some signs of quasi-periodic oscillations (QPOs). Figure 7 displays the V-band light curve. Flares can be seen at JD 2,449,250, 2,449,293, 2,449,327, and 2,449,366, with intervals of about 40 days. After that, the light curve is characterized by three strong, sharp flares (at JD 2,449,366, 2,449,415, and 2,449,476-482) separated by three weaker but broader flares (at JD 2,449,386-397, 2,449,440-459, and 2,449,501-514), a pattern very similar to that in the late-1994 outburst. The intervals between the sharp flares are 50-60 days. Because of the unevenly sampled data and the apparently changing intervals, we do not perform a quantitative assessment of this portion of the data, but the light curves may show QPOs.

Physics of the 40-Day Period

That the 40-day period shows up in the variations in both the quiescent and outburst states gives us new insight into the prominent outbursts of OJ 287. It seems that the variation during the outburst phase can be resolved into two components. The first component is the normal blazar variation (i.e., the residual variations obtained in §4; see the smaller panels in Fig. 4). It has a similar amplitude, period, and spectral behavior to the variations in the quiescent state. The second component can be taken as a 'low-frequency modulation' (the solid lines in the larger panels in Fig. 4) of the first component, and may be induced by the interaction of the assumed BBHs in the center of this object (Valtaoja et al. 2000; Liu & Wu 2002).

The variability of blazars can be best explained with the shock-in-jet model (Wagner & Witzel 1995; Marscher 1996), although sometimes geometric (e.g., Wu et al. 2005) or propagation (e.g., Rickett et al. 2001) effects, or some other internal or external factors, may also play a role. In the shock-in-jet model, a twisted relativistic jet originates from the central black hole and contains a hydromagnetically turbulent plasma. It undergoes fluctuations in its energy input, and this causes shock waves to develop and propagate down the jet. Variability occurs when the shocks encounter fluctuations in the density of relativistic electrons, in the magnitude of the magnetic field, and in the orientation of the magnetic field. For periodic variability, one usually turns to geometric origins, either a precessing jet or lighthouse effects (e.g., Camenzind & Krockenberger 1992; Katz 1997; Lainela 1999; Wu et al. 2005). However, periodic variations resulting from geometric effects are likely to have a stable color. The bluer-when-brighter chromatisms of OJ 287 reported in §3.3 and §4 suggest that neither kind of periodic variation presented in this paper is likely to result from geometric effects. Also, the fact that the variations occur in the optical regime and the presence of essentially simultaneous optical-radio small flares (Pursimo et al. 2000; Valtaoja et al. 2000) argue against a propagation origin for them.

In the first section, we mentioned that Marscher (1998) proposed that a "duty cycle" of winding up of the magnetic field at the base of the jet would result in major quasi-periodic injections of enhanced flow into the jet (Ouyed, Pudritz, & Stone 1997). Here we will not discuss this possibility as an explanation of the 12-year period of OJ 287, but we suggest that Marscher's mechanism does provide a good idea for explaining the 40-day periodic variations of this object. In fact, Efimov et al. (2002) observed a 36.56-day periodic rotation of the plane of polarization in OJ 287, which they considered direct evidence for a helical magnetic field structure in the jet of this object. The 36.56-day period is consistent with our roughly 40-day period, and their observations provide strong evidence for the scenario of winding up of the magnetic field at the base of the jet.

Another possibility for explaining the 40-day period may be related to the orbital motion of the accretion disk around the central primary black hole of OJ 287. At the redshift of 0.306, the 40-day period becomes ∼30 days in the rest frame of OJ 287. Adopting a mass of 4 × 10⁸ M_sun for the central primary black hole (Liu & Wu 2002), the 30-day period corresponds to orbital motion at a radius of r ∼ 17 r_S, where r_S is the Schwarzschild radius. This radius may represent the inner radius of the accretion disk and the place from which the relativistic jet originates. (For example, according to recent numerical simulations, jets may originate from several to 100 r_S; see Meier, Koide, & Uchida 2001 and Hawley & Balbus 2002.) Some disk oscillations at this radius may 'propagate' into and be retained by the jet, and result in the observed 40-day period.
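The rest-frame period and the ∼17 r_S radius quoted above can be checked with a few lines (standard constants; a Keplerian circular orbit around a 4 × 10⁸ M_sun point mass is assumed):

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units
day = 86400.0

M = 4e8 * M_sun                  # primary black hole mass (Liu & Wu 2002)
P_obs = 40.0 * day               # observed period
P_rest = P_obs / (1 + 0.306)     # ~30 days in the rest frame of OJ 287

r = (G * M * P_rest**2 / (4 * np.pi**2)) ** (1.0 / 3.0)  # Kepler's third law
r_S = 2 * G * M / c**2                                   # Schwarzschild radius
print(P_rest / day, r / r_S)     # -> ~30.6 days and ~18 r_S,
                                 #    consistent with the ~17 r_S quoted
```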
The 40-day periods both in the quiescent state and in the first peak in late 1994 can be explained by the two scenarios described above. Then why do we not observe the 40-day period in the second peak in 1995-96? According to the models of Valtaoja et al. (2000) and Liu & Wu (2002), the structure of the inner accretion disk is undisturbed in the quiescent state and in the first peak of the major outbursts. However, when the effects of the impact of the secondary black hole onto the primary accretion disk (see §1) propagate to the base of the primary jet (∼17 r_S), the structure of the inner primary accretion disk and the properties (electron density, magnetic field, etc.) of the primary jet are changed, and the accretion rate and hence the jet emission are significantly enhanced. Then either the periodicity is destroyed or the period changes to a new value. So we do not observe the 40-day period during the second peak of the major outburst. After the second peak, the changed structure and properties of the primary accretion disk and jet need some time to recover to their original states (in fact, the second peak is very broad and may extend to 1997; see Fig. 2 in Pursimo et al. 2000), so the 40-day period was not observed during the 1-2 years after the second peak.

Our revisiting of the OJ-94 data can put some constraints on the physical model of OJ 287. In the old hit-and-penetration model by Lehto & Valtonen (1996), the precessing disk/jet model by Katz (1997), and the beaming model by Villata et al. (1998), the two peaks of the major outbursts result from the same physical processes and thus should have nearly the same behaviors. Now the 40-day period shows up in the first peak but not in the second, which gives further evidence for the assumption that the two peaks result from different physical processes. In other words, our results are consistent with the models of Valtaoja et al. (2000) and Liu & Wu (2002), in which the first peak is thermal while the second is dominated by synchrotron radiation.

SUMMARY

During our monitoring of the BL Lac object OJ 287 in the first half of 2005, the object did not show large-amplitude variations and was in a relatively quiescent state. A possible period of 40 days was inferred from its light curves. A bluer-when-brighter chromatism was found in the variations, which is different from the overall spectral behavior during the outburst state.
The different spectral behaviors indicate different variation mechanisms. The optical variability of OJ 287 during the OJ-94 project was revisited, and again a probable 40-day period was discovered. The physics responsible for the 40-day period has been discussed: the period may be related to the helical structure of the magnetic field at the base of the jet, or to the orbital motion close to the central primary black hole.

Apart from the 36.56-day period discovered by Efimov et al. (2002), the 40-day period has not been reported before, even though OJ 287 has been observed for more than 100 years. There are only very sparse photometric measurements before the 1972 outburst (several or even only one measurement per year). After that and until 1993, much denser monitoring was carried out but was still not sufficient to reveal our claimed period of ∼40 days. The OJ-94 project, which aimed at confirming the predicted outburst in 1994 and lasted from 1993 to 1997, provided the best opportunity to find the 40-day period. The reasons that no author has previously derived the 40-day period from the OJ-94 data may be that (a) the focus of the OJ-94 project was on the 12-year period, and (b) the methods used to search for periods in the variations (e.g., SF and ZDCF) cannot give correct results when the prominent overall outburst with its large slope is not subtracted, as demonstrated by Smith et al. (1993) for the case of the SF and illustrated in Figure 8 for the case of the ZDCF.

Our monitoring revealed the 40-day period in the optical variability of OJ 287 in its quiescent state, but the duration covers only two cycles and there are two gaps in the light curves. The late-1994 outburst shows the 40-day period, but the major outbursts in 1971 and 1983 have sampling rates too low to reveal this period. Although our monitoring results and the OJ-94 data do support each other, and suggest that the 40-day period is unlikely to be a transient period, more data in both the quiescent and outburst states are needed to confirm this period. Fortunately, the predicted next outburst of OJ 287 is in 2006 (Valtonen & Lehto 1997; Kidger 2000; Valtaoja et al. 2000; Liu & Wu 2002). A large number of telescopes around the world will surely monitor this object around the predicted time, and we will keep monitoring it intensively in order to confirm the 40-day period in both the quiescent and outburst states.

[Fig. 8 caption: ZDCF of the light curves of Fig. 4 without subtraction of the overall outburst. Compared to the right panels of Fig. 5, the maxima at 40, 80, and 120 days are not so evident, so one can hardly extract the 40-day period from this figure.]

[Table notes: multiple exposures were made around JD 2,453,473-475, so the data for those days are not presented individually in Tables 1-3 but as one point (the nightly mean) per night in each light curve in Fig. 1. The observation dates and times are in Universal Time; the same applies to Tables 2 and 3.]
Detection of cocaine and amphetamine regulated transcript in the abomasum of slaughtered bulls with different daily body weight gains

Despite numerous published studies, the relationship between the amount of secreted cocaine and amphetamine regulated transcript and the daily body weight gain has still not been well explained. The aim of this study was to determine the incidence of cocaine and amphetamine regulated transcript in the outlet wall of the abomasum of bulls with different daily weight gains. The study was performed on 15 bulls, breed crosses of local Black-and-White milk cattle and Limousin bulls. The animals were slaughtered at a mean age of 543-549 days and a body weight of 441.0-491.4 kg. Fragments of the outlet wall of the abomasum were sampled for analyses during routine slaughter. Immunohistochemical assays showed that bulls with large daily weight gains (905 g/day) had significantly (P ≤ 0.05) fewer positive structures of cocaine and amphetamine regulated transcript (by 1.65 on average) than slowly growing bulls (803 g/day). This tendency was also observed in the distribution of cocaine and amphetamine regulated transcript in particular layers of the abomasum wall. The most numerous positive structures of cocaine and amphetamine regulated transcript were found in the nerve fibres of the muscularis and in the muscular plexus, whereas they were evidently less numerous in the submucous plexus. Our results suggest that the number of cocaine and amphetamine regulated transcript immunopositive structures is associated with the growth intensity of the animals, and the frequent occurrence of this neuropeptide in the nerve fibres and the muscular plexus supports its role in the control of stomach emptying.

Stomach, abomasum wall, bulls, immunohistochemistry

A functional link that integrates the alimentary tract and the encephalon is represented by numerous compounds, peptides and neurotransmitters. These substances work antagonistically; some of them are orexigenic, while others are anorexigenic (Vicentic and Jones 2007). The least known representative of the orexia control agents is the peptide cocaine and amphetamine regulated transcript, CART (Spiess et al. 1981; Douglass et al. 1995), which is also located in the alimentary tract, the intestine and the stomach (Asnicar et al. 2001; Kasacka et al. 2012) and in the pancreas (Arciszewski et al. 2008). Earlier studies connected with the identification of CART in the alimentary tract showed its role in the secretion of gastric juice, stomach emptying and intensified colon peristalsis (Okumura et al. 2000). Only a few of them provide information on the presence of the peptide in question in the alimentary tract of the pig, sheep, cattle and humans (Arciszewski et al. 2009; Kasacka et al. 2012; Wojtkiewicz et al. 2012). The data for cattle are incomplete due to considerable difficulty in obtaining experimental material. The existing papers discuss CART location in the nervous system or CART expression in various cattle breeds (Zhang et al. 2008). Despite the available knowledge on the CART peptide, precise answers to the question of its role in particular organs, including the stomach, are missing.

The intake of large volumes of feed by cattle must be associated with an efficacious use of the nutrients contained therein. This principally concerns those compounds that can be broken down through bacteria-aided digestion, such as proteins or methionine and lysine, a deficiency of which may limit production indicators of cattle (Sýkora et al. 2007).
Effective digestibility is also associated with the performance of the structures in the abdominal wall of ruminants. Julius et al. (2011) have shown that the administration of feeds with a high concentration of nutrients in the dry matter may lead to changes that disturb this process. The effect of these stimuli is connected with the behavioural aspect of feeding, specifically with the control and metabolism of digestion. In the case of cattle, due to the specificity of digestion, feeding and body weight gains highly affect production. The aim of this study is the identification of CART distribution in the abomasum outlet wall of bulls with different growth rates.

Materials and Methods
Animals

The material consisted of 15 bulls, offspring of Polish Lowland Black-and-White cows mated with Limousin bulls. The bulls were kept under similar conditions but came from 3 farms. Fattening started when the calves weighed 150-180 kg (at approximately 6 months). In the autumn-winter period, the animals were fed hay ad libitum and corn silage (approximately 10 kg/24 h). In the spring-summer period, green fodder and straw were provided ad libitum, and a compound cereal meal was used as a supplement to the main diet (approximately 1.0 kg/24 h throughout the fattening).

Before slaughter, the daily weight gain (GI, g/day) was computed for all the animals. The animals selected for the analyses were classified into two groups by daily weight gain (the limit value was GI = 850 g/day). The bulls were slaughtered at a mean age of 543 ± 32 days; the first group, characterised by a high GI, at 549 ± 21 days (n = 8), and the second group, with a low GI, at 537 ± 27 days (n = 7). The body weights of the slaughtered animals were 491.4 ± 28 kg and 441.0 ± 21 kg, respectively.

Sampling

The experimental material was sampled during the routine carcass dissection procedure applied at meat plants. Within 30 min post mortem, the stomachs of the analysed bulls were prepared. Fragments of the outlet area of the abomasum were sampled (each time from the same spot). The tissue was immediately fixed in 4% buffered formalin for 72 h at room temperature. Subsequently, paraffin blocks were made based on standard procedures.

Microscopy analysis

The samples were analysed and photographed with an Olympus BX41 light microscope with a video channel connected to a PC equipped with the Cell-B image analysis program (Olympus Corp., Tokyo, Japan). When recording the microscopic images, particular attention was paid to the distribution of the structures showing immunoreactivity to the analysed antigen. Morphometric analysis was applied to diffuse neuroendocrine system cells that produced dark brown staining. Immunopositive neuroendocrine cells were counted in 10 randomly selected fields of vision (0.785 mm²) at 200× magnification (20× lens and 10× eyepiece). A semi-quantitative evaluation of the density of CART-immunoreactive nerve fibres on a structure scale was used for the determination of the density of the remaining structures, where: (0) none found, (1) single, (2) few, (3) moderate number and (4) dense.
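As a worked example of the morphometry described above (the counts below are invented for illustration), a mean count per 0.785 mm² field converts to a density per mm² as follows:

```python
import numpy as np

FIELD_AREA_MM2 = 0.785          # one field of vision at 200x magnification
counts = np.array([3, 2, 4, 3, 2, 3, 5, 2, 3, 2])  # cells in 10 random fields

mean_per_field = counts.mean()
density = mean_per_field / FIELD_AREA_MM2   # cells per mm^2
print(f"{mean_per_field:.2f} cells/field -> {density:.2f} cells/mm^2")
```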
Immunohistochemistry

In the immunohistochemical study, the EnVision technique was used according to Herman and Elfont (1991). The paraffin blocks were cut into 4 μm sections, mounted onto Superfrost Plus slides (Menzel, Braunschweig, Germany) and dried overnight at 37 °C followed by 1 h at 60 °C. Immunohistochemistry was performed using the EnVision (+) HRP Rabbit Detection System (No. K4011, Dako, Glostrup, Denmark). Sections were deparaffinized in xylene and rehydrated in decreasing concentrations of pure ethanol. For antigen retrieval, the sections were subjected to pretreatment in a pressure chamber, heating for 1 min at 21 psi (1 psi equates to 6.895 kPa; conversion factor provided by the United Kingdom National Physical Laboratory) at 125 °C, using Target Retrieval Solution, pH 9.0 (No. S2367). After being cooled to room temperature, sections were incubated with Peroxidase Blocking Reagent for 10 min to block endogenous peroxidase activity. The antibody against cocaine and amphetamine regulated transcript (Phoenix Pharmaceuticals, Burlingame, CA, USA, code H 003-61) was diluted (1:5,000) in antibody diluent (No. S 0809).

The sections were incubated overnight at 4 °C in a humidified chamber with the diluted antibody, followed by incubation with Labelled Polymer for 1 h. Bound antibodies were visualized by 1-min incubation with liquid 3,3'-diaminobenzidine substrate chromogen. The sections were finally counterstained in haematoxylin QS (H-3404, Vector), mounted, and evaluated under a light microscope. Appropriate washing with Wash Buffer S 3006 (Dako) was performed between each step. The specificity test performed for the cocaine and amphetamine regulated transcript (CART) antibody included a negative control, in which the antibodies were replaced by normal rabbit serum (Vector Laboratories, Burlingame, USA) at the respective dilution, and a positive control done on the specific tissue recommended by the producer; for bovine CART this is the human paraventricular nucleus of the hypothalamus.

Statistical analysis

Statistical analysis involved the determination of the homogeneity of the daily weight gains of the animals (GI, g/day) in each group. Levene's variance homogeneity test was used for this purpose; the F value in this test amounted to 0.241 and P = 0.082. The further statistical procedure involved the calculation of the following indicators for the analysed groups (GI, g/day) and the three abomasum wall structures: arithmetic mean (x̄), extreme values (min and max) and standard deviation (SD). The study of the results included variance analysis of the animal groups with different growth intensity and of the three abomasum structures in question. Differences between the mean values were investigated with Duncan's test at P ≤ 0.05.
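A minimal sketch of the tests named above, using SciPy (the gain and count values are placeholders; Duncan's multiple range test is not available in SciPy, so a plain one-way ANOVA stands in for the between-group comparison):

```python
import numpy as np
from scipy.stats import levene, f_oneway

rng = np.random.default_rng(3)
# Toy daily weight gains (g/day) for the two groups (n = 8 and n = 7)
high_gi = rng.normal(905, 35, 8)
low_gi  = rng.normal(803, 35, 7)

# Homogeneity of variances between the groups (cf. F = 0.241, P = 0.082)
F_lev, p_lev = levene(high_gi, low_gi)

# One-way ANOVA on CART-IR counts per group (placeholder counts)
cart_high = rng.normal(1.25, 0.4, 8)
cart_low  = rng.normal(2.90, 0.4, 7)
F, p = f_oneway(cart_high, cart_low)
print(f"Levene: F={F_lev:.3f} p={p_lev:.3f}; ANOVA: F={F:.2f} p={p:.4f}")
```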
Results

Table 1 shows the daily body weight gains (g/day) and the number of cocaine and amphetamine regulated transcript immunopositive structures (CART-IR) in the abomasal outlet area of bulls with different growth intensities. As evidenced by the data in the table, the most numerous structures with a positive immunohistochemical reaction, indicating the presence of the neuropeptide, were identified in the bulls from the second group, with smaller daily gains. The bulls with a growth rate of 803 g/day had a mean number of CART-IR neuroendocrine cells of 2.90, which was on average 1.65 more than in the group with greater gains, at a mean level of 905 g/day.

The observed diffuse neuroendocrine system cells had various shapes, ranging from pyramidal through oval to polygonal. The cells were dispersed as single units in the entire abomasum wall or formed clusters composed of several (3-5) cells.

Based on the obtained microscopic images of the abomasal wall (Plate VI, Fig. 1), it was also shown that, on average, the most CART-containing structures were present in the nerve fibres innervating the muscularis and in the myenteric plexus (Table 1). Differences between these two abomasum wall structures and the mean number of CART-IR structures found in Meissner's submucous plexus (1.34 on average) amounted to a mean of 1.69 and were significant (P ≤ 0.05).

Discussion

It is a known fact that one of the most important tasks for each living organism is the maintenance of homeostasis, i.e. a state of relative balance in relation to changing exterior and interior conditions. Through evolution, numerous control mechanisms have developed that transmit the information indispensable for the maintenance of the energy balance in the organism. One of these mechanisms is represented by the food intake control system, which includes nerve centres located in the hypothalamus that act by the agency of cerebral-gastric-intestinal peptides. The cerebral-gastric-intestinal axis plays an essential role in the appetite control mechanism, both at the level of the central nervous system and at the level of the peripheral effectors, i.e. the alimentary tract organs.

The knowledge of specific peptides in the group of neuropeptides that affect the functioning of the alimentary duct has been vastly expanded over the last several decades. The relationship between cocaine and amphetamine administration and a considerable increase in specific mRNA expression in the rat corpus striatum, shown by Douglass et al. (1995), made it possible to more precisely explore the role these peptides play in the physiology of the animal gastro-intestinal tract.

The current knowledge of CART incidence in ruminants is incomplete, since it only refers to sheep and cattle (Zhang et al. 2008; Arciszewski et al. 2009). In the case of cattle, the studies performed in recent years are fragmentary and concern the expression of this neuropeptide in the central nervous system of different cattle breeds (Zhang et al. 2008). Our results have shown a relationship between the amount of secreted CART and the daily body weight gains of the animals. The abomasal walls of the bulls whose mean gains amounted to 906 g/day were found to contain distinctly fewer CART-positive structures. Unfortunately, there are no publications dealing with this issue in the available literature.

As regards the assessment of CART-IR structures in the abomasal outlet wall, in the muscularis and the muscular plexus the dense web of CART-positive nerve fibres was similar to those found in the intestinal system of rodents (Ellis and Mawe 2003), pigs (Gonkowski et al. 2009; Wojtkiewicz et al. 2012) and humans (Kasacka et al. 2012). Despite the fact that these results stem from studies of monogastric organisms, this suggests that digestion control mechanisms have a similar origin.

A lower number of CART-IR structures in the submucous plexus, albeit in the large intestine of pigs, was also observed by Gonkowski et al. (2009).
A similar interrelation was identified in the case of numerous CART-IR positive structures within the gastrointestinal tract of the rat (Ekblad et al. 2003) and in the intestine of the guinea pig and pig (Ellis and Mawe 2003; Wojtkiewicz et al. 2012). They found that the cells of the diffuse cell system also turned out to be a source of the CART peptide. Our results proved convergent with those of Ekblad et al. (2003), whereas Wierup et al. (2007) did not identify the presence of CART in pig neuroendocrine cells.

Despite the available information on CART distribution and active mechanisms in the brain, alimentary tract and endocrine glands of many animal species, its participation in stomach control processes remains unclear and requires further research.

The results of our study showed a relationship between the number of CART-positive structures and the daily body weight gains of the analysed animals. The most numerous CART-containing structures were identified in the group of bulls characterised by low daily weight gains. An already well known function of CART in the gastrointestinal tract is its strong neuroprotective effect, which consists in its co-occurrence with numerous biologically active substances. Cocaine and amphetamine regulated transcript usually co-acts with such compounds as calbindin and nitric oxide synthase (Ellis and Mawe 2003), calcitonin gene related peptide (Wierup et al. 2007) and the vasoactive intestinal peptide (Wojtkiewicz et al. 2012).

In light of the obtained results, it must be stated that there is a relationship between the amount of secreted CART peptide and the daily body weight gain. The location of, and differences in the number of, CART-LI structures observed in the abomasal wall indicate its role in the control of stomach emptying.

Fig. 1. Cattle pylorus immunostained for cocaine and amphetamine regulated transcript immunopositive structures. A - photomicrograph focused on cocaine and amphetamine regulated transcript immunoreactive nerve fibres within the muscle layers; B - muscular plexus and nerve cell bodies.

Table 1. Daily body weight gains (g/day) and the number of cocaine and amphetamine regulated transcript immunopositive structures (CART-IR) in the abomasal outlet area of bulls with different growth intensities. Data are expressed as x̄ ± SD; differences between the mean values: a,b,c P ≤ 0.05 in columns; 1,2,3 P ≤ 0.05 in rows.
The Preparation of Some Benzothiazole Polymers and the Study of their Electrical Conductivity Properties

When conjugated polymers are doped with electron donor or acceptor dopants, their electrical conductivity increases markedly, to the order of 10 Ω⁻¹·cm⁻¹. Doping may give the polymers n- or p-type semiconductor characteristics according to the type of dopant used. Within the frame of this work, four types of conjugated polymers with benzothiazole as the major moiety in their backbone were prepared. The prepared monomers and polymers were characterized by FTIR spectroscopy. Elemental analysis (CHN) of the polymers confirms their chemical structure, while DSC thermal analysis gives their Tg. The polymers were doped with two types of dopants, iodine and sodium iodide. The electrical conductivity of the doped polymers was measured with a three-probe cell. The results show an increase in the electrical conductivity with dopant concentration up to a certain level. The activation energy of the electrical conduction process was also studied by measuring the electrical conductivity at different temperatures. From the magnitude of the activation energy, we conclude that chain flexibility is the dominant factor influencing the electrical conductivity. Hall effect and hot probe measurements reveal that the polymer can be considered n- or p-type according to the type of doping: doping with iodine produced an n-type material, while doping with sodium iodide produced the p-type.

Introduction:

The field of electrically conducting polymers has developed very rapidly since the discovery of the intrinsically conducting organic polymers. (1) The electrical conductivity of organic conjugated polymers is increased by many orders of magnitude when they are doped with oxidizing or reducing agents, (2) whereby the doped polymers can be seen as organic metals or semiconductors. The chemical structure of a conjugated polymer is composed of a π-electron system extending over a long chain of monomer units. (3) The extended π system in the conjugated polymer makes it susceptible to oxidation and reduction, with the accompanying electrical conductivity. (4) Through controlling the oxidation and reduction processes, the electrical and optical properties can be systematically varied. (5) The undoped conjugated polymers are established as intrinsic semiconductors. The band gap between the HOMO and LUMO energy levels of these polymers depends on the chemical constitution of their backbone and on the nature of the substituents on the main chain. (6) Therefore, the electrical and optical properties can be varied over a very large range by appropriate functionalization of the polymer chains. According to this assumption, many types of conjugated polymers with different backbone constitutions and substituents have been prepared. The electrical conductivity of the prepared polymers can be controlled via the chemical structure and the doping process, whereby the conductivity arises from the delocalization of the valence electrons along the conjugated chain. (6) Copolymers based on benzothiadiazole have been synthesized since 1961. (7) They have been used to construct high-performance electric devices. (8) The π-extended benzothiazole has good planarity and high electron affinity originating from its sulfur atom. (9)
The solubility problem of these polymers is the main reason for the limitation of their use in many important applications. (10) Different long-chain aliphatic side groups have been substituted onto the main chain in order to obtain solution-processable semiconducting polymers. (11)

Materials:

Table (1) shows all the chemicals, which were used as received without any purification except phenylene diamine, which was recrystallized from ethanol before use.

Equipment:

FTIR spectra were recorded using a BRUKER FTIR spectrophotometer. The elemental analyses were carried out using a EuroEA3000 (Italy) elemental analyzer. DSC thermal analyses were performed using a SHIMADZU DSC-60 differential scanning calorimeter.

2-Mercaptobenzothiazole (A1): 0.025 mole (2.7 g) of freshly purified phenylene diamine (m.p. 140-142 °C) was mixed with 0.025 mole (0.8 g) of sulfur, 2.5 ml abs. ethanol and 1.5 ml of carbon disulfide in an autoclave. The mixture was heated at 180 °C for a period of 6 h. After cooling to room temperature, the mixture was dissolved in 10 ml of 10% sodium hydroxide and filtered, and the filtrate was neutralized with 10% HCl. The precipitate was washed with ethanol and dilute HCl to give the pure brown product (m.p. 245 °C).

Thioether of 2-mercaptobenzothiazole (M1): 1.8 g of A1 was dissolved in 10 ml DMSO, and 0.4 g of sodium hydroxide in 10 ml ethanol was added. The mixture was refluxed until a clear homogeneous solution with a blue-green color was obtained. 0.3 ml of methylene chloride (CH2Cl2) was added dropwise to the clear solution with continuous reflux for a period of 2 h, upon which the color changed to deep yellow. After cooling the solution, 50 ml of cold water was added. The precipitated M1 was filtered, washed with water and dried under vacuum.

2,2'-bis-Mercaptobenzothiazole (A2): The same procedure was used as in the preparation of A1, except that the amounts of sulfur and carbon disulfide were doubled. The product is yellow and has a melting point of 253 °C.

Terephthaloyl dichloride: 4 g of terephthalic acid was refluxed with 30 ml thionyl chloride in the presence of a few drops of DMF for a period of 1 h. The excess thionyl chloride was distilled off, and the residual terephthaloyl chloride was recrystallized from n-hexane.

Polyamides of M1 and M2 {P1 & P2}: 0.01 mole of the monomer terephthaloyl dichloride was dissolved in 10 ml dry dichloromethane and introduced into a three-necked flask provided with a nitrogen inlet tube. 0.01 mole of monomer M1 or M2 was dissolved in 10 ml of 1:1 THF/pyridine mixed solvent and dripped slowly into the reaction flask under a nitrogen atmosphere. The reaction mixture was stirred for 24 h at room temperature. The precipitated polymer was poured into a fivefold excess of methanol, filtered and dried under vacuum.

Polymerization of A2 {P3 & P4}: The P3 polymer was prepared by dissolving 1.2 g of A2 in an appropriate amount of DMSO. 0.4 g of sodium hydroxide dissolved in absolute methanol was added; the mixture was heated to about 80 °C, and then 0.23 ml of methylene chloride was added to it. The whole mixture was heated for about 3 h. The produced polymer was precipitated from water, filtered, washed with water and dried under vacuum. The P4 polymer was prepared by polymerizing A2 with dibromobutane following the above method, whereby 0.5 ml of dibromobutane reacts with 1.2 g of A2.

Doping of polymers: Two types of doping processes were carried out in this work. The first type is doping by mixing, whereby the polymers were mixed thoroughly with different ratios of the dopant sodium iodide, NaI.
The second is vapour-phase doping, in which a polymer disc is exposed to iodine vapour in a vacuum tube for different periods of time (12).

Two probes (hot and cold) were used to identify the charge-carrier type (p-type or n-type). The sharp ends of the probes are attached to the surface of the polymer disc while the other ends are connected to a galvanometer, as in figure (1). If the polymer is n-type, the liberated electrons accumulate at the hot probe and the galvanometer deflects to the negative side; in the p-type case, the galvanometer deflects to the positive side. The results were confirmed by measuring the Hall factor (13).

Electrical conductivity measurements: Discs of 2 cm diameter and about 0.5 mm thickness were pressed from the pure polymers under 3-4 ton/cm2. Volume electrical conductivity was measured using the standard 3-probe DC technique according to the ASTM method (14).

Results and Discussion

Synthetic routes — monomer preparation: Scheme (1) illustrates the synthetic route for the preparation of the monomers; their properties are given in table (2). Scheme (1): Monomers prepared from p-phenylene diamine.

Synthesis of the polymers: All four prepared polymers contain the benzothiazole moiety within their backbone. Scheme (2) illustrates the polymerization reactions. Scheme (2): Route of polymer synthesis.

The polymers were characterized by IR spectroscopy (figure 5), and table (4) gives the frequencies of the characteristic functional groups within the polymer chains. In comparison with the spectra of the related monomers, the NH band of monomers M1 and M2 has disappeared and a new signal is observed at about 1680 cm-1, attributable to the carbonyl group, confirming the formation of the polyamides P1 and P2. Figure (4) shows the disappearance of the SH group, which gives a signal at 3255 cm-1 in the monomer, and the appearance of a new band around 3000 cm-1 attributable to the aliphatic moieties. The elemental analyses (CHN) of the prepared polymers (table 5) are consistent with the proposed chemical structures.

Thermal analysis of the polymers: The DSC scans of the prepared polymers are shown in figure 6. Polymers P3 and P4 show glass transition temperatures in the ranges 142-183 °C and 135-168 °C respectively, while the highly rigid polymers P1 and P2 have glass transition temperatures in the ranges 283-320 °C and 233-253 °C respectively. This can be explained by regarding P1 and P2 as copolymers of amide and thioether units, whereas P3 and P4 are homo-thioether polymers and are more flexible than P1 and P2; the amide moiety hardens the polymer through hydrogen bonding.

The electrical conductivity of conjugated polymers can be enhanced by doping with donor or acceptor dopants. The values of the volume electrical conductivity of the doped polymers are given in tables (6 & 7) and figures (7 & 8). The conductivity increases systematically with increasing dopant ratio, but the data show that beyond a certain doping ratio the conductivity stops increasing or even decreases: at this point the percolation threshold has been reached.
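As a side note on the measurement itself, the volume conductivities reported in tables (6 & 7) can be recovered from the measured DC resistance and the disc geometry given above. The following is a minimal sketch (in Python), assuming a simplified parallel-plate geometry in which the current passes through the disc thickness; the resistance value is a hypothetical placeholder, since no raw resistance readings are quoted in this work:

import math

def volume_conductivity(resistance_ohm, diameter_cm, thickness_cm):
    # Volume conductivity (S/cm) of a disc measured through its thickness:
    # sigma = t / (R * A), where A is the face area of the disc.
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return thickness_cm / (resistance_ohm * area_cm2)

# Disc geometry from the text: 2 cm diameter, about 0.5 mm (0.05 cm) thick.
# The resistance below is an invented placeholder, not a measured datum.
sigma = volume_conductivity(resistance_ohm=1.0e9, diameter_cm=2.0, thickness_cm=0.05)
print(f"sigma = {sigma:.3e} S/cm")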
Beyond this percolation point, the dopant may begin to build crystalline domains within the polymer chains, which decreases the conductivity (15). Several factors clearly affect the electrical conductivity of the polymers, the constitution of the polymer chain being the main one. Interrupting the polymer chain with aliphatic moieties can have two opposing effects: it increases the flexibility of the chain, but it decreases the delocalization of the π electrons along the chain. The first effect raises the conductivity while the second lowers it (16). The first effect is observed in a comparison between P1 and P2 doped with iodine or NaI, while the second dominates in a comparison between P3 and P4 doped with NaI.

Types of dopant: Two dopants were applied to all the prepared polymers. The first is iodine, which is known as an electron donor; the second is sodium iodide, whose sodium ion can be considered an electron acceptor. The resulting polymers are n-type and p-type respectively. The hot-and-cold-probe test showed that the iodine-doped polymers are n-type while the sodium-iodide-doped polymers are p-type. The results were confirmed experimentally by measuring the Hall coefficient RH from (17): μ = RH·σ, where μ is the mobility of the carriers (electrons and holes) and σ is the conductivity of the polymer.

The increase of electrical conductivity with temperature can be characterized by the activation energy of the process (a sketch of this Arrhenius analysis is given after the conclusions). One explanation is a gradual rise in the population of electrons in the conduction band (excited states) (18), which requires an activation energy of 1.5-2 eV·mol-1. On the other hand, when the rise in conductivity with temperature comes from increasing flexibility of the polymer chains, the activation energy is lower (19). Table (9) gives the activation energies of the polymers doped with sodium iodide. The low values of the activation energy indicate that the conductivity is only weakly sensitive to temperature; from this magnitude we can say that chain flexibility is the dominant factor in the conduction mechanism.

Conclusions: Electrical conductivity is a significant characteristic of polymers, one that guides how they are used. The chemical structure of the polymer is the main factor determining its physical properties and controlling its electrical conductivity. The conductivity can be enhanced by doping with electron-donor or electron-acceptor dopants, and the same polymer can be made n- or p-type by choosing an electron-donor or an electron-acceptor dopant.
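Since the activation energies in table (9) come from conductivity measured at several temperatures, the underlying Arrhenius analysis can be sketched as follows. This assumes the usual model sigma(T) = sigma0·exp(−Ea/(kB·T)) and fits ln(sigma) against 1/T; the temperature and conductivity values are invented placeholders, not data from this work:

import numpy as np

# Arrhenius model: sigma(T) = sigma0 * exp(-Ea / (k_B * T)),
# so ln(sigma) = ln(sigma0) - (Ea / k_B) * (1 / T): a straight line in 1/T.
K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

# Hypothetical (temperature / K, conductivity / S cm^-1) pairs.
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
sigma = np.array([1.0e-9, 2.3e-9, 4.8e-9, 9.5e-9, 1.8e-8])

slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_eV = -slope * K_B_EV  # activation energy in eV
print(f"Ea ~ {Ea_eV:.3f} eV, sigma0 ~ {np.exp(intercept):.2e} S/cm")

The same script could be extended with the Hall relation μ = RH·σ quoted above to estimate carrier mobility from a measured Hall coefficient.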
Improved Accuracy of Sentiment Analysis of Movie Reviews Using a Support Vector Machine Based on Information Gain

The quality of a movie can be judged from the opinions or reviews of previous audiences. Reviews are classified into positive and negative opinions. One of the data mining algorithms most frequently used in research is the Support Vector Machine (SVM), because it works well as a text classification method; however, it is very sensitive to the selection of features. The Information Gain (IG) method, used for feature selection, can solve this problem, giving faster solutions and more stable convergence. Testing was carried out on two movie-review datasets, Cornell and Stanford. On the Cornell dataset, the SVM algorithm produced an accuracy of 83.05%, while the SVM based on Information Gain reached 85.65%, an improvement of 2.6%. On the Stanford dataset, the SVM algorithm yielded 86.46%, while the SVM based on Information Gain reached 86.62%, an improvement of 0.16%. The SVM based on Information Gain thus proved to give more accurate results for movie-review sentiment analysis.

Introduction: Language is a powerful tool for communicating and conveying information; it is also a means of expressing emotions and sentiments. Sentiment analysis is the process of determining whether the content of a text dataset (documents, sentences, paragraphs, etc.) is positive, negative or neutral [1]. Sentiment analysis of user opinions on products, political reviews and movie reviews has become increasingly popular. Through social media and IMDb, producers and moviemakers can learn the reviews, views and thoughts of their viewers [2]. Many sites provide product reviews that reflect user opinion. One example is the Internet Movie Database (IMDb), a website devoted to movies and movie production. The information provided by IMDb is very complete: the actors and actresses who played in the movie, a brief synopsis, links to trailers, release dates for several countries, and reviews from other users. When someone wants to buy or watch a movie, other people's comments and movie ratings usually affect their buying behavior. Several classification algorithms are widely used for review sentiment analysis, including the Support Vector Machine, Naïve Bayes, and K-Nearest Neighbour [3]. Previous work on the sentiment classification of online reviews includes: a comparison of machine learning methods for the classification of movie-review sentiment [2]; sentiment analysis of movie-review opinions using the SVM algorithm and PSO [4]; sentiment classification of online reviews of travel destinations using the NB and SVM algorithms and a character-based n-gram model [5]; sentiment analysis of movie reviews and some Amazon.com products using SVM and a neural network [6]; and classification of Cantonese restaurant reviews on the internet using the NB and SVM algorithms [7]. One of the most frequently used algorithms for data classification is SVM. SVM classifies by analyzing data and recognizing patterns and is a supervised learning method [4]. SVM has a powerful mechanism for minimizing risk, the regularized linear classification method [8].
Another advantage of the SVM algorithm is its ability to find a separating hyperplane that maximizes the margin between two different classes [9]. However, SVM is weak in the selection of appropriate parameters or features [4], and feature selection in SVM greatly influences classification accuracy [10]. Feature selection is important in text classification and strongly affects classification performance; to enhance its effect, many studies add optimization algorithms to the feature selection method. In a comparison of feature selection algorithms by Chandani [11] — Information Gain, Chi-Square, Forward Selection and Backward Elimination — Information Gain emerged as the best. Thus, an SVM classifier with IG as feature selection is applied here to sentiment analysis of movie reviews. The problem formulated in this research is: can movie-review sentiment analysis be improved using the SVM algorithm with IG-based feature selection? The purpose of this study is accordingly to improve movie-review sentiment analysis using the SVM algorithm with IG feature selection, to support users in making decisions about movie quality. The benefits of this research are: (1) to make it easier for users to make decisions in determining movie quality; and (2) to contribute to the development of theory on sentiment analysis of reviews using an SVM classifier with IG feature selection to improve performance.

Method: To carry out the research consistently, the authors constructed a framework of thought and a proposed method to serve as a reference. The problem addressed is that SVM is weak in the selection of appropriate parameters or features, which lowers classification accuracy. Two movie-review datasets were used: the Cornell dataset, consisting of 1,000 positive and 1,000 negative reviews, and the Stanford dataset, consisting of 12,500 positive and 12,500 negative reviews. Preprocessing consists of tokenization, stopword removal and stemming. Information Gain is used for feature selection. Testing uses 10-fold cross-validation; a confusion matrix is used to measure accuracy, and the AUC is measured from the ROC curve. The experiments were run in the Weka 3.8 application. The framework of thought is shown in detail in Figure 1 below. The proposed method applies the IG feature selection method to improve the accuracy of the SVM classifier; SVM is used because it is very popular and functions well as a text classification method. The proposed model is shown in detail in Figure 2 below.

Results and Discussion: The classification process uses probability values to assign a sentence to the positive or negative class. A sentence is classified as positive if its probability for the positive class is greater than for the negative class, and as negative otherwise. This study compares several classification algorithms: SVM, NB, KNN and SVM + IG.
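Before turning to the kernel comparison, the feature-selection criterion itself can be made concrete. The sketch below (Python) computes the information gain IG(Y;X) = H(Y) − H(Y|X) of a single binary term-presence feature for a binary sentiment label; the toy labels and feature values are invented for illustration, not drawn from the Cornell or Stanford data:

import numpy as np

def entropy(labels):
    # Shannon entropy H(Y) in bits of a discrete label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, feature):
    # IG(Y; X) = H(Y) - H(Y|X) for a discrete feature X.
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for value in np.unique(feature):
        mask = feature == value
        h_y_given_x += mask.mean() * entropy(labels[mask])
    return h_y - h_y_given_x

# Toy data: 1 = positive review, 0 = negative; x = "term appears in review".
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
x = np.array([1, 1, 0, 0, 0, 1, 1, 0])
print(f"IG = {information_gain(y, x):.3f} bits")

Features with the highest information gain are kept and the rest are discarded before the SVM is trained.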
Comparison of kernel functions: The authors compare several kernel functions for the SVM algorithm to find the best kernel for movie-review sentiment analysis. The linear, polynomial, radial basis function (RBF) and sigmoid kernels are compared; the resulting accuracies and ROC curves for each kernel are given in Table 1 below. Based on these experiments, the best results were obtained with the RBF kernel: the highest accuracy, 83.05%, with an AUC of 0.831. The RBF kernel is therefore used in the SVM classification algorithm throughout this study.

Parameter comparison: The authors then experimented with the values of C and epsilon in the SVM parameters, to find the best C and epsilon values for movie-review sentiment analysis. The accuracies and ROC curves for each pair of C and epsilon values are given in Table 2 below (a code sketch of such a kernel and parameter comparison is given at the end of this article).

Comparison, evaluation and validation of results: Testing uses 10-fold cross-validation. A confusion matrix is used to measure the accuracy of SVM, NB, KNN and SVM + IG on the Cornell and Stanford datasets, and the AUC is measured from the ROC curve. The accuracy and AUC results are compared in Table 3 below; comparison charts of the accuracies and of the ROC-curve values of each algorithm on the Cornell and Stanford datasets are shown in Figures 3 and 4 below.

Conclusion: In this research, sentiment classification of movie reviews was carried out with the SVM classifier, because SVM functions well as a text classification method. The study used two movie-review datasets: the Cornell dataset with 2,000 reviews and the Stanford dataset with 25,000 reviews. The data processing shows that the IG feature selection method improves the accuracy of the SVM classifier, and movie-review data can be classified well into positive and negative reviews. On the Cornell dataset, SVM accuracy before feature selection reached 83.05% with an AUC of 0.831 (a good classification); after adding feature selection, accuracy increased to 85.65% with an AUC of 0.857 (a good classification), an improvement of 2.6%. On the Stanford dataset, SVM accuracy before feature selection reached 86.46% with an AUC of 0.865 (a good classification); after adding feature selection, accuracy increased to 86.62% with an AUC of 0.866 (a good classification), an improvement of 0.16%. The SVM based on IG thus proved more accurate for movie-review sentiment classification. To improve this research, the following suggestions are proposed: (1) subsequent studies could use other text classification methods such as Naïve Bayes (NB), K-Nearest Neighbours (KNN), C4.5 and others; (2) future research could use other feature selection methods such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), Chi-Square and others, so that optimal results can be compared.
(3) Future research is expected not only to classify movie reviews but also to use other reviews, such as book reviews, online shop reviews and others. (4) The review language need not be only English; Indonesian or other foreign languages could also be used.
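The experiments above were run in Weka 3.8. As a rough open-source analogue — an illustrative sketch, not the paper's exact configuration — the scikit-learn pipeline below chains TF-IDF features, information-gain-style selection (mutual_info_classif computes the mutual information on which IG is based), and an SVM, evaluated with 10-fold cross-validation; a grid search then compares the four kernels and several C values (Weka's epsilon round-off parameter has no direct SVC analogue and is omitted). The two-sentence corpus is a placeholder for the review datasets:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Placeholder corpus; in the paper these would be the Cornell/Stanford reviews.
docs = ["a moving, brilliant film with superb acting",
        "dull plot, wooden acting and a boring script"] * 50
labels = [1, 0] * 50  # 1 = positive review, 0 = negative review

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),  # tokenization + term weighting
    ("ig", SelectKBest(mutual_info_classif, k=5)),     # keep the k most informative terms
    ("svm", SVC(kernel="rbf")),                        # RBF kernel, as selected in the paper
])

# 10-fold cross-validated accuracy, matching the paper's evaluation protocol.
acc = cross_val_score(pipeline, docs, labels, cv=10, scoring="accuracy")
print(f"mean accuracy = {acc.mean():.4f}")

# Kernel and C comparison via grid search over the pipeline's SVM step.
grid = GridSearchCV(pipeline, {"svm__kernel": ["linear", "poly", "rbf", "sigmoid"],
                               "svm__C": [0.1, 1.0, 10.0]}, cv=10)
grid.fit(docs, labels)
print("best parameters:", grid.best_params_)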
Modern trends in design of Higher educational institutions of Ukraine

The article deals with architectural and town-planning issues in the creation and construction of buildings for higher educational institutions (hereinafter HEIs) of Ukraine. Approaches to the construction of the functional structure of the external and internal environments of HEIs, to architectural-planning and educational-laboratory blocks, and to three-dimensional spatial resources are analyzed.

INTRODUCTION: Higher education institutions have always played one of the leading roles in the formation of community centers in the urban fabric of settlements and in urban development: in the formation of public centers, scientific centers and other important planning units. In today's conditions of dense development, new university complexes are created both within the historically developed structure of a settlement (through a range of works aimed at preserving existing historical buildings and adding new ones that create a comfortable environment for the educational process of young people), and on territories free from development, through the reconstruction and restoration of historical ensembles and the creation of new planning zones on the outskirts of cities, raising the architectural and urban composition and the architectural quality of the overall urban environment within the structure of the settlement. Along with this, under the new socio-economic relations forming in Ukraine, the demand for quality higher education has increased significantly, which in turn has made it necessary to optimize the network of higher educational institutions. Creating a comfortable educational environment has become one of the priority tasks of the socio-economic and urban development policy of the state, alongside reforming the education system on the basis of multivariate modeling of learning facilities and the integration of higher educational institutions with scientific and production institutions, organizations and other entities of the social sphere. The size of the territory required for the construction of educational buildings depends on the size and profile of the particular educational institution.

CURRENT STATUS: Theoretical studies have shown, and design and construction practice has confirmed, that the larger the contingent of students, the more efficient the use of the territory. For example, educational institutions of the technical profile (the most common) with a contingent of 4 thousand students require, in accordance with the regulations, a plot of 6 hectares per 1000 people, while with a capacity of 10 thousand and more the required territory is almost 1.5 times smaller, at a rate of 4 hectares per 1000 students. A similar pattern is characteristic of higher educational institutions of other profiles. Each of the designated zones, depending on the profile of the educational institution and the town-planning conditions, has its own peculiarities.
The main structural element of a higher education institution is the training zone, which includes, in addition to the auditorium stock, the research subdivisions. In specialized HEIs of technical, agrarian, medical and other profiles, a large group of research and teaching-production units is, as a rule, created; in such cases an independent educational, research and production zone may be formed. The university buildings of the future will certainly be designed as more compact, versatile and flexible complexes. It can therefore be predicted that the building density of these complexes (within a compacted development) will increase, which is why zoning of the territory is considered a principle of the planning of educational institutions, scientifically grounded and verified by project practice. This zoning determines the division of the territory: it is recommended that the site of a higher educational establishment be divided into educational, sports, residential and economic zones. More effective in this regard is the centric scheme of the master plan, which forms a compact community center, a kind of student forum, around which all the major zones are concentrated. The central core gives the complex compositional integrity and expressiveness. When the training complex is located within territories formed in an already developed part of the city, or on the outskirts where free territory for rational further development exists in only one direction, it is expedient to apply a linear or mixed master plan. In this case the sports zone is located, as a rule, between the educational-scientific and residential zones. The linear system sometimes allows the possibility of long-term development of the educational institution to be taken into account more flexibly; such a scheme leads to the formation of a linear center and a linear system of sub-centers. Thus, in designing new and reconstructing existing educational institutions, clear functional zoning of the territory is necessary to optimize the planning structure. In the central districts of large cities it is expedient to consolidate the development of HEI territories through the introduction of effective teaching technologies and the improvement of the architectural and planning decisions of the educational buildings within each separate territory. A significant effect can be achieved by blocking and cooperating educational institutions of different levels and profiles of education, creating educational complexes, centers and student towns. It can therefore be predicted that an effective direction for the development of training zones will be the blocking, on one or on adjacent territories, of several educational establishments with shared use of engineering and transport communications and of buildings for educational and teaching-auxiliary purposes. Cooperation is especially effective where the institutions forming the complex are close in profile or share homogeneous educational and production functions.
In many cases this allows the organization of a unified system of public services and joint research and production centers. CONCLUSION: The analysis of world and national practice in the formation of higher educational institutions reveals a tendency toward their consolidation by blocking objects of different levels of education and directions of learning. Blocking of objects saves money both in construction and in operation. A characteristic feature is the flexibility and "openness" of planning decisions, which makes it possible to expand, change and reorganize areas for multifunctional use.
Eosinophilic granulomatosis with polyangiitis: case report and literature review

Eosinophilic granulomatosis with polyangiitis (EGPA), previously known as Churg–Strauss syndrome, is a multisystem disorder characterised by asthma, blood and tissue eosinophilia and small-vessel vasculitis. Eosinophilic tissue infiltration and extravascular granuloma formation can lead to damage in any organ, but it classically causes pulmonary infiltrates, sino-nasal disease, peripheral neuropathy, renal and cardiac involvement, and rashes. EGPA is one of the anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis syndromes, with the antibody being detected in ∼30–40% of cases and mostly directed against myeloperoxidase. Two genetically and clinically distinct phenotypes, defined by the presence or absence of ANCA, have been identified. Treatment for EGPA focuses on inducing and maintaining disease remission. To date, oral corticosteroids remain first-line agents, whilst second-line treatments include immunosuppressants such as cyclophosphamide, azathioprine, methotrexate, rituximab and mycophenolate mofetil. However, long-term steroid usage results in multiple well-known adverse health effects, and new insights into the pathophysiology of EGPA have allowed the development of targeted biologic therapies, like the anti-eosinophilic, anti-interleukin-5 monoclonal antibodies.

Introduction: Eosinophilic granulomatosis with polyangiitis (EGPA) is a rare multisystem disorder, first identified in 1951 by the pathologists Jacob Churg and Lotte Strauss [1]. Asthma, blood and tissue eosinophilia, and vasculitic inflammation defined a condition that has remained complex and heterogeneous [2,3]. EGPA is part of the ANCA-associated vasculitis syndromes (AAV). It is one of the rarest AAV, with an incidence of 0.8–4 per million persons and a prevalence of around 8.1–22 per million persons [4]. Due to its low prevalence and limited epidemiological data, it remains challenging to diagnose and treat [5-7]. The mean age at diagnosis has been reported as between 48 and 55 years [8-10]; however, patients have been diagnosed as young as 15 years of age [11,12]. With the advent of novel monoclonal antibodies targeting the cytokines responsible for upregulation and proliferation of eosinophils, the clinical prognosis of patients has improved, as has diagnostic accuracy [13-15].

Pathogenesis: The pathogenesis underpinning EGPA remains largely unknown; it is widely thought that different genetic and environmental factors come into play. Environmental factors, including medications such as omalizumab and leukotriene receptor antagonists (LTRA), or irritants such as silica, have been investigated as possible triggers. However, the evidence is weak [16], and in the case of omalizumab or LTRAs the association is more likely an effect of steroid tapering than directly drug induced. EGPA is a T-helper (Th)2 cell-associated disease; Th2 cells mediate the activation and maintenance of humoral, or antibody-mediated, immune responses through the production of cytokines. Biopsy samples from affected tissues are rich in Th2-related markers such as CD294 [17] and in eosinophil-selective chemokines and cytokines such as eotaxin-3, CCL17 and interleukin (IL)-4, IL-5 and IL-13, which induce the maturation and delayed apoptosis of eosinophils [18]. Hyper-eosinophilia is a cardinal feature of the condition.
Organ damage can be caused directly by eosinophilic infiltration, as seen in cardiac disease, or by the release of proinflammatory cytotoxic granule proteins such as major basic protein, eosinophilic cationic protein and eosinophil peroxidase [19]. The identification of these cytokines, especially IL-5, has allowed novel biologic therapies to be developed, in the first instance for severe eosinophilic asthma (SEA) and, more recently, for a broader range of hyper-eosinophilic conditions. There is further evidence of Th17 involvement, with high levels of IL-17 later in the disease process [20]. Humoral immunity is also felt to play a role in the pathogenesis of EGPA. Elevation of IgE [21] and serum IgG4 is noted in EGPA; these are propagated by cytokines such as IL-4 and IL-5. The role of IgG4 in the pathogenesis remains unclear; IgG4 levels have not correlated with disease severity in current studies [22].

ANCA: ANCA are identified in ∼40% of patients [2]. Immunofluorescence assays show a perinuclear pattern of ANCA. In most cases of EGPA, ANCA is directed against MPO rather than PR3 [23]. Our knowledge of their pathogenic role in inflammation and organ damage is developing constantly. A recent genome-wide association study has identified loci associated with the condition and has recognised two clinically and genetically distinct subgroups, shown in table 1. The MPO ANCA-positive subset was associated with HLA-DQ, whereas the ANCA-negative subset was associated with IL5/IRF1 and the barrier protein GPA33, neither of which was associated with the MPO-positive subset [24]. These are not completely distinct entities, and clinical findings overlap in patients; cardiomyopathy and neuropathy can develop as a result of both eosinophilic infiltration and vasculitis, so the two processes are not mutually exclusive [24-26].

Clinical presentation — clinical findings and course: Most patients experience a classic three-phase pattern of symptoms and signs, as shown in figure 1. EGPA does not necessarily manifest in such a defined order or in distinct phases, and some patients exhibit only some of these features. The mean time for progression from the initial prodromal allergic phase to the terminal vasculitic phase is between 3 and 9 years [27].

Organ involvement — Respiratory: Asthma occurs in all cases of EGPA and tends to be severe, with most patients becoming reliant on frequent courses of, or maintenance, oral corticosteroids. A retrospective study of 157 patients reported that asthma preceded other systemic manifestations by a mean of 11.8 years and that the severity of asthma increased 3–6 months before the onset of systemic disease. Patients were followed up for a mean of 7.4 years, and 27% went on to develop non-reversible obstructive symptoms [28]. Eosinophilic lung tissue infiltration was seen in 58% of patients, including ground-glass changes, consolidation, pulmonary nodules and pleural effusions (figure 2d, e). Sinus: Up to 75% of patients present with chronic rhinosinusitis, and ∼55% have a background of nasal polyposis, with a past history of surgery or frequent use of oral corticosteroids [28]. Skin: Patients may present with palpable purpura or petechiae (especially of the lower limbs); other lesions such as subcutaneous nodules can also occur (figure 2b) [26]. Cardiac: Cardiac manifestations range widely; they are caused both by eosinophilic infiltration and by vasculitis. Cardiac disease is the primary cause of mortality in patients with EGPA.
Coronary arteritis can lead to myocardial infarction, and myocarditis can lead to fibrosis and thus restrictive cardiomyopathy. Untreated, this can lead to heart failure with associated morbidity and mortality. Other presentations include pericarditis, arrhythmias and pericardial effusions. Troponin at baseline and during flares is a useful and easily available blood marker. ECG and echocardiography are recommended for all patients. Cardiac magnetic resonance imaging (MRI) is the most sensitive diagnostic technique and a valuable tool to detect cardiac involvement early (figure 2c) [29,30].

[Figure 1. The classic phases of EGPA. Prodromal/allergic phase: upper respiratory tract symptoms — worsening of asthma and sino-nasal symptoms such as rhinosinusitis and nasal polyposis — together with nonspecific symptoms such as malaise, weight loss, arthralgia and myalgia. Eosinophilic phase: peripheral eosinophilia and eosinophilic organ involvement, primarily of the respiratory tract, gastrointestinal system and myocardium.]

Gastrointestinal: Eosinophilic infiltration of the gastrointestinal tract leads to abdominal pain, diarrhoea and nausea. This may present before or in conjunction with the vasculitic phase. Patients present with mucosal ulceration, especially of the duodenum, rectal bleeding and ischaemic bowel. Emergency presentations such as bowel obstruction and perforation necessitating surgery can also occur [31]. Peripheral neuropathy: Peripheral neuropathy can present in 75–80% of patients, with central nervous system involvement in 10–39%. Patients present initially with sensory impairment, followed by motor deficits. The most common presentation is mononeuritis multiplex with axonal derangement, which leads to paraesthesia and pain. The most commonly affected areas tend to be the dermatomes supplied by the common peroneal and tibial nerves; in the upper limb the radial, ulnar and median nerves are usually affected. Nerve conduction studies can aid the diagnosis [32]. Renal: Renal involvement occurs in ∼25% of patients, and the most common presentation is necrotising crescentic glomerulonephritis, commonly found in ANCA-positive cases. Clinical features vary from isolated proteinuria or haematuria to rapidly progressive glomerulonephritis [26]. Although renal involvement in EGPA was initially thought to be benign, studies increasingly report that nearly 20% of patients develop end-stage renal disease within 4 years of first presentation, warranting regular monitoring and early management of renal disease [33,34].

Clinical outcomes: The five-factor score (FFS), created by the French Vasculitis Study Group, can be used at diagnosis to assess the severity of EGPA and to predict prognosis. Initially proposed in 1996, it underwent revision in 2009 and assigns scores depending on organ involvement and clinical and biological parameters. One point each is attributed to older age (>65 years), renal insufficiency, cardiac insufficiency and significant gastrointestinal involvement, as well as to the absence of ENT (ear, nose and throat) manifestations. Survival has been reported to improve when treatment is stratified according to the FFS, and the FFS at diagnosis can help to predict the chance of relapse [35-37]. 2009 FFS values of 0, 1 and 2 are associated with 5-year mortality rates of 9%, 21% and 40%, respectively [37].
Factors noted to influence the frequency of relapse are a lower eosinophil count at baseline and MPO positivity [25,26,35]. A study of 50 patients reported that higher doses of steroids were needed to maintain remission in ANCA-positive patients [38]. This was supported by another study, which observed more frequent relapses in patients with MPO-targeted ANCA [35]. Low-dose, long-term corticosteroid treatment seems to reduce relapse risk; however, this needs to be balanced against corticosteroid-related side-effects [34]. The most commonly reported sequelae of the disease are asthma and neurological symptoms [35]. Cardiac manifestations in the form of myocardial infarction, myocarditis and coronary arteritis have been demonstrated to affect survival [26], as has age of onset ⩾65 years [39]. Biologic therapies are expected to affect survival and relapse rates positively in the future.

Classification of EGPA: ANCA-associated vasculitides pose a diagnostic challenge for many clinicians, and various diagnostic and classification criteria have been published. The 1984 Lanham criteria, the American College of Rheumatology classification criteria, the 2012 revised Chapel Hill Consensus Conference and the EGPA Consensus Task Force have helped to define and identify patients for clinical trials [5,40-42]. The 2022 American College of Rheumatology/European Alliance of Associations for Rheumatology classification criteria are a newly validated tool to assist in diagnosing EGPA (table 2) [41]. They are applied to patients who have a confirmed diagnosis of a small- or medium-vessel vasculitis, and allocate different scores to different sub-criteria; a score ⩾6 is needed for a diagnosis of EGPA. The criteria were revised because previous versions led to frequent overlap between different AAV. The criteria were validated in a group of 119 EGPA cases and 437 comparators; the sensitivity was 85% (95% CI 77–91%) and the specificity 99% (95% CI 98–100%) [41]. The European Respiratory Society formed the EGPA Consensus Task Force, which made 22 recommendations for the diagnosis and management of EGPA (table 3) [5], including guidance on the diagnostic criteria for the condition. These criteria encompass the varying nature of patients' presentations whilst also requiring clinical and serological proof of EGPA.

Relationship between SEA and EGPA: Respiratory manifestations, especially asthma, tend to present several years prior to the diagnosis of EGPA [28]. SEA is considered to be a prodromal phase of EGPA in some patients (see the case report). With initial respiratory tract symptoms affecting the upper and lower airways, patients then develop organ involvement due to eosinophilic tissue infiltration and vasculitis [43]. Due to the similarity in the pathogenesis of the two conditions, both driven by hyper-eosinophilia, similar treatments have been and continue to be trialled and tested. This includes anti-IL-5 biological agents, which have acted as corticosteroid-sparing treatments in both SEA [44,45] and EGPA [13,46].

Diagnostic work-up: There are currently no reliable biomarkers that can be used to identify and diagnose EGPA [47]; however, a comprehensive work-up is essential to affirm the diagnosis and set out treatment plans. This includes IgE titre, IgG and subclasses, rheumatoid factor, C-reactive protein, erythrocyte sedimentation rate, antinuclear antibodies (ANA), ANCA, troponin, renal function, tryptase and vitamin B12.
Stool and serology sampling for parasites ought to be considered depending on the clinical and travel history, as should a thorough drug and toxin review [7]. Organ-specific testing should be utilised when appropriate. To assess pulmonary health, functional testing should be used alongside radiological tests such as CT scans and chest radiographs. Echocardiography, or preferentially cardiac MRI, should be used to rule out cardiac involvement. Urinalysis should be used to assess for haematuria, and bone marrow biopsy to rule out haematological causes of eosinophilia [3,5,40]. Biopsy of affected organ systems can aid in diagnosing EGPA and monitoring relapses. Biopsies may illustrate eosinophilic infiltrates, small-to-medium-vessel vasculitis and extravascular granulomas. The vasculitis consists of fibrinoid necrosis of the vessel wall and can be associated with eosinophilic infiltrates and palisading granulomas [6,41]. An in-depth assessment will help to evaluate inflammatory activity and differentiate the disease from other hyper-eosinophilic conditions. [Table 2: reproduced from [41] with permission.]

Differential diagnoses: Vasculitic and eosinophilic disorders form the main differential diagnoses for EGPA. Other small-vessel AAV, such as granulomatosis with polyangiitis and microscopic polyangiitis, may present with clinical features similar to EGPA; however, EGPA is characterised by elevated blood and tissue eosinophils and the presence of asthma [40], whilst the others are not. Chronic eosinophilic pneumonia (CEP) commonly presents with breathlessness, cough, fatigue and low-grade fever as well as pulmonary infiltrates and hyper-eosinophilia, and can as such be difficult to distinguish from EGPA, especially EGPA in its early phase. Asthma can precede or accompany CEP in up to 50% of cases. However, EGPA is a vasculitic multisystem disease whereas CEP is not [48]. Non-myeloid neoplasms, including Hodgkin lymphoma, T-cell neoplasms and some solid tumours, may be associated with hyper-eosinophilia and should be excluded [49]. Hyper-eosinophilic syndrome (HES) comprises serum hyper-eosinophilia and tissue eosinophilic infiltration, leading to organ damage and dysfunction [50,51]. Clinically, HES and EGPA can present with similar symptoms because of the organs affected: patients present with sino-nasal disease and eosinophilic pneumonia; however, HES patients do not frequently present with asthma [52]. HES is heterogeneous: idiopathic and overlap syndromes most commonly present with pulmonary infiltration; myeloid HES shows clonal eosinophilic involvement, such as FIP1L1/PDGFRA; and lymphoid-variant HES shows a clonal or phenotypically aberrant lymphoid population [50,51].

Scoring tools: There are currently no specific assessment and monitoring tools for EGPA. General vasculitis assessment tools such as the Birmingham Vasculitis Activity Score (BVAS) and asthma-specific tools such as the Asthma Control Questionnaire (ACQ) can be utilised, alongside clinical presentation and biochemistry, to assess disease severity and monitor for flares. The BVAS tool covers 56 organ-based symptoms and scores them according to whether they are new or worsening over the past 4 weeks [53]. It is not specific for EGPA and is more commonly used in other AAV. The ACQ focuses on the past week and comprises seven items to which patients respond, including spirometry results.
This, alongside other biomarkers, allows treatment to be started or escalated as needed [54]. A recent retrospective study of 119 patients with EGPA compared the accuracy of the FFS and the BVAS in assessing survival and concluded that the 2009 FFS had the best prognostic accuracy for survival [36].

Treatment: induction regimes — Corticosteroids: Due to the rarity of the condition, there is a paucity of randomised controlled trials of gold-standard remission-induction and remission-maintenance treatments. In patients without poor prognostic factors (i.e. an FFS of 0) or with limited disease, corticosteroids alone are used to achieve remission. Corticosteroids can be given intravenously (usually methylprednisolone 500–1000 mg for 1–3 days in more severe cases) or orally at 1 mg·kg−1·day−1 [55]. Due to the well-documented side-effects of corticosteroids, the dose ought to be tapered to the lowest possible. In patients experiencing disease relapse when the dose is tapered [12], steroid-sparing agents, such as immunosuppressants or anti-IL-5 biologics, are used to maintain remission.

Immunosuppressants: Patients with an FFS of 0 can enter remission with corticosteroids alone; however, patients with FFS ⩾1 have a worse prognosis and are usually treated with glucocorticoids and immunosuppressants. Cyclophosphamide: Cyclophosphamide has traditionally been used to induce remission, whilst azathioprine, methotrexate and mycophenolate mofetil (MMF) are used to maintain remission alongside oral corticosteroids. Cyclophosphamide successfully induced remission in a randomised controlled trial comparing six versus 12 cyclophosphamide pulses given with corticosteroids to 67 patients. Both regimens achieved remission and were equally effective; however, relapses were more common in the six-pulse group [35]. In granulomatosis with polyangiitis and microscopic polyangiitis, cyclophosphamide was equally effective given as continuous oral therapy or as intravenous pulses, with a lower risk of adverse effects in the intravenous group. Side-effects of cyclophosphamide include bone marrow suppression and an increased risk of cancer, as well as ovarian failure and sperm abnormalities [56]. The usefulness of other immunosuppressants for EGPA lacks prospective, EGPA-focussed trial data, and their effectiveness remains controversial. A randomised controlled trial (CHUSPAN2) of different non-severe AAV, including EGPA, concluded that azathioprine administered for 12 months was not superior to placebo in preventing relapse, inducing remission or reducing the exacerbation rate of asthma [57]. A retrospective study of 188 Japanese patients with EGPA suggested that, over a median follow-up of 56 months, azathioprine may be an independent factor for lower relapse rates [25]. Azathioprine has been compared with other treatments in patients with AAV: it was found to be as effective as methotrexate but less effective than rituximab [58,59]. In a prospective trial of 28 patients with EGPA, methotrexate given alongside prednisolone achieved remission in 72%, but relapse occurred in 50% within 1 year [60]. A small observational study of 15 patients with newly diagnosed EGPA reported remission at 12 months in 67% of patients who received MMF together with prednisolone [61]. However, MMF was found to be less effective than azathioprine for remission maintenance in non-EGPA AAV [62].
Similarly, in patients with relapsing and refractory EGPA, MMF was found to be less effective than rituximab [63]. Immunosuppressants require regular blood and lung function testing to monitor for organ toxicity. Rituximab: Rituximab is an anti-CD20 chimeric mouse-human monoclonal IgG antibody that induces B-cell depletion. Case series and cohort studies report success rates in inducing remission of up to 80%, although lower rates are reported in ANCA-negative patients [64,65]. The REOVAS trial, a double-blind controlled trial of rituximab in EGPA, randomised patients with an FFS of 0 to rituximab+corticosteroids versus corticosteroids, and patients with an FFS ⩾1 to rituximab+corticosteroids versus cyclophosphamide+corticosteroids. Rituximab was not found to be superior in either subgroup. Results of a trial of rituximab versus azathioprine for maintenance of remission are expected in due course (MAINRITSEG study; ClinicalTrials.gov identifier: NCT03164473). Earlier studies are presented in table 4.

Treatment: anti-eosinophilic biologic therapies: Monoclonal antibodies, including therapies that specifically target eosinophils by suppressing Th2 inflammation, have been trialled and proven effective in SEA, HES and, more recently, EGPA. They have proven most successful in airway-dominant EGPA, with effective steroid-sparing activity. The results of recent studies are summarised in table 4. IL-5 plays a vital role in the proliferation and differentiation of eosinophils and in preventing their apoptosis. Monoclonal antibodies against IL-5 or the IL-5 receptor (anti-IL-5/5R) have proven effective for disease control and steroid-sparing in asthma [72]. In EGPA, so far only mepolizumab has been tested in a randomised controlled trial [13], whereas the effects of benralizumab and reslizumab have been reported in small observational studies (table 4). A multicentre, active-controlled phase 3 study comparing the efficacy and safety of benralizumab versus mepolizumab in patients with relapsing or refractory EGPA is currently ongoing (MANDARA study; ClinicalTrials.gov identifier: NCT04157348), as are trials with reslizumab and benralizumab. Biologics may be a means through which complete remission can be achieved with reduced exposure to corticosteroids. However, larger randomised controlled trials with longer follow-up are required to better understand the optimal use of these medications.

Conclusion: EGPA remains a rare but complex and heterogeneous multisystem disease. In the future, outcomes may improve through early diagnosis, updates to classification criteria and the use of novel biologic agents. Corticosteroids remain effective in inducing and maintaining remission, but their long-term side-effects can be deleterious. The dichotomy between ANCA-positive and ANCA-negative EGPA shapes clinical presentation, diagnosis, monitoring and, importantly, treatments and outcomes. A deeper understanding of the pathophysiology of EGPA and its subgroups will support the development of effective, safe and personalised treatments.

Key points
• Respiratory manifestations, in the form of asthma and sino-nasal disease, tend to precede other clinical presentations by several years.
• No reliable biomarkers have been identified for the identification and diagnosis of EGPA; however, a thorough work-up using different laboratory and radiological techniques should be carried out in a patient suspected of having the condition.
• Monoclonal antibodies targeting IL-5 have proven effective in small cohort studies for disease remission and as steroid-sparing agents, with larger randomised controlled trials currently under way.

Self-evaluation questions
1. According to a genome-wide association study, which genetic associations were identified with the MPO ANCA positive subgroup?
a) GPA33 and IL5/IRF1
b) HLA-DQA1 and IL5/IRF1
c) HLA-DQA1 and HLA-DRB1
d) HLA-DRB1 and GPA33
2. Which scoring tool can be utilised to assess severity and estimate prognosis at diagnosis?
a) Birmingham Vasculitis Activity Score
b) Five-factor score
c) Asthma Quality of Life Questionnaire
d) Asthma Control Questionnaire
3. To which class of biologic agents does benralizumab belong?
a) Anti-CD20
b) Anti-IgE
c) Anti-IL-5R
d) Anti-IL-4
4. Involvement of which organ system is the primary cause of mortality in patients with EGPA?
a) Cardiac
b) Respiratory
c) Gastrointestinal
d) Renal

Conflict of interest: V. Alam has nothing to disclose. A.M. Nanzer has received speaker's fees and conference support from AstraZeneca, Chiesi, Teva and Napp, outside the submitted work.
On the Etiology and Characteristic of Pain in the Elderly Suffering from Dementia

Introduction: Pain is underestimated and undertreated mostly in the elderly in all settings of care, but also in middle-aged men and women [1,2]. Its prevalence increases with age, affecting between 25% and 83% of the elderly living in the community, and up to 80% of those living in nursing homes experience chronic pain [3,4]. Dementia patients are also commonly affected by acute and chronic pain, which is often unrecognized and undertreated because lack of recognition and insight may mean that patients with dementia fail to report pain. Aphasia can also lead to problems expressing pain. It is estimated that between 20% and 50% of patients with moderate to severe dementia suffer from chronic pain [5,6]. Likewise, Landi [7] found that individuals with dementia had a 20% lower probability of receiving analgesics for daily pain than those with normal cognition, and Morrison and Siu [8] found that after hip surgery, dementia patients received only one-third as much opioid analgesia as those with normal cognition. Pain can be accompanied by disability, sleep disturbances, reduced mobility, weight loss, depression, anxiety, and behavioral and emotional disturbances [1-3]. Its persistence causes unnecessary suffering, compromises functionality and quality of life, and contributes to the progressive decline of the physiological reserve with age and to frailty [9]. Caregivers often face difficulties in identifying it, due to insufficient education in pain management, limited use of pain assessment tools and resistance to using opioids and non-pharmacological measures [5]. This situation reveals the need for improved research, training and understanding of pain among caregivers of dementia patients.

Common Pain Problems in the Elderly and in Dementia Sufferers: Dementia is defined by the American Psychiatric Association as "an irreversible mental state characterized by a decrease in intellectual function, personality change, impaired judgment, and often a change in affect" [10]. It is a clinical diagnosis that requires memory impairment to be present along with at least one other associated impairment, such as aphasia, apraxia, agnosia, or deterioration of executive function (planning, initiating, sequencing, monitoring, abstract thought) and complex behaviours [9,11,12]. The high number of physical assaults on staff working in dementia wards may be related to unidentified and unmanaged pain and often results in antipsychotic medication rather than person-centered care [13]. An essential aspect of dementia is that the cognitive impairment represents a change from baseline. With most dementia syndromes, the change is gradual and progresses over time [12]. Dementias tend to become more prevalent after the age of 65, and many sufferers share the acute or chronic persistent pain that accompanies old age. Thus, Mitchell et al. [14] followed elderly people with severe dementia for 18 months and found pain to be the third most common cause of distress, affecting 39% of all cases, preceded by agitation in 54% and dyspnea in 46%. All pain cases require a complete clinical history delineating factors that could reduce or exacerbate the pain [1-3,11,15].
Pain is classified as acute (associated with trauma or injury) or chronic (lasting longer than 3 months). Table 1 presents common diseases and causes of pain in the elderly [1,2,11,16]. In acute pain, the most important causes are traumatic and inflammatory, while in chronic pain they are musculoskeletal pathology, including previous injuries and areas of surgery, a history of rib or limb fractures, cranial trauma or surgery, herpes zoster, and treatments (anticholinergics, benzodiazepines, opioids, antipsychotics, antihypertensives, statins, chemotherapy), etc. If there has been gastric surgery, vitamin B12 deficiency may be suspected, and in the case of frequent headaches, chronic use of analgesics. Other medical conditions, like cancer, heart disease or kidney disease, can also cause pain. The different types of pain — nociceptive, neuropathic, visceral or mixed — can be more difficult to assess. Frailty and dementia may increase the risk of medication-related harms and change the goals of care. Caregivers may not realize the disease has worsened because patients cannot verbally express how they are feeling. Swelling or other symptoms may not be easily noticed if the person is bedridden. Mental pain can be exacerbated by dementia: patients may experience significant loss or grief, even when confused or disoriented, and this can lead to social, spiritual or emotional pain, which is felt physically like other types of pain [5]. L.C. Alvaro [17] and Mesioye A. [18] consider that several red flags should always be assessed in this age group: (a) herpes zoster, which can cause pain before and after the rash; neuropathic pain is quite common and is eight times more frequent in people over 50 than in younger people [19]; (b) temporal arteritis, which causes headache, limb pain, or proximal limb stiffness and weakness; (c) minor or major trauma, which can go unnoticed and cause chronic pain, such as rib fractures, head injuries, subdural hematomas, severe osteoporosis; (d) nocturnal or rest bone pain, indicating a possible tumor or a process of inflammatory or infectious origin, with fever, chills, night sweats, urinary tract infection, or recent instrumentation, i.e., spondylodiscitis; (e) unexplained weight loss, loss of bladder and bowel control, significant acute sensory deficit or motor weakness; and (f) acute limb ischemia due to fibrillation or arteriosclerosis, with pain on walking due to arterial obstruction.

For Galicia-Castillo and Weiner [1], there are four common, overlooked painful conditions in the elderly: myofascial pain syndrome (MPS), chronic low back pain, spinal stenosis, and chronic diffuse pain. Myofascial pain syndrome is described as pain, numbness, and paresthesia in the neck, shoulders and other areas, with painful myofascial trigger points (TrPs) and distant radiation, resembling an entrapment neuropathy with pain in any area [20]. It usually occurs after direct trauma to the sacroiliac and gluteal region [1,20,21] and is very common in a wide variety of conditions. It is accompanied by altered gait and sacroiliac, lumbar and hip pain, and behaves like a radiculopathy with muscle dysfunction, where patients complain of severe pain in the buttocks radiating to the leg and foot.
Chronic low back pain is multifactorial. It accounts for 80% of all pain and is associated with multiple physical and psychological factors, such as myofascial pain, sacroiliac joint syndrome, hip osteoarthritis, and/or anxiety or depression. Imaging tests should be ordered if there is a history suggestive of fracture, infection, or tumor. A majority (95%) of the elderly have degenerative disc or lumbosacral joint pathology that is often unrelated to pain.

Lumbar spinal stenosis (congenital or acquired narrowing of the spinal canal) usually presents as pain, paresthesia, and weakness in the legs and calves during prolonged standing or walking (neurogenic claudication), with spasms and pain in the back. In most cases, spinal stenosis is asymptomatic. It is necessary to identify claudication and treat contributory factors, such as being overweight, before referral for surgery. Caudal or lumbar epidural blocks with local anesthetic and corticosteroids are effective. Surgery is indicated for pain that does not respond to treatment [21]. Chronic diffuse pain is associated with osteoporosis or osteoarthritis and is a very frequent cause of disability. Half of the patients with progressive neuromuscular disease report moderate to severe pain, and degenerative arthropathies are the second most common chronic condition after hypertension, affecting 50% of the elderly and accompanied by stiffness and pain in up to four joints [1,11,20,21].

Chronic pain affects some of the same areas of the brain that are affected by AD. The changes occur in the area called the locus coeruleus and affect norepinephrine. The effective management of pain should consider not only the underlying pathology but also the most prevalent comorbidities and drug interactions that might contribute to the pain and its implications.

Pain Pathways and the Expression of the Pain Experience

Pain, according to the IASP, is "an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage" [22]. The current IASP definition acknowledges that, although tissue injury is a common antecedent to pain, pain can be present even when tissue damage is not discernible. This definition encompasses the objective part of pain, related to physiological aspects, as well as the subjective part, i.e., the affective or reactive emotional charge that qualifies the suffering associated with pain. It is valid for patients with mild dementia, but as dementia progresses their difficulties in expressing the experience of pain increase [14,15,18,23,24].
Both neuropathological and neuroimaging studies have described interconnected brain areas that are important in the mediation of pain processing. Painful stimuli reach the brain via two pathways: the lateral system, in which the sensation of pain (in its sensory-discriminative and intensity aspects) travels through the anterolateral part of the spinal cord via the lateral spinothalamic bundles until it reaches the hypothalamus, thalamus, and somatosensory cerebral cortex, and the slower medial conduction system, which runs from the medulla oblongata and periaqueductal midbrain to the cingulate cortex and terminates diffusely in the frontal and limbic lobes. The latter is responsible for the motivational-affective and cognitive-evaluative aspects of pain memory (unpleasant feelings) and its autonomic neuroendocrine responses. The cortical projection areas of the medial nociceptive pathway are strongly affected by the neuronal deposits characteristic of Alzheimer's disease [9,17]. Overlap of the two systems might occur in the insula [5,17,22] (Figure 1).

[Figure 1: The lateral and medial pain pathways [17]. Schematic of the efferent pathways of the lateral pain system, which project from the ventral posterolateral nucleus of the thalamus to the primary parietal cortex, and of the efferent pathways of the medial pain system, which reach numerous cortical areas and the hypothalamus, according to the more complex aspects of pain perception that they transmit.]

This may explain the alterations in the mental experience of pain in this condition. For L.C. Alvaro [17] and Price [22], the prefrontal cortex, anterior cingulate cortex, perisylvian areas, hippocampus, and hypothalamus are the areas responsible for the cognitive, evaluative, emotional, memory, and autonomic-response dimensions of painful experiences. They are prepared to neutralize or defend against pain through the coordination of the cognitive-evaluative and the strictly sensory components of pain. The connections of the medial pain system with the limbic system, especially the amygdala, and those of the hypothalamus play a central role in aversive behaviors and in autonomic and neuroendocrine responses [17]. The periaqueductal gray matter (PGS) decreases pain by facilitating the secretion of endogenous opioid derivatives [17,25-27]. In demented patients there is less response to analgesic treatments, with absence of the placebo effect. Consequently, higher doses of analgesics are necessary in cases of AD [26].

Types of Dementia and Pain

Dementia is a very common pathology over the age of 70, so it is to be expected that many people with dementia will suffer from pain. The most obvious and serious effect of dementia on chronic pain is the patient's inability to relay subjective pain information accurately; as a result, chronic pain may be ignored, undertreated, or assumed to be nonexistent. Pain intensity and the number of localized pain complaints bear a small but significant negative relation to cognitive impairment [14,15].
The nature of chronic pain in dementia might be altered due to the neuropathological changes involved, which affect some of the same areas of the brain as AD; this in turn can alter pain perception and consequently might compromise anticipatory reactions and motor avoidance responses. Pain among nonverbal elderly or severely cognitively impaired individuals is usually expressed in the form of stereotypical pain behaviors, such as moaning, whimpering, withdrawal, restlessness, guarding, and protective postures. It is likely that the dementia process affects nociception and cognition and that the emotional components of pain overlap with the behavioral components in severe dementia, as shown in Table 2.

Table 2. Effects of dementia on the pain experience:
- Effect on nociception: The dementia process damages the nervous system and can have a direct effect on pain pathways. It may decrease, increase, or alter pain sensation.
- Effect on pain cognition: Dementia impairs all aspects of cognition, from memory to the conceptualization of pain.
- Effect on emotional response to pain: Dementia can damage appropriate emotional responses and can have effects as varied as indifference or disinhibition.

Dementia lesions are located in the nociceptive pathways, and accordingly pain may have different clinical features than in the non-demented population; for this reason, the painful experience becomes different and distinctive for every lesional type. As dementia progresses, so does the likelihood that patients are experiencing pain. The lateral nociceptive pathway (lateral thalamic nuclei and primary parietal cortex), which is in charge of primary pain perception, is preserved in dementia. Therefore, the sheer perception of pain, including pain intensity and threshold, remains unmodified [11,17]. Distinctly, the medial pain pathways are affected by dementia lesions in several cortical projection areas, including areas of expectation and integration of experience (prefrontal), memory (hippocampus), and autonomic and motor defense (amygdala, periaqueductal gray matter, hypothalamus). This pathway includes the intralaminar thalamic nuclei, the pons (locus ceruleus: LC), the mesencephalon (periaqueductal gray substance: PGS), the hypothalamus (paraventricular nuclei, mammillary tubercle), and different areas of the parietal (primary, secondary, operculum), temporal (amygdala, hippocampus), and frontal (anterior cingulate cortex: ACC) lobes [9,19,22-27]. As a consequence, the kind of pain evoked by these areas will be compromised: cognitive assessment, the mood and emotion inherent to pain, pain memory, and the autonomic responses are modified in dementia [28].
The medial regions of the temporal lobe (hippocampus and amygdala) are responsible for pain memory. The orbitofrontal cortex is crucial to the anticipation inherent in the placebo effect and locates the site of the painful stimulus. In a recent article in Nature Neuroscience, using surgical implants, scientists recorded electrical fluctuations in the orbitofrontal cortex, an area involved in emotion regulation, self-evaluation, and decision making [24]. The anterior cingulate cortex is important for perceiving both acute and chronic pain. Neurodegeneration also affects brain structures involved in the inhibitory control of pain, such as the raphe nucleus, the mesencephalic periaqueductal gray matter (PGS), the vegetative nervous system, and the intralaminar nucleus [17,22,24], influencing aversive and neuroendocrine pain behaviors. The main subtypes of dementia, as related to pain, are the following:

Alzheimer's disease (AD) is characterized by extracellular deposits of beta-amyloid, by neurofibrillary tangles formed by tau protein that accumulate in the cytoplasm of neurons and axons, and by loss of neurons [9]. The neuropathological changes occur predominantly in the temporal and parietal cerebral cortex and the hippocampus. As a result, there is a reduction in the anticipatory and avoidance responses and also a flattening of the autonomic responses [9,18,25]. These are essentially secondary to the degenerative changes in the medial temporal (pain memory) and anterior cingulate (cognitive and mood aspects) areas. Alzheimer patients can be characterized by decrements in all three levels of memory: sensory register, short-term, and long-term memory. AD accounts for 60%-80% of all dementias. In the mild to moderate forms, perception, pain threshold, and touch are relatively preserved [9,17]. Due to dysfunction of cortical connectivity, there is impaired integration of information, with particular impairment of the ability to combine incoming stimuli and analyze them simultaneously in different cortical areas to produce a coherent response, although the ability to analyze each feature separately is retained [17,23]. Further, because structures associated with pain perception, including those underlying the cognitive component, memory formation, and the vegetative autonomic response, are affected in AD, patients require stronger stimuli to elicit autonomic responses equivalent to those of controls. For example, blood pressure and heart rate are not altered except when the pain is very acute [17]. Thus, patients develop a distorted mental experience of pain, leading to varying degrees of pain impairment that differ from the simple parietal sensory processing of the lateral pain pathway and the parietal cortices S1 and S2, which are usually less affected. They feel the pain, but do not anticipate it, remember it, or avoid it by withdrawal or autonomic defense [17,22,23,25,26].
Vascular dementia or multi-infarct dementia (MID) (10%-20%) is the second most common type of dementia. It is caused by multiple ischemic-hypoxic or haemorrhagic cerebrovascular, cortical, or subcortical lesions, affecting both gray and white matter and resulting in cortical, subcortical, and hippocampal-hypothalamic disconnection [9]. Multiple lacunar infarcts or deep white matter changes related to chronic ischemia may cause a subcortical dementia [9,30]. MID has been associated with slightly higher pain prevalences. The lesions can cause hyperactivity of the hypothalamic-pituitary-adrenal axis and increased painful emotional responses, similar to hyperpathia, such as cortico-subcortical deafferentation pain due to disruption of the normal sensory stimuli reaching the brain secondary to the white matter lesions, whereby the brain tends to create its own sensory experiences, as in phantom limb pain [9,17] or post-stroke central pain. The consequence is the presence of hyperpathia and hyperalgesia. Patients experience more "felt" and vivid pain, both in intensity and in the variety of its forms, which may explain, for example, why they have more headaches months or years after a stroke, or more central neuropathic pain. Unlike in AD, there is no suggestive evidence of a decreased pain threshold.

Dementia with Lewy bodies (DLB) is the third most common cause of dementia. Pathologically, it is part of the spectrum of synucleinopathies, characterized by neuronal deposition of synuclein leading to the formation of Lewy bodies (abnormal alpha-synuclein clumps). In this kind of dementia, there is a slowly progressive cognitive decline, accompanied by vivid hallucinations, motor features of parkinsonism, and fluctuating cognition with pronounced variations in attention and alertness. Reduced perception of pain and distress is characterized by alteration in the perisylvian area [9,17,23,25,29], with less naming loss than in AD, although the two share some of the topography of the pathological deposits, located more in posterior areas in DLB than in Alzheimer's disease. Postural instability, difficulty in walking, and falls are its most important motor manifestations.

Mixed dementia: This is a combination of two or more types of dementia, because most sufferers have a vascular element and AD components that are difficult to quantify. Vascular changes are often considered the trigger of AD symptoms; therefore, control of vascular risk factors is important for reducing the impact of AD and other dementias.

Frontotemporal dementia (FTD) is a rare form of dementia that tends to occur before the age of 60. It is associated with abnormal amounts or forms of the tau and TDP-43 proteins. Frontal and lateral temporal atrophy in FTD is greater than in AD, with less flow in the prefrontal cortex, orbitofrontal cortex, insula, and perisylvian areas, as seen in SPECT or PET imaging scans. As a result, there is a reduction in the cognitive-evaluative component of the painful experience, so that the expression of pain is milder. It is possible that patients with FTD have an increase in pain threshold. This is linked to the above-mentioned affective-emotional component, which is responsible for a higher tolerance of pain and for a lesser pain sensation [9,17,22].
Parkinson's disease (PD) and Huntington's disease are often accompanied by dementia in their late stages. Advanced Parkinson's presents with tremors, rigidity, bradykinesia, and gait disturbances. Several pain syndromes exist in these cases, linked to early lesions of the locus ceruleus or PGS, which reduce their prominent antinociceptive action, mediated through several peptides. There is Lewy body pathology in areas of the brain important for pain processing [9]. Pain related to dysfunction of these areas is very common [17] and may improve with levodopa therapy. However, erratic pain may occur in different areas, with poor response to L-dopa adjustments, and is even more common than pain of other origins (joint, inflammatory, systemic, etc.). The lack of facial expression may lead to a reduction in the external expression of pain, as in the case of FTD described above.

In demented patients in general there is a lack of expectation regarding analgesic treatments and an absence of the placebo effect; consequently, higher doses of analgesics are necessary [17,26].

Conclusions

Pain in dementia patients is fairly common but is usually poorly recognized and undertreated: as dementia progresses, the person's ability to communicate their needs declines, making pain difficult to report and to interpret, and caregivers and professionals often lack appropriate education, further training, and interest in pain management. Next to easy self-report measures, observational pain assessment tools should therefore be used.
2023-09-18T15:09:34.588Z
2023-09-08T00:00:00.000
{ "year": 2023, "sha1": "94f016a4beca18caa589462a01b1c24bebc07434", "oa_license": "CCBYSA", "oa_url": "https://www.gavinpublishers.com/assets/articles_pdf/On-the-Etiology-and-Characteristic-of-Pain-in-the-Elderly-Suffering-from-Dementia.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f633fea08f2b7a59992f611b658b7ff098ed5d26", "s2fieldsofstudy": [ "Medicine", "Art" ], "extfieldsofstudy": [] }
72661051
pes2o/s2orc
v3-fos-license
Respiratory symptoms of megaesophagus Megaesophagus as the end result of achalasia is the consequence of disordered peristalsis and the slow decompensation of the esophageal muscular layer. The main symptoms of achalasia are dysphagia, regurgitation, chest pain and weight loss, but respiratory symptoms, such as coughing, particularly when patients lie in a horizontal position, may also be common due to microaspiration. A 70-year-old woman suffered from a nocturnal cough and shortness of breath with stridor. She reported difficulty in swallowing food over the past ten years, but had adapted by eating a semi-liquid diet. Chest X-ray showed right hemithorax patchy opacities projecting from the posterior mediastinum. Chest computed tomography scan showed a marked dilatation of the esophagus with abundant food residues. Endoscopy confirmed the diagnosis of megaesophagus due to esophageal achalasia, excluding other causes of obstruction, such as secondary esophagitis, polyps, leiomyoma or leiomyosarcoma. In the elderly population, swallowing difficulties due to esophageal achalasia are often underestimated and less troublesome than the respiratory symptoms that are caused by microaspiration. The diagnosis of esophageal achalasia, although uncommon, should be considered in patients with nocturnal chronic coughs and shortness of breath with stridor when concomitant swallowing difficulties are present. Introduction Megaesophagus is commonly a late consequence of esophageal achalasia, 1 which is a motility disorder involving both the smooth muscle layer and lower esophageal sphincter (LES). It is characterized by incomplete LES relaxation, increased LES tone and the lack of peristalsis of the esophagus. 2 This disorder has no known underlying cause, and only a very small proportion of cases occur secondary to Chagas disease (an infectious disease common in South America). 3 Megaesophagus can also result from esophageal obstruction due to cancer or fibrosis, 4 or to a tight gastric band that is used to treat obesity. 5 The main symptoms of achalasia are dysphagia, the regurgitation of undigested food, weight loss and chest pain. The chest pain that is experienced may be mistaken for a heart attack because it can be extremely painful. Dysphagia tends to become progressively worse over time and to involve both fluids and solids. Some people may also experience coughing, stridor, wheezing and other respiratory symptoms, particularly when lying in a horizontal position, because food and liquid, including saliva, are retained in the esophagus and may be inhaled into the lungs with microaspiration. 6 Aspiration can be more serious and cause pneumonia or airway obstruction due to inhaled materials. Treatment typically involves pneumatic dilatation or surgery, such as Heller myotomy. 7 Following pneumatic dilatation or surgery, proton pump inhibitors can help to prevent gastroesophageal reflux. A partial fundoplication or wrap is generally added during surgery to prevent excessive reflux. We report a case of megaesophagus due to esophageal achalasia with swallowing difficulties. This is often underestimated and less troublesome than the respiratory symptoms that are caused by microaspiration. Case Report Over the course of several months, a 70-year-old woman suffered from a nocturnal cough and shortness of breath with stridor. She had no previous history of respiratory or cardiac disease.
The family physician had scheduled her for a cardiological examination with an electrocardiogram and echocardiogram; parameters were within the norm. She also had an ear, nose and throat examination with fiberoptic laryngoscopy that showed mild laryngeal hyperemia. Recent routine blood tests, such as a complete blood cell count, erythrocyte sedimentation rate, and renal and liver function tests, were all normal. No thyroid masses were detectable, and a chest auscultation revealed decreased breath sounds in the right hemithorax. A careful assessment of her medical history revealed that the patient had reported difficulty in swallowing food over the past ten years. In the past few months, this dysphagia had worsened, but she had adapted by eating very slowly and ingesting a semiliquid diet. Furthermore, she reported frequent coughing, not only during the night and in a supine position but also particularly in the left decubitus position. Chest X-ray showed right hemithorax patchy opacities projecting in the posterior mediastinum (Figure 1). Chest computed tomography scan revealed a severely dilated and tortuous esophagus with abundant retained food, such as that which occurs in achalasia (Figure 2A-C). A nasogastric tube was inserted to aspirate the contents of the esophagus and the patient was then referred for an endoscopy. This excluded other causes of obstruction, such as cancer or fibrosis, confirming the diagnosis of esophageal achalasia. The lower esophageal sphincter was tight but was successfully dilated using a balloon. Nevertheless, considering the severity of the condition, the patient was also referred for further surgical treatment of the achalasia, but she declined any surgical intervention at that time. The patient gave informed consent to treatment and to the publication of the results. Discussion and Conclusions In esophageal achalasia, the slow decompensation of the muscular layer causes severe dilatation in the long term, which is a characteristic of megaesophagus. Diagnosis depends primarily on a detailed clinical history that may lead to the administration of more specific investigations, such as a barium swallow, esophageal manometry and endoscopy, which is generally performed to rule out the possibility of cancer. In an emergency setting, the diagnosis may be suspected following a simple chest X-ray showing patchy opacities projecting from the posterior mediastinum. This is a radiological sign of striking dilatation of the esophagus containing alimentary material. Diagnosis is straightforward with chest computed tomography. This allows disease severity to be evaluated for possible emergency treatment and helps in the planning of future therapeutic management. Although uncommon, the diagnosis of esophageal achalasia should be considered in elderly patients with respiratory symptoms, which typically include a cough and shortness of breath with stridor in either the nocturnal or supine position, when concomitant swallowing difficulties are present. In the elderly population, swallowing difficulties are often underestimated and less troublesome than the respiratory symptoms that are caused by microaspiration. Any delay in the diagnosis of this exceptional condition in the elderly is critical because serious consequences, such as aspiration pneumonia and airway obstruction due to inhaled alimentary material, may occur.
Some authors have indeed reported sudden deaths secondary to megaesophagus that were attributed not only to food asphyxia but also to the exacerbation of pre-existing underlying disease. 8
2019-03-10T13:04:23.667Z
2013-03-04T00:00:00.000
{ "year": 2013, "sha1": "f51c8a2c5707fd5a26071cf03d174e3aed9c8a76", "oa_license": "CCBYNC", "oa_url": "https://italjmed.org/index.php/ijm/article/download/itjm.2013.53/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "fce3babd3e8422022d1131e78c57afb51976bdc3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
204954719
pes2o/s2orc
v3-fos-license
Development of Mandarin speech test materials for civilian pilots in China Supplemental Digital Content is available in the text. Air-ground radiotelephony communication provides the only way for a civilian pilot and air traffic controller to communicate during all phases of a flight. The quality of communication during the flight is crucial for aircraft safety. For this reason, it is critical that pilots have good auditory function because of the lack of visual cues during flight. Hearing can be endangered among civilian pilots, who are routinely exposed to loud occupational noise environments such as the cockpit, and hearing loss can place the aircrew and passengers at risk. Radiotelephony is a type of semi-artificial language, based on imperative sentences that are standardized, procedural, and articulate, characterized by precise and clear brachylogy. Mandarin radiotelephony terminology, which is widely used on Chinese domestic air routes, has its own characteristics and comprises three parts: Mandarin Chinese terminology, English capital letters, and English abbreviations of terminology. In China, only pilots who can meet the auditory fitness for duty (AFFD) [1] standards of the Civil Aviation Administration of China (CAAC; Beijing, China) are certified as possessing sufficient hearing to ensure a safe flight. The current CAAC AFFD test is primarily dependent on pure-tone audiometry (PTA). However, previous studies indicate that PTA may be unsuitable for deciding the ability of an individual to perform the job satisfactorily, because the ability to recognize speech in steady-state noise cannot be predicted with an audiogram. Therefore, in this study, we aimed to develop a set of speech audiometry materials specifically designed for civilian pilots as an additional functional AFFD test for the CAAC. To qualify as an auditory functional assessment, a suitable AFFD test must take into consideration the occupational environment, an individual's professional experience, and the auditory requirements of the job. Hence, the stimuli in this study were based entirely on hearing-critical tasks pertaining to radiotelephony communications. In view of the actual application of the Mandarin radiotelephony language in China, the content of the speech corpus should include the standard radiotelephony phraseologies required for each normal in-flight phase as well as for unusual situations. Therefore, the following four classic Mandarin radiotelephony communications textbooks were selected: (1) International Civil Aviation Organization (ICAO) Radiotelephony Communication; (2) Radiotelephony Communication Course (second edition); (3) 900 Sentences of Pilot English Proficiency Examination of China; and (4) Guidance on Radiotelephony Communications Under Unusual/Emergency Situations. After discussion with the captain pilots and the linguist in the research group, we finally identified twelve categories of hearing-critical tasks, as follows: (1) departure (e.g., pre-flight, start-up, pushback, taxi-out, and climb-out); (2) flight altitude; (3) very-high-frequency omnidirectional range and waypoints; (4) en route; (5) flight speed; (6) flight direction; (7) descent and approach; (8) transponder frequency; (9) landing; (10) after landing; (11) query normal height; and (12) non-routine conditions.
Based on the above work, the development of the sentence lists needed to follow some basic principles: (1) sentences should be selected entirely from speech communications in the radiotelephony language; (2) intonation factors and phoneme balance should not be considered; (3) the balance of hearing-critical tasks should be strictly maintained between the sentence lists; (4) excessive homogeneity and heterogeneity should be prevented; (5) sentences should be of variable lengths, ranging from 3 to 13 Chinese characters (to avoid interference from memory factors); (6) as much as possible, monosyllabic words and spondees should be chosen as the keywords, with a few trisyllabic words selected as appropriate; (7) only declarative and imperative sentences should be included; and (8) no duplicate sentences should be included. As a result, 20 sentence lists were developed, each list including 20 sentences with 100 keywords [Supplementary Table 1, http://links.lww.com/CM9/A110]. The recording work was then conducted in an anechoic chamber at the Chinese Academy of Social Sciences in Beijing, China. An experienced sound engineer performed the voice signal acquisition using professional recording equipment and tools. The speaker was a 47-year-old male broadcaster with more than 20 years of broadcasting experience. After digital processing, the audio files with loudness equalization were finally produced and stored for use. The study was approved by the Ethical Committee of the Civil Aviation Medicine Center. A total of 40 male Chinese student pilots who worked for Shenzhen Airlines and held Class I certificates of the CAAC were enrolled. The average age was 23.7 years (range, 21-26 years). The mean total flight time was 229.6 h (range, 205-279 h). All the subjects had good written and oral skills both in English and in Mandarin. Participants with hearing loss in both ears or a medical history of ear disorders were excluded. After conventional audiometry, we selected the relatively healthy ear as the test ear for the speech test. All subjects were tested in a double-walled acoustic cabin that met the American National Standards Institute 2004 specifications for audiometric test rooms. The clinical audiometer was calibrated in sound pressure level (SPL) in line with the International Standards (IEC 645-2:1993) before administering speech audiometry. Based on the results of a small-sample preliminary experiment, six stimulus intensity levels were identified, ranging from 5 to 15 decibels hearing level (dB HL) in 2-dB steps. A Latin square design was applied, and the following formula was used to calculate the word recognition score (WRS): WRS = (number of correct keywords/100) × 100%. All statistical analyses were conducted using SPSS 21.0 (SPSS Inc., Chicago, IL, USA). Logistic regression analysis was performed to obtain the performance-intensity (P-I) functions and to calculate the regression slopes, which could be used to evaluate the sensitivity of the system, and the regression intercepts for all the lists. The values of the slope and intercept for each list are given in Supplementary Table 2, http://links.lww.com/CM9/A110. These values were then put into a modified logistic regression equation (Equation 1) designed to calculate the percentage of correct performance at any specified intensity level [2]:

p = 100 / (1 + e^(-(a + b × i)))    (Equation 1)

In Equation 1, "p" is the WRS, "a" is the regression intercept, "b" is the regression slope, and "i" is the intensity level (in dB HL).
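To make the fitting step concrete, here is a minimal Python sketch, not the SPSS procedure used in the study; the example scores and the use of scipy's curve_fit are illustrative assumptions. It also shows how the SRT (the intensity where p = 50%, i.e., -a/b) and the slope at threshold (25·b percent per dB) follow directly from Equation 1.

```python
import numpy as np
from scipy.optimize import curve_fit

# P-I function from Equation 1: p = 100 / (1 + exp(-(a + b*i))).
def pi_function(i, a, b):
    return 100.0 / (1.0 + np.exp(-(a + b * i)))

# Hypothetical WRSs (%) for one list at the six intensity levels used.
levels = np.array([5.0, 7.0, 9.0, 11.0, 13.0, 15.0])  # dB HL
wrs = np.array([12.0, 25.0, 48.0, 70.0, 86.0, 95.0])  # percent correct

# Fit the intercept a and slope b by nonlinear least squares.
(a, b), _ = curve_fit(pi_function, levels, wrs, p0=(-4.0, 0.5))

srt = -a / b             # intensity at p = 50% (speech reception threshold)
slope_at_srt = 25.0 * b  # dp/di at p = 50% equals 100*b/4 (% per dB)
print(f"a = {a:.2f}, b = {b:.2f}, SRT = {srt:.2f} dB HL, "
      f"slope at SRT = {slope_at_srt:.2f} %/dB")
```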
By putting the regression slope, intercept, and intensity level into Equation 1, the percentage of correct keyword recognition could be calculated, and the P-I functions could be obtained by statistical curve fitting. The variability of the WRSs was greatest near the 50% point and smallest near 0% and 100%. The mean threshold (50%) of the P-I functions was 8.22 ± 0.35 dB HL, the mean slope at threshold was 11.34% ± 1.84% per decibel, and the mean slope of the linear region (20%-80%) was 4.50% ± 1.29% per decibel. The WRSs of six sentence lists (Lists 5, 7, 16-18, and 20) revealed non-monotonic characteristics across the consecutive intensity levels. The data on PTA and total flight hours are provided in the Supplementary materials. By one-way analysis of variance [Supplementary Table 4, http://links.lww.com/CM9/A110], we found that all the lists were equivalent in difficulty level (P > 0.05). We also conducted a reliability analysis on the intensity levels, scores, and test results of the 100 keywords in each list. The Cronbach's α value was 0.981 (i.e., >0.80), which suggested that the 20 lists had a high internal consistency; and the validity analysis for the remaining 14 sentence lists [Figure 1] revealed that the Kaiser-Meyer-Olkin value of the test sentences was 0.905 and the Bartlett sphericity test result was P < 0.001, which indicated that the test materials also had good validity. In this study, there was good consistency between the speech reception threshold (SRT) results in quiet and PTA (8.22-10.6 dB HL). The overall mean parameters across all P-I functions of the sentence lists obtained in this study were compared with those of several existing materials, such as the Mandarin speech test materials, for which the mean SRT of the sentence lists has been reported as 23.1 dB SPL (i.e., 3.1 dB HL). [3] Another recent study reported a mean SRT for Mandarin short sentence lists of 6.3 dB HL with a 7.2% per decibel mean slope in the 20% to 80% linear score region. [4] We found that the P-I functions in our study exhibited a characteristic feature of relatively high SRTs and low slopes in the linear score region, which implies a relatively high difficulty and low sensitivity. The first reason is that, despite our attempt to maintain homogeneity among the participants, it was very difficult to fully avoid the floor effect caused by the relatively high PTA level (M = 10.60 dB HL) and some hidden hearing loss. Our recent studies found that extended high-frequency audiometry (EHFA), which is more sensitive to inner ear injury, may be helpful for the early detection of noise-induced hearing loss in civilian pilots. [5] However, the degree of speech intelligibility deficit varies from individual to individual and unfortunately cannot be predicted effectively by PTA or EHFA. The second reason is that most student pilots had undergone flight training in English-speaking countries and only a few of them had been trained at the Civil Aviation Flight University of China (Guanghan City, Sichuan Province, China). Moreover, the 40 Chinese student pilots participating in this study had normal hearing and slightly over 200 h of flight experience, but none of them had experience in piloting a commercial jet. Hence, the pilots who were trained abroad may not have had a good knowledge of the radiotelephony language spoken in Mandarin.
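As a side note to the reliability analysis reported above, Cronbach's α can be computed directly from a subjects-by-lists score matrix. A minimal sketch with hypothetical toy scores, not the study data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    k = scores.shape[1]                           # number of items (lists)
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Toy data: 5 subjects x 4 lists of word recognition scores (%).
scores = np.array([[60, 62, 58, 61],
                   [75, 73, 78, 74],
                   [40, 45, 42, 44],
                   [90, 88, 91, 89],
                   [55, 57, 54, 58]], dtype=float)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```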
As reported in this manuscript, the SD of the WRSs at the intensity levels that produced approximately 50% correct word recognition was over ±20% (58.79% ± 22.77%). This means that the individual scores varied over a range of approximately 90%, which is nearly the entire range of possible WRSs. Since all the subjects were audiometrically normal, these differences may be due to their fluency in and familiarity with the Mandarin radiotelephony language, which is directly related to flight experience. Thus, in view of the fact that flight experience can compensate for hearing loss to some extent, the decision to prevent an experienced pilot with hearing loss from flying should be reconsidered. It is concluded that the novel material is in line with the requirements for a suitable functional AFFD test. Further studies are needed to conduct the listening tasks (i.e., the 14 sentence lists) in real-world noise environments to establish the auditory pass/fail criteria for the CAAC.
2019-10-30T13:04:39.017Z
2019-11-05T00:00:00.000
{ "year": 2019, "sha1": "c402caf78513917319ab82ed9160b067b8ccbf82", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/cm9.0000000000000491", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7ef5250f435dab9c7284e043f40214389b9e2f5f", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Political Science" ] }
22930551
pes2o/s2orc
v3-fos-license
Mitochondrial DNA variation reveals maternal origins and demographic dynamics of Ethiopian indigenous goats Abstract The Horn of Africa forms one of the two main historical entry points of domestics into the continent, and Ethiopia is particularly important in this regard. Through the analysis of the mitochondrial DNA (mtDNA) d-loop region in 309 individuals from 13 populations, we reveal the maternal genetic variation and demographic dynamics of Ethiopian indigenous goats. A total of 174 variable sites that generated 231 haplotypes were observed. They defined two haplogroups that were present in all 13 study populations. Reference haplotypes from the six globally defined goat mtDNA haplogroups show the two haplogroups present in Ethiopia to be A and G, the former being the most predominant. Although both haplogroups are characterized by an increase in effective population size (Ne) predating domestication, they have also experienced declines in Ne at different time periods, suggesting different demographic histories. We observed seven haplotypes, six of which were directly linked to the central haplotypes of the two haplogroups while one was central to haplogroup G. The seven haplotypes were common between populations from Ethiopia, Kenya, Egypt, and Saudi Arabia, suggesting a common maternal history and the introduction of goats into East Africa via Egypt and the Arabian Peninsula, respectively. While providing new mtDNA data from a historically important region, our results suggest extensive intermixing of goats mediated by human socio-cultural and economic interactions. These have led to the coexistence of the two haplogroups in different geographic regions in Ethiopia, resulting in a large caprine genetic diversity that can be exploited for genetic improvement. | INTRODUCTION Ethiopia is home to more than 29 million goats (FAOSTAT, 2014; accessed February 25, 2016), a large number of which are of indigenous types kept mainly for subsistence. They inhabit a wide range of habitats and production systems, ranging from the cool highlands to hot arid lowland environments (Abegaz, 2014). Based on their geographic location and associated ethnic community, Ethiopian indigenous goats are classified into 13 populations (see the DAGRIS database at http://www.dagris.info/countries/192/breeds?page=2). These have been further categorized into four family groups based on geographic location, and two production systems across three agro-ecological zones (FARM-Africa, 1996). From the analysis of microsatellite markers, Tesfaye (2004) regrouped the 13 populations into eight types, but with low bootstrap support (<50%). In line with the low bootstrap support, STRUCTURE analysis failed to resolve the eight groups adequately and showed a high level of admixture. Despite the lack of clarity on the classification of Ethiopian indigenous goats, a large gene pool of autosomal genetic diversity occurs in the country, which can provide the raw material to support breeding programs for the indigenous stocks. Naderi et al. (2007, 2008) suggested the high worldwide mtDNA diversity of domestic goats to be the result of a single domestication coupled with the management of wild and semidomesticated individuals carrying diverse mtDNA lineages, followed by geographic dispersion and the subsequent extinction of some lineages. Globally, haplogroup A has the largest geographic distribution (Pereira, Pereira, Van-Asch, Bradley, & Amorim, 2005). Haplogroup B occurs in eastern and southern Asia, including Mongolia, and at low frequencies in South Africa and Namibia.
Haplogroup C occurs at low frequencies in Mongolia, Switzerland, Slovenia, Pakistan, and India, while haplogroup D occurs only in Pakistani and Indian local goats. Haplogroup F is exclusive to Sicily, while haplogroup G has been observed in Turkey, Iran, Iraq, Saudi Arabia, Kenya, and Egypt. All six haplogroups occur in the wild ancestor, Capra aegagrus, suggesting that domestication happened across southwest Asia (Naderi et al., 2007, 2008). Archeological findings show that goat domestication occurred around 10,500 years ago between the Zagros Mountains and the Fertile Crescent (Zeder, 2008; Zeder & Hesse, 2000). The analysis of mtDNA genomes (Doro et al., 2014; Nomura et al., 2013) side by side with that of the d-loop region shows congruent clustering patterns, suggesting a complex domestication process. Archeological evidence has shown that the Horn of Africa, and in particular Ethiopia, played a critical role in the history of dispersal of various domestic plant and animal species into and out of the continent (Gifford-Gonzalez & Hanotte, 2011; Oliver, 1983). In spite of this, the majority of the studies performed so far on indigenous goats have lacked samples from the region. On the other hand, various studies have shown that socio-anthropological (human movements, cultural exchanges, war, etc.) and natural (droughts, floods, etc.) events have contributed to the geographic dispersion and intermixing of different livestock species (Girma, 1988; Yilma, 1967), and they might have shaped the genetic landscape of indigenous domestic stocks in the region. In this study, we analyzed mtDNA d-loop sequences to investigate the within- and between-population maternal genetic variation and diversity, and the demographic dynamics, of indigenous goats in Ethiopia. | Sampling and DNA extraction A total of 309 blood samples representing 13 Ethiopian indigenous goat populations were collected from farmers' flocks and used for the study. During sampling, all efforts were made to avoid closely related individuals. Genomic DNA was extracted from the blood samples following Shinde, Gujar, Patil, Satpute, and Kashid (2008). | PCR amplification and sequencing The entire mtDNA d-loop region (1,061 bp) was amplified using nested PCR (Table S1). The PCR reactions were carried out in 20-μl reaction volumes made up of the AccuPower® PCR Premix (Bioneer, Daejeon, Korea), 0.2 μM of each primer, 1.5% Hi-Di™ formamide (Applied Biosystems, USA), 0.005 mg of bovine serum albumin (Thermo Scientific), and 50 ng of template DNA. A two-stage touchdown PCR was performed, involving an initial denaturation at 95°C for 3 min, followed by a first stage of five amplification cycles (denaturation at 90°C for 10 s, annealing at 58°C for 40 s, and extension at 72°C for 30 s) and a second stage with the same profile but 30 cycles of amplification and an annealing temperature of 53°C. A final extension step at 72°C for 7 min completed the PCR. The PCR products were purified using the QIAquick® PCR Purification Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. The purified products were sequenced using the BigDye Terminator v3.1 Cycle Sequencing Chemistry (Applied Biosystems) and the ABI Prism 3130XL automatic capillary sequencer (Applied Biosystems, USA) following the manufacturer's protocols.
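The touchdown cycling program described above can be captured as structured data, which makes simple run-time checks easy. A minimal Python sketch, not a vendor thermocycler format; the step names and the run-time helper are illustrative only:

```python
# Two-stage touchdown PCR program as described in the text; times in seconds.
PCR_PROGRAM = [
    {"step": "initial denaturation", "temp_c": 95, "time_s": 180, "cycles": 1},
    {"step": "stage 1 (denature/anneal/extend)",
     "temps_c": (90, 58, 72), "times_s": (10, 40, 30), "cycles": 5},
    {"step": "stage 2 (denature/anneal/extend)",
     "temps_c": (90, 53, 72), "times_s": (10, 40, 30), "cycles": 30},
    {"step": "final extension", "temp_c": 72, "time_s": 420, "cycles": 1},
]

def total_runtime_minutes(program) -> float:
    """Sum the programmed hold times (temperature ramping is not modeled)."""
    total = 0
    for stage in program:
        times = stage.get("times_s") or (stage["time_s"],)
        total += sum(times) * stage["cycles"]
    return total / 60.0

print(f"Programmed hold time: {total_runtime_minutes(PCR_PROGRAM):.1f} min")
```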
KEYWORDS: Bayesian skyline plot, genetic diversity, haplogroups, haplotypes, population expansion

| Data analysis

For all the analyses undertaken here, the default values and parameters inherent in the algorithms and software were used, and only deviations from the defaults are mentioned. Prior to analysis, all the chromatograms were visualized with the CLC Workbench 7.0.4 (CLC Bio-Qiagen). The sequence fragments were edited manually using MEGA 6 (Tamura, Stecher, Peterson, Filipski, & Kumar, 2013) to correct possible base-calling errors. Multiple sequence alignments were performed using Clustal Omega (Sievers et al., 2011), and variable sites were scored against the C. hircus reference sequence (GenBank accession number GU223571). We generated 309 sequences and determined the haplotypes with DnaSP v5 (Librado & Rozas, 2009). The level of genetic diversity, determined as the number of haplotypes, haplotype diversity, nucleotide diversity, and mean number of nucleotide differences between haplotypes, together with their standard deviations, was computed for each population and across all populations using Arlequin 3.5 (Excoffier & Lischer, 2010). To visualize the genetic relationships between individuals and populations, a phylogenetic tree was constructed using all the haplotypes generated in Ethiopian goats with the neighbor-joining (NJ) algorithm implemented in MEGA6. The level of confidence associated with each bifurcation was evaluated with 1,000 bootstrap replications. To obtain further insights into the genetic relationships between the haplotypes and determine the number of distinct mtDNA d-loop haplogroups present in the dataset, a median-joining (MJ) network (Bandelt, Forster, & Röhl, 1999) was constructed using Network v4.6 (www.fluxus-engineering.com). All the mutations and character states were weighted equally. To visualize the variation in Ethiopian goats in the context of global caprine diversity, 229 sequences of domestic goats from 20 countries representing the six globally defined mtDNA d-loop haplogroups (Naderi et al., 2007, 2008) and four haplotypes of C. aegagrus (GenBank accession numbers AJ317864-AJ317867) were retrieved from GenBank (Table S2) and included in the NJ tree and MJ network analysis. As the reference haplotypes representing the six haplogroups are defined based on the variation in the first hypervariable region (481 bp) of the d-loop (Luikart et al., 2001; Naderi et al., 2007, 2008), the haplotypes generated in Ethiopian goats were first truncated to 481 bp and then used in the construction of the NJ tree and MJ network. The HV-I region corresponds to positions around 15,190 bp of the C. hircus mtDNA reference sequence (GenBank accession number GU295658). To partition genetic variation among populations and groups of populations, analysis of molecular variance (AMOVA) was performed following 1,000 permutations in Arlequin v3.5. The analysis was limited to Ethiopian goats, and various hierarchical clusters were tested, viz. (i) assuming no clusters in the dataset, (ii) between the three groups of populations as defined by FARM-Africa (1996), and (iii) between the population groupings revealed by the NJ tree and MJ network. Phi (ϕ) statistics representing haplotype correlations at various hierarchical levels (ϕCT, ϕSC, ϕST) were calculated.
Levels of significance of the variance components associated with the hierarchical clusters were evaluated with 1,000 nonparametric bootstrap coalescent simulations in Arlequin v3.5. The historical dynamics and demographic profiles of each population and haplogroup were inferred from mismatch distribution patterns (Rogers & Harpending, 1992). The chi-square test of goodness of fit and Harpending's raggedness index "r" (Harpending, 1994) were used to evaluate the significance of the deviations of the observed sum of squared differences (SSD) from the simulated model of expansion (demographic or spatial), following 1,000 coalescent simulations. To complement the mismatch distributions, Fu's F_S (Fu, 1997) and Tajima's D (Tajima, 1989) statistics were also computed.

| mtDNA sequence variation and genetic diversity

Three hundred and nine sequences spanning the entire 1,061 bp of the caprine mtDNA d-loop were generated. The sequences have been deposited in GenBank under accession numbers KY747687-KY747993. Following their alignment against the caprine reference sequence (accession no. GU223571), 174 variable sites (165 transitions, six transversions, and three InDels) were observed. These defined 231 haplotypes (Table 1), of which 22 were shared by at least two populations (Table S3). All 13 populations showed high levels of maternal genetic diversity (Table 1). The number of haplotypes ranged between 12 (in Agew) and 30 (in Afar). The lowest level of haplotype diversity (0.9500 ± 0.037) was observed in Keffa, while the highest (1.0000 ± 0.020) was observed in the Short-eared Somali, Hararghe Highland, and Woyto-Guji. The nucleotide diversity ranged from 0.0143 ± 0.0019 in Afar to 0.0180 ± 0.001 in Abergelle.

| Population phylogenetic analysis

We used the HV-I (481 bp) sequences of Ethiopian goats and 229 HV-I haplotypes retrieved from GenBank, representing the six main haplogroups observed in goats, to construct a NJ tree to assess genetic relationships. The NJ tree revealed three well-resolved clusters; two were specific to Ethiopian indigenous goats and the third clustered together the haplotypes representing haplogroups B, C, D, F, and the wild capras (Figure 1). To obtain further insights into the phylogenetic relationships and put the Ethiopian goats in the context of global caprine diversity, we included in the dataset used for NJ analysis sequences from Egyptian, Saudi Arabian, Iranian, Iraqi, Pakistani, and Nigerian goats and generated the MJ network. As expected, this analysis also revealed two clusters in Ethiopian goats, which were separated by 10 mutations (Figure 2). Both the NJ tree and the MJ network revealed that the two clusters in Ethiopian goats were part of the globally observed haplogroups A and G (Figure 2). Haplogroup A is the most common and included 185 haplotypes (80.1% of the total number of haplotypes), while haplogroup G comprised 46 haplotypes (19.9%). Neither of the two haplogroups was exclusive to a single population, geographic region, or production system (Figure S1). We also observed 137 median vectors on the MJ network.
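The diversity indices reported above can be reproduced outside Arlequin/DnaSP. A minimal sketch, with hypothetical toy sequences, of haplotype diversity Hd = n/(n-1)(1 - Σ p_i^2) and per-site nucleotide diversity π:

```python
from itertools import combinations

def haplotype_diversity(seqs):
    """Hd = n/(n-1) * (1 - sum of squared haplotype frequencies)."""
    n = len(seqs)
    counts = {}
    for s in seqs:
        counts[s] = counts.get(s, 0) + 1
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1.0 - sum_p2)

def nucleotide_diversity(seqs):
    """Average pairwise differences per site over all sequence pairs."""
    n, length = len(seqs), len(seqs[0])
    diffs = sum(sum(a != b for a, b in zip(s1, s2))
                for s1, s2 in combinations(seqs, 2))
    return diffs / (n * (n - 1) / 2) / length

# Toy aligned haplotypes (hypothetical data; equal lengths required).
seqs = ["ACGTACGT", "ACGTACGA", "ACGTTCGA", "ACGTACGT"]
print(f"Hd = {haplotype_diversity(seqs):.3f}, "
      f"pi = {nucleotide_diversity(seqs):.4f}")
```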
| Population genetic structure

AMOVA analysis incorporating the 13 populations, assuming no hierarchical clusters as well as the three groups proposed by FARM-Africa (1996), showed that 97% of the total genetic variation present in Ethiopian indigenous goats occurred within individuals; less than 2% of the variation was due to genetic differences between populations, and less than 1% could be explained by genetic differences between groups of populations (Table 2). Performing AMOVA taking into account the results of the NJ tree and MJ network revealed that 59.11% of the genetic variation occurred within the two haplogroups, while 40.89% was explained by genetic differences between haplogroups A and G (Table 2).

| Population and historical demographic dynamics

We assessed mismatch distribution patterns for each population and for the two haplogroups revealed by the NJ tree and MJ network (Figure 3) to elucidate the demographic dynamics of Ethiopian indigenous goats. The mismatch distribution patterns for each population were bimodal, and the observed pattern did not deviate significantly from that expected under a null hypothesis model of either spatial or demographic expansion, except for Abergelle (Table 3). The variations around the curves were also not significant, except for Agew (Table 3). A bimodal pattern of mismatch distributions, with the observed pattern not deviating significantly from the expected, was also observed for the global dataset incorporating the 13 Ethiopian populations and for the two haplogroups, respectively (Figure 3 and Table 3). These results were supported by Tajima's D and Fu's F_S statistics; both were negative and significant for each population (with the exception of Abergelle and Gondar, whose Tajima's D was negative but not significant), for the global dataset of 13 populations, and for the two haplogroups. The bimodal peaks observed in the two haplogroups were surprising and unexpected. We therefore counterchecked all the sequences against their respective chromatograms for base-calling errors. The sequences turned out to be correct and there was no mix-up.

[Table 1: Maternal genetic diversity of 13 Ethiopian goat populations from the analysis of the HV-I region of the mtDNA d-loop. The accompanying legend lists the sources of the reference haplotypes: haplogroup B (Naderi et al., 2007; Mongolia, Luikart et al., 2001; China, Liu et al., 2006); haplogroup C (India, Joshi et al., 2004; Switzerland, Luikart et al., 2001; Spain, Naderi et al., 2007; China, Liu et al., 2006); haplogroup D (India, Joshi et al., 2004; Austria, Naderi et al., 2007; China, Liu et al., 2005); haplogroup F (Sicily, Sardina et al., 2006); haplogroup G (Iran, Turkey, and Egypt, Naderi et al., 2007).]

The data of Kibegwa et al. (2015) also appear to show bimodal peaks for haplogroups A and G in Kenyan goats. Taken together, these results suggest either a spatial and/or a demographic expansion for the Ethiopian indigenous goats and the two haplogroups, respectively. To obtain a better resolution of the demographic history and profile of Ethiopian goats, we modeled changes in maternal effective population size (Ne) over time by generating BSPs for the two haplogroups (Figure 4a,b). They reveal an increase in Ne from around 55,000 and 21,500 YBP for haplogroups A and G, respectively. This increase is followed by a gradual decline in Ne from around 5,000 and 1,500 YBP, respectively, which continues to date for each haplogroup.
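The mismatch distributions discussed above are simply histograms of pairwise sequence differences. A minimal sketch with hypothetical toy data, not the study sequences; bimodality of the kind described here arises when two divergent haplogroups are pooled in one sample:

```python
from itertools import combinations
import numpy as np

def mismatch_distribution(seqs):
    """Histogram of pairwise nucleotide differences between sequences."""
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    return np.bincount(diffs)  # index = number of differences, value = count

# Toy data: two pairs of similar sequences from two divergent groups.
seqs = ["ACGTACGT", "ACGTACGA", "TTGTACGA", "TTGTTCGA"]
print(mismatch_distribution(seqs))
```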
| DISCUSSION

Although evidence indicates that the Horn of Africa was a gateway for various domesticates into the African continent (Hassan, 2000; Newman, 1995; Wetterstrom, 1993), most studies performed so far on indigenous goats have lacked samples from the region. To portray the genetic relationships among Ethiopian goats, we used the 231 haplotypes to construct a NJ tree (Figure 1) and MJ network (Figure 2). The clustering pattern revealed two well-supported haplogroups with no phylogeographic structure. The incorporation of reference haplotypes revealed them to be haplogroups A and G (Naderi et al., 2007, 2008). AMOVA showed that the two haplogroups accounted for 40.89% of the maternal genetic variation in Ethiopian goats. This provides further support for the genetic distinction of the two haplogroups, suggesting the introduction and presence of at least two distinct genetic groups of goats in the Horn of Africa. The two haplogroups could have been introduced from different geographic domestication areas, as Nomura et al. (2013) showed that their divergence occurred prior to domestication. We also observed a high number of median vectors (n = 137), exceeding those observed in other populations (Amills et al., 2008; Chen et al., 2005; Joshi et al., 2004; Luikart et al., 2001; Naderi et al., 2007, 2008; Sultana, Mannen, & Tsuji, 2003). This large number of median vectors may likely be an inherent feature of the Ethiopian indigenous goats, because we observed a large number of nodes (n = 192) and edges (n = 335), which represent subpopulations and population subdivisions, respectively (Huson & Bryant, 2006), in a phylogenetic tree that we constructed for the 13 populations using autosomal SNP markers (Mekuriaw GM, Liu B, Osama S, Zhang W, Tesfaye K, Dessie T, Mwai AM, Djikeng A, Mwacharo JM, unpublished data).

[Table 2: Results of AMOVA based on the analysis of the HV-I region of the mtDNA d-loop in 13 Ethiopian goat populations.]

These results demonstrate not only the presence of high genetic variation in Ethiopian indigenous goats but also a likely complex maternal genetic history. The lack of phylogeographic structure appears to be a common feature of domestic goats; it has been observed in a worldwide dataset (Luikart et al., 2001; Naderi et al., 2007, 2008), in the Indian subcontinent (Joshi et al., 2004; Sultana et al., 2003), and in China (Chen et al., 2005). The ease of transporting goats, their use as items of trade and sociocultural exchange (to strengthen friendship and family bonds/ties), and their inherent ability to adapt to a diverse range of production and ecological environments, relative to, for instance, cattle, have been used to explain their lack of phylogeographic structure and high level of genetic diversity. Haplogroup A is the most diverse and has the widest geographic distribution across Ethiopia and the world (Naderi et al., 2007, 2008). Naderi et al. (2008) suggested that it originated from Eastern Anatolia. Haplogroup G has been observed in Turkey, Iran, Saudi Arabia, and Egypt, and Naderi et al. (2007) suggested that it originates from Iran (Northern and Central Zagros). Both haplogroups (A and G) have been observed in Egypt (Naderi et al., 2007), one of the historical entry points of domesticates into the African continent, and recently in Kenya (Kibegwa et al., 2015).
Given that the earliest archeological evidence for the presence of domestic goats in Africa dates to 5000 BC in North Africa, that is, Egypt, Libya, and Algeria (Hassan, 2000), it is likely that the two haplogroups arrived at their earliest in Egypt following terrestrial routes crisscrossing the Sinai Peninsula, Red Sea Hills, and Mediterranean Sea Coast (Hassan, 2000).

[Figure 3: Mismatch distribution patterns for each of, and across, the 13 Ethiopian goat populations analyzed in this study and for the two haplogroups revealed by the NJ tree and MJ network analysis.]

Following their arrival in Egypt, archeological evidence indicates that, together with sheep, goats dispersed southwards into Sudan and Ethiopia following the Nile river basin (Chaix & Grant, 1987; Clutton-Brock, 2000). The fact that we observed two haplotypes (H128 and H133) of haplogroup A shared between Ethiopian and Egyptian goats, two (H164 and H166) shared between Ethiopian and Kenyan goats, and one (H102) shared between Ethiopian and Saudi Arabian goats (Figure 2) supports a common maternal history and the introduction of goats into East Africa via Egypt and the Arabian Peninsula. We observed a bimodal pattern of distribution of mismatches in each of, and across, the 13 populations of Ethiopian indigenous goats. This result appears to suggest the likely expansion of the two haplogroups into Ethiopia, as they are found in each of the populations analyzed. However, this may not be the case. A separate analysis of the two haplogroups also revealed the bimodal pattern, suggesting the existence of large variation within the haplogroups. Colli et al. (2015) found at least seven subhaplogroups (A1-A7) within haplogroup A. Although not as distinct as we observe in our dataset, the data of Kibegwa et al. (2015) also appear to show bimodal peaks for haplogroups A and G in Kenyan goats, and Chen et al. (2005) also observed a bimodal peak for haplogroup B in Chinese goats. Furthermore, the BSP analysis interestingly indicates that the expansion of the two haplogroups predates the time period of goat domestication, a finding that was also reported by Nomura et al. (2013) and Colli et al. (2015). Their introduction alone into Ethiopia is therefore not sufficient to explain the bimodal patterns. In our opinion, an alternative interpretation would be that the bimodal patterns indicate, in general, two independent expansion events of goats into Ethiopia and, most likely, the wider Horn of Africa region. The expansion depicted by the first peak could correspond to the initial introduction of goats to the region from either Egypt and/or the Arabian Peninsula, and the second peak could represent the secondary dispersal of goats, through trade and socio-cultural interactions, within and across Ethiopia and the region at large. This secondary dispersal most likely contributed to the geographic intermixing of the two haplogroups. Indeed, molecular genetic evidence has revealed the absence of phylogeographic structure among Ethiopian ethnic communities (Christopher, 2011; Pagani et al., 2012) and indigenous cattle populations (Dadi et al., 2008; Edea et al., 2013). This has been attributed to past and recent extensive human movements, as supported by historical, social, and anthropological evidence (Habitamu, 2014; Mpofu, 2002; Yilma, 1967). The BSPs revealed a reduction in Ne beginning around 5,000 and 1,500 YBP for haplogroups A and G, respectively, suggesting different demographic histories for the two haplogroups.
The timing of these events seems to suggest that the decline in Ne for haplogroup A started prior to its arrival in Ethiopia, while that of haplogroup G started when it had already arrived in the country. While the decline in haplogroup A can be attributed to the bottleneck created by the introduction of a small number of individuals of the original genetic stock (Bruford, Bradley, & Luikart, 2003), that of haplogroup G may have been driven by the rinderpest pandemic of the 1800s (Blench, 1993; Payne & Hodges, 1997) and a series of severe droughts and political upheavals (Verschuren, Laird, & Cumming, 2000) that occurred in the wider Horn of Africa region. The latter could also have affected haplogroup A. A similar decline in Ne dating to the same time period has also been observed in the East African shorthorn zebu cattle from western Kenya using SNP genotype data (Mbole-Kariuki et al., 2014).

CONCLUSIONS

We observed a high level of maternal genetic diversity in Ethiopian goat populations, which was explained by 231 haplotypes that defined two haplogroups (A and G) lacking a clear phylogeographic structure. As observed in other populations, haplogroup A was the most diverse and geographically widespread. Human-mediated translocations through commercial trading, socio-cultural exchanges, and seasonal migrations in search of forage and water resources could explain the lack of phylogeographic structure. The initial introduction of the two haplogroups and their subsequent intermixing has created a treasure-trove of caprine genetic diversity that can be exploited in breeding programs aimed at improving the species.

ACKNOWLEDGMENTS

We extend special thanks to flock owners and the district agricul-
Structural Propensities of Human Ubiquitination Sites: Accessibility, Centrality and Local Conformation

The existence and function of most proteins in the human proteome are regulated by the ubiquitination process. To date, tens of thousands of human ubiquitination sites have been identified in high-throughput proteomic studies. However, the mechanism of ubiquitination site selection remains elusive because of the complicated sequence pattern flanking the ubiquitination sites. In this study, we perform a systematic analysis of 1,330 ubiquitination sites in 505 protein structures and quantify the significantly high accessibility and unexpectedly high centrality of human ubiquitination sites. Further analysis suggests that the higher centrality of ubiquitination sites is associated with their multi-functionality, among which protein-protein interaction sites are common targets of ubiquitination. Moreover, we demonstrate that ubiquitination sites are flanked by residues with non-random local conformation. Finally, we provide quantitative and unambiguous evidence that most of the structural propensities contain specific information about ubiquitination site selection that is not represented by the sequence pattern. The hypothesis that ubiquitination site selection operates at the structural level is therefore substantially supported.

Introduction

The fate of many eukaryotic proteins is controlled by the ubiquitination process [1,2], in which a targeted protein is conjugated with small protein ubiquitins that are organized as either monomers or polymer chains of certain topology [3]. The information embedded in the conjugated ubiquitins is generally deciphered by the ubiquitin binding domains [4], such that the degradation, localization or interaction of the targeted protein is regulated accordingly [5]. Human protein ubiquitination has also been reported to be associated with a number of diseases like Huntington's disease [6], breast cancer [7] and acquired immune deficiency [8]. Despite early awareness of a wide range of biological processes regulated by ubiquitination [9], only with recent breakthroughs in proteomic techniques could the widespread ubiquitination sites (Ubsites) in the human proteome be extensively characterized in large-scale studies [10-14]. These experiments have revealed unique features of Ubsites in comparison with other post-translational modification (PTM) sites. On the one hand, in addition to the topology of ubiquitin chains, the selection of which lysines in the substrate protein are to be ubiquitinated is non-trivial. The amino acid pattern in the context (i.e. the flanking sequences) of human Ubsites appears to be discernible [10,12,13] and has been exploited to predict human Ubsites with acceptable accuracy [15-17]. On the other hand, in contrast to the primary hypothesis of ubiquitination motifs [18,19], which are in analogy to the phosphorylation motifs that determine phosphorylation site specificity, human Ubsites exhibit noticeable variability during evolution and characteristic ubiquitination motifs are hard to find [10,13]. Altogether, these results have motivated us to investigate the preferences of human Ubsites from an alternative and potentially insightful, structural perspective. Large-scale computational structural analyses can provide valuable insights into the underlying mechanisms and functional impacts of PTMs. Such analyses have become feasible with the rapid growth of protein 3D structural data.
For example, an extensive analysis of phosphorylation sites revealed distinguishable amino acid preferences in their structural neighbors [20]. Based on calculations of binding energy change, the stronger influence of phosphorylation on the formation and stability of transient protein complexes was closely investigated and quantified [21]. Through the comparison of multiple structures of the modified proteins, significant influences of PTMs on protein conformational dynamics were discovered [22]. Despite the aforementioned success of computational structural analyses of other types of PTMs, little knowledge about Ubsites has been gained from the substrate structure. To the best of our knowledge, Catic et al. carried out the only pioneering study to investigate yeast Ubsites in protein structures. They observed higher solvent accessibility and a preference for random coil among yeast Ubsites using a small set of 23 protein structures [18]. However, further quantification and extensive validation of these observations were prevented by the limited amount of data at that time. Instead, we are encouraged by a recent study showing that human Ubsites, unlike their yeast counterparts [23], can be frequently mapped to structured domains [24]. In this study, we have performed a systematic analysis of 1,330 human Ubsites in 505 PDB chains. First, our analysis confirms and further quantifies the higher accessibility of human Ubsites with various parameters like the relative accessible surface area (RSA) and the protrusion index. Second, our results suggest that centrality emerges as a novel trait of Ubsites, and we extensively analyze and discuss its implication for the wide functional associations of Ubsites. Third, we compare the information included in the sequence context and the structural microenvironment in detail. Finally, we demonstrate the complementary relationship between the sequence pattern and the structural propensities in discriminating Ubsites from non-ubiquitination sites (Non-Ubsites).

Dataset

The human Ubsites identified from five recent proteomic assays [10-14] were mapped onto the UniProt [25] protein sequences (release 2012_09). To achieve high confidence, only Ubsites identified by at least two experiments were retained. Moreover, this dataset was further enriched by including the human Ubsites manually curated from the literature by UniProt [25], Hagai et al. [26] and our group [16]. Lysine residues that have not been annotated by any of the aforementioned five proteomic assays or through literature search were initially treated as Non-Ubsites. The Non-Ubsite data were further filtered against the Ubsites collected by the PhosphoSitePlus® database [27] (http://www.phosphosite.org). The Ubsites and Non-Ubsites were further mapped onto the structures in PDB (http://www.pdb.org) to obtain their structural information. Redundant (sequence identity > 50%), mutant or low-resolution (worse than 4.0 Å or missing all side-chain atom coordinates) PDB chains were discarded. We also restricted the retained PDB chains to have at least one Ubsite and one Non-Ubsite. Thus, PDB chains that contain Ubsites only (e.g., the ubiquitin itself) were also excluded. As a result, 1,330 Ubsites and 5,465 Non-Ubsites were mapped onto the 505 PDB structures (Table S1), which cover 151 folds and 229 families according to the latest SCOP [28] annotations.
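These criteria amount to a simple row filter; a minimal sketch in R follows, in which the `chains` data frame and its column names are hypothetical stand-ins for the mapping table described above:

# Minimal sketch of the structure-filtering criteria (hypothetical columns)
library(dplyr)

filtered <- chains %>%
  filter(resolution <= 4.0,          # drop low-resolution structures
         !is_mutant,                 # drop mutant chains
         max_seq_identity <= 50,     # drop redundant chains (>50% identity)
         n_ubsites >= 1,             # require at least one Ubsite ...
         n_nonubsites >= 1)          # ... and at least one Non-Ubsite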
To facilitate the analyses, we further established the numbering correspondence between the residues in the PDB chains and those in the UniProt sequences, and removed the unmapped residues (e.g. protein expression tags and alternatively spliced regions). In the case of alternative conformations of the same residue or multiple structure models of the same chain (i.e. the case of 68 chains solved by NMR), only the first one was kept. We noted that 17 Ubsites and 53 Non-Ubsites in the NMR structures exhibit large conformational flexibility (i.e. average Cα RMSD > 5.0 Å). It is possible that these residues are in a disordered state and thus may not be suitable for the structural analysis. However, because these residues comprise only a small fraction (about 1%) of our dataset, our conclusions are unlikely to change if these highly flexible residues are removed. The hydrogen atoms were removed to avoid confusing some of the analytical programs used in this study. We also noted that some modified residues that were presented as HETATM records in the PDB files could be ignored by some analytical programs. Thus, we restored these modified residues to their unmodified ATOM records following the guidance of PDB annotations.

Statistical Tests

Unless stated otherwise, the Wilcoxon test and Fisher's exact test were used for two-sample value comparisons and enrichment tests, respectively. We also report the effect size r for the Wilcoxon test to estimate the amplitude of the difference between two samples. An r value around -0.1 indicates a small but observable difference. All statistical tests were performed in R (http://www.r-project.org).

Accessibility Calculation and Residue Contact Network Analysis

The RSA was calculated by the NACCESS software (http://www.bioinf.manchester.ac.uk/naccess/) with an upper-bound value of 100. We further introduced 918 acetylation sites collected from the PhosphoSitePlus® database [27] as the positive control in the RSA analysis. Two alternative accessibility parameters, i.e. the protrusion index CX and the depth index DPX [29], were calculated using PSAIA [30]. In this calculation, each atom in a residue is assigned one pair of CX and DPX values. We chose the maximum CX value and the average DPX value for a residue to depict its protrusion and depth, due to their higher discriminative power (alternative choices do not affect the conclusion; see Figure S1A and B). In a Residue Contact Network (RCN), a pair of contacting residues is depicted as two nodes connected by an edge. The RCN was constructed by defining two residues as a contacting residue pair if the distance between their Cβ atoms (Cα for glycine) was less than 7.5 Å [31]. We also validated the results using an alternative definition of residue contact, where two residues were considered a contacting pair if the distance between any two atoms from each residue was smaller than 4.0 Å [32]. Two key network topology parameters, degree and closeness centrality, were extracted from the networks using the igraph package [33] in R. In terms of topology interpretation, the degree of a node measures how many nodes are connected to it, while the closeness centrality depicts how few steps are required to move from one node to all other nodes throughout the network [33]. The physicochemical interpretation of these two parameters is more straightforward: high-degree residues are densely packed [34], and residues with high closeness centrality are located near the geometric center of a protein [35].
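A minimal sketch of this construction with the igraph package in R is shown below, assuming `cb` holds the Cβ (Cα for glycine) coordinates of one chain as an n x 3 matrix, one row per residue (the variable name is an illustrative assumption):

# Minimal sketch: residue contact network from C-beta coordinates
library(igraph)

build_rcn <- function(cb, cutoff = 7.5) {
  # contact if the pairwise C-beta distance is below the cutoff
  adj <- 1 * (as.matrix(dist(cb)) < cutoff)
  diag(adj) <- 0                      # no self-contacts
  graph_from_adjacency_matrix(adj, mode = "undirected")
}

rcn <- build_rcn(cb)
deg <- degree(rcn)       # local packing density of each residue
clo <- closeness(rcn)    # proximity to all other residues (centrality)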
To validate the closeness centrality, we also calculated the distance from each Ubsite/Non-Ubsite to the protein geometric center. See Text S1 for the detailed calculation.

Functional Site Annotations

The catalytic sites were assigned by the Catalytic Site Atlas database [36]. The POCKET software [37] was utilized to perform ligand binding pocket prediction, and only the largest pocket in each structure was considered. We used a computational alanine scan method provided by the FoldX software [38] to measure the contribution of a lysine residue to protein folding (see also Text S1). Ideally, a folding hotspot residue can be identified if its mutation to alanine results in a significant energetic loss of the folded protein (ΔΔG > 2 kcal/mol). The protein complex structures were constructed according to the REMARK350 records in the PDB file (which describe how the monomer structure should be duplicated, moved and rotated to establish the complex structure). The 3D-complex database [39] was employed to guide the construction process. In total, 290 protein complex structures carrying at least one ubiquitination site were constructed. A residue was considered an interface residue if the difference in its solvent accessible surface area between the monomer state and the complex state (i.e. ΔASA) was larger than 5 Å². We further grouped the protein complexes according to their stability [40] and calculated the propensity of Ubsites being located on the interfaces for each group (see Text S1).

Secondary Structure, Structural Alphabets and Microenvironment

The eight-type secondary structure and the 22-state structural alphabet [41] were calculated by DSSP [42] and our in-house program, respectively. The structural alphabet is a classification of protein local conformation states based on the κ and α angles formed by the neighboring Cα atoms [41]. Note that structural alphabet states "Y" and "A" were merged as suggested in the original work [41]. See Table 1 and Table 2 for the lists of secondary structure types and structural alphabet states, respectively. Using the TwoSampleLogo tool [43], we plotted logo illustrations that indicate the enriched and depleted residues, secondary structure types or structural alphabet states at each position in the context (i.e. the sequence neighbors). In addition to the context, the microenvironment (i.e. the structural neighbors) of a functional site may also exhibit distinguishable residue usage. One example is the case of enzyme catalytic sites [35]. In this study, we defined a three-shell microenvironment according to the Cβ distance from a central lysine to its neighboring residues: 0 to 7.5 Å for the first shell, 7.5 to 11.5 Å for the second shell and 11.5 to 15.5 Å for the third shell. The residue propensity in each shell is calculated as the residue's frequency in this shell divided by its frequency in the whole structure.

Analyzing the Ubiquitination Site Indicators via ROC Curve

We used the closeness centrality value and the CX value as the centrality indicator and accessibility indicator, respectively. The CX values were linearly scaled into the range of 0 to 1 for the comparison [35]. For other indicators like the sequence pattern, local conformation frequencies or residue propensities in the microenvironment, the likelihood scores were derived from either a Naïve Bayes model or a random forest model via five-fold cross-validation (see Text S1).
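As one concrete possibility, the random forest variant of this scoring scheme could look like the following sketch (R, randomForest package); the feature matrix `X`, the label factor `y` and the class name "Ub" are assumed placeholders, not the study's actual objects:

# Minimal sketch: out-of-fold likelihood scores via five-fold CV
library(randomForest)

set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(X)))   # five-fold split
score <- numeric(nrow(X))

for (k in 1:5) {
  test  <- folds == k
  model <- randomForest(X[!test, ], y[!test])
  # the predicted probability of the "Ub" class is the likelihood score
  score[test] <- predict(model, X[test, ], type = "prob")[, "Ub"]
}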
The receiver-operating characteristic (ROC) curves were plotted based on the indicators (propensity values and likelihood scores). We also plotted the ROC curves for the combination of indicators based on the combined scores. The combined scores are the sum of the parameter values and the likelihood scores with preliminarily optimized weightings (Table S2). The area under the ROC curve (AUC) was also calculated for the individual indicators and the combined scores, in order to measure their capability to discriminate Ubsites from Non-Ubsites. Intuitively, the higher the discriminative capability of an indicator, the larger the measured AUC. If two indicators strongly complemented each other, a significant increase in AUC would be observed when they were combined. The statistical significance of the difference between two AUC values was tested by DeLong's test from the pROC package [44] in R.

Higher accessibility and centrality of human ubiquitination sites

We started with the RSA analysis of Ubsites and Non-Ubsites. As can be seen in Figure 1A, the vast majority (92.8%) of Ubsites tend to be exposed to the solvent (with an RSA > 20%). A statistical test confirmed a distribution shift toward higher RSA for Ubsites compared with Non-Ubsites (p=2.9×10^-10). However, while some other PTM sites like phosphorylation sites exhibit a highly prominent discrepancy in accessibility compared with non-modified residues [45], the discrepancy between Ubsites and Non-Ubsites is not obvious at first glance (effect size r=-0.075). In contrast to phosphorylation substrate residues (S, T and Y), lysines are unlikely to be buried due to their charged nature. Thus, one may fail to observe a prominent discrepancy in RSA between Ubsites and Non-Ubsites, as the RSA of Non-Ubsites should also be high. In fact, Ubsites show a slightly higher RSA even compared with acetylation sites (another important type of lysine PTM [46]; 'Acesites' in Figure 1A; p=0.027), which further confirms the high accessibility of Ubsites. It has been previously observed, in a small set of 23 structures, that yeast Ubsites tend to be highly accessible [18]. Our results quantitatively consolidate this observation. Moreover, we found that the protrusion index CX and the depth index DPX could also discriminate Ubsites from Non-Ubsites. Ubsites tend to have remarkably higher CX (Figure 1B, p=9.9×10^-17, r=-0.10) and lower DPX (Figure S1C, p=3.1×10^-5, r=-0.049). These results imply that Ubsites are highly protruding and less buried, making them readily accessible to solvent and ubiquitination enzymes. We further analyzed the location of Ubsites utilizing the degree and closeness centrality parameters from RCNs. Our results indicated that Ubsites have lower degree (p=2.7×10^-7, r=-0.061) compared with Non-Ubsites, which is in agreement with their lower DPX. Unexpectedly, however, Ubsites show significantly higher closeness centrality compared with Non-Ubsites (p=3.0×10^-18, r=-0.10). This is a striking observation because closeness centrality shows a positive correlation with the degree parameter in our dataset (correlation coefficient=0.18, p<10^-50). The differences in degree and closeness centrality are also clearly reflected by the two-dimensional probability density maps (Figure 1C). A considerable fraction of Non-Ubsites are localized in the region of degree larger than 8, but this region is less favored by Ubsites.
The discrepancy is more significant for closeness centrality: Ubsites are aggregated in the region with closeness centrality of about 0.18, resulting in a holistically upper-shifted distribution compared with Non-Ubsites. The higher closeness centrality was confirmed with an alternative definition of residue contact (Figure S1D). The closeness centrality can also be explained as geometric centrality, that is, Ubsites prefer to be located closer to the geometric centers of proteins (p=1.5×10^-9, r=-0.072; Figure S2A). One may note that the absolute distance between a Ubsite/Non-Ubsite and the protein geometric center should be partly correlated with protein size. Nevertheless, after correcting for protein size, Ubsites still showed closer localization to the geometric centers of proteins (p=3.9×10^-7, r=-0.060; Figure S2B), confirming the higher centrality of Ubsites. As many protein functional sites also tend to be located at the geometric centers of proteins, centrality has further been shown to be indicative of a wide spectrum of protein functional sites [35,47,48]. Therefore, it is of particular interest to test whether Ubsites are associated with certain functional sites in the structures. We investigated the relationship between Ubsites and multiple functional sites, which is detailed in the next section.

Potential Functional Impacts of Ubiquitination Sites

Ubiquitination Sites and Enzyme Catalytic Sites.

We first examined the relationship between the enzyme catalytic sites and Ubsites, because the enzyme catalytic sites showed the strongest association with centrality among several types of functional sites [47]. We used both experimental and predicted catalytic sites from the Catalytic Site Atlas database [36], since the experimental ones are not always available. In this way, we assigned catalytic sites for 88 PDB chains (enzymes) in our dataset. Indeed, Ubsites are generally located closer to the catalytic residues (Cβ distance, p=0.0041, r=-0.044). Nevertheless, for the attached ubiquitin molecules to block a catalytic site directly, the distance between the Ubsite and the catalytic site must be sufficiently small. Accordingly, we set a Cβ distance cutoff of 11.5 Å (which is approximately the radius of ubiquitin) to define direct association. By this definition, only 31 Ubsites are directly associated with the catalytic residues, and they show no relative enrichment (Fisher's exact test, p>0.2). Similar results could be obtained if a more stringent cutoff of 7.5 Å was adopted (data not shown). Therefore, we conclude that direct association with the enzyme catalytic site is not likely the exclusive way for Ubsites to influence enzyme activities. Instead, some Ubsites may regulate enzyme activity in indirect fashions. We test this hypothesis in the next sub-section.

Ubiquitination Sites and Ligand Binding Sites.

In our dataset, 236 out of 505 PDB chains bind at least one ligand. However, the shortest distances between Ubsites and the ligands are not significantly smaller compared with Non-Ubsites (p>0.2). This result may be an underestimate, considering that ligands are not always present in the structures. To better understand this, we predicted the presence and location of ligand binding sites (i.e. the largest pocket) on each structure. However, no evidence of closer distances between Ubsites and ligand binding pockets was found (p>0.2).
Therefore, Ubsites are more likely to be associated with specific types of ligands only. Through a careful investigation, we found that Ubsites were located significantly closer to two types of ligands (Figure 2A), namely energy currency and electron carriers (e.g., ATP and NADP; p=5.2×10^-4, r=-0.15) and divalent metal ions (e.g., Zn²⁺; p=3.1×10^-4, r=-0.14). We have shown above that direct association between Ubsites and the catalytic sites is not widespread. By contrast, 52 Ubsites appear to be directly associated with these specific ligands (shortest distance < 11.5 Å), accounting for 28% of all ligand-associated Ubsites. As these ligands often act as enzyme co-factors in vivo [49], it is plausible that for some enzymes ubiquitination regulates activity via the regulation of co-factor binding, instead of direct blockage of the catalytic sites.

Ubiquitination Sites and Folding Hotspots.

Protein unfolding may be a prerequisite for ubiquitination-mediated protein degradation, because the degradation machinery, the 26S proteasome, has a narrow substrate translocation channel [50]. As a consequence, it is tempting to speculate that the conjugated ubiquitins themselves can induce protein unfolding to help the attached substrates pass through this narrow channel. A computational molecular simulation of a yeast protein supported this idea, showing that protein folding could be substantially disrupted upon conjugation with ubiquitin chains [51]. But whether ubiquitination tends to target residues important for folding stability (i.e. the folding hotspots) has not been tested. The computational alanine scan indicated no larger energy contribution for Ubsites; on the contrary, Ubsites have a lower energy contribution on average (ΔΔG, 0.55 kcal/mol vs. 0.60 kcal/mol, p=0.0042). Furthermore, Ubsites do not seem to favor folding hotspots: only 3.0% of Ubsites correspond to folding hotspots, while the fraction is slightly higher (3.6%) for Non-Ubsites. Nevertheless, it should be noted that in principle our results neither confirm nor refute the role of ubiquitin as a destabilizer of protein folding. Instead, the results highlight the potentially extensive functional impacts of ubiquitination, in which the folding hotspots targeted by ubiquitination represent only a small portion of the functional sites that may be influenced.

Ubiquitination Sites and Protein-protein Interaction Sites.

Overall, 170 out of 884 Ubsites in the protein complexes are located on the interfaces, but this fraction is only marginally higher compared with Non-Ubsites (p=0.039). This indicates that only a few subsets of complexes are relatively enriched for Ubsites on their interfaces. Similar to [21], we grouped the complexes into four groups (unstable, weakly stable, moderately stable and highly stable) based on their stability, and found that the interfaces of unstable complexes seem to be the most favorable target for ubiquitination (Figure 2B). However, this result is not statistically significant, probably because of the small sample size. The unstable complexes are usually maintained by transient protein-protein interactions, which are also likely to be regulated by other PTMs like phosphorylation [21]. Therefore, it is interesting to ascertain whether Ubsites tend to be located on the interface core (ΔASA > 85 Å²), where they could exert a strong regulatory effect.
We found that Ubsites are generally located on the rim of the interfaces (ΔASA < 25 Å²), even for the unstable complexes (Figure S3A). However, a noticeable subset of Ubsites instead favor the interface core of the unstable complexes (Figure S3A, yellow line). This phenomenon was not observed for Non-Ubsites (Figure S3B), indicating that Ubsites play at least a partial role in regulating the transient association of unstable complexes. By contrast, the interface cores of highly stable complexes seem to avoid being ubiquitinated (Figure S3A). This tendency can be attributed to the difficulty of dissociating these highly stable complexes to expose a ubiquitination substrate lysine on their interface core.

Multi-functionality of Ubiquitination Sites.

Taken together, the association between Ubsites and specific functional sites has been observed. Our results also complement the computational analyses of Ubsite function that were rooted in evolutionary conservation [26]. However, as shown in Figure 2C, Ubsites seem to influence various types of functional sites, which rarely overlap with each other in most cases. These results suggest a broad spectrum of functional sites that can be influenced by Ubsites. An example of the multi-functionality of Ubsites is showcased by farnesyl pyrophosphate synthase (PDB entry: 3N45). This dimeric enzyme catalyzes sequential reactions to produce farnesyl pyrophosphate [52]. The inhibition of this enzyme is of clinical significance, as its product serves not only as an intermediate for several metabolic pathways, but also as a substrate for a few PTMs like farnesylation [52,53]. Five Ubsites (LYS332, LYS123, LYS112, LYS210 and LYS352) are scattered across the enzyme's structure, and each has a distinct potential functional impact, either direct or indirect (Figure 2D). LYS332 is located at the bottom of the enzyme substrate pocket, at a close distance (5.5 Å) to the cofactor Mg²⁺ ions. LYS123 does not point to the substrate pocket, but stretches into the allosteric pocket and binds the allosteric inhibitor [53]. LYS112 lies in a densely packed region accompanied by two folding hotspots. Though it has only a moderate folding energy contribution itself, it may play a role in the communication between the two neighboring hotspots. LYS210 is on the dimer interface, but it is excluded from the interface core, like many other Ubsites in stable complexes. Finally, LYS352 is located away from the aforementioned typical functional sites in this structure. Instead, it appears to be a key component of the KEN motif that mediates protein degradation [54].

The Context and Microenvironment of Ubiquitination Sites

The context (sequence neighbors) and/or the microenvironment (structural neighbors) of a functional site often have specific sequence and structural preferences. It has been widely accepted that the sequence pattern in the context is the most distinguishable signature of Ubsites [12,13,15-17,23]. Figure 3A shows the sequence logo representation of the ±25 residues around Ubsites. As previously suggested [16], this sequence logo displays a concentrated distribution, in which residues in the ±6 range are much more discernible than those at more distal positions. Hydrophobic and small residues are favored in the proximity of Ubsites, while charged residues are under-represented. Nevertheless, it should be noted that these preferences are position-specific.
A detailed discussion of the characteristic sequence patterns can be found in our previous study [16]. What we would like to address here, however, is the structural propensities of the Ubsites' context. To address this, we first plotted the secondary structure logo of the context. This logo illustration does not show the centric distribution, and some proximal positions exhibit little secondary structure propensity (Figure 3B). It is therefore unexpected that even at distal positions like +22 to +25, there exist discernible secondary structure propensities. Moreover, because the eight-type DSSP secondary structure assignment [42] was applied here, we were able to identify more subtle details. Previous analyses suggested that coils were favored but helices were disfavored for yeast Ubsites [18]. Our results coincide with this observation, and further show that the most favored coil type is the highly curved coil (S). Besides, distinct types of helices also exhibit different propensities. While the α-helix (H) is widely depleted in the context, the 3₁₀-helix (G) is somewhat favored at the proximal positions of Ubsites (Figure 3B). The depiction of the structural propensity was enriched by introducing the structural alphabet. We plotted the 22-state structural alphabet logo in Figure 3C. Note that the structural alphabet correlates with, but does not necessarily coincide with, the secondary structure assignment. For example, Ubsites prefer a highly curved coil conformation (V), which is in good agreement with their favored secondary structure type (S). However, no depletion of helix can be observed at this position in the structural alphabet logo. The situation is more obvious for positions -1, +5 and +6. While each of these positions favors a specific structural alphabet state (Figure 3C), little secondary structure propensity can be identified at the corresponding positions (Figure 3B). Generally, this logo exhibits the most discrete distribution, which plausibly results from the neighborhood-dependent nature of the structural alphabet. We speculate that this trait may be efficiently utilized to further enhance the discriminative capability of the Ubsites' context. We test this possibility in the next section. In addition to the context, we defined a three-shell microenvironment for each Ubsite or Non-Ubsite. For each shell, the average amino acid propensities were calculated and plotted (Figure 4B to D). For comparison, we also plotted the average residue frequency of the proximal context (±6 residues; see Figure 4A). We observed that for the first shell, the residue propensities qualitatively agree well with the residue frequencies of the context (Figure 4A and B). Similar results were obtained for the second shell, with the exception of the enrichment of arginine (Figure 4C). The discrepancy between Ubsites and Non-Ubsites appears to be marginal for the second shell, and almost disappears for the third shell (Figure 4D). Therefore, the residue usage in the microenvironment of Ubsites appears to be distinguishable within the scope of the first two shells.

Sequence Pattern and Structural Propensities Are Complementary Indicators of Ubiquitination Sites

Structural Propensities Are Non-random Features of Ubiquitination Sites.

One may note that the differences between Ubsites and Non-Ubsites in the structural propensities are not intuitively prominent. However, this by no means implies that the structural propensities are uninformative.
Our two computational analyses based on 10,000 artificial samples (see Text S1) indicate that a difference is unlikely to be achieved by random feature values (Figure S4A) or induced by random noise (Figure S4B) when it meets a stringent p-value cutoff (i.e. p<5.0×10^-5). Therefore, most of the structural propensities should be considered non-random features of Ubsites. It is also worth mentioning that our estimation of the differences in the structural propensities is conservative, since there could be other PTM sites and undiscovered Ubsites annotated as Non-Ubsites in our dataset. For example, after removing Acesites and possible undiscovered Ubsites (the Non-Ubsites whose proximal context shares 50% or more sequence identity with that of any Ubsite), the difference between Ubsites and Non-Ubsites in CX could be further amplified (from p=9.9×10^-17, r=-0.10 to p=3.8×10^-25, r=-0.13). Thus, we expect the structural propensities to become even more useful as the knowledge of PTM sites becomes more complete.

Structural Propensities Are Complementary to Sequence Pattern.

We tested the complementary relationship between the sequence pattern and the structural propensities using ROC analysis. ROC analysis is frequently used for predictor assessment. Here, however, it was introduced for a distinct purpose (i.e. quantifying the complementary relationship), because we do not aim at developing a new Ubsite predictor in this work. Based on the ROC analysis, several structural propensities are suggested as moderate indicators of Ubsites, and they substantially complement the information embedded in the sequence pattern. We first assigned the likelihood score for ubiquitination according to the positional sequence pattern of the proximal context (±6 residues). This sequence pattern-derived likelihood score is the best single indicator of Ubsites in the current analyses (AUC=0.633; Figure 5), in agreement with previous conjectures and results [13-16,19]. We next generated the likelihood score based on the local conformation (structural alphabet) frequencies within the same range. This local conformation-derived likelihood score is a moderate indicator of Ubsites (AUC=0.562). Similarly, the residue propensities in the first two shells of the microenvironment could also help distinguish Ubsites, though the discriminative capability seemed to be limited according to the current ROC analysis results (Figure 5). More interestingly, the accessibility and centrality indicators achieved noticeable discriminative capability (AUC=0.573 and 0.576, respectively), in contrast to their relatively simple calculation formulae. Finally, the aforementioned six indicators, when combined, achieved a significant improvement in discriminative capability compared with the sequence pattern-derived likelihood score alone (AUC 0.673 vs. 0.633, DeLong's test, p=1.9×10^-13; Figure 5). These quantitative results highlight the complementary relationship between the sequence pattern and the structural propensities.

Structural Propensities Do Not Result from Sequence or Structural Redundancy.

Another concern about the observed structural propensities might arise from the de-redundancy criterion used to compile our dataset; that is, the 50% sequence identity cutoff might be too permissive to filter out redundant sequences and structures. Therefore, to further validate our results, we constructed two additional datasets using more stringent de-redundancy criteria.
For the first dataset, a 30% sequence identity cutoff was applied. The numbers of resultant chains, Ubsites and Non-Ubsites of this dataset are presented in Figure S5. Notably, such a strict identity cutoff did not result in a dramatic shrinkage of the sample size. In fact, we found that the sample size was largely preserved across a wide range of sequence identity cutoffs (Figure S5A), implying that most sequences in our main dataset (i.e. the dataset using the 50% sequence identity cutoff) are indeed non-redundant. Results based on this validation dataset indicate that our conclusions are not likely to be influenced by the alteration of the sequence identity cutoff. That is, Ubsites tend to have significantly higher accessibility and centrality, as measured by the protrusion index CX and the closeness centrality, respectively (p<10^-10; Figure S6A and B). According to the ROC analysis, the local conformation and the microenvironment also exhibit marginal but detectable differences, thereby facilitating the discrimination of Ubsites from Non-Ubsites (Figure S6C). As indicated by the highest AUC of the combined indicator (Figure S6C), the ROC analysis also validates the complementary relationship between the structural propensities and the sequence pattern. We further generated the second validation dataset by discarding redundant structures. We used the TM-align tool [55] to compare PDB chains through pair-wise structure alignments. If two PDB chains shared significant structural similarity (i.e. TM-score > 0.5), only one of them was retained. Note that this structural similarity cutoff ensures that most proteins in the second validation dataset do not share the same structural fold [56]. Not surprisingly, by applying this rigorous de-redundancy criterion, the sample size decreased considerably (Figure S5). Nevertheless, the main observations remained unchanged on this dataset (Figure S7). In summary, the observed structural propensities of Ubsites are unlikely to be artifacts caused by a specific de-redundancy criterion. We argue that 50% sequence identity is an acceptable threshold to reduce the redundancy while maintaining a sizable dataset that facilitates our comprehensive analyses.

Conclusions

The underlying mechanism of Ubsite selection has been a long-standing question. Thanks to the rapid growth of ubiquitination proteome data and protein structure information, we performed systematic analyses and demonstrated the structural propensities of Ubsites, which include accessibility, centrality and local conformation. Moreover, our analyses have revealed wide associations between Ubsites and multiple functional sites in the structures. Our quantitative analysis also clearly demonstrates that the structural propensities complement the sequence pattern to influence Ubsite specificity. Because most current Ubsite predictors rely solely on sequence-derived information, we anticipate that such a complementary relationship may be efficiently exploited to improve the performance of dedicated Ubsite prediction tools. Further, considering that some structural propensities and functional site associations observed in this study have rarely been tested for other PTM sites, we also expect that these propensities and associations will be further interrogated for other PTM sites in the future, in order to uncover the structural-level selection mechanisms of PTM sites.
Last but not least, we hope that our computational pipeline can be readily applied to analyze other types of functional sites and will prove useful for gaining comprehensive structural insights into these functional sites.

Supporting Information

Figure S1. The difference between Ubsites and Non-Ubsites in accessibility and centrality using alternative parameters. (A) Average protrusion index CX; (B) Maximum depth index DPX; (C) Average depth index DPX; (D) Closeness centrality in the residue contact networks (RCNs) generated using another definition of residue contact (i.e. two residues are considered a contacting pair if the distance between any two atoms from each residue is smaller than 4.0 Å).

Figure S6. Validation of structural propensities using a dataset with a 30% sequence identity cutoff. (A) Boxplots illustrating the difference between Ubsites and Non-Ubsites in the protrusion index CX. (B) Boxplots illustrating the difference in the closeness centrality. Note that the ranges of the whiskers (dashed lines) in all boxplots were doubled to avoid displaying too many outliers. (C) The ROC curves measuring the discriminative capability of the individual Ubsite indicators and their combination. The AUC values were calculated according to the structural propensities, the likelihood scores derived via five-fold cross-validation of the corresponding models, or their combinations (see Text S1 for details). For the combination, individual indicators were combined by a weighted summing scheme (see Table S2 for the weights). The combined indicator is significantly more powerful than the sequence pattern indicator alone (DeLong's test, p=1.2×10^-16). (TIF)

Figure S7. Validation of structural propensities using a dataset without structural redundancy. (A) Boxplots illustrating the difference between Ubsites and Non-Ubsites in the protrusion index CX. (B) Boxplots illustrating the difference in the closeness centrality. Note that the ranges of the whiskers (dashed lines) in all boxplots were doubled to avoid displaying too many outliers. (C) The ROC curves measuring the discriminative capability of the individual Ubsite indicators and their combination. The AUC values were calculated according to the structural propensities, the likelihood scores derived via five-fold cross-validation of the corresponding models, or their combinations (see Text S1 for details). For the combination, individual indicators were combined by a weighted summing scheme (see Table S2 for the weights). The combined indicator is significantly more powerful than the sequence pattern indicator alone (DeLong's test, p=2.3×10^-10). (TIF)
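For reference, the DeLong comparison of two correlated AUCs reported above (and in the legends of Figures S6 and S7) can be sketched with the pROC package; `labels`, `seq_score` and `comb_score` are assumed, precomputed vectors, not the study's actual objects:

# Minimal sketch: AUC comparison with DeLong's test
library(pROC)

roc_seq  <- roc(labels, seq_score)    # sequence pattern indicator
roc_comb <- roc(labels, comb_score)   # combined indicator

auc(roc_seq)    # e.g. ~0.633 for the sequence pattern alone
auc(roc_comb)   # e.g. ~0.673 for the combined indicators

# significance of the difference between the two correlated AUCs
roc.test(roc_seq, roc_comb, method = "delong")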
Pre-COVID-19 Social Determinants of Health Among Mexican Migrants in Los Angeles and New York City and Their Increased Vulnerability to Unfavorable Health Outcomes During the COVID-19 Pandemic

COVID-19 has disproportionately affected underrepresented minorities (URM) and low-income immigrants in the United States. The aim of the study is to examine the underlying vulnerabilities of Mexican immigrants in New York City (NYC) and Los Angeles (LA), their correspondence with area-level COVID-19 morbidity and mortality, and to document the role of trusted and culturally sensitive services offered during the pandemic through the Ventanillas de Salud (i.e. VDS, Health Windows) program. The study uses a mixed-methods approach including a cross-sectional survey of Mexican immigrants in LA and NYC collected in the Mexican Consulates at the onset of the pandemic, complemented with a georeferencing analysis and key informant interviews. Data suggested an increased vulnerability to COVID-19 given participants' reported health status, health care profile and place of residence, which coincided with the georeferencing analysis. The key informant interviews confirmed the vulnerability of this population and the supporting role of the VDS in helping immigrants navigate health systems and disseminating health information. Mexican immigrants had an increased vulnerability to COVID-19 at the individual, geographic and systemic levels. Trusted and culturally sensitive services are needed to overcome some of the barriers and risk factors that increase the vulnerability of URM and immigrant populations to COVID-19.

Supplementary Information: The online version contains supplementary material available at 10.1007/s10903-021-01283-8.

Introduction

The COVID-19 pandemic has disproportionately affected underrepresented minorities (URM) and low-income immigrants in the United States (U.S.) [1]. Studies based on geographic analyses document that COVID-19 case rates are higher in counties with a higher concentration of low-income and undocumented immigrant populations [2]. COVID-19 infection and mortality rates are also greater in counties and states with high Hispanic/Latino populations [3] and monolingual Spanish speakers [3-5]. Social determinants of health in Hispanic communities may be contributing to the disproportionate rate of infections and deaths [6]. In 2020, life expectancy in the U.S. declined by one year overall, but by two years for Hispanics [7]. The available evidence indicates that Hispanic immigrants in the US are a vulnerable population disproportionately affected by the COVID-19 pandemic due to individual factors (e.g. type of employment [4,8], burden of chronic diseases [1,5]), system-related factors (e.g. limited access to health care due to low English proficiency or health insurance coverage [9,10], citizenship status and public program eligibility [11-14], stigma and fear of deportation), and area-level factors (e.g. overcrowded housing [8], limited access to healthy foods [15]). The combination of pre-pandemic vulnerabilities at the individual, system and area levels likely contributed to increasing the risk of COVID-19 morbidity and mortality [16,17]. Hispanic immigrants disproportionately participate as essential, front-line, low-wage, and uninsured workers in activities critical for operational functions and support of crucial supply chains [18], such as the meatpacking, agricultural and service-based industries, which do not allow for remote work and increase the risk of infection [4].
Therefore, shelter-in-place policies were unlikely to protect them [8]. Their limited access to health care, and lack of familiarity and fear of interacting with the health system as a consequence of immigration enforcement and recent changes to public charge rules [15,19,20], may have discouraged testing and timely treatment for COVID-19. The pandemic will not be controlled unless all individuals have equal access to health care [13]. Policies need to be put in place to expand coverage to the remaining uninsured, including undocumented immigrants [14]. This highlights the need to address the racial/ethnic disparities of the COVID-19 pandemic, including culturally appropriate and community-competent interventions that consider the nuances of immigrant communities, families and individuals [8]. The aim of the study is to examine the underlying vulnerabilities of Mexican immigrants in NYC and LA to improve future preparedness for public health emergencies. Our approach uses mixed methods with three different sources of data that inform individual, system and area-level determinants. The main objectives are: (1) to describe pre-pandemic migrants' health and psychosocial characteristics and vulnerabilities through a survey collected at the Mexican Consulates of NYC and LA; (2) to examine with a spatial analysis the correspondence between the survey respondents' places of residence and area-level COVID-19 morbidity and mortality, to confirm whether Mexicans with identified underlying risk factors lived in affected areas; and (3) to interview key informants from the Mexican Consulates in both cities to contextualize how Mexican immigrants faced the pandemic, and to document the role of trusted and culturally sensitive services offered during the pandemic through the Consulates' Ventanillas de Salud (i.e. VDS, Health Windows) program. We triangulate the findings of the three sources to gain a more granular understanding of individual, system, and area-level factors shaping the vulnerabilities and experiences of Mexican immigrants during the pandemic, and the potential buffering role of trusted and culturally sensitive services, such as those offered through the VDS. To our knowledge, there are no prior studies triangulating such types of data. We selected NYC and LA because both cities have been greatly affected by the pandemic, and they have a large community of Mexican immigrants. It has also been documented that in NYC, individuals living in high-poverty areas with high shares of URMs experienced the highest COVID-19 case and death rates [17]. Likewise, in January 2021, the average death rate among residents of Los Angeles County's (LA) poorest neighborhoods was three times as high as that in the wealthiest areas [17]. Importantly, both cities have a well-established VDS program. The VDS is a promising example of a trusted and culturally sensitive outreach program that started in 2003 as a joint initiative between the Mexican Ministries of Health and Foreign Affairs. The goal of the VDS is to enable Mexican immigrants to access health care and local community resources [21]. Even though the Mexican government funds the VDS in 49 US cities, they partner with different public and private organizations to provide culturally and linguistically sensitive basic health services in a safe and trustful environment.
A recent scoping review found that VDS mostly offer three types of free services: healthy lifestyles information and counseling; immunizations and early disease detection; and referral to local community clinics [22]. The VDS serves nearly 1.5 million individuals a year [21], mostly undocumented immigrants, and it was a key support for Mexican immigrants during the COVID-19 pandemic in NYC and LA.

Data Sources

The study used a survey with a cross-sectional design, complemented with a georeferencing analysis and with key informant interviews. The survey was conducted before the onset of the COVID-19 pandemic among Mexican immigrants aged 18 to 64 years who resided in the NYC and LA Metropolitan areas and who identified themselves as living in the U.S. Data were collected at the main offices of the Consulate General of Mexico in both cities. Consulates provide services to both documented and undocumented Mexican immigrants, including renewal of Mexican passports, issuance of a consular ID (matrícula consular), legal counselling, health information and referral to local health care providers (provided through the VDS), among others. The advantages of conducting data collection at the Consulates have been highlighted in prior research [23,24]. Mexicans visit the consulates regardless of their migratory status, and most services are scheduled through a telephone appointment system that allocates time slots randomly. This appointment mechanism reduces unobserved biases compared to alternative sampling sites such as churches, community centers, clinics, or other settings where visitors are self-selected. In addition, Mexican immigrants are more willing to participate within the premises of the Consulates, as they know this is a safe space, which reduces potential fears linked to their migratory condition and the distress of ethnic stigmatization. A convenience sampling approach was used in the general waiting areas of the Consulates and in the VDS located in the Consulates. Immigrants doing their paperwork at the Consulates' offices were approached individually by four previously trained, bilingual and bicultural research assistants. They informed potential participants about the study details and invited eligible subjects to participate in a face-to-face survey that took approximately 20 min. Those who agreed to participate and met the eligibility criteria signed a consent form. Research assistants administered the survey in the waiting areas and electronically collected responses on tablets. The survey had previously been piloted in the NYC Mexican Consulate with the population it serves to ensure it was culturally sensitive; it was available in both English and Spanish. In NYC, data were collected between May and June 2019; in LA, data collection started the first week of March 2020 and had to be abruptly discontinued due to COVID-19 physical distancing measures. The sample size in NYC was n = 193 and n = 77 in LA. The original design of the study aimed for a minimum sample of 100 interviews per city, which was estimated based on demographic information provided by the Consulates. Participants were subsequently matched through their zip codes of residence with COVID-19 morbidity and mortality data from LA and NYC to examine the burden of disease using a geospatial description. To build the NYC maps, we first matched the reported zip codes, neighborhoods and counties to the list of modified zip codes (ZCTA) of NYC counties and to a shapefile [25].
Then we matched the COVID-19 county-level data in NYC generated by the NYC Department of Health and Mental Hygiene [26]. We followed a similar process to build the LA maps. We matched the reported zip codes from the survey to a California list of zip codes [27] and then to the County of Los Angeles' zip code and city/community classification [28]. Then we matched zip codes to COVID-19 cases and deaths in each city/community reported by the Los Angeles County Department of Public Health [29]. We used age-adjusted rates per 100,000 population to account for differences in the age distribution of the underlying population. For both cities we used two cut-off dates, August 10, 2020 and January 17, 2021. To link our analysis of vulnerabilities among Mexican immigrants to their experiences during the COVID-19 pandemic, we conducted semi-structured key informant interviews (KII) (n = 4). We selected the personnel directly responsible for the COVID-19 response from the VDS in the Consulates in both cities during October 2020. The purpose of the interviews was to gain a closer understanding of the role of the VDS during the pandemic and to document the type of culturally appropriate services offered and how operations were adapted during the pandemic. These KIIs were also useful to document the systemic barriers faced by low-income Mexican immigrants during the pandemic (i.e. access to health care, distrust in health systems, fear of "public charge") and to validate the sample and some of the key findings from the geospatial analysis. The interview guide is available in Spanish in the supplementary material.

Measures of Health Status

Consistent with prior studies [30], we collected self-reported health status and coded it as a three-level categorical variable (excellent and very good, good health, and fair or bad health). As previously published research suggests, poor self-rated health can be a predictor of mortality [31] and of pathological changes prior to disease diagnosis [32]. In addition, we collected data on seven self-reported health problems: diabetes, hypertension, heart disease, asthma or bronchitis, cancer, depression, and arthritis, which have been established as some of the main morbidity causes among the study population [33]. Survey respondents were asked if a physician or another medical professional had diagnosed any of the listed conditions. Dichotomous variables were generated for each condition, and affirmative responses were summed into a variable with values ranging from 0 to 7.

Measures of Healthcare Access and Utilization

A large body of empirical research highlights the effect of health insurance on access to and use of health care [12-14]. Hence, participants were asked if they had health insurance coverage in the U.S. Participants were also asked how frequently they had received non-urgent health care during the last 6 months. Responses were coded on a Likert scale (i.e. never, sometimes, usually, always, not required). These are common proxy measures that are generally used when studying vulnerable populations [34].

Social Capital

We used four items as proxies of social capital, each scored with Likert-type responses: "I can trust most people in my community", "I can get help from my neighbors whenever I need it", and "I feel safe when I walk alone at night in my community" (strongly agree [4] to strongly disagree [1]), and a rating of neighborhood perceived safety (very secure [4] to very insecure [1]).
These measures have been used in prior studies [35,36], as they have been found to be associated with adverse health outcomes and health care underuse among vulnerable populations [37-39]. We summed the responses to each of these measures to generate an overall score of individual subjective perception of trust and safety ranging between 4 and 16; the higher the score, the greater the individual's social capital.

Employment

Employment status was elicited from participants (i.e., not working, working, looking for a job), and among those working we inquired about their type of job. Through qualitative coding we then assessed whether it matched the classification of an essential worker during the pandemic, as defined by the U.S. Department of Homeland Security [18], which on May 19, 2020 issued an advisory memorandum listing workers essential to maintaining critical infrastructure viability, construction, operational functions, and crucial supply chains. Per this document, essential workers included health sector workers (including those providing eldercare); workers in laundromats, laundry services, and dry cleaners; workers from the construction sector (including technicians maintaining buildings, hospitals, and residences); and food and agriculture workers, including restaurant and quick-serve food operations (dark kitchens and food prep centers, carryout, and delivery food workers) [18]. This list is neither exhaustive nor exclusive.

Sociodemographic Characteristics

Sociodemographic characteristics collected during the survey included age, gender, educational attainment, years of U.S. residence, and food insecurity, the latter collected through the Latin American and Caribbean Food Security Scale (ELCSA) to classify households by food security status (food secure; mild, moderate, or severe food insecurity) [40].

Place of Residence

To ascertain migrants' area of residence, respondents provided their zip code.

Analysis

Data from our survey were used for a comparison-of-means analysis between NYC and LA including health-related variables, health care access, social capital, employment, and sociodemographic characteristics. We used the Fisher exact test for categorical variables and the Mann-Whitney test for non-normally distributed data. All statistical analyses were performed with Stata 15 [41]. For the geospatial analysis, we mapped each participant's area of residence based on their reported zip code and merged this information with age-adjusted COVID-19 morbidity and mortality rates. The spatial analysis consisted of joining two map layers: the first showing age-adjusted COVID-19 morbidity and mortality, where darker shades of pink denote more cases per 100,000 inhabitants, and the second showing self-reported place of residence, where larger bubbles indicate that more survey participants live in the zip code. The geographic analysis was conducted with the "sf" package in R [42,43]. The qualitative semi-structured KII were analyzed using content analysis. An initial codebook followed the objectives of the interviews. Two researchers independently coded the qualitative material and contrasted the nodes and their content. We collapsed similar nodes and edited some remarks to facilitate the communication of the results. We kept the responses by city to highlight the similarities across all nodes. This study was reviewed and approved by the Research Ethics Committee at Universidad Iberoamericana, and was reviewed and given exempt status by the IRB at UCLA.
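As an illustration of the comparison-of-means step described above, the following minimal Python sketch reproduces the two tests (the authors used Stata 15; this is not their code, and the file and column names are hypothetical):

import pandas as pd
from scipy.stats import fisher_exact, mannwhitneyu

df = pd.read_csv("survey.csv")  # hypothetical export of the survey data

# Social capital: sum of four Likert items (each scored 1-4), giving a 4-16 score
items = ["trust_people", "help_neighbors", "safe_at_night", "neighborhood_safety"]
df["social_capital"] = df[items].sum(axis=1)

# Mann-Whitney U test for non-normally distributed measures, compared across cities
nyc = df.loc[df["city"] == "NYC", "social_capital"]
la = df.loc[df["city"] == "LA", "social_capital"]
u_stat, p_mw = mannwhitneyu(nyc, la)

# Fisher exact test for a dichotomous variable (e.g. health insurance coverage);
# scipy's implementation expects a 2x2 contingency table
table = pd.crosstab(df["city"], df["insured"])
odds_ratio, p_fisher = fisher_exact(table.values)

print(f"Mann-Whitney p = {p_mw:.3f}; Fisher exact p = {p_fisher:.3f}")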
Results

Table 1 summarizes the characteristics of the Mexican migrants who participated in the NYC and LA surveys. They were on average 40 years old; those interviewed in NYC had lived in the U.S. for approximately 17 years, while individuals in LA reported an average of 23 years. In both cities most participants were employed at the time of the interview (75% in NYC and 79% in LA); however, a larger share of NYC respondents was classified into "essential work" activities compared to respondents in LA (47% vs. 31%). On average, migrants had low educational attainment, with about one third of respondents not having completed middle school. Despite being a young sample overall, close to half of the respondents in NYC and about one third in LA reported fair or bad health status. However, they reported less than one diagnosed comorbidity on average (0.59 in NYC and 0.57 in LA). The prevalence of previously diagnosed T2D was high in both cities, 11.4% in NYC and 11.7% in LA, slightly higher than the 10.3% prevalence of diagnosed T2D reported by the CDC in 2020 among Hispanics [44]. Almost half of our survey respondents recounted that they never or only sometimes got regular non-urgent care in the last 6 months; while this proportion reached 51.5% in NYC, in LA it was significantly lower (38%), although still inadequate. A consistent vulnerability across both cities was food insecurity, which around one quarter of the respondents reported.

Place of residence is a source of inequity in COVID-19 morbidity and mortality. Thus, we used zip code data provided by survey respondents to map their neighborhoods of residence against adjusted morbidity and mortality data at two different time points. Figure 1 corresponds to NYC and Fig. 2 to LA. The top panels show COVID-19 morbidity and mortality rates per 100,000 population in both cities up to August 2020, while the lower panels show the same information updated to January 2021. In the second morbidity map the scale differs, as testing had increased and more cases had been added. The Consulates' personnel validated both samples with expressions like "good sample", "it is very well represented", and "coincides with the strongest outbreaks", and were able to pinpoint where Mexican immigrants live and where the pandemic struck. Across the NYC maps, the circles show a consistent pattern of higher COVID-19 case and mortality rates in the zip codes where Mexican immigrants lived. Specific clusters in each of the boroughs can be identified. In Manhattan a cluster is visible in East Harlem and Washington Heights. In the Bronx the morbidity trends concentrate in areas such as Soundview, Morrisania, Morris Heights, and University Heights. In Queens, the area comprising the neighborhoods of Jackson Heights, Corona, and Elmhurst is a particularly worrisome cluster for Mexican immigrants due to persistently higher rates of morbidity and mortality. In Brooklyn we observed a cluster around Sunset Park and Greenwood Heights, although more concerning in terms of morbidity than mortality. Finally, in Staten Island, a point in the neighborhood of Port Richmond shows high case rates in both time periods. Figure 2 zooms into the central and southern region of LA County. Most respondents were clustered in East Los Angeles, Vernon Central, and Miracle Mile-La Brea. LA portrays a pattern similar to NYC's, as respondents resided in areas with high COVID-19 morbidity and mortality.
The bottom left panel shows that the City of Vernon had the county's highest case rate; over 10% of survey respondents lived close to this area. As in NYC, cases and deaths were concentrated in lower-income areas, i.e., those closer to the east, portraying a geographical disparity.

We interviewed key informants from the VDS to triangulate our findings, examine systemic factors faced by Mexican immigrants, and discuss the VDS response to the COVID-19 pandemic in both cities. A summary of the key themes that emerged is available in Table 2. The VDS in NYC and LA confirmed that the Mexican immigrant population they serve is mostly employed in low-wage and front-line jobs in activities classified as "essential services" (i.e., farming, transportation, supermarkets, restaurants, and deliveries). Importantly, they explained that Mexican immigrants were highly exposed to the COVID-19 virus through their jobs, often working without protective gear and fearing the loss of their jobs if they complained about their working conditions. An informant summarized the increased risk as a combination of "the economic needs and misinformation, the language barrier, they don't know how to keep protecting themselves, and believing that taking one aspirin will keep fever away". Both cities provided free testing and emergency insurance during the pandemic; however, health care use was restrained due to mistrust in the health system and concerns over the use of personal data. The constant fear of generating a "public charge", even if unwarranted, was a key obstacle [46]. In NYC, for example, some eligible individuals refrained from accepting the economic support for restaurant employees.

The VDS had a vital role in the response to the COVID-19 pandemic. This health outreach program provided information about the virus and testing, explained how to navigate the health system if symptoms appeared, and referred relatives to consular services for funeral arrangements and repatriation of remains. The VDS were able to convert most of their services, both at the physical site and the mobile consulate, to telephone and online modalities. They became a reliable, trusted, and accessible source of information for migrants, ranging from basic explanations of transmission mechanisms and protective measures to specialized information such as health care eligibility and economic support. Moreover, VDS provided direct services like free COVID-19 testing, influenza vaccination, and health information. At a critical point of economic inactivity, they became food pantries, which underscores food insecurity as a critical need. In addition, the VDS helped identify anxiety, depression, grief, and domestic violence as serious consequences of the lockdown measures and were able to connect users with Spanish-speaking mental health providers.

Table 2 Golden quotes from in-depth interviews with key informants from Los Angeles and New York City

Theme: Types of employment of population served by the Mexican consulates
New York City: "The problem with our population is that the industries of construction, restaurant, deliveries and the jobs of the Mexicans here, did not suspend and worked when the contagion was at the highest. They really were the ones who kept the City going."
Los Angeles: "Here the majority of the population works in services; supermarkets, transport, and everything that has to do with the food chain. In other counties, like Ventura, 95% work on farms because it is a very rural area; we are definitely in the essential sectors."

Theme: Exposure associated with essential workers
New York City: "The risk they were exposed to was when they were working at the supermarkets or delivering food without the adequate protection, even feeling sick, they didn't stop working because they couldn't. The fact that they are more exposed is linked with the economic needs and misinformation, the language barrier, they don't know how to keep protecting themselves, and believing that taking one aspirin will keep fever away."
Los Angeles: "Unfortunately, our community can't give the luxury to stop working. The sick and diagnosed person has to do it, but with the risk of losing his job. Many have informal employment and reactions are diverse. They are those who keep working with their relatives sick in the next bed and also the proper sick person who can't have the privilege to isolate himself."

Theme: Access to health services
New York City: "The emergency was paid to people with COVID, but many of them thought they had to pay and didn't seek care. We explained that emergency Medicaid would be activated. To all the people with COVID who entered at hospitals we applied the emergency Medicaid and the state of NY and Trump care paid for it. The problem now is that COVID health consequences aren't being paid."
Los Angeles: "Here the option is My Health LA, and in case that you're not eligible, there's the community clinic, the cost of healthcare is really low and they provide basic healthcare services. Specifically, for COVID, the test is free for people without health insurance. For intensive hospitalizations they use emergency MediCal, which has financial consequences for the family."

Theme: Distrust in health system
New York City: "They think they have no rights, they believe access to healthcare can mark their immigration status. The pandemic was so big that you needed to survive, to eat. People had to go to the hospital or they would die. It was really important to highlight that going for food didn't breach a health burden. The fear existed but the need was bigger."
Los Angeles: "Many times, people are afraid even to receive a service; if they have to fill out forms, they ask us 'where is this going?' and we tell them that their information is completely private, but there is always the fear about who has access to it, and that it may be a deportation cause."

Theme: Fear of generating "public charge"
New York City: "Yes, there were special programs and financial aid; the restaurants and rental costs were important. The problem is that they don't ask for this back up for fear that it could generate future problems. We announced the NY students' card and the questions were about public charge or the consequences of taking it."
Los Angeles: "The fear is much greater with the pandemic because they know that there will be unaffordable medical costs. And they fear they're gonna be a public charge. Even though the person could only become a public charge with specific medical coverage and certain programs for which they are not eligible due to their immigration status, the fear is always there."

Theme: VDS as a key source of trusted information
New York City: "We went from face-to-face to a permanent telephone service, a 24/7 number was set up for any health issue. Then the COVID resource guide for the community was made; they were receiving information in Spanish verified by people from the consulate, in which they could see where to find food and the different measures to take. They were also helped to find funeral homes, when they reached the limit and were no longer serving. We use Facebook a lot; now we do seminars and online talks to improve health. The challenge comes to re-explain how to access services, explain navigation in the health system and that they do have access, and integrate telemedicine."
Los Angeles: "We replaced normal activities with more presence on social media, educating the community about different programs to keep them afloat during the pandemic. We focus on COVID-19 symptoms, where to go, where to get tested, and everything related to that topic. People could contact the consulate with very specific questions. They had endless needs that showed up, we helped by giving information about county programs; how to access health coverage, how to find out where the nearest food bank was, how to apply for unemployment insurance, etc. Now we are doing other kind of workshops, such as nutrition, how effective telemedicine is and much more."

Theme: Emergency support from VDS
New York City: "The mobile service window is doing COVID tests and influenza vaccination days. They keep with pantry deliveries, they were the first. The pantries, the food, were the basic needs. And we partner to have tests on community centers or churches. Many of the VDS partners stayed on and gave us the details to work online or by phone. What we did was have clinicians, two doctors and two nurses, who spoke to people to see their health problem and then made the corresponding referral, to a primary health service in person or with allies that we know, whether to receive medical service or for COVID."
Los Angeles: "A COVID test center opened at the consulate with a capacity for 1400 tests by week; people considered it a safe space, they know they can come without appointment, because it's an obstacle that people don't know how to schedule one, not even online and other processes aren't user-friendly. (Health Windows) We also start doing influenza vaccination days. And we have been distributing COVID-19 protective equipment to peddlers in the area for two weeks. Besides, we do medical advisory services through appointments."

Theme: Mental health needs and support
New York City: "We interviewed people with the VDS system about their emotional status, and we basically found anxiety, stress, fear, sadness and depression. And the part of domestic violence, which the city has detached a lot. Access to mental health in your language has been a very important challenge, in addition to breaking the myth that exists 'I'm not bad, I don't need to.' The line to make referrals in health services was opened and calls to relatives of the deceased were increased to provide emotional support directly; it was a volunteer program. It wasn't exactly a line of support; we put them in contact with a shrink who can follow up with them. The major advantage is that it was direct, in Spanish, and delivered by culturally appropriate staff."
Los Angeles: "We began to have virtual workshops about the impact of the pandemic on the emotional well-being of the community, which consisted of giving them tools to get through those difficult times that they had to be at home with their children, often without work. Then we went with other closely related issues that we knew were going to increase, such as loss and grief, family violence, suicide prevention, alcohol, and substance use. And about stigma because in the community there was a lot of stigma around mental health."
Discussion

In this study we sought to complement the existing literature documenting the impact of COVID-19 on the Hispanic community as a high-risk population [2,3,6]. Our study used three different data sources to investigate the individual, systemic, and area levels of vulnerability faced by low-income Mexican immigrants during the COVID-19 pandemic.

We used primary data to assess pre-pandemic vulnerabilities among Mexican immigrants in NYC and LA. From our survey data, we observed statistically significant differences between the cities in terms of age, proportion of essential workers (above 30% in both cities), health status, health care utilization, and average length of residence in the U.S. Despite these differences, we identified some commonalities: (i) study participants had a high share of uninsured individuals, exceeding the uninsured rates of California and NY (9% and 6.2%, respectively) [47]; and (ii) vulnerability factors included low health care access, high food insecurity [48], and high rates of T2D diagnoses [49], with a prevalence higher than both the U.S. and Mexican prevalence, in spite of respondents' young average age. These pre-COVID vulnerabilities partially result from health and migratory policies that limit immigrants' access to health care, constrain health insurance options [25], and alienate them from the health system through lack of eligibility and fear of detention, deportation, and public charge [20,50]. In addition, social determinants of health faced by low-income Mexican immigrants, such as residential segregation, overcrowded housing, and exposure to food deserts, could contribute to health conditions such as overweight and obesity, T2D, and related comorbidities, which might explain why, despite the young sample (about 40 years on average), self-reported health status was overwhelmingly poor.

Low-income Mexican immigrants are particularly vulnerable to unfavorable area-level social determinants of health [17,51]. Our study showed a close correspondence between area-level COVID-19 morbidity and mortality rates and respondents' zip codes of residence. The geospatial distribution of Mexican immigrants places them in areas of residence with disproportionately high COVID-19 cases and deaths. Our findings are consistent with previous research [1,2,4,52] and with the narrative described in our KII. In addition to access-to-care barriers, occupation emerged as a key risk factor. In our sample, a large share of the migrants had jobs that placed them at higher risk of infection, such as customer-facing occupations and sectors that did not allow remote work. This has been previously highlighted [8,15] and also emerged as a central topic when interviewing key informants.

Three important lessons emerge from this research. First, the study confirmed the vulnerability of Mexican immigrants in terms of limited access to health care and health insurance, and low self-reported health status, especially considering their average age. Mexican immigrants contribute to the NYC and LA local economies and are disproportionately employed in low-wage essential services [17]. Despite their contribution, they are fearful of deportation and somewhat distrustful of their communities [20]. Second, our study underscores the importance of understanding how such vulnerabilities translate into challenges for reaching low-income and noncitizen migrants with low or no English fluency during the pandemic.
Reaching these populations requires trusted and culturally sensitive navigation resources to facilitate access to COVID-19 testing, treatment, and vaccination, as well as prevention services for common side effects of the pandemic, such as mental health conditions and increased food insecurity. Our research shows the importance of VDS at the onset of the pandemic. Through a rapid adaptation of their operations in NYC and LA, VDS were able to offer culturally tailored information about COVID-19 and remote clinical assessments of symptoms with the aid of health professionals. They also helped immigrants navigate administrative services; verify, in Spanish, information they had received; understand how to access health services during the pandemic, including telemedicine, testing, and vaccination sites; and access local government support programs and food pantries [53]. The use of social media (i.e., Facebook) was fundamental to the adaptation of such services. The Mexican government should continue to invest in and expand these health outreach programs in close coordination with U.S. governments (federal, state, local), local health care providers, stakeholders, and advocates [54]. Lastly, vulnerable communities should not be stigmatized for having higher rates of infection and mortality; such outcomes are driven by structural inequalities and area-level social determinants of health, which are manifest, among other ways, through residential segregation [55].

Among our limitations are statistical representation, including sample size, and potential self-selection into consular visits; however, our aim is to describe, not to make causal claims. The external validity of our findings is also limited, since they apply to low-income Mexican migrants from two large U.S. metropolitan areas labeled as "sanctuary cities". Foreign-born migrants from other countries, those working in rural areas, or those in smaller cities and towns in non-sanctuary cities or states are not represented in this study. Two important strengths of our study are the triangulation of three different data sources to show the different levels of vulnerability to COVID-19 faced by low-income Mexican immigrants, and our work with Mexican interviewers from within the Mexican Consulates, which have the trust of migrants regardless of their legal status. Likewise, we obtained relevant information about the ongoing challenges this population has faced during the pandemic through our KII with the front-line consular officers of the VDS, who have provided culturally appropriate health information and navigation resources. Further studies would benefit from including the perspectives of Consulate staff performing other tasks, the organizations supporting the VDS, and the users of the VDS. In addition, while the KII helped triangulate the other data sources, it would be beneficial to have a more detailed description of the adaptations and services provided by the VDS during the pandemic from an implementation science perspective.

Conclusions

In this study we analyzed the individual, area-level, and systemic vulnerabilities faced by low-income Mexican immigrants during the COVID-19 pandemic. We argued that prior vulnerabilities linked to immigration, such as type of employment, food insecurity status, chronic conditions, health status, and access-to-care barriers, placed low-income Mexican immigrants at higher risk of COVID-19.
Our study shows a close correspondence between the zip codes where respondents lived and the areas disproportionately affected by COVID-19 morbidity and mortality in both cities. These findings suggest unfavorable area-level social determinants of health that reinforce the pre-COVID-19 individual and systemic vulnerabilities faced by the study population. Health outreach programs such as the VDS have been key to disseminating information about the virus, testing, health care navigation, and handling of the deceased. The VDS model may be used by other countries as a blueprint for community outreach and, eventually, as a network to expand health care access and promote healthy lifestyles nationwide [22]. Further investments in culturally appropriate programs and coordination with local health care providers, advocates, and stakeholders are needed to reduce health disparities.
Hybrid xyloglucan utilisation loci are prevalent among plant-associated Bacteroidota

The plant hemicellulose xyloglucan (XyG) is secreted from the roots of numerous plant species, including cereals, and contributes towards soil aggregate formation in terrestrial systems. Whether XyG represents a key nutrient for plant-associated bacteria is unclear. The phylum Bacteroidota are abundant in the plant microbiome and provide several beneficial functions for their host. However, the metabolic and genomic traits underpinning their success remain poorly understood. Here, using proteomics, bacterial genetics, and genomics, we revealed that plant-associated Flavobacterium, a genus within the Bacteroidota, can efficiently utilise XyG through the occurrence of a distinct and conserved gene cluster, referred to as the Xyloglucan Utilisation Loci (XyGUL). The Flavobacterium XyGUL is a hybrid of the molecular machinery found in gut Bacteroides spp., Cellvibrio japonicus, and the plant pathogen Xanthomonas. Combining protein biochemistry, computational modelling, and phylogenetics, we identified a mutation in the enzyme required for initiating hydrolysis of the XyG polysaccharide, an outer membrane endoxyloglucanase of glycoside hydrolase family 5 subfamily 4 (GH5_4), which enhances activity towards XyG. A subclade of GH5_4 homologs carrying this mutation was the dominant form found in soil and plant metagenomes, due to their occurrence in Bacteroidota and Proteobacteria. However, only in members of the Bacteroidota, particularly Flavobacterium spp., was such a remarkable degree of XyGUL conservation detected. We propose this mechanism enables plant-associated Flavobacterium to specialise in competitive acquisition of XyG exudates, and that this hemicellulose may represent an important nutrient source enabling them to thrive in the plant microbiome, which is typified by intense competition for low molecular weight carbon exudates.

Introduction

Plants provide soils with the 'fresh' carbon (C) required to support microbial growth, generating 'hotspots' of activity in regions of C deposition, such as the rhizosphere (1,2). Microbial processing of plant-derived C therefore represents the entry point for new matter and energy into the microbial C pump. This biological pump determines the balance of CO2 liberated during aerobic respiration versus that channelled into microbial anabolism and, ultimately, the accumulation of recalcitrant C (3). Over time, this C becomes part of the stable C pool, which is approximately 3x larger than that stored in animals and plants. Each year, soil respiration releases 10-15x more C than is emitted from anthropogenic activities (4). Therefore, any change in the balance of production versus respiration in response to global change will have significant ramifications for the global C cycle (3). Plant-derived C is partitioned into two major fractions: 1) low molecular weight (LMW) C, which can be transformed by microbial enzyme activity within hours; and 2) complex high molecular weight (HMW) C, e.g. glycans, which can take years to be fully degraded into their monomeric subunits (5). HMW C is believed to escape microbial attack, initiating the formation of soil aggregates (5,6) and thus directly contributing to soil C accumulation. In addition to biogeochemical cycling, nutrient inputs have a significant influence on plant microbiome assemblage and community structure (7), evidenced through the impact of crop domestication (8).
Plant glycans (polysaccharides) are major components of plant biomass, of which hemicelluloses, such as xyloglucan (XyG), typically constitute 5-50% (9). Recent data have revealed that XyG is a major component of root mucilage exudate. XyG is secreted at the root tip and along the entire root axis and helps to produce the rhizosheath, a region made up largely of glycans that serves to protect roots from abrasion and desiccation (6,10,11). Through this process XyG also influences the degree of microaggregate formation, a prerequisite for soil C accumulation (6). Hence, these secreted HMW C polymers play an integral role in soil C storage and are likely influenced by the degree of microbial degradation. XyG binds border cells at the tip of growing roots and is an abundant component of mucilage (12). XyG also plays a role in regulating the severity of oomycete pathogen attack in soybean (10).

Historically, mycorrhizal and saprophytic fungi were considered the major plant glycan degraders; however, soil bacteria are emerging as integral players in their breakdown (13). In forest soils, leaf litter microbial communities are enriched with members of the phyla Pseudomonadota, Actinomycetota, and Bacteroidota (14). Likewise, in agricultural soil, Bacteroidota and Pseudomonadota were reported as the primary consumers of cellulose and crude plant root or leaf material (15). Plant pathogens such as Xanthomonas spp. also utilise XyG, and this metabolism is considered a virulence factor, enabling the bacterium to enter plant cells (16). However, an understudied plant-microbe interaction is the effect of HMW C exudation on plant microbiome assemblage. This knowledge gap is driven largely by the dearth of experimentally validated genes and pathways required for hemicellulose degradation in soil bacteria, except for Cellvibrio japonicus (17,18) and Chitinophaga pinensis (19-21).

Glycan degradation requires the possession of specialised gene sets encoding carbohydrate-active enzymes (CAZymes) to initiate degradation (5,13,22). CAZymes are categorised into broad functional groups, i.e., glycoside hydrolases (GH) and carbohydrate esterases (CE), and are incredibly diverse (~200 GH families), reflecting the enormous variety of naturally occurring carbohydrate structures, particularly glycans. In Bacteroidota, these gene sets are typically co-localised into discrete operons referred to as Polysaccharide Utilisation Loci (PUL), and their bioinformatic prediction has rapidly outpaced experimental validation of their precise function (22,23). PUL are a hallmark of the Bacteroidota, a deep-branching group of Gram-negative bacteria that specialise in HMW polymer degradation in marine and gut microbiomes (22,24). Through the efficient capture of glycans, PUL provide a competitive advantage for Bacteroidota in glycan-rich environments, such as the human gut or leaf litter (22). Unlike for their gut and marine relatives, evidence for the contribution of soil Bacteroidota towards plant or microbial glycan degradation, particularly of hemicelluloses, is limited (5,13,20). Whilst C. pinensis can utilise a variety of glycans, including several hemicelluloses, this bacterium surprisingly lacks the ability to efficiently utilise XyG (19-21).

Flavobacterium, a genus within the phylum Bacteroidota, are enriched in numerous wild and domesticated plant microbiomes relative to the surrounding bulk soil (25-29).
Recent evidence suggests that they are one of the most metabolically active taxa in the plant microbiome, accounting for 27% of RNA reads while comprising only 6% of DNA reads (30). Bacteroidota are considered indicators of good soil health (25) and have ecological roles in suppressing various fungal and bacterial plant pathogens (30-34). However, their general ecological role and function remain poorly characterised in plant microbiomes relative to other environments (35). Recently, we discovered that Flavobacterium spp. have adapted to life in the plant microbiome by specialising in organophosphorus utilisation and likely play a key role in increasing phosphate availability for plants (36,37). Analysing the same proteomics dataset, we further identified several CAZymes that are candidates for plant glycan utilisation, suggesting that HMW C utilisation represents a key lifestyle strategy for these bacteria (38).

In this study, we demonstrate that Flavobacterium spp. are efficient utilisers of the plant hemicellulose XyG through possession of hybrid XyG utilisation loci (XyGUL). These gene clusters contain elements of the archetypal PUL identified in Bacteroides ovatus as well as of the gene clusters found in C. japonicus and Xanthomonas spp. Furthermore, we identified a XyG-specific endoglucanase associated with the XyGUL, related to glycoside hydrolase family 5 subfamily 4 (GH5_4), subclade 2D (39). XyG-specific GH5_4 homologs within this clade carry a key mutation increasing their specificity and activity towards XyG. We further investigated the presence of GH5_4 homologs in soil and plant metagenomes, revealing that this XyG-specialised form is prevalent in the terrestrial environment, especially in plant-associated Bacteroidota.

XusC and XusD were the most differentially abundant proteins during growth on XyG (Figure 1b). The Flavobacterium XyGUL showed a high degree of conservation and synteny across all plant-associated strains analysed (Figure 1c), in contrast to the XyGUL found in Bacteroides spp. (41), with no rearrangements and only a few instances of gene insertions. A GH39, predicted to hydrolyse the Xyl(α1-2)Araf linkage found in solanaceous plants, such as tomato, was only present in Flavobacterium. Together, these data suggest Flavobacterium harbour a specialised XyGUL capable of capturing and breaking down XyG from various plant species.

XyGUL-encoded proteins are essential for efficient growth on XyG in F. johnsoniae

To determine the in vivo contribution of XyGUL-encoded proteins to growth on XyG, two knockout strains of F. johnsoniae were generated.
The first had a deletion of fjoh_0774, encoding the GH5 enzyme predicted to initiate depolymerisation of the XyG polysaccharide, and the second was an fjoh_0781-2 mutant lacking the XusCD system predicted to be required for oligosaccharide uptake (Figure 1d). The isogenic wild-type parent and both mutant strains grew comparably on either glucose or GalM, but, unlike the wild type, Δ0774 was unable to grow on XyG, whilst the growth of the Δ0781-2 mutant was significantly curtailed (Figure 2). These data are consistent with the proteomics analysis (Figure 1b), demonstrating that the predicted XyGUL is essential for growth on XyG (Figure 2). Complementation of each mutant with an in trans copy of the respective gene(s) restored their ability to grow on XyG (Figure 2). As expected, the Δ0774 mutant, lacking the outer membrane initiator enzyme FjGH5, was capable of growth on commercially synthesised XyGOs (Figure 2). However, Δ0781-2 also grew on XyGOs, albeit at a slower rate, in contrast to its phenotype on XyG (Figure 2). This suggests either that FjGH5 and XusCD interact for efficient hydrolysis of the polysaccharide backbone prior to import, or that the XyGOs produced by FjGH5 are not the same as those present in the commercial hepta-, octa-, and nona-saccharide mix (Megazyme), and that other SusCD-like complexes can import the latter.

Microdiversity of GH5_4 homologs in Flavobacterium spp. suggests functional diversification

In several plant-associated Flavobacterium spp., BLASTP identified multiple ORFs encoding GH5_4 homologs. Phylogenetic reconstruction of these homologs alongside BoGH5A, CjGH5d, and other previously characterised GH5_4 homologs exhibiting mannanase, xylanase, and glucanase activities revealed the presence of two distinct Flavobacterium GH5_4 groups (Figure 3a). These two subgroups (Type I and Type II) shared greater similarity to each other than to the archetypal BoGH5A. Whilst seven of the eight residues previously shown to be involved in XyG hydrolysis in BoGH5A (41) were conserved across Type I and Type II homologs, residue Trp252 of BoGH5A was not (Figure 3a & S1). Trp252 is conserved in all Type I homologs, including FjGH5, the GH5_4 enzyme encoded by OSR005_04227 in Flavobacterium sp. OSR005 (Figure 1), hereafter termed 005GH5-1 (Table 1), and CjGH5d (17). In the majority of Type II homologs, Trp252 is replaced with either Ala or Gly. The genes encoding Type I homologs are all found in XyGUL; however, the genes encoding Type II GH5_4 homologs were all found in distinct PUL (XyGUL2 in Figure 3b). This was confirmed by increasing the number of plant-associated Flavobacterium genomes screened, including the addition of MAGs retrieved from plant rhizosphere metagenomes (Table S3). Even the genes encoding the few Type II forms carrying the Trp residue (Figure S1) were found in XyGUL2-like PUL. XyGUL2 is present in fewer Flavobacterium genomes and has far less gene synteny and conservation than XyGUL1 (Figure 3c). These alternative PUL contain ORFs for various exo-acting GHs, distinct SusCD-like systems, and, in some cases, a GH74 homologue similar to the endoxyloglucanase recently shown to be functional in Xanthomonas spp. (16). In addition, Flavobacterium sp. OSR005 harbours a Type II GH5_4 (hereafter referred to as 005GH5-2), encoded by OSR005_03871, which contains an Ala in place of the aforementioned Trp252 residue. Neither 005GH5-2 nor other XyGUL2 proteins were detected during growth on XyG (Figure 1b, Table S5), suggesting they do not play a role in XyG utilisation in Flavobacterium sp. OSR005.
To determine if Type II GH5_4 homologs were functional, we complemented the F. johnsoniae Δ0774 mutant with the genes encoding 005GH5-1 and 005GH5-2 expressed from the constitutive ompAFj promoter. Both 005GH5-1 and 005GH5-2 restored the ability of Δ0774 to grow on XyG as the sole C source, with the 005GH5-1 strain showing a greater initial growth rate and 005GH5-2 the slowest (Figure 3c). To test if the lower growth rate observed for 005GH5-2, which carries the W252A substitution, was due to lower enzyme activity, we purified recombinant 005GH5-1, 005GH5-2, and the archetypal BoGH5A following heterologous over-production in E. coli. Recombinant 005GH5-1 had a significantly greater turnover rate (kcat = 566.2 min^-1) than recombinant 005GH5-2 (kcat = 223.5 min^-1) and a lower Km (005GH5-1, 1.3 mg mL^-1; 005GH5-2, 5.7 mg mL^-1) (Figure 3d). Recombinant BoGH5A modified with either the W252A or the W252G substitution replicated this dramatic reduction in endoxyloglucanase activity (Figure 3e), with W252G causing the greatest reduction, requiring 10x more enzyme for observable activity (Figure S2). None of 005GH5-1, 005GH5-2, BoGH5A, BoW252A, or BoW252G displayed substrate promiscuity towards other glycans typically found in the plant microbiome (Figure 3f). Based on structural homology modelling and previous structural data for BoGH5A and CjGH5d (18,41), Trp252/209 interacts with the xylose residue occupying the -2 glucose position in XX(X)G-type saccharides, such as tamarind XyG (Figure 3g; Figure S3). This would explain why mutation of Trp252 results in the observed decrease in activity. Modelling the surface hydrophobicity revealed that possession of Trp252 likely generates a stacking interaction which may stabilise the docking of XXXG-type XyG. In 005GH5-2 and other promiscuous GH5_4 enzymes where Trp252 is absent, a clear cavity is present that would significantly reduce this stacking interaction between the aromatic residue and the xylose occupying the -2 subsite (Figure 3g; Figure S3). Taken together, these data suggest Type II GH5_4 homologs may have subsequently evolved to specialise on another glycan or a variation of XyG, perhaps the XXGG-type typical of solanaceous plants.

GH5_4 homologs are enriched in plant-associated Bacteroidota genomes

Next, we investigated whether XyG utilisation in Flavobacterium is an adaptation to life in the plant microbiome by analysing our previous database containing ~100 genomes representing Flavobacterium spp. isolated from distinct ecological niches (37). In addition to searching for GH5_4 homologs, we also searched for homologs related to other XyGUL components and for candidate GH10 endoxylanases (pfam00331) required to hydrolyse xylan backbones (42).
Xylan is another hemicellulose secreted from plant roots (43). ORFs encoding XyGUL components were more prevalent among plant-associated and closely related strains (Figure 4a). Likewise, GH10 homologs followed a similar pattern. Some plant-associated Flavobacterium strains possessed up to six closely related GH5_4 homologs, each associated with either Type I or Type II. The most prevalent were the canonical GH5-1 forms found in the XyGUL, followed by homologs related to 005GH5-2 (group GH5-3 in Figure 4a). Some Flavobacterium spp. possess a second Type I GH5_4 homolog (GH5-2 in Figure 4a), typically located adjacent to GH5-1 in the XyGUL (e.g. CF136 and OSR001 in Figure 1d). C. pinensis DSM2558 cannot efficiently grow on XyG (19-21), and BLASTP confirmed that this strain lacks both a GH5_4 homolog and a XyGUL.

To determine whether XyGUL is restricted to plant-associated Flavobacterium or found within the Bacteroidota phylum more widely, we screened genomes deposited in the IMG/JGI database (Table S2) for the presence of GH5_4 homologs. Genomes were restricted to those retrieved from terrestrial environments, i.e., soil and plant, and encompassed the Chitinophagaceae, Sphingobacteriaceae, Flavobacteriaceae, and Cytophagaceae. We detected both inter- and intra-genus variation in the occurrence of GH5_4 homologs in the genomes of Bacteroidota spp. (Figure 4b). The highest percentage of genomes possessing GH5_4 homologs belonged to Flavobacterium (54%), with almost all plant-associated strains possessing the gene cluster. Despite Chryseobacterium belonging to the family Flavobacteriaceae, we found no GH5_4 homologs in plant-associated members of this genus. Likewise, no GH5_4 homologs were found in the Pontibacter and Hymenobacter genomes we screened. A relatively high number of genomes related to Mucilaginibacter (51%) and Chitinophaga (46%) also had at least one GH5_4 homolog present.

Given that several genomes possessed multiple GH5_4 homologs, we performed phylogenomics to determine whether they belonged to the Type I or Type II forms (Figure S4). Most GH5_4 homologs identified in non-Flavobacterium Bacteroidota fell into the Type II subgroup. However, almost all harboured the Trp residue, except for a few containing Tyr and some closely related to the Ala- and Gly-harbouring Type II forms. Genomes from the class Sphingobacteriia often possessed two or more homologs. Two major clusters of Chitinophaga homologs were present, all harbouring the Trp residue; these were typically mutually exclusive within genomes and found in distinct PUL. Interestingly, no Bacteroidota genomes possessed only a Type II GH5_4 carrying the Ala or Gly mutation, strengthening the hypothesis that this form has an auxiliary role in XyG hydrolysis. Taken together, whilst there is a large diversity of GH5_4 and XyGUL-like clusters, whether these are all functional as parts of dedicated XyG utilisation pathways remains uncertain. In other Bacteroidota spp., the organisation of PUL harbouring Type II GH5_4 homologs carrying the Trp residue differed substantially from the Flavobacterium XyGUL (Figure 4c). These PUL resembled the organisation and features, such as carbohydrate-binding modules (CBMs), associated with the XyGUL2 cluster found in Flavobacterium spp., which was not induced during growth on XyG in OSR005 (Figure 1b, Table S5). Therefore, whether the XyGUL-like clusters identified in other Bacteroidota genera also specialise in XyG utilisation remains an open question.
GH5_4 subclade 2D has radiated in soil and plant microbiomes

The GH5_4 family has recently been structured into three main clades (1, 2, and 3) and subclades (44), with BoGH5A and CjGH5d belonging to subclade 2D (44). Given the high prevalence of GH5_4 homologs in plant-associated Bacteroidota, we performed BLASTP against over 700 plant/soil metagenomes deposited in the IMG/JGI database (Table S3). Two GH5 sequences were used as queries: FjGH5 (Fjoh_0774) and a GH5_4 from Paenibacillus sp. Root144 (IMG gene id 2644426200); the latter is closely related to a commercial Paenibacillus endoxyloglucanase (Megazyme) and represents a GH5_4 homolog from subclade 1. All environmental ORFs retrieved (n = 7,636) were locally aligned (BLASTP) to all GH5 enzymes in the CAZy database (n = 1,123) (45). In total, 7,136 ORFs aligned to 254 ORFs from the CAZy database, and all were related to the GH5_4 subfamily. Homologs related to Bacteroidota (n = 39,150) and Proteobacteria (n = 39,031) constituted much of the diversity found in soil (Figure S5). At the genus level, homologs related to Capsulimonas (Actinomycetota, n = 11,783) and Flavobacterium (n = 11,232) were the most abundant, followed by Cellvibrio (n = 8,700), Mucilaginibacter (n = 8,697), and members of the family Chitinophagaceae (Pseudobacter, n = 7,967; Chitinophaga, n = 5,516). Phylogenetic reconstruction revealed that most environmental homologs were related to GH5_4 clade 2, with most sequences belonging to subclade 2D. This subgroup contains FjGH5, 005GH5-1, CjGH5d, and all homologs related to Bacteroidota, including Flavobacterium (Figure 5b). Meanwhile, GH5_4 homologs related to Gram-positive bacteria, primarily Actinomycetota and Bacillota, were found in subclades 1 and 2.

As observed for the Flavobacterium Type I and Type II GH5_4 homologs, the eight residues involved in XyG binding and hydrolysis by BoGH5A (41) were highly conserved between clades 1, 2, and 3, again with the exception of Trp252. This residue was predominantly substituted with either His or Gly in clades 1 and 3 and in subclades 2A, 2B, and 2C (Figure 5b). Despite the occurrence of W252A and W252G GH5_4 Type II forms in isolates related to several Bacteroidota genera, only Flavobacterium Type II forms were detected in soil/plant metagenomes. Most GH5_4 homologs related to other Bacteroidota and Proteobacteria spp. were Type I. Together, these data demonstrate that subclade 2D has radiated in soil and become the dominant form. Furthermore, these analyses highlight a possible role for horizontal gene transfer of the GH5_4 enzyme between Bacteroidota and Proteobacteria, such as Cellvibrio, in response to occupying a similar niche.
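To make the tallying step behind these counts concrete, the following minimal Python sketch summarises retrieved GH5_4 homologs by taxon. This is not the authors' pipeline (which used the IMG/JGI platform); the input file, its columns, and the pre-assigned taxonomy are hypothetical:

import pandas as pd

# Hypothetical export of the metagenome BLASTP hits, one row per retrieved ORF,
# with taxonomy pre-assigned from IMG/JGI metadata
hits = pd.read_csv("gh5_hits.tsv", sep="\t")  # columns: orf_id, phylum, genus, evalue

# Apply the E-value cut-off used in the study (1e-40)
hits = hits[hits["evalue"] <= 1e-40]

# Tally homologs per phylum and per genus, mirroring the counts reported above
print(hits["phylum"].value_counts().head())
print(hits["genus"].value_counts().head(10))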
Discussion

Here, we demonstrate that Flavobacterium spp. can efficiently utilise the plant hemicellulose XyG and, through the identification of molecular markers, show that this metabolic trait is prevalent in plant-associated Bacteroidota spp. The explosion of next-generation sequencing studies investigating the composition of plant microbiota has revealed that Bacteroidota, particularly Flavobacterium, are highly enriched in this niche (27,28,46,47). However, these bacteria are typically not enriched when LMW substrates, such as sugars, organic acids, aromatics, and phenolics, are supplemented to soil samples under laboratory conditions (48-50). The possession of XyGUL and similar hemicellulose utilisation systems may therefore provide Bacteroidota with a competitive advantage when invading and persisting in the plant microbiome, facilitated through resource diversification (24). The high prevalence of XyGUL in plant-associated genomes and its low prevalence in those retrieved from other environmental niches, such as seawater, further suggest a strong selection for XyG utilisation as a strategy to succeed in the plant microbiome, similar to their organophosphorus utilisation capabilities (37). These data are also consistent with previous comparative genomics analyses indicating that terrestrial Flavobacterium have a greater ratio of GH enzymes relative to peptidases, including enzymes predicted to target plant pectins (51). Whilst Gammaproteobacteria, such as Cellvibrio and Xanthomonas, possess endo-acting xyloglucanases, these bacteria only contain a TonB-dependent transporter akin to XusC (16,17), lacking the surface-exposed glycan-binding protein (XusD) identified in this study. Therefore, possession of XusCD may increase the competitive ability of Flavobacterium to capture these complex exudates (52), consistent with the ecological function of these transporters in marine and gut microbiomes (53,54).

Terrestrial Bacteroidota can utilise other HMW substrates, including pectin (55), alternative hemicelluloses (13,20,21), fungal polysaccharides (19,56), and alternative plant cell wall components (13,21). Together with our data, these observations support a model whereby HMW C is the preferential nutrient and energy source for Bacteroidota in soil and plant microbiomes. The domestication of agricultural crops is driving a significant loss of various key microbiota, including Bacteroidota, hypothesised to be a consequence of changes in crop root exudation profiles with a relative increase in the LMW:HMW ratio (8). This reduction in beneficial microbes, such as Flavobacteriaceae and Chitinophagaceae, may have negative impacts on agricultural soil health (57) and the plant's ability to suppress pathogens (31,32,58-60). Interestingly, the relative abundance of genes encoding XyGUL components, such as GH5, GH31, GH3, and GH95, was also significantly higher in healthy versus diseased pepper plants challenged with Fusarium (61). Collectively, these studies and ours highlight a possible link between Bacteroidota, HMW C utilisation, and plant disease suppression. We propose that future research should focus on explicitly linking HMW exudation to the assemblage of Bacteroidota in the plant microbiome in the context of crop domestication and host disease. These studies are essential to better understand the drivers of Bacteroidota assemblage and host-microbe interactions in the plant microbiome (35).
Given the proposed importance of plant polysaccharides in soil aggregation and the long-term storage of C (3,6), degradation of these molecules may represent a significant and relatively overlooked cog in the global C cycle. The comparatively efficient utilisation of glycans by Bacteroidota relative to non-Bacteroidota, as observed in marine systems (52,62,63), may therefore have consequences for the microbial C pump (3), which can be altered by changes in bacterial C use efficiency (64,65). Microbial polysaccharides also represent a major fraction of recalcitrant or 'stabilised' C in soil, a fraction which is vulnerable to microbial attack in response to a climate-induced influx of labile C or changes in land-use intensity (3,4,64-66). Whether shifts in Bacteroidota abundance and diversity, which are known to be good indicators of soil health, could influence this key step in the terrestrial global C cycle warrants further investigation (8,57).

The lack of XyGUL in certain Bacteroidota genera, e.g., Chryseobacterium, coupled with a 10-60% occurrence of GH5_4 homologs in other Bacteroidota genera, suggests some level of functional partitioning within this phylum. Indeed, Chryseobacterium spp. possess an enhanced capability to degrade microbial polysaccharides associated with Gram-positive peptidoglycan compared to F. johnsoniae or Sphingobacterium sp. (67). C. pinensis also lacks the ability to utilise XyG despite its capability to grow on other hemicelluloses and fungal glycans (19-21,68), and our comparative genomics confirmed that this bacterium lacks a GH5_4 homolog. Hence, whilst Bacteroidota likely specialise in HMW C utilisation in situ, resource partitioning or metabolic heterogeneity within this phylum exists to target different HMW C substrates.

Our data also reveal that subclade 2D of the GH5_4 family has radiated in soil microbiomes and is the dominant form, in contrast with the abundant forms found in engineered systems or animal guts (44). Subclade 2D is distinguished by a conserved Trp at position 252 (BoGH5A numbering), a residue that is typically Gly, Ala, or His in clades 1 and 3. Clade 3 GH5_4 enzymes possess high activity towards multiple polysaccharides in addition to xyloglucan, in contrast to the clade 2D homologs produced by C. japonicus (CjGH5d, CjGH5e, CjGH5f) (18,39,44). Hence, the presence of clade 1 and 3 GH5_4 enzymes in Actinomycetota and Bacillota may reflect a trade-off whereby these bacteria carry fewer CAZymes with greater individual substrate ranges, relative to Bacteroidota, in order to scavenge complex C molecules in bulk soil away from plant roots (5,24,69,70). However, enzyme specificity versus promiscuity is likely driven by many more mutations that influence active site architecture through alterations in secondary structure (39). This may explain why mutation of Trp252 in BoGH5A did not broaden its substrate range.
In gut Bacteroidota, distinct PUL are required to degrade simple and complex arabinoxylans and are differentially regulated in response to these different forms of the polysaccharide (23,71). The existence of Type II GH5_4 homologs carrying a single mutation, typically found in PUL that differ significantly from the conserved Flavobacterium XyGUL in their organisation and overall complexity, may represent something similar. Indeed, XyG is often part of a larger polysaccharide exudate complex, which includes pectin and xylan complexes (11). These complex Type II-harbouring PUL may therefore represent specialisation in utilising either non-exudate plant polysaccharides, particularly those associated with plant cell walls or root tip border cell-mucilage matrices (9,21), or more complex forms released by plant roots (11).

In summary, using Flavobacterium as the model, we identified highly conserved XyGUL among plant-associated members of this genus. Whilst the initiator enzyme for XyG polysaccharide hydrolysis, GH5_4, is found in the genomes of other Bacteroidota and Proteobacteria spp., we hypothesise that the specialised Flavobacterium XyGUL, harbouring the active Type I form, enables these bacteria to competitively acquire this complex carbohydrate. Given the emergent knowledge that most plants, including globally important crop species, exude significant quantities of XyG, we propose that this hemicellulose may represent an important nutrient source for plant-associated Flavobacterium and underpins their ability to successfully invade and persist in a highly competitive plant microbiome.

Comparative proteomics of Flavobacterium spp.

Methods adapted from (37,74) were combined. Briefly, 25 mL cell cultures (n = 3) grown to an OD600 of ~0.6-1 were harvested by centrifugation at 3,200 x g for 45 min at 4 °C. Cells were resuspended in 20 mM Tris-HCl pH 7.8 and re-pelleted at 13,000 x g for 5 min at 4 °C. Cell lysis was achieved by boiling in 100 μL lithium dodecyl sulphate (LDS) buffer (Expedeon) prior to loading 20 μL onto a 4-20% Bis-Tris sodium dodecyl sulphate (SDS) precast gel (Expedeon). SDS-PAGE was performed with 1X RunBlue SDS Running Buffer (TEO-Tricine; Expedeon) at 140 V for 5-10 min. Gels were stained with Instant Blue (Expedeon) and a single gel band containing all the protein was excised. Gel sections were de-stained with 50 mM ammonium bicarbonate in 50% (v/v) ethanol, dehydrated with 100% ethanol, reduced and alkylated with tris(2-carboxyethyl)phosphine (TCEP) and iodoacetamide (IAA), washed with 50 mM ammonium bicarbonate in 50% (v/v) ethanol, and dehydrated with 100% ethanol prior to overnight digestion with trypsin. Samples were analysed by nanoLC-ESI-MS/MS using an Ultimate 3000 LC system (Dionex-LC Packings) coupled to an Orbitrap Fusion mass spectrometer (Thermo Scientific, USA), with a 60 min LC separation on a 25 cm column and settings as previously described (75). The resulting tandem mass spectrometry (MS/MS) files were searched against the relevant protein sequence database (F. johnsoniae UW101, UP000214645; Flavobacterium sp. OSR005, Table SX) using MaxQuant with default settings, and quantification was achieved using label-free quantification (LFQ). Statistical analysis and data visualisation of the exoproteomes were carried out in Perseus (76).
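For readers without access to Perseus, the following minimal Python sketch illustrates the kind of two-condition comparison performed on MaxQuant LFQ output. This is not the authors' workflow, and the sample column names are hypothetical:

import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

pg = pd.read_csv("proteinGroups.txt", sep="\t")  # MaxQuant output table
xyg = [f"LFQ intensity XyG_{i}" for i in (1, 2, 3)]  # replicates grown on xyloglucan
glc = [f"LFQ intensity Glc_{i}" for i in (1, 2, 3)]  # replicates grown on glucose

# log2-transform intensities, treating zeros (protein not detected) as missing
vals = np.log2(pg[xyg + glc].replace(0, np.nan))

# Welch t-test per protein plus log2 fold change (XyG versus glucose)
_, p = ttest_ind(vals[xyg], vals[glc], axis=1, equal_var=False, nan_policy="omit")
pg["log2FC"] = vals[xyg].mean(axis=1) - vals[glc].mean(axis=1)
pg["p_value"] = np.asarray(p)

# Proteins most strongly induced on xyloglucan (e.g. XusC/XusD in the main text)
print(pg.nlargest(10, "log2FC")[["Protein IDs", "log2FC", "p_value"]])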
Bacterial genetics
To construct various XyGUL mutants, the method from (77) was adapted, as per our previous study (36). Briefly, fragments ~1.5 kb in length upstream and downstream of the targeted genes were cloned into the plasmid pYT313 using the HiFi assembly kit (New England Biosciences). A full list of primers can be found in Table S1. Plasmid inserts were verified by Sanger sequencing. The resulting plasmids were transformed into the donor strain E. coli S17-1 λpir (S17-1 λpir) and mobilised into F. johnsoniae via conjugation: overnight (5 mL) F. johnsoniae wild-type and pYT313-transformed S17-1 λpir cultures were inoculated (20% v/v) into fresh CYE (5 mL) and incubated for a further 8 h. Cells were pelleted at 1,800 × g for 10 min at 22 °C and washed in 1 mL CYE, and a 200 μL donor:recipient suspension (1:1, in CYE) was spotted onto CYE containing CaCl2 (10 mM) and incubated overnight at 28 °C. Biofilms were scraped from the agar surface and resuspended in 1 mL minimal A medium (no C source). Transconjugants were selected by spreading 5 to 100 μL aliquots on CYE containing erythromycin (100 μg mL-1). Colonies were restreaked onto CYE erythromycin, and single homologous recombination events were confirmed by PCR prior to overnight growth in CYE followed by plating onto CYE containing 10% (w/v) sucrose to select for a second recombination event resulting in plasmid excision. To identify double homologous recombinants, colonies were replica plated onto CYE containing 10% (w/v) sucrose and CYE containing erythromycin. Erythromycin-sensitive colonies were screened by PCR.

For complementation of the F. johnsoniae ΔxusCD mutant, both genes and the 300-bp upstream region were cloned into pCP11 using the HiFi assembly kit. The insert was verified by Sanger sequencing and the plasmid was mobilised into DSM2064 via conjugation using S17-1 λpir as the donor strain. The method was identical to that described above for transfer of the suicide plasmid pYT313, except that 1 mL overnight cultures of donor and recipient were directly washed and resuspended in 200 μL CYE prior to spotting onto CYE containing CaCl2 (10 mM). Cells were scraped from the solid medium and transformants were selected by creating a serial dilution (10-1 to 10-5) from the cell suspension and spotting 20 μL of each dilution onto CYE containing erythromycin (100 μg mL-1).

Production and purification of recombinant GH5_4 homologs
Genes encoding the GH5_4 homologs (Fjoh_0774, BACOVA_02653, OSR005_04227 and OSR005_03871), lacking the N-terminal signal peptide and stop codon, were amplified by PCR and ligated into the NdeI and XhoI sites of pET21a. Site-directed mutagenesis of the Trp252 residue in BoGH5A was performed using the QuikChange II Site-Directed Mutagenesis (SDM) Kit (Agilent Technologies) according to the manufacturer's protocol. For production of recombinant proteins, a single colony of E.
coli BL21 (DE3) transformed with the desired plasmid was inoculated into 5 mL LB broth with 100 μg/mL ampicillin and shaken (220 rpm) at 37 °C overnight (16 h) before transfer to a 1 L LB culture (in a 2 L conical flask) supplemented with 100 μg/mL ampicillin. Cultures were shaken at 37 °C at 220 rpm until an optical density at 600 nm (OD600) of ~0.6 was reached. Following induction of gene expression with 0.4 mM (final concentration) IPTG, cells were incubated at 18 °C for a further 16 h before recovery by centrifugation at 8,000 × g for 15 min at 4 °C. Pellets were resuspended in 30 mL binding buffer (25 mM HEPES pH 7.4, 1 M NaCl, 5 mM imidazole) and stored at −20 °C until purification. Cells were thawed and lysed by sonication. The lysate was centrifuged at 13,000 × g for 15 min at 4 °C and the supernatant was loaded onto a 5 mL chelating Sepharose column charged with nickel(II) sulphate and pre-equilibrated with 50 mL of binding buffer. Following washes in binding buffer with increasing concentrations of imidazole, proteins were eluted with 25 mM HEPES pH 7.4, 400 mM imidazole, 100 mM NaCl. Fractions containing the target protein (as identified by SDS-PAGE) were pooled and concentrated to a volume of 1-2 mL using a Vivaspin centrifugal concentrator (Sartorius) with a 30,000 Da molecular weight cut-off. The concentrated sample was loaded onto a size exclusion chromatography (SEC) column (S200 16/60, Cytiva) equilibrated in 50 mM Tris-HCl, 200 mM NaCl, 10% (w/v) glycerol, and protein was separated at a flow rate of 0.5 mL min-1. The purity of peak fractions was analysed by SDS-PAGE and protein was stored at −20 °C until required.

Enzymatic assays of recombinant glycoside hydrolases
Purified recombinant GH5_4 homologs were screened for enzyme activity using the 3,5-dinitrosalicylic acid (DNSA) assay method (78). Briefly, for enzyme kinetics, 10-250 nM purified recombinant protein (n = 3) was incubated with decreasing concentrations of XyG (starting from 8 mg mL-1). At each time point a subsample was taken and mixed with a stop solution (DNSA working reagent containing 10 mg mL-1 glucose), prior to boiling at 95 °C for 15 min to develop the colour. To calculate the initial maximum velocity of the reaction (V0), at least five measurements were taken within the linear kinetics range. Absorbance at 575 nm was quantified. A standard curve (n = 3) against known concentrations of glucose was used to convert A575 to the amount of freely available reducing ends produced during cleavage of the β-glucan backbone of XyG. All assays were typically repeated with two separate batches of protein. For screening the promiscuous activity of OSR005-1, OSR005-2, BoGH5A, BoW252A or BoW252G, 1 μM of protein was incubated with 4 mg mL-1 polysaccharide for 30 min.

Comparative genomics and metagenomics
The online platform IMG/JGI (79) was used to conduct most comparative genomics analyses described in this study. Genomes and metagenomes were stored in genome sets (detailed in Tables S2 and S3), and BLASTP searches (E-value cut-off, 1e-40) were performed using the "jobs function" with either Fjoh_0774 or a homologue (IMG gene ID: 2644426200) of the commercial recombinant endoxyloglucanase (GH5_4) from Paenibacillus sp. (Megazyme; CAS 76901-10-5). The latter was used as it represents a sequence from outside GH5_4 subclade 2. For the metagenome searches, retrieved open reading frames (ORFs, n = 7,136) were locally aligned (BLASTP) against all GH5 ORFs (n = 1,246) deposited in the CAZy database (80).
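To make the kinetic workflow in "Enzymatic assays of recombinant glycoside hydrolases" concrete, the sketch below converts A575 readings to reducing-end concentrations via a glucose standard curve, estimates the initial velocity V0 from the linear range, and fits the Michaelis-Menten equation. All numerical values are placeholders for illustration, not measured data.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# Glucose standard curve: A575 vs known glucose (mM); the fit converts A575 to mM.
std_glc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_a575 = np.array([0.02, 0.11, 0.21, 0.42, 0.83])
std_fit = linregress(std_glc, std_a575)

def v0(times_min, a575):
    """Initial velocity (mM reducing ends per min) from >=5 linear-range points."""
    conc = (np.asarray(a575) - std_fit.intercept) / std_fit.slope
    return linregress(times_min, conc).slope

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Placeholder XyG series (mg/mL, from 8 mg/mL down); each V0 would come from v0().
s = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.25])
v = np.array([0.080, 0.072, 0.058, 0.041, 0.026, 0.015])
(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[v.max(), 1.0])
print(f"Vmax = {vmax:.3f} mM/min, Km = {km:.2f} mg/mL")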
Figure 1. Xyloglucan utilisation by soil Bacteroidota. (a) F. johnsoniae and plant-associated Flavobacterium strains isolated from various crop rhizospheres were grown on either glucose (Glu), galactomannan (GalM) or xyloglucan (XyG) hemicelluloses, or xyloglucan oligosaccharides (XyGOs), as the sole C and energy source. Data represent the mean of triplicate cultures and error bars denote the standard deviation. (b) Proteins enriched in the whole-cell proteomes (n = 3) of either F. johnsoniae or Flavobacterium sp. OSR005 when grown on XyG compared to growth on glucose. Red data points denote statistically significant (FDR-corrected p < 0.05) proteins with greater than 2-fold enrichment. Proteins in the predicted XyGUL are highlighted. (c) The XyGUL shares modules with those of X. citri and B. ovatus and is highly conserved among plant-associated Flavobacterium spp. (strain identifiers labelled). (d) The predicted function and localisation of proteins encoded in the induced XyGUL, with locus tags for F. johnsoniae provided. Colours in c and d represent the corresponding open reading frames and proteins. Locus tags correspond to F. johnsoniae. Numbers in d correspond to the predicted glycoside hydrolase family in the CAZy database. Asterisks represent the strain used in 1a. Abbreviations: OM, outer membrane; IM, inner membrane.

Figure 2. Genetic basis of xyloglucan utilisation in Flavobacterium johnsoniae. The wild type (blue circles), the outer membrane GH5_4 endoxyloglucanase mutant (Δ0774, red circles), and the outer membrane TonB-dependent transporter and cognate lipoprotein mutant (Δ0781-2, yellow circles) were grown on either glucose, XyG, XyGOs, or GalM as the sole C and energy source. Both mutants were complemented (triangles) with their respective native genes. Growth assays were performed in triplicate and error bars denote the standard deviation from the mean.

Figure 3. Characterisation of GH5_4 homologs in Flavobacterium spp. (a) Phylogenetic reconstruction of GH5_4 homologs identified in Flavobacterium spp. alongside those previously characterised, showing the variable Trp252 residue (BoGH5A numbering) and each adjacent amino acid residue. The genomic localisation of the GH5_4 homologs is given in columns to the right of the residues. Note, the Trp-containing forms in Flavobacterium (green branches) are almost exclusively associated with XyGUL. I and II represent the identified Type I and Type II Flavobacterium GH5_4 homologs. Abbreviations: Est, esterase; Pept, peptidase; CBM, carbohydrate-binding module.

Figure 4. The occurrence and diversity of GH5_4 homologs in terrestrial Bacteroidota spp. (a) Phylogenomic analysis of our previously generated multi-locus maximum-likelihood consensus tree, inferred from the comparison of 10 housekeeping and core genes present in 102 Flavobacterium isolates (37). The presence (filled symbol) or absence (hollow symbol) of CAZyme ORFs associated with PUL is displayed, as well as the genome size of each isolate (outer ring). The inner ring denotes the environmental niche from which each genome was isolated. (b) The prevalence of GH5_4 homologs in genomes from different genera within the phylum Bacteroidota, determined through BLASTP (E-value cut-off, 1e-40). The number of genomes screened per genus is given in parentheses. Colours denote the associated class rank. (c) Selected PUL containing GH5_4 homologs identified in other Bacteroidota spp. Numbers denote glycoside hydrolase family predictions. Abbreviations: CBM, carbohydrate-binding module; HTCS, hybrid two-component sensor. Colour schemes as per previous figures.

Figure 5.
Distribution of GH5_4 homologs in soil- and plant-associated metagenomes. Reconstructed phylogeny (maximum-likelihood method, 1,000 bootstraps) of the GH5_4 homologs in the CAZy database that best represent the ORFs retrieved from the metagenomes. The amino acid present at each of the key residue sites experimentally determined in previous studies is presented as coloured rings. The outer bar plots represent the overall gene abundance across all metagenomes. Branches are coloured based on their taxonomic classification at the class level. The outer ring represents the GH5_4 clades (I, II, or III) previously identified by (44).
2024-06-08T13:11:00.653Z
2024-06-03T00:00:00.000
{ "year": 2024, "sha1": "0d85d028b088f4db3b23d01383fe4111a1755206", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2024/06/03/2024.06.03.597110.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "0d85d028b088f4db3b23d01383fe4111a1755206", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
253097587
pes2o/s2orc
v3-fos-license
Sitagliptin Versus Placebo to Reduce the Incidence and Severity of Posttransplant Diabetes Mellitus After Kidney Transplantation—A Single-center, Randomized, Double-blind Controlled Trial

Background. Post-kidney transplant diabetes mellitus (PTDM) affects cardiovascular, allograft, and recipient health. We tested whether early intervention with sitagliptin, started for hyperglycemia (blood glucose >200 mg/dL) within the first week after transplant and discontinued at 3 mo, could prevent development of PTDM in patients without preexisting diabetes. Methods. The primary efficacy objective was to improve the 2-h oral glucose tolerance test (OGTT) result by >20 mg/dL at 3 mo posttransplant. The secondary efficacy objective was to prevent new-onset PTDM, defined as a normal OGTT at 3 mo. Results. Sixty-one patients consented, and 50 patients were analyzed. The 3-mo 2-h OGTT (end of treatment) was 141.00 ± 62.44 mg/dL in the sitagliptin arm and 165.22 ± 72.03 mg/dL (P = 0.218) in the placebo arm. The 6-mo 2-h OGTT (end of follow-up) was 174.38 ± 77.93 mg/dL in the sitagliptin arm and 171.86 ± 83.69 mg/dL (P = 0.918) in the placebo arm. The mean intrapatient difference between the 3- and 6-mo 2-h OGTT over the 3-mo period off study drug was 27.56 ± 52.74 mg/dL in the sitagliptin arm and −0.14 ± 45.80 mg/dL in the placebo arm (P = 0.0692). At 3 mo, 61.54% of sitagliptin and 43.48% of placebo patients had a normal 2-h OGTT (P = 0.2062), with an absolute risk reduction of 18.06%. There were no differences in HbA1c at 3 or 6 mo between the sitagliptin and placebo groups. Participants tolerated sitagliptin well. Conclusions. Although this study did not show a significant difference between groups, it can inform future studies of the use of sitagliptin in the very early posttransplant period.

INTRODUCTION
Kidney transplantation improves the quality of life and life expectancy of transplant recipients compared with those who remain on dialysis. Although the benefits of transplantation are numerous, risks and complications also accompany this life-changing therapy. One complication that occurs after kidney transplantation is posttransplant diabetes mellitus (PTDM). Diabetes mellitus carries risk of future development of infectious complications, cardiovascular disease, decreased allograft survival, and increased patient mortality. 1-4 Historically, new diagnoses of diabetes mellitus affect up to 15% to 50% of posttransplant patients. 5,6 Known risk factors for PTDM include family history of diabetes, age, obesity, genetic predisposition, impaired insulin release, impaired insulin uptake by muscle and adipose tissues, and impaired suppression of glucagon release. Transplant-specific factors associated with development of PTDM include hepatitis C and cytomegalovirus infections, a history of polycystic kidney disease, and use of immunosuppressant medications such as corticosteroids and calcineurin inhibitors. 4,7,8 Corticosteroids lead to increased gluconeogenesis and decreased glycogen synthesis by the liver, increase lipolysis and triglyceride release from adipose tissue, increase central and visceral adiposity, and decrease glucose uptake by muscles. Calcineurin inhibitors lead to impaired glucose tolerance and diabetes through decreased insulin gene expression and induction of islet cell apoptosis. 4,8 The majority of patients who are not previously diabetic may develop new PTDM in the first 3 mo after transplant. 8-10
Conversion from hyperglycemia to PTDM is not fully predictable, and prevention of new PTDM is paramount in a population that faces other health concerns. The stresses of surgery, along with high initial doses of corticosteroids to promote graft acceptance and higher calcineurin inhibitor doses to help avoid early rejection, are contributing factors. Patients documented to have impaired glucose tolerance by the oral glucose tolerance test (OGTT) pretransplant are at greater risk of developing new PTDM posttransplant. 11 This test is not performed routinely because of logistics and costs, as candidates can spend years on the kidney transplant waiting list. Hyperglycemia in the very early post-kidney transplant period is very common, with a reported rate of 90%. Nondiabetics with significant hyperglycemia (blood glucose >200 mg/dL) during the first week posttransplant have higher rates of newly diagnosed PTDM. 6 Although labs are monitored frequently in the early posttransplant period and recognition of hyperglycemia is apparent, there are limited strategies to prevent the development of new PTDM. Rapid tapering of corticosteroids and lowering of calcineurin inhibitor levels may not occur early posttransplant because of the concern for acute rejection. Previous investigators have used various medications to treat posttransplant diabetes mellitus, including metformin, insulin, and dipeptidyl-peptidase-4 (DPP-4) inhibitors. 12-14 One notable study attempted to prevent PTDM through isophane insulin use within the first 12 mo posttransplant. 12 Although effective at preventing PTDM compared with standard of care at 12 mo, there were incidences of asymptomatic hypoglycemia in the insulin-treated group. 12 There have been no studies to our knowledge using DPP-4 inhibitors starting in the first week after transplant to prevent PTDM in previously nondiabetic transplant recipients.

Sitagliptin, an orally active antidiabetic agent, is part of the family of DPP-4 inhibitors. Normally, the endogenous incretin hormones glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP) are rapidly inactivated by DPP-4. DPP-4 inhibitors work by inhibiting plasma DPP-4 function, thus allowing increased levels of the incretin hormones GLP-1 and GIP in the postprandial setting. This potentiates glucose-stimulated insulin production and release and slows glucagon release without causing hypoglycemia. 15,16 In diabetic animal models, DPP-4 inhibition has led to improvement in beta cell function and neogenesis. 17 The lack of hypoglycemia and of significant gastrointestinal side effects makes DPP-4 inhibitors generally well tolerated among patients. A prospective study has shown that treatment of PTDM with sitagliptin was effective in lowering hemoglobin A1c (HbA1c) in kidney transplant patients with new-onset diabetes after transplant and, importantly, did not alter calcineurin inhibitor levels. 18 To our knowledge, the use of sitagliptin for prevention of new PTDM has not been described in the literature. Given its mechanism of action and safety profile, we hypothesized that sitagliptin is an ideal pharmacologic agent to prevent new PTDM in the first 3 mo after transplant.

Study Design and Subjects
This was a single-center, randomized, double-blind, placebo-controlled trial in kidney transplant recipients to evaluate the role of sitagliptin in the prevention of PTDM. Participants in the trial provided written informed consent.
The trial adhered to the principles of the Declaration of Helsinki. The study was approved by the Washington University in Saint Louis institutional review board (IRB ID no. 201306111; registered as NCT01928199). All adults aged 18 y or older who received a living- or deceased-donor kidney allograft after July 25, 2013, were evaluated for inclusion. Those without a history of diabetes were screened for postoperative hyperglycemia (random blood glucose >200 mg/dL) within the first 72 h following transplantation. In June 2015, the inclusion time point was expanded to 120 h posttransplant based on the finding that some patients screened positive outside the initial 72 h. Exclusion criteria included a history of diabetes, including diet-controlled diabetes; a history of insulin or oral hypoglycemic agent use before transplant, including patients placed on medication and then taken off as a result of improved glycemic control; HbA1c ≥6.5% immediately before transplant; receipt of a simultaneous kidney-pancreas, kidney-liver, kidney-heart, or kidney-lung transplant; or a history of prior nonrenal transplant.

Study Procedures
Those who screened positive for postoperative hyperglycemia were approached to participate in the study. Once consented, patients were randomized by the research pharmacist using block randomization to receive sitagliptin or placebo; the pharmacist kept the record of patient allocation, whereas the rest of the study team and the participants remained blinded. Placebo tablets were provided that looked identical in shape, color, and size to a 25-mg sitagliptin tablet. Randomization was stratified by pretransplant HbA1c (<5.7% or 5.7%-6.4%). The target enrollment was 50 patients but was later extended to 61 patients to account for dropout and loss to follow-up. After randomization, patients continued on the study drug, dose-adjusted for renal function, for 3 mo. At 3 mo, the study drug was discontinued, and patients were followed for an additional 3 mo off the study drug. Participants had outpatient study visits at 1, 3, and 6 mo after discharge. Patients were instructed to monitor their blood glucose at least once per day via fingerstick glucometer; monitoring of fasting blood glucose and of blood glucose 2 h after the largest meal of the day was requested. At the 1-mo and 3-mo visits, an investigator reviewed the blood glucose logs and assessed for adverse effects. Additionally, at the 3-mo study visit, HbA1c, 2-h OGTT blood glucose, and fasting C-peptide level were obtained. Following discontinuation of the study medication at the 3-mo study visit, participants had a 6-mo study visit where repeat HbA1c, 2-h OGTT blood glucose, and fasting C-peptide levels were obtained. In addition to the monitoring for the study described above, patients obtained routine posttransplant laboratory monitoring including a complete blood count, renal panel, and tacrolimus trough level. Labs were assessed per center protocol twice weekly for 2 wk, weekly from 2 through 12 wk, and every 2 wk for the duration of the study. Elevated blood glucose readings were assessed by the study investigators. Diagnosis and treatment of diabetes were in accordance with published guidelines, with the exception that add-on therapy to the study medication could not include a DPP-4 inhibitor because the investigators were blinded to the randomized arm.
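As an illustration of the allocation scheme described above (block randomization with a 1:1 sitagliptin:placebo ratio, stratified by pretransplant HbA1c), the sketch below generates one blocked sequence per stratum. The block size of 4 and the seeds are assumptions for illustration; in the trial, the sequence was held by the research pharmacist to preserve blinding.

import random

def block_sequence(n_blocks, block_size=4, seed=None):
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["sitagliptin", "placebo"] * (block_size // 2)
        rng.shuffle(block)  # each block remains balanced 1:1
        sequence.extend(block)
    return sequence

# One independent allocation list per pretransplant HbA1c stratum.
strata = {"<5.7%": block_sequence(8, seed=1),
          "5.7-6.4%": block_sequence(8, seed=2)}
_counts = {s: 0 for s in strata}

def assign(stratum):
    """Return the next assignment for a newly consented patient in a stratum."""
    allocation = strata[stratum][_counts[stratum]]
    _counts[stratum] += 1
    return allocation

print(assign("<5.7%"))  # e.g. 'placebo'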
Primary and Secondary Endpoints
The primary efficacy objective was to assess the efficacy of sitagliptin, compared with placebo, to improve the 2-h OGTT by >20 mg/dL at 3 mo posttransplant in patients without preexisting diabetes undergoing living-donor or deceased-donor kidney transplantation. The secondary efficacy objective was to assess the efficacy of sitagliptin, compared with placebo, to prevent new-onset posttransplant diabetes mellitus, as determined by a normal OGTT at 3 mo posttransplant, in the same population. The primary efficacy endpoint was a difference in 2-h OGTT blood glucose of ≥20 mg/dL assessed at the 3-mo study visit. For reference, a normal fasting glucose is <100 mg/dL; at 2 h, the expected normal value of glucose for an OGTT is <140 mg/dL; 140 mg/dL to 199 mg/dL indicates impaired glucose tolerance; and a value >200 mg/dL is diagnostic of diabetes mellitus. These levels were compared with the values at the 6-mo study visit. The secondary endpoint assessed in this study was the attainment of a normal 2-h OGTT blood glucose at 3 mo in the study group compared with placebo. Other outcomes included the improvement of HbA1c by ≥0.5% between groups and differences in fasting OGTT blood glucose, fasting blood glucose, postprandial blood glucose, fasting C-peptide, and fasting insulin within the groups from the end of treatment to the end of the observation period. Graft function and immunosuppression serum levels were also compared between study groups.

Maintenance Immunosuppression
Maintenance immunosuppression was standardized and included administration of methylprednisolone 7 mg/kg on the day of transplant, followed by oral prednisone 1 mg/kg (maximum 80 mg) on postoperative days (PODs) 1 and 2, then 20 mg daily for 10 d, 15 mg daily for 7 d, 10 mg daily for 7 d, and 5 mg daily thereafter. Tacrolimus was initiated on POD 1 and adjusted to a target trough level of 7 to 10 ng/mL for 1 mo, then 3 to 7 ng/mL for the remainder of the study period. Mycophenolate sodium was initiated at 720 mg BID and reduced to 360 mg BID once the target tacrolimus trough level was achieved.

Statistical Analysis
Demographic and admission variables were compared between groups using the 2-sample t test or Mann-Whitney U test for continuous variables, as appropriate. Categorical variables were tested using the chi-square test or Fisher's exact test, as appropriate. Analysis was based on the intention-to-treat principle. Efficacy outcomes were defined as the change from month 3 to month 6. The mean change in outcomes was compared between groups using the 2-sample t test. All statistical tests were 2-sided at a significance level of 0.05, and analyses were performed with SAS, version 9.4 (SAS Inc, Cary, NC).

Sample Size Calculation
Based on a sample size of 25 patients per group, this study had 80.7% power to detect a mean difference of 20 mg/dL in the 2-h OGTT based on the 2-sample t test at a significance level of 0.05. The calculation assumes an SD of 25 mg/dL or less in both groups. 19
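The stated power can be checked directly. The sketch below computes the power of a two-sample t-test for a 20 mg/dL difference with an SD of 25 mg/dL (Cohen's d = 0.8) and 25 patients per group at a two-sided alpha of 0.05; small discrepancies from the reported 80.7% may reflect the software or approximation originally used.

from statsmodels.stats.power import TTestIndPower

effect_size = 20 / 25  # Cohen's d: mean difference divided by the common SD
power = TTestIndPower().power(effect_size=effect_size, nobs1=25,
                              alpha=0.05, ratio=1.0, alternative="two-sided")
print(f"power = {power:.3f}")  # roughly 0.79-0.81 depending on the approximation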
RESULTS
In total, 61 patients consented to participate in the study, 32 in the sitagliptin arm and 29 in the placebo arm. The CONSORT flow diagram (Figure 1) shows enrollment and follow-up for the study. By the end of the study, 27 patients in the sitagliptin arm and 23 patients in the placebo arm had completed at least the 3- or 6-mo visit for inclusion in the efficacy analyses. Table 1 shows the baseline characteristics of the 2 groups. There were no differences in sex, race, cause of end-stage renal disease, type of transplant, or comorbidities between the groups. The mean baseline HbA1c in the sitagliptin and placebo arms was 5.27% (±0.46%) and 5.13% (±0.41%), respectively (P = 0.23). The mean participant body mass index at study entry was 28.82 ± 4.58 kg/m2 in the sitagliptin group and 29.40 ± 4.27 kg/m2 in the placebo group (P = 0.6133). The time of day of the qualifying blood glucose >200 mg/dL was divided among 8-h windows beginning at midnight, 8:00 AM, and 4:00 PM.

Primary Efficacy Analysis
The 3-mo 2-h OGTT, at the end of the study drug treatment period, was 141.00 ± 62.44 mg/dL in the sitagliptin arm and 165.22 ± 72.03 mg/dL in the placebo arm, a difference of 24.22 mg/dL (P = 0.218). This was not statistically significant. The sitagliptin arm mean 2-h OGTT was just above the 140 mg/dL cutoff for a normal OGTT. The 6-mo 2-h OGTT was 174.38 ± 77.93 mg/dL in the sitagliptin arm and 171.86 ± 83.69 mg/dL in the placebo arm (P = 0.918). Figure 2 depicts the 2-h OGTT at months 3 and 6 for the placebo and sitagliptin groups. The mean intrapatient difference in 2-h OGTT between months 3 and 6 (6-mo OGTT minus 3-mo OGTT) was 27.56 ± 52.74 mg/dL in the sitagliptin arm and −0.14 ± 45.80 mg/dL in the placebo arm (P = 0.0692). Table 2 shows the findings of the primary efficacy analysis.

Secondary Efficacy Analysis
The percentages of patients in the sitagliptin and placebo arms with a normal 2-h OGTT at 3 mo were 61.54% and 43.48%, respectively (P = 0.2678), with an absolute risk reduction of 18.06%. At 6 mo, after both groups had a 3-mo washout period without the study drug, the percentages of patients in the sitagliptin and placebo arms with a normal 2-h OGTT were 41.67% and 47.62%, respectively (P = 0.900). No difference was seen in HbA1c between treatment groups at the end of the treatment period or at the end of the follow-up period. Comparing sitagliptin versus placebo, at 3 mo the HbA1c was 5.52% ± 0.58% versus 5.78% ± 0.85% (P = 0.2333), and at 6 mo the HbA1c was 5.91% ± 0.84% versus 5.79% ± 0.69% (P = 0.5998). The absolute mean difference in HbA1c between 3 and 6 mo (6 mo minus 3 mo) was 0.39% in the sitagliptin group and 0.01% in the placebo group (P = 0.0568). Figure 3 illustrates the change in HbA1c between 3 and 6 mo for the placebo and sitagliptin groups.

Glycemic Outcomes
Fasting OGTT, average fasting blood glucose, average postprandial blood glucose, and fasting insulin and fasting C-peptide values at the 3- and 6-mo time points, as well as OGTT results divided among normal, impaired glucose tolerance, and diabetes, are shown in Table 3. There were no significant differences in means or proportions for these outcomes. Figure 4 illustrates the change in fasting OGTT from month 3 to month 6 for the placebo and sitagliptin groups. One month into study participation, 2 patients in the sitagliptin arm and 3 patients in the placebo arm became diabetic and required additional medication to treat hyperglycemia. By the 3-mo follow-up, a total of 3 patients in the sitagliptin arm and 5 patients in the placebo arm had been identified as diabetic and required additional medication to treat hyperglycemia; additional patients were identified at the time of the 3-mo follow-up OGTT. At the final 6-mo follow-up, a total of 5 patients in the sitagliptin group and 6 in the placebo group were diabetic and required additional medication to treat hyperglycemia, with additional patients identified at the time of the 6-mo OGTT. Medications used to treat these patients for diabetes included metformin, sulfonylureas, and insulin.
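The absolute risk reduction reported above can be reproduced arithmetically, as in the sketch below. The group sizes (26 sitagliptin and 23 placebo patients with evaluable 3-mo OGTTs) are inferred from the reported percentages and may differ slightly from the actual denominators.

p_sita = 16 / 26   # 61.54% of the sitagliptin arm with a normal 2-h OGTT
p_plac = 10 / 23   # 43.48% of the placebo arm with a normal 2-h OGTT
arr = p_sita - p_plac
print(f"ARR = {arr * 100:.2f}%, NNT = {1 / arr:.1f}")
# -> ARR = 18.06%, NNT = 5.5 (patients treated per additional normal OGTT)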
Safety Outcomes
Overall, the study medication was well tolerated. Both groups had similar renal function and tacrolimus levels through the follow-up period (Table 4). No patients experienced acute cellular or antibody-mediated rejection or allograft failure during follow-up. Three patients in each arm had serious adverse events by month 3 (11.54% versus 13.04%, P = 0.8729). Events in the sitagliptin group included a urine leak requiring surgical intervention, volume overload, and right arm weakness. In the placebo group, 1 patient had a hernia repair, another had shortness of breath, and another underwent elective bilateral native nephrectomies. There were 2 additional patients with serious adverse events by month 6 in the sitagliptin arm and 1 in the placebo arm (7.41% versus 5%, P = 0.7414); these included an admission for nausea and vomiting and a ureteral stricture, respectively. No patients in either group discontinued study medication because of adverse effects, and events were unlikely to be related to side effects of the study drug.

DISCUSSION
To the best of our knowledge, our study is the first randomized, double-blind, placebo-controlled study to evaluate the use of sitagliptin initiated at the time of transplant to prevent the development of PTDM in kidney transplant recipients without preexisting diabetes who had early posttransplant hyperglycemia. We used a medication class that would be expected to be beneficial in this population given its mechanism of action, lack of interaction with immunosuppressants, and low risk of hypoglycemia or other adverse effects. Posttransplant use of DPP-4 inhibitors has been examined in prior studies. A study conducted by Lane et al evaluated sitagliptin in 15 patients who were on average 4.7 y out from transplant but had developed diabetes posttransplant. 20 They showed that sitagliptin was effective in lowering HbA1c without affecting immunosuppression levels for tacrolimus or sirolimus. Werzowa et al explored the use of vildagliptin or pioglitazone in comparison to placebo over a 3-mo period in 48 renal transplant patients at least 6 mo posttransplant, to whom vildagliptin, pioglitazone, or placebo were administered. 13,21 Our trial used sitagliptin within the first week posttransplant to improve glycemic control when blood glucose levels were elevated because of the physical stress of transplant surgery and medications that increase blood glucose. There is only 1 other study to our knowledge that sought to treat early hyperglycemia posttransplantation with the intention of reducing PTDM. This randomized trial compared early postoperative intermediate-acting (isophane) insulin (treatment group) with treatment using short-acting insulin and an oral sulfonylurea (control/standard-of-care group) according to consensus treatment guidelines, with the primary endpoint being the difference in HbA1c at 3 mo. 12 Medications were titrated up or down based on blood glucose levels recorded the evening prior. Although the HbA1c increased from the baseline obtained at the time of transplant in both groups, the investigators found that, at 3 mo, the mean HbA1c was 0.52% lower in the isophane insulin treatment group than with standard of care (control).
Five patients experienced asymptomatic hypoglycemia, defined in the study as a blood glucose of 41 to 60 mg/dL, in the treatment group, compared with 1 patient in the control group. Although the strategy of tight blood glucose control early after transplant seemed to help prevent long-term PTDM, hypoglycemia remains a significant concern in newly transplanted patients whose medications are changing regularly, particularly in patients new to insulin who may not be fully aware of the symptoms associated with hypoglycemia. An advantage of our study design was the use of a DPP-4 inhibitor with a low risk of hypoglycemia. We did not observe any incidence of hypoglycemia in our study.

The kidney transplant population is a unique group of patients because their risk of newly diagnosed diabetes mellitus is greatest early after transplant. Perhaps the most well-known diabetes prevention study in the general population is the Diabetes Prevention Program, which studied standard lifestyle recommendations plus placebo, standard lifestyle recommendations plus metformin, or intensive lifestyle modification over the course of 2.8 y. 22 The incidence of diabetes was reduced by 58% with intensive lifestyle modification and by 31% with metformin compared with placebo. 22 These methods have limitations that prevent implementation in the early posttransplant period: serum creatinine often fluctuates and is elevated in the first several weeks after transplant, precluding the early prescription of metformin, and patients are unable to tolerate moderately intensive exercise early after transplant because of recent surgery.

The primary objective of our study was to assess the ability of sitagliptin to improve the 2-h OGTT by ≥20 mg/dL at 3 mo posttransplant. We observed a lower 2-h OGTT in the sitagliptin group (141.00 ± 62.44 mg/dL) than in the placebo group (165.22 ± 72.03 mg/dL); although this difference was not statistically significant (P = 0.218), it may be clinically significant, and it paralleled the reported average 2-h postprandial blood glucose at 3 mo (sitagliptin 140.00 ± 24.34 mg/dL and placebo 162.13 ± 51.93 mg/dL, P = 0.4451). After discontinuation of the study drug at 3 mo, there was no difference between the sitagliptin and placebo arms in the 2-h OGTT at 6 mo posttransplant (sitagliptin 174.38 ± 77.93 mg/dL versus placebo 171.86 ± 83.69 mg/dL), with a mean increase in HbA1c between 3 and 6 mo of 0.39% in the sitagliptin arm compared with 0.01% in the placebo arm (P = 0.0568). The secondary objective of this study was to assess the ability of sitagliptin to prevent new-onset PTDM at 3 mo versus placebo. We found that 61.54% of the sitagliptin arm had a normal 2-h OGTT at 3 mo (2-h blood glucose <140 mg/dL) compared with 43.48% of the placebo arm, which did not reach statistical significance (P = 0.2062). Notably, more patients in the sitagliptin arm required treatment for diabetes by the 6-mo OGTT, after the study drug had been stopped, suggesting that they may have been deriving some blood glucose-lowering benefit from sitagliptin. Given the lack of statistical significance for both the primary and secondary objectives, our findings were ultimately negative. Our study was limited by sample size, and its findings can be useful to power a future study.
Although the HbA1c test can be affected by anemia and renal function, both of which can fluctuate within the first several months after transplant, we evaluated the baseline, 3-mo, and 6-mo hemoglobin levels, serum creatinine, and mean tacrolimus levels and found no significant differences between the sitagliptin and placebo groups (Table 4). The OGTT was used as the primary test in this study because anemia and renal function affect HbA1c, and our goal was to use a test that would not be affected by labs that may fluctuate while the study procedures were performed. However, patients in the study had their first OGTT on the last day of taking the study drug, and thus those on sitagliptin would be expected to have a lower OGTT. There is evidence from prior studies that poor glycemic control early posttransplant is associated with an increased risk of perioperative infections. 4,23,24 Although the current study did not specifically evaluate for a difference in early posttransplant infections related to hyperglycemia, no significant infections were reported. It is not clear whether continuing patients on the study drug through the first 6 mo would provide benefit in the prevention of new-onset PTDM or impaired glucose tolerance. Withdrawal of sitagliptin at the 3-mo follow-up led to an increase in the 2-h OGTT and HbA1c in the intervention patients, indicating a potential benefit of extending sitagliptin. The advantages of sitagliptin are the lack of hypoglycemic episodes, ease of use and tolerability by patients with minimal side effects, decreased morbidity with respect to infectious complications, and lack of potential interaction with immunosuppression. We believe our study, although not showing statistical significance, can better inform future studies that evaluate the management of posttransplant hyperglycemia in patients with no preexisting diabetes.
2022-10-25T06:17:04.940Z
2022-10-20T00:00:00.000
{ "year": 2022, "sha1": "fab0189605bed9041632d29f3d20d4a08bcbe2b3", "oa_license": "CCBYNCND", "oa_url": "https://journals.lww.com/transplantjournal/Fulltext/9900/Sitagliptin_Versus_Placebo_to_Reduce_the_Incidence.213.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2d8abea1734b4aa2183b1240615af3e8a55863f1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257055455
pes2o/s2orc
v3-fos-license
Mutation N856K in spike reduces fusogenicity and infectivity of Omicron BA.1

Dear Editor,

COVID-19 lung pathology is characterized by interstitial pneumonia with cell-cell fusion-induced syncytia and extensive tissue damage. 1 The correlation between viral fusogenicity and pathogenicity has been reported in SARS-CoV-2 variants. 2 Compared with the previous Delta or D614G variants, Omicron BA.1 has been proven to be less fusogenic and pathogenic, 3 potentially due to the high burden of mutations in spike, including 8 mutations in the NTD, 15 in or adjacent to the RBD, 2 in the furin-like cleavage motif (FL), and 6 in S2. However, the newly emerged Omicron sub-lineage BA.4/5 showed enhanced infectivity and fusogenicity compared to BA.1/2. 4 It seems that progressive mutational evolution of the spike in Omicron sub-lineages leads to increased transmissibility. 5

To evaluate the fusogenicity of SARS-CoV-2 variants, we established a SARS-CoV-2 spike-mediated cell-cell fusion system using HEK293FT cells transiently expressing spike-GFP fusion protein (Supplementary Fig. 1a) as effector cells and Vero-E6 cells naturally expressing ACE2 as target cells. The spike-GFP fusion proteins of D614G, Delta, or Omicron sub-variants were expressed on the cell membrane of effector cells (Supplementary Fig. 1b) at similar expression levels (Supplementary Fig. 2a). Large syncytia with strong fluorescence were observed when the D614G or Delta spike-GFP-expressing cells were co-cultured with Vero-E6 for 48 h (Fig. 1a). Contrastingly, no obvious syncytia were formed in the Omicron BA.1 spike-GFP or GFP (negative control, NC) groups (Fig. 1a), even after 73 h (Supplementary Fig. 1c). Quantitative analysis showed that cell fusion induced by Omicron BA.1 spike-GFP suffered a 9.2-fold (p < 0.0001) and 11.9-fold (p = 0.0003) reduction when compared to that induced by D614G and Delta spike-GFP, respectively (Fig. 1b), indicating a weakened capability of syncytium formation of the Omicron BA.1 spike. Interestingly, the other Omicron sub-lineages, BA.2, BA.4/5, BA.2.12.1, and BA.2.75, remained weakly fusogenic but caused more syncytium formation than BA.1 (Fig. 1b and Supplementary Fig. 1d).
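To illustrate the kind of quantitative comparison behind the fold-reduction values above, the sketch below computes fold-differences in mean fusion and Welch's t-tests across triplicates. The fusion percentages used here are fabricated placeholders, not the study's data.

import numpy as np
from scipy import stats

fusion = {"D614G": np.array([46.0, 48.5, 47.2]),
          "Delta": np.array([59.1, 62.3, 60.0]),
          "BA.1":  np.array([5.0, 5.3, 4.9])}

for variant in ("D614G", "Delta"):
    fold = fusion[variant].mean() / fusion["BA.1"].mean()
    t, p = stats.ttest_ind(fusion[variant], fusion["BA.1"], equal_var=False)
    print(f"{variant} vs BA.1: {fold:.1f}-fold higher fusion, p = {p:.4g}")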
Spike-mediated infection activity was tested using a VSV-based pseudovirus (PsV) assay. Similar spike expression levels were displayed on the viral membranes of the different SARS-CoV-2 variant PsVs (Supplementary Fig. 3). These PsVs, with normalized viral particle numbers, were used to infect Vero-E6, Huh-7, and Calu-3 (a human airway epithelial cell line) cells. The measured relative light units (RLU) of BA.1 PsV-infected target cells were significantly lower than those of the D614G or Delta variant. Omicron BA.2 showed infectious activity similar to that of BA.1 PsV, while Omicron BA.4/5, BA.2.12.1, and BA.2.75 were more infectious than BA.1 (Fig. 1c). These virological features of decreased cell fusion activity and lower viral replication of Omicron BA.1 are consistent with the reported fusogenicity and infectivity of live BA.1 virus. 3

Omicron BA.1 contains a high burden of mutations scattered across different domains of the viral genome, especially in the spike protein. To identify the domains in the spike protein responsible for the altered fusogenicity and infectivity of Omicron BA.1, the spike-based cell fusion assay and PsV infection assay were carried out with engineered spike proteins generated by domain swapping between D614G and BA.1 (Supplementary Fig. 4a). The results showed that the exchange of the S2 regions between D614G and BA.1 largely reversed their fusogenicity and infectivity, while the exchange of the NTD, RBD, and FL regions had little effect (Supplementary Fig. 4b-d). Thus, D614G spike-GFP mutants containing single-site substitutions from the S2 region of Omicron BA.1 (N764K, D796Y, N856K, Q954H, N969K, or L981F) were further evaluated. Interestingly, when compared with the parental D614G variant, a dramatic reduction in cell fusion was observed for the D614G spike containing the N856K substitution (Fig. 1d and Supplementary Fig. 5a), while minimal impact on fusogenicity was observed for the spike mutants with the other five substitutions (Supplementary Fig. 5b, c). We speculated that the introduction of an N856K substitution into the Delta spike would also reduce its fusogenicity and other virological features, despite Delta being reported to have enhanced fusogenicity as a result of the P681R substitution. 6 As expected, only small syncytia were observed in the fusion assay of the Delta-N856K spike compared to the native Delta spike (Fig. 1d and Supplementary Fig. 5a). We then introduced the restorative mutation K856N into the Omicron BA.1 spike (BA.1-K856N) and found that the K856N substitution indeed increased the fusion of BA.1 (Fig. 1d and Supplementary Fig. 5a). Similarly, the infectivity of PsV containing the D614G-N856K or Delta-N856K spike was significantly decreased compared to the corresponding parental D614G or Delta PsV (Fig. 1e). Conversely, the K856N substitution significantly increased the infection activity of BA.1 PsV (Fig. 1e). Similar to BA.1-K856N, the newer Omicron sub-lineages BA.2, BA.4/5, BA.2.12.1, and BA.2.75, which carry the native N856 in the S2 region, showed slightly increased cell fusion and infection compared with Omicron BA.1 (Fig. 1b, c). Collectively, the above results suggest that N856K is a key mutation in the Omicron BA.1 variant responsible for its attenuated fusogenicity and infectivity.

However, the mechanisms by which N856K reduces the fusogenicity and infectivity of BA.1 need to be further explored. We found that neither spike-GFP expression (Supplementary Fig. 2) nor ACE2 binding (Supplementary Fig. 6) was correlated with cell-cell fusion or PsV infection in Vero-E6, Huh-7, or Calu-3 cells. S1/S2 cleavage efficiency was also evaluated, and N856K did not alter the S1/S2 cleavage ability of the native or engineered spike proteins (Supplementary Fig. 7). We then hypothesized two mechanisms based on structural analysis. Firstly, N856K may disturb the functional fusion peptide structure required for membrane insertion. By electrostatic potential calculation, the negative charge potentials around E819 and D820 in the fusion peptide carrying the N856K mutation were shown to be attenuated (Fig. 1f), which may lead to weaker electrostatic interactions and probably hinder the binding of E819 and D820 to Ca2+ (Fig. 1h), an important regulator of cell fusion. 7 To investigate this structural disturbance effect, we introduced N856D into the D614G spike. The D614G-N856D mutant reduced cell fusion activity from 47 to 35% (1.4-fold, p = 0.0088) (Fig. 1h). Instead of the hydrogen bond between N856 and E819 in the WT fusion peptide (Supplementary Fig. 8a), the N856D substitution may introduce a repulsive interaction (Fig. 1h) that is disadvantageous for maintaining the wedge-shaped structure. 8 In addition, a neutral amino acid substitution at N856 (D614G-N856S) did not decrease the fusion activity (data not shown).
Secondly, N856K may stabilize the pre-fusion spike conformation and encumber the transition to the post-fusion conformation required for cell fusion. N856 forms only one hydrogen bond, with the backbone of A852 in the same protomer, in D614G, whereas K856 forms a hydrogen bond with T572 (ref. 9) and a salt-bridge with D568 in the S1 subunit of another protomer in Omicron BA.1 (ref. 10) (Fig. 1g), which could potentially prevent S1 shedding and hamper the conformational change of spike. To interrupt the salt-bridge between D568 and K856, an additional D568N mutation was introduced into D614G-N856K. The double mutant (N856K/D568N) partially restored the cell-cell fusion of the D614G-N856K spike from 19 to 31% (1.7-fold, p = 0.0203), reaching a level similar to that of D614G-N856D (35%) (Fig. 1h). We then introduced another double mutation, D568K/N856D, with the residue types at the two positions swapped relative to Omicron, into D614G. Modeling of D614G-D568K/N856D showed that a salt-bridge could be formed between K568 and D856 (Fig. 1h and Supplementary Fig. 8b). Consistent with the modeling analysis, D614G-D568K/N856D showed a further reduction in fusion activity from 35 to 19% (1.8-fold reduction, p = 0.0040) compared to D614G-N856D, reaching a level similar to that of D614G-N856K (19%) (Fig. 1h).

In summary, N856K significantly reduced the fusogenicity and infectivity of Omicron BA.1. The reverse mutation K856N in Omicron BA.1 partially restored cell fusion activity (4.4-fold enhancement) and PsV infectivity (1.8- to 2.5-fold increase), highlighting the importance of continuous monitoring and further investigation. Given the potential link between fusogenicity and disease severity of SARS-CoV-2 infection, it is useful and important to monitor the fusogenicity of future variants in addition to their transmissibility and vaccine immune evasion capabilities.

DATA AVAILABILITY
The data that support this study are available from the corresponding author upon reasonable request. Source data supporting the findings of this study are provided within the paper.

f Electrostatic potential distribution on the solvent-accessible surfaces of the fusion peptides of D614G and N856K, based on PDB 7MY8 and a modeled structure, respectively. The electrostatic potential distribution was generated using APBS Tools 2.1 in PyMOL. The surface potential representation has charge levels from −5 kT/e (red) to +5 kT/e (blue). The circles outlined in yellow indicate the binding region of Ca2+. g The interactions between residue 856 and its surrounding residues. The S2 subunit containing residue 856 in one protomer is in blue. The S1 subunit of another protomer is in green. Residues are shown as sticks. The hydrogen bond and salt-bridge between residue 856 and other residues are shown as dashed lines in sky blue and dark blue, respectively. The predicted structure of the N856D/D568K mutant was superimposed and aligned with that of the D614G S2. The predicted mutant structures are colored yellow and the salt-bridge interaction between D856 and K568 is shown. h Schematic representations of the speculated FP structures and S1/S2 interactions, based on structural analysis or modeling of the D614G spike with residue 856 and/or 568 mutation designs, together with their speculated influences on, and experimental results for, fusion activity. The panel "FP Distortion" shows the proposed structure of the fusion peptide (FP) when residue 856 is asparagine (N), lysine (K), or aspartic acid (D).
The panel "S1-S2 Salt-bridge" shows whether a salt-bridge is formed between residue 856 in S2 and residue 568 in S1. The "Fusion Reduction" row indicates whether fusion activity is speculated to be weakened (+) or not (−). The panel "Cell-cell fusion" shows the experimental results for the different D614G spike mutation constructs, expressed as cell fusion (%). Images are shown as merged GFP and DAPI signals; scale bar: 100 μm. Quantitative analysis of cell fusion for the different D614G spike mutants is shown below. The experiments were performed in triplicate and the data are plotted as mean (n = 3). Comparisons were performed (1) among D614G and its N856 single-amino-acid mutants designed to verify the FP disturbance hypothesis, and (2) between single-amino-acid mutants and double-amino-acid mutants designed to test the S1/S2 interaction hypothesis. Data analysis was performed by unpaired two-tailed Student's t-test with or without Welch's correction for all statistical analyses. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001, NS = p > 0.05
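A simple way to check the proposed D568-K856 salt-bridge in a structure is to measure the minimum distance between the Asp carboxylate oxygens and the Lys side-chain nitrogen, with distances below roughly 4 Å commonly taken to indicate a salt-bridge. In the sketch below, the file name, chain identifiers, and residue numbering are assumptions; K856 is present only in Omicron BA.1 spike structures, not in D614G structures such as PDB 7MY8.

import itertools
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("spike", "ba1_spike.pdb")
model = structure[0]

asp568 = model["A"][568]   # S1 residue (assumed chain ID and numbering)
lys856 = model["B"][856]   # S2 residue of a neighbouring protomer (assumed)

acidic = [asp568[name] for name in ("OD1", "OD2")]  # carboxylate oxygens
basic = [lys856["NZ"]]                              # lysine side-chain nitrogen
dmin = min(a - b for a, b in itertools.product(acidic, basic))  # Angstroms
print(f"min O...N distance: {dmin:.2f} A -> salt-bridge: {dmin < 4.0}")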
2023-02-22T15:15:15.687Z
2023-02-22T00:00:00.000
{ "year": 2023, "sha1": "7d9d82b94be0bec13ec20d66c305e611eb9446b6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "7d9d82b94be0bec13ec20d66c305e611eb9446b6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
258845015
pes2o/s2orc
v3-fos-license
SQANTI3: curation of long-read transcriptomes for accurate identification of known and novel isoforms

The emergence of long-read RNA sequencing (lrRNA-seq) has provided an unprecedented opportunity to analyze transcriptomes at isoform resolution. However, the technology is not free from biases, and transcript models inferred from these data require quality control and curation. In this study, we introduce SQANTI3, a tool specifically designed to perform quality analysis of transcriptomes constructed using lrRNA-seq data. SQANTI3 provides an extensive naming framework to describe transcript model diversity in comparison to the reference transcriptome. Additionally, the tool incorporates a wide range of metrics to characterize various structural properties of transcript models, such as transcription start and end sites, splice junctions, and other structural features. These metrics can be utilized to filter out potential artifacts. Moreover, SQANTI3 includes a Rescue module that prevents the loss of known genes and transcripts exhibiting evidence of expression but displaying low-quality features. Lastly, SQANTI3 incorporates IsoAnnotLite, which enables functional annotation at the isoform level and facilitates functional iso-transcriptomics analyses. We demonstrate the versatility of SQANTI3 in analyzing different data types, isoform reconstruction pipelines, and sequencing platforms, and how it provides novel biological insights into isoform biology. The SQANTI3 software is available at https://github.com/ConesaLab/SQANTI3.

… transcript ends (Fig. 1c). The Reference Match (RM) subcategory is defined as … (see Methods). These results indicate that the proposed TSS ratio metric is a reasonable short-read-based alternative to CAGE-seq support.
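The precise TSS ratio computation is defined in the SQANTI3 code; as a rough sketch of the idea, the snippet below compares mean short-read coverage just inside a TSS with coverage just upstream, flagging ratios above the 1.5 threshold used in the text. The window size, file names, and coordinates are illustrative assumptions.

import numpy as np
import pysam

def tss_ratio(bam_path, chrom, tss, strand, window=100):
    bam = pysam.AlignmentFile(bam_path, "rb")
    def mean_cov(start, end):
        cov = np.sum(bam.count_coverage(chrom, max(start, 0), end), axis=0)
        return cov.mean() + 1e-6  # pseudocount avoids division by zero
    if strand == "+":
        inside, upstream = (tss, tss + window), (tss - window, tss)
    else:
        inside, upstream = (tss - window, tss), (tss, tss + window)
    return mean_cov(*inside) / mean_cov(*upstream)

ratio = tss_ratio("short_reads.bam", "chr1", 77_950_000, "+")  # hypothetical TSS
print(f"TSS ratio = {ratio:.2f}, supported: {ratio > 1.5}")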
To understand the relationship between TSS-related attributes and transcript length completeness, we compared TSS metrics for transcript models in the FSM and ISM categories. While the vast majority of FSM transcripts showed both CAGE-seq support and TSS ratios larger than 1.5 (Fig. 2d, top panel), the ISM category was enriched in transcripts without an overlapping CAGE-seq peak (Fig. 2d, bottom panel). We identified 7,599 ISM isoforms with a reliable TSS according to both CAGE-seq data and the TSS ratio, one-third of which were novel TSS with respect to the reference annotation. Conversely, 4,591 FSM isoforms showed low TSS ratios and lacked support from the reference annotation or CAGE-seq data, suggesting that the TSS of these transcript models might not be correctly defined.

… (Supplementary Fig. A2b). This suggests that long-read sequencing methods detect alternative 3' ends with higher sensitivity than Quant-seq, possibly because Quant-seq requires high expression of the alternative TTS isoform to call a polyA site. Transcripts that were likely to be a product of intrapriming (defined as 60% As downstream of the TTS [23]) rarely contained a polyA motif, or the polyA motif was located closer to the 3' end than expected (Supplementary Fig. A2c). Overall, these results further support polyA motif detection as a reliable indicator of a bona fide transcription termination site.

The results above demonstrate the usefulness of SQANTI3 QC for evaluating the 3' and 5' ends of lrRNA-seq transcript models. However, they also suggest that conducting a more in-depth analysis of TSS/TTS variability patterns is advisable, which can be achieved through the novel FSM and ISM subcategories. … supported by CAGE-seq, and 70.9% had a TSS ratio greater than 1.5 (Fig. 2e). The subcategory-level analysis of FSM therefore reveals incompleteness in the reference annotation and suggests that novel combinations of known start/end sites and intron chains are yet to be described.

The analysis of ISM subcategories showed that 3' Fragment (3'F) was by far the most abundant group, with a total of 47,594 transcript models (70.2% of ISM), of which only 9.4% had a known TSS, 16% had CAGE-seq support, and 39.5% displayed an above-threshold TSS ratio (Fig. 2e). This pattern was recapitulated by the 13,236 mono-exonic ISM transcripts, for which most 3' ends were validated by orthogonal data, whereas 5' ends remained largely unsupported (Fig. 2e). Moreover, transcripts from the mono-exon subcategory presented a larger difference in length (Supplementary Fig. A2d) and exon number (Supplementary Fig. A2e) with respect to their matched reference transcript than the rest of the ISM transcripts, ruling out the possibility that these were fragments of initially shorter molecules. These results suggest that ISM transcripts were enriched in 5' end degradation products.
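Breakdowns like the one above can be reproduced from a SQANTI3 classification table, as sketched below with pandas. The column and category names follow the classification file as we understand it and should be checked against the actual output.

import pandas as pd

cls = pd.read_csv("wtc11_classification.txt", sep="\t")  # hypothetical path
ism = cls[cls["structural_category"] == "incomplete-splice_match"]
counts = ism["subcategory"].value_counts()
print(counts)
print(f"3' fragments: {counts.get('3prime_fragment', 0) / len(ism):.1%} of ISM")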
The diversity of TSS and TTS patterns in lrRNA-seq is apparent in the NEXN gene (Fig. 2f). Although all detected multi-exon transcripts were associated … flagged as a potential intrapriming artifact due to a 20-bp stretch with 90% As found immediately downstream of the TTS, which, together with the location of the polyA motif 39 bp from the 3' end, suggested a TTS annotation of poor quality (Fig. 2f). Notably, the degree of isoform diversity and TSS/TTS support variability exhibited by the NEXN gene was frequent in the WTC11 lrRNA-seq transcriptome estimated by IsoSeq3.

In summary, the IsoSeq3 processing of the WTC11 cDNA-PacBio data presents a high level of TSS and TTS variability, which can be effectively characterized using SQANTI3 (sub)categories in combination with complementary data sources. Our analyses suggest that a combination of artifacts and true biological variability causes the observed diversity at transcript ends. These and previous SQANTI results [23] motivated the design of a comprehensive strategy for removing lrRNA-seq artifacts.

… (Fig. 3f). To evaluate the rescued elements, the reference transcriptome was quantified at the gene and transcript levels using short reads (see Methods). Genes for which at least one isoform passed the filter showed the highest expression values (Extended Data Fig. A6a), independently of the complementary data input. Genes initially removed and then rescued had consistently higher expression than those that remained excluded. Similar results were obtained when performing these analyses at the transcript level (Supplementary Fig. A7). Finally, to validate the biological importance of the rescued transcript set, we retrieved their functional scores (TRIFID scores) from the APPRIS database [34]. Known transcript models that were rescued had higher scores than those that were not rescued (Fig. A6b) … of the novel TP transcripts (Fig. 3g). SQANTI3 Rescue was also robust to the confounding factor introduced by reference over-annotation, with only between 1 and 3 FP transcripts being selected by the algorithm.

To better understand the impact of each step of the SQANTI3 pipeline on transcriptome quality, performance metrics were computed for the HIS results (Fig. 3h). The initial reconstruction by IsoSeq3 yielded almost perfect sensitivity (94%), while precision was much lower (32%). As previously observed, precision significantly improved after filtering, whereas sensitivity decreased slightly due to some TP transcripts being flagged as artifacts. Importantly, the F-score revealed that overall performance steadily improved after every step of the SQANTI3 curation pipeline. Specifically, a stronger F-score increase was observed when using the ML filter (82%) than when applying the rules …

The SQANTI3 pipeline is designed to perform data-based QC and curation of transcriptomes, particularly those created using tools with high detection levels and low reference dependence, resulting in a significant proportion of novel iso… with poor orthogonal support, we adjusted the rules filter to require SJ cover…

The SQANTI3 framework offers not only quality control and curation but also the integration of IsoAnnotLite, which allows for isoform-level functional annotation. This feature facilitates downstream analyses of isoform biology, for example, using the tappAS software [19]. To demonstrate this capability, …

The SQANTI3 analysis shown here confirms that novel combinations of 3' and 5' ends with intron chains are still to be discovered, even in well-annotated organisms such as human and mouse, and that many novel transcripts can be found with considerable support. This implies that the generation of sample-… input and output files are described.

The Quality Control module is the cornerstone of the SQANTI3 pipeline. It is designed to characterize transcriptomes built using lrRNA-seq data and … were defined in SQANTI [23]. In addition to short- and long-read data for coverage- and expression-based …

Processing of short-read data: To facilitate the integration of matching short-read data, SQANTI3 has been upgraded to run STAR [41] and Kallisto [42] internally for mapping and …

… [44], in which the characteristics that make an isoform reliable are specified. The JSON file is structured in two levels of hierarchy: rules and requisites. A rule is made of one or more requisites, all of which must be fulfilled for an entry to be considered a true transcript; that is, requisites are evaluated as AND in terms of logical operators. If different rules (i.e., sets of requisites) are defined for the same structural category, they are treated independently from one another. In that case, to pass the filter, transcripts need to pass at least one of these independent rules, meaning that rules are evaluated as OR in terms of logical operators. Rules can be set for any numeric or categorical column in the classification file.

Matches between each rescue target and its same-gene candidates are next found by mapping candidate sequences to targets, a process known as rescue-by-mapping. To achieve this, minimap2 [45] is run in long-read alignment mode using the -a parameter, combined with the -x map-hifi (i.e., high-fidelity) preset.
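A minimal sketch of the rescue-by-mapping step follows, invoking minimap2 with the -a and -x map-hifi options named above. The file names and the cap on secondary alignments are illustrative assumptions rather than SQANTI3's exact invocation.

import subprocess

cmd = [
    "minimap2",
    "-a",                    # SAM output (long-read alignment mode)
    "-x", "map-hifi",        # high-fidelity preset, as stated above
    "-N", "10",              # keep up to 10 secondary hits per candidate
    "rescue_targets.fa",     # reference transcripts plus accepted LR isoforms
    "rescue_candidates.fa",  # filtered-out transcript models to re-place
]
with open("rescue_hits.sam", "w") as out:
    subprocess.run(cmd, stdout=out, check=True)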
The WTC11 cell line is an induced pluripotent stem cell (iPSC) line derived from human fibroblasts, often used as a model for cell differentiation [46]. The data used in this paper were generated within the LRGASP project [31], where this cell line was deeply sequenced using different technologies. We only used cDNA-PacBio data for reconstructing transcript models. Raw data used [...] The PacBio cDNA lrRNA-seq datasets used in the present study, i.e. [...]

Rules filter: For the WTC11 dataset, the rules filter was defined as follows. The 5' ends were considered valid if: 1) they overlapped a CAGE-seq peak or an annotated refTSS site; 2) the distance to any other annotated TSS in the same gene was less than 50 bp; or 3) they had a TSS ratio above 1.5. Similarly, 3' ends were accepted if: 1) they were supported by Quant-seq data or by the polyASite annotation; 2) the distance to any other annotated TTS was less than 50 bp; or 3) there was a canonical polyA motif close to the TTS. FSM and ISM were required to have support at both their 5' and 3' ends to pass the filter. For the rest of the transcript models, it was required that all SJ were supported by at least three short reads.

[...] 2) The rest of the LR-defined transcript models filtered out (rescue candidates) are mapped against the reference transcriptome combined with the accepted LR-defined isoforms (rescue targets), allowing several hits per candidate. 3) The reference transcriptome was previously evaluated and filtered with the same data and criteria as the LR-defined transcripts. 4) Rescue is completed by evaluating targets: they need to pass the filtering and must not increase redundancy, meaning that if a target is an LR-defined transcript already present, or a reference transcript already represented as an FSM in the filtered transcriptome, it will not be added to the final annotation.

The upper track (black) represents the GENCODE annotation for that locus, which includes 5 isoforms for the GST5 gene coded on the negative strand. The bottom track (light blue) shows the LR-defined isoforms identified in the WTC11 sample, while the tracks in between (gold, red, turquoise, and dark blue) are the orthogonal data available to validate those isoforms. The bottom table indicates which filters each of the isoforms passed in the informative situations simulated: Low Input (LI), High Input Reference (HIR), and High Input Sample-specific (HIS). SQANTI3-filtered isoforms (orange) were those FSM/ISM that did not pass the corresponding filter and were not eventually rescued because of unacceptable quality attributes. On the other hand, LR-defined known transcripts lost during filtering but recovered by introducing transcript models from the reference (dark blue) are the rescue strategy's goal. In some exceptional cases, genes not initially detected were included in the final transcriptome (yellow) if a discarded sequence mapped to them and the filtering criteria were fulfilled.
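The final rescue decision in step 4 above can be sketched in Python as follows. This is illustrative logic, not the SQANTI3 code; the function and argument names are hypothetical. A mapped target is kept only if it passes the same filter and does not duplicate a transcript already present in the curated transcriptome.

def rescue_targets(candidate_hits, target_passes_filter, accepted_lr_ids,
                   reference_ids_with_fsm):
    # candidate_hits: (candidate_id, target_id) pairs from the mapping step.
    # target_passes_filter: target_id -> bool, from applying the same rules
    # to the reference transcriptome. The two id sets describe what the
    # curated transcriptome already contains.
    rescued = set()
    for _candidate, target in candidate_hits:
        if not target_passes_filter.get(target, False):
            continue                  # the target itself fails the filter
        if target in accepted_lr_ids or target in reference_ids_with_fsm:
            continue                  # adding it would only create redundancy
        rescued.add(target)           # new reference model for the annotation
    return rescued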
2023-05-24T13:11:02.648Z
2023-06-03T00:00:00.000
{ "year": 2023, "sha1": "2e70bc476890ab5ea90bf3279a99dd24407c11bc", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/05/21/2023.05.17.541248.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "2e70bc476890ab5ea90bf3279a99dd24407c11bc", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
3846043
pes2o/s2orc
v3-fos-license
Lattice field theory applications in high energy physics

Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.

Introduction

In 1974, Kenneth Wilson invented lattice Quantum Chromodynamics (QCD), a non-perturbative approach to Nature's strong force [1]. Wilson's formulation was based on using elements of the Lie group SU(3), rather than elements of the Lie algebra, which is used in the continuum formulation of the theory. This approach allowed Wilson to exactly preserve gauge invariance, which was not possible when formulating the theory in terms of finite difference operators applied to elements of the Lie algebra. Gauge invariance is familiar to us from electromagnetism, but in QCD it is much richer, as it is based on the non-Abelian group SU(3), not the Abelian group U(1). Gauge symmetry determines three of Nature's forces: electromagnetic, strong, and weak.

In the years since Wilson's initial paper, which discussed quark confinement based on a strong coupling expansion, there have been monumental advances in algorithms, formulations of the theory, and computer power. In the early days, it was necessary to neglect the contribution of quantum fluctuations related to quark-antiquark production and annihilation in the vacuum. This is called the quenched approximation and cannot be systematically improved. However, it is now possible to include the quantum fluctuations of the four lightest quarks: up, down, strange, and charm. The still heavier bottom and top quarks have negligible effect at current precision. We can even make the up and down quarks as light as in Nature, which has traditionally been a difficult computing challenge. Work on lattice QCD has progressed to the stage that a number of interesting quantities can be calculated to sub-percent level, and there have been predictions of particle masses and decay properties, not just postdictions.

Lattice QCD is now extensively used in theoretical studies of elementary particle and nuclear physics. The proton-neutron mass difference, the spectrum of excited baryons, parton moment distributions, and properties of light nuclei have all been studied with varying degrees of precision. The properties of QCD at non-zero temperature have been studied to understand the quark-gluon plasma produced in heavy-ion collisions. In particle theory, the quark masses for up, down, strange, charm, and bottom, i.e., all except top, have been calculated. So have the strong coupling α_s and many weak decays that are needed to determine the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix. Lattice field theory has also been used to study theories with dynamical symmetry breaking as an alternative to the spontaneous symmetry breaking of the simple Higgs boson theory. As exciting as it was to discover the Higgs boson at the LHC, knowing the mass of the Higgs boson does not solve the mysteries of the Standard Model, and we would dearly love to find evidence of Beyond the Standard Model physics.
This might come from seeing new particles at the LHC, but it could just as easily come from observing anomalies in high precision experiments, where the anomalies come from small interactions caused by new (virtual) particles that are too heavy to be produced at the LHC. Lattice QCD has an important role to play here in determining the elements of the CKM matrix. Another area of significant recent progress is in the formulation of lattice theories with supersymmetry. Unfortunately, I cannot cover all of these topics, so I shall point the interested reader to the annual International Symposium on Lattice Field Theory. The most recent one was held in July 2015 in Kobe, Japan [2]. There were over a dozen plenary talks relevant to nuclear and particle physics. In this talk, I have relied heavily either on results from my own collaborations, MILC and Fermilab Lattice-MILC (referred to as FNAL/MILC below), or state-of-the-art summaries prepared by the Flavor Lattice Averaging Group (FLAG) [3], an international group of scientists who critically review work on a large number of quantities in lattice QCD and prepare averages for ease of use by a wider (non-lattice) community. The last FLAG review appeared in 2013 [3] and a new one will appear in early 2016. I am pleased to be a member of FLAG.

The Standard Model and lattice QCD

The Standard Model is a theory of quarks, leptons, gauge bosons, and the Higgs boson. It describes only three of the known forces, as gravity is not included. The model is described by its symmetries and the matter content. The symmetry is SU(3) × SU(2) × U(1). The group SU(3) is the symmetry of QCD and SU(2) × U(1) is that of the electroweak interactions. Spin-1 particles are the force carriers. They are called gluons (for QCD), and the photon and weak bosons (specifically, W± and Z) for electromagnetism and the weak force, respectively. The Higgs boson has no intrinsic spin. The quarks and leptons are spin-1/2 matter particles. The quarks interact with all the force carriers. The charged and neutral leptons don't interact with the gluons, but they do interact with the weak force carriers. The charged leptons interact with the photon, but the neutral ones (neutrinos) do not.

One of the reasons we think there is physics beyond the Standard Model is that the model has many undetermined parameters. These parameters must be determined from experiment (with various inputs from theory). In a more fundamental theory, there might be relations between the parameters, so they would not seem as arbitrary as they do now. For each of the three symmetry groups there is a coupling constant. For SU(3), it is called g_s. For SU(2) × U(1), the two couplings are g and g′. There are six quark masses. There are three masses for the charged leptons. Now that we know neutrinos have mass, there are also three neutrino masses. The Cabibbo-Kobayashi-Maskawa quark mixing matrix (detailed in the next section) is complex and unitary. An arbitrary 3 × 3 complex matrix would have 18 real parameters; however, because of unitarity and our ability to choose some phases of the quark fields, there are only four independent parameters that determine the CKM matrix. These are commonly described as three angles and a complex phase factor that determines CP violation. The combination of the discrete symmetries charge conjugation (C) and parity (P) is denoted CP.
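The bookkeeping behind this reduction from 18 real parameters to 4 is a standard argument, sketched here as a quick check:

   18   (general 3 × 3 complex matrix: 2 × 9 real parameters)
 −  9   (unitarity, V†V = 1, imposes 9 real conditions)
 −  5   (rephasing the 6 quark fields removes 2 × 3 − 1 = 5 phases;
         one overall phase has no effect)
 =  4   (3 mixing angles + 1 CP-violating phase)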
There is a similar matrix for the neutrinos called PMNS, for Pontecorvo, Maki, Nakagawa and Sakata. However, since the neutrinos do not interact strongly, we will have no more to say about that. Lattice QCD input is important for determination of ten parameters of the Standard Model: α_s = g_s^2/(4π), the four parameters that determine the CKM matrix, and m_u, m_d, m_s, m_c, and m_b. The sixth quark, the top quark, decays weakly before it can form bound states, so we do not need lattice QCD to study its mass.

Lattice QCD provides a nonperturbative treatment of the quantum field theory that describes the strong interaction. Because the coupling is strong, many phenomena cannot be calculated perturbatively. Quantum field theories require regularization and renormalization. The lattice technique provides one such regularization. However, numerical errors must be carefully controlled. Errors come from the non-zero lattice spacing (continuum limit), finite volume (infinite volume limit), and unphysical light quark masses (chiral extrapolation). In parentheses, we have the limit or operation that must be done to control the systematic error from each effect. In addition, there are statistical errors. Groups are increasingly able to work with up and down quark masses very close to their physical value, which greatly reduces errors from the chiral extrapolation that were seen in earlier calculations.

There are at least five popular ways to deal with the quarks in lattice QCD. They go by the names: Wilson/Clover, staggered, domain wall, twisted mass, and overlap. Each method has different systematic errors at nonzero lattice spacing, so it is useful to use different methods and compare the final results after all errors are controlled. The number of dynamical flavors used also varies by collaboration. The most phenomenologically relevant calculations use dynamical up, down, and strange quarks, or those plus charm. These are denoted N_f = 2 + 1 or 2 + 1 + 1, respectively, because the up and down quarks are usually treated as if their masses were identical. (Their average mass is used.)

CKM matrix

It has been observed for many years that the Universe contains much more matter than antimatter. This is known as the baryon asymmetry. Kobayashi and Maskawa won the Nobel prize for their realization that with three (or more) generations we can have CP violation, which might explain the baryon asymmetry of the Universe. However, we now know that the CP violation in the strong interaction is probably too weak for this purpose, and it may be the CP violation appearing in the PMNS matrix for neutrino mixing that accounts for the baryon asymmetry. Here is the CKM mixing matrix (bold notation) augmented with some of the decay or mixing processes that can be used to determine each matrix element:

            ( V_ud          V_us          V_ub       )
            ( π → ℓν        K → ℓν        B → τν     )
            (               K → πℓν       B → πℓν    )
    V_CKM = ( V_cd          V_cs          V_cb       )     (1)
            ( D → ℓν        D_s → ℓν                 )
            ( D → πℓν       D → Kℓν       B → Dℓν    )
            ( V_td          V_ts          V_tb       )
            ( B_d mixing    B_s mixing               )

In the second and fifth rows, we have meson decays called leptonic, because only a charged lepton and a neutrino appear in the final state. The third and sixth rows contain decays called semi-leptonic because there is also a meson in the final state. The last row contains two meson mixing processes that determine the CKM matrix elements just above them. Since the CKM matrix is unitary, each row and each column is a complex unit vector. Also, each row (column) is orthogonal to the other rows (columns), leading to the so-called unitarity triangle in the complex plane. Violations of unitarity are evidence of non-standard-model physics. Further, if two different processes are used to determine an element of the matrix and they do not agree, that is evidence for new BSM physics, which we would dearly love to find.
We will examine both these tests of the SM. If we could do experiments on free quarks, it would be easy to determine mixing; however, confinement means we need to deal with bound states. Thus, LQCD input for decay constants and form factors is needed to determine elements of the CKM matrix. For example, the branching fraction for the leptonic decay of a D_(s) meson is given by

    B(D_(s) → ℓν) = (G_F^2 / 8π) |V_cq|^2 f_D(s)^2 m_ℓ^2 M_D(s) (1 − m_ℓ^2 / M_D(s)^2)^2 τ_D(s),

where the unknowns are |V_cq|, the absolute value of the CKM matrix element with q = d or q = s for the D or D_s meson, respectively, and f_D(s), the corresponding decay constant, which needs to be calculated in LQCD. The other quantities, such as masses, lifetimes and the Fermi constant, are easily found from experiment.

Light quarks and the first row

We will begin our discussion with results for mesons that contain only the three lightest quarks: up, down, and strange. The ground states are called pions and kaons. In Fig. 1 [...] [4] or a non-LQCD method. Calculations with different numbers of dynamical quarks are considered separately. In some cases, f_π has been used to set the lattice spacing (or scale), so only f_K is shown. We see excellent agreement with the values from the PDG. The ratio f_K/f_π can be calculated accurately and used to determine |V_us/V_ud| from precise measurements of the ratio of pion and kaon decay rates, which show that [...]

Let's turn to the semileptonic kaon decay. Semileptonic decays have three-body final states, so there is one kinematic variable, usually denoted q^2, which is the square of the momentum transfer to the leptons. From 4-momentum conservation, p_K = p_π + q_ℓ + q_ν and q = q_ℓ + q_ν, where we have used p for hadron momenta and q for lepton momenta, with the subscript denoting the particle. To extract |V_us|, we just need the form factor at zero momentum transfer, i.e., f_+(0), as experiment tells us that |V_us| f_+(0) = 0.2163(5). This can be combined with the FNAL/MILC N_f = 2 + 1 + 1 result [5] f_+(0) = 0.9704(24)(22) to determine an error band for |V_us|.

First row unitarity states |V_ud|^2 + |V_us|^2 + |V_ub|^2 = 1. However, as we will see below, |V_ub| ≈ 4 × 10^-3, so the last term can be neglected as the errors on the first two terms are a few times 10^-4. Thus, the unitarity constraint will be a straight line in the |V_ud|^2 - |V_us|^2 plane. In Fig. 2(L), we show the unitarity constraint as a black line, along with the angled error band from leptonic pion and kaon decay, the horizontal error band from kaon semileptonic decay, and a vertical error band from nuclear β-decay that is independent of LQCD calculations. We see that there is some tension between the two types of decay studied in LQCD, but that unitarity, leptonic decays, and β-decay are in good agreement.

A summary of many determinations of |V_ud| and |V_us|, based on either leptonic or semileptonic decays, has been prepared by FLAG. This is shown in Fig. 2(R), where squares indicate leptonic decays and triangles indicate semileptonic. The blue points, as usual, are from non-lattice calculations. Careful inspection of the N_f = 2 + 1 + 1 2014 results from MILC and FNAL/MILC shows the tension we have seen above between leptonic and semileptonic decays. Other calculations do not yet have the precision to confirm the discrepancy or rule it out. The errors on the FLAG estimates for N_f = 2 and 2 + 1 are larger than for 2 + 1 + 1 and they do not have a tension with unitarity. The FLAG estimate for 2 + 1 + 1 does show some tension with unitarity.
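As a quick numerical check of the |V_us| extraction just described, here is a short Python sketch. The uncorrelated quadrature combination of errors is an assumption; the published analysis treats uncertainties more carefully.

import math

Vus_f0, dVus_f0 = 0.2163, 0.0005                  # |V_us| f_+(0), experiment
f0, df0_stat, df0_sys = 0.9704, 0.0024, 0.0022    # f_+(0), FNAL/MILC [5]

Vus = Vus_f0 / f0
rel_err = math.sqrt((dVus_f0 / Vus_f0) ** 2
                    + (df0_stat / f0) ** 2
                    + (df0_sys / f0) ** 2)
print(f"|V_us| = {Vus:.4f} +/- {Vus * rel_err:.4f}")  # about 0.2229 +/- 0.0009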
First row unitarity is not the first place we would expect to find evidence for BSM flavor physics, so it will be interesting to improve these calculations, particularly to calculate the kaon semileptonic form factor over its entire kinematic range. There is also the interesting tension between the results near the bottom of the figure, based on τ decays, and those for pion and kaon decay.

Charm decays and the second row

Leptonic and semileptonic decays of the D and D_s mesons have been studied on the lattice, but not as extensively as for the pion and kaon. It has been about a decade since decay constant predictions of FNAL/MILC were tested at CLEO-c [7]. Initial errors were about 10%, but current errors from FNAL/MILC are only 0.6%. A great deal of the improvement is due to the use of highly improved staggered quarks (HISQ) that were developed by the HPQCD collaboration. Figure 3(L) summarizes the results [...] MeV, and f_{D_s}/f_{D^+} = 1.1712(10)(+29 −32) for the decay constant ratio, for which there is some cancellation of the systematic errors. For references to the other work and the HISQ action, see Ref. [6].

To make use of these decay constants, we rely on the work of Rosner and Stone [8] to summarize the experimental results. They find f_D|V_cd| = 46.06(1.11) MeV and f_{D_s}|V_cs| = 250.66(4.48) MeV. The experimental errors are 1.8-2.4%. Combining the experimental results and the FNAL/MILC decay constants gives |V_cd| = 0.217(1)(5)(1) and |V_cs| = 1.010(5)(18)(6), where the errors are lattice, experiment and structure-dependent electromagnetic, respectively. Thus, the experimental errors are currently dominant. In Fig. 3(R), we see evidence for an ≈ 1.8σ tension with unitarity for the two leptonic charm decays. The black line is the unitarity constraint. The horizontal blue band is for D_s decay and the vertical green band is for D^+ decay. Once again, the third element of the row, V_cb, is too small to make a difference at the current level of precision. The semileptonic form factors for D_(s) mesons are much less studied than for light quarks; however, there should be some updates in the coming year. We refer the reader to FLAG [3] for details.

Figure 3. (L) Summary of results for the charm meson decay constant prepared by FNAL/MILC [6]. "This work" refers to the calculation presented there. (R) Unitarity test for the second row of the CKM matrix from Ref. [6].
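The ≈ 1.8σ second-row tension quoted above can be reproduced directly from the numbers given, again with naive quadrature errors and neglecting |V_cb|^2 (a sketch, not the analysis of Ref. [6]):

import math

Vcd, dVcd = 0.217, math.sqrt(0.001**2 + 0.005**2 + 0.001**2)
Vcs, dVcs = 1.010, math.sqrt(0.005**2 + 0.018**2 + 0.006**2)

row2 = Vcd**2 + Vcs**2
drow2 = math.sqrt((2 * Vcd * dVcd)**2 + (2 * Vcs * dVcs)**2)
print(f"|V_cd|^2 + |V_cs|^2 = {row2:.3f} +/- {drow2:.3f}")          # 1.067 +/- 0.040
print(f"deviation from unitarity: {(row2 - 1) / drow2:.1f} sigma")  # about 1.7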
B meson decays

The b quark is the heaviest one that forms bound states that can be studied with LQCD. Both leptonic and semileptonic decays of B and B_s mesons have been studied. In addition to the usual decays that produce a charged lepton and a neutrino, there are a number of rare decays that involve the so-called flavor changing neutral current (FCNC). In the SM, the FCNC vanishes at the tree level, so small quantum loop effects from new physics may be visible. That makes the rare decays a promising topic to study. In fact, there have been some recent results from the LHCb experiment at the Large Hadron Collider that show tensions with the SM predictions. These rare decays also present an alternative way to determine |V_td| and |V_ts| that can be compared with the meson mixing processes indicated in our CKM matrix. FLAG has summarized results for the decay constants f_B and f_{B_s} [3]. The errors on these decay constants are about 2% for N_f = 2 + 1 and 2 + 1 + 1. For N_f = 2 + 1, f_B = 190.5(4.2) MeV and f_{B_s} = 227.7(4.5) MeV. Unfortunately, only B → τν has been observed so far and the error is about 20%. So, in this case the LQCD calculation is ahead of the measurement. This allows a determination of |V_ub|, but it is not competitive with the value from semileptonic decay.

The semileptonic decays B → πℓν and B_s → Kℓν have been studied on the lattice. The former has been observed at BaBar and Belle, but the latter has not been observed. Another way to determine |V_ub| is from inclusive decays. There is a long-standing tension between that determination and the one from B → πℓν. The central value of |V_ub| based on the SM analysis of the leptonic decay is between that from the semileptonic exclusive decay and the inclusive method. However, its error bar, limited by experiment, is too large to help clarify the situation. Belle II will improve the B → τν measurement, which should really help resolve these issues.

The last entry in the second row of the CKM matrix, V_cb, can be studied in the exclusive decays B → D*ℓν and B → Dℓν. It can also be determined in inclusive decays where the decay products must include charm quarks. FNAL/MILC has recently compared |V_cb| based on both determinations and there is again some tension between inclusive and exclusive results. As can be seen in Fig. 5(L), the errors from the decay to D* are somewhat smaller than those from the decay to D. With current errors, those decays agree with each other reasonably well, and the real tension is between B → D*ℓν and the inclusive value of |V_cb|.

Figure 5. (L) [...] [13] summary of results for |V_cb|, including determinations from exclusive B decays to D or D*, and from inclusive decays. (R) Branching fraction for B+ → K+ µ+ µ− showing the tension between the SM prediction [12] and a recent LHCb measurement.

Turning to rare B decays, FNAL/MILC has recently calculated the form factors needed for both SM and BSM decays through a FCNC [10,11]. As mentioned above, this is a promising place to look for new physics. There is some tension between the SM prediction and recent LHCb measurements of B+ → π+ µ+ µ− and B+ → K+ µ+ µ−. The LHCb measurement is smaller than the SM prediction in three of four fairly wide bins of q^2, the square of the momentum transfer to the muons. Figure 5(R) shows the comparison for B+ → K+ µ+ µ−, where the difference is more pronounced. In Ref. [12], both processes are shown. For all four bins for the two processes, the tension is 1.7σ.

Conclusions

Much progress has been made in lattice QCD, and more generally in lattice field theory. We concentrate here on quantities needed for the study of the CKM matrix. Calculational precision is now high enough that we can begin to look for evidence of BSM physics. We have seen a number of tensions between 1.5 and 2 standard deviations related to the CKM matrix. It will be interesting to see if reduced errors from both theory and experiment result in stronger hints (or perhaps significant evidence) of BSM physics. In the oral presentation, quark masses and α_s were also discussed. (See Ref. [3] for details.)
2016-06-29T20:00:10.000Z
2016-06-29T00:00:00.000
{ "year": 2016, "sha1": "eb708fa9287116d17773dcd860409526b8e81c8f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/759/1/012007", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "eb708fa9287116d17773dcd860409526b8e81c8f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257775285
pes2o/s2orc
v3-fos-license
Innovative method with two-stage surgery for Ewing sarcoma with personalized distal clavicle reconstruction: A case report and diagnosis review

A 13-year-old boy presented with a growing lump on his left clavicle for 5 months. The plain radiograph revealed an osteolytic mass with an aggressive periosteal reaction, suggesting a malignant lesion. The results of advanced imaging and histopathological examinations revealed that the patient had Ewing sarcoma without metastasis. The two-stage surgery was as follows: resection–observation–reconstruction. The underlying rationale was that Ewing sarcoma has a high recurrence rate. Two years after resection, the patient remained in remission, and he currently has a personalized 3D-printed titanium implant with intact shoulder function.

Introduction

Ewing sarcoma is the second most common bone tumor in children and young adults (1). This type of cancer can develop in any bone but usually occurs in the lower extremities, most commonly in the pelvis and femur (2). Although occurrence in the clavicle has been reported, it is considered rare (3). Previous studies have reported alternative surgical techniques; however, to our knowledge, a suitable management strategy has yet to be established for Ewing sarcoma of the clavicle (4).

Background

A 13-year-old boy presented with a growing lump on his left clavicle for 5 months. The patient had a 10-cm painless, smooth, and palpable mass with a rubbery-to-hard consistency on his left distal clavicle. The patient's blood reports were normal. Plain radiography of the left clavicle showed an abnormal, enlarged osteolytic lesion with a moth-eaten, lamellated appearance. It also showed a focally disrupted, aggressive periosteal reaction throughout the distal half of the clavicle, with swelling of the overlying soft tissue (Figure 1). Skeletal scintigraphy revealed increased focal tracer uptake at the left clavicle, corresponding to the primary bone lesion. Magnetic resonance imaging showed a 10.1 × 4.1 × 3.3-cm, expansile, intramedullary lesion in the left clavicle with an intermediate, heterogeneous signal on T1 and hyperintensity on T2 with heterogeneous enhancement, which suggested Ewing sarcoma (Figure 2). No abnormalities were identified in the lungs or the whole abdomen. The results of a histopathologic analysis showed a round cell tumor corresponding to Ewing sarcoma of the left distal clavicle.

The patient was prescribed neoadjuvant therapy comprising 14 cycles of vincristine, doxorubicin, cyclophosphamide, ifosfamide, and etoposide. The first stage of the operation was wide resection of the left distal clavicle and reconstruction with a plate and cementation. With no evidence of recurrence noted in 2 years, the patient underwent the second-stage operation with a patient-specific, 3D-printed, personalized left distal clavicle reconstruction. The first stage of this treatment involved the wide resection of the left distal clavicle and its subsequent reconstruction with a plate and cement. The results of a pathological analysis of the specimen collected from the intraoperative site showed no viable tumor cells.

Figure 1. Plain radiograph of the left clavicle showing an abnormal, enlarged osteolytic lesion with a moth-eaten, lamellated appearance, and a focally disrupted, aggressive periosteal reaction throughout the distal half of the clavicle, with swelling of the overlying soft tissue.

The patient recovered well, and his left shoulder function and range of motion remained intact.
A year after the first surgery, the patient showed no signs of recurrent Ewing sarcoma (Figure 3). Two years later, the second stage of the surgery was performed with a personalized 3D-printed titanium prosthesis for reconstruction, to ensure a long-term replacement. Immediate postoperative plain films were obtained, which showed satisfactory results (Figures 4A-C). At 2 months after the second-stage surgery, the wound showed good healing with minimal scarring. At 2 years after the operation, the shoulder and arm functions remained intact.

Discussion

Osteosarcoma, the most common primary bone tumor, should be one of the first differential diagnoses for patients in this age group presenting with these symptoms. Osteomyelitis is an additional diagnosis characterized by inflammatory pain with or without an abnormal mass. However, retrospective studies and case reports have shown that Ewing sarcoma, one of the most common tumors of flat bones, is the most common tumor occurring at the clavicle (1). For children and adolescents presenting with an abnormal clavicle mass, the differential diagnoses of Ewing sarcoma, osteomyelitis, and osteosarcoma were considered (Table 1). The results of advanced imaging revealed the presence of local disease at the left clavicle, likely Ewing sarcoma, without solid evidence of distant metastasis to another organ. In addition, a histopathological analysis of biopsy specimens resulted in the identification of small round cells, thus supporting the initial diagnosis that this patient had Ewing sarcoma.

Radiograph of the left clavicle at 1 month after surgery and reconstruction: the patient showed no signs of recurrence or disease progression.

A treatment protocol for Ewing sarcoma was selected following the guidelines published by Womer et al. (2012) (5). More specifically, the treatment comprised 14 cycles of an alternating neoadjuvant VDC/IE regimen every 2 weeks: vincristine, doxorubicin, cyclophosphamide/ifosfamide, and etoposide. The standard surgical operation for Ewing sarcoma involves a wide tumor resection with or without reconstruction. Previous studies have shown similar results and functions in total claviculectomy regardless of whether reconstruction was performed (6). The necessity of reconstruction of the clavicle after resection is still questioned, as similar functional results have been obtained in patients treated without reconstruction, with a tendency toward a lower incidence of complications and surgical revisions (7,8). However, prosthesis reconstruction was selected for esthetic reasons and because it would provide the young patient with a lifelong implant instead of cementation. Previous reports on Ewing sarcoma have also attempted autograft, allograft, and cement for clavicle reconstructions in adult patients (9). Nevertheless, graft selections are limited regarding varieties and associated costs in Thailand. Owing to the high recurrence rate of this disease, it is not financially reasonable to immediately perform reconstructive surgery. Therefore, a two-stage surgery was performed: wide resection and reconstruction with medical cement, observation for possible recurrence, and definitive reconstruction surgery; all procedures were explained to the patient and his family. Clavicle reconstruction with cement as a prosthesis has previously been performed as a practical and feasible short-to-medium-term therapeutic procedure with acceptable results. A year after the first surgery, the patient showed no signs of Ewing sarcoma.
His left shoulder function and range of motion remained intact and provided independence in most daily activities. However, high recurrence rates have been reported, with >70% of relapsed cases occurring approximately 24 months after diagnosis and delayed recurrence occurring 10 years after remission. In this case, remission was determined after close observation and a 1-year follow-up after surgical resection. Two years after diagnosis and initial treatment, we proceeded to perform the second stage of the surgery as planned. Because of the limited graft selection mentioned above, we used a 3D-printed personalized prosthesis for reconstruction. Titanium with a hydroxyapatite surface coating was selected as the prosthetic material because it has good biocompatibility and promotes improved in vitro osteointegration. In anterior chest wall reconstruction, titanium is a flexible material suitable for chest wall movement (10). In addition, the implant was designed to be partially hollow to reduce the material cost and allow the insertion of a bone graft for improved osteointegration. The implant also had screw holes designed in such a way that they could fit onto an identical plate that the patient already had.

(Table 1 excerpt, plain film features: cortical and medullary osteolytic bone destruction; moth-eaten/permeative transition zone; aggressive periosteal reaction; tumor matrix ossification and calcification, fluffy or cloud-like lesion; location usually at the perimetaphysis.)

Concluding remarks

The two-stage surgery, with recurrence observation after the first stage and the 3D-printed personalized prosthesis in the second stage (the substantial reconstruction), provides a reasonable balance between the recurrence risk and the financial risk had the tumor recurred.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
2023-03-29T13:16:00.763Z
2023-03-29T00:00:00.000
{ "year": 2023, "sha1": "143d3858dbd4455f3dca93504e869f6e18804516", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "143d3858dbd4455f3dca93504e869f6e18804516", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
59040372
pes2o/s2orc
v3-fos-license
The Effect of Using Different Detergents in Cleaning Cows' Udders on the Microbial Content of Produced Milk

This study investigated the effect of different detergents used to clean cows' udders on the microbial content of the produced milk, using twenty cows in Ajloun, a northern city in Jordan. The milking process was repeated with the same cows on three successive days. On day 1, we milked the cows after cleaning their udders using water only. This was repeated on the two successive days. Thereafter, the cows were milked after cleaning their udders with a different detergent each day. The process was also repeated for three successive days for each detergent. Microbial analysis was carried out on the collected milk samples. The results indicated that cleaning cows' udders before milking improved the hygiene conditions and reduced the total bacterial count, the total coliform, staphylococci and enterococci spp. counts, and the values of yeasts and molds. Different detergents had different effects on the microbial counts. Finally, the effectiveness of a detergent differed according to its brand. Our findings are important to public health because milk has been a traditional food and, ironically, a very potent carrier of gastrointestinal infections if contaminated.

Introduction

Milk is one of the most essential foods for human beings; it is rich in nutrients vital for the growth and maintenance of a healthy body (Vilela, 2002). It is an emulsion or colloid of butterfat globules within a water-based fluid that contains dissolved carbohydrates and protein aggregates with minerals (Jost, 2007). It is rich in proteins, fats, carbohydrates (lactose), mineral salts, vitamins, conjugated linoleic acid, sphingomyelin, butyric acid, among other substances, which provide immunologic protection and essential nutrients to its consumers (Sordillo et al., 1997; Oliveira et al., 1999). A variety of dairy products are produced from milk, such as cream, butter, yogurt, ice cream, and cheese. Modern industrial processes use milk to produce casein, whey protein, lactose, condensed milk, powdered milk, and many other food additives and industrial products.

Nevertheless, milk is vulnerable to contamination by many microorganisms, including pathogenic microbes, which can cause food-borne illness and are a threat to consumers' health. Thus, it has no protection from external contamination and can be contaminated easily when it is separated from the source animals, like cows or buffaloes (Agarwal, 2012). Moreover, milk is a suitable medium for most bacteria because of its chemical characteristics, such as its high water content, approximately neutral pH value and its nutrient contents. Contamination of milk could occur at any stage of production, starting from the circumstances surrounding the milking process to the delivery of the final product. The level of contamination is influenced by several factors, such as animal health and nutrition, housing and feeding facilities, parlor design, milking procedures, herd management techniques, herd size and milk yield (Bramley et al., 1992; Sanaa et al., 1993; Köster et al., 2006). This study focused on the contamination that may occur during the milking process. Different premilking cleaning regimes have been studied previously (Galton et al., 1984, 1986; Pankey, 1989; Gibson et al., 2005, 2008); however, to the best of the authors' knowledge, there are no recommendations for the proper pre-milking treatment to reduce microbial load.
This study evaluated the microbial content of raw milk before and after using different detergents to clean cows' udders in Jordan. The remainder of the study is organized as follows: Section 2 describes materials and methods, Section 3 presents results, Section 4 is a discussion, and Section 5 concludes.

Samples Collection

This study was conducted in Ajloun, a northern city in Jordan. Twenty healthy cows were selected from a cow farm. The milk was collected from the same cows on a daily basis but under different conditions. Six treatments were conducted, and each treatment was repeated three times on three successive days. The first treatment (control sample) included milking the cows after cleaning their udders with water only. The other five treatments included milking the cows after cleaning their udders with a different liquid hand wash detergent for each treatment. The detergents used were Gersy (detergent 1), Al Emlaq (detergent 2), Dove (detergent 3), Pass (detergent 4) and Hygiene (detergent 5). The samples were moved from the farm to the lab in a cooled box. Thereafter, a microbial analysis was carried out on the collected milk samples.

The milking process was conducted automatically using a milking machine. In order to minimize the chances of any contamination, the following milking steps were applied:

1) Milker preparation: The hands of a person milking cows can become contaminated, so the milker wore latex gloves, which were replaced periodically throughout the milking process.
2) Cleaning the udders: The udders and the teats were prepared by thoroughly cleaning them either with water (control sample) or with a certain detergent.
3) Drying the udders: The udders were dried thoroughly using a separate dry towel (a sterilized cloth).
4) Application of the machine: The milking machine was applied within one minute of the initial wiping of the teats to take maximum advantage of the milk letdown response.
5) Detaching the machine at the end of milking: The vacuum was turned off before the machine was removed.
6) The milking machine and the milk utensils were sterilized over time.

Microbial Analysis

Before plating the samples, they were diluted by adding 1 ml of each milk sample into a sterile test tube containing 9 ml of peptone water. After thorough mixing, the sample was serially diluted up to 1:10^-7. Thereafter, the samples were plated on selective media and incubated at the appropriate temperatures. The total bacterial counts were enumerated on nutrient agar (NA) (Difco); plates were incubated for 48 h at 32 °C (Difco, 1984). The total coliform bacteria were counted on MacConkey agar medium (Difco); plates were incubated at 37 °C for 48 h. MSA (mannitol salt agar) medium (Biolife) was used to enumerate the total staphylococci; plates were incubated at 37 °C for 48 h. Enterococci spp. were counted using BEA (bile esculin agar) medium (Difco); plates were incubated at 37 °C for 48 h. Finally, yeasts and molds were enumerated on potato dextrose agar (acidified) medium; plates were incubated at 25 °C for 7 days.

Statistical Analysis

Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS, Version 21). Analysis of Variance (ANOVA) was used to examine the differences among samples. Moreover, a post hoc analysis was performed using the LSD test to compare mean differences at a significance probability rate of 0.05 (P ≤ 0.05).
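The stated analysis (one-way ANOVA followed by LSD-style pairwise comparisons) can be sketched in Python as follows. The data values are illustrative placeholders, not the study's measurements, and unadjusted pairwise t-tests only approximate Fisher's LSD, which uses the pooled ANOVA variance.

from itertools import combinations
from scipy import stats

groups = {                       # log cfu/ml per treatment (made-up numbers)
    "control":     [6.5, 6.6, 6.4],
    "detergent 1": [4.4, 4.3, 4.4],
    "detergent 5": [4.1, 4.0, 4.0],
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

if p <= 0.05:                    # only test pairs if the overall test passes
    for a, b in combinations(groups, 2):
        _, p_pair = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {p_pair:.4f}")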
Results

Table 1 shows the following results:

Total Bacterial Count: There was a highly significant effect (P ≤ 0.01) of the different detergents on the total bacterial count. When comparing the means, we notice that all the detergents' total bacterial count means showed significant differences (P ≤ 0.05). There was a significant reduction (P ≤ 0.01) in total bacterial count by detergent 1, as the total bacterial count value was reduced from 6.53 log cfu/ml for the control to 4.36 log cfu/ml for detergent 1. The highest reduction in total bacterial count was by detergent 5, where the value dropped to 4.04 log cfu/ml.

Total Coliform: There was a highly significant effect (P ≤ 0.01) of the different detergents on the total coliform. There were significant differences (P ≤ 0.05) among the means of the different values of total coliform, except for detergents 2 and 3 and also for detergents 4 and 5, where there were no significant differences among the means. The total coliform values dropped for all detergents as compared to the control sample. The highest reduction in total coliform was by detergent 5.

Staphylococci: The results for staphylococci showed a highly significant (P ≤ 0.01) reduction in its values as a result of using different detergents. There were no significant differences among the means for detergents 1, 2, 3 and 4 (P > 0.05). However, the values of staphylococci were reduced compared to the control sample as a result of using different detergents; for example, the staphylococci value was reduced from 2.04 log cfu/ml for the control to 1.3 log cfu/ml for detergent 1. The highest reduction in staphylococci occurred as a result of using detergent 5, where the value decreased to 0.7 log cfu/ml.

Enterococci spp.: There was a highly significant effect (P ≤ 0.01) on enterococci spp. values as a result of the different detergents. There were significant differences (P ≤ 0.05) among the means. The lowest value of enterococci spp. occurred as a result of detergent 5, where it was reduced to 1.00 log cfu/ml.

Yeast and Mold: There was a highly significant effect (P ≤ 0.01) of the detergents on yeast and mold, where they were reduced from 2.77 log cfu/ml for the control sample to 2.63 log cfu/ml for detergent 1. There were significant differences among the means, except for detergents 3 and 5, where there was no significant difference (P > 0.05). The lowest value was for the fifth detergent, where the yeast and mold value was reduced to 2.01 log cfu/ml.

*** HS = highly significant difference (P ≤ 0.01).

Discussion

Numerous studies have focused on the microbial content of milk. Marcondes et al. (2014) evaluated the quality of raw milk in different production systems and its variation throughout the year. Their data were collected from 943 dairy farms in Brazil. They found that the total bacterial count was affected by the production system, with confinement systems presenting better total bacterial counts. Both month and year are factors that interfere with the total bacterial count, and the best patterns were found in the coldest periods of the year. Mišeikienė et al.
(2015) investigated the influence of pre-milking teat antiseptic solutions on the total bacterial contamination of teat skin. Three udder antiseptics were applied: Dermisan 0.5% (active ingredient: aminopropyl laurylamine), a 0.2% solution with iodine as the active ingredient, and a foaming solution of natural compounds (lactic acid + glycerol + allantoin). Cow teats were swabbed before and after application of the udder preparations. The total bacterial contamination on cows' teat skin was determined employing serial dilutions and the plate count method. The results showed that the udder applications with lactic acid and iodine had the highest probability of reducing total bacterial contamination. The use of udder antiseptics for premilking teat preparation reduced the levels of coliforms, coagulase negative staphylococci and Streptococcus uberis, but with the exception of iodine, no effect was found on reducing Candida genus yeasts. Agarwal et al. (2012) evaluated the effect of household practices on the microbiological profile of milk. Milk samples of pasteurized, ultra heat treated (UHT) as well as unpasteurized milk (vendor's milk) were collected. The effect of different storage practices and treatments on the microbiological profile (standard plate count (SPC), coliform, E. coli, Salmonella, Shigella, Staphylococcus aureus, yeast and moulds, anaerobic spore count, and Listeria monocytogenes) of milk was studied using National/International Standard Test Methods. The results indicated that the average SPC in vendor's milk was very high as compared to pasteurized milk. Coliform, yeast and moulds, E. coli, and Staphylococcus aureus were detected in the samples of vendor's as well as pasteurized milk. Boiling the milk reduces SPC and kills the other microorganisms. Storage of boiled milk at room temperature or under refrigerated conditions resulted in a similar increase in SPC at the end of 24 h, but storage of un-boiled milk, even under refrigerated conditions, increased SPC manifold after 24 h. Gibson et al. (2008) studied the effectiveness of premilking teat-cleaning regimes in reducing the teat microbial load and the effect on milk quality. The effectiveness of several premilking teat-cleaning regimes in reducing teat microbial load was assessed using 40 cows on each of four commercial UK dairy farms with herringbone parlours during two sampling periods. The cleaning regimes included dry wipe, alcohol-based medi-wipe, iodine-based dip and dry, and hypochlorite wash and dry. The results showed that all of the cleaning techniques studied reduced teat microbial load; however, the chlorine wash and dry was the most effective. Anderson et al.
(2011) investigated the presence and levels of microbes in unexpired pasteurized milk from randomly selected supermarkets in Kingston, Jamaica. They collected 20 representative milk samples from six (6) supermarkets. Microbiological tests such as methylene blue reduction, standard plate count (SPC), coliform plate count (CPC), purity plate culture, gram staining and biochemical tests were performed. They found unacceptable levels of Enterobacter spp. and Escherichia coli in most of the samples. To the best of the authors' knowledge, this is the first study in Jordan that investigated the effect of using different detergents in cleaning cows' udders on the microbial content of produced milk. Our findings indicated that using detergents significantly reduces the counts of the examined bacteria and the values of yeasts and molds. Moreover, there were significant differences in the results of the different detergents; thus, the choice of detergent matters for producing more hygienic milk.

Conclusion

The efficacy of using five different detergents to clean cows' udders before milking on the microbial content of milk was investigated. Using samples from twenty cows in Jordan and based on microbial analysis, the results showed that cleaning cows' udders with detergents before milking significantly reduces the total bacterial count, the total coliform, staphylococci and enterococci spp. counts, and the values of yeasts and molds. Moreover, there were significant differences in the results of using different detergents. Thus, some detergents were more effective than others in reducing the microbial counts.

Table 1. The results of microbial analysis (log cfu/ml). ** Values having different letters in the same column are significantly different (P ≤ 0.05).
2018-12-15T10:21:05.568Z
2018-06-14T00:00:00.000
{ "year": 2018, "sha1": "1d5ca55a41bfca6313b3fce871e56ff1f2c93a3a", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/ijb/article/download/74810/42024", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1d5ca55a41bfca6313b3fce871e56ff1f2c93a3a", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
248799512
pes2o/s2orc
v3-fos-license
The activity patterns of nonworking and working sled dogs

There are limited studies investigating the combined effects of biological, environmental, and human factors on the activity of the domestic dog. Sled dogs offer a unique opportunity to examine these factors due to their close relationship with handlers and exposure to the outdoors. Here, we used accelerometers to measure the activity of 52 sled dogs over 30 days from two locations in Canada. The two locations differ in the working demands of dogs, therefore we used linear mixed effects models to assess how different factors impact daytime and nighttime activity of working versus nonworking dogs. During the daytime, we found that males were more active than females among nonworking dogs and younger dogs were more active than older dogs among working dogs. Alaskan huskies had higher activity levels than non-Alaskan husky breeds in working sled dogs during the day. Nonworking dogs were slightly more active during colder weather, but temperature had no effect on working dogs' activity. The strongest predictor of daytime activity in working dogs was work schedule. These results indicate that the influence of biological factors on activity varied depending on dogs' physical demands and human activity was the most powerful driver of activity in working dogs.

[...] changes due to their short sleep-wake cycle 17,18. This suggests that daytime activity demands should not significantly impact the nighttime activity of dogs.

In addition to biological influences on dog activity, individuals who spend a considerable amount of time outdoors may be impacted by environmental conditions. First, activity may change in response to temperature. A previous study on sled dogs did not find any correlation between activity levels and temperature, but the observation period for that study did not include winter months 7. Studies on pariah dogs in West Bengal 19 and village dogs near Colola beach in Mexico 20 found that dogs were less active at higher temperatures. This trend was also seen at higher latitudes in closely related species, like wolves (Canis lupus), that displayed a drop in activity once temperature rose above 20 °C 21. Second, the nighttime activity of canids may fluctuate with the lunar cycle, as light can influence predator feeding patterns as well as prey behavior 22. Higher activity levels on more moonlit nights have been recorded for wolves 21, African wild dogs 23, and coyotes 24,25. The opposite trend (i.e., lower activity on more moonlit nights) has been observed in wild maned wolves 26 and black-backed jackals 27. Research on jackals suggests that the activity response to moonlight is variable and dependent on resource availability 28. This is further supported by the more diurnal activity pattern of captive wolves, who do not need to hunt for food 29. Third, the housing condition of dogs can influence their activity patterns. Group-housed dogs exhibited greater activity than dogs that were housed individually 30. The number of kennel mates may influence the activity patterns of dogs, especially at night when dogs are more restricted to their housing condition. In summary, existing evidence shows that the activity level of sled dogs may decrease with increased temperatures and increase with more kennel mates, but no research has looked at how the lunar cycle affects dog activity.
Dogs were the first domesticates, and have a shared history with our own species spanning back to at least 15,000 years ago 31,32 . Desired morphology, physiology, and behavior have been generated through artificial selection 33 . Kerepsi et al. 34 found high levels of behavioral coordination between dogs and their owners during cooperative interactions. Multiple studies have shown that dog activity is heavily influenced by human presence and behavior [5][6][7]10,15,16,35 . Higher activity levels during the weekend compared to weekdays is likely a response to greater interaction with owners 5,35 . This trend was visible in companion dogs who reside with their owners, and also in free-ranging and working dogs 6,16 . The activity of free-ranging dogs in West Bengal, India was highest during times when humans were most active 16 . This is expected since free-ranging dogs in India have been observed to rely on human-derived foods 36 . Activity in guard dogs was reactive to external stimuli with barking correlated mainly with human and dog activity 15 . Higher levels of activity in sled dogs were also associated with human movement 7 . Griss et al. 6 recently compared the activity patterns of free-ranging dogs to farm and family dogs. They found that dogs who were more independent from humans expressed a bimodal activity pattern which was not always observed in family dogs. The activity pattern of family dogs was more correlated with owner activity. These studies highlight the adaptability in the activity patterns of dogs as a response to human influence. As past studies have demonstrated, dog activity is affected by a multitude of biological, environmental, and human factors (Supplemental Table S1), however few studies have evaluated the combined effects of these factors. In this context, we examine the activity data of outdoor-housed sled dogs from two separate locations in Canada. One of the sled dog facilities is located in Haliburton, Ontario and the other is in Canmore, Alberta. Haliburton sled dogs did not work during the study period; therefore, they represented nonworking dogs (Table 1). Canmore sled dogs represented working dogs (Table 2). We collected activity data using CamNTech MotionWatch 8 accelerometers placed on the inner collars of sled dogs for a period of 30 days. The aim of our study was to quantify the activity levels of working and nonworking sled dogs in order to answer the question: What are the effects of biological, environmental, and human variables on sled dog activity? Past findings mentioned above indicate that the activity patterns of companion and working dogs, and to a degree, free-ranging dogs, seem to be entrained to human activity. Therefore, in working sled dogs, we predicted human-mediated influences (work intensity/ schedule, day type) would have larger effects on dog activity than biological (age, sex, weight, intactness, breed) and environmental (temperature, moon illumination) factors. In nonworking sled dogs, we predicted biological and environmental factors would have larger effects compared to human influences. Results In our analyses, we included individuals over the age of two, since this is when dogs have reached adulthood 37 . We only included individuals who were still working (i.e., not retired). We report results for the 52 individuals who met our inclusion criteria. These were 25 females and 27 males, 29 dogs were from Haliburton and 23 from Canmore. The mean age was 5.33 years old (± SD 2.46) and mean weight was 26.09 kg (± SD 4.58). 
All sled dogs at the Haliburton location were Alaskan huskies, while dogs from Canmore varied across husky breeds (Supplementary Table S2). At Haliburton, the mean temperature during the daytime was −4.90 °C (± SD 3.76, range −13.59 to 1.43) and during the nighttime was −6.01 °C (± SD 4.42, range −15.5 to 1.30), and the mean proportion of moon illumination was 0.53 (± SD 0.36, range 0 to 1). At Canmore, the mean temperature during the daytime was −4.51 °C (± SD 4.01, range −10.86 to 2.31) and during the nighttime was −5.71 °C (± SD 4.10, range −12.60 to 1.19), and the mean proportion of moon illumination was 0.53 (± SD 0.36, range 0.002 to 0.999).

The response variables of interest in our analyses were daytime and nighttime activity, which were expressed as the sum of all 1-min MotionWatch (MW) activity counts over the daytime or nighttime period (see "Materials and methods" section). The mean daytime activity was 160,065 (± SD 94,176; median: 146,295; range 28,633 to 517,041) MW counts in Haliburton and 264,995 (± SD 187,356; median: 218,821; range 22,675 to 885,358) MW counts in Canmore dogs. The mean nighttime activity was 5382 (± SD 3382; median: 4871; range 508 to 30,279) MW counts in Haliburton and 9208 (± SD 7372; median: 7298; range 849 to 70,009) MW counts in Canmore dogs. We performed two sets of linear mixed-effects models, one for each location, to evaluate the effects of human-mediated influences, as well as biological and environmental variables, on daytime and nighttime activity.

Haliburton dogs. In Daytime model 1, sex and temperature had significant effects on Haliburton dogs' activity (Table 1). We found that males were more active than females (β = 0.702, P = 0.031) and dogs were less active during warmer days (β = −0.055, P = 0.002). In Nighttime model 1, we found that only temperature had a significant effect on Haliburton dogs' activity (Table 1). Dogs were less active during warmer nights (β = −0.041, P = 0.001).

Canmore dogs. In Daytime model 2, we found day type, work, age, and breed to have significant effects on Canmore dogs' activity (Table 2). Dogs were more active during the day on weekends compared to weekdays (β = 0.244, P < 0.001; Fig. 1a) and were more active on work days than days off (β = 0.895, P < 0.001; Fig. 1b). Older dogs were less active than younger dogs (β = −0.186, P = 0.012) and non-Alaskan husky breeds were less active than Alaskan huskies (β = −0.454, P = 0.005; Fig. 2). In Nighttime model 2, we found that only moon illumination had a significant effect on Canmore dogs' activity (Table 2). Dogs were less active during nights with greater moon illumination (β = −0.036, P = 0.033). However, Nighttime model 2 was not significantly different from the null model, therefore we caution against interpreting this significant finding.
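A minimal sketch of this kind of linear mixed-effects model in Python, using statsmodels: the file name, column names, and formula are hypothetical stand-ins, and the study's exact specification (transformations, random-effects structure) may differ.

import pandas as pd
import statsmodels.formula.api as smf

# One row per dog-day: summed daytime MotionWatch counts plus predictors.
df = pd.read_csv("sled_dog_daytime.csv")   # hypothetical file

model = smf.mixedlm(
    "log_daytime_activity ~ sex + age + weight + breed"
    " + temperature + day_type + work",
    data=df,
    groups=df["dog_id"],                   # random intercept per dog
)
print(model.fit().summary())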
Additionally, biological factors had a stronger influence on activity than environmental conditions. Human influences. Both Haliburton and Canmore dogs are heavily reliant on handlers, and therefore we expected the activity of sled dogs to be heavily influenced by human activity and behavior, a pattern observed across multiple studies examining human influence on companion, working, and free-ranging dogs 5-7,10,15,16,35 . Daytime model 2 (Canmore) showed that work schedule had the largest effect out of all the predictor variables. In other words, whether dogs worked or not explained the most variation in their daytime activity. When examining the influence of day type on the Daytime models, we found that dogs at Haliburton showed no significant differences in activity on weekdays compared to weekends, but Canmore dogs showed more activity during weekends. Canmore dogs likely expressed this pattern due to more tour bookings during the weekend, since visitor availability is greatly constrained by the Monday to Friday work week. As a result of the COVID-19 lockdown in Ontario, the Haliburton dogs would not have experienced the difference in visitation between weekdays and weekends, since public visitation was restricted during the study period. Research on companion dogs shows that the "weekend effect" was also present, with more activity expressed by dogs during weekends when owners were home 5,35,38 . While our study examined the influence of day type and work schedule on sled dog activity, there are a number of additional human-related factors that can also influence dog activity. These factors may include feeding time or handler presence in the dog enclosure. However, these variables differ from day to day, and thus were outside the scope of our research question. Future studies should consider these variables for a more detailed assessment of how specific human activities or behaviors influence dog activity. In addition, selective breeding by humans for performance traits in dogs should be factored in when possible. Biological variables. Out of the five biological variables we examined, sex significantly influenced daytime activity in nonworking Haliburton dogs, while age and breed significantly influenced daytime activity in working Canmore dogs. A recent study by Woods et al. 5 found that female companion dogs were more active than males during the day; however, they did not control for differences due to breed. Among Haliburton dogs, males were more active than females, which corroborates previous results from sled dogs 7 . However, no significant sex differences were observed in Canmore dogs. This suggests that working demands may reduce the variation observed in activity due to sex. We had predicted that intact dogs would be more active than neutered dogs based on owner reports that neutering led to decreased roaming and restlessness behavior in companion dogs 11,13 . On the contrary, we did not detect differences in activity between intact and neutered individuals for male and female sled dogs. While the specific hormonal pathways affecting the motivation toward, and sustainment of, physical activity in dogs remain unclear, reductions in testosterone and estrogen have been shown to impair physical activity in rodents 39 . Further research is needed to clarify whether removal of sex hormones through spaying and castration decreases activity in neutered individuals, particularly in dogs partaking in prolonged, high-intensity activity.
When examining the influence of age, we found no significant differences in the activity levels of dogs in the daytime and nighttime models for Haliburton, but age did influence the activity of sled dogs from Canmore during the daytime. Many studies show a decline in daytime locomotor activity associated with ageing 4,37,40 . Although Zanghi et al. 4,37 observed age-related changes associated with nighttime activity, this was not observed by Siwak et al. 40 or in nonworking sled dogs in our study. The lack of significant activity differences due to age in Haliburton dogs could be due to sampling bias, since we only included adult dogs in this study; therefore, our results do not reflect young or retired senior dog activity. On the other hand, Canmore dogs showed a decrease in activity with age in the daytime model, which suggests that the influence of age on activity, even among healthy working dogs, is noticeable when dogs engage in high-intensity activity. Few studies have examined the effect of weight on the activity patterns of healthy-weight dogs. Existing results vary: Jones et al. 8 and Griss et al. 6 showed a negative relationship between weight and activity, while Hoffman et al. 10 and Woods et al. 5 found no significant effect. In this study, we found no significant effect of weight on the activity of sled dogs at either location. Canmore dogs consisted of several different breeds, such as Siberian husky, Seppala husky, and Alaskan malamute. While the aforementioned breeds are all common sled dog breeds, our daytime model found that Alaskan huskies were significantly more active than non-Alaskan husky breeds. Alaskan huskies are the result of selective breeding among several working dog breeds to achieve the desired traits (e.g., speed, endurance, work ethic) for sled dog racing 41 . Staff at Canmore confirmed that Alaskan huskies were the hardest working breed and ideal for running sled tours due to their strong desire to pull for long durations (J. Arsenault, personal communication). Alaskan huskies may have been preferentially chosen to pull longer tours, therefore leading to higher activity levels detected by the accelerometers. As such, the difference in activity between breeds could be a combination of biological (i.e., physiological) and human (i.e., staff preference) factors. Environmental variables. Overall, domestic dogs appear to be less influenced by the natural environment compared to wild species (wolves) because domesticates rely less on external conditions for hunting, mating, and survival 42 . Our results showed that temperature and moon illumination had significant, albeit small, effects on sled dog activity. We found that Haliburton dogs exhibited more activity during colder temperatures than warmer temperatures. However, Canmore dogs did not express significant differences in activity with changes in temperature. The National Research Council reported the lower critical temperature for Siberian huskies to be 0 °C 43 , but we believe the lower critical temperature for sled dogs is substantially lower than 0 °C because these dogs spend all of their time outdoors and have developed greater tolerance for cold winters than their companion counterparts 44 . Temperatures at Haliburton were on average lower than at Canmore, so the higher activity in Haliburton dogs could indicate the onset of nighttime shivering, huddling, and burrowing behaviors as temperatures at Haliburton approached or dropped below their lower critical threshold.
Canmore dogs were also fed later in the day and had a higher-fat diet compared to Haliburton dogs. In addition, it is plausible that working sled dogs are more tolerant to temperature fluctuations. More research is needed to elucidate the physiological mechanisms associated with diet and physical activity that may underlie dogs' tolerance to cold temperature. Another environmental variable that is known to affect foraging and predation patterns of animals is moonlight 22 . Activity patterns relative to moonlight are variable in some canid species and depend on resource availability 28 . While wild wolves expressed greater activity on moonlit nights 21 , captive wolves showed a more diurnal activity pattern 29 . Sled dogs are provisioned and do not rely on predation for food, so we did not expect moonlight to affect their activity. Contrary to our prediction, sled dogs from Canmore expressed lower activity levels in the nighttime model when there was more moonlight. It is important to note that the overall nighttime model for Canmore was not significant, so results should be interpreted with caution. The observed pattern may be a response to heightened wildlife activity during moonlit nights, since the study location is surrounded by expansive forested environments. Dogs have been selectively bred for a number of roles such as hunting, guarding, and herding 33,45 , which require high attentiveness to environmental stimuli 15,18 . On the other hand, Haliburton dogs did not show any significant trends in activity levels associated with moonlight. The effects of moonlight may not be as strong in Haliburton dogs since these dogs are kept in their grouped kennels (Supplementary Fig. S1a), which would reduce their exposure to moonlight compared to Canmore dogs, who can roam out of their covered housing as they please. Future studies should record ambient noise to assess whether the activity pattern of dogs is correlated to environmental sounds or the sounds of neighboring dogs. The effect of the number of kennel mates on activity was modelled for Haliburton dogs, which were kept in groups of two or three during the night and parts of the day. Hubrecht et al. 30 found that group-housed dogs expressed more activity than solitary dogs. Our study found that the number of dogs in each group did not influence dog activity. We were not able to compare group-housed dogs to solitary dogs because the Haliburton dogs included in this study did not have a kennel to themselves. Although each kennel housed two or three dogs, the kennels were proximate to one another (Supplementary Fig. S1a), which may still have allowed neighboring dog activity to influence individual dog activity. Nighttime activity. Interestingly, Nighttime model 2, which only included Canmore dogs, was not significantly different from the null model, which means none of the predictors were notable drivers of differences in nighttime activity among working sled dogs. Woods et al. 5 also found that none of the variables influencing daytime activity (sex, age, day type) had an effect on nighttime activity in companion dogs. An explanation for the lack of significant effects on nighttime activity in working sled dogs is their sleep-wake pattern. Dogs are polyphasic sleepers (i.e., have multiple sleep bouts) 4 and Adams and Johnson 17 found that dogs had on average 23 sleep bouts lasting 21 min each over the course of 8 h. These short sleep cycles in dogs may be key for quick recovery from schedule changes.
In drug-detector dogs, it was found that dogs only experienced a "first-night" effect of disrupted sleep if they were returning from an extended break (i.e., several weeks), and their sleep-wake cycles resumed normalcy after the first night 46 . Working sled dogs from Canmore maintained a consistent work schedule during the study duration without any prolonged breaks; in fact, all dogs undergo training a month prior to the busy season. Dogs' polyphasic sleep pattern (i.e., multiple sleep bouts throughout the 24 h period), coupled with the short duration of sleep-wake cycles, likely underlies their ability to quickly adapt to changing daytime schedules 17,46 . Conclusions The comparison of working and nonworking sled dogs showed that environmental conditions, like temperature and moonlight, had relatively minor effects on dog activity despite sled dogs being outside all day. Biological factors, such as sex and age, had different effects on activity depending on dogs' physical demands. Overall, work demand mediated by human schedule was the strongest driver of differences in activity of working sled dogs. While the current study focused on the intensity of activity, a future study could include variation in the patterning of daily activity (i.e., when dogs are active during the 24 h period), which could provide further insight into why we failed to detect any significant drivers of nighttime activity in working sled dogs. While most studies conclude that dogs are diurnal 4,10,15,16 , there is evidence that dogs exhibit bimodal activity peaks and may actually be cathemeral (i.e., active during the day and night) 5,7 . It is plausible that C. familiaris may be a facultative cathemeral species that has adapted to match humans' more diurnal activity pattern. Finally, we demonstrate that the use of non-invasive technology, such as accelerometry, can facilitate research on dog activity with minimal interruption to their regular daily tasks, which is very useful when studying working dogs. Materials and methods. Study animals and housing. Dogs were housed at two sled dog facilities, one in Haliburton, Ontario and the other in Canmore, Alberta. These locations will be referred to as Haliburton and Canmore, respectively, throughout the paper. At Haliburton, dogs were housed in sex-specific outdoor enclosures (49.6 m long and 12.2 m wide) with kennels measuring 1.5 m by 2 m on average. Kennel size varied depending on how many individuals were housed in the kennel, which ranged from one to three dogs (Supplemental Fig. S1). During the daytime, sled dogs from both Haliburton and Canmore were given time to roam free in their outdoor enclosures when they were not participating in the trail runs. The amount of time sled dogs were loose in their enclosure rather than restricted to their individual kennel varied day to day and by individual; this was dependent on a number of factors such as weather conditions and dog behavior. Dogs were fed high-performance kibble once per day, at 3:00 pm, and received water throughout the day. All Haliburton sled dogs were Alaskan huskies. At Canmore, dogs were also housed in sex-specific outdoor enclosures, measuring 8000 m 2 in area (Supplemental Fig. S1). Each dog had its own house (length: 1 m, width: 0.7 m, height: 0.6 m), built for insulation during winter months. Dogs were tethered to their house and could roam up to 2.5 m around it throughout the day. Dogs were fed high-performance kibble and their diet was supplemented with meat. At Canmore, the first feeding was between 6:30 and 8:30 am, dogs received water and soup throughout the day, and the last feeding session was between 4:00 and 7:00 pm.
There were multiple dog breeds at Canmore (e.g., Alaskan husky, Canadian Indian mix, and Seppala Siberian). At both locations, staff estimated dogs to be in contact with humans for a minimum of eight hours a day. We asked staff to provide information on dog breed, sex, weight, medical history, whether they were neutered, as well as working schedules, if applicable (see Supplemental Table S2 for dog signalment information). Data collection. We collected data between December 6, 2020 and January 19, 2021. We chose this period because it is the busiest part of the working season for sled dogs; thus, dogs would be the most active. Unfortunately, due to Ontario's COVID-19 provincial lockdown, Haliburton cancelled all sled dog tour bookings during the study period. Handlers at Haliburton occasionally practised running routes with the dogs, but none of the dogs worked during the study. Canmore, on the other hand, was able to continue with tours due to different provincial guidelines. To record dog activity, we used the CamNtech MotionWatch 8 accelerometer to quantify movement. The accelerometer has a piezoelectric film that detects and records movements as acceleration waveforms over each second. Acceleration measurements are then processed by the MotionWatch 8's on-board software to produce MotionWatch (MW) counts, which represent a measure of activity for a predetermined time period. We set the accelerometer to record MW counts for 1-min epochs (i.e., each data point is the sum of MW counts over 60 s). Accelerometers have been previously used to measure dog activity (e.g., 5,10,48 ). The MotionWatch 8 accelerometer's dimensions were 36 mm (length), 28.2 mm (width), and 9.4 mm (depth), and it weighed 9.1 g. We attached the accelerometer to the inner side (i.e., the side in contact with the dog) of a nylon neck collar using Gorilla Tape, and we made sure that only smooth, non-adhesive materials were in contact with the dog (see Supplemental Information Fig. S2). Similar attachment methods have been used by Hoffman et al. 10 . We prepared and shipped all equipment to the two locations and staff placed collars on the dogs; researchers did not interact with the dogs. At the end of the study duration, all equipment was shipped back to us, and we downloaded the raw activity count data using MotionWare (version 1.2.23). Environmental data. We obtained hourly Canmore temperature data from a weather station located 17.49 km from the study location (climate.weather.gc.ca). We collected hourly temperature data from Haliburton using a Kestrel 5400 Heat Stress Tracker placed next to the outdoor enclosure. For analyses, we calculated the average daytime and nighttime temperatures for each date. Moon illumination information was obtained from dateandtime.com and daily sunrise and sunset times were from sunrise-sunset.org. Moon illumination is the proportion of the moon's visible surface that is illuminated by the sun when the moon passes the local meridian. Values range from 0 (no illumination during new moon) to ~ 1 (maximum illumination during full moon). Data analysis. For each individual, we included 30 days of data (December 12, 2020 to January 10, 2021 for Haliburton dogs and December 18, 2020 to January 16, 2021 for Canmore dogs). We first divided the 24 h period into "daytime", which was from 6:00 am to 8:59 pm, and "nighttime", which was from 9:00 pm to 5:59 am.
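To make this aggregation concrete, the following minimal sketch (in Python with pandas; this is not the authors' code, and the column names dog_id, timestamp, and mw_count are hypothetical) shows one way the 1-min MW epochs could be summed into the daytime and nighttime totals defined above, with epochs before 6:00 am assigned to the previous date so that each night forms one continuous period:

import numpy as np
import pandas as pd

def daily_activity_totals(df: pd.DataFrame) -> pd.DataFrame:
    # df has one row per 1-min epoch: columns dog_id, timestamp, mw_count
    df = df.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    hour = df["timestamp"].dt.hour
    # Daytime: 6:00 am-8:59 pm; nighttime: 9:00 pm-5:59 am (as in the study)
    df["period"] = np.where(hour.between(6, 20), "day", "night")
    date = df["timestamp"].dt.normalize()
    # Night epochs before 6:00 am belong to the previous date's nighttime
    df["date"] = date.mask(hour < 6, date - pd.Timedelta(days=1))
    totals = (df.groupby(["dog_id", "date", "period"])["mw_count"].sum()
                .unstack("period")
                .rename(columns={"day": "daytime_activity",
                                 "night": "nighttime_activity"}))
    return totals.reset_index()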
Daytime included the earliest possible wake-up times (i.e., the earliest handler arrival at Canmore was 6:30 am) and extended to 2 h after the last possible feeding session (i.e., 7:00 pm at Canmore). To test our predictions, we evaluated the relative effects of human, biological, and environmental variables on working and nonworking sled dogs' daytime and nighttime activity using a multivariable approach in which all the fixed effects were included in the same linear mixed-effects model (LMM). In Daytime model 1 and Nighttime model 1, we only included data from Haliburton dogs. In Daytime model 2 and Nighttime model 2, we only included Canmore dogs. The response variable in the daytime models was the daytime total activity, which was the sum of all 1-min MW counts over one daytime period. The response variable in the nighttime models was the nighttime total activity, which was the sum of all 1-min MW counts over one nighttime period. To meet model assumptions, we log-transformed the response variable since activity counts were positively skewed 49 . In model 1, the fixed effects were day type (weekend/weekday), sex (male/female), age (continuous), weight (continuous), intact (yes/no), kennel (two/three roommates), temperature (continuous), and moon illumination (continuous). We also included an interaction effect for sex and intact because mating behaviors differ in males and females, so the intact condition could have different effects on activity in the two sexes. In model 2, we included all the fixed effects from model 1 except for kennel, since Canmore dogs had individual houses. We also added breed (Alaskan husky/non-Alaskan husky) and work schedule (whether dogs had worked the day prior: yes/no) as additional fixed effects. We coded breed as a binary variable because the sample size for some of the non-Alaskan breeds was very small (e.g., 1 Alaskan malamute) and there were several mixed breeds (Supplemental Table S2). To allow for comparison of effect size across variables, all continuous variables were scaled by subtracting the mean and dividing by the standard deviation 50 . We set ID as a random effect since we had repeated observations (one per day) for each individual. For all LMMs, overall model significance was determined using a likelihood ratio test comparing the full model to a null model with only the random effect. We also reported the marginal and conditional R 2 values for all LMMs 51 . We used the following R functions and packages: the lmer() function in lme4 52 for fitting LMMs, simulateResiduals() and plot() in DHARMa 53 for model diagnostics, and r.squaredGLMM() in MuMIn 54 for calculating marginal and conditional R 2 values. All analyses were performed in R version 4.0.1 for Mac OS X 55 . All statistical tests were two-tailed with alpha set to 0.05.
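As an illustration of this modelling approach, here is a minimal sketch of an analogous analysis in Python with statsmodels (the authors fitted their models with lmer() in R, so this translation is only approximate; the column names are hypothetical, and only the model 1 fixed effects are shown):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def fit_daytime_model_1(data: pd.DataFrame):
    data = data.copy()
    # Log-transform the positively skewed activity totals
    data["log_activity"] = np.log(data["daytime_activity"])
    # Scale continuous predictors to allow comparison of effect sizes
    for col in ["age", "weight", "temperature", "moon_illumination"]:
        data[col + "_z"] = (data[col] - data[col].mean()) / data[col].std()
    # Dog ID as random intercept; sex-by-intact interaction as in the paper
    full = smf.mixedlm(
        "log_activity ~ day_type + sex * intact + age_z + weight_z"
        " + kennel + temperature_z + moon_illumination_z",
        data, groups=data["dog_id"]).fit(reml=False)
    null = smf.mixedlm("log_activity ~ 1", data,
                       groups=data["dog_id"]).fit(reml=False)
    # Likelihood ratio test of the full model against the intercept-only null
    lr = 2 * (full.llf - null.llf)
    df_diff = len(full.fe_params) - len(null.fe_params)
    p_value = stats.chi2.sf(lr, df_diff)
    return full, lr, p_value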
Symmetry Detection and Analysis of Chinese Paifang Using 3D Point Clouds The Chinese paifang is an essential constituent element of Chinese and many other oriental architectures. In this paper, a new method for detection and analysis of the reflection symmetry of the paifang based on 3D point clouds is proposed. The method invokes a new model to simultaneously fit two vertical planes of symmetry to the 3D point cloud of a paifang to support further symmetry analysis. Several simulated datasets were used to verify the proposed method. The results indicated that the proposed method was able to quantify the symmetry of a paifang in terms of the RMSE obtained from the ICP algorithm, with resistance to the presence of some random noise added to the simulated measurements. For real datasets, three old Chinese paifangs (with ages from 90 to 500 years) were scanned as point clouds to input into the proposed method. The method quantified the degree of symmetry for the three Chinese paifangs in terms of the RMSE, which ranged from 20 to 61 mm. One of the paifangs, with apparent asymmetry, had the highest RMSE (61 mm). Other than the quantification of the symmetry of the paifangs, the proposed method could also locate which portion of the paifang was relatively more symmetric. The proposed method can potentially be used for structural health inspection and cultural studies of the Chinese paifangs and some other similar architecture. Introduction The Chinese paifang is one of the most important components of ancient Chinese architecture. It is an archway structure that first appeared in the Spring and Autumn period (771-476 BC) in China. It can be found in a wide range of places and communities, e.g., giant palaces or towns. After many years of evolution, the paifangs not only act as gateways to divide different regions, but also serve as monuments to recognize celebrities or important events. The paifangs are mainly composed of pillars and eaves, usually built with a highly symmetrical style [1]. Their symmetry is one of the common characteristics found in Chinese architecture due to the cultural background [2]. Even though almost all Chinese paifangs were built with symmetry, their symmetries have rarely been investigated quantitatively in the literature. This is likely due to the fact that the degree of symmetry of the paifangs can hardly be quantified (and then analyzed) without an appropriate geometric model to estimate their symmetry. The symmetry analysis of the paifangs can help us gain more insight into the construction skills and techniques of ancient Chinese architecture and, more importantly, allows civil engineers to examine their structural stability. With the recent advances in three-dimensional (3D) laser scanning and photogrammetric reconstruction techniques [3,4], surfaces of an object can be readily measured and recorded as a vast number of points represented by their 3D coordinates (known as point clouds). Detecting symmetry from the 3D point cloud of an object has been an important task in the fields of computer vision, photogrammetry and architectural design [5][6][7]. For example, Ecins et al. estimated the positions of the plane of symmetry from point clouds [8]. They first used an initial plane of symmetry to divide the points into groups and then paired up the points that were considered symmetrical. Then, they used the Levenberg-Marquardt estimator to compute the planar parameters. Xue et al.
[9] developed a derivative-free optimization method to detect symmetry from point clouds of buildings. They first divided the point clouds into different slices and then estimated the central axis of each slice to speed up the computation. These methods are mainly based on least-squares optimization techniques that estimate sets of parameters composing the symmetry. For computation of the symmetry using machine learning approaches, Gao et al. developed a convolutional neural network (CNN) to estimate the plane of symmetry of a set of objects including tables, cabinets and boats [10]. The CNN required training data as input to the model. Ji and Liu presented a framework using a deep learning and point-based classification technique to estimate the plane of symmetry from point clouds [11]. In their work, the random sample consensus (RANSAC) algorithm was employed to classify the points into different groups for the deep learning. Furthermore, Wu et al. proposed a symmetry detection method for occluded 3D point clouds based on deep learning [12]. Their method invoked a CNN for segmentation of the normal vectors to find the plane of symmetry. These methods are accurate, but applying them to the paifangs would be a problem, as the paifang is always covered by different or unique statuary. As a result, accurate sets of training data for the paifangs would be difficult to obtain. The aforementioned methods mostly focused on the detection of a single plane of symmetry for the reflection symmetry. Reflection symmetry refers to a type of symmetry for which a plane of symmetry divides an object into two halves and either half is the reflection of the other. A cuboid has three planes of symmetry that are mutually perpendicular, forming reflection symmetry [13]. A Chinese paifang and many other Chinese architectural components, such as the bell and drum towers [14], possess two (vertical) planes of symmetry in general. Therefore, we need a method which can accurately and simultaneously estimate two vertical planes of symmetry for such types of architecture to quantify the degree of symmetry and support related studies. Some of the paifangs are covered with complicated statues, and they are supposed to be symmetric regardless of the complexity. Estimating two perpendicular planes of symmetry can facilitate a thorough analysis of the degree of symmetry of the paifangs. Moreover, the positions (e.g., the centers of the planes) and orientations of these planes of symmetry should also be estimated for the paifang to support further applications. In this paper, a new method for the computation of the parameters of the two vertical planes of symmetry is proposed to detect and analyze the reflection symmetry of a Chinese paifang. The proposed method consists of a new approach for the parametrization of the two vertical planes of symmetry. Rather than fitting conventional planar parameters [15,16], the proposed method breaks down the plane-fitting problem into a line-fitting problem, which is more straightforward and tractable [17]. One of the advantages of the proposed method is that there is no need for the users to estimate the point-to-point correspondence using an initial plane of symmetry or any training datasets prior to the actual planar parameter estimation.
The paper is organized as follows: Section 2 describes the workflow of the proposed method and the geometric models; Section 3 focuses on the collection of the simulated and real datasets; Sections 4 and 5 present the results of the analysis and conclude the work, respectively. Overall Workflow Before the fitting is performed, the raw point clouds should be processed with multiple steps (e.g., registration and ground filtering). The workflow of the proposed method is shown in Figure 1. The paifang should be scanned in a way that the scanner positions are evenly distributed to keep the point density as uniform as possible. This will result in a more accurate geometric fitting of the paifangs. The scanned point clouds should be accurately registered (e.g., [18]) before further processing is performed. Then, ground filtering (e.g., with the cloth simulation filter [19]) and some manual editing should be applied to extract the entire point clouds of the paifangs. After the registration and ground filtering, the initial value of the rotation angle about the Z-axis (Ψ) of the paifang should be computed so that the least-squares solution can converge within several iterations. This can be achieved by minimizing the difference (ξ) between the absolute values of the ranges of the X and Y coordinates of the paifang using the golden section search method [20], as the model fits the data at a slope of 45° in the X-Y plane (this will be explained in the next subsection). ξ can be expressed as ξ(Ψ) = |(X′max − X′min) − (Y′max − Y′min)|, where X′ and Y′ are the coordinates of the point cloud after rotation about the Z-axis by Ψ. After the initial value of Ψ is estimated, it can be used to rotate the paifang's point cloud for voxelization [21]. The voxelization is needed for a downsampling process that reduces the point density differences found in the point cloud. When the paifang is scanned with terrestrial scanners mounted on tripods, the points at lower positions usually have higher point densities. The average point number per voxel after the voxelization can be used as a threshold to reduce the point density of those voxels with higher point density, so that the accuracy of the subsequent fitting can be improved. After all the aforementioned steps are performed, the fitting can be executed. The parameters obtained from the fitting are then used to transform the point cloud into a nominal position and then into an initial position, so that points separated by the planes of symmetry can be readily reflected and grouped for the symmetry analysis, based on the computation of the RMSE after the symmetric parts of the paifang undergo the reflection. Proposed Planar Model for the Reflection Symmetry Instead of fitting the 3D point cloud of a Chinese paifang to the conventional geometric model (also known as the general equation of a plane, which consists of four parameters [22,23]) in the Cartesian space, the proposed method is based on fitting the point cloud to a set of new geometric models for two mutually perpendicular vertical planes as the planes of symmetry. Rearranging the geometric models, the functional models for the subsequent least-squares fitting can be expressed as two 45°-sloped perpendicular lines in the X-Y plane, f 1 : X′ − Y′ − X c = 0 and f 2 : X′ + Y′ − Y c ′ = 0 (Equations (3) and (4)), where the primed coordinates denote the observations rotated to the nominal position; x and l are the vectors storing the parameters and the observations (the coordinates of the 3D point cloud), respectively; (X c , Y c ) and (X c ′, Y c ′) relate to the center of the two perpendicular planes of symmetry on the X-Y plane at the nominal and original positions, respectively; Ω, Φ and Ψ are the Euler angles about the X, Y and Z-axes, respectively; R 1 , R 2 and R 3 are the rotation matrices about the X, Y and Z-axes, respectively.
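A minimal sketch of this initialization step in Python (an illustrative implementation, not the authors' code; the point cloud is assumed to be an N x 3 NumPy array) uses SciPy's golden-section search to minimize ξ:

import numpy as np
from scipy.optimize import minimize_scalar

def xi(psi: float, xyz: np.ndarray) -> float:
    # Difference between the X and Y coordinate ranges after rotating
    # the point cloud about the Z-axis by psi (radians)
    c, s = np.cos(psi), np.sin(psi)
    x = c * xyz[:, 0] - s * xyz[:, 1]
    y = s * xyz[:, 0] + c * xyz[:, 1]
    return abs((x.max() - x.min()) - (y.max() - y.min()))

def initial_psi(xyz: np.ndarray) -> float:
    # Golden-section search over a coarse bracket; the model is fitted at
    # a 45 degree slope in the X-Y plane, so high precision is not needed
    result = minimize_scalar(lambda p: xi(p, xyz),
                             bracket=(0.0, np.pi / 4), method="golden")
    return result.x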
Figures 2 and 3 show the model parameters in a 3D and a bird's-eye view, respectively. Observations of the paifang with arbitrary positions and orientations are first rotated into a nominal position (Figures 2 and 3) so that the two purely vertical and perpendicular planes of symmetry (Equations (3) and (4)) are fitted simultaneously to them based on the least-squares criterion (i.e., the sum of the squares of the planar residuals is minimized). It is worth noting that the X-Y center of the paifang (X c , Y c ) is not first translated to the origin, which is different from the other 3D models [7,24,25], because X c and Y c are absolutely correlated if they are translated in that way. The geometric models of the purely vertical planes are the same as those of 2D straight lines (the direction cosines for the Z direction are zero); only one coordinate (X c /Y c ) can serve as the X/Y-intercept. Therefore, X c and Y c cannot be simultaneously translated to the origin. Instead, X c and Y c ′ are estimated in the model. After the parameters are estimated, the center of the planes of symmetry that divide the original point cloud (at the original positions) can be readily calculated by applying the backward rotation sequence. Least-Squares Estimation For simplicity, the observations are replicated so that two identical sets of observations are input into the Gauss-Helmert least-squares model [26] for the fitting, as the observations are constrained to satisfy the two planar models simultaneously. The linearized adjustment model is Aδ + Bv + w = 0, where δ is the correction vector for the model parameters; A is the design matrix of partial derivatives of the linear/planar models with respect to the parameters; B is the design matrix of partial derivatives of the linear/planar models with respect to the observations; v and w are the residual and the misclosure vectors for the models, respectively. As two planes of symmetry are estimated, the observations are duplicated to fit to the models to solve the same set of parameters. A is broken down into A 1 and A 2 for f 1 and f 2 , respectively. The normal equations of the least-squares fitting yield δ = −(Aᵀ(BP⁻¹Bᵀ)⁻¹A)⁻¹Aᵀ(BP⁻¹Bᵀ)⁻¹w, where P is the weight matrix for the observations. In practice, the terrestrial scanner is usually precisely levelled for scanning, and there is an assumption that the paifang is almost purely vertical. Therefore, estimation of the vertical angles (Ω, Φ) can be skipped to increase the degrees of freedom. In addition, this waives the process of estimating an accurate set of initial values of the vertical angles for the fitting. Symmetry Analysis After the model parameters are estimated, the original point cloud of the paifang can be translated to an initial position centered at the origin, as illustrated in Figure 4. The paifang can now be divided into four quadrants based on the models and the estimated parameters. The points in quadrants 1 and 2 can be translated using Equation (9) (reflection about the X-axis) to quadrants 3 and 4. After the translation, the translated points are matched to the original points in quadrants 3 and 4 using the iterative closest point (ICP) method [27]. The root-mean-square error (RMSE) obtained from the ICP is then used for the quantification of the degree of symmetry (f 1 ). For this case, the RMSE can be rewritten with subscripts so that it becomes RMSE x .
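The reflection-and-matching step can be sketched as follows (an illustrative Python implementation, not the authors' code), assuming the point cloud has already been transformed to the initial position; Open3D's point-to-point ICP is used here as a stand-in for the ICP matching described above, and RMSE y follows analogously by reflecting quadrants 2 and 3 about the Y-axis:

import numpy as np
import open3d as o3d

def rmse_x(points: np.ndarray, max_dist: float = 0.05) -> float:
    # RMSE_x: reflect quadrants 1 and 2 (Y > 0) about the X-axis and
    # ICP-match them to the original points in quadrants 3 and 4 (Y < 0)
    upper = points[points[:, 1] > 0].copy()
    lower = points[points[:, 1] < 0]
    upper[:, 1] *= -1.0  # reflection about the X-axis

    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(upper)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(lower)

    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    reg = o3d.pipelines.registration.registration_icp(src, tgt, max_dist,
                                                      estimation_method=est)
    return reg.inlier_rmse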
Similarly, the points in quadrants 2 and 3 can be translated using Equation (10) (reflection about the Y-axis) to quadrants 1 and 4 to perform the ICP and obtain RMSE y for the quantification of the degree of symmetry (f 2 ). Simulated Datasets Two paifangs were simulated as point clouds using MATLAB for the method verification and analysis. The simulated point clouds of the paifangs are shown in Figure 5. Both paifangs consist of two pillars, bases, and eaves. They are identical except that the one shown in Figure 5b is not perfectly symmetric, as an additional part (blue) is added to the top of the eave above one of the pillars. The point clouds of the paifangs were then processed under different conditions (e.g., with different random noise added) for further analysis. The details of the simulated paifangs are shown in Table 1. Real Datasets Three ancient Chinese paifangs were scanned as point clouds (Figure 6) using a Trimble SX10 scanner mounted on a levelled tripod in the Guangdong Province, China. The point clouds of the paifangs are labelled as paifangs A, B and C. Their details are shown in Table 2. Paifang A, named Jinshi, was built around the late Ming dynasty (AD 1368-1644). Paifang B was built inside the Sun Yat-sen University campus in 1935. Paifang C was named Tianbaojiexiao. It was built in the mid-Qing dynasty (AD 1636-1912). Results for the Simulated Datasets Figure 7 shows the RMSEs obtained from the ICP matching versus the random error (RE) added to the simulated point cloud of a perfectly symmetric paifang, S1 (Figure 5a). It can be seen that the trends of RMSE x and RMSE y are almost identical; both increase along with the RE. The RMSEs are almost equal to each other because the paifang is perfectly symmetric in both directions. The RE causes the asymmetry of the simulated paifang. When there is no RE, RMSE x and RMSE y are both zero, as the paifang is perfectly symmetric in both directions. As the RE is added to the paifang, the RMSEs serve as quantities to measure the symmetry. The larger the RE is, the higher the RMSEs are. When the simulated paifang becomes asymmetric in the X-direction, S2 (Figure 5b), the RMSE y increases significantly, as shown in Figure 8. The RMSE x stays the same as that of the original simulated paifang. This is because the paifang is still symmetric in the Y-direction. It is worth noting that the RMSE y stays around 29 mm regardless of the increment of RE. This indicates that the RMSE is significantly governed by the asymmetry and is resistant to the change of RE. Therefore, the RMSE can reflect the degree of symmetry regardless of a reasonable range of random errors found in the measurement. When there is no RE and the paifang is perfectly symmetric in both directions, both RMSEs vanish. The proposed method invokes the ICP to match the different parts divided by the two mutually perpendicular planes of symmetry. As a result, the proposed method is shown to be efficient in reflecting the degree of symmetry using the RMSE regardless of the existence of random errors in the measurements. Results for the Real Datasets Figure 9 shows the estimated planes of symmetry (magenta) for paifang A. It can be seen that the planes divide the paifang into four quadrants as expected. It is assumed that most of the points are symmetrical, so the estimated planes are closest to the planes of symmetry. Similarly, the estimated planes of symmetry are shown in Figures 10 and 11 for paifangs B and C, respectively. It can be seen that the planes divide the paifangs into four quadrants.
It is worth noting that the base stones are missing on one side of paifang C, as indicated in Figure 11b. However, the planes of symmetry still divide the paifang at the central positions because most of the points (the majority of the points) in the paifang still lie in a symmetrical pattern. Figure 12 shows the histograms of the point distribution for paifang A. Figure 12a depicts the points distributed in quadrants 1-2 and 3-4 along the X-direction. It can be seen that most of the points from the two quadrant pairs have a very close distribution pattern. However, this is not the case for the points distributed in quadrants 2-3 and 1-4 along the Y-direction, as seen in Figure 12b. These results suggest that paifang A is more symmetric along the X-direction compared to the Y-direction. If paifang A were perfectly symmetric in both directions, the distribution patterns in both directions for those quadrants would be consistent (they would completely overlap each other). As shown in Figure 13, paifang B is a relatively thin paifang, as the Y-coordinate spans only a narrow range (approximately 2 m). Similar to the case of paifang A, paifang B is more symmetric along the X-direction compared to the Y-direction, but the degree of asymmetry for paifang B is lower than that of paifang A. However, the degree of asymmetry of paifang C in the X-direction is the highest compared to that of the other two paifangs, as shown in Figure 14. This is due to the missing stone base on one side (Figures 11b and 15). Nevertheless, the degree of asymmetry of paifang C in the Y-direction is high as well (Figure 14b). This is likely attributed to the fact that paifang C was transplanted from another place and the structure was altered (if it is assumed that the paifang was originally built with a high degree of symmetry). The RMSE x and RMSE y for paifangs A, B and C are tabulated in Table 3. It can be seen that the RMSE y of paifang C is the highest, indicating that it is the most asymmetric among all three paifangs. This is again due to the missing stone base on one side, as shown in Figure 15. Comparing the three sets of RMSEs for paifangs A, B and C, it can be seen that paifang B is the most symmetric, since it possesses the lowest RMSE. This is consistent with their ages: paifang B was built only around 90 years ago, but paifangs A and C are much older (400-500 years). Considering that the RMSEs (excluding the RMSE y for paifang C) are relatively low (less than 3 cm), it is concluded that paifangs A, B and C are very symmetric. This is consistent with the assumption that Chinese paifangs are usually built to a highly symmetrical standard. To locate which part of the paifang is relatively symmetric, the conjugate points separated by the planes of symmetry were found by using the k-nearest neighbors algorithm [28]. All the points in quadrants 1-2/2-3 were first translated to quadrants 3-4/1-4, and then a nearest neighbor search was performed to find the conjugate points within a sphere defined by a given radius. The symmetric conjugate points are shown in Figures 16-18. It can be seen that the middle parts of paifangs A, B and C are the most symmetrical, as the points with conjugate points found within a 1 cm radius sphere are concentrated in the middle, as seen in Figures 16a, 17a and 18a, respectively. This is because points farther away from the central line are usually more prone to external forces, and more serious bending/deformations occur there.
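A minimal sketch of this conjugate-point search in Python (illustrative only; the point cloud is assumed to be at the initial position, and the radius is in the same units as the coordinates):

import numpy as np
from scipy.spatial import cKDTree

def symmetric_mask_x(points: np.ndarray, radius: float = 0.01) -> np.ndarray:
    # Flag points in quadrants 1-2 (Y > 0) whose reflection about the X-axis
    # has a conjugate point in quadrants 3-4 (Y < 0) within the given radius
    upper = points[points[:, 1] > 0].copy()
    lower = points[points[:, 1] < 0]
    upper[:, 1] *= -1.0  # reflect about the X-axis
    dist, _ = cKDTree(lower).query(upper, k=1, distance_upper_bound=radius)
    return np.isfinite(dist)  # True where a conjugate point was found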
From the left columns of Figures 16-18, it can be seen that the lower portions of the paifangs are usually more symmetrical. This is because the lower portion is less exposed to the wind and other environmental factors that can cause chronic deformations. In addition, it can be seen that paifang A is the most asymmetrical in the Y-direction, mostly in the eaves. It is a giant paifang that was built around 500 years ago, with a relatively more complicated structure compared to the other two paifangs. The eaves had likely been deformed over time along the Y-axis, and this caused such an asymmetry. (Figure 18 caption: Points symmetric (red) and asymmetric (yellow) with respect to the planes of symmetry for paifang C: (a,c,e,g,i) along the X-direction with search radii of 1, 2, 3, 4 and 5 cm, respectively; (b,d,f,h,j) along the Y-direction with search radii of 1, 2, 3, 4 and 5 cm, respectively.) Conclusions In this paper, a new method for the detection and analysis of the reflection symmetry of 3D point clouds of Chinese paifangs was proposed. The proposed method is composed of a new model for simultaneously fitting two vertical planes of symmetry to the point clouds of the paifangs, via breaking down the plane-fitting problem into a line-fitting problem. After the parameters of the planes of symmetry were estimated, the point cloud of the paifang could be transformed and then divided into four equivalent quadrants, enabling evaluation of the degree of symmetry based on the ICP algorithm. Several simulated datasets were used to verify the proposed method. It was found that the proposed method was able to quantify the degree of symmetry regardless of the presence of some random noise added to the measurements, indicating that the proposed method is practical. Meanwhile, real datasets for three old Chinese paifangs (with ages ranging from 90 to 500 years) were collected using a Trimble scanner as input to the method for the symmetry analysis. The results showed that the degrees of symmetry could be quantified in terms of the RMSEs obtained from the ICP, which ranged from 20 to 61 mm. The results revealed that the paifang with apparent asymmetry had the largest RMSE (61 mm) among all three Chinese paifangs. It was shown that the method not only could quantify the reflection symmetry of the paifang, but also could locate which portion of the paifang was relatively more symmetric. Therefore, the proposed method has high potential for structural health inspection and cultural studies of Chinese paifangs and other similar types of architecture.
Screening for Genes Coding for Putative Antitumor Compounds, Antimicrobial and Enzymatic Activities from Haloalkalitolerant and Haloalkaliphilic Bacteria Strains of Algerian Sahara Soils Extreme environments may often contain unusual bacterial groups whose physiology is distinct from that of bacteria from normal environments. To satisfy the need for new bioactive pharmaceutical compounds and enzymes, we report here the isolation of novel bacteria from an extreme environment. Thirteen selected haloalkalitolerant and haloalkaliphilic bacteria were isolated from Algerian Sahara Desert soils. These isolates were screened for the presence of genes coding for putative antitumor compounds using PCR-based methods. Enzymatic, antibacterial, and antifungal activities were determined using culture-dependent methods. Several of these isolates are typical of desert and alkaline saline soils, but, in addition, we report for the first time the presence of a potential new member of the genus Nocardia with particular activity against the yeast Saccharomyces cerevisiae. In addition to their haloalkali character, the presence of genes coding for putative antitumor compounds, combined with the antimicrobial activity against a broad range of indicator strains and their enzymatic potential, makes them suitable for biotechnology applications. Introduction There is an increasingly urgent need for new active biomolecules and enzymes for use in industry and therapy [1]. However, the rate of discovery of new useful compounds has been in decline [2,3], and because of this there is an interest in investigating previously unexplored ecological niches [4,5], particularly extreme environments. These environments have provided a useful source of novel biologically active compounds in recent years [1,6,7]. Extreme environments are distributed worldwide. These ecosystems were once thought to be lifeless, as they exhibit extreme physical and chemical barriers seemingly insurmountable to life. With the advancement of our knowledge, we now see them as yet another niche harbouring "extremophiles" [8]; major categories of extremophiles include halophiles, thermophiles, acidophiles, alkaliphiles, and haloalkaliphiles [6,9]. Haloalkaliphilic bacteria have attracted a great deal of attention from researchers in the last decade [9]. In 1982, the term haloalkaliphile was used for the first time to describe bacteria that are both halophilic and alkaliphilic [10]. This group of bacteria is able to grow optimally or very well at pH values at or above 10 along with high salinity (up to 25% (w/v) NaCl) [11]. To cope with such harsh conditions, haloalkaliphilic microorganisms have evolved various physiological strategies to sustain their cell structure and function [12,13]. These bacteria have been widely identified and studied from hypersaline environments, soda lakes, solar salterns, salt brines, carbonate springs, and the Dead Sea [14]. Their survival clearly indicates the widespread distribution of such organisms in natural saline environments [12,15]. The interest in haloalkaliphilic microorganisms is due not only to the necessity of understanding the mechanisms of adaptation to multiple stresses and detecting their diversity, but also to their possible application in biotechnology [9]. The present work involved the isolation and characterization of new haloalkalitolerant and haloalkaliphilic bacteria able to produce extremozymes and to elaborate natural bioactive compounds effective against pathogenic bacteria and fungi.
The screening for genes coding for putative antitumor compounds by PCR with three sets of primers was also performed. We have been interested in the soils of the Algerian Sahara Desert, which is one of the biggest deserts worldwide and encompasses some of the most extreme environments (sabkha and chott). However, it is also considered one of the least explored regions. Our team has been interested in these magnificent ecosystems for many years, and the few studies that have been published have revealed highly active biomolecules [16][17][18], biodiversity of interesting new taxa [19][20][21][22][23][24], and enzymes [25,26]. Sampling and Strain Isolation. Samples from different soils (7 sites) of Algeria's Sahara Desert were collected in March 2010 (100-300 g per site, in sterile bags) (Figure 1). Most samples were saline and alkaline soils, with an electrical conductivity between 1.4 and 20.2 mS/cm (at 20 ∘ C) and a pH range of 7.5-9; the temperature varies from 22 ∘ C in the north to 44 ∘ C in the south of the Sahara. One gram of each sample was suspended in 9 mL of sterile water (with 0.9%, 10%, or 20% NaCl w/v) and serially diluted to 10 −4 . For each dilution and for each concentration, soil particles were allowed to sediment; then 0.1 mL of the liquid phase was spread onto the surface of modified International Streptomyces Project 2 (ISP2) [27] agar media supplemented with NaCl to match the salt concentrations used for the dilutions (0.9%, 10%, and 20% NaCl w/v) and adjusted to either pH 7 or pH 10 by adding 5 M NaOH before autoclaving, as well as onto nutrient agar plates. The plates were maintained at constant humidity and incubated at either 30 ∘ C or 50 ∘ C for 15 days. Colonies were picked out and repeatedly restreaked until purity was confirmed. All bacterial isolates were stored at 4 ∘ C on the same medium used for isolation. Physiological Growth Parameters. Physiological growth parameters for the thirteen selected strains were determined by the agar plate method on modified ISP2 medium, varying the parameter under study. Salinity tolerance was examined at 0, 1, 5, 10, 15, 20, and 25% NaCl w/v. The pH growth range was investigated between pH 5 and 12 at intervals of 1 pH unit. The temperatures tested were 4, 10, 15, 20, 25, 30, 37, 40, 42, 45, 55, and 60 ∘ C. Incubation time was one week for Actinobacteria and two days for non-Actinobacteria. Sequencing of the 16S rRNA gene of the different strains was carried out by GATC Biotech (UK). The isolates were identified using the EzTaxon-e server (http://eztaxon-e.ezbiocloud.net/) on the basis of 16S rRNA sequence data [32]. Optimized PCR conditions were as follows: (1) The Molecular Evolutionary Genetics Analysis (MEGA) software, version 4.0.2, was used to assist the phylogenetic analyses and the phylogenetic tree construction [33]. Similar 16S rRNA gene sequences for the studied strains were obtained using EzTaxon [32]. Multiple alignments of the data were performed with CLUSTAL W [34]. Evolutionary distances were calculated using the maximum composite likelihood method and are expressed in units of the number of base substitutions per site [35]. The phylogenetic tree was reconstructed with the neighbour-joining algorithm [36]. The topology of the resultant tree was evaluated by bootstrap analysis of the neighbour-joining dataset, based on 1000 resamplings [37]. The sequences reported in this study have been submitted to NCBI GenBank and the accession numbers are listed in the appendices.
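As an illustration of such a neighbour-joining reconstruction, a minimal sketch in Python with Biopython follows (the authors used MEGA 4.0.2 with CLUSTAL W alignments; the alignment file name here is hypothetical, a simple identity distance stands in for the maximum composite likelihood distance, and bootstrapping is omitted):

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Read a multiple alignment of the 16S rRNA sequences (e.g., from CLUSTAL W)
alignment = AlignIO.read("16S_alignment.fasta", "fasta")
# Pairwise distances; "identity" is a simple p-distance substitute
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
# Neighbour-joining tree from the distance matrix
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)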
Primers and Molecular Screening. From the thirteen selected strains, six were subjected to molecular screening for genes coding for putative antitumor compounds using three primer sets (Table 1). These strains were chosen on the basis of the presence of nonribosomal peptide synthetase/polyketide synthase (NRPS/PKS) genes within their genomes (data not published). The first set, designed by Decker et al. [29], amplified dNDP-glucose dehydratase genes. The second set was that of Chang and Brady [30], used to screen for biosynthesis of the antitumor substance BE-54017. The final set was taken from the study of Ouyang et al. [31], targeting the jadomycin cyclase gene, which is involved in angucycline production. The PCR mixture included 1-2 μL of genomic DNA, 15 μL of master mix (Sigma, UK), 1 μL each of forward and reverse primers (10 μM each) (Sigma, UK), 1 μL of BSA (10 mg/mL) (Promega, Madison, WI, USA), and 6 μL of sterile distilled water in a final volume of 25 μL. PCR was performed with a Mastercycler pro (Eppendorf). Agarose gels (1% w/v) were photographed after staining with ethidium bromide at 0.5 μg mL −1 with a minivisionary imaging system. Sizes of the fragments were estimated using the Fermentas 1 kb Plus DNA ladder (Fermentas, UK). Antimicrobial Activities Test. Antimicrobial activity was determined by the agar cylinder diffusion method. A 6 mm diameter cylinder was taken from solid cultures and placed on a preseeded nutrient agar plate of the targeted microorganisms mentioned below. Up to five cylinders of different bacteria were tested per plate. Inhibition zones were expressed as diameters and measured after incubation at 37 ∘ C for 24 h for bacteria and at 28 ∘ C for 48-72 h for the filamentous fungus and yeasts [38]. Reference strains used in this study were as follows. Enzymatic Screening. Enzymatic activities (amylolytic, proteolytic (caseinase), and lipolytic) were screened using zone clearance assays. The enzymatic substrate was incorporated into the media, and the strains were restreaked as spots [39]. The tests were conducted with respect to the physiological growth parameters of each strain. Strain Isolation and Selection. Isolation plates developed various types of colonies. Sixty to one hundred colonies were found per plate at the first dilution for almost all soils, two to ten colonies were observed at the third dilution, and almost none on the fourth dilution plates. We also observed that, for the same dilution, the number of colonies decreased as the concentration of NaCl increased. One to five colonies that appeared less represented were selected from each plate with respect to the haloalkaliphilic character. A total of thirty-nine isolates were distinguished. Amongst these thirty-nine isolates (17 filamentous, 17 bacillus-shaped, and 5 coccus-shaped), thirteen strains, eleven with particular morphology (filamentous, which may indicate Actinobacteria, best known for the production of active biomolecules), one bacillus-shaped, and one coccus-shaped, were the subject of our study. The macroscopic and microscopic aspects of three of the thirteen strains are represented in Figure 2. The molecular identification by EzTaxon-e, physiological growth parameters, and enzymatic screening are described in Table 2. The alphabetical part of each strain code refers to the geographical area of isolation; the numerical part is a simple sequential order to differentiate strains. Physiological Growth Study. All strains could tolerate up to 5% NaCl.
Strains Reg1, Ker5, and HHS1 were able to tolerate up to 10%, whereas Bisk4 could tolerate up to 15%. Tag5 started growing at 1% and M5A started growing at 10%; these two strains could grow at up to 20% NaCl. Reg1, Ker5, and HHS1 are considered halotolerant. M5A and Tag5 are considered halophilic [40]. Besides the alkalitolerant character of strain A60, it presented a thermophilic profile (45-60 ∘ C). With the exception of strain Bisk4, which may be considered a thermotolerant bacterium since it grows at up to 55 ∘ C, the other selected bacteria are considered mesophiles. Identification. Most isolated strains belonged to the genus Streptomyces (AT1, ASB, GB1, Ig6, and GB3). The five Actinobacteria other than Streptomyces were identified as follows: Reg1 and Ker5 as two different Nocardiopsis sp., HHS1 as Pseudonocardia sp., M5A as Actinopolyspora sp., and Bisk2 as Nocardia sp. Bisk2 appears to be a new member, as it branches off from its nearest relative, Nocardia jejuensis (determined by EzTaxon-e, with 95% similarity over the 750 recovered bases), 100% of the time. One filamentous strain, A60, was identified as Thermoactinomyces sp. The bacillus Bisk4 is part of the Bacillus mojavensis complex and the coccus Tag5 belongs to the genus Marinococcus (Table 2; Figure 3). Screening for Genes Coding for Putative Antitumor Compounds. The Glu1/Glu2 primer set gave 4/6 positives. A high-intensity band was registered for the strain Ig6. The primers targeted two different regions for the strain Bisk2. Multiple bands were recovered from the strain GB1, but none fell within the 500-700 bp range. PCR using this primer set was negative for the strain A60 (Figure 4(a)). The StaDVF/StaDVR primer set was positive in one strain (Figure 4(b)). The PCR with the AuF3/AuF4 primer set was negative for all tested strains (Figure 4(c)). Antimicrobial Activity. The antimicrobial activity of the thirteen selected strains differed between strains (Table 2; Figure 5). Among these, eight showed antimicrobial activity against at least one of the targeted microorganisms (Figure 5(c)). However, none of the thirteen strains demonstrated specific and unique activity against the Gram-negative bacteria. Enzymatic Activity. Strains from Bacillus and Streptomyces were more enzymatically active and possessed at least two of the screened enzymes. The strain Thermoactinomyces sp. (A60) was able to degrade casein and lipids. Strains Bisk2, TAG5, and HHS1 seemed to have none of the screened enzymes (Figure 6). Discussion In this study we looked at the extreme environment of the Algerian Sahara Desert as a source of novel strains possessing interesting bioactive properties. In total, we isolated a collection of thirty-nine haloalkalitolerant and haloalkaliphilic isolates, thirteen of which were selected and screened for genes coding for putative antitumor compounds, as well as for antimicrobial and enzymatic activities. All strains were identified using 16S rRNA gene sequencing. This study represents novelty in looking at the relatively understudied areas of sabkha and chott and has yielded at least thirteen strains which potentially have antitumorigenic, antimicrobial, and enzymatic properties. Although in such often extreme and hostile ecosystems the diversity and abundance of bacteria can be low, ranging from 10 to 10 4 CFU/g of soil, with the physicochemical parameters as controlling factors [19], the strains retrieved and identified in our study, in particular the Actinobacteria strains, which belong to various taxa, indicate a great diversity.
Diversity in environments such as the one in this study has previously been investigated, for example in Tunisia [9], China [41], and previously in the Algerian Sahara soils [19,42]; these studies revealed that members of these extreme ecosystems are mainly halotolerant or halophilic organisms.

Table 2: Physiologic characterization, antitumoral genes, enzymatic activity, antimicrobial activity, and most related species of the thirteen selected strains of this study.

Many of the isolated taxa in this study have previously been found in this environment, particularly members of Actinopolyspora, Nocardiopsis, and Marinococcus [9,41-43]. Despite this, their community structure differs both quantitatively and qualitatively in each ecosystem. This would be due not only to adaptation to environmental obstacles but also to the geolocalisation [43], differences in the study protocol (methods, media) [41], and the sampling sites [42]. Genome sequencing followed by bioinformatics analysis of some already sequenced microorganisms, such as Actinobacteria and Bacillus, has revealed the presence of several gene clusters per genome that can produce different molecules [44]. Among the validly described halotolerant and halophilic bacteria, particularly Actinobacteria, only a few have been subjected to analysis of their bioactive compounds [45]. In addition, many compounds are usually produced in very low amounts (or not at all) under typical laboratory conditions [46]. PCR-based methods for specific enzymes activating specific molecules are excellent screening tools for these strains; they not only indicate the presence of probable gene clusters but also help in the biochemical characterisation of the molecules. These methods help reduce the number of strains that need to be screened by cultural methods. PCR-based methods are not limited to genomic DNA but can also be applied to the screening of eDNA, which has led to the discovery of new active biomolecules [30]. Screening for the potential production of a particular type of biomolecule, such as antibiotics and antitumorals, without going through the tedious biochemistry process, is more efficient when the typing protocol targets the biosynthesis gene cluster rather than taxonomic marker genes (e.g., the 16S rRNA gene), which often give misleading results [47,48]. In our study, we have been interested in the molecular screening of bioactive genes coding for putative antitumor compounds. The degenerate primers Glu1/Glu2 for the conserved N-terminal sequence of dNDP-glucose 4,6-dehydratase genes have been extensively used to screen for clusters of active biomolecules with antitumoral activity, such as novobiocin [49], enediyne [50], elloramycin [51], sibiromycin [52], ravidomycin, and chrysomycin [53]. The primer set has also been reported in other screening studies, for the talosins A and B cluster, an antifungal [54], and for caprazamycin biosynthesis, an antimycobacterial [55]; more recently, we have used this set to screen for the amicetin biosynthesis gene cluster, an antibacterial and antiviral agent [56]. The second primer set was designed by Chang and Brady [30], who screened a previously archived soil eDNA cosmid library by PCR using degenerate primers designed to recognize conserved regions in known oxytryptophan dimerization genes (StaD/RebD/VioB, etc.).
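To make the degenerate-primer idea concrete, the sketch below expands a degenerate sequence into every concrete primer it encodes using the IUPAC degeneracy codes. The example primer is invented for illustration; it is not the published Glu1/Glu2 or StaDVF/StaDVR sequence.

```python
from itertools import product

# IUPAC degeneracy codes (the subset needed for this example)
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
         "K": "GT", "M": "AC", "N": "ACGT"}

def expand_degenerate(primer: str) -> list[str]:
    """Enumerate every concrete sequence encoded by a degenerate primer."""
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in primer))]

variants = expand_degenerate("GGNGARTAYC")   # hypothetical primer
print(len(variants))    # 16 concrete sequences (4 x 2 x 2)
print(variants[:4])
```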
The oxytryptophan dimerization enzymes were chosen as probes because this enzyme family is used in the biosynthesis of structurally diverse tryptophan dimers, which have shown antitumoral activity. Both indolocarbazole biosynthetic gene clusters (e.g., staurosporine, rebeccamycin, K-252a, and AT2433) and violacein biosynthetic gene clusters contain homologous enzymes that carry out the oxidation (StaO/RebO/VioA) and subsequent dimerization (StaD/RebD/VioB) of tryptophan. One of the six screened strains, M5A, was positive for this primer set. This would signify that strain M5A could produce tryptophan dimer compound(s). Sequencing followed by BLAST analysis of the M5A PCR product obtained with the StaDVF/StaDVR primer set (GenBank: KJ560370) showed 76% homology to the rebeccamycin-like tryptophan dimer gene cluster of the uncultured bacterium clone AR1455 (GenBank: KF551872) studied by Chang and Brady [30]. The strain Streptomyces sp. Ig6, in contrast, yielded a mixed PCR product; we think this is probably due to the presence of multiple variable copies of this gene in that strain. The different patterns of activity against the targeted microorganisms observed in this study may indicate a variety of produced active biomolecules. The antimicrobial activity of Bisk2, most closely related to Nocardia jejuensis [57], has, to our knowledge, never been reported. This result encourages us to consider Bisk2 as probably a new member, or at least a new strain, of Nocardia. Genome sequencing, DNA-DNA hybridisation, and molecular chemotaxonomy would give more insight into its taxonomic position among the Nocardia species. The Sahara Desert is subject to large fluctuations in parameters such as temperature, pH, and salinity. It is populated by communities of organisms with intrinsic genomic heterogeneity for adaptation. The mechanisms of cell adaptation engage several enzymatic processes, which may be a source of enzymes showing a higher level of stability and activity over a wider range of conditions. The enzymes found in this study (proteases, amylases, and lipases) would be economically valuable, since they were screened from such environments and are likely to exhibit rare properties; such extremozymes are of great value to the biotechnology industries [7,58,59]. Conclusion. Exploration of the biodiversity and biotechnological potential of desert microorganisms has gone several steps forward in recent years. The Sahara Desert is one of the largest deserts worldwide, spreading across several African countries, which are among those with the lowest registration rates of biodiversity in biological databases [60]. In addition to the insights into the biodiversity of the Algerian Sahara Desert, to our knowledge this is the first time that molecular screening for these genes coding for putative antitumor compounds has been used to analyse Algerian strains. In this study, we have highlighted the presence of diverse haloalkalitolerant and haloalkaliphilic strains with potential antitumorigenic and antimicrobial activities and other interesting enzymes. Future work will concentrate on further cloning and sequencing of whole clusters, chemical characterisation, identification by mass spectrometry, and other enzymatic and biochemical techniques better suited to determining the nature of the compounds produced by the strains identified in this study, particularly Nocardia sp. Bisk2, Actinopolyspora sp.
M5A, and Streptomyces sp. Ig6.
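The similarity figures quoted above (95% 16S identity for Bisk2, 76% homology for the M5A product) come from alignment-based comparisons. As a toy illustration of the core arithmetic only, assuming two pre-aligned, gap-free fragments of equal length:

```python
def percent_identity(seq1: str, seq2: str) -> float:
    """Naive percent identity over two pre-aligned, gap-free sequences.

    Real analyses use BLAST or EzTaxon-e, which handle gaps and local
    alignment; this sketch shows only the final ratio.
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Toy fragments (hypothetical; not the actual 750-base Bisk2 read)
a = "ACGTACGTACGTACGTACGT"
b = "ACGTACGAACGTACGTTCGT"
print(f"{percent_identity(a, b):.1f}% identity")   # 90.0%
```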
A warm molecular ring in AG Car: composing the mass-loss puzzle

We present APEX observations of CO J=3-2 and ALMA observations of CO J=2-1, 13CO J=2-1 and continuum toward the Galactic luminous blue variable AG Car. These new observations reveal the presence of a ring-like molecular structure surrounding the star. Morphology and kinematics of the gas are consistent with a slowly expanding torus located near the equatorial plane of AG Car. Using non-LTE line modelling, we derived the physical parameters of the gas, which is warm (50 K) and moderately dense (10$^3$ cm$^{-3}$). The total mass of molecular gas in the ring is 2.7$\pm$0.9 solar masses. We analysed the radio continuum map, which depicts a point-like source surrounded by a shallow nebula. From the flux of the point-like source, we derived a current mass-loss rate of $1.55\pm0.21\times10^{-5}$ solar masses / yr. Finally, to better understand the complex circumstellar environment of AG Car, we put the newly detected ring in relation to the main nebula of dust and ionised gas. We discuss possible formation scenarios for the ring, namely, the accumulation of interstellar material due to the action of the stellar wind, the remnant of a close binary interaction or merger, and an equatorially enhanced mass-loss episode. If the molecular gas formed in situ as a result of a mass eruption, it would account for at least 30$\%$ of the total mass ejected by AG Car. This detection adds a new piece to the puzzle of the complex mass-loss history of AG Car, providing new clues about the interplay between LBV stars and their surroundings.

INTRODUCTION

Over the course of their lifetime, massive stars have a great impact on the chemistry, structure and dynamics of the interstellar medium (ISM). This impact intensifies as they enter the luminous blue variable (LBV) phase, a brief (∼10$^4$ years) and unstable post-main-sequence stage, in which massive stars exhibit the highest mass-loss rates (up to 10$^{-3}$ M$_\odot$ yr$^{-1}$). Such copious mass loss occurs by virtue of dense and steady stellar winds and, sporadically, violent outbursts like the Great Eruption of η Car in the 19th century. The interaction between this mass loss, interstellar material and different wind regimes shapes the closest circumstellar environment, leading to the formation of large multi-phase nebulae. Nebulae around LBV stars (LBVNs) therefore emerge as valuable laboratories to understand the feedback mechanisms between the parent stars and their environment. The multi-wavelength study of this circumstellar material is a crucial tool to trace back the mass-loss record of these sources throughout their different evolutionary phases (e.g. Umana et al. 2005, 2009, 2010; Cerrigone et al. 2014; Agliozzo et al. 2014; Buemi et al. 2017). However, most of the research so far has focused on the dust and ionised gas content of LBVNs, paying little attention to the missing piece of the picture: the molecular gas. In recent years, carbon monoxide and a handful of other simple molecular species have been detected in the surroundings of several candidate and confirmed LBVs, such as G79.29+0.46 (Rizzo et al. 2008, 2014), [GKF2010] MN101 (=MGE 042.0787+00.5084), and the well-known η Car (Smith & Davidson 2001; Loinard et al. 2012, 2016; Smith et al. 2018a; Gull et al. 2020).
All these successful detections demonstrate that, provided the adequate physical conditions, conspicuous amounts of molecular gas can arise and survive for some time in the hostile out-skirts of these stars. By analyzing the emission at mm-and sub-mm wavelengths from rotational transitions of CO and other species, one can obtain valuable kinematic information that allows for precisely reconstructing previous mass-loss events and place constraints on the timescales of the observed structures. Therefore, the very existence of these molecules opens a complementary window to learn about the mass-loss phenomena in these hot, massive stars, beyond what is visible at other wavelengths. In addition, the determination of the relative chemical abundances and isotopic ratios in the molecular gas provides valuable snapshots of the stellar chemistry. Among the scant LBV family, AG Car stands out as one of the most luminous members (L ∼ 1.5 × 10 6 L , Groh et al. 2011). Located in the far side of the Carina arm, the literature establishes a canonical distance of 6 ± 1 kpc (Humphreys et al. 1989). Revisions on this value were recently made by Smith & Stassun (2017) and Smith et al. (2019) based on Gaia parallaxes. They concluded that AG Car may be 20% closer, at a distance of 4.7 +1.2 −0.8 kpc, which is consistent, within the uncertainties, with the previous estimate. Hereafter in this work, we adopt the canonical distance of 6 kpc. AG Car has been widely observed across the whole electromagnetic spectrum for decades. The star is surrounded by a slightly bipolar ring nebula of about 30 × 40 arcsec that was first reported by Thackeray (1950). Further dynamic studies of the nebula in the light of H , [N ] and [S ] by Thackeray (1977) concluded that the nebula was a hollow shell with hints of bipolarity, expanding at ∼50 km s −1 . Humphreys et al. (1989) determined a kinematic age for the nebula of ∼ 10 4 years, consistent with the duration of the LBV phase. Broad-band optical continuum observations by Paresce & Nota (1989) revealed a helical jet-like structure in the NE-SW direction, apparently arising from the star, which gave rise to questions about the possible binarity of AG Car. However, no signature of a companion star was detected (Nota et al. 1992). Smith (1991) and Nota et al. (1992) revisited the dynamics of the nebula, establishing an expansion velocity of 70 km s −1 and identifying a bipolar outflow at 83 km s −1 distorting the NE side of the shell. Later, ATCA observations at 3 and 6 cm by Duncan & White (2002) detected radio continuum emission arising from the star and the shell, with a spectral index consistent with thermal radiation from ionised gas. Finally, the dust content of the nebula was thoroughly investigated by Voors et al. (2000) and Vamvatira-Nakou et al. (2015) (hereafter VN15) by means of infrared imaging and spectroscopy. The latter estimated a total nebular mass of ∼15 M and proposed a mass eruption with a kinematic age of 1.7 × 10 4 yr as the origin of the structure. None the less, what makes AG Car particularly interesting is its extreme variability. AG Car exhibits yearly micro-variations of 0.1-0.5 mag superimposed to a more than a decade-long S Dor cycle, with visual changes of ∼ 2 mag between the hot and cool states 1 (Humphreys & Davidson 1994;Stahl et al. 2001;Sterken 2003). For that reason, AG Car is an excellent laboratory to learn about the formation and destruction of molecules in a changing environment, in terms of physical conditions and time scales. 
The first attempts to detect molecular gas associated with AG Car were made by Nota et al. (2002) (hereafter N02), targeting the CO J = 1→0 and J = 2→1 lines with the SEST telescope. With a series of single-dish pointings, they coarsely sampled the region around the star, with beam sizes of 45 and 23 arcsec and spacings of 45 and 12 arcsec, respectively. Their spectra displayed multiple narrow velocity components, likely related to intervening sheets of molecular gas, but also a broad component centred at 26 km s$^{-1}$. The authors interpreted this broad component, which presented a pseudo-gaussian profile, as arising from a circumstellar expanding envelope or disk, for which they estimated a minimum mass of 2.8 M$_\odot$. Despite this promising result, the region was never properly imaged at higher resolution to confirm the existence of such a structure. In this work, we report APEX and ALMA observations of CO and $^{13}$CO towards AG Car, confirming the detection of a molecular ring surrounding the star. The paper is structured as follows: in Sect. 2 we summarize the molecular line observations; in Sect. 3 we describe the main findings; in Sect. 4 we analyze the morpho-kinematic features of the structure, derive the physical parameters of the gas and present a kinematic model. In the same section, we also discuss possible formation mechanisms. In Sect. 5 we put together the available data into a single, unified picture of the mass-loss record of AG Car; and finally, in Sect. 6 we present the conclusions of this study and lay the groundwork for further research on AG Car.

APEX observations

We observed AG Car with the Atacama Pathfinder EXperiment (APEX) telescope, located at Llano de Chajnantor (Chile). Observations took place on 2014 September 23, as part of the program E-094.D-0598A (P.I.: G. Umana). The front-end used was the APEX2 receiver of the Swedish Heterodyne Facility Instrument (SHEFI, Vassilev et al. 2008), targeting the CO J = 3→2 transition at 345.796 GHz. The eXtended Fast Fourier Transform Spectrometer (XFFTS, Klein et al. 2012) provided an instantaneous bandwidth of 2.5 GHz with 32768 channels, resulting in an effective velocity resolution of about 0.07 km s$^{-1}$ at the observing frequency. An On-The-Fly map of 100 × 100 arcsec around the source was done in Total Power mode, using the off position α = 10$^h$49$^m$23$^s$.3, δ = −60°58′47″.7 (J2000). At the rest frequency of the line, the APEX primary beam is 19.2 arcsec. Observations were performed under average weather conditions (1-2 mm of precipitable water vapour). Calibration was done using Mars and X TrA, and pointing and focus were checked regularly during the night. The raw spectra produced by the APEX standard pipeline were reduced using the GILDAS/CLASS software package. This process involved the removal of bad scans, a linear baseline fitting and a velocity smoothing to a final resolution of 1 km s$^{-1}$ to improve the rms. Conversion from the original antenna temperatures (T$_A^*$) to main-beam brightness temperature (T$_{mb}$) was done by correcting for the forward-hemisphere efficiency (F$_{eff}$) and the beam efficiency (B$_{eff}$) of the antenna, so that

T$_{mb}$ = (F$_{eff}$/B$_{eff}$) T$_A^*$. (1)

For the APEX telescope, F$_{eff}$ = 0.97 and B$_{eff}$ = 0.73 at 345 GHz. The typical rms noise per 1 km s$^{-1}$ channel in the final cube is 50 mK.

ALMA observations

Follow-up observations of AG Car were conducted on 2019 October 2 and 6, with the ALMA 7-m and total power arrays, respectively, in the context of the program 2019.1.01056.S (P.I.: L. Cerrigone).
We mapped the spatial distribution of the rotational J = 2→1 transitions of CO and $^{13}$CO at 1.3 mm. The source was observed with three total-power antennas under excellent weather conditions (pwv ∼0.25 mm) and with ten 7-m antennas under good conditions (pwv ∼1.25 mm). The spectral setup consisted of two spectral windows centred at 220.398 and 230.538 GHz, plus two additional windows for continuum. The correlator was tuned to provide an instantaneous bandwidth of 1850 MHz in 2048 channels, with a moderate velocity resolution of about 1.3 km s$^{-1}$ for each line. ACA data were calibrated with CASA v.5.6.1-8 and the ALMA pipeline version 42686 that comes with it. Calibrators J1047-6217 and J1058+0133 were used for bandpass and flux calibration. Cubes of line emission, with a characteristic beam of 7 × 5 arcsec, were constructed from the calibrated visibilities using the CLEAN algorithm after continuum subtraction. CASA v5.6.1-8 and pipeline v.42686 were also used for the reduction of the single-dish data. The observations were performed in position-switching mode, with an off position chosen by the ALMA Observatory among known clean areas for Galactic sources. The off position was about 4 deg away from our target. Such a large angular distance between the target and the off position caused ripples in the spectra, which we did not attempt to remove in post-processing, since their amplitude is negligible with respect to the brightness of the spectral lines from our target. The flux calibration was based on Kelvin-to-Jansky factors estimated from calibration campaigns performed by the Observatory. At our frequencies the factors had a value of about 41.5 Jy/K. Finally, to recover all the spatial scales and avoid missing flux, we combined the ACA and the Total Power (hereafter ALMA TP) cubes with the CASA feather task, which involved regridding the single-dish data and correcting it for the primary beam response. The feathered cubes recover 99.7% of the single-dish flux. The resulting products were then re-scaled from flux densities to brightness temperature using the standard equation

T$_B$ = 1.222 × 10$^6$ S$_\nu$ / (ν$^2$ θ$_{maj}$ θ$_{min}$), (2)

where S$_\nu$ denotes the flux density in Jy beam$^{-1}$, ν is the reference frequency in GHz, and θ$_{maj}$ and θ$_{min}$ are the major and minor axes of the beam in arcsec, respectively. The typical rms noise level per channel of 1.3 km s$^{-1}$ ranges from 5 to 10 mK. Similarly, the rms noise level of the continuum map is ∼1.2 mJy beam$^{-1}$. A summary of the observing parameters is presented in Table 1. Throughout this paper, we use the following conventions: (1) intensities of emission lines are expressed in the T$_{mb}$ scale; (2) intensities of continuum are expressed in the flux density scale; (3) velocities refer to the local standard of rest frame (LSR); and (4) positions are offsets from the coordinates of AG Car, α = 10$^h$56$^m$11$^s$.5779, δ = −60°27′12″.8095 (J2000) (Gaia Collaboration 2018). Fig. 1 shows the averaged APEX CO J = 3→2 spectra in the field of AG Car. Most of the emission arises in the velocity range (0, 50) km s$^{-1}$. We identify multiple velocity components within this range, coincident with the ones reported by N02 in the CO J = 1→0 and CO J = 2→1 transitions: two narrow, dominant components at 18.5 and 29.7 km s$^{-1}$, plus additional minor components at 11, 18.5, 21.5 and 29.5 km s$^{-1}$. N02 attributed these components to background or foreground contamination from ambient gas moving perpendicularly to the line of sight. We also detect a broad component at 26.5 km s$^{-1}$, the same one that N02 tentatively associated with the star.
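The two unit conversions used above, from antenna temperature to main-beam temperature (Eq. 1) and from flux density to Rayleigh-Jeans brightness temperature (Eq. 2), are simple enough to spell out. A minimal sketch with the efficiencies and numerical constant as quoted in the text; the example input values are arbitrary:

```python
def ta_to_tmb(ta_star: float, f_eff: float = 0.97, b_eff: float = 0.73) -> float:
    """Eq. (1): T_mb = (F_eff / B_eff) * T_A*, with APEX efficiencies at 345 GHz."""
    return (f_eff / b_eff) * ta_star

def flux_to_tb(s_jy_beam: float, freq_ghz: float,
               bmaj_arcsec: float, bmin_arcsec: float) -> float:
    """Eq. (2): Rayleigh-Jeans brightness temperature in K, for S in Jy/beam."""
    return 1.222e6 * s_jy_beam / (freq_ghz**2 * bmaj_arcsec * bmin_arcsec)

print(ta_to_tmb(0.10))                       # 0.10 K antenna temperature -> ~0.13 K
print(flux_to_tb(0.05, 230.538, 7.0, 5.0))   # 50 mJy/beam in a 7"x5" beam -> ~0.03 K
```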
APEX single-dish data

Careful inspection of the spatial distribution of each component shows that all the narrow lines are clumpy and somewhat extended, mainly arising from the southeast and the west. The same applies to the broad component, which is particularly intense towards the centre and the south of the map. However, we note that the 29.5 km s$^{-1}$ narrow line presents a 'shoulder-like' feature consistent with a partially overlapping line at a slightly higher velocity. This component was indeed marginally detected in N02's spectra, without clear hints of any particular spatial distribution, probably being just a high-velocity wing of the broad component at 26.5 km s$^{-1}$. This newly identified component, centred at 32.5 km s$^{-1}$, has a velocity extension of about 6 km s$^{-1}$ and is significantly weaker than the others. We attempted to isolate the line by removing the neighbouring components through a gaussian fitting, but this resulted in the introduction of multiple artefacts that altered the line shape. None the less, integration over the approximate velocity range of the line, i.e. at least 30 to 36 km s$^{-1}$, represented in grey in the figure, reveals the existence of an elongated structure towards the centre of the field (Fig. 1, bottom panel). Despite being possibly contaminated by the adjacent narrow component, the structure exhibits a certain degree of bipolar symmetry with respect to AG Car, with two 'lobes' along the SE-NW axis. None of the other ambient components shows a comparable spatial distribution.

CO and $^{13}$CO emission

The feathered CO J = 2→1 and $^{13}$CO J = 2→1 spectra show essentially the same features visible in the CO J = 3→2 APEX data. Fig. 2 compares the feathered (filled) and the ACA-only (black) spectra for the two observed lines. The interferometer filtered out most of the emission at V$_{LSR}$ < 30 km s$^{-1}$, including the broad line at 26.5 km s$^{-1}$. Considering that the ACA maximum recoverable scale is 28 arcsec at 230 GHz (our shortest baseline was ∼8.9 m), these components necessarily correspond to extended, large-scale emission, as previously discussed. On the other hand, a significant fraction of the flux in the velocity range (+30, +40) km s$^{-1}$ is preserved, suggesting that this emission is considerably more compact than the ambient material. From the peak temperature, we estimate that ACA recovers ∼50% of the single-dish flux in this range. Fig. 3 shows the integrated intensity map of the CO J = 2→1 line in the range (+29, +36) km s$^{-1}$. ACA's synthesized beam (of ∼7 arcsec) improves the angular resolution with respect to APEX by a factor of ∼3. Such improvement allows us to resolve the emission, revealing an elliptical structure enclosing AG Car and apparently detached from the star. Two remarkably symmetric 'lobes' dominate the emission, southeast and northwest of the star, peaking at (10″, −10″) and (−10″, 10″), respectively. The lobes appear connected by two fainter arcs. A large clump at position (−25″, −5″) disrupts the structure on its westernmost side. This clump is related to residuals from the narrow component at 29.5 km s$^{-1}$ that are visible in the ACA spectra (perhaps the densest part of an extended cloud). The S/N ratio across the ring ranges from 6σ to 14σ, which means that even the faintest parts are genuine. By fitting an ellipse to the lobes, we derive an approximate angular size of 35 × 15 arcsec, with a position angle of 135 degrees east of north.
The orientation and size of the ellipse match perfectly the NW-SE elongation of the infrared shell reported by Voors et al. (2000) from continuum and [Ne ] imagery. In the 13 CO = 2 → 1 integrated intensity map (not shown), only the lobes of the structure are visible, with an S/N ratio slightly above 5 . The non-detection of fainter arcs is likely due to the limited sensitivity. Radio continuum emission Radio continuum emission traces the ionised gas in the nebula of AG Car. Fig. 4 shows the ACA radio continuum map of AG Car at 225 GHz. Emission is dominated by an unresolved compact source in the center of the field, presumably related to the star. Weak emission arising from the brightest parts of the optical nebula are visible as well above the 3 level. The radio morphology is slightly different from the 5.5 and 8.8 GHz ATCA maps presented by Duncan & White (2002), which depict a compact source surrounded by a bright and very clumpy detached nebula. Note, though, that ATCA data has a higher spatial resolution (1 arcsec at 8.5 GHz) and provides a more complete uv coverage. We fitted a two-dimensional gaussian to the compact source, measuring a flux density of 30.2 ± 2.2 mJy. The uncertainty is the quadratic sum of the rms noise in the map and a calibration error of 5%. The resulting value does not match the expected flux density obtained by extrapolating the spectral index = −0.1 reported by Duncan & White 2002, which is 0.89 mJy. Such discrepancy implies that either the spectral index is variable, and thus the current physical conditions in the star are quite different from those in the 1994-1996 period (when the ATCA data were gathered), or that another emitting component is superposed at mm-wavelengths, such as thermal dust close to the star, effectively pushing the spectral index toward more positive values. Finally, if we ignore the 25 year gap between the observations and combine the fluxes, we obtain a spectral index ∼ 0.8, compatible with a typical stellar wind. We determined a nebular flux of 40 ± 15 mJy after subtracting the contribution of the point source. Again, this value does not match the extrapolation of the −0.1 spectral index reported by Duncan & White (2002), which yields a slightly higher flux. We note, though, that the LAS (largest angular scale) of ACA at 225 GHz is about 1/3 of the LAS of ATCA 750B configuration at 5.5 GHz, so it is very likely that we are missing a fraction of the extended flux. Morphology and kinematics To establish a physical connection between the CO structure and AG Car, we need to analyze the morphology and kinematics of the gas and put them in context with the surrounding ISM. AG Car lies in projection amid the massive star clusters Tr 14 and Tr 16 and the large Car OB2 association. Toward these structures, i.e. from = 287 • to 290 • , several giant molecular clouds are observed. Most of them arise at negative systemic velocities, comprised between −10 and −30 km s −1 , corresponding to distances of 2-3 kpc on the near side of the Carina arm (Cohen et al. 1985). These clouds are dense, clumpy and continuously eroded by intense UV radiation from hundreds of newborn OB stars (e.g. Rizzo & Arnal 1998;Rathborne et al. 2002;Wu et al. 2018). On the contrary, the emission in the field of AG Car appears at LSR >10 km s −1 . In this direction, these positive velocities are expected for gas on the far side of Carina (Grabelsky et al. 1987), which agrees with the distance of 6 kpc assumed for the star. 
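Returning briefly to the radio continuum analysis: the spectral-index bookkeeping used there (a power law S_ν ∝ ν^α, extrapolated from the ATCA epoch to 225 GHz) is easy to reproduce. A minimal sketch; the ATCA flux below is a back-computed placeholder chosen so that α = −0.1 reproduces the 0.89 mJy quoted in the text, not a measured value:

```python
import math

def extrapolate_flux(s_ref: float, nu_ref: float, nu: float, alpha: float) -> float:
    """Extrapolate a flux density assuming S_nu proportional to nu^alpha."""
    return s_ref * (nu / nu_ref) ** alpha

def spectral_index(s1: float, nu1: float, s2: float, nu2: float) -> float:
    """Two-point spectral index alpha = log(S2/S1) / log(nu2/nu1)."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

s_atca = 1.23   # mJy at 8.8 GHz (placeholder, see lead-in)
print(extrapolate_flux(s_atca, 8.8, 225.0, -0.1))   # ~0.89 mJy expected at 225 GHz
print(spectral_index(s_atca, 8.8, 30.2, 225.0))     # ~1.0 for these assumed inputs
```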
A closer look at the distribution of the emission in the field of AG Car highlights the differences between the central CO structure and the other velocity components. Fig. 6 shows the APEX CO J = 3→2 velocity-integrated intensity maps of the dominant lines, i.e. the two narrow lines at 18.6 and 29.6 km s$^{-1}$ and the broad component at 26.5 km s$^{-1}$ that N02 tentatively linked with the star (labelled C1, C2 and C3 in order of increasing velocity). As mentioned earlier in Sect. 3.1, none of the three components seems to be morphologically associated with the star, unlike the component around 32.5 km s$^{-1}$ (hereafter C4), shown again for reference. The "arm" seen in component C3, extending over the source to the East, is due to contamination by the low-velocity wing of component C4 (the two lines overlap significantly). Most of the emission from C1, C2 and C3 is resolved out by the interferometer, preserving the small-scale features of component C4. As seen in the ACA map, the spatial distribution of C4 is consistent with a circumstellar ring or torus seen at a certain inclination angle. Assuming that such a structure is perfectly axisymmetric, we can make an elliptical fit and use the axial ratio r$_{ab}$ to estimate the viewing angle, such that i = arctan(r$_{ab}$). This method yields an angle of i = 68 ± 3 degrees, which implies that the ring is viewed nearly edge-on, with its semimajor axis following the direction in which the infrared shell is slightly elongated (Voors et al. 2000). The deprojected characteristic radius of the structure is thus 15 arcsec, which is equivalent to ∼0.4 pc at a distance of 6 kpc. This physical size is in good agreement with the inner radius of the infrared dusty shell derived by VN15. The ring extends from 29 to 36 km s$^{-1}$, with a central velocity of 32.5 km s$^{-1}$. In the literature, it is possible to find estimates for the radial velocity of AG Car spanning from 0 to 10 km s$^{-1}$ (e.g. Wolf & Stahl 1982; Humphreys et al. 1989; Stahl et al. 2001; Groh et al. 2009a). Such a discrepancy is the consequence of the inherent uncertainty in determining systemic velocities for LBV stars, whose variable stellar winds and circumstellar envelopes often contaminate spectroscopic measurements. Contrarily, the velocity of the CO structure does not depend on any stellar properties. Therefore, in our analysis, we consider a systemic velocity of 32.5 km s$^{-1}$. This velocity is compatible with the distance, assuming small departures from galactic rotation.

Figure 6. Spatial distribution of the main components in the field of AG Car. APEX CO J = 3→2 velocity-integrated intensity maps of components C1 (from 17.5 to 19.7 km s$^{-1}$), C2 (from 19.3 to 28.2 km s$^{-1}$), C3 (from 28.4 to 30.8 km s$^{-1}$) and C4 (from 30 to 36 km s$^{-1}$, same as Fig. 1). The colour scale is relative to the peak intensity of each map. Contours at 2.5, 3.5, 4.5, 5.5, 6.5, 8.5, 10.5 and 12.5 K km s$^{-1}$. The red marker indicates the position of the star.

Fig. 5 shows the channel maps of the CO J = 2→1 emission in the velocity range of interest. We note that the moderate velocity resolution of our data, of just 1.3 km s$^{-1}$, does not allow us to perform a detailed analysis of the observed kinematic structure, which is sampled by only seven channels. The first two channels, corresponding to V$_{LSR}$ = 28.2 and 29.5 km s$^{-1}$, appear highly contaminated, with a prominent clump towards the west, particularly bright in the latter. As discussed in Sect.
3.2, this clump is likely associated with the narrow component at 29.5 km s −1 : the map of C3 presents relative maximum near position (-25 , -5 ), as seen in Fig. 6. The rest of the channels are mostly devoid from contamination, except for small faint clumps towards the south. Overall, the emission moves clockwise around the star, from the southwest to the northwest, as velocity increases. The lobes are present across all the channels, with the southeast one being slightly more prominent in the blueshifted channels and vice-versa. Contrarily, the arcs that connect the lobes are only visible in the central channels: opening towards the northeast at 29.5 and 30.8 km s −1 , and towards the southwest at 33.3 and 34.6 km s −1 . The velocity gradients observed along the minor and major axes of the structure are mainly compatible with gas moving radially. Two physical scenarios are possible: (1) an accretion disc of gas falling onto the star; or (2), an expanding ring, composed of stellar ejecta or compressed interstellar gas. From an evolutionary perspective, the accretion disk scenario is highly improbable. Infalling disks around massive stars are expected only in pre-MS stages, and the evolved status of AG Car is more than confirmed. On the other hand, an expanding ring is a much more plausible hypothesis. In this case, the orientation is constrained by the observed velocity gradient: the southeastern part, blueshifted, is the approaching near side, and the northwestern part, redshifted, is then the receding far side. Correcting for the inclination, we estimate an expansion velocity of exp = 3.5 ± 0.5 km s −1 . This value is surprisingly low, considering that typical velocities of LBV winds range from 50 to 100 km s −1 . For a characteristic radius of 0.4 pc, the kinematic age of the ring would be ∼ 10 5 years. This age must be seen as a strict upper limit, since the structure may be slowing down due to the interaction with the surrounding medium. Yet, the resemblance to the CO torus found in MN101 is noteworthy, having comparable sizes, expansion velocities and dynamic time-scales ). These slow structures could well be the ageing relatives of the 200-yr-old, disrupted molecular torus in Car (Smith et al. 2018a), which is more compact (∼4000 au) and chemically rich Gull et al. 2020). The idea of a common formation mechanism for these rings is worth to be studied in more detail. We explore the possible origins of the CO ring in Sect. 4.3. Physical parameters of the gas If the gas arises from the star -i.e. formed from wind or ejecta-, its physical parameters should reflect substantial differences with respect to the ISM. Line ratios are useful diagnostic tools to roughly estimate these parameters. We have three available lines to work with: two transitions of CO plus an additional line of 13 CO. However, the analysis is restricted to the single-dish data, i.e. APEX and ALMA TP, since we lack interferometric CO = 3 → 2 observations. Thus, we can only provide average values for the structure. We first computed beam-averaged spectra for the three transitions, centred in the stellar position. Then, we smoothed the resulting averaged spectra to a common velocity resolution of 1.3 km s −1 , and we attempted a gaussian fitting on components C1...C4 with , as shown in Fig. 7. The results of the fitting are compiled in Tab. 2. In some cases, the fitting was rather problematic due to significant line blending, derived from the limited velocity resolution. 
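As a concrete illustration of this fitting step, the sketch below fits a blend of Gaussians to a synthetic spectrum with scipy. The component values are placeholders, not those of Table 2; in the actual analysis the widths are additionally tied across transitions, which is exactly the constraint introduced next:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(v, *p):
    """Sum of Gaussians; p = (amplitude, centre, FWHM) repeated per component."""
    model = np.zeros_like(v)
    for amp, v0, fwhm in zip(p[0::3], p[1::3], p[2::3]):
        sigma = fwhm / 2.3548                 # FWHM -> standard deviation
        model += amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)
    return model

v = np.arange(0.0, 50.0, 1.3)                 # 1.3 km/s channels, as in the data
truth = (1.0, 29.5, 3.0, 0.4, 32.5, 6.0)      # two blended components (placeholders)
rng = np.random.default_rng(0)
spec = gaussians(v, *truth) + rng.normal(0.0, 0.02, v.size)

popt, _ = curve_fit(gaussians, v, spec, p0=(0.8, 29.0, 4.0, 0.3, 33.0, 5.0))
print(np.round(popt, 2))                      # recovered (amp, v0, FWHM) x 2
```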
Therefore, we imposed the condition that every velocity component should have the same FWHM in each of the three lines, so that the derived line ratios were consistent. To calculate the line ratios, we used the line intensities integrated over the FWHM of each component, formally expressed as W = ∫ T$_{mb}$ dv. The intrinsic uncertainty associated with this magnitude, σ$_W$, is given by

σ$_W$ = T$_{rms}$ δv √N$_{chan}$,

where T$_{rms}$ is the spectrum noise as measured in line-free channels, δv is the velocity resolution, and N$_{chan}$ is the number of channels over which the line is integrated (e.g. Mangum & Shirley 2015). Then, we computed the total uncertainty as the quadratic sum of the intrinsic uncertainty plus the calibration uncertainty, so that σ$_{tot}$ = (σ$_W^2$ + σ$_{cal}^2$)$^{1/2}$. The calibration uncertainties considered are 14% for APEX (APEX2, Dumke & Mac-Auliffe 2010) and 5% for ALMA (Bonato et al. 2018). Beam dilution becomes a critical issue, as we work with mean values from single-dish data gathered at different frequencies with different instruments. To compute physically meaningful line ratios, we need to apply a beam-filling factor to the integrated line intensities. We adopt very conservative estimates of η ≤ 0.2 and η ≤ 0.38 at 230.538 and 345.799 GHz, respectively, assuming an emitting area comparable to the ACA beam or smaller. Hereafter we refer to the CO J = 3→2 to CO J = 2→1 line ratio as R$_{32}$ and to the CO J = 2→1 to $^{13}$CO J = 2→1 ratio as R$_{12/13}$. In the ring (component C4) we measure R$_{32}$ = 0.45 ± 0.07. This value is somewhat low, indicating that the upper level is less populated. On the other hand, we measure an R$_{12/13}$ of 25 ± 4 in C4. This value lies halfway between typical ISM values (Wilson & Matteucci 1992) and those measured in the outskirts of some evolved massive stars, which show nebular $^{12}$C/$^{13}$C ratios as low as 1-5, in agreement with theoretical predictions for extremely processed CNO material (Meynet et al. 2006). But surprisingly, we measure even lower values in the other velocity components. To model the excitation of the CO and $^{13}$CO lines, we employed the non-LTE radiative transfer code RADEX (van der Tak et al. 2007). RADEX solves the statistical equilibrium equations using the escape probability formulation (Sobolev 1960) to predict the line intensities and level populations. The code presents two important peculiarities. First, it is entirely agnostic to the source geometry, and hence the predicted line intensities need to be corrected for beam dilution. And second, it works under the assumption of a homogeneous medium, meaning that all transitions sample the same gas volume with the same excitation conditions. These are obvious oversimplifications, yet extremely convenient to provide first-order estimates, especially considering the nature of our data. We created a grid of models parameterized by the kinetic temperature T$_k$, the CO column density N(CO) and the H$_2$ volume density n(H$_2$). The grid covered T$_k$ from 10 to 200 K, N(CO) from 10$^{13}$ to 10$^{19}$ cm$^{-2}$ and n(H$_2$) from 10$^2$ to 10$^5$ cm$^{-3}$. We used H$_2$ as the only collision partner, with a cosmic background temperature of 2.73 K. We also adopted a representative FWHM of 6 km s$^{-1}$. First, for each T$_k$ we used R$_{32}$ and W$_{CO(2-1)}$ to constrain the regions of the parameter space that result in physically meaningful solutions. Sample plots of the fitting are shown in Fig. 8. We find that N(CO) is well determined, being relatively insensitive to temperature and density and taking values of a few 10$^{16}$ cm$^{-2}$.
Table 2. Line fitting parameters for each transition: systemic velocity, FWHM, peak temperature, velocity-integrated line intensity and its error. Values have not been corrected for the filling factor.

On the other hand, n(H$_2$) is only loosely constrained, with valid solutions in the range 10$^3$-10$^4$ cm$^{-3}$. Regarding the temperature, though, solutions for T$_k$ > 120 K are unlikely: the required volume densities are comparable to those of diffuse clouds (∼100 cm$^{-3}$), falling well below the critical density of the J = 2→1 transition. Thus we favour solutions with T$_k$ < 100 K. Once we constrained the parameter space, we followed a reduced χ$^2$ minimisation technique to find the model that best reproduces the observed line intensities, such that

χ$^2$ = Σ$_J$ [(W$_J$/η$_J$ − W$_J^{mod}$)/σ$_J$]$^2$,

where W$_J$ is the integrated intensity of the (J+1)→J transition and η$_J$ is the corresponding filling factor. Fig. 9 shows the fit χ$^2$ surface as a function of T$_k$ and n(H$_2$). We find that the model that best fits the observations is that with T$_k$ = 50 K, N(CO) = 2.4 × 10$^{16}$ cm$^{-2}$ and n(H$_2$) = 1.3 × 10$^3$ cm$^{-3}$. The solution points to a moderately thick opacity in the emitting medium, with optical depths of τ$_{21}$ = 1.2 and τ$_{32}$ = 0.8 for this temperature range. As a cross-check, we used the derived parameters to predict the $^{13}$CO J = 2→1 integrated intensity, for a range of $^{13}$CO column densities. The best match is obtained for N($^{13}$CO) = 7.5 × 10$^{14}$ cm$^{-2}$, which yields an isotopic ratio of ∼30. This value agrees with the R$_{12/13}$ previously discussed. Finally, we repeated the same procedure to study the excitation conditions of the other components, C1 to C3. In this case, as the emission is widespread (in fact, much larger than the beam), we did not apply a correction for beam dilution. We obtained column densities below 10$^{16}$ cm$^{-2}$, but we were unable to properly constrain the kinetic temperature of the gas. We found many possible solutions, starting from the typical temperatures of a cold cloud, around ∼10 K. All these solutions occurred roughly at constant pressure (n × T), which suggests an equilibrium situation. In addition, the lines were always more optically thin (τ < 1), in contrast with the gas of the ring, which is significantly more opaque. These differences suggest that component C4 is indeed subject to slightly different excitation conditions. LVG results of the four components are compiled in Table 3.

Estimate of the ring mass

We can use the average CO column density derived with RADEX to provide a crude estimate of the mass of the ring. The total mass of molecular gas is given by

M = 2 m$_H$ μ$_{He}$ [N(CO)/X$_{CO}$] Ω$_{sou}$ d$^2$,

where m$_H$ is the mass of the hydrogen atom, μ$_{He}$ is the He correction factor, X$_{CO}$ is the relative [CO/H$_2$] abundance, d is the distance, and Ω$_{sou}$ is the solid angle subtended by the source. Adopting a cosmic [CO/H$_2$] abundance of X$_{CO}$ = 10$^{-4}$ and considering that the ring subtends an area of ∼0.63 pc$^2$ at d = 6 kpc, we obtain a total mass of 2.7 ± 0.9 M$_\odot$, a result comparable with the molecular gas masses found in other LBVNs: in η Car, the equatorial torus contains 1-5 M$_\odot$ of molecular material (Smith et al. 2018a), and the molecular ring of MN101 has a mass of 0.6 ± 0.1 M$_\odot$. It is worth highlighting that this estimate is strongly affected by the determination of the X$_{CO}$ factor and its associated uncertainties. Many LBVNs are deficient in C and O, and consequently underabundant in CO (e.g. η Car, Morris et al. 2017). Should this be the case for AG Car as well, the mass of the ring could be substantially higher.
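A back-of-the-envelope check of this mass estimate, plugging the RADEX column density and the quoted ring area into the equation above. The He correction factor μ_He = 1.36 is our own assumption; with it, the result lands within the quoted 2.7 ± 0.9 M$_\odot$:

```python
M_SUN = 1.989e33      # g
M_H = 1.6735e-24      # g, hydrogen atom mass
PC = 3.086e18         # cm

n_co = 2.4e16         # cm^-2, best-fit CO column density (RADEX)
x_co = 1e-4           # [CO/H2] abundance adopted in the text
mu_he = 1.36          # He mass correction factor (assumed value)
area = 0.63 * PC**2   # cm^2, Omega_sou * d^2 for the ring at 6 kpc

# M = 2 m_H * mu_He * (N(CO) / X_CO) * Omega_sou * d^2
mass = 2 * M_H * mu_he * (n_co / x_co) * area
print(f"M(ring) ~ {mass / M_SUN:.1f} Msun")   # ~3 Msun for these inputs
```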
Origin of the molecular ring As shown in previous sections, the ring surrounding AG Car is composed of moderately dense, warm gas slowly expanding into the ISM. The physical connection between the molecular gas and the star, as inferred from morpho-kinematic features and excitation conditions, seems evident. The ring is seen nearly edge-on, with an inclination of ∼70 • . This is in excellent agreement with the viewing geometry proposed by Groh et al. (2006Groh et al. ( , 2009b, who measured a remarkably high rotation velocity for the star ( sin = 220 km s −1 ) and concluded that we necessarily see AG Car from the equator. Considering this, we can safely assume that the ring lies in -or at least close to-the equatorial plane of the star. Such an assumption is further supported by the orientation of optical continuum jet found by Nota et al. (1992), which should be relatively aligned with the rotation axis of AG Car. Fig.10 shows the EFOSC (ESO Faint Object Spectrograph and Camera) optical continuum image of AG Car, with the CO contours superimposed. The ring and the jet appear perfectly perpendicular in projection, with P.A. of 135 and 225 • respectively. However, an equatorial ring may be the result of very different physical processes. Below we discuss possible formation scenarios and their implications. Swept-up pre-existing material One possibility is that CO traces a remnant of the parent molecular cloud in which the star formed, that has been compressed by the action of the stellar wind. However, this interpretation involves some problems, as massive stars like AG Car sustain very fast winds of ∼ 1000 km s −1 in the main sequence. For several Myrs, these winds sweep up the stellar neighbourhood, eroding and even completely destroying the natal clouds and carving humongous cavities in the ISM. These so-called wind-blown bubbles have scales much larger than the size of ring, even spanning several tens of pc (Garcia-Segura et al. 1996). Should the CO arise from compressed ISM material, one would expect to find it farther from the star, toward the edges of a wind-blown bubble. Herschel images revealed that AG Car is in fact immersed within a cavity about 5 (∼ 9 pc) in diameter, where material has been mostly evacuated (see fig.3 in VN15). Even so, as Nota et al. (2002) pointed out, aspherical winds due to stellar rotation could result in slower velocities near the equatorial Figure 10. Relative orientation of the optical jet and the molecular ring. EFOSC1 coronographic continuum image of AG Car and its nebula in color scale (Vamvatira-Nakou et al. 2015). ACA+TP CO = 2 → 1 intensity map superimposed as contours. Contours start at 50% of the peak intensity, showing only the ring blobs, for the sake of clarity. The dashed ellipse is displayed to highlight the extent of the structure. The red marker indicates the position of the star. The bright spike to the east is an imaging artefact. plane of the star, thus 'protecting' ISM material at low latitudes. Yet, the long-term survival of a compact structure so close to the star is mainly determined by the wind density: a fast but not very dense stellar wind would struggle to evacuate the densest parts of a molecular cloud. The dynamical age of the ring, however, favours a more recent, post-main sequence origin, not more than ∼10 5 years ago. A binary interaction remnant As a second formation hypothesis, one may invoke a binary scenario. 
The possible binary nature of AG Car has been long discussed in literature ever since the discovery of its bipolar nebula, and particularly, its helical jet, which strongly suggests some kind of precession (Paresce & Nota 1989;Nota et al. 1992). In recent times, the traditional view of LBV stars in the frame of single-star evolution has been challenged. The apparent isolation of most LBVs from OB associations, their phenomenological heterogeneity as a class, and their role as potential SN progenitors, have led some authors to propose an alternative evolutionary paradigm, in which LBVs likely arise from binary interaction. Under this novel framework, two possible formation pathways are considered: either LBVs are rejuvenated mass gainers in close binary systems, that end up kicked out by their companion's SN; or they are produced following a binary merger event (Justham et al. 2014;Smith & Tombleson 2015;Smith 2016;Aghakhanloo et al. 2017). Assuming that AG Car is part of a close interacting binary, a non-conservative Roche Lobe overflow (RLOF) episode would result in a mass-leakage through the outer Lagrange point of the system (L2). This leakage would effectively create a ring-like structure, slowly expanding outwards. Besides, the gainer not only accretes mass in the process, but also angular momentum, speeding up its rotation, which would be consistent with the enhanced rotation of AG Car. This kind of non conservative mass-transfer interaction has been previously suggested as the driving mechanism in other non-LBV sources that exhibit equatorial ring-like structures, such as the less massive binary RY Scuti (Smith et al. 2011) or SBW1, a very luminous, 18-25 M BSG (Smith et al. 2007). Still, any assumptions on the binarity of AG Car are highly speculative and must be regarded with caution. So far, the search for a companion has been unfruitful, yielding negative results even at X-ray wavelengths (Nazé et al. 2012). However, spectroscopic signatures of potential companions may be hidden by the rotationally broadened lines of AG Car. In any case, binarity does not seem to be an extremely common feature among the LBV family, with only a handful of them being present-day binaries, such as Car (Damineli et al. 2000) and HR Car (Boffin et al. 2016). On the other hand, the binary merger hypothesis allows us to circumvent the lack of observational evidence for a companion of AG Car. The merger mechanism described by Justham et al. (2014) depicts a massive binary in which the primary, expanding as it evolves beyond the main sequence, fills its Roche lobe transferring mass onto the secondary. For certain mass ratios, the system enters a contact phase due to expansion of the secondary's envelope. Eventually, the system destabilizes and a merger occurs. This mechanism could also lead to the formation of rings, as equatorial mass outflows are expected during the merging process as a consequence of the common envelope rotation. The merger idea, which was already introduced two decades ago for some B[e] stars (Langer & Heger 1998;Pasquali et al. 2000), has recently become one of the preferred explanations for Car's Great Eruption in the 1840s. The outburst would have been triggered by a merger within a hierarchical triple system, leading to the current LBV + O/WN binary (Portegies Zwart & van den Heuvel 2016;Smith et al. 2018b;Toonen et al. 2020) and giving rise to an equatorial torus. 
While it is difficult to infer whether AG Car has undergone a similar process, we note that many of its features (bipolarity, enhanced rotation, He and N abundances) are predicted for the BSG/LBV products of massive binary mergers, making this hypothesis a fascinating possibility worth to be further explored. In the general picture, the portrait of LBVs as products of binary interaction seems able to explain many of their peculiarities. The formation of the slowly expanding triple ring system in SN1987A has been equally linked to a possible binary merger (Mor-ris & Podsiadlowski 2007), promoting binarity and rotation as two key ingredients of the sometimes elusive connection between LBVs and supernovae. An equatorially enhanced outflow Finally, another possible explanation sticking to the traditional single-star scenario is that the molecular ring in AG Car originated from an equatorially enhanced mass-loss episode. This kind of non-spherical mass-loss seems to be a common phenomenon in LBV stars, as many of them are surrounded by dusty or gaseous equatorial tori, such as Car (Morris et al. 1999;Smith et al. 2018a), HD168625 (O'Hara et al. 2003, and MN101 ). However, little is known about the mechanisms behind the formation of disks or ring-like structures around single massive stars. Stellar rotation is thought to play a crucial role in the shaping of stellar winds throughout the different evolutionary stages of massive stars. In this context, we may explain the formation of a gaseous ring in AG Car by invoking a rotationally-induced bistability mechanism. The bistability jump would produce increased mass fluxes and slower winds at low latitudes near the stellar equator, as a direct outcome of the decrease in the effective gravity eff and the subsequent gravity darkening (Lamers & Pauldrach 1991;Pelupessy et al. 2000). Similarly, at higher latitudes, the winds would be faster but less dense. By virtue of this mechanism, aspherical winds and equatorially enhanced mass-loss are strongly favoured in fast-rotating stars with high radiation pressures (Maeder & Desjacques 2001). AG Car, which rotates at a significant fraction of its break-up velocity at least during its hottest phase (up to 86%, Groh et al. 2006), satisfies these two conditions. Leitherer et al. (1994) found two independent pieces of evidence that support this scenario: (1) a significant -and variable-degree of polarization in AG Car, interpreted as an equatorial density enhancement of the stellar wind, and (2), a two-component wind structure: a slow and dense wind, traced by recombination lines, in coexistence with a faster and less dense component, seen in ultraviolet absorption lines. Another important issue to address is the survival of molecular gas in the proximity of AG Car. For molecules to form out of the stellar wind, a certain degree of shielding against the strong FUV radiation must be provided. In this regard, AG Car may somehow be an analog of B[e] supergiants, where slowly expanding, rotating disks or rings of neutral material have been observed. The formation mechanism of such structures has been tentatively linked to nearlycritical stellar rotation as well (Zickgraf et al. 1986;Oudmaijer et al. 1998;Curé et al. 2005;Kraus 2006;Kraus et al. 2010). The equatorial winds of B[e]SGs provide the adequate conditions, in terms of temperature and density, for the formation and survival of significant amounts of dust and molecules (Liermann et al. 
2010), but B[e]SG disks are much smaller, spanning only a few hundred AU. In the case of AG Car, the expansion of the wind over large scales would involve an important decrease in density. Perhaps an inhomogeneous stellar wind (a common feature in many hot, massive stars, see e.g. Contreras et al. 2004) would make possible the formation of molecules within denser clumps, which would provide the necessary protection against ionizing photons. Interestingly, modelling of the nebular dust content by Voors et al. (2000) predicts a population of very large grains (with sizes up to 40 μm), which are typically found in circumstellar disks. If such a disk exists in AG Car, it would effectively shield the gas, favouring the formation of molecules in its cold outskirts. Our estimate of the dynamical age of the ring is slightly high compared to the characteristic timescales of the LBV phase, a few 10$^3$-10$^4$ years. Thus the structure may have formed in a pre-LBV stage. Along this line of reasoning, some authors have proposed that the main nebula of AG Car was expelled before reaching the LBV phase. Smith et al. (1997), Voors et al. (2000) and Lamers et al. (2001) found nebular abundances of N and O different from the values expected for CNO-enriched material, suggesting that the nebula is composed of 'mildly processed' matter, still far from CNO equilibrium and thus consistent with older ejecta from a previous BSG or RSG phase. These findings, although not entirely conclusive, might also explain the relatively high R$_{12/13}$ that we measure, more compatible with a moderate degree of processing. Likewise, the low expansion velocity of the ring is another interesting issue. If we work under the hypothesis that the ejection took place in a pre-LBV stage, a v$_{exp}$ of 3.5 km s$^{-1}$ is in agreement with the typical velocities of RSG winds, of 5-10 km s$^{-1}$. The possible occurrence of an RSG phase in AG Car, however, needs to be investigated, in view of the apparent lack of very luminous RSGs in the HR diagram (the so-called 'red-supergiant problem', Walmswell & Eldridge 2012). On the other hand, BSG winds are faster and more compatible with the expansion velocity of ∼70 km s$^{-1}$ measured in the main nebula. This probably indicates that the ring originated in a separate episode of mass loss, which may have been relatively steady, in contrast with the main nebula, related to a more eruptive event. None the less, the measured expansion velocity only reflects the current situation; the wind may have been faster in the past, resulting in a younger structure, as discussed in Sect. 4.1. In such a case, depending on the age, we may obtain time-scales consistent with an ejection event during the LBV phase, which would require further explanations for the observed chemistry.

A closer view of the CO kinematics

We have built a simple model to better understand the kinematics of the ring, and also to cross-check the validity of the fit. To do so, we used LIME (Brinch & Hogerheijde 2010), a 3D non-LTE code that solves radiative transfer by simulating ballistic photon propagation through unstructured Delaunay grids, calculating level populations and predicting the resulting spectra for a given transition, according to the LAMDA database collisional coefficients (Schöier et al. 2005).
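Before the model setup is described, it may help to spell out the geometric core of the kinematic models compared below: points on an inclined ring, given expansion and/or rotation velocities, project onto the line of sight with azimuth-dependent signs. This is only the projection arithmetic, with ring parameters taken from the text, not a radiative-transfer calculation of the kind LIME performs:

```python
import numpy as np

def ring_los_velocity(phi, v_exp=3.5, v_rot=0.0, incl_deg=70.0, v_sys=32.5):
    """Line-of-sight velocity (km/s) at ring azimuth phi (rad).

    Radial expansion projects as cos(phi) and rotation as sin(phi),
    both scaled by sin(inclination); phi is measured from the near side.
    """
    i = np.radians(incl_deg)
    return v_sys + (v_exp * np.cos(phi) + v_rot * np.sin(phi)) * np.sin(i)

phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
print(np.round(ring_los_velocity(phi), 2))               # pure expansion
print(np.round(ring_los_velocity(phi, v_rot=1.0), 2))    # expansion + rotation
```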
Due to the modest spatial and spectral resolution of our data, we made several assumptions to keep the model as simple as possible; we supposed an axisymmetric ring located at 6 kpc and described by power-law density and temperature profiles of the form ∝ r$^{-2}$. To avoid overinterpreting our maps, instead of fitting the observed data, we qualitatively compared the overall morpho-kinematic features of the structure with those obtained with different kinematic models. Therefore, we adopted the density and temperature outputs of RADEX as reference values for the power-law profiles. Regarding the geometrical parameters, we adopted an inner radius of 0.3 pc, a half opening angle of 10°, an inclination of 70° and a position angle of 135° east of north. We created four models to explore different possibilities for the velocity structure of the gas, namely: a purely rotating ring with differential rotation (model 1); a radially expanding ring with an outflow velocity of 3.5 km s$^{-1}$ (model 2); an expanding ring with a differential rotation component (model 3); and an expanding ring with macro-turbulence, i.e. with random departures from the expansion velocity law of up to 0.25 km s$^{-1}$ (model 4). Model parameters are compiled in Tab. 4.

Table 4. Velocity laws of the kinematic models.
Model  Velocity law              Parameters
1      Rotation                  v$_{rot}$ = 1 km s$^{-1}$
2      Expansion                 v$_{exp}$ = 3.5 km s$^{-1}$
3      Expansion + rotation      v$_{rot}$ = 1 km s$^{-1}$, v$_{exp}$ = 3.5 km s$^{-1}$
4      Expansion + turbulence    v$_{exp}$ = 3.5 km s$^{-1}$, v$_{turb}$ = 0.25 km s$^{-1}$

For each of the models, LIME produced a
While the presence of turbulent motions in the wind of an unstable star is somewhat expected, the physical feasibility of the expanding-rotating ring model deserves a comment. In principle, any material expelled from the stellar surface, whatever the mechanism triggering the ejection, will tend to move radially in the long term. In the case of a fast rotator like AG Car, the ejecta could initially be lifted with a non-negligible rotational component that preserves angular momentum. However, this component will progressively fade as the ring dissipates radially and interacts with the surrounding medium. Moreover, to keep a stable rotation, the material needs to be either gravitationally or magnetically bound to the star. These mechanisms, though, operate only at short distances, whereas the molecular ring of AG Car is < 10^5 years old and far away from the star. Consequently, if the gas actually rotates as indicated by model 3, there should be another mechanism at work responsible for that rotation. Further observations at higher velocity resolution will allow for a better understanding of the gas dynamics.

MASS LOSS HISTORY OF AG CAR

The different evolutionary stages of AG Car have left a footprint in its environment, shaping its circumstellar medium (CSM) in complex ways through variable stellar winds and episodic mass eruptions. The detection of a molecular ring surrounding the star provides additional information about its recent mass-loss history. It is therefore important to understand how this molecular gas relates to the other components of the CSM. Broadly speaking, the CSM around AG Car can be described in terms of two independent nebular structures: a detached dusty shell, bright in the infrared; and an ionised nebula, clearly visible in optical and radio continuum images.

Figure 11. Comparison of the observed CO J = 2 → 1 channel maps (top row) with the different kinematic models: a rotating ring, a purely expanding ring, an expanding ring with rotation and an expanding ring with macro-turbulence. All the maps share the same intensity scale. Contours at 0.5, 1, 2, 4 and 8 K. The red marker indicates the position of the star and the dashed line represents the direction of the major axis of the structure. The LSR velocity is displayed in the top-left corner of each panel.

These structures present strong morpho-kinematic hints of bipolarity. Along the NE-SW direction, two symmetric bright clumps at P.A. ∼35° and ∼225° are evident. In addition, the measured expansion velocity of the shell increases notably (by about 20 km s−1) along this direction (Nota et al. 1992). Both the orientation of the clumps and the velocity increase are signposts of a bipolar mass outflow, in which case the bright clumps would correspond to gas density enhancements in the polar 'caps' of the structure, where more material is being accumulated. Fig. 13 shows the location of the CO gas with respect to the dust and the ionised structures. Although an accurate spatial comparison is not possible given the limited resolution of the molecular data, we note that, while the dust and the ionised gas are mostly co-spatial, the molecular gas seems to be complementary: the most intense CO clumps are found along the SE-NW axis, i.e. the direction of the slight elongation of the nebula (Voors et al. 2000). The molecular ring somewhat constrains the overall geometry of the CSM: an inhomogeneous, nearly spherical expanding shell, disrupted by a slightly faster bipolar outflow and enclosed by an equatorial density enhancement. What is clear, though, is that the equatorial density enhancement should be older than the shell, since the two structures have comparable scales but very different expansion velocities. This geometry, coherent from an evolutionary perspective, can be explained by appealing to the standard wind-interaction model, which has been proposed to explain the morphologies of other LBVN with aspherical symmetries (e.g. Frank et al. 1995). In this particular scenario, we can imagine AG Car undergoing an equatorial mass-loss enhancement, in the form of a dense, steady wind, some ∼10^5 years ago. This wind would slowly expand into the ISM, sweeping up any remaining ambient material.
At some point, the wind would become colder and form molecules. A few 10^4 years later, a faster isotropic outflow may take place. This new wind would eventually reach the slower, previous one, interacting with a torus-like density distribution. The result of such an interaction is the formation of a bipolar nebula. The final shape, i.e. the degree of bipolarity, is modulated by several factors, including the rotation velocity of the star (Maeder & Desjacques 2001), and may range from an ellipsoid to a peanut-like structure. The most extreme case of such an interaction is, of course, η Car and the Homunculus Nebula. In this sense, VN15 suggest that the ejection of the shell took place in a period of slow rotation, which is consistent with the moderate degree of bipolarity observed. VN15 estimated a total mass of ionised gas of 6.6 ± 1.9 M⊙ from the Hα and radio continuum fluxes. Similarly, they derived a dust mass of 0.20 ± 0.05 M⊙ from the SED modelling. Using our continuum data and following their approach, we obtain a nebular ionised mass of 3.9 ± 1.9 M⊙, smaller but still in agreement, within uncertainties, with VN15's estimate. However, as noted in Sect. 3.2.2, we may be missing some flux with ACA. The total combined mass of ionised and molecular gas thus adds up to 9.3 ± 2.8 M⊙ (or 5.9 ± 2.8 M⊙ if we take our estimate). This agrees again, within uncertainties, with the value proposed by VN15 (15 ± 4.5 M⊙), who adopted a gas-to-dust ratio of ∼40 to estimate the contribution of neutral gas. This means that, assuming a stellar origin, the molecular gas around AG Car traces at least ∼30% of the mass lost by the star. The gas and dust masses can be translated into mass-loss rates by estimating the kinematic duration of the events that produced the observed structures. This can be done by computing the travel time between the inner and outer shell radii at a constant expansion velocity. Following this method, VN15 derived a kinematic duration of 1.1 × 10^4 years for the mass-loss episode that created the infrared nebula, taking inner and outer radii of 0.4 and 1.2 pc, respectively. It is a useful exercise to do the same and provide a crude estimate of the duration of the molecular outflow. However, we warn that the reliability of this method is limited, in the sense that the emitting region may be smaller than the beam; therefore, we decided to conservatively adopt the size of the ACA beam, of about 7″, as the reference 'width' of the ring. In addition, we work under the assumption that the expansion velocity does not change and that there is no velocity dispersion across the ring, which is probably unrealistic. For v_exp = 3.5 km s−1, we obtain an approximate duration of 5.6 × 10^4 years. This is just a loose upper limit, but still, it is interesting to realise that it is about five times larger than the duration derived by VN15 for the isotropic mass-loss episode. This probably suggests that the equatorial enhancement was more steady and less eruptive, which is, again, in line with the proposed formation scenario. The average mass-loss rate that produced the molecular ring is thus Ṁ = (4.8 ± 1.6) × 10^−5 M⊙ yr^−1, but it could be substantially higher if the structure is more compact than the ACA beam.
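The timescale and average-rate arithmetic above can be reproduced in a few lines; a minimal sketch, using the 7″ beam width, the 6 kpc distance, v_exp = 3.5 km s−1 and the 2.7 M⊙ molecular mass quoted in the text (everything else is unit conversion):

```python
# Crude kinematic duration of the molecular ring and the implied average
# mass-loss rate. The ring 'width' is conservatively taken as the ~7 arcsec
# ACA beam at d = 6 kpc, expanding at v_exp = 3.5 km/s.

PC_KM = 3.086e13       # km per parsec
YR_S = 3.156e7         # seconds per year
AU_PER_PC = 206265.0   # astronomical units per parsec

beam_arcsec = 7.0
d_kpc = 6.0
v_exp = 3.5            # km/s

# 1 arcsec at 1 kpc subtends 1000 AU, so:
width_pc = beam_arcsec * d_kpc * 1e3 / AU_PER_PC           # ~0.20 pc

duration_yr = width_pc * PC_KM / v_exp / YR_S
print(f"kinematic duration ~ {duration_yr:.1e} yr")        # ~5.7e4 yr

m_mol = 2.7            # Msun of molecular gas in the ring
print(f"average mass-loss rate ~ {m_mol / duration_yr:.1e} Msun/yr")  # ~4.7e-5

# Both values match the 5.6e4 yr and (4.8 +/- 1.6)e-5 Msun/yr quoted in the
# text to within rounding.
```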
We can compare this value with the current mass-loss rate. Under the assumption that the flux density of the point-like source measured at 225 GHz is due to the stellar wind, we can approximate the current mass-loss rate of AG Car using the empirical formula by Panagia & Felli (1975) for expanding envelopes around hot stars, in which we assume full ionization and standard cosmic abundances, so that

$$\dot{M} = 6.7 \times 10^{-4}\, v_\infty\, S_\nu^{3/4}\, d^{3/2}\, \left(\nu\, g_{\rm ff}\right)^{-0.5},$$

where $v_\infty$ is the terminal wind velocity in km s−1, $d$ is the distance in kpc, $S_\nu$ is the measured flux in mJy and $\nu$ is the central frequency in Hz. $g_{\rm ff}$ is the free-free Gaunt factor, which we approximate as $g_{\rm ff} = 9.77\,(1 + 0.13 \log(T_e^{3/2}/\nu))$, where $T_e$ is the wind plasma temperature (Leitherer & Robert 1991). Providing a single value for the terminal wind velocity of AG Car is tricky. Long-term spectroscopic monitoring by Stahl et al. (2001) between 1989 and 1999 revealed abrupt changes from 225 to 30 km s−1. The wind also changes drastically between consecutive visual minima, with values of 300 km s−1 in 1985-1990 and 105 km s−1 in 2000-2001. Therefore, since we are now on the way to a new visual maximum, we adopt a conservative value of 100 km s−1, compatible with the wind velocity measured near the 1995 maximum. For a wind temperature of 10^4 K and a distance of 6 kpc, we obtain a mass-loss rate of Ṁ = (1.55 ± 0.21) × 10^−5 M⊙ yr^−1. This mass-loss rate is in very good agreement with the estimate by Groh et al. (2009a) of the quiescent Ṁ of AG Car, of 1.5 × 10^−5 M⊙ yr^−1.
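For completeness, a minimal sketch of this estimate, implementing the Panagia & Felli (1975) expression and the Gaunt-factor approximation exactly as written above. The 225 GHz flux density below is a hypothetical placeholder: the measured value is reported earlier in the paper and is not reproduced in this section.

```python
import math

def gaunt_ff(T_e, nu_hz):
    """Free-free Gaunt factor approximation of Leitherer & Robert (1991)."""
    return 9.77 * (1.0 + 0.13 * math.log10(T_e**1.5 / nu_hz))

def mdot_wind(s_mjy, v_inf_kms, d_kpc, nu_hz, T_e):
    """Panagia & Felli (1975) mass-loss rate (Msun/yr) for a fully ionized
    expanding envelope, as written in the text: S_nu in mJy, d in kpc,
    v_inf in km/s, nu in Hz."""
    return (6.7e-4 * v_inf_kms * s_mjy**0.75 * d_kpc**1.5
            * (nu_hz * gaunt_ff(T_e, nu_hz))**-0.5)

# Hypothetical 30 mJy placeholder flux; v_inf, d and T_e as adopted in the text.
mdot = mdot_wind(s_mjy=30.0, v_inf_kms=100.0, d_kpc=6.0, nu_hz=225e9, T_e=1e4)
print(f"current mass-loss rate ~ {mdot:.2e} Msun/yr")
```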
CONCLUSIONS

We present APEX single-dish and ALMA/ACA interferometric observations of CO, 13CO and continuum emission towards AG Car, confirming the existence of a molecular structure associated with this LBV star. Below we summarize the key findings of this work:

(i) By means of CO and 13CO ALMA/ACA data, we report the detection of a molecular ring-like structure around AG Car, confirming the hypothesis of Nota et al. (2002). The morpho-kinematic features of the structure can be explained by a slowly expanding ring or torus.

(ii) We model the excitation conditions of the gas under non-LTE conditions, using the available CO and 13CO J = 2 → 1 and CO J = 3 → 2 single-dish data to provide average estimates for the whole ring. The gas in the structure is warm, with a kinetic temperature of about 50 K, not very dense, with H2 densities of a few 10^3 cm−3, and moderately thick. The total mass of molecular gas in the ring is 2.7 ± 0.9 M⊙, subject to uncertainties in the determination of the [CO/H2] relative abundance. This accounts for at least ∼30% of the mass expelled by the star.

(iii) We develop a simple kinematic model to better study the gas dynamics, showing that some deviations from pure radial expansion are required to properly reproduce some of the morpho-kinematic features observed in the data, namely: (1) the addition of a turbulent component or (2) the superposition of a differential rotation field.

(iv) To explain the presence of a ring-like structure of molecular gas around AG Car, we discuss a number of possible formation scenarios, namely: (1) the compression of a remnant of the parent molecular cloud, (2) an RLOF mass-transfer episode or a merger event in a close binary, or (3) an equatorial mass outflow. We regard the mass outflow as the most promising, taking into account the evolutionary stage of the star and the evidence available.

(v) We put together a global view of the mass-loss record of AG Car, proposing an overall morphology that integrates dust, ionised and neutral gas into a physically feasible, unified picture. We also derive a lower limit on the average mass-loss rate of the ring and compare it with the current mass-loss rate of the star derived from the ACA continuum.

The detection of a molecular ring around AG Car adds the missing piece to the mass-loss history of this intriguing source. This work shows that molecules may trace a significant amount of the total mass lost by an evolved massive star. This has important implications for stellar evolution models, which sometimes struggle to properly reproduce the evolution between O/B and Wolf-Rayet stars, in part due to inaccurate mass-loss rate estimates. Moreover, the results in this paper pave the way for further exploration of AG Car and its complex CSM. To confirm the origin of the ring, explain its survival and better understand its kinematics, observations at higher angular and velocity resolution are necessary. Looking for other molecules, such as PDR tracers, would also be extremely useful to obtain a more complete chemical characterization of the star. Finally, deeper radio continuum observations would provide valuable hints about the nebular excitation mechanisms and their dependence on the S-Dor cycles. AG Car is, undoubtedly, a unique object, and as such it represents an outstanding chance to learn about the nature of the LBV phase and its associated mass-loss phenomena.

DATA AVAILABILITY

The derived data generated in this research will be shared on reasonable request to the corresponding author.
Traditional Sports and Games: Intercultural Dialog, Sustainability, and Empowerment

From Traditional Sports and Games (TSG) we have learned not only different ways of living time and inhabiting space, and particular modes of practicing sports and games from distinct cultures, but also ways of promoting universal dialog among people. TSG offers sustainable and ecological references for living that were needed even before the advent of the COVID-19 pandemic. Nowadays, environmentally friendly policies and production methods must be taken more seriously. TSG may reveal a path to sustainable development, considering our corporeality and cultural diversity. TSG are expressions of human groups that historically reproduce their way of life, based on modes of social cooperation and specific forms of relationship with nature, traditionally characterized by sustained environmental management. The purpose of this article is to discuss how TSG promotes intercultural dialog with a focus on sustainability, and how it empowers people and creates equality among its players. We understand that TSG can break socio-cultural barriers. For this study, we considered data from a Brazilian experience of a TSG Festival held at a public school in the city of São Paulo (Brazil), organized in collaboration with our study group. Data consist of observations recorded in pictures and films during the processes of organization, preparation, implementation, and evaluation of a TSG Festival held at a public school in São Paulo, Brazil, in 2017 and 2018, with the participation of 800 students from the first to the ninth grade of elementary school, aged between 7 and 17 years. The first step in our analysis draws on a dynamic called "Talking Circles," in which researchers registered dialog about experiences and used specific literature about TSG, from a philosophical perspective. The team and students from our study group that organized these events were invited to participate in four different Talking Circles. Approximately 20 people participated in each of these meetings. Recurrences that emerged from these Talking Circles are presented in the results and explored afterward. What does this experience, from bodies in movement, artistic or sporting, or both, teach about intercultural dialog and empowerment? Such gestures indicate a cultural heritage and corporeal wisdom that allow humans to face new encounters and understanding in peace, recognizing the humanity common to all of us, regardless of our origins. Ethical and aesthetic results of such dialog reveal possibilities to be explored in our relationship with different cultures and the environment, providing points of sustainable development through TSG.

INTRODUCTION

Since we all play together, traditional cultural expressions like Traditional Sports and Games (TSG) demonstrate prominence in the ethical and aesthetic dimensions, considering the fluidity and relevance of the groups in the elaboration of affects. Important research studies have been carried out about the diversity and richness of knowledge and learning provided by TSG all over the world (Parlebas, 2002, 2010; Lavega, 2004; Eichberg and Nygaard, 2009b; Gómez et al., 2012; Marin and Stein, 2015; Renson, 2016; Young Lee, 2016; Lavega-Burgués et al., 2020).
For the scope of this paper, we understand TSG as a traditional community's expression, based on research and dialogs with Central and Latin America, especially Colombia and Mexico, where traditional games are well consolidated both in academic studies and in public policies (Herrera Velásquez et al., 2018). Based on this assumption, the purpose of this article is to discuss how TSG promotes intercultural dialogs with a focus on sustainability and how it empowers people and promotes equality among its players. Along these lines, we begin by defining traditional communities and their expressions, taking into account the Brazilian experience and their current contributions. We would like to highlight the role of events and festivals in these communities, and then present the Brazilian case of a TSG Festival, held every year since 2017 at a public elementary school in São Paulo. Observations recorded in pictures and films during the processes of organization, preparation, implementation, and evaluation of this TSG Festival help us to discuss some fundamental elements present in bodily manifestations of traditional communities in Brazil. After the festival, we conducted four Talking Circles as a qualitative methodology to identify the recurrences observed by the researchers. Through them, we verified what seemed to be the most significant aspects for each person, seeking more universal aspects. For humanities research, recurrences are indicators of meaning and potency (Bachelard, 2008) and guide the elements for the analysis. Our reflections were driven by the following question: what does this experience, from bodies in movement, whether artistic or sporting, or both, teach about intercultural dialogs and empowerment? Hence, the article focuses on the contribution of TSG to the discussion of three issues: intercultural dialog, sustainability, and empowerment.

TRADITIONAL COMMUNITIES IN BRAZIL AND THEIR FESTIVALS

Brazil, with its continental dimensions, holds a diversity of communities preserved, to some degree, due to the vast proportions of the country. These communities historically reproduce their way of life, based on modes of social cooperation and specific forms of relationship with nature, traditionally characterized by sustained management of the environment. Even in different habitats, such as large cities, we could verify that TSG maintains the expressions of such a way of life. Tião Carvalho, a master of traditional Brazilian knowledge, says this about the values he tries to spread in his classes and festivals: "These values, which are actually very old, are new to people in the city" (testimony in Saura, 2008). He is talking about the purpose of the festivals he promotes in the city of São Paulo. His festivals include plays and dances which originate from remote northern areas of the country. For about 20 years, Tião Carvalho has been organizing these festivities in the largest population center in Brazil, São Paulo State, situated in the rich southeast region. Although the festivities are eccentric for most of the city's population due to their popular religiosity and original themes, each event brings together millions of people. Why does this festival, coming from a particular and exotic context, seduce so many people in the metropolis? "Faith in the festivities, faith in the encounter," explains Mr. Tião, showing that TSG expressions are traditional but also contemporary and alive today.
Here we try to clarify how TSG operates with principles of collectivity, bringing together generations, valuing diversity, integration, and respect for the environment. These elements present values different from the individualism and meritocracy encouraged in multiple and complex ways nowadays, especially in large urban centers. Lévi-Strauss (2013) reflected on the loss of reference that "Western type civilization" suffers: "Long an act of faith, the belief in a material and moral progress destined to go on forever is facing its gravest crisis. Western-style civilization has lost sight of the model it had set up for itself and is no longer bold enough to offer that model to others. Is it not therefore fitting to look elsewhere, to broaden the traditional frameworks to which our reflections on the human condition have been restricted? Ought we not to integrate social experiments that are more varied, more different from our own than those within the narrow horizon to which we have long confined ourselves?" (p. 4-5). The ancient values derived from traditional expressions reflect a way of life that we are invited to look at, since traditional societies have inhabited the planet for 99% of the time of human life on earth, whereas our society as we know it today accounts for only 1% of this time (Lévi-Strauss, 2013). Therefore, our contemporary civilizations are exceptions as a reference of possible human existence. By traditional community we mean: "culturally differentiated human groups that historically reproduce their way of life, relatively isolated, based on modes of social cooperation and specific forms of relationships with nature, traditionally characterized by the sustainable management of the environment" (Diegues, 2000, p. 22). This notion refers to indigenous populations and Afro-descendant communities (quilombos), among others, in an intense relationship with the environment. In the case of Brazil specifically, where exuberant natural areas are still preserved, there is a rich diversity of these communities, more or less isolated. They maintain their respective ways of life despite the impositions of the Western capitalist system. Here, at least 15 different human groups have been identified. Each group has its particularities in its way of life, as well as in terms of TSG. Taking a group of Afro-descendants as an example, there are about 700 quilombos that practice capoeira regularly today. Capoeira, a dance influenced by martial arts, acts as an expression that reflects the group's identity and recalls the still recent struggle for freedom (Saura, 2019). These populations steadfastly protect their environment and habitat, living in complete interdependence with the environment, practicing close observation and, especially, preventing external factors from threatening their areas (Brasil Ministério do Meio Ambiente, 2000). Therefore, we are talking about different ways of understanding and acting in the world. Contrary to the misconception that the traditional is stagnant and unchangeable, these communities show us how knowledge and tradition are always being updated. However, they maintain certain fundamental structures. According to Cunha (2007): "There are at least as many traditional knowledge regimes as there are peoples. For while there is, by hypothesis, a single regime for scientific knowledge, there is a legion of traditional knowledge regimes. There is no doubt, however, that scientific knowledge is hegemonic.
Modern hegemonic science uses concepts, traditional science uses perceptions" (p. 79). As in science, this knowledge is always being built in a process of constant update. However, although all knowledge rests on the same logical operations and responds to the same thirst for learning and understanding the world around us (Lévi-Strauss, 2013), there is this significant difference between them: concepts versus perception. Knowledge and other expressions from traditional communities reveal a learning process that necessarily passes through the human body and the senses. It is worth calling attention to the fact that traditional practices of building knowledge can perceive and anticipate discoveries validated afterward by hegemonic science, due to the ability to understand the context, the interrelations and the circumstances. Their historical relationship with and observation of all the phenomena have made them discover and anticipate issues for science, as we have seen in biology, pharmacology, the preservation and reproduction of species, and even in the social sciences, in the way many of these societies are organized. The accurate observation of nature places these populations in a prominent spot in the production of knowledge, and in the symbolic and mythical system that has made up the repertoire of humanity since ancient times (Bachelard, 2008). For these populations, the environment has been part of their symbolic system and appears in TSG expressions. In this way, TSG reveals and updates the fundamental images and values of these populations, as master Tião Carvalho reinforces. Hence, the notion of the environment is extended and inclusive: "The environment came to include not only the environment of humans, but the environment of all life forms. Life then came to include more than living things, encompassing rivers, landscapes, cultures and ecosystems. One started to talk about the living earth" (Breivik, 2019, p. 65). Clearly we can see how TSG is related to the environment, culture, history, and political struggles (Calegare et al., 2014). These practices, made up of simple elements and a complex technology of observing human nature, show us a path to intercultural dialog, sustainability, and empowerment. Very often, games take place on important dates and celebrations for each of these communities. The festivals are events that reflect the symbolic capacity of humankind to give meaning to things and events based on what one observes. These festivities range from birth celebrations, hunting rituals, harvest parties, and thanks for abundance and food, to other manifestations concerning human observations of the movement of nature, the seasons, and life. They are translated into everything produced for these celebrations: body props and space, music and dance, food, rituals, narratives, artistic and craft productions. They go back to times of mystery, to the active human with creative imagination, this ability that, in addition to logic and reason, allows us to create, invent and assume the impossible (Durand, 2012). As many of these celebrations end with a race or a game in which everyone participates, TSG plays a role in these systems. The festivities promote bodily engagement (Merleau-Ponty, 1962) of the whole community, concentrated in a single event. The notions of dialog and sharing are enriched by the experience of intercorporeality (Coelho Junior, 2003). Celebrations and festivities are present among all peoples and nations and help us all to align values and develop new meanings for our daily practices.
These perspectives show different possibilities of being with others and introduce new forms of existence. And as Mr. Tião reminds us, it is not necessary to present something new. It is important to note that the recognition of interdependence relations is notably directed at personal relationships. In light of traditional communities' way of living and expressions, we prefer to align our discussion with a less anthropocentric perspective and more ecocentric premises.

METHODOLOGY

For this study we used data from TSG Festivals organized by the study group PULA, from the Center of Sociocultural Studies at the School of Physical Education and Sport, University of São Paulo, Brazil, in collaboration with a public school in the city of São Paulo, the EMEF Desembargador Amorim Lima. The "TSG and Street Leisure Festival" has been running since 2017 and engages a whole primary school, including students, teachers, and staff. Data for this article consist of discussions from observations registered in fieldwork during the process of organization, preparation, implementation, and evaluation of the TSG Festival in 2017 and 2018. For data collection, we organized four Talking Circles after the festivals, coordinated by two professors in each edition. In the first and second meetings, ten researchers involved in the Festival identified issues related to the Festival itself. In the third meeting, we listened to the students from two Physical Education (PE) undergraduate disciplines who participated in the festival. And in the last meeting, we brought the findings to the research group, so we could identify the recurring themes that emerged from both groups. Talking Circles is a proven methodology that has been applied in work with traditional communities and integrates academic research with traditional populations and native peoples (Tachine et al., 2016). This annual festival lasts an entire day and is attended by students from this public school, which has a partnership with our university. In total, about four hundred students attend in the morning and another four hundred in the afternoon. Students are from the first to the ninth grade of elementary school, aged between 7 and 17 years. Students from this school reflect the socio-economic and ethnic heterogeneity present in the surrounding neighborhood. Although all social classes are represented in this school, most families are from lower income levels. This heterogeneity is highly enriching from the perspective of solidarity. According to the school's evaluation, students from families with very different income levels establish friendships and bonds that favor exchanging life experiences. At each event, games and activities from childhood culture are selected. A commonality among these games is the fact that they are strongly present in the traditional communities with which we work. The festival takes place in the street, considered to be a public space. Every step of the process requires planning. It is extremely important to consider TSG's background to organize a festival properly. There are many challenges presented by contemporary societal dynamics, and it is essential to consider previous experiences and research to support educational perspectives (Eichberg, 2009). The first step was to start planning with the school EMEF Desembargador Amorim Lima. We held meetings, one for each event, with the Head of School, teachers, and staff at the beginning of the school semester.
The school is located close to our university campus, making it easier to work with undergraduate students. Together with the school's team, we discussed which games would be played and what equipment was needed. Preference was given to games that do not require major material investments. These meetings also helped teachers prepare the students for the event. During the school year, students researched and played traditional games. In parallel, we trained PE students to assist teachers and students during the festival. Activities at the festival include jump rope, spinning tops, marbles, stretch games, "peteca," hula hoop, and yo-yo, among others. One important note, learned from other festival initiatives, concerns the gestural inspiration provided by experts in these games. Thus, at each event, there are special guests, experts in one game or another, such as the yo-yo or the spinning top, who always dazzle the children. The choice of a public space in front of the school encourages neighborhood participation. It reinforces the importance of valuing the public character of streets and sidewalks. The street in front of the school is closed to cars, creating a safe environment for playing. During the festival, undergraduate students, researchers, and schoolteachers stand at different points on the street, according to the game they are participating in. Children can move freely without space or time limits, without age, gender or grade restriction. For everyone, according to the reports collected, the day is over in the blink of an eye. Teachers and PE students in training were available to help students with materials when needed. The idea is to leave the students free to choose what they want most, in relation to the use of materials, the place, and the time available. This study has a qualitative approach, considering its characteristics and theoretical references. Talking Circles proved to be the most appropriate methodology for identifying recurring themes among what researchers and undergraduate students considered most significant at the Festivals. Talking Circles started with all of us sharing audiovisual materials from the festivals. Each group (undergraduate students, schoolteachers, and researchers) took pictures focusing on relationships, recurrences, or anything else notable. For this study, we selected images with a focus on relationships between the participants. After sharing and selecting audiovisual records, direct questions were asked to encourage discussion. Teachers, researchers and undergraduate students at the festival commented on issues involving intercultural dialogs, sustainability and empowerment. Definitions of these issues considering the urban context were previously made by the conductors and are presented in the "Results" section. Approximately 20 people from the research group participated in each Talking Circle. Participants were free to talk about subjects that they considered relevant in the experience of promoting the Festival and observing the children's engagement. Recurrences considered for discussion in this work were mentioned more than five times in the Talking Circles. The search for recurrences, what is repeated everywhere regardless of the cultural environment in which people are present, is part of this methodological approach. This research attempts to investigate subjectivities. Such a perspective presupposes considering the first-person perspective of the one who lives the experience.
But the collection of this rich and human material presents itself not only as an individual or particular component; in this perspective, it may bring traces of our human existence. Therefore, this approach emphasizes experience, or personal experience, but seeks among these experiences what is true for every human being, in order to make more general postulates and not just private or individual observations. In cultural terms it is not possible to generalize conclusions from one group to another, but by adopting a philosophical perspective it is possible to explore elements that belong to humanity (Martinková and Parry, 2011). In this sense, it is crucial to understand subjectivity and recognize its relevance in how knowledge is produced. It is methodologically significant specifically in the perception of human manifestations, where the human being cannot be studied only as an object, considering the complexity of the phenomena. Talking Circles as a methodology requires attentive listening to promote a horizontal relationship between professors and students, aligned with traditional communities' values. It harkens back to oral tradition, to horizontality, to listening more than talking, respecting diversity and understanding one another (Tachine et al., 2016). Based on the principles of Paulo Freire (2015), the Talking Circles were the moment to elaborate on everything that we saw, felt, and learned from the festivals. It was the moment to reflect on what enchanted us at the TSG Festivals and why.

RESULTS

The team and students from our study group that organized these events were invited to participate in four different Talking Circles. Recurrences that emerged from these Talking Circles are presented below in the results and explored afterward. The triggering questions were: "What does this experience from bodies in movement, which updates children's gestures in traditional games, promote among its players? What enchanted us all at the TSG Festivals, and why?" The participants were surprised at the high level of engagement from the children and their teachers. Everyone in the Talking Circles noticed that cell phones were hardly used throughout the day. This engagement was a pleasant surprise, which showed the strength of TSG even for children growing up in the city. That led us to other inquiries related to intercultural dialog, sustainability and empowerment. Talking Circles focused on each of these main themes. Specific and general questions were asked about the topics, as indicated in Table 1. Most participants gave similar answers and expressed surprise at the results overall and at the level of children's engagement in particular. For this paper, we consider broad definitions of intercultural dialog, sustainability, and empowerment. These notions, derived from fieldwork in traditional Brazilian communities, bring other indirect questions to researchers when applied to an urban setting. For instance, in the academic literature, intercultural dialogs refer to the dialogical encounter between subjects from different cultures. In the case of the Traditional Games and Sports that occur at these festivals in large urban centers, intercultural dialog may be identified in children from different backgrounds playing together. We have shown how corporeality favors this dialog between people from different backgrounds. Teachers witnessed how, throughout the year, children form cohesive groups and often do not mix.
Those barriers were broken during the festival, where children were more interested in the games and activities, which encouraged them to play together. Analysis of the materials collected from the Festivals shows that even seemingly watertight social barriers can be broken. As an example, we can mention how teachers' roles were reversed in many festival situations, as their students taught the teachers how to play. Similarly, in the Talking Circles and by observing pictures and videos, the participants noticed the reversal of other structural differences. They identified that children of different ages, children and people from different social classes and ethnic groups, children with neighbors from different backgrounds, as well as adults, were all playing together. In urban environments, sustainability takes on a different meaning. It refers to using public spaces, encountering other people, the empathy promoted by playing together, and the dialog that is established with others. It means realizing that there is no need for expensive toys or large investments in material resources for playing games. Children who participate in TSG festivals learn to share spaces and simple equipment. They also learn to make their own toys from simple materials. Expensive toys and the entire entertainment industry are unnecessary for us to be together, learn and have fun. Participants in the Talking Circles also noticed that there were many moments of interaction with the school community. Through games, gestures and rules, it is clear that traditional communities' values, such as learning together and healthy, happy competition, are shared. For this work, we consider empowerment as the appropriation of bodily knowledge established in relationship with the world. It is a concept that reveals a learning process that necessarily passes through the human body and the senses and emerges when we incorporate something new. It is an "I can" movement (Merleau-Ponty, 1962) that requires full presence and usually comes from a desire to achieve specific objectives: the player tries, insists and succeeds. During the Circles, many participants mentioned that some children were playing alone, but only until they learned the game's techniques. After acquiring some new skills, they would find other children to play with, teach, and show. At this point, repetition is part of this process, deepening the game structures in new ways. Freedom and spontaneity also seem essential in running the festival: children play with what and whom they wish. The goal of playing together was the game itself and improvement. Although we are not in one of the contexts of traditional Brazilian communities, it is necessary to understand how these communities' values can be updated in the bodies that play. Besides, even if it was not the festival's intention, as we can see below, "doing-together" stood out as a premise of freedom, potential, intercultural dialog, sustainability, and empowerment. Certainly, doing together, one of the core values of traditional communities, was present in the three themes observed at the festivals. Masters of traditional knowledge often respond, when asked how they teach: "I don't teach, I do it together" (testimony by Tião Carvalho in Saura, 2008). Traditional communities consider the voice of the elderly to be the voice of the world's past. This existential doing-together dialogs with traditional perceptual knowledge and happens without the need for words.
This knowledge emphasizes the primacy of experience, of gestural and corporal reference. Boaventura de Souza Santos refers to the primacy of the senses in the production of knowledge as inherent to the Epistemologies of the South (Santos, 2019). In his proposal, the "South" represents excluded, silenced and marginalized populations exploited by colonialism and capitalism. So, "the global South is not a geographical concept, even though the great majority of its populations live in countries of the Southern hemisphere" (Santos, 2016, p. 18). Doing-together is a recurrent modus operandi among these populations. Children and the elderly are included within the knowledge transmission system, which triggers our shared corporeality (Merleau-Ponty, 1962). Observing gestures from the more experienced players or creating technologies is part of a learning process for children. When they reach a particular level of manual skill, they can follow the process. This bodily perspective appears in different possibilities of being with other people and introduces different ways of aligning values and developing new meanings for our daily practices. Doing-together requires trust, active participation and the understanding that being a child is a presence of now, and not only a promise of a future.

DISCUSSION

In the film "Promises" (Goldberg et al., 2001), Israeli and Palestinian children share the harsh reality of separation and hatred, despite being just 10 minutes away from each other. The cameraman interviews them to ask what they think of each other. They speak of themselves through past generations' voices talking about war, the enemy, and anger. The echoes of many previous generations are in the children's voices. When invited to a meeting with each other, they hesitate, but finally they accept the documentary filmmaker's idea. The proposal now is to establish real communication between them, a bodily communication in a playful situation. At that moment, ingrained resentments and hatreds are set aside. Children play together. Eichberg and Levinsen (2009a) also provide an example of the power of a Popular Movement Culture for conflict management. They explore the experience of a popular football festival, different from a competitive and standardized sport, in the Balkans. The festival was the first multi-ethnic event, including Muslim refugee children, to take place in Srebrenica after the civil war, and it brought children together through a common language. Similar examples are found elsewhere in the world. This paper aims to discuss how TSG promotes intercultural dialogs with a focus on sustainability, and how it may empower people, taking into account the experience of TSG festivals at a public school in São Paulo, Brazil, that serves 800 children from 7 to 17 years old. We analyzed some recurring elements that emerged from within the Talking Circles with the researchers from our study group and the PE undergraduate students who participated in the TSG Festivals. Intercultural dialog, notions of sustainability, and empowerment through TSG were widely discussed. Some schoolteachers wrote to us gratefully, giving their testimonies of what they perceived during the festivals and, later, of the impact on day-to-day activities with the children. These testimonies are not presented in this paper but reflect the interaction among all participants. In this section, we first present our experience in the TSG area, how we were inspired to organize a festival, and how recurrences of gestures caught our attention.
We then consider corporeal engagement and the importance of festivals for children and for the city. We highlight how the practice of traditional games may promote global health, adding elements of the main subjects: sustainability, intercultural dialog and empowerment. The originality of this section is to situate TSG not as a local event, but as bodily practices that dialog with humankind, regardless of cultural origins, as they are located in our symbolic gestures and corporeality.

The Festival Gets Into the City: The Experience of a TSG Festival in São Paulo City

It has been a long road for us to understand TSG as an important phenomenon for thinking about academic research and public policies. After conducting an international seminar delimiting the field of knowledge with invited researchers from universities in Europe, Asia and North America, our department turned to Latin and Central American researchers, understanding that among them "knowledge is embodied" (Santos, 2019, p. 136). In the event that followed, we received researchers from the University of Antioquia (Colombia) and from the Mexican Traditional and Autochthonous Games and Sports Federation (FMJDAT, Mexico), who presented their good academic practices and research experience. A year later, on Colombian soil, we witnessed the "Traditional Play from the Streets" (JRTC) in Caldas, municipality of Antioquia. In this festival, now in its 37th year, we saw an entire city take to the streets to play for five consecutive days (Gómez et al., 2012). Conceived by Master José Humberto Gomez, the phenomenon inspired the Street Games Festivals in São Paulo, carried out by the PULA Study Group at the Sociocultural Studies Center (EEFE-USP). This background has been fundamental in supporting our actions and research on TSG. The international community mentioned the same persistent gestural recurrences that we see in TSG festivals in Brazil. Some materials, the ones that we see in traditional communities, encourage the same gestures even from school children, taking into account that many of them have never left the city. By the same gestures, we mean a familiarity of body behavior in dialog with similar provocations. The well-known work of Mauss (1979), first published in 1935, brings to light the cultural mark of body techniques. The anthropologist highlights the relationship between biology and culture registered as corporeality. From this, it may be assumed that even universal games, such as the spinning top, promote a cultural encounter. Through detailed observation, one might notice that some children use different fingers to hold the top or tie the string in different ways to obtain a variety of results. However, there is a proximity of gestures that results from the relationship that the player establishes with the elements of the game. As Parlebas (2003) advises, it is important not to be tempted to reduce the game to the characteristics of the context or of the player. The game has an intrinsic reality that is perceptible through traces of motor action. These motor behaviors are noticeable in the player's relations with their environment: space, objects, time, and other players. From a phenomenological perspective (Merleau-Ponty, 1962), we understand that the materials and equipment suggest a corporeal engagement, like a language that puts us all in dialog, sharing differences based on common soil. This is what can make the game a fruitful locus for bringing different cultures together.
Through the body, we are invited to enter cultural and mythical dimensions (Bachelard, 2008) in the same eco-sustainable postures that we frequently find in our research field. Also, despite its low cost, the high social impact of the material used to play is remarkable. Considering the experience of the "TSG and Street Leisure Festival" and the analysis of different fieldwork carried out over the last 10 years by the PULA Study Group (Zimmermann, 2018), it is clear that traditional games actually preserve structural elements of humankind in magnificent reproductions of body images. A Greek painting depicts the movement of a boy playing with a spinning top in antiquity. The moving image is the same as that of boys and girls in Brazil and around the world today. Incredibly, and without any apparent contact with each other, the same gestures appear. The scenes are repeated in schools, streets and alleys, in a game that has existed since ancient times, with attentive boys and girls throwing their arms and eyes into it with the same effusiveness. During the Talking Circles, this was the first strong aspect that everyone noticed. "Children know how to play. It feels like they remember. They all forgot their cell phones when they saw a top. They were delighted. It was like responding to an inner call." (Testimony in the Talking Circle). We speak of this place where tradition is located in the gestures that children show us today, here and now, with archetypal elements that cross different times and spaces, from ages ago to the present day. TSG embodies collective human memory. Game creators have already identified this potential. In the case of the spinning top, one can mention updates of the object that have become real "fevers" among children, not by chance, like the Beyblade and the fidget spinner. Groups understood games and activities as a kind of dialog (Zimmermann and Morgan, 2011). This dialogical condition is ethical, since rules are created and recreated in intermittent debates (Bruhns, 2004), which above all require full presence in what children do. Moreover, one recurrent and even surprising feature was to perceive that the relationships among the festival participants were horizontal. The PE students expected to teach. In the beginning, they were unsure about our organization; they were far fewer in number than the children. But some children knew how to play much better than our students. In TSG, there is no clear border between who teaches and who learns: facing the materials, everyone feels challenged and attracted by the game. They all noticed that other common barriers were dissolved during these festivals: social status, age, and gender. In this case, social barriers do not matter for playing, since the same rules apply to everyone. Many mixed groups were seen playing together. Older participants may be a reference for how to do it, but there is no hierarchy. The PE students also mentioned that they did not perceive gender or age barriers, since the groups were mixed. The educators are the children themselves, and everyone is a player. New elaborations appear on all sides, in precious, surprising scenes. In fact, the objects surrounding the practice of TSG do not require significant investments and are like gunpowder, material provocateurs of this human repertoire, as Bachelard (2008) already reminded us. During the Festivals, students invite teachers to play together; some of them ask for help, while others teach their friends.
Besides, different elaborations with the same materials were experienced. New groups of children of different ages were spontaneously organized, including students with disabilities from inclusion programs. The fidget spinners are set aside in front of a spinning top, firm strings between the fingers. A member of staff is willing to teach the game of his boyhood to the children. A vibrant circle is formed; the tops spin in the air, enigmatic, spectacular, flying. It can be seen that when a child plays, in a very authentic way, it strengthens ties with the most human part of each of us. The Talking Circles noticed that the way the children organized into groups was always changing. The street comes alive in a safe, playful environment, and during the two festival events no incidents were reported. The children themselves solved minor conflicts, and different solutions were found so that everyone could play. Teachers reported that the project has helped to improve the repertoire of games, and the students spontaneously chose TSG as the theme for a big annual festival at the school. Another interesting report was that, after noticing that some children were having several relationship problems during their free time, the school made TSG materials available and the problems were over. Research conducted in a very different context showed promising results on the improvement of students' relations and socialization in primary education after the implementation of traditional games (Kovačević and Opić, 2014). For the teachers who conducted the activity, none of this was entirely new. As we mentioned before, there is Caldas, municipality of Antioquia, Colombia, where an entire city takes to the streets annually to play (created by Master José Humberto Gomez, the "Festival de Juegos Recreativos Tradicionales de La Calle" showed its expertise and inspired these similar events in São Paulo). At that festival, experienced practitioners were a reference for children and young people. Therefore, at these "TSG and Street Leisure Festivals," Anselmo Gomes, the Brazilian national yo-yo champion, was present and left the children in amazement. In a city where contact with childhood becomes increasingly reduced, where the landscape is increasingly "grown-up," we see, less and less, boys and girls playing in public spaces. Since the festival breaks down school cliques and takes place in the surrounding streets, children and young people realize that the street is a place of possibilities, a common space for all of us. Considering these investigations, we currently believe that TSG should be part of public policies. Play is important not only for small children but for everyone. Moreover, TSG in PE and sport enhances experiences, not only of classes and training but also of a whole relationship with the body, with space, the street, the school and, finally, with the diversity of the whole city. Research, fieldwork and interviews with partners reveal expressive bodies that are willing to play again.

Intercultural Dialog, Sustainability, and Empowerment

According to the United Nations Educational, Scientific and Cultural Organization [UNESCO] (2015), TSG are considered an intangible heritage of humanity. UNESCO stresses that "the practice of traditional games promotes global health." What seems to be the ethos of TSG is this bodily dialog, which reveals the humanity in all of us, regardless of geographical, social or cultural aspects. From this perspective, sports and games are a common language, spoken with a material call to action.
Children are, for example, completely infatuated with the bow and arrow. It is a material that invites them to play. With the equipment, players repeat the same gestures and archetypal stances of warriors existing in all nations. We are talking from this understanding of the traditional as something located in the gestures that children show us today, in the here and now, with archetypal elements that cross different times and spaces. It is possible to find the same gestures in the traditional communities of Brazil (Hackerott et al., 2017). The game evokes, in the body, the very materiality of the arrow, with images of speed and straightness (Bachelard, 2008). Moreover, similar gestures are found around the world. In these recurrences, we find something prior to the traditional nature-culture dichotomy. We see a gesture that is a precedent, characteristic of humankind. (For instance, among the first records of the spinning top, a traditional children's toy that turns on itself, is a Greek painting of a boy playing with a top in antiquity.) The image of the gesture of a child playing with a top is the same as that performed by boys and girls today, whether in the interior of the Amazon rainforest, on the northeastern beaches, among Xingu Indians, in schools or on the outskirts of urban centers. It is a traditional game that dialogs with boys and girls from all historical times and without defined geography. So we have seen how TSG promotes intercultural dialog, arising from a knowledge that takes place in the corporeal experience of the gesture. Like the knowledge developed in traditional communities, the knowledge produced by TSG is of the order of perception. In Caldas, Dom Antônio manufactures spinning tops in broad daylight, surrounded by avid boys and girls. The master boasts of having trained the best players in the country. His tops are considered the best in the world, perfect and exact. When we saw him perform with the children who grew up under his care and teaching, we were all astonished. The tops dance with their players' bodies, launched into the air countless times and never touching the ground. This hypnotic flight, turning around its own axis in space, continues to enchant, over and over, people from all over the world, without causal explanations. Mastering this lively and aerial dance performed by the top is the desire of many players. Durand (2012, p. 60), inspired by Merleau-Ponty, considers that "the whole body collaborates in the constitution of the image," and that there is "close concomitance between body gestures, nerve centers and symbolic representations." In this anthropological perspective, body language is also considered a symbolic language, that is, one that generates meaning for human existence; hence, perhaps, the existential insistence on performing certain gestures. Sports and games have been presented as a fruitful dialog for the culture of peace. Regardless of the complexity of its rules and the competitive content employed, the game's dialogic character is one of its main characteristics (Zimmermann and Morgan, 2011). This dialogical character preserves the ethical possibilities of play. The game requires players to be fully present. In the game, the impediments of age, gender, space, of those who teach or learn, of languages, social classes, and different cultures are mitigated. The knowledge that the game requires is based on corporeal experiences.
It is also an aesthetic issue, as the game comes from this perceptual, subjective knowledge, the bodily experience that it is not always possible to explain. That is why the theme was chosen by the United Nations Educational Scientific and Cultural Organization [UNESCO] (2015) as one of its priorities for action. It was also selected as an activity to bring together a divided country, such as Korea. For UNESCO, TSG practice "promotes global health," placing players at an intersection between past and future, reaffirming the specific identity of peoples in the age of globalization, while their recurrences and similarities reveal certain themes that are universal. It is perhaps in this way that games promote peace: through the intercultural and corporal dialog that can be established when these themes are in action, that is, in the very act of playing (United Nations Educational Scientific and Cultural Organization [UNESCO], 2015). Considering elements from fieldwork and references from different cultures, we may understand that TSG bears the mark of many cultures and connects us with the humanity that permeates us all. It is possible to observe an approximation of gestures in similar games, similar ways of establishing relationships with the environment and a relationship of proximity with organic elements offered by nature. As we highlighted previously, tradition is not something that is frozen in time. We have seen that tradition embraces new behaviors, as long as it does not lose its structure and central elements. These central elements are those that dialog with who we are as humans, with our biocultural body, our universal aspects. It is not a matter of culture, race, gender, social condition, age and so on. Fighting games, kites, and spinning tops are games that reflect this proximity and do not require words to bring different cultures together. This gestural proximity also facilitates the approximation with games that we are not familiar with, and through these we also establish a dialog with difference. The horizontal relations of the game allow for an appreciation of other perspectives. The close relationship with the environment and the development of equipment with resources from nearby nature also fall in line with sustainability. TSG frequently use organic resources, and care for the environment is fundamental to the possibilities of play. There is no need for equipment or highly elaborate spaces built with external resources: games are organized according to the spaces and resources available, and they end without signs of degradation. Amid the serious environmental crisis we are going through, this issue has attracted the attention of sports researchers (Edgar, 2020). Children who participate in TSG festivals learn to share spaces and simple equipment. They also learn to make their own equipment from organic elements: wood, stones, ropes, straw, leather. It is important to note that they wish to make this equipment, to work with their own hands, tools and materials, as every human did before them. It seems important nowadays to establish clearer differences between having and being, and TSG contribute to this distinction. The major sustainability issue for traditional communities is the achievement of sustainable management of species. This happens through accurate observation and understanding of the environment, a feeling of interdependence with the habitat, and empathy with an ecocentric vision.
In traditional communities, it is common for children's games to be exercises to improve skills and perceptions. In the Talking Circles, the researchers identified these same perceptual exercises at these festivals in the city. Moreover, getting to know our bodies and how we play is a way of understanding who we are, and it is very revealing and empowering. Empowerment is fostered as a collective construction if we consider the possibility of an ethical debate arising from TSG practices. During the Festivals, we saw children experiencing limits, possibilities and new elements. In a playful relationship, there is always the possibility of learning from the more experienced and helping the beginners. It is a dialogical teaching experience. Heeding the call of a game that attracts us is also going toward who we are. Traditional Sports and Games promote global health as they allow for diversity and dialog, the meeting between tradition and novelty. So, TSG Festivals have much to contribute to empowerment by facilitating accessibility across all levels of cultural diversity: schools, public spaces, networks and associations.

FINAL CONSIDERATIONS

Traditional cultures and childhoods know the world through bodily and perceptual knowledge. This knowledge is created and recreated with imagination, logical reasoning, thought, and an intimate relationship with the living world. It is also remembered in our body, updated in our gestures. Once embodied, this knowledge fosters respect, cooperation, and solidarity. Moreover, through their dialog with simple equipment and with the values of traditional communities, TSG offer relevant examples of respect for the environment. In its applicability, TSG proved to be important as an educational practice in public teaching programs, as a way of inhabiting public spaces and promoting health, with the festive presence of human movement. This article presents results from an initial study; we hope it inspires further research on the topic with different methodologies. Beyond research, TSG Festivals deserve to be promoted and examined from different perspectives. Play and movement have been associated with health not only through their objective elements but also through the symbolic and meaningful ways they operate the body language and knowledge we have tried to highlight. We recommend and encourage such Festivals and further research about them. Comparative elements could enrich and reinforce the results we found in this research. Although they seem initial, they sustain outcomes from many years of theoretical study and field observation. By relating the concepts of sustainability, equality, and empowerment of traditional communities with events in urban centers, we believe it was possible to highlight how a TSG Festival may be ecologically invigorating for people, even in the city. In this sense, TSG reflects a way of life that we are invited to look at, especially nowadays, as it reveals and updates the fundamental images and values of these populations in an intercorporeal dialog. "Doing-together," a widely found premise in traditional communities, stood out as a promoter of intercultural dialog, sustainability and empowerment. This existential doing-together dialogs with traditional perceptual knowledge and happens without the need for words. This knowledge emphasizes the primacy of experience and of gestural and corporal reference.
Observations from a particular TSG Festival, in this case held in São Paulo, do not allow universalization of results. However, elements from this experience may indicate the potency of TSG. Through this study, we could verify how human gestures are updated in TSG, in the relationships established between community events and festivals in the city. Moreover, it is not only the gestures that are updated: values and sustainable ways of thinking and living are also present in this bodily repertoire. Empowerment happens in the body, under the "I can" premise. Because TSG are consistently updated in dialog with new generations, they also present the possibility of overcoming barriers such as gender. This possibility creates a chance for learning how to solve conflicts. Playing together embraces us all. It is a profound and serious subject, although festive and playful. That is how TSG festivals operate: monitoring, updating, and keeping alive the challenges of producing human knowledge.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

All procedures followed were in accordance with the ethical standards for studies involving human subjects. The participants provided their written informed consent to participate in this study.
168-195 GHz Power Amplifier With Output Power Larger Than 18 dBm in BiCMOS Technology

This paper presents a 4-way combined G-band power amplifier (PA) fabricated with a 130-nm SiGe BiCMOS process. First, a single-ended PA based on the cascode topology (CT) is designed at 185 GHz, which consists of three stages to get an overall gain and an output power higher than 27 dB and 13 dBm, respectively. Then, a 4-way combiner/splitter was designed using low-loss transmission lines at 130-210 GHz. Finally, the combiner was loaded with four single-ended PAs to complete the design of a 4-way combined PA. The chip of the fabricated PA occupies an area of 1.35 mm². The realized PA shows a saturated output power of 18.1 dBm with a peak gain of 25.9 dB and power-added efficiency (PAE) of 3.5% at 185 GHz. A maximum output power of 18.7 dBm with PAE of 4.4% is achieved at 170 GHz. The 3-dB and 6-dB bandwidths of the PA are 27 and 42 GHz, respectively. In addition, the PA delivers a saturated output power higher than 18 dBm in the frequency range 140-186 GHz. To the best of our knowledge, the power reported in this paper is the highest for G-band SiGe BiCMOS PAs.

I. INTRODUCTION

With the recent progress in the speed of solid-state devices, millimeter-wave and terahertz technologies are continuously improving in terms of cutoff frequency (fT) and maximum oscillation frequency (fmax). This includes transistors based on III-V (InP, GaAs) and Si (SiGe BiCMOS, CMOS) technologies [1]-[6]. Generally, III-V (InP) transistors demonstrate higher fT/fmax and breakdown voltages (BVCEO and BVCBO) than Si devices. Nevertheless, Si devices are preferred over III-V devices due to their advantage in large-scale and high-level integration. Moreover, advanced SiGe BiCMOS technologies exhibit fmax up to 500 GHz, which has allowed the development of solid-state systems above 100 GHz for different applications, including high-speed wireless communication. One of the key issues in the realization of high-frequency systems is the limited output power level attainable with SiGe HBTs (heterojunction bipolar transistors). The low breakdown voltage, together with the scaling down of the transistor periphery to increase fT and fmax, limits the attainable output power. Additionally, dominant high-frequency effects in terms of high conductor/substrate losses and increased parasitic elements of the transistor reduce its output power. These challenges make it difficult to design efficient, high-power solid-state circuits such as voltage-controlled oscillators (VCOs) [16], frequency multipliers [17], [18], and power amplifiers (PAs) [19], [20]. To increase the output power (Pout) and power-added efficiency (PAE), various high-frequency PAs based on SiGe BiCMOS technology have been reported in [19]-[25]. They can be categorized into single-ended, N-way power combined, and balanced (differential) PAs. These PAs are designed using the cascode topology (CT), common emitter (CE) and stacked configurations. Single-ended PAs are usually compact and can be utilized in power-combining circuits to obtain large output power. Examples of single-ended PAs based on CE and CT are reported in [19], [20] at various frequencies in the range of 110-140 GHz. Among them, a maximum output power of 13.8 dBm with PAE of 11.6% is achieved at 116 GHz.
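Since output powers are quoted in dBm and efficiencies as PAE throughout the paper, a short sketch of the underlying conversions may be useful. The helper names are ours, and the snippet is purely illustrative:

```python
def dbm_to_mw(p_dbm: float) -> float:
    """Convert a power level from dBm to milliwatts: P[mW] = 10^(P[dBm]/10)."""
    return 10 ** (p_dbm / 10)

def pae_percent(p_out_dbm: float, p_in_dbm: float, p_dc_mw: float) -> float:
    """Power-added efficiency: (Pout - Pin) / Pdc, expressed in percent."""
    return 100 * (dbm_to_mw(p_out_dbm) - dbm_to_mw(p_in_dbm)) / p_dc_mw

# Example: the 13.8 dBm quoted above for the 116 GHz single-ended PA
print(f"{dbm_to_mw(13.8):.1f} mW")  # ~24 mW
```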
The second most popular category for generating high output power is N-way power combined PAs, which can be further classified by the nature of the combining networks into reactive, sub-quarter-wavelength balun, transformer, Wilkinson, and antenna-based free-space combiners [20]-[32]. For instance, the authors of [20] report an 8-way reactive power combined PA using the CE configuration at 116 GHz. This PA, despite consuming a large amount of chip area, delivers a maximum output power of 20.8 dBm with a peak PAE of 7.5% and a gain of 15 dB. A PA solution with reduced area consumption based on sub-quarter-wavelength baluns and a stacked configuration in [21] delivers a peak output power of 22 dBm with PAE of 3.6% at 120 GHz. However, the peak power gain of this PA is limited to 7.7 dB. By contrast, solutions based on transformers, Wilkinson combiners, and free-space antenna combining networks [22]-[25] become impractical for the design of high-frequency PAs. In fact, the Wilkinson and transformer combiners introduce higher losses as the number of combining networks increases, while the free-space losses and poor radiation efficiency of high-frequency antennas limit the effective attainable output power. The third category of high-frequency PAs is based on differential configurations, which benefit from the availability of a virtual ground and present a high common-mode rejection ratio (CMRR) for common-mode noise cancellation. Baluns are required for differential to single-ended transformation, which eases the characterization of differential PAs. Examples of differential PAs without power combining networks are reported in [26]-[29], where a maximum output power of 14 dBm with a peak gain of 27 dB is demonstrated at 160 GHz. The designs of 4-way combined differential PAs at 170 and 240 GHz using T-junctions are discussed in [30], [31]. A maximum output power of 18 dBm is achieved at 170 GHz by de-embedding the loss of RF pads and baluns. However, this PA is only partially characterized; a full characterization showing the large-signal performance across the whole frequency band is still needed. There are very few examples of high-frequency PAs around 160 GHz in SiGe BiCMOS technology, and most of the reported designs demonstrate high output power in the lower D-band. G-band PAs above 160 GHz with large output power are crucial to drive frequency multipliers for the generation of high-power sub-THz signals. In this paper, we report a G-band (168-195 GHz) 4-way combined solid-state PA based on 130-nm SiGe BiCMOS technology [32]. The complete G-band PA includes an input splitter, four single-ended PAs, and an output combiner. For the first time, a G-band silicon PA exceeding an output power of 18 dBm with a PAE larger than 3% has been demonstrated. This performance has been made possible by exploiting the unique features of the 130-nm SiGe technology from IHP, with fT/fmax of 300/450 GHz and a back-end-of-the-line (BEOL) process suited for millimeter-wave applications. The circuit can be used in frequency multiplier chains, radar sensors, and high-speed wireless communication transceivers operating in the 140-200 GHz band. The paper is organized as follows: Section II discusses the features of the 130-nm SiGe BiCMOS technology. Section III presents the architecture of the 4-way combined PA. Section IV then presents the detailed design procedure for the single-ended PA and the 4-way combiner, and their integration for the realization of the overall PA.
The experimental results and comparisons with state-of-the-art works are presented in Section V. Finally, the conclusion and summary are reported in Section VI.

II. 130-nm SiGe BiCMOS TECHNOLOGY

The 4-way combined PA proposed in this paper was designed and fabricated with IHP's commercial 130-nm SiGe BiCMOS process, known as SG13G2. The process offers high-performance heterojunction bipolar transistors (HBTs) with fT/fmax of 300/450 GHz and breakdown voltages of BVCEO = 1.7 V and BVCBO = 4.8 V [1]. The HBTs are highly suitable for the design of various mm-wave and sub-THz circuits, which is further supported by the back-end-of-the-line (BEOL) process shown in Fig. 1. The BEOL provides seven metal layers based on aluminum (Al), which include two thick low-loss metals (TM2, 3 µm thick, and TM1, 2 µm thick) and five thin metal layers (M5-M1, each 0.49 µm thick). The top thick metals allow higher current densities and present lower sheet resistance, while the lower thin metals help to form various metal contact patterns for different Si devices. The metal layers, together with their heights and thicknesses, enable the customized realization of high-quality inductors, metal-oxide-metal (MOM) capacitors, and transmission lines such as microstrip and coplanar lines. The lower metal layers permit the design of the ground plane and dense metal interconnections. In addition, the process offers polysilicon resistors and high quality-factor metal-insulator-metal (MIM) capacitors.

III. ARCHITECTURE OF THE 4-WAY COMBINED PA

Fig. 2 shows the block diagram of the G-band 4-way combined PA. It consists of an input splitter, four unit cells of single-ended PAs, and an output combiner. The proposed structure was aimed to provide a peak output power of more than 18 dBm with a power gain larger than 27 dB at 185 GHz. Assuming ideal lossless components in the design, a unit cell PA (single-ended PA) is able to provide an output power of more than 12 dBm with a power gain of 27 dB, which leads to an output power of 18 dBm in a 4-way combination, since ideally combining four equal sources adds 10·log10(4) ≈ 6 dB. However, in the actual case with lossy components, the unit cell should provide an output power well above 12 dBm to compensate for the loss of the power combiner. Assuming the input and output of a unit cell are matched to 50 Ω, the splitting/combining networks must be designed to ensure conjugate matching between the input and output of the 4-way combined PA and external 50 Ω standard terminations. Such a transformation is ensured with the following steps: the transmission lines TL1,2 were used to properly compensate the overall parasitic effects while roughly maintaining 50 Ω on the upper and lower sides of section A (i.e., the impedance seen at A is roughly 25 Ω); then, through the transmission lines TL3,4, the impedance is transformed to 50 Ω. This again makes the impedance seen at the junction of node B 25 Ω, which is finally transformed to 50 Ω at node C using TL5,6 and the parasitic capacitance of the RF pad.

IV. CIRCUIT DESIGN CONSIDERATIONS

A. PA TOPOLOGY

The design of a high-frequency PA requires the investigation of various topologies in terms of gain, output power, and power-added efficiency (PAE). CE and CT are the two most adopted solutions for the design of high-frequency PAs. The schematics of a CE and a CT are shown in Fig. 3(a) and 3(b), respectively. To compare the two topologies, the transistors shown in Fig. 3 are biased at a collector current density of 2.0 mA/µm.
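As an aside, the power budget and impedance steps of the architecture in Section III can be sanity-checked numerically. The sketch below is ours; in particular, modelling the 25 Ω to 50 Ω step of TL3,4 as a quarter-wave transformer is our assumption, since the text only states that the lines perform the transformation:

```python
import math

def combine_dbm(p_unit_dbm: float, n: int, loss_db: float = 0.0) -> float:
    """Ideal n-way combination of equal unit PAs, minus combiner insertion loss."""
    return p_unit_dbm + 10 * math.log10(n) - loss_db

# Ideal, lossless 4-way combination of 12 dBm unit cells (Section III):
print(f"{combine_dbm(12.0, 4):.1f} dBm")               # 18.0 dBm
# With the ~1 dB combiner loss reported later, the unit cell must do more:
print(f"{combine_dbm(13.0, 4, loss_db=1.0):.1f} dBm")  # back to ~18.0 dBm

# Quarter-wave transformer stepping 25 ohm up to 50 ohm (our assumption for
# one way TL3,4 could realize the transformation): Z0 = sqrt(Z1 * Z2)
print(f"{math.sqrt(25 * 50):.1f} ohm")                 # ~35.4 ohm
```

With this budget in mind, we return to the topology comparison.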
Biasing resistors R1-2 with values in the range 300-500 Ω were added at the bases of the transistors for optimal gain and output power; presenting a low external impedance to the bases allows the effective collector-emitter breakdown voltage to increase well above BVCEO [20], [34]. Fig. 4 shows the maximum available gain (MAG) and the stability factor (µ1) for the CE and CT without including the physical metal interconnections. It is noted that the CT shows higher gain and stability compared to the CE. Specifically, at 185 GHz, the CT and CE show a MAG of 12.3 dB and 6.3 dB, respectively. Moreover, the better isolation between the input and output of the CT ensures higher stability with respect to the CE. For the design of high-frequency power amplifiers, it is certainly more interesting to compare the large-signal performance of the two topologies. Load-pull simulations were performed to find the optimum load impedances resulting in maximum output power and PAE at 185 GHz, while the input was terminated in the conjugate matching condition. Table 1 shows the relevant parameters (input and load impedances, peak gain, output power at 1-dB compression and at saturation, and peak PAE) of the CE and CT, while Fig. 5 illustrates the power sweep curves resulting from nonlinear harmonic-balance simulations under optimum load/source terminations at 185 GHz. It is noted that the two topologies show almost the same input impedance. The CT shows a larger optimum load resistance (RLopt,CT = 87 Ω) with respect to the CE (RLopt,CE = 22.42 Ω). In terms of output power and gain, the CT shows higher values than the CE. Both the CE and CT show a similar PAE of about 20%. However, when the inherent losses introduced by the interconnections and matching networks at 185 GHz are taken into account, they result in a severe reduction of the attainable gain, output power and PAE. The lower gain of the CE makes this solution less attractive in comparison to the CT, which was therefore adopted for the design of the single-ended PA at 185 GHz.

B. UNIT CELL: SINGLE-ENDED PA

A high-frequency PA typically consists of a power stage and a number of driver stages. The former provides the required output power, whereas the latter allow the gain specification to be satisfied. To design a power stage with an output power higher than 13 dBm at 185 GHz, the active area of each transistor in the CT was chosen as an aggregation of two parallel 8-finger HBTs (see Fig. 3(b)). This provides a saturated output power of 15.8 dBm without including metallic interconnections. Suitable metal interconnections were designed for the power stage to satisfy metal current density and electromigration rules. The interconnections were EM-simulated in ADS-Momentum from Keysight to see their effect on fT/fmax, output power, gain and PAE. Fig. 6 shows the fT/fmax of the transistor with parasitics extracted (with interconnects). The fT/fmax drops from 300/450 GHz to 270/350 GHz, which is still better than CMOS [35]. This degradation in fT/fmax reduces the output power, gain and PAE of the PA. Load-pull simulations were repeated to optimize output power and PAE at 185 GHz. The resulting parameters of the power stage without (lossless, i.e., ideal) and with interconnections are summarized in Table 2, while Fig. 7 shows the output power, gain and PAE curves at 185 GHz. It is noted that the interconnections introduce a loss of 1.6 dB, which reduces the peak gain from 12.2 to 10.6 dB.
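A minimal dB-domain tally of this interconnect penalty, using the figures quoted above (the final comment reflects our reading of the re-run load-pull step):

```python
# dB-domain bookkeeping of the interconnect penalty (numbers from the text)
gain_ideal_db, gain_extracted_db = 12.2, 10.6
psat_ideal_dbm, psat_extracted_dbm = 15.8, 14.7

print(f"gain penalty: {gain_ideal_db - gain_extracted_db:.1f} dB")    # 1.6 dB
print(f"Psat penalty: {psat_ideal_dbm - psat_extracted_dbm:.1f} dB")  # 1.1 dB
# The two penalties differ because the load-pull terminations were
# re-optimized after parasitic extraction.
```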
The output power, PAE, and optimum input/load impedances are also affected by the interconnections. The maximum power and PAE drop to 14.7 dBm and 14.6%, respectively, due to the loss introduced by the interconnections. To drive the power stage and attain an overall gain larger than 27 dB, two identical driver stages were adopted. The transistor size in the driver stages is half that of the power stage. Table 3 provides a summary of the parameters of the two driver stages, biased with voltage supplies of 3 V and 4 V for the 1st and 2nd stage, respectively. A 4 V supply could have been selected for the 1st driver stage instead of 3 V to increase the overall gain; however, a 3 V supply was chosen to reduce the total DC power consumption of the PA. The optimum impedances required by each stage were synthesized through the design of the input, inter-stage, and output matching networks to complete the design of the single-ended PA. The matching networks were designed using MIM capacitors and 50 Ω transmission lines with different electrical lengths. The DC-block capacitors were optimized as part of the matching networks. Unlike high-impedance lines, 50 Ω transmission lines sustain higher current densities, which improves the reliability of the PA. The three stages of the single-ended PA were AC coupled by using bypass capacitors. The matching networks, including the bypass capacitors, were EM-simulated in ADS-Momentum. Fig. 8(a) shows the final schematic of the single-ended PA, while the corresponding 3D layout is shown in Fig. 8(b). Fig. 9 shows the simulated S-parameters of the single-ended PA. Good matching is achieved at the input (|S11| < −10 dB) and output (|S22| < −10 dB) with a gain of 28.2 dB at 185 GHz, which is inherently smaller than the sum of the gains of the individual stages due to the loss of the matching networks. Fig. 10 shows the gain, output power, and PAE of the single-ended PA at 185 GHz. A peak output power of 13.1 dBm with a PAE of 7.7% is obtained. To quantify the losses associated with the matching networks, Fig. 11 was generated, where the output power is reported at various nodes (see Fig. 8(a)). For ease of readability, the powers are also specified for a fixed input power of −18 dBm in the figure.

C. 4-WAY SPLITTER/COMBINER

To combine the four unit cells of single-ended PAs, the low-loss 4-way splitter/combiner discussed earlier in Section III was designed. Fig. 12 shows the final layout of the 4-way combiner/splitter, illustrating the various transmission lines with different characteristic impedances and electrical lengths. The combiner was optimized in ADS-Momentum while ensuring good matching at each node (see points A, B, and C in Fig. 2). Fig. 13 shows the simulated S-parameters of the 4-way combiner. Good matching is achieved over a wide bandwidth, with both |S11| and |S22| better than −10 dB at 130-270 GHz. The combiner presents an insertion loss of less than 1 dB at 140-205 GHz.

D. 4-WAY COMBINED PA IMPLEMENTATION

The transmission lines used in the matching networks and power combiner were realized using the top thick metal (TM2) for the RF lines, with the bottom thin metal M3 acting as the ground plane (see Fig. 1). The lower metals M1 and M2 were used for various interconnections to route the DC lines. Separate collector and base supplies were used to bias the transistors. Moreover, the collector supplies of each stage are separated to improve stability.
Additionally, large bypass capacitors with a series resistor of 10 Ω are included in the supply lines. The full layout of the 4-way combined PA consists of the integrated input splitter, four unit cells of single-ended PAs, the output combiner and bonding pads. The final PA was fabricated with the standard 130-nm SiGe BiCMOS process, and its micro-photograph is shown in Fig. 14. It occupies a very small area of 0.97 × 1.4 mm², including RF and DC pads. It is noted that the chip contains dummy metal layers, which were included to satisfy the metal density rules of the 130-nm BEOL process. Such dummy metal layers were placed at least 30-50 µm away from the RF lines to avoid any coupling with the main circuitry, which was further verified by including them in the EM-simulation during the design of the matching networks.

V. EXPERIMENTAL RESULTS

The realized power amplifier was characterized on wafer under small- and large-signal conditions. During the characterization, the supply voltages were set as VCC1 = 3 V, VCC2 = 4.0 V, and VCC3 = 4.0 V.

A. SMALL-SIGNAL

The small-signal (S-parameter) characterization was carried out using the setup shown in Fig. 15. The Rohde & Schwarz ZVA67 VNA (vector network analyzer) and ZC260 frequency extenders were used to perform the measurements, with the extender LO input set as LOin = (RF − 279 MHz)/12. The LOin signal is divided into two using a power divider. These RFin and LOin signals then feed the two ZC260 frequency extenders, which generate the required RFout signals needed to feed the two-port device under test (DUT, i.e., the PA in this case). The DUT is contacted by an identical set of DC and RF probes and waveguide fixtures on each side. The reflected and transmitted intermediate frequency (IF) signals (279 MHz) are then captured by the VNA. The multiplication factor of the ZC260 frequency extenders is 12. In addition, the attenuator of the frequency extender can be used to adjust the incident power of the RF signal feeding the DUT. A similar setup, with the RF probes and frequency extenders replaced (ZC170), was used to measure the S-parameters of the PA in D-band (110-170 GHz). A photo of the measurement setup available at the IHP laboratory is shown in Fig. 16. Fig. 17(a) and Fig. 17(b) show the stability factor (µ1) and the S-parameters of the 4-way combined PA, respectively. In general, a good correlation is found between simulation and measurement. The PA is stable, as the µ1-factor is above 1. The measured stability is only shown at 110-260 GHz due to the unavailability of RF probes and sources covering the full frequency range from DC to fmax. Nevertheless, the simulated stability of the PA was assessed under even- and odd-mode excitation, and both the single-ended and the 4-way combined PAs were found unconditionally stable. The PA shows a maximum small-signal gain of 25.9 dB with |S11| and |S22| better than −10 dB at 185 GHz. The measured gain at 185 GHz is slightly (1.3 dB) lower than the simulated one (27.2 dB). The discontinuity at 170 GHz is clearly related to the change of measurement setup (D- or G-band). The 3-dB and 6-dB bandwidths of the PA are 27 GHz and 42 GHz, respectively.

B. LARGE-SIGNAL

The large-signal characterization of the PA was performed using a setup similar to the small-signal test bench, with the difference that a power meter is used at the output of the DUT. Fig. 18 shows the D-band output power measurement setup. The input signal feeding the DUT is generated using the same method discussed in Section V-A.
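The extender frequency plan stated in Section V-A is easy to tabulate; a minimal sketch using the relation LOin = (RF − 279 MHz)/12 (the helper name is ours):

```python
def lo_in_ghz(rf_ghz: float, if_mhz: float = 279.0, mult: int = 12) -> float:
    """LO input frequency for the extender: (RF - IF) / multiplication factor."""
    return (rf_ghz - if_mhz / 1000.0) / mult

for rf in (140.0, 170.0, 185.0, 210.0):
    print(f"RF = {rf:5.1f} GHz -> LO_in = {lo_in_ghz(rf):.4f} GHz")
# e.g. at RF = 185 GHz the extender needs an LO input near 15.39 GHz
```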
The amplified signal at the output of the DUT is detected by a VDI Erickson PM5 power meter. For the characterization in G-band, the ZC170 frequency extender was replaced with the ZC260, and WR 4.0 probes were adopted. The RF-probe loss, taken from the datasheet, was de-embedded from the power measurements. Fig. 19 shows the measured large-signal parameters of the 4-way combined PA compared with the simulation. These parameters include output power, gain, and PAE at various frequencies in G-band. The measured PA shows performance similar to simulation in terms of maximum output power, with some difference in gain. For instance, at 185 GHz, the PA achieves 18.1 dBm of saturated output power with 25.9 dB of peak gain and 3.5% PAE. Similarly, a maximum output power of 18.7 dBm with a PAE of 4.4% is demonstrated at 170 GHz. The degradation in measured PAE is due to the measured output power and gain being lower than simulated. Nevertheless, a good correlation is found between measurement and simulation. The PA consumed a maximum overall DC current of 431 mA, summed over all supplies. The resulting overall DC power consumption is about 1.6 W. Fig. 20 shows the simulated and measured output power at 3-dB back-off and at saturation. In the simulation, the saturated output power is found for a fixed input of 5 dBm. The PA provides a 3-dB back-off output power larger than 15 dBm and a Psat larger than 18 dBm at frequencies of approximately 140-186 GHz. Such performance makes the PA highly desirable for various broadband applications. Table 4 summarizes the various parameters of state-of-the-art PAs based on both Si (SiGe BiCMOS, CMOS) and III-V (InP, GaAs) technologies at various frequencies in the D- and G-bands. Advanced InP processes achieve higher output power and PAE than GaAs- and Si-based technologies. However, SiGe BiCMOS is getting close to InP, with its highest power demonstrated in the lower D-band around 115 GHz. A few works have shown high output power in the upper D-band, but these are still not fully characterized. The PA presented in this work delivers high output power in the G-band at f ≥ 170 GHz. Moreover, it achieved a saturated output power ≥ 18 dBm over a wide range of frequencies at 140-186 GHz, which is state-of-the-art for SiGe BiCMOS technology.

VI. CONCLUSION

A fully integrated G-band PA based on the 130-nm SiGe BiCMOS technology with an output power larger than 18 dBm is presented. The large-signal performance of the PA relies on the low-loss wide-band on-chip power combiner and the single-ended PAs. Moreover, a detailed procedure for the design of the 4-way combined PA is presented, which includes the selection of the topology, the effect of the interconnections, the schematic and complete layout of the single-ended PA, and the four-way combiner. Future work can focus on improving the small-signal bandwidth and reducing the DC power and chip area consumption of the PA without significantly degrading the large-signal performance. The procedure adopted in this paper can be applied at higher frequencies, above 200 GHz, to design high-power PAs. Also, the PA can be utilized in the design of different solid-state systems for various future applications.
Diagnosing delivery capabilities on a large international nature-based solutions project

Nature-based solutions (NBS) are increasingly at the centre of urban strategies to mitigate heatwaves and flooding, improve public health and restore biodiversity. However, on-ground implementation has been slow, inconsistent and often limited to demonstration sites. A broad literature consistently highlights institutional barriers as a major reason for the observed implementation gap. In this study, we developed and deployed an assessment tool to identify barriers to NBS delivery on a European Commission Horizon 2020 project spanning seven cities. We found that practitioners were effectively navigating challenges in the areas where they had significant control, including community engagement, strategy development and technical skills. The greatest barriers were outside the influence of project teams: understaffing, a lack of intra-organisational processes, and risk-averse organisational cultures. These findings emphasise that after cities embrace NBS at the strategic and political level, it is vital that executives follow through with the necessary pragmatic reforms to enable delivery.

INTRODUCTION

Nature-based solutions (NBS) are increasingly recognised as an effective response to a number of major urban challenges. These include heatwaves 1-3 , flooding 4-6 , water quality 7,8 and public health and wellbeing [9][10][11] . While the concept of NBS emerged as recently as 2015 12 , the idea of using urban nature to address these issues also features prominently in the more established fields of ecosystem services (which emerged in 2005) 13 and green infrastructure (2002) 14 . Despite mounting evidence of their benefits, strategies built around NBS are seldom practically realised 15 ; implementation in cities has been slow, inconsistent and often limited to demonstration sites [16][17][18][19][20][21] . For example, 6 years after Copenhagen embraced green infrastructure as a response to its acute flooding problems in 2011, the implementation of green infrastructure was only just 'taking off' in 2017, and remained highly contested 22 . Even retaining existing urban NBS remains a challenge; tree canopy cover, central to mitigation of urban heat island effects, is declining in many cities 23,24 . For example, metropolitan Melbourne experienced a loss of 2000 hectares between 2014 and 2018 25 . In the US, an average of 36 million trees were lost each year from urban areas between 2009 and 2014 26 . Barriers within the organisations responsible for implementing NBS are frequently identified as a primary reason for limited NBS delivery 16,17,23,[27][28][29] . Delivery organisations have significant path dependencies; existing regimes are self-enforcing, and change is difficult 30,31 . The reasons for non-delivery have been characterised in detail in a broad range of literature. Barriers are highlighted in studies focused on urban forestry 32,33 , urban water management 18,29,34 , nature-based solutions 35,36 and climate adaptation 37,38 . The issue has been investigated through the lenses of mainstreaming 39,40 , governance 19,21,33,34,41,42 , transitions 31,36,[43][44][45] and general analyses of barriers 17,23,46,47 . Papers describing the barriers to NBS have drawn on interviews with experts 17,23,33,34,36,48 , and direct project experiences 49,50 .
At the time of writing, we are aware of nine review papers that present typologies of barriers to NBS delivery, based on systematic reviews of the considerable literature 16,18,21,29,32,37,41,46,47 . These studies have identified a largely consistent set of eight essential (and frequently lacking) traits for successful NBS implementation in local government. Leadership support is critical, both at the political and executive level 18,20,21,29,34,41,51,52 . A project team with the right capacity and timeframes to implement projects is also important 32,37,51,53,54 , as is a framework of internal mechanisms that facilitate the delivery of NBS, including clear approval processes, supportive policies and laws, and well-established standards for NBS design and maintenance 16,18,20,35,47,[55][56][57] . A positive, supportive organisational culture for delivering new projects is also necessary, recognising that new NBS projects often have inherent (and novel) risks and tradeoffs 17,19,20,32,33,47,52,58 . Finally, access to teams within the organisation that are both suitably skilled and supportive is vital 16,18,23,46,54,59,60 . Beyond the organisation itself, it is common for other levels of government to play an important role in approving aspects of NBS projects; an absence of support or clear process from higher regulatory authorities can pose a significant barrier 32,33,38,44,51 . Effective community engagement is also noted as important, recognising that many NBS need public support and/or private property owner consent to be successful 23,37,38,46,49,[61][62][63][64] . While the barriers to NBS delivery have been the subject of significant attention, the implementation gap persists, with recent publications continuing to note the difficulty of NBS delivery 33,36,52,57 . A range of theoretical frameworks offer insight into how the implementation gap might be addressed. In the field of governance, The Policy Arrangement Model 65 has been used to conceptualise governance in urban forestry 32,33 and urban stormwater management 21,34 as the temporary balance of a set of actors, discourses, rules and resources; changes to these variables may lead to changes in governance. In the Policy Arrangement Model, each of these four elements is significant, as is their interplay 33 . The actors included (or excluded) in a policy arrangement are crucial, given the range of agendas in typical stakeholders (e.g. politicians, community groups, chambers of commerce, financiers etc.), as are the relations between these actors (some may operate as coalitions, or as antagonists). Discourses include tacit and explicit conceptualisations of what the policy problem is, how it should be solved, and what values matter most. These are important in lending legitimacy to rules, which define interactions and roles between actors. These may be as formal as laws and design standards, or as informal as a set of undocumented organisational processes and norms (e.g. "talk to Anne in our compliance branch, she usually decides what is safe"). These elements are all vital in determining who deploys resources such as staff time, skills, budgets or equipment, and how they are deployed. Collectively, the dynamics between these four elements constitute a policy arrangement; changes in one element have the potential to affect others, and in turn spark shifts in governance 65 . However, these systems can be strongly entrenched 66 . 
Governance shifts are theorised to be typically driven by at least four factors: policy entrepreneurs (or 'champions'), shock events, sociopolitical changes and 'adjacent arrangements' (developments in policy domains in related sectors or institutions) 67 . Policy entrepreneurs are also a focus of mainstreaming research, which highlights how these individuals advance NBS uptake by working within organisations to involve key stakeholders, engage citizens and contract technical expertise while incrementally introducing NBS considerations into planning practice 35 . This work is conceptualised as 'horizontal' mainstreaming, as officers champion NBS across their organisations, but it is argued that this must be supported by 'vertical' actions by top-down actors (such as executives and elected leaders) with the power to determine resource allocations and organisational structures 39,40 . To support NBS development and planning, the European Union's Horizon 2020 programme initiated a series of large international demonstration projects, each involving collaborations between a number of cities, consultancies and universities. These include the UnaLab, ProGIreg, Connecting Nature, Grow-Green, Urban GreenUP and EdiCitNet projects 68 . These projects generally fund dedicated staff, as well as on-ground delivery of NBS, and have potential to address some or all of the barriers to NBS delivery that cities face. When considered in terms of the Policy Arrangement Model, the new actors, discourses and resources introduced by these projects all challenge the 'temporary balance' theorised to constitute the organisational status quo 65 . These projects also may encourage governance shifts 67 , by both facilitating the hiring of NBS policy entrepreneurs, and increasing a city's exposure to influential adjacent arrangements in other centres of NBS expertise, such as university research units or exemplar municipalities. With the involvement of organisational champions/policy entrepreneurs, mainstreaming activities such as stakeholder outreach, citizen engagement and intra-organisational collaboration become increasingly possible 35 . This paper investigates the Horizon 2020 NBS project, Urban GreenUP. Urban GreenUP focuses on supporting partner cities to prepare NBS plans, as well as funding a multi-million Euro programme of investment in NBS interventions including floating vegetated islands, green walls on private structures, and streambank renaturalisation. The seven cities participating actively as project partners are Liverpool (UK), Ludwigsburg (Germany), Mantova (Italy), Valladolid (Spain), Izmir (Turkey), Quy Nhon (Vietnam) and Medellín (Colombia). This group of cities represents a wide range of governance arrangements and urban contexts in which NBS delivery occurs; Liverpool is a significant post-industrial centre emerging from sustained economic challenges compounded by government austerity, while Mantova has large areas of UNESCO world heritage and a legacy of industrial pollution. Quy Nhon is a coastal holiday town fairly new to NBS, while Ludwigsburg has extensive environmental legislation and has already successfully carried out major streambank restoration works on their local river. The former is largely governed by provincial government, with more operational management at a local level, whereas the latter has individual portfolio mayors, including one for the city's environment.
Valladolid has a population of 300,000 and a fairly compact urban form, whereas Medellín numbers over two million residents. We were able to work with each of these cities, effectively capturing the full range of capabilities and experiences in the Urban GreenUP project, and a significant variety of landscapes and organisational contexts in which NBS may be implemented. While the 'generalisability' of case studies is often limited, the constraints can be at least partially addressed through strategic sampling of cases 69 . Our sample is diverse, and while limited to seven cities, it does represent the full cohort of cities participating in this major EU programme designed to promote NBS innovation. A smaller sample size allows for a close, qualitative study of each case. Urban GreenUP presents a valuable opportunity to investigate the persistence of NBS barriers within local governments with ambitions for NBS implementation, with implications both for future innovation-oriented programmes such as Horizon 2020, and potentially the broader practice of NBS delivery in cities. Many cities beyond the Urban GreenUP group are preparing new NBS plans and programmes, and could benefit from insights arising from this study. Each GreenUP city is embarking on NBS planning and delivery and, at the time of our research, each had an individual or team employed with a specific NBS delivery role (with potential to serve the policy entrepreneur role highlighted in the literature). Local government often plays a key role in the implementation of urban NBS 39,47,70 , and Urban GreenUP places these organisations at its centre. Citizen engagement is an explicit focus of the project, as is the making of plans; these are both emphasised as opportunities for mainstreaming new practices 35,71 . We analyse the NBS implementation capacity of the cities within this study using an approach generally consistent with the practice of theory-based evaluation 72,73 . This 'theory-based' method breaks an implementation programme into its component elements, and assesses each element against available theory regarding what is required for success. This offers significant advantages over other evaluative methods because it focuses on the causative elements that lead to policy success or failure, rather than just the final outcomes achieved 74 , and is therefore more conducive to reforms of the institutional barriers discussed above. Theory-based evaluation typically takes place at the end of projects, but ours is ex ante, an approach noted by Weiss in her seminal outline of theory-based evaluation as having the potential to improve programme planning 72 . Evaluative practices have been noted as a particular weakness in local government NBS programmes, both at the political and officer level, due to a fear that acknowledging problems would lead to criticism of failures 36 . We sought to mitigate this issue both through use of an ex ante approach, and by designing a tool that creates distance between evaluators and individual practitioners. This paper investigates the enduring difficulties faced by cities seeking to deploy NBS, in the context of a major NBS project spanning seven cities. We had two key research questions. First, do case study cities have the capabilities required for successful NBS delivery, or are there barriers that continue to make this difficult? Second, does identifying and measuring a city's NBS delivery capabilities facilitate improvements in these capabilities?
We elicited organisational barriers from NBS practitioners in participating cities using a purpose-built diagnostic tool. We used this approach because tools that enable organisations to learn about their success factors have the potential to address implementation gaps 44,71 . The tool was developed to bring lessons from the literature into an operational context, by enabling practitioners to assess and rate their organisation's capability levels across eight key areas, such as political support, alignment between teams, and technical knowledge (refer to Table 1). The tool posed a series of questions pertaining to each of these eight capability areas; users answered by selecting from a set of pre-defined answers. The tool associated each answer with a level of capability, which was reported as a final assessment of an organisation's capabilities. This was provided to the practitioner directly on completion of the tool's questions, enabling an immediate estimate of their organisation's NBS delivery capacity, and a diagnosis of any key barriers they would be likely to face in future NBS projects. These results formed the basis for our reflections on NBS delivery capacity within Urban GreenUP, as well as discussions with practitioners to understand how the results of the tool were received. Our research proceeded in three steps. First, we drew on the literature to define a set of eight key capabilities, which we call 'success factors', for NBS; these formed the basis of the tool. Second, practitioners within NBS teams in participating cities used the tool to identify their capability levels. Finally, we interviewed users to evaluate the impact of the tool.

City capabilities

The tool assesses eight broad capability areas for NBS delivery. These are grouped according to the barriers that were common across multiple disciplines (such as urban forestry and integrated water management) and framings of the problem, especially in review papers that outlined typologies of barriers 16,18,21,29,32,37,46,47 , and are detailed in Table 1. Most of the participating cities faced deficits in multiple success factors, at a level ranked by the tool as either very challenging or critically challenging (Fig. 1). Furthermore, five cities have results rated challenging in at least half of the eight success factors. Stable political/executive support is a challenge in five cities. Four cities have unsuitable internal processes, strategies, regulations and/or policies. Notably, almost every city reported shortages of staffing, and for the majority of teams it appeared to be a serious issue. Challenges were noted in organisational culture in four cities, as were difficulties with other government departments in four cities. Factors considered relative strengths across the group included technical capability, community engagement and supportive internal departments. Two cities (City 3 and City 6) reported strengths in almost all capability areas. Reviewing individual responses to the questions within each success factor revealed more specific strengths and weaknesses (Fig. 2). NBS approval processes were a key issue among the grouped issues of 'processes/standards/policy/regulation'. Four cities do not have clear processes for NBS approval, meaning this must be negotiated case-by-case with other parts of the organisation (Fig. 2, question 2.1). Staffing shortages were a consistent cause of low ratings in the 'adequate and empowered staffing' success factor (Fig. 2, question 3.1).
Table 1. Eight success factors for urban NBS in local government, with the barriers each factor addresses.

Stable executive and political support:
• Political support is lacking; either unclear, inadequate or obstructive 18,20,21,34,51,80
• Executive leadership and support is lacking 20,21,29,41
• Teams face instability and uncertainty due to electoral volatility or leadership churn 55,81

Internal processes, standards, regulations and policy:
• Internal approval processes are not suitable for NBS and are either obstructive or unclear, requiring laborious case-by-case negotiation 18,47,55
• Laws may make NBS implementation illegal or create an unreasonable risk of liability 16,20,21,34,35,47,56
• Design standards do not exist for NBS, so individual designs must be negotiated; standards for other kinds of urban infrastructure preclude NBS (e.g. no trees on median strips) 16,21,46
• Policy and strategy does not clearly direct the organisation to deliver or even conserve NBS 16,17,29,35,51

A well-resourced team:
• Staffing is often inadequate for the scope of projects and capacity issues are aggravated by high turnover 32

Advanced community engagement skills:
• Many NBS need public support and/or private property owner consent to succeed, but effective engagement capability is often lacking 35,49,63,82,85,86

Supportive internal departments:
• Diverse departments and disciplines that are often essential to successful project delivery 62 tend to be external to core delivery teams (i.e. 'siloed') and may be uncertain of their potential role, or even obstructive. These include engineering, maintenance and design teams, but a range of other work areas can be barriers if projects are siloed 16,19,20,33-35,37,47,58

Culture of innovation and risk tolerance:
• Organisational cultures are skeptical of new challenges, outputs and ways of working 17,19,20,32,33,47,58
• Failures are punished instead of seen as learning opportunities 30,58
• Trade-offs are difficult as risks and disbenefits are weighted heavily compared to benefits 16,23

Supportive departments in other levels of government:
• NBS may require approvals from other levels of government that do not consider NBS an important part of their work, or have conflicting policy and/or values 29,32,33,38,44,51
• Where approvals are required, they lack a clear and facilitative process 16,18

Access to suitable technical skills:
• Staff lack experience and skills (or access to contractors) in key technical fields including engineering, NBS design, horticulture, substrates and plant selection, and construction and maintenance 16,18,20,23,32,46,54,59

Culture in relation to risk was also highlighted as a key barrier for most cities. When asked, "When a new initiative fails or is difficult, how do leaders and executives tend to respond?", four users answered, "That was worth a try but let's never do it again" (Supplementary Table 1; question 6.2). This response was rated 'challenging' by the tool, but it may prove to be a more severe issue in cities establishing new programmes within risk-averse cultures.
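To make the tool's question-to-rating mechanics concrete, a minimal sketch of the scoring logic is shown below. The structure (pre-defined answers mapped to capability levels, aggregated into a final assessment) follows the description above, but the full mapping is invented for illustration and is not the tool's actual content:

```python
# Minimal sketch of the tool's question -> answer -> capability-level logic.
# The 6.2 question text and quoted answer come from the paper; the staffing
# question wording and all remaining option-to-level mappings are our
# illustrative assumptions.
QUESTIONS = {
    "3.1 staffing": {
        "question": "How is your NBS delivery work staffed?",
        "options": {
            "A dedicated, stable team": "functional",
            "A small team stretched across projects": "challenging",
            "No dedicated staff": "critically challenging",
        },
    },
    "6.2 risk culture": {
        "question": ("When a new initiative fails or is difficult, "
                     "how do leaders and executives tend to respond?"),
        "options": {
            "Failures are treated as learning opportunities": "functional",
            "That was worth a try but let's never do it again": "challenging",
            "Failures are punished": "critically challenging",
        },
    },
}

def assess(answers: dict) -> dict:
    """Map each answered question to its associated capability level."""
    return {qid: QUESTIONS[qid]["options"][a] for qid, a in answers.items()}

report = assess({
    "3.1 staffing": "No dedicated staff",
    "6.2 risk culture": "That was worth a try but let's never do it again",
})
for qid, level in report.items():
    print(f"{qid}: {level}")
```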
The comparatively good ratings for the 'Alignment of Internal Departments' success factor was indicative of cities reporting that their engineering, design or maintenance departments "have basic knowledge and are tentatively supportive if risks are managed well" (Q5.1-5.3). While not obstructive, this tentative support is perhaps a less encouraging result than the rating suggests. While the reasons for low overall success factor ratings varied, the tool revealed three consistently problematic capability areas: (1) acute shortages in staffing; (2) strongly risk-averse culture; and (3) obstructive or unsuitable processes for NBS approval. Impact of the tool Users generally reported that the tool accurately characterised their success factors, and considered the tool useful. "I definitely agree with the success factors, I liked this very much. The multiple-choice options you provide are quite accurate, it's quite easy to answer and reflects the differences between answers well. The answer options are well selected, the results accurately reflect my answers. When I saw the results I felt reflective about thatwhat I see now in the screen, Advanced Community Engagement Skills may be a problem for us. I know this is true, this part of the thing works quite well." Team member, City 5 "Yes, I think this is very good to make it more clear for us. We have it in mind, okay, this might be the problem but you collect the answers and then you see yes, this is difficult for us and how can we maybe solve the problem. So yes, I like this overview because it's very clear. It covered the main factors for us." Team member, City 1 While the tool's findings were endorsed, practitioners who used the tool already clearly had tacit knowledge of their own missing success factors. Users found it useful to have their strengths and weaknesses made explicit; however, the tool did not encourage new actions (such as organisational reforms), because knowledge was not what was preventing improvements to success factors. "We agree. We know these are the barriers. (…) The problem is transferring between theory and reality (…) Our motivation is great, but it does not depend on us, the staff we have an organisation, they decide who must work on this thing. This is a problem that we cannot solve alone (…) At the moment we cannot change it." Team member, City 2 A number of interviewees noted that if other teams were asked about the success factors, their answers might differ. In one city, users felt they could only use success factor ratings if they were produced through consultation with all the relevant internal teams. Despite these limitations, users noted other potential strengths of the tool, including that it would be useful as a means to guide stakeholder deliberations, as part of the planning process within organisations to build alignment in early project phases, and in building awareness in decision-makers around the organisational requirements to execute projects effectively. Interviews also shed light on the two very positive responses in a way that underlines the tool's limitations. One team appeared uncomfortable with the tool's probing questions, and another was newly-formed when the tool was offered to them. This highlights the tool's dependence on informed and willing user input. DISCUSSION Organisational barriers continue to limit NBS delivery, despite the popularity of the concept and years of scholarship identifying the barriers through a range of theoretical lenses. 
To interrogate the persistence of this problem in a practical context, we provided a tool for local government practitioners to identify success factors in their organisations. We sought to diagnose problems and encourage actions to build delivery capability. We found that the cities embarking on NBS delivery as part of Urban GreenUP had strong capabilities in a number of areas typically identified in the literature as barriers. These included good access to technical skills and openness to citizen involvement. General (albeit tentative) support from teams across the organisation was common, as was a broadly supportive policy environment. This represents an important difference between our studied organisations and the literature to date; project champions in Urban GreenUP are overcoming some typical barriers. Three major enduring barriers were reported by most of the cities in this study. First, we identified a lack of clear organisational processes by which NBS are delivered. The absence of process creates a requirement for ad hoc negotiation within and between organisations to deliver NBS. This kind of negotiation is typically with teams that have other priorities, such as traffic engineering or park maintenance. While these areas are not necessarily actively opposed to NBS, the absence of process has been noted in the literature to make NBS projects more time-intensive and uncertain 17,23,75. This is especially difficult given that the second barrier we identified was an acute shortage of staffing to manage NBS delivery. The third barrier we noted was that cultures around risk were cautious and punitive, in a way that has been emphasised in the literature as a barrier to innovation 30. Revisiting the Policy Arrangement Model, we may conclude that actors and discourses are shifting, but the inertia is primarily in the allocation of resources and in the 'rules of the game' (both the tacit rules about risk, and the more explicit rules about NBS approval). However, the findings of this study are especially interesting when considered in terms of the dichotomy of vertical and horizontal mainstreaming 39,40. Horizontal mainstreaming, typically carried out by policy entrepreneurs, includes the establishment of new activities such as strategies and pilots, collaborations across the organisation, technical skill development and engagement of the public. The teams we spoke to were generally strong in these areas, with policy entrepreneurs actively pursuing these activities, aided in part by funding and knowledge resources provided by Urban GreenUP (for example, stakeholder engagement advice and access to specialist engineering consultancies). Vertical mainstreaming, typically a responsibility of executives, includes a set of top-down actions within the organisation. These include setting new organisational norms (for example, in relation to risk), modification of organisational rules and working structures to facilitate delivery, and the allocation of resources to support delivery (for example, by hiring staff). These correspond closely to the weaker success factor results in our study; it is this area of mainstreaming activity that appears to be much less developed in the cities we studied. The need for vertical mainstreaming activities was underlined by our interview findings.
Practitioners considered the tool effective and useful in the way it explicitly identifies and measures success factors, but it was clear that the most pervasive barriers in the studied cities were already very familiar to practitioners. The diagnoses revealed through application of the tool did not encourage actions to improve success factors; these problems persist not because they are unknown, but because addressing them was not within the authority of the practitioners we spoke to. Returning to the question of what can be generalised, we note a few points in our case studies that could apply to wider practice. The contrast between case and theory has again validated the suite of issues that cities face in NBS delivery, with practitioners affirming the eight success factors as comprehensive and familiar. The accuracy of the tool suggests it has promise as an efficient mechanism for injecting theory-based evaluative practice into workplaces, particularly as a focus for deliberation. Perhaps most interestingly, the way that the most enduring barriers (Risk Aversion, Resourcing and Rules) correspond to typical executive responsibilities suggests that this actor group may need to play an expanded role in cities seeking to initiate NBS programmes. Certainly, in large innovation-oriented programmes such as Horizon 2020, a more explicit stream of actions for senior organisational leaders appears to be warranted, as only these actors have the power to reform the challenging organisational barriers that we have identified. We note three key limitations of the tool used in this study. First, this is a very diverse set of cities, and our tool is based on a literature that may not have given due attention to non-western governance models. For example, in Vietnam, most implementation is carried out by a City People's Committee, subordinate to the Provincial People's Committee, while the locally-elected People's Council plays a fairly limited supervisory role. Second, the tool's reliance on user input exposes it to the typical limitations of self-assessment approaches. For example, while a willing and experienced practitioner with an interest in building capacity might use the tool honestly to receive a meaningful response, other scenarios (for example, one where the user perceives reputational risk or a chance to access additional resources) may incentivise 'gaming' of the tool. Future studies could account better for this risk by establishing more anonymous input conditions, and by including stakeholders beyond core delivery teams. Finally, there is likely scope to refine the tool's categorisation of responses; in particular, we note instances where very general support from policy or other teams was considered 'functional', where this may amount to simply a lack of obstruction rather than any real facilitation of NBS delivery.

METHODS

To arrive at our findings, we employed three key methodological steps:
1. Defining a set of 'success factors' for urban NBS, based on a review of the academic literature, for use in the self-assessment tool;
2. Assessing capabilities in seven case study cities via the self-assessment tool; and
3. Reflecting on the value of the self-assessment tool through semi-structured interviews with practitioners.

Defining success factors

Given that we sought to carry out a theory-based evaluation, our first step was to assemble and synthesise existing theoretical understanding of the capabilities required for NBS delivery.
To do this, we identified barriers to NBS delivery through a review of the academic literature. Key references in this field were retrieved using combinations of the following search terms in Elsevier ScienceDirect:

(Path dependence or institutional barrier or organisational barrier or transition or institutional capacity) AND (NBS or Green Infrastructure or Living Infrastructure or SUDS or WSUD or IWM or urban ecology or urban forest or green space)

From the results of this search, we eliminated results that did not pertain to both organisational barriers and urban greening, producing a total of 37 peer-reviewed articles, including nine review papers that each outlined a typology of barriers to the implementation of urban NBS programmes and strategies 16,18,21,29,32,37,41,46,47. Each reference was reviewed with a focus on identifying barriers to the implementation of urban NBS interventions. Following this, individual barriers were synthesised into a set of eight overarching success factors for urban NBS. Barriers were reframed as 'success factors' to assist when communicating with participating cities, in an attempt to promote uptake and optimism in tackling organisational challenges. To measure capability in the eight success factors defined above, the tool was prepared in a spreadsheet. For each success factor, the tool poses three to five questions designed to evaluate competency and reveal critical issues. For each question, the tool offers a set of pre-defined responses. To facilitate comparison between cities, each question had a limited set of response options. The responses represented a range of capability, from very high to critically low. Answers to individual questions were considered in the context of their potential impact on successful NBS implementation, and were categorised as 'optimal', 'functional', 'challenging', 'very challenging' and 'critically challenging' based on the literature review. As this tool to rate success factors for NBS delivery is novel, our categorisation of each answer relied on our interpretation of the literature, as well as our experience as NBS practitioners. This necessitated judgement under some uncertainty; the scoring and rating approaches used in decision models can be a topic of significant expert debate, even when such tools are already in active use 76. To mitigate this, we tested the tool with practitioners by providing an early draft before their final use, as well as including an interview question about whether users agreed with the tool's rating of their capabilities, and a field in which users could adjust the final ratings the tool had allocated. Our testing and user feedback affirmed the tool's ratings, with only one user applying the override to slightly adjust the tool's assessment, and general support for the tool's conclusions reported in feedback sessions.

Assessing capabilities

This research reflects on and critically analyses success factors for NBS delivery, using the Urban GreenUP project as a case study. Urban GreenUP is a large European Union research project, funded under the Horizon 2020 programme 77. Urban GreenUP focuses on preparing 'Renaturing Urban Plans' for the participating cities and aims more broadly to demonstrate techniques for NBS planning. The Renaturing Urban Plans will guide the strategic rollout of NBS in these cities to tackle local challenges such as urban heat islands, flooding, air quality and urban renewal.
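As a rough illustration of the rating logic described above, the minimal Python sketch below scores a set of answers per success factor and flags critical issues. The question IDs, the example answers and the 'worst answer wins' aggregation rule are illustrative assumptions only; the actual tool is a spreadsheet and its exact aggregation logic is not reproduced here.

RATING_ORDER = ["optimal", "functional", "challenging",
                "very challenging", "critically challenging"]

# Hypothetical ratings of a city's answers, keyed by success factor and
# question ID (IDs loosely follow the numbering used in the text).
answers = {
    "Stable executive and political support": {
        "1.1": "challenging", "1.2": "functional", "1.3": "very challenging",
    },
    "A well-resourced team": {
        "3.1": "critically challenging", "3.2": "challenging",
    },
}

def assess(factor_answers):
    # Summarise a factor by its worst-rated answer and flag severe issues.
    worst = max(factor_answers.values(), key=RATING_ORDER.index)
    flags = [q for q, r in factor_answers.items()
             if r in ("very challenging", "critically challenging")]
    return worst, flags

for factor, ratings in answers.items():
    worst, flags = assess(ratings)
    print(f"{factor}: overall '{worst}'; flagged questions: {flags or 'none'}")

A 'worst answer' rule is just one plausible way to turn question-level ratings into a factor-level assessment; in practice the real tool also produces plain-English summaries and a list of flagged challenges, as described in the following section.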
NBS was a strong focus for the city teams involved in the project at the time of our research; this context was ideal for our case study of barriers to NBS delivery. The tool was issued to project officers in the local government of each participating city. Users were free to complete the questions with whomever they wished (for example alone, with their teammates, or drawing in consulting and academic partners). Seven cities used the tool; each received a tabulated output that offered the following information:
• The tool's overall assessment of the city's capability for each of the eight success factors (terms ranged from 'this is a strength for your organisation' to 'this may be a serious problem')
• A list of specific challenges flagged within each success factor (e.g. if one's maintenance department has nobody with the suitable skills, this would be flagged as a critical issue).
The provision of plain-English assessments of capability and the flagging of critical issues were included to aid interpretation of results and encourage active responses.

Reflecting on the tool's findings

In our follow-up interviews, we ascertained how each partner used the tool and considered its findings. To encourage transparency, respondents were advised that results were anonymous. Accordingly, results are presented in a non-identifiable format. This was in accordance with the RMIT University ethics approval that was granted for this research (approval number: CHEAN A 21953). Six months after teams completed their self-assessments using the tool, we followed up to understand how the tool was used, and to determine whether it was useful for understanding the team's strengths and weaknesses. Follow-up interviews were semi-structured, with responses prompted by a series of guiding questions, and completed in person or via video conference. Interviews were conducted with a single member of the participating team. The questions posed to each participating team to prompt responses were as follows:
• How was the tool completed (e.g. alone, or with other people)?
• Did you agree with the tool's assessment of your government's capabilities?
• Will you change anything about your planning approach following the tool's identification of your strengths and weaknesses?
• Are there further improvements you would recommend to the tool?
• Will you use the tool again? How?
Conversations took ~30 min and were recorded and transcribed. A thematic analysis was applied, coding answers manually and then identifying and counting common responses across the seven sets of responses 78. For example, we received responses that indicated the tool was completed either alone, in pairs, with a team, or with multiple teams. Each interviewee's answers were coded into these categories to enable counts. These are included in Supplementary Table 2.

DATA AVAILABILITY

The data generated and analysed during this study are described in the following data record: https://doi.org/10.6084/m9.figshare.14401898 79. The tool developed to carry out this research can be accessed via the same data record. Coded interview data are available in Supplementary Table 2. Individual city outputs of the tool and full survey transcripts are not shared due to identifiability concerns, which would contravene the terms of our ethics approval (RMIT CHEAN A 21953).
2021-07-05T13:47:54.342Z
2021-07-05T00:00:00.000
{ "year": 2021, "sha1": "85b47c62ab50a7678ba244c9f7bbf33930f0c918", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42949-021-00036-8.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "85b47c62ab50a7678ba244c9f7bbf33930f0c918", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
244835842
pes2o/s2orc
v3-fos-license
Intracluster Sulphur Dioxide Oxidation by Sodium Chlorite Anions: A Mass Spectrometric Study

The reactivity of [NaL·ClO2]− cluster anions (L = ClOx−; x = 0–3) with sulphur dioxide has been investigated in the gas phase by ion–molecule reaction experiments (IMR) performed in an in-house modified ion trap mass spectrometer (IT-MS). The kinetic analysis revealed that SO2 is efficiently oxidised by oxygen-atom (OAT), oxygen-ion (OIT) and double oxygen transfer (DOT) reactions. The main difference from the previously investigated free reactive ClO2− is the occurrence of intracluster OIT and DOT processes, which are mediated by the different ligands of the chlorite anion. This gas-phase study highlights the importance of studying the intrinsic properties of simple reacting species, with the aim of elucidating the elementary steps of complex processes occurring in solution, such as the oxidation of sulphur dioxide.

Introduction

Pollution and other environmental issues are typically associated with the atmospheric emissions of exhaust flue gases produced by power plants and industries [1]. Different technologies, collectively known as flue gas cleaning processes, attempt to mitigate the release of greenhouse gases deriving from the burning of coal to generate electrical power [2]. Most efforts in this field are aimed at planning pollutant-control strategies to reduce sulphur dioxide, which is regarded as the main precursor of acid rainfall and atmospheric particulates [3–5]. To this end, the European Union established the 2016/2284/EU Regulation, which intends to progressively reduce SO2 emissions until 2029 and beyond [6]. Among the flue gas desulphurization (FGD) methods, the wet scrubbing system is a low-cost and simple technology based on the reaction between SO2 and an alkaline sorbent, typically limestone [7–9]. Although engineers mostly design separate air-cleaning devices for individual gas emission removal, the search for multi-pollutant control systems would reduce the need for large installation areas and operation costs [10]. To this end, sodium chlorite (NaClO2) is one of the most effective reagents for the simultaneous removal of oxides of sulphur (SOx) and nitrogen (NOx) [11]. The addition of this compound to seawater solution has recently been exploited to improve the elimination of SO2 and favour the development of environmentally friendly seawater-based FGD [12]. The strong oxidative properties of NaClO2 allow the conversion of sulphites (SO3 2−) produced by SO2 absorption to the harmless sulphates (SO4 2−), which are easily solubilized in water and thus removed [13,14]. Nevertheless, many factors can affect the outcome of the scrubbing process (e.g., pH, temperature, oxidant concentration, oxidant/gas contact time, volumetric gas and liquid flow rates), and the influence of these parameters has to be carefully evaluated in the design of the operating systems [15–17]. For instance, solution salinity is known to increase SO2 absorption efficiency, and under the alkaline conditions needed for SO2/SO3 2− conversion, the occurrence of a gas-solid interface reaction between SO2 and NaClO2 gives rise to the formation of the Cl·, ClO· and OClO· chlorinated species, which may enhance the concomitant NO oxidation in multi-pollutant removal plants [18,19].
On the other hand, the above-mentioned factors can mask the intrinsic reactivity of NaClO2 towards sulphur dioxide, preventing the elucidation of the mechanistic details that lead to the oxidation of SO2 and the formation of collateral products. A successful strategy to avoid interfering solution effects and investigate the chemical processes at a strictly molecular level consists in performing gas-phase studies by mass spectrometry [20–26]. This technique is one of the most routinely employed for analytical purposes in a plethora of research fields spanning, inter alia, from foods and drugs to biology [27–34] or from geology to atmospheric chemistry [35–39]. Less well known is the use of mass spectrometry in fields such as catalysis; nevertheless, in recent years, mass spectrometry has been widely employed to assess the elementary steps of a chemical transformation by unravelling mechanistic pathways and elucidating the factors which affect the reaction outcome [40–46]. Accordingly, ion–molecule reaction (IMR) experiments have largely been used to investigate the reactivity of ionic reagents, generated in their ground state, towards neutral species under single-collision conditions. The gas-phase reaction of the free ClO− and ClO2− anions towards SO2 has in fact provided important information on the intrinsic properties of naked chlorite, leading to the oxidation of sulphur dioxide to SO3, SO3·− and SO4·−, with the concomitant formation of the chlorinated species ClO−, ClO· and Cl· [47]. These reaction channels, respectively referred to as oxygen-atom (OAT), oxygen-ion (OIT) and double oxygen transfer (DOT), may represent simplified models of large-scale reactions occurring in the atmosphere or involved in flue-gas desulphurization processes. In addition, electrospray ionization mass spectrometry has long been devoted to the study of salt speciation [48–50], showing its capability in controlling the size and charge of cluster ions. As a result, ionic clusters can be considered miniaturized systems for investigating the intrinsic features of matter aggregation phenomena [51,52]. Accordingly, the study of the gas-phase reactions of SO2 with positive and negative carbonate cluster ions contributed to highlighting the major role of the charge in the kinetics of the smallest clusters, as well as the different reactivity when charged clusters are ligated to a NaOH molecule [53]. Indeed, a point-charge ligand can generate oriented external electric fields able to change the thermodynamics and kinetics of a gas-phase thermal process by controlling the reaction mechanism, efficiencies and product distribution [54–56]. Continuing with our studies focused on the chemistry of sulphur dioxide [57–62], here we report on the gas-phase reactivity of negatively charged chlorite cluster ions, [NaL·ClO2]− (L = ClOx− with x = 0–3), towards SO2, investigated by ion–molecule reaction experiments. In this way, the effect of the ligation of a neutral molecule to ClO2−, which changes the ion size and charge distribution of the cluster, has been evaluated against the known reactivity of the naked ClO2− species with SO2.
Results and Discussion

The oxo-halogenated ions investigated in this work were generated by negative electrospray ionization of NaClO2 solutions, typically yielding a series of singly-charged cluster ions in which NaClO2 is clustered to the ClO2− anion to form aggregates of the general formula [(NaClO2)n·ClO2]−, with n varying from 1 to 4 in the m/z range 100–500 (Figure S1). Aggregation phenomena are indeed characteristic of electrosprayed saline compounds [49] and are influenced by the solute concentrations and source parameters [53]. Furthermore, the electric field applied between the capillary and the skimmer plate accounts for the occurrence of electrochemical reactions at the conductive contact-solution interface near the ES emitter [63]. The detection of ClOx− (x = 0, 1, 3) anions in addition to the ClO2− parent species suggests the effective occurrence of in-source redox processes. For x = 1 and 3, the corresponding ClO− and ClO3− anions do not undergo significant aggregation phenomena. On the contrary, Cl− anions promote aggregation with NaClO2 to form [(NaClO2)n·Cl]− ions (n = 1–5), and mixed clusters of the general formula [NaxClyOz]− were also identified as minor species, as shown in the Supplementary Materials (Figure S1). The simplest ClO2− clusters, for n = 1, were found at m/z 125 and 157 and were respectively attributed to the 35Cl isotopologues of the [NaCl·ClO2]− and [NaClO2·ClO2]− species. The assignment was based on the distinctive 35/37Cl isotope pattern and on the corresponding collision-induced dissociation (CID) mass spectra. The ion [Na35Cl·35ClO2]− at m/z 125 predominantly fragments by losing a Na35Cl neutral counterpart, giving rise to the 35ClO2− daughter ion at m/z 67 (Figure 1a). The gas-phase decomposition of the corresponding 35/37Cl isotopomer (m/z 127) predictably leads to the formation of an equal ratio of 35ClO2− and 37ClO2− fragments at m/z 67 and 69, respectively (Figure 1b). The parent ion can therefore be described as a complex of the type [Cl·Na·ClO2]−, in which both the chloride (Cl−) and chlorite (ClO2−) anions are coordinated to the sodium cation (Na+). In particular, the chlorite moiety is reasonably consistent with an OClO− species rather than with the more stable ClOO− isomer, the presence of which can be excluded considering the structure of the precursor salt, NaClO2, and the high energy barrier to isomerization, calculated to be 51.1 kcal·mol−1 [47], which cannot be overcome by the ions during the ionization process.
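The 35/37Cl assignment logic used above follows from the binomial combination of the natural chlorine isotope abundances. As a brief aside, the minimal Python sketch below (nominal masses and textbook abundances assumed; not part of the original analysis) reproduces the expected ~100:64:10 isotopologue triplet for a two-chlorine ion such as [NaCl·ClO2]−.

from itertools import product

# Approximate natural 35Cl/37Cl abundances.
abund = {35: 0.7577, 37: 0.2423}

# Sum the probabilities of each isotope combination for the two Cl atoms,
# indexing peaks by nominal m/z relative to the all-35Cl ion at m/z 125.
pattern = {}
for combo in product(abund, repeat=2):
    mz = 125 + sum(m - 35 for m in combo)
    pattern[mz] = pattern.get(mz, 0.0) + abund[combo[0]] * abund[combo[1]]

base = max(pattern.values())
for mz in sorted(pattern):
    print(f"m/z {mz}: {100 * pattern[mz] / base:.1f}%")   # ~100 : 64 : 10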
Each ionic species described above was in turn isolated in the ion trap and exposed to an unreactive gas (He) over long accumulation times. Since no remarkable signal loss occurred, these ions can be considered rather stable gaseous chlorine-based aggregates. When reacted with SO2, they showed a noteworthy reactivity. In the following, the reactivity of the selected cluster ions, [Cl·Na·ClO2]− and [ClO2·Na·ClO2]−, will be described in depth, starting from the simplest, [Cl·Na·ClO2]−. Where relevant, the formula of the reacting species is written with the sodium cation in the centre, to highlight the reactive anionic units.
Similar to the reactions observed with the non-clustered ClO2− ions [47], both the [Cl·Na·ClO2]− and [ClO2·Na·ClO2]− cluster ions promote oxygen-atom transfer (OAT), oxygen-ion transfer (OIT) and double oxygen transfer (DOT) towards SO2. The main difference from the free ClO2− is that when SO2 is oxidised, the oxidised products predominantly remain in the cluster and are not released as free species. Accordingly, the whole mechanistic picture of the reactions of the [Cl·Na·ClO2]− and [ClO2·Na·ClO2]− anions towards SO2 was outlined by identifying direct and consecutive pathways, measuring the rate constants for each reaction channel, and structurally characterizing the ionic products by CID experiments.

Reactivity of [Cl·Na·ClO2]− Cluster Anion

[Cl·Na·ClO2]− cluster anions react with SO2 at room temperature, giving rise to the products shown in Scheme 1 through a complex series of parallel and consecutive reactions. A kinetic plot showing the time progress of the reaction is displayed in Figure 3. The identity of the ionic products from Reactions (1)–(5) has been probed by collision-induced dissociation, as discussed in the following. As reported in Table 1, the reaction of [Cl·Na·ClO2]− has a rate constant (kdec) of 2.88 × 10−10 (±30%) cm3 s−1 mol−1 and an efficiency (k/kcoll) of 24.2%.
Although the larger size of [Cl·Na·ClO2]− is predictably responsible for the decrease of the overall reaction rate compared to that of naked ClO2− (2.88 vs. 9.10 × 10−10 cm3 s−1 mol−1), the intrinsic reactivity of the two ionic species is comparable, except for small differences in the branching ratios of the three oxygen transfer reactions. For the sake of clarity, the reaction type (OIT, OAT or DOT) is indicated for each reaction channel. The main reaction of [Cl·Na·ClO2]− leads to the formation of the ionic product [Cl·Na·SO3]·− at m/z 138 and a ClO· radical species (Equation (1)). The reaction proceeds quickly, with a rate constant k1 of 2.13 × 10−10 (±30%) cm3 s−1 mol−1 (Table 2) and a branching ratio of 74.2% (Table 1). The collision-induced dissociation of the product ion at m/z 138 gives rise to the SO3·− ion at m/z 80 (Figure S3) through the loss of a neutral NaCl, consistent with a [Cl·Na·SO3]·− connectivity and hinting at the occurrence of an intracluster oxidation of SO2 to SO3·− through an oxygen ion transfer (OIT) process. Furthermore, [Cl·Na·SO3]·− was found to be unreactive towards SO2, thus confirming the presence of the two notoriously inert Cl− and SO3·− moieties [47]. The Cl− anion only plays a spectator role, whereas the sodium cation is reasonably involved in the coordination of both the Cl− and SO3·− anions. A minor channel gives rise to [Cl·Na·ClO]− at m/z 109 and SO3 (Equation (2)), with a rate constant k2 of 2.51 × 10−11 (±30%) cm3 s−1 mol−1 (Table 2) and a branching ratio of 8.8% (Table 1). The product ion [Cl·Na·ClO]− at m/z 109 resembles an aggregate in which a Cl− spectator anion and a ClO− moiety are both coordinated to the sodium cation, as evidenced by its CID mass spectrum. Through this path, SO2 is therefore oxidised to SO3 by an oxygen atom transfer (OAT) reaction. Once formed, [Cl·Na·ClO]− displays the distinctive reactivity of the surrounding ClO− moiety towards SO2 [47], which consists in a further SO2-to-SO3 conversion (Equation (2.1)), through a second OAT process, and in an intracluster reaction giving [Cl·SO3]− at m/z 115 through an OIT process (Equation (2.2)). The rate constants of the two competitive processes are, respectively, k2.1 = 7.53 × 10−10 (±30%) and k2.2 = 7.43 × 10−11 (±30%) cm3 s−1 mol−1 (Table 2). Not surprisingly, the OAT undergone by [Cl·Na·ClO2]− (Equation (2)) is slower than the same process undergone by [Cl·Na·ClO]− (Equation (2.1)), reflecting the different reactivity of the free ClO2− and ClO− species [47]. The former preferentially oxidises SO2 through an OIT process, whereas the OAT is faster in the case of ClO−. Finally, the [Cl·Na·ClO2]− parent ion is involved in different reactions collectively responsible for a double oxygen transfer (DOT) to SO2, with the formation of product ions containing a sulphate anion, SO4·− (Equations (3)–(5)). The sulphate moiety can either be found as a clustered ion, as in Equations (3) and (4), or be a free anion, as in Equation (5). In Reactions (3) and (4), upon the oxidation of SO2 to SO4·−, a Cl· or NaCl neutral moiety is respectively released. In any case, SO4·− is formed through an overall O2− transfer, and the DOT processes account for a branching ratio of 17.0% (Table 1).
The ionic product at m/z 154 (Equation (3)) is consistent with a [Cl·Na·SO4]− structure, according to its fragmentation into the SO4·− ion at m/z 96 (Figure S4) with loss of neutral NaCl. The reaction occurs with a rate constant k3 of 3.06 × 10−11 (±30%) cm3 s−1 mol−1 and represents the main DOT path. Alternatively, the SO4·− moiety can remain attached to the Cl· radical; the release of a NaCl moiety then leads to the product ion [Cl·SO4]− at m/z 131, with a k4 of 1.47 × 10−11 (±30%) cm3 s−1 mol−1 (Equation (4)). According to the electron affinity values of SO4 (EA = 5.10 eV) and Cl (EA = 3.61 eV) [64], the negative charge of the [Cl·SO4]− product ion is mostly located on the SO4 moiety, as confirmed by the dissociation of this cluster into the SO4·− ion at m/z 96 (Figure S5). Finally, SO4·− is also generated as a free ion through Reaction (5), with a k5 of 3.4 × 10−12 (±30%) cm3 s−1 mol−1. Not surprisingly, the free SO4·− ion is the least abundant product formed through the DOT paths: in the clustered species [Cl·SO4]− and [Cl·Na·SO4]·−, the negative charge can be more favourably dispersed over the larger species. The comparison with the reactivity of the free ClO2− anion shows that, also with the [Cl·Na·ClO2]− clustered anions, the OIT remains the main reaction channel. When ClO2− was reacted with SO2, the small difference in the electron affinities of ClO (EA of 2.27 eV) and SO3 (EA of 2.06 eV) [64] only resulted in close energies (−24.6 and −25.6 kcal mol−1) calculated for the two alternative exit channels, namely SO3·− (+ClO·) and SO3 (+ClO−) [47]; therefore, the prevalence of the OIT process was attributed to kinetic factors. Conversely, thermochemical factors favoured the OAT reaction over the OIT process in the reactivity of the free ClO− anion with SO2, due to the significantly higher electron affinity of Cl (EA = 3.61 eV) with respect to that of SO3 (EA = 2.06 eV). Accordingly, in the reactivity of the [Cl·Na·ClO]− ion, OAT (Equation (2.1)) prevails over the OIT process (Equation (2.2)) by a ratio of ca. 10/1. The in-depth theoretical analyses performed on the free ClO2− species [47] can also give some insights into the reactivity observed with the clustered chlorite anions. The potential energy surface (PES) of the [OClO-SO2]− system was characterised by an early transition state that accounts for the almost barrierless formation of SO3·−. In the TS, the negative charge is exclusively located on the preformed SO3 group (1.02 e−), which is prone to rapid dissociation into the sulphite radical anion, and this channel strongly competes with the OAT and DOT processes. The formation of SO3 and SO4·− occurs through common intermediates, found on the double-well PES, which dissociate reflecting the endothermicity of the two processes. This theoretical analysis is also well suited to explain the reactivity observed for the ligated [Cl·Na·ClO2]− cluster ions with SO2, and that of the other ligated species described in the following sections. The NaCl ligand does not affect the outcome of the oxidation reactions. Rather, it seems to have the effect of spreading the charge over the cluster, eventually lowering the reaction rate. Overall, an increase of the DOT and OAT processes at the expense of the OIT channel is evidenced for the [Cl·Na·ClO2]− cluster ion with respect to the non-clustered ClO2− anion.
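As a consistency check on the figures quoted above, the short Python sketch below recomputes the overall decay constant and channel branching ratios of [Cl·Na·ClO2]− from the formation rate constants of the primary products (Equations (1)–(5), Tables 1 and 2; units of cm3 s−1 mol−1 as quoted in the text). The neutral partners noted in the comments follow the mass-balance reading given above.

# Formation rate constants of the primary products of [Cl·Na·ClO2]- + SO2.
k = {
    "OIT, Eq. (1): -> [Cl·Na·SO3]·- + ClO·": 2.13e-10,
    "OAT, Eq. (2): -> [Cl·Na·ClO]- + SO3":   2.51e-11,
    "DOT, Eq. (3): -> [Cl·Na·SO4]·- + Cl·":  3.06e-11,
    "DOT, Eq. (4): -> [Cl·SO4]- + NaCl":     1.47e-11,
    "DOT, Eq. (5): -> SO4·- + neutrals":     3.4e-12,
}

# The sum of the primary formation rate constants reproduces, to rounding,
# the reported overall decay constant kdec = 2.88e-10.
k_dec = sum(k.values())
print(f"kdec = {k_dec:.2e}")

# Per-channel branching ratios; grouped by type this gives ~74.2% OIT,
# ~8.8% OAT and ~17.0% DOT, matching Table 1.
for channel, ki in k.items():
    print(f"{channel}: {ki / k_dec:.1%}")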
Effect of the Ligand

To investigate in depth the role of the NaCl ligand in the reactivity of the [Cl·Na·ClO2]− ion towards SO2, Cl− was first replaced by an X− anion (X = F, Br, I) to form the corresponding [X·Na·ClO2]− reactive species, and subsequently Li+ was inserted in place of Na+ to evaluate the role of the cation. Only non-redox-active ligands were used, in order to make a comparison with the effect of salinity in solution, where it is known that an increase in ionic strength determines an increase in the absorption efficiency of SO2 [18]. Regarding the branching ratios of the three reaction channels, [I·Na·ClO2]− and [Br·Na·ClO2]− show a reactivity distribution comparable to that of the [Cl·Na·ClO2]− ion. On the contrary, an increase of the OIT process at the expense of the OAT and DOT channels was observed for the [F·Na·ClO2]− cluster species. As a consequence of the high charge density on the fluoride ion, the [F·Na·SO3]·− ionic product arising from the OIT process adds an SO2 molecule, giving rise to a labile adduct of the type [F·Na·SO3·SO2]·−, never observed with the other reactant ions. The OAT process increases down the halogen series, whereas the opposite occurs with the OIT process. Passing to the cation effect, an opposite trend was observed, as the reaction rate decreases with increasing positive charge density on the metal. The overall rate constant for the [Cl·Li·ClO2]− + SO2 reaction is indeed almost three times lower than the corresponding value for the [Cl·Na·ClO2]− + SO2 system, highlighting the central role of an external electric field in modulating the reaction kinetics [54–56]. Only a minor effect of the charge density is instead observed on the general reactivity scheme of the [Cl·Li·ClO2]− anion. The main role played by the different ligands described above might lie in the spreading of the negative charge of ClO2− within the cluster, an effect that is reflected in the oxidative capacity of the ClO2− ion: the higher the charge density of the ligand, the faster the reaction. Passing from F− to I−, the former forms a tighter ion pair with Na+, making the chlorite anion more available to oxidise sulphur dioxide. A second effect concerns the steric hindrance to the approach of SO2 due to the neutral ligand, whereby ligands of larger dimensions lead to a decrease in the overall reaction rate. An opposite effect is observed with the lithium cation, whose small size increases the interactions with both Cl− and ClO2−, reducing the oxygen transfer rate of the latter. The effect of the non-redox ligands tested here is different from that played in solution, where an increase in ionic strength (i.e., the salinity) has the effect of increasing the SO2 absorption efficiency of the solutions and therefore the overall efficiency of wet scrubbing processes [18].

Reactivity of [ClO2·Na·ClO2]− Cluster Anion

Passing to the [ClO2·Na·ClO2]− cluster ion, the reactive channels reported in Scheme 2 have been observed from the reaction with SO2. The identity of the ionic products from Reactions (6)–(11) has been probed by collision-induced dissociation experiments, as discussed below. The reaction of [ClO2·Na·ClO2]− with SO2 at 298 K is fast and efficient, showing an overall rate constant (kdec) of 7.48 × 10−10 (±30%) cm3 s−1 mol−1. This value is only 0.82 times that of the bare ClO2− species, and the efficiencies (k/kcoll) of the two processes are similar (66.0% vs.
63.8%, Table 1). The intrinsic reactivity of the [ClO2·Na·ClO2]− anion towards SO2 is comparable to that of the [Cl·Na·ClO2]− species, as demonstrated by the rather close branching ratios for the three reaction pathways (Table 1). Nonetheless, the concomitant presence of two reactive ClO2− moieties in the [ClO2·Na·ClO2]− ion gives rise to an intricate reaction picture, as shown in Scheme 2 and in the kinetic plot of Figure 4. The main reaction of the [ClO2·Na·ClO2]− ion at m/z 157 leads to the ionic product at m/z 170, attributed to [ClO2·Na·SO3]·−, and a ClO· radical species (Equation (6)). The CID mass spectrum of the ionic product at m/z 170 shows a major dissociation into SO3·−, which accounts for a [ClO2·Na·SO3]·− structure (Figure S7). As in the case of the Cl-clustered species [Cl·Na·ClO2]−, the main reaction of [ClO2·Na·ClO2]− consists of an oxygen ion transfer, resulting in a fast intracluster oxidation of SO2. The rate constant k6 is 6.26 × 10−10 (±30%) cm3 s−1 mol−1 (Table 3), with a branching ratio of 81.8% (Table 1). The intracluster formation of SO3·− gives rise to a negatively charged product in which one of the two ClO2− moieties only plays a spectator role, whereas the sodium cation is reasonably involved in the coordination of the ClO2− and SO3·− anions.
However, the presence of a residual ClO2− moiety in the product ion [ClO2·Na·SO3]·− is responsible for the consecutive reactivity of this species, which is discussed in depth in the next paragraph (vide infra). The complete reactive scheme of [ClO2·Na·ClO2]−, integrated with the reactivity of [ClO2·Na·SO3]·−, is reported in the Supplementary Materials (Scheme S1), showing the complex and intricate reactivity of an only apparently simple species. A second, indeed minor, path leads to an ionic product at m/z 141 and SO3 (Equation (7)), formed through an OAT from one of the two ClO2− units to SO2. The branching ratio is only 4.8% (Table 1), with a rate constant k7 of 3.64 × 10−11 (±30%) cm3 s−1 mol−1 (Table 3). The ionic product at m/z 141 resembles an aggregate in which a ClO2− anion and a ClO− reactive moiety are both coordinated to the sodium cation, although its fragmentation into the ClO3− species at m/z 83 seems to account for a [Cl·Na·ClO3]− structure (Figure S8). Nonetheless, the [Cl·Na·ClO3]− ion obtained by spraying a NaCl/NaClO3 (1:1) millimolar solution proved to be unreactive towards SO2 (Figure S9). Therefore, it seems more likely to attribute the [ClO·Na·ClO2]− connectivity to the ion at m/z 141, which rearranges to [Cl·Na·ClO3]− upon CID, thus demonstrating the interaction of the sodium cation with the ClO2− and ClO− anions, rather than with Cl− and ClO3− species. The presence of two potentially reactive units, ClO and ClO2, makes the [ClO·Na·ClO2]− cluster ion quite reactive. The consecutive OAT process observed (Equation (7.1)) has a rate constant k7.1 of 9.57 × 10−10 (±30%) cm3 s−1 mol−1 (Table 3), which is much higher than the k7 of the similar OAT process in Equation (7). Again, as for Reactions (2) and (2.1), the reason lies in the different reactivity of the free chlorite and hypochlorite anions, the first undergoing faster OIT and the second faster OAT processes. The product ion at m/z 125 corresponds to the [Cl·Na·ClO2]− species, as demonstrated by its characteristic fragmentation pattern and the distinctive reactivity discussed in the previous paragraph (Figures S10 and S11). Four different DOT channels were reported for the [ClO2·Na·ClO2]− parent ion. The first three pathways (Equations (8)–(10)) parallel the DOT channels already described for [Cl·Na·ClO2]− (Equations (3)–(5)). Again, the formation of the free SO4·− product ion represents the slowest DOT process (k10 = 1.34 × 10−11 cm3 s−1 mol−1). In addition, a fourth DOT channel, which is worthy of note, was observed only for the [ClO2·Na·ClO2]− parent ion. In this case, the oxidation of SO2 leads to the product at m/z 119 with a rate constant k11 of 2.22 × 10−11 cm3 s−1 mol−1 (Equation (11)); this product subsequently adds a further SO2 molecule with a kadd of 5.74 × 10−11 cm3 s−1 mol−1 (Equation (11.1), Table 3). Although the structure of the ionic species at m/z 119 could not be directly probed owing to its unproductive CID, a [Na·SO4]− formula can reasonably be supposed. The corresponding ion at m/z 119 was also obtained by electrospraying a solution of Na2SO4 which, once isolated and reacted with SO2, gave a ligated [Na·SO4·SO2]− addition product with a rate constant consistent with the kadd of Equation (11.1), thus confirming the identity of the parent species at m/z 119 (Figure S12). The [Na·SO4]− formula accounts for the oxidation of the sulphur atom of sulphur dioxide and the eventual reduction of the chlorine atoms of the [ClO2·Na·ClO2]− ion.
Both the ClO2− units may be involved in the reaction, in which each ClO2− anion transfers an O·− moiety to SO2, giving rise to an SO4 2− species and the release of two ClO· radicals. This hypothesis was confirmed by replacing one of the two ClO2− anions with the similarly oxygenated, but intrinsically unreactive, ClO3− anion to obtain the [ClO3·Na·ClO2]− parent ion. When exposed to SO2, this species shows an intrinsic reactivity comparable to that of the [ClO2·Na·ClO2]− ion, with the sole exception of the product at m/z 119, which was not observed, thus highlighting the involvement of both ClO2− anions in the double O·− transfer.

Reactivity of [SO3·Na·ClO2]·− Cluster Anion

To better investigate the consecutive reactivity of the product ion at m/z 170, arising from the [ClO2·Na·ClO2]− parent species through Equation (6), the putative [ClO2·Na·SO3]·− ion was isolated via the sequence 157 to 170 (MS2-isolated) and separately reacted with SO2. The reactivity observed is illustrated in Scheme 3. The overall reaction shows a rate constant (kdec) of 3.74 × 10−10 (±30%) cm3 s−1 mol−1 and an efficiency of 33.0% at 298 K (Table 1). These values agree with those reported for the other [X·Na·ClO2]− parent species analysed above, which have a single reactive ClO2− moiety (Table 1). Regarding the distribution of the three reaction paths, an even more pronounced increase of the DOT channels, accounting for a total of 22.7%, was observed (Table 1).
The time progress of the reaction is described by the kinetic plot in Figure 5, and the rate constants of each pathway are reported in Table 4. The intracluster OIT process (Equation (12)) proceeds quickly, showing a k12 of 2.63 × 10−10 (±30%) cm3 s−1 mol−1 (Table 4) and leading to the formation of an ion at m/z 183. Unfortunately, the CID mass spectrum of this species does not allow us to distinguish between a [SO3·Na·SO3]− and a [Na·SO4·SO2]− structure (Figure S14), the latter already observed as a product of the DOT process involving the [ClO2·Na·ClO2]− parent ion. Nonetheless, based on the reactivity of naked ClO2− and knowing that the SO3·− moiety is notoriously unreactive with SO2, it is reasonable to suppose a [SO3·Na·SO3]− general formula for this species. The [ClO2·Na·SO3]·− parent ion is also involved in an OAT reaction, proceeding with a k13 of 2.60 × 10−11 (±30%) cm3 s−1 mol−1 and giving rise to an ion at m/z 154 that is consistent with a [ClO·Na·SO3]·− structure. The consecutive OAT reactivity of this species, leading to the ion at m/z 138 (Equation (13.1); k13.1 = 1.09 × 10−10 (±30%) cm3 s−1 mol−1, Table 4, Figure S11), accounts for the presence of the surrounding reactive ClO− moiety in [ClO·Na·SO3]·− (m/z 154). When MS3-isolated in the ion trap via the sequence 170 to 154 and exposed to SO2, the ionic species at m/z 154 is only partially reactive towards this neutral gas. A portion of the ionic population at m/z 154 survives over time, hinting at the concomitant presence of the unreactive [Cl·Na·SO4]·− species together with the [ClO·Na·SO3]·− isobaric ion that is consumed in the consecutive reaction (Figure S15). The [Cl·Na·SO4]·− species can reasonably arise from a direct intracluster DOT channel (Equation (13b)), as previously observed in analogous processes involving the [Cl·Na·ClO2]− and [ClO2·Na·ClO2]− parent ions (Equations (3) and (8)). As a result, the O2− transfer from ClO2− to SO2 triggers the release of a neutral SO3 moiety, according to the electron affinity values of the species involved in the reaction [64,65]. Unfortunately, it was not possible to independently measure k13b, which is therefore included in that of the OAT process, k13.
As a consequence, the branching ratio of the OAT might be slightly overestimated, at the expense of that of the DOT process, which could therefore be underestimated. Two other DOT pathways, reported in Equations (14) and (15), were also previously observed for the [Cl·Na·ClO2]− and [ClO2·Na·ClO2]− parent clusters (Equations (4) and (9), Equations (5) and (10)). All these pathways show rather similar formation rate constants, on the order of 10−11 cm3 s−1 mol−1 (Table 4). [ClO2·Na·SO3]·− also reacts with SO2, leading to the [Na·SO4]− product ion at m/z 119 (Equation (16)). The reaction, showing a k16 of 4.57 × 10−11 cm3 s−1 mol−1, proceeds with an intracluster O2− transfer. Such unusual reactivity probably involves both the ClO2− anion, which triggers a classic O2− transfer, and the SO3·− moiety, which may be responsible for an electron transfer, giving rise to an SO4 2− anion through a concerted rearrangement. As previously reported (Equation (11.1)), the consecutive addition of an SO2 molecule to the [Na·SO4]− product ion is observed, thus confirming the identity of the ion at m/z 119. Finally, as to the reactivity of higher species such as [(NaClO2)n·ClO2]−, only the rate constant relative to the cluster with n = 2 has been measured (Table 1), and it does not appear to be affected by the number of additional NaClO2 units compared to [ClO2·Na·ClO2]−. However, it was not possible to evaluate the branching ratios of the OIT, OAT and DOT processes of [(NaClO2)2·ClO2]−, due to the low-intensity signals of the parent cluster ions and to the complex array of peaks resulting from the reaction with SO2.

Materials and Methods

Mass spectrometric experiments were carried out on an LTQ-XL linear ion-trap mass spectrometer (Thermo Fisher Scientific) that was modified in-house to perform ion-molecule reactions (IMR) [53]. Water-acetonitrile (1:1) solutions of NaClO2 at millimolar concentrations were injected into the ESI (electrospray ionization) source of the instrument at a flow rate of 5 µL min−1 via the on-board syringe pump, using nitrogen as sheath and auxiliary gas (flow rates of 11 and 2 arbitrary units, respectively; 1 a.u. ≈ 0.37 L min−1). The other [ML·ClO2]− cluster anions (L = F, Br, I, ClO3; M = Li, Na) investigated in this work were obtained from millimolar solutions of 1:1 ML and NaClO2 salts dissolved in water-acetonitrile (1:1). To generate chlorite cluster ions and optimize the ion transmission, the spray voltage was tuned in the 1.8-3.2 kV range, whereas the capillary temperature was set at 275 °C. The distribution of the ionic aggregates strictly depends on the capillary and tube lens voltages. Hence, these parameters were in turn optimized to increase the signal intensity of the parent ion under investigation. Once generated, reagent ions were transferred to the vacuum region of the trap, mass-to-charge isolated, and reacted with sulphur dioxide. Each reaction product was then mass-selected by a further step of isolation, typical of the MSn experiments performed by ion trap mass spectrometers, and the consecutive reactivity of these species towards SO2 was probed to unravel a complete reaction picture. Furthermore, the ionic reactants and products were structurally characterized by collision-induced dissociation (CID) experiments, performed by increasing the energy of the mass-selected ions in the presence of helium as collision gas (pressure of ca. 3 × 10−3 Torr).
Depending on the species of interest, normalized collision energies between 20% and 40% were typically applied, with an activation time of 30 ms. Ions were isolated with a window of 1 m/z, and the Q value was optimized to ensure stable trapping fields for all the ionic species under investigation.

Sulphur dioxide was introduced into the trap through a deactivated fused-silica capillary entering the vacuum chamber from a 6.25 mm hole in the backside of the mass spectrometer. The pressure of the neutral gas was kept constant by a metering valve and measured with a Granville-Phillips Series 370 Stabil-Ion vacuum gauge. Owing to the position of the Pirani gauge, the actual sulphur dioxide pressure was estimated after calibration of the pressure reading [66]. Typical pressures of SO₂ ranged between 1.1 × 10⁻⁷ and 8.0 × 10⁻⁷ Torr; the associated uncertainty was estimated to be ±30%. The signals of the ionic reactant and products were monitored over time as a function of the neutral concentration, and for each reaction time an average of 10 scan acquisitions was recorded. The normalized collision energy was set to zero and the activation Q value was optimized to ensure stable trapping fields for all the ions. Xcalibur 2.0.6 software was used to acquire all the displayed mass spectra.

All the reactions can be regarded as pseudo-first-order processes because of the excess of neutral gas relative to the reactant ions in the trap. The DynaFit4 software package [67] was used to perform nonlinear least-squares regression, simultaneously fitting the reactant and product concentrations versus time. The experimental data from the kinetic analyses were fitted to a mathematical model consistent with the postulated reaction mechanism. To check the validity of the kinetic schemes, the obtained unimolecular rate constants were used to simulate the time progress of the reactions with the kinetic simulation function of DynaFit4. Bimolecular rate constants k (cm³ molecule⁻¹ s⁻¹) were obtained by dividing the pseudo-first-order constants (s⁻¹) by the concentration of the neutral reagent gas. The branching ratios between the three channels (OIT, OAT, DOT) were calculated from the formation rate constants of the primary direct products of each reactive species. The reaction efficiency was calculated as the ratio of the bimolecular rate constant k to the collision rate constant (k_coll), according to the average dipole orientation (ADO) theory [68]. To ensure the accuracy of the k values, approximately 15 independent measurements for each precursor ion were performed on different days over a sevenfold neutral pressure range. The standard deviation of the fitting parameters of the kinetic model is usually 10-20%, whereas the uncertainty attached to the measurement of the neutral pressure is typically ±30%.
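For a single, well-separated channel, the analysis described above reduces to an exponential fit followed by a unit conversion. The sketch below illustrates that step under simplifying assumptions (an idealized single-channel decay, gas at an assumed temperature of 298 K); the function name is ours, the multi-channel fits in this work were performed with DynaFit4, and k_coll would be taken from ADO theory.

import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def bimolecular_k(t, signal, p_torr, temp_k=298.0):
    """k (cm^3 molecule^-1 s^-1) from a pseudo-first-order reactant decay.

    t: reaction times (s); signal: normalized reactant-ion intensities;
    p_torr: calibrated SO2 pressure (Torr); temp_k: assumed gas temperature (K).
    """
    n = (p_torr * 133.322) / (KB * temp_k) * 1e-6  # SO2 number density, cm^-3
    slope, _ = np.polyfit(t, np.log(signal), 1)    # ln I(t) = -k_obs * t + const
    return -slope / n                              # k = k_obs / [SO2]

# Example: a decay with k_obs = 0.35 s^-1 at 4.0e-7 Torr of SO2 gives
# k of about 2.7e-11 cm^3 molecule^-1 s^-1, the order of magnitude in Table 4.
t = np.linspace(0.0, 10.0, 20)
k = bimolecular_k(t, np.exp(-0.35 * t), 4.0e-7)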
Conclusions

Mass spectrometry has been used to elucidate the gas-phase reactivity of [NaL·ClO₂]⁻ cluster anions (L = ClOₓ⁻ with x = 0-3) with sulphur dioxide. These charged species can be taken as simplified models of large-scale reactions occurring in solution or in flue-gas desulphurization processes accomplished with sodium chlorite solutions. The kinetic analysis has shown that SO₂ is efficiently oxidised by oxygen atom transfer (OAT), oxygen ion transfer (OIT), and double oxygen transfer (DOT), giving SO₃, SO₃·⁻ and SO₄·⁻, respectively. In the case of the OIT and DOT processes, an intracluster reaction was observed, by which the oxidised ionic forms of SO₂, namely SO₃·⁻ and SO₄·⁻, remain within the cluster and are not released as free species. The results reported here show that when ClO₂⁻ is ligated to a non-redox-active molecule, the complexation leads to a moderate reduction in the rate of the oxidation processes without substantially influencing the branching ratio. This effect contrasts, not surprisingly, with what is observed in solution, where dissolved salts increase SO₂ capture by increasing the ionic strength of the solution. In the gas phase, the direct and strong interaction of the chlorite anion with the ligand is detrimental to the reaction rate. However, the effect of redox-active ligands, metallic or metal-free, could be quite different, as suggested by the reactivity observed with [ClO₂·Na·ClO₂]⁻, in which the second reactive ClO₂⁻ moiety succeeds in increasing the rate of the oxidation. Therefore, ligation with a redox-active group other than chlorite could make it possible to tune the oxidation processes.

Supplementary Materials: The following are available online.
Figure S1: full-scan mass spectrum of a NaClO₂ salt solution;
Figure S2: ion-molecule reaction between the isolated [Cl·Na·ClO₂]⁻ cluster ion and SO₂;
Figure S3: CID mass spectrum of the [Cl·Na·SO₃]·⁻ ion at m/z 138;
Figure S4: CID mass spectrum of the [Cl·Na·SO₄]·⁻ ion at m/z 154;
Figure S5: CID mass spectrum of the [Cl·SO₄]⁻ ion at m/z 131;
Figure S6: ion-molecule reaction between the isolated [ClO₂·Na·ClO₂]⁻ cluster ion at m/z 157 and SO₂;
Figure S7: CID mass spectrum of the [ClO₂·Na·SO₃]·⁻ product ion at m/z 170;
Figure S8: CID mass spectra of (a) the [ClO·Na·ClO₂]⁻ product ion at m/z 141 and (b) the [Cl·Na·ClO₃]⁻ standard ion at m/z 141;
Figure S9: mass spectra of the ion-molecule reactions of (a) the [ClO·Na·ClO₂]⁻ ion at m/z 141 and (b) the [Cl·Na·ClO₃]⁻ standard ion at m/z 141 towards SO₂;
Figure S10: mass spectrum of the ion-molecule reaction between the [Cl·Na·ClO₂]⁻ consecutive product ion at m/z 125, MSⁿ-isolated from the reaction sequence m/z 157 → m/z 141 → m/z 125, and SO₂;
Figure S11: magnified plot of the kinetics reported in Figure 4;
Figure S12: mass spectra of the ion-molecule reactions of (a) the MSⁿ-isolated [Na·SO₄]⁻ product ion at m/z 119 and (b) the [Na·SO₄]⁻ standard ion at m/z 119 towards SO₂;
Figure S13: mass spectrum of the ion-molecule reaction between the [ClO₂·Na·SO₃]⁻ product ion at m/z 170, MSⁿ-isolated from the reaction sequence m/z 157 → m/z 170, and SO₂;
Figure S14: CID mass spectrum of the product ion at m/z 183;
Figure S15: ion-molecule reaction of the MS³-isolated ionic population at m/z 154 towards SO₂.
Effective production of kojic acid in engineered Aspergillus niger

Background: Kojic acid (KA) is a widely used compound in the cosmetic, medical, and food industries, and is typically produced by Aspergillus oryzae. To meet increasing market demand, it is important to optimize KA production by seeking alternatives that are more economical than current A. oryzae-based methods.

Results: In this study, we achieved the first successful heterologous production of KA in Aspergillus niger, an industrially important fungus that does not naturally produce KA, through the expression of the kojA gene from A. oryzae. Using the resulting KA-producing A. niger strain as a platform, we identified four genes (nrkA, nrkB, nrkC, and nrkD) that negatively regulate KA production. Knocking down nrkA or deleting any of the other three genes resulted in a significant increase in KA production in shaking-flask cultivation. The highest KA titer (25.71 g/L) was achieved in a pH-controlled batch bioreactor using the kojA overexpression strain with a deletion of nrkC, a 26.7% improvement over the KA titer (20.29 g/L) achieved in shaking-flask cultivation.

Conclusion: Our study demonstrates the potential of using A. niger as a platform for studying KA biosynthesis and regulation, and for the cost-effective production of KA in industrial strain development.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12934-023-02038-w.

The results of these isotope labeling studies suggest that the direct conversion of glucose to KA, without breaking the pyranose ring, is a major pathway of KA formation in A. oryzae. The genes involved in KA biosynthesis in A. oryzae were first identified by Terabayashi et al. [13]. In their pioneering work, they revealed that two closely linked genes in the genome of A. oryzae, AO090113000136 (a FAD-dependent oxidoreductase, named kojA) and AO090113000138 (a major facilitator superfamily (MFS) transporter, named kojT), might be responsible for the production of KA [13]. KojA and KojT were proposed to be directly involved in the biosynthesis and secretion processes, respectively. A further study indicated that a third gene located between kojA and kojT, AO090113000137 (a Zn(II)₂Cys₆ transcription factor encoding gene, named kojR), is also involved in KA production through the regulation of the transcriptional expression of kojA and kojT [14]. In addition to the pathway-specific regulator kojR, the KA biosynthetic genes can also be regulated by more global regulators, including the global transcriptional regulation gene laeA and the nitrate transporter-encoding gene nrtA [15,16]. It is believed that the culture period-dependent production of KA is related to these global regulators. Studies have also shown that KA production may be regulated by numerous other factors in A. oryzae, such as KpeA, Aokap1, Aokap2, Aokap4, and Aokap6 [17-21]. To date, kojA is the only enzyme-encoding gene confirmed to be involved in the biosynthesis of KA. However, the specific reaction catalyzed by KojA in the KA biosynthesis pathway remains unknown. It is also not clear whether any other pathway-specific genes are required for the process. This lack of understanding of the KA biosynthesis pathway hinders efforts to genetically improve KA production. To gain a better understanding of KA biosynthesis and regulation, we decided to study it in the heterologous host A. niger.
A. niger was selected for several reasons. Firstly, its conidia are uninucleate, in contrast to the multinucleate conidia of A. oryzae (which typically contain two to four or more nuclei) [22], making genetic manipulation of A. niger easier than that of A. oryzae. Secondly, A. niger has a long history of safe use in the production of enzymes and organic acids, and has shown excellent performance in the production of organic acids such as citric acid and malic acid [23,24]. This makes A. niger a good candidate for the development of an acidogenic chassis. In addition, A. niger grows on a wide range of substrates under various environmental conditions [25], which can be helpful in establishing a cost-effective fermentation process.

In this study, we achieved the heterologous production of KA in A. niger for the first time by reconstructing the biosynthetic pathway from A. oryzae. Starting from the KA-producing A. niger, we constructed mutant strains with knockout or knockdown of each of the homologs of the genes in the predicted gene cluster for KA biosynthesis (ranging from AO090113000132 to AO090113000145 in A. oryzae). From this mutant library, we identified four genes with negative regulatory functions in KA production. These findings demonstrate that A. niger is a useful platform for studying KA biosynthesis and regulation, and that A. niger-based cell factories have significant potential for creating industrial strains for cost-effective KA production.

In silico comparison of the putative KA biosynthetic genes of A. oryzae to the A. niger strain ATCC 1015 genome

It has been reported that three genes, AO090113000136 (kojA), AO090113000137 (kojR) and AO090113000138 (kojT), are involved in the biosynthesis of KA in A. oryzae RIB40 [13,14]. These three closely linked genes are located in a gene cluster ranging from AO090113000132 to AO090113000145 [13]. Comparative genomics of Aspergillus nidulans, Aspergillus fumigatus, and A. oryzae also showed that this gene cluster is specific to A. oryzae [26]. Many secondary metabolism-related genes are clustered in fungal genomes [27], so the genes in the A. oryzae-specific gene cluster may have functions related to KA biosynthesis. However, the roles in KA production of most of the genes in the cluster, aside from kojA, kojR, and kojT, have not been well studied. To determine whether homologs of the putative KA biosynthetic genes are present in A. niger, we performed a homology search using BLAST against the A. niger genome sequence available from NCBI. The alignment sequences with the highest similarities were selected. As shown in Table 1, homologs of most of the genes in the putative KA biosynthetic gene cluster were found in the genome of A. niger, except for kojA and AO090113000145. All these genes have high sequence similarity to their homologs in A. niger (between 50 and 88%). It is worth noting that AO090113000141 and AO090113000142 match the same gene (ASPNIDRAFT_209619) in the genome of A. niger. AO090113000141 and AO090113000142 encode proteins of 243 and 187 amino acids, respectively, while their homolog (ASPNIDRAFT_209619) in A. niger encodes a protein of 673 amino acids. Sequence alignment showed that the proteins encoded by AO090113000141 and AO090113000142 align well with the central and C-terminal parts of the protein encoded by ASPNIDRAFT_209619, respectively (Additional file 1: Fig. S1), indicating that gene fusion/fission occurred during the evolution of the corresponding proteins. Therefore, for 11 of the 13 genes in the putative KA biosynthetic gene cluster, 10 homologs (corresponding to 10 genes scattered across different loci) were found in the genome of A. niger (Table 1). The A. niger genome does not have close homologs of either kojA or AO090113000145.
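This kind of homology search can be reproduced with standard tools; the short sketch below uses Biopython's NCBI web-BLAST interface as an illustration. The choice of the nr database, the organism filter string, and the function name are assumptions for the example, not the exact settings used in this study.

from Bio.Blast import NCBIWWW, NCBIXML

def best_aniger_hit(protein_seq):
    """Return (hit title, % identity, e-value) of the top A. niger hit, or None."""
    handle = NCBIWWW.qblast(
        "blastp", "nr", protein_seq,
        entrez_query="Aspergillus niger[Organism]",  # restrict hits to A. niger
    )
    record = NCBIXML.read(handle)
    if not record.alignments:
        return None                       # no close homolog (the kojA case)
    hsp = record.alignments[0].hsps[0]    # best HSP of the top-scoring alignment
    identity = 100.0 * hsp.identities / hsp.align_length
    return record.alignments[0].title, identity, hsp.expect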
Heterologous production of KA in A. niger

As analyzed above, A. niger lacks homologs of kojA and AO090113000145 among the genes in the putative KA biosynthetic cluster. Given that kojA is an enzyme-encoding gene that has been confirmed to be involved in KA biosynthesis, we focused on kojA for the reconstitution of the KA biosynthesis pathway in A. niger. The expression of kojA in A. oryzae depends on the growth phase and culture conditions [13]. Its expression is also regulated by the pathway-specific regulator KojR [14], the global regulator LaeA [15], and several other regulators such as NrtA, KpeA, and Aokap1 [16-18]. To ensure its expression in A. niger, the kojA gene was placed under the control of the promoter of glyceraldehyde-3-phosphate dehydrogenase (PgpdA), a strong and constitutive endogenous promoter in A. niger [28], to create the expression cassette for kojA (Fig. 2a). Citric acid and oxalic acid are the two major organic acids produced by A. niger ATCC 1015. To redirect glucose metabolic flux towards KA in the engineered A. niger, we used the cexA and oahA double-deletion strain (A. niger S834), an A. niger ATCC 1015 derivative that is unable to produce citric acid and oxalic acid [29], as the host strain for the reconstitution of the KA biosynthesis pathway. The kojA overexpression cassette was integrated into the genome of A. niger S834 through Agrobacterium-mediated transformation (AMT) to obtain the kojA overexpression strain A. niger S1991 (OEkojA). The successful integration of the expression cassette was verified by PCR (Additional file 1: Fig. S2), and qRT-PCR confirmed the high expression of the introduced kojA gene in A. niger S1991 (Fig. 2b). A. niger S1991 (OEkojA) was cultivated in the KA production medium, and the formation of KA was monitored in the supernatant using a colorimetric method [30]. The parental strain A. niger S834 was cultivated under the same conditions as a negative control. As shown in Fig. 2c, a red color developed in the cultivation medium of A. niger S1991 after fermentation for 5 days when the colorimetric method was used to detect KA, while no color reaction was observed in the cultivation medium of strain A. niger S834, indicating the successful production of KA in strain A. niger S1991. HPLC analysis further confirmed the production of KA in the culture of A. niger S1991 (Fig. 2c). The KA titer at 7 days of culture reached 5.53 g/L in strain S1991, whereas no KA production was detected in the parental strain S834 (Fig. 2d). This demonstrates that the introduction of a single gene, kojA, is sufficient for KA production in A. niger. Considering that AO090113000145 and its homologs were not included in the engineered KA-producing A. niger S1991, this gene appears to be dispensable for the heterologous production of KA.

Roles of homologues of the putative KA biosynthesis genes in the production of KA in A. niger

As previously mentioned, 10 homologs corresponding to the 11 putative KA biosynthetic genes from A. oryzae were found in the genome of A. niger (Table 1). To determine the roles of these homologs in the production of KA, we attempted to construct disruption mutants for each of them in the genetic background of the KA-producing A. niger strain constructed above.
To do this, we first eliminated the hygromycin resistance marker (hph) in A. niger S1991 using the Cre-loxP system [31], so that the hygromycin selection marker could be reused in the subsequent round of transformation. The successful excision of the hph gene, which confers resistance to hygromycin, was confirmed by PCR (Additional file 1: Fig. S3), and the resulting marker-less strain was designated A. niger S2132. Starting with A. niger S2132, gene deletion experiments were performed separately for all 10 homologs by hph gene replacement through homologous recombination.

(Fig. 2 caption: b Expression levels of kojA in the parent strain S834 and the kojA overexpression strain S1991 at 5 d and 7 d after inoculation; all qRT-PCR measurements were normalized to the actin housekeeping gene. c KA forms a red chelated compound with ferric ions; the color reaction of a 5-day culture of S1991 with ferric ions indicates KA secretion by strain S1991, with similar results obtained by HPLC using commercial KA as a standard. d The amount of KA produced by A. niger S1991 in shake flasks for 5 and 7 days, determined by HPLC with commercial kojic acid as a standard.)

However, multiple attempts to obtain deletion mutants for ASPNIDRAFT_42619 and ASPNIDRAFT_56871 were unsuccessful, suggesting that both genes may be essential for the survival of A. niger. To overcome the difficulty of generating knockout strains for these two genes, we used RNA interference (RNAi) to repress the expression of ASPNIDRAFT_42619 and ASPNIDRAFT_56871, respectively. To do this, we used RNAi initiated by a hairpin construct, in which duplicate 500-bp sequences of the target gene were cloned as inverted repeats separated by a 101-bp spacer of green fluorescent protein (GFP) coding sequence, as previously described in Cryptococcus neoformans [32]. To drive the expression of the interfering RNA, we used the promoter of the pyruvate kinase A gene (PpkiA) [28], a strong constitutive promoter used in A. niger. RNAi cassettes targeting ASPNIDRAFT_42619 and ASPNIDRAFT_56871 were constructed (Fig. 3a) and introduced into A. niger S2132 through AMT. The correct insertion of the RNAi cassettes was confirmed by PCR, and the confirmed RNAi strains were designated A. niger S2930 (RNAi-ASPNIDRAFT_42619) and A. niger S2933 (RNAi-ASPNIDRAFT_56871), respectively. qRT-PCR was conducted to measure the expression levels of ASPNIDRAFT_42619 and ASPNIDRAFT_56871 in A. niger S2930 and A. niger S2933, respectively. The results showed that the expression level of ASPNIDRAFT_42619 in A. niger S2930 was 12% of that in the control strain A. niger S2132 (Fig. 3b), and the expression level of ASPNIDRAFT_56871 in A. niger S2933 was 14% of that in the control strain (Fig. 3c). These results indicate that the expression of ASPNIDRAFT_42619 and ASPNIDRAFT_56871 was significantly suppressed in the corresponding RNAi strains.
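The layout of the hairpin cassette described above can be summarized in a few lines of Python; the sketch below assembles the inverted-repeat insert (sense fragment, GFP spacer, antisense fragment) from placeholder sequences and omits the promoter and terminator. The function names and sequence handling are ours, for illustration only.

# Build the inverted-repeat insert of a hairpin RNAi cassette:
# 500 bp of the target CDS + 101-bp GFP spacer + reverse complement.
COMP = str.maketrans("ACGTacgt", "TGCAtgca")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def hairpin_insert(target_cds, gfp_sequence):
    sense = target_cds[:500]        # 500-bp sense fragment of the target gene
    spacer = gfp_sequence[:101]     # 101-bp GFP spacer between the repeats
    return sense + spacer + revcomp(sense)  # transcript folds into a hairpin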
Out of the 10 constructions, comprising 8 gene deletion mutants and 2 RNAi strains, 9 had colony morphologies on PDA plates similar to that of the parent strain A. niger S2132. However, the RNAi strain A. niger S2933 (RNAi-ASPNIDRAFT_56871) showed a severely reduced conidiation phenotype (Fig. 4a), indicating that ASPNIDRAFT_56871 may play a crucial role in the morphological development of A. niger. These 10 constructions were also cultivated in the KA production medium at 28 °C for 7 days, and the production of KA was monitored in the supernatant. The parental strain A. niger S2132 was grown under the same conditions as a control. As shown in Fig. 4b, the production of KA was significantly increased in strains S2933 (RNAi-ASPNIDRAFT_56871), S2430 (ΔASPNIDRAFT_209619), S2435 (ΔASPNIDRAFT_186610) and S2437 (ΔASPNIDRAFT_131173). These strains produced 1.82-fold (9.95 g/L), 3.67-fold (20.05 g/L), 3.71-fold (20.29 g/L), and 3.60-fold (19.70 g/L) the titer achieved in the control strain S2132 (5.47 g/L), respectively, after fermentation for 7 days. This suggests that these four genes function as negative regulators of KA production. Therefore, the four genes ASPNIDRAFT_56871, ASPNIDRAFT_209619, ASPNIDRAFT_186610 and ASPNIDRAFT_131173 were designated nrkA (negative regulator of KA production A), nrkB, nrkC and nrkD, respectively, in this study. The production of KA in the remaining 6 strains did not show a statistically significant difference from that of the parental strain (Fig. 4b), indicating that the corresponding 6 genes may not be involved in KA biosynthesis.

(Fig. 3 caption: Construction of A. niger strains with RNA interference targeting ASPNIDRAFT_42619 and ASPNIDRAFT_56871, respectively. a Illustration of RNAi cassettes designed with inverted repeats of 500 bp of the coding sequence of the gene of interest separated by a spacer segment of GFP sequence; pLH1738 was used to interfere with ASPNIDRAFT_42619 expression, pLH1739 with ASPNIDRAFT_56871 expression. b qRT-PCR analysis of target gene expression in the parent strain S2132 and the RNAi strains S2930 and S2933; results were standardized against actin, with S2132 expression set arbitrarily to 1.)

Effects of multiple gene disruption (silencing) on KA production in A. niger

As demonstrated above, four genes (nrkA, nrkB, nrkC and nrkD) that function as negative regulators of KA production were identified in A. niger. We then sought to determine whether the combined disruption of these negative-regulator-encoding genes could further increase KA production. To do this, we first eliminated the hph gene from the high-yielding strain S2435 (ΔnrkC) using the Cre-loxP system. The resulting marker-less strain, A. niger S2743 (ΔnrkC), was used as the starting strain for the next round of transformation.

(Fig. 4 caption: Screening of genes related to KA biosynthesis using KA-producing A. niger as a platform. a Colony phenotype of the 10 mutant strains grown on PDA for 4 days: the marker-less kojA overexpression strain S2132 (OEkojA) used as the parent strain, ASPNIDRAFT_50239 deletion mutant S2624 (Δ50239), ASPNIDRAFT_171597 deletion mutant S2922 (Δ171597), RNAi strain targeting ASPNIDRAFT_42619 S2930 (RNAi-42619), ASPNIDRAFT_189096 deletion mutant S2924 (Δ189096), ASPNIDRAFT_43217 deletion mutant S2929 (Δ43217), ASPNIDRAFT_53284 deletion mutant S2626 (Δ53284), RNAi strain targeting ASPNIDRAFT_56871 S2933 (RNAi-56871), ASPNIDRAFT_209619 deletion mutant S2430 (Δ209619), ASPNIDRAFT_186610 deletion mutant S2435 (Δ186610), and ASPNIDRAFT_131173 deletion mutant S2437 (Δ131173). b KA production by the parent strain S2132 and the 10 derivative mutant strains, listed as above; the KA titers in shake flask cultivations at 5 d and 7 d are shown.)
When we attempted to delete the remaining three negative regulator encoding genes from strain S2743 through homologous recombination, only nrkD was successfully deleted. The resulting strain was designated as A. niger S2684 (ΔnrkC, ΔnrkD). The failure to delete nrkA in strain S2743 (ΔnrkC) is consistent with our previous results from the single gene deletion study in A. niger S2132 (OEkojA). However, in contrast to our successful deletion of nrkB in A. niger S2132 (OEkojA), the failure to delete nrkB in S2743 (ΔnrkC) suggests that nrkB and nrkC may have redundant functions in an essential cellular physiological process. We then used RNAi technology to knockdown the expression of nrkA and nrkB in the genetic background of nrkC and nrkD double deletion. Starting with A. niger S2684, and using the Cre-loxP system to efficiently recycle selection marker (hph), we performed two rounds of RNAi cassette transformation to obtain A. niger S3058 (RNAi-nrkA, ΔnrkC, ΔnrkD) and A. niger S3119 (RNAi-nrkA, RNAi-nrkB, ΔnrkC, ΔnrkD) respectively. The construction details of plasmids and strains are described in the Method part. A. niger S2743 (ΔnrkC), A. niger S2684(ΔnrkC, ΔnrkD), A. niger S3058 (RNAi-nrkA, ΔnrkC, ΔnrkD) and A. niger S3119 (RNAi-nrkA, RNAi-nrkB, ΔnrkC, ΔnrkD) were inoculated in PDA, and the colony phenotype was compared. As shown in Fig. 5a, both S3058 and S3119 displayed severely reduced conidiation phenotype, which is similar with that of S2933 (RNAi-nrkA). The expression of nrkA is downregulated in all three strains by RNAi. The results further support the putative function of nrkA involved in the morphological development of A. niger. The four strains were cultivated in the KA production medium for 7 days, and the production of KA was monitored in the culture supernatant using HPLC. As shown in Fig. 5b, KA production did not significantly differ among them, indicating that multiple gene knockout (knockdown) of the four negative regulator encoding genes can not further increase KA production. KA production in pH controlled batch cultures In this study, we also evaluated KA production in a pH controlled bioreactor using the A. niger strain S2435 (OEkojA, ΔnrkC), which contains the least genetic modification and displays efficient KA production activity in shake flask cultivations. The bioreactor was operated at pH 6.0 by adding HCl or NaOH as needed based on pH sensor feedback. The same medium used in shake flask cultivations, but without the addition of MES, was used in the bioreactor. As shown in Fig. 6, cell growth reached its maximum after 5 days and obvious KA accumulation can be detected at48 hours after inoculation, increasing steadily to reach 21.39 g/L after 6 days of fermentation. After that, KA productivity decreased and the titer increased slowly, reaching a maximum of 25.71 g/L after 8 days of fermentation. A similar trend was observed in glucose uptake, with the rate increasing at 48 h after inoculation and remaining constant until the 6th day of fermentation. After that, the glucose consuming rate decreased and 34 g/L of glucose still remained after 8 days of fermentation when the KA titer reached its maximum. After 7 days of cultivation, the bioreactor fermentation with pH control (22.80 g/L) had a higher titer than MES-buffered shaking flask fermentation (20.29 g/L), indicating that MES supplementation can be avoided in controlled bioreactor fermentation. 
Discussion

To date, the production of KA in a heterologous host had not been reported, mainly because of the lack of clarity surrounding the KA biosynthesis pathway. More than ten years have passed since the three genes kojA, kojR and kojT were identified as being involved in the KA biosynthesis process in A. oryzae [13]. However, to this day, no biosynthetic intermediates have been identified in the KA biosynthesis process, and the exact number of genes essential for KA production remains unknown. Based on the structural differences between glucose and KA, it is believed that at least one oxidation step (CHOH → CO) and two dehydration steps are required for the conversion of glucose to KA (though the exact order is unknown). Therefore, it has been predicted that at most two or three enzymes are needed for KA biosynthesis [1]. KA production is limited to a small number of species within Aspergillus, Acetobacter, and Penicillium [1]; A. niger does not produce any detectable KA. In this study, we report for the first time the heterologous production of KA in A. niger through introduction of the kojA gene from A. oryzae. The protein encoded by kojA is predicted to be a FAD-dependent oxidoreductase, and it is unlikely that KojA alone has the activity for the full transformation from glucose to KA. Our results therefore suggest that the direct precursor of the KojA-catalyzed reaction is available in A. niger, and further studies on the KojA-catalyzed reaction in A. niger will contribute to a better understanding of the KA biosynthesis pathway.

Our finding that the introduction of kojA into A. niger results in KA production suggests the presence of an endogenous transporter for exporting KA in this organism. AO090113000138 (kojT), a gene encoding an MFS transporter, was reported to be the major transporter gene responsible for KA transport in A. oryzae [13]. Upon deletion of ASPNIDRAFT_43217, the closest homolog of kojT in A. niger, there was no significant change in the KA yield of the resulting deletion strain compared with the parent strain S2132. This suggests that other genes in A. niger play a more important role in transporting KA out of the cell. A BlastP analysis showed that six more homologs with 60% or higher protein sequence similarity to KojT exist in the A. niger genome (ASPNIDRAFT_132090, ASPNIDRAFT_174815, ASPNIDRAFT_181773, ASPNIDRAFT_183073, ASPNIDRAFT_207820, ASPNIDRAFT_39368). Further genetic studies on these candidate genes will be helpful in identifying all the KA transporter-encoding genes in A. niger.

Of the 13 genes in the putative KA biosynthetic gene cluster (from AO090113000132 to AO090113000145 in the A. oryzae genome), besides the three closely linked genes kojA, kojR, and kojT, AoKap4 (AO090113000139) and Aokap6 (AO090113000133) were reported to also contribute to KA production in A. oryzae [19,20].

(Fig. 6 caption: Kinetics of cell growth and kojic acid production by A. niger S2435 in 2 L controlled batch bioreactors. KA production, dry cell weight and residual glucose were determined. The results shown are from a single representative experiment.)
AoKap4 and Aokap6, encoding an MFS protein and a protein of unknown function, respectively, were reported to positively regulate KA production upstream of kojT and kojA in A. oryzae [19,20]. However, deletion of ASPNIDRAFT_53284 (the closest homolog of AoKap4) and ASPNIDRAFT_171597 (the closest homolog of Aokap6) in A. niger S2132 resulted in KA production similar to that of the parent strain S2132. These findings suggest that the regulation patterns of KA production differ between the native producer A. oryzae and the engineered KA producer A. niger S2132.

In this study, we identified four genes (nrkA, nrkB, nrkC and nrkD) that negatively regulate KA production in A. niger after screening a library of 10 mutant strains. Our study showed that single knockout (or knockdown) of each of the four negative regulators leads to increased KA production in the resulting strain, while the combined knockout (knockdown) of all four genes does not further enhance KA production, suggesting that the four genes may participate in a shared biological process that affects the precursor supply or pathway gene expression for KA production in A. niger. Among the four negative-regulator-encoding genes, nrkB encodes a putative protein containing a GAL4-like Zn2Cys6 binuclear cluster DNA-binding domain and a fungal_TF_MHR domain; a similar domain composition is present in a large family of fungal zinc cluster transcription factors [33]. Considering that kojA in these KA-producing A. niger strains is driven by PgpdA, a constitutive promoter widely used in A. niger [28], we speculate that NrkB might regulate other, as yet unknown, gene(s) involved in the biosynthesis of KA. nrkD encodes a sulfatase domain-containing protein; sulfatases are enzymes that catalyze the hydrolysis of sulfate ester bonds in a wide variety of substrates [34]. The remaining two genes (nrkA and nrkC) encode proteins of unknown function that do not show any similarity to characterized proteins. The varied functions of the four genes point to a complex regulation mechanism of KA biosynthesis in A. niger, and more studies are needed to clarify the exact mechanism behind it. Further research to elucidate the regulation mechanisms and functions of the four negative regulators is ongoing in our laboratory.

KA has various applications in fields such as the cosmetic industry, medicine, and the food industry [3]. To meet the increasing market demand, it is crucial to optimize KA production by seeking alternatives that are more economical and have a higher production yield than current A. oryzae-based methods. To the best of our knowledge, our work represents the first demonstration of KA production in a heterologous host. A. niger is one of the most important industrial filamentous fungal species. It is able to grow over a wide temperature range of 6-47 °C and an extremely wide pH range of 1.4-9.0, and it can ferment various cheap raw materials [25]. A. niger has shown advantages over other microorganisms for the commercial production of organic acids, including citric acid and gluconic acid [23,35]. In this study, we show that highly efficient KA-producing A. niger strains can be obtained through just two steps of genetic manipulation: the introduction of a foreign gene (kojA) plus the knockout (or knockdown) of an endogenous gene (nrkA, nrkB, nrkC or nrkD). As shown in Fig. 6, the KA titer of the engineered A. niger can reach up to 25.71 g/L, which is superior to most wild-type KA-producing strains [36].
It should be noted that the acid production medium used in this study was modified from the medium used for A. oryzae [13] and may not be optimal for A. niger. As shown in Fig. 6, glucose was not fully consumed during batch fermentation, and further research into fermentation process optimization is ongoing. The results of our study strongly support the notion that A. niger-based cell factories have the potential to create industrial strains for cost-effective KA production.

Conclusion

In this study, we demonstrate the successful reconstitution of the KA biosynthesis pathway in the heterologous host A. niger by introducing the kojA gene from A. oryzae. Using the KA-producing A. niger strain (OEkojA) as a platform, we constructed a mutant library consisting of 10 mutant strains, including 8 gene deletion strains and 2 RNAi strains. Through screening of this mutant library, we identified four genes (nrkA, nrkB, nrkC, and nrkD) that function in the negative regulation of KA production. The best-performing strain (OEkojA, ΔnrkC) achieved a KA titer of 20.29 g/L after 7 days of fermentation in a shaking flask. This efficient KA production was also maintained when the strain was cultivated in MES-free medium in a controlled batch bioreactor, reaching a titer of 25.71 g/L after 8 days of fermentation. These results demonstrate that the engineered KA-producing A. niger can serve as a useful platform for the study of KA biosynthesis and regulation, and that A. niger-based cell factories have significant potential for the cost-effective production of KA.

Strains and growth conditions

All strains used in this study are listed in Table 2. The A. niger strain S834, derived from A. niger ATCC 1015, was used as the parent strain [29]; all other transformants in the study were derived from A. niger S834. A. niger strains were cultured at 28 °C on potato dextrose agar (PDA) medium supplemented with 250 μg/mL hygromycin B when required [38]. Complete medium (CM) was used for transformant screening, and minimal medium (MM) was used for selecting the glufosinate resistance marker (bar) and for inducing the elimination of the hygromycin B phosphotransferase gene (hph) cassette (loxP-hph-loxP) integrated into the genomes of transformants [37]. Escherichia coli JM109, used for constructing and amplifying plasmids, was grown at 37 °C in Luria-Bertani (LB) medium supplemented with 100 μg/mL kanamycin as needed. Agrobacterium tumefaciens AGL-1, used for Agrobacterium-mediated transformation (AMT) of A. niger, was grown at 28 °C on LB supplemented with 100 μg/mL kanamycin [37]. The KA fermentation medium used in shake flask cultivation (10% glucose, 0.25% yeast extract, 0.1% K₂HPO₄, 0.05% MgSO₄·7H₂O, 0.75 M 2-morpholinoethanesulphonic acid (MES), pH 6.0) was modified from the medium used for KA production in A. oryzae [13].

Bioinformatic analyses

To compare the putative coding sequences of the 13 genes (ranging from AO090113000132 to AO090113000145) in the genome of A. oryzae, BlastP searches were conducted against the genome of the A. niger ATCC 1015 strain (ACJE00000000.1) (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The alignment sequence with the highest similarity was selected as the closest homolog for each search. Multiple sequence alignment analysis between the A. oryzae genes and their homologs in A. niger was performed using the Clustal Omega program (https://www.ebi.ac.uk/Tools/msa/clustalo/).
Construction of plasmids

All plasmids used in this study are listed in Table 2, and all primers are listed in Additional file 1: Table S1.

kojA overexpression plasmid: the kojA overexpression plasmid (pLH1081) was derived from plasmid pLH454 [31] by inserting the open reading frame (ORF) of kojA downstream of the glyceraldehyde-3-phosphate dehydrogenase promoter (PgpdA) in pLH454. First, PCR was performed using cDNA from Aspergillus oryzae as the template and the primer pair p3650/p3651. The PCR product was then digested with BamHI and EcoRI and ligated into the corresponding sites of pLH454 to obtain pLH1081.

Plasmids used for gene disruption: the recombinant plasmid pLH1527, used for deleting the gene ASPNIDRAFT_50239, was constructed using pLH594 as the parent vector [37], following the previously described procedure [37]. Specifically, A. niger ATCC 1015 genomic DNA was used as the template to amplify the upstream and downstream fragments of ASPNIDRAFT_50239 by PCR with the primer pairs P4567/P4568 and P4569/P4570, respectively. The resulting products were then digested and ligated sequentially into the flanks of the hygromycin resistance cassette (loxP-hph-loxP) in pLH594, yielding the ASPNIDRAFT_50239 deletion plasmid pLH1527. The same strategy was used to construct the recombinant plasmids pLH1735, pLH1736, pLH1737, pLH1526, pLH1496, pLH1497, and pLH1498, used for deleting ASPNIDRAFT_171597, ASPNIDRAFT_189096, ASPNIDRAFT_43217, ASPNIDRAFT_53284, ASPNIDRAFT_209619, ASPNIDRAFT_186610, and ASPNIDRAFT_131173, respectively.

Plasmids used for RNAi-mediated gene silencing: constructs for RNA interference (RNAi) were designed using inverted repeats of 500 bp of the coding sequence of the target gene separated by a spacer segment of green fluorescent protein (GFP) sequence, as described previously [32]. To construct the gene silencing vector, the recombinant plasmid pLH1453 was first created. It contains the hygromycin resistance cassette (loxP-hph-loxP), the pyruvate kinase A promoter (PpkiA), a spacer segment of GFP sequence, and the trpC terminator (TtrpC). It was obtained by inserting a spacer segment of GFP sequence downstream of the pkiA promoter in pLH509 [24]: PCR was performed using the eGFP gene as the template and the primer pair P3937/P3938, followed by digestion with KpnI and ligation into the corresponding sites of pLH509 to obtain pLH1453. The ASPNIDRAFT_42619 gene silencing vector pLH1738 was then constructed using pLH1453 as the parent vector. A portion of the coding sequence of ASPNIDRAFT_42619 was PCR-amplified from cDNA of A. niger ATCC 1015 using the primer pair P4237/P4238, and the antisense fragment of ASPNIDRAFT_42619 was PCR-amplified from the same cDNA using the primer pair P4239/P4240. The resulting products were digested and ligated sequentially into the flanks of the GFP spacer segment in pLH1453 to obtain the ASPNIDRAFT_42619 gene silencing plasmid pLH1738. The same strategy was used for the construction of pLH1739 (for ASPNIDRAFT_56871 silencing) and pLH1803 (for ASPNIDRAFT_209619 silencing).

Construction of strains

kojA over-expressing A. niger strain: the A. niger strain S1991 with overexpression of kojA was obtained by transforming pLH1081 into A. niger S834 through Agrobacterium-mediated transformation (AMT). The transformation process, as previously described by Xu et al. [31], involved introducing pLH1081 into A. niger S834 and screening transformants on PDA with 250 μg/mL hygromycin B.
PCR analysis was then used to confirm the integration of the kojA expression cassette, as shown in Additional file 1: Fig. S2. The verified strain was designated A. niger S1991.

Marker-less kojA over-expressing strain: the A. niger strain S2132, which over-expresses kojA and exhibits a hygromycin B-sensitive phenotype, was obtained by eliminating the hph selection marker from the genome of A. niger S1991 using the Cre-loxP system [31]. This process involved spreading approximately 400 conidia of S1991 on a modified MM plate supplemented with 30 μg/mL DOX, incubating at 28 °C for 5-7 days, and transferring the resulting clones to PDA plates with or without 250 μg/mL hygromycin B. The hygromycin B-sensitive colonies were selected and examined for hph excision by PCR analysis with the primer pair hph-F/hph-R.

Next, we introduced the RNAi plasmid pLH1739 (targeting nrkA) into A. niger S2991 to obtain A. niger S3058 (RNAi-nrkA, ΔnrkC, ΔnrkD). We then eliminated the hph selection marker from the genome of A. niger S3058 to obtain the marker-less strain A. niger S3067 (RNAi-nrkA, ΔnrkC, ΔnrkD). Finally, we introduced the RNAi plasmid pLH1803 (targeting nrkB) into A. niger S3067 to obtain A. niger S3119 (RNAi-nrkA, RNAi-nrkB, ΔnrkC, ΔnrkD). The downregulation of nrkA and nrkB was confirmed by qRT-PCR (Additional file 1: Fig. S5).

RNA purification and transcription analysis

Real-time quantitative reverse transcription PCR (qRT-PCR) was performed as previously described by Cao et al. [37]. Mycelia for RNA isolation were harvested from kojic acid production medium in shake flask cultivation. Total RNA was extracted from the shake flask culture using the E.Z.N.A.™ Fungal RNA Kit (Omega Bio-tek, Inc.) according to the manufacturer's protocol. Complementary DNA (cDNA) was synthesized from 300 ng of total RNA using the PrimeScript RT Reagent Kit (TaKaRa Biotechnology Co., Ltd.) according to the manufacturer's protocol. For real-time RT-PCR, reactions were prepared using the SYBR Premix Ex Taq II kit (TaKaRa Biotechnology Co., Ltd.) and run on a StepOnePlus Real-Time PCR System (Applied Biosystems). The threshold cycle (Ct) calculated for each gene amplification was normalized to the Ct of the reference gene beta-actin, and changes in gene expression levels between the selected transformants and the parental strain were calculated using the 2^-ΔΔCt formula. For the heterologous group, the relative gene expression levels between the mutant strain and the parent strain were analyzed using the 2^-ΔCt method. The primers used in this assay were designed to amplify partial cDNA sequences of the target genes and are listed in Additional file 1: Table S1.
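The 2^-ΔΔCt calculation described above is compact enough to show in full; the sketch below uses illustrative Ct values (not data from this study), chosen so that the result matches the roughly 12% residual expression reported for the RNAi strain S2930.

# Relative expression by the 2^-ddCt method, actin as the reference gene.
def rel_expression(ct_target_mut, ct_actin_mut, ct_target_par, ct_actin_par):
    d_mut = ct_target_mut - ct_actin_mut   # dCt in the mutant strain
    d_par = ct_target_par - ct_actin_par   # dCt in the parent strain
    return 2.0 ** -(d_mut - d_par)         # 2^-ddCt; parent level == 1

print(rel_expression(27.1, 18.0, 24.0, 18.0))  # ~0.12, i.e. 12% of parent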
Shaking flask fermentation of A. niger

To evaluate kojic acid production, 1 × 10⁶ conidia/mL of the engineered A. niger strain were inoculated into 50 mL of kojic acid fermentation medium in 250 mL Erlenmeyer flasks and incubated at 28 °C and 200 rpm for 7 days. Fermentation broths were collected at designated time points for kojic acid analysis or RNA extraction.

Bioreactor fermentation of A. niger

Seed cultures were prepared by inoculating the engineered A. niger and incubating for 24 h at 28 °C and 200 rpm in 250 mL Erlenmeyer flasks containing 50 mL of kojic acid fermentation medium without MES (10% glucose, 0.25% yeast extract, 0.1% K₂HPO₄, 0.05% MgSO₄·7H₂O). The seed culture was then inoculated into 1.26 L of the same MES-free kojic acid fermentation medium in a 2 L bioreactor (Baoxing Biological Engineering Co. Ltd, China) and fermented at 28 °C for 9 days. The pH was maintained at 6.0 by automatic addition of HCl (4 M) or NaOH (4 M). The stirring speed was set at 250 rpm and the air flow rate at 1 vvm (volume of air per volume of medium per minute). Fermentation broth was collected at 24 h intervals; the supernatant was used for determination of kojic acid production, and the mycelium was filtered through a pre-weighed microfiber filter and dried at 80 °C for dry cell weight measurement.

Analytical method

Kojic acid concentration was determined qualitatively using the colorimetric method of Bentley [30] and quantitatively by high-performance liquid chromatography (HPLC) as described by Ariff [39]. Glucose concentration was determined using an SBA-40E biosensor analyzer (Biology Institute of the Shandong Academy of Sciences, China).

Statistical analysis

All data points shown in this study represent average values from three independent experiments, with error bars representing standard deviations. Statistical analysis was performed using a two-tailed Student's t-test. Statistical significance was determined as follows: *P < 0.05, **P < 0.01, ***P < 0.001.
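For completeness, the significance test described above amounts to a single SciPy call; the sketch below uses made-up triplicate titers, not the data from this study.

from scipy import stats

parent = [5.4, 5.5, 5.5]       # g/L, control strain (illustrative values)
mutant = [20.1, 20.3, 20.5]    # g/L, deletion strain (illustrative values)

t, p = stats.ttest_ind(mutant, parent)   # two-tailed Student's t-test
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"t = {t:.2f}, p = {p:.2g} ({stars})")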
A Novel Architecture of Multimode Hybrid Powertrains for Fuel Efficiency and Sizing Optimization

Hybrid powertrains have been widely developed as eco-friendly systems and commercialized in the passenger vehicle market, with clear benefits over conventional powertrains. Accordingly, there have been various research topics on architectures of hybrid power systems to further improve system performance, and on sizing optimization for packaging and production cost reduction. In this study, a novel multimode power split hybrid architecture is suggested, which is based on multiple driving modes, such as input- and output-power split and parallel hybrid modes, in order to achieve fuel efficiency improvement and sizing optimization. The performance ability and sizing aspects of the invented system have been analyzed in comparison with the Toyota Hybrid System (THS), a typical power split hybrid architecture. The fuel efficiency of the suggested system has been compared by a backward-facing simulation with Dynamic Programming (DP) for representative driving test cycles from the Environmental Protection Agency (EPA). In terms of component sizing, the maximum torque and speed variation trends of the motors have been analyzed according to the velocity variation. In the simulation and analysis results, the invented system shows opportunities to improve fuel efficiency with multiple driving modes and to reduce the sizing of the power electronics, which is related to production cost reduction as well as vehicle packaging space minimization.

I. INTRODUCTION

Eco-friendly vehicles such as hybrid vehicles and pure electric vehicles have been widely researched and developed by most automotive companies for reasons such as unstable oil prices and governmental environmental regulations. In this situation, hybrid electric vehicles (HEVs) are currently the most popular alternatives to conventional vehicles among the different kinds of eco-friendly vehicles. Multiple types of hybrid systems have been commercialized by automotive manufacturers in the passenger vehicle market. There have been various research topics related to hybrid vehicle systems. The topics of recent studies include diverse research areas such as driving and regenerative control strategies of HEVs [1]-[3], optimal design of component sizing and electrical energy storage [4], [5], driving energy management strategies for HEVs [6], a design methodology for compound-split hybrid electric vehicles with the compound lever diagram [7], comparative analysis of classical energy management optimization with reinforcement learning [8], a parametric model for power split hybrid transmissions [9], and thermal management and analysis of electrified drive systems and hydraulic hybrids [10]-[13]. As for research on the architectures of hybrid systems, a number of studies have appeared over the past decades since the 1970s, including diverse recent studies such as a multimode power split hybrid transmission [14], hybrid powertrain architectures for four-wheel drive [15], series-parallel and power split architecture based dedicated hybrid transmissions (DHTs) [16], powertrain configurations of single-motor hybrid systems [17], and novel powertrain architectures with Ravigneaux and planetary gear trains [18].
In this study, a hybrid architecture based on multiple driving modes with power split and parallel hybrid modes, named the Dual Split Hybrid System (DSHS), has been studied in order to analyze its performance ability and other aspects. The invented system has been compared with the Toyota Hybrid System (THS), a typical power split hybrid system, in terms of performance ability and component sizing aspects. In order to analyze the performance ability, both systems have been simulated by a backward-facing method with Dynamic Programming (DP) for the Urban Dynamometer Driving Schedule (UDDS), also called FTP-72, and the Highway Fuel Economy Test (HWFET), which are among the representative Environmental Protection Agency (EPA) driving test cycles. The component sizing aspect has been compared by analyzing the torque and speed variation of the motors for an example driving case. The potential and advantages of DSHS are described in comparison with THS and discussed with the results from the simulation and analysis.

II. SYSTEM DESCRIPTION

Architectures for hybrid vehicles can mainly be divided into series, parallel, and power-split systems according to how the power paths are constructed. The suggested novel architecture consists of multiple different driving modes, including power split and parallel hybrid modes. Power split hybrid systems are mainly classified into two architectures: the input-split (or output-coupled) and the output-split (or input-coupled) power split systems. For both systems, three mechanical paths from a planetary gear train are connected to the engine, the final drive, and a motor, respectively. The difference is the location of the secondary motor: a motor is mechanically coupled with the path of the final drive in the input-split system, while in the output-split system, a motor is connected to the mechanical path between the engine and the planetary gear train. The mechanical layout difference between input- and output-split power split architectures results in different power distribution and system efficiency according to the speed ratio between the engine and the final drive [19]-[21].

The driving modes of power split hybrid systems can mainly be divided into power additive, full mechanical, and power recirculation modes. When the system is in the power additive mode, with the power ratio between the electrical and mechanical paths greater than zero, the power flow from the engine splits into the mechanical and electrical paths. In the power recirculation mode, with a power ratio between the electrical and mechanical paths less than zero, a part of the power from the mechanical path recirculates through the electrical path and is delivered again through the mechanical path. Power recirculation is undesirable because the system efficiency worsens as the amount of power recirculated through the electrical path increases. When there is no power through the electrical path, the system is in the full mechanical mode, which shows the highest efficiency among the three modes owing to the absence of electrical power loss. The input- and output-split power split hybrids have different driving modes according to the speed ratio, resulting in different system efficiency distributions.
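To make the classification concrete, the minimal Python sketch below labels the operating mode from the signs of the electrical- and mechanical-path powers and estimates the system efficiency for the power additive case under the same kind of constant electrical-path efficiency assumption used for Fig. 1; the 0.85 value and the function names are illustrative choices, not parameters from this study.

def split_mode(p_elec, p_mech):
    """Classify a power-split operating mode from the two path powers (W)."""
    if p_elec == 0.0:
        return "full mechanical"        # no electrical power loss
    if (p_elec > 0.0) == (p_mech > 0.0):
        return "power additive"         # electrical/mechanical power ratio > 0
    return "power recirculation"        # electrical/mechanical power ratio < 0

def additive_efficiency(p_elec, p_mech, eta_e=0.85):
    """System efficiency in the power additive mode: the electrical share of
    the engine power is discounted by the electrical-path efficiency."""
    return (p_mech + eta_e * p_elec) / (p_mech + p_elec)

# In recirculation the recirculated power passes through the electrical path
# on top of the delivered power, so losses grow with the recirculated amount.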
Fig. 1 shows the hybrid system efficiency and power path ratio of the power split hybrid transmissions according to the speed ratio, drawn under the assumptions of constant motor efficiency and a standing gear ratio of 2.6 for the planetary gear train. Fig. 1(a) and (b) correspond to the input-split and the output-split power split system, respectively. The trends of efficiency and power ratio for the input- and output-split power split systems differ according to the speed ratio of the engine and wheel speeds. As expected, both systems have the highest efficiency when the speed ratio is at the full mechanical point of 1.385, in the full mechanical mode. When the speed ratio is greater than the full mechanical point, the input-split power split system is in the power additive mode, while the output-split power split system is in the power recirculation mode. Conversely, when the speed ratio is less than the full mechanical point, the input- and output-split power split systems are in the power recirculation and power additive modes, respectively. Since the efficiency of the power additive mode is generally higher than that of the power recirculation mode, the input-split power split system has higher system efficiency than the output-split power split hybrid system for speed ratios greater than the full mechanical point. On the other hand, the output-split power split hybrid system shows higher efficiency than the input-split power split hybrid system for speed ratios less than the full mechanical point. Typical examples of the input- and output-split power split systems are THS and the Chevrolet Volt, respectively [22], [23]. As described, since the output-split power split system has relatively low efficiency at low vehicle speed, it usually includes another driving mode for low vehicle speed, such as full electric and series hybrid modes.

There are 12 possible power split architectures for power split hybrid systems with a single planetary gear train [24]; THS is based on an input-split power split transmission among them. Fig. 2 shows the schematic of the power split hybrid architecture of THS with a single planetary gear train, where R is the ring gear, C is the carrier, and S is the sun gear. The engine, final drive, and MG-1 are connected to the carrier, ring gear, and sun gear of the planetary gear train, respectively. MG-2 is coupled with the mechanical path by an external gear between the ring gear and the final drive. The invented hybrid architecture, DSHS, consists of multiple driving modes, including input- and output-split power split hybrid modes. The architecture of DSHS is similar to that of THS in terms of the mechanical connections of the planetary gear train to the engine, final drive, and MG-1. DSHS achieves parallel and output-split power split hybrid modes, in addition to the input-split power split hybrid mode, by simply adding a mechanical synchronizer to the typical architecture of THS. Fig. 4 shows the power flow diagrams for the three different driving modes of DSHS. First, when the mechanical path of the mechanical synchronizer is connected to the right side, MG-2 is coupled with the final drive and the system operates as an input-split power split hybrid transmission, as shown in Fig. 4(a).
Second, if the sleeve of the mechanical synchronizer is in the neutral position, MG-2 is connected to both the final drive and the carrier of the planetary gear train, as in Fig. 4(b). As a result, the speed ratio between the engine and the final drive becomes constant, and the system operates as a parallel hybrid transmission. Lastly, as in Fig. 4(c), when the sleeve of the mechanical synchronizer is connected to the left side, MG-2 is connected to the carrier of the planetary gear train, which makes the system work as an output-split power split hybrid transmission. Since the driving modes of DSHS comprise multiple hybrid modes, the fuel economy can be improved by using different driving modes for different driving conditions. Even though there have been several studies on multimode hybrid systems, DSHS has a relatively simple structure compared to other multimode hybrid transmissions: DSHS uses only a mechanical synchronizer in addition to the structure of THS in order to achieve multiple kinds of hybrid driving modes. Another advantage of DSHS is that the sizing of the electrical components can be reduced, because the amount of power through the electrical path is less than in other typical power split hybrid transmissions such as THS. The detailed analysis results and discussion of the fuel efficiency and component sizing are given in the following sections.

III. ANALYSIS METHOD

A backward-facing simulation with DP was applied for the fuel economy analysis of the systems, which provides globally optimized solutions for the given driving test cycles. Since the purpose of this study is the performance analysis of the suggested system, the methods in recent studies [8], [25] were not applied in this paper. In a backward-facing simulation, the speed and torque of the components are calculated from the vehicle speed data without driver models [26]. The wheel speed is obtained from the driving test cycle, and the wheel demand torque is calculated from the derivative of the vehicle speed; these are used for the vehicle dynamics in the system modeling.

A. SYSTEM MODELING

For the vehicle dynamics, the required wheel traction force can be calculated from the force balance equation, which is given by

F_t = m·a + F_g + F_r + F_D,  (1)

where m is the mass of the vehicle, a is the acceleration of the vehicle, F_t is the traction force, F_g is the gravitational force, F_r is the rolling resistance, and F_D is the aerodynamic drag. The traction force, gravitational force, rolling resistance, and aerodynamic drag can be expressed as

F_t = M_w / r_dyn,  (2)
F_g = m·g·sin θ,  (3)
F_r = C_r·m·g·cos θ,  (4)
F_D = (1/2)·ρ·A_f·C_d·v²,  (5)

where M_w is the torque loaded on the wheel, r_dyn is the wheel dynamic radius, g is the gravitational acceleration, θ is the angle of the slope, C_r is the rolling resistance coefficient, ρ is the density of the air, A_f is the frontal area of the vehicle, C_d is the drag coefficient of the vehicle, and v is the velocity of the vehicle. Table 1 shows the simulation conditions for the vehicle dynamics; the weight and road load coefficients are chosen from EPA fuel economy test data for the Toyota Prius Prime, model year 2020 [27].

The planetary gear train can be modeled by the speed and torque relation equations

ω_S + h·ω_R = (1 + h)·ω_C,  (6)
τ_R = h·τ_S,  τ_C = -(1 + h)·τ_S,  (7)

where h is the standing gear ratio of the planetary gear train, ω_C is the angular speed of the gear carrier, ω_S is the angular speed of the sun gear, ω_R is the angular speed of the ring gear, τ_S is the torque on the sun gear, τ_C is the torque on the gear carrier, and τ_R is the torque on the ring gear. For the speeds, at least two values need to be determined, and the remaining speed can then be obtained from (6). For the torques, once any one of the torques on the sun gear, the carrier, and the ring gear is determined, the other two values can be calculated from (7).
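As an illustration of how Eqs. (1)-(7) enter the backward-facing model, the sketch below computes the wheel demand and the planetary gear relations in Python. The parameter values are placeholders rather than the Table 1 values, and the function names are ours.

import numpy as np

# Illustrative road-load parameters (not the Table 1 values).
m, g, rho = 1530.0, 9.81, 1.225               # mass (kg), gravity, air density
Cr, Cd, Af, r_dyn = 0.009, 0.28, 2.22, 0.31   # roll. res., drag coeff., area (m^2), wheel radius (m)

def wheel_demand(v, a, theta=0.0):
    """Traction force (N) and wheel torque (Nm) from Eqs. (1)-(5)."""
    F_g = m * g * np.sin(theta)               # grade force, Eq. (3)
    F_r = Cr * m * g * np.cos(theta)          # rolling resistance, Eq. (4)
    F_D = 0.5 * rho * Af * Cd * v ** 2        # aerodynamic drag, Eq. (5)
    F_t = m * a + F_g + F_r + F_D             # force balance, Eq. (1)
    return F_t, F_t * r_dyn                   # M_w = F_t * r_dyn, Eq. (2)

def carrier_speed(h, w_S, w_R):
    """Speed constraint, Eq. (6): (1 + h) * w_C = w_S + h * w_R."""
    return (w_S + h * w_R) / (1.0 + h)

def planetary_torques(h, tau_S):
    """Lossless torque relations, Eq. (7); returns (tau_R, tau_C)."""
    return h * tau_S, -(1.0 + h) * tau_S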
For the speed, at least two of the speed values need to be determined, and then the remaining speed can be obtained from (6). On the other hand, for the torque, once any one torque among the sun gear, the carrier, and the ring gear is determined, the other two values can be calculated from (7). The fuel consumption of the engine was calculated by the brake-specific fuel consumption (BSFC) map with the engine speed and torque, which is based on empirical data. The efficiency of the motors was also determined empirically with the efficiency map by the motor speed and torque. Temporary manipulated map data were used for the engine and motor, which are not the same as the map data of the production car. A battery model with empirical data is used for obtaining the state-of-charge (SOC) variation. Once the battery demand power is calculated from the engine and motor models with the vehicle dynamics, the derivative of SOC can be calculated as follows [28]:

ṠOC = -I_bat / Q_bat, with I_bat = (V_OC - sqrt(V_OC² - 4 R_in P_bat)) / (2 R_in), (8)

where ṠOC is the balance rate of the state-of-charge, V_OC is the open circuit voltage of the battery, R_in is the internal resistance of the battery, P_bat is the battery power exchanged with the electric components, I_bat is the current from the battery, and Q_bat is the battery capacity. The open circuit voltage and internal resistance can be obtained from empirical relation data, which are expressed as a function of SOC.

B. OPTIMAL CONTROL MANAGEMENT
In this study, DP was used for describing the optimal control policy of the simulation models, which guarantees globally optimal solutions, even though it requires a higher computational cost compared to other optimization methods. According to Bellman's principle of optimality, the remaining decisions must constitute optimal policies for the states resulting from the preceding decisions [29]. The time horizon of the given driving test cycle is divided into N stages and the transition function can be expressed as [30]

x_{k+1} = f(x_k, u_k),

where x is the state variable, u is the control variable, and k is the stage of time. The optimization problem for DP is formulated as the choice of the control variables at each state so as to find the optimal value of the objective function. The objective function can be expressed as [28], [31], [32]

J = Σ_{k=0}^{N-1} L(x_k, u_k),

where J is the objective function and L is the instantaneous cost function. The instantaneous cost function includes the instantaneous fuel consumption rate and a penalty function, which is given by

L(x_k, u_k) = ṁ_fuel(x_k, u_k) + f_p(x_k, u_k),

where ṁ_fuel is the instantaneous fuel consumption rate and f_p is the instantaneous penalty function. In this study, the fuel consumed during engine start-up was included in the penalty function in order to prevent the engine from starting frequently.

IV. RESULTS AND DISCUSSION
The performance and component design aspects of DSHS have been analyzed in comparison with THS. For the performance analysis, both systems have been simulated by backward-facing modeling with the energy management strategy obtained by the DP approach. The electrical component sizing aspects have been analyzed in terms of the required power and torque of the electric motors with an example case study.

A. PERFORMANCE ANALYSIS
The engine operating points were examined on the engine BSFC map in order to check the control optimization policy and the engine power consumption in the simulation. The engine map in the simulation was modified from the empirical data in order to make it different from that of the vehicle on the market, for data security purposes. Fig. 5 shows the simulation results of the engine operating points for the UDDS driving test cycle.
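Before turning to the results, here is a toy sketch of the backward DP recursion described in Section III.B; the demand profile, fuel-rate model, battery constants, and grids are all simplified assumptions for illustration only, not the models used in the study.

```python
import numpy as np

# Toy backward DP for the energy management problem sketched above.
# All models here are placeholder assumptions; only the overall
# recursion mirrors the description in Section III.B.

dt, N = 1.0, 200
soc = np.linspace(0.4, 0.6, 41)                    # SOC state grid
p_bat = np.linspace(-20e3, 20e3, 81)               # battery power controls [W]
p_dem = 8e3 + 4e3 * np.sin(np.linspace(0, 6, N))   # assumed wheel demand [W]

V_oc, R_in, Q = 350.0, 0.1, 6.5 * 3600.0           # assumed battery constants

def soc_dot(p):
    """Eq. (8)-style SOC derivative for battery power p (discharge > 0)."""
    i = (V_oc - np.sqrt(np.maximum(V_oc**2 - 4 * R_in * p, 0.0))) / (2 * R_in)
    return -i / Q

def fuel_rate(p_eng):
    """Assumed convex engine fuel-rate model [g/s]; engine power >= 0."""
    p = np.maximum(p_eng, 0.0)
    return 1e-4 * p + 2e-9 * p**2

J = 1e3 * (soc - 0.5)**2                           # soft charge-sustaining target
policy = np.zeros((N, soc.size))

for k in range(N - 1, -1, -1):
    # For each SOC state, try every battery power and keep the cheapest.
    soc_next = soc[:, None] + soc_dot(p_bat)[None, :] * dt
    feasible = (soc_next >= soc[0]) & (soc_next <= soc[-1])
    stage = fuel_rate(p_dem[k] - p_bat)[None, :] * dt   # engine supplies the rest
    total = np.where(feasible, stage + np.interp(soc_next, soc, J), np.inf)
    best = np.argmin(total, axis=1)
    policy[k] = p_bat[best]
    J = total[np.arange(soc.size), best]

print(policy[0, soc.size // 2])   # optimal battery power at mid SOC, first stage
```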
Fig. 5 (a) and (b) are the results for DSHS and THS, respectively. The blue line in the figure shows the optimum operating line and the black line represents the maximum engine torque line. As shown in the figure, the operating points are located near the optimum operating line in both cases. This means that the control optimization policy in the simulation is well organized for the given driving test cycle for both systems. The difference between the two structures in the engine operating points is that DSHS uses an engine speed range between 1000 and 2300 rpm, while the engine speeds of THS lie between 1500 and 2900 rpm. This is because DSHS, with more driving modes, has more chances to choose a lower engine speed so as to minimize the engine power consumption. As a result, DSHS consumes less fuel than THS for the UDDS driving test cycle, because the engine power consumption is lower at slow engine speeds when the operating points lie below the optimum operating line. The operating modes of the system were analyzed against the wheel demand power and torque over the vehicle speeds. Fig. 6 shows the simulation results of the operating modes according to the wheel demand power and velocity for the UDDS driving test cycle. Fig. 6 (a) and (b) show the results for DSHS and THS, respectively. For DSHS, all four driving modes are used for the given driving test cycle. The EV mode is used over the whole area, the input-split power split hybrid mode is used in the low speed and high demand power area, the parallel hybrid mode is used in the high speed area, and the output-split power split hybrid mode is used in the relatively low power area near a vehicle speed of 40 km/h. On the other hand, THS utilizes the input-split power split hybrid mode for the high demand power area and the EV mode for the relatively low power area. We see that DSHS exploits multiple driving modes for the different velocity and wheel demand power conditions compared to THS. Fig. 7 shows the simulation results of the operating modes according to the wheel demand torque and velocity for the UDDS driving test cycle. Fig. 7 (a) and (b) are the figures for DSHS and THS, respectively. As on the velocity and wheel demand power map, DSHS additionally utilizes the parallel hybrid mode and the output-split power split hybrid mode, while THS has only two driving modes for the given driving test cycle. The EV mode is exploited in the relatively low velocity area, and the parallel hybrid mode is used in the high velocity area. The input- and output-split power split hybrid modes are mainly utilized near a velocity of 40 km/h. Fig. 8 shows the simulation results of the SOC variations for the UDDS driving test cycle. The initial SOC for both systems is set to 50 %, and the final SOC is controlled by the optimal control strategy so as to have the same value as the initial SOC. The SOC variation range of DSHS is wider than that of THS. DSHS stores more energy in the battery than THS between about 200 s and 800 s, and uses it towards the end of the driving test cycle. This shows that DSHS has more chances to store residual power from the engine by choosing a more efficient driving mode, and that it utilizes the battery energy space more actively, which results in better fuel economy. Fig. 9 shows the simulation results of the accumulated fuel consumption for the UDDS driving test cycle. As expected, the engine of DSHS consumes more fuel energy between 200 s and 800 s.
Even though the SOC variation range of DSHS is larger than that of THS, the final fuel consumption of DSHS is less than that of THS, since DSHS utilizes the residual space of the battery more actively under optimal control with multiple driving modes. The overall fuel consumption of DSHS is 1.7 % less than that of THS thanks to the greater choice of driving modes. The performance of both systems has also been simulated and analyzed for the HWFET driving test cycle. Fig. 10 (a) and (b) show the simulation results of the engine operating points for the HWFET driving test cycle, for DSHS and THS, respectively. As in the results for the UDDS driving test cycle, the engine operating points of both systems are well positioned near the optimum operating line. The engine speed of DSHS is slightly lower than that of THS. The operating points of DSHS are mainly located between 1200 rpm and 2300 rpm, while those of THS are mainly placed between 1500 rpm and 2800 rpm of engine speed. This is because DSHS has more opportunities to select driving modes with low engine speed, which results in less engine power consumption for the HWFET driving test cycle. The schematics of the operating modes for HWFET according to the wheel demand power and vehicle speed are shown in Fig. 11. Fig. 11 (a) and (b) show the results for DSHS and THS, respectively. For DSHS, mainly two driving modes are utilized for HWFET, because the overall wheel demand power is lower than for the UDDS driving test cycle. The EV mode is used over the whole region, and the parallel hybrid mode is used in the high speed region, while the input- and output-split power split hybrid modes are rarely exploited. On the other hand, THS utilizes the input-split power split hybrid mode for the relatively high power area and the EV mode for the relatively low power, high speed area. We can see that the control strategies of the two systems are quite different according to the operating modes available for the given driving test cycle. Fig. 12 shows the simulation results of the operating modes according to the wheel demand torque and velocity for the HWFET driving test cycle. Fig. 12 (a) and (b) are the results for DSHS and THS, respectively. Unlike the results for the UDDS driving test cycle, the input- and output-split power split hybrid modes are hardly used by DSHS. The EV mode is mainly used in the low velocity and low torque area, while the parallel hybrid mode is mainly utilized in the high velocity and high demand torque area. According to the results on the wheel demand power map, the main operating modes for DSHS differ from those for THS, with a different control strategy. Both systems utilize different driving modes under the optimal control strategy in order to minimize fuel consumption while maintaining the SOC balance. The simulation results of the SOC variations for the HWFET driving test cycle are compared in Fig. 13. The SOC starts from 50 % for both systems and the final SOC is set to the same value as the initial SOC. Even though the variation trends for the first half of the driving test cycle are different, the trends are similar for the second half. According to Figs. 11 and 12, DSHS mainly uses the EV and parallel hybrid modes, while THS mainly uses the EV and input-split hybrid modes. The difference in the main operating modes over time makes the SOC balancing strategies of the two hybrid systems different. As a result, the different SOC balancing strategies lead to different SOC variation trends.
Even though the trends are similar, the overall fuel consumption of DSHS is 5 % less than that of THS thanks to the additional driving mode options available to the optimal system control. According to the simulation results for the UDDS and HWFET driving test cycles, DSHS shows better performance than THS in terms of fuel economy for the given simulation conditions. The fuel energy consumptions for the UDDS and HWFET driving test cycles are around 1.7 % and 4.5 % less than for THS, respectively, although the size of the fuel consumption difference can change according to the vehicle specifications. The operating modes of DSHS are more varied, which places the operating points of the engine and motors of the vehicle in more efficient positions.

B. COMPONENT SIZING
Beyond the performance aspect, DSHS offers a chance to reduce the size of the electrical components by reducing the maximum power through the electrical path. During the design process of the system, the sizing of the motors is determined by the maximum power and torque requirements in the test development process. If the maximum power and torque requirements are reduced, the motors can be designed smaller. For the design aspect of DSHS, the variation trends of the speed and torque of the motors were analyzed under the assumptions that the vehicle velocity increases and the engine operates at a constant speed. Fig. 15 shows the torque and speed variations of MG-1 when the vehicle velocity increases with a constant wheel demand torque. The torque for the output-split power split transmission is constant, while that for the input-split power split transmission increases along with the vehicle velocity. As a result, the torque of the input-split power split transmission is lower than that of the output-split power split transmission when the motor speed is positive, and becomes higher when the motor speed becomes negative. DSHS can utilize the input-split power split hybrid mode for positive MG-1 speeds and the output-split power split hybrid mode for negative MG-1 speeds. As a result, the maximum required power of MG-1 for DSHS is smaller than for the input-split power split transmission. This means that the maximum power and maximum torque of MG-1 for DSHS can be designed smaller compared to THS. The component sizing of MG-2 can be analyzed in the same way as that of MG-1. Fig. 16 shows the torque and speed variations of MG-2 when the vehicle velocity increases with a constant wheel demand torque. The speed of MG-2 increases along with the vehicle velocity for the input-split power split transmission. For the output-split power split transmission, the speed of MG-2 is constant, since the engine speed is assumed to be constant. As shown in the figure, when the vehicle velocity is lower than the full mechanical point, the MG-2 of the output-split power split transmission consumes more power, while the MG-2 of the input-split power split transmission requires higher power at high velocities. Since DSHS can select the input-split power split hybrid mode at low velocity and the output-split power split hybrid mode at high velocity, the maximum required power and torque for sizing MG-2 can be reduced compared to other single-mode power split systems. In conclusion, DSHS shows the possibility of reducing the component sizing of both MG-1 and MG-2 compared to other typical power split architectures, by reducing the maximum torque and power requirements for the given example case.
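A minimal numeric sketch of the sizing argument: given each single mode's motor power requirement as a function of vehicle speed, a mode-switching system only needs to be sized for the envelope of the cheaper mode at each speed. The two linear requirement curves below are invented placeholders chosen to mimic the qualitative trends described for Figs. 15 and 16, not results from the study.

```python
import numpy as np

# Placeholder per-mode MG power requirements vs vehicle speed [kW].
# Assumed shapes only: the input-split requirement grows with speed,
# the output-split requirement falls, and they cross near the full
# mechanical point, qualitatively as described for Figs. 15-16.
v = np.linspace(0, 160, 161)               # vehicle speed [km/h]
p_input_split = 5.0 + 0.45 * v             # grows with speed (assumed)
p_output_split = 60.0 - 0.25 * v           # falls with speed (assumed)

# A single-mode design must cover its own worst case over all speeds;
# DSHS can pick the cheaper mode at each speed and size to that envelope.
p_dshs = np.minimum(p_input_split, p_output_split)

print(f"input-split only : {p_input_split.max():.1f} kW")
print(f"output-split only: {p_output_split.max():.1f} kW")
print(f"mode-switching   : {p_dshs.max():.1f} kW")   # smaller sizing target
```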
The sizing reduction of the motors affects not only the production cost but also the vehicle packaging space, by reducing the physical volume of the motors, which can benefit automotive manufacturers in vehicle production.

V. CONCLUSION
The invented multimode power split hybrid architecture based on the input- and output-split power split hybrid modes has been studied in comparison with THS, a typical power split hybrid architecture. Both systems have been simulated with DP for the EPA UDDS and HWFET driving test cycles in order to compare their performance. In the results, the fuel consumption of DSHS is around 1 to 5 % less than that of THS, although the size of the difference depends on the vehicle specifications and driving test cycles. In addition, the component sizing aspect has been compared with an example case using a constant wheel demand torque over varying vehicle speeds. In the case study, DSHS shows more opportunities to select driving modes with lower demand power for the motors compared to THS, because DSHS has more modes available for the driving conditions. As a result, the motors of DSHS can be sized smaller than those of THS. Since the component sizing affects not only the vehicle packaging space but also the production cost, the sizing aspect is one of the key factors for vehicle mass production. In the simulation and case study, DSHS shows better characteristics in terms of fuel economy and component sizing. As future work, the given system can be analyzed further with other simulation methods and experimental studies.
2021-12-30T16:22:12.299Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "b2ff1f63cc86348062ef8b2e6af110de476b3a85", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09664537.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "6b6b97e82b8acd96f5c02a70e0854a58db62d561", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
222177598
pes2o/s2orc
v3-fos-license
Effect of climate change on spatial distribution of scorpions of significant public health importance in Iran
Objective: To establish a spatial geo-database for scorpions in Iran, and to identify the suitable ecological niches for the most dangerous scorpion species under different climate change scenarios. Methods: The spatial distribution of six poisonous scorpion species of Iran was modeled: Hemiscorpius lepturus, Androctonus crassicauda, Mesobuthus eupeus, Hottentotta saulcyi, Hottentotta zagrosensis, and Odontobuthus (O.) doriae, under the RCP2.6 and RCP8.5 climate change scenarios. The MaxEnt ecological niche model was used to predict climate suitability for these scorpion species in the 2030s and 2050s, and the data were compared with the environmental suitability under the current bioclimatic data. Results: A total of 73 species and subspecies of scorpions belonging to 19 genera were recorded in Iran. Khuzestan Province has the highest species diversity, with 34 species and subspecies. The most poisonous scorpion species of Iran are scattered across the semi-arid climates, at an altitudinal range between 11 m and 2 954 m above sea level. It is projected that O. doriae, Androctonus crassicauda, and Mesobuthus eupeus would be widely distributed in most parts of the country, whereas the most suitable ecological niches for the other species would be limited to the west and/or southwest of Iran. Conclusions: Although the environmental suitability for all the species would change under the two climate change scenarios, the change would be more significant for O. doriae under RCP8.5 in the 2050s. These findings can be used as a basis for future studies in the areas with the highest environmental suitability for the most dangerous scorpion species, to fill the gaps in the ecology of scorpion species in these areas.
Earlier studies confirmed that climate factors and habitat type are the most important determinants of the distribution of these venomous arthropods. There is also a strong association between land surface temperature and the population density of scorpions [4]. Some experts, however, believe that soil texture, type, and depth are the most important environmental factors affecting the populations of some scorpion species [5]. In most cases, there are two peaks in the activity of scorpions during a nychthemeron: nightfall and early morning [4]. A recent review of the status of scorpion sting cases in relation to climatic variables using a time series model indicated that temperature plays an important role in scorpion stings [6]. A similar study in Algeria also confirmed a relationship between temperature and scorpion stings [7]. In general, scorpions are among the arthropods that prefer desert habitats, and they do not normally enter human settlements. Today, however, with the increasing human population and changing land use, particularly in urban areas, these organisms are often found nesting near human settlements. Scorpion sting is a serious health problem in underdeveloped tropical and subtropical countries. Envenomation due to scorpion sting often causes mild symptoms such as localized skin rashes, but can lead to widespread neurological, cardiovascular, and respiratory complications, and can sometimes be fatal. This is a major public health concern, particularly in areas with a high prevalence of poisonous scorpions.
The first step in understanding the dispersal of the various species of poisonous arthropods in an area is the collection of quality data on their taxonomy and geographical distribution. Recent studies on the mapping of scorpion species have attempted to determine their geographical distribution in relation to the environmental variables at the collection sites using ecological niche models [8,9]. Scorpion sting is one of the most important arthropod-associated human injuries in Iran, especially in the south and southwestern areas, with an estimated annual incidence of 54.8 to 66.0 per 100 000 population and a mortality rate of 0.05% [10]. About 1 500 species of scorpion have been identified worldwide, 30 of which are considered venomous and highly poisonous [11,12]. Among them are the six species modeled here, which earlier studies in Iran have reported [13-15]. A. crassicauda is widespread and can be found in most of the provinces of Iran. It is one of the most venomous and medically important arachnids, and the second most poisonous scorpion in Iran. Vazirianzadeh et al. reported that A. crassicauda was responsible for 27% of the scorpion stings recorded in Ahvaz, southwest Iran, from April to September 2007 [16]. This rate was 29% in Khuzestan Province of Iran [17]. A. crassicauda specimens were commonly found in sandy and calcareous soil areas. The most preferred habitat of this species in Iran is thorn bush steppe [18]. H. lepturus is of high medical importance, especially in children, because of its relatively painless sting. The venom of this scorpion is cytotoxic and can induce severe inflammation and injuries of the skin, and even death in some cases. It is a non-digger species and prefers warm and relatively wet areas [8]. H. saulcyi specimens have been collected from calcareous soil at altitudes ranging between 684 and 2 025 m above sea level [19]. It is a semi-digger, classified among the poisonous scorpion species of Iran [8]. In Iran, specimens of H. saulcyi were collected from steppe habitats, including calcareous soils, in the western areas of the country [18]. H. zagrosensis is an endemic species of Iran, found mostly in the Zagros chain region in Fars, Khuzestan, Kohgilouyeh va Boyer-Ahmad, Lorestan, and West Azerbaijan provinces. It is also found in the Alamout area in Qazvin Province, located in the foothills of the Alborz Mountains [20,21]. This species, like the other species of the genus Hottentotta, is a non-digging scorpion and prefers mountainous and rocky habitats. It has recently been identified in the eastern region of Iraq [22]. M. eupeus is also a non-digger species, and one of the most medically important scorpions of Iran. In general, a sting from M. eupeus results in minor local symptoms not requiring any specific intervention [23]. The sampling sites of this species in Zanjan Province in the west of Iran included hard calcareous soil and steppe vegetation [18]. It is commonly found in populated areas and is the most widespread species in human dwellings in Iran [8]. O. doriae is a burrowing scorpion, which can dig tunnels more than 40 cm long. It is one of the medically important scorpions that exist in relatively high numbers in Iran [24]. The sting of this species is considered potentially fatal, and it is responsible for most deaths due to scorpion envenomation in the central parts of Iran [23]. O. doriae specimens have been collected from calcareous soils and stony areas. It is mostly found in steppe habitats [18].
Climate change is an important issue for the potential future spatial distribution of arthropods, because their activities are highly dependent on environmental conditions. It is therefore recommended that all countries investigate and predict the potential effects of climate change on the important vectors/arthropods that affect community health. This would help identify susceptible areas and implement appropriate strategies for the prevention or reduction of vector-borne diseases/injuries. Studies on the effects of climate change on the spatial distribution of some of the most important vector-borne diseases have been conducted in several countries [25-28]. The aim of this study was to establish a spatial geo-database on the scorpion species of Iran and to find the most suitable ecological niches for the most poisonous scorpion species under different climate change scenarios.

Climate change scenarios and data
The results of the research conducted by the National Climate Research Institute [29] were the basis for selecting the general circulation model in this study. The Beijing Climate Center Climate System Model version 1.1 (BCC_CSM1.1) was used in our analysis at a spatial resolution of 30 arc-seconds (1 km²) [30]. In the present study, two scenarios were used for the modeling: representative concentration pathway (RCP) 2.6 and RCP8.5. In the RCP2.6 emission scenario, the CO₂ concentration is estimated to be 490 ppm by 2100, with a radiative forcing level of 2.6 W/m² [31-33]. The global change assessment modeling team at the Joint Global Change Research Institute (a branch of the Pacific Northwest National Laboratory) in the United States developed the RCP2.6 scenario [34]. The RCP2.6 scenario is considered a stabilization scenario, without an overshoot, in which the total radiative forcing is stabilized shortly after 2100 [35]. The RCP8.5 scenario corresponds to the highest greenhouse gas emission trajectory among the RCPs based on the scenario literature [32], and hence also to the upper bound of the RCPs. The greenhouse gas concentrations and emission trajectories in RCP8.5 are estimated to increase considerably over time, leading to a radiative forcing level of 8.5 W/m² at the end of the century, with a temperature range between 3.5 and 4.5 ℃ [36]. The bioclimatic data for both scenarios for the 2030s and 2050s were downloaded from www.ccafs.cgiar.org (http://ccafs-climate.org/data) and www.worldclim.org (http://www.worldclim.org/cmip5_30s), respectively, at a spatial resolution of 30 arc-seconds. ArcMap was then used to clip the downloaded layers to the border of Iran. To compare with the current environmental suitability for scorpions, current bioclimatic variables at the same spatial resolution (1 km²) were downloaded from the WorldClim website (www.worldclim.org) and prepared in ArcMap. A total of 19 bioclimatic variables were used for the modeling (Table 1: bioclimatic variables used in the MaxEnt model).
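For illustration, the ArcMap clipping step described above could equivalently be done with open-source tools; in the hedged sketch below, the shapefile and raster names are hypothetical placeholders, and the border polygon is assumed to share the WorldClim CRS (EPSG:4326).

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask

# Hypothetical file names: any national boundary polygon and WorldClim
# GeoTIFF layers at 30 arc-second resolution would work the same way.
# Assumes the shapefile and the rasters are in the same CRS (EPSG:4326).
border = gpd.read_file("iran_border.shp")
shapes = border.geometry.values

for i in range(1, 20):                         # bio1 ... bio19
    with rasterio.open(f"wc_30s_bio_{i}.tif") as src:
        clipped, transform = mask(src, shapes, crop=True)
        meta = src.meta.copy()
        meta.update(height=clipped.shape[1], width=clipped.shape[2],
                    transform=transform)
    with rasterio.open(f"iran_bio_{i}.tif", "w", **meta) as dst:
        dst.write(clipped)                     # clipped layer for modeling
```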
Modeling
The list of scorpion species in different parts of Iran
Firstly, a database was created for the scorpion species, owing to the large number of documents published on these arthropods.
Androctonus crassicauda
This species has been reported in all provinces of Iran, and it has been collected from all seven climatic zones of the country (Table 3). It is usually found at 11 to 2 303 m above sea level. The average altitude for this species was estimated at 1 096 m above sea level. The predicted environmental suitability for this species under the two climate change scenarios in the 2030s and 2050s is given in Table 4. The results were then compared with those for the current climatic data shown in Figure 2. The results show that the environmental conditions under the current climate are more suitable for the distribution of this species, with a larger area of environmental suitability compared with the 2030s and 2050s. On the other hand, the area of environmental suitability for A. crassicauda with more than 60% presence probability would decrease in the 2030s and 2050s, although the decrease seems to be insignificant.
Hemiscorpius lepturus
This species has been reported in 18 out of the 31 provinces of Iran. It has been identified in six climatic zones of the country (Table 3). The predicted environmental suitability for this species under the climate change scenarios is given in Table 4.
Mesobuthus eupeus
This scorpion species has also been reported in all the provinces of Iran, and it is found in all climatic zones of the country (Table 3). Its predicted environmental suitability is given in Table 4.
Hottentotta saulcyi
This scorpion species has been reported in 19 provinces of Iran, and it has been found in six climatic zones of the country (Table 3). Previous studies have reported that H. saulcyi was captured at altitudes ranging between 684 and 2 025 m above sea level [19].
Hottentotta zagrosensis
This scorpion species has been reported in six provinces of Iran, and it is found in five out of the seven climatic zones of the country (Table 3). Its predicted environmental suitability is given in Table 4.
Odontobuthus doriae
This species has been identified in 15 provinces of Iran, and it is found in all climatic zones of the country (Table 3).
Discussion
According to the results of this study, there is a greater number of scorpion species in the southern part of Iran, where the climate is relatively dry. These areas receive more solar radiation than other parts of the country. Scorpions prefer warmer climates [23], and this may be the main reason for the higher species richness in the southern parts of Iran. According to [38], higher temperatures also shorten generation times and increase maturation rates, thereby accelerating the speciation process of scorpions in the tropics. Another interesting feature of scorpion diversity and distribution is their successful colonization of arid climates. Unlike most animal groups, which have richer biotic representation in the tropics than in deserts and sandy areas, the most diverse communities of scorpions are found in arid regions. Their remarkable adaptation to such extreme ecosystems involves the ability to tolerate high temperatures [39]. Metabolic and behavioral adaptations, the ability to conserve water for prolonged periods even under very low humidity, and living inside burrows deep enough to provide shelter against high ambient temperatures are other important features that aid their survival at high temperatures. According to Koch's investigation [3], rainfall, temperature, and possibly species competition are the most important factors influencing scorpion diversity; scorpion diversity does not seem to depend on vegetation. It has been reported that ruggedness and edaphic factors such as soil depth, texture, and nutrient status are strongly correlated with the pattern of scorpion species richness and distribution in arid areas [5]. Regional scorpion species diversity varied from 1 to 13, with most areas having 3 to 7 species and deserts averaging seven species [40]. The density of scorpion populations is highly variable and depends on abiotic and biotic environmental factors. In some studies, temperature, precipitation, wind, and altitude were the most important bioclimatic and environmental factors affecting the diversity of scorpions, with temperature having the greatest effect [7,40].
In line with the findings of these studies, we found that precipitation and temperature are the most important variables affecting the environmental suitability of the six poisonous scorpion species of Iran. Based on climate preference, scorpion species can be classified into three groups: xerophilic (preferring very dry and desert environments), mesophilic (preferring moderately humid environments, rocky areas in Mediterranean forests, savannah), and hydrophilic (preferring wet tropical forests, caves) scorpions. There is a strong correlation between surface temperature and the population density of scorpions. This is reflected in the peak of scorpion stings during the warm summer months in Iran [6,10]. Many desert scorpions can tolerate temperatures of 45 ℃ to 50 ℃, and this tolerance increases during summer. This interesting adaptive behavior might help scorpions cope with climate change. In the present study, we found that the spatial distribution of the poisonous scorpions of Iran would change only slightly, and the change would be insignificant. This may partly be due to the potential adaptation of these arthropods to higher temperatures. It also seems that the temperature increase in the tropical areas of southern Iran will not be very large. Higher altitudes were most favorable for H. zagrosensis among all the species. This species mostly avoids arid and extremely arid climates. The variables contributing most to the model for this species were bio 19 and bio 12, both of which represent precipitation. It can therefore be concluded that H. zagrosensis prefers wet areas, and moisture may be a limiting factor for its distribution. There are no published data on the biology and ecology of this important scorpion, and more studies are recommended in this regard. In the present study, the AUC of the training data for the model for all studied species was more than 0.75, and above 0.90 for H. lepturus, H. saulcyi, and H. zagrosensis. This indicates good model prediction of the ecological niches of these scorpion species under all scenarios [41]. There are few studies on ecological niche modeling for scorpions. In a study on ecological niche modeling for a number of scorpion species in Brazil, the variables contributing most to the distribution model of Tityus serrulatus were precipitation and tree cover, whereas for Tityus bahiensis, temperature and thermal amplitude contributed most [42]. In our study, although an increase in precipitation to more than 200 mm decreases the environmental suitability for A. crassicauda under the current climatic conditions, the model showed a positive correlation between precipitation and environmental suitability for this scorpion in the 2030s and 2050s. It is predicted that rainfall will increase the population density of this deadly scorpion. Regarding the most important predictive variable for the distribution of H. lepturus (bio 19), the situation is more or less the same for the current climate and for the 2030s and 2050s. In other words, higher precipitation will have a negative effect on this poisonous scorpion, and a decrease in rainfall in the coming years might increase the environmental suitability for it. The most important predictive variables for M. eupeus were bio 19, bio 16, and bio 13. Higher precipitation will reduce the environmental suitability for this species and therefore the risk of stings by this scorpion. An earlier study on ecological niche modeling for M. eupeus and M.
phillipsii reported that the mean temperature of the wettest quarter of the year (bio 8) and the precipitation of the warmest quarter of the year (bio 18) were the most important predictors for these two species, respectively [9]. The model for H. saulcyi was similar to that of A. crassicauda, in that precipitation will have a negative effect on the environmental suitability of this species in the current climate and in the 2050s under RCP2.6. However, precipitation will be a positive predictor for this species in the 2030s under RCP2.6 and in the 2050s under RCP8.5. For H. zagrosensis, an annual precipitation of up to 300 mm will have a positive effect on its distribution and expand its ecological niches, but higher rainfall will decrease its range of distribution. Overall, precipitation was the most important predictive variable for five out of the six scorpion species described above, and temperature seems to be the most important predictive variable for the distribution of O. doriae. Thus, a higher temperature of the wettest quarter and of the coldest quarter will decrease the environmental suitability for O. doriae. Although the environmental suitability for all the species would change under the two climate change scenarios, the change would be more significant for O. doriae under RCP8.5 in the 2050s. These findings can be used as a basis for future studies in the areas with the highest environmental suitability for the most dangerous scorpion species, to fill the gaps in the ecology of scorpion species in these areas.
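As a minimal sketch of the AUC check reported in the Discussion, the snippet below scores suitability predictions against presence and background points with scikit-learn; the score arrays are synthetic stand-ins, since the actual MaxEnt runs were performed in dedicated software.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: suitability scores in [0, 1] predicted by a niche
# model at presence points (label 1) and background points (label 0).
presence_scores = rng.beta(5, 2, size=120)      # assumed: skewed high
background_scores = rng.beta(2, 5, size=1000)   # assumed: skewed low

y_true = np.concatenate([np.ones(120), np.zeros(1000)])
y_score = np.concatenate([presence_scores, background_scores])

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")   # values above ~0.9 indicate strong discrimination
```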
2020-10-08T13:11:56.446Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "9c2da4d8ef9ef7c2bd8fc02ab9ce2554d69c23d3", "oa_license": null, "oa_url": "https://doi.org/10.4103/1995-7645.295361", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "79e514ab8b92dd9d9d8285917d94d60d43c77c29", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
53348660
pes2o/s2orc
v3-fos-license
Calculating the output signal parameters of a lactose bienzymatic biosensing system from the transient phase response
We constructed, for the determination of lactose, a bienzymatic biosensing system based on a fibre-optical oxygen sensor and two enzymes, β-galactosidase (β-gal, from Aspergillus oryzae, Sigma Aldrich, EC 3.2.1.23) and glucose oxidase (GOD, from A. niger, Sigma Aldrich, EC 1.1.3.4), and analysed how the calculation of the biosensor output signal parameters used for the calibration of lactose biosensors is influenced by the data collection period during the transient phase of the rising signal when no preliminary incubation period with β-gal is applied. The calculation of the reaction steady state and kinetic parameters from the biosensor signal revealed that longer data collection periods resulted in more accurate biosensor calibration curves with bigger slopes, while in the case of slower reactions the calculated reaction parameters reached their maximal values already when data were collected for 600 seconds. For reactions with higher enzyme concentrations (0.027-0.071 IU/mL β-gal and 2.03-5.33 IU/mL GOD), the steady state signal was not achieved even within 1 hour of the initiation of the reaction and the calculated reaction parameters continued to change. Although the sensor signal was decreasing continuously, the reaction parameters calculated from the transient phase data were suitable for biosensor calibration if the data of at least 500 seconds were taken into consideration.
Lactose, or milk sugar, is the major carbohydrate in milk, with its concentration ranging around 5 g/dL in cow milk. As about 75% of adults experience lactose intolerance, the measurement of lactose in milk and milk products has been thoroughly studied. For the estimation of the lactose content in milk, numerous analytical methods, such as spectrophotometric, titrimetric, gravimetric, and chromatographic, are used [1]. Although these methods give reliable results, they are time consuming and require sample pre-treatment. A good alternative to the traditional analytical methods for the rapid determination of lactose is the application of biosensors. The time required for lactose determination with biosensors depends on the bio-recognition element as well as on the construction and properties of the applied signal transducers, and it can vary from a few minutes to several hours. Signal transducers used in biosensors are mainly electrochemical or fibre-optical; the latter are also used for the construction of biosensors of acetylcholine [2], penicillin [3], or oligonucleotides [4].
Most lactose biosensors are based on two [5,6] or more [7-9] enzymes, forming a shorter or longer cascade of enzyme-controlled reactions that turn lactose into detectable products. The linear range of lactose detection of this kind of biosensor is typically up to 15 mM (0.5%) [10,11]. Biosensors in which Langmuir-Blodgett films are used for enzyme immobilization show linearity from 30 to 175 mM (1-6 g/dL of lactose) [12]. As several reactions proceed simultaneously in the system, the measurable output signal of the biosensor depends on the kinetics of more than one reaction. When milk is analysed, the response of lactose biosensors based on lactose hydrolysis and consequent product oxidation also suffers from the interference of glucose and galactose, which are present in milk at concentrations of up to 0.1 mM [9,13]. Lactose hydrolysis by β-galactosidase (β-gal, EC 3.2.1.23) is very slow at room temperature; therefore temperatures as high as 45 °C are used to accelerate this process in biosensing systems [7]. At lower temperatures, the response time of lactose biosensors can consequently be well over 15 min [14]. In some cases, an additional preliminary incubation of lactose-containing probes with β-gal is carried out [15]. Detection of lactose with biosensors sometimes requires pre-treatment of milk samples to remove fat and proteins [16]. The response of a lactose biosensor may additionally be affected by calcium chloride and ascorbic and/or uric acids [17].

The aim of the present work is to study the pre-steady state signal of a fibre-optical lactose biosensor in which lactose hydrolysis and the consequent oxidation of the hydrolysis products proceed in parallel, with lactose hydrolysis being the limiting step of the system, and to find optimal conditions for data acquisition and the calculation of parameters that can be applied for biosensor calibration based on the transient phase response of the system.

EXPERIMENTAL
The lactose biosensing system was based on a fibre-optical oxygen sensor, constructed in the Institute of Physics of the University of Tartu, and two soluble enzymes: β-gal and glucose oxidase (GOD, EC 1.1.3.4). The oxygen sensor comprised an optical fibre covered with an oxygen-sensitive Pt porphyrin-doped membrane, a source of excitation (λ = 405 nm), and a detector of fluorescent light (λ = 700 nm). The luminescence parameters of the oxygen-sensitive membrane depend on the concentration of dissolved oxygen according to the Stern-Volmer relationship [18].

The enzyme β-gal catalyses the hydrolysis of lactose into the monosaccharides glucose and galactose. The forming glucose is oxidized by dissolved oxygen into glucono-δ-lactone and H2O2. This reaction is catalysed by GOD, whose specificity is 1000 times higher towards glucose than towards galactose and lactose:

lactose + H2O --(β-gal)--> glucose + galactose
glucose + O2 --(GOD)--> glucono-δ-lactone + H2O2          (1)

Because of the oxidation of glucose, the concentration of dissolved oxygen decreases in the reaction medium. The decrease is proportional to the concentration of glucose. The degradation of the forming H2O2 is slow in comparison with the speed of glucose oxidation and does not influence the output of the oxygen sensor [19].

The kinetics of reactions (1) was followed with the oxygen sensor in air-saturated 0.14 M (5 g/dL) lactose solutions in a 0.1 M acetate buffer (pH = 5.60) under constant stirring at 25 °C. The reaction process was started with the injection of the enzymes into the air-tight reaction cell (volume 28 mL).
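As an aside, the Stern-Volmer relationship mentioned above can be inverted to recover the dissolved oxygen concentration from the measured luminescence; the quenching constant and reference intensity in the sketch below are invented placeholder values, not the calibration constants of the actual sensor.

```python
# Minimal Stern-Volmer sketch: I0 / I = 1 + K_SV * [O2], so
# [O2] = (I0 / I - 1) / K_SV.  K_SV and I0 are assumed placeholders.
K_SV = 4.0e3   # quenching constant [1/M], assumed
I0 = 1.00      # luminescence intensity at zero oxygen, assumed

def oxygen_from_intensity(intensity):
    """Dissolved O2 concentration [M] from luminescence intensity."""
    return (I0 / intensity - 1.0) / K_SV

# Air-saturated water at 25 °C holds roughly 0.25 mM dissolved O2;
# with the assumed constants that corresponds to an intensity of:
print(I0 / (1.0 + K_SV * 0.25e-3))   # -> 0.5
print(oxygen_from_intensity(0.5))    # -> 2.5e-4 M, recovering the input
```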
The output of the oxygen sensor was registered automatically at 1 s intervals. Oxygen concentrations were calculated with the original software OxySens 1.8. From every output curve we calculated the characteristic process parameters and used these for the characterization of the biosensor output. The calculations were carried out according to a biosensor model taking into account the enzyme kinetics, the diffusion of substrates to the sensor, and the system inertia. This enabled the calculation of the steady state parameters from the sensor transient phase data according to the model equation of [20,21], in which the signal is expressed through the oxygen concentration at the start of the reaction, the time t, the total signal change parameter A, the kinetic parameter B, the lag period τ_s (which includes the inertia of the oxygen sensor and the lag period of the enzyme-catalysed reactions), and the number of terms n.

All reagents used in the study were of analytical grade.

RESULTS AND DISCUSSION
Two processes were running simultaneously in the system: the hydrolysis of lactose and the consequent oxidation of the glucose formed in the course of the hydrolysis. The hydrolysis of lactose catalysed by β-gal from Aspergillus oryzae is a relatively slow process, with a k_cat (catalytic constant) value of 63 s⁻¹ [22]; it can be described by the Michaelis-Menten equation with competitive product inhibition by galactose [23]. The value of k_cat for GOD-catalysed β-D-glucose oxidation is around 300 s⁻¹ [22]. With a sufficient amount of GOD present in the system in comparison with β-gal, the oxidation of glucose can be considered to proceed in line with its formation, and the decrease of oxygen in the system is the indicator of the hydrolysis of lactose. In our system, the ratio of GOD and β-gal activities was 75 : 1 (counted in IUs), so the oxidation of glucose was approximately 350 times faster than the lactose hydrolysis, and the latter was the limiting step of the system.

The decrease of the dissolved oxygen concentration in the system in time at different concentrations of the enzymes (the ratio GOD/β-gal was kept constant) and a lactose concentration of 0.14 M (similar to that in raw milk) is shown in Fig. 1. The concentration of dissolved oxygen decreased in a nonlinear mode and no stationary state was achieved within 1 h. At higher enzyme concentrations (0.027-0.071 IU/mL, as β-gal) the oxygen available in the system was totally consumed in the oxidation of glucose and the sensor output signal reached its limiting value.

As the measurable oxygen decrease was nonlinear, it was necessary to select an appropriate model to characterize these curves. We used an integrated biosensor model allowing the calculation of the characteristic parameters of curves where the steady state is not achieved, with high accuracy from the transient phase data [19]. Each output curve was characterized with two independent parameters: the steady state (total signal change) parameter and the kinetic parameter. The accuracy of these calculated parameters depended on the depth of the limiting reaction during which the data were collected.

In the initial phase, after the injection of the enzymes into the reaction medium, there was a lag period during which the signal decreased slowly in a nonlinear mode, and the reactions going on in the system at different speeds due to the different concentrations of enzymes were practically indistinguishable (Fig. 1).
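Before the lag-period analysis, a rough numerical sketch of the two-step kinetics behind Fig. 1 may help: lactose hydrolysis is modelled as a slow Michaelis-Menten step feeding a faster GOD-catalysed oxidation that consumes the dissolved oxygen. All rate constants and concentrations below are illustrative assumptions, not values fitted to the experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters (not fitted to the experiments).
Vmax_gal, Km_gal = 2.0e-7, 0.05     # lactose hydrolysis [M/s], [M]
Vmax_god, Km_god = 7.0e-5, 0.03     # glucose oxidation  [M/s], [M]
O2_0, lactose_0 = 0.25e-3, 0.14     # initial O2 and lactose [M]

def rhs(t, y):
    lactose, glucose, o2 = y
    v_hyd = Vmax_gal * lactose / (Km_gal + lactose)
    # Oxidation needs both glucose and oxygen; a simple linear factor
    # in O2 shuts the reaction off as the oxygen runs out.
    v_ox = Vmax_god * glucose / (Km_god + glucose) * o2 / O2_0
    return [-v_hyd, v_hyd - v_ox, -v_ox]

sol = solve_ivp(rhs, (0.0, 3600.0), [lactose_0, 0.0, O2_0],
                t_eval=np.linspace(0.0, 3600.0, 361), rtol=1e-8)

o2 = sol.y[2]
print(f"O2 after 1 h: {o2[-1] / O2_0:.2%} of the initial value")
```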
We calculated the length of the initial lag period for each curve using the biosensor model [21] and found that it depended on the total enzyme concentrations. At lower GOD/β-gal concentrations (0.009-0.022 IU/mL, as β-gal) the lag period was approximately 150 s; at higher concentrations (0.027-0.071 IU/mL, as β-gal), up to 400 s. During this lag time, the deviation of the model from the experimental curves was considerable, and the calculated process parameters did not enable the calibration of the system. Accordingly, the simple and widely used calibration options that measure the sensor output at a certain fixed time moment will also lead to systematic errors, because the factors determining the course of the output signal curve depend on the speed of the measured processes. For the characterization of the lactose biosensing system we calculated the signal parameters from different selections of the sensor transient phase data, always including the start of the reaction (the injection of the enzymes) and following the reaction to different depths. The calculated values of the total signal change parameter for the studied reactions decreased almost linearly with the prolongation of the data collection periods used for the calculations (Fig. 2a) and did not reach any stable value within 3600 s. At the same time, the values of the kinetic parameter increased up to 4 times at higher enzyme concentrations (Fig. 2b). The effect of the length of the data collection period on the kinetic parameter was greater than its effect on the total signal change, and its impact depended on the speed of the measured reaction. From the curves shown in Fig. 2 we constructed biosensor calibration curves (the value of the calculated parameters vs enzyme concentration) for different data collection periods (0-400 s, 0-500 s, 0-600 s, etc.). The slopes of these calibration curves indicated the sensitivity of the system for definite data collection periods (Fig. 3). The biosensor sensitivity rose exponentially with the length of the data collection period when we used the total signal change to characterize the system (Fig. 3a). The parameters calculated from data of less than 500 s from the start of the reaction were not applicable for system calibration, as the slope of the calibration curve was very small (below 0.002 conc⁻¹). When data of 800 s were used, the slope of the calibration curve was over two times bigger than with data of 500 s, and the sensitivity of the system was sufficient to differentiate between processes going on at various speeds. After 500 s we could characterize the function as a linear regression with a slope of (1.364 ± 0.033) × 10⁻⁵ s⁻¹ (squared correlation coefficient R² = 0.994). For longer data collection periods the sensitivity was higher, but as the measuring time becomes very long, these are not suitable for practical applications. In case very long measuring periods are used, it should be remembered that the amount of dissolved oxygen in the reaction medium is a finite quantity and the system can run out of oxygen at higher enzyme concentrations (reactions at higher speed).
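The window-truncation effect described above can be reproduced with a toy fit: generate a noisy saturating signal, fit it over windows of growing length, and observe the fitted parameters drift. The single-exponential surrogate below merely stands in for the published multi-term model, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(t, A, B):
    """Single-exponential surrogate for the transient-phase model."""
    return A * (1.0 - np.exp(-B * t))

# Synthetic 'sensor' trace: assumed true A = 0.25 mM, slow kinetics.
t = np.arange(0.0, 3600.0, 1.0)
signal = model(t, 0.25e-3, 1.2e-3) + rng.normal(0.0, 2e-6, t.size)

for window in (400, 500, 800, 1600, 3200):
    sel = t <= window
    (A_fit, B_fit), _ = curve_fit(model, t[sel], signal[sel],
                                  p0=(1e-4, 1e-3))
    print(f"window {window:4d} s: A = {A_fit:.3e}, B = {B_fit:.3e}")
# Short windows barely constrain A; estimates stabilize as the window grows.
```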
The sensitivity of the system calculated on the basis of the kinetic parameter rose linearly with the length of the data collection period (Fig. 3b), with a slope of (1.716 ± 0.044) × 10⁻⁷ s⁻¹ (R² = 0.993). The sensitivity of the sensor increased 3.5 times when we used a data collection period of 3500 s instead of 500 s. The kinetic parameter was about 10 times less sensitive to the length of the data collection period than the total signal change parameter.

CONCLUSIONS
The calculation of the reaction steady state and kinetic parameters from the biosensor signal revealed that it is possible to characterize lactose hydrolysis using the biosensor transient phase output, but the sensitivity of the system depends on the length of the data collection period. Longer periods resulted in higher biosensor sensitivity, both by the total signal change and by the kinetic parameter. For reactions where the enzyme concentrations were higher (0.027-0.071 IU/mL, as β-gal), the steady state was not achieved even 1 h after the start of the reaction and the values of the reaction parameters could not be fixed. The minimum data collection period enabling the calculation of reaction parameters applicable for biosensor calibration was 500 s.

Fig. 1. Decrease of the dissolved oxygen concentration in time in the lactose biosensor at different enzyme concentrations, with the ratio of glucose oxidase and β-galactosidase (concentrations shown in the graph) activities kept constant at 75 : 1. The measurements were carried out in 0.14 M lactose solutions in an air-saturated 0.1 M acetate buffer (pH 5.60) at 25 °C.

Fig. 2. Values of the total signal change (a) and kinetic (b) parameters for data sets of different length at different β-galactosidase activities, with the ratio of glucose oxidase and β-galactosidase activities kept constant at 75 : 1. The measurements were carried out in 0.14 M lactose solutions in an air-saturated 0.1 M acetate buffer (pH 5.60) at 25 °C.

Fig. 3. The dependence of the slope of the lactose biosensor calibration curve, or the biosensor sensitivity, on the total signal change (a) and kinetic (b) parameters for data sets of different length. The ratio of glucose oxidase and β-galactosidase activities was kept constant at 75 : 1, with the β-galactosidase activity ranging from 0.009 to 0.071 IU/mL. The measurements were carried out in 0.14 M lactose solutions in an air-saturated 0.1 M acetate buffer (pH 5.60) at 25 °C.
2018-10-18T11:50:10.468Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "84489911821915ee7a2b8ead5ef0f950898a5eda", "oa_license": "CCBY", "oa_url": "https://kirj.ee/public/proceedings_pdf/2011/issue_2/proc-2011-2-136-140.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "84489911821915ee7a2b8ead5ef0f950898a5eda", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
119064067
pes2o/s2orc
v3-fos-license
Large Synoptic Survey Telescope Solar System Science Roadmap
The Large Synoptic Survey Telescope (LSST) is uniquely equipped to search for Solar System bodies due to its unprecedented combination of depth and wide field coverage. Over a ten-year period starting in 2022, LSST will generate the largest catalog of Solar System objects to date. The main goal of the LSST Solar System Science Collaboration (SSSC) is to facilitate the efforts of the planetary community to study the planets and small body populations residing within our Solar System using LSST data. To prepare for future survey cadence decisions and ensure that interesting and novel Solar System science is achievable with LSST, the SSSC has identified and prioritized key Solar System research areas for investigation with LSST in this roadmap. The ranked science priorities highlighted in this living document will inform LSST survey cadence decisions and aid in identifying software tools and pipelines that need to be developed by the planetary community as added value products and resources before the planned start of LSST science operations.

INTRODUCTION
Taking an inventory of the Solar System is one of the four key themes defining the science-driven requirements for the Large Synoptic Survey Telescope (LSST; Ivezic et al. 2008). First light is expected in 2020, with full LSST science operations planned to commence in 2022. Objects brighter than approximately 16th magnitude will saturate in LSST observations, including all of the known Solar System's planets. Thus, the bulk of LSST's Solar System science will be derived from small body detections and observations. LSST will image and monitor millions of Solar System bodies. Over its 10-year lifespan, LSST is expected to catalog over 5 million Main Belt asteroids, almost 300,000 Jupiter Trojans, over 100,000 Near Earth Objects (NEOs), over 40,000 Kuiper belt objects (KBOs), tens of interstellar objects, and over 10,000 comets (LSST Science Collaboration et al. 2009; Solontoi et al. 2010; Cook et al. 2016; Engelhardt et al. 2017; Trilling et al. 2017). Many of these objects will receive hundreds of observations in multiple bandpasses. LSST will report detections of moving objects in various filters (ugrizy) between approximately 16 and 24.5 magnitudes (in r band) over its observing footprint (covering ∼18,000 square degrees in the Wide-Fast-Deep survey), link these detections into orbits, and provide metadata on observing conditions. It will be up to the planetary community to apply a wide variety of methods to synthesize and combine this information in order to fully leverage the LSST dataset for Solar System science. These goals include probing planetary formation and evolution and placing the Solar System in context with other planetary systems. This requires an understanding of the current status of our Solar System: the orbital and size distributions of small bodies (including asteroids, comets, planetary satellites, KBOs, and inner Oort cloud objects) and their physical properties (e.g., chemical composition, physical shape, mass, rotation rate, binarity, density, porosity, and mass loss rates). The LSST Solar System Science Collaboration (SSSC) aims to prepare methods and tools to analyze LSST data for Solar System science, as well as to develop optimum survey strategies for discovering moving objects throughout the Solar System with LSST.
We present a science roadmap that outlines the SSSC's ranked science priorities achievable with LSST during its planned baseline operations, expected to be 10 years. The list outlined here is not exhaustive, but represents the most important Solar System-based research goals in the LSST era, based on the input from dedicated topical working groups. Crucial decisions about the LSST Wide-Fast-Deep survey cadence, special ancillary surveys that would maximize LSST science (mini-surveys), and target fields with deeper coverage and more frequent temporal sampling than the Wide-Fast-Deep survey (deep drilling fields) will be made over the next several years. This document serves as a guide for making these future cadence decisions. This roadmap will also help identify research areas where preparatory software tools and pipelines will need to be developed by the SSSC and the broader community to produce data products beyond what the LSST project will provide through annual and nightly data releases. We expect this roadmap to be a living document that will be updated periodically as needed before LSST science operations commence.

WHAT WILL LSST PROVIDE?
The LSST project will deliver a Moving Object Processing System (MOPS) capable of identifying the bulk of transient moving Solar System bodies within LSST imaging data. An estimate of the predicted number of Solar System objects detected by LSST is detailed in the LSST Science Book (LSST Science Collaboration et al. 2009). The LSST Wide-Fast-Deep survey, with a proposed Northern Ecliptic Spur mini-survey, would detect on the order of 100,000 NEOs, 5.5 million Main Belt asteroids, 280,000 Jupiter Trojans, and 40,000 KBOs, with between 200 and 350 observations per object (for bright objects) in various filters. Solontoi et al. (2010) estimate that LSST will discover approximately 10,000 comets, with 50 or more observations per object in various filters. Cook et al. (2016), Engelhardt et al. (2017), and Trilling et al. (2017) estimate that at least one interstellar object (like 'Oumuamua; Meech et al. 2017) is expected to be discovered by LSST annually. Additionally, detailed estimates of LSST NEO detection rates can be found in Grav et al. (2016), Vereš & Chesley (2017), and Jones et al. (2017). LSST's basic capabilities will produce orbital distributions for large populations of Solar System objects, which can be de-biased using either the nominal completeness function or a user-supplied 'truth' population (through the metadata on the observing history), with precision photometry in multiple bandpasses and accurate astrometry. Beyond detection and orbit characterization, there will be multi-band photometry for each moving object detected, although in the wide survey objects will be measured in different filters at different times (with times between measurements that can vary from a few minutes to many days or months). There will be sparse light curves and photometric variability information, including an upper limit on an object's brightness when LSST does not detect a source, although these light curves will have variable time sampling. The catalogs will include measurements of the point-spread-function (PSF) of each source and a measurement of the deviation from the stellar PSF.

There will also be tools to retrieve cutout images of each source to enable searches for outbursts, brightening events, and cometary activity, and to perform trailed photometry (using a non-circular photometric aperture matched to the object's on-sky motion during the exposure) and forced photometry (performing photometry at a fixed/predicted position rather than fitting a centroid). Additionally, markers of comet-like activity or disruption events (including measures of source extendedness) will be reported in a nightly public alert stream.

OVERVIEW OF LSST DATA PRODUCTS AND DATA ACCESS SERVICES
An overview of LSST Data Management is provided in Jurić et al. (2015). The summary below describes what LSST will provide for Solar System science. The LSST Data Products Definition Document (DPDD) is the authoritative source; here we provide a brief overview and interpretation of that document in the context of detecting and cataloging the small body populations residing within the Solar System. LSST is expected to image the sky with a cadence generally appropriate for detecting small moving objects in the Solar System in the ugrizy filters over the Southern hemisphere. A proposed mini-survey would extend the LSST Wide-Fast-Deep survey coverage to the ecliptic plane + 10 degrees in the Northern hemisphere; this 'Northern Ecliptic Spur' may only be imaged in griz. The observing cadence is not fully set and may be revised based on feedback from the science collaborations and the broader community. We refer the reader to the LSST Observing Strategy White Paper (LSST Science Collaboration et al. 2017) for more information. The LSST project will create difference images in which the LSST observations in each visit are subtracted from a template image to detect moving objects (plus transient and variable sources). Transient sources that are detected at 5-sigma or more above the sky background in the difference images will be identified and referred to as diaSources in the LSST Database Schema, where DIA refers to Difference Image Analysis. Within 60 seconds of each observation, diaSources will be made public via a real-time stream of observation reports (alerts). These public alerts will include transients and variables and will not include linking between different visits, but known moving objects (as well as known variable sources) will be identified with coincident detections in the alerts. The alert information will include astrometry, photometry, and PSF shape information, including trailing and direction of motion (to identify very fast moving Near-Earth objects [NEOs], even if unknown) and identification of non-stellar PSFs (to identify outbursts or cometary activity). Solar System bodies located at distances closer than ∼200 au will have sufficient on-sky motion between visits taken in a single night to be identified as moving objects by MOPS. MOPS identifies moving objects from the catalogs of diaSources. MOPS will link diaSources from each visit within one night into tracklets (potential linkages in the same night using linear extrapolation). Between nights, MOPS links tracklets into tracks (potential linkages over three nights using a quadratic fit). Only a track that can be fit to a heliocentric orbit with reasonable residuals is considered to be a reportable detection of a moving Solar System object (called an SSObject in the LSST database schema). All linked tracks and SSObjects will be reported daily to the Minor Planet Center.
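As a toy illustration of the intra-night linking step described above (the production MOPS is far more sophisticated), the sketch below pairs detections from two visits whose implied sky motion is plausible for a Solar System object; the coordinates, epochs, and rate cuts are invented placeholders.

```python
import numpy as np

# Each detection: (ra_deg, dec_deg, t_mjd). Two visits ~30 min apart.
visit1 = np.array([[150.001, 2.000, 58000.000],
                   [150.250, 2.100, 58000.000]])
visit2 = np.array([[150.004, 2.001, 58000.021],
                   [150.250, 2.100, 58000.021]])   # second source is stationary

MAX_RATE = 5.0   # assumed tracklet cut [deg/day]; real cuts are tuned
MIN_RATE = 0.05  # drop (nearly) stationary pairs: likely not moving objects

tracklets = []
for d1 in visit1:
    for d2 in visit2:
        dt = d2[2] - d1[2]                                   # days
        dra = (d2[0] - d1[0]) * np.cos(np.radians(d1[1]))    # sky-plane RA
        ddec = d2[1] - d1[1]
        rate = np.hypot(dra, ddec) / dt                      # deg/day
        if MIN_RATE < rate < MAX_RATE:
            tracklets.append((d1, d2, rate))

for d1, d2, rate in tracklets:
    print(f"tracklet at {rate:.2f} deg/day")   # ~0.15 deg/day for the mover
```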
There will also be tools to retrieve cutout images of each source to enable searches for outbursts, brightening events, and cometary activity and to perform trailed photometry (using a non-circular photometric aperture matched to the object's on-sky motion during the exposure) and forced photometry (performing photometry at a fixed/predicted position rather than fitting a centroid). Additionally, markers of comet-like activity or disruption events (including measures of source extendedness) will also be reported in a nightly public alert stream.

OVERVIEW OF LSST DATA PRODUCTS AND DATA ACCESS SERVICES

An overview of LSST Data Management is provided in Jurić et al. (2015). The summary below describes what LSST will provide for Solar System science. The LSST Data Products Definition Document (DPDD) is the authoritative source; here we provide a brief overview and interpretation of that document in the context of detecting and cataloging the small body populations residing within the Solar System. LSST is expected to image the sky with a cadence generally appropriate for detecting small moving objects in the Solar System in ugrizy filters over the Southern hemisphere. A proposed mini-survey would extend the LSST Wide-Fast-Deep survey coverage to the ecliptic plane + 10 degrees in the Northern hemisphere; this 'Northern Ecliptic Spur' may only be imaged in griz. The observing cadence is not fully set and may be revised based on feedback from the science collaborations and the broader community. We refer the reader to the LSST Observing Strategy White Paper (LSST Science Collaboration et al. 2017) for more information. The LSST project will create difference images where the LSST observations in each visit are subtracted from a template image to detect moving objects (plus transient and variable sources). Transient sources that are detected at 5-sigma or more above the sky background in the difference images will be identified and referred to as diaSources in the LSST Database Schema, where DIA refers to Difference Image Analysis. Within 60 seconds of each observation, diaSources will be made public via a real-time stream of observation reports (alerts). These public alerts will include transients and variables and will not include linking between different visits, but known moving objects (as well as known variable sources) will be identified with coincident detections in the alerts. The alert information will include astrometry, photometry, and PSF shape information including trailing and direction of motion (to identify very fast moving Near-Earth Objects [NEOs], even if unknown) and identification of non-stellar PSFs (to identify outbursts or cometary activity). Solar System bodies located at distances closer than ∼200 au will have sufficient on-sky motion between visits taken in a single night to be identified as moving objects by MOPS. MOPS identifies moving objects from the catalogs of diaSources. MOPS will link diaSources from each visit within one night into tracklets (potential linkages in the same night using linear extrapolation). Between nights, MOPS links tracklets into tracks (potential linkages over three nights using a quadratic fit). Only a track that can be fit to a heliocentric orbit with reasonable residuals is considered to be a reportable detection of a moving Solar System object (called an SSObject in the LSST database schema). All linked tracks and SSObjects will be reported daily to the Minor Planet Center.
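To make the intra-night linking stage concrete, here is a minimal sketch of tracklet building in the spirit of the MOPS description above: detections from the same night are paired when the implied angular rate is physically plausible. The rate threshold, time limits, and flat-sky small-angle approximation are illustrative assumptions, not the actual MOPS implementation.

```python
import numpy as np

def build_tracklets(ra, dec, mjd, max_rate=2.0, min_dt=0.01, max_dt=0.1):
    """Return index pairs (i, j) forming candidate intra-night tracklets.

    ra, dec in degrees; mjd in days; max_rate in degrees per day.
    Assumes all detections passed in belong to a single night.
    """
    order = np.argsort(mjd)
    tracklets = []
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            i, j = order[a], order[b]
            dt = mjd[j] - mjd[i]
            if dt > max_dt:
                break          # detections are time-sorted; later ones are worse
            if dt < min_dt:
                continue       # too close in time to constrain a rate
            # Flat-sky angular separation with the cos(dec) correction.
            cosd = np.cos(np.radians(0.5 * (dec[i] + dec[j])))
            sep = np.hypot((ra[j] - ra[i]) * cosd, dec[j] - dec[i])
            if sep / dt <= max_rate:
                tracklets.append((i, j))
    return tracklets
```

A production linker must additionally handle the combinatorial growth of candidate pairs (e.g., with spatial indexing) and the inter-night track fitting described above.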
Additionally, MOPS will link additional diaSources with previously known or newly discovered SSObjects to extend the orbital arc for each moving object (as new images are taken or as new objects are discovered). The LSST project will provide access to the associated metadata stored with SSObjects as well as the observations themselves (called Sources when measured in images or diaSources when measured in difference images) in an online searchable database; this information will be available in yearly data releases as well as a daily database kept up to date with daily moving object processing. The yearly data releases will provide measurements of each moving object detected with absolute astrometry accurate to better than 0.05′′ and relative astrometry precise to 0.01′′ over spatial scales of a few tens of arcminutes; absolute photometry will be accurate to 10 mmag for bright (r < 20) sources, and relative photometry will be precise to 5 mmag for observations over spatial scales small relative to the visible sky in griz (7 mmag in uy). Through a Science User Interface (SUI), users will have access to LSST catalog and image query tools. Users will be able to access metadata on observing conditions, such as the telescope pointing history, seeing history, cloud conditions, and estimated 5-sigma limiting magnitudes in the difference images for each visit. Through the SUI, users will be able to access catalog entries or postage stamps associated with each diaSource and/or Source for a particular SSObject as requested, as well as receive postage stamps from LSST images which match a user-defined orbit or user-defined RA (right ascension)/Dec (declination)/time regions. Some computing resources through data centers will be available for extended analysis of catalogs or images where the analysis routines are written by users and interface with application programming interfaces (APIs) provided by the LSST project.

SCIENCE PRIORITIES OF THE SSSC

In the following section, we briefly outline the ranked LSST science priorities as determined by the SSSC membership. We divide the small body populations of the Solar System into four broad categories based on location and similar discovery challenges:

• Active Objects - broadly consisting of all categories of activity in the small body populations (i.e., objects exhibiting some type of mass loss): short period comets, long period comets, Main Belt comets, impact-disrupted or rotationally-generated active asteroids, etc.

• Near Earth Objects (NEOs) and Interstellar Objects - broadly consisting of objects on orbits inward of or diffusing inward from the asteroid belt and objects on unbound orbits passing through the Solar System, like 'Oumuamua (Meech et al. 2017).

• Inner Solar System - broadly consisting of Main Belt asteroids, Mars/Jupiter Trojans, and related populations (see the Inner Solar System goals below).

• Outer Solar System - broadly consisting of Centaurs, KBOs, and more distant Solar System bodies (see the Outer Solar System goals below).

For each of these core small body populations, the SSSC has a list of science goals to achieve with LSST, ranked from highest priority to lowest.

Active Objects

1. Discovery and orbital classification of large numbers of active objects to understand and model the onset and termination of activity in the different Solar System small body populations and to explore correlations between physical/orbital characteristics and transient activity.

2. Discovery/frequency/population estimates of coma and/or dust tail-bearing bodies in the Solar System small body populations including Main-belt comets, NEOs, collisionally impacted asteroids, Centaurs, KBOs, short and long period comets, and interstellar objects to better probe the drivers of such activity and measure the size of these reservoirs.
3. Detection/frequency/population estimates of anomalous outbursts and rapid brightening/splitting events above the expected brightness evolution of objects in the Solar System small body populations to better probe the drivers of such activity and the size of these reservoirs.

4. Characterization of the changes in color, morphology, brightness, rotation, shape, and other observable properties of active objects over time (including changes from pre-activity/outburst properties) and at different epochs in the orbits of these bodies to probe surface changes and better explore the various drivers of such activity and their evolution.

5. Determination of rotational light curves for a large sample of active objects to study physical properties of active objects, including the spin angular momentum distribution, shape distribution, and binary frequency.

6. Detection and characterization of the non-gravitational forces (including jet-driven and collisional accelerations) acting on active bodies to compute better original and future orbits (especially important for identifying dynamically new or long period comets) and estimate rotation poles and seasonal states of active body nuclei.

Near-Earth Objects (NEOs) and Interstellar Objects

1. Compilation of an NEO catalog with high completeness and adequate orbit quality.

2. Color measurements and broad phase coverage of NEOs, including distinguishing NEOs of cometary origin through color measurements and probing the color distribution of ten-meter scale objects.

3. Timely advance notice of close approaches or potential impacts to facilitate time critical characterization efforts including radar, spectroscopic, and light curve observations.

4. Measurement of the orbital, absolute magnitude, and taxonomy distributions within the NEO population, enabling the identification of correlations between taxonomy and orbital properties for all NEOs and the determination of the orbital distribution of ten-meter scale objects.

5. Determination of the long-term impact flux of NEOs as a function of size, for ≥ 140 m bodies in particular.

6. Discovery/frequency/population estimates of interstellar objects on unbound orbits passing through the Solar System as a potential probe of planet formation and planetesimal ejection rates in the local solar neighborhood.

7. Determination of rotational light curves for a large sample of NEOs to study physical properties of NEOs, including the spin angular momentum distribution, shape distribution, and binary frequency.

8. Detection and characterization of the non-gravitational forces (including the Yarkovsky effect, solar radiation pressure, outgassing, and collisions) acting on NEOs to explore and better understand how NEO orbits evolve over time.

9. Measurement of the absolute magnitude distribution of temporarily-captured objects (NEOs that are temporarily captured by the gravity well formed by the Earth and Moon) in order to compare to model predictions and to probe the low end of the asteroid size/absolute magnitude distribution.

10. Investigation of the possible NEO disruption mechanisms active at small perihelion distances to probe NEO internal structure and test dynamical models.

Inner Solar System

1. Discovery and orbital classification of large numbers of asteroids and Mars/Jupiter Trojans to probe their orbital and absolute magnitude distributions and to measure the size frequency distributions of different taxonomic classes.
2. Measurement of high quality astrometry for new and previously known asteroids, Mars/Jupiter Trojans, and Jupiter irregular satellites to refine orbits and improve ephemerides for stellar occultation predictions.

3. Detection of impacts of small asteroids onto large ones and detection of asteroid disruption by impact to probe the current collisional environment within the asteroid belt, study dust dynamics, constrain asteroid internal structure, and explore space weathering processes through comparison of surfaces before and after detected impacts.

4. Determination of colors and compositions for a large sample of asteroids, specifically including Jupiter's irregular satellites, Mars/Jupiter Trojans, Hildas, Cybeles, and Phobos and Deimos, to identify correlations with dynamical and taxonomic information with implications for understanding the formation of the inner solar system (e.g., chemical distribution in the primordial disk; collisional family parent bodies and formation events).

5. Investigation of the hydration of C-complex objects and Main Belt asteroids to explore the compositional evolution of the inner solar system and test giant planet migration models.

6. Determination of rotational light curves for a large sample of asteroids in different taxonomic classes to study physical properties of asteroids, including the spin angular momentum distribution, shape distribution, and binary frequency.

7. Improved characterization of newly discovered and previously known asteroid families, clusters, and pairs to study genetic relationships and homogeneity of collisional families at small sizes.

8. Measurement of asteroid masses and bulk densities from mutual gravitational interactions to probe asteroid internal structures and test planet formation models.

9. Detection and frequency of rotational fission within the non-NEO asteroid populations to probe internal structure and test dynamical models.

Outer Solar System

1. Discovery and orbital classification of large numbers of outer Solar System objects over a wide range of sizes (H > 9) and orbits to characterize the size-frequency-orbit distribution of KBOs and to probe the formation and evolution of the outer solar system (e.g., comet/Centaur pathways, collisional evolution, Neptune migration, etc.).

2. Discovery and orbital classification of objects on unusual or extreme orbits, especially inner Oort cloud objects (i.e., Sedna-like objects) with high perihelia (q > 40 au) and objects with very high inclination (i > 40 deg), to place constraints on proposed origin scenarios (e.g., the putative Planet 9; Trujillo & Sheppard 2014; Sheppard & Trujillo 2016; Brown 2017).

3. Determination of colors for large numbers of objects to identify correlations with dynamical information with implications for understanding the formation of the outer solar system (e.g., chemical distribution in the primordial disk; collisional families).

4. Determination of rotational light curves for large numbers of objects from different dynamical classes to study physical properties of KBOs, including spin angular momentum distribution and binary frequency.

5. Discovery and orbital classification of large numbers of objects in resonance with the giant planets, especially the libration islands of high-order resonances of Neptune, to constrain models of Neptune migration.

6. Discovery and clear characterization (e.g., PSF shape) of binaries and multiple systems wide enough to be resolved or partially resolved.
7. Measurement of accurate and precise astrometry for known and new distant Solar System bodies to enable stellar occultation observations.

COMMUNITY SOFTWARE DEVELOPMENT

Given the unprecedented scale of the LSST survey, tools that can conduct rapid automated analyses of the large quantities of data produced on a nightly basis will be essential for taking full advantage of LSST's scientific potential. Furthermore, a collaborative approach will almost certainly be required within the planetary community to ensure that the broad range of analysis tools and software pipelines that will be required are available and ready to be implemented by the time LSST begins full science operations. In this section, we briefly describe the status of the SSSC's community software and infrastructure development. One of the key objectives of the software development process being undertaken by the SSSC is the identification of common software and analysis needs among the planetary community, based on the list of scientific priorities detailed in Section 4, in order to organize a coordinated effort that best maximizes available resources and effort. In some cases, this development will involve the adaptation or automation of existing software pipelines, while in other cases, entirely new tools will need to be developed. LSST will provide some computational resources for the analysis of LSST images and associated data products with user-added software. To make the most of these limited computational resources, it will be in the planetary community's interest to aggregate the various community-developed tools that require access to LSST image data and apply them to each image of a moving object detection at the same time. This will require image data to be retrieved only once, minimizing the computational draw on the LSST servers. A key goal of the SSSC's software development effort is to create an overarching database that will store all user-derived values and output associated with LSST moving object discoveries produced by SSSC analysis tools. To facilitate SSSC investigations that take full advantage of the diverse higher-level data products that are expected to be generated from LSST observations, we are planning for this database to be fully accessible through APIs and a web-based query form.

FUTURE WORK

This science roadmap serves as a starting point towards maximizing LSST's potential for Solar System science. The extent to which the above science priorities will be achieved is directly related to the final survey strategy and cadence selected. The next step for the SSSC will be to generate quantifiable success metrics for each of the above science priorities that can be tested against the various LSST Wide-Fast-Deep survey observing strategies and their associated simulated observing histories; a simple example of such a metric is sketched below. Additionally, one of the next steps for the SSSC will be to identify which of the above science goals would be significantly enhanced by or would be only achievable through a specially designed mini-survey (such as the proposed 'Northern Ecliptic Spur') or deep drilling field. Further actions include identifying and beginning development on user-added community software tools and pipelines needed beyond what the LSST project will provide in order to carry out the desired science goals in this roadmap.
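As a toy example of the quantifiable success metrics mentioned above, the following sketch computes the fraction of synthetic objects that receive enough observations within a discovery window, given per-object lists of observation nights. The thresholds and input format are illustrative assumptions on our part; production metrics would be evaluated against simulated LSST observing histories.

```python
import numpy as np

def discovery_fraction(obs_nights_per_object, n_min=6, window=15):
    """Fraction of objects with at least n_min observations falling
    within any window of `window` nights (a simple discoverability proxy)."""
    found = 0
    for nights in obs_nights_per_object:
        nights = np.sort(np.asarray(nights))
        for k in range(len(nights) - n_min + 1):
            if nights[k + n_min - 1] - nights[k] <= window:
                found += 1
                break
    return found / max(len(obs_nights_per_object), 1)

# Toy usage: three objects with different (assumed) sampling histories.
histories = [np.arange(0, 30, 3), np.array([0, 20, 50]), np.arange(0, 12, 2)]
print(discovery_fraction(histories, n_min=4, window=15))  # 2 of 3 recovered
```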
Event-by-event simulation of quantum phenomena: Application to Einstein-Podolsky-Rosen-Bohm experiments

We review the data gathering and analysis procedure used in real Einstein-Podolsky-Rosen-Bohm experiments with photons and we illustrate the procedure by analyzing experimental data. Based on this analysis, we construct event-based computer simulation models in which every essential element in the experiment has a counterpart. The data is analyzed by counting single-particle events and two-particle coincidences, using the same procedure as in experiments. The simulation models strictly satisfy Einstein's criteria of local causality, do not rely on any concept of quantum theory or probability theory, and reproduce all results of quantum theory for a quantum system of two $S=1/2$ particles. We present a rigorous analytical treatment of these models and show that they may yield results that are in exact agreement with quantum theory. The apparent conflict with the folklore on Bell's theorem, stating that such models are not supposed to exist, is resolved. Finally, starting from the principles of probable inference, we derive the probability distributions of quantum theory of the Einstein-Podolsky-Rosen-Bohm experiment without invoking concepts of quantum theory.

I. INTRODUCTION

As nanofabrication technology is advancing from the stage of scientific experiments to the stage of building nanoscopic systems that perform useful tasks, it is important to have computational tools that allow the designer to assess, with adequate reliability, how the system will behave [1]. Quantum theory provides the foundation for developing these tools. However, just like any other theory, quantum theory has its own limitations. If the successful operation of the device depends on individual events rather than on the statistical properties of many events, quantum theory can no longer be used to describe the behavior of the device. Indeed, as is well known from the early days in the development of quantum theory, quantum theory has nothing to say about individual events [2,3,4]. Reconciling the mathematical formalism that does not describe individual events with the experimental fact that each observation yields a definite outcome is referred to as the quantum measurement paradox and is the most fundamental problem in the foundation of quantum theory [4]. Computer simulation is widely regarded as complementary to theory and experiment [5]. If computer simulation is indeed a third methodology, it should be possible to simulate quantum phenomena on an event-by-event basis. In view of the fundamental problem alluded to above, there is little hope that we can find a simulation algorithm within the framework of quantum theory. However, if we think of quantum theory as a recipe to compute probability distributions only, there is nothing that prevents us from stepping outside the framework that quantum theory provides. To head off possible misunderstandings, it may be important to rephrase what has been said. Of course, we could simply use pseudo-random numbers to generate events according to the probability distribution that is obtained by solving the time-independent Schrödinger equation. However, that is not what we mean when we say that within the framework of quantum theory, there is little hope to find an algorithm that simulates the individual events and reproduces the expectation values obtained from quantum theory.
The challenge is to find algorithms that simulate, event-by-event, the experimental observations that, for instance, interference patterns appear only after a considerable number of individual events have been recorded by the detector [6,7], without first solving the Schrödinger equation. In a number of recent papers [8,9,10,11,12,13], we have demonstrated that locally-connected networks of processing units with a primitive learning capability can simulate, event-by-event, the single-photon beam splitter and Mach-Zehnder interferometer experiments of Grangier et al. [6]. Furthermore, we have shown that this approach can be generalized to simulate universal quantum computation by an event-by-event process [9,12,13]. Therefore, at least in principle, our approach can be used to simulate all wave interference phenomena and many-body quantum systems using particle-like processes only. This work suggests that we may have discovered a procedure to simulate quantum phenomena using causal, Einstein-local, event-based processes. Our approach is not an extension of quantum theory in any sense, nor is it a proposal for another interpretation of quantum mechanics. The probability distributions of quantum theory are generated by local, causal processes. According to the folklore about Bell's theorem, a procedure such as the one that we discovered should not exist. Bell's theorem states that any local, hidden variable model will produce results that are in conflict with the quantum theory of a system of two S = 1/2 particles [14]. However, it is often overlooked that this statement can be proven for a (very) restricted class of probabilistic models only. Indeed, minor modifications to the original model of Bell lead to the conclusion that there is no conflict [15,16,17]. In fact, Bell's theorem does not necessarily apply to the systems that we are interested in, as both simulation algorithms and actual data do not need to satisfy the (hidden) conditions under which Bell's theorem holds [18,19,20]. Furthermore, we have given analytical proofs that two-particle correlations of the simulation models agree exactly with the quantum theoretical expression [21,22].

A. Aim of this work

In this paper, we take the point of view that the fundamental problem, originating from the work of Einstein, Podolsky, and Rosen (EPR) [23], reformulated by Bohm [3] and studied in detail by Bell [14], is to explain how individual events, registered by different detectors in such a way that a measurement on one particle does not have a causal effect on the result of the measurement on the other particle (Einstein's criterion of local causality), exhibit the correlations that are characteristic for a quantum system in an entangled state. Quantum theory is compatible with these facts: in the quantum physics community, it is generally accepted that the results of Einstein-Podolsky-Rosen-Bohm (EPRB) experiments agree with the predictions of quantum theory [24,25,26,27,28,29,30,31,32]. In this paper, we review constructive proofs that there exist (simple) computer simulation algorithms that satisfy Einstein's criterion of local causality and exactly reproduce the results of the quantum theoretical description of real EPRB experiments [21,22,33,34,35]. These algorithms generate the same type of data as experiments and employ the same procedure as used in experiments to analyze the data. In view of the quantum measurement paradox [2,4], the latter prohibits the use of algorithms that rely on (concepts of) quantum theory.
In addition, for the reasons explained later, these simulation algorithms do not rely on techniques of inductive inference (probability theory) to draw conclusions from the data. In this paper, we also discuss the apparent conflict with Bell's theorem. To appreciate the fundamental issues that are involved, it is necessary to understand well the logical relation between computer simulation, experiment and theory on the one hand, and data and theory on the other hand. Therefore, we first elaborate on these relationships.

B. Computer simulation versus experiment and theory

In general, and in the analysis of real EPRB experiments [3,23] in particular, it is important to recognize that there are fundamental, conceptual differences between the set of experimental facts, their interpretation in terms of a mathematical model, and a computer simulation of the facts. Obviously, because of the limited precision of the instruments, any record of experimental facts is just a set of integer numbers (floating point numbers have a finite number of digits and can therefore be regarded as integer numbers). Theories that describe Newtonian mechanics or electrodynamics assign real numbers to experimentally observable quantities. The relation between theory and experimental data is one-to-one: the experimental accuracy determines the number of significant digits of the real numbers. These theories have a deductive character. Quantum theory assigns a probability, a real number between zero and one, for an event (= experimental fact) to occur [2,14,36]. However, we can always use an integer number to represent the event itself (in any real experiment the number of events is necessarily finite). By assigning probabilities to events, we change the character of the theoretical description on a fundamental level: instead of deduction, we (have to) use inductive inference to relate a theoretical description to the facts [2,37]. Although probability theory provides a rigorous mathematical framework to make such inferences, there are ample examples that illustrate how easy it is to make the wrong inference, also for mundane, every-day problems [37,38,39,40] that are not related to quantum mechanics at all. Subtle mistakes, such as dropping some of the conditions [41] or mixing up the meaning of physical independence and logical independence, can give rise to all kinds of paradoxes [18,19,36,42,43,44,45,46]. In general, a computer simulation approach does not need the machinery of probability theory to relate simulation data to the experimental facts. A digital computer can generate sets of integer numbers only. We can compare these numbers to the experimental data directly, without recourse to inductive inference. On the one hand, this puts computer simulation in the luxury position that it cannot suffer from mistakes of the kind alluded to earlier, simply because there is no need to use inductive inference. On the other hand, using the computer, we are strictly bound to the elementary rules of logic and arithmetic. Therefore, it is not legitimate to use arguments such as "in an experiment it is impossible to repeat the experiment twice and get exactly the same answer". While this statement is correct with very high probability, when we use a digital computer it is logically false because we can always exactly repeat the same calculation (we exclude the possibility that the computer is malfunctioning).
Therefore, in a computer simulation, it should be possible to explain the facts without invoking "loopholes" such as detection efficiency or counterfactual reasoning. A graphical representation of the point of view taken in this paper is given in Fig. 1. On the left, we have processes that generate events. Each event is represented by one or more numbers, which we call raw data. Experience or a new idea provides inspiration to choose one or more methods to analyze the data. Typically, this analysis maps the raw data onto a few numbers (called averages and coincidence counts in Fig. 1); that is, the raw data is being compressed. On the right-hand side, we have several candidate mathematical models, "theories", that may "explain" the results of the data analysis. But how do we relate data to (quantum) theory? It is essential to recognize that before we can address this question, we have to make the hypothesis that there exists some process that gives rise to the observed data. Otherwise, we cannot go beyond the description of merely giving the data as it is. Furthermore, a useful theoretical model should give a description of the data that is considerably more compact than the data itself. Crossing the line that separates the model space from the data space requires making the fundamental hypothesis that the process that gives rise to the data can be described within the framework of probability theory. Only then are we in the position that we can use probability theory to relate the mathematical model to the observed frequencies. Of course, this is consistent with the fact that quantum theory does not describe the individual events themselves [2,4]. In this paper, the rules of probability theory are mainly used as a tool to reason in a logically consistent manner [37,47], to make logical inferences about the frequencies that we can compute from the observed data [37,40]. These inferences concern logical relations which may or may not correspond to causal physical influences [37]. As we will see later, much of the mysticism surrounding Bell's theorem can be traced back to the failure to recognize that probability theory is not defined through frequencies. To avoid misunderstandings of what we are aiming to accomplish here, it may be useful to draw an analogy with methods for simulating classical statistical mechanics [5]. According to the theory of equilibrium statistical mechanics, the probability that a system is in the state with label $n$ is given by $p_n = e^{-\beta E_n}/Z$, where $E_n$ is the energy of the state and $\beta = 1/k_B T$, where $k_B$ is Boltzmann's constant and $T$ is the temperature. Disregarding exceptional cases such as the two-dimensional Ising model, for a nontrivial many-body system the partition function $Z = \sum_{n=1}^{N} e^{-\beta E_n}$, where $N$ is the number of different states of the system (usually very large), is unknown. Hence, $p_n$ is not known. Can we construct a simulation algorithm that generates states according to the unknown probability distribution $(p_1, \dots, p_N)$? An affirmative answer to this question was given by Metropolis et al. [5,48,49]. The basic idea is to design an artificial dynamical system, a Markov chain or master equation, that samples the space of $N$ states such that in the long run, the frequency with which this system visits the state $n$ approaches $p_n$ with probability one [5,48].
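A minimal sketch of the Metropolis idea just described: states $n$ are visited with frequencies approaching $p_n = e^{-\beta E_n}/Z$ even though $Z$ is never computed, because the acceptance step only needs the ratio $p_m/p_n$. The random energy levels and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.uniform(0.0, 2.0, size=50)    # energies E_n of N = 50 states (toy model)
beta = 1.5

n = 0                                  # current state
visits = np.zeros(E.size)
for _ in range(200_000):
    m = rng.integers(E.size)           # propose a new state uniformly at random
    # Accept with probability min(1, p_m / p_n) = min(1, exp(-beta*(E_m - E_n)));
    # the unknown partition function Z cancels in the ratio.
    if rng.random() < np.exp(-beta * (E[m] - E[n])):
        n = m
    visits[n] += 1

freq = visits / visits.sum()           # observed visit frequencies
p = np.exp(-beta * E); p /= p.sum()    # exact Boltzmann distribution, for comparison
print(np.max(np.abs(freq - p)))        # small for long runs
```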
Looking back at Fig. 1, if we replace "event-by-event simulation algorithm(s)" by "Metropolis Monte Carlo Method", "Average ... counts" by "Average energy ...", and "Quantum theory" by "Equilibrium Statistical Mechanics", the status of simulation algorithms and theoretical models in these two different fields of physics is the same. Although in applications to statistical mechanics the Markov chain dynamics is of considerable interest in itself, there obviously is no relation to the Newtonian dynamics of the particles involved [5]. The same holds for the dynamical processes that reproduce the results of quantum theory: if an event-by-event simulation algorithm generates the same type of raw data as the experiment does, and the data analysis yields results that agree with quantum theory, we should be pleased with this achievement and not ask for this dynamics to be "unique". In fact, in our earlier work we have already shown that there exist both deterministic and pseudo-random processes that reproduce equally well the probability distributions obtained from quantum theory and experiments [8,9,10,11,12,13].

C. Disclaimer

The work reviewed here is not concerned with the interpretation or extension of quantum theory. The fact that there exist simulation algorithms that reproduce the results of quantum theory has no direct implications for the foundations of quantum theory: the algorithm describes the process of generating events on a level of detail about which quantum theory has nothing to say (quantum measurement paradox) [2,4]. The average properties of the data may be in perfect agreement with quantum theory, but the algorithms that generate such data are outside of the scope of what quantum theory can describe. This may sound a little strange, but it is not if one recognizes that probability theory neither contains nor provides an algorithm to generate the values of the random variables, which in a sense is at the heart of the quantum measurement paradox.

D. Structure of the paper

The paper is organized as follows. In Section II, we review the EPRB gedanken experiment with magnetic particles and its experimental realization using the photon polarization as a two-state system. We elaborate on the data gathering and analysis procedures. An essential ingredient of the data analysis procedure is the time window that is used to identify coincidences. In contrast to textbook treatments of EPRB experiments, in which the window is implicitly assumed to be infinite, in real experiments the time window is made as small as possible. We illustrate the importance of the choice of the time window by analyzing a data set of a real EPRB experiment with photons [32]. Section III briefly recalls the essentials of the quantum theoretical description of the EPRB experiment in terms of a system of two S = 1/2 particles. Section IV addresses the problem of relating quantum theory and real data. In Section IV A, we discuss how to generate individual events from the solution of the quantum theoretical problem and how to relate the quantum theoretical expectation values to the actual data. Section IV C deals with the inverse problem: how do we relate data to (quantum) theory? We elaborate on the fundamental difference between probabilities (quantities that appear in the mathematical theory) and frequencies (numbers obtained by counting events). Section V introduces deterministic and pseudo-random event-based computer simulation models that satisfy Einstein's criteria of local causality and reproduce the results of the quantum theory of two S = 1/2 particles.
We also prove that these models can exhibit correlations that are stronger than those obtained from the quantum theory of two S = 1/2 particles. In Section VI, we resolve the apparent conflict between the fact that there exist event-based simulation models that satisfy Einstein's criteria of local causality and reproduce the results of the quantum theory of two S = 1/2 particles, and the folklore about Bell's theorem, stating that such models are not supposed to exist. We show that Bell's extension of Einstein's concept of locality implicitly assumes that the absence of a causal influence implies logical independence [36], an assumption which, in general, leads to logical inconsistencies [36,37]. In Section VII, we use standard Kolmogorov probability calculus to analyze the probabilistic version of our simulation models. We give a rigorous proof that these models can reproduce exactly the results of the quantum theory of two S = 1/2 particles. In Section VIII, we propose a principle to derive the probability distributions of quantum theory of the EPRB experiment by using the algebra of probable inference [37,47], that is, the axioms of probability theory, without recourse to quantum theory. Our conclusions are summarized in Section IX.

II. EPRB EXPERIMENTS

A. Spin 1/2 particles

Many experimental realizations and quantum theoretical descriptions of the EPR gedanken experiment [23] adopt the model proposed by Bohm [3]. A schematic diagram of the EPRB experiment is shown in Fig. 2. A source emits charge-neutral pairs of particles with opposite magnetic moments. The two particles separate spatially and propagate in free space to an observation station in which they are detected. As the particle arrives at station i = 1, 2, it passes through a Stern-Gerlach magnet [50]. The magnetic moment of a particle interacts with the inhomogeneous magnetic field of a Stern-Gerlach magnet. The Stern-Gerlach magnet deflects the particle, depending on the orientation of the magnet and the magnetic moment of the particle. The Stern-Gerlach magnet divides the beam of particles in two, spatially well-separated parts [50]. The observation that the beam splits into two, and not into a continuum of beams, is interpreted as evidence that the particles carry a magnetic moment that can take two discrete values; it is quantized [50]. In quantum theory, we describe such a magnetic moment using S = 1/2 operators. By changing the orientation of the Stern-Gerlach magnet, we change the direction of the plane that divides the two beams of particles. In quantum theory language, we say that the quantization axis is determined by the orientation of the Stern-Gerlach magnet. As the particle leaves the Stern-Gerlach magnet, it generates a signal in one of the two detectors. The firing of a detector corresponds to a detection event. Charge-neutral, magnetic particles that pass through a Stern-Gerlach magnet not only change their direction of motion but also experience a time delay, depending on the direction of their magnetic moment relative to the direction of the field in the Stern-Gerlach magnet. The time delays in Stern-Gerlach magnets are used to perform spectroscopy of atomic-size magnetic clusters [51] and atomic interferometry [52]. Real experiments require a criterion to decide which events, registered in stations 1 and 2, correspond to the detection of particles belonging to a pair (a single two-particle system).
In EPRB experiments, this criterion is the coincidence in time of the events [29,32,53], as is most clearly illustrated by the EPRB experiments that use the photon polarization as a two-state system [24,25,26,27,28,30,31,32].

B. Photon polarization

In Fig. 3, we show a schematic diagram of an EPRB experiment with photons (see also Fig. 2 in [32]). Here, a source emits pairs of photons with opposite polarization. Each photon of a pair propagates to an observation station in which it is manipulated and detected. The two stations are separated spatially and temporally [32]. This arrangement prevents the observation at station 1 (2) from having a causal effect on the data registered at station 2 (1) [32]. As the photon arrives at station i = 1, 2, it passes through an electro-optic modulator that rotates the polarization of the photon by an angle depending on the voltage applied to the modulator. These voltages are controlled by two independent binary random number generators. As the photon leaves the polarizer, it generates a signal in one of the two detectors. The station's clock assigns a time tag to each generated signal. Effectively, this procedure discretizes time in intervals of a width that is determined by the time-tag resolution τ [32]. In the experiment, the firing of a detector is regarded as an event. As light is supposed to consist of non-interacting photons, it is not unreasonable to assume that the individual photons experience a time delay as they pass through the electro-optic modulators or polarizers. Indeed, according to Maxwell's equations, in the optically anisotropic materials used to fabricate these devices, plane waves with different polarization propagate with different velocity and are refracted differently [54]. It is clear that, at least conceptually, the EPRB experiments with photons or massive S = 1/2 particles are very similar.

C. Idealized experiments

As it is one of the goals of this paper to demonstrate that it is possible to reproduce the results of quantum theory (which implicitly assumes idealized conditions) for the EPRB gedanken experiment by an event-based simulation algorithm, it would be logically inconsistent to "recover" the results of the former by simulating non-ideal experiments. Therefore, in this paper, we consider ideal experiments only, meaning that we assume that detectors operate with 100% efficiency, clocks remain synchronized forever, the "fair sampling" assumption is satisfied [55], and so on. We assume that the two stations are separated spatially and temporally such that the manipulation and observation at station 1 (2) cannot have a causal effect on the data registered at station 2 (1). Furthermore, to realize the EPRB gedanken experiment on the computer, we assume that the orientation of each Stern-Gerlach magnet or electro-optic modulator can be changed at will, at any time. Although these conditions are very difficult to satisfy in real experiments, they are trivially realized in computer experiments.

D. Particle source

In general, on logical grounds (without counterfactual reasoning), it is impossible to make a statement about the directions of the spin (or polarization) of particles emitted by the source unless we have performed an experiment to determine these directions. Of course, in a computer experiment we have perfect control and we can select any direction that we like. Conceptually, we should distinguish between two extreme cases.
In the first case, we assume that we know nothing about the direction of the spin (or polarization). We mimic this situation by using pseudo-random numbers to select the directions. This is the case that is typical for an EPRB experiment, and we will refer to it as Case I. In the second case, referred to as Case II, we assume that we know that the directions of both spins (or polarizations) are fixed (but not necessarily the same). A simulation algorithm that aims to reproduce the results of quantum theory of two S = 1/2 particles should be able to reproduce these results for both Case I and Case II, without any change to the simulation algorithm except for the part that simulates the source.

E. Data gathered in an EPRB experiment

Here and in the sequel, we use the EPRB experiment with S = 1/2 particles as the primary example. The case of EPRB experiments that use the photon polarization can be treated in exactly the same manner, replacing three-dimensional unit vectors by two-dimensional ones and so on. In the experiment, the firing of a detector is regarded as an event. At the nth event, the data recorded on a hard disk at station i = 1, 2 consists of $x_{n,i} = \pm 1$, specifying which of the two detectors fired, the time tag $t_{n,i}$ indicating the time at which a detector fired, and the unit vector $a_{n,i}$ that specifies the direction of the magnetic field in the Stern-Gerlach magnet. Hence, the set of data collected at station i = 1, 2 during a run of N events may be written as

$\Upsilon_i = \{x_{n,i} = \pm 1,\ t_{n,i},\ a_{n,i} \mid n = 1, \dots, N\}$. (2)

In the (computer) experiment, the data $\{\Upsilon_1, \Upsilon_2\}$ may be analyzed long after the data has been collected [32]. Coincidences are identified by comparing the time differences $\{t_{n,1} - t_{n,2} \mid n = 1, \dots, N\}$ with a time window W [32]. Introducing the symbol $\sum'$ to indicate that the sum has to be taken over all events that satisfy $a_i = a_{n,i}$ for i = 1, 2, for each pair of directions $a_1$ and $a_2$ of the Stern-Gerlach magnets, the number of coincidences $C_{xy} \equiv C_{xy}(a_1, a_2)$ between detectors $D_{x,1}$ (x = ±1) at station 1 and detectors $D_{y,2}$ (y = ±1) at station 2 is given by

$C_{xy} = \sum_{n=1}^{N}{}' \delta_{x, x_{n,1}}\, \delta_{y, x_{n,2}}\, \Theta(W - |t_{n,1} - t_{n,2}|)$, (3)

where $\Theta(t)$ is the Heaviside step function. We emphasize that we count all events that, according to the same criterion as the one employed in experiment, correspond to the detection of pairs. The average single-particle counts are defined by

$E_1(a_1, a_2) = \frac{\sum_{x,y} x\, C_{xy}}{\sum_{x,y} C_{xy}}, \qquad E_2(a_1, a_2) = \frac{\sum_{x,y} y\, C_{xy}}{\sum_{x,y} C_{xy}}$, (4)

where the denominator is the sum of all coincidences. According to standard terminology, the correlation between x = ±1 and y = ±1 events is defined by [38]

$\rho(a_1, a_2) = E(a_1, a_2) - E_1(a_1, a_2)\, E_2(a_1, a_2)$. (5)

The correlation $\rho(a_1, a_2)$ is +1 (−1) in the case that x = y (x = −y) with certainty. If the values of x and y are independent, the correlation $\rho(a_1, a_2)$ is zero, but the converse is not necessarily true. In the case of dichotomic variables x and y, the correlation $\rho(a_1, a_2)$ is entirely determined by the average single-particle counts Eq. (4) and the two-particle average

$E(a_1, a_2) = \frac{\sum_{x,y} xy\, C_{xy}}{\sum_{x,y} C_{xy}}$. (6)

For later use, it is expedient to introduce the function

$S \equiv S(a, b, c, d) = E(a, c) - E(a, d) + E(b, c) + E(b, d)$ (7)

and its maximum

$S_{\max} \equiv \max_{a, b, c, d} S(a, b, c, d)$. (8)

In general, the values for the average single-particle counts $E_1(a_1, a_2)$ and $E_2(a_1, a_2)$, the coincidences $C_{xy}(a_1, a_2)$, the two-particle averages $E(a_1, a_2)$, $S(a, b, c, d)$, and $S_{\max}$ not only depend on the directions $a_1$ and $a_2$ but also on the time-tag resolution τ and the time window W used to identify the coincidences.
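The data analysis defined by Eqs. (3)-(6) amounts to a few lines of code. The following minimal sketch counts coincidences with a time window W and forms $E_1$, $E_2$, and $E(a_1, a_2)$ for one fixed pair of settings; the variable names are ours, and the input arrays stand in for the data sets $\Upsilon_1$ and $\Upsilon_2$ restricted to that setting pair.

```python
import numpy as np

def analyze(x1, t1, x2, t2, W):
    """x1, x2: arrays of +/-1 detector outcomes; t1, t2: time tags (same
    event index n at both stations); W: coincidence time window."""
    coincident = np.abs(t1 - t2) <= W                    # Theta(W - |t1 - t2|)
    # Coincidence counts C_xy of Eq. (3) for the four outcome combinations.
    C = {(x, y): int(np.sum((x1 == x) & (x2 == y) & coincident))
         for x in (+1, -1) for y in (+1, -1)}
    total = sum(C.values())                              # denominator of Eqs. (4), (6)
    E1 = sum(x * C[(x, y)] for x in (+1, -1) for y in (+1, -1)) / total
    E2 = sum(y * C[(x, y)] for x in (+1, -1) for y in (+1, -1)) / total
    E12 = sum(x * y * C[(x, y)] for x in (+1, -1) for y in (+1, -1)) / total
    return E1, E2, E12
```

Repeating this analysis for four setting pairs and forming Eq. (7) yields $S$ and, after maximizing over settings, $S_{\max}$.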
F. Role of the time window

Most theoretical treatments of the EPRB experiment assume that the correlation, as measured in the experiment, is given by [14]

$C^{(\infty)}_{xy} = \sum_{n=1}^{N}{}' \delta_{x, x_{n,1}}\, \delta_{y, x_{n,2}}$, (9)

that is, by Eq. (3) without the time-window factor, corresponding to the limit W → ∞. As we will see later, using our model it is relatively easy to reproduce the experimental facts and the results of quantum theory if we neglect contributions that are $O(W^2)$. Furthermore, keeping W arbitrary does not render the mathematics more complicated, so there really is no point in studying the simplified model defined by Eq. (9): we may always consider the limiting case W → ∞ afterwards.

G. Case study: Analysis of experimental EPRB data

It is remarkable that all textbook treatments of the EPRB experiment assume that the experimental data is obtained by using Eq. (9). This is definitely not the case [32,56]. We illustrate the importance of the choice of the time window W by analyzing a data set (the archives Alice.zip and Bob.zip) of an EPRB experiment with photons that is publicly available [57]. In the real experiment, the number of events detected at station 1 is unlikely to be the same as the number of events detected at station 2. In fact, the data sets of Ref. 57 show that station 1 (Alice.zip) recorded 388455 events while station 2 (Bob.zip) recorded 302271 events. Furthermore, in the real EPRB experiment, there may be an unknown shift ∆ (assumed to be constant during the experiment) between the times $t_{n,1}$ gathered at station 1 and the times $t_{n,2}$ recorded at station 2. Therefore, there is some extra ambiguity in matching the data of station 1 to the data of station 2. A simple data processing procedure that resolves this ambiguity consists of two steps [56]. First, we make a histogram of the time differences $t_{n,1} - t_{m,2}$ with a small but reasonable resolution (we used 0.5 ns). Then, we fix the value of the time shift ∆ by searching for the time difference at which the histogram reaches its maximum; that is, we maximize the number of coincidences by a suitable choice of ∆. For the case at hand, we find ∆ = 4 ns. Finally, we compute the coincidences, the two-particle average, and $S_{\max}$ using the expressions given earlier. The average time between two detection events is 2.5 ms and 3.3 ms for Alice and Bob, respectively. The number of coincidences (with double counts removed) is 13975 and 2899 for (∆ = 4 ns, W = 2 ns) and (∆ = 0, W = 3 ns), respectively. In Figs. 4 and 5 we present the results for $S_{\max}$ as a function of the time window W. First, it is clear that $S_{\max}$ decreases significantly as W increases, but it is also clear that as W → 0, $S_{\max}$ is not very sensitive to the choice of W [56]. Second, the procedure of maximizing the coincidence count by varying ∆ reduces the maximum value of $S_{\max}$ from a value 2.89 that considerably exceeds the maximum for the quantum system ($2\sqrt{2}$, see Section III) to a value 2.73 that violates the Bell inequality and is less than the maximum for the quantum system. The fact that the "uncorrected" data (∆ = 0) violate the rigorous bound for the quantum system should not be taken as evidence that quantum theory is "wrong": as we explain later, it merely indicates that the way in which the data of the two stations has been grouped in two-particle events is not optimal. Put more bluntly, there is no reason why a correlation between similar but otherwise unrelated data should be described by quantum theory.
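The two-step procedure used in this case study can be sketched as follows: histogram the time differences, take the shift ∆ at the histogram maximum, and then count coincidences within the window W. The brute-force pairing and the numerical values of the resolution, span, and window below are illustrative; a realistic analysis must restrict the pairing to nearby events for efficiency.

```python
import numpy as np

def best_time_shift(t1, t2, resolution=0.5e-9, span=50e-9):
    """Return the shift Delta at which the histogram of t1 - t2 peaks.
    Brute force: forms all pairwise differences (memory-heavy for big runs)."""
    diffs = (t1[:, None] - t2[None, :]).ravel()
    diffs = diffs[np.abs(diffs) < span]        # keep only plausible pairings
    bins = np.arange(-span, span + resolution, resolution)
    hist, edges = np.histogram(diffs, bins=bins)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])     # center of the tallest bin

def count_coincidences(t1, t2, delta, W=2e-9):
    """Number of pairs with |t1 - (t2 + delta)| <= W (brute force)."""
    return int(np.sum(np.abs(t1[:, None] - t2[None, :] - delta) <= W))
```

Scanning W with the shift fixed at the histogram maximum reproduces the kind of W-dependence of $S_{\max}$ shown in Figs. 4 and 5.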
In any case, the analysis of the experimental data shows beyond doubt that a model which aims to describe real EPRB experiments should include the time window W, and that the interesting regime is W → 0, not W → ∞ as is assumed in all textbook treatments of the EPRB experiment. In Sections V and VII, we show that our simulation models reproduce the salient features of Figs. 4 and 5 quite well if contributions that are $O(W^2)$ can be neglected.

III. QUANTUM THEORY

In this section we briefly review some well-known results for the quantum theory of a system of two S = 1/2 particles, and we give a brief account of the quantum theoretical description of Case I and Case II. In quantum theory, the state of a system of two S = 1/2 objects is described by a 4 × 4 density matrix ρ [2]. The average value of a dynamical variable, represented by the 4 × 4 matrix X, is $\langle X \rangle = \mathrm{Tr}\, \rho X$ [2]. According to the axioms of quantum theory [2], repeated measurements on the two-particle system described by the density matrix ρ yield statistical estimates for the single-particle expectation values

$\widehat{E}_i(a) = \langle \sigma_i \cdot a \rangle = \mathrm{Tr}\, \rho\, \sigma_i \cdot a$ for i = 1, 2, (10)

and the two-particle correlations

$\widehat{E}(a, b) = \langle \sigma_1 \cdot a\, \sigma_2 \cdot b \rangle = \mathrm{Tr}\, \rho\, \sigma_1 \cdot a\, \sigma_2 \cdot b$, (11)

where $\sigma_i = (\sigma^x_i, \sigma^y_i, \sigma^z_i)$ are the Pauli spin-1/2 matrices describing the spin of particle i = 1, 2 [2], and a and b are unit vectors. We use the caret to distinguish the quantum theoretical results from the results obtained by analysis of the data $\{\Upsilon_1, \Upsilon_2\}$. If the density matrix of the quantum system factorizes, $\rho = \rho_1 \otimes \rho_2$, where $\rho_i$ is the 2 × 2 density matrix of particle i, then $\widehat{E}(a, b) = \widehat{E}_1(a)\, \widehat{E}_2(b)$ and the correlation $\widehat{\rho}(a, b) = \widehat{E}(a, b) - \widehat{E}_1(a)\, \widehat{E}_2(b) = 0$. Hence, $\rho = \rho_1 \otimes \rho_2$ is called the uncorrelated quantum state. Using the inequality $|ac - ad + bc + bd| \le |ac - ad| + |bc + bd| \le |c - d| + |c + d| \le 2$, valid for real numbers a, b, c, d of absolute value not larger than one [42], we conclude that if the quantum system is in the uncorrelated state we must have

$S_{\max} \le 2$. (13)

If the density matrix ρ does not factorize, the upper bound to $S_{\max}$ can be found as follows [58]. Using the algebraic properties of the Pauli-spin matrices, a simple calculation yields $S^2(a, b, c, d) \le 4 + 4\,|\widehat{E}(a \times b, c \times d)|$, where we noted that $\mathrm{Tr}\, \rho X^\dagger Y$ defines an inner product on the vector space of 4 × 4 matrices X and Y, and made use of the fact that ρ is positive semi-definite and that $\mathrm{Tr}\, \rho = 1$. As the eigenvalues of $\sigma_1 \cdot (a \times b)\, \sigma_2 \cdot (c \times d)$ are $\pm |a \times b|\, |c \times d|$, and since a, b, c, and d are unit vectors, we have $|\widehat{E}(a \times b, c \times d)| \le 1$. Hence

$S_{\max} \le 2\sqrt{2}$, (17)

independent of the quantum state ρ. According to Eqs. (13) and (17), if $2 < S_{\max} \le 2\sqrt{2}$, the quantum system is in a correlated state, that is, $\rho \ne \rho_1 \otimes \rho_2$. For pure states ($\mathrm{Tr}\, \rho^2 = 1$) the converse is also true [59] but, for general states ρ, it is not [60,61,62]. If, in an experiment or simulation, we would find that $S_{\max} > 2\sqrt{2}$, the results of this experiment or simulation cannot be described by the quantum theory of a system of two S = 1/2 particles. We now examine the examples of a maximally correlated (entangled) quantum state (called Case I) and the uncorrelated quantum state (called Case II) in more detail.

A. Case I: Singlet state

The quantum theoretical description of the EPRB experiment assumes that the state of the two spin-1/2 particles is described by the singlet state $\rho = |\Psi\rangle\langle\Psi|$, where $|\Psi\rangle = (|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle)/\sqrt{2}$ and $|\uparrow\rangle$ ($|\downarrow\rangle$) is the eigenstate of $\sigma^z$ with eigenvalue +1 (−1). For the singlet state, the single-particle expectation values and the two-particle correlations are given by

$\widehat{E}_1(a_1) = \widehat{E}_2(a_2) = 0$ (18)

and

$\widehat{E}(a_1, a_2) = -a_1 \cdot a_2$, (19)

respectively. A simple calculation shows that $S_{\max} = 2\sqrt{2}$; in other words, the singlet state satisfies Eq. (17) with equality.
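The statement that the singlet state saturates Eq. (17) is easy to verify numerically. The sketch below scans the settings of Eq. (7) on a 15-degree grid in a plane, using the singlet result $\widehat{E}(a, b) = -a \cdot b$ of Eq. (19); the restriction to coplanar unit vectors is an assumption made only to keep the scan small, and it suffices to attain the maximum.

```python
import numpy as np
from itertools import product

def E(a, b):
    """Singlet two-particle correlation, Eq. (19)."""
    return -np.dot(a, b)

# Unit vectors in a plane, on a 15-degree grid (includes the optimal settings).
angles = np.arange(0.0, 2.0 * np.pi, np.pi / 12)
units = [np.array([np.cos(t), np.sin(t)]) for t in angles]

# Exhaustive scan of S(a, b, c, d) of Eq. (7); about 330,000 combinations.
smax = max(E(a, c) - E(a, d) + E(b, c) + E(b, d)
           for a, b, c, d in product(units, repeat=4))
print(smax, 2.0 * np.sqrt(2.0))   # both approximately 2.8284
```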
For the singlet state, the probability $P(x, y|a_1, a_2)$ that we observe a pair of events x, y = ±1 under the (fixed) condition $(a_1, a_2)$ is given by

$P(x, y|a_1, a_2) = \frac{1 - xy\, a_1 \cdot a_2}{4}$, (20)

from which it follows that $P(x|a_1, a_2) = \sum_{y=\pm 1} P(x, y|a_1, a_2) = 1/2$ and $\sum_{x,y=\pm 1} y\, P(x, y|a_1, a_2) = 0$, in agreement with the second column of Table I. In the quantum theoretical description, the state of the two spin-1/2 particles may be correlated ($\widehat{\rho}(a_1, a_2) = \widehat{E}(a_1, a_2)$), even though the particles are spatially and temporally separated and do not necessarily interact.

B. Case II: Product state

For the product state with spin directions $S_1$ and $S_2$, the probability $P(x, y|a_1, a_2, S_1, S_2)$ that we observe a pair of events x, y = ±1 under the (fixed) condition $(a_1, a_2, S_1, S_2)$ is given by

$P(x, y|a_1, a_2, S_1, S_2) = \frac{(1 + x\, a_1 \cdot S_1)(1 + y\, a_2 \cdot S_2)}{4}$, (25)

and yields expectation values that are in agreement with the third column of Table I. Obviously, for the spin-polarized state $\widehat{\rho}(a_1, a_2) = \widehat{E}(a_1, a_2) - \widehat{E}_1(a_1)\, \widehat{E}_2(a_2) = 0$; hence there is no correlation in this case.

C. Photon polarization

In the quantum theoretical description of Case I, the whole system is described by the entangled state $|\Psi\rangle = (|H\rangle_1 |V\rangle_2 - |V\rangle_1 |H\rangle_2)/\sqrt{2}$, where H and V denote the horizontal and vertical polarization and the subscripts refer to photons 1 and 2, respectively. The state $|\Psi\rangle$ cannot be written as a product of single-photon states; hence it is an entangled state. In Case II, the photons have definite polarizations $\eta_1$ and $\eta_2$ when they enter the observation station, and the polarization of the two photons is described by the corresponding product state. Using the fact that the two-dimensional vector space with basis vectors $\{|H\rangle, |V\rangle\}$ is isomorphic to the vector space of spin-1/2 particles, we may use the quantum theory of the latter to describe the EPRB experiments with photons. The resulting expressions for the averages are given in Table II, the entries of which involve $-\cos 2\theta_{1,2}$, $\cos 2\theta_1$, and $\cos 2\theta_2$, where for Case II $\cos \theta_1 = a_1 \cdot S_1$, $\cos \theta_2 = a_2 \cdot S_2$, and $\cos \theta_{1,2} = S_1 \cdot S_2$. They are similar to those of the genuine S = 1/2 problem, except for the restriction of $a_1$ and $a_2$ to lie in planes orthogonal to the direction of propagation of the photons and the factor of two that multiplies the angles. The latter reflects the fact that the polarization is defined modulo π, not 2π as in the case of S = 1/2.

IV. RELATING QUANTUM THEORY AND DATA

There is no doubt that quantum theory is very successful in describing a vast amount of phenomena in which we observe the ensemble average of many measurements that are repeated under the same external conditions [2,4]. The EPRB experiments seem to be no exception: the analysis of the experimental data according to the procedure discussed earlier demonstrates that $E(a_1, a_2) \approx \widehat{E}(a_1, a_2)$ [24,25,26,27,28,30,31,32]. On the other hand, as is well known from the early days of quantum mechanics, quantum theory itself has nothing to say about the individual events (quantum measurement paradox) [2,4]. The very concept of an event cannot be reconciled with quantum theory [2,4]. In this section, we elaborate on the relation between quantum theory and (experimental) data.

A. From quantum theory to experimental data

The fundamental problem of relating the object in the mathematical formalism of quantum theory to experimental facts may be solved by (1) interpreting the state of the system as the probability distribution for events to occur and by (2) supplementing quantum theory by a Bernoulli process [37,38] that generates logically independent events according to the prescribed probability distribution, the so-called measurement postulate. Thus, we have

Quantum theory + Bernoulli process ⇒ Events. (28)
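A minimal sketch of the procedure of Eq. (28) for the singlet state: pairs (x, y) are generated as independent trials from the distribution of Eq. (20), and the sample averages reproduce $\widehat{E}_1 = \widehat{E}_2 = 0$ and $\widehat{E}(a_1, a_2) = -a_1 \cdot a_2$ for large N. The choice of settings and the sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([np.cos(0.6), np.sin(0.6), 0.0])

# The four outcome pairs and their singlet probabilities, Eq. (20).
pairs = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
probs = np.array([(1.0 - x * y * np.dot(a1, a2)) / 4.0 for x, y in pairs])

# Bernoulli process: every event is an independent draw from Eq. (20).
idx = rng.choice(4, size=100_000, p=probs)
x = np.array([pairs[i][0] for i in idx])
y = np.array([pairs[i][1] for i in idx])

print(x.mean(), y.mean())                # both near 0, cf. Eq. (18)
print((x * y).mean(), -np.dot(a1, a2))   # both near -a1.a2, cf. Eq. (19)
```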
All treatments of quantum theory that we are aware of turn the logical implication Eq. (28) around, without any justification, and declare all quantum events to be uncorrelated random events. Of course, it might be the case that the analysis of experimental data supports the hypothesis that the events are generated as Bernoulli trials. However, there is rather compelling experimental evidence that successive events are correlated [63]. Notwithstanding this, using Eq. (28) we are in the position to use quantum theory and discuss events in a mathematically well-defined context. For simplicity, in the example of the EPRB experiment, we focus on the case where $a_1$ and $a_2$ are fixed in time. Let us then inquire how we can simulate the quantum theoretical results of the EPRB experiment (see Table I) using the procedure laid out by Eq. (28). According to the axioms of quantum theory, in each event we observe only one of the eigenvalues of the dynamical variable that is being measured [2]. For the case at hand, the eigenvalues of $\sigma_1 \cdot a_1$, $\sigma_2 \cdot a_2$, and $\sigma_1 \cdot a_1\, \sigma_2 \cdot a_2$ are ±1. Then, according to Eq. (28), what is left to do is to imagine three Bernoulli processes that generate sets of data $Q = \{a_n = \pm 1, b_n = \pm 1, c_n = \pm 1 \mid n = 1, \dots, N\}$ such that for a sufficiently large number of events N and for all $a_1$ and $a_2$,

$\frac{1}{N}\sum_{n=1}^{N} a_n \approx \widehat{E}_1(a_1), \qquad \frac{1}{N}\sum_{n=1}^{N} b_n \approx \widehat{E}_2(a_2), \qquad \frac{1}{N}\sum_{n=1}^{N} c_n \approx \widehat{E}(a_1, a_2)$, (29)

the expressions for $\widehat{E}_1(a_1)$, $\widehat{E}_2(a_2)$ and $\widehat{E}(a_1, a_2)$ being given in Table I. The fact that we use Bernoulli processes in which every trial is drawn from the same probability distribution guarantees, by the law of large numbers, that the average over all events converges with probability one to the ensemble average [37,38], which in the present case is given by quantum theory. Note that quantum theory does not impose any relation (correlation) between the numbers $a_n$, $b_n$, and $c_n$, other than that Eq. (29) should hold. In general, generating data (x, y) according to the probability distributions Eqs. (20) and (25) is a nearly trivial exercise. Once we have solved the quantum mechanical problem, that is, once we have the explicit form of the wave function, constructing a Bernoulli process that generates events according to the explicit form is a simple task. In practice, we assume that the pseudo-random number generator that we employ produces Bernoulli trials, a hypothesis that cannot be justified in a mathematically strict sense.

B. Fundamental problem

Let us now try to relate the quantum theoretical expectation values that appear in Eqs. (10) and (11) to the actual data. In general, the probability for observing a pair of dichotomic variables {x, y} can be written as

$P(x, y) = \frac{1 + x E_x + y E_y + xy E_{xy}}{4}$, (30)

from which, by the standard rules of probability theory, it follows that

$E_x = \sum_{x,y=\pm 1} x\, P(x, y), \qquad E_y = \sum_{x,y=\pm 1} y\, P(x, y), \qquad E_{xy} = \sum_{x,y=\pm 1} xy\, P(x, y)$. (31)

By definition, x and y are logically independent if $P(x, y) = P(x)P(y)$ [2,37,38]. If x and y are logically independent, it is easy to show that $E_{xy} = E_x E_y$. In general, the converse is not necessarily true [2,37,38], but for the case of dichotomic variables that we are treating here, $E_{xy} = E_x E_y$ if and only if x and y are logically independent. In quantum theory, we have two different cases also. If the density matrix of the two spin-1/2 particle quantum system factorizes (Case II), we have $\langle \sigma_1 \cdot a\, \sigma_2 \cdot b \rangle = \langle \sigma_1 \cdot a \rangle \langle \sigma_2 \cdot b \rangle$ and the state of the system is completely characterized by $\widehat{E}_1(a)$ and $\widehat{E}_2(b)$. However, if the density matrix does not factorize (Case I), a complete characterization of this entangled state requires the knowledge of $\widehat{E}_1(a)$, $\widehat{E}_2(b)$, and $\widehat{E}(a, b)$.
Up to this point, it seems that there is full analogy with the probabilistic model of the data, but we still have to relate the quantum theoretical expressions to the observed data. To this end, we invoke the postulate that states that the possible values of a dynamical variable in quantum theory are the eigenvalues of the linear operator that corresponds to this variable [2]. For the case at hand, the operators are σ_1·a, σ_2·b, and σ_1·a σ_2·b, with eigenvalues x̂ = ±1, ŷ = ±1 and ẑ = ±1, respectively. It is evident that the triples {x̂, ŷ, ẑ} cannot represent the data Eq. (2) that is recorded and analyzed in real EPRB experiments [24,26,27,28,30,31,32]: The quantum mechanical model is trivially incomplete in that it has no means to describe the time-tag data. But quantum theory is incomplete in a more fundamental sense [3,23]. First, let us consider an experiment that produces ẑ only. In general, the probability to observe ẑ can be written as P(ẑ|a, b). A consistent application of the postulates of quantum theory yields P(ẑ|a, b) = (1 + ẑ ⟨σ_1·a σ_2·b⟩)/2, and we would use E(a, b) = E_ẑ(a, b) ≡ Σ_{ẑ=±1} ẑ P(ẑ|a, b) to relate the theoretical result to the data. Likewise, we could imagine an experiment that produces x̂ (ŷ) and use E_1(a) = E_x̂(a) (E_2(b) = E_ŷ(b)) to relate the theoretical description to the data. Second, we ask whether it is possible to describe by quantum theory an experiment that yields the data {x, y}. According to the postulates of quantum theory, the probabilities for the eigenvalues to take the values {x̂, ŷ} are given by P(x̂|a) and P(ŷ|b), where x̂ and ŷ are logically independent random variables; that is, each measurement of a dynamical variable constitutes a Bernoulli trial [2]. Then, we would use E_{x̂ŷ} = E_{x̂} E_{ŷ} to relate the theory to the data. But the real data is {x, y}, not the logically independent random variables {x̂, ŷ} of the mathematical model. Therefore, the quantum theoretical description of an experiment that yields {x, y} is necessarily incomplete if the data is such that E_{xy} ≠ E_x E_y. The fact that EPRB experiments show good agreement with the quantum theory of two S = 1/2 objects is not in conflict with this reasoning: In real EPRB experiments, the coincidences are computed according to Eq. (3), which includes the time-tag information, about which quantum theory has nothing to say. Hence there is no logical inconsistency. C. From experimental data to quantum theory Let us now turn things around and ask the much more interesting question how we, as observers, relate the observed data of an EPRB experiment to quantum theory. To simplify the discussion, we assume that the directions a_1 and a_2 are fixed. Thus, we start from the data set {x_{n,1}, t_{n,1}; x_{n,2}, t_{n,2} | n = 1, . . . , N} (37) and ask the question how to relate these numbers to the set of data that we obtained by adopting the procedure Eq. (28). It is not difficult to see that there are no a priori rules. How could there be rules? In general, there is no guarantee that the data set that resides on the hard disk of the experimenter's computer has been produced by a physical system and not by, for instance, a bug in the operating system that is controlling the computer. Moreover, for bona fide experimental data, it should not matter who carries out the data analysis: Once the data has been recorded and there is agreement on the procedure to analyze this data, the results (but not necessarily the subjective conclusions) should not depend on whether or not the individual that performs the data analysis "knows" about quantum theory. The following example may be useful to understand the conceptual problem.
Relating frequencies to probabilities Let us consider the experiment in which we toss a coin N times. The set of N observations looks like {H, H, T, . . .}, where H and T denote heads and tails, respectively. From the set of data, we find that the number of times that the coin ends up heads is h. The frequency with which we observe heads is then f = h/N, which clearly is a well-defined number. A little thought shows that, without any further knowledge or assumptions about the experiment, this is all we can say (of course, we could calculate correlations between events and so on, but this does not change the essential point of the discussion). Imagining that we can continue the experiment forever does not help either, because lim_{N→∞} h/N is not well-defined [37,38,40]. Indeed, it may happen that we never observe heads or always observe heads. If, in our description of the experiment, we would like to go beyond just giving the numbers (h_i, N_i) for i = 1, . . . , M repetitions of the experiment, we have to make additional assumptions. Implicit in the interpretation of most scientific experiments is the assumption that there is some underlying process that generates the data. In the simple case of the coin, assuming Newton's law holds, solving the equations of motion allows us to predict the outcome of each individual toss [37]. The accuracy of this prediction depends on how well we know the initial conditions, the precise form of the force field, and so on. If a description on the level of individual events seems too complicated, or if we do not have enough knowledge to describe the whole experimental situation (as in the case of the coin), it is customary to postulate that there is some underlying probabilistic process that determines the frequency with which the events will be observed. It is instructive to see how the process of reasoning works in the case of the coin (the use of quantum theory to describe observed phenomena requires the same logic). As usual, the simplest probabilistic model for the outcome of the experiment of tossing the coin assumes that (1) there is a probability p to observe heads and that (2) this probability is logically independent of what happens at other tosses. Now, these are nice words but, in the absence of any experimental data, what do they mean? The probability p is a mathematical concept that we use to encode, by a real number in the interval [0, 1], our state of knowledge about the problem [37,40]. The statement that this probability is logically independent of what happens at other tosses cannot be expressed in terms of frequencies [37,38,40]. It is a hypothesis that we make without knowing what the frequencies and correlations between the events will be. Once we have collected the experimental data, we may compute the probability for this hypothesis to be true or not, and we may also use the observed frequency to assign a value to the probability p [37,38,40]. From a logical and conceptual point of view, it is extremely important to realize that the first step is to define the concept of "probability" through the Kolmogorov set of axioms or through the more general inductive logic approach (see also Section VI) [37]. Then, and only then, may it make sense to use the observed frequency to assign a number to the probability for an event to occur. We continue with the example of the coin to illustrate this point. Now imagine a thought experiment (= a mental construct) in which we toss the coin N times.
Note that, in a strict mathematical sense, the mathematical model cannot be simulated by an algorithm on a digital computer, which by construction is a deterministic machine. Of course, using pseudo-random numbers, we can simulate events that are unpredictable to anyone who does not know the initial state or the algorithm of the pseudo-random number generator. The mathematical model can then be used to test whether it describes the global features (but not the individual events) well. A direct, constructive proof that probabilities are defined through frequencies would be to invent a practical procedure (algorithm) that simulates the tossing of the coin such that the probability for heads is exactly p and such that each toss is logically independent from all others. Such an algorithm does not exist: The concept of probability is a mental construct and has no meaning in the realm of algorithms that generate events, but that does not imply that the concept of probability would be useless for describing some of the features of the data generated by these algorithms. In essence, we are repeating what has been said in the introduction. Looking back at the diagram in Fig. 1, the mathematical model is located at the right hand side (model space). The mathematical model itself does not "produce" events. This is done by some algorithm (data space). We can test the various hypotheses that underpin the mathematical model by calculating expectation values (ensemble averages in the case of the coin) and, using the mathematical machinery of probability theory, compute the probability that these hypotheses are correct. Let us now see how this works in the case of the coin. According to the assumed mathematical model, the probability to observe k heads and N − k tails in a thought experiment involving N tosses is given by [37,38,40] P(k|N, Z) = [N!/(k!(N − k)!)] p^k (1 − p)^{N−k}, where Z represents all other knowledge about the experiment not contained in k and N [37,40]. If m denotes the number of heads such that P(m|N, Z) = max_k P(k|N, Z), we have P(m|N, Z)/P(m − 1|N, Z) ≥ 1 and P(m + 1|N, Z)/P(m|N, Z) ≤ 1, from which it follows that (N + 1)p − 1 ≤ m ≤ (N + 1)p and p − (1 − p)/N ≤ m/N ≤ p + p/N. (43) According to our mathematical model, of all k = 0, . . . , N, the value of k that has the largest probability to occur is m and, from Eq. (43), it follows that as N increases, m/N → p. Of course, we can easily calculate other useful quantities, such as the ensemble average ⟨k⟩ = Σ_{k=0}^{N} k P(k|N, Z) = pN. We now consider the real experiment in which we toss the coin and assign the value x_n = 0, 1 if, at the nth toss, we observe tails or heads, respectively. The frequency of heads is then f = N^{-1} Σ_{n=1}^{N} x_n. The next logical step is to assume that the mathematical model, described above, is valid. Then, for the most likely experiment (the one that occurs with the largest frequency) we have f = m/N ≈ p. Furthermore, the ensemble average of each event x_n becomes a meaningful concept and, if we compute the ensemble average of the frequency, we find ⟨f⟩ = p, as naively expected. Thus, it makes sense to use the observed frequency f for assigning a number to the symbol p in the mathematical theory. At this point, the mathematical theory has been "connected" to the observed phenomena. Once this connection has been made, we can (and should) use the tools of probability theory to compute the probability that the assumptions of the mathematical model are correct by confronting the mathematical results for various ensemble averages with the corresponding averages of the observed data.
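For illustration, a minimal numerical sketch of the coin example (our own code; p, N and the seed are arbitrary choices) generates N pseudo-random Bernoulli trials with parameter p, computes the frequency f = h/N, and compares h with the binomial mean pN:

    import numpy as np

    rng = np.random.default_rng(seed=1)
    p, N = 0.3, 10**5
    x = (rng.random(N) < p).astype(int)  # x_n = 1 (heads) with probability p, else 0 (tails)
    h = x.sum()
    f = h / N
    print(f, p)        # the observed frequency f is close to p for large N
    print(h, p * N)    # h is close to the ensemble average <k> = pN

Of course, in line with the discussion above, the agreement between f and p is itself a statement within the assumed mathematical model: it holds with probability one only in the limit N → ∞, and only once the Bernoulli hypothesis has been adopted.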
From this simple example, we see that in order to attach a meaning to the observed frequencies, we first need to introduce a mathematical model, probability theory in this case, and make the hypothesis that the outcome of the toss is determined by a Bernoulli process with probability p. Only once this hypothesis has been made can it be proven that the observed frequency approaches p as N → ∞ with probability one [37,38,40]. Thus, the concept of probability and probability theory have to be introduced first. Only then can we use probability theory to relate the variables in the probabilistic model (p in the example of the coin) to the observed data. This simple example clearly shows that frequencies and probabilities have a different logical status [2,37,40]. Frequencies are the things that we observe (data space in Fig. 1) and exhibit a causal dependence on the conditions under which the data is recorded. Probability theory is a well-defined mathematical model (model space in Fig. 1) that allows us to think in a rational, logical manner [37,40]. Probabilities express logical relationships. A problem with this conceptual difference is that, in many instances, simply using the frequency to assign a value to the probability works so well that we may be inclined to forget that there is a fundamental difference between the two. Although it is generally recognized that logical implication is not the same as physical causation, mixing up frequencies and probabilities leads to bizarre conclusions [2,37,40]. As we discuss later, the mysteries surrounding the EPR paradox and Bell's theorem dissolve if one recognizes that physical cause and logical dependence are fundamentally different concepts [37]. Relating EPRB data to quantum theory In the case of the EPRB experiment, we immediately see that we face the same fundamental problem if we go beyond the description of merely giving the data collected in the experiment. To make progress in understanding the behavior of the system as it is revealed to us by our (experimental) method of questioning, we have three options: 1. Use the established mathematical framework of probability theory to relate the quantities that appear in this theory (probabilities) to experimentally observed facts (expected frequencies). 2. Construct an event-based computer model that directly generates the data set Eq. (37), with expectation values that agree with those of quantum theory. 3. Without relying on concepts of quantum theory, construct a probabilistic model that predicts the expected frequencies as observed in the experiment. As quantum theory has nothing to say about individual events [2,4], logically speaking option (2) cannot make any reference to quantum theory. Sections V and VIII are devoted to options (2) and (3), respectively. For now, we continue with option (1). If we measure a property of a single particle, from Eq. (29) we naively expect that the assignment E_1(a_1) = Ê_1(a_1) and E_2(a_2) = Ê_2(a_2) (44) holds with probability one, where Ê_1(a_1) and Ê_2(a_2) denote the single-particle averages computed from the recorded data. Note that Eq. (44) contains contributions from the events that fall within the coincidence window only. As explained earlier, for the assignments Eq. (44) to make sense mathematically, we have to assume that there is an underlying probabilistic process that generates the data {x_{n,i}}. The fact that quantum theory describes a very large variety of experimental data strongly suggests that the assignment Eq. (44) makes a lot of sense.
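To make the role of the coincidence window explicit, the following Python fragment (a sketch of ours; the function name, the array conventions and the dummy data are purely illustrative, and the averages correspond to the assignments of the type of Eq. (44)) computes single- and two-particle averages from recorded data {x_{n,1}, t_{n,1}; x_{n,2}, t_{n,2}}, counting only events whose time tags fall within the window W:

    import numpy as np

    def averages_with_window(x1, t1, x2, t2, W):
        # Keep only events satisfying the coincidence criterion |t1 - t2| < W.
        coincident = np.abs(t1 - t2) < W
        n = coincident.sum()
        E1 = x1[coincident].sum() / n
        E2 = x2[coincident].sum() / n
        E12 = (x1[coincident] * x2[coincident]).sum() / n
        return E1, E2, E12

Exactly as stressed in the text, the estimates depend on W: changing the window changes which events are counted as two-particle systems.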
As explained earlier, for the quantum dynamical variable σ_1·a σ_2·b, it is not clear at all how to relate its eigenvalue c_n to the data set Eq. (37). What does it mean to measure a common property of a system of two particles? Why is the time-tag data absent in the quantum theoretical description while it is of vital importance for the experiment? Evidently, we need a proper operational definition of "a system of two particles" in terms of the observed data. As it is our aim to reproduce the experimental results as well as the results of the quantum model for the experiment, it would be logically inconsistent to adopt a definition that is different from the one used in real EPRB experiments. Therefore, we should consider the assignment E(a_1, a_2) = Ê(a_1, a_2), (45) where the frequency to observe systems of two particles is given by f = N_c/N, with N_c the number of events satisfying the coincidence criterion |t_{n,1} − t_{n,2}| < W. (46) In Eq. (46), the coincidence in time enters because it is an essential ingredient in any EPRB experiment. The expression for the coincidence is an operational procedure to define precisely, in terms of the observed data, the meaning of the statement that two particles constitute a two-particle system. V. SIMULATION MODEL In this section, we take up the main challenge, the construction of locally causal (in Einstein's sense) processes that generate the data sets Eq. (2) such that they reproduce the results of quantum theory, summarized in Table I. A concrete simulation model of the EPRB experiment sketched in Fig. 2 requires a specification of the information carried by the particles, the algorithm that simulates the source and the observation stations, the Stern-Gerlach magnets, and the procedure to analyze the data. We now describe a computer simulation model that generates the data {Υ_1, Υ_2}, see Eq. (2). From the specification of the algorithm, it will be clear that it complies with Einstein's criterion of local causality on the ontological level: Once the particles leave the source, an action at observation station 1 (2) can in no way have a causal effect on the outcome of the measurement at observation station 2 (1). In this section, we limit the discussion to systems of two S = 1/2 particles. The algorithm that simulates the EPRB experiments with photons, as well as the results of the simulations, are very similar to those presented here. A detailed account of the simulations for the photon system can be found elsewhere [33,34,35]. A. Algorithm Source and particles As in the quantum theoretical treatment of the problem, we will consider two different cases. In Case I, the source emits particles that carry a unit vector S_{n,i} = (−1)^{i+1} (cos ϕ_n sin θ_n, sin ϕ_n sin θ_n, cos θ_n), representing the magnetic moment (or spin) of the particles. The spin of a particle is completely characterized by ϕ_n and cos θ_n, which we assume to be distributed uniformly over the intervals [0, 2π[ and [−1, 1], respectively. In Case II, the source emits particles that carry fixed unit vectors S_{n,i} = S_i. Observation station Prior to the data collection, we fix the number M of different directions of the Stern-Gerlach magnets. We use 4M pseudo-random numbers to fill the arrays b_i = (b_{i,1}, . . . , b_{i,M}) for i = 1, 2, each unit vector b_{i,m} being specified by two pseudo-random angles. Stern-Gerlach magnet The input-output relation of a Stern-Gerlach magnet is rather simple: For a fixed direction a_i of the field, the Stern-Gerlach magnet deflects a particle with magnetic moment S_{n,i} in a direction that we label by x_{n,i} = ±1. As the particle travels through the Stern-Gerlach magnet, the magnetic moment of the particle changes from S_{n,i} to S′_{n,i} = x_{n,i} a_i.
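A compact Python sketch of the Case I source and of one simple realization of the Stern-Gerlach deflection (ours; the deflection rule anticipates the "pseudo-random model" discussed below, and all names and parameters are illustrative) reads:

    import numpy as np

    rng = np.random.default_rng(seed=7)

    def source_case1(N, rng):
        # Emit N pairs of opposite unit vectors S_{n,1} = -S_{n,2} (Case I):
        # phi uniform on [0, 2*pi[, cos(theta) uniform on [-1, 1].
        phi = rng.uniform(0.0, 2.0 * np.pi, N)
        cos_theta = rng.uniform(-1.0, 1.0, N)
        sin_theta = np.sqrt(1.0 - cos_theta**2)
        S1 = np.stack([np.cos(phi) * sin_theta,
                       np.sin(phi) * sin_theta,
                       cos_theta], axis=1)
        return S1, -S1

    def stern_gerlach(S, a, rng):
        # Deflect: x = +1 with probability (1 + S.a)/2 (ties have probability zero);
        # the magnetic moment becomes S' = x*a.
        r = rng.uniform(-1.0, 1.0, len(S))
        x = np.sign(S @ a - r).astype(int)
        S_out = x[:, None] * a[None, :]
        return x, S_out

    S1, S2 = source_case1(10**5, rng)
    a1 = np.array([0.0, 0.0, 1.0])
    x1, _ = stern_gerlach(S1, a1, rng)
    print(x1.mean())  # ~0 for uniformly distributed spins, independent of a1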
According to the simple quantum mechanical model of the Stern-Gerlach experiment [2], for fixed S and fixed a_i, the probability to observe x_{n,i} = ±1 is (1 ± S·a_i)/2. Thus, in this case, the simulation algorithm should generate the sequence x_{n,i} = ±1 such that lim_{N→∞} N^{-1} Σ_{n=1}^{N} x_{n,i} = S·a_i with probability one. However, if the input consists of uniformly distributed S_{n,i}, the sequence of output bits should satisfy lim_{N→∞} N^{-1} Σ_{n=1}^{N} x_{n,i} = 0 with probability one, independent of the orientation a_i of the Stern-Gerlach magnet. We now consider two algorithms, a deterministic and a pseudo-random one, that simulate the operation of a Stern-Gerlach magnet. Deterministic model. Elsewhere, we have demonstrated that simple deterministic, local, causal and classical processes that have a primitive form of learning capability can be used to simulate quantum systems, not by solving a wave equation but directly through event-by-event simulation [8,9,11,13]. The events are generated such that their frequencies of occurrence agree with the probabilities of quantum theory. In this simulation approach, the basic processing unit is called a deterministic learning machine (DLM) [8,9,11,13,64]. A DLM is a device that exchanges information with the particles that pass through it. It learns by comparing the message carried by an event with predictions based on the knowledge acquired by the DLM during the processing of previous events. The DLM tries to do this in an efficient manner, effectively by minimizing the difference between the data in the message and the DLM's internal representation of it [8,9,11,13]. A DLM learns by processing successive events but does not store the data contained in the individual events. Connecting the input of a DLM to the output of another DLM yields a locally connected network of DLMs. A DLM within the network locally processes the data contained in an event and responds by sending a message that may be used as input for another DLM. Networks of DLMs process messages in a sequential manner and only communicate with each other by message passing: They satisfy Einstein's criterion of local causality. For the present purpose, we only need the simplest version of the DLM [11]. The DLM that we use to simulate the operation of the Stern-Gerlach magnet is defined as follows. The internal state of the ith DLM, after the nth event, is described by one real variable u_{n,i}. Although irrelevant for what follows, this variable may be thought of as describing the fluctuations of the applied field due to the passage of an uncharged particle that carries a magnetic moment. As the particle with spin S_{n,i} communicates (interacts) with the DLM (applied field), the latter updates its internal state according to the learning rule Eq. (49), and the spin changes according to Eq. (50), taking the value S′_{n,i} = +a_i or S′_{n,i} = −a_i, corresponding to spin up and spin down (relative to the direction of the magnetic field a_i), respectively. If the DLM selects spin up (down), it generates an x_{n,i} = +1 (x_{n,i} = −1) event. In Eqs. (49) and (50), 0 < l < 1 is a parameter that controls the speed with which the DLM learns (and forgets) about the incoming events. The dynamic behavior of the DLM, defined by the rule Eq. (49), is discussed in detail elsewhere [11] and may be summarized as follows: 1. If the DLM receives particles with fixed spin S_{n,i} = S, the sequence {x_{n,i}} is periodic for all n > n_0, n_0 depending on u_{0,i} and l [11]. For n > n_0, the frequency N_±/(N_+ + N_−) of x_{n,i} = ±1 events is given by (1 ± S·a_i)/2 and we have [11] lim_{N→∞} N^{-1} Σ_{n=1}^{N} x_{n,i} = S·a_i (51) exactly. Note that the limit N → ∞ in Eq. (51) is well-defined because the sequence {x_{n,i}} is periodic with a finite period [11].
2. If the DLM receives S_{n,i}, statistically independent and uniformly distributed over the unit sphere, then the DLM generates the sequence x_{n,i} = sign(S_{n,i} · a_i) for all n > n_0, n_0 depending on u_{0,i} and l [11]. In this case we have lim_{N→∞} N^{-1} Σ_{n=1}^{N} x_{n,i} = 0. (52) In this case, the x_{n,i} are Bernoulli variables and the law of large numbers then guarantees that Eq. (52) holds with probability one [38]. Thus, depending on the nature of the input sequence S_{n,i}, the DLM generates output sequences {x_{n,i} = ±1} and particles with spin S′_{n,i} such that the time averages of these sequences agree with the experimental facts. Pseudo-random model. The simplest algorithm that performs the task of simulating a Stern-Gerlach magnet reads x_{n,i} = sign(S_{n,i} · a_i − r_n), (53) where −1 ≤ r_n < 1 are uniform pseudo-random numbers, and the spin changes according to S′_{n,i} = x_{n,i} a_i. (54) It is easy to check that, on average, the input-output behavior is the same as the one of the idealized Stern-Gerlach magnet. Time tags When a charge-neutral, magnetic particle passes through a Stern-Gerlach magnet, it experiences a time delay that depends on the direction of its magnetic moment relative to the direction of the field in the Stern-Gerlach magnet. Experimentally, this time delay is used to perform spectroscopy of atomic size magnetic clusters [51] and atomic interferometry [52]. As a simple simulation model for this time delay mechanism, we assume that the time delay t_{n,i} of a particle with spin S_{n,i} is distributed uniformly over the interval [t_0, t_0 + T_{n,i}]. Similarly, experimental evidence that the time-of-flight of single photons passing through an electro-optic modulator fluctuates considerably can be found in Ref. 56. The idea that these fluctuations might be responsible for the observed "quantum correlations" has been proposed in our earlier work [21]. From Eq. (3), it follows that only differences of time delays matter. Hence, we may put t_0 = 0. The time tag for the event n is then t_{n,i} ∈ [0, T_{n,i}]. We thus need an explicit expression for T_{n,i}. The choice T_{n,i} = constant is too simple: In this case we recover the model considered by Bell, which is known not to reproduce the correct quantum correlation Eq. (19) [14]. Assuming that the particle only "knows" the direction of its own spin relative to the direction of the magnetic field in the Stern-Gerlach magnet, we can construct one number that is rotationally invariant, namely S_{n,i} · a_i. Thus, we assume T_{n,i} = F(S_{n,i} · a_i). As S_{n,i} · a_i = cos θ_{S_{n,i},a_i} determines whether the particle generates a +1 or −1 signal, it is not unreasonable to expect that F is a function of sin θ_{S_{n,i},a_i}. After a few trials, we found that T_{n,i} = T_0 |1 − (S_{n,i} · a_i)^2|^{d/2} = T_0 |S_{n,i} × a_i|^d yields interesting results. Here, T_0 is the maximum time delay, which defines the unit of time, and d is a free parameter of our model. In the sequel, we express τ, W, t_{n,i} and T_{n,i} in units of T_0, which for convenience we set equal to one. Data analysis The algorithm described earlier generates the data sets Υ_i for spin-1/2 particles, just as experiment does for photons [32]. In order to count the coincidences, we strictly follow the procedure adopted in the EPRB experiment with photons [32]. First, we choose a time-tag resolution 0 < τ < T_0 and a coincidence window τ ≤ W. We set the correlation counts C_{xy}(α_m, β_{m′}) to zero for all x, y = ±1 and m, m′ = 1, . . . , M.
We compute the discretized time tags k_{n,i} = ⌈t_{n,i}/τ⌉ for all events in both data sets. Here ⌈x⌉ denotes the smallest integer that is larger than or equal to x, that is, ⌈x⌉ − 1 < x ≤ ⌈x⌉. According to the procedure adopted in the experiment [32], an entangled pair is observed if and only if |k_{n,1} − k_{n,2}| < k = ⌈W/τ⌉. Thus, if |k_{n,1} − k_{n,2}| < k, we increment the count C_{x_{n,1},x_{n,2}}(α_m, β_{m′}). After processing all the data for the N events, we compute the single-particle expectation values and the correlation according to Eq. (4) and Eq. (6), respectively. B. Deterministic model: Results Simulation of Case I and II We first demonstrate that the simulation model reproduces the results of quantum theory in the case of the EPRB experiment (Case I). In Fig. 6 we show simulation data for k = 1, d = 0, 3, τ = 0.001, l = 0.999, M = 10 and N = 10^6, for 100 randomly chosen values of a_1 · a_2, covering the interval [−1, +1]. At the nth event, two uniform pseudo-random numbers 1 ≤ m, m′ ≤ M are used to select the rotation angles, a_{n,1} = b_{1,m} and a_{n,2} = b_{2,m′}. Within the statistical errors, for the pseudo-random number generators that we use [65], the correlation between m and m′ is zero. The solid line is the prediction of quantum theory, see the second column of Table I. It is clear that for d = 3 there is excellent agreement between simulation and quantum theory. This is not an accident. Simulations for d = 3 but with different values of the other parameters (results not shown) confirm that, for sufficiently small τ and sufficiently large N, the simulation model reproduces the quantum theoretical results listed in the second column of Table I. Second, to simulate Case II, we let the source produce particles with fixed polarization but we do not change the algorithm that simulates the observation stations. In Fig. 7, we present simulation data for k = 1, d = 0, 3, τ = 0.001, l = 0.999, N = 10^6, a_1 = (0, 0, 1), a_2 = (1/2, 1/2, 1/√2), and S_{n,i} = (−1)^{i+1} (sin η, 0, cos η) for 0 ≤ η ≤ π. For this choice of a_1, a_2 and S_{n,i}, quantum theory predicts the expression given in Eq. (55) (see Table I) and, for d = 3, as shown in Fig. 7, the simulation model reproduces the quantum theoretical results very well. Extensive tests (data not shown) lead to the conclusion that, for d = 3 and to first order in W, our simulation model reproduces the results of quantum theory of two S = 1/2 objects, for both Case I and Case II. For d = 0 (or W > T_0), we obtain simulation results that agree very well with the result that is obtained by considering the class of models studied by Bell [14]. In Case II, E(a_1, a_2) is given by the expression in Eq. (55) and, up to the usual statistical fluctuations, the simulation data (see Fig. 7) do not depend on the value of the time-tag parameter d and the time window W. Case I: Numerical treatment As a check on the simulation results for Case I, we examine the limit N → ∞ and show that, to first order in W, the simulation model yields the two-particle correlation that is characteristic for the singlet state [21,33]. In Case I, we may replace the DLM model for the Stern-Gerlach magnet by the simpler model that generates data according to x_{n,i} = sign(S_{n,i} · a_i). The two-particle correlation can then be written as the ratio of integrals given in Eq. (56), where D(T_1, T_2, W) is the density of coincidences for fixed a_i and angles (ϕ, θ) (within a small surface area sin θ dθ dϕ), T_i ≡ F(S_i · a_i), S_i = S_i(ϕ, θ), and x_i = sign(S_i · a_i). An analytical expression for D(T_1, T_2, W) can be derived as follows.
For a fixed time-tag resolution 0 < τ < 1, the discretized time tag for the nth detection event is given by k_{n,i} = ⌈t_{n,i}/τ⌉, where ⌈x⌉ denotes the smallest integer that is larger than or equal to x. The discretized time tag k_{n,i} takes integer values between 1 and K_i ≡ ⌈τ^{-1} T_i⌉, where K_i is the maximum discretized time delay for a particle carrying angles (ϕ, θ) and passing through a Stern-Gerlach magnet with orientation a_i. If |k_{n,1} − k_{n,2}| < k = ⌈W/τ⌉, the two spin-1/2 particles are defined to form a pair. For fixed a_i and (ϕ, θ), we can count the total number of pairs, or coincidences C(K_1, K_2, k), by considering the graphical representation shown in Fig. 8. After a careful examination of all possibilities, we find that the density can be written in the closed form Eq. (57), with k_0 = min(K_1, K_2, k), k_{12} = min(K_1, K_2), and K_{12} = k_{12} − max(0, max(K_1, K_2) − k). [Fig. 9 caption: Dashed horizontal lines at +2 (−2): maximum (minimum) value if the system is described by a factorizable two-particle probability distribution. Solid horizontal lines at +2√2 (−2√2): maximum (minimum) value if the system is described by the quantum theory for two spin-1/2 particles. Solid line: S(θ) = cos 3θ − 3 cos θ, as obtained from quantum theory.] After substituting Eq. (57) into Eq. (56), the remaining integrals are easily calculated numerically. In Fig. 9 we present results for S(θ) = S(a, b, c, d) for the case k = W = 1 and d = 0, . . . , 5 and the choice a·c = b·c = b·d = cos θ and a·d = cos 3θ [2]. For d = 0 (or W > T_0), we find that S(θ) ≤ 2. Thus, we see that ignoring the time-tag data automatically renders our model incapable of producing data that violates the Bell inequalities [14]. For 1 ≤ d < 3, 2 < S_max < 2√2 and hence the model violates the Bell inequality but does not reproduce the correlations of the singlet state. As expected on the basis of our results for E(a_1, a_2), if d = 3, the numerical results produced by our model are indistinguishable from the quantum theoretical result S(θ) = cos 3θ − 3 cos θ. For d > 3, 2√2 < S_max ≤ 4, implying that our model exhibits correlations that cannot be described by the quantum theory of two spin-1/2 particles, even though it rigorously satisfies Einstein's criteria for local causality. It is clear that the result for the coincidences depends on the time-tag resolution τ, the time window W and the number of events N, just as in real experiments [24,25,26,27,28,30,31,32], see Section II G. Expression Eq. (56) allows us to easily study the behavior of the model as a function of the time window W, relative to the time-tag resolution τ. In Fig. 10 we plot S_max as a function of W/τ for various values of d. Note that the numerical results agree with the values of S_max that can be obtained analytically for the limiting cases W = τ → 0, d = 0, 3 and W > T_0 (see Sec. V B 3). From Fig. 10, it is clear that for d = 3 and W = 0, the model reproduces the result of the quantum system in the fully entangled state. Furthermore, Fig. 10 shows that, for sufficiently small time-tag resolution τ, increasing the time window changes the nature of the two-particle correlations. Since W is a parameter solely used in the data analysis procedure and S_max is a decreasing function of W, the value of S_max and/or of the correlations is not sufficient to make a definite statement about the nature of the source or even the nature of the complete setup. Second, we consider the case in which W → τ.
Formula Eq. (57) greatly simplifies if we consider the case k = 1 (W = τ), yielding C(K_1, K_2, 1) = min(K_1, K_2), as is evident by looking at Fig. 8. For W = τ and fixed a_i and (ϕ, θ), the density D(T_1, T_2, τ) = C(K_1, K_2, 1)/K_1 K_2 that we register two particles with a time-tag difference less than τ is bounded as expressed by Eq. (59). For W = τ → 0 and T_i = |S_i × a_i|^3, the integrals in Eq. (56) can be evaluated in closed form. Denoting y_1 = sign(cos ϕ) and y_2 = sign(cos(ϕ − α)) and using the same coordinate systems as above, we find E(a_1, a_2) = − [∫_0^{2π} y_1 y_2 min(sin² ϕ, sin²(ϕ − α)) / (sin² ϕ sin²(ϕ − α)) dϕ] / [∫_0^{2π} min(sin² ϕ, sin²(ϕ − α)) / (sin² ϕ sin²(ϕ − α)) dϕ], (60) which is exactly the same as the quantum theoretical result Eq. (19). In retrospect, it is remarkable that we obtain Eq. (60) by requiring that the results do not depend on W and τ, which in this case is very much the same as hypothesis (3) of Sec. VIII, used in the probabilistic modeling of the EPRB experiment. For other integer values of d, the integrals can be worked out as well, but the calculations are rather tedious and the results are not very illuminating. As an example, Eq. (61) gives the expression for d = 5. In Fig. 11, we demonstrate that the simulation data for d = 5 agree very well with the analytical result Eq. (61). As shown in Fig. 9, for d = 5, the data not only violate the Bell inequality but also violate the rigorous upper bound S_max ≤ 2√2 for a quantum system of two S = 1/2 particles. C. Pseudo-random model: Results Using the simple pseudo-random model for the Stern-Gerlach magnet yields results that are qualitatively the same as those of the deterministic model. Therefore, we present only a few representative simulation results. A detailed analytical treatment of the pseudo-random model is given in Section VII and fully supports the simulation results described next. [Fig. 12 caption: Comparison between the event-based simulation results obtained by using a pseudo-random model for the Stern-Gerlach magnets, quantum theory, and the exact solution for the analytical model in the limit N → ∞. The two-particle correlation E(a_1, a_2) for Case I is shown as a function of θ_{a_1 a_2} ≡ arccos(a_1 · a_2). Markers: event-based simulation results obtained by using a pseudo-random model for the Stern-Gerlach magnets. The simulation parameters are k = 1, τ = 0.00001, M = 10, N = 10^9, d = 7 (red bullets) and d = 0 (blue squares), the latter corresponding to discarding the time-tag data (equivalent to W > T_0). Solid line (black): quantum theory, E(a_1, a_2) = −cos θ_{a_1 a_2}.] In Fig. 12, we demonstrate that the simulation results for d = 7 are in excellent agreement with the quantum theoretical expression for the correlation in the singlet state. However, as we prove in Section VII, if the number of events goes to infinity, there is no exact agreement: There is a difference of at most 2% between the two curves. Note that in the case of the deterministic model exact agreement is obtained for d = 3. Also notice that there is some weak but systematic deviation from the exact results for θ_{a_1 a_2} ≈ 0 and θ_{a_1 a_2} ≈ π. This is due to the pseudo-random nature of the model: It reproduces the perfect (anti)correlation at θ_{a_1 a_2} = 0, π in the limit N → ∞ only, as shown rigorously in Section VII.
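As a simple consistency check on Eq. (60) (our own numerical sketch; the midpoint grid, the grid size and the test angle are arbitrary choices), one can evaluate the ratio of integrals numerically and compare it with −cos α:

    import numpy as np

    def E_eq60(alpha, n=200000):
        # Midpoint rule with even n, so the grid never hits sin(phi) = 0 exactly.
        phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
        y1 = np.sign(np.cos(phi))
        y2 = np.sign(np.cos(phi - alpha))
        g = np.minimum(np.sin(phi)**2, np.sin(phi - alpha)**2) \
            / (np.sin(phi)**2 * np.sin(phi - alpha)**2)
        return -np.sum(y1 * y2 * g) / np.sum(g)

    alpha = np.pi / 3
    print(E_eq60(alpha), -np.cos(alpha))  # the two numbers should agree closely

Note that the integrand min(sin² ϕ, sin²(ϕ − α))/(sin² ϕ sin²(ϕ − α)) = 1/max(sin² ϕ, sin²(ϕ − α)) stays bounded for α away from 0 and π, so the quadrature is unproblematic there.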
The results for d = 5 and d = 9, presented in Figs. 13 and 14, respectively, show the same trend as we observed when using the deterministic model for the Stern-Gerlach magnet: For d = 5 the correlation is less strong than for a quantum system in the singlet state, but for d = 9 it is definitely stronger. Notice that, for a fixed number of events N, the systematic deviation from the perfect (anti)correlation at θ_{a_1 a_2} = 0, π increases with d. D. Summary Starting from the factual observation that experimental realizations of the EPRB experiment produce the data {Υ_1, Υ_2} (see Eq. (2)) and that coincidence in time is a key ingredient of the data analysis, we have constructed computer simulation models that satisfy Einstein's conditions of local causality and, in the case that we employ a deterministic model for the Stern-Gerlach magnet, exactly reproduce the correlation E(a_1, a_2) = −a_1 · a_2 that is characteristic for a quantum system in the singlet state. In this case, both the simulation and a rigorous mathematical treatment of the model lead to the conclusion that, for d = 3 and W → τ → 0, the model reproduces the results (see Table I) of quantum theory for a system of two S = 1/2 particles. The pseudo-random model for the Stern-Gerlach magnet yields data that are qualitatively similar but, for integer values of d, do not exactly agree with quantum theory (see Section VII). It is of interest to mention here that if we simulate EPRB experiments that use the photon polarization as a two-state system, both the deterministic and the pseudo-random model exactly reproduce the quantum theoretical results [21,35]. Salient features of these models are that they generate the data set Eq. (2) event-by-event, use integer arithmetic and elementary mathematics to analyze the data, do not rely on concepts of probability and quantum theory, and provide a simple, rational and realistic picture of the mechanism that yields correlations such as Eq. (19). One may wonder why particles emitted by a source with definite spin orientations that are exactly opposite to each other are not described by a density matrix that is a product state. Of course, in this respect the description of our model may be deceptive. In a naive picture one might think that the whole system is described by a density matrix that is a product state. The problem with this naive picture is that it often works extremely well but in some cases leads to all kinds of logical inconsistencies (see Ref. [2] for an extensive discussion of this point), and it should not come as a surprise that the EPR problem is the prime example where the naive picture breaks down completely. Quantum theory describes the system as a whole: It does not describe a single pair of particles as they leave the source. Another deceptive point may be that, in our model, one can compute the correlation of the particles right after they have left the source. This correlation is exactly minus one. However, this correlation has no relevance to the experiment: To measure the correlation of the particles, it is necessary to put in the Stern-Gerlach magnets, detectors, timing logic, and so on. We emphasize that the simulation procedure counts all events that, according to the same criterion as the one employed in experiment, correspond to the detection of two-particle systems. Our simulation results also suggest that we may have to reconsider the commonly accepted point of view that the more certain we are about a measurement, the more "classical" the system is.
Indeed, according to experiments and in concert with the prediction of our model, this point of view is in conflict with the observation that the more we reduce this uncertainty by letting W → 0, the better the agreement with quantum theory becomes. Both in experiments and in our model, the uncertainty is in the time-tag data, and it is this uncertainty that affects the coincidences and yields the quantum correlations of the singlet state (if W → 0). Isn't it then very remarkable that the agreement between experiment and quantum theory improves by reducing (not increasing!) the uncertainty by making W as small as technically feasible? We have shown that whether or not these simulation models produce quantum correlations depends on the data analysis procedure that is performed (long) after the data has been collected: In order to observe the correlations of the singlet state, the resolution τ of the devices that generate the time tags and the time window W should be made as small as possible. Disregarding the time-tag data (d = 0 or W > T_0) yields results that disagree with quantum theory but agree with the models considered by Bell [14]. Our results show that increasing the time window changes the nature of the two-particle correlations. This prediction can easily be tested, and it is confirmed by re-analyzing available experimental data with different values of the time window W, as we did in Section II G. In Case I, the two-particle correlation depends on the value of the time window W. By reducing W from infinity to zero, this correlation changes from typical Bell-like to singlet-like, without changing the procedure by which the particles are emitted by the source. Thus, the character of the correlation not only depends on the whole experimental setup but also on the way the data analysis is carried out. Hence, from the two-particle correlation itself, one cannot make any definite statement about the character of the source. Thus, the two-particle correlation is a property of the whole system (which is what quantum theory describes), not a property of the source itself. In contrast, in Case II, the observation stations always receive particles with the same spin orientation and, although the number of coincidences decreases as W decreases (and the statistical fluctuations increase), the functional form of the correlation does not depend on W: In Case II, the single-particle and two-particle correlations do not depend on the value of the time window W. VI. EINSTEIN'S LOCALITY VERSUS BELL'S LOCALITY Starting from the data gathering and analysis procedures used in EPRB (gedanken) experiments, we have constructed an algorithm in which every essential element in the experiment has a counterpart (see Section II). The algorithm generates the same type of data as recorded in the experiments. The data is analyzed according to the experimental procedure to count coincidences. The algorithm satisfies Einstein's criteria of local causality, does not rely on any concept of quantum theory, but nevertheless reproduces the two-particle correlation of the singlet state and all other properties of a quantum system consisting of two S = 1/2 particles. At first sight, our results may seem to be in contradiction with the folklore on the EPR paradox, very often formulated in terms of Bell's theorem, which states that quantum theory cannot be described by a local hidden variable model.
In fact, there is no contradiction once one recognizes that the concept of locality, as defined by Bell, is different from Einstein's definition of locality. Bell made an attempt to incorporate Einstein's concept of locality (defined on the level of individual events) into probabilistic theories, apparently without realizing that probabilities express logical, not necessarily physical, relationships between events. However, the assumption that the absence of a causal influence implies logical independence leads to absurd conclusions, even for very mundane problems [36,40], and it is therefore not surprising that, when applied to quantum problems, this assumption can generate all kinds of paradoxes [36]. The simulation model that we describe in this paper, and similar models that we described elsewhere [8,9,11,13,66], do not rely on concepts of probability theory: They operate on the ontological, event-by-event level. Therefore, it would be logically inconsistent to even attempt to apply Bell's notion of locality to these models. However, the fact that we have proven that there exist event-based models that satisfy Einstein's criterion of locality and causality and also reproduce all properties of a quantum system consisting of two S = 1/2 particles suggests that it may be of interest to revisit the relation between locality à la Einstein and locality à la Bell. Before we address this issue, we want to make clear that we do not question the validity of the Bell-type inequalities. These inequalities are mathematical identities that are useful to characterize the amount of (quantum) correlation between two quantities. In this section, we focus on the logic that is used to address the meaning of "locality" in quantum physics. In the discussion that follows, we assume that all processes are causal, that is, they should be physically realizable, and we implicitly exclude all others. A. Einstein's locality criterion Einstein expressed the principle of locality as "the real factual situation of the system S_2 is independent of what is done with the system S_1, which is spatially separated from the former" [2]. We formalize this by introducing the Definition: A theory is E-local if and only if it satisfies Einstein's principle of locality for each individual event. Clearly, E-locality applies to each individual fact (ontological level). Recall that quantum theory and probability theory have nothing to say about individual events: They describe phenomena on the epistemological level. The simulation model that we describe in this paper is a purely ontological model of the EPRB experiment that can reproduce the results of quantum theory. From the description of the simulation algorithm, it is evident that x_{n,i} and t_{n,i} depend on the variables (ϕ_n, θ_n) that represent the magnetic moment of a particle, and on the orientation a_{n,i} of the Stern-Gerlach magnets, which can be chosen at will for each (n, i). Furthermore, the event n cannot affect the data recorded for all n′ < n, implying that the algorithm simulates a causal process. In addition, it is obvious from the specification of the algorithm that x_{n,1}, t_{n,1}, and a_{n,1} do not depend (in any mathematical sense) on a_{n,2}, nor do x_{n,2}, t_{n,2}, and a_{n,2} depend on a_{n,1}. This implies that, for each event, the numbers x_{n,1} and t_{n,1} (x_{n,2} and t_{n,2}) do not depend on whatever action is taken at observation station 2 (1). Summarizing: Our simulation model is E-local and causal.
B. Bell's locality criterion To set the stage, we first recall the axioms of probability theory [2,37,38]. Let A, B, and Z denote some propositions (events) that may be true (may occur) or false (may not occur). The probability that A is true, conditional on Z being true, is denoted by P(A|Z) [37,38]. The axioms of probability theory may be formulated as [2,37]: 1. 0 ≤ P(A|Z) ≤ 1; 2. P(A|Z) + P(Ā|Z) = 1, where Ā denotes the negation of A; 3. P(AB|Z) = P(A|BZ)P(B|Z) = P(B|AZ)P(A|Z). These three axioms are necessary and sufficient to define a consistent mathematical framework for probability theory. By definition, two events A and B are logically independent if and only if P(A|BZ) = P(A|Z) [37,38]. If the events A and B are logically dependent, we have P(A|BZ) = P(B|AZ)P(A|Z)/P(B|Z) ≠ P(A|Z), (62) showing that the assignment of the probability of the event A (B) depends on the knowledge of the event B (A). From Eq. (62), we see that P(A|BZ) ≠ P(A|Z) unless the events A and B are logically independent (we may assume P(A|Z) > 0 and P(B|Z) > 0 because of the fact that we actually registered A and B). As we shall see shortly, the definition of logical independence is of extreme importance for understanding the implications of Bell's definition of locality. Bell considers theories (see Ref. 14, Chap. 7) that assign a probability for an event A to be registered, given that the circumstances under which A is registered are described by another event Z. The events A and Z are propositions of the kind "the values of the variables (as recorded by m measurement devices) are A = {A_1, . . . , A_m}" and "the values of the variables (as recorded by n measurement devices) are Z = {Z_1, . . . , Z_n}". Bell considers the case that the events A and B are localized in regions 1 and 2, respectively, and assumes that regions 1 and 2 are separated in a spacelike way such that events in region 1 (2) have no causal influence on events in region 2 (1) [14]. According to probability theory, we have [37,38] P(ÂB̌|âb̌z) = P(Â|B̌âb̌z)P(B̌|âb̌z), (63) where we introduced the notation X̂ and Y̌ to indicate that event X̂ (Y̌) can have no causal effect on event Y̌ (X̂). We also made explicit that the condition Z = âb̌z under which A and B have been registered may be written in terms of a common condition z and conditions a and b that may have a causal effect on the outcomes of A and B, respectively. Note that a, b and z are propositions too. According to Bell, since the events B̌ and b̌ can have no causal effect on the event Â, in a local causal theory [14] P(Â|B̌âb̌z) = P(Â|âz), (64) and, similarly, P(B̌|Ââb̌z) = P(B̌|b̌z), (65) yielding P(ÂB̌|âb̌z) = P(Â|âz)P(B̌|b̌z). (66) The steps that take us from Eq. (63) to Eq. (66) clearly show that Bell believes that the absence of a causal influence implies logical independence. In fact, within probability theory, Eq. (66) is the formal statement that A (B) is logically independent of b (a) (see Eq. (62)). According to Bell, theories that do not satisfy Eq. (66), such as quantum theory, are not locally causal [14]. Theories that satisfy Bell's criterion of locality, as expressed by Eq. (64), will be called B-local. We formalize this by introducing the Definition: A theory is B-local if and only if Eqs. (64) and (65) hold. Clearly, B-locality is defined within the realm of probabilistic theories only. Note that the folklore on the EPR paradox generally does not distinguish between B-locality and E-locality, a remarkable logical leap because E-locality is defined on the level of individual events whereas B-locality is defined in terms of probabilities for events to occur.
A possible explanation for not noticing that this is a major logical step to take is that it is quite common to mix up the meaning of frequencies and probabilities. The former is a property that we measure by counting. It is a property of the whole system under study. The latter is a mental, mathematical construct that allows us to reason about the former. The reader who has difficulties grasping this delicate but fundamental point may find it useful to read Sec. IV C 1 once more. If the events A and B are represented by integer or real variables A and B (a minor abuse of notation), the expectation of the joint event AB conditional on ab is defined by [37,38] E(a, b) = Σ_{A,B} A B P(ÂB̌|âb̌z). If Eq. (66) holds, we have E_B(a, b) = [Σ_A A P(Â|âz)][Σ_B B P(B̌|b̌z)], where we used the subscript B to indicate that we have assumed that the theory is B-local. Let us focus on the case that −1 ≤ A ≤ 1 and −1 ≤ B ≤ 1. Denoting ā = Σ_A A P(Â|âz), b̄ = Σ_B B P(B̌|b̌z), and c̄ = Σ_C C P(Č|čz), the numbers ā, b̄, and c̄ all lie in the interval [−1, 1] and we have |E_B(a, b) − E_B(a, c)| = |ā| |b̄ − c̄| ≤ 1 − b̄c̄, hence |E_B(a, b) − E_B(a, c)| ≤ 1 − E_B(b, c), (70) which has the form of one of the Bell inequalities (other inequalities can be derived in exactly the same manner) but lacks the element of the hidden variables (see later). A B-local theory can never violate the inequality Eq. (70). If we find that inequality Eq. (70) is violated for some E(a, b), the only conclusion that can be drawn is that E(a, b) cannot be obtained from a B-local probabilistic theory. To appreciate the consequences of Bell's definition of a local theory, it is very instructive to apply it to examples that do not require concepts of quantum theory. We first consider a very simple experiment that shows that application of Bell's definition of locality leads to the conclusion that an urn filled with balls of two different colors is described by a theory that is B-nonlocal [36]. Second, we show that Bell's assumption that the absence of causal influence implies logical independence enforces very strong conditions on the functional dependence of the probability distributions, severely limiting the (classical) phenomena that a B-local theory can describe. Bernoulli's urn is B-nonlocal Let us take an urn filled with M red and N − M white balls (it is sufficient to take N = 2 and M = 1 to see the consequences of Bell's definition of locality) [36]. A blind monkey, having no knowledge about the position of the balls in the urn, draws two balls without putting the first ball back into the urn. We consider the events R_1 = "the result of the first draw is a red ball" and R_2 = "the result of the second draw is a red ball". Denoting all other knowledge about this experiment by Z, the probabilities for R_1 and R_2 are P(R_1|Z) = P(R_2|Z) = M/N. (71) If the result of the first draw is a red ball, the probability that the result of the second draw is also a red ball is given by P(R_2|R_1 Z) = (M − 1)/(N − 1). Let us now assume that the monkey hides the first ball from us but that it shows us the second ball, which turns out to be red. As there can be no causal effect of the second draw on the result of the first draw, application of Bell's reasoning to this experiment yields P(R_1|R_2 Z) = P(R_1|Z) = M/N, (73) which is obviously inconsistent with the basic rules of probability theory. Indeed, from axiom 3, we have P(R_1 R_2|Z) = P(R_1|R_2 Z)P(R_2|Z) = P(R_2|R_1 Z)P(R_1|Z), and using Eq. (71) we find P(R_1|R_2 Z) = P(R_2|R_1 Z) = (M − 1)/(N − 1), which is definitely in conflict with Eq. (73). Thus, Bell's assumption that the absence of a causal influence implies logical independence leads to inconsistent results in probability theory when applied to the simple physical system of an urn filled with red and white balls [36].
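The urn example is easily checked by exhaustive enumeration (our own sketch; N, M and the labeling of the balls are arbitrary): among all ordered draws of two distinct balls, the conditional frequency of "first ball red given second ball red" equals (M − 1)/(N − 1), not M/N:

    from itertools import permutations

    N, M = 5, 2                                     # N balls, of which M are red
    balls = ['R'] * M + ['W'] * (N - M)
    draws = list(permutations(range(N), 2))          # all ordered pairs without replacement
    r2 = [d for d in draws if balls[d[1]] == 'R']    # second draw is red
    r1r2 = [d for d in r2 if balls[d[0]] == 'R']     # first draw also red
    print(len(r1r2) / len(r2), (M - 1) / (N - 1))    # P(R1|R2,Z) = (M-1)/(N-1) = 0.25
    print(M / N)                                     # = 0.4, in conflict with Eq. (73)

The enumeration makes plain that the dependence of R_1 on R_2 is purely logical: knowing the second ball updates our knowledge of the first, even though the second draw cannot physically influence the first.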
B-local hidden variable theories We now demonstrate that a consistent application of Bell's definition of locality imposes severe constraints on the functional form of the probabilities. Following Ref. 37, let us introduce a new set of K exhaustive, mutually exclusive events H_k (k = 1, . . . , K), exhaustive implying that H_1 + . . . + H_K is always true. Then, according to the rules of probability theory [37], P(AB|abz) = P(AB(H_1 + . . . + H_K)|abz) = Σ_{k=1}^{K} P(AB H_k|abz). To make contact with Bell's work, we write λ instead of H_k, call them hidden variables, and replace the summation by an integration. We have P(ÂB̌|âb̌z) = ∫ P(ÂB̌|âb̌zλ) P(λ|âb̌z) dλ. (77) The variables λ may have a causal influence on the events in regions 1 and 2, hence they may affect the events  and/or B̌. Invoking the product rule, we find [37] P(ÂB̌|âb̌z) = ∫ P(Â|B̌âb̌zλ) P(B̌|âb̌zλ) P(λ|âb̌z) dλ, and applying Bell's locality conditions Eqs. (64) and (65) to the probabilities under the integral yields P(ÂB̌|âb̌z) = ∫ P(Â|âzλ) P(B̌|b̌zλ) P(λ|âb̌z) dλ. (79) Let us now make the (physically reasonable) assumption that the events λ are logically independent of â and b̌, an assumption which is also implicit in the work of Bell (because he ignored the difference between physical and logical independence). In other words, it is assumed that P(λ|âb̌z) = P(λ|âz) = P(λ|b̌z) = P(λ|z). (80) Then, Eq. (79) simplifies to P(ÂB̌|âb̌z) = ∫ P(Â|âzλ) P(B̌|b̌zλ) P(λ|z) dλ, (81) which is the expression for the joint probability P(ÂB̌|âb̌z) under the hypothesis of B-locality [14]. The famous Bell inequality [14] follows from Eq. (81) by repeating the steps that lead to Eq. (70). We denote the expectation value of AB by E^H(a, b) = ∫ [Σ_A A P(Â|âzλ)][Σ_B B P(B̌|b̌zλ)] P(λ|z) dλ, where the superscript H indicates that we compute the expectation using the "hidden variable" probability distribution defined by Eq. (81). As before, we focus on the case that −1 ≤ A ≤ 1 and −1 ≤ B ≤ 1. Then, by the same steps as before, |E^H(a, b) − E^H(a, c)| ≤ 1 − E^H(b, c). Logical consistency of a B-local theory demands that we may first apply Eqs. (64) and (65) and then introduce the hidden variables, or vice versa; we see that, in order for Bell's local probabilistic theory to be mathematically consistent, the probabilities P(Â|âzλ) and P(B̌|b̌zλ) should satisfy the consistency condition P(Â|âz)P(B̌|b̌z) = ∫ P(Â|âzλ) P(B̌|b̌zλ) P(λ|z) dλ, (86) for all Â, B̌, â, and b̌, and for all P(λ|âb̌z) satisfying Eq. (80). Furthermore, P(ÂB̌|âb̌z) is completely determined by P(Â|âzλ) and P(B̌|b̌zλ). Assuming, as is usually done, that the two measuring devices are the same, we may write Eq. (86) as the functional equation F(A, a)F(B, b) = ∫ F(A, a, λ) F(B, b, λ) p(λ) dλ, (87) where 0 ≤ F(A, a, λ) ≤ 1, 0 ≤ F(B, b, λ) ≤ 1 and 0 ≤ p(λ) ≤ 1. It may be of interest to note that the quantum theoretical expression for the single-particle probability describing a Stern-Gerlach magnet (for which A = ±1), P(A|a, S) = (1 + A a·S)/2, does not satisfy the functional equation Eq. (87), assuming Eq. (80) holds here too. Indeed, taking λ = S and integrating S over the unit sphere yields (1 + AB a·b/3)/4 for the right-hand side of the consistency condition Eq. (87), whereas the left-hand side equals 1/4, which obviously leads to a nonsensical conclusion (see also the Appendix). Within probability theory, a mathematically consistent application of B-locality severely limits the form of the probabilities and, as in the case of the urn, leads to conclusions that defy common sense, even in the realm of every-day experience. C. Reductio ad absurdum We now address the logic of the reasoning that was used by EPR and then apply the same logic to the reasoning used by Bell. We emphasize that we consider the logic of reasoning only. For instance, whether or not quantum theory is a correct description of the experimental data is not the issue here. We are concerned with logic only. The argument put forward by EPR can be formalized as follows: 1. Q is true is equivalent to the statement that quantum theory is a correct description of the experimental data.
2. C is true is equivalent to the statement that quantum theory is complete. Note that the precise definition of "complete" is irrelevant as far as the logic of reasoning is concerned. EPR use the formalism of quantum theory to prove that quantum theory is incomplete. Thus, EPR show that if quantum theory is a correct description of the experimental data and quantum theory is complete, then quantum theory is incomplete. This reasoning is an example of reductio ad absurdum: To disprove a statement, we assume it is true and then prove that it leads to a logical contradiction. In formal language, EPR prove that Q ∧ C ⇒ ¬C, (89) where ∧, ⇒ and ¬ denote the logical "and" operation, logical implication, and logical negation, respectively. Equivalently, we can write ¬Q ∨ ¬C, (90) where ∨ denotes the logical "or" operation. From Eqs. (89) or (90), it is clear that if we accept that statement Q is true, statement C must be false if we do not want to run into a contradiction. We now apply the logic of the reasoning used by EPR to the reasoning used by Bell. First, we introduce the symbol E: 3. E is true is equivalent to the statement that quantum theory obeys Einstein's criterion of local causality (the precise meaning of this criterion is irrelevant for the logic of reasoning). Bell's extension of Einstein's criterion for a locally causal theory to probabilistic theories can be formalized as follows: 4. B is true if and only if Einstein's criterion of local causality is equivalent to the statement that if a variable b has no causal effect on the variable A then, in a probabilistic theory, P(A|bZ) = P(A|Z) must hold. Assuming B is true, Bell derives inequalities that are violated by quantum theory. In formal language, Bell has shown that Q ∧ B ∧ E ⇒ ¬(Q ∧ B ∧ E), (91) which is a logical contradiction. Assuming that quantum theory gives a correct description of experimental data, Q is true. Then, from Eq. (91), it follows that 1) B is false, or 2) E is false, or 3) both B and E are false. Bell apparently excluded the possibility that his probabilistic interpretation of Einstein's criterion of local causality was wrong, hence he drew the conclusion that quantum theory is E-nonlocal. However, Bell's conclusion that quantum theory is E-nonlocal has been drawn on the basis of a logically incorrect argument: B-locality implicitly assumes that the absence of a causal influence implies logical independence [36] but, in probability theory, it is well known that the assumption that the absence of a causal influence implies logical independence leads to logical inconsistencies [36,40]. Hence, either B is false or the mathematical framework of probability theory is logically inconsistent. Excluding the hypothesis that probability theory is logically inconsistent, it follows that B is false, but we cannot rule out that E is false also. However, Bell's general conclusion that an E-local, causal theory cannot be a candidate for a more complete theory than quantum theory is based on the wrong assumption that B is true. B-locality only looks deceptively similar to E-locality but is fundamentally different.
Thus, we are left with three options: (1) we adopt Bell's definition of locality, keep insisting that causal independence implies logical independence, and learn to live with the fact that it leads to absurd conclusions such as an urn with two balls being "nonlocal"; (2) we change the rules of probability theory [67]; or (3) we keep using probability theory as it is and reject Bell's definition of locality as a logically consistent extension of Einstein's notion of locality to the domain of probabilistic theories. We do not believe that option (1) is worth considering any further, nor that option (2) is a viable one, in particular because quantum theory, being a very successful theory, requires the established mathematical apparatus of probability theory to make contact with experimental data.

D. Alice on Earth and Bob on Mars

For a logically local algorithm, such as the one described in Sec. V, the condition that the two observation stations must be spatially separated is irrelevant. To see this, imagine the following scenario. We ask Bob to choose a set of directions $a_{n,2}$ as he likes, and we also ask him to keep this set secret. Then we send Bob to Mars. After Bob has arrived on Mars, we (still on Earth) prepare data sets $\{S_{n,1}\,|\,n = 1,\ldots,N\}$ and $\{S_{n,2} = -S_{n,1}\,|\,n = 1,\ldots,N\}$ for Case I and send the second set by a radio link to an observation station 2 that is located on Mars. Once these data have been sent (which takes a few seconds at most), the link is destroyed. Then, we give the first data set to Alice, who is in charge of station 1 on planet Earth. She processes her data for some set of directions $a_{n,1}$ that she may choose as she likes and obtains the data set $\Upsilon_1$. This also takes a few seconds. It takes at least five minutes before Bob, who controls station 2 on Mars, starts to receive the data. Bob processes these data, using a set of directions $a_{n,2}$ he chose before leaving for the mission to Mars and which he kept secret all the time, and obtains the data set $\Upsilon_2$. Then, Bob activates a radio link and sends the data set $\Upsilon_2$ to Alice (or a third person). Alice analyzes the data $\{\Upsilon_1, \Upsilon_2\}$, computes the correlations according to the procedure outlined in Sec. V, and draws the inescapable conclusion that the data exhibit "quantum correlations". If we assume that Alice and Bob never had the chance to communicate with each other, there is no way, other than by telepathy, that Bob could have influenced Alice's choice of $a_{n,1}$. Alice, not aware of the existence of Bob before Bob arrived on Mars, could not influence Bob's choice of $a_{n,2}$ either. In this hypothetical procedure, at the time that the data analysis was carried out, the two systems were spatially and temporally separated, and there is no physical mechanism known to man by which Bob could have influenced Alice's choice. There is no point in sending Bob to Mars: if Bob had analyzed the data $\{S_{n,2}\}$ on Earth, the data $\{\Upsilon_1, \Upsilon_2\}$ would be exactly the same, and so would be the conclusion that the data exhibit "quantum correlations". This thought experiment (which can in fact be realized) is just another illustration that correlations express logical but not necessarily physical dependencies.
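As a concrete, entirely hypothetical illustration of this data flow, the sketch below implements the pipeline event by event. The outcome rule is the deterministic Stern-Gerlach model analyzed in Sec. VII; the time-tag distribution, uniform on $[0, T_0(1-(\mathbf a\cdot\mathbf S)^2)^{d/2}]$, is our assumption standing in for the time-tag model of Sec. V, so the numerical agreement should be read as a sketch under that assumption rather than as the paper's result. Each station uses only its own setting and its own particle data; the two data sets are combined only at the end, when coincidences are counted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, W, T0, d = 2_000_000, 0.005, 1.0, 3       # d = 3: the deterministic-model case

def unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def station(S, a):
    """Local processing: outcome and time-tag depend only on (S, a) at this station."""
    c = S @ a
    x = np.where(c >= 0, 1, -1)              # deterministic Stern-Gerlach model
    # Assumed time-tag model: delay uniform in [0, T0*(1 - c^2)^(d/2)]
    t = rng.uniform(size=len(S)) * T0 * (1 - c**2) ** (d / 2)
    return x, t

S1 = unit_vectors(N)                          # Case I: antiparallel pairs
S2 = -S1
theta = np.pi / 3
a1 = np.array([1.0, 0.0, 0.0])                # Alice's setting (Earth)
a2 = np.array([np.cos(theta), np.sin(theta), 0.0])  # Bob's secret setting (Mars)

x1, t1 = station(S1, a1)                      # Alice produces Upsilon_1
x2, t2 = station(S2, a2)                      # Bob produces Upsilon_2, independently

coincidence = np.abs(t1 - t2) < W             # applied only after both sets are in hand
E = np.mean(x1[coincidence] * x2[coincidence])
print(E, -np.cos(theta))                      # close for small W under the assumed model
```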
E. Summary

In an attempt to extend Einstein's concept of a locally causal theory to probabilistic theories, Bell implicitly assumed that the absence of causal influence implies logical independence. In general, this assumption prohibits the consistent application of probability theory and leads to all kinds of paradoxes [37,40]. However, if we limit our thinking to the domain of quantum physics, the violation of the Bell inequalities by experimental data should be taken as a strong signal that it is the correctness of this assumption that one should question. Instead of calling quantum mechanics (or an urn containing two balls) a nonlocal theory, it would be more appropriate to reject the assumption that the absence of causal influence implies logical independence. This step is difficult to take unless one recognizes that probabilities are not defined by frequencies: much of the recent controversy about the correctness and/or applicability of Bell's theorem [18,19,42,43,44,45,46,68,69] can be traced back to the failure to keep apart the concept of the frequency of events and the concept of the probability to observe this frequency [37,40]. Most importantly, it is simply logically incorrect to use probability theory to even make a statement about the (non)existence of correlations in a set of experimental data. At most, we can conclude that a probabilistic model is compatible with the data, in which case we have made a significant step in describing the process that gave rise to the data.

The simulation models that we describe in this paper do not rely on concepts of probability theory: they are purely ontological models of the EPRB experiment. The expression for the coincidences, Eq. (3), cannot be written in terms of a product of two single-particle probabilities, an essential feature of the restricted class of local models examined by Bell [14]. Hence, the fact that we have discovered event-by-event simulation algorithms that (1) generate the same type of data as recorded in the experiments, (2) analyze data according to the experimental procedure to count coincidences, (3) satisfy Einstein's criteria of local causality, and (4) do not rely on any concept of quantum theory or probability theory, but nevertheless reproduce the two-particle correlation of the singlet state and all other properties of a quantum system consisting of two S = 1/2 particles, can never be in conflict with a theorem that has its roots in probability theory.

VII. PROBABILISTIC MODEL OF THE SIMULATION ALGORITHM

In this section, we use the probabilistic (Kolmogorov) approach to analyze the simulation model that we described in Section V. This section serves three purposes. First, it provides a rigorous proof that, to first order in W, the probabilistic description of the simulation model can exactly reproduce the results of quantum theory for a system of two S = 1/2 objects. Second, it illustrates how the presence of the time window introduces correlations that cannot be described by a Bell-like "hidden-variable" model. Third, it reveals a few hidden assumptions that are implicit in the derivation of the specific, factorized form of the two-particle correlation that is essential to Bell's work. The first, fundamental step is to assume that the simulation algorithm can be replaced by an abstract mathematical model in which the quadruple $\{x_1, x_2, t_1, t_2\}$ is a random variable and that the data occur with probability $P(x_1, x_2, t_1, t_2|a_1, a_2)$. We then use the standard rules of probability theory to write this probability such that it can be evaluated analytically.
Using the product rule (see Eq. (76)), we may always express the probability for observing the data $\{x_1, x_2, t_1, t_2\}$ as a sum over mutually exclusive events. Thus, we may write

$$P(x_1,x_2,t_1,t_2|a_1,a_2) = \frac{1}{(4\pi)^2}\int dS_1\,dS_2\; P(x_1,x_2,t_1,t_2|a_1,a_2,S_1,S_2)\,P(S_1,S_2|a_1,a_2), \qquad (92)$$

where $S_1$ and $S_2$ denote the three-dimensional unit vectors representing the spins of the particles. Representation Eq. (92) is an exact expression for $P(x_1,x_2,t_1,t_2|a_1,a_2)$. In the simulation model, $\{x_1,x_2,t_1,t_2\}$ are mutually independent and $\{x_1,t_1\}$ ($\{x_2,t_2\}$) do not depend on $\{a_2,S_2\}$ ($\{a_1,S_1\}$). This suggests that it is reasonable to assume that $\{x_1,x_2,t_1,t_2\}$ are mutually independent random variables and that $\{x_1,t_1\}$ ($\{x_2,t_2\}$) are logically independent of $\{a_2,S_2\}$ ($\{a_1,S_1\}$). Then, we have

$$P(x_1,x_2,t_1,t_2|a_1,a_2) = \frac{1}{(4\pi)^2}\int dS_1\,dS_2\; P(x_1,t_1|x_2,t_2,a_1,a_2,S_1,S_2)\,P(x_2,t_2|a_1,a_2,S_1,S_2)\,P(S_1,S_2|a_1,a_2)$$
$$= \frac{1}{(4\pi)^2}\int dS_1\,dS_2\; P(x_1,t_1|a_1,S_1)\,P(x_2,t_2|a_2,S_2)\,P(S_1,S_2), \qquad (93)$$

where, in the last step, we assumed that $S_1$ and $S_2$ are logically independent of $a_1$ and $a_2$, which is reasonable because in the simulation algorithm $S_1$ and $S_2$ are independent of $a_1$ and $a_2$. Note that Eq. (93) gives the exact probabilistic description of our simulation model. The reader may wonder why in the present case it is allowed to go from Eq. (92) to Eq. (93), while in Section VI we demonstrated that making these steps may lead to logical inconsistencies. The difference lies in the fact that in Section VI we use probability theory to make inferences about logical dependencies, whereas in the present case we know for certain (by assumption) which variables are logically dependent on others and which variables are not. Thus, in the present case it is mathematically correct to describe our simulation model by the probability Eq. (93). However, if we analyze data for logical dependencies, it is logically inconsistent to draw conclusions from an analysis based on Eq. (93). In essence, we are repeating ourselves: we can cross the line in Fig. 1, separating model space from data space, from right to left because we know the properties of our simulation model, but crossing the line in the opposite direction is impossible without making additional assumptions.

Up to this point, Eq. (93) has the same structure as the expression that is used in the derivation of Bell's results, and if we were to proceed in the same way, our model also could not produce the correlation of the singlet state. However, the real factual situation in the experiment is different: the events are selected using a time window W that the experimenters try to make as small as possible [56]. Accounting for the time window, that is, multiplying Eq. (93) by the step function $\Theta(W - |t_1 - t_2|)$ and integrating over all $t_1$ and $t_2$, the expression for the probability for observing the event $(x_1, x_2)$ reads

$$P(x_1,x_2|a_1,a_2) = \frac{\int dS_1\,dS_2\; P(x_1|a_1,S_1)P(x_2|a_2,S_2)\,w(a_1,a_2,S_1,S_2,W)\,P(S_1,S_2)}{\sum_{x_1,x_2=\pm1}\int dS_1\,dS_2\; P(x_1|a_1,S_1)P(x_2|a_2,S_2)\,w(a_1,a_2,S_1,S_2,W)\,P(S_1,S_2)}$$
$$= \frac{\int dS_1\,dS_2\; P(x_1|a_1,S_1)P(x_2|a_2,S_2)\,w(a_1,a_2,S_1,S_2,W)\,P(S_1,S_2)}{\int dS_1\,dS_2\; w(a_1,a_2,S_1,S_2,W)\,P(S_1,S_2)}, \qquad (94)$$

where the weight function $w(a_1,a_2,S_1,S_2,W) = \int dt_1\,dt_2\,\Theta(W-|t_1-t_2|)\,P(t_1|a_1,S_1)P(t_2|a_2,S_2)$ will, in general, be less than one (because $\int_{-\infty}^{+\infty} dt_1 \int_{-\infty}^{+\infty} dt_2\, P(t_1|a_1,S_1)P(t_2|a_2,S_2) = 1$) unless W is larger than the range of $(t_1,t_2)$ for which $P(t_1|a_1,S_1)$ and $P(t_2|a_2,S_2)$ are nonzero.
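For intuition about w, the sketch below (ours) estimates it by Monte Carlo under the same assumed uniform time-tag model as in the earlier pipeline sketch, with local width $\Lambda_i = T_0(1-(\mathbf a_i\cdot\mathbf S_i)^2)^{d/2}$. A cross-ratio test then shows that w is not of the product form $w_1(a_1,S_1,W)\,w_2(a_2,S_2,W)$:

```python
import numpy as np

rng = np.random.default_rng(1)
T0, W, d, M = 1.0, 0.1, 3, 1_000_000

def width(a, S):
    """Local time-tag width Lambda = T0*(1 - (a.S)^2)^(d/2) (our assumed model)."""
    return T0 * (1.0 - (a @ S) ** 2) ** (d / 2)

def w_estimate(a1, a2, S1, S2):
    """Monte Carlo estimate of w = Pr(|t1 - t2| < W) for uniform time-tags."""
    t1 = rng.uniform(0.0, width(a1, S1), M)
    t2 = rng.uniform(0.0, width(a2, S2), M)
    return np.mean(np.abs(t1 - t2) < W)

S1 = np.array([0.0, 1.0, 0.0]); S2 = -S1
aA = np.array([1.0, 0.0, 0.0])                # width 1.0 at both stations
aB = np.array([np.sqrt(3) / 2, 0.5, 0.0])     # smaller width (|a.S| = 0.5)

# If w factorized as w1(a1,S1,W)*w2(a2,S2,W), the two products would be equal.
p1 = w_estimate(aA, aA, S1, S2) * w_estimate(aB, aB, S1, S2)
p2 = w_estimate(aA, aB, S1, S2) * w_estimate(aB, aA, S1, S2)
print(p1, p2)   # clearly different: w does not factorize
```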
Unless $w(a_1,a_2,S_1,S_2,W) = w_1(a_1,S_1,W)\,w_2(a_2,S_2,W)$, Eq. (94) cannot be written in the factorized form $P(x_1,x_2|\alpha,\beta) = \int P(x_1|\alpha,\lambda)P(x_2|\beta,\lambda)\rho(\lambda)\,d\lambda$ that is essential to derive the Bell inequalities (see Section VI). In the light of the discussion in Sections I and IV C, it is not without importance to note that Eq. (94) can be written down directly (as we did in Section V B 3), without reference to concepts of probability theory. Indeed, it suffices to replace the sums over the pseudo-random numbers by discrete sums over equally spaced intervals and let these intervals go to zero. Then the total number of events goes to infinity and we recover Eq. (94), except that the P's that appear in Eq. (94) do not have the meaning of probabilities. Again, we see that the use of probabilistic models requires additional assumptions, the correctness of which can be established a posteriori only.

First, let us consider Case II, that is, we assume that the source emits pairs of particles with fixed, known directions $\bar S_1$ and $\bar S_2$. Then $P(S_1,S_2) = \delta(S_1 - \bar S_1)\,\delta(S_2 - \bar S_2)$, the weight function $w(a_1,a_2,S_1,S_2,W)$ drops out, and Eq. (94) reduces to

$$P(x_1,x_2|a_1,a_2) = P(x_1|a_1,\bar S_1)\,P(x_2|a_2,\bar S_2),$$

which agrees with the expression for the quantum system of two S = 1/2 particles in the product state. Second, we put $P(S_1,S_2) = \delta(S_1 + S_2)$. Then $S_1 = -S_2$ is a random variable that covers the unit sphere in a uniform manner, that is, we are treating Case I. In our simulation model, the time delays $t_i$ are distributed uniformly over an interval whose width depends on the local variables $a_i$ and $S_i$ and on the parameter d (Eq. (97)), where we added the parameter d to the list of variables to make explicit that we adopted the time-tag model that we employ in the simulation. The integrals in Eq. (97) can be worked out analytically, yielding Eq. (98). Clearly, Eq. (98) cannot be written in the factorized form $w_1(a_1,S_1,W)\,w_2(a_2,S_2,W)$. Hence, it should not come as a surprise that as soon as we want to model the real experiment, in which the time window is essential, we may obtain correlations that cannot be described by Bell-like models.

We now consider the relevant limiting cases, for which we can easily derive closed-form expressions for the expectation values. From Eq. (98) it follows that the weight function reduces to a constant in the limits d = 0 and $W \ge T_0$ (Eqs. (99) and (100)). If the weight function is a constant, as in Eqs. (99) and (100), Eq. (94) reduces to

$$P(x_1,x_2|a_1,a_2) = \frac{1}{(4\pi)^2}\int dS_1\,dS_2\; P(x_1|a_1,S_1)\,P(x_2|a_2,S_2)\,P(S_1,S_2),$$

and takes the factorized form that is characteristic of the probabilistic models considered by Bell [14]. Hence, we know that we cannot recover the results of quantum theory in the limiting cases d = 0 or $W \ge T_0$, in which the time-tag information plays no role. From now on, we focus on the experimentally relevant case of small W, that is, we neglect contributions of $O(W^2)$. We insert in Eq. (94) the probability distributions $P(x|a,S) = \Theta(x\,a\cdot S)$ or $P(x|a,S) = (1 + x\,a\cdot S)/2$, corresponding to the deterministic and pseudo-random models for the Stern-Gerlach magnet, respectively. By symmetry, we have $E_1(a_1,a_2,W\to0) = E_2(a_1,a_2,W\to0) = 0$ for all values of d, in agreement with quantum theory (see the second column of Table I). The two-particle correlations then follow for the deterministic and the pseudo-random model, respectively.
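Returning to the limiting cases: the failure to recover quantum theory when the time-tag information plays no role is easy to see numerically. If every event is counted (w constant), the deterministic model reduces to the classical average below (our check), which gives the linear-in-θ correlation $-1 + 2\theta/\pi$ instead of $-\cos\theta$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
S = rng.normal(size=(N, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # uniform spins; Case I pairs S, -S

theta = np.pi / 3
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([np.cos(theta), np.sin(theta), 0.0])

x1 = np.where(S @ a1 >= 0, 1, -1)     # deterministic model, particle 1 (spin S)
x2 = np.where(-(S @ a2) >= 0, 1, -1)  # particle 2 carries spin -S
print(np.mean(x1 * x2))               # ~ -1 + 2*theta/pi = -1/3 here
print(-np.cos(theta))                 # the singlet value -1/2 is not recovered
```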
Without loss of generality, we may choose the coordinate system such that $a_1 = (1,0,0)$ and $a_2 = (\cos\alpha, \sin\alpha, 0)$. Then, substitution of spherical coordinates for S yields Eq. (104), in which $g(\varphi,\theta,x) = \mathrm{sign}(\cos\varphi\,\cos(\varphi-\theta))$ or $g(\varphi,\theta,x) = (1-x^2)\cos\varphi\,\cos(\varphi-\theta)$ for the deterministic or pseudo-random model of the Stern-Gerlach magnet, respectively. Here and in the remainder of this section, we define $\cos\theta \equiv a_1\cdot a_2$. For specific values of d, Eq. (104) can be written in terms of elementary functions. In the case of the deterministic model for the Stern-Gerlach magnet we find, for instance,

$$E(a_1,a_2,W\to0) = -\frac{6890\cos\theta - 895\cos 3\theta + 149\cos 5\theta}{5774 + 280\cos 2\theta + 90\cos 4\theta}, \qquad d = 7, \qquad (105)$$

and expressions of the same form for the other values of d. In the case of the pseudo-random model for the Stern-Gerlach magnet, we obtain the analogous expressions, Eq. (106). All the d > 0 results in Eqs. (105) and (106) violate the Bell inequalities but, as we already explained, this finding has no significant consequences. From Eq. (105), we conclude that for d = 3 and the deterministic model of the Stern-Gerlach magnet, the expression is identical to the correlation of a system of two S = 1/2 particles in the singlet state. The result for the pseudo-random model of the Stern-Gerlach magnet and d = 7 (see Eq. (106)) is very close (with a maximum error of less than 2%) to the singlet correlation. Of course, there is no fundamental reason why d should be an integer. Finally, we note the almost trivial fact that for W → 0, the results are insensitive to small variations in W, in agreement with the general idea, explored in the next section, that quantum theory is one out of the many probabilistic theories that have the special feature that their predictions are insensitive to small changes of the parameters of the model.

For completeness, we summarize the analytical results for the case of photon polarization. For the deterministic model of a polarizer (which does not reproduce Malus' law), the probabilistic treatment yields closed-form results of the same type [33]. In the case that we adopt the pseudo-random model for the polarizer, which can reproduce Malus' law, the probabilistic model yields the corresponding expressions $E(a_1,a_2,W\to0)$ for even d [35]; we have omitted the expressions for odd d because they cannot be written in terms of elementary functions. In passing, we note that the mathematically rigorous result for d = 4 disposes of the widespread belief [14] that perfect correlation of the singlet state requires some form of determinism.

VIII. DERIVATION OF THE QUANTUM THEORY OF THE EPRB EXPERIMENT

In the quantum theoretical model, the choice of the state that describes the EPRB experiment is an educated guess. There is no underlying principle that guides us to this choice other than that the particular averages (of time series) that we compute from the experimental data agree with the expectation values (ensemble averages) obtained from the theory. From the work of Cox and Jaynes in the early 60's, we know that once we have agreed to represent the degree of plausibility of a proposition by a real number, then there is a unique set of rules, identical to the standard rules of probability theory, that we must adhere to in order that the logical inferences we make do not violate elementary desiderata of rationality and consistency [37,40,47]. In this case, the rules of probability theory are used as a vehicle for carrying out probable inference [37,40,47] and have a much broader range of applications than the Kolmogorov theory of probability. The latter is incorporated in the former. As mentioned earlier, and as is most evident in Section VI, we mainly use probability theory as a vehicle to make statements about propositions, that is, we use it in its extended-logic mode.
An intriguing question now arises: would it be possible to derive the quantum theoretical description of the EPRB experiment from the general principles of logical inference and empirical knowledge about the results of the experiment, not involving concepts from quantum theory at all? Elsewhere, we have shown that Malus' law can be derived in this manner [66]. In this section, we show that the same approach yields the probability distributions Eqs. (20) and (25) without making the detour via quantum theory. The approach that we take here is very much inspired by the work of Frieden [70]. Frieden has shown that one can recover all the fundamental equations of physics by finding the extremes of the Fisher information plus the "bound" information [70]. According to Frieden, the act of measurement elicits a physical law, and quantum mechanics appears as the result of what Frieden calls "a smart measurement", a measurement that tries to make the best estimate [70]. Although our approach is similar to Frieden's, our line of reasoning is different. We do not invoke concepts from estimation theory, such as estimators and the Cramér-Rao inequality, nor do we require the concept of random noise. The probabilistic model that we will develop is based on the following four hypotheses:

1. Each detection event constitutes a Bernoulli trial, that is, we assume that the events are logically independent [37,38,40]. Note that the absence of statistical correlation in the data recorded in an experiment is an indication, but definitely not a proof, that the events are logically independent [37,38,40]. On the other hand, if the data were to exhibit correlations, the events would be logically dependent [37,38,40].

2. The time series recorded during an experiment suggest that the averages of the data are rotationally invariant. This observation we formalize by making the hypothesis that the expectation values (not necessarily the probabilities) are invariant under rotations of the conditions under which the experiments are carried out.

3. The averages computed from the observed time series are robust, that is, they are least sensitive to small variations of the conditions under which the experiment is carried out.

4. The time series that we observe is the one which is most likely to be observed, that is, its probability is maximum.

In the remainder of this section, we will simplify the notation a little by omitting from the conditions that appear in the probabilities the proposition that expresses the knowledge about the problem that we do not need to specify explicitly. We begin by demonstrating that these four assumptions suffice to derive the probability distribution of a single Stern-Gerlach magnet. Then, using the same four assumptions, we derive the probability distribution for the EPRB experiment.

A. Stern-Gerlach magnet

We consider the case that the direction a of the applied field in the Stern-Gerlach magnet and the magnetic moment S of the particles do not change with time. The measuring apparatus (Stern-Gerlach magnet + particle detector) transforms the input, N particles with magnetic moment S, into a time series $\{x_n\,|\,n = 1,\ldots,N\}$ of signals $x_n = \pm1$. By hypothesis (1), the probability $P(x_1,\ldots,x_N|a,S,N)$ to observe the data record $\{x_n\,|\,n = 1,\ldots,N\}$ can be written as $\prod_{n=1}^{N} P(x_n|a,S)$. As $x = \pm1$, $P(x|a,S)$ is completely determined by its first moment, that is, we can write $P(x|a,S) = \big(1 + x\,E(a,S)\big)/2$, where $E(a,S) = \sum_{x=\pm1} x\,P(x|a,S)$. By hypothesis (2), $E(a,S) = E(a\cdot S)$, and hence the probability for a single event x is given by $P(x|a,S) = \big(1 + x\,E(a\cdot S)\big)/2 \equiv p(x|\theta)$ and is conditional on the relative angle θ between the magnetic moment S of the particle and the direction a of the applied field.
Denoting $p(\theta) = P(x = +1|\theta)$, the probability for observing a time series $\{x_n\,|\,n = 1,\ldots,N\}$ in which m of the events $x_n$ take the value +1, that is $\sum_{n=1}^{N} x_n = 2m - N$, is given by [37,38,40]

$$P(m|\theta,N) = \binom{N}{m}\,p(\theta)^m\,\big(1-p(\theta)\big)^{N-m}.$$

We now consider the likelihood that the observed sequence of $\{x_n\}$ was generated by $p(x|\theta+\epsilon)$ instead of $p(x|\theta)$, ε being a small positive number. The log-likelihood L that the data was generated by $p(x|\theta+\epsilon)$ instead of by $p(x|\theta)$ is given by [37,40]

$$L = \frac{1}{N}\ln\frac{P(x_1,\ldots,x_N|\theta+\epsilon,N)}{P(x_1,\ldots,x_N|\theta,N)} = \sum_{x=\pm1}\frac{n(x)}{N}\ln\frac{p(x|\theta+\epsilon)}{p(x|\theta)},$$

where $n(+1) = m$ and $n(-1) = N - m$. According to hypothesis (3), the variation of L with ε should be minimal. Then, the results (averages over the time series) will be least sensitive to small variations of the conditions under which the experiment is carried out. We bring the problem of determining the function p(θ) into a mathematically tractable form by using the Taylor expansion with respect to ε, which yields Eq. (115). Invoking hypothesis (4), m is the value that maximizes $P(x_1,\ldots,x_N|a,S,N)$. A simple calculation (see Section IV C 1) shows that the maximum is attained for $m \approx Np(\theta)$. Hence, for large N we may set $m/N = p(\theta)$ in Eq. (115), and then the second term of the right-hand side vanishes. Then, L will be least sensitive to changes in ε if

$$I_F = \frac{1}{p(\theta)\big(1-p(\theta)\big)}\left(\frac{\partial p(\theta)}{\partial\theta}\right)^2 \qquad (117)$$

is minimal. The quantity $I_F$ is the Fisher information [37,70,71] for this particular problem. Hypothesis (1) was used to obtain the right-hand side of Eq. (118), which upon substitution of $p(x = +1|\theta) = p(\theta)$ and $p(x = -1|\theta) = 1 - p(\theta)$ turns into Eq. (117). We find the minimum of the Fisher information $I_F$ by substituting $p(\theta) = \cos^2 g(\theta)$ and obtain

$$I_F = 4\left(\frac{dg(\theta)}{d\theta}\right)^2.$$

Rotational invariance requires that $I_F$ is independent of θ, hence $g(\theta) = a\theta + b$, where a and b are constants still to be determined. Rotational invariance further requires that $p(\theta) = \cos^2(a\theta + b) = p(\theta + 2\pi)$, hence $a = k/2$ and $I_F = k^2$, with k an integer number. We may exclude the case k = 0 because then p(θ) does not depend on θ, and a Stern-Gerlach magnet that operates according to this k = 0 model would not be a useful device. Thus, $I_F$ is minimal if k = 1, and we may set the irrelevant phase factor b to zero. Therefore, using the four hypotheses given earlier, we have found that the probabilistic model for the Stern-Gerlach magnet generates events with probabilities

$$P(x = +1|a,S) = \cos^2\frac{\theta}{2} = \frac{1 + a\cdot S}{2},$$

which is in exact agreement with Eq. (109).
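As a quick numerical confirmation (ours) that $p(\theta) = \cos^2(\theta/2)$ renders the Fisher information Eq. (117) constant in θ and equal to its minimum value $I_F = 1$:

```python
import numpy as np

theta = np.linspace(0.01, np.pi - 0.01, 1000)  # avoid endpoints where p(1-p) = 0
p = np.cos(theta / 2) ** 2
dp = np.gradient(p, theta)                     # numerical derivative dp/dtheta

I_F = dp ** 2 / (p * (1 - p))                  # Eq. (117) for a Bernoulli trial
print(I_F.min(), I_F.max())                    # both ~1.0, independent of theta
```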
B. EPRB gedanken experiment

We consider the case that the directions $a_1$ and $a_2$ of the applied fields in the Stern-Gerlach magnets do not change with time (as in the quantum model) and that we have no knowledge about the direction of the magnetic moments $S_1$ and $S_2$ of the particles. Thus, the probability $p(x,y|\theta)$ for a single event $(x,y)$ is conditional on the relative angle θ between the two unit vectors $a_1$ and $a_2$. The probability that an experiment of N events yields $n(x,y)$ events of the type $(x,y)$ is given by

$$P\big(n(+1,+1), n(-1,-1), n(+1,-1), n(-1,+1)|\theta,N\big) = N!\prod_{x,y=\pm1}\frac{p(x,y|\theta)^{n(x,y)}}{n(x,y)!},$$

where $n(+1,+1) + n(-1,-1) + n(+1,-1) + n(-1,+1) = N$. Adopting the same strategy as in the case of the single Stern-Gerlach magnet, we consider the log-likelihood

$$L_N = \frac{1}{N}\ln\frac{P\big(n(+1,+1), n(-1,-1), n(+1,-1), n(-1,+1)|\theta+\epsilon,N\big)}{P\big(n(+1,+1), n(-1,-1), n(+1,-1), n(-1,+1)|\theta,N\big)} = \sum_{x,y=\pm1}\frac{n(x,y)}{N}\ln\frac{p(x,y|\theta+\epsilon)}{p(x,y|\theta)}$$

that the data was generated by $p(x,y|\theta+\epsilon)$ instead of $p(x,y|\theta)$. Repeating the steps that lead from Eq. (114) to Eq. (117), we find that for small ε, minimization of L is tantamount to finding the probability $p(x,y|\theta)$ that minimizes the Fisher information

$$I_F = \sum_{x,y=\pm1}\frac{1}{p(x,y|\theta)}\left(\frac{\partial p(x,y|\theta)}{\partial\theta}\right)^2. \qquad (126)$$

Using Eq. (123), we can write Eq. (126) as

$$I_F = \frac{1}{1-E^2(\theta)}\left(\frac{dE(\theta)}{d\theta}\right)^2, \qquad (127)$$

which, in essence, is the same expression as the one that we obtained for the case of the Stern-Gerlach magnet. Of course, the solution of the minimization problem is also the same. Solving Eq. (127) for E(θ), we find

$$E(\theta) = \cos\left(\sqrt{I_F}\,\theta + b\right). \qquad (128)$$

In the case that one uses the magnetic moment of the particles, the experimental data indicate that E(θ) is periodic in θ with a period of 2π (π if the experiment measures the polarization, as in EPRB experiments with photons). This implies that $\sqrt{I_F}$ should be an integer number. The solution $I_F = 0$ can be discarded because then E(θ) would not depend on θ, which would contradict the experimental observations. Therefore, the nontrivial solution with minimum Fisher information is $I_F = 1$. Using the fact that the solution of the minimization problem is determined up to an arbitrary phase b, the two-particle correlation can be written as

$$E(a_1,a_2) = -\cos\theta = -a_1\cdot a_2,$$

in agreement with the expression of the correlation of two S = 1/2 particles in the singlet state. Thus, we may conclude that we can derive the results of quantum theory for the singlet state from a straightforward application of probability theory, without making reference to concepts of quantum theory.

C. Real EPRB experiment

As explained in Section II, real EPRB experiments produce the data sets $\Upsilon_i = \{x_{n,i} = \pm1, t_{n,i}, a_{n,i}\,|\,n = 1,\ldots,N\}$. At this point, we feel that we lack the necessary mathematical tools for carrying out the procedure that we successfully applied to the simpler cases treated earlier. First, it is difficult to see how the empirical knowledge that single-particle averages are zero and that the two-particle average is rotationally invariant leads to useful conditions on the form of the $f_i(t_1,t_2|a_1,a_2)$. Second, the presence in Eq. (133) of the step functions introduces nontrivial correlations and prevents us from making further progress in the mathematical treatment of this problem. Third, the description now contains a new parameter (W, to which we should also apply hypothesis (3)) as well as extra variables ($t_1$ and $t_2$). We leave the problem of the analytical treatment of the general case for future research.

D. Summary

The assumption that there is an underlying probabilistic process that gives rise to the observation of the data as obtained in Stern-Gerlach and EPRB experiments, together with the very simple, plausible hypotheses (1)-(4), is sufficient to derive the probability distributions of quantum theory for the EPRB experiment, without using a single concept of quantum theory. In addition, this derivation suggests that quantum theory is the probabilistic model for the set of data that is most likely to be observed. From a more general perspective, this section demonstrates, by way of a successful application to specific problems, how to formalize the process of inductive inference and derive useful results (those of quantum theory) from it. This derivation builds on prior, empirical knowledge that we have acquired through experiments, the application of probability theory as a mathematical vehicle for rational reasoning, and the metaphysical principle that we, human observers, have great difficulty interpreting experimental data that are not robust with respect to small changes in the conditions under which the experiments are carried out [72].
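Closing the loop on this derivation numerically (our check): with $I_F = 1$ and phase $b = \pi$, Eq. (128) gives $E(\theta) = -\cos\theta$; together with vanishing single-particle averages, this corresponds to the probabilities $p(x,y|\theta) = (1 - xy\cos\theta)/4$, which the snippet below verifies:

```python
import numpy as np

theta = np.pi / 3
p = {(x, y): (1 - x * y * np.cos(theta)) / 4 for x in (1, -1) for y in (1, -1)}

E1 = sum(x * p[x, y] for x in (1, -1) for y in (1, -1))    # single-particle average
E2 = sum(y * p[x, y] for x in (1, -1) for y in (1, -1))
E12 = sum(x * y * p[x, y] for x in (1, -1) for y in (1, -1))
print(E1, E2, E12, -np.cos(theta))   # 0.0, 0.0, and E12 equals -cos(theta)
```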
IX. CONCLUSION

Starting from nothing more than the observation that an EPRB experiment produces pairs of triples of data $\{\Upsilon_1, \Upsilon_2\}$, we have constructed computer simulation models that reproduce the results of all single-particle and two-particle correlations of a quantum system of two S = 1/2 particles. Salient features of these models are that they

• Generate, event-by-event, the same kind of data set $\{\Upsilon_1, \Upsilon_2\}$ as the one recorded in real EPRB experiments

• Satisfy Einstein's criteria of local causality

• Count all events in which systems of two particles have been detected, using the same time-coincidence criterion as used in real EPRB experiments

• Provide a simple, rational and realistic picture of a mechanism that yields the correlations of an "entangled state"

• Do not rely on any concept of quantum theory or probability theory

A key ingredient of these models, not present in the textbook treatments of the EPRB gedanken experiment, is the time window W that is used to detect coincidences. We have demonstrated (see Section II G) the importance of the choice of the time window by analyzing a data set of a real EPRB experiment with photons [32]. The mathematical treatment of the models yields results that are in exact agreement with quantum theory. The condition under which an EPRB experiment yields results that agree with quantum theory is evident: the resolution τ of the devices that generate the time-tags and the time window W should be much smaller than the time delays, the range of which is determined by $T_0$. Disregarding the timing data yields a result that disagrees with quantum theory and with experiment. The EPR paradox reappears when the experiments are analyzed in terms of an incomplete set of data.

We have demonstrated that the event-by-event simulation of EPRB experiments allows us not only to reproduce the results of quantum theory but also to consider cases that are not described by quantum theory. Therefore, for this type of experiment, the two-particle "world" that we can simulate contains the two-particle "world" described by quantum theory as a special case. As our work shows that it is possible to construct event-based simulation models that satisfy Einstein's criteria of local causality and reproduce the expectation values calculated by quantum theory, it opens new routes to ontological descriptions of microscopic phenomena [8,9,11,13,21,33,34,35]. We have resolved the apparent conflict between the fact that there exist event-based simulation models that satisfy Einstein's criteria of local causality and reproduce the results of the quantum theory of two S = 1/2 particles and the folklore about Bell's theorem, stating that such models are not supposed to exist. The origin of this conflict has been traced back to Bell's extension of Einstein's concept of locality to the domain of probabilistic theories, the fundamental assumption being that the absence of a causal influence implies logical independence [36].
This leaves two options:

• One accepts the assumption that the absence of a causal influence implies logical independence and lives with the logical paradoxes that this assumption creates

• One recognizes that logical independence and the absence of a causal influence are different concepts [37,40,47] and one searches for rational explanations of experimental facts that are logically consistent, as we did in this paper

Finally, we have demonstrated that it is possible to derive, without resorting to concepts of quantum theory, the quantum theoretical description of the EPRB experiment from the general principles of logical inference developed by Cox and Jaynes [37,40,47] and empirical knowledge about the results of the experiment. The computer models we have invented can be built with macroscopic, say mechanical, parts (in principle a digital computer can be built from mechanical parts). To the experimenter who has no knowledge of what is going on inside the building where the mechanical machine is operating, there is no way of telling whether the data he/she receives is generated by a quantum system or not. In a sense, this supports Bohr's point of view that "There is no quantum world. There is only an abstract quantum theoretical description" [73].
2007-12-25T17:54:16.000Z
2007-08-01T00:00:00.000
{ "year": 2007, "sha1": "3d36aee1fb1038fabe85fa30260c20dba63b0943", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0712.3781", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3d36aee1fb1038fabe85fa30260c20dba63b0943", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
1710875
pes2o/s2orc
v3-fos-license
Self-management of coronary heart disease in older patients after elective percutaneous transluminal coronary angioplasty Objective To explore how older patients self-manage their coronary heart disease (CHD) after undergoing elective percutaneous transluminal coronary angioplasty (PTCA). Methods This mixed methods study used a sequential, explanatory design and recruited a convenience sample of patients (n = 93) approximately three months after elective PTCA. The study was conducted in two phases. Quantitative data collected in Phase 1 by means of a self-administered survey were subject to univariate and bivariate analysis. Phase 1 findings informed the purposive sampling for Phase 2 where ten participants were selected from the original sample for an in-depth interview. Qualitative data were analysed using thematic analysis. This paper will primarily report the findings from a sub-group of older participants (n = 47) classified as 65 years of age or older. Results 78.7% (n = 37) of participants indicated that they would manage recurring angina symptoms by taking glyceryl trinitrate and 34% (n = 16) thought that resting would help. Regardless of the duration or severity of the symptoms, 40.5% (n = 19) would call their general practitioner or an emergency ambulance for assistance during any recurrence of angina symptoms. Older participants weighed less (P = 0.02) and smoked less (P = 0.01) than their younger counterparts in the study. Age did not seem to affect PTCA patients' likelihood of altering dietary factors such as fruit, vegetable and saturated fat consumption (P = 0.237). Conclusions The findings suggest that older people in the study were less likely to know how to correctly manage any recurring angina symptoms than their younger counterparts but they had fewer risk factors for CHD. Age was not a factor that influenced participants' likelihood to alter lifestyle factors. Introduction Coronary heart disease (CHD) is considered to be a true pandemic, killing in the region of eight million people globally each year. [1] It is known to be the main cause of death in the Western world and is increasing in developing countries. [2,3] It is estimated that 85% of those who die as a consequence of coronary heart disease are aged 65 or older. [4] Although mortality rates are declining overall, the prevalence of the disease in the United Kingdom remains high and the commonest manifestation of coronary heart disease is stable angina. In the United Kingdom, there are more than four million people who suffer from angina and that equates to 5% of males and around 4% of females. [5] In the USA more than a fifth of people aged eighty and older suffer from stable angina. [6] The prevalence of stable angina increases with age. [7] In an attempt to control angina symptoms, patients can undergo two main types of elective coronary revascularisation: percutaneous transluminal coronary angioplasty (PTCA) and coronary artery bypass graft (CABG) surgery. [8] While there has been a gradual decline in the use of CABG surgery in the last decade in the United Kingdom, the number of PTCA procedures has grown substantially from around 10,000 in 1990 to almost 90,000 in 2010. [9] While PTCA should diminish patients' angina symptoms, it does not cure the underlying CHD and patients are expected to manage their condition and prevent progression of the disease through adherence to a medication regime and by reducing modifiable risk factors known to contribute to CHD.
[10] They would also be expected to identify and deal with any recurring angina symptoms. [11] It is known, however, that three quarters of PTCA patients will require further revascularisation within ten years for further symptom management. [12,13] Anecdotal evidence from clinical practice in the United Kingdom suggested that patients with stable angina may not manage their CHD effectively after they have PTCA for symptom relief. Evidence indicates that between 31% and 75% of patients regularly experience angina symptoms after PTCA, [14-17] and that a diverse range of methods are adopted to manage these symptoms, including taking no action at all and contacting healthcare professionals for support. One study found that older patients were less likely to experience angina symptoms after elective PTCA, [15] but due to limitations of the study design, the reason for that was not evident. It is unknown whether patients adhere to clinical guidelines, for example that when experiencing angina symptoms they should rest and use their glyceryl trinitrate spray and then, if the symptoms continue for 10-15 min, make a call to emergency services. [18] The rate of CHD risk factor modification after elective PTCA is poor and patients' risk of CHD progression remains high as a result of inactivity and obesity. [19-22] One study [19] indicated that patients made no changes to smoking or body mass index (BMI) after PTCA, whereas another made unsubstantiated claims that patients found it easier to make dietary alterations than change other modifiable risk factors. [23] Smoking continued to pose a risk to some patients after PTCA in two studies, [19,22] and some evidence indicates that the incidence of CHD patients who smoke is higher in Asia than Europe. [24] There is a paucity of evidence about patients' adherence to a medication regime as a strategy for secondary prevention of CHD post-PTCA. The focus of many studies is the rate of prescribing of certain medicines. Two studies that did explore adherence to medication found that almost all patients self-reported adherence to their medication after elective PTCA. [14,25] These studies were, however, limited by their design (non-experimental, cross-sectional study design) and sampling approach (both studies used a convenience sample). One study used a heterogeneous sample of patients who had undergone PTCA for both the management of stable angina symptoms and myocardial infarction and so did not comprehensively provide evidence of patients' adherence to medicines specifically after elective coronary revascularisation with PTCA. To date, there is little evidence of patients' adherence to secondary prevention of CHD medications and to clinical guidelines in relation to the self-management of angina symptoms after PTCA. Previous research suggests that CHD risk factor modification after elective PTCA may be poor but that some risk factors may be easier to change than others, and so there appears to be a knowledge gap. The purpose of this paper is to report the findings of a study that sought to explore this with a focus on older patients (those aged 65 years and older) who underwent PTCA for the relief of stable angina symptoms. We put forward the proposition that older patients, after undergoing elective PTCA for the management of stable angina symptoms, manage angina symptoms ineffectively and modify few CHD risk factors, but do adhere to a medication regime for secondary prevention of CHD.
Thus the following hypotheses were tested: H01: patients effectively self-manage angina symptoms vs. Ha1: patients do not effectively self-manage angina symptoms; H02: there is little or no modification of CHD risk factors vs. Ha2: at least one CHD risk factor is modified; and H03: patients adhere to a medication regime for the secondary prevention of CHD vs. Ha3: patients do not adhere to a medication regime for the secondary prevention of CHD. Study design and location A sequential, explanatory mixed methods design was used for this study. [26] The study took place in a teaching hospital in central Scotland, United Kingdom. The hospital where the patients were recruited was chosen as it has one of the highest procedure rates for PTCA in the United Kingdom. [27,28] Ethical approval Ethical approval to undertake the research was given by the regional Research Ethics Committee and permission to conduct the study was also given by the Research and Development Department at the teaching hospital where the study participants were recruited. Study population A convenience sample (n = 93) of patients who had undergone elective PTCA for the management of stable angina pectoris was recruited. Those eligible to participate were patients who had elective coronary revascularisation with PTCA under the care of consultant cardiologists at the teaching hospital in Scotland. Participants also needed to be able to speak and read English. Any patients who had suffered a serious complication during PTCA (e.g., stroke, coronary artery dissection) were excluded from the study, as were those who were unable to provide consent to participate, including those with impaired cognition. A power calculation performed to estimate the sample size of the study concluded that a minimum of 81 participants were required for Phase 1 of the study. Study protocol The study was divided into two phases; the first collected data from participants by means of a self-administered survey and Phase 2 used face-to-face interviews to gather qualitative data from study participants. Data were collected between December 2011 and January 2013. Potential participants were identified by the cardiology healthcare team and were given a pack of information about the study on the day they had their PTCA procedure performed. Eligible participants were invited to take part in the study when they attended their first outpatient consultation with their cardiologist approximately three months after the PTCA. After giving their written consent to take part, a self-administered survey was administered to participants. The survey contained items related to patient demographics including age, level of education, marital status and gender. It also used outcome measures related to patients' self-management of CHD and gathered data on anxiety, depression and illness perceptions using the validated Hospital Anxiety and Depression Scale (HADS) and the Brief Illness Perceptions Questionnaire (Brief IPQ). [29,30] The survey tool was checked for face validity by a group of healthcare practitioners who are considered "expert" in the cardiology specialty and it was also piloted with a small group (n = 8) of PTCA patients. The survey data collected in Phase 1 were analyzed using univariate (descriptive statistics: measures of central tendency, distribution and spread) and bivariate (independent samples t-tests) procedures using the software package SPSS Version 20.
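As a rough illustration of the bivariate step (the study itself used SPSS Version 20, not Python), an equivalent independent samples t-test would look like the sketch below; the variable names and values are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical weights (kg) for the two age groups; values are invented.
weight_under65 = np.array([92.0, 88.5, 101.2, 79.9, 95.3, 84.1])
weight_65plus = np.array([78.4, 81.0, 74.6, 69.9, 83.2, 72.5])

# Independent samples t-test, as used for the age-group comparisons.
t, p = stats.ttest_ind(weight_under65, weight_65plus)
print(f"t = {t:.2f}, P = {p:.3f}")  # the study reports P = 0.02 for weight
```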
Purposive sampling for Phase 2 of the study was informed by the findings from Phase 1 and ten participants were selected from the original sample for an in-depth face-to-face interview. To ensure a wide representation of participants with differing abilities in self-management of CHD, criteria were developed to inform the purposive sampling process for Phase 2. Participants were selected from the original sample based on their demographics, their knowledge of how to manage angina symptoms, their lifestyle factors and whether any changes had been made to these, and their self-reported adherence to medicines. A hybrid approach to qualitative data analysis was used, which aligned with the explanatory design of the study. [26] The quantitative findings from Phase 1 informed the development of some pre-determined, a priori topic codes and these were used in deductive analysis of the qualitative data. [31,32] An inductive approach was also used to analyze the qualitative data to ensure the findings were not "stifled" through confinement to the a priori codes only. Coding was performed by two independent researchers and comparisons made to ensure rigor and consistency. Through the iterative approach used to analyze the qualitative data, themes developed from the findings and these were subsequently grouped into over-arching themes. Member validation was used to check the reliability of the themes. Sample demographics Within the convenience sample of ninety-three patients in Phase 1, forty-seven participants were aged 65 years or older (identified as the older age group in this study) and just over a quarter of this sub-group (27.7%) were female. The ethnic group that predominated was white, but the representation of people with an Asian ethnicity was consistent with previous epidemiological studies that showed that particular group of people to have a higher incidence of CHD. [33] The majority of the older participants in the study owned their own homes (81.8%). Just under a quarter of them lived alone (23.4%) and the same percentage lived in the most deprived areas in central Scotland when the Scottish Government's tool for measuring deprivation, the Scottish Index of Multiple Deprivation, was applied. [34] A total of 36.2% of the sub-group of older participants were educated beyond secondary school. More than three quarters of the sub-group (78.7%) had at least one existing co-morbidity and just under half (46.8%) had three or more; the most common were osteoarthritis, type 2 diabetes and hypertension (Table 1). Relationship between age and self-management of angina symptoms Since the PTCA procedure, approximately 40% of the older participants had experienced what they considered to be angina symptoms and a slightly greater proportion (43.5%) reported that these symptoms had a limiting effect on their daily activity. When asked how these symptoms would be managed, the most common response (78.7%) from these participants was that they would take glyceryl trinitrate (Table 2). Around a third of the older age group also thought that they should rest or relax. Regardless of the duration or severity, over a quarter (27.7%) of the participants would call their general practitioner for any recurrence of angina symptoms, and a further 12.8% would contact the emergency ambulance/paramedic service for assistance. Older people in the study were less likely to know how to correctly manage any recurring angina symptoms (rest and administer glyceryl trinitrate) than their younger counterparts.
This is evidenced in that 34% of older PTCA patients would rest when the angina symptoms occurred compared with 45% of the younger patient group. Also, a greater number of older patients (27.7%) would inappropriately contact their family doctor for any symptom recurrence compared with 20% of those under the age of 65 years. Older participants in the study indicated that they were unlikely to contact healthcare services out-with normal working hours, as patients perceived that they would be too busy to attend to them. A quote from one participant highlights this: "At night I wouldn't call an ambulance, I would wait until the GP (general practitioner) opened at half past eight" (Participant No. 86; 77 years old). Older participants also believed that by seeking help from their general practitioner initially, medical attention in hospital would be expedited: "I think by doing it through the surgery (general practitioner), the hospital knew I was coming and I didn't have to sit and wait for a long time" (Participant No. 14; 82 years old). This seemed to result in lengthy delays in seeking treatment for ongoing angina symptoms. Older participants in the study who had co-morbidities were more likely than the younger participants to trivialize their angina symptoms and believed that they would resolve with little or no personal intervention: "If I suddenly got it (pain), I would just wait until it goes away. I don't think I really need GTN... My aortic valve stenosis is worse. I get really breathless." (Participant No. 93; 82 years old). Relationship between age and self-management of CHD risk factors A small proportion (4.7%) of older participants smoked cigarettes regularly, over 80% took some exercise on most days of the week and just under half (47.8%) abstained from drinking alcohol. Almost two thirds of the older participants ate between 3 and 5 portions of fruit and vegetables per week. Since the PTCA, a number of the older participants had made alterations to their CHD risk factors. More than half ate less saturated fat but more fruit and vegetables; two participants in five indicated that they had increased the amount of exercise they participated in each week and a third of the sub-group reported that they had reduced their weight since the PTCA. An independent samples t-test was used to establish any association between participants' age and their ability to adopt a healthy lifestyle. The test results suggest that older participants tended to weigh less and were less likely to smoke than their younger counterparts in the study. These findings were statistically significant at the 5% level (P = 0.02 and 0.01 respectively). The effect that age had on the changes participants made to fruit/vegetable consumption, fat consumption and weight was tested using independent samples t-tests. Age did not seem to affect PTCA patients' likelihood of altering these dietary factors in the adoption of a healthy lifestyle. There was no statistically significant difference noted between the age of participants who made no dietary changes compared with those who did (66.96 vs. 67.75 years, P = 0.237). Some older participants seemed very motivated to modify their CHD risk factors. It was also found that older participants with co-morbidities thought that their co-morbid conditions were more serious than CHD and that appeared to lessen their motivation to change their behavior to reduce their CHD risk: "Well I think the bigger change in the diet was at the time of the diabetes." (Participant No. 70; 85 years old).
Relationship between age and confidence in self-management of CHD It is known that confidence or self-efficacy may influence the effectiveness of patients' CHD self-management. [35,36] Using a five-point scale (1 = totally confident to 5 = not at all confident, Table 3), participants in the study were asked how confident they were about various aspects of self-management, such as when to seek medical help, how much exercise to take, and their general confidence in self-managing CHD. Although a larger proportion of the older participants indicated that they were either very confident or confident in doing this, an independent t-test showed a statistically significant difference between the two age groups (P = 0.007). Evidence also indicates that perceptions of illness can affect CHD patients' symptom management. [15,17] Respondents were asked about their perceptions of CHD on an eleven-point scale from 0 (not at all) to 10 (fully), where the higher end of the scale represents more threatening illness perceptions. The under 65 year olds had a mean score of 7.82 and those aged 65 or more had a slightly higher mean score of 8.36; on an independent t-test, however, the difference was not found to be statistically significant (P = 0.295). Discussion The quantitative findings indicate that a statistically significant link existed between older age and less effective monitoring and management of angina symptoms in the sample. Qualitative data from Phase 2 of the study suggested a possible explanation for this could be that older participants were more stoic and accepted recurring pain as a sign of the normal aging process. While under-reporting of pain has been linked to stoicism before, the research concerned elderly patients with osteoarthritis. [37,38] Any delay in accessing treatment could, however, pose a significant risk to CHD patients' morbidity and mortality. Older participants in the study also seemed to have a greater reliance on their general practitioner to help them deal with any angina symptoms. There was a general reluctance by more elderly PTCA patients to contact unscheduled care services when angina symptoms were prolonged. Research has found that older members of the public are less likely to contact emergency ambulance services and other unscheduled care services. [39] Evidence emerged to suggest that stoicism in older participants meant that they endured the angina symptoms and waited until they could get help from their general practitioners, rather than calling for emergency ambulances. This study provides evidence that older PTCA patients think that recurring angina symptoms are normal in the aging process and are reluctant to contact emergency services for prolonged periods of pain, opting to see their general practitioners instead. It seemed that older participants who had co-morbid conditions trivialized their angina symptoms and believed that they would resolve with little or no personal intervention. The trivialization of angina symptoms in these participants may be a consequence of them experiencing few angina symptoms after PTCA. This contrasts with other cardiac patients (e.g., heart failure patients), as they are more likely to have frequent symptoms and so more effort is required from the patient to manage these. [40,41] Ineffective symptom management could have an impact on the morbidity and mortality of PTCA patients.
It is documented that 80% of CHD is preventable, [42] and it is recommended that patients adopt and maintain healthier lifestyles to reduce their risk of disease progression. [43,44] In the current study, it was found that participants older than 65 years had fewer CHD risk factors and that many were motivated to adopt more healthy behaviours soon after the PTCA. Those with co-morbidities, however, often perceived the co-morbid condition to be more serious and that appeared to make them less likely to modify lifestyle factors. Older participants in the study reported greater confidence in self-managing their CHD and had more threatening perceptions of their condition than the younger patients in the study, but this did not achieve statistical significance. Limitations The non-probability approach to sampling limits the ability to generalise the findings of the study to the wider PTCA population. Also, the use of a single research centre in Scotland cannot guarantee the representativeness of the sample in relation to the global PTCA patient population. It does, however, provide a valid perspective of how patients cope with their CHD self-management after PTCA. Although the sample size for this study was sufficient based on the power calculation used to determine it, the number of patients aged 65 years or older was relatively small (n = 47, 50.5% of total sample). This limited the depth and scope of the analysis and indicates the need for larger sample sizes in future studies. No survey tool existed that encompassed all aspects of CHD self-management and so a new tool was developed. While some items were included that were already known to be reliable and valid (HADS, Brief IPQ), new questions in the survey were tested for face/content validity by experienced researchers and cardiology practitioners. Further testing would have enhanced the reliability and validity of this survey tool. Performing a pilot project, however, helped to provide reassurance that the questionnaire functioned well. There was a potential for response bias as the participants' responses were self-reported and so need to be taken at face value, as no objective corroboration was used. Using a mixed methods study design, however, helped to reduce the possible biases of one single method. [45] Conclusions Globally, it is known that populations are aging, [46,47] and the growth rate is staggering, particularly in those aged eighty years and older who are considered the 'oldest old'. For example, in China, it is predicted that the 'oldest old' population will swell almost fivefold from around eighteen million people in 2010 to an estimated ninety-eight million by 2050. [48] Consequently, the incidence of CHD will increase, [7] and the demands on healthcare will escalate. Traditional care for older patients with CHD is likely to be eroded, [49] and replaced with a greater reliance on people to self-manage their condition. This Scottish study, however, indicates that CHD self-management in patients over the age of sixty-five years who have undergone elective PTCA for the management of stable angina symptoms is sub-optimal. Few studies have explored CHD self-management specifically in a PTCA patient population. This study provides evidence that older patients are less likely to know how to correctly manage any recurring angina symptoms than their younger counterparts after coronary revascularization with PTCA.
Many older patients would suffer ongoing angina symptoms rather than access healthcare services out-with normal working hours and consider angina symptoms to be part of a normal aging process. Any delays in accessing help for prolonged angina symptoms could increase patients' risk of morbidity and mortality. Other patients would contact their general practitioner for any recurrence of angina symptoms and that increases the demand for healthcare provider support for CHD self-management. [50] Current methods of educating and supporting patients after elective PTCA in their management of angina symptoms seem inadequate and healthcare professionals need to determine the most effective way to enhance these to reduce the patients' mortality and morbidity risk and also their reliance on help from general practitioners for any episode of symptoms. Although older participants in the study had fewer CHD risk factors than those under the age of sixty-five years, the existence of co-morbidities made it less likely for them to modify their lifestyle. As the world population ages and a greater number of older people present with CHD, the healthcare cost associated with not supporting lifestyle change in patients who are known to be less likely to make changes may become unsustainable. [51] This study therefore, gives evidence that older patients with co-morbidities after elective PTCA require support and potential intervention to lessen their risk of CHD progression. It seems that this is the first study of its kind to find that patients after elective PTCA for the management of stable angina have sub-optimal angina management and those with co-morbidities are less likely to alter lifestyle factors to lessen the risk of CHD. As the global population ages, a review of how these patients are supported in their self-management of CHD seems necessary.
Zubov's Method for Stochastic Control Systems We consider a controlled stochastic system with an a.s. locally exponentially controllable compact set. Our aim is to characterize the set of points which can be driven by a suitable control to this set with some prescribed probability. We show that a generalization of Zubov's method leads to this characterization and can be used as a basis for numerical computations.

INTRODUCTION

Zubov's method (Zubov, 1964) is a general procedure for deterministic systems of ODEs which allows one to characterize the domain of attraction of an asymptotically stable fixed point, together with an associated Lyapunov function on this domain, by the solution of a suitable partial differential equation, the Zubov equation (see, e.g., (Khalil, 1996) for an account of the various developments of this method). A typical difficulty in the application of this method is the existence of a regular solution to the Zubov equation, which was overcome in (Camilli et al., 2001) by using a suitable notion of weak solution, the Crandall-Lions viscosity solution. The use of weak solutions allows the extension of this method to perturbed and controlled systems; see (Grüne, 2002), Chapter VII, for an overview. Using this framework, in (Camilli and Loreti, 2004) and (Camilli and Grüne, 2003) the Zubov method was applied to (uncontrolled) Ito stochastic differential equations, obtaining a characterization of the points which are attracted with any prescribed probability to the fixed point. In control-theoretic applications it is interesting to consider the so-called asymptotic controllability problem, i.e. the possibility of asymptotically driving a nonlinear system to a desired target by a suitable choice of the control law. Whereas in the deterministic case there is a huge literature on this problem (see, e.g., (Sontag, 1999)), in the stochastic case it seems to be less considered, also because it requires some degeneracy of the stochastic part, which makes it difficult to handle with classical stochastic techniques. In (Grüne and Wirth, 2000) Zubov's method was extended to this problem for deterministic systems, and in this paper we apply this method to stochastic control systems, proceeding in two steps: In the first step, in Section 2, we introduce a suitable optimal control problem associated with the stochastic system. We show that a suitable level set of the corresponding value function v gives the set of initial points for which there exists a control driving the stochastic system to the locally controllable set with positive probability. The value function is characterized as the unique viscosity solution of the Zubov equation, which is the Hamilton-Jacobi-Bellman equation of the control problem. In the second step, in Section 3, we introduce a discount factor δ > 0 and pass to the limit for δ → 0+. We show that the set of points controllable to the fixed point with probability p ∈ [0, 1] is given by the subset of R^N where the sequence v_δ converges to 1 − p. The sequence v_δ converges to a lower semicontinuous function v_0 which is a supersolution of a Hamilton-Jacobi-Bellman equation related to an ergodic control problem. In this respect the Zubov equation with positive discount factor can be seen as a regularization of the limit ergodic control problem which gives the appropriate characterization. Finally, in Section 4 we describe an example where the previous objects are calculated numerically.
ZUBOV'S EQUATION AND POSSIBLE NULL-CONTROLLABILITY

We fix a probability space (Ω, F, F_t, P), where {F_t}_{t≥0} is a right-continuous increasing filtration, and consider the controlled stochastic differential equation (1), where α(t), the control applied to the system, is a progressively measurable process having values in a compact set A ⊂ R^M. We denote by A the set of admissible control laws α(t). Solutions corresponding to an initial value x and a control law α ∈ A will be denoted by X(t, x, α) (or X(t) if there is no ambiguity). We assume that the functions b : R^N × A → R^N, σ : R^N × A → R^{N×M} are continuous and bounded on R^N × A and Lipschitz in x uniformly with respect to a ∈ A, and that 0 ∈ A. Moreover, we assume that there exists a set ∆ ⊂ R^N which is locally a.s. exponentially null-controllable, i.e. there exist r, λ positive and a finite random variable β such that for any x ∈ B(∆, r) = {x ∈ R^N : d(x, ∆) ≤ r} there exists α ∈ A for which d(X(t, x, α), ∆) ≤ β e^{−λt} a.s. for any t > 0. (2) In this section we study the domain of possible null-controllability, i.e., the set of points x for which it is possible to design a control law α such that the corresponding trajectory X(t, x, α) is attracted with positive probability to ∆. We introduce a control problem associated with the dynamics in the following way. We consider for x ∈ R^N and α ∈ A the cost functional. Proof: Note that by definition 0 ≤ v ≤ 1 and v(x) > 0 for x ∈ ∆. We claim that C is the set of points x ∈ R^N for which there exists α ∈ A such that E[exp(−t(x, α))] > 0, where, and therefore v(x) = 1. If x ∈ C, by the previous claim there exists α such that P[t(x, α) < +∞] > 0. Set τ = t(x, α) and take T and K sufficiently large, in such a way that, where M_g and L_g are, respectively, an upper bound and the Lipschitz constant of g. We have obtained a link between C and v. In the next two propositions we study these objects in order to get a PDE characterization of v. ii) C is open, connected, and weakly positive forward invariant (i.e. there exists α ∈ A such that the inequality P[X(t, x, α) ∈ C for any t] > 0 holds). iii) Proof: The proof is a straightforward generalization of the proofs of the corresponding results in (Camilli and Loreti, 2004). Remark 2.3. Note that if C does not coincide with all of R^N, the weak forward invariance property requires some degeneracy of the diffusion part of (1) on ∂C; see, e.g., (Bardi and Goatin, 1999). The typical example we have in mind is a deterministic system driven by a stochastic force, i.e. a coupled system; see, e.g., (Colonius et al., 1996) for examples of such systems. Note that for systems of this class the diffusion for the overall process is naturally degenerate. Set Σ(x, a) = σ(x, a)σ^t(x, a) for any a ∈ A and consider the generator of the Markov process associated with the stochastic differential equation. Proposition 2.4. v is continuous on R^N and is a viscosity solution of Zubov's equation (5). Proof: The only point is to prove that v is continuous on R^N. Then a standard application of the dynamic programming principle shows that v is a viscosity solution of (5) (see, e.g., (Yong and Zhou, 1999), (Fleming and Soner, 1993)). One shows that E[exp(−t(x_n, α))] → 1 for n → +∞, and hence v is continuous on the boundary of C. To prove that v is continuous on the interior of C, it is sufficient to show that v is continuous in B(∆, r), since outside g is strictly positive and we can use the argument in (Lions, 1983, part I), Theorem II.2. Fix x, y ∈ B(∆, r) and ε > 0.
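Although the displayed cost functional was lost in extraction, the construction can be illustrated under the standard Zubov-type assumption v(x) = inf_α E[1 − exp(−∫_0^∞ g(X(t), α(t)) dt)]. The minimal Python sketch below estimates this quantity by Euler-Maruyama simulation for a frozen control; the one-dimensional dynamics, the cost g, and all numerical values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D controlled SDE dX = (-X + a) dt + 0.3 X dW with target
# set Delta around 0; the diffusion degenerates at the target, as the
# a.s. controllability assumption requires. All choices are assumptions.
def zubov_value_mc(x0, a=0.0, dt=1e-2, T=10.0, n_paths=2000):
    """Monte Carlo estimate of E[1 - exp(-int_0^T g dt)] for a frozen control a;
    the value function in the text takes an infimum over all controls."""
    x = np.full(n_paths, float(x0))
    J = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        J += np.minimum(np.abs(x), 1.0) * dt          # running cost g
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        x += (-x + a) * dt + 0.3 * x * dW             # Euler-Maruyama step
    return float(np.mean(1.0 - np.exp(-J)))

print(zubov_value_mc(0.05), zubov_value_mc(2.0))      # small near Delta, larger far away
```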
Let b be such that, and take T in such a way that L_g b exp(−λT)/λ < ε/4, where λ is as in (2); let α be a control satisfying (2) and g(X(t, x, α), α(t))dt] + ε/8, and δ sufficiently small in such a way that. The next theorem gives the characterization of C through the Zubov equation (5). Theorem 2.5. The value function v is the unique bounded, continuous viscosity solution of (5) which is null on ∆. Proof: We show that if w is a continuous viscosity subsolution of (5) such that w(x) ≤ 0 for x ∈ ∆, then w ≤ v in R^N. Using a standard comparison theorem (see, e.g., (Fleming and Soner, 1993)), the only problem is the vanishing of g on ∆. Therefore we first prove that w ≤ v in B(∆, r) using (2); we then obtain the result in all of R^N by applying the comparison result in R^N \ B(∆, r). Fix ε > 0 and let δ > 0 be such that if d(z, ∆) ≤ δ, then w(z), v(z) ≤ ε. For x ∈ B(∆, r), by the dynamic programming principle we can find α ∈ A satisfying (2) and such that. Therefore we have, where. Set B_K = {β ≤ K} and take T and K sufficiently large in such a way that 2M e^{−g_δ T} ≤ ε, 2M P[B_K^c] ≤ ε and, recalling (2), ≤ 4ε, and thus w ≤ v in B(∆, r) since ε was arbitrary. By a similar argument we can prove that if u is a continuous viscosity supersolution of (5) such that u(x) ≥ 0 for x ∈ ∆, then u ≥ v in R^N. Remark 2.6. The function v is a stochastic control Lyapunov function for the system, in the sense that for any x ∈ C \ ∆ and any t > 0.

CONTROLLABILITY DOMAINS

In this section we are interested in the set D_p of points x ∈ R^N which are asymptotically controllable to the set ∆ with probability arbitrarily close to a given p ∈ [0, 1]. We require a slightly stronger stability condition, namely that besides (2) it also holds that for any x ∈ B(∆, r) there exists a control α ∈ A such that E[d(X(t, x, α), ∆)^q] ≤ M e^{−µt} for any t > 0 (7) for some q ∈ (0, 1] and positive constants M, µ. This assumption is motivated by the uncontrolled linear case, where (7) is a consequence of (2). We consider a family of value functions v_δ depending on the positive discount parameter δ. The main result of this section is Theorem 3.1. Proof: The proof is split into three steps. Since g is Lipschitz continuous in x uniformly in a and g(x, a) = 0 for any (x, a) ∈ ∆ × A, we have g(x, a) ≤ min{L_g ‖x‖, M_g} ≤ C_q ‖x‖^q for any q ∈ (0, 1] and a corresponding constant C_q. Let α be a control satisfying (7). Then for any δ, by the Lipschitz continuity of g, (2) and (7), we get, where t(x, a) is defined as in (4). The proof of the claim is very similar to that of Lemma 3.2 in (Camilli and Grüne, 2003), so we just sketch it. Let α ∈ A be such that sup + ε, and T_0 such that exp(−δT) ≤ ε for T > T_0. Hence for T > T_0. To obtain the other inequality in (9), take α ∈ A, T sufficiently large and δ small such that, and, for t < T, e^{−δt} ≥ 1 − ε. Then, since ε is arbitrary, it follows that lim inf. For any α ∈ A, we have, and therefore, by Claim 2, lim inf. Now fix ε > 0, δ > 0 and take T sufficiently large such that exp(−δ M_g T) ≤ ε. By the dynamic programming principle, for any α ∈ A we have
v_δ(x) ≤ E[ ∫_0^{T∧t(x,α)} δ g(X(t), α(t)) e^{−∫_0^t δ g(X(s), α(s)) ds} dt + e^{−∫_0^{T∧t(x,α)} δ g(X(t), α(t)) dt} v_δ(X(T ∧ t(x, α))) ].
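The limit behavior asserted by Theorem 3.1 (v_δ → 1 − p as δ → 0+) can be made concrete with a toy Monte Carlo experiment in which a path is attracted to ∆ with probability p (finite accumulated cost) and otherwise accumulates infinite cost. All distributions below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of v_delta(x) -> 1 - p(x): with probability p the path is
# attracted to Delta (finite integral of g, modeled as an exponential J),
# otherwise g stays bounded away from zero (J = +infinity).
def v_delta(delta, p=0.7, n_paths=100_000):
    attracted = rng.random(n_paths) < p
    J = np.where(attracted, rng.exponential(2.0, n_paths), np.inf)
    return np.mean(1.0 - np.exp(-delta * J))

for d in (1.0, 1e-1, 1e-2, 1e-4):
    print(f"delta={d:g}  v_delta={v_delta(d):.3f}")   # tends to 1 - p = 0.3
```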
(10) Now using Claim 1 and recalling that 0 ≤ v δ ≤ 1 we estimate the second term in the right hand side of ( 10) by A NUMERICAL EXAMPLE We illustrate our results by a stochastic version of a creditworthiness model from (Grüne et al., 2005) given by In this model k = x 1 is the capital stock of an economic agent, B = x 2 is the debt, j = α is the rate of investment, H is the external finance premium and f is the agent's net income.The goal of the economic agent is to steer the system to the set {x 2 ≤ 0}, i.e., to reduce the debt to 0. Extending H to negative values of x 2 via H(x 1 , x 2 ) = θx 2 one easily sees that for the deterministic model controllability to {x 2 ≤ 0} becomes equivalent to controllability to ∆ = {x 2 ≤ −1/2}, and that for the stochastic model this set ∆ satisfies our assumptions. Using the parameters 2 we have numerically computed the solution v δ for the corresponding Zubov equation with δ = 10 −4 using the scheme described in (Camilli and Grüne, 2003) extended to the controlled case (see (Camilli and Falcone, 1995) for more detailed information).For the numerical solution we used the time step h = 0.05 and an adaptive grid (see (Grüne, 2004)) covering the domain Ω = [0, 2] × [−1/2, 3].For the control values we used the set A = [0, 0.25]. As boundary conditions for the outflowing trajectories we used v δ = 1 on the upper boundary and v δ = 0 for the lower boundary, on the left boundary no trajectories can exit.On the right boundary we did not impose boundary conditions (since it does not seem reasonable to define this as either "inside" or "outside").Instead we imposed a state constraint by projecting all trajectories exiting to the right back to Ω.We should remark that the effect of these conditions has to be taken into account in the interpretation of the results.Figure 1 show the numerical results for σ = 0, 0.1 and 0.5 (top to bottom).In order to improve the visibility, we have excluded the values for x 1 = 0 from the figures (observe that for x 1 = 0 and x 2 > 0 it is impossible to control the system to ∆, hence we obtain v δ ≈ 1 in this case).
Transport in boundary-driven quantum spin systems: One-way street for the energy current We study transport properties in boundary-driven asymmetric quantum spin chains given by XXZ and XXX Heisenberg models. Our approach exploits symmetry transformations in the Lindblad master equation associated with the dynamics of the systems. We describe the mathematical steps to build the unitary transformations related to the symmetry properties. For general target polarizations, we show the occurrence of the one-way street phenomenon for the energy current, namely, the energy current does not change in magnitude and direction as we invert the baths at the boundaries. We also analyze the spin current in some situations, and we prove the uniqueness of the steady state for all investigated cases. Our results, involving nontrivial properties of the energy flow, shall interest researchers working on the control and manipulation of quantum transport.

I. INTRODUCTION

A bedrock of nonequilibrium statistical physics is the understanding of the transport laws [1][2][3]. In particular, the study of the properties of the energy flow is of theoretical and experimental interest: a good example is the investigation of thermal rectification. Motivated by the amazing progress of modern electronics due to the invention of the transistor, the electric diode, and other nonlinear solid state devices, several works are devoted to the investigation of asymmetries in the energy current in order to build thermal diodes [4,5], devices in which the magnitude of the energy current changes as we invert the system between two baths. A subject of increasing attention nowadays is the study of such transport laws in the quantum regime. Stimulated by the emerging field of quantum thermodynamics, by the development of nanotechnology, and by the possibility of experimental manipulation of small quantum systems, the study of quantum models becomes mandatory. Quantum spin chains, in particular, are exhaustively investigated. They are the archetypal models of open quantum systems and are related to problems in several different areas: condensed matter, cold atoms, optics, quantum information, etc. Their boundary-driven versions, i.e., systems with target polarization at the boundaries, are recurrently studied [6][7][8][9][10]. The energy current of these boundary-driven systems, in contrast to that of weakly coupled models, usually involves both heat and work [11][12][13]. This is an important point: when we ignore the work component, incorrect conclusions may be obtained [13,14]. The present article addresses the investigation of some (a)symmetries in the energy current of boundary-driven Heisenberg (XXX) and XXZ models. Specifically, we show the occurrence of the one-way street phenomenon for the energy current in several asymmetric Heisenberg and XXZ chains with general cases of different boundary polarizations. Such a phenomenon means that the energy current is the same as we invert the baths at the boundaries, that is, it does not change in magnitude and direction. Thus, the phenomenon is, in some way, related to (but stronger than) rectification. It is important to emphasize that, as said, the energy current is not only heat, and so there is no thermodynamic inconsistency in the occurrence of the one-way street effect. For more details, see Refs. [11][12][13]. The dynamics associated with the models, as usual, is given by a Lindblad master equation (LME) [15]. To establish our results we exploit symmetries of the density matrix and of the LME.
These results are independent of the system size and of the transport regime. The existence of the one-way street phenomenon was established in Ref. [16] by a direct computation of the steady density matrix and the energy current for an XXZ model with σ^z polarization at the boundaries. The argument of symmetries appeared in Ref. [17] for the same case, and in a recent letter [18] we stated, without presenting a mathematical proof, the possibility of a ubiquitous occurrence of such a phenomenon for systems with general target spin polarization at the boundaries. In the present paper, we give the mathematical proofs for the energy current property; we also show that, in some cases, the spin current changes sign as we invert the baths, in contrast to the energy flow. As an important mathematical point, we also prove the uniqueness of the steady distributions for the cases treated here. The rest of the paper is organized as follows. In Section 2, we introduce the model and describe the approach. In Section 3, we analyze several cases of different target polarizations and present the mathematical steps. In Section 4, we prove the uniqueness of the steady states. Section 5 is devoted to the final remarks.

II. MODELS AND APPROACH

Now we introduce the models to be treated here, the LME, the approach to be used, and some previous results. We consider standard quantum spin models, namely, the XXZ and Heisenberg (XXX) chains. For the Hamiltonian of the asymmetric version of the spin-1/2 XXZ chain, we take
H = Σ_{i=1}^{N−1} [ α(σ^x_i σ^x_{i+1} + σ^y_i σ^y_{i+1}) + ∆_i σ^z_i σ^z_{i+1} ],
where σ^β_i (β = x, y, z) are the Pauli matrices. We are interested in cases involving asymmetric distributions for the anisotropy parameter ∆_i, for example, a graded distribution. For the Heisenberg model, we take the Hamiltonian
H = Σ_{i=1}^{N−1} α_i (σ^x_i σ^x_{i+1} + σ^y_i σ^y_{i+1} + σ^z_i σ^z_{i+1}),
where α_i is asymmetrically distributed. The open quantum systems to be analyzed are given by the steady states of the LME
dρ/dt = −i[H, ρ] + L(ρ),
where we assume ħ = 1, ρ is the density matrix, and the dissipator L(ρ) describes the coupling with the baths; it is given by
L(ρ) = Σ_s [ L_s ρ L_s^† − (1/2){L_s^† L_s, ρ} ],
where {·, ·} denotes the anticommutator; the different L_s will be specified later. The spin and the energy currents are derived from the LME and continuity equations; see, e.g., Ref. [19] for details. For the XXZ chain, the spin current is. Adding to the Hamiltonian the interaction with an external magnetic field, Σ_{j=1}^N B_j σ^z_j, the energy current becomes. It is important to recall that there is a remarkable difference between symmetric and asymmetric XXZ chains. For the symmetric case we have J^{XXZ}_j = 0 [19]. And so, the energy current becomes proportional to the spin current, and vanishes as B = 0. But this does not hold in the asymmetric case, as shown by a direct computation in Ref. [16] for a system with σ^z polarization at the boundaries. Turning to the Heisenberg Hamiltonian, the expressions for the currents become. We now describe our strategy to prove the current properties, in particular, the one-way street phenomenon. In some way, we follow Popkov and Livi [20]. We exploit symmetries in the LME to show that, if ρ is a steady state solution of the LME, then there is a unitary transformation U (to be built) such that UρU^† is a solution of the LME with inverted baths. By uniqueness (to be proved), it is the steady state with inverted baths. Then we turn to the energy current and show that the average with the new steady state is the same as that with the initial steady state. That is, the energy current does not change as we invert the baths: this is the one-way street phenomenon.
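The steady state described here can be computed explicitly for a tiny chain. The sketch below assumes the standard Lindblad dissipator quoted above and σ^z-targeted boundary driving of strength f (the case of Ref. [16]); the chain length, graded anisotropies, and couplings are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.linalg import null_space

# Steady state of a small boundary-driven XXZ chain by vectorizing the LME,
# d(rho)/dt = -i[H, rho] + sum_s ( L_s rho L_s^dag - {L_s^dag L_s, rho}/2 ).
# z-polarized baths target +f on the left and -f on the right (assumed values).
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2
I2 = np.eye(2)

def op(o, j, N):                       # embed a one-site operator at site j
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, o if k == j else I2)
    return out

N, alpha, Delta = 4, 1.0, [1.0, 1.5, 2.0]          # graded anisotropy (assumed)
H = sum(alpha * (op(sx, i, N) @ op(sx, i+1, N) + op(sy, i, N) @ op(sy, i+1, N))
        + Delta[i] * op(sz, i, N) @ op(sz, i+1, N) for i in range(N - 1))

gamma, f = 1.0, 0.5
Ls = [np.sqrt(gamma*(1+f)) * op(sp, 0, N), np.sqrt(gamma*(1-f)) * op(sm, 0, N),
      np.sqrt(gamma*(1-f)) * op(sp, N-1, N), np.sqrt(gamma*(1+f)) * op(sm, N-1, N)]

d = 2 ** N
Id = np.eye(d)
Lv = -1j * (np.kron(Id, H) - np.kron(H.T, Id))     # column-stacked -i[H, .]
for L in Ls:
    LdL = L.conj().T @ L
    Lv += np.kron(L.conj(), L) - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id))

rho = null_space(Lv)[:, 0].reshape(d, d, order='F')   # unique kernel vector
rho = rho / np.trace(rho)
rho = (rho + rho.conj().T) / 2                        # clean up numerics
print([np.trace(rho @ op(sz, j, N)).real.round(3) for j in range(N)])  # magnetization profile
```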
To be precise, in the steady state the LME becomes. It means that, in order to perform our analysis, we must find a unitary transformation U such that. Moreover, to show the one-way street phenomenon we need to prove that U J U^† = J. In the next section, we find U for several different dissipators, i.e., several different boundary polarizations, such that these relations are satisfied.

III. UNITARY TRANSFORMATIONS AND SYMMETRY RESULTS

Now we will build the unitary transformations in order to exploit the symmetries of the Lindblad equations and prove some current properties, in particular, the one-way street phenomenon for the energy current. We begin by noting that any unitary 2×2 matrix can be written as (the reader can prove it), where a, b ∈ C, ϕ ∈ R and |a|^2 + |b|^2 = 1. Then, we analyze several cases involving different boundary polarizations. We also investigate two different graded systems: the first two cases are related to the XXZ chain, and the other ones to the Heisenberg model. First we take the case in which the polarization is in the x direction at one boundary, and at some generic angle in the xy plane at the other boundary. Precisely, we consider the Lindblad operators, where γ is the coupling constant and f is the driving strength. To perform the change between the baths, we need to find a unitary operator such that. We may still have factors such as −1, i or −i on the right-hand side without any further problem. Given such conditions, we see that it is enough to find a unitary matrix A such that the operation A(·)A^† transforms as. And so, U will be given by U = A ⊗ A ⊗ ... ⊗ A. Carrying out the computation: But we want. It implies that |b|^2 − |a|^2 = 1. As we already have |b|^2 + |a|^2 = 1, then |a|^2 = 0, and so a = 0 and |b|^2 = 1. Now the matrix A is given by. To find b we perform the computation. We want; we take ϕ = θ, and it leads us to −ib^2 = 1. Then, it is enough to take b = (1 + i)/√2. Hence, this is the desired unitary matrix. Carrying out some computation we find that A(cos θ σ^x + sin θ σ^y)A^† = σ^y. Now we analyze the XXZ Hamiltonian. First, we note that. Then, it follows that. That is, U H U^† = H. Before investigating the effect of U on the currents, we note that. The energy current is Ĵ, and so the effect of U is. It shows the occurrence of the one-way street phenomenon: the energy current keeps the same value and direction as we invert the reservoirs at the boundaries. Taking the spin current Ĵ, the effect of U is. In other words, the spin current keeps its value and inverts its direction as we invert the reservoirs at the boundaries; there is no spin rectification, no further effect. X-Y orthogonal polarization. Let us consider the set of Lindblad operators for one boundary. And, for the other boundary. For this case, the inversion of the baths can be given by a unitary operator. Indeed, with such an operator we have the transformations. We will use the general representation for a matrix A ∈ SU(2), where a_r, a_i, b_r and b_i ∈ R and a_r^2 + a_i^2 + b_r^2 + b_i^2 = 1. Turning to the computations, where. As we want, we must have. For the σ^z transformation, where. As we want, we must have. It is easy to see that a solution is. And so, this is the desired matrix. We note that we have (as expected). The energy current for the graded XXZ chain is Ĵ. For the action of U, noting that; that is, the one-way street phenomenon holds. For the spin current Ĵ it follows; that is, the current is inverted without rectification or any other effect. Y-YZ polarization.
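The first case's conclusions can be checked directly. The sketch below uses a = 0 and b = (1 + i)/√2 as found in the text, with one consistent placement of the remaining phase (our assumption): the resulting anti-diagonal A maps cos θ σ^x + sin θ σ^y to σ^y, flips σ^z, and U = A ⊗ ... ⊗ A leaves the graded XXZ Hamiltonian invariant.

```python
import numpy as np

# Verify the symmetry mechanism: an anti-diagonal one-site unitary A with
# unimodular entries (phases chosen below, our assumption) satisfies
# A (cos t sx + sin t sy) A^dag = sy, A sz A^dag = -sz, and U H U^dag = H.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

theta = 0.4
b = (1 + 1j) / np.sqrt(2)
A = np.array([[0, b], [1j * np.exp(1j * theta) * b, 0]])   # unitary, anti-diagonal

def op(o, j, N):
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, o if k == j else I2)
    return out

N, alpha, Delta = 4, 1.0, [1.0, 1.5, 2.0]
H = sum(alpha * (op(sx, i, N) @ op(sx, i+1, N) + op(sy, i, N) @ op(sy, i+1, N))
        + Delta[i] * op(sz, i, N) @ op(sz, i+1, N) for i in range(N - 1))

U = np.array([[1.0 + 0j]])
for _ in range(N):
    U = np.kron(U, A)

print(np.allclose(A @ A.conj().T, I2))                                      # unitary
print(np.allclose(A @ (np.cos(theta)*sx + np.sin(theta)*sy) @ A.conj().T, sy))
print(np.allclose(A @ sz @ A.conj().T, -sz))                                # baths inverted
print(np.allclose(U @ H @ U.conj().T, H))                                   # U H U^dag = H
```

The Hamiltonian invariance holds for any anti-diagonal A with unimodular entries; the phases only select which pair of bath polarizations gets swapped.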
We now investigate the chain in which the first spin is targeted along the Y direction and the last one along some direction in the YZ plane. We also turn to the Heisenberg models. Precisely, we consider the Lindblad operators. To perform the bath inversion, it is enough to find A such that A(·)A^† makes the transformations, −σ^x. We will use the representation of a unitary matrix in SU(2), where a_r^2 + a_i^2 + b_r^2 + b_i^2 = 1. We begin by studying condition (3) above, where we want. It leads to, and we must have. We can take a_r = b_i = 0, and so we stay with, where a_i^2 + b_r^2 = 1. Let us satisfy condition (1). We have, and we want [[sin θ, −i cos θ], [i cos θ, −sin θ]], i.e., cos θ σ^y + sin θ σ^z. That is, and so. Consequently, for 0 ≤ θ ≤ π/2, which are our angles of interest, we take. Moreover. Then, the final form of A is. We know that A σ^x A^† = cos θ σ^y + sin θ σ^z. A simple computation shows that A σ^y A^† = cos θ σ^z − sin θ σ^y. Noting that A^† = −A, it follows, and the same for σ^y and σ^z. For the graded Heisenberg Hamiltonian we have. For the energy current we have, and so, which shows the occurrence of the one-way street phenomenon. Y-Z orthogonal polarizations. Let us consider the set of Lindblad operators at one boundary, say the left one, given by, and for the right boundary. To invert the baths it is enough to find an operator A such that. Indeed, in such a case, we will have the transformations, and also. Consequently, the dissipator transforms as. As we show below, it is enough to use a representation for A in SU(2). We still want from (II), which leads us to. From the equations above we have a_i a_r = 0 = b_i b_r. We choose a_r = b_i = 0 and, consequently. Hence. And we obtain for A the final form. A short computation shows us that (III) follows, as expected. It is easy to see that the transformations keep the Hamiltonian of the graded Heisenberg model unchanged, i.e.. For the energy current, we have; that is, the one-way street phenomenon holds. Z-XZ polarization. We now consider the case involving a σ^z target polarization at one side, and polarization along a rotated axis in the XZ plane at the other side. That is, we consider the Lindblad operators as. Again, we search for an operator related to the bath inversion. We use the general representation of SU(2). After manipulations similar to those previously described, we find. Then we study the effect of U = A ⊗ A ⊗ ... ⊗ A on the Heisenberg Hamiltonian and on the energy current. We have U H U^† = H, as expected. For the energy current of the Heisenberg model we have, and using cos^2 x + sin^2 x = 1, we obtain; that is, the one-way street phenomenon. X-Z orthogonal polarization. Now, for one boundary we take the Lindblad operators, and, for the opposite boundary. To implement the bath inversion, it is enough to find a unitary operator A such that, since its action will perform the transformations, and also. After some algebraic manipulations, we find. And everything follows: the Heisenberg Hamiltonian is preserved under the transformations, as well as the energy current, i.e., the one-way street phenomenon holds.

IV. STEADY STATE UNIQUENESS

Now we prove the uniqueness of the steady state for all the cases previously analyzed. As is well known, the steady state is unique if the set of Lindblad operators together with the Hamiltonian is enough to generate the whole Pauli algebra [21] involving all sites 1, 2, ..., N. Here, in our proof, we follow Prosen [21].
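The Y-YZ construction can likewise be verified numerically. With a_r = b_i = 0 the SU(2) matrix reduces to A = [[i a_i, b_r], [−b_r, −i a_i]]; choosing a_i = cos(π/4 − θ/2) and b_r = sin(π/4 − θ/2) is one consistent solution of the constraints (the angle assignment is our reading of the lost displays), and it reproduces the relations quoted in the text, including A^† = −A.

```python
import numpy as np

# Check of the Y-YZ case for A = [[i*p, q], [-q, -i*p]] with
# p = cos(pi/4 - theta/2), q = sin(pi/4 - theta/2) (assumed assignment).
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

theta = 0.3
p, q = np.cos(np.pi/4 - theta/2), np.sin(np.pi/4 - theta/2)
A = np.array([[1j * p, q], [-q, -1j * p]])

conj = lambda o: A @ o @ A.conj().T
print(np.allclose(A @ A.conj().T, np.eye(2)))                        # unitary
print(np.allclose(A.conj().T, -A))                                   # A^dag = -A
print(np.allclose(conj(sy), np.cos(theta)*sz - np.sin(theta)*sy))    # as quoted
print(np.allclose(conj(sz), np.cos(theta)*sy + np.sin(theta)*sz))    # YZ-plane rotation
print(np.allclose(conj(sx), -sx))                                    # x axis flipped
```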
In any of the previously analyzed cases, the Lindblad operators are given in terms of σ^+ and σ^−, or of Γ^± = (σ^z ± iσ^x)/2, on one of the sides of the system (site 1 or N), or in terms of Π^± = (σ^y ± iσ^z)/2. But this last case reduces to the first one by simple relations, and the other one reduces to the first as well. Thus, let us show that having σ^+ and σ^− on one of the sides is enough to generate the whole algebra (together, of course, with the Hamiltonian). To prove it, we will show that the following relations are valid for j = 3, 4, ..., n, together with their conjugates; we recall that [σ^+, σ^−] = σ^z. With the previous relations, we get the set {σ^+_j, σ^−_j ; j = 1, ..., n}, which generates the whole Pauli algebra. First, we rewrite the XXZ Hamiltonian as. As far as the algebraic properties are concerned, the constants α and ∆ are not important (nor is the difference between ∆_j and ∆_{j+1}), and so our computation also holds for the Heisenberg model. We have, and so. Consequently, as we wanted. Carrying out the computation, hence, and so. For the adjoint, we have, where we used the identity. We also have, and with these results we can conclude the proof.

V. FINAL REMARKS

We believe that our results, showing the general occurrence of a nontrivial property of energy transport in quantum spin systems, will enhance the interest of researchers in quantum transport. It is worth emphasizing that the one-way street phenomenon shown here is an effect stronger than rectification, even perfect rectification. These boundary-driven quantum spin systems are the archetypal models of nonequilibrium statistical physics, and the asymmetric versions proposed here are not only theoretical proposals. For example, graded materials, i.e., asymmetric systems whose structure changes gradually in space, are abundant in nature and can also be built. They are recurrently studied in different areas: material science, optics, etc. An example of a graded thermal diode has already been experimentally constructed [22]: a carbon and boron nitride nanotube, externally coated with heavy molecules. It is important to stress that, in particular, asymmetric versions of the XXZ and Heisenberg chains seem to be realizable. Refs. [23,24] show the possibility of engineering these quantum spin Hamiltonians with different values of the structural parameters α and ∆. Finally, still concerning the realizability of such systems, recent experimental works with Rydberg atoms in optical traps [25,26] are associated with Heisenberg and XXZ models.
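The algebra-generation criterion can also be probed numerically: multiply out operator words in {H, σ^+_1, σ^−_1} and check that they span the full 4^N-dimensional operator space. The chain size, couplings, and word-length cutoff below are illustrative assumptions, and the printed rank simply reports whether the cutoff sufficed.

```python
import numpy as np

# Uniqueness check sketch: the steady state is unique when the Hamiltonian
# plus the boundary Lindblad operators generate the whole Pauli algebra [21].
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2
I2 = np.eye(2)

def op(o, j, N):
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, o if k == j else I2)
    return out

N = 3
d = 2 ** N
H = sum(op(sx, i, N) @ op(sx, i+1, N) + op(sy, i, N) @ op(sy, i+1, N)
        + (1.0 + 0.3 * i) * op(sz, i, N) @ op(sz, i+1, N) for i in range(N - 1))
gens = [H, op(sp, 0, N), op(sm, 0, N)]    # sigma+- on one side only, as in the proof

vecs, frontier = [np.eye(d).ravel()], [np.eye(d)]
for _ in range(8):                         # operator words up to length 8 (assumed cutoff)
    frontier = [g @ w for g in gens for w in frontier]
    frontier = [w / np.linalg.norm(w) for w in frontier if np.linalg.norm(w) > 1e-12]
    vecs += [w.ravel() for w in frontier]
rank = np.linalg.matrix_rank(np.array(vecs))
print(rank, 4 ** N)                        # rank == 4^N signals a unique steady state
```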
Observation of spin-1 tunneling on a quantum computer Spin-1 tunneling and the splitting of energy levels as a result of tunneling are observed explicitly on IBM's quantum computer, ibmq-bogota. The spin-1 is realized with two spins-1/2. We detect oscillations of the spin-1 between the states |1⟩ and |−1⟩ resulting from tunneling, on the basis of studies of the time dependence of the mean value of the z-component of the spin-1 on the quantum device. The energy-level splitting is observed by quantifying, on IBM's quantum computer, the eigenvalues of the Hamiltonian which describes the spin tunneling. Quantum spin tunneling is a phenomenon in which a single spin tunnels between two opposite directions. This leads to a splitting of the energy levels related to opposite states, which is called quantum spin tunneling splitting (see, for instance, [15,16]). Experimental observation of quantum tunneling of the magnetization of a cluster with S = 10 was reported in [17]. Later, in [18,19], a direct measurement of the quantum tunneling splitting energy of the spin S = 1 was done. Note that quantum spin tunneling splitting at zero field is only possible for integer spins. The smallest spin for which this phenomenon can be observed is S = 1. In this paper we simulate the phenomenon of spin-1 quantum tunneling on a quantum computer. We observe explicitly the spin-1 tunneling and the splitting of an energy level as a result of tunneling. The spin tunneling is detected on the basis of studies of the evolution of the mean value of the z-component of the spin-1. The splitting of the energy level is observed by detecting, on a quantum computer, the energy levels of the Hamiltonian that describes single-spin tunneling. The studies are done using the method of detection of the energy levels of a spin system on a quantum computer with probe spin evolution, proposed in [9,10]. The paper is organized as follows. In Section 2 the spin-1 tunneling is considered and the way to detect it on a quantum device is presented. In Section 3 the spin tunneling and the energy splitting resulting from tunneling are detected on IBM's quantum computer. Conclusions are presented in Section 4.

2 Spin-1 tunneling and its studies on a quantum computer

The Hamiltonian that describes single-spin tunneling reads (1), where S^α are spin-1 operators, D is the axial constant which determines the magnetic anisotropy, and the constant γ is responsible for the single-spin tunneling effect (see, for instance, [20]). The energy levels and corresponding eigenstates of (1) are well known; they are as follows. Tunneling leads to the splitting of the energy level, which reads ∆ = 2|γ|. The process of tunneling can be seen explicitly by studying the dynamical properties of the spin-1. In relation to this, it is worth mentioning paper [21], where dynamical problems of spin-1 were considered in the study of the quantum brachistochrone problem. Let at the initial time t = 0 the spin be in the state |1⟩, positively directed along the z-axis, ⟨S_z⟩|_{t=0} = 1. One can find that the evolution of the state vector reads (4), where ω = γ/ħ. From (4) we have that, as a result of tunneling, the spin oscillates between the two opposite directions described by the state vectors |1⟩ and |−1⟩. These oscillations are reflected in the time dependence of the mean value of the z-component of the spin and can be detected on a quantum device. Spin-1 can be realized with two spins-1/2 (see [22]). The operator of spin-1 can be represented as a sum of two spin-1/2 operators, S^α = (σ^α_1 + σ^α_2)/2, (6) where σ^α_i are Pauli operators, α = (x, y, z). For spin-1 the eigenvalue of S^2 is j(j + 1) = 2 (j = 1).
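Since Eq. (1) did not survive extraction, the sketch below assumes the standard single-spin tunneling form H = D S_z^2 + γ(S_x^2 − S_y^2), which reproduces the quoted splitting ∆ = 2|γ| and the |1⟩ ↔ |−1⟩ oscillation with frequency ω = γ (ħ = 1). D = −2 and γ = 0.5 are the values used in the text.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 tunneling in the basis {|1>, |0>, |-1>}. The Hamiltonian form is our
# assumption for the unrecovered Eq. (1); D and gamma follow the text.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

D, gamma = -2.0, 0.5
H = D * Sz @ Sz + gamma * (Sx @ Sx - Sy @ Sy)

print(np.linalg.eigvalsh(H))       # two levels split by 2|gamma| = 1 (here -2.5, -1.5), plus 0

psi0 = np.array([1.0, 0.0, 0.0], complex)        # state |1>
for t in np.linspace(0.0, np.pi, 5):
    psi = expm(-1j * H * t) @ psi0
    print(f"t={t:.2f}  <Sz>={np.real(psi.conj() @ Sz @ psi):+.3f}")
# <Sz> runs from +1 at t=0 to -1 at t=pi, matching the oscillation in Fig. 2
```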
In order to satisfy this relation, the action of the spin-1/2 operators has to be restricted to the subspace spanned by the triplet vectors |00⟩, (|01⟩ + |10⟩)/√2, |11⟩ (7) (see [22]). Note that the singlet state of two spins-1/2 is annihilated by operators (6) and does not belong to this subspace. The representation of spin-1 by two spins-1/2 allows us to model and study spin-1 systems on a quantum computer, in particular to examine quantum spin-1 tunneling with quantum calculations. In the next section we present the results of the simulation of spin-1 tunneling on the IBM quantum computer.

Detection of spin-1 tunneling on IBM's quantum computer

Using representation (6) for the spin-1, we rewrite Hamiltonian (1), up to an additive constant, as H = (γ/2)(σ^x_0 σ^x_1 − σ^y_0 σ^y_1) + (D/2) σ^z_0 σ^z_1. (8) Expression (8) corresponds to the Hamiltonian of two spins-1/2 with anisotropic Heisenberg interaction. The evolution operator of this system can be realized on a quantum computer. Due to the commutation relation [σ^i_0 σ^i_1, σ^j_0 σ^j_1] = 0, the evolution operator can be factorized. It reads as a product of two-spin factors; here for convenience we put ħ = 1. In order to show the quantum tunneling of spin-1 explicitly, we detect the evolution of the mean value ⟨S_z(t)⟩ governed by (1) on a quantum computer. The quantum protocol for these studies is presented in Fig. 1.

Figure 1: Quantum protocol for studies of the evolution of ⟨S_z(t)⟩ governed by Hamiltonian (1), α = t.

In the quantum protocol we consider the initial state of the spin-1 to be |1⟩, which corresponds to the state |00⟩ of the two spins-1/2 (qubits). Also, to construct the protocol in Fig. 1 we take into account that, up to a total phase, the operator exp(−iγα σ^x_0 σ^x_1/2) can be represented as CNOT_01 H_0 P_0(γα) H_0 CNOT_01, where H_i is the Hadamard gate acting on q[i], and CNOT_ij is the controlled-NOT gate acting on qubit q[i] as the control and on qubit q[j] as the target. The operator exp(iγα σ^y_0 σ^y_1/2) can be rewritten as CNOT_01 CZ_01 H_0 P_0(γα) H_0 CZ_01 CNOT_01, where P_0(γα) is the phase gate acting on qubit q[0] and CZ_01 is the controlled-Z gate acting on qubits q[0], q[1]. For exp(−iDα σ^z_0 σ^z_1/2) we have the representation CNOT_01 RZ_1(Dα) CNOT_01, where RZ_1(Dα) is the Z-rotation gate that acts on q[1]. In the quantum protocol of Fig. 1 we also take into account that (CNOT_ij)^2 = 1. We realized the protocol of Fig. 1 for α changing from 0 to 2π with step π/48 on ibmq-bogota. The results of the quantum calculations in the case of D = −2 and γ = 0.5 are presented in Fig. 2. In Fig. 2 we see that during the time interval [0, π] the spin-1 evolves from the state |1⟩ to |−1⟩. During the next time interval [π, 2π] it returns to the initial state |1⟩. This reflects the spin-1 tunneling between the two directions described by the state vectors |1⟩ and |−1⟩.

Detection of the energy spectrum splitting on a quantum computer

To find the energy spectrum of the spin-1 Hamiltonian (1) on a quantum device and detect its splitting, we use the method of quantifying the energy levels of spin systems proposed in our papers [9,10]. In [9] we presented a method for detecting the energy levels of a spin system which is based on studies of the evolution of the mean value of an operator of a physical quantity anticommuting with the Hamiltonian of the system. Because an operator anticommuting with the Hamiltonian does not exist for all spin systems, in [10] we generalized the proposed method of detecting energy levels to the case of arbitrary spin Hamiltonians. In [10] it was proposed to build the total Hamiltonian by adding a probe (ancilla) spin-1/2 and to detect the energy levels on the basis of studies of the probe spin evolution.
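The gate identity quoted above for exp(−iγα σ^x_0 σ^x_1/2) can be checked directly. The sketch below, assuming the standard Qiskit QuantumCircuit/Operator interface, compares the five-gate sequence with the exact matrix exponential up to a global phase.

```python
import numpy as np
from scipy.linalg import expm
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

# Up to a global phase: exp(-i*theta*sx(x)sx/2) = CNOT_01 H_0 P_0(theta) H_0 CNOT_01.
theta = 0.7
sx = np.array([[0, 1], [1, 0]])
target = expm(-1j * theta * np.kron(sx, sx) / 2)

qc = QuantumCircuit(2)
qc.cx(0, 1)          # CNOT with q[0] as control, q[1] as target
qc.h(0)
qc.p(theta, 0)       # phase gate P(theta) on q[0]
qc.h(0)
qc.cx(0, 1)
U = Operator(qc).data

phase = U[0, 0] / target[0, 0]          # strip the global phase
print(np.allclose(U, phase * target))   # True
```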
In this section we apply the method of detection of the energy levels of spin systems presented in [10] to Hamiltonian (1), which describes the spin-1 tunneling problem. So, let us construct the total Hamiltonian as H_T = σ^z_0 (H + C), (11) where H is given by (1) and σ^z_0 is the z-component of the Pauli matrix corresponding to the probe spin-1/2. The constant C has to be chosen so as to shift the energy levels of Hamiltonian (1) to all-positive or all-negative values. The energy levels of H (1) for D < 0 and |D| > |γ| are nonpositive. Therefore we can put C = 0. In this case the energy levels of H_T are related to the energy levels of Hamiltonian (1) as E_T = ±E, where we use the notation E_T for the energy levels of H_T and the notation E for the energy levels of H. The operator σ^x_0 of the probe spin anticommutes with the total Hamiltonian. Let us study its evolution and detect the energy levels of H_T and, as a result, the energy levels of H, as was proposed in [9]. We consider the initial state |ψ_0⟩ = |+⟩|χ, χ⟩, (12) where |+⟩ = (|0⟩ + |1⟩)/√2 is the initial state of the probe spin. The state |χ, χ⟩ is the initial state of the two spins-1/2 representing the spin-1 with Hamiltonian (1), with |χ⟩ = (|0⟩ + e^{iϕ}|1⟩)/√2. The state |χ, χ⟩ belongs to the subspace (7). Note that the state |χ, χ⟩ with generic ϕ includes all eigenstates of H. Calculating the evolution of the mean value of σ^x_0, we find ⟨σ^x_0(t)⟩ = (1/2)(cos^2 ϕ cos 2ω_+ t + sin^2 ϕ cos 2ω_− t + 1). Then the Fourier transformation of the time evolution of the mean value ⟨σ^x_0(t)⟩ has δ-peaks at the frequencies related to the energy levels of H_T and H. We obtain the peak frequencies, for which we use the notations ω_+ and ω_−. To study ⟨σ^x_0(t)⟩ on a quantum device we construct the quantum protocol presented in Fig. 3.

Figure 3: Quantum protocol for studies of the evolution of ⟨σ^x_0(t)⟩ governed by Hamiltonian (11), α = t.

In the protocol of Fig. 3, the Hadamard gates and the phase-shift gates are applied to prepare the initial state (12). We have |ψ_0⟩ = P_1(ϕ)P_2(ϕ)H_0H_1H_2|000⟩. To realize the evolution operator with Hamiltonian (11) we take into account that, up to a total phase, the operator exp(−iDα σ^z_0 σ^z_1 σ^z_2/2) can be represented as CNOT_01 CNOT_12 RZ_2(Dα) CNOT_12 CNOT_01. The operators exp(−iγα σ^z_0 σ^x_1 σ^x_2/2) and exp(iγα σ^z_0 σ^y_1 σ^y_2/2) can, up to a total phase, be represented similarly, respectively. Here RX_i(π/2), RY_i(π/2) are the X- and Y-rotation gates acting on q[i]. Finally, to find the mean value of the σ^x_0 operator we apply RY(−π/2), because the operator σ^x_0 can be represented as σ^x_0 = e^{iπσ^y_0/4} σ^z_0 e^{−iπσ^y_0/4}. So, to calculate the mean value of σ^x_0 on the basis of the results of measurement in the standard basis, the state of qubit q[0] has to be rotated around the Y axis.

Conclusions

Quantum spin-1 tunneling has been observed on IBM's quantum computer, ibmq-bogota. To model the spin-1 on the quantum device we have used its representation by two spins-1/2. We have proposed to detect the spin-1 tunneling by studying the time evolution of the mean value ⟨S_z(t)⟩ of the z-component of the spin-1 operator. It has been shown that the time dependence of ⟨S_z(t)⟩ reflects the oscillations of the spin-1 between two opposite directions resulting from tunneling. The evolution of the mean value of the z-component of the spin-1 operator governed by the Hamiltonian describing spin-1 tunneling has been found (see Fig. 2). For this purpose the quantum protocol of Fig. 1 has been realized on ibmq-bogota. As a result, the spin-1 tunneling between the two opposite directions described by the state vectors |1⟩ and |−1⟩ has been observed on a quantum device.
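To see how the Fourier peaks return the spectrum, one can synthesize the quoted signal ⟨σ^x_0(t)⟩ and transform it. In the sketch below the identification ω_± = |D ± γ| (with ħ = 1) is our reading of the unrecovered notation; D, γ are the values used in the text and ϕ is a generic angle.

```python
import numpy as np

# Probe-spin spectroscopy sketch: read the energies off the Fourier peaks of
# <sigma^x_0(t)> = (cos^2(phi) cos(2 w+ t) + sin^2(phi) cos(2 w- t) + 1)/2.
D, gamma, phi = -2.0, 0.5, np.pi / 4
wp, wm = abs(D + gamma), abs(D - gamma)            # assumed: 1.5 and 2.5

n = 4096
T = 64 * np.pi                                     # integer FFT bins for both lines
t = np.linspace(0.0, T, n, endpoint=False)
sig = 0.5 * (np.cos(phi)**2 * np.cos(2*wp*t) + np.sin(phi)**2 * np.cos(2*wm*t) + 1)

spec = np.abs(np.fft.rfft(sig - sig.mean()))
omega = 2 * np.pi * np.fft.rfftfreq(n, d=T / n)    # angular frequencies
peaks = np.sort(omega[np.argsort(spec)[-2:]]) / 2  # two delta-peaks -> energies
print(peaks)                                       # ~ [1.5, 2.5] = |D -+ gamma|
```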
p21-Activated Kinase 4 Promotes Intimal Hyperplasia and Vascular Smooth Muscle Cells Proliferation during Superficial Femoral Artery Restenosis after Angioplasty The aim of this study is to explore the function of p21-activated kinase 4 (PAK4) in intimal hyperplasia (IH) and vascular smooth muscle cell (VSMC) proliferation. We chose vascular samples from patients undergoing angioplasty of the superficial femoral artery (SFA) as the experimental group and vascular samples from donors without clinical SFA restenosis as the control group. The results show that both the mRNA and protein levels of PAK4 in the experimental group were dramatically increased compared with the control group. IH arose from angioplasty of the SFA. Moreover, overexpression of PAK4 dramatically contributed to the proliferation of VSMCs and promoted cell cycle progression from G0/G1 phase (71.12 ± 0.69% versus 58.77 ± 0.77%, P < 0.001) into S phase (23.99 ± 0.21% versus 31.35 ± 0.33%, P < 0.001). In addition, PAK4 downregulated the level of p21 and enhanced the activity of Akt. We conclude that PAK4 acts as a regulator of VSMC cell cycle progression by mediating Akt signaling and controlling p21 levels, which further modulates IH and VSMC proliferation.

Introduction

Advanced angioplasty and stenting technology are two main therapeutic methods for treating cardiovascular disease. Although therapeutic percutaneous interventions have shown good therapeutic efficacy in diverse vascular beds, such as the abdominal aorta and the iliac arteries [1,2], restenosis and occlusion of the superficial femoral artery (SFA) are frequent after these interventions [3]. Additionally, it has been reported that vascular restenosis, a critical complication after these procedures, is secondary to intimal hyperplasia (IH) [4]. Proliferation of VSMCs is a hallmark of the early pathologic appearance of IH [5,6]. Given this, inhibition of VSMC proliferation is the key to the prevention and treatment of IH. The p21-activated kinases (PAKs) are a family of serine/threonine kinases that are major effector proteins for the Rho GTPases Cdc42 and Rac, which are important for cell morphology and cytoskeletal reorganization [7,8], as well as for various cell processes including proliferation, migration, and survival [9][10][11]. Among them, p21-activated kinase 4 (PAK4) is the most distinctive and most thoroughly studied member. PAK4 is expressed at low levels in the majority of normal adult tissues, and accumulating reports indicate that aberrant expression of PAK4 is closely related to diverse cancers, such as glioma, breast cancer, colon and gastric cancers, and prostate cancer [12][13][14]. Moreover, high expression of PAK4 is closely associated with the proliferation, migration, and invasion of ovarian cancer cells, and with poor prognosis in patients [15]. Of significance, it has been reported that PAK4 plays an important role in the cell cycle in fibroblasts through regulating the level of p21, a key member of the cyclin-dependent kinase- (CDK-) inhibitory protein family [16]. Additionally, PAK4 is highly expressed at the embryonic stage, and knockout of PAK4 results in embryonic lethality, accompanied by anomalies in the heart and placenta and defects in the vascular system [17,18]. However, to date, there is no documented evidence of its pathological significance in VSMC proliferation.
In the present study, we investigate whether PAK4 is involved in vascular restenosis, using vascular samples from patients who underwent angioplasty of the SFA, and in the cell proliferation of VSMCs.

Patients were treated in the standard manner of our practice. The three patients showed clinical restenosis and lower-limb necrosis at 10, 13, and 15 months after the procedure, respectively. The SFA restenosis samples were harvested through amputation above the knee. The control SFA samples were obtained from the corresponding region of donors without clinical SFA restenosis. The SFA samples were promptly washed with PBS and resected longitudinally by the surgeon. Part of each sample was stored immediately at −80°C for qRT-PCR assay and western blot analysis. The remaining samples were embedded in paraffin and prepared for further H&E staining. The inclusion criteria for the experimental participants were as follows: (1) CT angiography (CTA) showing that SFA restenosis occurred after the PTA treatment; (2) being willing to participate in the study. Exclusion criteria included CT angiography (CTA) showing SFA restenosis after other surgery besides the PTA treatment.

Cell Culture. The human vascular smooth muscle cell line T/G HA-VSMC (ATCC number: CRL-1999) was purchased from the American Type Culture Collection. Cells were cultured in Ham's F12K medium with 2 mM L-glutamine supplemented with 10% fetal bovine serum (FBS) at 37°C in a humidified incubator with 5% CO2. Then, cells were trypsinized, transferred to 10 cm tissue culture dishes, and cultured to subconfluence. Cells at passages 4-8 were used. Cells overexpressing PAK4 were obtained by transfection with a PAK4 ORF expression clone vector, using Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol. For further experiments, cells were cultured in 6-well or 96-well plates with serum-free medium for 24 h.

Quantitative Real-Time PCR. Total RNA was isolated using TRIzol reagent (Invitrogen) according to the manufacturer's protocol. For the reverse transcription reaction, the TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems) was applied. qRT-PCR was carried out in a 7500 Fast Real-Time PCR System (Applied Biosystems) using SYBR Green Universal Master Mix (Roche). Briefly, the following PCR program was performed: 50°C for 2 min and 95°C for 5 min, followed by 40 cycles of 15 s at 95°C and 45 s at 56°C. The sequence information is as follows: PAK4, forward primer: 5′-TCCCCCTGAGCCATTGTG-3′ and reverse primer: 5′-TGACCTGTCTCCCCATCCA-3′; β-actin, forward primer: 5′-CTATCGGCAATGAGCGGTTC-3′ and reverse primer: 5′-GATCTTGATCTTCATGGTGCTAGG-3′. Mature PAK4 mRNA levels in cells were measured by the 2^−ΔΔCt method, with β-actin as an internal control.

Cell Proliferation Assay. Cell proliferation of VSMCs was detected via a cell count technique. In brief, cells were seeded in 96-well plates at a density of 1 × 10^4 cells/mL and incubated for 24, 48, and 72 h. Then, cell proliferation was assessed by direct cell count with a Coulter Counter. The experiments were performed three times.

BrdU Incorporation Assay. The viability of the VSMCs was determined by the 5-bromo-2′-deoxyuridine (BrdU) assay according to the manufacturer's instructions. Cells transfected with PAK4 or control vectors were seeded in sterile 96-well culture plates at a density of 2 × 10^5 cells per well with serum-free medium and incubated for 48 or 72 h.
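As a worked example of the 2^−ΔΔCt quantification used above, the following sketch computes a fold change from hypothetical Ct values; the numbers are illustrations, not the study's measurements.

```python
# Relative quantification by the 2^(-ddCt) method; beta-actin is the
# internal control. All Ct values below are invented for demonstration.
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene versus the control sample."""
    d_ct_sample = ct_target - ct_ref              # normalize to beta-actin
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. PAK4 amplifying ~2 cycles earlier (after normalization) in IH tissue
print(rel_expression(24.0, 18.0, 26.0, 18.0))     # -> 4.0-fold upregulation
```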
Then, cells were incubated in medium containing a final concentration of 10 μM BrdU for 2 h. Subsequently, cells were washed and fixed with 2% paraformaldehyde solution for 25 min at room temperature and then washed three times with PBS to discard the culture medium. Pictures were then taken randomly in diverse fields under an inverted fluorescence microscope.

Cell Cycle Assay. Flow cytometry was conducted to analyze the cell cycle. The cells were collected, trypsinized, and fixed with 75% methanol at −20°C overnight, then washed in PBS three times and incubated with PBS containing 10 ng/mL propidium iodide (PI), 100 μg/mL RNase A, and 0.2% Triton X-100 for 30 min at 4°C in the dark. DNA content was monitored using a cell sorter (FACSCalibur, BD).

Western Blot. The PAK4 protein levels of VSMCs and clinical tissues were assayed by western blot. The cells were grown in 6-well culture dishes to 70% confluence. Cells and tissues were lysed with RIPA buffer. After a 15 min incubation, lysates were centrifuged at 12,000 ×g for 15 min, and the collected protein was loaded onto a 10% sodium dodecyl sulfate-polyacrylamide gel (SDS-PAGE) and transferred onto a polyvinylidene difluoride (PVDF) membrane. The membranes were blocked with skimmed milk powder in Tris-buffered saline and then incubated with primary anti-PAK4 (1:500), anti-p21 (1:400), anti-p-Akt (1:1000), anti-Akt (1:300), and anti-β-actin (1:200) antibodies at 4°C overnight, followed by incubation with a horseradish peroxidase-conjugated anti-rabbit secondary antibody (1:5000). The bands were visualized with the enhanced chemiluminescence (ECL) Plus system (Thermo Fisher Scientific).

2.9. Histochemistry and Immunohistochemistry. Vascular tissues were washed with 0.9% saline, fixed with 4% neutral buffered paraformaldehyde, and embedded in paraffin. The intimal and medial lesion sizes were assessed by H&E staining and photographed. Stained specimens were assessed by a pathologist with a light microscope (Leica DM 6000 B; Leica Microsystems, Germany).

Statistical Analysis. All experiments were repeated at least three times in this study. Data are presented as mean ± standard deviation (SD). Statistically significant differences between two groups were assessed via t-test. A P value < 0.05 was considered statistically significant.

PAK4 Expression in Human Vascular Walls with IH. The degree of IH was evaluated morphologically using H&E staining. As shown in Figure 1(a), the intima layer was dramatically thickened in the experimental group compared with the control group. To define the clinical functional role of PAK4, we detected the mRNA and protein expression levels of PAK4 in tissues with IH that arose from angioplasty. qRT-PCR indicated that, compared with the control samples, IH significantly increased the mRNA level of PAK4 (Figure 1(b)). Consistent with this, the western blot results showed that the expression of PAK4 protein was remarkably upregulated by IH (Figure 1(c)). Further details of the patients are listed in Table 1. As shown, there was no significant difference in complications of cardiac and cerebrovascular diseases between the two independent cohorts. These results demonstrated that PAK4 may play a vital role in the pathogenesis of IH originating from angioplasty.

PAK4 Facilitates Vascular Smooth Muscle Cell Proliferation. To gain insight into the pathobiological involvement of PAK4 in IH, VSMCs overexpressing PAK4 were constructed. We verified the efficiency of transfection with qRT-PCR and western blot.
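The two-group comparison described under Statistical Analysis can be illustrated as follows; the triplicate values are invented for demonstration, not the study's data.

```python
import numpy as np
from scipy import stats

# Unpaired t-test on triplicate measurements, as in the Statistical Analysis
# section; the numbers below are hypothetical normalized expression levels.
control = np.array([1.00, 1.05, 0.95])            # mock-vehicle cells
pak4_oe = np.array([3.10, 2.90, 3.20])            # PAK4-overexpressing cells

t_stat, p_value = stats.ttest_ind(pak4_oe, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")     # P < 0.05 -> significant
```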
As a result, PAK4 mRNA and protein levels were significantly higher in PAK4-overexpressing VSMCs than in the control cell line transfected with the mock vehicle (Figures 2(a) and 2(b)). Further, a cell count technique and the BrdU incorporation assay were performed to detect cell proliferation. As shown in Figure 2, overexpression of PAK4 accelerated cell proliferation at 48 h (Figure 2(d)). These data indicated that PAK4 could facilitate the proliferation of VSMCs.

PAK4 Promotes Vascular Smooth Muscle Cell Cycle Progression. The effects of PAK4 on cell cycle progression were also analyzed. Flow cytometry analysis revealed that overexpression of PAK4 markedly increased the number of VSMCs in the G2/M phase and S phase (4.88 ± 0.12% versus 9.88 ± 0.06%, P < 0.001; 23.99 ± 0.21% versus 31.35 ± 0.33%, P < 0.001) (Figures 3(a) and 3(b)). The percentage of cells in G0/G1 phase decreased from 71.12% to 58.77% after PAK4 was overexpressed in VSMCs (Figures 3(a) and 3(b)). These data suggested that PAK4 might promote cell cycle progression from G0/G1 phase into S phase, which further contributed to the proliferation of VSMCs.

PAK4 Modulates p21 Expression and Akt Activation. To define the potential mechanism by which PAK4 signaling regulates cell cycle progression, we investigated the expression levels of several putative cell cycle-related factors in VSMCs. In the PAK4-overexpressing group and the control group, we detected the protein expression levels of p21, p-Akt, and Akt using western blot analysis. As the results showed, the level of p21 protein was dramatically lower in the PAK4-overexpressing VSMC group than in the control cell line (Figure 4(a)). At the same time, overexpression of PAK4 notably increased the phosphorylation of Akt, while having no effect on the total protein level of Akt. Together, the data indicated that PAK4 might mediate cell cycle progression through regulating the expression of p21 and the activation of Akt, further contributing to VSMC proliferation (Figure 4(b)).

Discussion

Percutaneous transluminal angioplasty (PTA) and stent implantation have become a top option in the therapeutic schedule for atherosclerotic diseases, which are mainly manifested as peripheral arterial occlusive disease. However, PTA contributes to the occurrence of restenosis, especially in the SFA, which dramatically decreases the efficacy of treatment [19]. Thus, it is of great importance to inhibit post-PTA restenosis. It is widely accepted that VSMC proliferation is the pathological basis of IH, further resulting in vascular stenosis [20,21]. The PAK protein family is involved in diverse cellular activities and is classified into group I (PAK1-3) and group II (PAK4-6) based on domain organization and regulatory properties [22]. PAK4, the prototype of group II PAKs, has been indicated to be involved in a variety of cellular processes, including cell proliferation, cell motility, and cell cycle regulation [16,23]. The cell cyclin-dependent kinase inhibitor p21 plays an important role in suppressing cell cycle progression. A previous report indicated that knockdown of PAK4 markedly inhibited breast cancer cell proliferation [22]. However, the role of PAK4 in VSMC proliferation has not been explored. Here, clinical studies reveal that IH results in an enhancement of PAK4 expression. Therefore, it is not surprising that PAK4 has an effect on cell proliferation in VSMCs. In our study, VSMC proliferation was increased by the overexpression of PAK4, which confirmed this speculation.
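As a quick arithmetic check, the reported cell cycle fractions for each condition sum to approximately 100%:

```python
# Consistency check of the reported cell cycle fractions (values from the text).
control = {"G0/G1": 71.12, "S": 23.99, "G2/M": 4.88}
pak4_oe = {"G0/G1": 58.77, "S": 31.35, "G2/M": 9.88}
for name, d in (("control", control), ("PAK4-OE", pak4_oe)):
    print(name, round(sum(d.values()), 2))        # 99.99 and 100.0
```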
Improper regulation of the cell cycle could be a vital factor in aberrant cell proliferation. p21, a member of the CDK-inhibitory protein family, suppresses cell cycle progression through the Ras-Raf-MEK-ERK signaling pathway [24,25]. p21 is a short-lived protein whose level is modulated by a ubiquitin-independent proteasomal degradation process [26]. Diverse signaling pathways can contribute to the alteration of the p21 level via different mechanisms. Importantly, it has been documented that PAK4 plays a significant role in the initiation of the cell cycle by regulating the cell cycle regulatory protein p21 [16]. In the present study, we demonstrated by flow cytometry that overexpression of PAK4 promoted cell cycle progression into S phase. Further western blot results revealed that the expression of p21 protein was dramatically downregulated by the overexpression of PAK4 in VSMCs. Our study reports the integrated roles of PAK4 and p21 in VSMCs, which are of great importance in VSMC proliferation and the development of restenosis. It is well established that Akt plays an important role in regulating cell cycle progression and cell survival by acting on diverse downstream targets [27]. Moreover, in VSMCs, Akt signaling has been proved to be closely related to the control of VSMC proliferation, at least partly through modulating the p21 level [28]. Besides, a relationship between Akt and PAK4 has also been previously demonstrated: PAK4 has been shown to be controlled by miRNA-433, which subsequently attenuates Akt signaling, thereby regulating the proliferation of hepatocellular carcinoma (HCC) cells. Following these data, we speculated that PAK4 might have some effect on p21 expression via regulating the activity of Akt signaling in VSMCs. This notion was validated by our findings that overexpression of PAK4 in VSMCs remarkably decreased the level of p21 and simultaneously enhanced the activation of Akt, further mediating cell cycle progression and cell proliferation. Thus, our findings provide experimental evidence that PAK4 exerts its effects on VSMC proliferation and cell cycle progression via regulating Akt signaling and the downstream factor p21.

Conclusions

In conclusion, we provided direct clinical evidence that PAK4 is involved in IH triggered by angioplasty. The results demonstrated that PAK4 mediates IH by promoting the proliferation of VSMCs. Further, we sought to verify the underlying mechanism of this effect, and the data suggested that PAK4 increased VSMC proliferation through activating Akt signaling and downregulating the expression of p21 protein, further resulting in the G0/G1-to-S phase transition of cell cycle progression. These findings contribute to the clarification of the crucial role of PAK4 in IH and might provide potential therapeutic targets for restenosis.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Auditory Hallucinations in a Deaf Patient: A Case Report

This case report describes the progression of symptoms in a young deaf female. Her initial psychotic symptoms occurred at the age of 16, but she did not come into contact with a psychiatric treatment facility before the age of 27, when she found the symptoms distressing. The case report describes the difficulties in evaluating psychotic symptoms in a deaf patient, as well as the use of specialized scales in combination with the standard psychiatric evaluation. The current evidence concerning the prevalence of psychotic symptoms, as well as the influence of deafness on the understanding of psychosis, is described.

Introduction

In a review, Landsberger and Diaz [1] examined the diagnostic and clinical features of deaf psychiatric inpatients and found studies showing prevalence rates of psychotic disorders from 20% to 54%. Authors of early studies suggested that these prevalence rates could represent diagnostic inaccuracies resulting from cultural and linguistic biases [2]. The deaf and hard-of-hearing population have a "deaf culture" that needs to be considered in the diagnostic procedure; this is especially important for deaf persons who are not fluent in spoken/verbal language [3]. Due to suboptimal communication between deaf patients and early caregivers and peers, for example, parents and children of the same age, it is reasonable to suspect that deaf patients could have difficulties with the normal experience of socialization, problem-solving skills, and emotional regulation [4]. The very early functional skill attainment seen in childhood is lacking in deaf patients, which likely contributes to a high rate of deaf patients displaying symptoms of socially inappropriate behaviour, poor self-care, behavioural impulsivity, aggression and self-injury, leading to diagnoses of impulse control disorders, mental retardation, and pervasive developmental disorders [4]. In a recent review, the prevalence rate of psychotic disorders in a sample of deaf inpatients was 43% in the USA [3]. Similar results were found in a population of deaf inpatients living in the UK, with a prevalence rate of 39% [5]. In contrast, studies from other European countries have reported lower prevalence rates, with deaf Dutch inpatients having a prevalence rate of eight percent [6] and deaf Austrian inpatients a prevalence rate of four percent [7]. In this paper, we wish to describe a patient with symptoms of psychosis and congenital deafness and the difficulties observed in our diagnostic process.

Case Presentation

A 28-year-old single female with a family history of depression was admitted to the psychiatric hospital due to suspicion of auditory hallucinations, with voices encouraging her to commit acts of self-harm. The patient has a congenital hearing impairment that was diagnosed at the age of two. The patient began learning sign language at the age of ten and had previously communicated only by lip reading and talking. From the age of ten to 16, the patient received schooling in sign language techniques in a specialized institution for deaf children in Aalborg. Currently, the patient speaks understandable Danish; she still reads lips, but adequate two-way communication is dependent on sign language. The patient was interviewed using the Danish version of the Present State Examination (PSE) [8].
Both the patient and the examiner received a transcript of the PSE questions, thereby allowing the patient to read the questions for herself while they were read out loud by the examiner. Additionally, an interpreter was present to translate each question into sign language for the patient. Due to the general difficulties in the diagnostic process for non-hearing patients, the Atkinson interview was also employed [9]. Its ninety-four questions enable the interviewer to identify how much auditory meaning the patient is putting into voice hallucinations, how much of it is a communication act involving lip reading and sign language, and whether any other types of hallucinations are present.

Somatic Disease Biography. In October 2009, during a period of two weeks, the patient's full field of vision was blurry in both eyes. Magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA) of the cerebrum were conducted in relation to these symptoms, and the results were normal. In November 2009, benign intracranial hypertension (BIH) was diagnosed, with a spinal pressure of 33 mmHg accompanied by symptoms of fatigue, headache, and poor concentration. She was treated with acetazolamide, a carbonic anhydrase inhibitor, over a three-year period; the treatment was phased out without any relapse of symptoms. The patient's hearing difficulties have been investigated on several occasions, most recently in January 2012, utilizing audiograms, which showed that the patient could hear sounds in her left ear but could not recreate any words. In her right ear, she could both hear and successfully recreate words. The right ear was examined with a discrimination test, which determines how well the patient can hear sounds and understand speech under amplified volume and with a hearing apparatus. Thirty-two percent of the time the patient could not distinguish separate words, despite the sound volume being increased to 85 dB. The conclusion was severe hearing loss. In November 2012, a computed tomography scan of the cerebrum (CTC) was conducted during the psychiatric hospitalization to investigate possible organic aetiologies, and the scan was normal.

Psychiatric Disorder Biography. The patient presents three episodes of hallucinatory experiences, as well as symptoms of anxiety, probably related to an assault five years earlier. The initial episode of hallucinations occurred when the patient was 16 or 17 years of age. The hallucinatory voices appeared gradually. Initially, she heard her father's voice, which encouraged the patient with positive comments, similar to his role in her life. Thereafter, she heard her mother's voice, which praised and supported the patient, also in accordance with her real experiences. The voices came from her right ear. The patient described these voices as being loud in her head, at approximately the same volume as her own voice, and perceived them as quite musical. The volume of the voices remained constant; they had a soothing effect on the patient and were not merely hypnopompic or hypnagogic. During the present hospitalization, the patient explained that she had not heard these voices in recent years and that the specific wording of the auditory hallucinations had been forgotten. The hallucinations persisted for two to three years and disappeared without pharmacological treatment. The patient experienced hallucinatory symptoms a second time in connection with the death of her cousin, when she was 17 years of age.
She saw him as clearly as she could see other people, seeing his face and his clothed body and smelling his scent. Communication with him was as it used to be when he was alive. The patient described his voice as alternating in volume, high and low, and experienced his voice as being clear at some time points but faint and indistinguishable at others, making it difficult to comprehend. She experienced a two-way verbal communication, both by hearing his voice and by lip reading. The auditory hallucination seemed to be external, not originating from inside her head. The patient described nonverbal, so-called hidden signals that her deceased cousin sent her; however, she could describe neither the actual way these signals were sent nor their meaning. The patient was aware that other people could not see her dead cousin but was unable to explain this fact. The hallucinations persisted until pharmacological treatment was initiated during hospitalization.

At the age of 25, the patient was physically assaulted and developed anxiety attacks with autonomic symptoms. The symptoms appeared once or twice a month, mostly in the evening when the patient was alone, and lasted for approximately 30 minutes. They appeared spontaneously without any triggering factors and were accompanied by sweating, tachycardia, stomach aches, and flashbacks to the episode of the attack.

The current episode of auditory hallucinations caused her to be admitted to the psychiatric hospital. It occurred in the summer of 2012, when the patient was 27 years old. The patient experienced a male voice that commanded and urged her to perform certain acts in which she was supposed to hurt herself or others, for example, "you should take a knife and stab yourself." The patient characterized the voices as harmful and threatening. There were no delusions in relation to the auditory hallucinations. The voices were heard in both ears and were of the same volume as the patient's own voice, but they were of a higher pitch and felt loud in her head. The voices appeared intermittently several times a day with a duration of five to ten minutes. The symptoms disappeared after initiation of aripiprazole 20 mg per day. The patient was diagnosed with an ICD-10 F28 diagnosis (other nonorganic psychotic disorder).

Discussion

The phenomenon of auditory hallucinations among patients with hearing disorders is poorly examined; currently, the nature of this phenomenon is not fully elucidated. In our case, communication with the patient was compromised due to difficulties in understanding the questions, which had to be reformulated and repeated. It was challenging for the patient to give comprehensive descriptions of the phenomena she experienced. Bailly et al. [10] found similar difficulties when working with hearing-impaired children, where the assessment of psychiatric disorders poses methodological challenges in relation to verbal communication and the diagnostic process. The immature language understanding exhibited by many hearing-impaired or deaf patients hampers accurate psychiatric evaluation. The investigation of psychiatric symptoms is then compromised because many of the assessment procedures are highly verbal and standardized for normal-hearing populations. These difficulties may explain why the prevalence rates of mental disorders in hearing-impaired children and adolescents were found to vary from 15% to 60% [10].
The use of the PSE may increase the risk of false positive psychotic symptoms, because the questions are used in a different population than the one originally intended. More specialized scales and questions may improve the diagnostic process, particularly in this subpopulation of patients with hearing impairments, and patients should, in case of any diagnostic ambiguity, be seen by a psychiatrist specialising in patients with hearing difficulties.

Learning Points

(i) Auditory hallucinations are more common in patients with hearing difficulties.
(ii) Diagnostic procedures should include specialized scales for the evaluation of psychotic symptoms in patients with hearing deficits.
(iii) A thorough somatic examination including neuroimaging should be conducted if psychotic symptoms are suspected.
Use of Mustard Seed Footbaths for Respiratory Tract Infections: A Pilot Study

Objective: Respiratory tract infections (RTIs) are the most commonly treated acute problems in general practice. As an alternative to treatment with antibiotics, therapies from the field of integrative medicine play an increasingly important role in society. The aim of the study was to evaluate whether mustard footbaths improve the symptoms of patients with RTIs. Methods: The study was designed as a pilot study and was carried out as an interventional trial with two points of measurement. Between November and December 2017, six practices were invited to participate; two of them participated in the study. Patients were included who presented with an RTI at one of the involved primary care practices between February and April 2018. Participants in the intervention group used self-administered mustard seed powder footbaths at home once a day, repeated for six consecutive days. The improvement of symptoms was measured using the "Herdecke Warmth Perception Questionnaire" (HeWEF). A variance analysis for repeated measurements was performed to analyse differences between the intervention and control groups. Results: In this pilot study, 103 patients were included in the intervention group and 36 patients in the control group. A comparison of the intervention and control groups before the intervention started showed nearly no difference in their subjective perception of warmth as measured by the HeWEF questionnaire. Participants of the intervention group who used mustard seed footbaths for six consecutive days showed an improvement in four of the five subscales of the HeWEF questionnaire. Conclusions: This study could provide a first insight into a possible strategy for improving the symptoms of RTI by using mustard seed footbaths.

Introduction

Respiratory tract infections (RTIs) are the most commonly treated acute problems in general practice [1]. The treatment of RTI often involves the prescription of antibiotics. However, as RTIs are mostly due to viral infections, antibiotics are not an appropriate treatment [2,3]. Such unnecessary use of antibiotics is considered a major risk factor for developing resistances [4]. Therefore, different national and international initiatives exist to reduce the use of antibiotics in patient treatment. Consequently, the "German Strategy against Antibiotics Resistance" (Deutsche Antibiotika-Resistenzstrategie, "DART") of the German Federal Government recognised the urgent need for an effective infection treatment which simultaneously restricts the use of antibiotics [5]. Furthermore, the World Health Organisation (WHO) strategy on traditional medicine supports the use of integrative medicine and the strengthening of patients' self-care [6]. A systematic review supports the effective use of integrative medicine, especially Chinese herbal medicines, for the treatment of RTI [7]. It has been shown that different biochemical processes are responsible for a positive effect, such as antiviral, antipyretic, and anti-inflammatory actions [7]. Similar effects have been found in the mustard plant, which contains glucosinolates, especially sinigrin. Different studies have shown its beneficial pharmacological effects, including anticancer, antibacterial, antifungal, antioxidant, anti-inflammatory, and wound-healing properties, as well as biofumigation [8,9]. Mazumder et al.
discussed that sinigrin is one of the glucosinolates whose bioactivity should be further explored and whose known activity should be enhanced through optimal delivery to the human body [9]. Moreover, the combination of thermogenic substances like mustard with warm footbaths could have a beneficial effect on the perception of illness. As a complement, warm footbaths could improve immune status [10]. Therefore, it can be assumed that mustard seed footbaths could be one option for reducing the symptoms of RTI. The aim of the current pilot study was to evaluate whether mustard footbaths improve the symptoms of patients with RTI.

Study Design. The study was designed as a pilot study. Two points of measurement were assessed by the questionnaire: before the intervention (T0) and ten days later (T1). Detailed information is shown in Figure 1.

Recruitment of Primary Care Practices and Patients. The recruitment of the primary care practices was based on personal contact. Between November and December 2017, six practices were invited to participate. The practices were located in Lubeck, in the north of Germany. Two practices participated in the study. Between February and April 2018, patients who presented in one of the involved primary care practices with an RTI were invited to participate in this study. Written informed consent was obtained. Adults (over 18 years) with an RTI who suffered from the symptom "cold feet" were included. Exclusion criteria were severe illness in immediate need of antibiotic treatment, ongoing immunosuppressive treatment or conditions, chronic obstructive pulmonary disease, serious renal failure (GFR < 45 ml/min), skin problems, and hypersensitivity to mustard seed.

Intervention. Participants in the intervention group used self-administered mustard seed powder footbaths at home once a day, according to the given instructions, repeated for six consecutive days. A hot mustard seed footbath consists of a footbath at 40°C to which three tablespoons of ground black mustard seeds have been added and stirred. The water must come above the ankle joints. The footbath is applied for 7 minutes. Then, the feet are washed with warm water and a calming/nurturing oil can be applied. At the start of the footbath, there is a quick sensation of warmth, a little burning sensation, and a slight tingly irritation of the skin. This feeling changes into a plateau of intense superficial heat. Moreover, the patients' feet grow warm, and they experience slight sleepiness and very often a feeling of well-being. The footbath is to be finished after seven minutes, or just before the skin gets irritated and the strong smell and the burning sensation of the skin grow uncomfortable. After each footbath, participants are invited to lie down and take a 30-minute break without interruptions by television or mobile phones. Applied in the evening, a mustard seed footbath makes the patient sleepy and allows them to fall asleep quickly and easily with warm feet.

Measurements. The "Herdecke Warmth Perception Questionnaire" (HeWEF) was used for subjective ratings of warmth [11]. The questionnaire assesses sensations of body warmth through up to 28 different items on a five-point rating scale, ranging from 0 "fully agree" to 4 "fully disagree." These items are summarized into 5 scales by calculating the sum score. See Table 1 for a description of the 5 scales and the corresponding items. The questionnaire has been validated and showed good psychometric properties [12].
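As a concrete illustration of the scoring step just described, the following minimal Python sketch sums item-level HeWEF responses into subscale scores. The item-to-scale mapping shown here is a hypothetical placeholder; the actual assignment of the 28 items to the 5 scales is given in Table 1 of the paper and is not reproduced here.

```python
# Minimal sketch of HeWEF scoring: items rated 0 ("fully agree") to
# 4 ("fully disagree") are summed into subscale sum scores.
# The item-to-scale mapping below is a placeholder, not the published
# Table 1 assignment.
import pandas as pd

# One row per participant; columns item01..item28 hold ratings 0-4.
responses = pd.DataFrame(
    {f"item{i:02d}": [0, 2, 4] for i in range(1, 29)},
    index=["p1", "p2", "p3"],
)

# Hypothetical mapping of items to the five HeWEF scales.
scales = {
    "sensation_of_cold": ["item01", "item02", "item03"],
    "need_for_warmth": ["item04", "item05"],
    "devotion": ["item06", "item07", "item08"],
    "exhilaration": ["item09", "item10"],
    "unwellness": ["item11", "item12"],
}

# Sum score per scale and participant.
scores = pd.DataFrame(
    {name: responses[items].sum(axis=1) for name, items in scales.items()}
)
print(scores)
```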
Furthermore, sociodemographic data of the participants, such as age and gender, were collected. The reasons for the recommendation of a mustard seed footbath were assessed using seven possible response options (e.g., common cold, cold feet, and chest cold).

Data Analysis. Data evaluation was carried out using the statistics programme SPSS 25.0 (IBM Inc.). Differences between the sociodemographic data of the control and the intervention group were analysed using Student's t-test for continuous variables, as appropriate, and the chi-square test for categorical variables. The HeWEF was analysed descriptively. Group differences for each item of the questionnaire were evaluated using the nonparametric Wilcoxon rank-sum test. Furthermore, a variance analysis for repeated measurements was performed. First, the control and intervention groups were compared regarding the time effect and the interaction (time × group) effect over the two measurement points. In addition, the intervention and control groups were each examined for a time effect over the two measurement points.

Results

The sociodemographic data of the intervention and the control group are listed in Table 2. Significantly more female patients participated in the intervention group than in the control group (74.8% vs. 44.4%). Patients in the intervention group were significantly older than those in the control group (49.9 years vs. 40.1 years). The main reason for participating in the intervention group was the common cold (69.9%).

Herdecke Warmth Perception Questionnaire - Descriptive Analysis. Table 1 presents the descriptive data of the HeWEF questionnaire for the intervention and the control group before the intervention with mustard seed footbaths started. With the exception of four items ("I freeze a lot," "I tend to shiver a lot," "I'm physically fine right now," and "I'm psychologically fine right now"), no significant differences were found between the intervention and the control group regarding the HeWEF questionnaire.

Longitudinal Effects of Mustard Seed Footbaths in the Intervention and the Control Group. The longitudinal effects of mustard seed footbaths in the intervention and the control group are presented in Table 3. The variance analyses for repeated measurements showed a time effect in nearly all dimensions of the questionnaire, with the exception of "need for warmth" (F = 0.07; P = 0.79). No statistical significance was observed for the time × group effect. However, a nonsignificant tendency was found in the dimension "devotion" (F = 2.78; P = 0.09) in favour of the intervention group.

Longitudinal Effects of Mustard Seed Footbaths. For the analysis of longitudinal effects, only subjects who had completed both measurements in the intervention group (n = 88) and in the control group (n = 30) were included in the variance analyses of repeated measurements, shown in Tables 4 and 5. In these analyses, no interaction effects were considered. In the intervention group, significant improvements were observed for the dimensions "sensation of cold" (F = 20.15; P < 0.01), "devotion" (F = 15.4; P < 0.01), "exhilaration" (F = 5.89; P = 0.02), and "unwellness" (F = 17.41; P < 0.01). With the exception of "unwellness" (F = 11.29; P < 0.01), no significant improvements were observed in the control group over the two measurement points (Table 5).

Discussion

To our knowledge, there has been little research on the effects of mustard seed footbaths on the improvement of symptoms of RTI. Our results showed that more female patients were included in the intervention group.
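To make the Data Analysis steps described above concrete, here is a minimal Python sketch of two of them: the baseline group comparison with a Wilcoxon rank-sum test, and the per-group repeated-measures analysis of the time effect over the two measurement points (as in Tables 4 and 5). The authors used SPSS 25.0, so this is only an equivalent re-expression, not their actual procedure; the toy data, the column names, and the omission of the time × group interaction model are assumptions made for brevity.

```python
# Minimal sketch of the analysis pipeline described in the Data Analysis
# section, re-expressed in Python instead of SPSS. All data are toy values.
import pandas as pd
from scipy.stats import ranksums
from statsmodels.stats.anova import AnovaRM

# Long format: one "sensation of cold" sum score per subject and time point.
data = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["intervention"] * 6 + ["control"] * 6,
    "time": ["T0", "T1"] * 6,
    "cold": [12, 7, 11, 6, 10, 8, 11, 10, 10, 9, 12, 11],
})

# Baseline (T0) group difference, nonparametric Wilcoxon rank-sum test.
t0 = data[data["time"] == "T0"]
stat, p = ranksums(
    t0.loc[t0["group"] == "intervention", "cold"],
    t0.loc[t0["group"] == "control", "cold"],
)
print(f"baseline rank-sum: stat = {stat:.2f}, P = {p:.3f}")

# Per-group repeated-measures ANOVA of the time effect (T0 vs. T1),
# analogous to the analyses behind Tables 4 and 5.
for grp, sub in data.groupby("group"):
    fit = AnovaRM(sub, depvar="cold", subject="subject", within=["time"]).fit()
    print(grp)
    print(fit)
```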
The patients of the intervention group were more than 9 years older than those in the control group. Nearly 15% of the participants in the intervention group stopped their participation in the intervention and did not respond to the questions after six days. It can be assumed that participation in such an intervention is acceptable and feasible. However, a further qualitative study would be helpful to evaluate the attitudes and experiences of participants who used mustard seed footbaths. In this pilot study, the comparison of the intervention and the control group before the intervention showed nearly no difference in their subjective perception of warmth as measured by the HeWEF questionnaire. For the participants of the intervention group who used mustard seed footbaths for six consecutive days, an improvement was observed in four of the five subscales of the HeWEF questionnaire: "sensation of cold," "devotion," "exhilaration," and "unwellness." The results of our study compare favourably with a study which showed that mustard footbaths increase the warmth perception of the feet as measured using the HeWEF questionnaire [13]. Moreover, it can be assumed that mustard seed footbaths have a positive effect on the patients' well-being. Footbaths as a complementary treatment option have a positive impact on immune function and on patients' health due to their thermogenic effect [10,14]. It has also been found that footbaths can lead to a reduction in stress [15]. Therefore, the relaxing effects of footbaths in combination with mustard plants could lead to a reduction in the perceived symptoms of RTI. Different herbal preparations may be effective for the treatment of RTI or the common cold [16,17]. Mustard, as a member of the Brassicaceae family, is amongst the oldest recorded spices. A review shows that mustard is used as a medicinal remedy for the treatment of different conditions such as bronchitis or diabetes [18]. Moreover, it has been reported to be used against colds and the flu [18]. Mustard plays an essential role in holistic herbal medicine, especially in Australia and New Zealand [18]. Nonspecific effects of treatment have to be considered in this context too: on average, up to 30% of an effect may be due to nonspecific aspects of care [19]. Future study designs should therefore control for this aspect.

Limitations. Despite the positive results reported here, our study has some limitations. The study was conceptualised as a pilot study with an explorative design. The presented results are therefore potentially strongly biased. A replication of this study with a larger sample size (intervention and control group) could help shed light on these matters and confirm the effects of footbaths with mustard seed in more detail. In view of the voluntary nature of participation in this study, which required a certain amount of motivational readiness on behalf of the subjects, we cannot generalise the results.

Conclusions

In spite of the mentioned limitations, this study could provide a first insight into a possible strategy for improving the symptoms of RTI by using mustard seed footbaths. The effect size of our pilot study was small. Therefore, further studies with slightly modified designs, especially randomised trials, are needed to establish the robustness of the possible effects of footbaths with mustard seeds.
Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

The funder was not involved in the study design; in the collection, analysis, and interpretation of data; in the writing of the manuscript; and in the decision to submit the manuscript for publication.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Clinical Investigation

The Master of Science in Clinical Investigation (MSCI) and the Certificate in Clinical Investigation (CI) are programs for young investigators committed to pursuing academic careers in clinical research. The unique MSCI degree combines didactic course work with mentored research and career development opportunities, and it provides students with the knowledge and tools needed to excel in the areas of clinical investigation most relevant to their careers. The CI certificate is made up of the core MSCI didactic course work in study design, research implementation, statistical approaches, responsible conduct of research, scientific communication and literature critique, leadership, and community engagement. Clinical investigation programs offered through the Washington University School of Medicine are sponsored by the Clinical Research Training Center (https://crtc.wustl.edu/) and the Institute of Clinical and Translational Sciences (http://icts.wustl.edu/).

• Pursue one of four concentrations: Translational Medicine, Genetics/Genomics, Clinical Investigation, or Dissemination and Implementation (https://crtc.wustl.edu/msci-concentrations/), with each concentration providing focused training that is tailored specifically to a student's interest within clinical and translational research
• Attend a weekly multidisciplinary seminar to learn about alternative research designs and methods through the discussion and presentation of peers' research and to obtain key feedback from senior faculty and peers with expertise in their fields
• Attend monthly career development sessions to learn best practices in areas critical to success in clinical research, including grant writing, data management, intellectual property management, budgeting, ethics and other areas
• Complete a thesis requirement (https://crtc.wustl.edu/thesisrequirement/) consisting of a manuscript of original clinical research submitted for publication
• Participate in a formal, structured mentorship program that offers an opportunity to work alongside faculty renowned for their innovative clinical research and teaching experience

Location

Core courses are held on the School of Medicine campus after 4:00 p.m. to accommodate working professionals and full-time students participating in mentored research activities.

Research

While in the program, scholars conduct their own clinical research projects. These projects must receive Institutional Review Board approval, and they need to involve either patients, human tissue, human cell lines or clinical data. The resulting thesis manuscript cannot be a review article, case report or case series. Multidisciplinary mentors and leaders guide research projects and encourage career development activities. Research in progress is presented at multidisciplinary seminar sessions during which peer and mentor feedback is received. Program graduates have published more than 740 peer-reviewed manuscripts; secured more than 100 federal, state and privately sponsored grants; and presented at more than 1,000 conferences, symposia and meetings locally, nationally and internationally.

M17 CLNV 503 PIRTT Mentored Independent Research
Trainees earn Predoctoral Interdisciplinary Clinical Research Training Mentored Independent Research credits for conducting clinical research, completing a report, and developing and presenting a poster describing their work. They are also expected to attend a half-day research symposium in the fall with other clinical investigators.
Mentored Independent Research will be presented each semester to an advisory committee that includes the scholar's departmental mentors as well as Clinical Research Training Center program faculty. The research presented will be in the form of a research paper submitted for publication in a peer-reviewed journal. Under some circumstances, a grant application submitted for review will be acceptable in place of the research paper. PICRT Mentored Independent Research will provide scholars with the practical application of skills learned in the Clinical Research Training Program didactic course work and seminars. Open to CRTC Predoctoral Program scholars only. Credit variable, maximum 6 units.

M17 CLNV 510 Ethical and Legal Issues in Clinical Research
This course prepares clinical researchers to critically evaluate ethical and regulatory issues in clinical research. The principal goal of this course is to prepare clinical researchers to identify ethical issues in clinical research and the situational factors that give rise to them, to identify ethics and compliance resources, and to foster ethical problem-solving skills. The course aims to deliver practical guidance for investigators through discussion of critical areas of clinical research ethics. An additional aim of the course is to enable participants to recognize the different ways in which research participants may be vulnerable and the ethical issues raised by including and excluding vulnerable participants. By the end of the course, participants will understand the regulatory framework that governs human subjects research and the distinction between compliance and ethics; be able to identify major ethical concerns in the conduct of clinical research, including situational factors that may give rise to ethical concerns; and be able to apply an ethical problem-solving model in clinical research. Please contact the MSCI Program for permission to enroll in this course. Credit 2 units.

M17 CLNV 5110 MTPCI Mentored Independent Research
Scholars earn Mentored Independent Research credits for conducting clinical research, completing a report, and developing and presenting a poster describing their work. They are also expected to attend a half-day research symposium in the fall with other clinical investigators. Mentored Independent Research will be presented each semester to an advisory committee that includes the scholar's departmental mentors as well as Clinical Research Training Center program faculty. The research presented will be in the form of a research paper submitted for publication in a peer-reviewed journal. Under some circumstances, a grant application submitted for review will be acceptable in place of the research paper. MTPCI Mentored Independent Research will provide scholars with the practical application of skills learned in the Clinical Research Training Program didactic course work and seminars. Open to CRTC Postdoctoral Program scholars only. Credit variable, maximum 4 units.

M17 CLNV 513 Designing Outcomes and Clinical Research
This course covers how to select a clinical research question, outline a research protocol, and execute a clinical study. Topics include: subject selection, observational and experimental study designs, sample size estimation, clinical measurement, bias and confounding, and data management. The course is designed for health care professionals who wish to conduct patient-oriented clinical research. Students incorporate research design concepts into their own research proposal.
The course consists of lectures, weekly problem sets, weekly reading assignments, outlining a research protocol, and a final exam. Credit 3 units.

M17 CLNV 5140 MTPCI Research Seminar
Weekly seminar series are required for Postdoctoral Program and Career Development Program scholars for four semesters, one credit per semester. An important learning experience in research is the presentation and critical discussion of research ideas and projects at various points in their evolution. Seminars will alternate discussion of work in progress with critical reading of current clinical research in order to practice and enhance analysis and communication skills. Each scholar will formally present their own research in progress twice per year for feedback by peers and faculty from multiple disciplines. In addition to presenting their own work in oral and written form for peer and faculty evaluation, scholars will formally review the written proposals of their peers in a way that emulates the duties of a member of an NIH study section. This formal research evaluation exercise is a highly successful element of other clinical training instruction at Washington University. The program director and co-directors will lead a weekly seminar with participation of other core faculty. The weekly, small group, intensive discussions of research issues are one of the most valuable aspects of the program, allowing scholars to learn in an active and participatory fashion. Open to CRTC Postdoctoral Program scholars only.

M17 CLNV 515 PIRTT Research Seminar
Pre/Postdoctoral Interdisciplinary Research Training in Translation (PIRTT) Seminar. Two semesters of this course are required for the TL1 Scholars. This course alternates faculty presentations, research-in-progress discussions, and reading and journal discussions. CRTC scholars only. Credit 2 units.

M17 CLNV 518 Drug and Device Development
This course will provide an overview of the commercial development pathways for both pharmaceuticals and medical devices, from inception to market. Through lectures and discussions, students will gain an appreciation for the role clinical study programs play in the broader scope of product development. Class topics will include preclinical, clinical, regulatory, and marketing factors which influence discovery and development of new medical products. Same as U80 CRM 518. Credit 3 units. UColl: OLI

M17 CLNV 520 Entrepreneurship for Biomedicine I
Today's biomedical research trainees have the opportunity to pursue multiple career paths within academic, industry, nonprofit, and entrepreneurial settings. In addition to scientific and technical expertise, today's trainees need additional skills in innovation and entrepreneurship (I&E) to take advantage of this opportunity. This course is designed to teach these skills. It consists of seven "nanocourses" focused on different aspects of the entrepreneurial process. Throughout the course, trainees will work to identify an innovation and assess a new academic, entrepreneurial, or nonprofit venture to bring that innovation to market. Nanocourses are taught by successful real-world entrepreneurs and experts in their fields. The primary instructional methods are via video and hands-on learning experiences, with some supplementary reading. To succeed in this class, students should be prepared to work with their peers and coursemasters using online communication tools both inside and outside Canvas. Credit 1 unit.
M17 CLNV 522 Introduction to Statistics for Clinical Research
This is an introductory course in statistics with a focus on the use of statistical analysis in clinical research. It is taught using SPSS, statistical analysis software commonly used in clinical research. The course teaches basic statistical methods with which clinical researchers will have the facility to execute their own analyses. Credit 3 units.

M17 CLNV 524 Intermediate Statistics for the Health Sciences
This course builds upon Introduction to Statistics for Clinical Research (M17-522) and will focus on SPSS, Cox proportional hazards, generalized linear models, multiple linear models, ANOVA, repeated measures, regression, applied modeling, 2X2, ROC curves, checking assumptions and regression diagnostics. Completion of this course will enable clinical investigators to work independently with their own data and run their own analyses. Content will include data sets with applied exercises, interpreting output, lab assignments, and a midterm and final exam. Course director is Mark Walker, PhD, and instructor is Brian Waterman, MPH. Prerequisite: M17-522. Credit 3 units.

M17 CLNV 528 Grantsmanship
Scholars will learn how to 1) develop research and career development grant proposals that incorporate well-formulated hypotheses, rationales, specific objectives and long-range research goals; 2) organize and present sound research and career development plans that accurately reflect the ideas and directions of the proposed research activities; and 3) avoid many common grant-writing mistakes. Scholars will also learn about the peer review process for grant evaluations and will participate in a mock NIH review exercise (study section) at the end of the semester. Though it is not required, scholars will get maximum benefit from the class if they are working on grant proposals. Credit 2 units.

M17 CLNV 529 Scientific Writing and Publishing
The objective of this course is to teach the proper techniques of writing and publishing a biomedical manuscript. Writing a working title and structured abstract as well as hand drawing of figures and tables is covered. Publishing strategies are also discussed. Credit 2 units.

M17 CLNV 532 Genomics in Medicine I
This course introduces principles of genomics in medicine as they apply to clinical research and provides a practical background in molecular biology and genetics. Students will be provided with an introduction to genomic research and applications of genomic technologies in the research environment and an understanding of the clinical application of genetic/genomic knowledge. Critical thinking and scientific/analytic competencies are emphasized through weekly lectures by renowned faculty. Reflection papers are required. Prior clinical research experience is helpful but not required. Course options include face-to-face, hybrid and online. Credit 1 unit.

M17 CLNV 533 Genomics in Medicine II
This course introduces principles of genomics in medicine as they apply to clinical research and provides a practical background in molecular biology and genetics. Students will be provided with an introduction to genomic research and applications of genomic technologies in the research environment and an understanding of the clinical application of genetic/genomic knowledge. Critical thinking and scientific/analytic competencies are emphasized through weekly lectures by renowned faculty. Reflection papers are required.
Students may enroll in this course even if they have not taken Genomics in Medicine I (M17-532). Prior clinical research experience is helpful but not required. Course options include face-to-face, hybrid and online. Credit 1 unit.

M17 CLNV 540 Introduction to Dissemination and Implementation Science
Upon successfully completing this class, scholars will be able to: describe the need for dissemination and implementation research, compare theories and frameworks in the field, and select the appropriate designs, strategies, outcomes, and measures for implementation studies. Scholars will also: understand the importance and language of D&I basic science, explore the theories and frameworks that are commonly used in D&I research and practice, describe the importance of context at multiple levels in D&I science, distinguish implementation strategies and outcomes from those in efficacy and effectiveness research, describe various study designs, methods, and measures that support D&I science, understand D&I methods and challenges across various settings and populations, recognize opportunities to apply D&I science to intervention development and evaluation, and understand how D&I science can further their research/practice plans and careers. Credit 3 units.

M17 CLNV 541 Implementation Science: Approaches in Local, Regional and Global Contexts
This course will address key conceptual dimensions of implementation science in the setting of social, political and organizational constraints due to inadequate or uncertain investments in health. Such environments create distinctive needs for implementation of evidence-based interventions. These contexts are common in Low- and Middle-Income countries, where human resources for health, health care infrastructure and investments in social protections are limited. However, limited, concentrated and racialized investments in public health are also characteristic of High-Income Countries, and lead to health disparities. In both cases, this structural setting shapes the limitations in availability of evidence-based interventions for health, as well as the strategies needed to overcome those barriers. Specific topics in the course will touch on the current global and regional distribution of disease burden; parallels between health systems in Low- and Middle-Income countries and regions locally; global trends and efforts around the use of evidence-based treatments for major infectious diseases (e.g., HIV, TB and Malaria) and the rising burden of cardiovascular disease; and, within Global Health, theories and frameworks on health disparities (e.g., postcolonial studies, Black scholarship). Topics will also include conceptualizing implementation strategies (and implementation outcomes) appropriate for this setting and tailored for these constraints; study designs (e.g., stepped wedge, natural experiments) frequently employed in such settings; and the notion of preferences and personalization in public health delivered in such environments. Credit 3 units.

M17 CLNV 5544 Developing and Evaluating Implementation Strategies in Health and Social Services
Internationally, there is a substantial gap between the establishment of effective interventions and their delivery in routine practice. Implementation research has emerged as a means of addressing that gap.
It is defined as "the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices" to improve the quality of service delivery in routine care settings (Eccles & Mittman, 2006). It includes the study of influences on professional and organizational behavior that impact implementation effectiveness. This course focuses on developing and evaluating implementation strategies, the methods and techniques that are used to enhance the adoption, implementation, sustainment, and scaling up of effective interventions. It is intended for graduate students, postdoctoral students, staff, and faculty in public health, social work, medicine, and other areas of health science who are interested in developing and/or testing strategies to promote improved implementation of effective health and social service interventions. Same as S55 MPH 5554. Credit 3 units.

M17 CLNV 588 Epidemiology for Clinical Research
The purpose of this course is to provide an understanding of the use of epidemiological concepts and methods in clinical research. Two primary foci are included: 1) common applications of epidemiologic principles and analytic tools in evaluating clinical research questions; and 2) student development of skills to review and interpret the medical literature and utilize publicly available datasets to address clinical research questions. Same as M88 AHBR 588. Credit 3 units.

M17 CLNV 589 Advanced Methods for Clinical and Outcomes Research
This course focuses on the application of advanced epidemiologic principles and outcomes research as applied to clinical research. Students study the tools used in clinical research, in clinical issues, and in understanding the medical literature concerning these issues, which are crucial for making informed decisions in the care of patients. Critical thinking and scientific/analytic competencies are emphasized throughout the course. Prerequisite: M17 513. Credit 3 units.
Will plain packaging of cigarettes achieve the expected? Perceptions among medical students

INTRODUCTION: Plain packaging is one of the critical strategies for eliminating the promotion of tobacco products. Evidence indicates that plain packaging decreases the attractiveness of tobacco products and enhances the effectiveness of health warnings. This study aimed to explore the perceptions of undergraduate medical students of plain packaging and new pictorial warnings before they came into use in Turkey. METHODS: This qualitative study was carried out among undergraduate students in a Medical School in Istanbul in 2019. Participants were recruited through purposive sampling, and data were collected through focus group discussions. The participants were asked to discuss their perceptions regarding one original branded pack and ten plain package models. All discussions were audiotaped, and thematic content analysis was conducted. RESULTS: A total of 72 students participated in the study. None of the students had seen plain packaging before. Most of the students perceived plain packaging as more favorable than the branded packs. The terms used to describe the plain packages were: 'appealing/desirable', 'attractive', 'beautiful', 'cool/eye-catching', 'charming', 'elegant', and 'special'. Some students indicated that they would have preferred plain packs over the branded ones if both types of products had been in the market, provided they were of the same brand. Pictorials had different impacts based on their content: while outer body deformities were perceived as 'real' and provoked unfavorable feelings, inner organ images were defined as 'imaginary' and had little to no impact. CONCLUSIONS: Plain packaging was perceived as a more attractive alternative to the conventional branded packs among most participants. We must be aware of the unforeseen effects of plain packaging among different subgroups in the new generations. We suggest using outer body deformities in the pictorials more frequently, owing to their higher impact.

INTRODUCTION

Packaging is the most well-known tobacco marketing strategy in countries where advertising and promotional material are prohibited 1,2. The Framework Convention on Tobacco Control (FCTC) proposes measures to combat this strategy. FCTC Article 11 indicates that tobacco product packaging and labelling should not promote a product and that packaging should contain health warnings explaining the harmful effects of tobacco use in the form of pictograms 3. Health warnings and pictograms should be large, clear, visible, legible and culturally appropriate. FCTC Article 13 also requires that advertising, promotion and sponsorship of tobacco products be banned 3. Plain packaging is proposed as a key measure to support the implementation of Articles 11 and 13 of the FCTC. With plain packaging, the use of logos, colors, brand images and promotional information on the packaging is prohibited, and product names are displayed in standard color and font styles 4,5. Plain packaging is thus expected to decrease the appeal and attractiveness of packages and to eliminate the effects of advertising and promotion on the packaging [5][6][7]. Plain packaging is also expected to increase the noticeability and effectiveness of health warnings and to curb industry package design techniques that present some products as less harmful [5][6][7].
Studies evaluating the effectiveness of plain packaging in smoking prevention and cessation yield relatively consistent evidence [6][7][8]. Plain packaging has been reported to reduce the appeal of tobacco products and to result in a negative perception of smoking [6][7][8]. Plain packaging has also been shown to enhance the effectiveness of health warnings by increasing the salience of pictorials on the packs. Consequently, plain packaging is suggested to reduce initiation and experimentation, resulting in a higher motivation to quit and lower purchase intentions [7][8][9][10][11]. Turkey introduced plain packaging with the amendments to Law No. 4207 on Prevention and Control of Hazards of Tobacco Products in December 2018 12. The amendment required tobacco products to be marketed in plain packages and allowed the trademark on only one side of the pack, covering a maximum of 5% of the surface area. The amendment also included an increase in the size of the pictorials from 65% to 85%. Plain packaging was put into force in January 2020, and branded tobacco products were not allowed in the market after that. This study aimed to explore the perceptions of undergraduate medical students of plain packaging and the new pictorial warnings before the amendment was implemented in Turkey. Medical students were selected as the study population because smoking is prevalent among this group; almost one in five students is a smoker in Turkey 13.

METHODS

Design. This is a qualitative study which was carried out in 2019. The study protocol was developed using the Qualitative Research Review Guidelines (RATS).

Setting and participants. The study was carried out in a Medical School in Istanbul. Undergraduate students who were current smokers, ex-smokers, or never smokers were selected through purposive sampling and invited to participate in the study.

Procedure. Eleven cigarette packages, one original branded pack and ten plain package models, were used in this study. The branded pack was obtained from the market. The researchers designed the plain package models, since plain packages were not available in the Turkish market at the time of the study (Figure 1). The colors, font styles, trademark size and pictograms were designed in line with the Regulation on the Procedures and Principles Related to the Production Methods, Labeling and Surveillance of Tobacco Products 14. The models did not have brand names; only the word 'brand' was printed on the packs in the font and size specified by the amendment. The color was dark green, and the pictorials appeared on both sides of the packs, as required by the new regulation. There were no cigarettes inside the packages. The plain packages released to the market soon after our study were very similar in design to the models we used in this research. Data were collected through Focus Group Discussions (FGDs). Each focus group was formed homogeneously in terms of the students' smoking status and clinical phase (preclinical/clinical). FGDs comprised 6-8 participants and were carried out with a moderator and an observer around a round table. A semi-structured interview guide was used. FGDs were initiated with a general discussion on smoking history and motives for choosing a cigarette package. Then, each box was presented, and the group members were asked to discuss their perceptions and compare the branded and plain package models. The impact of each pictorial on the plain packages was also evaluated. Eleven FGDs were conducted, until the data reached saturation.
Data analysis. All FGDs were audiotaped after the participants provided informed consent. Recordings were transcribed verbatim, and thematic content analysis was conducted. Two researchers read the transcripts several times and identified and coded the idea elements. The codes were discussed, revised, and grouped into themes with a subgroup of authors, and the final coding framework was developed. Texts were coded with the identified themes, and an inductive approach was used. Disagreements were resolved with the subgroup of authors through consensus.

RESULTS

A total of 72 students participated in the study; 28 were female, and 41 were final-year students. Among them, 50 were current smokers, 9 were ex-smokers, and 13 were non-smokers. The age of the participants ranged from 18 to 26 years, with a mean of 22.1 ± 2.0 years.

Perceptions about plain packaging. The students were not familiar with the term plain package. Few students had heard the term 'plain packaging' before, and none had seen one. Plain packaging was perceived as more favorable than the branded cigarette packs by most of the students. A positive perception was expressed concerning the aesthetic look; the participants described the design of the plain packs as 'appealing/desirable', 'attractive', 'beautiful', 'cool/eye-catching', 'charming', 'elegant', and 'special'. One student indicated that the appealing features were related to the 'minimalist' design of the packs: the simple figure created a stylish look in line with the world's new trends, whereas the branded packs were perceived as 'old fashioned':

Participant: 'I like it more (referring to the plain package), to be honest… It has a non-eye-straining, more minimalistic design; it makes me drawn/interested.'
Moderator: 'Minimalist design? Do you find it aesthetic?'
Participant: 'And the world is now … Yes. I think these things are wrong when the world is going to minimalist designs … Because minimalism is ahead of fanciness both in advertisements and in products. This kind of design (plain packaging) wouldn't be beneficial (for tobacco control).' (Male, smoker)

The aesthetic appeal created a positive image regarding the quality of the cigarettes; the products in the plain packs were evaluated as 'good quality' and 'reassuring'. The quality of the product, in turn, served as an identifier of the user. Plain packages were said to serve as a symbol for high-class or elite groups, while branded packs were accessible to everyone. The terms used to describe the aesthetics of plain packages, product quality and perceived smoker identity are listed in Table 1. Some of the students, mostly the girls, indicated that they liked the dark green color of the plain packages. A female smoker said that the dark green color reminded her of 'olives', and she associated this color with being 'healthy'.

Purchase intentions about plain packaging. The positive perception of the plain packages was transferred to the product quality and reflected in purchase intentions. The students mainly indicated that, if both types of products had been in the market, they would have preferred plain packs over the branded ones, provided they were of the same brand.

Perceptions about the pictorials. The participants indicated that the pictorials on the plain packages were more eye-catching and vivid compared to those on the branded ones. This visibility made the health warnings more 'striking' on plain packages.
Some participants indicated that the presence of pictorials on both sides of the packs and the textual warnings appearing on the lid were disturbing.

Pictorials were observed to have diverse effects based on their content. Most of the participants indicated that pictorials of physical deformities visible on the outer body were very disturbing. Some of the students indicated that this was related to perceiving visual appearance as more important than health in the short term. Students also stated that they had actually seen patients with such outer deformities in the course of their lives. So, they had 'related' these pictorials to exposures to similar patients and labelled them as 'real'. The pictorials with a tracheotomy opening, damaged teeth, and foot gangrene (Figure 1, pictorials 1-3) were listed under this category. On the other hand, pictorials presenting inner-organ pathologies were perceived as more 'intangible'.

The pictorial about blindness (Figure 1, pictorial 4) was the only outer-body image that did not bring a disease to mind. Most students did not associate blindness with smoking because they were not fully aware that smoking could damage the eyes. Others indicated that blindness could develop only after a very long duration of smoking. Still, the pictorial was mainly evaluated as effective because it gave the impression of being observed/watched while carrying on an unacceptable behavior such as smoking.

Pictorials showing inner-organ pathologies such as brain hemorrhage (Figure 1, pictorial 5) and various lung deformities were perceived as less disturbing. The participants indicated that these images were not recognized as actual parts of the body or an organ system. The students believed these pictorials were 'fictitious' and did not reflect 'real-life' situations:

'It [Figure 1, pictorial 5] appears like a poor-quality horror film image to me, it is made with Photoshop, and I laughed at it; it didn't seem scary; it seemed funny.' (Female, smoker)

However, if the internal organ was pictured with its connection to the outer body surface, then it was also perceived as disturbing. The image, which was referred to as the 'autopsy lung' by the students (Figure 1, pictorial 6), had a strong impact:

'… [Figure 1, pictorial 6], I think it was the previous one; the autopsy lung was more impressive.' (Male, smoker)

The pictorials that presented the impact of smoking without displaying the image of the affected organ (Figure 1, pictorials 7-9) were described as 'illusionary'. Pictorials that did not depict culturally familiar people from the Turkish community (Figure 1, pictorial 10) also seemed fictional. The students stated that they did not feel 'connected' to such images:

'Such pictures look like artificial pictures. Maybe they are real, but they look like cover art, film poster, so artificial ... This one [Figure 1, …]'

DISCUSSION
This study shows that plain packaging was perceived as a more attractive alternative to conventional branded packs among most of the participants. Some of the students stated that they would have preferred to purchase cigarettes in plain packs rather than branded ones, provided that they were of the same brand. The features attributed to the plain packs were mainly linked to their perceived aesthetics. The students indicated that these packs had a 'minimalist' design and an 'elegant' look which was in line with recent trends. Package design was also perceived to indicate the quality of the tobacco product and of the user.
The cigarettes in plain packages were perceived as being of good quality and smoked by 'elite' groups. There is considerable evidence that plain packaging has less appeal and a poorer image than branded packs among adolescents and adults in diverse populations [7][8][9][10][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33]. Plain packs were perceived as being of low quality and associated with unfavorable personal attributes such as being 'older' and 'less fashionable' 7. In particular, younger age groups found the packs less appealing than older age groups did. Our findings differ from the studies in the literature and indicate an increased appeal of plain packaging. This finding might be related to the study group; to our knowledge, this is the first qualitative study reporting the perception of plain packs among medical students. Plain-packaged products might be the medical students' way of differentiating themselves from the 'ordinary' and 'old-fashioned' smokers in the community. The quote about plain packaging giving 'an image as I smoke, but I'm not an addict; I know my limits' suggests the perception of being an 'exceptional' smoker, unlike the rest of the population. The favorable perception regarding plain packs might also be related to shifts in cultural norms and values among young adults. Understanding the root causes of the positive feelings about plain packaging needs a deeper psychosocial approach and is beyond the scope of this study. Still, these findings highlight the need to be aware of, and to study, the unforeseen effects of plain packaging perceptions among different subgroups of the new generations.

The color of cigarette packaging can have an impact on perceptions regarding harm and strength, thus influencing product choice 29. Hoek et al. 34 discuss that the color brown could carry 'natural' connotations because it is used in recycled paper, while the color white might remind people of some branded products that had been marketed as 'light'. Lacave-García et al. 15 also determined that grey and brown pack colors were associated with more negative feelings than white. A French study identified gray-colored packages as the most effective option compared to brown or white packs 35. In our study, some participants, particularly the female students, indicated that they liked the dark green color of the plain packages. A female smoker's connotation of 'olives' evoked a perception of 'healthiness' and 'wellbeing'. The plain colors used in the background of packaging should be tested before implementation because they can provoke unintended positive feelings depending on cultural differences 15,34,35.

Studies mainly indicate that plain packaging increases the salience of health warnings and pictorials. The pictorials on plain packages are noticed more easily, recalled better, and have a stronger impact 7,10,15,17,19,20,[28][29][30]36,37. Our results also indicate that the effects of the pictorials are more profound on the plain packs compared to the branded ones. It should be noted, however, that on the plain packages the size of the pictorials was increased from 65% to 85%, and the pictorials were placed on both sides of the packs, as stipulated by the new amendment 14. These changes might also have contributed to the improved salience of the pictorials on the plain packages. Our findings show that the pictorials have varying effects based on their content.
Outer-body deformities, which could be observed with the naked eye, evoked highly unfavorable feelings in most of the participants. Students described such pictorials as 'real' since they had seen and known patients with such disabilities in their daily lives. In contrast, the inner-organ images were described as 'imaginary' and as 'script from posters and movies', with little to no impact. Similarly, culturally unfamiliar pictorials had no effect. These findings suggest that the impact is greatest when the students can associate the pictorials with their past observations. But when the image is not recognized experientially, as in the example of the brain hemorrhage pictorial, it has little to no impact. A qualitative study conducted among socioeconomically disadvantaged smokers in Australia indicated that some messages were not part of the smokers' experiences and were perceived as exaggerated and not 'realistic'. The authors noted that the participants were suspicious of the harms described in the messages 11. Another qualitative study also indicated skepticism related to the health warnings; for some participants, the messages would serve as a warning only if they experienced the harm themselves 38. Hence, we suggest using outer-body images and pictures of internal organs with their connections to the outer body surface more frequently, to expose the reality of smoking harms.

Limitations
Our study aimed to explore the subjective meanings attached to plain packaging among medical students, so we used a qualitative approach and recruited the participants through a non-probability sampling method. This sampling strategy limits the generalizability of our results to a broader population. We explored only perceptions and attitudes regarding plain packaging and cannot know whether these perceptions will translate into actual purchase intentions and smoking behavior. We should also note that in the FGDs we used only one cigarette package as an example of branded packs, which is quite limited given the large variety of branded designs on the market. Still, we suggest that our results are beneficial, since they shed light on unforeseen perceptions that might also exist in other communities. Furthermore, our results might be used for theory building in explaining plain packaging perceptions.

CONCLUSIONS
Plain packaging is a critical public health strategy in preventing the cigarette pack from being used as a promotional and advertising vehicle. Yet this study showed that plain packaging could be perceived as a more attractive alternative to conventional packs among medical students. We should consider that plain packaging might have unforeseen and changing effects among young adults in different cultures. While pictorials on plain packages are more visible and noticeable, we suggest using outer-body images and images of internal organs with their connections to the outer body surface more frequently, due to their stronger impact.

CONFLICTS OF INTEREST
The authors have each completed and submitted an ICMJE form for disclosure of potential conflicts of interest. The authors declare that they have no competing interests, financial or otherwise, related to the current work.
All the authors report that, since the initial planning of the work, this work was conducted by the Health Institute Association, supported by the Bloomberg Initiative Grants Program (TURKEY-23-03; Enhancing implementation of MPOWER strategies and supporting legislation reform for full compliance to FCTC in Turkey) administered by the International Union Against Tuberculosis and Lung Disease.

FUNDING
This work was conducted by the Health Institute Association, supported by the Bloomberg Initiative Grants Program (TURKEY-23-03; Enhancing implementation of MPOWER strategies and supporting legislation reform for full compliance to FCTC in Turkey) administered by the International Union Against Tuberculosis and Lung Disease.

ETHICAL APPROVAL AND INFORMED CONSENT
The research protocol was approved by the Ethical Committee of Marmara University School of Medicine (Approval number: 560; Date: 27 September 2019). Participants provided informed consent.

DATA AVAILABILITY
The data supporting this research are available from the authors on reasonable request.

AUTHORS' CONTRIBUTIONS
PA, YY, MG, TG, MC and ED contributed to the concept and design of the study. PA, UPS and YY contributed to data collection. PA, OE, MC, FY and ED contributed to analysis and interpretation. PA, TG and MG contributed to writing the manuscript. All authors reviewed and approved the manuscript.

PROVENANCE AND PEER REVIEW
Not commissioned; externally peer reviewed.
THE ELECTROCHEMICAL BEHAVIOR OF SULFIDE IONS IN MOLTEN CRYOLITE

The electrochemical behavior of sulfide ions in molten cryolite (Na₃AlF₆) has been studied by cyclic voltammetry using graphite electrodes at 1323 K. The oxidation of sulfide ions is found to proceed via a quasi-reversible mechanism, i.e., one in which the current is controlled by both diffusion and charge-transfer kinetics:

S²⁻ → S + 2e⁻

The transfer coefficient β and the standard rate constant ks are estimated to be 0.5 and 0.0042 cm/sec, respectively. The apparent diffusion coefficient for sulfide ions in cryolite at 1323 K is about 3.93 × 10⁻⁵ cm²/sec.

INTRODUCTION
Since the work of Delarue (1,2) on the anodic oxidation of sulfide ions in molten salts, many investigations have been published on the subject. However, as indicated by the reviews (3-5) written on the electrochemical behavior of sulfide ions in molten salts, the mechanism of the reaction is still controversial and cannot be interpreted in an unambiguous manner. The electrochemical behavior of sulfide ions in molten salts is of considerable interest from both fundamental and applied viewpoints. The oxidation of sulfide ions offers challenging fundamental research since the chemistry and electrochemistry of sulfur-sulfide in molten systems are quite complex. From the applied viewpoint, knowledge of the electrochemical reaction of sulfide is important for (i) battery technology, such as high-temperature secondary batteries (6), and (ii) metallurgical molten-salt processes, such as metal electrowinning from sulfides (5).

A three-electrode system was used for all measurements. The working electrode was a graphite rod (0.63 cm dia., Union Carbide, grade ECV) insulated with hot-pressed boron nitride (Carborundum) so that only a defined surface area was exposed to the molten salt. Another graphite rod served as the counter electrode. A Pt wire was used as a quasi-reference electrode. Most of the previous such studies reported in […]

The molten salt was prepared from an accurately weighed and blended mixture of cryolite and Al₂S₃ reagent (Cerac Pure, 99.9% pure) inside a helium-atmosphere glovebox. The mixture was charged into the boron nitride crucible and brought out of the glovebox, and the cell was quickly assembled under argon. An argon atmosphere was maintained above the cell in all experiments. For cyclic voltammetric measurements, standard voltammetric instrumentation was employed.

RESULTS AND DISCUSSION
For background information, voltammetry of pure cryolite without added sulfide was carried out. A typical voltammogram of the melt at 1323 K is shown in Fig. 1. The voltammetric curves for cryolite resemble those reported in the literature for Na₃AlF₆ (10, 11) and NaF (12). The steeply rising cathodic current observed at about −0.5 V vs. the Pt reference electrode (all potentials are given vs. the Pt reference electrode) is attributed to aluminum deposition. An anodic peak was observed at approximately +2.5 V. This anodic peak represents the so-called critical current, i.e., the maximum current that is attained before the normal anode reaction is superseded by the anode effect, which is attributable to dewetting of the electrode by fluorocarbon compounds. As shown in Fig. 2, the background current of molten cryolite is quite small in the potential range +0.6 to −0.2 V.
Within this potential range, voltammograms of the cryolite melt containing Al₂S₃ show a pair of peaks: the anodic oxidation of sulfide ions and, on the reverse scan, the cathodic reduction of the oxidation products (Fig. 3). The anodic oxidation of sulfide in cryolite was studied at two Al₂S₃ concentrations, 1.3 × 10⁻⁵ and 3.1 × 10⁻⁵ mol/cm³. Voltammograms for 1.3 × 10⁻⁵ mol/cm³ sulfide in the cryolite melt at sweep rates of 10-500 mV/sec are shown in Fig. 3. The properties of the voltammetric curves obtained at both sulfide concentrations can be summarized as follows: […] (iii) the separation between the anodic peak potential and the cathodic peak potential, |Epa − Epc|, increases as the sweep rate v increases (Table I; peak potentials corrected for ohmic drop).

The above properties are in agreement with the criteria for a quasi-reversible charge-transfer mechanism (13), i.e., one in which the current is controlled by both diffusion and charge-transfer kinetics. For this type of reaction mechanism, if the sweep rate is slow enough, the reaction approaches reversible behavior. At slow sweep rates at 1323 K, the anodic-to-cathodic peak separation and the peak-to-half-peak separation of a quasi-reversible reaction approach 253/n mV and about 251/n mV, respectively (both of order 2.2RT/nF at this temperature, where F is the Faraday constant, R is the universal gas constant, and T is the temperature; D denotes the diffusion coefficient entering Nicholson's analysis). It was found that only the values of peak separation at 100 and 200 mV/sec in Table I could be used to estimate ks. From these peak-separation values and Nicholson's working curve, ks was calculated to be about 4.2 × 10⁻³ cm/sec.
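The kinetic figures above can be sanity-checked with a short script. The sketch below is an illustration only, not the authors' analysis; it uses only values quoted in the text (n = 2, T = 1323 K, D = 3.93 × 10⁻⁵ cm²/sec, ks ≈ 4.2 × 10⁻³ cm/sec) to reproduce the 253/n mV reversible-limit separation and to back out the Nicholson kinetic parameter ψ implied at the two usable sweep rates.

    import math

    # Constants and values quoted in the text above.
    R = 8.314      # J/(mol K), universal gas constant
    F = 96485.0    # C/mol, Faraday constant
    T = 1323.0     # K, melt temperature
    n = 2          # electrons transferred: S(2-) -> S + 2e(-)
    D = 3.93e-5    # cm^2/s, apparent diffusion coefficient of sulfide
    ks = 4.2e-3    # cm/s, reported standard rate constant

    # Reversible-limit peak separation: Delta-Ep -> 2.22*R*T/(n*F) at slow sweep rates.
    dEp_mV = 2.22 * R * T / F * 1000.0
    print(f"Delta-Ep (reversible limit) = {dEp_mV:.0f}/n mV")  # ~253/n mV

    # Nicholson's parameter psi = ks / sqrt(pi*D*n*F*v/(R*T)); inverting it at the
    # two sweep rates used for the estimate shows the working-curve values implied
    # by the reported ks.
    for v in (0.100, 0.200):  # sweep rates in V/s (100 and 200 mV/sec)
        psi = ks / math.sqrt(math.pi * D * n * F * v / (R * T))
        print(f"v = {v * 1000:.0f} mV/s -> psi = {psi:.2f}")

Values of ψ of order 0.2-0.3 sit on the quasi-reversible portion of Nicholson's working curve, consistent with the peak separations listed in Table I.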
Effects of the COVID-19 Pandemic on Incidence and Epidemiology of Catheter-Related Bacteremia, Spain

We compared hospital-acquired catheter-related bacteremia (CRB) episodes diagnosed at acute care hospitals in Catalonia, Spain, during the COVID-19 pandemic in 2020 with those detected during 2007–2019. We compared the annual observed and predicted CRB rates by using a negative binomial regression model and calculated stratified annual root mean squared errors. A total of 10,030 episodes were diagnosed during 2007–2020. During 2020, the observed CRB incidence rate was 0.29/10³ patient-days, whereas the predicted CRB rate was 0.14/10³ patient-days. The root mean squared error was 0.153. Thus, a substantial increase in hospital-acquired CRB cases was observed during the COVID-19 pandemic in 2020 compared with the rate predicted from 2007–2019. The incidence rate was expected to increase by 1.07 (95% CI 1–1.15) for every 1,000 COVID-19–related hospital admissions. We recommend maintaining all CRB prevention efforts regardless of the coexistence of other challenges, such as the COVID-19 pandemic.

CRB is regarded as a healthcare quality indicator (17). For these reasons, CRB surveillance is mandatory in most countries (18)(19)(20). In Catalonia, Spain, CRB surveillance is guided by the VINCat program of the Catalan Health Service (21), which provides a surveillance system for healthcare-associated (nosocomial) infections (HAIs). The VINCat program was launched in 2006; the main objective of this program is to reduce the incidence of HAIs through continuous active monitoring and implementation of preventive programs (21). During recent decades, the incidence of healthcare-acquired CRB has decreased in most hospitals, especially in intensive care units (ICUs), because of the application of preventive measures (22,23). Some of the most critical evidence-based preventive interventions have been using appropriate barrier precautions and hand hygiene before handling catheters, disinfecting skin with chlorhexidine solutions, using appropriate catheter materials, carefully selecting insertion sites that avoid the femoral site, and withdrawing catheters whenever possible (24). During the COVID-19 pandemic, changes in adherence to some of these preventive measures have notably affected HAI incidence rates (11); however, the effect of COVID-19 on CRB incidence is not definitively known. The aim of this study was to assess the effects of the COVID-19 pandemic on the incidence of hospital-acquired CRB.
Clinical Setting
Bacteremia associated with the use of venous catheters was continuously monitored under the VINCat program. All nosocomial episodes of CRB diagnosed in adult patients at each participating hospital were prospectively followed and reported to the VINCat program by infection control teams. CRB cases were identified by daily evaluation of all patients with bacteria-positive blood cultures. Hospitals participating in the VINCat program are classified into 3 categories according to the number of beds available for hospitalization: >500 beds (group I), 200–499 beds (group II), and <200 beds (group III). Data from each hospital are continuously monitored and presented in general clinical sessions. A public annual report is published on the VINCat website (21).

Definitions
We defined catheter-related bacteremia as the detection of bacterial growth in the blood of a patient with a venous catheter; ≥1 set of blood cultures was obtained from a peripheral vein, and 2 sets were obtained to identify habitual skin-colonizing microorganisms, such as coagulase-negative staphylococci, Micrococcus spp., Propionibacterium acnes, Bacillus spp., and Corynebacterium spp. Positive bacterial cultures had to be associated with clinical manifestations of infection, such as fever, chills, or hypotension, and with the absence of any apparent alternative source of bloodstream infection (BSI). The conditions had to be accompanied by ≥1 of the following criteria: ≥15 CFU per catheter segment in semiquantitative cultures, or ≥10³ CFU per catheter segment in quantitative cultures, detecting the same microorganism found in peripheral blood cultures; quantitative blood cultures that detected the same microorganism and showed a ratio of ≥5:1 between the blood obtained from the lumen of a venous catheter and that obtained from a peripheral vein by puncture; a difference of ≥2 hours between positive bacterial cultures obtained from a peripheral vein and from the lumen of a venous catheter; presence of inflammatory signs or purulent secretions at the insertion point or in the subcutaneous tunnel of a venous catheter (a culture of the secretion showing growth of the same microorganism detected in the blood cultures was also useful); and resolution of clinical signs and symptoms after catheter withdrawal, with or without appropriate antibiotic treatment. For the clinical diagnosis of peripheral venous CRB, we required signs of phlebitis (induration, pain, or signs of inflammation at the insertion point or along the catheter route).

Exclusion Criteria
We excluded patients if they were under 18 years of age, were outpatients, or had a hospital stay <48 hours at the time of BSI detection. We also excluded those who had CRB detected at an outpatient service or had CRB associated with arterial catheters.

Microbiology
Two sets of 2 blood samples from a peripheral vein were obtained from all patients with a suspected BSI. An additional blood sample was also obtained through the catheter. When possible, the catheter tip was cultured after removal. Blood samples were processed at the microbiology laboratories of each center in accordance with standard operating procedures. All microorganisms were identified by using standard microbiological techniques at each center.

Statistical Analysis
We reported categorical variables as numbers of cases and percentages and continuous variables as means ±SD or medians with interquartile ranges, depending on whether the distribution was normal or nonnormal.
We assessed the normality of variables graphically by using quantile-quantile and density plots. We calculated the CRB incidence rate by dividing the total number of episodes of CRB by the total number of hospital stays (patient-days) in 1 year. We used a negative binomial regression model to assess the rate trend of CRBs diagnosed at VINCat hospitals each year during 2007–2019. We used the number of admissions per year as the offset variable, the number of events as the dependent variable, and year as the main independent variable. We performed stratified analyses according to hospital ward, catheter type, catheter insertion site, catheter use, and type of identified microorganism. We reported the annual rate of CRBs diagnosed per 1,000 patient-days and the incidence rate ratio (IRR) and 95% CI for each model. We focused the interpretation of the IRR on the annual rate of increase or decrease. We plotted and compared the annual CRB rates observed during 2007–2020 and the annual CRB rates predicted by our model. We calculated the average root mean squared error (RMSE) of the model predictions for CRB rates during 2007–2019 and compared the RMSEs between the rate expected according to the model and the rate observed in 2020. We replicated these analyses after stratifying by hospital ward, catheter type, catheter insertion site, catheter use, and type of microorganism. We evaluated the conditions of application in all models and calculated the 95% CI for each estimator. We set the level of statistical significance at 5%. We performed the analyses using the statistical package R version 4.0.3 (The R Project for Statistical Computing, https://www.r-project.org) for Windows.

[Figure 1. Observed and predicted incidence rates of CRB during 2007–2020. CRB incidence rates were calculated by dividing the total number of episodes of catheter-related bloodstream infections by the total number of hospital stays (patient-days) for each year; predicted rates were obtained from the negative binomial regression model and compared with observed rates. CRB, catheter-related bacteremia.]

Ethical Considerations
Participation in the VINCat program was voluntary, and data confidentiality was guaranteed. This study was evaluated and approved by the Parc Taulí Hospital Research Ethics Committee, Sabadell, Spain.

Study Periods
During 2007–2020, a total of 10,030 nosocomial episodes of CRB were diagnosed. Data from the 2007–2019 period have been analyzed and described previously (25). In summary, during 2007–2019, a total of 9,290 episodes of CRB were diagnosed. The mean annual incidence was 0.2 episodes/10³ patient-days, 73.7% of episodes occurred in non-ICU wards, 62.7% of episodes were related to central vascular catheters, 24.1% of episodes were related to peripheral venous catheters, and 13.3% of episodes were related to peripherally inserted central venous catheters (25). The incidence rate of CRB decreased substantially over the 2007–2019 study period (IRR 0.94, 95% CI 0.93–0.96), especially in ICU wards. CRB episodes caused by central vascular catheters fell markedly (IRR 0.90, 95% CI 0.89–0.92), whereas those associated with peripherally inserted catheters increased. In 2020, a total of 774 CRB episodes were diagnosed at the participating hospitals.
We determined that the incidence rate was 0.29 episodes/10³ patient-days (Figure 1). We observed an incidence rate of 0.064 for CRB caused by peripheral catheters in 2020; the predicted rate according to the negative binomial regression model was 0.05 (O/P 1.24, 95% CI 1.06–1.43). When […] (Figure 2). In addition, we determined that the number of observed CRB episodes in 2020 was higher than the number of predicted episodes depending on the location of the catheter; the increase in incidence was most pronounced for catheters located at the femoral site (Figure 2).

[Figure 2. Observed and predicted incidence rates of CRB, and numbers of CRB cases, during 2007–2020, stratified by (A) type of hospital ward, (B) type of catheter used, and (C) reason for catheter use. Rates are per 1,000 patient-days; predicted rates were obtained from the negative binomial regression model. CRB, catheter-related bacteremia; ICU, intensive care unit; CVC, central vascular catheter; PICVC, peripherally inserted central vascular catheter; PVC, peripheral vascular catheter; PN, parenteral nutrition; HD, hemodialysis.]

In 2020, we found increases in observed CRB incidence rates compared with the rates predicted by the negative binomial regression model according to catheter use and causative microorganisms. For hemodialysis, the observed CRB rate was 0.004, and the predicted rate was 0.003 (O/P 1.25, 95% CI […]). For parenteral nutrition, the observed CRB rate was 0.06, and the predicted rate was 0.03 (O/P 1.62, 95% CI […]). For other uses, the observed CRB rate was 0.22, and the predicted rate was 0.10 (O/P 2.14, 95% CI 1.97–2.31); the last category increased most notably (Table 1; Figure 2). Observed CRB rates were increased compared with predicted rates for all causative microorganisms, especially enterococci (O/P 5.41, 95% CI […]).

Relationship between Monthly CRB Incidence Rates and SARS-CoV-2 Admissions
The total number of hospital admissions and the proportion of patients affected by COVID-19 changed substantially during 2020 (Figure 3). We recorded more COVID-19–related admissions during February–June in both conventional wards and ICUs (Table 2; Figure 3). The peak rate of COVID-19 hospital admissions was 54.87 in March, and the lowest rate was 14.15 in January. Concomitantly, CRB incidence rates also varied during 2020, reaching a peak in April (0.57 episodes of CRB/10³ admissions), followed by August and December (0.44 episodes of CRB/10³ admissions in each month) (Table 2).

[Figure 3. COVID-19–related hospital admissions and CRB incidence rates, compared by month during 2020. COVID-19 incidence rates were calculated by dividing the total number of COVID-19 admissions by the total number of patient-days; CRB incidence rates were calculated by dividing the total number of episodes of catheter-related bloodstream infections by the total number of patient-days. CRB, catheter-related bacteremia.]
We observed the lowest CRB rate at the beginning of the year (0.13 episodes of CRB/10³ admissions). We observed an association between CRB and COVID-19 incidence rates: the CRB incidence rate was expected to increase by 1.07 (IRR 1.07, 95% CI 1–1.15) for every 1,000 COVID-19 admissions if all other factors remained constant (Figure 4).

[Figure 4. Association between COVID-19–related hospital admissions and CRB incidence rate in 2020. Incidence rates were calculated per patient-days as above; linear regression showed a positive association between the incidence of COVID-19–related hospital admissions and the CRB incidence rate (R² = 0.45). CRB, catheter-related bacteremia.]

Discussion
We demonstrated that the COVID-19 pandemic increased CRB incidence in 2020 in our hospitals in Catalonia, Spain. We found that the months with the highest proportion of COVID-19 admissions were strongly associated with increased CRB incidence. We also described the most important CRB characteristics that changed during the pandemic in 2020. Compared with previous years, we observed increased CRB incidence in both ICUs and conventional wards in 2020. Other studies conducted around the same time observed increased HAI incidence rates during 2020, especially in ICUs. Catheter-associated urinary tract infections, ventilator-associated pneumonia, and CRB were the HAIs with the greatest increases (9)(10)(11). In contrast, other HAIs, such as nosocomially acquired C. difficile colitis (5,6) or surgical-site infections (7,8), decreased during the same period. Of note, HAIs may be more frequently associated with patients receiving steroids or tocilizumab (26), although a specific association with BSI was not observed (27). In most cases, the increased rates of CRB were likely associated with lower adherence to specific preventive measures during the months when the pandemic caused the most hospital admissions, despite the generalized reinforcement of contact precautions and hand hygiene to reduce SARS-CoV-2 nosocomial transmission. Of note, in our hospital settings, consumption of alcohol-based products for hand hygiene during 2020 increased 2.4-fold overall, and 1.9-fold in ICUs, compared with the previous year, and a similar trend was observed in a hospital in Taiwan (28). Therefore, although proper hand hygiene is necessary to prevent CRB and other HAIs, it is not sufficient to avoid HAIs if other measures are not performed during the insertion and care of vascular catheters. Specifically, since 2006, various evidence-based intervention bundles have been shown to reduce CRB, especially in the ICU setting. These bundles include handwashing, using full-barrier precautions, cleaning the skin with chlorhexidine, avoiding the femoral site if possible, and removing unnecessary catheters (22,23). Among the different preventive measures, both hand hygiene and catheter insertion measures were associated with reduced incidence of CRB, and they were most effective when both measures were applied simultaneously (24).
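As a hedged illustration of the rate-trend analysis described under Statistical Analysis (a negative binomial regression of annual counts with exposure entering as an offset, fit on 2007–2019 and extrapolated to 2020), the sketch below uses fabricated placeholder data rather than the study data, and it fixes the dispersion parameter for simplicity; it is not the authors' R code.

    import numpy as np
    import statsmodels.api as sm

    # Placeholder annual data (NOT the study data): a gently declining count series.
    years = np.arange(2007, 2020)
    episodes = np.linspace(850.0, 550.0, years.size).round()  # hypothetical CRB counts
    patient_days = np.full(years.size, 3.5e6)                 # hypothetical exposure

    # Negative binomial GLM with log(patient-days) entering as an offset; the
    # exponentiated year coefficient is the annual incidence rate ratio (IRR).
    X = sm.add_constant((years - 2007).astype(float))
    fit = sm.GLM(episodes, X,
                 family=sm.families.NegativeBinomial(),  # dispersion fixed at default
                 exposure=patient_days).fit()
    print(f"annual IRR = {np.exp(fit.params[1]):.3f}")  # paper: IRR 0.94 (95% CI 0.93-0.96)

    # Extrapolate the fitted 2007-2019 trend to 2020, expressed per 1,000
    # patient-days, mirroring the observed-vs-predicted comparison in the paper.
    x2020 = np.array([[1.0, 2020.0 - 2007.0]])
    rate = fit.predict(x2020, exposure=np.array([1000.0]))
    print(f"predicted 2020 rate = {rate[0]:.2f} per 1,000 patient-days")

The paper's RMSE comparison then quantifies how far the observed 2020 rate (0.29/10³ patient-days) departs from this kind of extrapolation (0.14/10³ patient-days).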
The first limitation of our study is the heterogeneity of COVID-19 pandemic responses between hospitals and the resulting lack of data on adherence to CRB preventive measures at each center. Second, there was a lack of clinical information regarding the presence of chronic diseases or clinical conditions that might influence CRB incidence rates. However, the availability of a large number of CRB episodes diagnosed by standardized definitions is a strength that enables generalization of our observations. In addition, CRB incidence rates were adjusted by patient-days rather than catheter-days, which enabled surveillance of all types of catheters inserted in all hospital wards. In 2020, substantial resources were allocated to infection prevention to manage the SARS-CoV-2 outbreak, which also affected HAI prevention programs. Because CRB is a key healthcare quality indicator (29), our observations stress the importance of maintaining all prevention efforts, regardless of the coexistence of other challenges, such as the worldwide COVID-19 pandemic.
Repair of acute respiratory distress syndrome by stromal cell administration (REALIST) trial: A phase 1 trial

Background
Mesenchymal stromal cells (MSCs) may be of benefit in acute respiratory distress syndrome (ARDS) due to immunomodulatory, reparative, and antimicrobial actions. ORBCEL-C is a population of CD362-enriched umbilical cord-derived MSCs. The REALIST phase 1 trial investigated the safety and feasibility of ORBCEL-C in patients with moderate to severe ARDS.

Methods
REALIST phase 1 was an open-label, dose-escalation trial in which cohorts of mechanically ventilated patients with moderate to severe ARDS received increasing doses (100, 200 or 400 × 10⁶ cells) of a single intravenous infusion of ORBCEL-C in a 3 + 3 design. The primary safety outcome was the incidence of serious adverse events. Dose-limiting toxicity was defined as a serious adverse reaction within seven days. Trial registration: clinicaltrials.gov NCT03042143.

Findings
Nine patients were recruited between 7th January 2019 and 14th January 2020. Study drug administration was well tolerated and no dose-limiting toxicity was reported in any of the three cohorts. Eight adverse events were reported for four patients. Pyrexia within 24 h of study drug administration was reported in two patients as pre-specified adverse events. A further two adverse events (non-sustained ventricular tachycardia and deranged liver enzymes) were reported as adverse reactions. Four serious adverse events were reported (colonic perforation, gastric perforation, bradycardia and myocarditis), but none were deemed related to administration of ORBCEL-C. At day 28 no patients had died in cohort one (100 × 10⁶), three patients had died in cohort two (200 × 10⁶) and one patient had died in cohort three (400 × 10⁶). Overall day 28 mortality was 44% (n = 4/9).

Interpretation
A single intravenous infusion of ORBCEL-C was well tolerated in patients with moderate to severe ARDS. No dose-limiting toxicity was reported up to 400 × 10⁶ cells.

Introduction
Acute respiratory distress syndrome (ARDS) is characterised by hypoxaemia and bilateral radiographic opacities [1]. The mortality burden is high, between 35 and 45%, and there is considerable physical and psychological morbidity in survivors [2–5]. ARDS is driven by immune activation and cytokine release with loss of integrity of the epithelial-endothelial interface, resulting in alveolar and interstitial oedema, loss of pulmonary compliance, and impaired gas exchange [6]. The mainstay of therapy in ARDS is supportive treatment in the critical care environment [7]. Numerous clinical trials have studied pharmacological interventions in ARDS, but to date these have failed to demonstrate therapeutic benefit [8]. More recently, immunomodulatory therapies including dexamethasone and IL-6 antagonism have proven to be of benefit in mechanically ventilated patients with COVID-19, supporting the potential benefit of immunomodulation in critically ill patients with respiratory failure [9–12]. Mesenchymal stromal cells (MSCs) have been proposed as a possible therapy for ARDS, with pleiotropic immunomodulatory, reparative, and antimicrobial effects [13–17]. The mechanisms of action of MSCs include (a) paracrine secretion of growth factors, cytokines, and antimicrobial peptides [17–23], (b) direct cell contact transferring functional mitochondria to damaged cells and immune cells [24,25], and (c) release of extracellular vesicles which can transfer mitochondria, mRNA and microRNA [26–30].
In pre-clinical models of ARDS, MSCs improve physiological outcomes, including oxygenation and lung compliance, and survival [14–16, 31,32]. In human ex vivo lung perfusion (EVLP) models, MSC administration effectively restored alveolar fluid clearance, a measure of the integrity of the alveolar epithelial-endothelial barrier [33–35]. Phase 1 and phase 2 clinical trials of bone marrow-, adipose-, and umbilical cord-derived MSCs and MSC-like cells suggest these are safe in patients with ARDS, but these trials were not powered to test clinical efficacy [36–42]. The optimal cell type, source, manufacturing method, and dose to use in patients with ARDS have also not been determined. The REALIST trial investigates ORBCEL-C, a defined cellular product consisting of CD362-enriched umbilical cord (UC)-derived MSCs. UC-derived MSCs have the advantage over bone marrow-derived cells that the source tissue is abundant and readily obtained without risk to the donor, making them both less expensive and safe (for donors). UC-derived MSCs have demonstrated comparable efficacy to bone marrow (BM)-derived MSCs in a clinically relevant model of ARDS induced by intratracheal Escherichia coli administration [15]. CD362 is a heparan sulphate proteoglycan identified as a marker for MSC isolation and therapeutic development [43]. A defined subpopulation of MSCs offers an advantage in terms of purity of the cellular product, which may be more likely to fulfil emerging standards regarding cell isolation and characterisation. In preclinical models, CD362+ enriched MSCs isolated from UC tissue were equally efficacious as traditionally manufactured plastic-adherent MSCs in attenuating bacterial and ventilator-induced lung injury [14,16]. In this phase 1 study we tested the safety of a single intravenous infusion of ORBCEL-C in mechanically ventilated patients with moderate to severe ARDS, defined by the Berlin criteria. The maximum tolerated dose, over the range of 100 to 400 × 10⁶ cells, was determined.

Study design and participants
The REALIST phase 1 study was a UK multicentre (5 sites), open-label, dose-escalation trial in which cohorts of patients with moderate to severe ARDS received increasing doses of a single intravenous infusion of ORBCEL-C in a 3 + 3 design [44]. Eligible patients were mechanically ventilated, within 48 h of the onset of moderate to severe ARDS, defined by Berlin criteria [1]. Inclusion and exclusion criteria are detailed in Table 1. The trial was approved by a UK research ethics committee (18/NE/0006) and the Medicines and Healthcare products Regulatory Agency (MHRA, CTA 32485/0034/001–0001 and EudraCT Number 2017–000584–33). The study was registered on clinicaltrials.gov (NCT03042143). The study protocol is available as a supplementary file (Supplemental file 1: Protocol v 3.0 26.06.2019). Patients or their relatives provided written informed consent. Patients were treated with ascending doses of ORBCEL-C in a 3 + 3 design. Dose-limiting toxicity (DLT, defined by the presence of a serious adverse reaction) was assessed at day 7. The Data Monitoring and Ethics Committee (DMEC) convened after each group of 3 patients had been recruited to a given dose and had completed 7 days of follow-up, to approve progression to the next dose. The study planned to recruit 3 cohorts of 3 patients per cohort, but up to 18 patients could be recruited according to the dose-escalation procedure if DLT occurred. The planned dose-escalation procedure was as follows.
If no patient within a dose cohort experienced DLT, the trial proceeded to the next dose. If one patient in the cohort demonstrated DLT, a further three subjects would be treated at the same dose level. This dose-escalation procedure was planned to continue until at least two patients among a cohort of three to six patients had DLT, or until the maximum planned cell dose had been tested (a minimal sketch of this decision logic in code follows the Procedures subsection below). As a safety precaution, only one patient across all sites received the cell infusion at any one time, and no further patients received treatment in the 24 h following the completion of their infusion.

Research in context

Evidence before this study
Several phase 1 trials have reported that traditionally manufactured plastic-adherent Mesenchymal Stromal Cells (MSCs) are safe and well tolerated in patients with Acute Respiratory Distress Syndrome (ARDS). The phase 2 START (Stromal cells for ARDS Treatment) trial investigated bone marrow-derived plastic-adherent MSCs in ARDS and did not demonstrate efficacy. Several clinical trials of MSCs in ARDS, and COVID-19 ARDS, are ongoing. ORBCEL-C is a defined population of CD362-enriched umbilical cord-derived MSCs manufactured by advanced techniques. They have not previously been investigated in clinical trials of patients with ARDS.

Added value of this study
The REALIST (Repair of Acute Respiratory Distress with Stromal Cell Administration) phase 1 trial aimed to investigate the safety and tolerability of a single intravenous infusion of ORBCEL-C in patients with moderate to severe ARDS, prior to proceeding to a larger phase 2 trial evaluating efficacy. No dose-limiting toxicity was observed in any dose cohort (100, 200, and 400 × 10⁶ cells), and a dose of 400 × 10⁶ cells was determined to be the maximum tolerated dose for the phase 2 study.

Implications of all the available evidence
Following completion of this phase 1 study, having demonstrated the safety of ORBCEL-C and determined the maximum tolerated dose, investigators have progressed to the REALIST phase 2 study, investigating a single intravenous infusion of 400 × 10⁶ ORBCEL-C in patients with moderate to severe ARDS. In light of the COVID-19 pandemic, an additional cohort of patients with ARDS due to COVID-19 will be recruited to the phase 2 study.

Procedures
The investigational medicinal product (IMP) in REALIST phase 1 was ORBCEL-C. CD362-enriched cells were harvested from umbilical cord and expanded under Good Manufacturing Practice (GMP) conditions at National Health Service Blood and Transplant (NHSBT) Birmingham. Cells were cryopreserved and shipped frozen to cell therapy facilities (CTFs) located in proximity to the clinical sites. After informed consent, patients were allocated to receive either 100, 200 or 400 × 10⁶ cells of ORBCEL-C according to the dose-escalation protocol. IMP was thawed and diluted in Plasma-Lyte 148 to a total volume of 200 mL, according to the REALIST study-specific standard operating procedure, at the site's CTF. Laboratory studies demonstrated >70% viability at 6 h of cells thawed and diluted in accordance with the study SOP. Patients were administered an intravenous bolus of chlorphenamine 10 mg before administration of the IMP, to reduce any histamine-mediated effects of the DMSO in the cell cryopreservant [45]. IMP was administered over 30–90 min, and the infusion was completed within 6 h of the onset of the thaw process. All other aspects of care were according to standard critical care guidelines and at the discretion of the treating physician.
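As flagged above, the 3 + 3 escalation rules reduce to a compact decision function. The sketch below paraphrases the protocol text for illustration only; it is not trial software, and the function and argument names are ours.

    def three_plus_three(n_treated: int, n_dlt: int, is_max_dose: bool) -> str:
        """Cohort-level decision under the 3 + 3 rules described in the text."""
        if n_dlt >= 2:                       # >=2 DLTs among 3-6 patients: stop
            return "stop: dose exceeds the maximum tolerated dose"
        if n_treated == 3:
            if n_dlt == 0:
                return "declare tolerated" if is_max_dose else "escalate to next dose"
            return "expand cohort to six at the same dose"  # exactly one DLT
        if n_treated == 6:                   # expanded cohort with <=1 DLT
            return "declare tolerated" if is_max_dose else "escalate to next dose"
        return "continue recruiting at the current dose"

    # REALIST phase 1: no DLT in any cohort of three, so escalation ran to the
    # maximum planned dose of 400 x 10^6 cells.
    for dose, is_top in ((100, False), (200, False), (400, True)):
        print(f"{dose} x 10^6 cells:", three_plus_three(3, 0, is_top))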
Lung-protective ventilation was the standard of care, with a tidal volume of 6 ml/kg predicted body weight (PBW) [7]. Baseline data (day 0) were collected in the 24 h before IMP administration. Physiological and ventilatory parameters, along with temperature and vasopressor doses, were recorded immediately before IMP administration, every 15 min during infusion of the IMP, and every hour for the 5 h following IMP administration. Daily data were collected until day 14 (or death or ICU discharge if sooner), including Sequential Organ Failure Assessment (SOFA) score, temperature, ventilatory and arterial blood gas parameters, use of adjunctive therapies, and clinical laboratory assessments. Vital status and adverse events were followed up to day 90. Patients were followed up for significant medical events, including death, at 1 year. A focused set of biological markers was measured by ELISA (Duoset kits, R&D Systems) in plasma collected at days 0, 4, 7, and 14. This included markers of the systemic inflammatory response that are associated with outcome in ARDS (IL-6, IL-8 and IL-18), and markers of epithelial injury (surfactant protein-D [SP-D]) and endothelial activation/injury (ICAM-1 and angiopoietin-2 [Ang-2]). Anti-HLA antibodies were measured in serum samples collected at day 0 and day 28, using Luminex antibody detection and single-antigen bead methods (One Lambda LABScreen). HLA typing was performed on retained ORBCEL-C samples administered to patients who developed anti-HLA antibodies at day 28, to determine if donor-specific antibody reactivity had occurred.

Adverse event reporting
Patients recruited to REALIST phase 1 were already critically ill. Events expected in the critically ill (examples include transient hypoxaemia, agitation, delirium, organ failure, nosocomial infections, skin breakdown, and gastrointestinal bleeding) were not reported as adverse events unless considered to be related to the IMP or unexpectedly severe or frequent. The following pre-specified adverse events occurring within 6 h of the start of infusion were collected:
- an increase in vasopressor dose greater than or equal to the following: noradrenaline 0.1 mcg/kg/min; adrenaline 0.1 mcg/kg/min;
- commencement of any vasopressor, including noradrenaline, adrenaline, vasopressin, phenylephrine, and dopamine;
- new ventricular tachycardia, ventricular fibrillation or asystole;
- new cardiac arrhythmia requiring cardioversion;
- hypoxaemia requiring an increase in FiO2 of 0.2 or more and an increase in PEEP of 5 or more to maintain SpO2 in the target range;
- a clinical scenario consistent with transfusion incompatibility or transfusion-related infection (e.g. urticaria, new bronchospasm).

The following pre-specified adverse events occurring within 24 h of the start of infusion were collected:
- any death;
- any cardiac arrest;
- temperatures recorded as >38.5°C, or temperatures that were >38.5°C prior to study drug administration and had increased by 1°C.

As ORBCEL-C had not been administered to patients with ARDS previously, all adverse events considered by the site investigator to be related to the IMP (thereby an adverse reaction, AR) were considered unexpected. All serious adverse events (SAEs) related to the IMP (thereby a serious adverse reaction, SAR) were considered to be a suspected unexpected serious adverse reaction (SUSAR).

Outcomes
The primary objective of this study was to determine the safety of a single intravenous infusion of ORBCEL-C and to define a safe dose for a subsequent phase 2 trial.
The primary safety outcome was the incidence of serious adverse events. Adverse events, including pre-specified infusion-related adverse events, are reported to day 90. Although this phase 1 trial was not designed to evaluate efficacy, the primary efficacy outcome reported was oxygenation index (OI) at day 7. OI, calculated as (mean airway pressure [cmH2O] × FiO2 × 100) / PaO2 [kPa], independently predicts outcome in ARDS [46]. Secondary outcomes reported included: OI at days 4 and 14; physiological indices of pulmonary function (respiratory compliance, driving pressure, and PaO2/FiO2 (PF) ratio) and organ failure measured by SOFA score on days 4, 7, and 14. Clinical outcome measures including extubation, reintubation, ventilator-free days (VFDs) at day 28, duration of ventilation, length of ICU and hospital stay, as well as 28- and 90-day mortality are reported. Definitions of clinical outcomes are provided in the phase 1 statistical analysis plan, available as a supplemental file (Supplemental file 2: Phase 1 Statistical Analysis Plan). Exploratory outcomes including biological markers of the systemic inflammatory response, epithelial and endothelial injury, indices of coagulation, and anti-HLA antibodies are reported. Additional exploratory outcomes detailed in our protocol (which covered the phase 1 and the subsequent phase 2 studies) that were not measured in this phase 1 trial included pulmonary markers of inflammation and cell injury (as bronchoalveolar lavage was not carried out during the phase 1 study) and cardiac function (as echocardiography was not routinely conducted during this phase 1 trial). These exploratory outcomes will be assessed in the subsequent phase 2 clinical trial. Survival status and significant medical events at 1 year are reported.

Statistical analysis
The primary analysis was descriptive and focused on serious adverse events. Pulmonary and non-pulmonary organ function, clinical outcomes, and exploratory outcomes are reported as descriptive analyses with mean (standard deviation, SD) or median [interquartile range, IQR] (see Supplemental file 2: Phase 1 Statistical Analysis Plan). For pulmonary and non-pulmonary organ function, as data were not available at all specified timepoints, imputed data from the last observed value are also provided.

Role of funding source
The trial was funded by the Wellcome Trust Health Innovation Challenge Fund [reference 106939/Z/15/Z] and sponsored by the Belfast Health and Social Care Trust. The funder had no role in the study design, conduct or analysis. Orbsen Therapeutics Ltd. has granted a non-exclusive, trial-specific licence to the Cellular and Molecular Therapies Division of the National Health Service Blood and Transplant Service to manufacture ORBCEL-C to GMP standards for the REALIST trial. Orbsen Therapeutics Ltd. has had no role in the study design, data acquisition, data analysis or manuscript preparation.

Participants
Nine patients were recruited between 7th January 2019 and 14th January 2020, all from one of the participating sites: three patients per dose cohort (100 × 10⁶, 200 × 10⁶, and 400 × 10⁶ cells). 127 patients were assessed for eligibility across the five sites, of whom 118 were excluded; reasons for exclusion are provided in Fig. 1 (CONSORT diagram). All patients recruited completed infusion of the IMP and no patients were lost to follow-up at day 90. Summary baseline characteristics for the included patients are described in Table 2, and individual baseline characteristics in Table 3.
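Before the safety results, a brief aside on the primary efficacy measure: the oxygenation index formula quoted under Outcomes is straightforward to compute. The helper below is a minimal sketch with purely hypothetical example values.

    def oxygenation_index(mean_paw_cmh2o: float, fio2: float, pao2_kpa: float) -> float:
        """OI = (mean airway pressure [cmH2O] x FiO2 x 100) / PaO2 [kPa]; higher is worse."""
        return mean_paw_cmh2o * fio2 * 100.0 / pao2_kpa

    # Hypothetical ventilated patient: mean airway pressure 20 cmH2O, FiO2 0.60,
    # PaO2 10 kPa.
    print(oxygenation_index(20.0, 0.60, 10.0))  # -> 120.0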
Safety outcomes and adverse events
Study drug infusion was generally well tolerated, with no adverse haemodynamic or respiratory physiological changes during infusion or in the 5 h following IMP administration (data not shown). Adverse events, including pre-specified infusion-related events, are summarised in Table 4. In summary, eight adverse events were reported in four patients. Four adverse events, reported in three patients, were considered to be serious adverse events (SAEs). These SAEs were considered to be severe but were deemed unlikely to be related, or not related, to study drug administration. Four adverse events were considered to be mild and possibly related to study drug administration and are therefore categorised as adverse reactions (ARs). Two of these events, specifically pyrexia within 24 h of study drug administration, were reported as pre-specified infusion-related adverse events. There was no dose-limiting toxicity in any cohort, and 400 × 10⁶ cells was determined to be the maximum tolerated dose.

In the lowest dose cohort, one patient experienced four adverse events (patient 1, Table 4). This patient was admitted with ARDS and sepsis due to Streptococcus pneumoniae and influenza A. The patient developed pyrexia within 24 h of study drug administration, which was reported as a pre-specified infusion-related event. Transient non-sustained ventricular tachycardia (NSVT) was reported on day 1, but not within the six-hour window for reporting of pre-specified events. The NSVT resolved without haemodynamic compromise, although the patient was commenced on an anti-arrhythmic. Both events were considered possibly related to the study drug. On day 24, this patient developed a perforated duodenal ulcer, requiring laparotomy and readmission to ICU. This was considered unrelated to study drug administration. Six weeks following study drug administration the patient underwent cardiac MRI to investigate severe left ventricular systolic dysfunction noted during the ICU admission. MRI findings were consistent with recent myocarditis, which was felt to have a viral aetiology and to be unlikely related to the IMP administration. The patient recovered, and cardiology follow-up nine months after study drug administration found that cardiac function had improved.

In the intermediate dose cohort, one patient experienced two adverse events (patient 6, Table 4). Pyrexia occurred within 24 h of study drug administration (reported as a pre-specified infusion-related event). On day 9, colonic perforation was demonstrated on computerised tomography (CT). The patient deteriorated clinically, with increased organ support requirements, was deemed too unstable for surgical intervention, and subsequently died. Underlying decompensated alcoholic liver disease was felt to have contributed to the patient's death. Of note, the Child-Pugh score at recruitment was within the eligible range. The colonic perforation was considered unlikely to be related to the study drug.

In the highest dose cohort, two patients each experienced a single adverse event. One patient (patient 8, Table 4) had a bradycardic episode with brief loss of cardiac output, requiring a short period (30 s) of cardiopulmonary resuscitation prior to return of spontaneous circulation and resolution of the event. This event occurred on day 15 and was considered to be unrelated.
Another patient developed acute derangement of liver function tests within six hours of study drug administration (patient 7, Table 4). This was considered possibly related; however, it improved spontaneously over the following six days. This patient died on day 8 due to sequelae of their underlying disease (multiorgan failure due to intra-abdominal sepsis and strangulated hernia). The death was unrelated to study drug administration and the deranged liver function was not felt to have contributed.

Physiological and clinical outcomes
Measures of pulmonary and systemic organ function, including OI, PF ratio, respiratory compliance, driving pressure, and SOFA score until day 14, are presented in Fig. 2 and tabulated in Supplementary Table 1. Imputed data (using the last observed value) for these outcomes are also provided in Supplementary Fig. 1 and Supplementary Table 1. There was no evidence of any dose-dependent effects of MSCs on these physiological measures. Clinical outcomes and adjuvant therapies are reported in Table 5.

Biological and clinical laboratory measures
We measured plasma markers of systemic inflammation (IL-6, IL-8, and IL-18), epithelial cell injury (SP-D) and endothelial injury/activation (Ang-2/ICAM-1), which have all been shown to be elevated in patients with ARDS. There were no important trends over time (from baseline to days 4, 7 and 14) identified, and there was no apparent dose-dependent effect of MSCs on any of these markers (Fig. 3). Individual patient biomarker measurements are included in the supplement (Supplementary Table 2). Clinical laboratory data (including CRP, renal indices, indices of coagulation, haemoglobin and leucocytes) following study drug administration did not show any trends or signals of harm in relation to the study drug administration (Supplemental Table 3).

Anti-HLA antibodies
Baseline (n = 9) and day 28 (n = 4) serum samples were analysed for anti-HLA antibodies (Supplemental Table 4). At day 28, samples were not available for four patients who had died, and one patient was unable to provide a sample. No patients had anti-HLA antibodies at baseline, and of the four patient samples analysed at day 28, two developed anti-HLA antibodies (patient 1 and patient 8). One of these patients (patient 1) had antibody reactivity towards one antigen found on HLA typing of the donor ORBCEL-C infusion. However, the HLA antigen (HLA-A*01) is common [47], and as the patient had been transfused blood products between days 0 and 28, they may have been exposed to this antigen during transfusion. This patient also had HLA-antibody reactivity towards HLA-B antigens which were not accounted for by the HLA type of the donor ORBCEL-C infusion. None of the HLA antibodies developed by patient 8 matched the HLA type of the donor ORBCEL-C infusion.

Survival and long-term follow-up
Four patients died within 28 days of study drug administration (Table 5; day 28 mortality 44%). All patients within the intermediate dosing cohort had died before day 28. One patient in this intermediate dose cohort was transferred to a different unit for ECMO therapy on day 2 due to refractory hypoxaemia and subsequently died on day 14 due to multiorgan failure (patient 5). Another patient in the intermediate dose cohort died on day 7 due to multiorgan failure as sequelae of pulmonary aspiration (patient 4). The deaths of the third patient in the intermediate dose cohort (patient 6) and one patient in the highest dose cohort (patient 7) are described earlier.
Each death was reviewed in detail, and none were felt to be related to study drug administration. No further patients had died at day 90 following study drug administration. All surviving patients have been followed up to one year and none had died by this time point. One patient (patient 2) has had two significant medical events necessitating hospital admissions: (1) a mechanical fall with a spinal fracture and (2) an acute stroke. The remaining surviving patients have had no significant medical events reported at follow-up.

Discussion
In this phase 1 trial, CD362-enriched human umbilical cord-derived MSCs (ORBCEL-C) were well tolerated in patients with moderate to severe ARDS. No serious adverse events related to study drug administration or dose-limiting toxicity were reported in any dose cohort (100, 200 or 400 × 10⁶ cells). Adverse reactions considered possibly related to study drug administration included pyrexia, non-sustained ventricular tachycardia, and deranged liver function. These events occurred early after study drug administration, therefore a relationship could not be excluded, but this was a critically ill population in whom these events may have been related to their underlying condition. Follow-up of patients to one year following MSC administration has not identified safety concerns. These findings support the safety of intravenous administration of ORBCEL-C, up to a dose of 400 × 10⁶ cells, in patients with moderate to severe ARDS. The emerging safety profile of MSC therapy in critically ill patients with ARDS is supported by the findings of other MSC clinical trials. A recent systematic review of intravascular MSC therapy for a range of clinical conditions demonstrated that MSCs, compared to control therapy, were associated with an increased risk of fever, but there was no association with non-fever infusional toxicity, infection, thrombotic embolic events, death or malignancy [48]. In other trials of MSCs in ARDS, no infusion-related toxicity or serious adverse events related to MSC administration have been reported [36–42]. There has been variation in dosing regimens and tissue sources of MSCs investigated in ARDS. Matthay and co-investigators, in the START phase 1 trial, evaluated escalating doses of bone marrow-derived MSCs [49]. These trials have not convincingly demonstrated dose-dependent effects, despite evidence of dose-dependent effects in preclinical investigations in ARDS [16,50]. In START phase 1, numerically greater improvements in lung injury score and SOFA score were reported in the highest dose cohorts, but the sample size in each dose cohort was small (n = 3) and differences were not statistically significant [37]. In a study of healthy male volunteers, MSC administration in a model of lipopolysaccharide (LPS)-induced systemic inflammation demonstrated dose-dependent adverse effects at 4 × 10⁶ cells/kg compared to lower doses of 1 × 10⁶ cells/kg, 0.25 × 10⁶ cells/kg or placebo [51]. These included an enhanced febrile response and a transient increase in markers of coagulation activation; however, this did not translate into clinically relevant thromboembolic events [51]. In our trial, standardised doses were used rather than doses per body weight to support the feasibility of manufacture and delivery of the MSC product. The doses chosen (100, 200, and 400 × 10⁶ cells) are equivalent to approximately 1.5, 3, and 6 × 10⁶ cells/kg respectively, for an average adult with a PBW of 70 kg.
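The fixed-dose to weight-based equivalence quoted above can be checked with a few lines; this is a sketch assuming the stated predicted body weight of 70 kg, not trial code.

```python
# Convert the fixed REALIST doses to approximate per-kilogram doses,
# assuming a predicted body weight (PBW) of 70 kg as stated in the text.
PBW_KG = 70.0

for total_cells in (100e6, 200e6, 400e6):
    per_kg_millions = total_cells / PBW_KG / 1e6
    print(f"{total_cells/1e6:.0f} x 10^6 cells -> ~{per_kg_millions:.1f} x 10^6 cells/kg")
# Output: ~1.4, ~2.9 and ~5.7 x 10^6 cells/kg, i.e. roughly the quoted
# 1.5, 3 and 6 x 10^6 cells/kg.
```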
Lower maximum doses were chosen, compared to other studies of MSCs in ARDS, in light of the evidence of dose-dependent coagulation activation in the healthy human LPS model [51]. We did not observe any dose-dependent effects in either physiological markers of lung or systemic organ dysfunction or in biological markers of inflammation or cell injury. However, in our phase 1 study there was no placebo group to allow for any suggestion of efficacy, and with n = 3 per group there is no power to show significant differences between dose cohorts. Efficacy of MSC therapy in patients with ARDS has yet to be determined, and clinical trials to date have been underpowered to report clinical outcomes. In the MUST-ARDS phase 2 randomised placebo-controlled trial of MultiStem in 30 patients with ARDS, a possible improvement in clinical outcomes was reported (day 28 mortality 25% vs 40%; VFDs 12.9 vs 9.2; ICU-free days 10.3 vs 8.9) [39]. However, the full peer-reviewed report is awaited, and conclusions about efficacy must be guarded given the study size. Matthay et al. reported no significant difference in 28-day mortality (30% MSC group vs 15% placebo group; odds ratio 2.4, 95% CI 0.5 to 15.1) in their 60-patient START phase 2 randomised, placebo-controlled trial of bone marrow-derived MSC therapy using a dose of 10 × 10⁶ cells/kg [38]. In this study, cell viability was measured post-thaw and found to be less than expected in some cases (range 36 to 85%). Interestingly, in a post-hoc analysis the authors found that patients treated with higher-viability cells appeared to have improved OI and lower mortality compared with those receiving cells with lower viability. However, this post-hoc analysis involved very small numbers and the findings need to be validated in a larger study. A procedure to wash the dimethyl sulfoxide (DMSO)-containing cryoprotectant from the final cellular product before administration was implicated in the loss of cell viability. In response to this, cells for the REALIST study underwent cryopreservation in a lower concentration of DMSO and in a human albumin-containing cryopreservative. After thaw, the cryopreserved cells were added directly to crystalloid solution, with no wash step, and patients were pre-treated with an antihistamine to counter any DMSO-related effects [45]. Validation work during manufacture of the cellular product confirmed ORBCEL-C infusions had a post-thaw cell viability > 70% for at least 6 h following the thaw procedure. Biological markers of inflammation (IL-6, IL-8, and IL-18), and of epithelial (SP-D) and endothelial (Ang-2/ICAM-1) cell injury, were evaluated as part of the exploratory analysis in this REALIST phase 1 study. These biological markers are elevated in ARDS and predictive of poorer clinical outcomes [52–56]. In this REALIST phase 1 study we did not demonstrate any important trends in these biological markers over time following MSC administration. The small numbers, and the absence of a control group for comparison, limit any conclusions which can be drawn. Other investigators have reported positive findings in relation to biological markers following MSC administration. In the small RCT (n = 5 in each group) by Zheng et al., plasma SP-D concentration reduced significantly from baseline to day 5 following MSC administration, and plasma IL-6 and IL-8 numerically reduced from baseline to day 5; however, there were no significant differences compared to the placebo group [36].
Similarly, in the START phase 1 study, the median concentration of biomarkers (IL-6, IL-8, Ang-2 and RAGE (receptor for advanced glycation end-products)) reduced from baseline to day 3 following MSC administration, but there was no placebo group for comparison [37]. In the START phase 2 trial, a post-hoc analysis demonstrated that patients treated with higher-viability cells had a greater fall in Ang-2 concentrations [38]. Importantly, in a subsequent report of biomarkers from bronchoalveolar lavage (BAL) at 48 h in the START phase 2 trial (n = 17 MSC group, n = 10 placebo group), MSC administration significantly reduced BAL Ang-2, IL-6 and TNF receptor-1 concentrations compared to placebo [57]. These biological markers will be assessed in our phase 2 trial. MSCs are considered to be relatively immune-evasive, lacking MHC class II antigen expression on their surface [58]. As part of the exploratory analysis in the REALIST phase 1 study, we report the development of anti-HLA antibodies in two patients after MSC administration, one of whom developed donor-specific anti-HLA antibodies. Development of anti-HLA antibodies following MSC infusion has been evaluated in clinical trials in other conditions [59–63], though development of donor-specific anti-HLA antibodies has rarely been reported [60,61,63]. The incidence of developing anti-HLA antibodies in critically ill patients is unknown, and these patients may have other sensitising events, such as blood transfusions, which can also lead to the development of anti-HLA antibodies. Immunological responses to MSC administration in ARDS will be evaluated further in phase 2. In this trial there was a 44% 28-day mortality rate in a cohort of patients with moderate to severe ARDS. This is towards the upper range of the mortality previously reported in this population [4]. In a systematic review and meta-analysis of control arm mortality in randomised controlled trials in ARDS, when the inclusion criteria included a PF ratio consistent with moderate to severe ARDS, the control arm mean mortality rate was 35.1% [5]. The mortality rate in this phase 1 study is consistent with the baseline severity of illness. In the intermediate dose cohort, for example, patients had a mean APACHE II score of 25, which is known to predict mortality rates greater than 50% [64]. No deaths in the study were deemed to be related to MSC infusion, and as a single-arm study with a small sample size, no conclusions can be made regarding the impact of MSC infusion on mortality. Limited reports of long-term follow-up after MSC administration in ARDS have not raised safety concerns [38,42,65]. Our trial is the first to conduct long-term follow-up for significant medical events, as well as mortality, and supports the long-term safety profile of ORBCEL-C in this patient population. In conclusion, the REALIST phase 1 trial has shown that administration of a single intravenous infusion of ORBCEL-C, up to 400 × 10⁶ cells, is safe and feasible in critically ill patients with moderate to severe ARDS. Based on the absence of dose-limiting toxicity or safety concerns in this phase 1 trial, a dose of 400 × 10⁶ cells has been approved as the intervention for the planned phase 2 randomised placebo-controlled REALIST trial (NCT03042143). This phase 2 trial will assess the efficacy of MSC therapy in ARDS and, in light of the COVID-19 pandemic, will also evaluate MSC therapy in a separate cohort of patients with ARDS due to COVID-19 [66].
The views expressed are those of the authors and not necessarily those of the National Health Service, the National Institute for Health Research (NIHR), or the Department of Health and Social Care.

Funding
The trial was funded by the Wellcome Trust Health Innovation Challenge Fund [Reference 106939/Z/15/Z].

Author contributions
DFM and CO'K conceived the study. All authors made a substantial contribution to the protocol development and conduct of the study. CC and CMcD are the trial statisticians and have verified the data included in this report. EG and CO'K have verified the biomarker data included in this report. EG and CO'K prepared the first draft of the manuscript, and all authors have contributed to the writing of the report and have reviewed and approved the final version.
Carbapenem- and cefiderocol-resistant Enterobacterales in surface water in Kumasi, Ashanti Region, Ghana

Abstract
Background: MDR pathogens including ESBL- and/or carbapenemase-producing Enterobacterales (ESBL-PE and CPE) increasingly occur worldwide in the One Health context.
Objectives: This proof-of-principle study investigated the occurrence of ESBL-PE in surface water in the Ashanti Region in Ghana, sub-Saharan Africa (SSA), and investigated their additional genotypic and phenotypic antimicrobial resistance features as part of the Surveillance Outbreak Response Management and Analysis System (SORMAS).
Methods: From 75 water samples overall, from nine small to medium-sized river streams and one pond spatially connected to a channelled water stream in the greater area of Kumasi (capital of the Ashanti Region in Ghana) in 2021, we isolated 121 putative ESBL-PE that were subsequently subjected to in-depth genotypic and phenotypic analysis.
Results: Of all 121 isolates, Escherichia coli (70.25%) and Klebsiella pneumoniae (23.14%) were the most prevalent bacterial species. In addition to ESBL enzyme production of mostly the CTX-M-15 type, one-fifth of the isolates carried carbapenemase genes including blaNDM-5. More importantly, susceptibility testing not only confirmed phenotypic carbapenem resistance, but also revealed two isolates resistant to the just recently approved last-resort antibiotic cefiderocol. In addition, we detected several genes associated with heavy metal resistance.
Conclusions: ESBL-PE and CPE occur in surface water sources in and around Kumasi in Ghana. Further surveillance and research are needed, not only to improve our understanding of their exact prevalence and the reservoir function of water sources in SSA, but also to include the investigation of cefiderocol-resistant isolates.

Introduction
Antimicrobial resistance (AMR) has emerged as an urgent global health crisis, severely limiting available therapeutic options including last-resort antibiotics such as carbapenems. Sub-Saharan Africa (SSA) bears a high burden of AMR, with a significant number of deaths attributed to ESBL- and carbapenemase-producing Enterobacterales (ESBL-PE and CPE) such as Escherichia coli and Klebsiella pneumoniae.1 In Ghana, multiple studies have revealed an increasing number of hospital- and community-associated infections with ESBL producers.2 Like many other low- and middle-income countries (LMICs), Ghana faces challenges regarding unclean water due to open drains, limited sanitation and healthcare, and a lack of public education and environmental consciousness. While there is evidence suggesting that environmental sources might serve as potential transmission points for AMR bacteria in SSA, the extent of environmental contamination, including surface water, with ESBL-PE and CPE remains largely unknown. One of the few available studies revealed overall high rates of ESBL-producing E. coli in rivers in two cities in Ghana between 2018 and 2020.3 In a broader African context, data on the presence of ESBL-PE and particularly CPE in environmental settings across the continent are exceptionally scarce. In addition, nothing is known about the occurrence of Enterobacterales resistant to the recently approved antibiotic cefiderocol in these niches.
This proof-of-principle study's objective was to complement the Surveillance Outbreak Response Management and Analysis System (SORMAS) project4 with data on the occurrence of ESBL-PE and CPE in surface water from Kumasi, the capital city of the Ashanti Region in Ghana.

Sampling strategy and bacterial isolation
We collected 75 surface water samples from 10 randomly chosen periurban sites in the Kumasi area in Ghana (Figure 1). This included nine small- to medium-sized river streams and one pond spatially connected to a channelled water stream. Note that the individual sampling spots are representative of the variety of water sources found in the Kumasi area, which are used by animals and humans alike. Observed activities around the water bodies included irrigation and collection of water for households, laundry and livestock. Sampling of 1 L water volume per site was performed once a week over 7–8 weeks between July and September 2021. Water samples were collected directly by filling a sterile plastic container with surface water. Samples were refrigerated at +4°C and further processed at the laboratory of the Kumasi Centre for Collaborative Research, Ghana, within 4 h of collection. Samples were then prefiltered using a sterile gauze (PZN: 04046708, FESMED Verbandmittel GmbH, Frankenberg/Sa., Germany) to retain macroscopic particles. This was followed by filtration through a sterile 1.20 µm, and ultimately 0.45 µm, pore-sized membrane filter (Millipore, Merck, Darmstadt, Germany). Filtration was supported by the 'All-Glass Filter Holder Kit' (XX5514700) combined with a pump (WP6122050, 220 V/50 Hz, both from Millipore). Similar to a previous study,5 a 6 mm-sized piece of the 0.45 µm filter was transferred to 10 mL LB broth (Lennox, Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) containing 2 µg/mL cefotaxime (VWR International, Darmstadt, Germany) and cultured at 37°C and 200 rpm overnight. On the following day, 1.8 mL of the culture was pelleted at 20 000 × g for 1 min and resuspended in 1 mL of the same culture medium supplemented with glycerol (anhydrous; Merck, Darmstadt, Germany) at a final concentration of 20%. Samples were then stored at −80°C and transported on dry ice to the Friedrich-Loeffler-Institut, Germany. One hundred microlitres of an overnight culture was then plated on CHROMagar Orientation (Mast Diagnostica GmbH, Reinfeld, Germany) supplemented with 2 µg/mL cefotaxime, and incubated overnight at 37°C. Cefotaxime-resistant colonies of E. coli and other Enterobacterales were subcultured until pure cultures were obtained and selected for further characterization. All isolates were stored at −80°C in LB broth and glycerol at a final concentration of 20%.

WGS
Total DNA was extracted using the MasterPure DNA Purification Kit for Blood, v. 2 (Lucigen, Middleton, WI, USA), according to the manufacturer's instructions. DNA was then quantified fluorometrically using the Qubit 4 Fluorometer and the corresponding dsDNA HS Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA). DNA was shipped to SeqCenter in Pittsburgh, PA, USA, and sequenced on an Illumina NextSeq 2000 after library preparation using the Illumina DNA Prep kit and IDT 10 bp UDI indices (Illumina, San Diego, CA, USA), resulting in 2 × 151 bp reads. Demultiplexing, quality control and adapter trimming were performed using bcl-convert v. 3.9.3 (https://support-docs.illumina.com/SW/BCL_Convert/Content/SW/FrontPages/BCL_Convert.htm).
Phenotypic antimicrobial susceptibility testing
MICs were determined with the automated VITEK 2 system (AST-N428 and AST-XN24; bioMérieux, Marcy l'Étoile, France), according to the manufacturer's instructions. Susceptibility to cefiderocol was assessed by disc diffusion tests using cefiderocol 30 μg discs (Mast Diagnostics, Merseyside, UK). Isolates assigned to the area of technical uncertainty (Enterobacterales, 18–22 mm15) were re-analysed using a commercial broth microdilution kit (ComASP, Liofilchem, Waltham, MA, USA) according to the manufacturer's instructions. All results were interpreted according to the published breakpoints and guidelines of EUCAST.15

We performed antimicrobial susceptibility testing for these isolates to verify carbapenem resistance phenotypes. In addition, we evaluated susceptibility to the recently approved and important last-resort siderophore cephalosporin antibiotic cefiderocol (Table 1). The blaOXA-181-positive isolates were only resistant to ertapenem, highlighting the weak hydrolytic activity of this OXA-48-like carbapenemase against carbapenems.16 In contrast, isolates carrying blaNDM-5 or blaOXA-48 showed higher MICs of imipenem and/or meropenem. Notably, the CPE were predominantly associated with internationally recognized high-risk clonal lineages, such as E. cloacae ST171, E. coli ST410 and ST1588, and K. pneumoniae ST25. Many studies have consistently demonstrated that successful clonal lineages carry multiple AMR determinants, can be rapidly transmitted among and persist in different host species and ecosystems, may cause severe disease in animals and humans, and are globally distributed (e.g. the study by Eger et al.17). Insights into the environmental dimensions of AMR emergence predominantly originate from high-resource settings, leaving a significant knowledge gap in LMICs. The facilitation of AMR occurs through the discharge of sewage and antimicrobial residues into the environment and the inadequate treatment of human and animal waste. In Ghana, as in other countries in SSA, the drainage systems comprise open drains and street gutters fed from various sources including households, hospitals and industries, subsequently contaminating the environment. In fact, we frequently observed private household sewage and excrement from livestock and feral domestic animals near the sampling spots. Reducing environmental contamination with AMR bacteria primarily requires public education programmes, both to raise awareness about the risks associated with untreated effluent discharge into the environment and about the usage of lake and river water for domestic purposes. In addition, improving basic sanitation and increasing the number and efficacy of sewage treatment plants is crucial.

To the best of our knowledge, this is the first study identifying cefiderocol-resistant Enterobacterales in surface water from SSA. Both resistant isolates, PBIO3888 and PBIO3903, were collected shortly after the international approval and clinical use of cefiderocol in 2019. Interestingly, the corresponding sampling location is at the Kumasi Zoo compound, which, although access-restricted, receives wastewater from the zoo enclosures, visitors and the nearby central marketplace. Recent studies have suggested several mechanisms contributing to cefiderocol resistance, including gene alterations in the iron transport pathway and nutrient uptake (e.g. cirA and ompC).18
However, a BLAST analysis of the amino acid sequences of CirA (UniProt accession P17315), OmpF (UniProt accession P02931) and OmpC (UniProt accession P06996), using E. coli K-12 as a reference, did not reveal any potential resistance-mediating mutations in our isolates. This suggests that cefiderocol resistance may be attributed to overexpression of the NDM-5 carbapenemase.

Ghana faces the challenge of environmental contamination with heavy metals, particularly those associated with illegal gold mining activities.19 Heavy metals in the environment may co-select for AMR in bacteria, which is caused by either single mechanisms conferring cross-resistance to both antibiotics and heavy metals, and/or the occurrence of different resistance determinants located on the same genetic element.20 Except for seven of the E. coli isolates in this study, all CPE were positive for genes associated with multi-metal RND efflux pump activity (silABCEFPRS), which typically confer resistance to various heavy metals, including silver. Genes involved in copper resistance (pcoABCDRS; 17/24), followed by arsenic resistance (arsD; 8/24), mercury resistance (merRT; 7/24) and tellurium resistance (terD; 4/24), also occurred. Consequently, additional assessment of not only heavy metal resistance determinants but also heavy metal residues should be an elementary component of surface water surveillance approaches.5 Even though this is a proof-of-concept study, it could have been improved by considering more sampling sites, a longer sampling period, and by obtaining relevant metadata including weather conditions and physicochemical water parameters during sampling. These issues will be addressed in a follow-up, large-scale study.

Conclusions
In conclusion, we not only found ESBL-PE and CPE but also, more importantly, two isolates resistant to the recently approved antibiotic cefiderocol in surface water samples in Ghana, also indicating the potential for transmission to humans and animals. Our study suggests that further exploration of transmission pathways related to environmental contamination is warranted.

Figure 1. Map of the Kumasi area (Ghana). The selected water sources are labelled from (A) to (J). Sample sizes were (A) n = 8, (B) n = 8, (C) n = 8, (D) n = 8, (E) n = 7, (F) n = 7, (G) n = 7, (H) n = 7, (I) n = 7 and (J) n = 8. The CPE isolates are assigned to their sampling locations. The inset image outlined in red shows images of the Kumasi Zoo water bodies (location G) where the two cefiderocol-resistant isolates (highlighted in red) were isolated. The map was created in QGIS v. 3.20 using Bing Aerial (http://ecn.t3.tiles.virtualearth.net/tiles/a{q}.jpeg?g=1) as a map source with TM World Borders 0.3 overlay (https://thematicmapping.org/downloads/world_borders.php; last accessed on 23 January 2024). Microsoft product screenshots were reprinted with permission from Microsoft Corporation. Major surface waters are highlighted in pale blue based on the OpenStreetMap data for Ghana (CC BY-SA 2.0 DEED; https://download.geofabrik.de/africa/ghana.html). The Digital Elevation Model (DEM) layer uses a single band with values from 185.2 to 388.5. For reference, the overall location of Kumasi is indicated on the Ghana overview map with Regions (https://vemaps.com/ghana/gh-04 by Vemaps.com).
Table 1. Overview of the CPE and their phenotypic and genotypic properties. Predictions for carbapenemase genes are based on alignments of sequences from the AMRFinderPlus database,14 using default settings of identity (a curated threshold if it exists, and ≥90.0% otherwise) and coverage (≥50.0%). For isolates with zone diameters within the range of technical uncertainty (Enterobacterales, 18–22 mm15), the MIC of cefiderocol was determined by broth microdilution.
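The disc-diffusion workflow for cefiderocol described in the susceptibility testing section, where zone diameters inside the area of technical uncertainty trigger confirmatory broth microdilution, can be sketched as follows. The ATU range (18–22 mm) is taken from the text; the susceptible cut-off used here is an assumption for illustration only, and the current EUCAST tables should be consulted for real interpretation.

```python
# Sketch of the cefiderocol disc-diffusion triage used in this study.
# ATU range (18-22 mm for Enterobacterales) is from the text; the
# susceptible cut-off of >= 23 mm is an ASSUMED value for illustration.

ATU_LOW_MM, ATU_HIGH_MM = 18, 22
SUSCEPTIBLE_CUTOFF_MM = 23  # assumption, check current EUCAST breakpoints

def interpret_cefiderocol_disc(zone_mm: float) -> str:
    """Classify a 30 ug cefiderocol disc zone diameter."""
    if ATU_LOW_MM <= zone_mm <= ATU_HIGH_MM:
        return "ATU: confirm MIC by broth microdilution (e.g. ComASP)"
    return "S" if zone_mm >= SUSCEPTIBLE_CUTOFF_MM else "R"

for zone in (16, 20, 25):
    print(zone, "mm ->", interpret_cefiderocol_disc(zone))
# 16 mm -> R; 20 mm -> ATU: confirm MIC...; 25 mm -> S
```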
Spermidine promotes stress resistance in Drosophila melanogaster through autophagy-dependent and -independent pathways

The naturally occurring polyamine spermidine (Spd) has recently been shown to promote longevity across species in an autophagy-dependent manner. Here, we demonstrate that Spd improves both survival and locomotor activity of the fruit fly Drosophila melanogaster upon exposure to the superoxide generator and neurotoxic agent paraquat. Although survival to a high paraquat concentration (20 mM) was specifically increased in female flies only, locomotor activity and survival could be rescued in both male and female animals when exposed to lower paraquat levels (5 mM). These effects are dependent on the autophagic machinery, as Spd failed to confer resistance to paraquat-induced toxicity and locomotor impairment in flies deleted for the essential autophagic regulator ATG7 (autophagy-related gene 7). Spd treatment did also protect against mild doses of another oxidative stressor, hydrogen peroxide, but in this case in an autophagy-independent manner. Altogether, this study establishes that the protective effects of Spd can be exerted through different pathways that, depending on the oxidative stress scenario, do or do not involve autophagy.

The proportion of older people in the population is steadily increasing in many countries, and with it the number of people experiencing the process of ageing. More people will live longer1 but will also have an elevated risk of suffering age-associated disabilities and diseases. Thus, being able to postpone and/or lessen the deleterious effects of ageing represents an acute challenge for modern society and would bring great societal and economic advantages. Our understanding of ageing has increased at an unprecedented pace during the last 30 years, and we have now realised that ageing is a plastic process that can be modulated. Indeed, longevity is partly under genetic control, and mutations in single genes have been shown to increase life span and, interestingly, also stress resistance in model organisms.2 Alternatively, ageing can be modulated by external, non-genetic interventions. One of them is dietary restriction, where a decrease in the amount of food intake delays ageing, disease onset and mortality in a wide range of organisms, including non-human primates.3
Furthermore, several pharmacological interventions have recently been reported to be beneficial for ageing, disease and life span extension, even though the obtained results do not offer a clear picture. For instance, resveratrol, a naturally occurring phenol found, for example, in the skin of red grapes, increased the life span of mice kept on a high-fat diet.4 However, the doses used to achieve life span extension were very high, raising the question of this compound's bioavailability. In addition, resveratrol has not been shown to exert any beneficial effect in healthy organisms.5 The immunosuppressant drug rapamycin has also been demonstrated to increase the life span of rodents5,6 but shows only inconclusive effects in the fruit fly Drosophila melanogaster.7,8 Notably, a serious drawback of rapamycin is its immunosuppressant properties, which could hinder its potential use on a wide basis in healthy organisms. Altogether, these examples clearly show that further research is needed to broaden our knowledge of the effects and mechanisms governing the activity of already identified molecules and to find new ones.

Spermidine (Spd) is a natural polyamine involved in an array of crucial molecular processes such as DNA stability, transcription, translation, apoptosis, cell proliferation, differentiation and survival. Intriguingly, its intracellular level decreases with age.9,10 We have recently shown that addition of Spd to the food medium increases the life span of yeast, worms and flies and the survival of human immune cells in culture.11 Spd also reduced age-related oxidative damage in mice and increased resistance to hydrogen peroxide (H₂O₂) and heat in yeast. We further showed that Spd induced intracellular self-digestion (autophagy) to exert its life span extension effect, which could be abrogated by genetic inactivation of autophagy genes in mutant yeast, worms and flies. The regulatory mechanism underlying this effect might be of epigenetic origin. We observed that in yeast, Spd inhibits histone acetyltransferase activity and leads to a global hypoacetylation of histone H3 at all acetylation sites located at the amino terminus of the histone. Consistent with the anti-ageing potential of Spd, its intracellular reduction decreased the life span of mice.12 Accordingly, in a further report, external feeding with polyamines increased life span and reduced age-associated pathology in a short-lived mouse model.13 However, the latter results need to be confirmed, as the study was stopped when a significant number of mice were still alive (after 88 weeks of age). Taken together, these results suggest that Spd could be a powerful tool against the deleterious consequences of ageing.14

Anti-ageing properties are often correlated with high stress resistance.15,16 In the present study, we address the effects of Spd on two ageing-relevant stresses, oxidative stress and starvation, in the fruit fly D. melanogaster. To this end, we first challenged flies with the herbicide paraquat, a neurotoxic agent widely employed to generate oxidative stress through the reactive oxygen species superoxide. We show that upon paraquat exposure, treatment with Spd improves survival in female flies exposed to high paraquat levels (20 mM). In addition, it confers longer retention of locomotor activity and survival in both male and female animals when challenged with a lower paraquat concentration (5 mM). We also demonstrate that these effects are exerted in an autophagy-dependent manner.
Spd also increases resistance to a different oxidative stressor, H₂O₂, when applied at mild doses (1%). In contrast to paraquat, however, this resistance is not dependent upon functional autophagy, hinting at differential pathways governing Spd-mediated resistance to different oxidative stressors. Finally, we show that Spd fails to rescue toxicity induced by more severe H₂O₂ levels (2%) or starvation. These results suggest that Spd protects against toxicity resulting from detrimental pathways that involve selective oxidative stress. Furthermore, this protection can be exerted via different mechanisms (that do or do not involve autophagy) depending on the oxidative stress scenario.

Results
Spd improves survival during 20 mM paraquat stress in female Drosophila. Paraquat is a superoxide generator that has been used, for instance, as a neurotoxic agent to model age-related neurodegenerative diseases in the fruit fly D. melanogaster.17 To test the oxidative stress response upon Spd treatment, flies were exposed to 20 mM paraquat and either fed Spd or left untreated. As expected, paraquat strongly compromised fly survival, killing 50% of the untreated animals (males and females) on average after approximately 55 h of exposure (Figures 1a and b). When treated with Spd, female flies challenged with paraquat displayed significantly better mean and maximum survival, reaching the overall largest improvement at a Spd concentration of 0.1 mM (Figures 1a and c). In males, on the other hand, the additional treatment with Spd did not influence paraquat-induced toxicity over the tested concentration range (Figures 1b and d). For variance between replicates, refer to the Supplementary Material (Supplementary Figures S1A, B). These data suggest that Spd confers sex-specific paraquat resistance, specifically favoring survival in female flies.

Spd increases survival and climbing activity during 5 mM paraquat exposure in both sexes. In addition to detrimental oxidative stress, paraquat also induces locomotor impairment. Thus, we next decided to determine whether Spd might influence locomotor activity upon paraquat exposure. For this purpose, we monitored the ability of flies to climb the vertical wall of the vial in which they were kept, until no fly could perform the task. To decelerate the incidence of death and follow their locomotor activity for a longer period of time, flies were exposed to a lower concentration of paraquat compared to the above survival experiments (5 mM instead of 20 mM). Indeed, this concentration allowed prolonged overall survival compared to 20 mM. At 96 h after the beginning of paraquat exposure, for instance, Spd-treated females displayed about 30% higher activity than control flies (Figure 1h). In male flies, similar rates were obtained (Figure 1i). Altogether, these data demonstrate that Spd can lessen death, as well as locomotor impairment, in both male and female flies upon exposure to low but toxic paraquat doses (5 mM).

Spd-improved survival and climbing activity are dependent on autophagy. Spd-mediated survival improvement during ageing is at least partly associated with autophagy,11 a self-digestion mechanism that has been connected to longevity in various organisms.18 To test if this process is involved in Spd-induced paraquat resistance, we performed survival and climbing activity experiments in loss-of-function mutants for atg7 (autophagy-related gene 7) (atg7−/−), a gene essential for autophagy.
In contrast to the results obtained in wild-type flies, in autophagy mutants Spd treatment did not improve the survival decrease caused by paraquat exposure, either in females at 20 mM or in both sexes at 5 mM paraquat (Figures 1h and i). Thus, autophagy is essential for Spd-mediated resistance to paraquat-induced toxicity as well as for paraquat-induced loss of locomotor performance.

Spd confers resistance to mild H₂O₂ exposure but not to more severe exposure or to starvation. Given that paraquat specifically induces the generation of superoxide anion radicals, we asked whether the observed rescuing effect of Spd was the result of protection against paraquat-specific or rather general oxidative stress. To test this, we examined survival of wild-type male and female flies in a different oxidative stress situation, namely upon exposure to H₂O₂. As observed with 5 mM paraquat, the decrease in survival resulting from challenge with 1% H₂O₂ could be partly rescued by Spd supplementation in both male and female flies (Figures 3a, b and e). However, in contrast to paraquat exposure, Spd treatment also increased survival to 1% H₂O₂ in autophagy-deficient atg7−/− flies (Figures 3c, d and f). In turn, when H₂O₂ was applied at a higher concentration (2%), Spd supplementation could not revert the adverse effects on survival (Supplementary Figures S3A–D). Similar results were obtained in the atg7-deletion background (Supplementary Figures S3E–H). Spd at the highest concentration even seemed to decrease survival in atg7−/− males (Supplementary Figure S3H). To evaluate if Spd might also exhibit a rescuing effect upon exposure to adverse conditions engaging different mechanisms than oxidative stress, we starved flies by keeping them in vials containing only water and agar, with or without Spd, and measured survival. Starvation stress resulted in a severe loss of viability over time, reaching complete death of the population after approximately 160 and 120 h in female and male flies, respectively (Supplementary Figures S4A, B). These survival rates remained unchanged also when flies were additionally fed Spd (Supplementary Figures S4A–F). Altogether, these data suggest that Spd specifically reduces oxidative stress as generated through paraquat or mild doses of H₂O₂ but does not protect against exposure to severe H₂O₂ or starvation stress.

Discussion
We have previously shown that the polyamine Spd increases life span in various model organisms, as well as in human immune cells.11,19–21 Life span extension has often been linked to a concomitant increase in stress resistance. For instance, the first longevity mutant identified, the age-1 mutant in Caenorhabditis elegans, also displayed heat resistance.22 More recently, fly mutants in Loco, a Drosophila regulator of G-protein signaling protein, were shown to exhibit a longer life span accompanied by higher resistance to starvation, heat and paraquat.23 Such coupling of longevity and stress resistance is not only observed when life span is increased by mutations but also when achieved by non-genetic interventions. For instance, dietary restriction, a well-known inducer of longevity, has been demonstrated to increase stress resistance24 and decrease oxidative damage,25 although some negative effects of dietary restriction on stress resistance have been reported too.26
Similarly, we could show that life span-extending Spd administration increases resistance to heat and H₂O₂ in yeast, and decreases age-related oxidative stress in mice.11 Indeed, extensive evidence, especially in plants, supports the concept that polyamines improve stress resistance.21

[Fragment of a figure legend recovered here: (e and f) Survival curves for female (e) and male (f) atg7−/− flies exposed to 5 mM paraquat and treated with either no spermidine (control) or 0.1 mM Spd; pooled data of three independent replicate experiments. (g) Relative mean survival ± S.E.M. for female and male atg7−/− flies exposed to 5 mM paraquat, normalized to the untreated control; pooled data of three independent replicate experiments. (h and i) Climbing activity curves for female (h) and male (i) atg7−/− flies exposed to 5 mM paraquat with or without 0.1 mM Spd; percentage of flies able to climb 8 cm of the vial in 10 s; pooled data of three independent replicate experiments.]

In the present study, we show that Spd confers resistance to paraquat and to mild doses of H₂O₂ in D. melanogaster. Although Spd increased survival of both female and male flies upon exposure to 1% H₂O₂ and 5 mM paraquat, it did so only in females when challenged with a high paraquat concentration (20 mM). In fact, it is generally observed that female flies are more resistant to stress than male flies, most likely because females are bigger and can withstand the stress to which they are exposed longer before dying. Thus, 20 mM paraquat (in contrast to 5 mM) may be too high a concentration for males to withstand, but not for females, allowing only the latter to take advantage of Spd supplementation. Of note, the large difference in size probably explains why we observed the biggest survival difference between sexes for starvation resistance as compared with the other tested stresses: starvation resistance relies entirely on the available bodily reserves of the organism, which are larger in females than in males. Paraquat is a superoxide generator, and thus its toxicity seems to be related at least in part to oxidative damage. This idea is supported by the fact that pure polyphenols, known for their antioxidant properties, increase survival and locomotion in flies exposed to paraquat.27 Resveratrol, on the other hand, another known antioxidant with life span-extending effects, does not rescue paraquat-induced locomotor impairment in flies and even decreases exploratory locomotion in flies not exposed to paraquat.17 Thus, establishing the specific mechanism(s) underlying the beneficial effects of Spd on both locomotion and survival upon toxic exposure to paraquat is crucial. According to our results, it may involve autophagy, an intracellular self-digestion process that we have previously shown to determine Spd-mediated life span extension in yeast, worms and flies.11 The present study reveals that upon paraquat exposure, the survival and locomotor activity of autophagy-deficient mutant flies cannot be improved by Spd feeding. Autophagy is thus an essential component of Spd-mediated resistance to paraquat. This aligns with reports showing that paraquat exposure induces autophagy as a protective response in models as diverse as the plant Arabidopsis28 and human neuroblastoma cells.29
In neuroblastoma cells, inhibition of autophagy accelerates apoptotic cell death, whereas in Arabidopsis, the proteins oxidized by paraquat are degraded by autophagy. It thus appears that autophagy contributes to the faster or more efficient degradation of damaged molecules arising from paraquat exposure, conferring longer protection against this stress. It is tempting to speculate that a similar mechanism might underlie Spd-mediated protection against paraquat in flies, which will need to be tested in future studies. However, autophagy does not seem to be the only mechanism by which Spd is able to confer protection in a given oxidative stress scenario. Interestingly, our data show that while Spd supplementation also increases resistance to a different oxidative stress than paraquat, namely 1% H₂O₂, in this scenario Spd still increases survival in atg7−/− flies. Thus, under H₂O₂ conditions autophagy does not seem to be involved in the rescuing mechanism. Interestingly, Spd seems to exhibit both autophagy-dependent and -independent protective effects under different conditions in yeast as well (early and late chronological aging).10,11 The discrepancy in the effect of Spd on resistance to paraquat and H₂O₂ in flies may be due to the fact that oxidative stress as induced by paraquat, which directly generates superoxide anion radicals, may work by a different mechanism than that triggered by H₂O₂. Girardot et al.30 reported that almost 10 times as many genes were up- or downregulated in Drosophila during paraquat compared with H₂O₂ exposure, although both treatments induced the same mortality. Furthermore, some sets of genes were specific to one treatment, supporting the notion that toxicity mechanisms are different for paraquat and H₂O₂. For instance, 73% of the genes encoding the 26S proteasome subunits were induced by paraquat but not by H₂O₂ treatment. In contrast, ubiquitin protein ligases were under-represented among genes affected by paraquat. The authors hypothesized that Drosophila can implement two types of response to oxidative stress: one relying on post-transcriptional mechanisms, as induced by H₂O₂, and the other supported by a coordinated increase of proteasome genes, as induced by paraquat. Of note, Spd failed to rescue toxicity resulting from more severe H₂O₂ exposure (2%). This may be due to the fact that the cellular damage inflicted by very high H₂O₂ concentrations differs from that resulting from more moderate toxic levels. It should be noted that in this study autophagy dependency was assayed using atg7−/− flies, which are deficient in the essential autophagic regulator ATG7. It has been shown that atg7−/− flies are generally shorter-lived, less resistant to diverse stresses (including 30 mM paraquat, 1% H₂O₂, both mixed in food, and starvation), display a faster decline of climbing activity and show enhanced neurodegeneration.31 Likewise, we have previously reported that atg7−/− flies are shorter-lived.11 Our herein presented results furthermore show that lack of autophagy decreases overall survival and climbing activity upon 5 mM paraquat exposure (albeit not always significantly) and survival to 1 and 2% H₂O₂. In contrast, some of our results also indicate that lack of autophagy may not always be detrimental. For instance, overall survival to 20 mM paraquat exposure is not affected in atg7−/− animals.
In fact, lack of autophagy has already been reported to not always lead to dramatic effects; for example, atg7−/− flies do not show any developmental defect and are fully viable.31 It may also be that in our study the stress under which ATG7 deletion did not influence overall survival (20 mM paraquat) was comparatively strong, so that the lack of autophagy could not worsen an already low survival. Finally, we report that the rescuing effect of Spd as observed towards paraquat and 1% H₂O₂ toxicity is absent upon challenge with a further stressor, starvation. The varying degrees of correlation reported between oxidative stress and starvation resistance in long-lived organisms suggest that the machinery involved in executing a response to these two stress factors may or may not overlap depending on the conditions. For example, although Loco mutant female flies were more resistant to both paraquat and starvation,23 Gr63a (a CO₂ sensor) female mutant flies were more resistant to paraquat but not to starvation.32 Resistance to starvation is mainly controlled by the amount of reserves (lipids and glycogen) of the flies. Thus, metabolic changes induced by Spd supplementation may prevent beneficial effects of Spd on survival to this stress. Although it is yet unknown whether Spd does alter metabolism, some evidence hints towards it. For instance, mice fed a high-polyamine diet have been reported to be hyperphagic.13 Current work studying the effect of Spd on metabolism in Drosophila will help to clarify a putative role of Spd in metabolism modulation. To sum up, this study adds to the mounting evidence delineating the beneficial effects of Spd under specific adverse conditions. We show for the first time that Spd confers autophagy-dependent resistance to the neurotoxic agent paraquat by improving survival and locomotor activity in Drosophila. Our work suggests that Spd might counteract neuronal damage caused by particular types of oxidative stress. Future work, however, will have to test this speculation and address the mechanism(s) underlying the herein presented effects. On the other hand, we show that Spd is able to protect against a different type of oxidative stress (H₂O₂) but in an autophagy-independent manner. Thus, Spd is able to promote survival with or without the involvement of autophagy, depending on the specific oxidative stress scenario.

Materials and Methods
Flies and reagents. Flies from an isogenized w1118 strain were used in all the experiments. The lines for the generation of atg7−/− flies were kindly provided by T Neufeld (University of Minnesota, Minneapolis, MN, USA).31 H₂O₂ (Sigma, Ref. H3410) was purchased as a 30% solution and kept at 4°C. It was diluted as specified just before use.

Paraquat resistance. Around 20 5-day-old flies were put in each vial containing filter papers (Sigma, Ref. Z274852) soaked with 1.5 ml of a solution consisting of 5% glucose, 20 mM paraquat and either 0, 0.01, 0.1 or 1 mM Spd, with an average of 87 flies in each group. Both w1118 and atg7−/− flies were studied, yielding a total of 3622 and 3427 observed flies, respectively. Paraquat resistance was measured in both males and females, which were kept in separate vials. Solutions were renewed every other day until the last fly died. Vials were checked for dead flies every 24 h. Comparison of survivorship for pooled data of all replicates was performed using log-rank or Wilcoxon survival tests and corrected for multiple comparisons against the control group (a minimal sketch of such a comparison is given below).
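The pooled log-rank comparison described above can be reproduced with standard survival-analysis tooling; the sketch below uses the lifelines Python package with fabricated death times scored every 24 h, purely to show the shape of the computation, not the study data.

```python
# Sketch of a pooled two-group survival comparison (log-rank test),
# as used for the paraquat resistance data. Requires: pip install lifelines
# The hour values below are fabricated placeholders, not study data.
import numpy as np
from lifelines.statistics import logrank_test

# Hours until death, recorded at 24 h intervals; 1 = death observed.
control_hours = np.array([48, 48, 72, 72, 96, 120])
spd_hours = np.array([72, 96, 96, 120, 120, 144])

result = logrank_test(
    control_hours, spd_hours,
    event_observed_A=np.ones_like(control_hours),
    event_observed_B=np.ones_like(spd_hours),
)
print(f"log-rank p = {result.p_value:.3f}")
# With several Spd doses, each group would be tested against the control
# and the p-values corrected for multiple comparisons (e.g. Bonferroni),
# as described in the text.
```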
Each genotype and sex was analyzed separately.

Locomotor activity. Ten 5-day-old flies were put in a vial containing filter papers soaked with 1.5 ml of a solution consisting of 5% glucose, 5 mM paraquat and with or without 0.1 mM Spd, with an average of 99 flies in each group. Both w1118 and atg7−/− flies were studied, yielding a total of 1179 and 1188 observed flies, respectively. Locomotor activity was measured in both males and females, which were kept in separate vials. Vials were checked for dead flies every 24 h until the last fly died, and filters were renewed every other day. Locomotor activity was measured daily until no fly could perform the task. Flies were moved to the bottom of their vial by mechanical stimulation and the number of flies reaching the top of the vial (8 cm) in 10 s was recorded. The difference with Spd in the proportion of flies able to perform the climbing task was tested for each sex, genotype and age (hours after beginning of exposure to paraquat) separately using nonparametric z tests for proportions (a minimal sketch of this comparison is given after this section). Comparison of survivorship of pooled data was performed using log-rank survival tests. Each genotype and sex was analyzed separately.

Hydrogen peroxide resistance. Around 20 5-day-old flies were put in a vial containing filter papers soaked with 1.5 ml of a solution consisting of 5% glucose and 1 or 2% H₂O₂, without or with either 0.01, 0.1 or 1 mM Spd, with an average of 97 flies in each group for 1% H₂O₂ and 98 flies in each group for 2%. Both w1118 and atg7−/− flies were studied, yielding a total of 2329 and 2353 observed flies, respectively, for 1% H₂O₂, and 2360 and 2356 observed flies, respectively, for 2% H₂O₂. H₂O₂ resistance was measured in both males and females, which were kept in separate vials. Solutions were renewed every other day until the last fly died. Vials were checked for dead flies every 24 h. Comparison of survivorship of pooled data was performed using log-rank or Wilcoxon survival tests. Each genotype and sex was analyzed separately.

Starvation resistance. Around 20 5-day-old flies were put in a vial containing only water and agar, without or with either 0.01, 0.1 or 1 mM Spd, with an average of 99 flies in each group. Only w1118 flies were studied, yielding a total of 2382 observed flies. Starvation resistance was measured in both males and females, which were kept in separate vials. Vials were renewed every other day until the last fly died. Vials were checked for dead flies every 24 h. Comparison of survivorship of pooled data was performed using log-rank or Wilcoxon survival tests. Each sex was analyzed separately.

Conflict of Interest
The authors declare no conflict of interest.
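For the climbing assay, the z test for proportions mentioned in the Locomotor activity section compares, at a given timepoint, the fraction of flies completing the task between treated and control groups. A minimal sketch using statsmodels, with illustrative counts rather than measured data:

```python
# Sketch of the nonparametric z test for proportions used for the
# climbing data. Requires: pip install statsmodels
# Counts are illustrative placeholders, not study data.
from statsmodels.stats.proportion import proportions_ztest

climbers = [34, 22]  # flies reaching the top (8 cm in 10 s): [Spd, control]
totals = [50, 50]    # flies alive and tested in each group

z_stat, p_value = proportions_ztest(count=climbers, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```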
Challenging Issues in Automated Oil Palm Fruit Grading

INTRODUCTION

Automated processing of oil palm fruit bunches into edible oil using various image processing techniques is practised in several industries. The conventional setups range from small-scale mechanical units to medium- and large-scale palm oil mills. It is generally agreed that traditional methods of extracting palm oil were tedious and inefficient for producing oil for sale. Hence, current interest in small-scale palm oil plants is moving from simple stand-alone single-unit machines to more integrated and complex computer-coordinated systems which are easy to operate and maintain. This also improves efficiency and reduces the human effort that may result in the introduction of error. This paper reviews the research findings of different authors and identifies the challenging issues that have been missed by researchers in automated palm fruit grading.

India has been reported to have the largest area under oilseed cultivation in the world, yet the irony is that domestic production is not sufficient to meet even the minimal edible oil requirements of the population. The factors responsible for this include poor land conditions and the growing population. The demand for palm oil has been growing, and there has also been a noticeable upward trend in the per capita consumption of edible oil among the Indian population. A majority of the Indian people experience calorie deficiency. Owing to the cost of edible oil, its use is avoided in the everyday menu of the economically disadvantaged population. India has been importing large quantities of oils since the late 1990s as production has stagnated or declined during the last two decades, because the growth of domestic oil production has not kept pace with the corresponding population growth. Considering the future demands already highlighted, a long-term strategy for the comprehensive development of production and processing technologies should be devised to make India self-sufficient in edible oils, as in the case of cereals. It has been reported that oilseed production in India is highly vulnerable to the whims of nature, particularly the monsoon. The shifting pattern of rainfall, which is outside the control of the farmers, decisively affects the production and productivity of oil crops. The generally low productivity of oilseeds grown in India can be traced primarily to this dependence on the monsoon; oilseeds in India, which are annual or seasonal crops, are all the more vulnerable to rainfall.

The fruits are normally round to ovoid, or elongated and bulging at the top. A fruit is around two to five cm long and its weight may vary from 3 to 30 grams. As shown in Figure 1, the fruit consists of an outer thin skin (exocarp), an oil-bearing pulp (mesocarp) and a shell (endocarp). The shell together with the kernel forms the seed. The kernel consists of layers of hard oily endosperm, grayish white in color, surrounded by a dark brown skin covered with a network of fibers. Palm oil is extracted from the mesocarp. The kernel also yields an oil known as kernel oil; however, the quantity is only around one quarter of that obtained from the mesocarp. An oil palm bunch consists of outer and inner fruits.
The inner fruits are less pigmented, rather flat, undeveloped and non-oil-bearing. Bunch weight varies from a few kilograms to 100 kilograms. Well-set bunches carry 1000 to 3000 fruits. Ripening generally proceeds from the tip downwards, and a bunch takes around five to six months to ripen. Oil development in the kernel and mesocarp occurs towards the end of a period of development during which the shell hardens and the embryo then becomes visible. Three oil palm varieties have been distinguished on the basis of differences in fruit structure: Dura, Tenera and Pisifera. Dura has a thick shell (normally two to eight mm) with low to medium mesocarp content; this variety is not commercially grown at present. Figure 2 shows all three varieties. The Tenera variety is a hybrid obtained by crossing Dura (female) and Pisifera (male). It has a generally thin shell and a medium to large mesocarp content, with a distinct fiber ring in the mesocarp. This is the most widely grown type all over the world because of its high mesocarp content and resultant oil yield. The Pisifera variety is characterized by a shell-less fruit with a pea-like kernel inside; often the kernel is also missing. Since many of the fruits do not have an embryo, seed propagation is practically impossible. The palm oil fruit changes color as it reaches the proper harvesting period. There are a total of four stages, unripe, underripe, ripe and overripe, shown in Figure 3. It is seen that the color of the fruit changes from base to apex.

A prototype of an automated grading system for oil palm fruit was developed in [1] using the RGB color model and fuzzy logic. The purpose of this grading system is to recognize three different classes of oil palm fruit: underripe, ripe and overripe. The project provides a good method for standardizing the oil palm fruit grading system over a large area, and the research continues on normalizing the system so that it can be used under different sources of lighting. Meftah Salem M. Alfatni [2] developed an automated grading system for oil palm bunches using the RGB color model. This grading system recognizes three classes of palm fruit bunch, with a maturity color index based on different color intensities. The grading system uses a computer and camera to analyze and interpret images; the computer program uses mean color intensity to differentiate between different colors and degrees of ripeness. Nursuriati Jamil, Azlinah Mohamed and Syazwani Abdullah [3] analyzed the outer surface colors of oil palm fresh fruit bunches to automatically grade the fruits into overripe, ripe and unripe. They compared two methods of color grading: 1) using RGB digital numbers, and 2) color classification trained using a supervised Hebb learning method and graded using fuzzy logic. W.I. Wan Ismail, M.Z. Bardaie and A.M. Abdul Hamid [4] conducted an experiment to determine the Hue optical properties of the three categories of fresh fruit bunches, namely unripe, underripe and ripe. A Nikon Coolpix 4500 digital camera with tele-converter zooming and the Keyence vision system were used to capture the images in a real oil palm plantation, and the relationship between the oil content of mesocarp oil palm fruits and the measured value of Hue was analyzed.
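To make the mean-color-intensity idea from [2] concrete, the sketch below computes per-channel means over a segmented fruit region and assigns a ripeness class. It is a toy illustration: the threshold values are invented placeholders, the fruit pixels are assumed to be already segmented, and a real system would calibrate such thresholds per lighting condition and region.

```python
# Toy ripeness classifier based on mean RGB intensity, in the spirit of [2].
# Assumes `fruit_pixels` is an (N, 3) uint8 array of RGB values for pixels
# already segmented as fruit. All thresholds are invented placeholders.
import numpy as np

def classify_ripeness(fruit_pixels: np.ndarray) -> str:
    mean_r, mean_g, mean_b = fruit_pixels.reshape(-1, 3).mean(axis=0)
    # Unripe bunches are dominated by dark violet/black tones (low red);
    # ripe ones by reddish-orange (higher red relative to green).
    if mean_r < 90:
        return "unripe"
    if mean_r < 130:
        return "underripe"
    if mean_r - mean_g > 60:
        return "overripe"
    return "ripe"

# Example with a synthetic reddish-orange patch (illustrative only):
patch = np.full((100, 3), (150, 100, 50), dtype=np.uint8)
print(classify_ripeness(patch))  # -> "ripe"
```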
A. Nureize and J. Watada [5] aimed to build a fuzzy multicriteria assessment model that characterizes the criteria of oil palm fruits, and to choose the fuzzy weights of these criteria on the basis of a fuzzy regression model. Z. Abdullah, L.C. Guan and B.M.N. Mohd Azemi [6] assessed the quality of a standard Elaeis guineensis oil palm using a computer vision model in order to evaluate and grade the oil palm bunches in an automated production system. The feature considered was color, and the grading criteria were based on those of the Palm Oil Research Institute of Malaysia. The relationship between oil content and color was investigated in HSI (Hue, Saturation and Intensity) space for ripeness determination. Manza R.R., Gaikwad B.P. and Manza G.R. [7] note that some researchers have used different edge detection operators and shown that these can be used to categorize mango fruits in order to evaluate quality and grade; the same concept can be used in oil palm fruit grading. Their Simulink-based adjustable framework is intended for rapid simulation, implementation and verification of video processing systems, and in this work a comparative examination of various mango video edge detection techniques such as Sobel, Prewitt and Canny is presented. Figure 4 gives an overview of the work of different researchers. Ahmed Jaffar, Roseleena Jaafar and Nursuriati Jamil [9] introduced a computer-aided photogrammetric approach which correlates the color of the oil palm fruits with their ripeness and eventually sorts them physically. The system and approach developed in this work established a complete automated grading arrangement for oil palm FFB and thereby drastically increased grading efficiency. Fatma Susilawati Mohamad, Azizah Abdul Manaf and Suriayati Chuprat [10] used distance measurements for histogram-based oil palm ripeness detection. In this study, the HSV color model was investigated for its capability of representing colors; four distance measurements were chosen and compared. Sunilkumar and D.S. Sparjan Babu [11] undertook a study to evaluate the different maturity stages of oil palm fruits in terms of color and oil content, establish their interrelationship, and develop prediction models based on color attributes so that non-destructive ripeness assessment could be achieved. A comparison of the RGB and L*a*b* color models was made; the L*a*b*-based model would be ideal for incorporation in instruments such as colorimeters for the purpose of color-based grading of FFB and prediction of oil content. Another study explores the connection between the oil content of the oil palm fruit and its color; the finding is helpful for deciding the ripeness of oil palm for harvesting and will ultimately be used in developing a commercial color meter to gauge fruit ripeness using a non-contact measurement technique [12]. Choong et al. [13] found that the oil content of the mesocarp tissue has a direct association with the red, green and blue color bands. By running intensive tests, it was observed that oil content correlated with the red color band, with a regression value of 0.86. This finding may be valuable for determining the ripeness of oil palm for harvesting and for use in the operation and control of continuous sterilizers in the palm oil process. However, a later study by Ghazali et al. [14] discovered that the red components for the unripe and underripe categories were almost the same. The summary of research findings is shown in Figure 5.
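The red-band correlation reported by Choong et al. [13] amounts to fitting a simple linear model of oil content against mean red intensity. The sketch below shows such a fit with numpy; the arrays are illustrative placeholders, not data from any of the cited studies.

```python
# Linear fit of mesocarp oil content vs. mean red-band intensity,
# illustrating the kind of relationship reported in [13] (r ~ 0.86).
# The arrays below are invented placeholder values, not measured data.
import numpy as np

red_mean = np.array([ 80.,  95., 110., 125., 140., 155., 170.])
oil_pct  = np.array([12.0, 17.5, 22.0, 28.5, 33.0, 40.5, 44.0])

slope, intercept = np.polyfit(red_mean, oil_pct, 1)
r = np.corrcoef(red_mean, oil_pct)[0, 1]
print(f"oil% ~= {slope:.2f} * red + {intercept:.1f}, correlation r = {r:.2f}")

# Predict oil content for a new fruit with mean red intensity 132:
print(f"predicted oil content: {slope * 132 + intercept:.1f}%")
```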
Test results show that Fuzzy Moving K-means [15] classified a remote sensing image more accurately than three other algorithms. Rather than using a single-feature clustering algorithm, [16] introduces a multiple-feature clustering algorithm with three features per pixel, namely pixel intensity, distance from the center of the spot, and the median of the surrounding pixels.

CHALLENGING ISSUES IN PALM FRUIT GRADING

After an initial examination of the literature it is found that there are many issues which must be discussed and sorted out to get the best results. Several challenging issues were not considered in past research. There are situations where the fruit is damaged by pests, because of which the fruit color may change and lead to wrong grading of the fruit. There are different factors which need to be considered when the grading process is automated. Some of the key factors identified are shown in Figure 6 (Figure 6. Different parameters to be considered in palm oil fruit grading):

1. Color model: Some researchers have argued that the RGB color model or the HSI color model is best for grading oil palm fruit. But the literature survey shows that the color model to be used depends on the environmental conditions, which are unpredictable. Also, the color of the fruit differs and varies from region to region, so there is a need for investigation and detailed study. The purpose of a color model is to facilitate the specification of colors in some standard, generally acceptable way.

2. Camera specification: When the image is captured from the tree for real-time processing, the camera resolution and pixel data also play a major role in deciding the complexity of the algorithm to be designed.

3. Camera location: The location of the camera also plays a very important role in decision making, because the color of the fruit is not uniform from base to apex. The location, orientation and mode of operation of each camera need to be carefully chosen to ensure a well-covered scene.

5. Environmental conditions: The system should work in all environmental conditions, irrespective of cloudy or sunny weather.

6. Species type: Until now no research has been done on categorizing oil palm fruits into their three varieties, Dura, Tenera or Pisifera.

7. Detecting diseases/pests: Detecting the type of disease or pest affecting the oil palm fruit is also very important for farmers, so that precautions can be taken against the pest. Figure 7 shows different diseases and pests. The literature survey shows that automated detection of diseases and pests has not yet been attempted by researchers.

8. Exocarp color: Exocarp color varies from region to region. A thorough study is required to find out the different colors in different regions; the mesocarp color cannot be generalized. Figure 8 shows the relationship between color and the category of fruit.

9. Fruit category: Some researchers have taken only three categories, namely unripe, ripe and overripe. In fact there are a total of four categories: underripe, unripe, ripe and overripe.
10. Percentage of FFA: Finding the percentage of free fatty acid (FFA) content of the fruit is also very important in automated grading; the higher the FFA content, the less healthy the oil will be. The relationship between fruit category and FFA content is shown in Figure 9.

11. Number of detached fruits: This is the main feature used in manual grading, so there is a connection between the number of fruits fallen from the tree and the grade; until now this feature has not been studied. This practice has persisted to this day because it is feasible even when the tree is tall. It can be observed from Figure 10 that the number of detached fruitlets has a direct relationship with the FFB category (the data are compiled from statistical records of Godrej Agrovet Pvt. Ltd., Goa). Manual harvesting works from the number of detached fruits fallen on the ground from the tree. Since the trees are very tall, and farmers cannot climb them because of the thorns, this is the only feasible method for them to find the correct harvest time of oil palm fruits (a minimal decision sketch based on this criterion is given after the conclusion below).

CONCLUSION

In this paper we presented an overview of the various image processing and fuzzy logic methods used for oil palm fruit grading and discussed the challenges that must be met to obtain accurate and valid results. Even though some researchers claim that the algorithms and methods they propose are good and flawless, there is still scope for proper procedure and improvement. So far no model has been specified which can be used as a blueprint for implementation. Since the color of the fruit changes from region to region, the same method cannot be adopted in all cases, so there is a need for a generalized model built using sound modeling techniques. Along with color, the additional factors stated earlier can be used to increase the accuracy and precision of oil palm fruit grading. Nowadays computer vision systems are used everywhere to reduce error and increase efficiency and productivity, and it is necessary that palm oil extraction mills adopt them. Image processing and fuzzy logic techniques are thus powerful tools which will help in designing an effective machine-vision system for the agricultural domain if the above issues are addressed effectively.
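As a concrete illustration of the manual criterion described in issue 11, the sketch below grades a bunch from the count of detached fruitlets found on the ground. The threshold values are invented placeholders, not numbers from Figure 10; an actual system would calibrate them against records such as the Godrej Agrovet data cited above.

```python
# Toy bunch-grading rule based on the number of detached fruitlets,
# mirroring the manual harvesting practice described in the text.
# Threshold values are hypothetical placeholders, not from Figure 10.
def grade_bunch(detached_fruitlets: int) -> str:
    if detached_fruitlets == 0:
        return "unripe"        # no fruitlets shed yet
    if detached_fruitlets < 5:
        return "underripe"     # shedding has just begun
    if detached_fruitlets < 30:
        return "ripe"          # typical harvest window
    return "overripe"          # heavy shedding, past the optimum

for n in (0, 3, 12, 45):
    print(n, "->", grade_bunch(n))
```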
2019-05-30T23:45:15.860Z
2018-08-06T00:00:00.000
{ "year": 2018, "sha1": "ba8993c645324859a76450e91f5a4dcfedbba513", "oa_license": null, "oa_url": "https://doi.org/10.11591/ijai.v7.i3.pp111-118", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5f30c65fec8f208f97e2080b02dc25ea10b6ecbc", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Computer Science" ] }
119194914
pes2o/s2orc
v3-fos-license
Dynamic electron correlation in interactions of light with matter formulated in b-space

Scattering of beams of light and matter from multi-electron atomic targets is formulated in the position representation of quantum mechanics. This yields expressions for the probability amplitude, a(b), for a wide variety of processes. Here the spatial parameter b is the distance of closest approach of incoming particles traveling on a straight line with the center of the atomic target. The correlated probability amplitude, a(b), reduces to a relatively simple product of single-electron probability amplitudes in the widely used independent electron approximation limit, where the correlation effects of the Coulomb interactions between the atomic electrons disappear. As an example in which a(b) has an explicit dependence on b, we consider transversely finite vortex beams of twisted photons that lack the translational invariance of infinite plane-wave beams. Some experimental considerations and future applications are briefly considered.

I. INTRODUCTION

Physics describes complex objects and processes in terms of simpler ones, successfully to some extent. There is a wide range of systems composed of light and matter that are well described, both mathematically and conceptually. However, descriptions of dynamic processes, such as interactions of light with atoms and atoms with atoms, usually depend on understanding the underlying static components. As a consequence less progress has been made in describing dynamic processes with light and matter that have subsystems of atoms involving more than one active electron, even though most systems we encounter are in this category. In this paper we consider dynamic multi-electronic atomic systems interacting with beams of light and matter. We begin with the specific example of a vortex beam of twisted photons interacting with a one-electron atom, and extend this to interactions of light and matter with targets that contain more than a single electron. The variation of twisted vortex beams in the direction transverse to the beam axis leads to an explicit dependence on the translational distance transverse to the beam axis, b, between the center of the vortex beam and the center of the target. This dependence on b is generally absent in descriptions using plane-wave photons. Effects of dynamic electron correlation have been widely observed in interactions of atoms and molecules with both light [1-5] and particle [6-9] beams. Of the formulations of the many-body problem available [8-13], the one we employ [13] has been used to describe electron correlation dynamics in collisions of multi-electron atoms with charged particle beams, as well as interactions with plane-wave photon beams in the few-eV to few-keV regime. The widely tested formulation we follow was developed in position space. Most (but not all) experiments and current applications involving light interacting with atoms [1,2] have utilized optical photon beams, such as laser beams, where the wavelength of the photons, λ ∼ 5×10^-7 m, is quite large compared to the atomic size, a_T ∼ 5×10^-11 m. Under these experimental conditions it is somewhat simpler, conceptually and mathematically, to work in a momentum-space representation, describing light as wave-like rather than particle-like, as discussed below. Nevertheless, in this paper we work in 'b-space' (position space) rather than 'q-space' (momentum space). In quantum mechanics both representations give the same observable results.
We work in b-space here because it follows an available formulation, provides a natural extension of semiclassical methods used in optical texts [14], can be used in interactions involving x-rays, and offers new insight into the nature of quantum dynamics.

In Sec. II, we formulate electron correlation dynamics in interactions of light and matter with multi-electron atomic systems. This includes plane-wave beams of light, as well as recently formulated twisted vortex beams [15-17], interacting with single-electron atoms [18,19]. The twisted vortex beams are more complex than plane-wave beams; they may carry an orbital angular momentum not present in plane-wave beams. Moreover, the asymptotic angle of the vortex may be adjusted macroscopically and serves as an additional control parameter that affects the interactions of atoms with vortex beams [19,20] (see Fig. 1).

FIG. 1: Sketch in the b̂-ẑ plane of an atom interacting with a Gaussian or Gauss-Laguerre vortex beam [18]. The maximum of the Gaussian envelope (shown here) of the beam intensity distribution is along the beam axis, and the envelope is cut off when its intensity falls by a factor of 1/e² ≈ 0.135 of its maximum. This defines the waist size, w(0), of the center of the beam. The two other independently variable sizes are the mean radius of the atom, a_T, and the wavelength of the light, λ (not shown here). If the beam is a twisted photon (or electron) beam, it may carry orbital angular momentum, corresponding to a localized photon (or electron) that passes through b as it rotates about the z-axis. The origin of b is arbitrary: it may be either at the center of the beam or the center of the atom, for example. When the twisted vortex beam carries orbital angular momentum (ℓ ≠ 0), the beam has a more complex geometry [16], and has zero intensity along its axis. The atom shown is in its ground state with ℓ = 0. In order to exchange angular momentum with a twisted vortex beam with ℓ ≠ 0, the atomic electron must be in an excited state with a non-zero value of ℓ that matches that of the twisted photon but has an opposite direction. In this case the atom has a more complex structure than that shown here and the electronic wavefunction has a node at the center of the atom.

This additional continuously variable parameter is determined by the waist size of the beam vortex, w(0). Thus we explicitly include three variable-size parameters: the target size, a_T, the projectile wavelength, λ, and w(0). In Sec. III we present relatively simple single-electron calculations for photon beams interacting with a two-state degenerate single-electron atomic target. In Sec. IV, we address some mathematical considerations including the nature of the paraxial approximation [15] that is often employed for vortex beams. We also comment on various experimental considerations including the use of our formulation with various targets such as macroscopic gas cells, molecules, and crystals. Then we address some future applications. Finally, in Sec. V we summarize our main results.

II. FORMULATION

In this paper we consider a beam of photons (or electrons) incident on an atomic target in a well-defined initial electronic state |i⟩. The beam may cause transitions to a particular asymptotic final state |f⟩. An incoming photon carries momentum k_i, while the outgoing photon carries momentum k_f. The momentum transfer is q ≡ k_f − k_i. It is sufficient for our purposes here to consider only elastic scattering where k_i = k_f.
This simplifies our notation, and allows us to put aside the effects of a non-zero minimum momentum transfer, inessential to this paper but straightforward to include when needed [21]. Cross sections and reaction rates for dynamic processes discussed in this paper may generally be described [8-14] in terms of the scattering amplitude as a function of the momentum transfer, f(q).

A. Dual quantum amplitudes in b-space and q-space

The equally useful variable, conjugate to q, is b. In this paper we explore uses of the probability amplitude, a(b), that is conjugate to f(q). The physical meaning of b itself can depend on the size scale of the projectile compared to that of the target. In collisions where the beam is diffuse compared to the target, b describes the transverse displacement of a point-like atom from the axis of the beam [18]. When the transverse extent of the beam is small compared to the size of an atom (e.g. in the case of a tightly focused high-energy x-ray beam), b describes the transverse displacement of the beam from the center of the atom. In this paper we generally regard b as the transverse displacement between the center of a target and the center of a beam, whose axis is taken as the z-axis of the beam-target system. The scattering amplitude in q-space is related to the probability amplitude in b-space by [22],

f(q) = (1/2π) ∫ a(b) e^{−i q·b} d²b ,    a(b) = (1/2π) ∫ f(q) e^{+i q·b} d²q .    (1)

In Fourier transforms [23] such as these, if f(q) is localized in q, then a(b) is delocalized and vice versa. Both yield the same count rates for physically observable reactions [24], as illustrated in Eq. (5) below for the case of total reaction cross sections. Since q is a wave number, Eq. (1) may be applied to either classical or quantum wave amplitudes.

Relative size matters. For optical photon beams interacting with atoms, a(b) is generally delocalized compared to the size of a much smaller atom, and f(q) is localized in q-space, while for x-rays (or beams of fast electrons or protons), a(b) may be localized (i.e., the scattering is approximately particle-like). In the optical case, b describes the location of a well-localized atom within a larger photon beam, whose size is determined by the waist size, w(0), of the beam and the wavelength, λ, of the photon. In the case of hard x-ray beams, b describes the location of a well-localized middle of the beam trajectory within the atom, whose larger size, a_T, is often defined in terms of the Bohr radius, a_0. In any description the physical interpretation of b may change as relative size scales change. In general the size of any system of (possibly overlapping) objects with distinctly different sizes is determined by the size of the largest object. In scattering of twisted vortex photons with atoms, the distance to which b scales emerges automatically in the scattering amplitude [18] as the larger [25] of w(0) or a_T. In Sec. IV B below, we will address how a(b) may be used to describe interactions with beams that fall off with distance in the transverse direction from the beam axis, and thus have an explicit dependence on b. We also briefly address some aspects of twisted vortex beams of photons and electrons [18]. But next we show how a(b) may be used to describe interactions of photons with atomic matter in such a way that one may apply previous formulations of electron dynamics to interactions of light with matter. As a result, cross sections and reaction rates for a larger number of processes may now be calculated, including processes that involve the transition of more than one electron, as well as processes that exhibit the effects of twist in Gauss-Laguerre vortex beams.
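A quick numerical way to see that the two representations carry the same observable content is to check Parseval's relation for the pair in Eq. (1) on a grid. The sketch below does this with numpy for a Gaussian test amplitude; the symmetric 1/2π convention is an assumption consistent with the pair as written above, and the grid sizes are arbitrary.

```python
# Numerical check that  integral |a(b)|^2 d^2b = integral |f(q)|^2 d^2q
# for the symmetric 2D Fourier pair of Eq. (1), using a Gaussian a(b).
import numpy as np

N, L = 256, 20.0                  # grid points and box size (arbitrary units)
dx = L / N
x = (np.arange(N) - N / 2) * dx
bx, by = np.meshgrid(x, x)
a = np.exp(-(bx**2 + by**2) / 2.0)          # test amplitude a(b)

# f(q) = (1/2pi) integral a(b) exp(-i q.b) d^2b, approximated by an FFT
f = np.fft.fft2(a) * dx * dx / (2 * np.pi)
dq = 2 * np.pi / (N * dx)                   # q-grid spacing

norm_b = np.sum(np.abs(a)**2) * dx * dx     # integral |a(b)|^2 d^2b
norm_q = np.sum(np.abs(f)**2) * dq * dq     # integral |f(q)|^2 d^2q
print(norm_b, norm_q)                       # the two totals agree
```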
B. Basic formulation of interacting systems in b-space

Since properties of most materials are usually determined by the state of the composite electrons, we seek the dynamic electronic wavefunction, ψ_el, which may be found by solving the time-dependent Schrödinger equation [26],

iħ ∂ψ_el/∂t = H ψ_el .    (2)

Before the interaction occurs, we assume the electronic state is a known eigenstate |i⟩ of the unperturbed atomic Hamiltonian. The interaction, H_int, changes this state into a superposition of states. When the interaction has died away, the wavefunction is asymptotically in a superposition of the complete set of eigenstates, namely,

ψ_el → Σ_s a_s(b) |s⟩ e^{−i E_s t/ħ} .    (3)

Using orthonormality of the complete set of basis states, the probability amplitude that the electronic system is in a particular final state, ⟨f|, is

a_fi(b) = ⟨f|ψ_el⟩ (t → ∞) .    (4)

The observable probability that an electron made a transition from a particular initial state |i⟩ to a possibly different particular final state |f⟩ is P_fi(b) = |a_fi(b)|². The total cross section for this particular transition is

σ_fi = ∫ |a_fi(b)|² d²b = ∫ |f_fi(q)|² d²q ,    (5)

where the last step follows from Parseval's relation for Fourier transforms [24]. A conventional, straightforward method of evaluating the probability amplitudes is to solve the differential equations [14,26] arising from Eq. (2),

iħ ∂a_f(b,t)/∂t = Σ_s ⟨f|H_int|s⟩ e^{i E_fs t/ħ} a_s(b,t) ,    (6)

where E_fs = E_f − E_s is the energy difference between the atomic states |f⟩ and |s⟩. The solutions for the probability amplitudes, a_fi(b), are often found using the semiclassical approximation for the trajectories of the incoming particles, e.g., R(t) = b + v t ẑ. For a one-electron atom interacting with a photon, H_int = (e/mc) A·p. Here A is the vector potential of the photon field at the location of the atomic electron [27], and p denotes the momentum operator of the atomic electron. The formulation above is a standard formulation used for single-electron atomic targets. For a multi-electron target the Hamiltonian may be written as

H = H_0 + H_int ,    (7)

with

H_0 = Σ_{j=1}^{N} ( p_j²/2m − Z_T e²/r_j ) + Σ_{k>j} e²/|r_k − r_j| ,    (8)

and

H_int = (e/mc) Σ_{j=1}^{N} A_j · p_j .    (9)

Here the mass and charge of an electron are denoted by m and −e respectively, c denotes the speed of light, Z_T e is the charge of the target nucleus, N is the number of electrons in the target (N = Z_T for a neutral atom), and A_j is the vector potential at the location of the j-th electron. Thus, the formulation for multi-electron targets [13] is essentially the same as that outlined in Eqs. (2)-(6) for single-electron systems. However, detailed calculations become rapidly more difficult as the number of interacting electrons increases [28]. The difficulty, mathematically and conceptually, that arises in solving Eq. (2) using the multi-electron Hamiltonian of Eq. (7) is attributable to the inter-electron interactions, e²/|r_k − r_j|, in Eq. (8). In the limit where inter-electron Coulomb interactions may be replaced by a mean-field approximation [29], i.e., −e²/|r_j − r_k| → v(r_j), the amplitudes a(b) (as well as the corresponding amplitudes f(q)) for various processes reduce to simple products of single-electron transition amplitudes, and calculations are much easier to deal with. Correlation is mathematically characterized by a probability [30,31] for a process subject to N ≥ 2 conditions such that P_{12...N} ≠ P_1 P_2 ... P_N. That is, only in the widely used uncorrelated independent electron approximation does the probability for an event involving N electrons reduce to a product of single-electron probabilities.
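The independent electron approximation statement above can be made concrete with a toy two-electron example: when the joint amplitude is a product of single-electron amplitudes the probabilities factorize, and an added correlation term breaks the factorization. All numbers below are arbitrary illustrative values.

```python
# Toy two-electron amplitude: in the IEA the joint amplitude is a product
# of single-electron amplitudes, so probabilities factorize; an added
# correlation amplitude breaks the factorization. Values are arbitrary.
import numpy as np

a1 = 0.5 * np.exp(1j * 0.3)      # single-electron amplitudes (arbitrary)
a2 = 0.4 * np.exp(-1j * 1.1)

p_product = abs(a1)**2 * abs(a2)**2     # IEA: P12 = P1 * P2
a12 = a1 * a2 + 0.05                    # small correlation term added
p_correlated = abs(a12)**2

print("IEA        P12 =", p_product)
print("correlated P12 =", p_correlated)   # differs from P1 * P2
```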
A. Photons incident on a degenerate two-state atom

To illustrate our formulation in b-space, we consider a system consisting of a photon beam interacting with an atom. Our photon beam has an electric field given by E(x, y, z; t) = E(b, z) cos(2πt/T), corresponding to monochromatic light with oscillation period T. The atomic transition involves two states, an initial state |i⟩ = |1⟩ and a final state, |f⟩ = |2⟩. We focus on events where the state of the atom changes, i.e., |f⟩ ≠ |i⟩. Examples of such dynamic systems include a plane-wave beam, a plane-wave beam with a Gaussian envelope, or a twisted vortex photon incident on an atom, which undergoes a transition involving an exchange of orbital angular momentum with the beam, e.g. a 2s-2p atomic transition involving an exchange of angular momentum with the photon. As needed, one may employ the paraxial approximation so that in the scattering region the light beam is approximately parallel to the beam axis, ẑ, and the intensity, which may vary with b, is independent of z.

The interaction operator, H_int, may assume various forms. For photo-annihilation via the dipole interaction, the coupling matrix element is H_12(b) ∝ E(b) z_12, where z_12 is the dipole matrix element of the atomic transition. For Compton scattering by x-rays, H_int = Σ_j (e²/2mc²) A_j·A_j, and the matrix element, H_12(b), includes higher multipole components, and is related to that of scattering by high-energy electrons and protons [32]. The vector potential, A, is linearly related to the electric field, E, of the photon beam, and the beam intensity, I(b), is proportional to |E|². For this two-state system the coupled equations (6) reduce to

iħ ∂a_11/∂t = H_12(b,t) e^{i E_12 t/ħ} a_12 ,    iħ ∂a_12/∂t = H_12(b,t) e^{−i E_12 t/ħ} a_11 .    (10)

As a specific example, we now consider the degenerate limit in which E_1 − E_2 → 0. In this limit Eqs. (10) have algebraic solutions [33], namely,

a_11(b,t) = cos φ(b,t) ,    a_12(b,t) = −i sin φ(b,t) ,    φ(b,t) = (1/ħ) ∫_0^t H_12(b,t′) dt′ .    (11)

We have chosen the atomic energy E_1 as the zero-point energy of the system. The probability for a transition from |1⟩ to |2⟩ is P(b,t) = |a_12(b,t)|². In Figs. 2 and 3 we plot this probability at a fixed b as a function of time. The plot shown in Fig. 2 is a typical result when H_12(b)T/ħ ≥ π/2, so that the interaction is non-perturbative and the transition probability may reach unity. In the non-perturbative regime the presence of many, often complex, oscillations is common. An example where the probability never reaches unity is shown in Fig. 3 (dashed curve), where H_12(b)T/ħ = 1/2. Special cases with relatively simple oscillations in time are shown in Fig. 3. In addition to the perturbative case where P(b,t) never reaches unity, two special cases are shown. These special cases occur when H_12(b)T/ħ is an integer multiple of π/2. When the integer multiple is even, maximum population transfer from state |1⟩ to state |2⟩ is relatively short lived. However, if the integer is odd, P(b,t) has a broad maximum [33] around t = n_odd T/4. In cases when the transition probability is sufficiently small, so that first-order perturbation theory conditions apply, a perturbative approach using Eqs. (11) could be used. Second-order calculations are now available [9]. At lower energies, coupled-channel calculations are available [13]. Effects of electrons on partially stripped ionic, or neutral atomic, beams are discussed below. An explicit expression for the probability amplitude, a(b), has recently been derived for atomic hydrogen [18].
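The time dependence described above is easy to reproduce numerically. The sketch below evaluates the solution (11) for a coupling H_12(b) that is effectively constant over the interval, which reproduces the thresholds quoted in the text; the coupling strengths are arbitrary illustrative choices, and the actual time dependence behind the paper's figures may differ.

```python
# Transition probability P(t) = sin^2(H12 * t / hbar) for a degenerate
# two-state atom with an effectively constant coupling, per Eq. (11).
# Units with hbar = 1; the coupling values below are illustrative only.
import numpy as np

T = 1.0                                   # interaction time scale
t = np.linspace(0.0, T, 1000)
for H12_T in (0.5, np.pi / 2, np.pi, 2 * np.pi):  # values of H12*T/hbar
    P = np.sin(H12_T * t / T) ** 2
    print(f"H12*T/hbar = {H12_T:.3f}: max P = {P.max():.3f}")
# Only couplings with H12*T/hbar >= pi/2 drive P all the way to unity,
# matching the non-perturbative criterion stated in the text.
```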
A. Mathematical considerations

In our experience, mathematical expressions for the wave-like scattering amplitude, f(q), are a little simpler than for the corresponding probability amplitude, a(b). On the other hand the probabilities, |a(b)|², may be more intuitive to a wider audience, and the unitarity restriction, |a(b)|² ≤ 1, can be useful in verifying the validity of specific calculations. To our knowledge there is no formulation of electron correlation dynamics in q-space, but we expect it to be straightforward. In the limit of uncorrelated, independent electrons both f(q) and a(b) are products of single-electron amplitudes [30,31,34], so the corresponding observables are products of one-electron observables.

We wish to draw attention to the fact that different physical size scales emerge naturally in f(q) (and consequently in a(b)) when the scales characterizing various parts of the system change [18]. Thus there is no one scale more fundamental than another in this description. The parameter, b, used to locate an object in space, is a chameleon-like mathematical parameter whose physical significance conceptually changes with different relative scales.

The paraxial approximation [15] used for twisted vortex photons in our previous paper [18] simplifies the scattering problem by decoupling beam trajectories from the x-y plane. In this approximation, particle and ray trajectories are approximated as parallel to the axis of the macroscopic beam [35], which is taken as the z-axis with b in the x-y plane (as is q for forward scattering). In both the wave and particle limits, the trajectory of a photon may be regarded as a straight line along z = ct. This may also be applied to electron, proton, and some ion beams in the limit that Coulomb scattering of the incident charged projectile with the target can be ignored [36]. In the example of transfer of orbital angular momentum between the beam and the target [18], in this limit the direction of spin of an atom is reversed (like reversing the spin of a boat's propeller) by exchange of the direction of spin with the twisted photon, where the joint photonic-atomic spin axis is the beam axis, which differs in general from the axis of the photon's trajectory [37]. Mathematical descriptions of twisted vortex beams that avoid the paraxial approximation are available [38,39], but they are more complex both mathematically and conceptually.

B. Experimental considerations

Although there presently exist some experimental results on two-electron transitions due to weak interactions of light with few-electron atomic targets, over a range of wavelengths from visible light to x-rays above 10 keV [1-5], many more experiments that detail how multi-electron dynamics works are possible, including experiments using plane waves as well as twisted vortex photons.

As noted at the end of Sec. I, for beams of twisted photons and electrons incident on atomic targets (see Fig. 1), there are three size (or distance) scales: a_T, λ, and w(0). The waist size (minimum beam width) w(0) can be related to another useful parameter by w(0) = √(λ z_R/π), where the Rayleigh range, z_R, describes the distance scale on which the vortex beam is approximately parallel to the z-axis, i.e., where the paraxial approximation mentioned above is valid. The macroscopic beam angle varies with the magnitude of the displacement b and the Rayleigh range, z_R, according to tan Θ_V(b) = b/z_R for Gauss-Laguerre vortex beams [18]. Thus, at a fixed value of z_R (and fixed w(0) at a fixed λ), Θ_V(b) can be used to macroscopically control cross sections and reaction rates by choosing different Θ_V(b) within the beam [40] to vary b. With x-rays this might be used to select specific regions within an atom.
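The relations w(0) = √(λ z_R/π) and tan Θ_V(b) = b/z_R are simple to evaluate; the sketch below works through them for an optical-scale example, with the wavelength and waist chosen arbitrarily for illustration.

```python
# Gauss-Laguerre beam geometry: Rayleigh range from the waist size, and
# the vortex angle Theta_V(b) = arctan(b / z_R). The example numbers are
# illustrative choices, not values from the paper.
import math

lam = 500e-9                     # wavelength: 500 nm (optical)
w0 = 2e-6                        # waist size: 2 micrometers
z_R = math.pi * w0**2 / lam      # Rayleigh range, inverting w0 = sqrt(lam*z_R/pi)
print(f"z_R = {z_R*1e6:.1f} um")

for b in (0.5e-6, 1e-6, 2e-6):   # transverse displacements
    theta = math.atan(b / z_R)
    print(f"b = {b*1e6:.1f} um -> Theta_V = {math.degrees(theta):.2f} deg")
```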
In scattering of the beam from the atomic target (see Fig. 1), the incoming and outgoing beams, differing by the scattering angle, Θ, share the same impact parameter, b. Figure 1 shows forward scattering at Θ = 0. It is possible to do experiments using macroscopic gas cells [18], so long as the size of the cell along the beam axis, ∆z, is not large compared to the Rayleigh range, z_R. That is, the vortex beam need not be focused at the center of the atom so long as the condition required by the paraxial approximation (discussed above) is satisfied [35]. Our description generally requires single-collision conditions experimentally, namely that the target be sufficiently diffuse that the effect of scattering from more than a single atom by a single projectile is not significant.

We point out that a so-called 'twist factor' can be used to convert data for beams of plane-wave photons to data for twisted vortex photon beams, and could be useful in designing experiments. This is relatively easy to calculate, although it is presently described in q-space [18]. We note in passing that a virtual impact method has been developed to describe the observed crossover from particle-like to wave-like behavior in collisions of beams of ions carrying electrons scattering from atomic targets [42]. This involves additional size scales; the number of such scales grows as the number of electrons on the incoming ion increases.

C. Future applications

In regard to future applications, we call attention to the emerging fields of twisted vortex beams [17], quantum information [43], and quantum control [44]. Twisted beams are more complex than plane-wave beams, offering new features such as orbital angular momentum and macroscopically adjustable parameters (Rayleigh range [19] and rotational acceleration [20]), which can be used to control transfer of information and reaction rates in interactions of atoms with light and matter. Opportunities may also occur in strongly interacting systems, such as beams interacting with atoms and molecules in a regime where |a(b)|² ≈ 1, where full control can occur [12,33,44]. In this paper we have addressed the description of multi-electron transitions in two-dimensional dual b and q spaces. By applying this approach in dual time and energy spaces, it might be possible to interpret recent FAST experiments [45] that probe how quantum processes are both connected and separated, i.e., correlated, in time [46].

V. SUMMARY

We have mathematically formulated electron correlation dynamics in scattering of light and matter from multi-electron atomic targets by extending an existing formulation for scattering of protons, electrons, ions and plane-wave photons done in a position representation [13] to photon beams that vary (e.g. decrease in intensity) in directions transverse to the beam axis. The key parameter in this representation is the position, b, that specifies the minimum distance between the centers of the light beam and of the multi-electron atomic target. We have presented results of relatively simple calculations that illustrate b-dependence in transition probabilities for photon beams interacting with two-state degenerate single-electron atomic targets. We have more generally discussed interactions of twisted vortex photon beams with multi-electron atomic targets.
Because they are neither monotonic in b nor necessarily isotropic in b̂, vortex beams provide a relatively rich dependence on b in scattering cross sections and reaction rates in these processes.

VI. ACKNOWLEDGMENTS

We acknowledge useful discussions with M. Frow, J. Wolff, J. Eberly, and Z. Chang. This work was supported in part by the NSF under Grant PHY-1205788.
2015-09-02T17:25:27.000Z
2015-06-21T00:00:00.000
{ "year": 2015, "sha1": "f634e12094a99525e78153ad2c2e7095b97eaac1", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.92.032702", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "f634e12094a99525e78153ad2c2e7095b97eaac1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
243393971
pes2o/s2orc
v3-fos-license
Adoption of the Wealth Index Approach in Analyzing the Determinants of Household's Poverty in Tanzania

This study examines the determinants of poverty in Tanzania using the 2015 Tanzania Demographic and Health Survey data. An ordered logit model was used to model the determinants of poverty, and the study revealed that age, sex, household size, level of education, marital status, type of residence and access to financial services are significant in explaining poverty status. All of the mentioned variables are significant at 1% except the sex of the household head, which is significant at 5%. Since reducing poverty is a global goal, the study recommends that the government invest more in education to improve the knowledge and skills of individuals, and improve financial services and financial inclusion, especially in rural areas, to eradicate poverty and remove the rural-urban disparity.

people who are poor have managed to attain this goal. Castañeda et al. (2016) argue that in many parts of the world the ratio of economically dependent members per working-age adult is high because the poor tend to live in larger households. This ratio has been declining in many regions of the world, unlike in SSA, where the fast rate of population growth increases the total poor population and hence the dependency ratio, because growth is less effective in reaching the poor population residing in different areas of the region. Also, Adeyemi, Ijaiya, & Raheem (2009) argue that the reasons for poverty in SSA include the increased rate of population growth, inflation, external debt servicing, lack of safe water, low economic activity, gender discrimination, ethnic and religious conflict and HIV/AIDS. The World Bank Report adds that the speed of extreme poverty reduction in SSA is slowing down because of slower-than-average economic growth, concentration in the capital-intensive sector, higher-than-average population growth, low levels of human capital and access to basic infrastructure, and increased levels of fragility and conflict. The current COVID-19 pandemic has also slowed the rate of poverty reduction: over the past 20 years extreme poverty was declining steadily, but due to the pandemic the number of people living in poverty has increased by about 120 million and was expected to rise further to 150 million by the end of 2021 (World Bank, 2021).

Since independence, Tanzania's development process has centered on human development by focusing on the major development problems of ignorance, disease and poverty (URT, 2000). This focus is made possible through a strong economy, which the government is trying to build using various strategies to ensure that the country attains middle-income status and eradicates the existing level of extreme poverty by 2025. However, there is a mismatch of pace between economic growth and poverty reduction, because poverty continues to decline at a low pace compared to the rate of economic growth. This emerges from the fact that the sectors with a high contribution to Gross Domestic Product (GDP) contribute little to poverty reduction, and vice versa. For instance, the mining, communication and transport sectors contribute much to GDP but little to employment, whereas the agriculture sector contributes an average of 30 percent to GDP but more to poverty reduction because it employs many people, more than 66.3 percent.
The effort in fighting poverty is evidenced by the downward trends in multidimensional and extreme poverty, which fell from 64 percent and 31.3 percent in 2010 to 47.4 percent and 17.7 percent, respectively, in 2015 (URT, 2018). This was achieved because the government used policies and strategies that increased access to electricity and rates of ownership of assets including mobile phones, radios and motorcycles. Availability of electricity in many areas and possession of mobile phones and motorcycles have accelerated the pace of poverty reduction through self-employment for many poor people — establishing new ventures that require only a small amount of capital, conducting online business through mobile phones, and transporting passengers and goods using motorcycles (bodaboda) — as well as through improvements in the provision of social services such as education, health and water. The reduction of multidimensional poverty is in line with the target of the National Five Year Development Plan (FYDP II), which is geared to reduce the Multidimensional Poverty Index (MPI) to 38.4 percent by 2020/21 and ultimately to 29.2 percent by 2025/2026 (URT, 2018). The World Bank announced that Tanzania had been upgraded from low to lower-middle income status from July 1, 2020, a milestone that had been planned for 2025. This tremendous achievement is due to the acute effort of the Government of Tanzania in building strong economic performance of over 6% real gross domestic product (GDP) growth on average for the past decade. Tanzania's GNI per capita increased from $1,020 in 2018 to $1,080 in 2019, which exceeds the 2019 threshold of $1,036 for lower-middle income status. Discipline in financial expenditure and prevailing peace and tranquility have also helped the country to earn the middle-income status.

This paper aims at analyzing the determinants of poverty in Tanzania using Demographic and Health Survey data and the wealth index approach. Analyses of the determinants of poverty by many researchers have mostly focused on household income, expenditure and consumption, which rely mostly on economic factors and single-dimension measures. According to Gachanja & Kinyanjui (2016), these measurement variables are inherently inaccurate for analyzing poverty status in developing countries, unlike in developed countries. Hence, analysis based on household income, expenditure and consumption fails to capture all aspects necessary in determining the poverty level of households.

Review of Literature

The empirical literature on the determinants of poverty is well established in the country as well as around the world. Adeyemi et al. (2009) examined the determinants of poverty in Sub-Saharan Africa using a multiple regression analysis technique for 48 countries. The results showed that many SSA countries have a low level of development, or a high poverty rate, because of increases in the rate of population growth, inflation and external debt servicing, lack of safe water, low economic activity, gender discrimination, ethnic and religious conflicts and the prevalence of HIV/AIDS. A study by Kabuya (2007) points out other causes of poverty in Africa, which include income inequality, conflicts, location, natural disasters, ill health and disability, inheritance of poverty, education and skills, as well as gender discrimination.
Gender discrimination is in line with the results of DFID (2005), which argues that social exclusion causes poverty because excluded people do not participate equally in social activities. Other researchers add that poverty can be caused by lack of income and productive resources sufficient to ensure sustainable livelihoods; hunger and malnutrition; ill health; limited or no access to education and other basic services; increased morbidity and mortality from illness; homelessness; inadequate, unsafe and degraded environments; social discrimination and exclusion; and lack of participation in decision making (WB, 1990; UN, 1995; WB, 2001). Other factors identified as causing poverty include inadequate access to employment opportunities; physical assets such as land, capital and credit; means of supporting development; markets and assistance for people living in marginal areas and victimized by transition poverty; and lack of participation (Obadan, 1997). Furthermore, Korf et al. (2005) found that poverty is linked to lack of resource endowments such as oxen, land, and human capital. Narayan et al. (2000a) explain that poverty is caused by two main sets of factors: structural causes and traditional causes. Structural causes include limited resources, lack of skills, locational disadvantage and other factors inherent in the social and political set-up. Traditional causes, on the other hand, consist of natural calamities such as drought and man-made disasters such as wars and environmental degradation, among others.

Majeed and Malik (2014) employed a logistic regression technique in Pakistan to examine household characteristics and personal characteristics of the household head as determinants of poverty. They revealed the importance of education in poverty reduction: their study discovered that poverty is greatest among less literate households and declines as the education level increases. Human capital accumulation plays a great role in the development process and in the reduction of poverty (Chikelu, 2016) through improved cognitive and non-cognitive abilities, skills and health of the labor involved in the development process. Many countries are now investing in human capital development due to the role it plays in economic growth and development as reflected in poverty reduction. Human capital significantly reduces the chance of being poor (Mok, Gan, & Sanyal, 2007). Obadan (1997) points out that low endowment of human capital and destruction of natural resources are among the causes of poverty. According to Coulombe & Mckay (1996), a low level of education significantly increases the probability of a household being poor. Zuluaga (2002) adds that education improves decisions and behavior regarding housing and access to credit facilities, thus allowing people to escape and avoid poverty.

Household heads play an important role in poverty reduction depending on the way they manage their households, which is also influenced by their level of education. When heads of households attain higher education — that is, when the schooling of household heads increases — the level of poverty is much lower, as increased schooling has a positive impact on productivity and earnings, which is a significant factor in poverty reduction (Tilak, 2002; Abuka, Atingi-Ego, Opolot, & Okello, 2007; Al-Samarrai, 2007).
Okojie (2002) and Bundervoet (2006) analyze the importance of the head of the household in poverty reduction, asking whether male-headed or female-headed households suffer more from poverty. The results revealed that the incidence of poverty, the poverty gap and poverty severity are more prominent in female-headed than in male-headed households. These results comply with those of Zuluaga (2010), who argues that female-headed households are likely to have less income than male-headed households, which signifies higher rates of poverty. Horrell & Krishnan (2006) conducted a study to analyze poverty and productivity in female-headed households in Zimbabwe and revealed different forms of poverty for female-headed households, which in turn affect their ability to improve productivity, particularly in agriculture. However, the women's empowerment agenda plays an important role in poverty reduction, because empowerment eradicates the conditions that cause powerlessness and dependency by engaging women in different socio-economic activities, inspiring them to participate through action plans and suggestions, and encouraging them to accept responsibilities (Arif, 2014).

Shaukat, Javed, & Imran (2019) conducted a study in Pakistan to assess poverty status using the wealth index as a substitute for household income and consumption. Using DHS data and a multivariate analysis technique, the study revealed that the poverty status of a household is significantly associated with the size of the household, the dependency ratio, and the sex and age of the household head. Moreover, higher education reduces the likelihood of a household being poor, while a higher dependency ratio increases the likelihood of poverty. Regarding the sex of the household head, male-headed households are more likely to have a lower wealth index (more poverty) than female-headed households. Aikaeli (2010), in his study on the determinants of rural income in Tanzania, revealed that rural income was lower in female-headed than in male-headed households, which justifies the existence of a higher rate of poverty in female-headed households. Moreover, he found a need to improve the level of education of rural households, the size of the household labor force, the acreage of land used by households in rural areas and ownership of non-farm rural enterprises, as these are significantly and positively related to the income of rural households in Tanzania.

In Nigeria, Apata, Apata, Igbalajobi, & Awoniyi (2010) employed a probit model on a sample of 500 small farmers to examine the determinants of rural poverty. The study revealed that access to micro-credit from financial institutions, education level, participation in workshops or seminars related to agriculture, livestock assets and extension services significantly influence the probability of a household's chronic poverty. Ermiyas, Batu, & Teka (2019) examined the determinants of rural households' poverty in Dejen, Ethiopia, using primary data collected through a questionnaire from 204 households selected through a multi-stage sampling technique. Initially, they employed the Foster, Greer and Thorbecke (FGT) poverty index to examine the extent and severity of poverty, and found that nearly 49 percent of the sampled households live below the poverty line, with an average poverty gap of 0.083 and a severity gap of 0.065.
Nevertheless, the results on the determinants of rural poverty indicate that household (family) size, sex of the household head, dependency ratio and ownership of livestock are key determinants of rural poverty. Specifically, poverty status is negatively correlated with the number of livestock owned by the household and the sex of the household head, while household size and the dependency ratio show a positive relationship with the poverty status of households.

In Kenya, Gachanja & Kinyanjui (2016) conducted a study to analyze household poverty determinants using Demographic and Health Survey data and the wealth index approach. Both the binary and ordered logistic models used revealed that the years of education of the household head, marital status, household size and region of residence strongly determine the welfare status of the household and are therefore important in explaining the probability of a household being poorest. Mutabazi, Sieber, Maeda, & Tscherning (2015) analyzed the determinants of poverty and vulnerability of smallholder farmers in the rural areas of Morogoro Region, Tanzania. The study used 240 households selected at one point in time in six villages of the region. The researchers employed descriptive and econometric approaches such as Three-Stage Least Squares (3SLS) and the Generalized Method of Moments (GMM) for data analysis, and the results revealed a prevalence of income poverty in the six villages studied. More specifically, income poverty was relatively low in agro-climatically favourable areas compared to less favourable areas. On the other hand, the majority of the households (3/4) were vulnerable, and the pattern of vulnerability tended to overlap with poverty rates in the six villages. Ageing of the household head increased the level of vulnerability, and large households were more income-poor than small ones because of higher consumption expenditures. The results also revealed that farming experience and increased farm size enhanced the level of income and as a result reduced the probability of future vulnerability; higher income contributes to wealth formation through improved access to assets and housing amenities. Lastly, the study found that farmers who perceived climate change as human-induced tended to have significantly higher incomes than their counterparts.

Yusuf et al. (2015) assessed the determinants of rural poverty in Mkinga District (Tanga Region), where 93 percent of the sampled respondents were poor. Gender of the household head, size of land owned by the household, size of the farm used in farming, household size and the dependency ratio were found to be related to the level of poverty. The study recommends that women be empowered to develop a positive attitude towards participating in various economic activities and to utilize the resources around them as optimally as possible. To achieve the goal of reducing poverty in the area, the government has to provide proper infrastructural settings.

Data and Method

3.1 Sample Design

This study uses the 2015 Tanzania Demographic and Health Survey (TDHS) data. The 2015 TDHS is the fifth such survey conducted, the prior surveys being the 1991-92 TDHS, 1996 TDHS, 2004-05 TDHS and 2010 TDHS. In this recent survey, 13,400 households were selected as a representative sample.
The survey was mainly concerned with women and men aged 15-49 years who were usual residents of, or had slept in, the sampled households on the night before the survey; it managed to interview 13,000 women and 3,200 men in this age group. The sampling frame was the 2012 Tanzania Population and Housing Census, with enumeration areas (EAs) across the whole country serving as the sampling units. The sample was selected using stratified sampling, with each region separated into urban and rural areas. In the first stage, 608 EAs were selected (180 from urban areas and 428 from rural areas). In the second stage, 22 households were selected from each selected cluster, giving a total of 13,376 households. However, due to differences in household size among regions, an adjustment was made to select 20 or 21 clusters for all regions except Dar es Salaam (37 clusters) and 15 clusters for each region located in Zanzibar; this is because Dar es Salaam is entirely urban and households in the regions of Zanzibar are large. Empirical Regression Model This study employs the ordered logit model because of the categorical nature of the dependent variable and the households' latent movement from the lowest category to the highest category. The outcome variable $y_i$ takes one of the ordered categories $j = 0, 1, \ldots, 4$ defined below. The model is built on a latent variable $y_i^*$ that is not observable but depends linearly on the vector of explanatory variables $x_i$. This latent variable can be interpreted as the utility of choosing between categories and is modeled as in equation (1):

$$ y_i^* = x_i'\beta + \varepsilon_i \qquad (1) $$

Category $j$ is observed when the latent variable lies between the cut-points $\kappa_{j-1}$ and $\kappa_j$:

$$ P(y_i = j \mid x_i) = P(\kappa_{j-1} < y_i^* \le \kappa_j) \qquad (2) $$

Assuming that $\varepsilon_i$ follows the logistic cumulative distribution, and since the authors are concerned with the probability of belonging to category $j$, the cumulative probability is

$$ P(y_i \le j \mid x_i) = \frac{\exp(\kappa_j - x_i'\beta)}{1 + \exp(\kappa_j - x_i'\beta)} \qquad (3) $$

Rearranging equation (3) yields the log-odds form

$$ \ln\!\left[\frac{P(y_i \le j \mid x_i)}{1 - P(y_i \le j \mid x_i)}\right] = \kappa_j - x_i'\beta \qquad (4) $$

Equation (4) provides the underlying structural model, estimated by maximum likelihood estimation (MLE) using both the dependent and the independent variables. The ordered logit model estimated in this study is expressed as equation (5):

$$ Povstatus_i^* = \beta_1 HHsize_i + \beta_2 age_i + \beta_3 Sex_i + \beta_4 Marstat_i + \beta_5 education_i + \beta_6 Residence_i + \beta_7 FSaccess_i + \varepsilon_i \qquad (5) $$

The following is a description of the variables used in the regression. Dependent Variable This study uses poverty status (Povstatus) as the dependent variable. This variable is categorical with five categories: 0 = 'poorest', 1 = 'poorer', 2 = 'middle', 3 = 'richer' and 4 = 'richest'. This ordering is logical, since household movement from being poorest to being richest makes sense. Explanatory Variables The study uses household size (HHsize), measured as the number of household members, and the age of the household head (age). It also uses the sex of the household head (Sex), which takes the value 1 if the household head is male and 0 otherwise. The marital status of the household head (Marstat) indicates whether he or she is 0 = 'never in union', 1 = 'living together with a partner' or 2 = 'living without a partner'. The level of education of the household head (education) has four categories, namely 0 = 'no education', 1 = 'primary education', 2 = 'secondary education' and 3 = 'higher education'. To account for where the household is located, the type of place of residence (Residence) was considered; it takes the value 1 if the household resides in an urban area and 0 otherwise.
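To make the estimation concrete, the following is a minimal, hypothetical Python sketch of equation (5) using the OrderedModel class from statsmodels; the study itself used STATA, and the data below are synthetic placeholders (Marstat is omitted for brevity).

```python
# Minimal sketch of an ordered logit for the five-category poverty status
# described above; synthetic data, variable names mirror the text.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "HHsize":    rng.integers(1, 15, n),
    "age":       rng.integers(15, 80, n),
    "Sex":       rng.integers(0, 2, n),    # 1 = male household head
    "education": rng.integers(0, 4, n),    # 0..3 as defined in the text
    "Residence": rng.integers(0, 2, n),    # 1 = urban
    "FSaccess":  rng.integers(0, 2, n),    # 1 = uses phone for finance
})
# Synthetic latent index and five ordered categories (0 = poorest .. 4 = richest)
latent = (-0.2 * df["HHsize"] + 0.02 * df["age"] + 0.5 * df["education"]
          + 0.8 * df["Residence"] + 0.3 * df["FSaccess"]
          + rng.logistic(size=n))
df["Povstatus"] = pd.cut(latent, bins=5, labels=False)

cols = ["HHsize", "age", "Sex", "education", "Residence", "FSaccess"]
mod = OrderedModel(df["Povstatus"], df[cols], distr="logit")
res = mod.fit(method="bfgs", disp=False)
print(res.summary())        # betas followed by the cut-points kappa_j
probs = np.asarray(res.predict())   # n x 5 matrix of category probabilities
print(probs[:3].round(3))
```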
Finally, we consider the contribution of financial technology (access to financial services) as a determinant of poverty by incorporating the use of a mobile phone for financial transactions (FSaccess). This is a dummy variable taking the value 0 if the household does not use a phone to access financial services and 1 if it does. Data Analysis This paper uses STATA Version 16 to analyze the data using the ordered logit regression approach. This approach is suitable when the dependent variable is categorical with more than two categories, and it is capable of predicting the probabilities of all possible outcomes based on several selected independent variables (Noor Amira Mohamad, Zalila Ali, & Norlida Mohd Noor, 2016). In addition, the transformation of a household from poorest to richest is logically ordered, following the latent nature of the variable, which implies that the values assigned to each category are no longer arbitrary. The ordered logit regression model is estimated using the maximum likelihood estimation (MLE) approach, assuming independence across observations. This estimation procedure is iterative, with the first iteration being the log likelihood of the 'empty' or 'null' model (i.e., a model without predictors); at each subsequent iteration, predictors are included and the log likelihood increases, as the goal is to maximize the log likelihood (Long, 1997). Table 1 shows the summary statistics for the variables used in the study. Since it is difficult to interpret the statistics for categorical variables, especially those that are not ordinal, the study concentrates on interpreting the dummy variables and the continuous variables used in the analysis. The mean age is 36 years, which implies that the majority of the sampled household heads are mature enough to engage in various economic activities for poverty reduction. The mean number of household members is 7 people, a reasonable number especially in rural areas. The mean value of sex is 0.78, which indicates that, on average, male-headed households outnumber female-headed ones. Moreover, the type of residence has a mean value of 0.23, which is less than 0.5 and implies that the majority of households reside in rural rather than urban areas. The variable FSaccess, which measures access to financial services, has a mean value of 0.56 and a standard deviation of 0.5; this implies that slightly more than half of the households have access to financial services, with no large difference between the numbers with and without access. The standard deviations show no substantial deviation of the observations from their means. Table 2 presents the results, including both the coefficients and the marginal effects with their corresponding standard errors. All variables are statistically significant at the 1% level except the sex of the household head (male), which is statistically significant at the 5% level. It can be noted from the same table that the signs of the coefficients and the signs of their corresponding marginal effects are opposite: in an ordered logit, a positive coefficient shifts probability towards the higher categories and therefore lowers the probability of the lowest category. The marginal effects presented are for the lowest outcome, reflecting the probability of being in the poorest category. The marginal effect of age (-0.004) shows that an increase in age reduces the probability of being poorest, rather than being in any other category, by 0.4 percentage points; the same holds when the household is headed by a male.
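As an illustration of how marginal effects for the lowest outcome can be obtained, the sketch below continues the hypothetical Python example above (reusing its res and df objects) and computes a numerical average marginal effect on the probability of being poorest; it is not the authors' STATA margins output.

```python
# Numerical average marginal effect (AME) of one regressor on
# P(Povstatus = 0), the "poorest" category, via finite differences.
import numpy as np

def ame_lowest(res, df, exog_cols, var, eps=1e-4):
    """Average dP(poorest)/d(var) over the sample."""
    X_hi = df[exog_cols].astype(float).copy()
    X_lo = X_hi.copy()
    X_hi[var] += eps
    X_lo[var] -= eps
    p_hi = np.asarray(res.predict(X_hi))[:, 0]   # column 0 = P(poorest)
    p_lo = np.asarray(res.predict(X_lo))[:, 0]
    return ((p_hi - p_lo) / (2 * eps)).mean()

cols = ["HHsize", "age", "Sex", "education", "Residence", "FSaccess"]
for v in ["age", "HHsize", "FSaccess"]:
    print(v, round(ame_lowest(res, df, cols, v), 4))
```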
However, the study conducted by Baiyegunhi & Fraser (2010) shows that households headed by older people are more vulnerable to poverty than those headed by younger people. The marginal effect for the size of the household is 0.003, which implies that the addition of one household member increases the probability of the household being poorest by 0.3 percentage points. This is due to the fact that the larger the household, the higher the dependency burden, which causes poverty to persist. These findings are consistent with those of Makame & Mzee (2014) and Ermiyas et al. (2019). However, some studies, such as Meyer & Nishimwe-Niyimbanira (2016), reveal that the relationship between poverty and household size may also be negative, because large households require a large income to escape poverty. Furthermore, the marginal effects for education level are -0.034, -0.091 and -0.10 for primary, secondary and higher education respectively. This implies that the probability of being poorest decreases by 3.4, 9.1 and 10 percentage points if the household head has primary, secondary or higher education respectively; the size of the reduction grows with the level of education. These results are consistent with Mok et al. (2007) and Apata et al. (2010). Wedgwood (2015) argues that getting children into school is not, on its own, enough for poverty reduction, because the quality of education is also important for realizing the potential benefits of education on poverty reduction. Ordered Logit Estimation Results The results further reveal that living together as partners reduces the probability of being poorest by 12 percentage points relative to never being in union or living without a partner. Residing in an urban area also reduces the probability of being poorest by 10 percentage points relative to living in a rural area. This is because many more economic activities take place in urban areas than in rural areas, where households mainly depend on agriculture, which is seasonal. These results comply with those of Abuka et al. (2007), who studied the determinants of poverty vulnerability in Uganda and found that poverty is more pronounced in rural than in urban areas, as the more remunerative economic activities tend to be concentrated in urban areas. The marginal effect for FSaccess, which measures the use of a phone to access financial services, is -0.020, which can be interpreted as follows: the probability of being poorest decreases by 2 percentage points when a household has access to financial services compared with when it does not. These findings comply with those of Sife, Kiondo, & Lyimo-Macha (2010), who found that mobile phones contribute to reducing poverty and improving rural livelihoods. (Table 2 note: observations = 19,195; robust standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1.) Conclusion and Policy Implication The current study reveals that the main determinants of poverty are the age, sex, education and marital status of the household head. The type of residence of the household and access to financial services also play an important role in explaining the poverty status of households.
The study recommends that the government invest more in education, since an economy with an educated labor force performs better; this also supports the fourth Sustainable Development Goal, which requires the government to provide quality and inclusive education for upward social mobility and poverty reduction. Since access to financial services is significant in reducing poverty, the government should ensure improved financial services and financial inclusion, especially in rural areas, to eradicate poverty. Finally, the rural-urban disparity needs to be removed to ensure equitable poverty reduction, for example through the equal allocation of resources between rural and urban areas.
Measurement of Radium and Radon Exhalation Rate in Marble Samples used in Al-Bayda City Market, Libya

The aim of the present study is to measure the activity concentrations of 226Ra and 222Rn, the mass exhalation rate of 222Rn, and the annual effective dose of radon in marble samples collected from the local market of Al-Bayda city, Libya. The samples were measured using a low-background NaI(Tl) detector. The average activity concentrations of 226Ra and 222Rn were 72.57 Bq.kg^-1 and 597.85 Bq.m^-3, respectively. The radon exhalation rate in the marble samples varied from 0.05 to 0.30 Bq.kg^-1.s^-1, with an average of 0.13 Bq.kg^-1.s^-1. The annual effective dose of radon was calculated for the samples under investigation; for most samples, the values were lower than the maximum permissible dose limits. It can be concluded that the marble samples under investigation do not pose any radiological hazard to the dwellers of buildings in whose construction they are used. INTRODUCTION The human body is naturally exposed to ionizing radiation, which can be found in soils, rocks, and water (Abo-Elmagd, 2014). In addition, artificial radiation has been added to this background radiation. The background radiation arises from natural sources present in natural ores, such as some building materials. This radiation is due to the primordial radionuclides of the natural radioactive series of thorium-232 (232Th) and uranium-238 (238U) and their decay products (Ghose et al., 2012). These radionuclides are widely distributed, and their concentrations depend on the geological conditions. Therefore, it is important to measure the natural activity of all building materials; this step will help to assess the possible radiological risks to human health (Kumara et al., 2018). Radon results from the disintegration of radium-226 (226Ra), a decay product of the 238U series, and is responsible for the largest source of natural radiation to which the population is exposed (Kama et al., 2011). Radon and thoron are both generated from radium decay in solid grains and then migrate a significant distance from the site of generation in rock, soil, and building materials into the atmosphere (exhalation) before undergoing radioactive decay (Bala et al., 2017). The radon-222 (222Rn) concentration can reach high levels in buildings, depending on the exhalation from the building materials used, such as concrete, marble, or granite. Marble is one of the metamorphic rocks occurring on the earth's surface; its colors depend on the mineral composition and the degree of metamorphism. Marbles are commonly used as floor-laying material (Kaiser et al., 1999). In this study, gamma radiation was measured in marble samples collected from the Al-Bayda local market in Libya to obtain the activity concentration of 226Ra, the radon exhalation rate, and the annual effective dose of radon. Because of the health risks caused by exposure to indoor radiation, many international organizations, such as the International Commission on Radiological Protection (ICRP, 1993), the World Health Organization (WHO, 2021), and UNSCEAR, have adopted strong measures in order to reduce such exposure.
MATERIALS AND METHODS Samples Collection and Preparation: Nine marble samples were collected from the local market in Al-Bayda city, Libya, to measure the radioactivity concentrations of 226Ra and 222Rn (the marble available in Libya is partly local and partly imported). All samples were brought to the laboratory and properly cataloged, washed, and dried (at 110 °C for 2 h) for complete removal of moisture. Then, all samples were crushed to a fine powder with a particle size of about 1 mm (this process took place in the laboratory of the Faculty of Engineering, Omer Al-Mukhtar University, Al-Bayda, Libya). The samples were packed and sealed in radon-impermeable, airtight cylindrical plastic containers and then stored for four weeks before counting, to ensure that 226Ra and its short-lived daughters reached secular equilibrium (Sroor, 2013). Table (1) shows the description of the samples. Gamma-Ray Detection System: A gamma-ray NaI(Tl) scintillation detector containing a 3"×3" crystal, with a multichannel analyzer (MCA), was used for the spectral measurements of the naturally occurring radionuclides. The detector was placed in the center of a two-layered shield made from stainless steel of 10 mm thickness and lead of 30 mm thickness. The shield is needed to reduce the radioactive background, to protect the detector from unwanted background radiation, and to reduce the contribution of scattered radiation. Each sample was then placed on the detector for 7200 s, and the spectra were analyzed using a software program. The samples were prepared and measured in the Advanced Nuclear Lab, Department of Physics, Faculty of Science, Omer Al-Mukhtar University, Al-Bayda, Libya. Activity Concentration: The activity concentration (A) of a radionuclide for a peak at a given energy is given by the relation (Al-Sewaidan, 2019):

$$ A = \frac{N}{\varepsilon \, t \, I_\gamma \, m} \qquad (1) $$

where ε is the absolute efficiency at the photopeak energy, t is the time of the sample spectrum collection in seconds, I_γ is the intensity of the emitted gamma-ray (gamma abundance), m is the mass of the sample in kg, and N is the number of counts in the given peak area, corrected for background. Radon Mass Exhalation Rate: The escaped radon concentration follows from the measured activity of 226Ra, A_Ra, the measured activity of its daughter 214Pb (or 214Bi), A_D, and the density of radon, ρ (9.73 kg.m^-3):

$$ C_{Rn} = (A_{Ra} - A_D)\,\rho \qquad (2) $$

The radon emanation factor F, the fraction of radon that escapes into the surrounding environment, is defined as:

$$ F = \frac{A_{Ra} - A_D}{A_{Ra}} \qquad (3) $$

The radon exhalation rate E_Rn (Bq.kg^-1.s^-1) is the product of the emanation factor and the 222Rn production rate, determined from the relation (Turhan & Gündüz, 2008):

$$ E_{Rn} = F\,A_{Ra}\,\lambda_{Rn} \qquad (4) $$

where λ_Rn is the decay constant of 222Rn (2.1×10^-6 s^-1). Annual Effective Dose of Radon: The radon concentration was converted into an effective dose, since long-standing exposure to a high concentration of radon and its progeny may lead to pathological effects such as lung cancer. The annual effective dose received by workers and residents due to inhalation of radon gas and its decay products was calculated by the relation (Abd El-Halim, 2019):

$$ AED_{Rn} = \frac{C_{Rn} \times K \times H}{3700\ \mathrm{Bq.m^{-3}} \times 170\ \mathrm{h}} \qquad (5) $$

where AED_Rn is the annual effective dose (mSv.y^-1), C_Rn is the emanation coefficient of radon (Bq.m^-3), K is the ICRP dose conversion factor (5 mSv.WLM^-1 for occupational workers and 3.88 mSv.WLM^-1, effective dose per unit working level month, for the general public), H is the annual occupancy at the location (2160 h for workers and 7000 h for residents, 80% of the total time), and 3700 Bq.m^-3 × 170 h is the exposure corresponding to one WLM (ICRP, 1993).
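The dose pipeline above is simple enough to express directly in code. The following is a minimal sketch of equations (1)-(5) as reconstructed here; the numerical inputs are illustrative, and the daughter activity A_D in the example is a hypothetical value.

```python
# Minimal sketch of equations (1)-(5); illustrative inputs only.
LAMBDA_RN = 2.1e-6        # 222Rn decay constant (s^-1), as given in the text
RHO_RN = 9.73             # density of radon (kg.m^-3), as given in the text

def activity_concentration(N, eff, t, I_gamma, m):
    """Eq. (1): activity (Bq.kg^-1) from net peak counts."""
    return N / (eff * t * I_gamma * m)

def radon_concentration(A_Ra, A_D):
    """Eq. (2): escaped-radon concentration (Bq.m^-3)."""
    return (A_Ra - A_D) * RHO_RN

def emanation_factor(A_Ra, A_D):
    """Eq. (3): fraction of radon escaping the grains."""
    return (A_Ra - A_D) / A_Ra

def exhalation_rate(A_Ra, A_D):
    """Eq. (4): mass exhalation rate (Bq.kg^-1.s^-1)."""
    return emanation_factor(A_Ra, A_D) * A_Ra * LAMBDA_RN

def annual_effective_dose(C_Rn, K, H):
    """Eq. (5): AED (mSv.y^-1); K in mSv per WLM, H in hours per year."""
    return C_Rn * K * H / (3700.0 * 170.0)

# Example with made-up activities (A_D hypothetical):
A_Ra, A_D = 72.57, 11.0                      # Bq.kg^-1
C_Rn = radon_concentration(A_Ra, A_D)
print(C_Rn, exhalation_rate(A_Ra, A_D))
print(annual_effective_dose(C_Rn, K=3.88, H=7000))   # residents
```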
RESULTS The values presented in Table (2) show that there is a variation in the radon exhalation rate from one sample to another, depending on the geological formation of the region from which the sample was taken. The variation in the radon exhalation rate may be due to differences in the radium content and porosity of the marble (Frutos-Puerto et al., 2020). The values of the radon exhalation rate observed in the marble ranged between 0.05 and 0.30 Bq.kg^-1.s^-1 (Turkey and Libya, respectively), with an average of 0.13 Bq.kg^-1.s^-1, as shown in Figure (3). Figure (3): The mass exhalation rate of radon in the marble samples. From Figure (4), the variation of the radon mass exhalation rate with the 226Ra activity concentration shows a correlation between them. Therefore, it can be concluded that it is possible to predict the radon exhalation rate from the activity concentration of radium. DISCUSSION The results show that the activity concentration of 226Ra for most samples is higher than the world value of 50 Bq.kg^-1 (WHO, 2021), and that the 222Rn values are higher than the average permissible level of 200 Bq.m^-3 (Jasaitis & Girgždys, 2007; Sharma et al., 2016). When comparing the radon exhalation rates of the different marble samples, we found that the value for sample M1 is higher than those of the other samples. The results indicate low levels of the annual effective dose from radon; in most marble samples the dose was lower than the maximum permissible dose limit (10 mSv.y^-1) recommended by the ICRP (1993). CONCLUSION The obtained results showed that the average radium and radon concentrations in the investigated samples were 72.57 Bq.kg^-1 and 597.85 Bq.m^-3, respectively. The radon mass exhalation rate varied between 0.05 and 0.30 Bq.kg^-1.s^-1 (Turkey and Libya, respectively), with an average of 0.13 Bq.kg^-1.s^-1. It is recommended that the radon exhalation rate be measured for all building materials and that a standard code be placed on all products. The annual effective dose of radon in the marble samples for workers and residents is lower than the maximum permissible dose limit of 10 mSv.y^-1 recommended by the ICRP (1993), with the exception of samples M1 and M5 (made in Libya and India) for residents. The variation in the obtained results depends on the geological formation of the region and the exposure time. The annual effective dose limit and the activity concentration index show that the investigated samples are within the recommended safety limits and do not pose any source of radiation hazard. Therefore, the use of these materials in the construction of dwellings is considered safe for inhabitants. Figure (4): Correlation between the 226Ra activity concentration and the radon exhalation rate in the marble samples. Table (2): The values of the activity concentrations of 226Ra (Bq.kg^-1), the emanation coefficient of 222Rn (Bq.m^-3), E_Rn and AED_Rn in the marble samples. Table (3): Comparison of the radium concentration and radon exhalation rate of marble samples used in different countries. REFERENCES Sroor, A.T. (2013). Radiological hazards for marble and granite used at Shak El Thouban industrial zone in Egypt.
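As a small illustration of the Figure (4) claim that the exhalation rate can be predicted from the radium activity, the sketch below computes a Pearson correlation and a least-squares line over hypothetical per-sample values; the paper's Table (2) data are not reproduced here.

```python
# Correlation and linear fit between radium activity and exhalation rate;
# the nine per-sample values below are hypothetical placeholders.
import numpy as np

A_Ra = np.array([95.0, 60.2, 72.3, 88.1, 55.4, 70.9, 81.5, 49.8, 79.9])  # Bq.kg^-1
E_Rn = np.array([0.30, 0.08, 0.12, 0.22, 0.05, 0.11, 0.18, 0.06, 0.15])  # Bq.kg^-1.s^-1

r = np.corrcoef(A_Ra, E_Rn)[0, 1]
slope, intercept = np.polyfit(A_Ra, E_Rn, 1)
print(f"Pearson r = {r:.2f}")
print(f"E_Rn ~ {slope:.4f} * A_Ra + {intercept:.3f}")
```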
A dataset of endorheic basins on detailed delineation and classification for the Qinghai–Tibet Plateau

Endorheic basins are important geomorphological and ecological units on the Qinghai-Tibet Plateau (QTP), which is undergoing a rapid evolution of its lake system structure and drainage reorganization that is threatening local ecology, infrastructure and residents owing to climate change. This dataset provides a detailed delineation and classification of endorheic basins on the QTP for understanding their complex dynamics under climate change. A newly developed algorithm, namely the Joint Elevation-Area Threshold (JEAT) algorithm (Liu et al., 2024), is applied for delineating endorheic basins based on a digital elevation model (DEM). A total of 184 endorheic basins were delineated, and their permanent divide lines were characterized. All the endorheic basins were further categorized into five groups based on hydraulic connectivity attributes that have been commonly observed since 2000. The dataset also includes basic information such as the drainage area, water surface area, and water storage volume of each endorheic basin. It is particularly beneficial for digital watershed analysis towards ecological restoration and water resource management on the environmentally vulnerable QTP.
Value of the Data • This dataset is valuable because it provides a comprehensive delineation and classification of endorheic basins on the Qinghai-Tibet Plateau (QTP), for which regional watershed analysis is still insufficient. In addition, a dependable algorithm for endorheic basin delineation, namely the Joint Elevation-Area Threshold (JEAT) algorithm, is adopted [1]. It characterizes the hydrogeomorphological and hydrometeorological features of endorheic basins and follows a simple procedure that does not rely on remote sensing images. Compared to existing methods, JEAT has been proven to be more accurate in delineating endorheic basins. • Given the history of drainage reorganizations that have occurred on the QTP, existing methods are currently inadequate for identifying the hydraulic connectivity among endorheic basins as lake levels rise under global climate change. Using JEAT, the dataset provides accurate endorheic basin delineations with permanent divide lines that account for potential reorganizations. The result will be a foundation for regional, and even global, watershed analysis. • In terms of ecological restoration and water resource management, the dataset also contains the drainage area, water surface area, and water storage volume of each endorheic basin. It may serve as input data for researchers calculating water balances or as a reference for ecological problems, which is vitally important for analysing the water demands of residents and helpful for local governments in making ecological, industrial, agricultural, and domestic water policies in the endorheic QTP.
Background Endorheic basins occupy one-fifth of the Earth's surface [2,3]. In China, endorheic basins are mainly located on the QTP, and they cover more than 70% of the arid region. Since 2000, continuous warming has accelerated lake expansion in the endorheic basins, with floodings spreading from upstream to downstream along the connecting rivers and causing drainage reorganization events, which have greatly changed the hydrological regimes in the endorheic QTP and may pose increased risks of outburst flooding or potential ecological threats [4][5][6]. A total of 11 drainage reorganization events were observed on the QTP from 2000 to 2018, involving 24 endorheic basins with an area of approximately 61,000 km² [4]. To better understand the drainage reorganizations on the QTP, accurate divide lines for endorheic basins are important. However, accurate delineation of endorheic basins remains a big challenge. Existing methods commonly treat depressions in a digital elevation model (DEM) as false objects and subsequently fill them, which may alter the authentic topography of endorheic basins. In addition, even methods that consider true depressions regard endorheic basins as isolated depressions, which neglects divide changes caused by potential connectivity under global warming. Therefore, a newly developed algorithm, namely JEAT, is adopted for delineating endorheic basins [1]. JEAT divides the endorheic basins in a way that reflects the endorheic units and accounts for all the connectivity phenomena that have occurred. The division results not only include the endorheic basins located in the endorheic QTP, but also involve some basins that have been neglected within exorheic basins. Subsequently, all the endorheic basins are categorized into distinct groups based on potential forms of hydraulic connectivity, which reflect the possible reorganization locations. The dataset aims to capture the hydrological, geomorphological, and ecological characteristics of each endorheic basin with the help of Google Earth images and visual interpretation. We hope it will serve as a valuable reference dataset for ecological analysis. Master Data The master data contain various characteristic information on the endorheic basins. The basic hydrogeomorphological and hydrometeorological characteristics of each endorheic basin are provided in Appendix 1, which gives essential information about the geographical locations, meteorological attributes and categories of the endorheic basins. Appendix 2 provides information about the lakes (≥1 km²) located in each endorheic basin, which are considered one of the most important factors in endorheic basins. In addition, the drainage area, water surface area, and water storage volume of each endorheic basin can be found in Appendix 3. These data are necessary for locating possible reorganization sites and for ecological analyses such as water resource management. For more detailed information about the master data, one can refer to Table 1. Shapefiles The shapefiles of the delineation and classification results are provided as QTP_delineation.shp and QTP_classification.shp. They contain the spatial distribution of the endorheic basins. Detailed information about the shapefiles can be found in Table 2, including the ID number, the shape type provided by the GIS software, and the drainage area and category of each endorheic basin.
Experimental Design, Materials and Methods The raw data include DEMs and a lake map of the QTP. The MERIT DEM by Yamazaki [7], with a 3-arc-second resolution, was used; it has demonstrated good applicability in high mountain areas, which meets the research requirements. The lake map was obtained from the NTPDC, which provides basic information about the lake area in 2022 [8]. The JEAT algorithm was introduced to address the challenges of endorheic basin delineation [1]. It adopts two joint thresholds, i.e., an elevation threshold and an area threshold, to recognize the complex characteristics of hydraulic connectivity in endorheic regions. In the JEAT algorithm, the elevation threshold characterizes the height of the low divide between two endorheic basins, while the area threshold identifies false depressions caused by narrow streams that are not captured by the DEM [9]. First, the algorithm identifies all the initial depressions in the endorheic regions, including false depressions. Starting from the bottom of each depression, it searches all the inflow grid cells based on the flow direction matrix using the iFAD8 [10] and RWFlood [11] algorithms. In the next step, the height of the low divide and the drainage area are compared with the two joint thresholds separately. For details of the JEAT method, one can refer to the literature [1]. Historical remote sensing images in Google Earth and visual interpretation were used to validate the algorithm [1], and it has been proved that the algorithm captures the hydraulic connectivity between endorheic basins well. After clarifying the effects of different combinations of elevation-area thresholds, a set of optimal thresholds was obtained as 10 m and 50 km². For the QTP, a total of 184 endorheic basins were identified. Based on the phenomena that have been observed, five different connectivity categories were introduced (see Fig. 1), and all the endorheic basins were further categorized into these five groups according to their hydraulic connectivity attributes. Each basin unit in Category I usually includes two false endorheic sub-basins separated by a very low divide line, each of which serves as the outlet for the other; as endorheic lake levels rise under climate warming, these two false endorheic sub-basins will connect with each other. Category II represents a continuum of depression units with a visible upstream-downstream relationship; usually, the lake level in the upstream depression can reach the height of its low divide during summer floodings, and thus both depressions become connected. In Category III, the relationship between inner depressions is more complex, as one endorheic basin does not necessarily contain only two depressions as in Categories I and II; in this case, several low divide lines must be compared with the given elevation threshold, and when all lake levels reach the elevation threshold, all the sub-basins become connected. Category IV represents isolated endorheic basins surrounded by high terrain relief, which commonly exist in high mountain areas; the divide is so high that endorheic lake levels can almost never reach the overflow point. Category V represents a depression group with a number of smaller depressions that are separated by very low divide lines; a casual rainstorm event may lead to hydraulic connectivity among all the depressions within one group, i.e., the lake levels of all the depressions reach the elevation threshold.
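To make the joint-threshold idea concrete, here is a deliberately simplified, hypothetical sketch of the elevation-area test; the published JEAT implementation [1] operates on DEM grids with iFAD8/RWFlood flow routing, which is not reproduced here.

```python
# Simplified illustration of the two joint thresholds; each candidate
# depression is reduced to its low-divide height and its drainage area.
from dataclasses import dataclass

ELEV_THRESHOLD_M = 10.0     # optimal elevation threshold from the text
AREA_THRESHOLD_KM2 = 50.0   # optimal area threshold from the text

@dataclass
class Depression:
    name: str
    low_divide_height_m: float   # height of the lowest divide above the outlet
    drainage_area_km2: float

def classify(dep: Depression) -> str:
    """Apply the two joint thresholds to one candidate depression."""
    if dep.drainage_area_km2 < AREA_THRESHOLD_KM2:
        # Too small: likely a false depression from an unresolved narrow stream.
        return "false depression (merge into neighbour)"
    if dep.low_divide_height_m <= ELEV_THRESHOLD_M:
        # Low divide: rising lake levels could overtop it.
        return "potentially connective (merge across low divide)"
    return "independent endorheic basin (permanent divide)"

for d in [Depression("A", 4.2, 320.0),
          Depression("B", 55.0, 1200.0),
          Depression("C", 2.0, 18.0)]:
    print(d.name, "->", classify(d))
```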
Afterwards, Google Earth images were used to obtain the central latitude, central longitude and central elevation of each endorheic basin. This information is important for locating possible reorganization sites. The annual mean temperature, annual mean precipitation, and rates of precipitation and temperature change were taken from [12,13], which illustrate the basic hydrometeorological features of each basin. The Google Earth images and the lake map from the NTPDC were also utilized to obtain the names, areas, central latitudes, central longitudes, central elevations, and current levels of the lakes. The lakes are one of the most important factors, as they directly indicate the connecting rivers between endorheic basins. For ecological analysis, we calculated the drainage area in GIS software and used the lake map to determine the water surface area, i.e., the sum of the lake areas in each endorheic basin. A simple method based on the theory of terrain similarity both above and below the lake level was then adopted for calculating the water storage volume [14]. We chose the lake levels from Google Earth images as the maximum level. The corresponding surface area and storage volume were recorded as the lake level rose, and after calculating the storage volume above and below the water level, the storage volume of each lake could be determined. Finally, the water storage volume of each endorheic basin was calculated. The volume of the lakes and their total surface area are very important for future evaluation of water resources in endorheic basins, which can further be used for ecological analysis. Creation of the Master Data and Shapefiles The ID numbers of the endorheic basins were assigned according to the sub-basins of the QTP. Hydrogeomorphological and hydrometeorological attributes, including central longitude, central latitude, central elevation, annual mean precipitation, annual mean temperature, and rates of precipitation and temperature change, were first obtained for each endorheic basin. Lakes (≥1 km²) were also identified. For ecological analysis, the drainage area, water surface area and water storage volume were provided. More detailed information can be found in Appendix 1, Appendix 2 and Appendix 3.
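The volume step can be illustrated with a toy calculation: given a recorded level-area curve, the storage between two levels is the integral of area over level. The sketch below uses trapezoidal integration over hypothetical level/area pairs and is only a rough stand-in for the terrain-similarity method of [14].

```python
# Storage volume from a recorded lake level-area curve; values hypothetical.
import numpy as np

levels_m = np.array([4920.0, 4922.0, 4924.0, 4926.0, 4928.0])   # lake level (m a.s.l.)
areas_km2 = np.array([81.0, 86.5, 93.0, 100.2, 108.0])          # area at each level

def storage_volume_km3(levels_m, areas_km2):
    """V = integral of A(h) dh; km^2 * m = 1e-3 km^3."""
    return np.trapz(areas_km2, levels_m) * 1e-3

print(f"Volume between lowest and highest level: "
      f"{storage_volume_km3(levels_m, areas_km2):.3f} km^3")
```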
Through the JEAT algorithm, a total of 184 endorheic basins were obtained. These endorheic basins were further categorized into five groups to characterize the endorheic units with permanent divide lines. The results showed that the delineated endorheic basins are mainly located in the Qiangtang Endorheic Basin [1] and the Qaidam Basin. The basins in the Qaidam Basin usually have large areas because of their numerous ephemeral rivers. Several endorheic basins are located within exorheic basins, such as the Brahmaputra Basin, the Indus Basin, the Ganges Basin, and the Amu Darya Basin. These basins generally lie in upstream mountain areas and may produce outburst floodings in summer or through glacier lake collapse. Specifically, there are 6, 41, 11, 123, and 3 endorheic basins in Categories I-V, respectively. Category IV has the largest number, which means that isolated depressions are very common on the QTP. The counts of Categories I, II and III come next, which illustrates the high possibility of hydraulic connectivity occurring in the future. Depression groups are not common; they contain a large number of small lakes that change rapidly even within one precipitation event. To create the shapefiles, we added a column to the attribute table, namely Category, which assigns a unique category number to each polygon (i.e., endorheic basin). We also added a GTOPO30 World Hillshade Map as the base map in the GIS software to display the category of each basin clearly. Five colors were then applied to plot these polygons (see Fig. 2): Categories I to V are red, orange, green, blue and purple, respectively. The boundary lines of the QTP and its sub-basins were represented by adjusting the transparency of their polygons. Afterwards, we thickened the lines of the endorheic basins to depict their boundaries. A shapefile of world rivers was added to display the drainage network on the QTP. Table 1: Appendix table description. Table 2: Attribute table description.
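A hypothetical geopandas sketch of the map styling described above might look as follows; the shapefile and column names follow the dataset description, while the numeric category codes and file paths are assumptions.

```python
# Color the basin polygons of QTP_classification.shp by their Category
# attribute (I-V -> red, orange, green, blue, purple), as described above.
import geopandas as gpd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

basins = gpd.read_file("QTP_classification.shp")

category_colors = {1: "red", 2: "orange", 3: "green", 4: "blue", 5: "purple"}
cmap = ListedColormap([category_colors[k] for k in sorted(category_colors)])

fig, ax = plt.subplots(figsize=(10, 6))
basins.plot(column="Category", categorical=True, cmap=cmap,
            linewidth=0.8, edgecolor="black", legend=True, ax=ax)
ax.set_title("Endorheic basins on the QTP by connectivity category")
plt.savefig("qtp_categories.png", dpi=300)
```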
Antiviral Effect of Resveratrol in Piglets Infected with Virulent Pseudorabies Virus

Pseudorabies virus (PRV) is one of the most important pathogens of swine, resulting in devastating disease and economic losses worldwide. Nevertheless, there are currently no antiviral drugs available for PRV infection. In preliminary investigations, resveratrol (Res) was found to exert antiviral activity by inhibiting PRV replication; in our previous study, we showed that Res has anti-PRV activity in vitro. Here, we show that Res can effectively reduce the mortality and increase the growth performance of PRV-infected piglets. After Res treatment, the viral loads significantly (p < 0.001) decreased. Pathological symptoms, particularly inflammation in the brain caused by PRV infection, were significantly (p < 0.001) relieved by Res. In the Res-treated groups, higher serum levels of cytokines, including interferon gamma, interleukin 12, tumor necrosis factor alpha and interferon alpha, were observed at 7 days post infection. These results indicate that Res possesses potent inhibitory activity against PRV infection by inhibiting viral reproduction, alleviating PRV-induced inflammation and enhancing animal immunity, suggesting that Res could become a new alternative control measure for PRV infection. Introduction Pseudorabies virus (PRV; also called Aujeszky's disease virus or suid herpesvirus type 1) is a member of the Alphaherpesvirinae subfamily within the family Herpesviridae and is the causative agent of Aujeszky's disease (AD), one of the most devastating infectious diseases of swine, which results in significant economic losses for the swine industry [1,2]. AD is a contagious disease characterized by encephalomyelitis, frequently accompanied by inflammation of the upper respiratory tract and lungs [3]. In general, PRV mainly infects pigs at various production phases, causing nervous system disorders and high mortality in newborn piglets, respiratory disorders in older pigs, and reproductive failure in sows [1,4,5]. Despite the widespread use of the Bartha-K61 vaccine in controlling PRV, AD continues to be one of the most important diseases of pigs in many countries, particularly in regions with dense pig populations, including China [6,7]. Outbreaks of AD caused by PRV variants happen frequently, even in herds immunized with the Bartha-K61 vaccine, and new prevalent PRV strains have caused great economic losses to the swine industry in China since 2011 [5,[8][9][10][11][12][13]. Resveratrol (3,5,4'-trihydroxystilbene, Res) is a non-flavonoid polyphenol compound that exists widely in several higher plants. Res has been reported to have antiviral activity against a series of viruses either in vitro or in vivo, including herpesviruses [14][15][16], retroviruses [17,18], respiratory syncytial virus [19] and human immunodeficiency virus type 1 [20]. Although Res has been known to have antiviral activity for many years, the use of Res to treat virus infection in relevant virus-host systems has rarely been undertaken. Previously, we determined the anti-PRV activity of Res for the first time in vitro; the results showed that Res could effectively inhibit virulent PRV replication in vitro [21]. However, little is known about the in vivo antiviral activity of Res against PRV.
In this study, the anti-PRV activity of Res was determined in piglets infected with virulent PRV, in order to develop a new alternative control measure for PRV infection and to investigate the antiviral activity of Res in a relevant virus-host system. Virus and Piglets Virulent PRV (Rong A strain, purchased from the China Veterinary Culture Collection Center) was propagated in PK-15 cells. Healthy 28-day-old piglets were purchased from a remote mountain village (Leshan, China); before Res administration, the piglets had been observed for 7 days with no disease symptoms, and no PRV gB-specific antibodies were detected by ELISA assay (IDEXX, Westbrook, ME, USA). Piglets were maintained under normal daylight and fed a standard commercial diet with water ad libitum. Ethics Statement All procedures involving animals and their care in this study were approved by the [...]. Experimental Design Fifty 35-day-old piglets were randomly divided into five groups. Before infection, the piglets in the Res-treated groups were administered Res added to the commercial diet at doses of 30 (Res-H), 10 (Res-M) and 3 (Res-L) mg/kg body weight daily for 7 days. The piglets in the untreated and non-infected groups received only the commercial diet. At 42 days old, the piglets were infected intranasally with 1 mL of 2 × 10^6 TCID50 of PRV, except for the non-infected group. After infection, the piglets in all groups were fed the standard commercial diet. The infected piglets were immediately administered Res solutions orally at doses of 90 (Res-H), 30 (Res-M) and 10 (Res-L) mg/kg body weight twice daily for 21 days. The dosages of Res before and after infection were based on our previous research (Fu et al., 2018 and Zhao et al., 2017, respectively) [21,22]. The piglets in the untreated and non-infected groups were given the same volume of SCMC-PBS. All animals were physically examined daily, and nasal swabs were taken at regular intervals after infection to monitor virus excretion. Serum samples were taken at 0, 7, 14, and 21 days post infection (dpi). Three randomly selected piglets were subjected to necropsy in each group at 7 and 21 dpi. The rearing conditions followed the Guidelines of the International Committee on Laboratory Animals. Analysis of Viral Load by Real-Time PCR The PRV load of the piglets was monitored by real-time fluorescence quantitative PCR (FQ-PCR) of nasal swabs and brain tissue. Total DNA was isolated from the nasal swabs and brain tissue using the TIANamp Swab DNA Kit (Tiangen Biotech, Beijing, China) and a Genomic DNA Extraction Kit (TaKaRa, Tokyo, Japan), respectively. The upstream and downstream primers were 5'-ACAAGTTCAAGGCCCACATCTAC-3' and 5'-GTCYGTGAAGCGGTTCGTGAT-3', respectively, and were used to amplify a 95-bp fragment of the glycoprotein B gene of PRV (GenBank accession no. KJ526438). A 17-bp probe (5'-ACGTCATCGTCACGACC-3') complementary to an internal region between the two primers was selected and labelled with carboxyfluorescein at the 5' end and carboxytetramethylrhodamine at the 3' end. The FQ-PCR was analyzed using SsoAdvanced Universal Probes Supermix (Bio-Rad, Hercules, CA, USA) with the Bio-Rad CFX96 Manager software system, according to the method described in our previous study [21]. Histopathological Analysis Histopathological lesions of PRV-infected piglets treated with or without Res were investigated.
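For readers unfamiliar with FQ-PCR quantification, the following hypothetical Python sketch shows how Ct values are typically converted to genome copies via a plasmid standard curve; the slope, intercept and Ct values are illustrative, as the study's actual standard curve is not given in the text.

```python
# Ct -> copy number via a standard curve Ct = slope*log10(copies) + intercept.
SLOPE = -3.32        # ~ -3.32 corresponds to ~100% amplification efficiency
INTERCEPT = 38.5     # hypothetical Ct of a single copy

def ct_to_copies(ct: float) -> float:
    """Invert the standard curve to get copies per reaction."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def copies_per_microgram(ct: float, dna_ug_per_reaction: float) -> float:
    """Normalize to copies per microgram of extracted tissue DNA."""
    return ct_to_copies(ct) / dna_ug_per_reaction

for ct in (22.1, 27.8, 33.4):
    print(f"Ct {ct}: {copies_per_microgram(ct, dna_ug_per_reaction=0.1):.2e} copies/ug")
```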
Heart, liver, kidney, lung, spleen and brain tissues were procured at 7 dpi, preserved in 4% paraformaldehyde, and embedded in paraffin for subsequent histopathological examination. A 5 µm section of each organ tissue was stained with hematoxylin and eosin, and each section was analyzed under an optical microscope (Nikon Eclipse 80i, Tokyo, Japan). Three slides from different parts of each tissue (3 piglets per group) were analyzed. The overall lesion score for each tissue was calculated by multiplying the degree of severity (0 = no lesions, 1 = mild lesions, 2 = moderate lesions, and 3 = severe lesions) by the extent of lesions (1 = low extent, 2 = intermediate extent, and 3 = large extent) [23]. Statistical Analysis Data were expressed as the mean ± S.D., and the statistical significance of the data was assessed using a two-tailed Student's t-test with GraphPad Prism software 5 (La Jolla, CA, USA). Correlation analyses were evaluated by Pearson r²; ns: p > 0.05, * p < 0.05 and & p < 0.001. Resveratrol Reduced Mortality and Increased Body Weight Gained by Piglets Infected with Virulent PRV As shown in Table 1, there were no deaths in any group before 6 dpi; the piglets began to die in the untreated group at 6 dpi, while there were no deaths in the Res-treated groups at that time. The groups treated with Res exhibited a high protection rate (100% in the Res-H and Res-M groups) against PRV infection, whereas only six out of ten animals survived in the untreated group. (Table 1 note: survival rates of the PRV-infected piglets treated with resveratrol (Res-H, Res-M, Res-L) and untreated were recorded at 7 dpi, n = 10 in each group; a: date of last death.) Changes in body weight were analyzed over 21 dpi (Figure 1). Compared with the non-infected group, all infected piglets had a reduced gain in body weight. However, compared with the untreated group, the body weight gained increased in the Res-treated groups in a dose-dependent manner. (Figure: survival rate (%) versus days post infection for the Res-H, Res-M, Res-L, untreated and non-infected groups.) The Viral Loads of Nasal Swabs and Brain Were Depressed by Res The nasal swabs of each group were collected at 0, 3, 5, 7, 10, 14 and 21 dpi, and the viral copies were assayed by FQ-PCR. The results are shown in Figure 2. In the untreated group, virus excretion began to increase rapidly at 3 dpi, while lower viral loads were detected in the Res-treated groups; the viral loads in the Res-treated groups were significantly (p < 0.001) lower than that in the untreated group. At 5 dpi, viral loads increased in all infected groups, with the Res-treated groups again showing significantly (p < 0.001) lower viral loads than the untreated group. At 7 dpi, viral loads in all infected groups decreased, and the viral loads in the Res-treated groups were significantly (p < 0.001) lower than those in the untreated group. At 10 dpi, viral loads in the Res-H and Res-M groups continued to decrease, while the viral loads in the Res-L and untreated groups increased; the viral loads in the Res-treated groups remained significantly (p < 0.001) lower than those in the untreated group. At 14 dpi, no PRV genome was detected in any group except one piglet in the untreated group.
There was no PRV genome detected at 21 dpi among any of the groups. The brains of each group (three piglets per group) were collected at 7 and 21 dpi, and the viral copies were assayed by FQ-PCR. The results are shown in Figure 3. In the untreated group, viral copies were significantly (p < 0.001) higher than those in the Res-treated groups at 7 dpi, and there was no PRV genome detected at 21 dpi among any of the groups. (Figure 3: copies of the PRV genome per microgram of piglet brain, analyzed by FQ-PCR at 7 and 21 dpi, n = 3 in each group; no PRV genome was detected in the non-infected group at any time point or in any group at 21 dpi; correlation analyses were evaluated by Pearson r²; & p < 0.001 vs. the untreated group.)
At 7 dpi, compared with the non-infected group, the concentrations of IL-12 and IFN-α were significantly (p < 0.001) decreased in the untreated group, while the decreasing tendency was significantly (p < 0.05 or 0.001) inhibited by Res treatment in a dose-dependent manner. Surprisingly, the concentrations of IFN-γ in the untreated group were decreased compared to the non-infected group; however, compared with the non-infected group, the concentration of IFN-γ in Res-treated groups were significantly (p < 0.05 or 0.001) increased due to the Res treatment. The concentration of TFN-α was significantly (p < 0.001) increased by the infection of PRV. Moreover, compared with the untreated group, Res significantly (p < 0.05 or 0.001) increased the concentration of TFN-α. At 14 dpi, compared to the non-infected group, the concentrations of IL-12 and TFN-α were decreased in the untreated group, while the decreasing tendency was significantly (p < 0.05 or 0.001) inhibited by Res treatment. The concentration of IFN-γ was significantly (p < 0.001) increased due to PRV-infection. Moreover, the concentrations of IFN-γ in Res-treated groups were significantly (p < 0.001) higher compared to the untreated group. The concentration of IFN-α showed no significant difference among the groups. At 21 dpi, there were no significant differences of IL-12, TNF-α, IFN-α and IFN-γ levels among the groups. Discussion Although Res has been known to have antiviral activity for many years, the use of Res to treat virus infection in a relevant virus-host system has rarely been done. In our previous study, Res showed potent antiviral activity against virulent duck enteritis virus [14,24]; we also found that Res possessed potent antiviral activity against PRV [21]. This study confirms that Res has a potent antiviral effect in PRV-infected piglets. Addition of Res could reduce mortality rate caused by PRV infection. No piglets died in the Res-H and Res-M groups, and nine out of ten piglets survived in the Res-L-treated group. It should be noted that the body weight gains of the Res-treated groups were higher than that in the untreated group in a dosedependent manner. Our previous study showed that piglets (without infection) treated with Res were able to gain an insignificant amount of body weight more than control piglets (i.e., non-infected piglets) [22]. Here, we show that Res could help PRV-infected piglets to gain more body weight. These results indicate that Res could be used to reduce the economic losses in PRV-infected piglets by increasing their survival rate and growth performance. These results are consistent with our previous study [14]. Viral load is an important and direct parameter in the evaluation of antiviral effects in vivo [14,23,25,26]. The viral loads of brain tissue and nasal swabs were the most important parameters in the evaluation of virus proliferation and excretion in PRV-infected piglets, respectively [25,26]. In this study, the viral loads were detected by FQ-PCR. The results revealed that Res could significantly inhibit virus excretion, and efficiently reduce virus reproduction. The levels of viral copies in the brain were positively linked to the clinical parameters of infected piglets, which were confirmed by our previous study that Res exerts antiviral activities by inhibiting viral reproduction [14,21,24]. The antiviral effects of Res on PRV-infected piglets were also supported by histopathological observations. 
In this study, obvious lesions were detected in the brain, lung, kidney, liver, spleen and heart after infection (Figure 4B,E,H,K,N,Q), which is consistent with a previous study [8]. Res significantly decreased the tissue lesions, indicating positive therapeutic effects of Res on tissue lesions caused by PRV infection. Given the high survival ratio and growth performance in the Res-treated groups, we can conclude that Res could effectively inhibit PRV reproduction and suppress the inflammation induced by PRV infection, and thus decrease the tissue lesions. These results are consistent with our previous reports, which showed that Res could suppress tissue lesions and inflammation [14,27,28]. The immune system plays a key role in protecting the body from foreign pathogens through either innate or acquired immunity. It is well established that innate factors, including IFN-α, IFN-γ, TNF-α and IL-12, play a critical role in inhibiting virus infections; thus, the levels of these cytokines are critical for antiviral immunity [29][30][31][32][33]. In this study, the levels of the cytokines IFN-α, IFN-γ, TNF-α and IL-12 were detected.
The results show that the levels of TNF-α in the Res-treated groups were significantly higher than those in the untreated group, and that the suppressed production of IFN-α, IFN-γ and IL-12 induced by PRV infection was significantly restored by Res treatment, especially for IFN-γ. These results are consistent with our previous report, which showed that Res could increase the concentration of IFN-γ in the serum of piglets [22]. Smith et al. reported that IFN-γ-mediated mechanisms play a critical role in the control of, and recovery from, acute Herpesviridae virus infection [34]. Combining this information with our results, we conclude that the higher levels of IFN-γ in the Res-treated groups might be one of the primary reasons for the antiviral effect of Res against virus infection. In conclusion, Res showed potent antiviral activity against PRV infection in piglets. It decreased the mortality of PRV-infected piglets, enhanced growth performance, inhibited viral reproduction, alleviated tissue inflammation and lesions, and improved the levels of cytokines in PRV-infected piglets. The antiviral activity of Res might mainly be attributed to its inhibitory effect on PRV proliferation and the immunomodulatory effects of IFN-γ. Resveratrol exhibits potential for PRV control, and further studies should be conducted to evaluate the antiviral activity of Res against infection with other viruses in a relevant virus-host system.
Data-Driven Stochastic Optimal Control Using Kernel Gradients

We present an empirical, gradient-based method for solving data-driven stochastic optimal control problems using the theory of kernel embeddings of distributions. By embedding the integral operator of a stochastic kernel in a reproducing kernel Hilbert space (RKHS), we can compute an empirical approximation of stochastic optimal control problems, which can then be solved efficiently using the properties of the RKHS. Existing approaches typically rely upon finite control spaces or optimize over policies with finite support to enable optimization. In contrast, our approach uses kernel-based gradients computed using observed data to approximate the cost surface of the optimal control problem, which can then be optimized using gradient descent. We apply our technique to the area of data-driven stochastic optimal control, and demonstrate our proposed approach on a linear regulation problem for comparison and on a nonlinear target tracking problem.

I. INTRODUCTION

The advent of autonomous systems, and the increasing complexity of real-world autonomy stemming from human interactions and learning-enabled components, underscores the need for algorithms which can accommodate real-world stochasticity. In such scenarios, model-based approaches may simply fail or hinge upon unrealistic assumptions such as linearity or Gaussianity, which can lead to questionable outcomes or unpredictable behaviors. One approach to dealing with such systems is data-driven control, which has proven to be useful for systems which may be resistant to traditional modeling techniques, or for which finding a simple mathematical model is simply impossible. In order to circumvent the problems faced by traditional model-based approaches, data-driven control uses empirical modeling techniques to synthesize implicit models which are amenable to analysis and control. Nevertheless, these data-driven representations present new challenges for controller synthesis and optimization, which require the development of new tools and techniques to enable their use.

We present a method for computing data-driven solutions to stochastic optimal control problems using an empirical, gradient-based approach. Our approach is based on Hilbert space embeddings of distributions, a nonparametric statistical learning technique that uses data collected from system observations to construct an implicit model of the dynamics as an element in a high-dimensional function space known as a reproducing kernel Hilbert space (RKHS). Hilbert space embeddings of distributions have been applied to Markov models [1]-[3], policy synthesis [4,5], state estimation and filtering [6]-[8], and also for solving stochastic optimal control problems [9]-[11]. Additionally, these techniques admit finite sample bounds, which show convergence in probability as the sample size increases [12].

(This material is based upon work supported by the National Science Foundation under NSF Grant Number CNS-1836900. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The NASA University Leadership initiative (Grant #80NSSC20M0163) provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not any NASA entity.)
Reproducing kernel Hilbert spaces and kernel embeddings of distributions, specifically, are broadly used in the area of nonparametric statistical inference and estimation. However, these techniques have not yet seen widespread adoption for controls. The use of kernel methods for control has been explored in literature, and is closely related to the theory of Gaussian processes, Koopman operators, and support vector machines, in that they rely upon kernels or operators in highdimensional function spaces. The use of functional gradients in an RKHS for motion planning and policy synthesis have been used in [4,13]. However, these techniques either impose a particular problem structure, which limits their use more broadly, or rely upon a specific policy representation to compute the functional gradient. Methods applying kernel embeddings of distributions to optimal control problems have been explored previously in [2,10,11] for MDPs and chance-constrained control. In addition, [9,10] show that a kernel-based approximation of the stochastic optimal control problem can be solved as a linear program, but relies upon finite or discrete control spaces, which may be restrictive in practical control scenarios. Our main contribution is a technique for computing solutions to stochastic optimal control problems using empirical, kernel-based stochastic gradient descent in an RKHS. Unlike existing functional gradient approaches such as [4], our approach does not rely upon explicit parameterizations of the policy in an RKHS. Instead, we use the partial derivative reproducing property of kernels presented in [14] to compute an empirical gradient of the cost using observed data. Our approach is based on the RKHS control framework presented in [9], which optimizes over a finite set of user-specified admissible control actions. However, our proposed approach improves upon the techniques in [9] by eliminating the need for the control designer to strategically pre-select the policy support, at the cost of increased computation time due to the iterative nature of gradient descent. The rest of the paper is outlined as follows. In Section II, we define the stochastic optimal control problem using kernel embeddings. Then, in Section III, we describe the gradientbased optimization approach. In Section IV we demonstrate our approach on a double integrator system for comparison to existing approaches and then on a nonholonomic vehicle system to demonstrate the capabilities of the approach. Concluding remarks are presented in Section V. A. System Model Let (X , B X ) be a Borel space called the state space and (U, B U ) be a compact Borel space called the control space. Consider a discrete-time stochastic system, where x t ∈ X ⊆ R n , u t ∈ U ⊂ R m , and w t are independent and identically distributed (i.i.d.) random variables representing a stochastic disturbance. As shown in [15], the dynamics in (1) can equivalently be represented by a stochastic kernel Q : B X × X × U → [0, 1] that assigns a probability measure Q(· | x, u) on (X , B X ) to every (x, u) ∈ X × U. The system evolves from an initial condition x 0 ∈ X (which may be drawn from an initial distribution P 0 on X ) over a finite time horizon t = 0, 1, . . . , N , N ∈ N. B. Stochastic Optimal Control Problem Let g : X → R be an arbitrary convex cost function, which we assume is measurable and bounded and lies in a Hilbert space of functions H . 
At each time step, we seek the control u ∈ U that minimizes the objective min_{u ∈ U} ∫_X g(y) Q(dy | x, u). (2) We assume that the stochastic kernel Q is unknown, meaning we do not have direct information about the dynamics in (1) or the structure of the stochastic disturbance. Instead, we assume that we have access to a sample S = {(x_i, u_i, y_i)}_{i=1}^{M} of observations taken i.i.d. from Q, where x_i and u_i are taken randomly from X and U and y_i ∼ Q(· | x_i, u_i). Because the stochastic kernel Q is unknown, we cannot solve (2) directly since the integral in (2) is intractable. Instead, as shown in [9], we can use S to approximate the intractable integral with respect to Q in (2) as an empirical embedding in a high-dimensional space of functions known as a reproducing kernel Hilbert space. Then, we can solve an approximation of the original problem in (2) in order to compute an approximately optimal control. We outline the procedure below, but refer the reader to [9] for more details.

C. Approximate Problem Using Kernel Embeddings

Define a positive definite kernel function k : X × X → R [16, Definition 4.15]. According to the Moore-Aronszajn theorem [17], given a positive definite kernel k, there exists a unique corresponding reproducing kernel Hilbert space (RKHS) H with k as its reproducing kernel.

Definition 1. A Hilbert space H of functions from X to R is called a reproducing kernel Hilbert space (RKHS) if there exists a positive definite function k : X × X → R called the reproducing kernel that satisfies the following properties: 1) for every x ∈ X, k(x, ·) ∈ H, and 2) for every x ∈ X and f ∈ H, f(x) = ⟨f, k(x, ·)⟩_H, which is known as the reproducing property.

Similarly, we define the RKHS U of functions from U to R with l : U × U → R as its associated reproducing kernel. According to [1,18], assuming the kernel k is measurable and bounded, and given a probability measure Q(· | x, u), then by the Riesz representation theorem there exists a corresponding element m(x, u) ∈ H called the kernel distribution embedding, such that by the reproducing property, ⟨g, m(x, u)⟩_H = ∫_X g(y) Q(dy | x, u). This means that by representing the integral operator with respect to Q as an element in the RKHS, we can compute the expectation of any function f ∈ H as an RKHS inner product. Using a sample S, we can compute an empirical estimate m̂(x, u) of m(x, u) as the solution to a regularized least-squares problem [18]. The solution is given by (3), where Φ and Ψ are feature vectors with elements built from the sample. Using the estimate m̂(x, u), we can approximate the intractable integrals with respect to Q in (2) via an RKHS inner product (5), where g is a vector with elements g_i = g(y_i). This representation is key to our approach, since it means we can approximate the previously intractable problem in (2) using data comprised of system observations.

D. Problem Statement

Following [9], we can approximate the stochastic optimal control problem (2) using the estimate m̂(x, u) and (5) as (6). Theoretically, we could optimize for u directly. However, this is a non-convex problem in general, which makes solving (6) difficult. For example, it may be exceptionally difficult to solve (6) for common kernel choices such as the Gaussian kernel function, since optimizing a linear combination of Gaussians is a non-convex problem. Thus, finding a control action u ∈ U that minimizes the approximate problem presents a significant challenge. One possible approach is given in [9,10], where a stochastic control policy with finite support is represented as an embedding in the RKHS U.
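Because every quantity in (3)-(6) is a finite-dimensional object built from the sample, the estimator is straightforward to sketch in code. The following NumPy fragment is a minimal illustration, not the SOCKS implementation [20]: it assumes Gaussian kernels for both k and l and the common Gram-matrix form of the regularized least-squares solution, W = (G + λMI)⁻¹ with G_ij = k(x_i, x_j) l(u_i, u_j); since the feature-vector definitions in (3) did not survive extraction, that layout is an assumption.

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 sigma^2))."""
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def fit_embedding(X, U_s, sigma_x, sigma_u, lam=None):
    """Precompute W = (G + lambda*M*I)^(-1) from the sample S = {(x_i, u_i, y_i)}."""
    M = X.shape[0]
    lam = 1.0 / M**2 if lam is None else lam       # lambda = 1/M^2, as in Sec. IV
    G = gaussian_gram(X, X, sigma_x) * gaussian_gram(U_s, U_s, sigma_u)
    return np.linalg.inv(G + lam * M * np.eye(M))

def J_hat(u, x, X, U_s, g_y, W, sigma_x, sigma_u):
    """Empirical objective J(u) = g^T W z, with z_i = k(x_i, x) l(u_i, u)."""
    z = (gaussian_gram(X, x[None, :], sigma_x)
         * gaussian_gram(U_s, u[None, :], sigma_u)).ravel()
    return float(g_y @ (W @ z))
```

With W factored once per sample, each evaluation of the empirical objective costs a single matrix-vector product, which is what makes the gradient iterations described below cheap.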
Under that representation, the policy can be obtained as the solution to a linear program, giving a set of probability values over a user-specified set of admissible control actions A ⊂ U. However, a significant drawback of this approach is the need to strategically select A such that it contains controls which are close to the true solution, and it typically only finds a sub-optimal solution to the approximate problem.

Fig. 1. Illustration of the gradient-based method on a stochastic optimal control problem of a nonholonomic vehicle system with bounded control authority seeking to minimize the Euclidean distance to the origin over a single time step. The initial condition is indicated by an orange arrow, the goal is denoted using an × at the origin, and the actual cost surface is depicted using contour lines. (Left) Using the kernel-based estimate m̂, we can empirically estimate the cost of taking control actions in an admissible set A ⊂ U. The resulting states after taking the actions in A from the initial condition are shown, color-coded by their estimated cost. (Center) The control algorithm in [9] chooses the control action in the admissible set A that minimizes the expected cost, but is sub-optimal. The resulting state after taking the chosen action is shown in red. (Right) Our proposed approach using kernel-based gradient descent finds an approximately optimal solution by traversing the approximate cost surface (depicted using filled polygons), without resorting to a sampling-based approach. The resulting state after taking the approximately optimal control action is shown in green.

We propose to compute the control input via a kernel-based stochastic gradient descent method. Using the properties of reproducing kernel Hilbert spaces, we can compute the gradient by taking the partial derivative of the kernel, rather than explicitly computing the gradient with respect to u. This allows us to optimize the control input by directly optimizing within the RKHS and avoids the problem of non-convexity in optimizing for u in the approximate problem. An illustration of this idea is depicted in Figure 1.

III. COMPUTING CONTROLS USING GRADIENT DESCENT IN AN RKHS

We seek to compute the partial derivative of the objective in (6) with respect to u. We first define the notation used to describe the partial derivative of a bivariate function.

Definition 2 (Partial Functional Derivative Notation). Given a bivariate function l : U × U → R and u, u′ ∈ U, we denote the partial derivative as ∂^{p,q} l(u, u′), where p, q are multi-indices indicating the order of differentiation with respect to u and u′, respectively.

As shown in [14], we can compute the partial derivative of any function h ∈ U via the reproducing property as ∂^p h(u) = ⟨h, ∂^{p,0} l(u, ·)⟩_U. (8) In short, this means that we do not need to directly compute the partial derivative of the cost function g with respect to u (which may be unknown if we are only given points g(y_i)), and we may compute the empirical gradient by taking the partial derivative of the kernel l. Let Ĵ(u) = gᵀ W Ψk(x, ·)l(u, ·) be the objective of the approximate optimal control problem in (6). Note that Ĵ(u) can be written using the reproducing property as Ĵ(u) = ⟨gᵀ W Ψk(x, ·), l(u, ·)⟩_U. (9) Then, using (8), the partial derivative of Ĵ(u) with respect to the control u can be computed as ∂^{1,0} Ĵ(u) = ⟨gᵀ W Ψk(x, ·), ∂^{1,0} l(u, ·)⟩_U. (10) This approach has a significant advantage, most notably that most popular kernels are easy to differentiate, meaning we can quickly compute the empirical gradient for an arbitrary cost function g ∈ H. In addition, the empirical cost gradient can be computed as a simple matrix multiplication.
As a practical example, consider the Gaussian kernel l(u, u′) = exp(−‖u − u′‖₂²/2σ²), σ > 0 (assuming u is a scalar variable for simplicity). The partial derivative of the Gaussian kernel is ∂^{1,0} l(u, u′) = −((u − u′)/σ²) l(u, u′). (11) Then the partial derivative of the objective in (10) can be computed as ∂^{1,0} Ĵ(u) = gᵀ W (Ψk(x, ·)l(u, ·) ⊙ ∆), where ⊙ denotes the Hadamard (element-wise) product and ∆ ∈ R^M is a vector with elements ∆_i = −(u − u_i)/σ². We use the empirical gradient of the cost function g computed using (10) in order to compute the gradient direction for stochastic gradient descent. Then, by traversing the approximate cost surface using the empirical gradient, we obtain an approximately optimal solution to the problem in (6). We outline the procedure in Algorithm 1.

Algorithm 1 Kernel-Based Gradient Descent
1: given embedding estimate m̂, initial guess u_0
2: repeat
3: ∆u_n ← ⟨gᵀ W Ψk(x, ·), ∂^{1,0} l(u_n, ·)⟩_U
4: choose step size η
5: u_{n+1} ← u_n − η ∆u_n
6: until stopping criterion satisfied
7: return u_n

Since the estimate m̂ converges in probability to the true embedding m at a minimax optimal rate of O(M^{−1/2}) [12], the approximate cost surface also converges in probability to the true cost surface. Hence, as the sample size increases, we obtain a closer approximation of the true cost surface. However, it is important to note that the empirical cost surface is generally not convex, even if the original function is convex, meaning we are only guaranteed to find a locally optimal solution to the approximate problem. This is expected, since the noise in the data also adds noise to the empirical cost surface. Nevertheless, we can use more advanced gradient descent methods (e.g. using momentum or a "temperature" in place of the learning rate) to mitigate the issues of optimizing over an empirical cost surface. This also motivates the need to choose an initial guess as close as possible to the optimal solution, which is detailed in the next section.

A. Initialization

Initializing the gradient descent algorithm close to the true solution ensures that we obtain an approximately optimal solution in fewer gradient steps. One possibility is to compute a sub-optimal initial guess for Algorithm 1 using [9]. As shown in [9], we can compute a solution to the (unconstrained) approximate stochastic optimal control problem in (6) by representing a stochastic policy π : B_U × X → [0, 1] as a kernel embedding p = Υᵀγ(x) in the RKHS U, (12) where γ(x) ∈ R^P are real-valued coefficients that depend on the state x ∈ X, Υ is a feature vector with elements Υ_j = l(ũ_j, ·), and the points A = {ũ_j}_{j=1}^{P} are a set of user-specified admissible control actions that we want to optimize over. The problem then becomes finding the coefficients γ(x) that optimize the approximate control problem. According to [9], we can view the coefficients γ(x) as a set of probabilities that weight the user-specified control actions in A, which we can find as the solution to a linear program (13). The linear program can be solved efficiently via the Lagrangian dual. Letting C(x) = gᵀ W Ψk(x, ·)Υ, the solution according to [19] is given by a vector of all zeros except at the index j = arg min_i {C_i(x)}, where it is 1. In other words, we choose the control action in A that corresponds to the index j which is the solution to the Lagrangian dual problem. See [9] for more details. By choosing control actions in A that are good candidate solutions to the optimal control problem in (2), we obtain a good initial guess for the gradient-based learning algorithm.
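A compact sketch of Algorithm 1, reusing the helpers above (Gaussian kernels, precomputed W), may be useful. The derivative factor (u_i − u)/σ² comes from differentiating l(u_i, u) with respect to u, and the optional box projection is our addition for the constrained examples below, not a step of Algorithm 1 itself.

```python
def grad_J_hat(u, x, X, U_s, g_y, W, sigma_x, sigma_u):
    """Empirical gradient: sum_i c_i k(x_i, x) l(u_i, u) (u_i - u) / sigma_u^2."""
    kx = gaussian_gram(X, x[None, :], sigma_x).ravel()    # k(x_i, x)
    lu = gaussian_gram(U_s, u[None, :], sigma_u).ravel()  # l(u_i, u)
    c = W.T @ g_y                                         # embedding weights
    return ((c * kx * lu)[:, None] * (U_s - u) / sigma_u**2).sum(axis=0)

def kernel_gradient_descent(u0, x, X, U_s, g_y, W, sigma_x, sigma_u,
                            eta=0.01, iters=100, u_lo=None, u_hi=None):
    """Algorithm 1 with a fixed step size and a fixed iteration budget."""
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        u = u - eta * grad_J_hat(u, x, X, U_s, g_y, W, sigma_x, sigma_u)
        if u_lo is not None:               # optional projection onto a box U
            u = np.clip(u, u_lo, u_hi)
    return u
```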
However, unlike the approach in [9], we do not require the approximately optimal solution to lie within A, and we further improve the solution of the LP using stochastic gradient descent.

IV. NUMERICAL RESULTS

We demonstrate our approach on a regulation problem using a discrete-time stochastic chain of integrators for verification, and on a target tracking problem using nonholonomic vehicle dynamics to demonstrate the utility of the approach. For all problems, we use a Gaussian kernel for k and l, which has the form k(x, x′) = exp(−‖x − x′‖₂²/2σ²), where σ > 0. Following [1], we choose the regularization parameter to be λ = 1/M², where M ∈ N is the sample size used to construct the estimate m̂. In practice, the parameters σ and λ are typically chosen via cross-validation, where σ is chosen according to the relative spacing of the data points (usually the median distance) and λ is chosen such that λ → 0 as M → ∞. A more detailed discussion of parameter selection is outside the scope of the current work (see [12] for recent results on regularization rates). Numerical experiments were performed in Python on an AWS cloud computing instance. Code for all analysis and experiments is available as part of the stochastic optimal control using kernel methods (SOCKS) toolbox [20].

A. Regulation of a Double Integrator System

We consider the problem of regulation for a 2D stochastic chain of integrators system, with dynamics given in (15), where x_t ∈ R² is the state, u_t ∈ R is the control input, which we constrain to be within [−1, 1], w_t is a random variable with distribution N(0, 0.01I), and T_s is the sampling time. We seek to compute a control input u as the solution to a stochastic optimal control problem of the form (2), where Q is a representation of the dynamics as a stochastic kernel. We use the cost function g(x) = ‖x‖₂, which serves to drive the system to the origin. We consider a sample S = {(x_i, u_i, y_i)}_{i=1}^{M} of size M = 1600 taken i.i.d. from Q. The states x_i were taken uniformly in the region [−1, 1] × [−1, 1], the control inputs u_i were taken uniformly from [−1, 1], and the resulting states were generated according to y_i ∼ Q(· | x_i, u_i). We then presumed no knowledge of the system dynamics or the stochastic disturbance for the purpose of computing the approximately optimal control action using our proposed method.

Fig. 2. Vector field of the closed-loop dynamics of a deterministic double integrator system under an optimal control strategy computed using CVX (blue) and the vector field of the approximately optimal closed-loop system under the gradient descent-based control algorithm (orange). We can see that the kernel-based gradient descent solution closely matches the solution from CVX.

To provide a basis for comparison, we computed the optimal control actions using CVX from the evaluation points {x_j}_{j=1}^{R} using the deterministic dynamics. We then propagate the dynamics forward in time using the optimal inputs to obtain the state at the next time instant. The vector field of the closed-loop dynamics under the optimal control inputs is shown in Figure 2 (blue). We then computed the approximately optimal control actions using Algorithm 1 with the sample S taken from the stochastic dynamics to minimize the cost at each point {x_j}_{j=1}^{R} over a single time step. For Algorithm 1, we used a step size of η = 0.01 and limited the number of iterations to 100.
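For concreteness, the sample S for this experiment can be generated along the following lines. The integrator matrices are a hypothetical reconstruction, since equation (15) did not survive extraction: we assume the standard zero-order-hold double integrator with an arbitrary sampling time, and the kernel bandwidths are placeholders.

```python
Ts = 0.25                                    # sampling time (assumed, not stated here)
A = np.array([[1.0, Ts], [0.0, 1.0]])        # hypothetical reconstruction of (15)
B = np.array([[Ts**2 / 2.0], [Ts]])

rng = np.random.default_rng(0)
M = 1600
X = rng.uniform(-1.0, 1.0, size=(M, 2))      # x_i uniform on [-1, 1] x [-1, 1]
U_s = rng.uniform(-1.0, 1.0, size=(M, 1))    # u_i uniform on [-1, 1]
W_noise = rng.normal(0.0, 0.1, size=(M, 2))  # w_t ~ N(0, 0.01 I), std = 0.1
Y = X @ A.T + U_s @ B.T + W_noise            # y_i ~ Q(. | x_i, u_i)

g_y = np.linalg.norm(Y, axis=1)              # g(y) = ||y||_2 at the sample points
W = fit_embedding(X, U_s, sigma_x=0.1, sigma_u=0.1)  # bandwidths are placeholders
```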
The vector field of the closed-loop dynamics using the approximately optimal solution computed using our proposed method is shown in Figure 2 (orange). We can see that the gradient-based algorithm computes approximately optimal control inputs which closely match the solution computed via CVX. This demonstrates the effectiveness of the gradient-based algorithm in computing approximately optimal control actions with no prior knowledge of the dynamics or the stochastic disturbance. Note that the quality of the empirical approximation of the cost surface depends on the sample size M. As the sample size increases, the approximation improves, and we obtain a closer approximation of the optimal solution using our method. To demonstrate this, we computed the mean error and the maximum error between the data-driven gradient-based solution and the solution via CVX using the deterministic dynamics for varying sample sizes M ∈ [250, 2500], averaged over 20 iterations. The results are shown in Figure 3. We can see that the error of the approximately optimal solution decreases as M increases. However, we can also see that the quality of the solution does not improve appreciably as the sample size increases, which is due to the asymptotic convergence of the estimate m̂ to the true embedding m. This presents a tradeoff between computation time and numerical accuracy, since the computational complexity scales polynomially with the sample size.

B. Target Tracking Using a Nonholonomic Vehicle

We consider the problem of target tracking for a nonholonomic vehicle system as in [9]. The dynamics are given by (16), where x = [x₁, x₂, x₃]ᵀ ∈ R³ are the states and u = [u₁, u₂]ᵀ ∈ R² are the control inputs. The control inputs are constrained such that u_t ∈ [0.5, 1.2] × [−10.1, 10.1]. We discretize the dynamics in time using a zero-order input hold and apply an affine disturbance w ∼ N(0, 0.01I). We define a target trajectory as a sequence of position coordinates indexed by time, shown in Figure 4 (blue). We choose an initial condition of x₀ = [−1, −0.2, π/2]ᵀ, and evolve the system forward in time over the time horizon N = 20. At each time t, starting at t = 0, we seek to compute a control input u_t as the solution to a stochastic optimal control problem (17), where the cost function g_t(x) is the squared Euclidean distance to the target position at time t.

Fig. 4. Trajectory generated via our proposed method (orange) which tracks the target trajectory (blue). The trajectory generated using the technique in [9] is shown for comparison (green). Note that the trajectory computed using our proposed approach more closely follows the target trajectory.

We consider a sample S = {(x_i, u_i, y_i)} taken i.i.d. from Q, where the control inputs u_i were taken from [0.5, 1.2] × [−10.1, 10.1], and the resulting states were drawn according to y_i ∼ Q(· | x_i, u_i). Using the sample S, we then computed an estimate m̂ of the kernel embedding m as in (3) using Gaussian kernels with bandwidth parameter σ = 3, chosen via cross-validation. In order to compute a baseline for comparison, we then computed the control actions using [9], which computes a stochastic policy embedding p_t at every time step over a finite set A = {ũ_j}_{j=1}^{210}, as in (12). We chose the controls ũ_j in the admissible set A to be uniformly spaced in the range [0.5, 1.2] × [−10.1, 10.1]. Starting at the initial condition x₀, we then computed the control actions by solving (13) at each time step forward in time. The resulting trajectory is plotted in Figure 4 (green), had a total cost of Σ_{t=0}^{N} g_t(x_t) = 1.447, and the computation time was 1.159 seconds.
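Via the Lagrangian dual discussed in Sec. III-A, this baseline reduces to evaluating the empirical cost on the candidate grid and picking the minimizer, as in the sketch below; the 15 × 14 factorization of the 210 grid points is an assumption, as the paper states only that they are uniformly spaced.

```python
def lp_initialization(x, A_grid, X, U_s, g_y, W, sigma_x, sigma_u):
    """Dual solution of the LP (13): all probability mass falls on the grid
    action with the smallest empirical cost C_j(x)."""
    costs = [J_hat(u, x, X, U_s, g_y, W, sigma_x, sigma_u) for u in A_grid]
    return A_grid[int(np.argmin(costs))]

# 15 x 14 = 210 uniformly spaced candidate actions for the vehicle example
v, w = np.meshgrid(np.linspace(0.5, 1.2, 15), np.linspace(-10.1, 10.1, 14))
A_grid = np.column_stack([v.ravel(), w.ravel()])
```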
We then evolve the system forward in time using the approximately optimal control action selected via the kernel-based gradient descent algorithm (Algorithm 1), initializing with the solution to (13) using A as above. For Algorithm 1, we chose a step size of η = 0.1 and limited the number of gradient iterations to 100. The resulting trajectory is plotted in Figure 4 (orange), and has a total cost of Σ_{t=0}^{N} g_t(x_t) = 1.363. The total computation time was 8.865 seconds. As expected, we see that the trajectory computed using our method more closely follows the target trajectory (it has a lower overall cost). This shows that the gradient-based algorithm is able to compute the approximately optimal control actions for a nonlinear system at each time step, using only data collected from system observations.

V. CONCLUSIONS & FUTURE WORK

In this paper, we presented a method for computing the approximately optimal control action for stochastic optimal control problems using a data-driven approach. Our proposed method leverages kernel-based gradients and achieves lower-cost control solutions than existing sample-based approaches. We plan to explore methods to compute control solutions more efficiently using the geometric properties of the RKHS, e.g. via projections, and to adapt the algorithm to dynamic programs and constrained stochastic optimal control problems.
CFD simulation of dense gas dispersion in neutral atmospheric boundary layer with OpenFOAM In this study, Monin–Obukhov similarity theory is used to specify the profiles of velocity, turbulent kinetic energy (k), and eddy dissipation rate (ϵ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\epsilon$$\end{document}) in atmospheric boundary layer (ABL) flow. The OpenFOAM standard solver buoyantSimpleFoam is modified to simulate neutrally stratified ABL. The solver is able to obtain equilibrium ABL. For gas dispersion simulation, buoyantNonReactingFoam is developed to take into account fluid properties change due to temperature, buoyancy effect, and variable turbulent Schmidt number. The solver is validated for dense gas dispersion in wind tunnel test and field test of liquefied natural gas vapour dispersion in neutrally stratified ABL. Introduction Many human activities are affected by the atmospheric boundary layer (ABL). This is also where most air pollution phenomena occur. Understanding of the processes taking place in the ABL has attracted various research studies. Some typical applications of ABL-related research topics are wind engineering, urban flows, weather forecast, air pollution, and risk assessment of hazardous material spills in industrial sites One hazardous dense gas is liquefied natural gas (LNG), which is an effective solution for long-distance natural gas transfer. LNG has become the preferred option for international trading of natural gas. However, LNG storage, handling, and transportation are exposed to serious risks for humans, equipment, and the environment due to thermal hazards associated with combustion events such as pool fire, vapour cloud fire, explosion, or rapid phase transition. Safety assessment and hazard mitigation methods should be applied to lower the possibilities of catastrophic disaster relating to the LNG industry. The scope of this study is constrained to the discussion of dense gas dispersion when released into the ABL. Computational fluid dynamics (CFD) is increasingly being used in simulation of ABL flows. Open-source CFD tool is a more powerful research tool in comparison to proprietary software because of its flexibility to incorporate new implementation of field calculation and post-processing. OpenFOAM is an open-source CFD software package that attracts users from both industry and academia. Using a general CFD code such as OpenFOAM for simulating ABL flow and gas dispersion also encourages research sharing and reusing code in this specific field where in-house code is usually adopted. An important task before modelling gas dispersion in the ABL is obtaining the correct ABL flow prior to the release of gas source. One approach to achieve this is using equilibrium ABL, i.e., zero stream-wise gradients of all variables, as a steady-state ABL flow. For neutral ABL, Richards and Hoxey (1993) proposed appropriate boundary conditions of mean wind speed and turbulence quantities for the standard k − model based on Monin-Obukhov similarity theory (MOST). These profiles were derived assuming constant shear stress with height and were used to model ABL as horizontally homogeneous turbulent surface layer (HHTSL). However, HHTSL was hard to achieve mostly due to the ground boundary conditions (Yang et al. 
2009), which manifested in a decay of velocity profile due to a spike in the turbulent kinetic energy close to the ground. However, consistency between wall boundary conditions, turbulence model with associated constants, and numerical schemes was shown to achieve HHTSL (Jonathon and Christian 2012;Parente et al. 2011;Yan et al. 2016). These authors adopted proprietary CFD software for their simulation. Applying these implementations in opensource CFD code also require extensive modifications of the source code to successfully simulate equilibrium ABL. Open-FOAM was previously used for atmospheric buoyant (Flores et al. 2013) and dense gas dispersions (Mack and Spruijt 2013;. However, the validation of these solvers in simulation of equilibrium ABL was not reported. Therefore, the atmospheric turbulence might not be correctly solved throughout the computational domain. In this study, MOST is used to model the profiles of velocity, turbulent kinetic energy (k), and eddy dissipation rate ( ) of ABL according to an approach proposed by Richards and Hoxey (1993). These profiles are used as the boundary conditions at the inlet of ABL flow simulation. OpenFOAM application buoyantSimpleFoam is modified to simulate neutrally stratified ABL turbulence. For gas dispersion simulation, buoyantNonReactingPimpleFoam is developed to take into account the buoyancy effect and variable turbulent Schmidt number. The solver is validated for dense gas dispersion cases from wind tunnel and field tests of LNG vapour dispersion in neutrally stratified ABL. Models The k − model is used for turbulence modelling. It is based on expression of turbulent dynamic viscosity t by the following: Jones and Launder (1972): Two additional transport equations for turbulence kinetic energy k and turbulence dissipation rate are required. To include the effect of buoyancy, the transport equations for k and are as follows: where is the fluid density; C = 0.09 , k = 1 , = 1.3 , C 1 = 1.44 , and C 2 = 1.92 are model constants as proposed in original paper (Launder and Spalding 1974). The value of C 3 is calculated using the following: where v and u are vertical and horizontal velocities accordingly. G k is production of turbulence kinetic energy due to the mean velocity gradients. G b is the buoyancy source term: where C g = 1∕Pr t is used as a model constant to take into account the user-defined value of turbulent Prandtl number Pr t . g is gravitational vector. Energy, heat, and transport properties are determined by a set of thermophysical models (The OpenFOAM Foundation 2017) in OpenFOAM. This set defines mixture type, transport and thermodynamic properties models, choice of energy equation variable, and equation of states. The fluid in a simulation is defined as a mixture of fixed compositions. Enthalpy is chosen as energy equation variable. Transport and thermodynamic properties are determined using models based on the density , which are calculated from pressure and temperature fields. Polynomial functions of order N are used to relate transport property , the specific heat c p , and density with temperature field T: where a i , a c p i , and a i are the polynomials coefficients. ABL air inlet MOST has been validated for the surface layer of ABL by many empirical studies (Foken 2006). It assumes horizontally homogeneous and quasi-stationary flow field, i.e., profiles of (4) flow variables are only varying in the vertical direction and their vertical fluxes are assumed constant. 
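Stepping back briefly to the thermophysical models: the polynomial property evaluation of Eq. (6) is simple to illustrate. The sketch below mirrors OpenFOAM's polynomial transport/thermodynamics evaluation; the coefficient values are placeholders, not the ones used in this study.

```python
# Illustrative only: placeholder order-1 polynomial coefficients a_i for
# mu(T), cp(T) and rho(T); the study's actual coefficients are not shown here.
mu_coeffs = [1.0e-6, 5.0e-8]
cp_coeffs = [1000.0, 0.1]
rho_coeffs = [1.8, -0.002]

def poly_property(coeffs, T):
    """Evaluate an order-N polynomial property phi(T) = sum_i a_i * T**i,
    as in Eq. (6)."""
    return sum(a * T**i for i, a in enumerate(coeffs))

T = 250.0  # temperature in K
mu, cp, rho = (poly_property(c, T) for c in (mu_coeffs, cp_coeffs, rho_coeffs))
```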
The inlet boundary conditions proposed by Richards and Hoxey (1993) based on MOST are widely used in CFD study of atmospheric flow. The velocity, turbulent production rate, k and dissipation rate profiles in vertical direction z are written as follows: where z 0 is aerodynamic roughness length, u * is friction velocity, and C is k − model constant. These profiles are implemented in OpenFOAM as atm-BoundaryLayer class and its subclasses. Required parameters are flow and vertical direction, reference velocity, reference height, and aerodynamic roughness length. The friction velocity is calculated as follows: Parente et al. (2011) presented an elaborate procedure to ensure the consistency for arbitrary inlet profile of turbulent kinetic energy k. Instead of altering model constants as Yang et al. (2009), the effect of non-constant k on momentum and equation can be characterised by deriving an equation for C : Source terms are added to k and transport equations to ensure equilibrium condition: Richards and Norris (2011) revisited the problem of modelling the HHTSL by deriving the inlet profiles directly from the conservation and equilibrium equations. This allows various inlet profiles to be specified by varying the turbulence model constants. For standard k − models, the inlet profiles of velocity and turbulence properties are identical to Eq. (7). However, they suggested to change the von-Karman constant according to model constants as follows: Using the standard k − model constants, we obtain k− = 0.433. Wall boundary conditions In CFD, the below approximation is used to calculate wall shear stress: where y P is distance to wall of the wall adjacent cell. Subscripts w and P denote field value evaluated at wall and wall adjacent point, respectively. However, this approximation is inaccurate when wall velocity gradient is significantly larger than velocity difference between the adjacent cell and the wall. This is the case for most ABL flows. Turbulent kinematic viscosity t wall function is used to calculate the wall shear stress w from the wall velocity difference. To take into account the aerodynamic roughness length z 0 , the calculation of turbulent kinematic viscosity at wall adjacent cell is as follows: where friction velocity u * can be calculated from a simple relation derived by Launder and Spalding (1974), assuming that generation and dissipation of energy are in balance: wall function is used to calculate value of at wall adjacent cell P as follows: The wall is usually defined as non-slip condition where velocity is zero. However, to account for the effect of aerodynamic roughness length, a new boundary condition for velocity is implemented in OpenFOAM as follows: (16) u P = u * ln y P + z 0 z 0 . Top, side, and outlet boundaries At the outlet boundary, the flow is assumed fully developed and unidirectional. All flow variables are supposed to be constant at this boundary. The top and side of the computational domain are external boundaries representing the far fields of flow. If a constant pressure is applied in these boundaries, this may alter the inlet wind profile in case the prescribed pressure is not matched with the boundary velocity (Luketa-Hanlin et al. 2007). The zero-gradient boundary condition, which set normal velocity to zero and all others variables are set equal to the inner values, or symmetry condition can be used at the top and side boundaries to reserve the wind profile and eliminate the effect of changing the inlet profiles. 
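Since the Richards and Hoxey profiles above are closed-form, they can be sketched directly. The snippet below implements the relations of Eqs. (7)-(8), with illustrative parameter values standing in for the Table 2 settings, which are not reproduced here.

```python
import numpy as np

KAPPA, C_MU = 0.41, 0.09   # von Karman constant and k-epsilon model constant

def most_inlet_profiles(z, u_ref, z_ref, z0):
    """Neutral ABL inlet profiles per Richards and Hoxey (1993), Eqs. (7)-(8)."""
    u_star = KAPPA * u_ref / np.log((z_ref + z0) / z0)   # friction velocity
    u = (u_star / KAPPA) * np.log((z + z0) / z0)         # mean wind speed
    k = u_star**2 / np.sqrt(C_MU)                        # turbulent kinetic energy
    eps = u_star**3 / (KAPPA * (z + z0))                 # dissipation rate
    return u, k, eps

# Illustrative values only (not the Table 2 parameters):
z = np.linspace(0.1, 500.0, 200)
u, k, eps = most_inlet_profiles(z, u_ref=10.0, z_ref=6.0, z0=0.01)
```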
Hargreaves and Wright (2007) showed that zero-gradient velocity at the top boundary resulted in a decay of velocity downstream, due to the extraction of energy at the wall with respect to the wall shear stress. A driving shear stress, zero flux of turbulent kinetic energy, and a flux of dissipation rate are imposed at the upper boundary: Numerical tool and data sets OpenFOAM is an open-source CFD software package based on finite volume method, co-located variables, and unstructured polyhedral meshes. In this study, buoyantSimple-Foam is used to simulate ABL turbulence. The application buoyantNonReactingFoam is developed based on rhoReactingBuoyantFoam solver, previously used for dense gas dispersion by , to simulate atmospheric turbulence under neutral stability for dispersion of dense gas continuous source in flat terrain. buoy-antNonReactingFoam uses polynomial thermophysical models to account for the change of fluid properties due to temperature. The solver takes into account the buoyancy effect and the variable turbulent Schmidt number. Algorithms used in these two solvers are presented in Algorithm 1 and 2. (17) Algorithm 1 buoyantSimpleFoam solver algorithm Boundary conditions used for ABL flows are developed as new libraries in OpenFOAM. These include velocity inlet, turbulent kinetic energy, dissipation rate inlet, and wall boundary conditions. A set of full-scale field tests and experimental wind tunnel tests for LNG dispersion model validation was reported in Ivings et al. (2013). Most data of these tests were available in REDIPHEM database (Nielsen and Ott 1996). The data contain physical comparison parameters of each test. These are maximum arc-wise concentration, i.e., the maximum concentration across an arc at the specified distance from the source and point-wise concentration, i.e., the concentration at specific sensor locations. Two wind tunnel data DA0120 and DAT223 are used to validate OpenFOAM solver in prediction of dense gas dispersion over a flat, unobstructed terrain in simulated neutral ABL. In these tests, continuous source of SF 6 gas was released in flat terrain without obstructions. For field test, we select Burro9, which is continuous LNG spills under neutral ABL. Domain and mesh A 2D domain of 5000 m × 500 m with the resolution of 500 × 50 cells is used for the simulation of neutral ABL over flat terrain. The mesh is uniform in stream-wise direction and stretched in vertical direction with the expansion ratio of 1.075. Numerical setting The boundary conditions of the cases are represented in Table 1. ABL parameters used to define inlet variable profiles are listed in Table 2 according to the reference case of Hargreaves and Wright (2007). Steady-state simulation is employed using buoyant-NonReactingSimpleFoam described in previous section. OpenFOAM discretization schemes, velocity-pressure coupling algorithm as well as linear solvers are listed below: Residual control is set at three order of magnitude for pressure and four order of magnitude for other variables such as U, k, , and h. Modification of k − (Eq. 11) are used to simulate neutral ABL and comparing with standard models. These three cases are summarised in Table 3. Different levels of inlet kinetic energy are obtained by altering C according to Eq. (9). The source term by Pontiggia et al. (2009) Results and discussion of neutral ABL simulations Modification of k − models achieves the matched results, as shown in Fig. 1. Including source terms as in Eq. 
(10) is sufficient to compensate for the deviation between the von Karman constant κ_k-ε = 0.43 calculated from the model constants (Eq. 11) and the value κ = 0.41 used in Monin-Obukhov theory. Results from modelling different turbulence kinetic energy levels by varying C_μ are presented in Fig. 2. The profiles of velocity and dissipation rate match the Monin-Obukhov profiles perfectly. In the C_μ = 0.017 simulation, the value of k near the ground is smaller than the theoretical value; however, the kinetic energy level matches the theory at greater heights. The smaller value of k at the wall-adjacent cell is due to the wall function, where the wall treatment is implemented with the default C_μ = 0.09. However, the overall results are acceptable for verifying the proposed model in simulating different levels of kinetic energy.

Fig. 1 Comparison of velocity, turbulent kinetic energy, and turbulent dissipation rate profiles at the outlet boundary from the simulation of neutral ABL using the standard k-ε (kEps) and modified k-ε (kEpsMod) turbulence models, and the MOST inlet profiles (MOST)

Fig. 2 Comparison of velocity, turbulent kinetic energy, and turbulent dissipation rate profiles from simulations of different kinetic energy levels obtained by varying C_μ: C_μ = 0.09 (KEpsCmu09), C_μ = 0.017 (KEpsCmu017), and the MOST inlet profiles (MOST)

Numerical setting

The effect of the turbulent Schmidt number Sc_t on dense gas dispersion is investigated. Three test cases are summarised in Table 4. The effect of the turbulence model is examined by applying the modified k-ε model, which was already validated for simulating the ABL over flat terrain in Sect. 3. First, a steady simulation using buoyantSimpleFoam is performed to establish the steady ABL flow prior to the dense gas release; this solver includes buoyancy effects and so accounts for density stratification in the dense gas flow. The atmospheric inlet profiles are specified by MOST with the parameters in Table 5. The standard k-ε model with modifications is used to study the ability of each model to simulate the ABL. Second, a transient simulation is performed using the steady simulation solutions as initial fields. A modified version of rhoReactingBuoyantFoam is used to model the multi-species flow, where the mixture considered is air and the dense gas SF6. The wind tunnel tests were conducted in isothermal conditions; therefore, constant thermal and transport properties are used for both gases. In the simulations of the DA0120 and DAT223 tests, the discretization schemes and linear solver settings are identical to those of the neutral ABL simulation (Sect. 3).

Peak concentration prediction

The steady-state plumes at ground level for the DA0120 and DAT223 tests are plotted in Fig. 3. Under a higher release volume flow rate and higher wind speed, the DAT223 plume is wider and spreads further downstream than the DA0120 plume. The predicted and measured peak gas concentrations are compared at several distances from the spill in Fig. 4. The turbulent Schmidt number Sc_t has a significant effect on the prediction of dense gas dispersion. The original rhoReactingBuoyantFoam code, which assumes the species diffusivity equals the viscosity, over-predicts concentration by a factor of three. The modified code takes the variable species diffusivity into account by reading Sc_t from user input. The value Sc_t = 0.3 is shown to yield an excellent match with the experimental data, with only a slight, acceptable over-prediction of species concentration at a point near the source release.
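The evaluations in the following sections compare predictions against measurements using five statistical performance measures (SPMs). Since the defining Table 6 did not survive extraction, the sketch below uses the standard definitions from the dispersion-model-evaluation literature, which may differ in detail from the paper's table; it assumes strictly positive measured and predicted concentrations.

```python
import numpy as np

def spm(Cm, Cp):
    """Standard SPMs for dispersion models: MRB, MRSE, FAC2, MG, VG.
    Cm = measured concentrations, Cp = predicted concentrations (all > 0)."""
    Cm, Cp = np.asarray(Cm, float), np.asarray(Cp, float)
    mrb = np.mean(2.0 * (Cm - Cp) / (Cm + Cp))
    mrse = np.mean(4.0 * (Cp - Cm) ** 2 / (Cp + Cm) ** 2)
    ratio = Cp / Cm
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    log_ratio = np.log(Cm / Cp)
    mg = np.exp(np.mean(log_ratio))
    vg = np.exp(np.mean(log_ratio ** 2))
    return {"MRB": mrb, "MRSE": mrse, "FAC2": fac2, "MG": mg, "VG": vg}
```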
Results from the DAT223 simulation are presented in Fig. 5. Satisfactory over-predicted peak concentration is similar to DA0120 case. Figure 6 presents gas concentration at the downwind distance X = 1.84 of the DA0120 test. The simulation can reproduce the averaged gas concentration. The first incidence time of gas concentration is earlier than observed in experiments. However, the time for reaching averaged maximum concentration is well predicted. Statistical model evaluation Statistical performance measures (SPMs) are means to compare prediction parameters and the measured ones for model evaluation. The SPM chosen should reflect the bias of these predictions. In the context of LNG vapour dispersion model evaluation, Ivings et al. (2013) proposed five SPMs including mean relative bias (MRB), mean relative square error (MRSE), the fraction of predictions within the factor of two of measurements (FAC2), geometric mean bias (MG), and geometric variance (VG). Definition and acceptability criteria for each SPMs are presented in tabular form as Table 6 where C m and C p are the measured and simulated concentration, respectively, and A denotes the mean operation of variable A. Statistical performance of OpenFOAM results is compared with the specialised commercial code for gas dispersion FLACS in Table 6. FLACS results are extracted from Hansen et al. (2010). The performance of current OpenFOAM code is considerably better than FLACS. In fact, FLACS is based on the porosity distributed resistance (PDR) approach. Therefore, this modelling of the boundary layer close to solid surfaces might contribute to the outperformance of the OpenFOAM model in the comparison. In conclusion, even though larger tests were validated in FLACS, the proposed model in OpenFOAM is a promising tool for further investigation of atmospheric dense gas dispersion. Numerical setting The steady simulation uses the atmospheric inlet specified by MOST. Standard k − with modifications is used to study the ability to simulate the ABL with each model. All required meteorological parameters are tabulated in Table 7, where u ref and T ref are air velocity and temperature at the height of 2 m respectively. The transient simulation is divided into two steps. The first step is during the spill duration, i.e., from the time of zero to when the spill ends. The second step is after the spill stops to the end time of simulation. The gas inlet is treated as a ground boundary in this later step. The gas inlet condition is usually obtained from separate source term modelling. There is not much information about the vaporization of LNG from the experimental data. Therefore, uncertainty arises at the setting of this condition. Mass flux of LNG or the LNG vaporization rate is used to derive source term of LNG spilling. Luketa-Hanlin et al. (2007) (Luketa-Hanlin et al. 2007). The spill diameter is derived from the vaporization rate, reported spill mass m spill , and duration t spill : The volume spill rate is used as gas inlet condition: The LNG spill variables used in simulation are also tabulated in Table 7. The steady simulation Profiles of velocity and turbulence quantities are sampled at the outlet boundary and compared with Monin-Obukhov theory profiles which are used as inlet boundary conditions. The steady-state simulation of ABL with k − reveals that wind velocity and turbulence profiles are accurately reproduced as presented in Fig. 7. 
The success of the modified k − model proves that the proposed model can adequately reproduce the Monin-Obukhov ABL profiles in the fullscale simulation. Mesh sensitivity study Maximum concentration at the arcs of 57 m, 140 m, 400 m, and 800 m downwind are used as performance parameters for the mesh sensitivity study. Three meshes with refined factors as summarised in Table 8 are used to simulate LNG gas dispersion under adiabatic thermal wall condition. Results from four peak arc-wise concentrations are plotted in Fig. 8. Increasingly, mesh refinements help to resolve maximum concentration more accurately. The difference of gas concentrations between meshes is significantly reduced with refinement. Due to computational restriction, no further mesh is used for mesh sensitivity study and Mesh 3 parameters (Table 8) are chosen for the following study. Ground heat transfer sensitivity study Three different models of heat transfer from the ground are used to study their effect on the numerical results, which are summarised in Table 9. For constant heat flux case, the value of 200 W∕m 2 is used. The effect of ground heat in predicting peak gas concentration is plotted in Fig. 9. The adiabatic case results in a better prediction of experimental data than the fixed flux and fixed temperature cases. However, all simulations yield under-predicted results. This may be due to that the buoyancy effect is over-predicted, and consequently, the gas concentration is zero in the fixed flux case at downwind arcs (at 400 and 800 m). Turbulence Schmidt number, Sc t , sensitivity study Two values of Sc t = 1 and Sc t = 0.3 are used for studying the sensitivity of the proposed model in predicting the maximum gas concentration. Results are compared in Fig. 10. Sc t = 0.3 , which was used previously in wind tunnel dense gas dispersion which is shown to be appropriate for accurate prediction of maximum gas concentration at the 57 m array and 140 m array. Further downwind, at 400 m array and 800 m array, there is no significant difference between the two values. Isosurface contour The vertical isosurface contours at X = 140 are illustrated in Fig. 11. Under-predicted cloud height is revealed in all tests, indicating that the cloud buoyancy is not correctly solved. Horizontal isosurface contours at height Z = 1 are shown in Fig. 12. The gas concentration contour is plotted side by side with the contour from experiment data, where the left is the result of interpolating concentration at some concentration data points (presented in plots by black dot points) and the right is from experimental data. Overall, the cloud height is considerably well predicted, but the cloud width is over-predicted. Furthermore, it can be seen that the gas moves downwind slower than experimental data, which under-estimates the downwind spreading of the gas cloud. Concentration predictions Fire dynamics simulator (FDS) (McGrattan et al. 2013) is a low Mach number code using the LES turbulence model. The computational domain is discretised into a connected rectilinear mesh. The governing equations are discretised using finite-difference method. A second-order scheme is used for space discretization and an explicit second-order Runge-Kutta scheme for time discretization. OpenFOAM concentration results are compared with FDS data extracted from Mouilleau and Champassith (2009). The comparison of OpenFOAM, FDS, and experimental results for Burro9 test is shown in Fig. 13. FDS is overpredicted, while OpenFOAM is under-predicted. 
However, OpenFOAM is accurate in prediction at 800 m arc. Figure 14 is the plot of gas concentration at 1 m elevation at 140 m downwind of Burro9 experiment (EXP) and simulations using the developed solver (FOAM) and FDS (FDS). For the developed solver result, the peak concentration is under-estimated, while the temporal trend of changing concentration generally shows good agreement with validation data. The concentration magnitude is fairly matched except during local maximum/minimum durations. Also shown in Fig. 14 is the result from FDS simulation which is generally over-predicted. However, the developed solver cannot capture the fluctuation, while FDS yields fluctuating gas concentration over the time period. This is an advantage of LES over RANS turbulence model. The over-prediction of FDS may be due to that a constant coefficient Smagorinsky model was adopted in the simulation. However, the dynamic Smagorinsky model was shown to improve the gas dispersion prediction (Ferreira Jr and Vianna 2016). This indicates that it would be a promising approach to use LES to enhance the performance of the developed solver. Statistical model evaluation Overall statistical performance of OpenFOAM results is compared versus FLACS with data extracted from Hansen et al. (2010) in Table 10. The predictions do not match all SPMs. However, some important SPMs are within the acceptable range. All gas concentrations are within a factor of two (FAC2 = 1) and better than FLACS (FAC2 = 0.94). Conclusions A solver is developed to reproduce horizontal homogeneous atmospheric surface layer in neutrally stratified ABL using OpenFOAM. The empirical atmospheric boundary layer model MOST is used to specify the inlet boundary conditions for velocity, turbulent kinetic energy, and dissipation rate. Flow variable profiles at outlet boundary are successfully maintained and consistent with their profiles at the inlet boundary. This demonstrates the effectiveness of the solver in simulating the horizontal homogeneous atmospheric surface layer. It can also predict different levels of ABL turbulence kinetic energy. A solver for ABL gas dispersion simulation taking into account buoyancy effect, variable turbulence Schmidt number, and ground heat transfer is developed using the Open-FOAM platform. In the study of dense gas dispersion in neutral simulated ABL, the model is successfully validated by reproducing maximum gas concentration. SPMs from simulation results are better than those from the specialised commercial software for gas dispersion, FLACS. In the study of LNG accidental release, a dense cold gas vapour dispersion in ABL with three ground heat transfer assumptions are simulated and compared with the full-scale field measurements. The gas peak concentration is used as validation parameters. Adiabatic wall assumes zero heat flux from ground to the gas cloud, whereas the fixed temperature model assumes isothermal ground where the ground temperate remains unchanged when in contact with the cold gas cloud. The real heat flux to the gas cloud would be in between these two cases. The other model assuming a fixed flux of heat to the gas cloud is also included. Of the three ground heat transfer models, adiabatic wall gives the closest prediction of gas peak concentration. The model is shown to accurately predict vertical buoyancy, while the cloud spreading downwind is under-predicted. 
SPMs from the simulation results are compared with those of the LES code FDS and the specialised dispersion code FLACS, showing that the solver is more accurate in predicting gas concentration in the neutrally stratified ABL. Further investigation is required to validate the OpenFOAM solver in an ABL with thermal stratification.
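As referenced in the conclusions, the MOST inlet profiles have a simple closed form in the neutral case. The sketch below uses the standard Richards-Hoxey log-law expressions for velocity, turbulent kinetic energy, and dissipation rate; the friction velocity u_star and roughness length z0 are illustrative values, not taken from the paper.

```python
import numpy as np

# Neutral-ABL inlet profiles in the Richards-Hoxey / MOST form.
# u_star (friction velocity) and z0 (roughness length) are
# illustrative values, not taken from the paper.
KAPPA = 0.41  # von Karman constant
C_MU = 0.09   # standard k-epsilon model constant

def abl_profiles(z, u_star=0.4, z0=0.03):
    """Return (U, k, epsilon) at heights z [m] for a neutral surface layer."""
    z = np.asarray(z, dtype=float)
    U = (u_star / KAPPA) * np.log((z + z0) / z0)    # log-law velocity
    k = np.full_like(z, u_star**2 / np.sqrt(C_MU))  # height-constant TKE
    eps = u_star**3 / (KAPPA * (z + z0))            # dissipation rate
    return U, k, eps

heights = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
U, k, eps = abl_profiles(heights)
for zi, ui, ki, ei in zip(heights, U, k, eps):
    print(f"z = {zi:5.1f} m: U = {ui:5.2f} m/s, "
          f"k = {ki:.3f} m2/s2, eps = {ei:.5f} m2/s3")
```

With these profiles imposed at the inlet, a horizontally homogeneous solver should return essentially the same U, k, and epsilon at the outlet, which is the consistency check reported above.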
2019-08-05T17:19:30.998Z
2019-08-02T00:00:00.000
{ "year": 2019, "sha1": "55b02511b5272397f09456272777b09b62b53948", "oa_license": "CCBYNC", "oa_url": "https://dspace.lib.cranfield.ac.uk/bitstream/1826/14458/4/CFD_simulation_of_dense_gas_dispersion-2020.pdf", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "55b02511b5272397f09456272777b09b62b53948", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
254248350
pes2o/s2orc
v3-fos-license
Risk of stroke and retinopathy during GLP-1 receptor agonist cardiovascular outcome trials: An eight-RCT meta-analysis

Purpose: To explore the risk of stroke (including ischemic and hemorrhagic stroke) in type 2 diabetes mellitus treated with glucagon-like peptide 1 receptor agonist (GLP-1RA) medication, according to data from the cardiovascular outcome trials (CVOTs). Methods: Randomized controlled trials (RCTs) on GLP-1RA therapy and cardiovascular outcomes in type 2 diabetics, published in full-text journal databases such as Medline (via PubMed), Embase, ClinicalTrials.gov, and the Cochrane Library from their establishment to May 1, 2022, were searched. We assessed the quality of individual studies using the Cochrane risk-of-bias algorithm. RevMan 5.4.1 software was used for the meta-analysis calculations. Results: A total of 60,081 randomized participants were included from these 8 GLP-1RA cardiovascular outcome trials. The pooled analysis reported a statistically significant effect of GLP-1RA treatment versus placebo on total stroke risk [RR=0.83, 95%CI (0.73, 0.95), p=0.005] and on subtypes such as ischemic stroke [RR=0.83, 95%CI (0.73, 0.95), p=0.008], and no significant effect on the risk of hemorrhagic stroke [RR=0.83, 95%CI (0.57, 1.20), p=0.31] or retinopathy [RR=1.54, 95%CI (0.74, 3.23), p=0.25]. Conclusion: GLP-1RA significantly reduces the risk of ischemic stroke in type 2 diabetics with cardiovascular risk factors.

According to the new global diabetes map released by the IDF (International Diabetes Federation) in 2021 (http://www.diabetesatlas.org/), there were 537 million adults with diabetes worldwide by 2021. A variety of complications can arise from the progression of diabetes, and cardiovascular complications, such as myocardial infarction, stroke, heart failure, and malignant arrhythmia, are the main cause of death in diabetics. Stroke is a brain disorder attributed to the sudden rupture or blockage of a blood vessel within the brain, thereby cutting off the flow of blood to brain tissue. Nearly 15 million people worldwide suffer a stroke each year; about 33% are left permanently disabled and 40% die (1,2). Diabetes is now widely recognized, nationally and internationally, as a major and independent risk factor for stroke morbidity and mortality. A 2022 study in the European Journal of Preventive Cardiology showed that type 2 diabetics are at high or very high risk of fatal myocardial infarction (MI) or stroke (3). Recent data from a study conducted by Wang Congjun at Beijing Tiantan Hospital in China showed that 33.4% of 833,000 acute ischemic stroke patients had comorbid diabetes. Because atherosclerosis is the pathological basis of most ischemic strokes and some hemorrhagic strokes, and diabetes can precipitate or exacerbate the development of atherosclerotic lesions, it is important to pay attention to protection against atherosclerosis when developing individualized glucose-lowering regimens for type 2 diabetes mellitus (T2DM) aimed at reducing the risk of stroke. GLP-1RAs are a new type of glucose-lowering drug with a low risk of hypoglycemia; they stimulate insulin secretion and lower glucagon secretion, delay gastric emptying, and reduce appetite, thereby lowering HbA1c and modestly improving blood lipids and body weight (4). A meta-analysis showed that GLP-1-based therapies appear to provide beneficial effects against atherosclerosis (5).
A pooled analysis reported no significant effect of GLP-1RA on atherosclerotic MACE (RR 0.91, 95% CI 0.84-1.00, p=0.05) (6). International attention has mainly focused on the cardiac and renal outcomes of GLP-1RA clinical trials, and there are other good meta-analyses and umbrella reviews analyzing MACE events (including nonfatal stroke) (7-11). Less attention, however, has been paid to whether GLP-1RA treatment can reduce the risk of the subtypes of stroke in patients with T2DM. On the other hand, the 2013 American Heart Association (AHA)/American Stroke Association (ASA) expert consensus on "A New Definition of Stroke for the 21st Century" (the "Updated Consensus") defined central nervous system (CNS) infarction as "ischemic cell death in the brain, spinal cord, or retina based on pathology, imaging, other objective evidence, and/or clinical evidence of ischemia" (12). Retinal ischemia secondary to central retinal artery occlusion therefore meets the definition of acute ischemic stroke. We therefore aimed to perform a meta-analysis of the adverse event outcomes of stroke and retinopathy in the large-scale GLP-1RA CVOT clinical trials (RCTs).

Inclusion criteria and exclusion criteria

Inclusion criteria: (1) randomized, double-blind, parallel-group, multicenter clinical trials; (2) patients with type 2 diabetes mellitus at high cardiovascular risk (including but not limited to obesity, metabolic syndrome, insulin resistance, hypertension, dyslipidemia, etc.) as the primary study subjects; (3) at least 1,000 people in the test and control groups, and at least 3,000 people in total; (4) intervention with a GLP-1 receptor agonist and control with placebo; (5) data on ischemic stroke, hemorrhagic stroke, and retinal arteriopathy available as adverse events in all trials; (6) a reasonably complete table of baseline patient characteristics available; (7) English-language literature published up to May 1, 2022. Exclusion criteria: (1) reviews, reports, and conference proceedings on GLP-1RA and cardiac arrhythmias; (2) studies with inaccessible full text or incomplete data; (3) repeatedly published or repeatedly included studies, or studies with similar information; (4) clinical trials that included type 1 diabetic patients.

Primary outcome

Total stroke events and major stroke types (including ischemic stroke and hemorrhagic stroke).

Secondary outcomes

Retinopathy (retinopathy means disease of the retina; there are several types of retinopathy, but we include only hemorrhage- and ischemia-related retinopathy).

Literature screening, data extraction and quality evaluation

Randomized controlled trials comparing GLP-1RA with placebo in T2DM at high cardiovascular risk were included. Outcomes of interest included stroke events and retinopathy. First, titles and abstracts were screened to assess potential eligibility for inclusion, and then full-text checks were applied to determine final eligibility. The following information was collected using a predefined data extraction form: study information (trial name, sample size, drug name), patient characteristics (age, gender, baseline status), therapy information (regimen, dose), and outcome data (number of events per outcome). All outcomes of interest were dichotomous; data were preferentially extracted from ClinicalTrials.gov and, secondarily, from the original trial publication or a secondary analysis of the same trial. The Cochrane Risk of Bias Tool was used to assess the quality of the included studies (13).
Bias was assessed across seven domains: selection bias (random sequence generation and allocation concealment), implementation bias (whether subjects and trial personnel were blinded), measurement bias (whether outcome assessors were blinded), follow-up bias (whether outcome data were complete), reporting bias (whether study outcomes were selectively reported), and other bias (whether there were other sources of bias). Each item was classified as high, low, or unclear risk. If any item was judged high, the overall risk of bias was judged high; if all items were judged low, the overall risk of bias was judged low; otherwise it was unclear.

Statistical methods

Data were analyzed with RevMan 5.4.1, and effect estimates were expressed as RR with 95% CI, with p < 0.05 considered a statistically significant difference. Heterogeneity among studies was assessed using the χ² test, with the results presented as I². Fixed-effects models were used if the studies were homogeneous (p > 0.05 or I² ≤ 50%), and random-effects models were used if there was heterogeneity among studies (p ≤ 0.05 and I² > 50%). When large heterogeneity exists, its sources can be sought through sensitivity analysis and subgroup analysis. Because fewer than 10 trials were included, publication bias was not evaluated.

Procedure and outcomes of included literature

An initial keyword search of the databases identified 2,008 records; 176 remained after deduplication and exclusion of reviews, irrelevant literature, incomplete data, unreported cardiovascular events, unspecified results, and incorrect study types. After the screening process was repeated by two investigators, 8 clinical trials with a total of 60,081 patients were finally included (Figure 1). The included trials, in chronological order, were ELIXA (14), LEADER (15), SUSTAIN-6 (16), EXSCEL (17), Harmony Outcomes (18), REWIND (11), PIONEER 6 (10), and AMPLITUDE-O (9). Key trial and patient characteristics at the baseline examination are shown in Table 1 (data are mean ± SD, unless otherwise noted). All trials were of considerable size (>3,000 patients). Of the 8 trials, ELIXA recruited patients with a recent acute coronary syndrome, while the inclusion criteria of the other 7 trials indicate that they primarily included patients with stable cardiovascular disease or cardiovascular risk factors (19). In all eight trials, local investigators were encouraged to manage participants according to local guidelines.

Analysis of stroke types

Total stroke events

All 8 included studies could be used to analyze the effect of GLP-1RA on total stroke. There was no significant heterogeneity among the studies (I² = 0%, p = 0.71), so the fixed-effects model was used to combine the effect sizes. The pooled analysis reported a statistically significant effect of GLP-1RA treatment versus placebo on total stroke outcomes [RR = 0.83, 95%CI (0.73, 0.95), p = 0.005], showing that the risk of total stroke was about 17% lower in the GLP-1RA group than in the placebo group.

Hemorrhagic stroke events

The eight studies collected for the meta-analysis of the effect of GLP-1RA on hemorrhagic stroke are listed in Figure 2.
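To make the pooling step concrete, the sketch below implements inverse-variance fixed-effect pooling of log risk ratios together with Cochran's Q and I². This is a close analogue, not a reproduction, of the RevMan computation described above (RevMan's default fixed-effect method for dichotomous data is Mantel-Haenszel), and the event counts are hypothetical rather than the trial data.

```python
import numpy as np

def pooled_rr(events_t, n_t, events_c, n_c):
    """Inverse-variance fixed-effect pooling of log risk ratios, with I^2."""
    a, n1 = np.asarray(events_t, float), np.asarray(n_t, float)
    c, n2 = np.asarray(events_c, float), np.asarray(n_c, float)
    log_rr = np.log((a / n1) / (c / n2))
    var = 1/a - 1/n1 + 1/c - 1/n2           # variance of each log RR
    w = 1.0 / var                           # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    q = np.sum(w * (log_rr - pooled) ** 2)  # Cochran's Q
    i2 = 100.0 * max(0.0, (q - (len(a) - 1)) / q) if q > 0 else 0.0
    return np.exp(pooled), ci, i2

# Hypothetical stroke counts for three trials (treatment vs placebo arms):
rr, ci, i2 = pooled_rr([60, 45, 30], [4000, 3500, 2500],
                       [75, 52, 38], [4000, 3500, 2500])
print(f"RR = {rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```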
Discussion

The results of the eight RCTs on CVOT outcomes of GLP-1RA in type 2 diabetic patients that we included showed that GLP-1RA treatment significantly reduced the risk of total stroke (~17%) and ischemic stroke (~17%) in type 2 diabetic patients, suggesting that GLP-1RA may have some protective effect in patients with cerebrovascular stenosis. Similar studies have been done in the past. A 2022 retrospective cohort study reported that longer use and higher doses of GLP-1RAs were associated with a decreased risk of hospitalization for ischemic stroke among Asian patients with T2DM who did not have established atherosclerotic cardiovascular disease but who did have dyslipidemia or hypertension (20). A 2021 network meta-analysis showed that GLP-1RAs versus placebo reduce the risk of stroke (OR 0.87, 95% CI 0.77 to 0.98; high-certainty evidence) (7). Another 2021 study comparing the efficacy and safety of SGLT2i, GLP-1RA, and DPP4i found that only GLP-1RA was associated with a lower risk of stroke compared with placebo (RR 0.85, 95% CI 0.76, 0.94) (21). Some trials showed a significant (p = 0.012) 9% risk reduction in non-fatal stroke associated with the use of newer glucose-lowering drugs, largely driven by the 16% reduction associated with the use of GLP-1RA, with no significant heterogeneity (I² = 21.3%, p = 0.206) and no evidence of publication bias (Egger test, p = 0.233) (22). Besides, a clear significant benefit on non-fatal stroke was shown in SUSTAIN-6 and REWIND for GLP-1RA and in SCORED for SGLT-2i (22). This suggests that GLP-1RAs with longer half-lives may be beneficial for preventing MACE generally and possibly for reducing the risk of stroke (23).

Overall, the novelty of our study versus previous studies includes the following. First, previous studies have focused more on MACE outcomes in patients with T2DM treated with GLP-1RA, capturing the risk of non-fatal stroke as a whole (a meta-analysis of the eight trials has already been reported (24)), while our study focuses on the risk of stroke subtypes after GLP-1RA treatment. An interesting and important finding is that administration of GLP-1RAs did not lower the risk of hemorrhagic stroke with statistical significance, although the RR was equivalent to that for ischemic stroke; this could simply be due to the number of events and the statistical power. Although this meta-analysis showed that the RR for hemorrhagic stroke did not reach statistical significance, the absolute number of events (50 events in 29,069 patients treated with a GLP-1RA) should still be of concern, given the high disability and lethality rates of hemorrhagic stroke. Retinopathy has been reported as a serious adverse event in the published reports and/or the supplemental materials of the eight randomized clinical trials of GLP-1RAs. We collected these data from the public disclosures one by one and, to maintain homogeneity, included only hemorrhage- and ischemia-related retinopathy, excluding retinopathy due to other causes (e.g., glaucoma, cataract). However, to our knowledge, retinopathy events have seldom been reported: only 17 events in 29,069 patients treated with a GLP-1RA were clearly reported in the trials eligible for this systematic review, and underreporting cannot be ruled out.

The main mechanisms of cardiovascular protection by GLP-1RA currently being explored are as follows: anti-inflammatory and anti-atherosclerotic effects, reduction of reactive oxygen species production and anti-oxidative stress, reduction of thrombotic events, etc.
Ex-4 (exendin-4) inhibits oxidized LDL-induced macrophage foam cell formation by downregulating several inflammatory and adhesion molecules in monocytes and macrophages, suppressing their accumulation in the arterial wall (25). Evidence from preclinical studies shows that liraglutide inhibits TNF-α-induced oxidative stress and inflammation in endothelial cells through calcium- and AMPK-dependent mechanisms (26,27), and reduces the occurrence and progression of atherosclerotic plaque formation at an early stage while enhancing plaque stability (28). Furthermore, GLP-1RA also reduces the inflammatory cytokines TNF-α, IL-1β, and IL-6 (29). GLP-1RAs are also proposed to have antioxidant and neuroprotective effects by upregulating vascular endothelial growth factor production (30) and reducing proinflammatory cytokine production (31). Oeseburg et al. showed that GLP-1RA can prevent ROS accumulation through induced expression of antioxidant genes downstream of protein kinase A (PKA) and the cAMP response element-binding (CREB) protein (32). Shi et al. suggested that liraglutide could protect cells from glucotoxic damage by inhibiting the ERK1/2 and PI3K/Akt signaling pathways through the GLP-1 receptor (33). A single injection of Ex-4 inhibited thrombus growth in normoglycemic and hyperglycemic mice in an in vivo laser-injury model of arterial thrombosis (34). In a real-world study, liraglutide was found to significantly reduce cIMT (a surrogate marker of subclinical atherosclerosis) in subjects with metabolic syndrome (MetS) during 18 months of follow-up, with a statistically significant reduction after only 6 months of treatment (35). More directly, intracerebroventricular administration of liraglutide reduced the cerebral infarct volume in rats with ischaemia-reperfusion injury (23,30). These data suggest that GLP-1RAs have anti-atherosclerotic or vasculoprotective properties, which may be the main mechanism of their beneficial effect on stroke prevention (23). Evidence from animal studies suggests that GLP-1-based therapies may serve as cardio- or neuroprotectants, and that GLP-1RAs and dipeptidyl peptidase-4 inhibitors (DPP-4is) may provide neuroprotection (36). However, a review suggests that DPP4i do not reduce the risk of any efficacy outcome, while moderate-certainty evidence likely supports the use of GLP-1RA to reduce fatal and non-fatal stroke (7). Moreover, research shows that GLP-1RA is the only drug class among the various new hypoglycemic drugs that reduces the risk of stroke (21).

Figure: Retinal arteriopathy risk forest map.

In terms of molecular mechanisms, GLP-1 peptides have shown further interesting actions and may cause GLUT4 upregulation in SHRs through GLP-1 action (37). Glut4 mRNA expression and sarcolemmal translocation were also increased after GLP-1 stimulation in cardiomyocytes incubated with high fatty acid levels; the PI3K/Akt and AMPKα pathways were involved in this response (38). Low-level expression of Glut4 has been observed in the motor nuclei of the spinal cord, the nuclei of the medulla oblongata, the cerebellar nuclei and Purkinje cell layer, the basal ganglia, neocortex, olfactory bulb, hypothalamus, and hippocampus in rodents (39-42). GLUT4/Glut4 in the brain is thought to be involved in the provision of metabolic energy for firing neurons (43), as well as in the hypothalamic regulation of food intake, energy expenditure, and whole-body glucose homeostasis (44). However, data from clinical trials only report therapeutic efficacy for GLP-1RAs (36).
Thus, GLP-1RA administration is the most promising treatment to pursue for patients at risk of stroke or immediately after stroke (36).

Our study also has limitations. First, these trials were not specifically designed to evaluate the risk of stroke associated with GLP-1RA treatment; further validation could be achieved in the future by designing large clinical studies with stroke as an endpoint. Second, we were unable to use patient-level data to evaluate outcomes, which limited our ability to explore subgroups of interest further; the findings of the subgroup analysis therefore have to be interpreted cautiously, and further investigations are needed to assess whether GLP-1RA may reduce the incidence of stroke in T2DM with multiple comorbidities. Third, the small number of events in subgroups may have left the subgroup analyses underpowered. Fourth, we included only RCTs with a total of at least 3,000 participants to limit bias due to small sample sizes, which may have excluded some positive or negative data. Fifth, no further animal or cellular experiments were done in this study for verification. These limitations may weaken the persuasiveness of our study, but the eligible studies, selected by two different investigators according to strict inclusion criteria after a comprehensive literature search of four databases, were mostly of high quality. We therefore believe the conclusions drawn from this meta-analysis are reasonable.

Conclusion

Overall, our study showed that GLP-1RA significantly reduced the risk of total stroke (about 17%) and ischemic stroke (about 17%) in T2DM, with statistically significant differences, and showed a neutral effect of GLP-1RA on the risk of hemorrhagic stroke and retinopathy.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

Author contributions

JW designed the research; JW and BY contributed to the literature database search, data collection, data extraction, data analysis, and writing of the manuscript. JW, BY, RW, HY, and YW participated in the discussion. JW, BY, RW, XZ, and LW reviewed and revised this article. All authors contributed to the article and approved the submitted version.
2022-12-06T14:10:04.212Z
2022-12-05T00:00:00.000
{ "year": 2022, "sha1": "1bd5f508dee455c08b24ed10dcbc1f72143b952e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "1bd5f508dee455c08b24ed10dcbc1f72143b952e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
250192608
pes2o/s2orc
v3-fos-license
Assessment of the knowledge on insulin therapy among adult diabetic patients in Jabir Abuleiz center, Khartoum, Sudan

ABSTRACT

Objectives: The objective of the study is to assess the knowledge and practice concerning insulin therapy in adult diabetic Sudanese patients and to relate them to their control of diabetes and selected demographic variables. Methods: Personal interviews, using a specific pretested questionnaire, were used to collect data from 200 adult diabetic patients in the Jabir Abuleiz center in Khartoum state. Results: Only 15% of the respondents had adequate knowledge about insulin use. Good knowledge was associated with a higher level of education and good glycemic control (P < 0.001). Conclusion: Knowledge about insulin therapy has an important role in the control of diabetes mellitus. Those who are knowledgeable about insulin therapy are more likely to have good control of HbA1c.

Introduction

Diabetes mellitus (DM) is a group of metabolic diseases characterized by high blood glucose levels caused by a relative or absolute insulin deficiency. The World Health Organization (WHO) has estimated that about 366 million individuals worldwide will be affected by DM by the year 2030. [1] More than 3 million patients die yearly with underlying DM. [2,3] DM is considered a leading cause of death in most developing nations. [4,5] This might be attributed to poorly controlled hyperglycemia, which is correlated with several life-threatening complications such as renal failure and cardiovascular diseases. [6] Optimal glycemic control is essential to decrease the morbidity and mortality of DM via the prevention and/or delay of complications. [7] The best glycemic control can only be accomplished when patients adhere to self-management behaviors such as a healthy diet, physical activity, monitoring of blood glucose, taking medications appropriately, the ability to resolve diabetes problems, and healthy coping. [7-11] Primary care physicians or family physicians are the frontline care providers in the management of diabetes and its complications. Routine management of diabetes is increasingly delivered in primary care, where patients can receive care closer to home, but this cannot be achieved without adequate patient knowledge of insulin therapy. Despite the abundance of guidelines, high quality of care is not always achieved; risk factor control continues to be suboptimal, with international variation in the achievement of clinical targets. Interventions to improve diabetes management are not always successful, with limited impact on clinical outcomes. [12] Insulin therapy presents many challenges because of the difficulties associated with its complicated use. Adequate knowledge of its use can help to prevent complications, adverse patient outcomes, poor adherence to therapy, and poor glycemic control. [13] Nevertheless, the knowledge scores of patients with DM were not satisfactory. [14] Educating patients on insulin helps to improve their self-confidence and sense of participation in their own management. [15] Furthermore, an appropriate injection technique is important for accurate delivery to the subcutaneous tissues and to avoid intramuscular injuries and lipohypertrophy. [16] The American Diabetes Association created a set of guidelines for insulin storage, mixing of insulin, proper use of insulin syringes, and other considerations. [17]
However, patients, particularly in developing countries, may not follow the guidelines due to socioeconomic problems. Although insulin is recognized as the ideal treatment for DM, a lack of knowledge and coordination among physicians and patients regarding appropriate insulin use has been reported. [18,19] Few studies in Sudan have focused on the knowledge of insulin administration among patients with DM. Therefore, this study aimed to assess the knowledge of diabetic patients regarding insulin at the Jabir Abuleiz Center, which is the biggest specialized diabetes center in Khartoum, Sudan.

Material and Methods

This is a nonexperimental, cross-sectional study. A total of 200 participants were recruited from the outpatients and inpatients attending the Department of General Medicine and General Surgery. The study was approved by the institutional ethics committee, and informed written consent to participate was obtained from the study participants. Both male and female adult patients with DM who had been receiving insulin injections for more than 6 months and were willing to participate were included in the study. Participants who were not physically or mentally able to respond to the interview were excluded. The sample size of the study was 200, estimated with 5% precision, a 95% confidence level, and a 50% anticipated proportion.

Knowledge

Knowledge refers to the correct responses given by the patients with DM on insulin therapy, such as the types and sites of insulin injection, storage of insulin, rotation of the injection site, dosing and complications of insulin, transportation of insulin, disposal of insulin syringes/pen needles, and symptoms of hypoglycemia, assessed using a structured interview guide. The data were collected using a structured questionnaire that assessed the level of knowledge through 30 statements on a modified 3-point Likert scale. The knowledge of the respondents was categorized as follows: (a) Poor: 0-10, (b) Average: 11-20, and (c) Good: 21-30.

Data analysis

The collected data were coded, entered, and analyzed using Statistical Package for Social Sciences version 20 software. Descriptive statistics such as frequency distributions and percentages were used to summarize the results. The Chi-square test was used in the bivariate analysis, and a P value < 0.05 was considered significant.

Results

In total, this study enrolled 200 diabetic patients, of whom 130 (65%) were males and 70 (35%) were females; their mean age was 55.8 ± 8.2 years. The mean duration of DM was 15 ± 8.2 years, and the mean duration of insulin therapy was 7 ± 6.1 years. For glycemic control, the mean HbA1c was 8.6 ± 2.2%, and 127 (63.5%) of the cases were uncontrolled. More details of the patients are presented in Table 1. As illustrated in Figure 1, the level of knowledge was average in 130 (65%) patients, poor in 42 (21%), and good in 30 (15%) patients. Table 2 reveals that the level of knowledge about insulin therapy was associated with the level of education: the higher the patients' level of education, the more likely they were to be knowledgeable about insulin therapy (P value < 0.001). Table 3 shows that the monthly income of the patients was not significantly associated with the level of knowledge about insulin therapy (P value = 0.193). As shown in Table 4, glycemic control was associated with knowledge about insulin therapy: glycemically controlled patients were more knowledgeable than uncontrolled patients (P < 0.001).
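To illustrate the bivariate analysis described in the Methods, the sketch below runs a chi-square test of independence on a hypothetical education-by-knowledge contingency table; the counts are invented for illustration and are not those of the study's Table 2.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table (education level x knowledge category);
# the counts are illustrative, not those of the study's Table 2.
#                      poor  average  good
table = np.array([[30,   70,    5],   # primary school or below
                  [12,   60,   25]])  # secondary school or above

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would support the reported association between education
# level and insulin-therapy knowledge.
```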
Discussion

In this study, the percentage of good knowledge of insulin self-administration was found to be 15%. This is higher than the 4% reported in Southern India, [19] and lower than 33.3% in Egypt, [20] 52.5% in India, [21] 46% in Nepal, [22] 50.3% in Turkey, [23] and 70.4% in Tigray, Ethiopia. [24] The variation observed compared with other studies could be due to differences in sample size, the operational definitions used, and the methodology in general. Besides, the socioeconomic, cultural, and educational profiles of the study populations may create significant variation between studies. Also, access to optimal education and demonstration of insulin self-administration by health care providers could be one of the factors behind this discrepancy. In this study, respondents who had achieved secondary school or above were found to have greater knowledge of insulin self-administration than those with primary school education or below (P value < 0.001). This finding is consistent with studies conducted in different countries. [21,25,26] This may be because good educational status fosters good knowledge of diseases and their treatment, its importance, and practice of and adherence to treatments. Remarkably, the present study showed that insufficient knowledge of insulin use was associated with poor glycemic control in our series (P value < 0.001). These findings agree with several previous studies, such as Bukhsh et al. in Pakistan, [27] Solanki et al. in India, [28] and Tahiya et al. in Saudi Arabia. [29] Primary care physicians (PCPs) treat the vast majority of DM patients worldwide. [30] Thus, this work could provide PCPs and other health care professionals who treat patients with DM with up-to-date information and figures on the current awareness of DM patients regarding the management of type 2 DM. This study is not without limitations, as its cross-sectional design may not allow generalization of the findings to the entire population of Sudan, and further research may be needed in Sudan to assess knowledge about insulin therapy. Despite these limitations, our study is novel and provides first-hand information for evaluating knowledge about insulin therapy among individuals with diabetes in Sudan. Diabetologists, PCPs, and pharmacists need to work together to increase the level of knowledge and self-care regarding diabetes.

Conclusion

DM imposes a lifelong threat on individuals and their families. This study showed that the level of knowledge on insulin therapy was inadequate and was associated with low educational levels and poor glycemic control. In addition, the study findings revealed an immense need for education on diabetes and insulin therapy in PHCs.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. Informed consent was obtained for participation.

Acknowledgements

I wish to acknowledge Dr. Alyaa Almahdy and Mr. Eisa Ahmed for their support.

Summary of the key points

• The level of knowledge on insulin therapy was suboptimal.
• The level of knowledge on insulin therapy was associated with low educational levels.
• The level of knowledge on insulin therapy was correlated with poor glycemic control.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
2022-07-02T15:08:15.252Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "5d6415aa63af3797979e2108848fad0829d4d368", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_2064_21", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fee7bf2e2a439af519dd48aedf2d788439124a10", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53570803
pes2o/s2orc
v3-fos-license
Bovine respiratory syncytial virus in-situ hybridization from sheep lungs at different times postinfection

Summary

We studied the distribution of bovine respiratory syncytial virus (BRSV) RNA in the lungs of experimentally infected sheep by in-situ hybridization (ISH) at different times postinfection. The probe used for in-situ hybridization was prepared by reverse transcription of BRSV RNA, followed by PCR amplification of the cDNA. Twenty-five Merino sheep of both sexes, with a live weight of 55 ± 10 kg, received an intratracheal inoculation of 40 ml of saline solution containing 1.26 × 10⁶ TCID₅₀ of BRSV (strain NMK7) per ml. Sheep were slaughtered at 1, 3, 7, 11, and 15 postinoculation days (PID). Bronchial and bronchiolar epithelial cells were positive for BRSV nucleic acid by ISH at 1, 3, 7, and 11 PID, whereas alveolar epithelial cells contained positive hybridization signals at 1, 3, and 7 PID. Cells containing viral RNA were detected in the exudate within bronchial and bronchiolar lumina from 1 to 11 PID, and in alveolar exudates from 3 to 7 PID. Positive hybridization signals were identified in interstitial mononuclear cells and in bronchus-associated lymphoid tissue from 3 to 11 PID. The highest signal intensities in positive cells were observed at 3 and 7 PID, coinciding with high virus antibody titres and with the most important histopathological findings. A digoxigenin-labeled DNA probe corresponding to the part of the genome coding for bp 736-1522 of the BRSV protein gene was developed for localization of the virus in lung sections. The aim of the present study is to analyze the pathology of sheep experimentally infected with BRSV and to visualize specific hybridization signals for this virus in lung samples.

Animals and BRSV inoculum. Twenty-five gnotobiotic Merino sheep of both sexes, with a live weight of 55 ± 5 kg, each received an intratracheal inoculation of 40 ml of the NMK7 strain of BRSV (supplied by Dr. Gómez-Tejedor, Instituto Nacional de Investigaciones Agrarias, Madrid, Spain) diluted to a concentration of 1.26 × 10⁶ TCID₅₀/ml.
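As a quick check of the inoculum arithmetic, the total infectious dose per animal follows directly from the inoculation volume and the titre quoted above; the short sketch below simply multiplies the two.

```python
import math

# Dose per sheep: 40 ml of inoculum at 1.26e6 TCID50/ml (from the Methods).
volume_ml = 40.0
titre_per_ml = 1.26e6  # TCID50/ml

total_dose = volume_ml * titre_per_ml
print(f"Total dose = {total_dose:.2e} TCID50 "
      f"(~10^{math.log10(total_dose):.1f} TCID50 per animal)")
```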
The origin and characterization of the NMK7 isolate of BRSV have been reported previously by Belknap et al. (1995). The sheep were housed in an isolation barn and were euthanized at 1, 3, 7, 11, and 15 postinoculation days (PID). The virus inoculum was prepared using primary bovine fetal kidney (BFK) cell cultures infected with the BRSV stock strain NMK7, to which minimum essential medium (Sigma, Madrid, Spain) with 15% bovine fetal serum (Sigma, Madrid, Spain) was added; the final cultures were incubated at 37ºC in a CO₂ atmosphere. Ten control animals were inoculated with an identical volume of uninfected BFK cell culture; they were kept in a separate isolation box and were slaughtered at 1, 3, 7, 11, and 15 PID. Aliquots of virus were cultured and determined to be free of aerobic bacteria and mycoplasma by standard techniques. Samples of lung were collected for microbiological studies and virus isolation. Formalin-fixed sections were used for the histopathological techniques and the in-situ hybridization assay.

Histopathology: Sections, 3 µm thick, were cut and stained with haematoxylin and eosin for routine morphological studies.

Immunoperoxidase staining procedure for detection of BRSV antigens. An avidin-biotin-peroxidase complex (ABPC) method was carried out on deparaffinized and trypsinized lung samples. Sections were blocked in diluted (1:50) normal swine serum (Dakopatt, Spain) for 15 minutes to reduce background and were incubated in diluted (1:1000) rabbit anti-RSV (Dakopatt, Spain) for 3 hours at 20ºC. Diluted (1:500) biotinylated swine anti-rabbit IgG (Dakopatt, Spain) was placed on the sections for 30 minutes, followed by a 1-hour incubation in diluted (1:50) ABPC reagent (Dakopatt, Spain). Sections were incubated in diaminobenzidine solution for 5 minutes and were counterstained lightly with Mayer's haematoxylin. The controls included BRSV-infected BFK cell cultures (in which the cytopathic effect was evident in the formation of numerous syncytia) and noninfected BFK cell cultures. Test control sections were also stained using nonimmune rabbit serum as the first layer.

Microbiological investigations. Lung tissue samples were homogenized in phosphate-buffered saline, serial 10-fold dilutions were made up to 10⁴, and 0.1 ml of each dilution was spread onto half a sheep's blood agar plate. Duplicate plates were used, one incubated aerobically and the other anaerobically under H₂ containing 10% CO₂. Colonies were identified using the methods described by Cowan (1974). Bronchial swabs and lung samples were also cultured for mycoplasma in modified Hayflick's broth, and colonies were identified on agar plates. Nasal swab specimens and lung samples were used to detect BRSV by virus isolation in BFK cells, following the technique described previously by Castleman et al. (1985). Isolates were confirmed as BRSV by indirect immunofluorescence (Masot et al., 1993a).

RNA extraction. BT cells (75 cm² flasks) were infected with BRSV strain NMK7 (Lerch et al., 1991; Oberst et al., 1993), provided by Dr. Kelling, Department of Veterinary and Biomedical Sciences, University of Nebraska, Lincoln, USA. This BRSV strain, plaque-purified more than three times, was originally obtained from cattle and is free of bovine viral diarrhea (BVD) virus (Lehmkuhl et al., 1979). Cell cultures were incubated at 33ºC in high-glucose Dulbecco modified Eagle medium (DMEM) supplemented with 2% horse serum. Viruses were allowed to replicate and form syncytia (cytopathic effect), at which point the RNA was extracted from the infected cells.
RNA was isolated with the Trizol LS Reagent (Gibco, NY, USA) following the manufacturer's instructions. The RNA concentration of the extracts was estimated by measuring the A₂₆₀ with a Beckman DU-64 spectrophotometer. The resulting cDNA was cloned (Invitrogen, CA, USA), and the correct sequence was confirmed by sequencing. The probe was prepared by removing the DNA insert by digestion with the restriction enzyme EcoRI. The DNA was submitted to gel electrophoresis and the fragment extracted using QIAEX II (Qiagen, Hilden, Germany). The DNA fragment was digoxigenin-labeled by random priming (Boehringer Mannheim Biochemicals, Mannheim, Germany). The specificity of the probe was checked by dot blot and Southern blot onto nylon membranes.

Preparation of samples for ISH. BRSV-infected and uninfected cell cultures were used for optimization of the ISH. BT cells were infected and grown as described above. Cultures showing a cytopathic effect were trypsinized and washed twice with DMEM, the first time with DMEM containing 5% equine serum and the second time with serum-free DMEM. After counting, the harvested cells were applied to ProbeOn Plus microscope slides (Fisher Scientific, Pittsburgh, PA, USA) by centrifugation at 1500 rpm for 3 min (Shandon Cytospin 2 cytocentrifuge) at a density of 2 × 10⁵ cells per spot. Slides were fixed in 4% paraformaldehyde and dehydrated through an ethanol series. Lung sections, 3 µm thick, were deparaffinized with xylene and rehydrated through decreasing concentrations of ethanol. All samples (slides containing cells or sections) were equilibrated in phosphate-buffered saline (PBS). Permeabilization to allow penetration of the probe was carried out by treatment with 0.2 N HCl for 20 min at room temperature and digestion with proteinase K (Promega, Madison, WI, USA) in PBS for 20 min at 37ºC, at 5 µg/ml for cells and 30 µg/ml for tissues. Some slides were treated with 50 µg/ml of RNase (Boehringer Mannheim Biochemicals) in PBS for 60 min at 37ºC in a wet chamber, to be used as a specificity control. Next, slides were refixed in 5% paraformaldehyde, followed by two rinses in PBS. Slides were acetylated in 100 mM triethanolamine with 0.25% acetic anhydride (to neutralize positive charges and thus reduce nonspecific electrostatic binding of the probe) for 10 min. Slides were then washed twice in 2x SSC (1x SSC is 15 mM sodium citrate plus 0.15 M NaCl, pH 7.0).

Hybridization. The ISH conditions were as described elsewhere (Sur et al., 1996).

Enzyme-linked immunosorbent assay (ELISA). Positive control antigen was prepared from bovine foetal kidney cell cultures infected with BRSV strain NMK7, and negative control antigen from uninfected cultures. Working dilutions were determined by block titration; those which yielded maximum optical density (OD) values for positive sera without causing an increased OD in negative sera were considered satisfactory. Polystyrene plates were coated with 100 µl of the antigen, diluted to a concentration of 5 µg/ml in sodium carbonate buffer (0.1 M, pH 9.6) with 0.02% NaN₃, for 3 h at 37ºC. After coating and between all subsequent steps, the plates were washed 4 times in phosphate-buffered saline containing 0.05% Tween 20 (PBSTw). As a blocking agent, 200 µl of 5% bovine serum albumin (BSA) in washing buffer was added and incubated for 30 min at 37ºC. After washing, 100 µl of serum samples diluted to 1/1000 in PBSTw was added and incubated for 30 min at 37ºC. After subsequent washing, 100 µl of horseradish peroxidase-conjugated mouse anti-sheep IgG (diluted 1/2000 in washing buffer) was added.
The reaction was developed at room temperature by the addition of 100 µl of 0.04% o-phenylenediamine and 0.004% H₂O₂ in citrate-phosphate buffer (pH 5), stopped after 1 h by the addition of 50 µl of 3 N H₂SO₄, and read on a spectrophotometer at 490 nm.

BRSV inoculum. The viral inoculum was free of aerobic bacteria, mycoplasma, and BVD virus.

Histopathology. The histopathological study of the animals reported here has been published previously by us (Masot et al., 1995, 1996). Catarrhal bronchiolitis observed at 1 PID was associated with granulocyte infiltration of the bronchiolar lumen. The interalveolar septa were thickened, with pronounced interstitial edema and a moderate cellular reaction. The alveolar exudate in animals slaughtered at 3 and 7 PID consisted of neutrophils, lymphocytes, and the multinucleate giant cells that formed the syncytia. There was considerable thickening of the interalveolar septa, due to the presence of edema and granulocyte and monocyte infiltration. Bronchiolitis was accompanied by epithelial cell necrosis. Hyperplasia of bronchiolar epithelial cells was conspicuous, and early stages of reepithelization were apparent. Exudate in the bronchiolar lumina consisted of desquamated cells, necrotic debris, and syncytial cells. Severe bronchial, bronchiolar, and alveolar damage was visible by 11 PID. The lung parenchyma had clear focal areas of consolidation due to bronchiolitis obliterans and alveolar collapse caused by infiltration of macrophages, lymphocytes, and syncytial cells into the lumina. Animals slaughtered at 15 PID presented a marked interstitial inflammatory reaction, with considerable septal thickening. Intense bronchiolar and/or alveolar hyperplasia was also observed. Lung consolidation was less marked than in animals slaughtered earlier.

IHC signals. The IHC study of the animals reported here has been published previously by us (Masot et al., 1993b, 1996). BRSV antigen was detected in bronchial and bronchiolar epithelial cells, in bronchial mucous cells, and in alveolar epithelial cells at 3 and 7 PID. Intense staining was also observed in alveolar macrophages, interstitial mononuclear cells, and syncytia from 3 to 11 PID. Antigen was commonly detected in exudate within the bronchiolar, bronchial, and alveolar lumina. Specific staining was absent in the negative control (Table 1).

Microbiological findings. Twenty-three of the sheep inoculated with BRSV and nine of the control animals were bacteriologically negative at the conclusion of the experiment. Contaminant bacterial species were E. coli and Bacillus spp. Mycoplasma was not isolated from the upper or lower respiratory tract of any sheep.

ISH signals. At all PIDs, the specificity of the signal reaction was confirmed by the rigorous observance of specific controls (Figures 1, 2). The BRSV nucleic acid signal was specific, since it was completely absent from noninfected cells (Figures 1, 2). There was no detectable ISH signal for BRSV RNA in the lungs of the control sheep (Table 1). Hybridization signals specific for BRSV RNA were detected in bronchial and bronchiolar epithelial cells at 1, 3, 7, and 11 PID (Figures 3 and 4) but were not observed at 15 PID. BRSV-positive cells were commonly detected in exudate within the bronchial and bronchiolar lumina from 1 to 11 PID (Figure 5). A positive ISH signal was observed in alveolar epithelial cells at 1, 3, and 7 PID (Figures 6, 7, 8).
In several cases the viral RNA signal also coincided with a cell morphology and location suggesting that the cell could be classified as a type II pneumocyte. The RNA genome was visible in free cells in the alveolar space (Figure 9) and in syncytial alveolar cells (Figure 10) at 3 and 7 PID. Hybridization signals specific for BRSV RNA were observed in the peribronchiolar area (Figures 11, 12), in bronchus-associated lymphoid tissue (Figure 12), and in mononuclear cells of the interalveolar septa from 3 to 11 PID.

ELISA. Serum IgG BRSV antibody titres are summarised in Figure 13. A progressive increase in titres was recorded until 7 PID, giving way to a decrease from 11 PID onwards. The highest antibody response was observed at PID 3 and 7. Titres in the control animals remained similar to those recorded for the animals before infection throughout the experiment.

Viral isolation. The results of BRSV isolation attempts from lung homogenates or nasal secretions are presented in Table 2.

DISCUSSION

The positive ISH signal in bronchi and bronchioles is indicative of the direct pathogenic action of BRSV at these levels (Viuff et al., 1996), inducing necrosis and desquamation of the bronchial and bronchiolar epithelia. Necrosis of the bronchial and/or bronchiolar epithelia has been described elsewhere as a limitation of pulmonary antibacterial defenses, with impaired mucociliary clearance, and as a predisposition to secondary bacterial infection (Pirie et al., 1981; Al-Darraji et al., 1982; Castleman et al., 1985; Bryson et al., 1988; Redondo et al., 1994; Masot et al., 1993a,b; Viuff et al., 1996). Cells obstructing the bronchial and bronchiolar lumina tend to form syncytia (Bryson et al., 1988, 1991a,b; Ciszewski et al., 1991; Masot et al., 1993a,b; Belknap et al., 1995), which are usually located in bronchioles and rarely observed in bronchi. This finding has previously been reported in studies using immunohistochemistry (Masot et al., 1993a,b) or in-situ hybridization (Viuff et al., 1996). The possible involvement of type I and II pneumocytes in BRSV replication in the sheep lung has been detected by IHC (Masot et al., 1993a,b, 1995, 1996). Extensive hypertrophy and hyperplasia of type II pneumocytes are also known to occur during acute BRSV infection of the lung (Masot et al., 1993a, 1995, 1996). In the present study, the ISH results bear out the role of type II pneumocytes in BRSV replication, since the lung sections showed cells exhibiting positive ISH signals that were morphologically and topographically consistent with type II pneumocytes. In this connection, Viuff et al. (1996), working with naturally infected calves, suggested that the replication of BRSV in alveolar type II cells may lead to a change in the amount and quality of surfactant. A deficiency of surfactant caused by BRSV replication in type II pneumocytes may be a determining factor in the collapse of alveoli; alveolar collapse was visible in the lungs of the experimentally infected sheep described in the present study. To confirm that type II pneumocytes represent a specific BRSV replication site, TEM should be performed on the same sections as those used for ISH. In this experiment, a positive ISH signal for BRSV in alveolar macrophages could not be established (Viuff et al., 1996).
These results agree with previous reports (Schrijver et al., 1994) suggesting that, in vitro, bovine alveolar macrophages (BAMs) exhibit a high intrinsic resistance to infection with BRSV and that BAMs do not appear to be important for the replication of BRSV. Nevertheless, large numbers of BAMs may harbour virus antigen, even for several days, which may influence the function of these cells (Schrijver et al., 1994). In contrast, replication of human respiratory syncytial virus in alveolar macrophages has been shown (Cirino et al., 1993). Interstitial cells expressing BRSV RNA may carry virus particles to other organs, as described elsewhere (Lerch et al., 1991). The results obtained showed that the most intense positive hybridization signals were detected at 3 and 7 PID; it was precisely on these days that the lesions were most intense (Masot et al., 1993a,b, 1995, 1996), coinciding with maximum levels of serum anti-BRSV IgG (Masot et al., 1993a). However, this relationship is not clearly defined, since previous reports (Ciszewski et al., 1991) state that levels of anti-BRSV serum antibodies remained relatively constant from 4 to 21 days after infection with BRSV. High IgG values (PID 11 and 15) were associated with a marked decrease in BRSV antibody titres on days 11 and 15 PI (Korbecki and Maksymowicz, 1977). Other authors report high neutralising antibody levels in calves from 5 PID onward, although maximal levels were detected at 5 weeks PI (Elazhary et al., 1981; Castleman et al., 1985). However, other studies have found low serum IgG values in experimentally infected calves, although this response was recorded only when the animals were exposed to the virus for a second time (Mohanty et al., 1975). A marked increase in serum IgA levels coincided with the decrease in virus antibody titres over the same period (Korbecki and Maksymowicz, 1977). Other studies, however, report an absence of IgA in the serum of calves experimentally infected with human and bovine RSV (Thomas et al., 1984). In our experience, serum IgM levels did not increase; this finding conflicts with the increase in serum IgM levels described in children with RSV infection (Bruhn and Yeager, 1977; Korbecki and Maksymowicz, 1977). Previous authors were able to show, through IHC, that sites containing viral antigen corresponded to virus replication sites. It is well known that immunodetection requires a large number of molecules in order to obtain positive reactions. At prolonged postinfection times, the number of positive cells detected by immunohistochemistry (IHC) decreased, and it was sometimes difficult to differentiate between positive and negative signals, especially with high levels of background noise. IHC therefore has more limitations than ISH: even when antigen production in infected cells is impaired, viral RNA persists in the cells (Schrijver et al., 1994).
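As an aside on the ELISA readout described in the Methods (OD read at 490 nm), raw absorbances are commonly turned into positive/negative calls by setting a cutoff from negative-control wells. The sketch below uses a mean-plus-3-SD rule; both this rule and the OD values are illustrative assumptions, since the paper does not state which cutoff convention it applied.

```python
import numpy as np

def elisa_positive(sample_od, negative_ods, k=3.0):
    """Classify OD490 readings against a negative-control-based cutoff.

    The mean-plus-k*SD cutoff is a common convention and an assumption
    here; the paper does not state which rule it used.
    """
    neg = np.asarray(negative_ods, float)
    cutoff = neg.mean() + k * neg.std(ddof=1)
    return np.asarray(sample_od, float) > cutoff, cutoff

# Illustrative OD490 values, not the study's measurements.
negatives = [0.08, 0.10, 0.09, 0.11]
samples = [0.09, 0.25, 0.60, 1.10]
calls, cutoff = elisa_positive(samples, negatives)
print(f"cutoff = {cutoff:.3f};", list(zip(samples, calls.tolist())))
```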
2018-11-03T11:21:28.451Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "41be7f3dce19235d24cf4dca3ed4803749bb87ce", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4067/s0301-732x2003000100004", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1b97f2770ab27f5537408179be01593a6035eabb", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
51950484
pes2o/s2orc
v3-fos-license
Consequences and Control Measures of Workplace Violence among Nurses

Background: Exposure to workplace violence can result in post-traumatic stress disorder symptoms, anger, anxiety, shame, guilt, and self-blame among nurses. Workplace violence is associated with nurse absenteeism, medical errors, decreased job satisfaction, and burnout. Aim: To examine workplace violence, its negative consequences, and the measures used to control it among nurses. Methods: A descriptive research design using a self-administered questionnaire was employed. A convenience sample of 107 nurses from three hospitals completed the study. To assess the negative consequences of workplace violence and the measures used to control it, an instrument was adapted for the purpose of this study. Results: About half of the participants had been subjected to workplace violence in the last 12 months. About 39% of the participants reported that no action was taken to investigate the causes of the violence, and the most frequently reported consequence to the attacker was a "verbal warning". Overall, most of the participants were not satisfied with the way in which the violence was handled. Only thirty percent of the participants who witnessed an incident of workplace violence reported it; the most common reason for not reporting violence was that reporting is "useless", cited by 26.2% of the participants. The most frequently reported measure taken to decrease violent incidents was increasing staff numbers. Conclusion: Policy makers should develop specific policies for reporting violent incidents. Using specific security measures to decrease violent incidents is also highly recommended.

Introduction

Workplace violence against nurses and health care professionals leads to serious negative consequences for nurses, patients, and the health care organization (International Labour Organization, International Council of Nurses, World Health Organization, and Public Services International) [1]. Workplace violence might include aggression, assault, abuse, or threatening of health care providers at work or in circumstances related to their work [2]. In recent years, workplace violence has gained special attention and is now a major concern in both developing and developed countries [3]. Workplace violence is very costly, with estimated costs of billions of dollars yearly [4], and results in hundreds of workplace homicides each year [5]. Workplace violence might also be associated with serious personal, emotional, physical, and professional consequences. Workplace violence against nurses and health care professionals might result in absenteeism from work or leaving nursing altogether [6]. Exposure to workplace violence can result in post-traumatic stress disorder (PTSD) symptoms, anger, anxiety or fear, shame, guilt, and self-blame among nurses [7,8]. In addition, it is associated with nurse absenteeism, medical errors, decreased job satisfaction, and burnout [9]. In some situations, workplace violence might result in severe physical health consequences, including injuries and disabilities [10]. The literature suggests that nurses are at a higher risk of experiencing violence in the workplace than other healthcare providers [7,8]. Some studies found that up to 80% of nurses have reported experiencing violence by patients [7,8,11]. In addition, many violent incidents are underreported.
To date, there are limited studies in the Middle East region regarding the negative consequences of workplace violence and the measures used to control it. To develop effective intervention programs to control workplace violence in the Middle East region, baseline data investigating the various factors related to workplace violence among nurses are needed. Accordingly, the purpose of this study was to examine workplace violence, its negative consequences, and the measures used to control it among Jordanian nurses.

Research design

A descriptive design employing a survey method was used to investigate workplace violence, its negative consequences, and the measures used to control it among Jordanian nurses. Data regarding sociodemographic variables and measures to control workplace violence were obtained from the participants.

Data collection

For the purpose of this study, the researchers collected data from three hospitals located in Amman, the capital city of Jordan. The researchers targeted three settings: psychiatric and mental health settings, emergency departments, and one elderly home in which care is provided by nurses. Nurses employed in these settings provide care for patients from all over the country.

Ethical considerations

Approval of the study protocol was obtained from the Institutional Review Board (IRB) committee of Zarqa University. The researchers also obtained IRB ethical approval from the three hospitals where the data were collected. Data collection started in June 2015 and was completed in January 2016. The inclusion criteria for the current study were: being a Jordanian nurse able to read and write in Arabic, having at least one year of experience, and currently working in a Jordanian hospital. These criteria were selected to ensure that the participants were able to complete the study questionnaires and could have experienced workplace violence. Nurses who met the inclusion criteria were invited to complete the study. Data were collected by the original researchers, who described the study protocol to all nurses who completed the study. The researchers explained the purpose of the study to all participants and assured them of the confidentiality of their data, informing them that the data would be used for research purposes only. Completing all the questionnaires took 15 minutes.

Participants

A convenience sample of 107 nurses completed the study, including 49 males (45.8%) and 58 females (54.2%). Most participants held a bachelor's degree in nursing (n=84, 78.5%). About half of the participants had less than 5 years of experience in nursing practice. Most participants were employed in emergency departments (n=73, 68.2%), followed by psychiatric and mental health care settings (n=26, 24.3%) and the elderly home (n=8, 7.5%). The sample characteristics are presented in Table 1.

Instruments

The current study used two instruments to investigate workplace violence, its negative consequences, and the measures used to control it among Jordanian nurses: the demographic questionnaire and the modified scale on the negative consequences of workplace violence and the measures used to control it.
The modified scale on the negative consequences of violence at the workplace and the measures used to control it To assess the negative consequences of violence at the workplace and the measures used to control it, an instrument was adapted for the purpose of this study. The instrument was originally developed by Public Services International (PSI) and the International Council of Nurses [12]. In addition, the instrument was finalized in collaboration with the World Health Organization (WHO) and the International Labour Office (ILO). This measure focuses on the various problems and complaints nurses experienced after an attack, the consequences of violence to the attacker, the reasons for not reporting violent incidents, policies on various aspects associated with the workplace, the measures used to deal with workplace violence, and changes that occurred in the workplace in the last two years. Data analysis Data were analyzed using the SPSS program (version 22). Descriptive statistics, including frequencies and percentages, were used to describe the sample characteristics (a minimal illustrative sketch of this tabulation is provided at the end of this article's text). Descriptive statistics were also used to describe the problems and complaints nurses experienced after violence, the consequences to the attacker, reasons for not reporting incidents of violence, and the policies and measures used to control workplace violence. Results The experience of violence, its types, and its perpetrators A total of 51 (47.7%) participants were attacked in the last 12 months. A total of 28 (26.2%) participants took time off from work after the attack. Thirty-nine percent of them reported that no action was taken to investigate the causes of the incident. About 38.3% of the investigations were conducted by the employer, while 9.3% were conducted by the police. The consequences of violence to the attacker The consequences to the attacker are reported in Figure 1. No action was taken for 32.7% of the violent incidents. The most frequently reported consequence of violence was "verbal warning issued", which was reported for 32.7% of the violent incidents. Only 10% of the participants were satisfied with the way in which the incident was handled. Figure 1 The consequences to the attacker. The reasons for not reporting violent incidents A total of 42 (39.3%) of the participants witnessed an incident of violence in the workplace in the last year. However, only 33 (30.8%) of the participants reported the incident of workplace violence that they witnessed or experienced in the last year. Table 3 presents the reasons for not reporting violent incidents as reported by the participants who did not report incidents of violence. The most common reasons for not reporting violence were that it is "not important" and "useless", reported by 28% and 26.2% of the participants, respectively. Policies on various aspects associated with the workplace The presence of specific policies on various aspects associated with workplace violence is presented in Table 4. These aspects include safety, physical workplace violence, bullying/mobbing, and threat. As shown in Table 4, the percentage of participants who reported the presence of specific policies for each of these aspects was less than 50%. The measures to deal with workplace violence The measures to deal with violence in the workplace are presented in Table 5. These measures are security measures, improved surroundings, restricted public access, patient screening, patient protocols, increased staff numbers, changed shifts or rotations, reduced periods of work alone, training, and investment in human resource development. 
Only 4.7% of the participants reported the absence of all of these measures in their workplace. As presented in Table 6, the most helpful measure to control workplace violence was "restrict public access", which was reported by 95.3% of the participants. Changes that occurred in the workplace in the last 2 years To decrease violent incidents, many changes have occurred in the workplace. The most frequently reported change was "increased staff numbers", which was reported by 25 participants (23.4%). However, it is noteworthy that about 27.1% of participants who experienced violent incidents reported that "none" of these changes occurred in the workplace in the last 2 years (Table 7). Discussion The purpose of the current study was to examine workplace violence, its negative consequences and the measures used to control it among Jordanian nurses. Although the prevalence and sources of workplace violence among Jordanian nurses have been reported in the literature [13], data regarding the consequences of workplace violence and the measures used to control it have yet to be established. About half of the participants were subjected to workplace violence in the last 12 months. This percentage is consistent with the previous literature in this area of investigation [13]. Nurses who were exposed to workplace violence had various psychological disturbances after the attack, including disturbing memories, thoughts, or images, being super alert, avoiding thinking or talking about the attack, and feeling that everything they did was an effort. Unsurprisingly, violence at the workplace has been reported to cause serious consequences for nurses [6]. About 39% of the participants reported that no action was taken to investigate the causes of violence. In addition, the most frequently reported consequence of violence was a "verbal warning". Overall, most of the participants were not satisfied with the way in which the violence was handled. This could be due to the absence of specific policies regarding workplace violence in the selected settings [14]. Only 33 (30.8%) of the participants who witnessed an incident of violence in the workplace reported it. The most common reason for not reporting violence was that it is "useless", which was reported by 26.2% of the participants. Consequently, there is a need to establish a specific and uniform reporting system for all incidents of violence at the workplace. Security measures (i.e. preventing unwanted visitors, establishing clear policies regarding access to sensitive areas, and video surveillance) were the most frequently reported measures to deal with workplace violence. Excellent security measures might enhance working conditions for nurses and alleviate the risks of violence at the workplace. Previous research has emphasized the role of these measures in reducing workplace violence in health care facilities [15]. The most frequently reported change to decrease violent incidents was "increasing staff numbers". Nurse-to-patient ratios have been found to be a significant predictor of violence among nurses [15]. However, it is noteworthy that about 27.1% of participants who experienced violent incidents reported "no" changes at the workplace to decrease violence. This indicates a need for specific actions to control violence at the workplace. 
Conclusion The current study concluded that most of the participants were not satisfied with the way violence was handled, the most common reason for not reporting violence was that it is "useless", and the most frequently reported measure performed to decrease violent incidents was increasing staff numbers. Therefore, future research may want to examine the effectiveness of specific interventions to control violence at the workplace. In addition, using a qualitative approach to explore the lived experiences of nurses who were exposed to various types of violence is recommended. Additionally, policy makers may want to develop specific policies for reporting violent incidents. Using specific security measures to decrease violent incidents is also highly recommended, as was reported by most of the participants. Limitations of the Study The most important limitation of the current study is the use of a convenience and relatively small sample. Future research with a larger and more representative sample is recommended.
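Technical aside: the Data analysis section above reports frequencies and percentages computed in SPSS (version 22). As a minimal, hypothetical sketch of the same tabulation in Python — the column names and values below are invented for illustration and are not the study's data:

import pandas as pd

# Illustrative stand-in data: one row per respondent.
df = pd.DataFrame({
    "attacked_last_12m": ["yes", "no", "yes", "no", "yes", "no"],
    "action_taken": ["none", "verbal warning", "none", "none", "verbal warning", "none"],
})

# Frequencies (n) and percentages for each categorical variable,
# mirroring the descriptive statistics reported in the paper.
for col in df.columns:
    counts = df[col].value_counts()
    pct = (100 * counts / len(df)).round(1)
    print(pd.DataFrame({"n": counts, "percent": pct}))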
2019-05-11T13:06:43.024Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "5c69df05362f4de9a111ef5283dce25c9ca6dfd4", "oa_license": "CCBY", "oa_url": "http://www.imedpub.com/articles/consequences-and-control-measures-of-workplace-violence-among-nurses.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9c5d94d08bef9221b3a1da4142955308a8ef6220", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
249284609
pes2o/s2orc
v3-fos-license
The association between health anxiety, physical disease and cardiovascular risk factors in the general population – a cross-sectional analysis from the Tromsø study: Tromsø 7 Background Health anxiety (HA) is defined as worry about disease. An association between HA and mental illness has been reported, but few have looked at the association between HA and physical disease. Objective To examine the association between HA and number of diseases, different disease categories and cardiovascular risk factors in a large sample of the general population. Methods This study used cross-sectional data from 18,432 participants aged 40 years or older in the seventh survey of the Tromsø study. HA was measured using a revised version of the Whiteley Index-6 (WI-6-R). Participants reported previous and current status regarding a variety of different diseases. We performed exponential regression analyses looking at the independent variables 1) number of diseases, 2) disease category (cancer, cardiovascular disease, diabetes or kidney disease, respiratory disease, rheumatism, and migraine), and 3) cardiovascular risk factors (high blood pressure or use of cholesterol- or blood pressure-lowering medication). Results Compared to the healthy reference group, number of diseases, different disease categories, and cardiovascular risk factors were consistently associated with higher HA scores. Most previous diseases were also significantly associated with increased HA scores. People with current cancer, cardiovascular disease, and diabetes or kidney disease had the highest HA scores, being 109, 50, and 60% higher than the reference group, respectively. Conclusion In our general adult population, we found consistent associations between HA, as a continuous measure, and physical disease, all disease categories measured and cardiovascular risk factors. Supplementary Information The online version contains supplementary material available at 10.1186/s12875-022-01749-0. to an increase in mental distress [10], we have little knowledge about the association with HA. Therefore, the association between HA and physical disease and cardiovascular risk factors deserves increased attention and relevance in clinical work. The association between HA and physical disease has mostly been explored within specific patient groups. High HA has been reported in several patient populations with different physical diseases, such as cancer [11], cardiovascular disease [12,13], diabetes [14], and kidney disease [15]. In addition, different studies have examined disease-specific anxiety such as fear of cancer recurrence [16], fear of hypoglycaemia [17], and cardiac anxiety [18]. However, a recent review [19] proposed that these are dimensions of the broader HA concept, and pointed out that disease-specific measurements in disease-specific populations make comparison between different diseases difficult. The association between HA and physical disease and risk factors for disease has been less explored in the general population; only three studies on the topic have been published, with inconsistent results [6,20,21]. To our knowledge, only one study, published by Noyes and colleagues in 2000, has examined the association between HA and various diseases in a general adult (aged 40-65 years) population [22]. They found that high blood pressure, stroke, and chronic lung disease were associated with high HA. 
All of these studies used a single cut-off to dichotomise high and low HA, and to date, no one has looked at this association while measuring HA as a continuous construct. HA is reported to be unequally distributed in the population [2], with no clear cut-offs to define high HA. In accordance with Rachman [1] and Ferguson [3], we support the idea that HA in the general population should be assessed as a continuous construct. The aim of the present paper was to examine the association between HA and 1) number of diseases, 2) different disease categories, and 3) cardiovascular risk factors in a large sample of the general population. Study design and population The Tromsø study is a large Norwegian population-based health survey, where inhabitants of the municipality of Tromsø have been invited to seven different surveys (Tromsø 1-7) since 1974 [23]. The present paper used cross-sectional, self-reported data from Tromsø 7, which was conducted in 2015-2016. All inhabitants aged 40 years or older (n = 32,591) were invited by post and received two reminders to participate. Informed consent was given upon attendance, where both self-reported and clinical measures were collected. This study only utilized self-reported measurements. Of those invited to Tromsø 7, 21,083 gave informed consent and participated in this study (response rate of 65%). Only information concerning the age and gender of non-participants was collected. Dependent variable We measured HA using a validated and modified one-factor, six-item Whiteley Index-6 (WI-6-R) (Table 1), which was included in the Tromsø 7 questionnaire. The WI-6-R has satisfactory psychometric properties [24] in a general population. Respondents answered each item on a 5-point Likert scale (0="not at all", 1="to some extent", 2="moderately", 3="to a considerable extent", 4="to a great extent"). Item scores were then summed to create a HA score ranging from 0 to 24, with higher scores indicating higher HA. Independent variables Participants gave information on the following diseases: heart failure, atrial fibrillation, angina pectoris, myocardial infarction, stroke, diabetes, kidney disease, chronic bronchitis/emphysema/chronic obstructive pulmonary disease, asthma, cancer, rheumatoid arthritis, osteoarthritis, and/or migraine. Response options were "no", "yes, now", or "previously, not now" for each disease except myocardial infarction and stroke, where only "no" and "previously, not now" were possible. Participants also reported cardiovascular risk factors (high blood pressure, use of blood pressure-lowering medication, or use of cholesterol-lowering medication), now or previously. When examining the association between HA score and number of diseases (number of diseases analysis), participants were categorised according to number of diseases (0, 1, 2, 3, ≥ 4), past or current, and cardiovascular risk factors were not counted as diseases. When examining the association between HA score and disease category (disease category analysis), we grouped the different diseases into eight disease categories, and cardiovascular risk factors were included as a separate category (Table 2). Confounders We included four groups of possible confounders in the analyses: disease-related variables, socioeconomic, social network, and demographic variables, all of which were taken from the Tromsø 7 questionnaire. 
The disease-related variables included disease in first-degree relatives and self-reported mental illness measured by the Hospital Anxiety and Depression Scale (HADS). Participants were asked if their first-degree relatives had any of the following: angina pectoris, stroke, asthma, diabetes, breast cancer, prostate cancer, colon cancer, or myocardial infarction before the age of 60. Participants were categorised as "yes" if they reported that their first-degree relatives had one or more of these diseases, and "no" if they had none of them. Disease in first-degree relatives was chosen as a confounder as we hypothesised that HA may be affected by disease in close family [1], and since many of the diseases can be hereditary. Mental illness is associated with HA [6] and physical disease [25][26][27]. We therefore included the measurement tool HADS [28] as a confounder. HADS is a questionnaire based on participants' responses to 14 questions concerning symptoms of anxiety and depression in the last week, with a total range of 0-42. Due to the diverse use of cut-offs for the HADS total score [29], we used HADS as a continuous measure, except for descriptive purposes. Socioeconomic variables were considered confounders based on associations with both HA [2] and physical disease [30]. Participants reported their highest level of completed education (primary education up to 10 years of schooling, vocational/upper secondary education ≥3 years, college/university < 4 years, or college/university ≥4 years) and annual household income, which was categorised as low (NOK < 451,000), lower middle (NOK 451,000-750,000), upper middle (NOK 751,000-1 million), or high (NOK > 1 million). There were two social network variables: participation in organised activities and friendship. Both are associated with HA [2] and physical disease [31]. Response options for participation in organised activities were "never or just a few times a year", "1-2 times a month", "approximately once a week", or "more than once a week". The friendship variables included two questions: "Do you have enough friends who can give you help and support when you need it?" and "Do you have enough friends with whom you can talk confidentially?" Response options were "yes" and "no", and these were merged and coded as "no", for those who answered "no" to both questions; "to some extent", for those who answered "yes" to only one question; and "yes", for those who answered "yes" to both questions. Finally, demographic variables included gender and age as of 31 December 2015. Statistical analyses No participants were excluded prior to the analyses, but observations with missing values were excluded from the analyses, and all results are therefore presented as complete-case. In the disease category analysis, disease categories were exclusive; thus participants with diseases in two different categories (e.g. cancer and angina pectoris) were excluded. However, participants were not excluded if they had cardiovascular risk factors in addition to a specific disease category, e.g. high blood pressure in addition to cancer. Participants could state several diseases within each disease category, e.g. angina pectoris and heart failure. If they answered "previously, not now" for one disease and "yes, now" for another within the same disease category, they were categorised as "yes, now". We set the reference group for all analyses as participants who reported both no current or previous physical disease and no cardiovascular risk factors (healthy reference group). 
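The analyses described in the next paragraphs report exponentiated regression coefficients, exp(b), interpreted as percentage changes in the WI-6-R score relative to the reference group. As a hedged sketch of one way such coefficients can be obtained — the paper used Stata 16.1, while here a Poisson-family, log-link GLM in Python is assumed as one common implementation for a non-negative, skewed score, fitted to simulated stand-in data rather than the Tromsø data:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    # Stand-in for the WI-6-R sum score (six items rated 0-4, total 0-24).
    "ha_score": rng.poisson(3, size=1000),
    "n_diseases": rng.choice(["0", "1", "2", "3", "4+"], size=1000),
})

# With a log link, each coefficient b exponentiates to a multiplicative
# effect: exp(b) is the ratio of expected HA score vs. the "0" reference,
# so exp(b) = 1.29 reads as a 29% higher HA score.
model = smf.glm("ha_score ~ C(n_diseases, Treatment('0'))",
                data=df,
                family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(model.params))      # exp(b) per category
print(np.exp(model.conf_int()))  # 95% CIs on the exponentiated scale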
In the descriptive analyses, frequency distributions are presented for categorical variables, and mean (standard deviation, SD) and median [quartiles 1, 3] for continuous variables. All analyses were performed with STATA version 16.1 (STATA Corp LP, College Station, Texas, USA). Due to the non-normal and highly skewed distribution of the dependent variable HA, we used bivariate and multivariate exponential regression analyses to detect associations. The regression coefficients in the estimated models are presented as the exponentiated beta [exp(b)], where exp(b) describes the percentage change in the WI-6-R score for the other categories relative to the reference category. The unadjusted regression model included the disease category independent variable, and the adjusted model adjusted for all specified confounders. We tested for two possible interactions: between physical disease and education, and between physical disease and age, with the hypotheses that people with a higher education level would have more resources to handle disease, and that younger and older participants would deal with illness differently. However, no interactions were evident. Ethics The study was conducted in accordance with the Declaration of Helsinki and was approved by the Regional Committee for Medical and Health Research Ethics (REC North) in Norway (ID 2016/1793). All participants gave written informed consent before admission. Participant characteristics Of the 21,083 Tromsø 7 participants (age range: 40-99 years; mean 56, SD 11), 52.5% were women. Supplementary Table 1 shows participant characteristics of the confounders. In total, 18,432 participants had complete information on the number of diseases and cardiovascular risk factors. Of these, 17,997 had completed the WI-6-R. The mean (SD) HA score was 3.26 (3.39), and the median [quartiles 1, 3] was 2 [1, 5] out of 24 points in the population; HA scores increased with increasing number of diseases (Table 3). For all the investigated disease variables, having no disease was the most common, with increased HA observed among those with one or more diseases, those who fell into any disease category, and those with cardiovascular risk factors. For most diseases, the mean HA score was higher among those with current disease compared to those with previous disease. Association between health anxiety, physical disease, and cardiovascular risk factors There was a significant, positive association between HA score and number of diseases, and between HA score and disease categories (Table 4). In the fully adjusted model, participants reporting one physical disease had 29% higher HA scores than the healthy reference group, and participants with four or more physical diseases had a two-fold increase in HA scores compared to the healthy reference group. HA was consistently associated with all disease categories, with higher HA scores in all disease categories compared to the healthy reference group. For all disease categories, current disease was associated with higher HA scores than previous disease. Moreover, in most disease categories except previous diabetes or kidney disease and previous rheumatism, those with previous disease had higher HA scores than the healthy reference group. Participants with current cancer had the highest HA scores; twice as high as in the healthy reference group. Participants with current cardiovascular disease and current diabetes or kidney disease had an increase in HA scores of 50 and 60%, respectively, compared to the healthy reference group. 
Participants with cardiovascular risk factors also had a significant, 24% increase in HA scores compared to the healthy reference group. Discussion The aim of our study was to explore the association between HA and physical disease. We found several important and consistent results: Increasing number of diseases was associated with significantly higher HA scores. Both people reporting current and previous disease had higher HA scores compared to the healthy reference group. Cancer, cardiovascular diseases, and diabetes or kidney disease showed the strongest association with HA. Finally, participants with cardiovascular risk factors had significantly higher HA scores than the healthy reference group. To our knowledge, this is the first paper to demonstrate how HA is associated with both number of physical diseases, different disease categories, current and previous disease, and cardiovascular risk factors in the general population. The HA scores we observed among those with four or more diseases were twice as high as scores among those with no diseases, and we believe this to be a novel finding. Although some studies have found an association between high HA and having a disease [6,20], only one previous study has examined the association between HA and the number of physical diseases [21]. In contrast to our study, they did not find any significant association between HA and increasing number of diseases. However, they used a cut-off to dichotomise high and low HA, which might have obscured a significant trend. Unlike previous studies that used different cut-offs to measure HA [6,21,22], we utilised HA as a continuum, which may better represent the phenomenon of HA. As this is a cross-sectional study, it cannot determine causality. We speculate that the observed association may be explained by the presence of disease increasing the risk of having a higher HA score [19]. However, high HA is also associated with high healthcare use [4], which may increase the probability of acquiring a diagnosis. In addition, we do not know whether HA itself is a risk factor for future disease. High levels of HA have been found to be associated with increased risk for ischaemic heart disease [32], whereas Knudsen and colleagues [33] found that high HA was associated with increased cancer incidence in men. Further, no association was found between HA and cancer incidence in a cohort of women, but high HA was associated with increased all-cause mortality [34]. To better examine and understand causal directionality in the relationship between HA and different diseases, and to investigate if gender influences the role of HA, a cohort study design is warranted. The association between HA and different diseases We found significant associations between HA scores and all disease categories investigated in this study, which included the most common non-communicable chronic diseases. Our results are in accordance with previous findings of high HA in patient populations [11,14,15,18,35]. Current cancer, cardiovascular diseases, and diabetes or kidney disease were associated with the highest HA scores. Fear of cancer and cardiovascular disease is common in people with HA [1,36]. Having current diabetes or kidney disease was also highly associated with HA scores in this study. Diabetes control requires strict adherence and bodily monitoring. 
Fear of complications was strongly associated with HA in a previous population of patients with diabetes [14], and may explain the high association between HA and this disease category in our study. Assuming that the diseases occurred prior to the HA, it could be reasonable to suggest that the bodily monitoring and fear of a fatal outcome may explain the high associations in this general population. Another consistent finding was that those reporting previous disease had lower HA scores than those with current disease, but still higher scores than the healthy reference group. Although most of the diseases included in our study are considered chronic, their symptoms can be reduced by proper treatment. We therefore speculate that some of our participants may have some disease, but proper management of that disease decreased both their symptom burden and HA. (Table 4: Association between health anxiety score and number of diseases, and between health anxiety score and disease category, presented with exponential regression coefficients; adjusted model, N = 8014.) Interestingly, we found a significant association between HA and cardiovascular risk factors, with a coefficient similar to the coefficients for migraine and respiratory disease. The impact of a 24% increase in HA score in otherwise healthy persons indicates a potential health burden on a population level. In Norway, the proportion of 70-74-year-olds taking blood pressure- or cholesterol-lowering medication is increasing [37], and was as high as 57% in 2016 [38]. Primary healthcare in Norway is well-functioning [39]. It is reasonable to assume that those who report a cardiovascular risk factor receive treatment, and thereby are at lowered risk for future cardiovascular disease. It is therefore interesting that we observed such a pronounced association between cardiovascular risk factors and HA. This significant association is important in the discussion of adverse effects in identifying people "at risk". Possible cohort effect Older age is associated with lower HA [40], and as physical diseases are more prevalent in older individuals, we hypothesised that the association between HA and disease may differ by age. However, we did not find any significant interaction, indicating that having a disease is not associated with higher HA in younger (40 years) compared to older age groups. Moreover, mean HA has increased in student populations in the past three decades [41], and if there is, in fact, a cohort effect, it is likely that today's youth may experience an even higher HA later in life due to the increased prevalence of disease in older age groups. Methodological considerations As this study uses a cross-sectional study design, we cannot determine whether HA occurs prior to the disease or in response to the disease, and caution should be taken when making assumptions about the directions of associations. Nevertheless, we believe that this study shows novel findings of associations in a general population, which may lay the foundation for future prospective studies. A strength of this study is the large, representative sample from the general population, which enabled us to examine the association between HA and different diseases. We chose to use a validated measurement tool, which is a strength in the research field of HA, and used a revised version that distinguished the cognitive construct of illness worry from the presence of physical symptoms [24]. 
Comparisons between studies are difficult due to the use of different HA measurement tools [19] as well as the reporting of different diseases [7]. Although our results align with studies in other countries [6,22] and patient populations [11,12,14,15], our sample is exclusively from inhabitants of a specific geographic region in Norway, and replication in other populations would allow for further generalisation of the results. All our data on the occurrence of disease were self-reported, and any misclassification may be due to recall bias. If the reporting of disease is related to HA, e.g. if those with low HA under-report disease more than those with higher HA, this could bias our results. However, a Norwegian study examining consistency between self-reported diagnoses and clinical registries found good overall consistency [42]. In our study, we asked about current or previous disease, not duration of disease. One article examining HA in cancer patients found that HA was consistent over time after diagnosis and also during remission [43], and high HA has also been described as stable over time [44]. However, one study carried out in a sample of patients with diabetes found that high HA was most highly associated with a recent diagnosis [14]. Another factor concerning morbidity is severity of disease (risk of fatal outcome, the need for disease monitoring, chronic disability, etc.), as most of the diseases in our disease categories may have a wide range of severity. Interestingly, Tu et al. [15] found that increased HA was independent of kidney disease severity. However, as disease severity and duration may have influenced participants' responses, the lack of this information may increase any heterogeneity of the associations presented. The introduction to the questionnaire, stating the timeframe of the past 12 months, was omitted in the survey. This limits our knowledge of the timeframe to which the participants' answers refer. Although severe HA has been found to be stable over time [44], this is unknown for people with lower HA scores. As in all survey research, selection bias may occur. Unfortunately, we have no information on factors related to non-response in Tromsø 7, other than age and gender. However, a similar survey found that chronic diseases, e.g. diabetes, were related to non-attendance [42], indicating that survey populations may be healthier than non-respondents. Although not previously examined, it has been hypothesised that, in contrast to other mental illnesses, people with HA attend studies that are advertised as a "health check-up" [5], which was done in Tromsø 7. If the participants in Tromsø 7 were healthier, whilst having higher HA, our results may be biased towards the null. As Lebel et al. [19] pointed out, there is an overlap between disease-specific measures and HA. Although disease-specific HA may be more precise than the more general concept of HA, we believe that HA should be used in a larger and comparative perspective. Clinical implications Our study demonstrates a consistent trend in the association between HA and physical disease, which confirms knowledge from clinical practice and highlights the importance of assessing and addressing HA in patients with either current or previous disease. Past research has shown associations between HA and a wide range of diseases in patient populations. In line with those results, we suggest that while the proportion of HA may not vary considerably between diseases, the mere presence of disease is associated with higher HA. 
This association is relevant from a clinical perspective, as over 50% of our study sample had one or more diseases (Table 3). Severe HA is associated with a wide range of negative consequences, such as functional impairment, activity limitations, psychological distress [6], and increased healthcare use and work disability [4,5,45], and should be managed through targeted treatment to reduce associated negative consequences. However, as we have found in this study, increasing number of diseases is associated with higher HA, but overall, HA remains low. At the same time, some studies found an association between lower HA scores and higher healthcare use [46,47], and therefore we do not know how relevant low HA is from a clinical perspective. From a healthcare systems perspective, it is important to account for HA in the management of disease, particularly in those with an increased number of physical diseases. Even when HA is not severe enough to require diagnosis and targeted treatment, we believe it is important that healthcare personnel acknowledge and address the additional burden that HA may place on persons with current or previous physical disease and those with cardiovascular risk factors. Conclusion In our general adult population, we found consistent associations between HA and physical disease and cardiovascular risk factors. The highest HA scores were found among those with four or more diseases and participants with current cancer, but the positive association was consistent in all disease categories and cardiovascular risk factors. Previous disease was also associated with increased HA. Our results indicate that HA should merit closer attention in future research on populations with physical disease and risk factors for disease.
2022-06-03T13:34:46.546Z
2022-06-02T00:00:00.000
{ "year": 2022, "sha1": "9e053f717bdad9118b3889c6ea56fd54fc71d98b", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c77a197c573ccb70d8e278abb5962e87ab683643", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
227228853
pes2o/s2orc
v3-fos-license
Determinants of COVID-19 Vaccine Acceptance in Saudi Arabia: A Web-Based National Survey Background Vaccine hesitancy is a potential threat to global public health. While there is an unprecedented global effort to develop a vaccine against the COVID-19 pandemic, much less is known about its acceptance in the community. Understanding the key determinants that influence the community's preferences and demand for a future vaccine may help to develop strategies for improving the global vaccination program. The aim of this study was to assess the prevalence of acceptance of a COVID-19 vaccine and its determinants among people in Saudi Arabia. Methods A web-based, cross-sectional study was conducted using a snowball sampling strategy under a highly restricted environment. A bilingual, self-administered, anonymous questionnaire was designed and sent to the study participants through social media platforms and email. Study participants were recruited across the country, including the four major cities (Riyadh, Dammam, Jeddah, and Abha) in Saudi Arabia. Key determinants that predict vaccine acceptance among respondents were modelled using logistic regression analysis. Of the 1000 survey invitees, 992 responded to the survey. Results Of the 992 respondents, 642 showed interest in accepting the COVID-19 vaccine if it became available. Willingness to accept the future COVID-19 vaccine was relatively high among older age groups, married participants, those with a postgraduate degree or higher (68.8%), non-Saudis (69.1%), and those employed in the government sector (68.9%). In the multivariate model, being above 45 years of age (aOR: 2.15; 95% CI: 1.08–3.21) and being married (aOR: 1.79; 95% CI: 1.28–2.50) were significantly associated with vaccine acceptance (p < 0.05). Conclusion Addressing sociodemographic determinants relating to COVID-19 vaccination may help to increase the uptake of the global vaccination program to tackle future pandemics. Targeted health education interventions are needed to increase the uptake of the future COVID-19 vaccine. Introduction The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, widely referred to as "COVID-19", has infected more than 5.5 million people across 144 countries. [1][2][3] The pandemic poses a significant threat to the public health system, 3,4 with catastrophic economic consequences around the world. Saudi Arabia has been plagued by several outbreaks, including the Middle East Respiratory Syndrome Coronavirus (MERS-CoV) and the ongoing COVID-19 outbreak. 5,6 As of 17th October 2020, the virus had spread rapidly in the Kingdom, causing a total of 341,495 laboratory-confirmed cases with 5144 deaths. 7 A vaccine is considered to be the most awaited intervention, 2,4,7 and hundreds of global R&D institutions are engaged at unprecedented speed in developing a vaccine. [7][8][9][10][11] However, data on public perception towards COVID-19 vaccine uptake are not available. Numerous studies have identified several factors responsible for vaccine acceptance when a new vaccine is introduced. [12][13][14][15] These include the safety and efficacy of the vaccine, adverse health outcomes, misconceptions about the need for vaccination, lack of trust in the health system, and lack of knowledge among the community about vaccine-preventable diseases. 15,16 Misinformation leading to vaccine hesitancy could put public health at risk in responding to the current crisis. 
In previous pandemics such as influenza A (H1N1), when the vaccine was introduced, the acceptance rate varied between 8% and 67%. 12 In the United States, the acceptance rate was reported to be 64%. 13 In the United Kingdom, 56.1% of the study participants reported accepting the swine flu (influenza A H1N1v) vaccine. 17 In Hong Kong, 50.5% of the study population intended to receive a future A/H7N9 vaccine during the outbreak in 2014. 18 In Beijing, China, 59.5% of the study participants who had heard of H7N9 were willing to accept a future influenza A (H7N9) vaccine. 8 Vaccine acceptance and demand are complex in nature and context-specific, varying across time, place, and the perceived behavioral nature of the community. [9][10][11][12][13][14][15] A study in Ireland showed that health care workers avoided seasonal influenza vaccination due to misconceptions and doubts about the efficacy of, and trust in, the vaccine. 16 In China, demographics and public perceptions were the predictors of vaccination acceptance. 8 In Hong Kong, anxiety level and vaccination history were the main predictors of vaccine acceptance. 17 In the United States, the perceived effectiveness of the vaccine, social influence, and health insurance were the key predictors of acceptance of an influenza vaccine. 18 Another study in the United States reported that greater hesitancy was associated with lower vaccine uptake and greater confidence with higher vaccine uptake. 19 In the United Arab Emirates, a study investigated parents' attitudes about childhood vaccines and reported that only 12% of parents were hesitant towards childhood vaccination. 20 The study reported that vaccine safety (17%), side effects (35%), and too many injections (28%) were critical factors in vaccine hesitancy. 20 Respondents who had a history of being vaccinated against seasonal flu were more likely to report their intention to be vaccinated. 14,15 A systematic review highlighted the role of public trust in vaccine uptake and reported a dearth of research on vaccine uptake based on public trust in low- and middle-income settings. 12 Another review that investigated the general public's willingness to accept or decline a pandemic (H1N1) vaccine identified several key predictors, such as people's perceived risk of infection, the severity of the event, personal consequences, history of previous vaccination, and ethnicity. 11 A recent study highlighted that equitable vaccination across all population groups is challenging due to complex human behavior, which changes over space and time, 10 and a meta-analysis demonstrated the value of behavioral health models such as the "theory of planned behavior" in explaining vaccine hesitancy. 13 Numerous studies have urged tailored interventions and policies to increase vaccination uptake. [9][10][11]13,16,21 Few studies have explored the prevalence of COVID-19 vaccine acceptance and its determinants. 22,23 A study conducted among health care workers (HCWs) in China showed a high acceptance of COVID-19 vaccination among health care workers in comparison to the general population. 23 Another study in the United States reported that only 20% intended to decline the COVID-19 vaccine. 22 Since vaccine acceptance is context-specific and varies with geography, culture, and sociodemographics, we aimed to understand the public's willingness to accept a future COVID-19 vaccine in Saudi Arabia. Study Design and Setting The cross-sectional survey was designed using the SurveyMonkey® platform and used a snowball sampling strategy. 
Study participants were recruited across the Kingdom of Saudi Arabia, including major cities (Riyadh, Dammam, Jeddah, and Abha) and other minor cities. The above cities were selected based on the geographical presence of the Saudi Electronic University, which enabled the researchers to collect information during the highly restricted environment of the COVID-19 pandemic. Initially, the study investigators shared the survey link on social media (Twitter, WhatsApp, Telegram channels) and through emails to their primary contacts (aged 18 and above). The primary participants were requested to roll out the survey further. On receiving and clicking the link, participants were automatically directed to the informed consent page, followed by the survey questionnaires. Study Sample From a previous literature review of vaccine hesitancy in the community, it was estimated that about 15% of study participants would show hesitancy towards accepting a vaccine. We estimated that a sample size of 800 would give us 80% power at a confidence level of 95%. Accounting for non-response, dropout, and subgroup analyses, our final sample size was planned to be 1000 completed questionnaires. The sample size was calculated using the formula N = Zα² P(1 − P) / d², in which α = 0.05 and Zα = 1.96, and the estimated acceptable margin of error for the proportion, d, is 0.1. The survey was stopped when we received 1000 completed questionnaires. Questionnaire Development We conducted a literature review 9,24-26 to identify key areas, and a draft questionnaire was devised. The draft questionnaire was in a bilingual (Arabic and English) format and consisted of sections on sociodemographics, knowledge and perception towards COVID-19, trust in the health system, and participants' willingness to accept the COVID-19 vaccine if it became available in the future. We tried to keep the questionnaire short so that it would be quick to complete and easy to follow. The questionnaire's content and clarity were assessed by public health experts working at the College of Public Health at Saudi Electronic University. The draft questionnaire was pilot tested. The final questionnaire was developed based on the Cronbach's alpha values (>0.70). 27 The questionnaire was self-administered. The participants were instructed to select one option from the list of responses (Yes/No/Not sure). Ethics Statement This study followed the principles of the Declaration of Helsinki 1995 (revised in 2013). Ethical approval was granted for the study by the institutional Research Ethics Committee (SEUREC-CHS20110) of Saudi Electronic University, Riyadh, Kingdom of Saudi Arabia, and consent was obtained before participation in the study. Data Analysis Descriptive statistics were computed to generate summary tables for study variables. A cross-tabulation analysis was performed to examine the distribution of the intention to take the COVID-19 vaccine across respondents' sociodemographic characteristics using chi-squared tests. Logistic regression models were employed, based on a priori hypotheses, to estimate odds ratios (OR) and their 95% confidence intervals (95% CI). All data analysis was performed using STATA 13.0. A two-tailed p-value <0.05 was considered statistically significant. Results Of the 1000 survey invitees, 992 (99.2%) provided informed consent and returned the survey. Table 1 shows the summary statistics of the sociodemographic profile of the study participants. 
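A small sketch of the sample-size formula stated in the Study Sample subsection above, N = Zα² P(1 − P) / d²; the helper function below is illustrative only, evaluated at the paper's stated inputs (Zα = 1.96 for α = 0.05, P = 0.15):

import math

def sample_size(p, d, z=1.96):
    # N = z^2 * p * (1 - p) / d^2, rounded up to a whole participant.
    return math.ceil(z**2 * p * (1 - p) / d**2)

print(sample_size(0.15, 0.10))  # -> 49 with the stated margin d = 0.1
print(sample_size(0.15, 0.05))  # -> 196 with a tighter 5% margin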
Most of the respondents (436, 43.9%) were aged between 26 and 35 years (Table 1). Table 2 shows bivariate associations between sociodemographic characteristics and the intent to take the COVID-19 vaccine among respondents in Saudi Arabia. Of the 992 respondents, 642 (64.7%) intended to take the hypothetical vaccine, only 70 (7%) reported hesitancy towards the COVID-19 vaccine, and 280 (28.2%) reported being "not sure" about their intention (Table 2). Of the 53 respondents who were aged 45 years and above, 42 (79.2%) showed interest in taking the vaccine if it became available. Of the 512 participants who were married, 355 (69.3%) reported willingness to accept the COVID-19 vaccination (Table 2). Table 3 presents the logistic regression analysis for sociodemographic predictors of the intent to take the COVID-19 vaccine among respondents. In the multivariate model, respondents who were above 45 years were 2.15 times more likely to accept the vaccine (aOR: 2.15; 95% CI: 1.08-3.21). Similarly, participants who were married were 1.79 times more likely to accept the vaccination (aOR: 1.79; 95% CI: 1.28-2.50). Table 4 shows the logistic regression analysis for factors potentially associated with the intention to receive the COVID-19 vaccine among respondents. In the multivariate model adjusted for sociodemographic characteristics, participants who were concerned about acquiring infection with the COVID-19 virus were 2.13 times (95% CI: 1.35-3.85) more likely to accept the COVID-19 vaccine than those who were not concerned about the infection. Participants who trusted the health system were 3.05 times (95% CI: 1.13-4.92) more likely to accept the vaccination than those who reported no trust. Discussion Vaccination is considered one of the most outstanding public health inventions of the 21st century. However, its acceptance varies with space, time, social class, ethnicity, and contextual human behavior. 9,10,12,13,28 Our study, the first of its kind in Saudi Arabia, used a web-based self-administered questionnaire and collected responses across the Kingdom, including four major cities (Riyadh, Jeddah, Dammam, and Abha) and some minor cities in the country. Of the 992 study participants, 642 (64.7%) said "yes" to taking the COVID-19 vaccine, 70 (7.0%) said "no", and 280 (28.2%) said "not sure" about taking the COVID-19 vaccine if it became available. Further, participants aged 45 years and above (aOR: 2.15; 95% CI: 1.08-3.21) and married participants (aOR: 1.79; 95% CI: 1.28-2.50) were more likely to accept the COVID-19 vaccine than their counterparts. Study participants' trust in the health system (aOR: 3.05; 95% CI: 1.13-4.92) and perceived risk of acquiring infection (aOR: 2.13; 95% CI: 1.35-3.85) were found to be significant predictors of acceptance of the COVID-19 vaccine. Though there have been limited studies exploring the intention to take the COVID-19 vaccine in the current crisis, our results are in agreement with studies conducted in China and the United States. 22,23 The Chinese study reported that 72.5% of the general population intended to take the COVID-19 vaccine. 23 The study conducted in the United States reported 80% acceptance of the COVID-19 vaccine among the study population. 22 In our study, 64.7% of study participants showed interest in taking the COVID-19 vaccine. Similar observations were made during the H1N1 pandemic. 11 
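As an illustration of how adjusted odds ratios like the aOR of 2.15 (95% CI: 1.08-3.21) above are typically obtained — a logistic regression fit whose coefficients are exponentiated — here is a minimal, hypothetical sketch; the study itself used STATA 13.0, and the data and variable names below are simulated for illustration:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "accept": rng.integers(0, 2, 992),       # 1 = intends to take the vaccine
    "age_over_45": rng.integers(0, 2, 992),
    "married": rng.integers(0, 2, 992),
})

fit = smf.logit("accept ~ age_over_45 + married", data=df).fit(disp=0)
ci = fit.conf_int()  # columns 0 and 1 hold the lower and upper bounds
print(pd.DataFrame({
    "aOR": np.exp(fit.params),   # exponentiated coefficients = odds ratios
    "ci_low": np.exp(ci[0]),
    "ci_high": np.exp(ci[1]),
}))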
Some qualitative comparisons can be made with similar studies: in a systematic review, the acceptance rate varied between 8% and 67% for the pandemic influenza A (H1N1) vaccine. 11 The acceptance rate was reported to be 64% in the United States, 25 56.1% in the United Kingdom, 29 59.5% in Hong Kong, 17 and 59.5% in China. 8 The systematic review also highlighted that there was no consistent association of participants' demographic variables (age and sex) with vaccine uptake behavior. 11 However, in our study, older participants were more likely to accept COVID-19 vaccination than their counterparts (aOR: 2.15; 95% CI: 1.08-3.21). Numerous studies have reported the perceived risk of becoming infected as a predictor of the intention behind vaccination. 11,14,15,23 In our study, participants who had a higher perceived risk of being infected were 2.13 times more likely to be vaccinated than those with a lower perceived risk (aOR: 2.13; 95% CI: 1.35-3.85). Studies have shown that higher trust in the health system is associated with the utilization of preventive health services such as vaccination. 19,30,31 In our study, participants with greater trust in the health system had 3.05 times higher odds of reporting their intention to take the COVID-19 vaccine (aOR: 3.05; 95% CI: 1.13-4.92). Our study has several limitations. Firstly, it is cross-sectional and depicts a picture of the community response at the time of the study. We asked the respondents to report their intention to receive the COVID-19 vaccine if it became available in the future. A considerable number of study participants (28.2%) reported being "not sure" about their intention to take the COVID-19 vaccine. The real intention could be different when the vaccine becomes available. 14 It would be interesting to study how the intention varies over time and with context in the study population. Secondly, study responses were recorded using a web-based self-administered survey instead of a direct face-to-face interview. This may lead to potential bias in the reported responses. Third, the current study did not explore the motivations behind acceptance or the barriers behind hesitancy towards the COVID-19 vaccine. Another key limitation of the study is the snowball sampling strategy, which may not represent the true picture of the study population. However, during the study period (lockdown due to the COVID-19 pandemic), this was the only available method to collect the data from the study participants. Despite the above limitations, our study is the first of its kind and, with a representative sample size across the country, demonstrated the population's intention to take up the COVID-19 vaccine. Once the pandemic is over, we will explore many additional research questions, including vaccine promotion strategies, vaccine safety, vaccine referral/recommendations, cost (out-of-pocket expenditure), and the key motivations and barriers towards COVID-19 vaccination. Conclusion This is the first community-based study under a highly restricted environment that assessed the public's intent to accept the hypothetical COVID-19 vaccine in the Kingdom with a representative sample. The study participants showed good intention to accept the hypothetical vaccine, in accordance with previously reported figures. Participants' perceived risk and trust in the health system were found to be significant predictors of the intention to take the COVID-19 vaccine in the Kingdom. Further studies should corroborate our findings with public health promotion interventions. 
Health education targeting various sociodemographic groups should be prioritized to increase COVID-19 vaccine uptake in the country and elsewhere.
2020-11-26T09:03:53.660Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "64b07266fdbe814017448fb74f3a144c76978de9", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=63975", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "007578fe70ad3bac8ceff96851fa14d1853ea2d2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
169905412
pes2o/s2orc
v3-fos-license
Application of direct payment clause 30A.0 of the Asian International Arbitration Centre (AIAC) Standard Form of Contract (With Quantities) Conditional payment terms such as "pay when paid" or "pay if paid" can create a negative chain effect on the parties in construction projects, resulting in delays to project completion and adversarialism, and may affect a contractor's reputation. The Asian International Arbitration Centre (AIAC) has launched a standard form of contract that is compliant with the Construction Industry Payment and Adjudication Act (CIPAA), with the aim of reducing payment issues. The aim of the research is to identify whether the clause for "direct payment under CIPAA 2012" of the new AIAC standard form of contract can alleviate problems in direct payment. In pursuit of this aim, five legal cases were analysed and thirty questionnaire forms were distributed. The legal case analysis highlighted that the major reason for direct payment issues being referred to court is the validity of the direct payment agreement between the disputing parties. For the cases heard before the enactment of CIPAA, the findings show that, in the three cases, the disputants went to litigation because of the legality of direct payment agreements; most of the agreements were made orally. For the cases analysed after CIPAA was enacted, the findings show that the disputing parties did not opt for adjudication and that the main contractors tried to mitigate their responsibilities to the employer. The results from the questionnaires established that the direct payment clause could be successfully adopted for future use by the industry. Even though the AIAC standard form of contract has been formally introduced to the industry, it is not widely used. The questionnaire findings show that, with encouragement and support from the industry, the direct payment clause of the AIAC standard form of contract has the potential to reduce payment issues in the future. With the remodelling of the standard forms of contract available in the construction industry to be CIPAA-compliant, it is hoped that this move may scale down the prevalent payment issues in the Malaysian construction industry. History: Received: 14 November 2018 Accepted: 1 January 2019 Available Online: 30 January 2019 Background of study For decades, the construction industry has been plagued by various constraints encompassing issues such as cost and time overruns, poor quality and lack of sustainability (Bruno et al., 2017). Many factors contribute to the success or failure of a construction project, and this has become an interesting arena for research (Yong and Mustaffa, 2017). One of the common areas of research is payment, as it has been at the root of many disputes in the construction industry. Sometimes, main contractors feel they have the upper hand and power over the subcontractors. This is possibly caused by the tendency of contractors to ignore their obligations to pay the subcontractors in view of their own poor cash flow condition. Subcontractors are entitled to be informed about their payments, especially when recovering them. There are many dispute resolution mechanisms for solving this particular problem, such as litigation, arbitration and adjudication. On the same wavelength, many institutions, through standard forms of contract such as those of PAM, PWD and CIDB, have taken great initiatives in avoiding these problems. 
The introduction of the Asian International Arbitration Centre (AIAC) standard form of contract, which is CIPAA-compliant, may help in reducing payment issues. Statement of problem Before CIPAA was enacted, the construction industry had been using the PAM and PWD standard forms of contract. In the PAM standard form of contract, Clause 27.6 provides that the employer may deduct the amount paid to the subcontractor from the amount payable to the contractor. The same provision can also be found in the PWD standard form of contract, under Clause 61.2(a). The two clauses in the PAM and PWD standard forms of contract require parties in dispute to go through mediation and arbitration proceedings if any dispute pertaining to them cannot be resolved. There is a provision in the PAM form which gives the option for it to be resolved by adjudication. However, there are no specific provisions in PAM and PWD that directly relate the matter to CIPAA. Since the existing standard forms of contract were issued prior to this Act, the Asian International Arbitration Centre (AIAC), formerly known as the Kuala Lumpur Regional Centre for Arbitration (KLRCA), has taken the initiative to introduce a new standard form of contract. This form, which has been formally launched, is intended to address the prevalent issue of payment in a more explicit manner. These new standard forms of contract are claimed to be more user-friendly and CIPAA-compliant. This could feasibly be the ultimate solution for the direct payment problem. Since the form is relatively new in the industry, industry players may be reluctant to use it. Research objectives The aim of this research is to identify whether the clause for "direct payment under CIPAA 2012" of the new AIAC standard form of contract can eradicate the problems arising in direct payment. In order to accomplish this aim, two objectives need to be pursued: firstly, to determine the common reason(s) that lead to problems in direct payment from a legal perspective, and secondly, to investigate the awareness of construction industry players of the new direct payment Clause 30A.0 in the AIAC standard form of contract (with quantities). Significance of study This research is important in helping clients, contractors and subcontractors to know their rights and obligations arising in the context of direct payment under the new AIAC standard form of contract. In addition, it sheds some light guiding construction players in asserting and protecting their rights to attain healthy cash flow. It is hoped that the findings of the research will encourage the authorities to review their standard forms of contract and include new provisions that might effectively help in remedying the problems concerning direct payment. Scope of study The main focus of this research is discovering the perception of construction industry players of the direct payment provisions of the new standard form of contract released by the AIAC. Court cases have been referred to in identifying the direct payment problems that occurred and the solutions to them. This research has been limited to construction cases in Lexis Malaysia under PAM 2006, PWD 2010 and CIPAA 2012, problems in direct payment that occur among construction players, and perceptions of the direct payment clause in the new AIAC standard form of contract. 
Definition of payment
Payment is the amount of money to be paid to the contractor, as in the regular interim payments made progressively throughout the duration of the contract (Jane, 2018). Certain procedures enable the parties to calculate the amount, the due date and the final date for payment of any payments falling due under the contract.

Payment clauses in contract documents
In PWD 203A Version 2010, the payment clause falls under Clause 28, "payment to contractor and interim certificate". Likewise, in the PAM 2006 standard form of contract, the clause falls under Clause 30, "certificates and payment". In both of these standard forms of contract, each clause explains when the employer's representative needs to carry out a valuation, and the clauses lay out the payment procedures that bind the parties to the contract.

Obligation of paymasters
Payment does not require the submission of a claim, because it is an obligation of the employer to pay the contractor accordingly for completed works. According to Tony (2018), for the valuation of completed work, the regular basis of timely valuation is commonly stated in advance. The main purpose of the contract is for the contractor to deliver the output (buildings) and for the employer to pay upon completion of the work done. It is essential for paymasters to know that their nominated and domestic subcontractors have the right to be paid accordingly for the works they have done. Generally, the cash flow interests of all parties must be protected.

Payment issues
Payment problems are not new in the construction industry. Both nationally and globally, payment is considered one of the main issues with significant influence, no matter what industry a person is in. According to the European Payment Report (2013), payment is an issue of concern in any industry.

Factors contributing to payment issues
According to Azhari (2014), a number of factors contribute to payment issues.

Impact of payment issues
There are many impacts that can be caused by payment issues. According to a report by CIDB (2006), the most common effects of non-payment and late payment are the stress created on contractors, financial hardship and cash flow problems. According to Mohd Khairul (2016), contractors' cash flows are affected by retention funds, payment terms to suppliers and subcontractors, advance payments, delayed payments and the frequency of payment. Sambasivan and Soon (2007) stated that any disruption within the flow of cash will cause monetary hardship and can even cause failure lower down the contracting chain. Title to goods is usually transferred upon payment, and late or non-payment can therefore lead to a shortage of materials (Sambasivan and Soon, 2007). According to Azhari (2014), the impacts are as below:
a. Creates a negative chain effect on other parties
b. Results in delay to the completion of the project
c. Leads to bankruptcy
d. Project delay
e. Affects the contractor's reputation
f. Affects the profitability of the project
It can be highlighted that the payment issues comprising retention of title, delay in payment, failure of payment, and late and non-payment have persisted in the Malaysian construction industry for quite some time now, but have yet to be fully resolved.
Clauses in standard forms of contract as remedies for payment issues
Under Clause 27.6 of PAM 2006, the Architect may ask the contractor to supply reasonable proof that the contractor has discharged the amounts in the previous certificate due to the Nominated Subcontractor. If the Contractor fails to do so, the Architect may certify, and the Employer may pay, such amounts directly to the Nominated Subcontractor and deduct the same amount from the Contractor. Similarly, in the PWD 2010 form, the normal procedure of payment from client to contractor falls under Clause 28.3. Regarding direct payment to the subcontractor, the provision falls under Clause 61.1, under which any amount paid by the Government directly to the Nominated Subcontractor shall be deemed to be payment to the Contractor by the Government by virtue of the contract.

Direct payment
Emmanuel (2015) stated that problems of late and unfair payment can be influenced by the relationship between the main contractor and the subcontractor. Based on Supardi (2015), there are three principal methods of paying subcontractors:

Payment upon certification
Under this payment system, the main contractor receives payment through interim payment certificates, and this is a condition precedent for the main contractor to pay the subcontractors. It is not appropriate for the main contractor to default on payment to the subcontractor after the honouring period of the certificate has lapsed.

Direct payment from the employer
Other than payment upon presentation of the certificate, direct payment is another form of payment, in which payment is made directly to the subcontractor by the employer. As far as the employer is concerned, the subcontractor's payment may be apportioned from the Interim or Final Certificate received by the main contractor.

Contingent payment or conditional payment
The last principal method of payment is contingent payment, also known under various terms such as "pay if paid", "pay when paid" and "back to back" provisions for paying subcontractors. According to May and Siddiqi (2006), the main contractor may transfer the risk of non-payment by the employer to the subcontractor in order to protect its own interests. A number of direct payment cases have highlighted contingent payment.

Worldwide perspectives on direct payment
In another part of the globe, under the United Kingdom's Housing Grants, Construction and Regeneration Act 1996, conditional payment provisions are considered ineffective, with the exception of cases where there is bankruptcy in the contractual chain. According to Sushani (2005), even though these initiatives have been taken, payment problems may still exist. Similar reports can be seen in the literature in the UK (Reilly, 2008), Australia (Barry, 2010) and New Zealand (The Dominion Post, 2008), pointing to the fact that liquidation can result in delayed payment.

Construction Industry Payment and Adjudication Act 2012 (CIPAA 2012)
According to Loshini (2017), the Construction Industry Payment and Adjudication Act ("CIPAA 2012") was enacted by the Malaysian Parliament and came into force on 15 April 2014. The introduction of a statutory adjudication process was made with a declared intention to improve payment problems in the construction industry.
Small contractors and subcontractors may face cash flow problems and will be financially weakened if they are not paid by employers, or in some cases the payment may be unfair or untruthful. In another example, the main contractor may have the upper hand and refuse to pay its subcontractors. The Act identifies this issue and makes provisions to address these disputes.

Adjudication
Adjudication is a form of dispute resolution that was developed in the mid-2000s as an alternative to arbitration in the construction industry. Most standard forms of contract adopt adjudication as their primary alternative dispute resolution mechanism (Dancaster, 2008; Seifert, 2005; Teo, 2008). The procedures under CIPAA may help in solving payment disputes between construction players, and this may be the reason why the AIAC took the initiative to produce a new standard form of contract as one of the solutions. In celebrating the 40th anniversary of the KLRCA recently, Datuk Sundra Rajoo launched a new CIPAA-compliant KLRCA standard form and also changed the name of the KLRCA to the Asian International Arbitration Centre (AIAC) to attract more international parties to arbitrate with it, with the clear hope that Malaysia would be acknowledged as the leading arbitration centre worldwide.

Background of the AIAC standard form of contract
The AIAC standard form of contract is perceived to offer a better way to address the problems and close the gaps by giving solutions that comply with CIPAA. Pursuant to that, the AIAC is expected to ensure that the standard form of contract is kept up to date and that updates are aligned with the latest laws and construction court judgments in the Malaysian construction industry. This would enable disputing parties to resolve disputes easily while the works are still in progress. The AIAC is also anticipated to ensure that the new standard form of contract benefits both the employer and the contractor, and it is similarly perceived to be a user-friendly form. The AIAC claims that there are over 60 expressions and words that provide clarity to the contract, such as "Clause 33.0 Fossils, Clause 8.30 Weather Conditions and Clause 23.8(c)(viii) Antiquities". There are some key features claimed by the AIAC (2017), including clarity, integrity, accountability, transparency, continuity and certainty. To summarise the discussion, the academic community has extensively explored payment issues and the use of statutory adjudication. However, little research has been conducted on the significance of including a clause for direct payment under CIPAA 2012 in standard forms of construction contract. To address this gap, this research has been designed to investigate the perceptions of industry players of the inclusion of the direct payment clause under CIPAA in the new AIAC standard form of contract and the other standard forms.

Introduction
This part of the discussion is based primarily on the research process, tools, data collection and data analysis. It rests on two research strategies: legal research based on the analysis of legal cases, and a survey conducted among industry players to gather their views on the new AIAC standard form.

Data collection
This research adopts a descriptive study approach to describe the variables and support investigative enquiries of various sorts.
The descriptive statistics furnish the frequencies, the mean and the standard deviation of the data set. Facts and information that are already available were analysed further to produce a critical analysis of the content. In this research, legal and quantitative approaches have been used to achieve the objectives.

Legal research
The cases were derived from a search conducted through Lexis Malaysia using the keywords "direct payment and building contract", and the selection was limited to more recent cases reported from 2010 to 2017. The cases were then further filtered to those adopting building contracts set out under professional bodies such as Jabatan Kerja Raya (JKR) and Pertubuhan Arkitek Malaysia (PAM), and to cases under the Construction Industry Payment and Adjudication Act (CIPAA).

Quantitative research
A set of questionnaires was distributed to achieve the second objective of the research. The questionnaire responses were used to investigate perceptions of the inclusion of the direct payment clause under CIPAA 2012 in the AIAC standard form of contract. Questionnaires were sent to participants throughout Malaysia using an online form and were distributed to industry players. The target sample comprised thirty respondents.

Data analysis
The first objective was addressed through the legal case analysis. The selected cases were organised in chronological order, from the earliest to the most recent, and were studied from the point of view of the facts of the case, the judgments passed by the courts and the findings of the cases. The cases were further scrutinised to investigate their relevance to the introduction of the AIAC standard form of contract. Data addressing the second objective were analysed using descriptive analysis: after the data had been obtained through the questionnaires, they were coded, edited and entered into a database.

Research limitations
There are several limitations to the research. First, the industry chosen is only the construction industry, and the respondents are from related companies in Malaysia (as this research focuses on CIPAA 2012, which came into force to govern Malaysia). Thus, the results of this research may not be generalisable to other countries with different political, cultural and economic factors. Second, this research examines only the documents involved in the contract documentation and focuses directly on documents and records related to payment issues or falling within the application of the direct payment clause under CIPAA 2012. In carrying out this research, the theoretical and technical assumptions underlying the research methodology in the field of direct payment were reviewed, and the research design for this study was set out. As the research strategy, legal case studies were adopted, combined with survey techniques in which respondents' views were gathered through questionnaires and documentation analysis.
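To make the descriptive analysis above concrete, the following is a minimal sketch in Python of how frequencies, means and standard deviations of the kind reported later in the paper can be computed. The column names and rating values are hypothetical placeholders, not the study's actual data.

import pandas as pd

# Hypothetical five-point Likert responses (1 = strongly disagree ... 5 = strongly agree);
# values are illustrative only, not the study's data.
responses = pd.DataFrame({
    "clause_reduces_nonpayment": [4, 5, 3, 4, 4, 5, 3, 4],
    "clause_changes_culture":    [4, 3, 4, 4, 3, 4, 5, 3],
})

# Frequency of each rating per question
frequencies = responses.apply(lambda col: col.value_counts().sort_index())

# Mean and standard deviation per question
summary = responses.agg(["mean", "std"])

print(frequencies)
print(summary)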
Introduction
This part of the paper discusses the emerging role of the new AIAC standard form of contract in the context of direct payment as a method of solving payment issues. The legal case analysis discusses the common reasons for direct payment under PAM 2006, PWD 2010 and CIPAA, in order to achieve the first objective of the research. The data for the research were obtained from cases extracted from the Lexis Malaysia database. The cases selected were from 2010 to 2017 and were described and analysed based on the common reasons for the occurrence of direct payment. The descriptive statistical analysis discusses the data collected from the questionnaire distributed to 30 respondents, and the interpretation of those data is discussed thoroughly.

Legal case analysis
It can be observed from the legal cases presented in Table 1 that they share several similarities pertaining to direct payment issues. The findings also reveal a few limitations of the direct payment clause in the AIAC standard form of contract. In general, the cases turned on the existence of a contractual agreement on direct payment. In the cases, direct payment agreements existed regardless of whether they were expressly written or orally agreed. In Pembinaan Juta Mekar Sdn Bhd v Sap Holdings Bhd & Ors (2014) 11 MLJ 821, given the consistent conduct of the employer in paying the subcontractor directly for two years, the court held that a contractual relationship existed. In addition, even where an agreement was made orally, with sufficient evidence a subcontractor may exercise its right to obtain payment. Some limitations can be observed from the cases above. Contractors tend to mitigate their responsibility to third parties, whether towards the employer or subcontractors. A possible explanation is that contractors may not fully understand the concept of direct payment. It is also possible that contractors are aware of the concept but try to manipulate and take advantage of the provisions.

Descriptive statistical analysis
A set of questionnaires was completed by thirty respondents. The data were collected to investigate the level of awareness among construction industry players of the introduction of the AIAC standard form of contract and, more importantly, to observe the perspective of construction players on the direct payment clause under the AIAC standard form of contract (with quantities).

Awareness of the AIAC standard form of contract
The first question asked whether the respondents were aware of the new AIAC 2018 standard form of contract. Less than a third of the respondents (24%) indicated that they were aware of the existence of the AIAC standard form of contract. Unfortunately, despite its objective of resolving the prevalent payment disputes, more than two-thirds of the respondents (23 people) indicated that they were not aware of the AIAC standard form of contract. The result may indicate that the AIAC standard form of contract has not yet been fully embraced by the construction industry. The initiatives taken by the AIAC to organise roadshows promoting the standard form of contract appear inadequate for raising awareness of the form's presence in the industry, possibly because the communication channels used do not reach the smaller players. Subcontractors are the critical parties expected to face greater disadvantage when payment disputes arise. As the data indicate a lack of awareness of the AIAC standard form of contract, more promotional activities are needed to make subcontractors aware of the existence of the new form.
AIAC 2018 standard form of contract in future projects
The following question assessed the likelihood of the respondents using the AIAC standard form of contract in the future. Only four respondents answered confidently in the positive, while another five indicated that they did not expect to use the form. Two-thirds responded that they might use the form in their future projects. On a positive note, these responses suggest that the future use of the form looks promising. On the other hand, the majority of responses give a different indication of the form's future use: respondents are either undecided because they have not been fully exposed to the form, or sceptical about its practicality. Another reason contributing to the "uncertain" responses could be the small number of direct payment cases that have been resolved using the provisions of the form. Similarly, the negative responses indicate that some respondents do not trust the new form, and it is possible that they are complacent with the forms already established in the industry. The reasons behind these responses are discussed further in the analysis under section C of the questionnaire.

Direct payment (Clause 30A.0) of the AIAC 2018 standard form of contract can help in reducing "non-payment" or "paid when paid" issues
The next question examined the respondents' agreement on whether the direct payment clause would be able to eliminate or reduce payment issues. This response gives an indication of the potential success of the direct payment clause upon its full implementation. The Relative Importance Index (RII) calculated for this statement is 0.77. The result reveals that although the respondents agree that the direct payment clause can help in reducing the "non-payment" or "paid when paid" issues, they may have some reservations about its success. This could be because the AIAC standard form of contract is still considered new in the industry and has not been used widely. The subsequent question gauged the respondents' level of agreement with the statement that direct payment may have an effect in changing the payment culture that has been inculcated in the industry. A response inclining positively towards the statement would indicate that the direct payment clause has a chance of setting a new culture of payment in the construction industry. The RII is 0.72, which is interpreted as "Agree". This result indicates that the direct payment clause has the potential to change the payment culture in the industry. On the contrary, there is a small chance that the change in culture could lead to a bigger problem in the construction industry. One possibility is the mitigation by contractors of their obligation to pay subcontractors. This potential problem stems from the fact that the direct payment clause is rather vague about the types of payment covered under it.
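For clarity, the sketch below shows how a Relative Importance Index (RII) of the kind quoted above (0.77 and 0.72) is commonly computed from five-point Likert responses. It assumes the conventional formula RII = ΣW / (A × N), where W is the weight given by each respondent (1 to 5), A is the highest weight (5) and N is the number of respondents; the sample ratings are hypothetical, not the study's data.

def relative_importance_index(ratings, max_weight=5):
    """Conventional RII: sum of weights / (highest weight x number of respondents).

    The result lies between 1/max_weight and 1; higher values indicate
    stronger agreement or greater perceived importance.
    """
    n = len(ratings)
    if n == 0:
        raise ValueError("ratings must be non-empty")
    return sum(ratings) / (max_weight * n)

# Hypothetical ratings from 30 respondents (illustrative only)
ratings = [4] * 18 + [3] * 7 + [5] * 5
print(round(relative_importance_index(ratings), 2))  # prints 0.79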
For future improvement of the payment and claim system, all standard forms of contract should be CIPAA-compliant
The final question in the questionnaire assessed the proposition that all standard forms of contract should be improved and made CIPAA-compliant. The response indicates whether the AIAC standard form of contract could succeed as a model form that complies with CIPAA and can be benchmarked as a solution to direct payment issues. From the RII analysis, the index for this question was 0.79. This shows that most respondents agree with the idea of remodelling the standard forms of contract available in the construction industry to be CIPAA-compliant. It is likely that the respondents recognise the importance of CIPAA in solving payment-related issues, especially for subcontractors, who are directly disadvantaged by payment issues. All regulatory bodies, such as CIDB, PAM and PWD, should take the initiative to upgrade their standard forms of contract and adopt CIPAA into their contracts. They should follow the AIAC's move promptly, since their current forms have yet to adopt CIPAA. The extra effort of improving the standard forms of contract may provide a breakthrough for construction industry players who are reluctant to change. In the legal research, out of the five cases, only two were heard after CIPAA was enacted; however, neither opted for adjudication as the mode of payment dispute resolution. Most of the cases were heard in the High Court, a couple went through the Court of Appeal, and one went to the Federal Court. Having a case heard in court is a time-consuming and costly process. Instead of the long-drawn-out process of litigation, the AIAC has made a solution to direct payment problems available by producing a CIPAA-compliant standard form of contract. The standard form complements CIPAA's purpose of solving and avoiding short-term cash-flow problems during project delivery. On the contrary, the cases also showed certain limitations of the AIAC direct payment clause. One setback is that the clause does not clearly define the term "any payment". The term "any payment" in Clause 30A.1 of the AIAC form could lead to misuse and abuse of the clause. In the responses to the distributed questionnaire, all thirty respondents cooperated fully with the research process. Most of the respondents are also well qualified in terms of their education level and working experience. Based on the findings, the direct payment Clause 30A.0 in the AIAC standard form of contract has a very bright future and could have a significant impact on the construction industry's payment system.

Issues pertaining to direct payment
Based on the legal case analysis, the major reasons direct payment issues are referred to court are the validity of the direct payment agreement between the disputing parties and the fact that dispute resolution methods other than litigation were not chosen. Without an express agreement on a direct payment clause, subcontractors' ability to assert their right to be paid can be jeopardised. In addition, the findings show that even though some of the cases were heard after the enactment of CIPAA, the disputants did not opt for adjudication as the payment dispute resolution method. Meanwhile, the research achieved its objective of investigating perceptions of the inclusion of the direct payment Clause 30A.0 in the AIAC standard form of contract. The research identified that the clause could be successfully adopted for future use by the industry.
Given the lack of awareness that such a form exists and the limited understanding of the direct payment concept, the AIAC standard form of contract is not yet fully utilised. Nonetheless, the findings highlight a reluctance on the part of industry players to change from what they are comfortable with to something new.

Possible steps in promoting the direct payment clause in the AIAC standard form of contract
To enhance and elevate the usage of the AIAC standard form of contract, the AIAC could promote the form more widely and extensively. Since the AIAC is now recognised internationally, it is appropriate to extend this exposure internationally, and the AIAC form may also serve as a benchmark for local standard forms of contract to emulate. In addition, it is recommended that the parties concerned receive more training and attend conferences to be educated on this latest standard form. From the data obtained, the respondents are from younger generations who are open to challenges and willing to accept change, which contributes to the probable success of the AIAC standard form of contract. The more educated construction players are about the AIAC standard form of contract, the more successful it will be in the future. It is hoped that the findings will be an eye-opener for construction industry players regarding awareness of direct payment in scaling down the prevalent payment issues in the Malaysian construction industry.
Relative efficacy of human monocytes and dendritic cells as accessory cells for T cell replication.

Monocyte-specific monoclonal antibodies (7) were used to compare the efficacy of monocytes and dendritic cells as accessory or stimulator cells for human T cell replication. Both unfractionated and plastic-adherent mononuclear cells were first treated with a cytolytic antimonocyte antibody that kills greater than 95% of monocytes but not dendritic cells. When tested as stimulators of the mixed leukocyte reaction (MLR) and of oxidative mitogenesis (the proliferation of T cells modified with sodium periodate), the monocyte-depleted cells had normal or enhanced stimulatory capacity. Monocyte-depleted mononuclear cells also proliferated normally to soluble antigens (Candida albicans, tetanus toxoid), even under limiting conditions of cell dose, antigen dose, and culture time. Adherent blood mononuclear cells were next separated into monocyte-enriched and -depleted components using fluoresceinated antimonocyte antibody and the cell sorter. The depleted fraction (less than 2% monocytes by esterase staining and by cytology) contained the dendritic cells and exhibited at least 75% of the accessory activity. The monocyte-rich fraction (approximately 97% esterase positive) stimulated the MLR and oxidative mitogenesis weakly, and was comparable in potency to nonadherent cells. Cell-specific antibodies and complement were also used to prepare dendritic cells that were thoroughly depleted of monocytes and lymphocytes. The dendritic cells (70-80% pure) were potent stimulators of the allogeneic MLR, syngeneic MLR, and tetanus toxoid response, being active at stimulator to responder ratios of 1:100 or less. Taken together with previous studies (1, 2), these experiments indicate that the dendritic cell is the major stimulator of T cell replication in man. The contribution of class II products of the major histocompatibility complex (7) was then evaluated with a new monoclonal, 9.3F10. Accessory function was dramatically inhibited if cells bearing class II antigens were killed with 9.3F10 and complement, or if class II molecules were blocked by the addition of 9.3F10 Fab to the culture medium. The expression of 9.3F10 class II products was therefore studied on purified monocytes and dendritic cells. Most if not all cells in both populations reacted with 9.3F10, and each population exhibited approximately 150,000 ¹²⁵I-Fab 9.3F10 binding sites per cell. Since Ia+ dendritic cells are active accessory cells, but Ia+ monocytes are not, class II products are necessary but not sufficient for the stimulation of T cell proliferation in man.

Two recent studies (1, 2) have identified cells in human blood that fully resemble the dendritic cells described previously in mice and rats (3). Among other similarities, the human equivalent is Ia positive and Fc receptor negative, occurs in trace numbers (<1% of blood mononuclear cells), and acts as a potent stimulator of T cell proliferation in vitro. For example, preparations enriched in dendritic cells are 10-100 times more active than monocytes or lymphocytes in stimulating the syngeneic and allogeneic mixed leukocyte reactions (MLR), as well as oxidative mitogenesis, the proliferation of periodate-modified T cells (1, 2). Therefore the prevailing concept that monocytes are the principal accessory cells in man must be reexamined. Monoclonal antibodies that distinguish macrophages from dendritic cells provide new probes for accessory or stimulator cells in the immune response.
Selective depletion of murine dendritic cells with specific antibody and complement decreases accessory function dramatically (4-6). In man, antidendritic cell antibodies are not available, but alternative and useful reagents have been obtained. For example, 3C10 and 1D9 are related antimacrophage antibodies that do not react with dendritic cells, while 9.3F10 is an anti-HLA class II reagent (7) that reacts with both cell types. In this paper, we use these monoclonals to study the requirements for T cell proliferation. T cell growth is severely reduced when accessory cells are depleted with 9.3F10 and complement, or when an Fab fragment of 9.3F10 is added to the culture. Positive and negative monocyte selection experiments, with the fluorescence-activated cell sorter and with complement-mediated cytolysis, indicate that monocytes contribute little if at all to accessory function. In contrast, highly enriched and monocyte-depleted dendritic cells are potent stimulators of the syngeneic and allogeneic MLR and the response to soluble tetanus toxoid. Monocytes and dendritic cells express similar levels of Ia antigens, however, indicating that class II products need to be expressed on dendritic cells to induce several T cell-proliferative responses in man.

Cytotoxicity Assays. A one-stage cytotoxicity protocol was used in which equal volumes of cells, antibody, and rabbit complement (C') were incubated for 60 min at 37°C. All reagents were diluted in RPMI 1640 (Gibco Laboratories, Grand Island, NY), 0.3% bovine serum albumin, 25 mM Hepes buffer, and 10 µg/ml deoxyribonuclease (type I; Sigma Chemical Co., St. Louis, MO). The final concentrations of reagents were 1.6-2.5 × 10⁶/ml for cells, 1-10 µg/ml for antibodies, and 1:9 for rabbit serum reconstituted from lyophilized samples from two different rabbits. After treatment, the cells were washed three times and used as accessory cells or as responders in T cell proliferation assays. All cell numbers in Results are viable (trypan blue excluding) counts. Elimination of monocytes was monitored by cytology and nonspecific esterase staining (1).

Cell Sorting. Adherent blood mononuclear cells were cultured overnight, and the released fraction (50% or more monocytes) was sorted into monocyte-rich and -depleted components using fluoresceinated 1D9 antimonocyte antibody and a fluorescence-activated cell sorter (FACS II; B-D FACS Systems, Sunnyvale, CA) equipped with a Spectra-Physics 5 W argon-ion laser. Optical filtration was achieved by placing 520- and 530-nm "cut-on" filters (Ditric Optics, Hudson, MA) in series. The following instrument parameters were used in all cell sorting and analytical experiments: fluorescence-excitation wavelength, 488.8; laser power, 300 mW; photomultiplier voltage, 750 V; fluorescence preamplifier setting, 1-8; light scatter preamplifier setting, 2. Data were collected and displayed as dual-parameter contour diagrams of fluorescence and light scattering intensity. Generally, the cells were sorted at ~2,200 cells per minute. The abort rate was <15% and total cell recovery was 80%. Directly fluoresceinated 1D9 was used to stain cells because, relative to indirect immunofluorescence, there were fewer dead cells (3 vs. ≥10%) and staining was simpler and faster. 1D9 antibody was conjugated to fluorescein isothiocyanate isomer 1 (FITC-celite, F-1628; Sigma Chemical Co.). 0.5 mg of 1D9 Ig and 0.5 mg of FITC-celite were mixed for 1 h at room temperature in 0.33 ml sodium carbonate, 0.1 M, pH 9.5.
Unconjugated FITC was removed by gel filtration on a 7.5 × 600-mm LKB TSK-3000 column (LKB-Produkter AB, Bromma, Sweden) in 0.1 M Na₂PO₄, pH 6.5, with 0.02% NaN₃. The resulting reagent had a fluorescein/protein ratio of 4.8 and was used to stain cells at 5 µg per 5 × 10⁶ cells/ml RPMI 1640 with 10% horse serum, on ice for 45 min. The cells were washed twice in phosphate-buffered saline before sorting under sterile conditions.

Cells. Blood samples were obtained from normal volunteers in our laboratory, or were purchased as buffy coats from the Greater N.Y. Blood Center. Whole or unfractionated mononuclear cells were prepared on Ficoll-Hypaque columns. Adherent cells were selected after culture on 100-mm plastic petri dishes (Falcon Labware, Oxnard, CA) in RPMI 1640 supplemented with 5% fetal calf serum. The adherent cells represented 20-40% of total cells and typically comprised 50-70% monocytes, 20-40% B cells, 10-20% T cells, and 1-4% dendritic cells. After overnight culture, 60-80% of the adherent cells had detached, and these were either used as accessory cells directly or used to prepare dendritic cells. Monocytes that remained attached to the dish after overnight culture were shown previously to be weak or inactive as accessory cells (1). Dendritic cells were enriched by a new technique described in the accompanying paper (7). Briefly, adherent cells were cultured overnight, treated with 3C10 and C' for 1 h, washed, and treated again with 3C10 along with BA-1 and Leu-1 (anti-B and anti-T cell) antibodies and C'. After 2 h of additional culture, viable cells were retrieved by flotation on dense albumin columns. 0.1-0.2% of the starting mononuclear cells were recovered, and 70-80% were dendritic cells by cytologic criteria; contamination with monocytes and lymphocytes was <2%. Control low density populations were adherent cells that had been exposed to C' in the absence of antibody; these contained 60% monocytes and 5-10% dendritic cells. Monocyte-enriched fractions were also obtained from firmly adherent populations (see above and reference 1) detached with EDTA.

T Cell Proliferation Assays. Graded doses of viable, irradiated (3,000 rad) stimulator cells were added to 1.5 × 10⁵ (6-mm flat microtest wells, 3596; Costar, Cambridge, MA) or 2 × 10⁶ (16-mm flat macrotest wells, 3524; Costar) responding T cells, obtained by passing nonadherent blood mononuclear cells over nylon wool as described (1). These responder cells were unrelated to the donor (allogeneic MLR), syngeneic to the donor (syngeneic MLR), or syngeneic and modified with 2 mM sodium periodate (oxidative mitogenesis). Cultures were maintained in RPMI 1640 supplemented with 10% human AB serum, 20 µg/ml gentamicin, and 100 U/ml penicillin. Proliferative responses to soluble antigens were performed in cultures of 0.3 and 1.0 × 10⁵ responders in round-bottomed wells (Linbro Chemical Co., Hamden, CT). 15% autologous plasma, rather than 10% AB serum, was used because it supported higher and more consistent proliferative responses. The antigens were tetanus toxoid (Massachusetts Dep't of Health, Boston, MA) and Candida albicans extract (Hollister-Stier, Spokane, WA). Cultures were pulsed with 1 µCi of [³H]thymidine at 120-140 h. Responder cells were either unfractionated mononuclear cells or purified T cells prepared by passing nonadherent populations over nylon wool columns, followed by further depletion of accessory cells with 9.3F10 (anti-Ia) antibody and complement.
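As a concrete illustration of the graded stimulator doses described above, the sketch below computes the number of irradiated stimulator cells needed per microtest well for a range of stimulator to responder ratios. The figure of 1.5 × 10⁵ responders per well is taken from the text; the particular ratios listed are an illustrative selection of those cited later in the Results.

# Assumes 1.5e5 responding T cells per flat microtest well (from the text).
RESPONDERS_PER_WELL = 1.5e5

def stimulators_per_well(n):
    """Stimulator cell count for a 1:n stimulator-to-responder ratio."""
    return RESPONDERS_PER_WELL / n

for n in (12, 24, 100, 320):
    print(f"1:{n} -> {stimulators_per_well(n):,.0f} stimulator cells per well")
# 1:12 -> 12,500; 1:24 -> 6,250; 1:100 -> 1,500; 1:320 -> 469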
Blocking and Binding Studies with Anti-Ia or Anti-Class II Antibodies. 9.3F10 is considered to be an anti-class II reagent because it precipitates typical 33,000 and 29,000 mol wt class II polypeptides from monocytes, and reacts with B cells, monocytes, dendritic cells, and B cell lines, but not with T cells or HLA-DR-negative lines such as K562, CEM T, U937, HL60, and Jurkat (7). The precise specificities identified by 9.3F10 have not been defined. An Fab fragment was prepared by papain digestion of ascites-derived Ig as described (7). The Fab fragment was stored as a sterile solution of 500 µg/ml and used at a concentration of 6 µg/ml to block T cell proliferative responses in vitro. Control noninhibitory Fab fragments were 3C10 (antimonocyte) and 3G8 (anti-Fcγ receptor, kindly provided by Dr. Howard Fleit and colleagues [8]). On a molar basis, 9.3F10 Fab was 10 times less effective than 9.3F10 Ig in blocking proliferative responses to tetanus toxoid. Quantitative binding studies with ¹²⁵I-9.3F10 Fab were performed on 0.5-1 × 10⁵ purified dendritic cells and monocytes, as described (7).

Results
Monoclonal antimonocyte antibodies were used in three sets of experiments to identify the active stimulator of T cell growth in man. Specifically, the antibodies were used to deplete monocytes by C'-mediated cytotoxicity, to sort monocytes from other cell types on the FACS, and to help purify dendritic cells. Since Ia+ monocytes proved to be weak accessory cells, we tested whether all proliferative responses were mediated by HLA class II determinants and compared the expression of class II antigens on monocytes and dendritic cells in quantitative binding studies.

Selective Monocyte Depletion with Antibody and C' Does Not Reduce Stimulatory Capacity for the MLR and Oxidative Mitogenesis. 3C10 is an IgG2b monoclonal antibody that, in the presence of rabbit C', kills >95% of human monocytes but no other blood cells, including dendritic cells (7). In the experiments described here, monocyte depletion was monitored by nonspecific esterase staining (see Fig. 1 and the legends to Tables I-III). Monocyte-depleted mononuclear cells were fully capable of stimulating the allogeneic MLR and oxidative mitogenesis, even at limiting stimulator to responder ratios (Table I).

[Figure 1. Elimination of monocytes with 3C10 and C'. Cells were exposed to C' only (left) or 3C10 antibody and C' (right), after which cytospin preparations were stained for nonspecific esterase. The top frames are unfractionated mononuclear cells and the lower frames adherent populations. Cells with dark cytoplasmic staining were classified as monocytes. Monocyte-depleted populations contain lymphoid cells with single, esterase-positive granules. × 400.]

In contrast, treatment with the anti-class II monoclonal antibody 9.3F10 (7) and C' totally eliminated stimulating capacity (Table I). In most instances (e.g., experiments 2 and 3, Table I), treatment with anti-Ia alone was partially inhibitory. The monocyte depletion experiments were repeated using adherent mononuclear cells as stimulators. These cells were enriched in stimulating capacity relative to total mononuclear cells (compare the proliferative responses in Tables I and II). Removal of monocytes (Fig. 1) did not reduce accessory function and, in some cases, stimulatory function was actually increased (experiments 1 and 2, Table II). The latter can be attributed to the increased percentage of dendritic cells in the monocyte-depleted adherent fractions.
In all cases, significant MLR and oxidative mitogenesis were induced even at limiting stimulator to responder ratios (1:12). At this dose, cultures stimulated with monocyte-depleted adherent cells had <1 monocyte per 2,000 T cells.

Antigen-induced Proliferative Responses in Monocyte-depleted Cultures. It is known that large numbers of monocyte-enriched adherent cells (~5% of the culture) are required to reconstitute normal antigen-induced proliferative responses by purified T lymphocytes (9-13). The number of monocytes surviving treatment with 3C10 antibody and C' was so low (<1% of the culture) that we could ask whether selective monocyte depletion had any effect on proliferative responses to soluble protein antigens. Responses to tetanus toxoid and Candida albicans remained intact in the virtual absence of monocytes, even under culture conditions that were limiting in terms of antigen dose, cell dose, and time in culture (Table III).

[Table legend: The experiments were constructed as in Table I, except that irradiated adherent mononuclear cells were used as accessory cells or stimulators; the percentage of monocytes was determined by nonspecific esterase staining.]

Treatment with 9.3F10 antibody (anti-class II or anti-Ia-like) and C' dramatically reduced proliferative responses. Therefore monocytes seem unnecessary for proliferative responses to soluble antigens; treatment with anti-class II antibodies and C' probably eliminates responses by killing another cell type, such as the Ia+ dendritic cell.

In establishing the gating conditions for sorting, we noted that 1D9 stained larger profiles in both adherent (Fig. 2A) and unfractionated mononuclear cells (not shown); also, 1D9 staining was blocked by the addition of excess nonfluoresceinated 3C10 (Fig. 2B) but not by 9.3F10 (not shown). The success of the sorting procedure under our gating conditions (Fig. 2C) was monitored by cytology and by nonspecific esterase staining. The 1D9+ fraction was ≥97% monocytes by both criteria, while the 1D9- fraction was 2% monocytes (Fig. 3, Table IV). The 1D9- fraction consisted primarily of lymphocytes, but also contained most of the dendritic cells. Adherent cells that had been sorted with fluoresceinated 1D9 were tested for functional activity. Control studies indicated that exposure to 1D9 did not significantly alter the capacity of adherent cells to stimulate the MLR or oxidative mitogenesis (Table V, experiments 1 and 3), and that unsorted cells behaved similarly to mixtures of 1D9+ and 1D9- cells (Table V, experiment 1). When the accessory functions of the sorted populations were compared, the 1D9- or monocyte-depleted fraction was consistently at least four times more active (Table V). Also, at least 75% of the total accessory activity was in the monocyte-depleted fraction, since the 1D9- cells represented 30-50% of the total. The 1D9+ or monocyte-enriched fraction stimulated the allogeneic MLR and oxidative mitogenesis weakly, and its stimulating capacity was comparable to that of nonadherent mononuclear cells. In most experiments, the syngeneic MLR was weak (<500 cpm), probably because dendritic cells were not greatly enriched by sorting; however, in one case (Table V, experiment 2), a significant syngeneic MLR was induced only by the monocyte-depleted populations. We conclude that monocytes are not the active accessory component of adherent cells from blood.

Accessory Function of Highly Enriched Dendritic Cells from Blood.
To obtain evidence that dendritic cells were the principal accessory cells in adherent populations, we used monoclonal antibodies (3C10, antimonocyte; BA-1, anti-B; and Leu-1, anti-T cell) to prepare highly enriched dendritic cells that were severely depleted of monocytes and lymphocytes (7, and Materials and Methods). Control populations were exposed to C' only and therefore contained large numbers of monocytes, some lymphocytes, and some (5-12%) dendritic cells.

[Table legend: Unfractionated mononuclear cells or plastic-adherent cells were stained with fluoresceinated 1D9 and separated into 1D9-positive and 1D9-negative fractions on the FACS II, or left unseparated (unfractionated). The three populations were then evaluated for the percentage (mean ± standard error) of fluorescent cells (fluor+) and the percentage of cells staining diffusely for nonspecific esterase (NSE+). The experiments were constructed as in Table I.]

The monocyte- and lymphocyte-depleted adherent cells (primarily dendritic cells) were highly enriched in MLR-stimulating capacity, and were active at stimulator to responder ratios from 1:24 to 1:100 (Fig. 4). Purified dendritic cells were then compared with monocyte-enriched populations as stimulators of the proliferative response to soluble tetanus toxoid antigen. Purified T cells, depleted of Ia+ cells with 9.3F10 and C', were used as responders. Accessory function in the dendritic cell fraction was clear-cut even at stimulator to responder ratios of 1:320 (Fig. 5). Monocyte-enriched populations were much less active (4-10-fold), and this activity may well have been due to contaminating dendritic cells. For example, 5-20% monocyte-enriched cells were required to elicit significant tetanus toxoid responses (Fig. 5), yet previous studies had shown that monocytes could be depleted to a level of 1% with no loss of function (Table III). We conclude that dendritic cells, in the virtual absence of monocytes and lymphocytes, are potent stimulators of proliferative responses to soluble antigens.

Contribution of Class II (Ia-like) Molecules to Accessory Function. Treatment with the anti-class II antibody 9.3F10 and C' eliminates accessory function (Tables I-III) but does not affect T cell responsiveness (Fig. 5). Class II molecules could either be a marker for active accessory cells such as dendritic cells (1-4), and/or could contribute directly to function. Evidence for the latter was obtained in experiments in which the Fab fragment of 9.3F10 was present continuously. T cell proliferation in the allogeneic MLR, the syngeneic MLR, oxidative mitogenesis, and the tetanus toxoid response was blocked by 9.3F10 Fab, in most cases by 80-90% (Fig. 6). No inhibition was seen with the two other monoclonal Fab fragments, the antimonocyte reagent 3C10 and the anti-Fc receptor reagent 3G8 (8).

[Figure legend: The viable cells, which were 65-75% dendritic cells, were retrieved by flotation on dense albumin columns. Low density cells were also obtained from adherent populations exposed to C' only; these control cells were 60% monocytes and 5-12% dendritic cells. In both experiments, stimulation by whole blood mononuclear cells is presented for comparison; however, these cells were not exposed to antibody, C', or albumin columns. [³H]thymidine uptake of T cells in the absence of stimulators was 151 and 224 cpm, respectively.]

[Figure 5. Stimulatory capacity of dendritic cells in the proliferative response to tetanus toxoid antigen and in the syngeneic MLR (no antigen). As in Fig.
4, adherent cells were treated with cytolytic 3C10, BA-1, and Leu-1 antibodies to provide dendritic cells that were 75% pure. Control low density adherent cells were exposed to C' only and were primarily monocytes. Monocytes were also obtained from firmly adherent cells (1); the latter contained small numbers of dendritic cells and were not exposed to antibodies, C', or albumin columns.]

Purified monocytes and dendritic cells were then evaluated for reactivity with 9.3F10 (Table VI). Most if not all monocytes and dendritic cells were stained by indirect immunofluorescence. In quantitative binding studies with ¹²⁵I-9.3F10 Fab, monocytes and dendritic cells both expressed approximately 150,000 binding sites per cell.

[Legend to Table VI: Dendritic cells (80 and 62% pure by cytologic criteria) were prepared by the antibody-mediated cytolysis method. Control or low density adherent cells were treated with C' only; this population was 71% monocytes, 12% dendritic cells, and 17% lymphocytes. Highly enriched monocytes were obtained from the persistently adherent population and were >90% monocytes. The number of Ia+ cells in each population was determined by immunofluorescence with 9.3F10. Binding of ¹²⁵I-9.3F10 Fab (specific activity of 6 × 10⁶ cpm/µg) was determined in duplicate at two saturating concentrations (2.5-10 µg/ml). The value shown is the mean of the determinations, with a standard error of <15%. Data are expressed per Ia+ cell.]

We conclude that class II products are needed but must be expressed on dendritic cells to stimulate the T cell proliferative responses studied in this paper.
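To make the binding arithmetic explicit, the sketch below estimates binding sites per cell from saturation binding data, using the specific activity quoted in the Table VI legend and an assumed Fab molecular weight of about 50,000 g/mol. The bound-cpm value and cell number are hypothetical inputs chosen to reproduce a figure of roughly 150,000 sites per cell.

AVOGADRO = 6.022e23
FAB_MW_G_PER_MOL = 5.0e4  # assumed molecular weight of an Fab fragment

def sites_per_cell(bound_cpm, specific_activity_cpm_per_ug, n_cells):
    """Estimate antibody binding sites per cell from bound radioactivity."""
    bound_ug = bound_cpm / specific_activity_cpm_per_ug  # mass of bound Fab (ug)
    bound_mol = bound_ug * 1e-6 / FAB_MW_G_PER_MOL       # ug -> g -> mol
    return bound_mol * AVOGADRO / n_cells

# Hypothetical example: ~7,500 cpm bound to 1e5 cells at 6e6 cpm/ug
print(f"{sites_per_cell(7.5e3, 6e6, 1e5):.3g}")  # ~1.5e+05 sites per cell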
Discussion
It is well known that accessory cells from human blood adhere to glass or plastic. For example, stimulator cells for the primary MLR and for oxidative mitogenesis are enriched in an adherent fraction that represents 20-30% of total mononuclear cells; the nonadherent population, which contains most of the T cells and many of the B cells, is weak or inactive (e.g., experiments 2 and 3, Table V). The monocyte is the predominant adherent cell and is often assumed to be responsible for accessory function. Small numbers of dendritic cells are also present in adherent populations (1, 2). If one is to determine how accessory cells initiate immune responses, it is essential to analyze the capacities of each adherent cell type, no less so than analyzing the different kinds of lymphocytes that mediate immunity. However, the characterization of dendritic cells in man has been more demanding than in mouse or rat. Human blood has a large (~20-fold) excess of monocytes relative to dendritic cells. Also, the two cell types do not differ as much in physical properties (buoyant density and capacity to adhere firmly to glass or plastic) as do their rodent counterparts. Therefore it has not been clear whether monocytes can function as accessory cells, or whether the function of dendritic cell-enriched preparations has been due to dendritic cells alone. In this paper, we have used monoclonal antimonocyte and anti-HLA class II antibodies to further characterize accessory cells in man.

Selective Depletion of Monocytes. The first approach was to eliminate most (>95%) of the monocytes with specific antibody and rabbit C' (Fig. 1, Tables I-III). This treatment did not kill dendritic cells or other cell types (7). Elimination of monocytes did not reduce stimulation of the MLR and oxidative mitogenesis, and did not reduce proliferative responses to the soluble antigens Candida albicans and tetanus toxoid. It would be difficult to establish whether treatment with 3C10 and C' removed every monocyte, and it is often reasoned that one must remove virtually every macrophage to render lymphocytes accessory cell dependent. However, this hypothesis is inconsistent with many dose-response studies in which it has been observed that much larger numbers (5-30% of the culture) of adherent cells are required to fully reconstitute lymphocyte function after depletion of adherent cells (9-15). It is possible that small numbers of monocytes exert a trophic effect in vitro, but our data suggest that the key accessory cell that must be removed and replenished is the dendritic cell rather than the monocyte. Previous studies have made relatively little use of specific antimacrophage antibodies and C' to deplete accessory function. Raff et al. (16) described the "Mac-120" monoclonal, which can kill ~50% of monocytes and reduce stimulator function for the syngeneic MLR and for antigen-induced proliferative responses. It has not been established whether treatment with Mac-120 alters the function of dendritic cells. This possibility must be entertained, since 3C10 killed >95% of monocytes but did not reduce accessory function even under limiting assay conditions (Tables I-III).

Positive Selection of Monocytes. The second approach in this paper was to separate adherent mononuclear cells into monocyte-rich and -poor populations using fluoresceinated antimonocyte antibodies and the FACS (Figs. 2 and 3, Tables IV and V). Sorting provided, in one step and in good yield, populations that were ≥97% and ~2% monocytes by esterase staining and by cytology. The monocyte-depleted fractions, which contained the dendritic cells, had the bulk (~75%) of the accessory activity. The monocyte-rich fraction exhibited only weak activity, comparable to that seen with nonadherent mononuclear cells. Rosenberg et al. (17) noted that monocytes, selected with the 63D3 antimonocyte monoclonal antibody, restored pokeweed mitogen responses in sparse cultures. However, dose responses comparing monocytes and dendritic cells in this assay were not presented, and it was not clear whether both dendritic cells and/or monocytes had to be removed to render lymphocytes accessory cell dependent. The 1D9 sorting experiment (Table V) represents the first time that macrophages and dendritic cells have been separated from one another using a specific antibody. Analogous experiments have been performed with mouse spleen adherent cells using a one-step readherence method. The dendritic cell-rich component contained most of the stimulatory capacity for the MLR (3, 18) and for the development of hapten-specific cytolytic T cells (19). In interpreting positive and negative selection experiments, one must consider the fact that monocytes can inhibit lymphocyte responses in vitro (e.g., 20-22). Typically, however, the addition of monocytes to dendritic cells (Table IV in

Function of Purified, Monocyte-depleted Dendritic Cells. Antimonocyte antibodies were used in a third type of study that considered the capacity of purified dendritic cells to stimulate T cells in the primary MLR and tetanus toxoid response (Figs. 4 and 5). In our previous work (1), dendritic cells were purified entirely by "physical" techniques. These cells lost the capacity to adhere to plastic after overnight culture (unlike most monocytes) and had a low buoyant density (unlike most lymphocytes).
A dendritic cell-enriched fraction could be obtained by selecting adherent mononuclear cells that, after culture, were low density and loosely adherent. Yet these preparations contained at least 10% monocytes and lymphocytes, which could have contributed to function. Contaminating cells could be removed by rosetting methods (erythrocytes treated with neuraminidase for T cells; with antibody for Fc receptor-bearing monocytes; or with anti-human Ig for B cells), but only with a considerable loss of dendritic cells. In contrast, elimination of monocytes and lymphocytes with specific cytolytic antibodies (3C10, BA-1, Leu-1) provided dendritic cells that were highly enriched, obtained in good yield, and depleted of contaminating cells by standard criteria. These enriched preparations of dendritic cells were strong stimulators of T cell proliferation to alloantigens and soluble proteins (Figs. 4 and 5). Activity was detected at stimulator to responder ratios of 1:100 or less. The identification of accessory cells could be made more rigorous if one could relate activity in any cell population to its precise content of dendritic cells. This is not yet possible, for three reasons. First, low frequencies of dendritic cells (0.1-3.0%), as occur in unfractionated blood and monocyte-enriched populations, cannot be enumerated rigorously. Second, the purification of dendritic cells, monocytes, and responding T cells requires procedures that often cannot be applied uniformly to every population under study. Third, small numbers of dendritic cells in either the stimulator or responder populations may enhance responses to other cells. Thus mouse spleen dendritic cells enhance cytolytic T cell responses to hapten and class I alloantigens on T cells and on Ia- splenocytes (19, 23). Given the strong stimulatory capacity of dendritic cells, the weak capacity of monocytes and lymphocytes, and the failure to observe any loss of function with extensive monocyte and lymphocyte depletion (1, 2, this paper), we would conclude that the dendritic cell is the principal accessory cell in blood.

Contribution of HLA Class II Molecules to T Cell Growth. An extensive literature documents the fact that class II products of the major histocompatibility complex act as restriction elements for T cells. We have used a new monoclonal, 9.3F10, to study the contribution of HLA class II molecules to the MLR, oxidative mitogenesis, and the tetanus toxoid response. Although the determinant identified by 9.3F10 is not known, the antibody precipitates a 33,000/29,000 mol wt doublet typical of class II products, and reacts with most monocytes, B cells, dendritic cells, and Ia+ cell lines. Conceivably, 9.3F10 recognizes a specificity common to the products of many class II loci, since it has the notable capacity to block T cell proliferation even when used as an Fab fragment (Fig. 6). Our working hypothesis is that 9.3F10 allows one to quantitate those Ia molecules needed for the proliferation of most class II-restricted T cells. Strikingly, both dendritic cells and monocytes bind comparable amounts of 9.3F10 Fab (Table VI). Since monocytes are weak or inactive in stimulating T cell growth, it seems that class II products must be present on dendritic cells to initiate replication. Most likely, 9.3F10 blocks replication by inhibiting the recognition of dendritic cell class II products. Studies in mice and guinea pigs indicate that monoclonal anti-Ia antibodies can block function at the level of the accessory cell (e.g., 24, 25).
It has been reported that anti-Ia can also inhibit the T cell response to interleukins (26). However, interleukin-mediated human T cell growth, monitored as described recently (5), was not inhibited by 9.3F10 (J. M. Austyn, personal communication). It is not clear why Ia+ human monocytes and dendritic cells have such different functional capacities. Comparable observations have been made in studies of Ia+ macrophages in mice (3-6, 18, 19, 27). We favor the idea that dendritic cells are differentiated to induce responses in unprimed, resting, or memory T cells, as were studied in this paper. The dendritic cell probably acts directly to initiate the response of class II-restricted cells, as well as indirectly on other T cells by controlling the release of soluble mediators like T cell growth factor (5). Ia+ monocytes may very well interact directly with class II-restricted activated T cells or their products during the effector limb of the immune response. Yet the monocyte does not mediate the formation of sensitized cells, which appears to be the function of specialized dendritic cells.

Summary

Monocyte-specific monoclonal antibodies (7) were used to compare the efficacy of monocytes and dendritic cells as accessory or stimulator cells for human T cell replication. Both unfractionated and plastic-adherent mononuclear cells were first treated with a cytolytic antimonocyte antibody that kills >95% of monocytes but not dendritic cells. When tested as stimulators of the mixed leukocyte reaction (MLR) and of oxidative mitogenesis (the proliferation of T cells modified with sodium periodate), the monocyte-depleted cells had normal or enhanced stimulatory capacity. Monocyte-depleted mononuclear cells also proliferated normally to soluble antigens (Candida albicans, tetanus toxoid), even under limiting conditions of cell dose, antigen dose, and culture time. Adherent blood mononuclear cells were next separated into monocyte-enriched and -depleted components using fluoresceinated antimonocyte antibody and the cell sorter. The depleted fraction (<2% monocytes by esterase staining and by cytology) contained the dendritic cells and exhibited at least 75% of the accessory activity. The monocyte-rich fraction (>97% esterase positive) stimulated the MLR and oxidative mitogenesis weakly, and was comparable in potency to nonadherent cells. Cell-specific antibodies and complement were also used to prepare dendritic cells that were thoroughly depleted of monocytes and lymphocytes. The dendritic cells (70-80% pure) were potent stimulators of the allogeneic MLR, syngeneic MLR, and tetanus toxoid response, being active at stimulator to responder ratios of 1:100 or less. Taken together with previous studies (1, 2), these experiments indicate that the dendritic cell is the major stimulator of T cell replication in human blood.
Preferences for Digital Smartphone Mental Health Apps Among Adolescents: Qualitative Interview Study

Background: Mental health digital apps hold promise for providing scalable solutions to individual self-care, education, and illness prevention. However, a problem with these apps is that they lack engaging user interfaces and experiences and thus potentially result in high attrition. Although guidelines for new digital interventions for adults have begun to examine engagement, there is a paucity of evidence on how best to address digital interventions for adolescents. As adolescence is a period of transition, during which the onset of many potentially lifelong mental health conditions frequently occurs, understanding how best to engage this population is crucial.

Objective: The study aims to detect potential barriers to engagement and to gather feedback on the current elements of app design regarding user experience, user interface, and content.

Methods: This study used a qualitative design. A sample of 14 adolescents was asked to use the app for 1 week and was interviewed using a semistructured interview schedule. The interviews were transcribed and analyzed using thematic analysis.

Results: Overall, 13 participants completed the interviews. The authors developed 6 main themes and 20 subthemes based on the data that influenced engagement with and the perceived usefulness of the app. Our main themes were timing, stigma, perception, congruity, usefulness, and user experience.

Conclusions: In line with previous research, we suggest how these aspects of app development should be considered for future apps that aim to prevent and manage mental health conditions.

Background

The rise in common mental health issues among adolescents is a distressing trend. The World Health Organization has estimated that 20% of adolescents experience mental health conditions, and most of them do not receive or seek appropriate diagnosis and care [1]. Addressing this concern is an essential component of the current global mental health agenda [2]. Innovative solutions delivered by mental health apps (MHapps) could represent a feasible solution to tackle this issue. There is already a plethora of mental health mobile apps available to adolescents, and the increasing use of smartphones in this group might make these apps more acceptable and accessible [3]. Studies have also suggested that complex app-based mental health interventions for adolescents are feasible. For instance, in one study, cognitive behavioral therapy in the form of SMS text messages was considered useful by 75% of the participants [4]. Moreover, significant engagement with MHapps has been shown in the past, with nearly three-quarters of adolescents completing more than 80% of diary entries over the 1-week intervention period [5]. MHapps also have the potential to reduce barriers to face-to-face help seeking, including stigma and distress about discussing one's own mental health [6]. This aspect of MHapps may be appealing to young people, given that most adolescents would not seek or pursue help with respect to mental health through traditional routes [7]. Adolescents' familiarity with mobile devices suggests that technology-based approaches would benefit them [8]; however, it is crucial to understand how best to tailor digital interventions to make them the most appealing.
Tucker and Goodings [9] identified three themes that characterize most current MHapps, as follows: stress-inducing or stress-reducing apps, apps for configuring the body in space, and digital self-care apps. However, apps targeting adolescents likely need to expand into other areas such as positive focus, customizable features, human-human interaction, and easy access [3]. In terms of help-seeking preferences, adolescents have also expressed a desire for web-based, accessible information and health interventions, which are all technology-based needs rather than needs that can be met via in-person, telephone-based, or paper-based services [10]. When considering communication with providers, adolescents preferred email or text over video communication [11]. Despite their familiarity with digital technologies, engagement is generally low and evidence on the usefulness of concrete features is still scarce [12,13]. In addition, engagement with MHapps seems to vary independently from the presence of evidence-based features [14]. This raises questions about the clinical effectiveness and safety that undermine trust in both users and providers [15]. Furthermore, high dropout rates are generally associated with poor user experience (UX), whereas the specific components of engaging MHapp design are yet to be determined [16,17].

Objective

In this study, we seek to identify the key preferences and attitudes of adolescents that future digital mental health interventions may need to take into account to successfully reach this population. For this purpose, we used the Thrive mental health app, which has an established evidence base [18,19], to explore adolescents' perceptions of the potential usability of such a tool in their everyday life. Using a qualitative design, we gathered feedback from a sample of adolescents through face-to-face interviews. Specifically, our goal was to detect potential barriers to engagement and to gather feedback on the current elements of app design regarding UX, user interface (UI), and content. The long-term aim of this exploratory study is to provide the foundations for creating digital interventions for adolescents that are equally driven by clinical rigor and UX to help retain engagement.

Ethics and Preregistration

This research study was approved by the Roehampton University ethics board (reference number: PSYC 18/306) and preregistered as a qualitative protocol (reference number: TCYP171110). All methodologies adhered to the protocol unless stated otherwise.

Participants and Procedure

We recruited a total of 8 male and 6 female participants (N=14). Overall, 11 participants were recruited from a local secondary school attended by approximately 1300 students at the time of the study. Students were made aware of the study through combined advertisements in the school. For interested students, the researchers gave a talk that outlined the overall purpose of the app and explained that the study was trying to understand what features of digital MHapps were most useful to keep students engaged. Furthermore, 3 participants were recruited through their parents, who were also users of the app and knew about the study. These 3 participants were enrolled after an introductory conversation with the researchers, explaining the details and the purpose of the study. Participants in the school were asked to get in touch with the lead teacher to express their interest in participating in the study.
The teachers assessed whether the students adhered to the inclusion criteria to be a part of the study. Researchers then got in touch with the lead teacher to schedule a meeting in person on school grounds to interview the participants. Participants who were recruited via end users were interviewed over the phone. Consent forms for participants, parents, and teachers were sent out and returned via email. The information sheets were sent via email. The participants were interviewed as soon as possible after they had given their consent forms and had used the app for at least 1 week. All participants were asked to use the Thrive: Feel Stress Free app (Figure 1) for 1 week as much as they liked. Participants were asked to turn on their notifications in the app; however, this was not enforced as a strict inclusion criterion.

Sample Size and Theme Saturation

Although our target sample size was 30, recruitment was stopped after 13 participants had been interviewed. The reason for this decision was twofold:

1. Owing to unforeseen barriers (school examinations, holidays, and teacher availability), our initial recruitment strategy resulted in 13 interviews, and another round of recruitment would have been needed to reach our intended sample size.

2. At this stage, we decided to review our sample size estimate by assessing theme saturation in our data. We defined saturation as "the point during data analysis at which incoming data points (interviews) produce little or no new useful information relative to the study objectives" [20].

Using an approach recently refined by Guest et al [20], we set out to estimate how many additional interviews might be needed to reach theme saturation in our data (a minimal sketch of this estimation appears below). As most novel information is seen early in the coding process and follows an asymptotic curve in qualitative data sets [21], it is possible to use the occurrence of novel codes in each subsequent interview to estimate the slope of this curve and make an informed decision about a new recruitment target by systematically coding available interviews. However, after the coding of interview 9, it became apparent that saturation had already occurred and no further recruitment was necessary. To assess saturation, interviews were coded by 2 of the researchers, JASF and JK, neither of whom took part in interviewing participants or transcribing interviews.

The App

We chose the Thrive app for this study because we considered its design and UX elements good examples of state-of-the-art development principles in the digital mental health field. Moreover, one of the authors (JASF) was involved in the development of the Thrive app, which allowed us easy access to the app for the purpose of this study. After opening the app, participants were guided through a short mood assessment and thought training exercise, which allowed the app to create a customized cognitive training plan for the user. The recommended exercise modules (and others) could be accessed after the assessment phase was complete. The included modules were a combination of guided relaxation techniques and guides that provided further background and recommendations related to the given feature, such as meditation, breathing, or self-suggestions. Exercises were explained in detail, and the users were guided through the entire process when practicing each module. Next to the cognitive training features, participants were able to play games, such as Zen Garden or a word puzzle, aimed at providing a more engaging UX.
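Picking up the run-based saturation assessment described under "Sample Size and Theme Saturation" above, the Python fragment below is a minimal, illustrative sketch of that logic (following Guest et al): saturation is declared once a run of interviews adds at most 5% new codes relative to an initial base set. The per-interview counts are invented for illustration and are not the study's data.

```python
def saturation_point(new_codes, base_size=4, run_length=2, threshold=0.05):
    """Return the 1-indexed interview at which theme saturation is reached.

    new_codes[i] holds the number of codes first observed in interview i+1.
    Saturation is declared when a run of `run_length` interviews adds at most
    `threshold` of the codes already seen in the first `base_size` interviews.
    """
    base = sum(new_codes[:base_size])
    for i in range(base_size, len(new_codes) - run_length + 1):
        run = sum(new_codes[i:i + run_length])
        if base and run / base <= threshold:
            return i + run_length  # interview that closes the quiet run
    return None  # saturation not reached with the available interviews

# Hypothetical counts of novel codes per interview (illustrative only):
counts = [24, 11, 6, 4, 2, 1, 0, 0, 1]
print(saturation_point(counts))  # -> 7 with these illustrative numbers
```

With counts like these, most new codes appear in the first few interviews and the curve flattens quickly, which is the pattern the authors report having observed by interview 9.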
When finished with the modules, the app also provided participants with an overview of their progress where they could track their mood, practice, and goals. Examples of these steps are presented in Figure 1.

Data Collection

Data were collected between January and July 2018. Owing to unforeseen delays, such as school examinations, holidays, and teacher availability, this was substantially longer than reported in the protocol. Eligible participants were between the ages of 11 and 18 years, owned a smartphone, used it frequently (more than 1 hour a day), and did not have an existing diagnosis of a mental health condition. During the interview, no one else was present aside from the participant and the researcher, either in person or over the phone. We did not interview participants as a focus group because of the difficulty in organizing this at a convenient time. The sessions were guided by a semistructured schedule (Multimedia Appendix 1). The discussion began with participants' overall impressions of the app. For example, "what bits of it [the app] were useful if any" or "which bits of it [the app] did you dislike?" The schedule then proceeded to more specific questions if not covered in the overall broad questions. For example, "what did you think of our avatar" or "what did you think of the journal?" This was not pilot-tested before commencing the interviews but was constructed by consensus between the authors. Participants were asked when they would engage with the app during the day, if at all, and at which location this took place. We asked which barriers and facilitators led them to use the app less or more frequently. In addition, we asked participants to list what they would change about the app to make it more interesting. The participants led the conversation, though the interviewers ensured that the participants were prompted to specific topics if previously missed. Each interview lasted approximately 20-40 minutes. No repeat interviews were conducted. The interviews were audio-recorded and transcribed verbatim by SA, FO, RR, and JMB.

Analysis

The interview extracts were analyzed using inductive thematic analysis [22]. First, interview transcripts were read carefully by 3 researchers (RR, JASF, and KJ) to identify meaningful ideas relevant to the research topic. On the basis of this, an initial list of relevant concepts was generated. Second, short segments of the data, dealing with similar issues or concepts, were identified and grouped together using provisional codes. At this stage, researchers coded each transcript on their own and could use different codes for any single data segment. Third, codes were discussed and cross-referenced between the researchers and collated into a common framework that allowed for candidate themes to emerge. Finally, upon review, candidate themes were grouped into main (or meta) themes that formed coherent meaningful concepts across the texts. The main themes and subthemes were then reviewed and refined using the original transcripts and linked to participant quotes to ensure that the final themes indeed formed a coherent pattern across the whole data set. Our analysis resulted in 6 main themes and 20 subthemes (Textbox 1).

Textbox 1. Main themes and subthemes developed by the authors based on the data on the experience of using a digital mental health app.

Overview

One participant was not available for the interview, so our results included 13 interviews in total.
Most participants were White and British (12/13, 92%), and 1 participant self-identified as being of Asian descent. We labeled our main themes as follows: (1) timing, (2) stigma, (3) perception, (4) congruity, (5) usefulness, and (6) UX. Although our aim was to treat these categories as separate, they are nevertheless closely intertwined concepts with inevitable overlaps. The subthemes are referenced by their numbers in parentheses in Textbox 1.

Timing

All participants emphasized the importance of time constraints when engaging with the app. Participants often described their daily schedule as busy and stressful, where they have to get on with their work. Under these circumstances, using the app often felt like an extra task where "you do have to make a conscious effort to go in" (Participant 5). This routine usually limited participants to engaging with the app only at home, particularly before bed when there is nothing else left to do; however, this also led to its own set of issues. It was frequently mentioned that the app was not in sync with this kind of schedule:

Generally, I was doing it more in the evenings and by then I couldn't, it was giving me like tasks to go out and go for a walk and things and I couldn't because it was dark outside. [Participant 4]

The pressure of having a busy schedule also made exercise length an important question. In general, shorter and easily accessible exercises were more appealing, where users could simply log in and check on themselves by noting down their mood or completing a quick task. In this context, routine mood screening measures before accessing any particular exercise and longer tasks were usually seen as barriers to engagement. Participants were already conscious of these time restraints even before deciding to log in, and knowing that they had enough time to complete a task was seen as an important determinant of engagement:

Stigma

Mental health-related stigma was one of the most frequent themes in our data, and it is likely to have a significant impact on engagement. Stigma primarily emerged through labels such as weird, uncomfortable, private matter, and ashamed, which referred to feelings of embarrassment and vulnerability associated with using a mental health app. Even though these issues were common to all, only 1 participant mentioned encountering any negative remark:

There was one goal to go out for a walk and going out randomly for a walk is a bit weird so you know I was explaining it to them and they found it a bit silly at first but then as I was going through it with them and explaining how it worked they found it more interesting... I mean there was one member of my family that still thought of it as still a bit gimmicky. [Participant 4]

As this negative comment was unique in our data, we saw stigma as already internalized, brought to the surface by certain situations when using the app. Most frequently, users were cautious about openly using and/or talking about the app among their family and friends and in public, such as on public transport, where the risk of being seen and labeled was highest. The presence of other people was also a barrier to performing various exercises, such as deep breathing or closing eyes, as these activities were seen as uncomfortable in public. In general, mental health was thought of as a private matter that belonged behind a closed door. Thinking of mental health as a private issue also made the app more appealing in other ways.
Participants perceived that it gave them more control over their issues without having to rely on other people:

I know a lot of teenagers who maybe wouldn't want to go to a counsellor or would be ashamed of going to a counsellor and in this way you're kind of helping yourself in your own way. [Participant 12]

Participants also wanted to avoid being seen as a downer who complains about being stressed and preferred relying on an app rather than risking social rejection.

Perception

Participants were also concerned about their self-perception when using the app. They highlighted a fine middle ground between being too serious on one hand and childish on the other. Most of them picked up on playful design elements, which were generally considered childish, even though the app was designed with adults in mind. One participant even reflected on this sensitivity to being patronized:

Teenagers always get funny about things being childish. Especially like slightly younger 14-year olds. They wouldn't want to feel like it was for children at all. [Participant 9]

Conversely, participants also did not like the medical label attached to the app, which may be perceived as too serious:

Following design, a common element was the way participants viewed the role that an app like this should play in their daily life. When asked how they would describe it to a friend, a "use it only when you feel stressed" approach was overwhelmingly popular:

I described it as like a self-help, meditation app that you could use in stressful situations. [Participant 1]

Although the app itself was created with structured skill building in mind, the default perception was that it should be used as a quick check-in tool or quick intervention when one is anxious or needs immediate support.

Congruity

Congruity describes some of the key areas where the design of the app (both UX and UI) proved confusing or worked against its intended purpose. A common experience that seemed off-putting to users was when the design of various features was not in line with its actual content. The most frequently mentioned examples were the relaxation exercises; a participant illustrated this as follows:

In comparison to the really slow breathing and stuff, a fast-moving thing (background) just seemed really big at the time. [Participant 6]

Along the same lines, participants also found looking at a screen problematic right before going to sleep (one of the most popular times to use the app), as its negative effects may do more harm than good, and they missed the option of a voice-only session for this situation. In addition, on the UX side, participants preferred more guidance from the app, as without it, or without preexisting knowledge of mental health and therapeutic techniques, some sections of the app proved too complex:

These types of problems can be overcome by clear signposting and well-designed user journeys; however, for some participants, this aspect proved confusing as well:

...when you log in it's not as straightforward to follow the instructions or like when you first log into it.

Usefulness

Going beyond content and design, the usefulness theme refers to common patterns in participants' subjective experiences that either contributed to or hindered effective engagement with the app. Many pointed to the positive effects of having a sense of control when it comes to mental health.
Therefore, the fact that they were able to do something about their problems was in itself beneficial:

So, it felt like I was like actively going out and helping myself... rather than me just thinking "oh my goodness I'm just swamped," I'm actually making an effort to climb out of this mess. [Participant 4]

Along the same lines, exercises were also most beneficial when users clearly understood their purpose and saw their progress:

Although having control over a problem was generally seen as a good thing, predefined situation labels were generally seen as a step too far. Common complaints were that these labels were often simply wrong, too restrictive, or not accurate enough, and although predefined labels were also useful for many, the participants suggested adding the option of having their own labels:

I did like the idea, but I felt if we were allowed to write our own responses instead of choosing one it would be more powerful. [Participant 9]

Having the choice of selecting from predefined automatic thought labels also proved problematic for some users, especially if they were already in a negative mood:

...when I was in a bad mood it would ask me what I was thinking before I hit that bad mood and some of them were quite extreme. So...

Some users also developed an association between their low mood and engaging with the app, which over time even amplified certain negative states:

...when you're in a bad mood and you just kinda don't say explicitly you don't necessarily stay in that mood, but when you click on the app, like a dark cloud or whatever it is, I then just feel like down. [Participant 5]

Users' past experiences with certain exercises, such as meditation, proved to be a strong contributor to the kind of features they liked and visited frequently. This also underlines the appeal of activities where users know what they are doing and why. Beyond meditation exercises, the most valued features were reminders, relaxation or sleep exercises, the mood tracker, and the peer support or communication function.

UX Theme

This final theme highlights some of the emergent contradictions and alignments between current trends in app design and participants' subjective experiences in 2 key areas: gamification and personalization. With regard to gamification, users saw games in this context as either neutral or counterproductive, although there was one participant who suggested that games should be more competitive rather than calming. Outside of the concrete games, users also did not make much use of other soft gamification features, such as the point system, whereby users were able to unlock new achievements and earn credits. This progression system, without real tangible rewards, did not make sense to the participants. Conversely, specific features of progression systems, with which participants were able to unlock new content based on their achievements, were generally seen as useful in creating a sense of progress and contributing to engagement.

Participants valued the ability to personalize their own experiences, from setting their own background to creating their own character. They also wanted the opportunity to send personal messages to one another and did not see much value in sending or receiving prepopulated messages.

Principal Findings

We aimed to understand adolescents' preferences when engaging with digital mental health interventions using semistructured interviews.
Using these data, we developed 6 main themes and 20 subthemes that captured distinct aspects of adolescents' experiences with the Thrive app. Overall, most themes corresponded well with previous research in the field [7]; however, we also gained new insights for further exploration. Adolescents saw the app as helpful. Many expressed that simply having access to an MHapp was reassuring in itself and proved beneficial in increasing their sense of self-reliance and containing negative moods. Time was a major factor in terms of engagement. Most participants reported having a busy schedule and preferred using the app in the evenings before bed or just for quick check-ins during the day. Preferred features also corresponded with evening use and underscored results from previous studies highlighting the need for brief and easy-to-access features [23]. In terms of specific features and design, participants highlighted the importance of clarity in both their user journey and available information, which was also emphasized in previous studies [7,23]. Features seemed the most popular and engaging when either users already had experience with similar exercises, such as meditation, or when the purpose and goals of a given feature were clearly defined. We believe these preferences suggest that clarity of information likely has a direct impact on effectiveness by providing users with a clear mental map through which they can progress. In contrast to previous studies that suggest reward and progression systems as a way of facilitating engagement [23,24], we found a clear distinction between helpful and unhelpful progression systems. In our sample, simple leveling or point systems were not meaningful to participants if they were not tied to tangible rewards. Similarly, games did not facilitate engagement as users deemed them irrelevant in this context. This was unexpected given the age of the population and previous research indicating good acceptability of games in this context [25]. Most participants endorsed the ability to unlock new features and levels that provided them with access to new content and exercises. Stigma emerged as a hidden but important barrier to engagement. Participants often expressed embarrassment and feelings of weakness related to mental health. This concern, although not surprising given the associated stigma in the field [26], led many participants to use the app only in private and question how the app was branded and framed. Participants' reluctance to use the app when traveling is especially problematic when we consider that this could be the most obvious opportunity to find the time to engage with a mental health app. As a solution, less conspicuous designs were suggested. Other barriers mentioned by participants also converge with those of previous research pointing out the stress-inducing potential of these apps [9]. Specifically, some participants saw prepopulated questions about their feelings as stress inducing. Others also felt that engaging with negative thoughts, rather than ignoring them or simply letting them pass, was not desirable. However, this seems to contradict previous findings that praise apps for their ability to increase emotional awareness [27,28]. This avoidance may hint at a difficulty in enduring or accepting any amount of distress. This may stem from an assumption that one should not have to encounter anything that may be distressing or difficult in life. 
We believe that this potential unwanted effect of MHapps deserves further investigation, as bringing negative thoughts into awareness is a fundamental aspect of cognitive behavioral therapy, and accepting this initial dip in mood is a prerequisite for effective engagement [29]. Along the same lines, the majority of participants used the app only for checkups and as a quick stress reduction tool in acutely stressful situations. This pattern of engagement is in stark contrast to the skill-based approach to improving mental health, where users engage with MHapps to acquire the skill of managing their distress on their own and self-soothe. Although acute stress reduction can indeed be beneficial in certain situations, excessive reliance on this may increase dependence on an external source of soothing rather than reducing it. We see this mismatch between user attitudes and intended use as one of the key points to address in the future if MHapps are indeed to become scalable additions to therapy. Finally, although previous studies often emphasized the importance of customizability [3] and this also emerged as a desirable feature in our sample, given that time seems to be one of the most important factors influencing engagement, this finding should be approached with caution. Participants may express their desire for customizability in an interview situation, but in reality, they may respond better to clearer design, short interactions with the app, and easy access.

Limitations

Participants were only provided with access to the app for 1 week before the interview, which might have influenced the depth and detail of their experience and limited our conclusions. However, given that our main goal was to detect immediate, noticeable features that were liked or disliked, and that we reached theme saturation in our analysis, we are confident that this time frame was sufficient to address our question. Another potential source of bias is the small sample size and the convenience sample. This may mean that our participants may be overly similar in certain ways, having similar backgrounds and preferences, which could distort our findings. Collecting background information from participants would have enabled us to reflect on this aspect in more detail. Moreover, this information would have also helped us to adequately contextualize our results. Although the focus of this qualitative paper was to get a sense of how adolescents may view a mental health app, more background on demographics and other participant characteristics would have strengthened the interpretation of our findings. Finally, 3 participants were known to the experimenters. This may have caused some bias in the study, as these participants may have been interested in the app from the start and had an incentive to speak favorably. Although it was made clear that they would be anonymous and their responses recorded by a member of the research team not known to them, it is still reasonable to assume that there may be some bias regarding their experience of the app. However, we did not observe any differences between the responses of the participants along these lines.

Conclusions

We identified 6 main themes and 20 subthemes that captured distinct aspects of adolescents' experiences with the Thrive app. Overall, participants preferred convenient, clear, and easy-to-access features that they could use on an ad hoc basis. They saw the app as a potential way of calming themselves…
Appropriateness of Pediatric Hospitalization in a General Hospital in Kuwait

Objectives: To determine the rate of inappropriate pediatric admissions using the Pediatric Appropriateness Evaluation Protocol (PAEP) and to examine variables associated with inappropriateness of admissions.

Subjects and Methods: A prospective study was conducted in the Department of Pediatrics, Farwania General Hospital, Kuwait, to examine successive admissions for appropriateness of admission as well as several sociodemographic characteristics over a 5-month period (August 2010 to December 2010). A total of 1,022 admissions were included.

Results: Of the 1,022 admissions, 416 (40.7%) were considered inappropriate. Factors associated with a higher rate of inappropriate admission included older age of patients and self-referral.

Conclusion: The rate of inappropriate hospitalization of children was high in Farwania Hospital, Kuwait, probably due to the relatively free health care services, parental preference for hospital care, easy access to hospital services, and insufficient education about the child's condition.

… times [5,6]. Hence, the objective of this study was to use the PAEP to determine factors associated with the appropriateness of pediatric hospitalization in a general hospital.

Subjects and Methods

Farwania Hospital is a general hospital which serves a population of approximately 900,000. An average of 380 children under 12 years of age visited the Pediatric Casualty Department daily in the year 2010. The average daily admission to Pediatric Wards was 9 patients. A prospective, descriptive study was carried out in 1,022 pediatric patients (530 males and 492 females; age range from birth to 12 years) admitted to the Department of Pediatrics of Farwania Hospital over a period of 5 months from August to December 2010. Babies born in the hospital who were admitted to the special care unit were excluded. Of the 1,022 children included in the study, 624 were Kuwaiti and 398 were of other nationalities. Several items such as age, sex, nationality, time of admission, and type of referral were analyzed using the Statistical Package for the Social Sciences (SPSS). Of the 1,022 patients, 343 were randomly selected; their medical records were examined and further analyzed for outcome and follow-up arrangements. Age was divided into three strata: ≤1 year, >1 year but ≤5 years, and >5 years. The patients were assigned to the age strata as follows: ≤1 year, n = 412; >1 year but ≤5 years, n = 378; and >5 years, n = 232. Time of admission was divided into 3 categories: morning and early afternoon (7:00 a.m. to 1:59 p.m.), afternoon (2:00 p.m. to 9:59 p.m.), and night (10:00 p.m. to 6:59 a.m.). The admitted children presented to the Pediatric Casualty Department as self-referrals, referrals from primary care clinics, or other forms of referral (from other hospitals, other departments in the same hospital, or private clinics). Upon discharge, children were classified as follows: those who needed outpatient follow-up, those who did not need further follow-up, those who were discharged against medical advice, those who were referred to other departments or other hospitals, and those who died. Case diagnosis was not subjected to analysis as part of the study, since the appropriateness of admission was evaluated independently of the diagnosis. However, the diagnosis was further analyzed for possible insight as to the causes of inappropriate admission and suggestions for ways to reduce such admissions.
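As an illustrative sketch (our example, not part of the paper's methods), the age and admission-time binning described above can be expressed as two small Python helpers; the category labels follow the text, and the study's minute-level cutoffs are assumed.

```python
from datetime import time

def age_stratum(age_years: float) -> str:
    """Assign a patient to one of the three age strata used in the study."""
    if age_years <= 1:
        return "<=1 year"
    if age_years <= 5:
        return ">1 year but <=5 years"
    return ">5 years"

def admission_period(t: time) -> str:
    """Assign an admission time to one of the three time-of-day categories."""
    if time(7, 0) <= t < time(14, 0):   # 7:00 a.m. - 1:59 p.m.
        return "morning and early afternoon"
    if time(14, 0) <= t < time(22, 0):  # 2:00 p.m. - 9:59 p.m.
        return "afternoon"
    return "night"                      # 10:00 p.m. - 6:59 a.m.

print(age_stratum(3.5), "|", admission_period(time(23, 15)))
# -> >1 year but <=5 years | night
```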
Two hundred seventeen (40.9%) of the 530 male patients and 199 (40.4%) of the 492 female patients were classified as inappropriate admissions, and the difference was not statistically significant (p > 0.001). Two hundred fifty-five (40.8%) of the 624 Kuwaitis and 161 (40.4%) of the 398 non-Kuwaitis were considered inappropriate admissions (fig. 1), and the difference was not statistically significant. Among the children who were admitted in the morning and early afternoon, 122/314 (38.9%) were inappropriate admissions; 160/391 (40.9%) afternoon admissions and 134/317 (42.2%) night admissions were also inappropriate admissions. The differences were not statistically significant (p > 0.001). Of the 1,022 patients, 931 (91%) were admitted to the Casualty Department on a self-referral basis. In those 931, the rate of inappropriate admission was 40.8%. Among the 44 patients who were referred from primary care, the rate of inappropriate admission was 34% (15/44). This was significantly lower than the rate of inappropriate admission in the self-referred group (table 1). Among the 343 patients randomly selected for further analysis, 135 (39.4%) were considered inappropriately admitted. Of these 343 patients, 136 (39.65%) were discharged with follow-up in the Outpatient Clinic at later dates, 139 (40.52%) were discharged without further follow-up, 53 (15.45%) were discharged against medical advice without further follow-up, 10 (2.92%) were referred to other hospitals, and 5 (1.46%) were referred to other departments within the hospital (fig. 2). Of the 136 patients discharged to follow-up in the Outpatient Clinic, 33 (24.3%) were considered inappropriately admitted. The ten most common diagnoses at the time of admission and the number of patients admitted with each diagnosis are given in table 2.

Discussion

The results of this study showed that the rate of medically inappropriate admission, at 40.7%, was higher than that of other countries; 24% was reported for Canada [6] and Australia [7], and 10-30% was reported for Europe, the USA, Israel, and Italy [1,4,8,9]. These differences could be attributed to the country's health care system, based on whether or not health care is provided free of charge by the government or paid by individuals through private insurance. In Kuwait, Kuwaitis are provided free health services in the public sector while non-Kuwaitis pay a small fixed amount. The relatively free health care system in Kuwait could account for higher inappropriate pediatric hospitalization rates in Kuwait compared to other countries. The inappropriate admissions rate of 33% among patients ≤1 year old compared to 44.3% in patients >5 years of age could reflect the difference in the complexity of diagnoses between infants and older children. Infants are most likely to be hospitalized for prematurity, congenital problems, or infectious illnesses, all conditions that would lead to intensive medical services, whereas older children present with more chronic diseases and may be hospitalized for investigations and less intensive therapy. There was no difference in the rate of inappropriate admission at different times of the day, contrary to the findings of other studies [9,10]. In the study by Bianco et al. [9], inappropriate admission occurred more during the day and was interpreted as use of the hospital as a primary care facility, because it was easier to attend and find a pediatrician during the day [9].
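As a back-of-the-envelope check of the referral-route comparison above, the 2×2 counts can be reconstructed from the reported figures (40.8% of 931 self-referrals ≈ 380 inappropriate admissions; 15 of 44 primary care referrals) and tested with SciPy. This is our illustration, not the authors' analysis; the test they actually used is not stated here.

```python
from scipy.stats import chi2_contingency

# Rows: referral route; columns: inappropriate vs. appropriate admissions.
table = [
    [380, 931 - 380],  # self-referred (380 reconstructed from 40.8% of 931)
    [15, 44 - 15],     # referred from primary care clinics
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```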
However, in Kuwait, access to hospital pediatricians is possible all day long, although access to primary care facilities is also available. Parental preference for hospital care, particularly during an acute illness, is apparently a factor that favors inappropriate admission. Also, in our study sex did not have any effect on the rate of inappropriate admission, contrary to other studies [8-10]. This can be explained by the fact that almost equal numbers of both sexes were referred to the emergency room, which can be interpreted as equal care given by parents and caregivers to both boys and girls in Kuwait. In the group of 343 patients that was randomly selected, the number of children who were scheduled for follow-up in the outpatient department was significantly higher in those who were considered to be appropriately admitted than in those who were considered to be inappropriately admitted (49.5 vs. 24.4%). This was to be expected; however, what is significant is that 24.4% of the children who were considered inappropriately admitted needed follow-up, emphasizing that the lack of need to admit to hospital does not necessarily mean the lack of need to treat, investigate, and follow up. Inappropriate hospitalization not only has an economic impact on the health care system but could also have a psychological and physical impact on the child and the child's family, probably because several events could make the hospital stay a potentially stressful experience for children. Hence, unnecessary hospitalizations should be avoided for routine procedures as well as for chronic illnesses [11]. Acute and chronic respiratory diseases represent a large proportion of pediatric hospital admissions, both appropriate and inappropriate (34% of the total admissions in our study; 33.6% of these were considered inappropriate) (table 2). Modern technology in the field of respiratory treatment, such as oxygen concentrators, suction and humidification machines, and nebulization equipment, allows home treatment to be both effective and feasible for some of the respiratory diseases that are currently treated in hospital, and for prolonged periods. Intravenous therapy can be administered on an outpatient basis or as home therapy. Innovations in antibiotic therapy, such as once-a-day dosage, and technological innovation in infusion pumps are some of the valuable ways to decrease the need for hospital admission. To allow for safe home nursing, parents should be trained to cope with the needs of their child at home, and home nursing facilities should be provided. Training may include nasogastric feeding, intramuscular injections, management of central venous lines, tracheostomy care, and home glucose monitoring [12]. Many pediatric hospitalizations might be avoided if parents and children were better educated about the child's condition, medications, the need for follow-up care, and the importance of avoiding known disease triggers [11]. Based on our findings, we believe that establishing an area for short-term observation in the emergency room that can provide intravenous hydration and/or observation of the response to treatment for children with acute respiratory problems (e.g. bronchial asthma or croup) might also help to lower the rate of unnecessary hospitalization.
It is possible that improving these services will reduce the rate of unnecessary admissions, allowing efficient use of hospital resources and lower expenses without compromising the quality of care provided for children.

Conclusion

The rate of inappropriate hospitalization of children was high in Farwania Hospital, Kuwait. This could be explained by the relatively free health care services, parental preference for hospital care, easy access to the hospital services, and insufficient education about the child's condition.
Mutation of Serine 32 to Threonine in Peroxiredoxin 6 Preserves Its Structure and Enzymatic Function but Abolishes Its Trafficking to Lamellar Bodies

Peroxiredoxin 6 (Prdx6), a bifunctional protein with phospholipase A2 (aiPLA2) and GSH peroxidase activities, protects lungs from oxidative stress and participates in lung surfactant phospholipid turnover. Prdx6 has been localized to both cytosol and lamellar bodies (LB) in lung epithelium, and its organellar targeting sequence has been identified. We propose that Prdx6 LB targeting facilitates its role in the metabolism of lung surfactant phosphatidylcholine (PC). Ser-32 has been identified as the active site in Prdx6 for aiPLA2 activity, and this activity was abolished by the mutation of serine 32 to alanine (S32A). However, aiPLA2 activity was unaffected by mutation of serine 32 in Prdx6 to threonine (S32T). Prdx6 protein expression and aiPLA2 activity were normal in the whole lung of a "knock-in" mouse model carrying an S32T mutation in the Prdx6 gene but were absent from isolated LB. Analyses by proximity ligation assay in lung sections demonstrated the inability of S32T Prdx6 to bind to the chaperone protein, 14-3-3ε, that is required for LB targeting. The content of total phospholipid, PC, and disaturated PC in lung tissue homogenate, bronchoalveolar lavage fluid, and lung LB was increased significantly in Prdx6-S32T mutant lungs, whereas degradation of internalized [3H]dipalmitoyl-PC was significantly decreased. Thus, Thr can substitute for Ser for the enzymatic activities of Prdx6 but not for its targeting to LB. These results confirm an important role for LB Prdx6 in the degradation and remodeling of lung surfactant phosphatidylcholine.

Lung surfactant is a phospholipid-protein complex that is secreted by lung epithelium and is essential to maintain alveolar stability for normal lung function. The phospholipids of lung surfactant are synthesized by alveolar type II (ATII) cells and stored in lamellar bodies (LB) for secretion into the alveolar space. LB are characterized as lysosome-related organelles (LRO) that are known to participate in biosynthetic pathways, as illustrated by the synthesis of melanin by melanosomes, another LRO (1, 2). LB maintain an acidic pH and contain at least some of the enzymes that are required for degradation as well as synthesis of phospholipids (3-5). Changes in surfactant phospholipid composition have been linked to alterations in the biophysical properties of surfactant and may contribute to the development of lung disease (4-8). Peroxiredoxin 6 (Prdx6) is a bifunctional enzyme with lysosome-type Ca2+-independent phospholipase A2 (aiPLA2) and GSH peroxidase activities; the GSH peroxidase activity is able to reduce phospholipid hydroperoxides, and Prdx6 is the only mammalian peroxiredoxin known with the ability to both hydrolyze and reduce phospholipids (9-11). The protein is widely expressed in tissues, with especially high levels in the lung (9). Within the lung, Prdx6 is expressed at relatively high levels within alveolar type II epithelial (ATII) cells, where it has been localized to both cytosol and acidic compartments (LB and lysosomes) (12, 13). Prdx6 in LB plays a key role in the degradation and resynthesis of surfactant DPPC (14, 15).
Our previous studies showed that overexpression of Prdx6 led to increased metabolism of DPPC in lungs and ATII cells (16), whereas Prdx6 null mice exhibited DPPC accumulation in lung tissue and LBs (17); these results are compatible with an important role for aiPLA2 activity in lung phospholipid turnover. However, the greatest fraction of Prdx6 is not in LB but is cytosolic, where it serves as a vital cellular antioxidant (18). We have shown recently that both the GSH peroxidase and PLA2 activities of Prdx6 play important roles in the repair of peroxidized lung cell membranes (19). Using a protein truncation approach, we have previously identified the Prdx6 organellar targeting motif as a sequence comprising amino acids 31-40 that is located in the N-terminal region of the protein (13). The serine at position 32 (Ser-32) of Prdx6 is essential for its targeting to lamellar bodies and lysosomes. Thus, mutation of Ser-32 to alanine abolished Prdx6 organellar localization in A549 and MLE12 cells, models of ATII cells derived from human and mouse, respectively (13). Ser-32 also is required for binding of Prdx6 to its lipid substrates and constitutes an important component of the catalytic triad (His-26-Ser-32-Asp-140) that is required for the PLA2 activity of the protein; thus, the S32A mutant does not exhibit PLA2 activity or reduction of phospholipid hydroperoxides (20). Prdx6 organellar localization does not depend on binding of the protein to phospholipids (13) but does depend on binding to a chaperone molecule, 14-3-3ε (21). In this study, a unique "knock-in" mouse model carrying a serine 32 to threonine mutation in the Prdx6 gene was used as a model of targeted depletion of PLA2 in LB. Our study shows that threonine can substitute for serine for the enzymatic activities of the protein but not for Prdx6 targeting to LB. Evaluation of surfactant phospholipid metabolism in lungs from Prdx6-S32T knock-in mice confirms an important role for lamellar body Prdx6 in the degradation and remodeling of lung surfactant phosphatidylcholine.

Cell Lines-The human lung carcinoma A549 and human renal embryocarcinoma 293T cell lines were obtained from ATCC. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM) (Life Technologies, Inc.) supplemented with 10% fetal bovine serum and antibiotics. These cells are used as a model for ATII cells, as their lysosomes exhibit some of the characteristics of LB (26). 293T cells were propagated in DMEM supplemented with 10% FBS (Sigma), 2 mM L-glutamine, and 1 mM sodium pyruvate (Invitrogen) and were used for lentivirus production. For transient expression of GFP-tagged constructs, A549 cell layers at 95% confluence in 6-well plates were transfected with 3 μg of each expression plasmid in 10 μl of Lipofectamine 2000 reagent (Invitrogen) per well according to the manufacturer's protocol. The mammalian expression plasmids encoding full-length Prdx6 with an N-terminal green fluorescent protein (GFP) tag and the GFP-Prdx6 (S32A) mutant have been described previously (13). Generation of the GFP-Prdx6 (S32T) mutation was performed similarly using the QuikChange Lightning site-directed mutagenesis kit (Agilent Technologies, Santa Clara, CA), according to the manufacturer's instructions. Generation of the GFP-Prdx6 mammalian expression plasmids used the following HPLC-purified primers: 5′-CACGATTTCCTAGGAGATACATGGGGCATTCTCTTTTC-3′ (forward) and 5′-GAAAAGAGAATGCCCCATGTATCTCCTAGGAAATCGTG-3′ (reverse).
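As a quick sanity check on the primer pair quoted above (our illustration, not part of the paper's methods), QuikChange-style mutagenic primers must be exact reverse complements of one another, which a few lines of Python can verify:

```python
def revcomp(seq: str) -> str:
    """Reverse complement of an uppercase DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

fwd = "CACGATTTCCTAGGAGATACATGGGGCATTCTCTTTTC"
rev = "GAAAAGAGAATGCCCCATGTATCTCCTAGGAAATCGTG"
assert revcomp(fwd) == rev  # the pair is a perfect reverse complement
print("primer pair OK")
```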
The underlined bases represent the mutated codon. Cells were subjected to experimental treatments at 48 h after transfection. Lentivirus particles were produced by co-transfection of one of three transfer vector plasmids (HMD, HMD-Prdx6/S32T, or HMD-Prdx6/S32A) with the packaging plasmid (pCMV-dR8.2) providing the enhanced green fluorescent protein (EGFP) viral gene, and the envelope plasmid (pCMV-VSVG) encoding the vesicular stomatitis virus glycoprotein. The plasmids were transfected into HEK 293T cells using Lipofectamine 2000 transfection reagent (Invitrogen) according to the recommended protocol of the manufacturer. Conditioned medium was collected 48 and 72 h after transfection and stored at −80°C. The virus titer was determined by infecting Prdx6 null MPMVEC (5 × 10^5) with serially diluted virus stocks and quantifying the numbers of EGFP-positive cells. To study the effects of the S32A and S32T mutations in cells, Prdx6 null MPMVEC were incubated for 72 or 96 h with wild type or Ser-32 mutant Prdx6 virus-containing supernatants plus added Polybrene (8 μg/ml). Empty HMD lentiviral transfer plasmid and HMD lentiviral transfer plasmid containing WT Prdx6 were used as negative and positive controls, respectively. Infection efficiency was determined by the percent of EGFP-positive cells. At the end of the incubation, cells were lysed in 1× cell lysis buffer (Cell Signaling Technology, Danvers, MA), and the lysate was used for enzymatic assays.

Animals-Constructs for generation of Prdx6 S32T knock-in mice were developed, and mice were generated, by the Gene Targeting Core and Laboratory and the Transgenic and Chimeric Mouse Facility of the University of Pennsylvania (Fig. 1A). To retrieve the part of the Prdx6 gene to be mutated, short homologous arms for the pL253 retrieval vector (National Institutes of Health, NCI-Frederick, recombineering website) were amplified from a Prdx6 gene-containing BAC clone from a C57BL/6J genomic library; this clone was used as all of our previous mouse studies have used the C57BL/6J strain. Sequences of 300 bp were amplified by PCR to generate linear fragments flanked by either NotI/HindIII or HindIII/SpeI, respectively, and then co-ligated into the pL253 MCS NotI and SpeI sites. The resulting construct was linearized using HindIII and transfected into E. coli SW102 containing heat shock-inducible Red recombination proteins and the Prdx6 BAC clone. Following treatment at 42°C for 15 min, ampicillin-resistant colonies were screened for homologous recombination at the 300-bp flanking regions. pL253 derivatives with an ~12-kb fragment containing exons 1-4 were selected. To generate the S32T mutant allele, first a mini-vector was constructed. An ~300-bp genomic Prdx6 fragment located ~150 nucleotides upstream of exon 1 was generated by PCR, together with an ~700-bp Prdx6 PCR fragment containing exon 1 located just downstream of the ~300-bp fragment. The genomic PCR primers were designed to introduce the appropriate restriction enzyme sites at the ends of the fragments for cloning into pL451 (National Institutes of Health, NCI-Frederick, recombineering website). The two PCR fragments were ligated into pL451 by two subsequent ligation and cloning cycles. This pL451 derivative was mutated by site-directed mutagenesis at codon Ser-32 (TCG to ACG) in exon 1 to derive the final mutagenic mini-targeting vector.
This mini-vector, which also contained a pgk-EM7-neo-poly(A) cassette flanked by FLP recombinase target sites, was linearized, transfected, and recombined into the above ~12-kb Prdx6 vector backbone by heat induction of the Red recombination enzymes, as above. The final targeting vector with the S32T mutation was selected by neomycin/kanamycin resistance using 50 μg/ml kanamycin. This final targeting construct was linearized, sequence-verified, and electroporated into C57BL/6J ES cells (EAP6 ES cells) for insertion of the mutant sequences into the mouse genome by homologous recombination. ES clones showing homologous recombination were identified by Southern blotting (Fig. 1B). These clones were karyotyped; the Ser-32 mutation was verified by genomic sequencing and used for blastocyst injection into CD-1/BALB/c mice. The serine to threonine mutation creates a convenient restriction endonuclease site for the SnaBI enzyme that is otherwise absent from the WT sequence (Fig. 1C). Successful transmission of the mutation was detected by PCR amplification of the region around the mutation followed by digestion of the PCR product with the SnaBI restriction enzyme. The wild type allele displays one large band, and the mutant allele displays two smaller bands (Fig. 1D). Mice were bred and maintained in the animal care facilities of the University of Pennsylvania (Philadelphia). S32T-Prdx6 knock-in mice developed normally and showed no obvious physical differences from their littermates. Wild type C57BL/6J mice were obtained from The Jackson Laboratory (Bar Harbor, ME). Both male and female 8-10-week-old mice were used for experiments. All animal protocols were reviewed and approved by the University of Pennsylvania Animal Care and Use Committee. Animals were housed under the National Institutes of Health and United States Drug Administration guidelines for the care and use of animals in research.

Lung Morphology-Lungs were excised from wild type (WT) or S32T mutant mice, cleared of blood, and fixed by perfusion through the pulmonary artery with 4% paraformaldehyde (30, 31). Tissue samples for routine morphology were processed by the Pathology Core of The Children's Hospital of Philadelphia (Abramson Research Center, Philadelphia) and stained with hematoxylin and eosin (H&E). Lung sections were examined by three independent observers, and randomly selected fields were chosen for comparison.

Isolation of Lamellar Bodies-Mouse lungs that had been cleared of blood and subjected to bronchoalveolar lavage were homogenized, and lamellar bodies were isolated by upward flotation in a sucrose density gradient (17, 32). This method produces a relatively pure population of largely intact lamellar bodies with a phospholipid to protein ratio of ~10.

Circular Dichroism Measurement-Circular dichroism (CD) measurements of wild type and mutant proteins (2.5 μM in 40 mM potassium phosphate buffer, pH 7.4) were carried out by the Protein and Proteomic Core Facility of The Children's Hospital of Philadelphia. Spectra were recorded with a Jasco J810 circular dichroism spectropolarimeter (Jasco Analytical Instruments, Easton, MD). The output of the CD spectrometer was analyzed and recalculated, according to the protein concentration, amino acid content, and cuvette thickness, into molecular ellipticity units (deg cm²/dmol) using Jasco's SpectroManager software (version 2.8.1.1).

Enzymatic Activity Assays-Enzymatic activities were measured in recombinant proteins, lung homogenates, LB, or lysed MPMVEC.
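To make the genotyping logic concrete: SnaBI cuts bluntly within TACGTA, and the TCG-to-ACG codon change can create that site if the flanking bases cooperate. The sketch below is our illustration only; the flanking sequence is hypothetical, and only the codon change and the one-band-versus-two-bands readout come from the text.

```python
SNABI = "TACGTA"  # SnaBI recognition site; blunt cut between TAC and GTA

def snabi_fragments(seq: str) -> list[int]:
    """Lengths of the fragments produced by cutting at every SnaBI site."""
    pieces, start = [], 0
    pos = seq.find(SNABI)
    while pos != -1:
        pieces.append(pos + 3 - start)  # cut point sits after TAC
        start = pos + 3
        pos = seq.find(SNABI, pos + 1)
    pieces.append(len(seq) - start)
    return pieces

# Hypothetical flanks around codon 32 (TCG in WT, ACG in the mutant):
wt     = "GGATCCAT" + "T" + "TCG" + "TA" + "GAATTCAA"  # no SnaBI site
mutant = "GGATCCAT" + "T" + "ACG" + "TA" + "GAATTCAA"  # T+ACG+TA = TACGTA
print(snabi_fragments(wt))      # -> [22]: one uncut band (WT allele)
print(snabi_fragments(mutant))  # -> [11, 11]: two bands (mutant allele)
```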
Lung Morphology-Lungs were excised from wild type (WT) or S32T mutant mice, cleared of blood, and fixed by perfusion through the pulmonary artery with 4% paraformaldehyde (30,31). Tissue samples for routine morphology were processed by the Pathology Core of The Children's Hospital of Philadelphia (Abramson Research Center, Philadelphia) and stained with hematoxylin and eosin (H&E). Lung sections were examined by three independent observers, and randomly selected fields were chosen for comparison.

Isolation of Lamellar Bodies-Mouse lungs that had been cleared of blood and subjected to bronchoalveolar lavage were homogenized, and lamellar bodies were isolated by upward flotation in a sucrose density gradient (17,32). This method produces a relatively pure population of largely intact lamellar bodies with a phospholipid to protein ratio of ~10.

Circular Dichroism Measurement-Circular dichroism (CD) measurements of wild type and mutant proteins (2.5 µM in 40 mM potassium phosphate buffer, pH 7.4) were carried out by the Protein and Proteomic Core Facility of The Children's Hospital of Philadelphia. Spectra were recorded with a Jasco J810 circular dichroism spectropolarimeter (Jasco Analytical Instruments, Easton, MD). The output of the CD spectrometer was analyzed and recalculated according to the protein concentration, amino acid content, and cuvette thickness into molecular ellipticity units (degrees/cm²/dmol) using Jasco's SpectroManager software (version 2.8.1.1).

Enzymatic Activity Assays-Enzymatic activities were measured in recombinant proteins, lung homogenates, LB, or lysed MPMVEC. PLA2 activity was measured at pH 4 by radiochemical detection of liberated fatty acid as described previously (15,33). The substrate was unilamellar liposomes consisting of DPPC, egg PC, cholesterol, and phosphatidylglycerol in the molar ratio of 50:25:15:10 that were prepared by extrusion through a membrane under pressure (15). This lipid mixture was chosen to reflect the lipid composition of lung surfactant (34). Liposomes were labeled with 1-palmitoyl,2-[9,10-3H]palmitoyl-sn-glycerophosphorylcholine ([3H]DPPC) at a specific activity of 2 mCi/mmol DPPC. Analysis by dynamic light scattering (DLS 90 Plus Particle Size Analyzer, Brookhaven Instruments, Holtsville, NY) showed a homogeneous population of liposomes that was 100-120 nm in diameter and represented >95% of total vesicles. Authentic lipids were purchased from Avanti Polar Lipids (Alabaster, AL). Radiochemicals were purchased from PerkinElmer Life Sciences. Peroxidase activity was determined by measuring the initial slope of the decrease in NADPH fluorescence with time in the presence of glutathione (GSH) and GSH reductase. The substrate was either H2O2 or 1-palmitoyl,2-linoleoyl-sn-phosphatidylcholine hydroperoxide (PLPCOOH) in 40 mM PBS, pH 7.4, with 5 mM EDTA and 1 mM NaN3; 0.1% Triton X-100 was added for assay with PLPCOOH as substrate (35). Assay for activity of the recombinant protein was performed in the presence of GST equimolar to Prdx6. GST was generated using a plasmid supplied by Dr. Roberta Colman (University of Delaware) and purified as described previously (36). PLPCOOH was prepared by enzymatic oxidation of PLPC as described previously (35). Protein concentration was measured using the Bradford protein assay with bovine γ-globulin as standard (Bio-Rad).
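The peroxidase readout described above is the initial slope of the NADPH fluorescence decay. A minimal sketch of that slope estimate over invented early time points (the values are placeholders, not assay data):

```python
# Fit a line to the first few NADPH fluorescence readings and report the
# initial rate of decrease; time points and fluorescence values are invented.
import numpy as np

t = np.array([0.0, 15.0, 30.0, 45.0, 60.0])         # seconds
f = np.array([1000.0, 962.0, 921.0, 884.0, 843.0])  # NADPH fluorescence (a.u.)

slope, _intercept = np.polyfit(t, f, 1)             # least-squares line
print(f"initial rate: {-slope:.2f} fluorescence units/s")
```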
Lung Perfusion-Perfusion of isolated lungs was carried out as described previously (17,37). Mice were anesthetized with intraperitoneal ketamine/xylazine/acepromazine (10:15:2 mg/kg body weight). The abdomen and chest of the anesthetized and continuously ventilated mouse (60 cycles/min and 0.3-ml tidal volume with 5% CO2 in air) were incised, and the lungs were cleared of blood by perfusion with Krebs-Ringer bicarbonate solution supplemented with 3% fatty acid-free BSA and 10 mM glucose (supplemented KRB). Lungs were placed in the lung perfusion chamber, continuously ventilated as above, and perfused at 2 ml/min.

Lung Lipid Content and Uptake and Degradation of DPPC-The bronchoalveolar lavage fluid (BALF), post-lavage lung homogenate, and isolated LB were analyzed for total phospholipid, PC, and disaturated PC (DSPC; consisting primarily of DPPC) fractions as described previously (14-17). Total phospholipids were extracted by the Bligh and Dyer procedure (38), and PC was isolated by thin layer chromatography (TLC). DSPC was separated from total phosphatidylcholine (PC) by treatment with OsO4 followed by separation on a neutral alumina column. Fractions were quantitated by measurement of lipid phosphorus. The content of phospholipid fractions in BALF, lung homogenate, and LB was expressed relative to body weight, lung protein, and LB protein, respectively. Uptake and degradation of DPPC in isolated mouse lungs during a 2-h perfusion was measured as described previously (14,15,33). Briefly, unilamellar "surfactant-like" liposomes labeled with [choline-methyl-3H]DPPC were instilled endotracheally in the anesthetized mouse, and lungs were isolated for recirculating perfusion. At the end of perfusion, lungs were lavaged to remove the remaining liposomes from the alveolar space and homogenized. Lungs treated similarly but without perfusion were used as the zero time value. Radioactivity was measured in the BALF, lung perfusate, and lung homogenate. For products of metabolism, total phospholipids and the aqueous fraction were obtained from the Bligh and Dyer separation; total PC and lyso-PC were obtained by TLC separation, and DSPC was determined by separation from total PC as above; unsaturated PC was calculated as total PC minus DSPC. DPPC uptake was calculated from the initial alveolar disintegrations/min (dpm) minus dpm recovered in lung lavage (plus perfusate) and expressed as a percent of the instilled dpm. The initial alveolar dpm was estimated from the instilled dpm and the alveolar DSPC pool size. Disintegrations/min was measured in the unsaturated PC, lyso-PC, and aqueous fractions and expressed as percent of internalized DPPC (uptake). Uptake calculated from the reduction of dpm in the alveolar space and uptake calculated from the sum of dpm recovered in total PC plus metabolic products (data not shown) were approximately equal, indicating reliability of the methodology, as described previously (17).
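The uptake bookkeeping described above is straightforward percentage arithmetic. A sketch with invented dpm values, treating the instilled dpm as the initial alveolar dpm for simplicity:

```python
def dppc_uptake_percent(initial_alveolar_dpm, lavage_dpm, perfusate_dpm):
    """Uptake as described above: initial alveolar dpm minus dpm recovered
    in lavage plus perfusate, as a percent of the initial dpm. All inputs
    are hypothetical example values, not data from this study."""
    internalized = initial_alveolar_dpm - (lavage_dpm + perfusate_dpm)
    return 100.0 * internalized / initial_alveolar_dpm

print(f"{dppc_uptake_percent(1_000_000, 620_000, 40_000):.1f}% uptake")  # 34.0%
```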
Immunofluorescence-To evaluate Prdx6 localization in A549 cells that had been transfected to express GFP-Prdx6, cells were cultured on glass coverslips, rinsed with PBS, and fixed with cold ethanol/acetone (1:1 by volume) for 5 min on ice. For Prdx6 localization in lung tissue, lungs that were cleared of blood and perfused with fixative as described above were inflated with 2% low-melting temperature agarose (Sigma), and the gel was allowed to solidify on ice for 1 h. Lungs were dissected, and the same lobe from WT and mutant lungs was sectioned with an oscillating tissue slicer (MRC 5000, Electron Microscopy Sciences, Hatfield, PA). Cells or tissue sections were then incubated for 10 min (cells) or 30 min (tissue) with 1% Triton X-100 solution in PBS to maximally deplete Prdx6 from the cytosol, followed by 1 h of blocking in 3% bovine serum albumin in PBS containing 0.2% Triton X-100. Cells on coverslips were immunolabeled with a polyclonal (rabbit) antibody to lysosome-associated membrane protein 1 (LAMP-1) (Cell Signaling Technology, Danvers, MA) that was used as a marker for lysosome-like organelles. The primary antibodies used for subcellular localization in lung sections were the LAMP-1 antibody and a monoclonal antibody to Prdx6 (Chemicon EMD Millipore, Billerica, MA). Cells were incubated with primary antibody (1:200 dilution) in 0.2% Triton X-100 solution in PBS (T-PBS) for 1 h at room temperature; tissue sections were incubated with primary antibodies overnight at 4°C. After extensive washing with T-PBS, preparations were incubated for 1 h at room temperature with secondary Alexa Fluor-594-conjugated (red) goat anti-mouse IgG antibody (cells and tissue sections) and with Alexa Fluor-488-conjugated (green) goat anti-rabbit IgG antibody (tissue sections only) (Molecular Probes, Eugene, OR) at 1:1,000 (cells) or 1:500 (tissue) dilution in T-PBS. After a final extensive washing with T-PBS followed by PBS, the cells on coverslips or the tissue sections were mounted on slides with Vectashield mounting medium (Vector Laboratories, Burlingame, CA). Subcellular distribution of Prdx6 was observed by confocal microscopy (Radiance 2000; Bio-Rad) at ×600 magnification.

Co-localization of the LB and Prdx6 signals was quantitated by ImageJ software using the co-localization indices plug-in filter. The analysis selected the entire cell as the field of interest and strictly followed the protocol provided by the plug-in filter. We calculated Manders' co-localization coefficient (39) that estimates co-localization of fluorescence in the red and green channels; values can range from zero (no co-localization) to one (perfect co-localization). We also determined Pearson's correlation coefficient (39) that measures intensities of each channel for each pixel; values for co-localization range from −1 (perfect non-co-localization) to +1 (perfect co-localization). Values for these two indices were calculated for the total number of pixels in each image.

Duolink in Situ Proximity Ligation Assay-To detect the proximity between Prdx6 and the 14-3-3ε chaperone molecule, we utilized the Duolink in situ proximity ligation assay (Olink Bioscience, Uppsala, Sweden) according to the manufacturer's protocol. Lung tissue sections from WT or S32T mutant mice were processed as described above. Mouse monoclonal antibody to Prdx6 (EMD Millipore) and rabbit polyclonal antibody to 14-3-3ε (T-16, Santa Cruz Biotechnology, Santa Cruz, CA) were used to detect the proximity required for protein-protein interaction. Sections were immunolabeled overnight with primary antibodies (1:100 dilutions in T-PBS) at 4°C. The presence of the Duolink fluorescence signal, which indicates that two proteins within the cell are separated by <40 nm, was observed by confocal microscopy at ×600 magnification.

Western Blot Analysis-Western blot analysis was performed using the two-color Odyssey LI-COR (Lincoln, NE) technique as described previously (27). A polyclonal antibody to Prdx6 was used (22) at a dilution of 1:1,500 in blocking buffer. A polyclonal antibody made in rabbits was used to detect surfactant protein-A (40), and a mouse monoclonal antibody was used to detect GAPDH (EMD Millipore). Secondary antibodies were IRDye 800 goat anti-rabbit and IRDye 700 goat anti-mouse (Rockland, Gilbertsville, PA) for imaging on the green 800 nm and red 700 nm channels, respectively.

Statistical Analysis-Values are presented as means ± S.E. Statistical significance was assessed with SigmaStat software (Jandel Scientific, San Jose, CA). Group differences were evaluated by one-way analysis of variance or by Student's t test as appropriate. Differences between mean values were considered statistically significant at p < 0.05.
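Both co-localization indices described above have simple pixel-wise definitions. A numpy sketch over two invented channel images; the Manders value computed here is one standard variant (the fraction of red intensity on pixels with green signal) and may differ in detail from the ImageJ plug-in used in the study:

```python
# Pixel-wise co-localization indices, as described for the ImageJ analysis
# above, computed over two invented channel images.
import numpy as np

rng = np.random.default_rng(0)
red = rng.integers(0, 256, size=(64, 64)).astype(float)           # Prdx6 channel
green = np.clip(0.6 * red + rng.normal(0, 30, (64, 64)), 0, 255)  # LAMP-1 channel

manders_m1 = red[green > 0].sum() / red.sum()              # 0 (none) .. 1 (perfect)
pearson_r = np.corrcoef(red.ravel(), green.ravel())[0, 1]  # -1 .. +1
print(f"Manders M1 = {manders_m1:.2f}, Pearson r = {pearson_r:.2f}")
```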
Results

Effect of Serine 32 Site-specific Mutation on Secondary Structure and Enzymatic Activities of the Recombinant Prdx6 Protein-Our earlier study of the recombinant rat Prdx6 protein using circular dichroism (CD) spectral analysis indicated that serine 32 to alanine (S32A) mutation alters the secondary structure of the protein with a markedly increased content of α-helices and a decreased content of β-sheets (20). This earlier study also found that the S32A mutation led to the loss of aiPLA2 activity of the protein (20). The CD spectrum of recombinant human Prdx6 protein in the present study indicates a negligible effect of S32A mutation on protein secondary structure (Fig. 2). We interpret the different results between the previous and present studies as an indication of structural instability of the mutant protein. Nevertheless, consistent with our previous findings for the rat protein, serine 32 to alanine substitution in human Prdx6 resulted in the loss of its aiPLA2 activity (Table 1). Peroxidase activity of the S32A mutated Prdx6 was not different from WT with H2O2 as substrate, but it was absent with PLPCOOH as substrate (Table 2). The loss of aiPLA2 and PLPCOOH peroxidase activities reflects the centrality of Ser-32 in the catalytic triad for aiPLA2 activity as well as the presence of Ser-32 in the putative phospholipid-binding site of the Prdx6 protein (13,20). Because of potential instability of Prdx6 associated with mutation of the Ser-32 to alanine, we evaluated a protein in which Ser-32 in human Prdx6 was mutated to threonine. The secondary structure of recombinant S32T Prdx6 as determined by CD spectral analysis indicated negligible difference from WT Prdx6 (Fig. 2). There also was no effect of this mutation on Prdx6 aiPLA2 activity (Table 1). Kinetic parameters for the PLA2 activity of WT and S32T-Prdx6 were investigated by varying DPPC concentrations. Double-reciprocal plots (Fig. 3) and the calculated Michaelis-Menten constant (Km) and maximal velocity (Vmax) for aiPLA2 activity (Table 1) were similar for the WT and threonine mutant proteins. Thus, the S32T-Prdx6 recombinant protein fully retained its PLA2 enzymatic activity, in contrast to the S32A mutation. The S32T mutation also did not have any effect on the peroxidase activity of the recombinant Prdx6 protein with either H2O2 or PLPCOOH as substrate (Table 2).

[Table 1 (title): aiPLA2 activity and kinetic constants for recombinant WT and Ser-32 mutant Prdx6. Kinetic constants were calculated from the data presented in Fig. 3.]

Effect of Ser-32 Site-specific Mutations on Prdx6 Enzymatic Activities in Cells-To confirm our in vitro findings and to study the effect of both Prdx6 S32A and S32T mutations in cells, we expressed human wild type or mutant protein in Prdx6 null MPMVEC using the HMD lentivirus transfer vector. As detected by green fluorescent protein (GFP) expression, the efficiency for infection of Prdx6 null MPMVEC with human WT, S32A, and S32T mutant Prdx6 viruses was over 90% and equivalent for all constructs (Fig. 4A). aiPLA2 and peroxidase activities in cell lysates were evaluated to determine the effect of the Prdx6-S32A and S32T mutations (Fig. 4B). Assay of peroxidase activity in the same MPMVEC cell lysates confirmed that the S32T mutation has no effect on Prdx6 enzymatic function, with comparable levels of peroxidase activity using either H2O2 or PLPCOOH as substrate (Table 2). The significantly greater activity with H2O2 as substrate can reflect the presence of catalase and other H2O2 peroxidases besides Prdx6. However, "knock-out" of Prdx6 has shown that there are essentially no other phospholipid hydroperoxidases in lung cells, so that reduction of PLPCOOH as determined by the peroxidase assay reflects cellular Prdx6 activity (41,42). MPMVEC expressing S32A mutant protein showed a significant decrease of peroxidase activity with PLPCOOH as substrate, but there was no effect on activity with H2O2 substrate (Table 2). As described above, we interpret this substrate-related difference in the peroxidase activity of S32A-Prdx6 as reflecting an effect of the mutation on phospholipid binding.
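The kinetic analysis above amounts to a straight-line fit in double-reciprocal coordinates, 1/v = (Km/Vmax)(1/[S]) + 1/Vmax. A sketch with invented rate data (not the measurements behind Table 1 or Fig. 3):

```python
# Lineweaver-Burk estimate of Km and Vmax from invented Michaelis-Menten
# data; substrate concentrations and velocities are placeholders.
import numpy as np

s = np.array([25.0, 50.0, 100.0, 200.0, 400.0])  # [DPPC], arbitrary conc. units
v = np.array([12.0, 20.0, 30.0, 40.0, 48.0])     # rate, arbitrary activity units

slope, intercept = np.polyfit(1 / s, 1 / v, 1)   # fit 1/v against 1/[S]
vmax = 1 / intercept
km = slope * vmax
print(f"Km ~ {km:.0f}, Vmax ~ {vmax:.0f}")       # ~100 and ~60 for these data
```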
Mutation of Serine 32 Abolishes Prdx6 Targeting to Lysosome-related Organelles in A549 Cells-Our previous studies have demonstrated Prdx6 expression in lysosomes, LB, and cytosol of lung epithelial cells (12). Within LB, Prdx6 was predominantly localized to the vesicular lumen (21). A 10-amino acid N-terminal sequence (31DSWGILFSHP40) was shown to determine Prdx6 organellar targeting (13). The serine at position 32 (Ser-32) is necessary for this subcellular distribution because the lamellar body localization of Prdx6 was abolished by mutation of its serine 32 to alanine (S32A) (13). In this study, we evaluated protein targeting to lysosome-like organelles by generating GFP-tagged Prdx6 mutants with Ser-32 to alanine (S32A) or Ser-32 to threonine (S32T) mutation. The wild type A549 cells show small vesicular structures that stain positively for LAMP-1 and also for GFP-Prdx6, as indicated by the arrows in Fig. 5. As expected based on our previous results (13), the S32A-Prdx6 mutant also shows vesicular staining with LAMP-1, but there is relatively little corresponding GFP-Prdx6 staining (Fig. 5). Unexpectedly, because enzymatic activities were unaffected, the S32T-Prdx6 mutant also showed minimal GFP-Prdx6 vesicular staining. The decreased co-localization of GFP-Prdx6 and LAMP-1 with both the S32A- and S32T-Prdx6 mutants is supported by the decreases in Manders' overlap and Pearson's correlation coefficients calculated from total pixels in Fig. 5 (Table 3). Thus, both the S32A and S32T mutations result in a similar loss of Prdx6 organellar targeting in A549 cells (Fig. 5) despite their opposite results with respect to enzymatic activities (Tables 1 and 2). However, caveats associated with the co-localization procedure, namely the small size of the organelles and uncertainty regarding identity of the vesicles in A549 cells, the necessity to rely on transient transfection with subsequent variable levels of Prdx6 expression, and the requirement to deplete cytosolic Prdx6 prior to immunostaining, prevent a definitive conclusion regarding targeting of the protein. This led us to consider other approaches to evaluate the effect of the S32T mutation on targeting of Prdx6 to lysosome-related organelles (LRO). To accomplish this, we produced and studied a S32T-Prdx6 knock-in mouse.

Characterization of Prdx6-S32T Mouse-Histological evaluation using H&E-stained sections indicated that the mutation of Ser-32 in Prdx6 to threonine had no observable effect on mouse lung morphology as compared with WT (Fig. 6). Evaluation of lung tissue sections from WT mice by immunofluorescence demonstrated the presence of Prdx6 in LB by its co-localization with LAMP-1, which was used as a marker of lysosome-related compartments (Fig. 7A). However, although lungs from S32T-Prdx6 mutant mice demonstrated Prdx6 fluorescence in epithelial cells, the fluorescence did not appear to be present in LB, i.e. organelles stained with LAMP-1 (Fig. 7B). The Manders' overlap and Pearson's correlation coefficients, calculated from total pixels in Fig. 7, A and B, indicate a marked decrease in co-localization in the S32T-Prdx6 lung sections as compared with wild type (Table 3). Thus, as for the A549 cells, the presence of the S32T mutation in Prdx6 appeared to inhibit its targeting to LB. We further evaluated LB targeting by the isolation of LB from WT and S32T mice. By immunoblotting, Prdx6 was present in LB from WT mice but was essentially absent in lamellar bodies isolated from lungs of S32T-Prdx6 mice despite approximately equal Prdx6 expression in WT and mutant mouse lung homogenates (Fig. 8A). As a control, the expression of surfactant protein-A, a protein that is predominantly localized to LB and the extracellular space in lungs, was not different in LB isolated from WT and S32T-Prdx6 mice. Absence of Prdx6 protein was associated with markedly decreased aiPLA2 activity in LB of S32T-Prdx6 mutant mice (Fig. 8B).
Confirming the findings for Prdx6 expression, aiPLA2 activity in homogenates of whole lung tissue from these mutant mice was similar to levels in WT lungs (Fig. 8B). The essentially normal content of Prdx6 in the whole lung of the S32T-Prdx6 mutant mouse but its absence in LB is consistent with a defect in Prdx6 trafficking.

[Figure 5 caption: Serine 32 mutation abolishes targeting of Prdx6 to lysosome-related organelles in A549 cells. Expression of a GFP-Prdx6 fusion protein was used to evaluate Prdx6 expression in lysosome-related organelles of A549 lung epithelial cells. Upper panels, lysosome-related organelles immunostained with LAMP-1 (red), a lysosome marker protein, in wild type (WT), serine to alanine (S32A), and serine to threonine (S32T) Prdx6 mutants. Lower panels, expression of GFP-Prdx6 (green). The arrows in the WT panels indicate an example of vesicles with co-localization of the two markers.]

[Figure 7 caption: S32T mutation abolishes Prdx6 targeting to lamellar bodies in mouse lungs. LAMP-1 immunofluorescence (green) is used as a marker for lysosome-like organelles (left panels). Immunofluorescence for Prdx6 expression (red) is shown in the middle panels. Co-localization is indicated by yellow fluorescence in the merged images (right panels). All three panels for each condition are of the identical field. The boxed area in the upper LAMP-1 panel for each condition is enlarged in the lower panels. A, immunofluorescence of wild type lungs. B, immunofluorescence of S32T-Prdx6 lungs.]

[Table 3 (title): Analysis for co-localization of lamellar body membrane marker (LAMP-1) and Prdx6 using Manders' overlap and Pearson's correlation coefficients.]

Serine 32 to Threonine Mutation Abolishes Prdx6 Interaction with the 14-3-3ε Molecular Chaperone-Our earlier studies indicated that intracellular trafficking of Prdx6 to lysosomal organelles along the exocytotic pathway relies on its binding to a molecular chaperone, 14-3-3ε. We demonstrated that amino acids 31-40 of Prdx6, comprising the targeting peptide, directly bind to 14-3-3ε and showed that the mutation of Ser-32 to alanine in this sequence markedly decreases interaction of the peptide and chaperone in vitro (21). To evaluate whether the S32T mutation also interferes with Prdx6 binding to 14-3-3ε, the possible interaction of the two proteins in the intact cell was studied with the Duolink in situ proximity ligation assay. A signal indicating proximity of the two proteins was detected in WT lungs but not in Prdx6-S32T mouse lungs (Fig. 9). These findings provide additional evidence that 14-3-3ε modulates Prdx6 intracellular trafficking to lysosomal organelles and provide a mechanism for the altered targeting of Prdx6-S32T mutant proteins to lamellar bodies.

Effect of Altered Lamellar Body Targeting of Prdx6-S32T on Lung Surfactant Phospholipid Metabolism-We reasoned that impaired trafficking of Prdx6 to LB in mice would result in a phenotype that is similar to the Prdx6 null mouse with respect to LB phospholipid metabolism. Our previous studies in Prdx6 null mice indicated that Prdx6 plays an important role in the metabolism of lung surfactant DPPC. DPPC is the major bioactive phospholipid component of lung surfactant (1,43). After being endocytosed by alveolar epithelial cells, lung surfactant DPPC is degraded in LB by lysosome-type aiPLA2 (i.e. Prdx6). Lyso-PC, a product of aiPLA2 activity, serves as the substrate for reacylation to regenerate DPPC by the remodeling pathway.
Absence of aiPLA2 activity in the Prdx6 null mouse results in the accumulation of phospholipids in lung tissue (33). However, results for the null mouse are confounded by the absence of Prdx6 from the whole cell. Thus, the precise role of LB Prdx6 in lung PC metabolism has not as yet been demonstrated. The present model, where Prdx6 is absent only in LB, represents an opportunity to evaluate the specific role of LB Prdx6 in lung surfactant phospholipid metabolism. To evaluate the role of LB Prdx6, we investigated the uptake and degradation of extracellular DPPC in isolated perfused mouse lungs. Phospholipid composition was measured in the lung tissue, lung lavage (BAL) fluid, and isolated lamellar bodies from WT and S32T mutant mouse lungs. Total phospholipid, DSPC, and PC contents in all three compartments in S32T mutant mouse lungs were significantly increased compared with WT lungs (Fig. 10, A-C). The increase in phospholipids in the whole lung as well as the LB and BALF phospholipids suggests that synthesis of phospholipids and their secretion by lamellar body exocytosis were unaffected by the S32T-Prdx6 mutation. We next evaluated the recycling (uptake and metabolism) of lung phospholipids. Uptake of [3H]DPPC by the isolated perfused lung was measured at 2 h after instillation of mixed unilamellar liposomes into alveolar spaces and was not significantly different in WT versus S32T-Prdx6 mouse lungs (Fig. 11A). The lyso-PC fraction is generated by PLA2 activity, whereas the aqueous fraction is derived from further breakdown of lyso-PC to choline and its metabolite derivatives. The unsaturated PC fraction represents PC containing unsaturated fatty acid in the sn-2 position, which is generated by the reacylation of lyso-PC. Total lung degradation of DPPC was decreased by ~50% (p < 0.05) in Prdx6-S32T lungs compared with wild type lungs (Fig. 11B). The decrease in dpm recovery was greatest in the unsaturated PC fraction, whereas the lyso-PC and aqueous fractions showed decreases of a lesser degree (Fig. 11B). We have previously shown similar changes, compared with WT, in Prdx6 null lungs and in lungs treated with a transition state-mimic inhibitor of Prdx6 PLA2 activity (17,33). These data indicate that targeted lamellar body depletion of Prdx6 in S32T lungs affects lamellar body phospholipid degradation, leading to accumulation of phospholipids in lungs and LB.

Discussion

A major goal for this study was to compare the substitution of Ala versus Thr for Ser-32 in Prdx6 to evaluate the effect on enzymatic activities of the protein. The Ser-32 residue is a component of the Prdx6 catalytic triad and is specifically required for the PLA2 activity of Prdx6 (20). This residue is conserved in mammalian (human, bovine, rat, and mouse) Prdx6 (9,25), avian (chicken, Gallus gallus) and amphibian (frog, Xenopus tropicalis) Prdx6 (GenBank™), many species of fish Prdx6 (44), insect (Drosophila dPrx 2540 and dPrx 6005) (45), plant (cress, Arabidopsis thaliana) (GenBank™), and yeast mitochondrial thioredoxin peroxidase (45). The Drosophila, plant, and yeast proteins are 1-cysPrdx enzymes that are homologous to Prdx6. The only 1-cysPrdx enzyme that we have identified in the literature as lacking Ser in position 32 is the yeast nuclear thioredoxin peroxidase, which has been observed to be the least homologous of any member of the family (45). This high degree of conservation of the Ser-32 residue is compatible with its key role in Prdx6 enzymatic function.
However, to date, only the mammalian enzymes have been shown to exhibit PLA2 activity. Mutation of Ser-32 to Ala abolished Prdx6 PLA2 activity in human (this study and Ref. 25) and rat (20) Prdx6, but the substitution of Thr for Ser had no effect on enzymatic activity. We have not found any instance of the substitution of Thr for Ser-32 in a natural Prdx6/1-cysPrdx protein.

A secondary goal of this study was to evaluate the effect of targeted depletion of lamellar body PLA2 on lung phospholipid metabolism. Our earlier studies of mice either lacking or overexpressing Prdx6 have indicated that the aiPLA2 activity of this protein plays a major role in the regulation of lung surfactant phospholipid homeostasis (17,33). Treatment of lungs with MJ33, a specific PLA2 inhibitor, markedly diminished the degradation of surfactant DPPC after its internalization by lung type 2 alveolar epithelial cells in situ (17,33). However, it was not possible using these latter models to specifically investigate the role of lamellar body PLA2 in lung surfactant phospholipid turnover because the genetic manipulations or use of the inhibitor resulted in global inhibition of PLA2 activity. The phenotype of the Prdx6-S32T knock-in mouse, with Prdx6 depletion only from LBs of lung epithelial cells, gave us the opportunity to study the specific role of lamellar body aiPLA2 activity in the degradation and remodeling of lung surfactant PC. Analysis of protein expression in lung tissue and lamellar bodies isolated from WT and S32T mice confirmed that Prdx6 was equally expressed in WT and mutant mouse lungs but was essentially absent in lamellar bodies isolated from mutant lung homogenates. The S32T Prdx6 mutant lungs showed significantly diminished degradation of DPPC and the accumulation of phospholipids. Although this study evaluated a single time point in terms of mouse age, we have shown previously that the lungs of Prdx6 null mice accumulate DPPC, PC, and total phospholipids at a linear rate between the ages of 4 and 48 weeks (17). The lipid composition of wild type lungs (normalized to body weight) of a similar age range was constant. The values obtained in this study of mice with absent LB Prdx6 were nearly identical to values that were obtained 10 years ago for Prdx6 null mice of a similar age with absent Prdx6 in all lung compartments (17). Whereas the endoplasmic reticulum is the primary site for synthesis of lung surfactant phospholipids by the de novo pathway, the LB are the site for phospholipid (surfactant) "storage" (1,5). Our previous studies have shown that LB are also a site for degradation or remodeling of phospholipids that have been endocytosed from the alveolar space (recycled surfactant) (4, 15-17, 33). The results of this study confirm that phospholipid degradation and remodeling in LB requires Prdx6 and that LB Prdx6 indeed has an important role in the normal turnover of lung phospholipids.

Our earlier studies indicated that Prdx6 localization to LB may rely on the protein binding to 14-3-3ε, a molecular chaperone that is known to facilitate transport of signaling molecules along the secretory pathway (21). Serine 32 to alanine mutation in the Prdx6(31-40) amino acid lysosomal targeting motif diminished the interaction of Prdx6 with 14-3-3ε in vitro and in cells and abolished lysosomal localization of Prdx6 (21). This mutation also resulted in the inactivation of the aiPLA2 activity of Prdx6.
In this study, the serine to threonine substitution in the Prdx6-S32T knock-in model showed no effect on protein enzymatic activity but resulted in the loss of protein interaction with the 14-3-3ε chaperone molecule, thus resulting in a lack of targeting to LB and presumably to other lysosome-like organelles. Threonine, like serine, is a small weakly polar amino acid, and substitution of one for the other generally has little effect on enzymatic activity (47). Thus, Thr can substitute for Ser in a range of enzymatic activities, including the PLA2 activity of Prdx6. Many protein kinases as well as phosphatases catalyze phosphorylation/dephosphorylation of either serine or threonine sites in proteins with generally similar activities toward either amino acid residue. However, there are some reports that have shown a change in enzymatic activity with serine to threonine mutation. For example, activity of alcohol dehydrogenase from Thermoanaerobacter ethanolicus, using 2-propanol as a substrate, was increased in the S39T mutant without a significant effect on NADPH binding; it was proposed that the serine to threonine substitution permitted a change in the steric environment of the active site without disrupting the essential proton relay system in which the Ser-39 hydroxyl group participates (48). As another example, the T1S mutation of the 26S proteasome decreases proteolytic activity by an order of magnitude (53). Also, changing Ser-65 to threonine in the green fluorescent protein (GFP) chromophore region was reported to stabilize the hydrogen bonding network in the chromophore, resulting in the enhancement of the fluorescent signal (49). In the present results, decreased binding of S32T-Prdx6 to an important chaperone protein, 14-3-3ε, resulted in altered transport to lung lamellar bodies, with subsequent impairment of LB PLA2 activity and altered lung surfactant phospholipid metabolism. Changes in phospholipid content represent a hallmark for lysosomal storage disorders that result from the deficiency of LB hydrolases, and increased lung phospholipid content of 4-6-fold has been reported in lungs of patients affected by sialidosis and Gaucher and Sandhoff diseases (51). A similar increase in lung phospholipids has been found in mouse models of these diseases as, for example, the β-hexosaminidase-deficient mouse (50). We have recently evaluated pearl mice that are a model of the Hermansky-Pudlak syndrome, a disease of protein trafficking and organellar dysfunction (52). These lungs showed impaired Prdx6 targeting to LB and lung phospholipidosis (54). Understanding the mechanisms that regulate localization of lysosomal cargo proteins may lead to the development of novel therapies for some lysosomal storage diseases.

In conclusion, this study demonstrates that Thr can substitute for Ser at the active site for the PLA2 activity of Prdx6. The study also presents a mouse model of targeted depletion of Prdx6 in the lung lamellar bodies and confirms the role of these organelles in lung surfactant phospholipid metabolism. Finally, we confirm that Prdx6 subcellular targeting to lysosome-related organelles relies on its interaction with the chaperone molecule 14-3-3ε, thereby establishing the importance of the Ser-32 residue of Prdx6 for protein-protein interactions and subsequent organellar localization of Prdx6.

Author Contributions-E. M. S. participated in project design; performed histology, PAGE, and peroxidase and Duolink assays; generated constructs for protein expression in cells;
and wrote the first draft of the manuscript; C. D. performed phospholipid metabolism studies and PLA2 assays; S. Z. generated lentivirus constructs; J. Q. T. prepared lung sections for histologic analysis; L. G. prepared the codon-optimized plasmid and generated recombinant Prdx6; T. R. generated the constructs for the knock-in mice; S. I. F. participated in the conceptualization of the project and supervised the generation of mutant constructs; A. B. F. participated in project conceptualization and design and edited the manuscript.
A Review of Soil Nailing Design Approaches

S.N.L. Taib, Senior Lecturer, Department of Civil Engineering, Faculty of Engineering, Universiti Malaysia Sarawak

I. INTRODUCTION

Soil nailing is a relatively new method, which has been used for over 3 decades for soil reinforcement purposes. It is an in-situ earth reinforcing method, in which the primary applications are to retain excavations or cuts and to stabilise slopes. The principal reinforcing materials, the nails, are inserted into the earth as passive inclusions providing reinforcement to the earth that helps the earth structure to gain its overall strength. A factor which makes the soil nailing technique more desirable than other earth reinforcing methods when performed on cuttings or excavations is its easy and flexible top-down construction (excavation, nail installation and placement of shotcrete), as shown in Figure 1.

[Figure 1: The three stages of the soil nailing construction process [7].]

2. DESIGN ACCORDING TO HA 68 [4]

The Department of Transport of the UK [4] employs the limit state principles incorporating partial safety factors as suggested by [3] for geotechnical engineering design. Any design is based on ultimate and serviceability limit states. The ultimate limit state occurs when a collapse mechanism forms, while the serviceability limit state might occur during the working or service condition of the structure, in which a situation such as movement in the structure may affect the functionality of the structure or of the adjacent structures or services.

HA 68 gives a single unified effective stress design approach for all types of reinforced highway earthworks with slope angles to the horizontal in the range 10° to 70°, and soil types in the strength range φ' = 15° to 50°. Values of c' may be included, as well as pore water pressures and limited uniform surcharge applied at the top of the slope.

A limit equilibrium approach is adopted, based on a two-part wedge mechanism with the inclusion of partial safety factors. Figure 2 shows the geometry of HA 68's two-part wedge mechanism. Equilibrium is reached when the driving forces, which consist of the self weight of the structure and surcharge loads multiplied by the load partial factor (of predetermined value of unity), are in equilibrium with the resisting forces, which are the shear strengths of soil and the reinforcement forces divided by the material partial safety factors of predetermined values suggested by the Department of Transport. The assumption is made that the nails' contribution is purely axial. Shear stress and bending stiffness are ignored in this design.
According to the Department of Transport, the two-part wedge mechanism is preferable to a log spiral mechanism because it provides a simple basis for obtaining safe and economical solutions and is particularly suitable for reinforced soil, including soil nailed structures. It is inherently conservative when compared to more exact solutions but allows simple hand check calculations to be carried out. Two two-part wedges are introduced in this manual, namely the Tmax and To mechanisms. The Tmax mechanism identifies the location in the structure which needs the maximum total horizontal reinforcement; meanwhile, the To mechanism is one where no reinforcement is needed. For inclined reinforcement (when the angle of nail inclination is non-zero) the variables for these mechanisms are presented as Tmax and To for the inclined case (refer to Figure 3). For inclined reinforcement, the values of Tmax and Pdes shall be determined next. Tmax is the total reinforcement force, at the nail inclination angle, for the most critical two-part wedge mechanism, while Pdes is the design nail capacity per metre length of slope, based on the rupture strength of the reinforcement or the pullout capacity of the reinforcement. HA 68 in Section 2.23 on page 2/4 comments that the strength mobilised in the reinforcement is taken to be the lesser of the design rupture strength and the design pullout resistance of the length of reinforcement beyond the failure surface (Le) whenever the failure surface cuts a layer of reinforcement or row of nails. The lesser value is chosen to govern the design since this is the value that becomes critical at failure.

The Department of Transport suggests nails of the same strength capacity to be utilised for reinforcing the slope. The number of reinforcements per unit length, Nn, not including the basal layer, is directly obtained from the values of Tmax and Pdes, where

Nn = Tmax / Pdes (Equation 1)

Due to this assumption of similar capacity for all the nails, the manual suggests optimum variable vertical layer spacing, owing to the requirement to avoid local over-stressing of any layer of reinforcement, which would otherwise introduce progressive failure of the whole structure, especially for reinforcement having identical capacity. In HA 68 designs, the total reinforcement force increases parabolically down to the bottom of the structure, and the decrease in vertical spacing going down the slope is seen as desirable to avoid local instability. The equation that governs the spacing is

zi = [(i - 1)/Nn]^(1/2) x H (Equation 2)

where zi = depth of the ith layer of reinforcement below the crest of the slope.

Once the Tmax and To mechanisms are located and the total number of reinforcements (Nn + 1, including the basal layer) is obtained, a drawing of the soil nailed structure's profile can be produced. In general, HA 68 gives a step by step design procedure for soil nailing.
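As a minimal numerical illustration of Equations 1 and 2 above, the short Python sketch below computes the number of nail layers and their depths; the input values for Tmax, Pdes and H are hypothetical placeholders, not values from HA 68:

```python
import math

def ha68_nail_layout(t_max, p_des, height):
    """Sketch of HA 68 Equations 1 and 2 with hypothetical inputs.
    t_max : total required reinforcement force per metre run (kN/m)
    p_des : design capacity of one nail per metre run (kN/m)
    height: slope height H (m)
    Returns Nn (layers excluding the basal layer) and depths zi below the crest."""
    n_n = math.ceil(t_max / p_des)                      # Equation 1, rounded up
    depths = [math.sqrt((i - 1) / n_n) * height
              for i in range(1, n_n + 1)]               # Equation 2
    return n_n, depths

# Hypothetical example: Tmax = 180 kN/m, Pdes = 30 kN/m, H = 8 m
n_layers, z = ha68_nail_layout(180.0, 30.0, 8.0)
print(n_layers)                   # 6 layers above the basal layer
print([round(d, 2) for d in z])   # vertical spacing decreases with depth
```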
DESIGN ACCORDING TO BS 8006 [1]

In the design approach of BS 8006, the limit state principle is again adopted. The limit equilibrium approach is applied, wherein the internal as well as the external stabilities of the structure are checked against the limit states. As in HA 68, in order to be consistent with suggestions in [3], partial safety factors are included in its design calculation. The design of soil nailed structures in existing ground is presented in Section 7 (Design of Reinforced Slopes) of the standard, while the design of soil nailing walls is presented as part of Section 6 (Design of Walls and Abutments).

BS 8006 is more comprehensive than HA 68 in its explanation of the available approaches and assumptions from which the designer can choose. A comprehensive list of suitable load partial factors and material partial factors is given, and these are related to different construction conditions and situations. Two methods of searching for the critical failure are presented: the two-part wedge (as in HA 68) and the log spiral method. BS 8006 advises its users to include shear resistance along with the known tensile reinforcement provided by the nails if the resistance is significant.

BS 8006 suggests the stages that the designer can follow in soil nailed structure design, which are:

1. The determination of the position of the critical slip surface and the resisting force or moment to maintain equilibrium of the active zone.
2. The determination of the tensile and shear loads for an initial constant spacing and inclination of nails of constant stiffness and length.
3. A check for each level, allowing for stages of construction, against failure due to:
   a. tension in the nail at the slip surface,
   b. pullout of the length of nail in the resistant zone,
   c. bending and shear in the nail near the slip surface, and
   d. bearing failure of soil against the nail.

The designer can now select a new and improved pattern and disposition of nails and re-analyse. It should be noted here that the shear loads in the nails can be obtained from [8] and [7], in which, according to BS 8006, a technique based on maximum plastic work with limits placed on the allowable lateral earth pressure on the nails and bond resistance is applied. Another method to obtain shear loads is introduced by [2], where they adopted the theory of deflection of laterally loaded narrow piles to determine nail deflections and kinematical compatibility to determine the value of the resulting shear forces in the nails. BS 8006 gives more freedom in many aspects of choosing the most suitable design approach than HA 68, which is more directive. It depends on the experience and the knowledge of the designer to choose the appropriate approaches for a suitable design, with guidance from the design manual.

RECOMMENDATIONS BY RDGC [7] (PROGRAM CLOUTERRE)

The French initiated the Clouterre program in 1986 [7], jointly funded by the French government and private industry, with a budget of the order of $4 million and with 21 individual private and public participants. The program involved three large-scale experiments in a prepared fill of Fontainebleau sand and the monitoring of six full-scale in-service structures. The results of the Clouterre program have been published and form the basis for the soil nailing design approach adopted in France.

The report by RDGC on the program does not specifically provide a step-by-step procedure to design a soil nailed structure as presented in HA 68. It sets up guidance on design and special criteria that must be considered. A designer is recommended to start with a preliminary design that will enable him or her to later define essential characteristics of the structure, such as the resistance values, lengths and spacings required in the final design. Preliminary design charts are used to seek the characteristics mentioned above for the simplest condition of the structure; for example, identical nails evenly distributed, homogeneous soil and nails working only in tension. Several design charts are presented in the report, i.e. [6].
The report elaborates on the principles used to assess the stability of soil nailed structures. In accordance with [3] for geotechnical design, and as can be seen in HA 68 and BS 8006, the conventional global safety factor is replaced by partial resistance and load factors. Apart from that, suggestions on characteristic values of the loads and resistances are also presented. The characteristic value is defined as the ratio of the average value and the distribution coefficient. The coefficient is applied to make sure that a minimal probability is not achieved. The report mentions that analysis of stability and design of soil nailing can be done at both the ultimate limit state and the serviceability limit state.

The limit equilibrium method and the finite element method are suggested as the basis for stability analysis and design of soil nailing. The limit equilibrium method includes an examination of the equilibrium between the soil and the strength of materials used in the slope, while the finite element method is used to calculate the amount of deformation that the structure will undergo (to check whether it is either below or beyond a certain acceptable threshold value). Due to the unavailability of a means to calculate the deformation of the slope in this report, the design is limited to the limit equilibrium method, which has to be checked not only when the structure is completed but also during each phase of construction.

The report presents 4 types of failure modes based on scaled-down laboratory models tested to failure, which are breakage of the nails, lack of friction between the soil and nails, instability during the excavation process and overall sliding of the reinforced soil mass. These failure modes, which were observed in laboratory models, justified the use of limit equilibrium in designing soil nailed structures, as all these failures involved slip surfaces (except for the lack of friction case). A second justification for the use of the limit equilibrium method is given by two actual structures, which failed and exhibited pullout and tensile failures respectively. The safety factor was checked by analysing the potential failure surfaces and was found to be near unity.

An interesting point that is included in the report concerns the assumption made in any limit equilibrium calculation on the simultaneous mobilisation of resistances. These resistances are the resistances of the nail, particularly its tensile strength, shear resistance in the soil, pullout resistance of the nail (limit skin friction, fmax) and passive pressure at failure of the soil normal to the nail, which in the actual condition do not act simultaneously in the structure. Further justification of this has to be done through experimental work in order to gain more confidence in the use of the limit equilibrium method in soil nailing design. However, according to the report, the assumption of simultaneous mobilisation of resistances is, in spite of everything, still a good approximation of the actual, and complicated, behaviour of soil nailed walls.
DESIGN ACCORDING TO FHWA [5]

As with other design approaches, the FHWA also applies the limiting equilibrium method in its soil nailing design. Specifically, the manual utilises the slip surface limiting equilibrium method, which is used by all current practical design methods of soil nailing. Two limit states, as with the other design recommendations, are considered, namely the strength limit state (ultimate limit state) and the service limit state. Another limit state, known as the extreme limit state, which belongs to the strength limit state, recognises the structure under extreme loads such as seismic loading.

The manual recognises the benefits of utilising the slip surface limiting equilibrium method compared to the earth pressure method, and these are summarised as below:

• To date, virtually all designers have utilised the approach, and there are no current empirical earth pressure recommendations sufficient to handle the variety of conditions faced in soil nailing, such as soil types, geometries and loading.
• Several factors inherent in the soil nailing technique, for example the heterogeneity of soils, introduce complexity in the use of the earth pressure method in soil nailing.
• Another drawback of using the earth pressure system is the definition of the locations of the maximum tension line for each of the reinforcements. Again, it is particularly complex to define these locations due to the variety of conditions encountered in soil nailing, since the definition of these locations is dependent on the geometry of the system, the character of the reinforcements and the distribution of applied loading. A soil nailed structure will have a wide range of soil shear strengths and soil/grout bond capacities.

The manual introduces two approaches to design, which are the Service Load Design (SLD) and the Load and Resistance Factor Design (LRFD), which consider both limit states in their calculations. In SLD, allowable nail loads (tendon strength and pullout resistance of the nail) are suggested for the reinforcement strength, and recommended factors of safety are applied to the soil strength at both limit states, in which the allowable nail loads and factored soil strengths must exceed the applied loads. In contrast, in LRFD, at the strength limit state, the soil and nail design strengths, which are obtained by applying resistance factors to their ultimate strengths, must exceed the applied loads, which are multiplied by load factors. In service limit state analysis for both designs, overall displacement of the structure is recognised and, in certain cases, the crack width on the facing has to be observed to be within specified limits. This manual provides guidance on finding the displacements (i.e. maximum lateral movements in different soil types) for consideration in the service limit state analysis.

To consider the strength limit state, all potential modes should be considered: the external modes (that do not specifically intersect the reinforcement), the internal modes (i.e. failure either due to rupture of the reinforcement or failure of the facing), in which the global failure surface intersects the reinforcement, and the mixed modes, which include internal mode failure and where some part of the failure surface does not touch the reinforcement. The local stability of the facing during excavation is also highlighted in the manual since this failure cannot be directly assessed using the conventional stability analysis.
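The LRFD comparison described above reduces to a one-line check that factored resistance is at least the sum of factored loads. A sketch with purely hypothetical factors and magnitudes, not values from the FHWA manual:

```python
def lrfd_ok(nominal_loads, load_factors, nominal_resistance, resistance_factor):
    """Strength-limit-state check in the LRFD format described above:
    factored resistance must equal or exceed the sum of factored loads.
    All numbers used with this sketch are hypothetical placeholders."""
    factored_load = sum(q * g for q, g in zip(nominal_loads, load_factors))
    return resistance_factor * nominal_resistance >= factored_load

# Hypothetical check for one nail: two load components (kN) against an
# assumed ultimate pullout resistance of 130 kN.
print(lrfd_ok([50.0, 20.0], [1.25, 1.50], 130.0, 0.7))  # 91.0 >= 92.5 -> False
```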
A matter to point out from this manual is the reinforcing effect of the nail. The nail is seen to contribute three reinforcing effects, which are the rupture strength of the tendon, the pullout resistance and an additional effect not considered in other design manuals, which is the nail head connection to the facing. The manual continues to state that the nail contribution to the stability of the structure must be the least of three values, namely the tensile strength of the nail, the pullout resistance of the length of the nail beyond the slip surface, and the nail head strength plus the pullout resistance of the nail's length between the head and the slip surface.

Design of soil nails and wall facing is treated as a combined integrated soil-nail-wall "system". This is an effort to ensure that the design could suffice for long-term usage. This manual does not include the shear and bending contributions of the nail and only considers the tensile strength. The other contributions are neglected for the reason that they are mobilised only after significant deformation in the structure, and this assumption, according to the manual, is conservative.

Before establishing detailed design calculations, the designer has to choose the wall layout and dimensions (i.e. considering the environment in the vicinity of the location) together with the ground material properties and the subsurface properties in order to determine the preliminary nail pattern, which includes nail lengths, locations, spacings, strengths and inclinations. As a starting point, a uniform inclination of 15° is suggested for nails installed in predrilled holes (an inclination of lower than 5° should not be used since grouting will be particularly difficult). Uniformity also applies to the spacings, length (normally in the range of 0.6 to 1 times the height of the wall for cut slopes with modest backslopes and minimal surcharge loadings) and size of the nails. The nail lengths and required strengths are exposed to the same limiting factors as the nail spacing. They will increase in the presence of lower soil strengths, lower nail-ground pullout resistances, steeper face and backslope angles and higher surcharge loadings, which will alter the preliminary pattern. With the preliminary design, the designer can check for stability and make the necessary alterations in order to obtain a more satisfactory detailed design.

CONCLUSIONS

This paper reviews soil nailing design suggestions and manuals [HA 68 [4] (U.K.), BS 8006 [1] (U.K.), RDGC [7] (France) and FHWA [5] (USA)]. Theories on the mechanics of the technique were presented. General conclusions of this paper are:

• All methods employ the limiting equilibrium analysis and, in some methods, the use of partial safety factors is evident.
• In the limiting equilibrium analysis, concern about the assumption made on the simultaneous mobilisation of resistances should be addressed.
• HA 68 allows the users to follow step by step procedures in designing a soil nailed structure, as opposed to the other manuals (i.e. BS 8006, in which users are given various design suggestions for them to consider).
• The straightforward approach in HA 68 allows a spreadsheet program for designing to be produced. The design was applicable for up to ten reinforcements and for simple design of soil nailing. A proper method of finding the To mechanism still needs to be established for the HA 68 design program.
• It would be beneficial if more empirical data (i.e. pullout tests on nails) were included in these manuals for reference.
• The general failure mechanisms in soil nailing as provided in the manuals are tensile failure in the nail at the slip surface, pullout of the length of nail in the resistant zone, bending and shear in the nail near the slip surface, bearing failure of soil against the nail, instability during the excavation process and overall sliding of the reinforced soil mass. However, FHWA does not include the shear and bending contributions of the nail and only considers the tensile strength. The other contributions are neglected due to their mobilisation only after significant deformation in the structure, and this assumption, according to the manual, is conservative.
• Clouterre recommends that the limit equilibrium method has to be checked not only when the structure is completed but also during each phase of construction.
• FHWA includes the extreme limit state, which belongs to the strength limit state and recognises the structure under extreme loads such as seismic loading.
• An additional effect not considered in other design manuals, which is the nail head connection to the facing, is included in the FHWA manual.
• Further work on estimating the amount of displacement of soil nailed structures is recommended and shall complement the limiting equilibrium analysis.
Biological Ageing Research in Systemic Sclerosis: Time to Grow up?

Systemic sclerosis (SSc), often referred to as Scleroderma (tight skin), is characterized by an exaggerated formation of collagen fibers in the skin, which leads to fibrosis. Accumulating evidence now points toward three pathological hallmarks that are implicated in SSc, the order of which has yet to be determined: endothelial dysfunction, autoantibody formation, and activation of fibroblasts. This current book provides up-to-date information on the pathogenesis and clinical features of this severe syndrome. It is our hope that this book will aid both clinicians and researchers in dealing with patients with this clinical syndrome. In addition, we hope to shed more light on this rare and severely disabling syndrome, ultimately leading to better research and successful therapeutic targeting.

Introduction

Systemic Sclerosis (SSc) is an autoimmune disease that is typified by several characteristic hallmarks such as vasculopathy, immune activation and extensive fibrosis of the skin and inner organs (1). Although the disease has an overwhelming effect on morbidity and mortality, a cure or even a well defined pathogenic chain of events remains to be discovered. SSc is quite a rare disease (prevalence between 3 and 24 per 100,000 persons) and, as a consequence, it has taken a relatively long time to define well-recognised classification criteria. This initially hindered detailed research into the pathogenesis of this debilitating disease (2-7). However, during the last 20 years research has intensified and several significant leaps forward have been made, assessing susceptibility risk either via epidemiologic/environmental or genetic research. SSc susceptibility does not show typical Mendelian heritability, but appears multi-factorial, with an onset later in life. This implies that the effects of many small genetic variations may combine over time to precipitate the disease in a SSc susceptible individual. Recently, this dogma was further underscored by a genome wide association study in SSc, showing that there was not one single genetic factor posing enough risk to be fully accountable for SSc development (8). However, investigations of interaction networks composed of multiple genetic risk variants, which together culminate in a higher disease risk, are just starting in this field (9,10). Research focusing on environmental factors has initially yielded some interesting results. Environmental risk factors range from exposure to solvents and silicone breast implants to CMV and parvovirus B19 infection (11,12). Although interesting, these results remain not well established, due to the lack of replication or small cohort sizes (11). Silica exposure is an exception and seems to be a rather reproducible risk factor among multiple small cohorts and case-series. This even led to the incorporation of SSc in insurance fees for silica workers in some countries (11). Alongside these associations, a few studies failed to show an association between silica and SSc risk.
A recently published and highly anticipated meta-analysis on this matter was severely hampered by heterogeneity in the methods used by the separate studies (13). When we overview the results from these two fields of interest in SSc research, it becomes clear that the risk of developing SSc is highly unlikely to be fully explained by genetic factors on the one hand, while the field of epidemiology has failed to identify clear environmental factors on the other hand. Hence, these observations are suggestive of the presence of more subtle processes that may be involved in determining disease on a genetically susceptible background. Since SSc rarely develops at a very young age, it is logical to suppose that these processes may take place in the temporal dimension. More specifically, ageing at the level of cells, tissues and organs, i.e. biological as opposed to chronological ageing, might have an impact on development of the disease and has been increasingly implicated in SSc pathogenesis over the last few years. This review aims at critically describing findings coming forth from this area of research and attempts to place them in a hypothetical framework with regard to SSc pathogenesis.

What is biological ageing, how is it defined and how is it measured?

Biological ageing is ageing at the level of a cell, tissue or organ and, by extrapolation, the whole organism. It need not necessarily equate with the chronological age of the individual. Indeed, it can be used to explain inter-individual variation in the rate of ageing between individuals of the same chronological age. Extrapolation of cellular ageing to the level of the tissue or organ, or the whole organism, is not straightforward. To do so, one must take account of the number of senescent cells (generated by both replicative senescence and stress or aberrant signaling-induced senescence (STASIS)), their location, and similarly the number and location of cells lost through insult, in each respective organ or tissue, to gauge properly the effect on its functional capacity. Typically, functional capacity would be expected to decline with increasing biological age. The rate of biological ageing is influenced by the levels of oxidative insult at a cellular level, by lifestyle, socio-economic factors and environmental factors.

Telomeres

Telomeres are specialized nucleoprotein complexes at the end of eukaryotic chromosomes. They comprise tandem TTAGGG repeat arrays bound to a variety of proteins with roles in chromosomal protection, nuclear attachment and replication. Telomeres function to cap the chromosome, preventing chromosomal fusions and the recognition of the chromosome end as a DNA break. Telomeres facilitate chromosomal attachment within the correct subcellular compartment and have a critical role in DNA replication. The proteinaceous component of the telomere helps maintain its structural integrity and functions in sensing, signalling and repair of DNA damage (14). The length of telomeric DNA repeats shortens during the ageing of cultured somatic cells (e.g. fibroblasts, peripheral blood lymphocytes and colon epithelia), but the rate of shortening is also under both polygenic and environmental influences (15,16). As a consequence, telomere length reflects the "miles on the clock" of a given individual or cell type. The characteristic telomeric repeats typically end in a 3′ single-stranded guanine-rich overhang (17).
This is folded back into a double loop structure, comprising a large telomeric loop (the T loop), with the single-stranded repeat invading the adjacent double-stranded DNA helix to form a second loop, called the displacement, or D loop. This loop is stabilized by, and dependent on, a cluster of proteins called the shelterin complex, which allows cells to distinguish telomeres from sites of DNA damage (18). Of interest in this respect is another, non-shelterin, telomeric protein, the Werner syndrome protein (WRN), which is involved in the maintenance of telomeric stability (19,20). Mutations in the WRN gene cause the progeroid condition Werner syndrome. Notably, this syndrome is macroscopically quite similar to SSc, with features of scleroderma-like skin changes, calcinosis cutis and ulcera, and has therefore been advocated to merit a place in the differential diagnosis when considering SSc (21-23). However, the syndrome also has many features, such as hyperglycemia and osteoporosis, that are atypical for SSc, and Werner's is virtually never accompanied by Raynaud's phenomenon or the typical SSc-related autoantibodies (23).

Increased chromosomal damage has been repeatedly reported in SSc lymphocytes as well as fibroblasts (24-29). Most authors advocate that such damage is due to a higher amount of oxidative damage, caused by the production of reactive oxygen species (ROS) in the SSc inflammatory state (24,25). In addition, SSc fibroblasts produce more ROS than their healthy counterparts. It is reasonable to expect that, in the presence of such elevated levels of ROS, telomere biology would be implicated in the chromosomal aberrations observed in SSc. An initial study investigated telomere lengths of peripheral blood leukocytes (PBLs) and fibroblasts from 43 SSc patients, 182 SSc family members and 96 age-matched controls, using restriction fragment length polymorphism (RFLP) analysis with chemiluminescent-labelled probes. They observed an average loss of telomeric DNA in PBLs from SSc patients and their family members of 3 kb compared to the controls. This loss withstood correction for age and disease duration. Of interest, although telomeres in SSc fibroblasts were shorter overall compared to healthy control fibroblasts, this difference was not significant. The investigators did not observe an association between antibody profiles and telomere shortening. Furthermore, family members of SSc patients often had shorter telomeres compared to the patients. Two things can be distilled from this observation. Firstly, it seems unlikely that the telomeres shorten as a consequence of the disease; rather, shorter telomeres are a risk factor for SSc themselves, or a secondary effect of another risk factor. Secondly, following from the previous hypothesis, this risk factor might very well be a genetic one, considering the familial occurrence of the shortened telomeres regardless of age (30).

Another study addressing telomere length in SSc focused solely on females with the lcSSc phenotype. Forty-three lcSSc patients with an age ranging from 37 to 80 years were included. Terminal restriction fragment (TRF) analyses were used to determine telomere lengths in this study. Regression analysis showed significantly longer mean TRF lengths in lcSSc patients compared to their age-matched healthy counterparts. Moreover, these telomeres did not show the attrition usually observed with ageing.
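Both studies turn on whether a group difference in telomere length survives adjustment for age (and, in the first study, disease duration). The following is a minimal sketch of such an age-adjusted comparison; the cohort, effect sizes and variable names are invented for illustration and are not the data of either study.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic cohort: TRF length (kb) declines with age, and the "ssc"
    # group carries an extra ~3 kb deficit, mimicking the reported effect.
    rng = np.random.default_rng(1)
    n = 40
    age = rng.uniform(25, 75, 2 * n)
    group = np.repeat(["control", "ssc"], n)
    trf_kb = 10.0 - 0.03 * age - 3.0 * (group == "ssc") + rng.normal(0.0, 1.0, 2 * n)

    df = pd.DataFrame({"trf_kb": trf_kb, "age": age, "group": group})
    fit = smf.ols("trf_kb ~ group + age", data=df).fit()
    # The group coefficient is the age-adjusted deficit; here it recovers ~ -3 kb.
    print(fit.params)

If the group coefficient vanished once age entered the model, the crude difference would be an ageing artefact; the point of the first study is precisely that it did not.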
When the authors of this lcSSc study analyzed the results by defined age groups, the difference between the lcSSc and control telomere lengths was only significant beyond the fifth decade. Below 50 years of age, no difference was observed between healthy females and females with lcSSc. Noteworthy, patients using non-steroidal anti-inflammatory drugs (n=3) were observed to have longer telomeres than those not on NSAIDs (n=17) (31). It is noteworthy that using Southern blotting to determine terminal restriction fragment lengths also includes detection of subtelomeric region sequences, which are known to show interindividual variation. Consequently, these observations may indicate a subtelomeric component in lcSSc masking TTAGGG repeat attrition, simply as a matter of methodology.

Until now, the literature addressing the role of telomeres in SSc appears conflicting, but this may be due to the different clinical subsets of SSc investigated by these studies and/or methodological differences (see above). In this respect it is important to note that each single telomeric repeat is a potential topoisomerase cleavage site (32). Since anti-topoisomerase antibody (ATA) positive patients are usually not of the lcSSc subset but of the dcSSc subset, it is tempting to speculate that the presence of these antibodies contributes to the differences between the studies. This is plausible considering that the first study included 40% dcSSc patients. Although that study states that no differences were observed with the antibody status of these patients, the authors do not provide numbers or data on this matter. Considering the size of both these studies, it is unlikely that they harbour enough power to provide a conclusive answer on the involvement of ATA positivity in telomere shortening. A second point of consideration is the dissimilar methodology used in the two studies. The studies used gels of different percentages, affecting resolution; this is partially reflected by the differences in variation of the mean TRF, which was remarkably larger in the initial study. Considering the currently increasing number of discrepancies coming forth from the use of diverse methodologies in telomere measurements, a study with a sufficient number of fully clinically characterized patients, analyzed by a single method, is essential to define the exact impact of different SSc clinical features on telomere length (33).

Telomerase

Telomerase is a holo-enzyme able to synthesize novel telomeric DNA. Typically, in the absence of telomerase activity (or of a second mechanism, alternative lengthening of telomeres, ALT), telomeres in somatic cells will gradually shorten, resulting in cell growth arrest and eventual apoptosis. Telomerase activity is able to circumvent these processes by adding new TTAGGG repeats, thus extending the cell's proliferative lifespan and combating the cellular ageing process (14). Telomerase has been a target of investigation in SSc several times, although each of the respective studies focused on different aspects of telomerase biology. A synopsis of these studies is presented below.

One study investigating the role of telomerase in SSc hypothesized that telomerase activation may participate in the activation and proliferation of circulating lymphocytes.
This was based on a study in rheumatoid arthritis (RA) and pigmented villonodular synovitis (PVS) showing that telomerase activity is present at a high level in synovial infiltrating lymphocytes obtained from patients with RA, indicating that telomerase activation may be involved in lymphocyte activation and proliferation in RA (34). To address the role of telomerase activity, peripheral blood mononuclear cells from 9 female SSc patients and 10 healthy age-matched females were obtained and subjected to the telomeric repeat amplification protocol. In addition, PBLs from patients with SLE, Sjögren's syndrome (SS) and mixed connective tissue disease (MCTD) were included. Telomerase activity was detected in 64.7% of SLE patients, 63.6% of MCTD, 54.5% of SS, and 44.4% of SSc. Telomerase activity in SSc was not significantly different from the activity observed in the controls, although it has to be noted that high telomerase activity was detected in some patients with this disease. However, a significant difference was observed in PBLs from patients with SLE, MCTD, and SS. Although of interest, this study is not conclusive considering the very small number of SSc patients included (35).

In SSc, the observation was made that SSc fibroblasts have greater longevity and are less likely to go into apoptosis than fibroblasts from healthy controls (36). From this perspective, the hypothesis was put forward that SSc fibroblasts have higher telomerase activity compared to fibroblasts from their healthy counterparts. To address this issue indirectly, a study investigated the presence of a polymorphism at position 514 in the telomerase gene in 53 patients with SSc and 98 healthy controls, using restriction fragment length analysis. The investigators found a significantly higher presence of the 514 AA genotype in SSc. Again, these results are interesting, but the very small sample size and the unclear functional implication of this polymorphism preclude any firm conclusions (37). Notably, somatic cells such as fibroblasts express negligible levels of telomerase, so that a hypothesis based on differential telomerase activity between healthy and diseased cells is highly questionable.

A further cross-sectional study aimed at evaluating telomerase activity in various connective tissue diseases was similarly hampered by lack of power (38). This study used 19 patients with SSc, 15 with SLE, 10 with RA and 14 with SS. Twenty-nine healthy subjects were also included. Human telomerase-specific reverse transcriptase (hTERT) was measured in PBLs, using RT-PCR. The highest values were observed, successively, in RA, SLE and SS. RA was the only disease with significantly higher telomerase expression than controls, whereas SSc PBLs displayed significantly lower expression compared to controls. To place this observation in the proper perspective, additional features have to be considered. The mean age of the SSc patients was not the highest of the tested groups, making an effect of age on telomerase activity unlikely. In their discussion the authors put their findings in the light of the study by Artlett et al. describing significantly shorter telomeres in SSc PBLs (reviewed above). They advocated that the shorter telomeres in SSc might be caused by lower telomerase activity. This is not intuitive from the point of view of telomere biology, where disease stress may simply result in increased telomeric attrition and replicative senescence.
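A recurring theme above is that these telomerase comparisons are underpowered. A back-of-the-envelope power calculation illustrates the point; the control positivity rate below is invented purely for the sketch, only the group sizes (9 vs. 10) are taken from the study.

    import statsmodels.stats.power as smp
    from statsmodels.stats.proportion import proportion_effectsize

    # Hypothetical: 44 % telomerase positivity in SSc (as reported) versus an
    # assumed 20 % in controls, compared with the actual group sizes.
    effect = proportion_effectsize(0.44, 0.20)
    power = smp.NormalIndPower().power(effect_size=effect, nobs1=9,
                                       alpha=0.05, ratio=10 / 9)
    print(f"power = {power:.2f}")   # roughly 0.2, far below the conventional 0.8

With fewer than ten patients per arm, only very large differences in positivity could have reached significance, which is consistent with the authors' own caveat.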
None of the studies above has tested directly for such increased attrition or replicative senescence, even by simply looking at senescence-associated cell surface markers on PBLs (39). Another pivotal observation is that nearly half of the SSc patients included in this study received cyclophosphamide treatment, which has been suggested to influence telomerase activity (40). Unfortunately, the authors do not provide a comparison between the SSc patients with and without cyclophosphamide treatment, which would certainly have been helpful to rule out this possible bias. Also of note is that the initial hypothesis of higher telomerase activity in SSc fibroblasts recently inspired researchers to isolate high collagen-producing fibroblasts from SSc biopsies and extend their lifespan with hTERT immortalization by lentiviral infection. This was done with the purpose of creating long-living SSc fibroblast cell lines to better study and phenotype the characteristics of the SSc fibroblast in a consistent model (41). Such cell lines, while useful research tools, are blunt instruments, and negate primary telomere-based damage response mechanisms that may be subverted by the disease, as they artificially immortalise the fibroblasts and, as a consequence, bypass damage responses. It will be interesting to evaluate such cell lines for levels of DNA damage and chromosomal abnormalities with increasing passage in culture, in order to try to disentangle these from disease-specific changes. A further criticism of such an approach is that it negates the contribution of any epigenetic driver of the disease state which may affect telomere biology and hence cellular lifespan.

Impaired cytological senescence in SSc

Immune senescence describes the ageing of the immune system and is a biological rather than a chronological ageing process. The most well defined findings in this field surround the involution of the thymus. This process starts after puberty, continues during ageing and ultimately results in partial failure of T cell receptor expression and a decrease in production of CD4+ and CD8+ cells. This ultimately results in a larger T memory cell pool. Both CD4+ and CD8+ cells lose CD28 expression. Intriguingly, CD28- T cells are less prone to apoptosis, autoreactive, and profound producers of interferon gamma (IFNg). Among others, defective Fas signalling also plays an important role in the maintenance of thymus function. In addition, interleukin 2 (IL-2) production and responsiveness decline in aged people. A recent study showed that patients with SSc, during their lifespan, undergo a progressive expansion of the naive CD4+ T cell subset. This could be attributed to an age-inappropriate peripheral distribution of naive CD4+ T cells. It was regarded as age-inappropriate because, in contrast to healthy controls, the distribution of naive cells increased with age in SSc patients. Intriguingly, this is also in sharp contrast to RA, where the high levels of T cell activation and apoptosis ultimately produce a larger memory subset pool at the expense of the naive T cell pool (42). As described above, thymus involution seems to play an important role in maintaining the T cell pool. To investigate the role of thymus involution in the observed differences in T cell populations, the proportion of recent thymic emigrants was investigated by analysis of CD31 expression.
This led to the observation that the age-related decrease of recent thymic emigrants in the peripheral blood was absent in the inactive and lcSSc forms of the disease, but not in patients with diffuse and active disease. This indicates that in the lcSSc and inactive disease subsets, the physiological ageing-related decrease in thymic T cell output is evaded. However, there seems to be more at play than just an increase in thymically produced cells, since the observed increase in CD31+ cells did not correlate significantly with the total number of CD4+ T cells. Based on this finding, it has been hypothesized that peripheral mechanisms must be involved as well to explain the increased frequencies of naive CD4+ T cells discovered in SSc patients. Several explanations have been proposed for these observations, including persistent in vivo antigenic stimulation and cytokine production. Of interest, however, is the finding that higher sFAS and Bcl-2 levels were detected in the SSc patients included in this study, possibly contributing to the difference in T cell homeostasis (43). As mentioned above, defective FAS functioning is implicated in conserving thymic function and has previously been implicated on a functional and genetic level in SSc, more specifically in lcSSc patients, which fits with the lcSSc-specific observations made in this study (44,45).

Following injury, epithelial cells undergo an epithelial-mesenchymal transition (EMT), in which they start migrating over the wound site and begin proliferating to replace lost cells. In this respect, it is important to note that most cells exhibit a finite ability to replicate, termed the Hayflick limit (46). Based on this, it has been proposed that repeated epithelial injury can lead to epithelial cells that enter a state of replicative senescence and can no longer proliferate. At this point a fibroblast response can be initiated as a compensatory mechanism that serves to patch the injury site. This partially hypothetical framework is consistent with an increasing prevalence of SSc with age and with the occurrence of the most aggressive SSc cases in late-onset disease (47). More importantly, this hypothesis provides a direct connection between the process of ageing and fibrosis. In line with this hypothesis, although targeting endothelial rather than epithelial cells, is a recent study addressing the ability of mesenchymal stem cells (MSCs) to differentiate into endothelial cells in SSc. This process is of interest in SSc, since endothelial damage has been strongly implicated in its characteristic vasculopathy. The study investigated the ability of MSCs derived from 7 SSc patients and 15 healthy controls to differentiate into endothelial cells. The cells were cultured in endothelial-specific medium, and subsequently the endothelial-like MSC phenotype was characterized by surface expression of vascular endothelial growth factor receptors. In addition, the authors investigated cellular senescence of these cells by measuring the telomerase activity in MSCs from SSc patients and controls. Intriguingly, telomerase activity in MSCs from SSc patients was significantly reduced as compared with that in MSCs from the controls. This observation is counterintuitive to previous hypotheses relating to higher telomerase activity in SSc. MSCs are a telomerase-positive cell type; a lack of, or a decrease in, telomerase activity in these cells is indicative of a reduced proliferative repair capacity.
This significant difference between SSc and control MSCs disappeared after full endothelial differentiation. At this point, both subsets displayed decreased activity, with a stronger decrease in endothelial-like MSCs from SSc patients as compared with those from controls. The authors propose that this reflects early senescence and that it is caused by an increased number of pathologic stimuli and events encountered by these cells during their lifespan in SSc patients (48). It is also consistent with aberrant telomere biology in SSc and a reduced damage repair capacity.

The X chromosome and age

Perhaps unexpectedly at first glance, X chromosomal expression alters with age. This is of particular interest in SSc, since this disease predominantly affects females, with ratios reported as high as 14:1 (2-6). Interestingly, skewing of X chromosome inactivation and X chromosome monosomy, both affecting X chromosomal expression, have been implicated in SSc susceptibility or pathogenesis. These two aspects of biological ageing will be discussed in this paragraph in the context of SSc. The X chromosome accommodates 1098 genes (49). Most X-linked genes are present in one copy in males (XY) and two copies in females (XX). To level out differences between males and females in X chromosomal gene expression, several species, including mammals, evolved dosage compensation mechanisms (50). One of these mechanisms balances expression of the X-linked genes by inactivation of one of the two X chromosomes in females (50). The human X chromosome goes through several phases of inactivation and reactivation during germ cell development and in the first part of embryogenesis. In female embryos, imprinted inactivation of the paternal X chromosome is effected at the two- to four-cell stage, followed by random X-inactivation at the blastocyst stage. As a consequence, females are functional mosaics for inactivation of the paternal or maternal X chromosome (51). About 15% of X chromosomal genes escape inactivation; this inactivation pattern shows some heterogeneity between females (52). Although inactivation of the X chromosome appears to be permanent for all descendants of a cell, the X chromosome inactivation (XCI) pattern alters with age. The frequency of skewed XCI in peripheral blood cells increases in elderly compared to younger healthy females. This is thought to be caused by the exhaustion of progenitor cell populations in the bone marrow with ageing, leaving only a few progenitor cells to produce cells that will reflect the skewed XCI patterns of their progenitors in the periphery (53). Intriguingly, women with SSc show a significantly higher frequency of peripheral blood cells with a skewed XCI pattern compared to healthy women. The same observation has been made in females with autoimmune thyroid disease and juvenile arthritis, but not in systemic lupus erythematosus and primary biliary cirrhosis (54). Two overlapping Turkish studies reported that skewed XCI patterns were significantly more frequent in female SSc patients than in female controls: skewing was found in 44.9% of 149 informative patients (of 195 enrolled) and in 8% of 124 informative healthy controls (of 160 enrolled) (55,56). Interestingly, there seemed to be no age-related increase in skewed XCI patterns among the patients. A recent study replicated the significantly higher percentage of XCI skewing in a cohort of 217 women with SSc and 107 healthy women.
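How "skewing" is quantified matters when comparing these cohorts. The sketch below follows the common HUMARA-type approach, methylation-sensitive digestion followed by comparison of allele peak ratios, with the 80:20 threshold often used to call skewing; the peak areas and the exact correction are illustrative and are not the protocol of the studies cited.

    def xci_skewing(a_pre, b_pre, a_post, b_post):
        """Degree of X-inactivation skewing from a HUMARA-type assay.

        a_pre, b_pre: allele peak areas before HpaII digestion;
        a_post, b_post: areas after digestion, where only the methylated
        (inactive-X) alleles survive and amplify. Returns a value between
        0.5 (perfectly random XCI) and 1.0 (completely skewed).
        """
        a = a_post / a_pre            # normalized survival of allele A
        b = b_post / b_pre            # normalized survival of allele B
        frac_a_inactive = a / (a + b)
        return max(frac_a_inactive, 1.0 - frac_a_inactive)

    deg = xci_skewing(a_pre=1000.0, b_pre=900.0, a_post=850.0, b_post=150.0)
    print(f"skewing = {deg:.0%}; skewed (>= 80:20): {deg >= 0.80}")

Normalizing the post-digestion peaks to the pre-digestion ones corrects for preferential amplification of one allele, which would otherwise mimic skewing.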
The same study added depth to this observation by showing that there was no significant difference between the skewing patterns of peripheral blood mononuclear cells, plasmacytoid dendritic cells, T cells, B cells, myeloid dendritic cells and monocytes. In sharp contrast with the healthy control population, skewing percentages of X chromosomal inactivation were independent of age in patients with SSc. Furthermore, this study investigated the effect of skewed XCI on Foxp3 gene expression. Foxp3 plays an important role in T regulatory cell development. Intriguingly, Foxp3 expression was diminished in the patients with SSc exhibiting the most markedly increased skewing, which in turn was associated with less efficient suppressive activity (57).

Females suffering from Turner's syndrome, who harbour only one X chromosome, are at increased risk of developing autoimmune disease. Based on this observation, an effort was undertaken to investigate the presence of X monosomy in peripheral blood leukocytes from 44 females with SSc and 73 age-matched healthy women. Interestingly, monosomy rates in SSc, regardless of its clinical subtype, were significantly higher compared to healthy women. Furthermore, X monosomy rates increased with age and were higher in T and B cells compared to monocytes/macrophages, polymorphonuclear cells, and natural killer cells. Noteworthy, male cell microchimerism, also advocated to play a role in SSc, was ruled out by excluding the presence of a Y chromosome in these cells (58). Together, these observations imply that age-related X chromosomal changes might play a role in the higher SSc prevalence in females at increasing age.

Conclusions

This review aimed to summarize findings related to biological ageing that are involved in SSc susceptibility and pathogenesis. When we survey the publications in this field, it becomes obvious that most of the investigations can be traced back to chromosomal changes, whether they concern telomere- and telomerase-associated damage control, senescence, or altered X chromosomal expression. The pivotal question in addressing the relevance of the described findings is whether the observed changes in cell senescence, XCI and telomeres/telomerase are caused by a higher turnover of cells, forced by the ongoing inflammatory processes in SSc, or whether some of these results are truly involved in initiating or perpetuating SSc. The results describing telomere shortening, increased XCI skewing, X monosomy and early MSC senescence might all flow logically from a higher demand for immune progenitor cells and epithelial/endothelial cells in SSc. This cannot be said of the findings of decreased telomere attrition in lcSSc PBLs and a decreased rate of physiological thymic function reduction, which seem counterintuitive considering healthy ageing processes and which differ from other autoimmune diseases. These findings are potentially very relevant in pointing towards processes sustaining or initiating the inflammatory status. More specifically, the factors sustaining thymic cell production and telomeric repeat length could be involved in a decreased capability to eliminate damaged or senescent immune cells, which are more prone to be autoreactive. It has to be noted here that both processes take place predominantly in the lcSSc subset of patients, arguing for full clinical data to be included in future studies.
The sustenance of telomeric length in PBLs from lcSSc patients is, based on the published literature, unlikely to stem from an increase in telomerase activity, which was found to be steeply decreased in SSc patients. In this light it is of interest to compare telomere shortening in SSc with other ageing markers, such as CDKN2A, to see whether the shortening is an isolated process or follows a general, systemic state of increased biological ageing (33). Notably, although telomeric shortening seems to be influenced by socio-economic factors and events, no ubiquitous socio-economic correlations have been made with SSc so far (2-6, 15, 59). The involvement of the X chromosome in SSc is also interesting, considering the increased prevalence of SSc in females. In this light it has to be noted that genetic data on X chromosomal genes in SSc are a scarce commodity and were not included in a recent GWAS publication (8). Genetic analysis of the X chromosome might identify genes involved in SSc directly, or indirectly by prompting XCI skewing and X monosomy at an earlier onset than expected from physiological ageing alone. Finally, when overviewing the literature in this field, it becomes apparent that although very interesting observations have been made, the results described are hampered by small numbers of SSc patients and therefore have to be regarded cautiously. Nevertheless, these observations warrant more research, since a strong point can be made for the involvement of age-related phenomena in SSc. Therefore, a large study with well characterized SSc patients addressing current controversies in telomere and telomerase functioning, as well as further corroboration of EMT response aberrations, is highly anticipated.
Prevention of heterotopic ossification: an experimental study using a plasma expander in a murine model

Background: Heterotopic ossification (HO) is a frequent complication following orthopedic and trauma surgery. It often leads to substantial morbidity, as many affected patients suffer from pain and joint contractures. Current prophylactic measures include nonsteroidal anti-inflammatory drugs (NSAIDs) and local radiation. However, several disadvantages such as delayed fracture healing and impaired ossification have been reported. For this reason, a novel approach for prevention of HO was sought. We hypothesized that systemic administration of hydroxyethyl starch (HES), a substance known to influence microcirculation, would reduce formation of HO in a murine model.

Methods: A pre-established murine model was used in which HO has been shown to develop following Achilles tendon tenotomy. Twenty CD1 mice were randomly assigned to a control (n = 10) or treatment group (n = 10). The treatment group received two intravenous HES injections perioperatively, while the control group underwent tenotomy only. After ten weeks, the mice were euthanized and micro-CT scans of the hind limbs were performed. HO was manually identified and quantitatively assessed. A Wilcoxon rank sum test was used for comparison of the two groups.

Results: The mean heterotopic bone volume in the control group was significantly larger compared to the HES group (2.276 mm3 vs. 0.271 mm3, p = 0.005). A reduction of mean ectopic bone volume of 88 % was found following administration of HES.

Conclusion: A substantial reduction of HO formation was found following perioperative short-term administration of HES. This work represents a preliminary study, necessitating further studies before drawing ultimate conclusions. However, this simple addition to current prophylactic measures might lead to a more effective prevention of HO in the future.

Background

Heterotopic ossification (HO) is defined as the presence of lamellar bone in soft tissues where bone does not normally occur [1]. It is typically found following fractures and dislocations, burns, as well as operative procedures. Apart from some genetic origins, it is most commonly observed in the setting of traumatic brain injury [2]. Its predominant site is within the soft tissue surrounding joints, most often affecting the hip. Much is yet unknown concerning the pathophysiology leading to HO, but several contributing factors have been identified. It is believed that inappropriate differentiation of pluripotent mesenchymal stem cells into osteoblastic stem cells plays an important role, which among others is triggered by local tissue hypoxia [3]. In case of stimulation, these stem cells begin to differentiate into osteoblasts with consequent osteoid formation [1]. In this context, previous studies have demonstrated a clear correlation between a hypoxic microenvironment and HO development [4]. Furthermore, an inducing agent and a permissive environment seem to be necessary, as described previously by Balboni and colleagues [2]. Urist et al. postulated a small hydrophobic bone morphogenetic protein as a further causative agent [5]. It was suggested that this protein is liberated from normal bone in response to venous stasis, inflammation, or in case of a disease of the connective tissue attachments to bone [6]. All these conditions are often found in immobilized patients as well as following trauma.
Finally, prostaglandin E2 has been shown to influence the differentiation of stem cells as well [7,8]. As of today, surgical removal is the only treatment option once HO has occurred. However, postoperative results are often dissatisfying, as high recurrence rates after excision have been reported and frequently complicate the further course of treatment. Therefore, an effective prophylactic regimen is of great interest. Current prophylactic measures generally adhere to one or more of the following three principles: disrupting the relevant inductive signaling pathways, altering the relevant osteoprogenitor cells in the target tissue, or modifying the environment conducive to heterotopic osteogenesis. The latter can be influenced by optimizing microcirculation, which prevents local tissue hypoxia. In this context, hydroxyethyl starch (HES, Voluven®) has been shown to reduce local tissue hypoxia by enhancing tissue oxygen tension and regulating microcirculation in a murine model [9,10]. Hoffmann et al. studied the effects of volume support during microcirculatory disorders in an animal model. They examined leukocyte-endothelial cell interaction (LE), functional capillary density (FCD) and macromolecular leakage as indicators of microcirculation, using intravital microscopy. A significantly increased FCD and less macromolecular leakage were found following the administration of HES compared to a saline and a control group, indicating a positive effect on microcirculation [9]. In an ischemia/reperfusion model in rabbits, one group was infused with 0.9 % saline and the other group with HES. Later, muscle biopsies were performed and significantly lower myeloperoxidase (MPO) levels were found in the HES group, demonstrating a positive effect on oxidative stress compared to the saline infusion group [10]. In light of this evidence, the aim of our current study was to quantitatively assess HO formation following intravenous administration of HES. We hypothesized that intravenous administration of HES would lead to a decrease in HO formation.

Animal model

Prior to the investigation, approval from the relevant Swiss authorities was acquired (Kantonale Tierversuchskommission Zürich, Switzerland, approval number 175/2008), and experimental animal investigation guidelines of the European Union (Directive 2010/63/EU) were strictly adhered to. A pre-existing, well-established murine model was chosen, in which HO reliably occurs following Achilles tendon tenotomy [11-13]. The site of subsequent HO formation is within the soft tissue surrounding the tenotomy. Although the model described does not require a specific strain or breed of mice, only male specimens were used. This ensured comparability to other similar investigations, where only male subjects were used as well. CD1 mice were chosen as they are not genetically modified, are bred locally and, being rather large animals, are easy to handle. Identification was carried out by individualized markings on the tail. All specimens underwent bilateral midpoint Achilles tendon tenotomy through a posterior approach (Fig. 1). The skin was subsequently closed using nonabsorbable sutures. Anaesthesia consisted of isoflurane (Baxter International Inc., USA), 5-2 % in oxygen at a flow rate of 400 ml/min via a nose cone, combined with subcutaneous administration of buprenorphine (Temgesic, Reckitt Benckiser, Slough, Great Britain).
Following this procedure, the animals were randomly assigned to one of two groups: a control group (n = 10) and a treatment group (n = 10). This sample size was based on convenience. The control group underwent Achilles tenotomy only. The treatment group additionally received 200 μl HES (Voluven, Fresenius Kabi, Bad Homburg, Germany) intravenously by means of a tail vein injection immediately postoperatively as well as on the first postoperative day. Intravenous substance application in a murine model is best carried out either via the internal jugular vein by micro-surgically inserting a catheter, or by means of administration via the tail vein. The latter is easier to perform, but drawbacks include a limited number of applications, as tail vein thrombosis commonly occurs after multiple injections. We therefore chose to administer HES by means of a total of two consecutive injections. The dose was chosen to be comparable to a standard administration in humans (7 ml/kg). This was followed by 10 weeks of cage activity only for both groups. All animals were evaluated several times a day postoperatively for possible signs of distress, pain and discomfort, such as apathy, shivering, and reduced chow and water intake. The findings were recorded on a score-sheet. If any such signs were noted, treatment with acetaminophen was extended, and early abortion would have been considered if any of these signs had persisted. At ten weeks post-surgery, all mice were euthanized and the limbs harvested.

Assessment

A micro-computed tomography (CT) scan (SCANCO Medical Micro CT, Zurich, Switzerland) of all specimens was performed with a resolution of 30 μm. The generated images showed a series of 2-dimensional slices through the specimen (Fig. 2). Using a fixed-threshold procedure, skeletal and ectopic bone were segregated from the background. Heterotopic bone was then manually identified and marked. Per definition, HO was identified as any bone in the soft tissue with a density at least equal to that of spongy skeletal bone. Afterward, each limb was three-dimensionally reconstructed and the volume of HO was visualised (Fig. 3). Thereafter, the volume of heterotopic bone was calculated using the quantitative bone analysis software provided with the micro-CT system.

Statistical analysis

Statistics were performed in cooperation with the division of biostatistics at the Institute for Social and Preventive Medicine of the University of Zurich, Switzerland. Data are given as heterotopic bone volume in mm3. The analysis was performed with SPSS (Version 2.0, IBM, Chicago, IL, USA). Bone volume was examined using descriptive statistics (ANOVA) and, as the data were not normally distributed, differences between the groups were identified using the Wilcoxon rank sum test. Categorical data were assessed with Fisher's exact test. The level of significance was set at p < 0.05.

Results

The first two specimens died perioperatively, which was attributed to anaesthesiologic reasons. All subsequent procedures could be carried out without complications. All of the remaining 18 animals survived and were randomly allocated to either the control or treatment group. No further severe adverse events were recorded. Acetaminophen use and postoperative ambulation were comparable in both groups. All limbs were harvested at ten weeks postoperatively and scanned to assess HO formation. Overall, ectopic bone formed in 17 of 18 control limbs (94.4 %) and in 15 of 18 HES-treated limbs (83.3 %). This difference was, however, not significant (p = 0.603).
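A minimal sketch of the two quantitative steps just described, fixed-threshold segmentation of a CT volume and a rank-sum comparison of per-animal volumes, is given below. The threshold, the toy scan and the group volumes are invented for illustration; the study used the scanner vendor's analysis software and SPSS.

    import numpy as np
    from scipy import ndimage, stats

    VOXEL_MM = 0.030                  # 30 um isotropic micro-CT resolution
    VOXEL_VOLUME = VOXEL_MM ** 3      # mm^3 per voxel

    def ho_volume(ct, bone_threshold, skeleton_mask):
        """Cumulative ectopic bone volume (mm^3) and number of HO islets.

        ct: 3-D array of attenuation values; skeleton_mask: boolean array
        flagging normotopic (skeletal) bone, here standing in for the
        manual marking step described in the Methods.
        """
        bone = ct >= bone_threshold           # fixed-threshold segmentation
        ectopic = bone & ~skeleton_mask       # what remains is heterotopic
        _, n_islets = ndimage.label(ectopic)  # count independent islets
        return ectopic.sum() * VOXEL_VOLUME, n_islets

    # Toy scan: a 5 x 5 x 4 voxel block of "bone" outside the skeleton mask.
    ct = np.zeros((50, 50, 50))
    ct[10:15, 10:15, 10:14] = 1.0
    volume, islets = ho_volume(ct, bone_threshold=0.5,
                               skeleton_mask=np.zeros_like(ct, dtype=bool))
    print(volume, islets)   # 100 voxels * 2.7e-5 mm^3 = 0.0027 mm^3, 1 islet

    # Group comparison as in the study (per-animal volumes are invented):
    control = [2.1, 0.0, 3.5, 1.2, 4.7, 2.9, 0.8, 5.1, 1.6]
    treated = [0.0, 0.3, 0.1, 0.0, 0.5, 0.2, 0.0, 0.4, 0.9]
    print(stats.ranksums(control, treated))
    # Sanity check of the reported effect size: 1 - 0.271/2.276 ~ 0.88 (88 %).
    print(1.0 - 0.271 / 2.276)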
In both groups alike, HO formation occurred in several different, independent small areas and did not typically consist of one single continuous mass. There was no statistical difference between the groups regarding the number of islets of HO that formed. There was, however, a highly significant difference in the cumulative volume of HO between the groups, with a decreased HO volume in the treatment group compared to the control group (p = 0.005, Fig. 4). In the treatment group, the mean cumulative HO volume was 0.271 mm3 (range, 0-8.27 mm3, SD 0.269 mm3, Fig. 5) compared to 2.276 mm3 in the control group (range, 0-17.0 mm3, standard deviation (SD) 4.047 mm3). Following administration of HES, a substantial reduction of mean HO bone volume of 88 % was therefore recorded.

Discussion

Prevention of heterotopic ossification is of great clinical interest, as it is a complication commonly seen following trauma, orthopaedic surgery and particularly joint arthroplasty [15]. It may lead to pain and joint contractures. Prevention and treatment of HO are based on three principles: (1) disrupting the relevant inductive signaling pathways, (2) altering the relevant osteoprogenitor cells in the target tissue and (3) modifying the environment conducive to heterotopic osteogenesis [2,3]. NSAIDs and radiation therapy are currently considered the gold standard in HO prevention [16,17]. They act by modifying the microenvironment, as they reduce the associated inflammatory process involved in HO formation. Despite their efficacy, complete prevention of HO often cannot be ensured. Furthermore, the use of NSAIDs is controversial due to potentially deleterious gastrointestinal side effects, impaired fracture healing and possibly decreased implant ingrowth [18-20]. The latter two are particularly unfavorable effects in a trauma setting or following joint arthroplasty. To date, none of these undesirable effects have been documented following administration of HES. Furthermore, HES is already in use today for volume resuscitation in a trauma setting, to reduce the need for allogenic blood transfusion and to improve rheology by decreasing blood viscosity [21,22]. Previous studies demonstrated reduced local tissue hypoxia following the administration of HES by enhancement of tissue oxygen tension and regulation of microcirculation in animal models [9,10]. A hypoxic microenvironment appears to be an important contributing factor in the formation of ectopic bone. In this context, Olmsted et al. have studied the microenvironment surrounding the site where HO develops and reported that brown adipocytes started accumulating in this area through generation of hypoxic stress within the target tissue, a necessity for the differentiation of stem cells into chondrocytes, subsequently leading to heterotopic bone formation [23]. A key factor in the differentiation of mesenchymal stem cells to chondrocytes is HIF-1 alpha [24]. It directly influences chondrocyte-specific gene expression and the subsequent differentiation of mesenchymal stem cells to chondrocytes [25]. With pre-existing evidence indicating a beneficial effect of HES on the local soft tissue microenvironment and microcirculation, we set out to investigate the clinical effect of HES on HO formation in a murine model. The model used has been shown to consistently produce islets of ectopic bone in the surrounding soft tissue following a midpoint Achilles tendon tenotomy [11].
Mouse hind limbs were harvested ten weeks postoperatively and formation of HO was assessed with the use of micro-CT. Contrary to the original publication of McClure and colleagues, where HO was reported in 100 % of the tenotomized specimens, we found an overall occurrence of ectopic bone following this procedure in 88.9 % of all limbs (n = 32/36), while no HO was observed in 11.1 % (n = 4/36). It occurred in all but one of the specimens in the control group (94.4 %, n = 17/18), while three specimens in the treatment group (16.7 %, n = 3/18) showed no signs of HO bone formation at all. The complete absence of HO in three cases in the treatment group was desired but, likely due to the small sample size, this difference was not statistically significant (p = 0.603). A highly significant difference in mean ectopic bone volume could be found, however, with a mean HO bone volume of 2.276 mm3 in the control group compared to 0.271 mm3 in the treatment group. A remarkable reduction of mean volume of ectopic bone formation of 88 % was therefore found following the delivery of HES. Besides other previously investigated experimental substances influencing ectopic bone formation, such as echinomycin or imatinib, the great advantage of HES is its substantially better side effect profile, with a low incidence of undesirable effects [12,13,26]. HES is already implemented in daily use for volume resuscitation in a trauma or surgical setting involving hemorrhage or shock. Although the volume of HO formation could be reduced, it was not possible to completely prevent ectopic bone formation in most cases (83.3 %, n = 15/18). We assume that multiple additional involved signaling pathways, which were not influenced by HES, may account in part for this. Therefore, application of just one substance seems unlikely to completely prevent ectopic bone formation. Further studies are necessary to evaluate a possible combination of administered substances for a more effective prevention of HO, possibly by influencing other contributing factors such as increased release of prostaglandin E2, hypercalcemia, changes in sympathetic nerve activity and the disequilibrium of parathyroid hormone and calcitonin [3,7,27]. To our knowledge, this is the first study to describe the clinical effect of HES on HO formation in a standardized model. However, this investigation has its limitations. As our study utilized a murine model, the applicability of our results in a clinical setting has yet to be investigated. Further, the small sample size of ten mice per group, and nine mice per group at final follow-up, may be criticized. As this work can be regarded as a pilot study, the sample size was chosen as small as possible. However, we believe that this does not substantially influence our results, as we were able to demonstrate a statistically significant difference in HO formation between the groups despite the small sample size. Although previous investigations have brought up substantial evidence to explain the above-mentioned signaling pathways involved in HO formation, the exact mechanism by which HES led to a reduction of HO formation in our current study remains speculative and has yet to be explored. The present study merely evaluated the clinical effect of HES.
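The occurrence comparison above is exactly the setting for the Fisher's exact test named in the Methods, and the 2x2 table implied by the reported counts reproduces the quoted p-value. The check below is our own scipy call, not the authors' SPSS workflow:

    from scipy.stats import fisher_exact

    #                 HO   no HO
    table = [[17, 1],   # control limbs (17/18 with HO)
             [15, 3]]   # HES-treated limbs (15/18 with HO)
    odds_ratio, p_value = fisher_exact(table)   # two-sided by default
    print(round(p_value, 3))                    # 0.603, matching the Results

With only 18 limbs per group, even a threefold difference in HO-free limbs does not reach significance, which underlines the sample-size limitation discussed above.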
Furthermore, a third study arm would be desirable, with administration of intravenous saline as a control, as well as a comparison with the current standard treatment with a non-steroidal anti-inflammatory drug, such as indomethacin, in order to better understand the potential of HES to reduce HO formation. Finally, it is known that current preventive measures for HO may interfere with physiological fracture healing as well as implant ingrowth. The effects of HES on fracture healing and implant ingrowth have yet to be investigated as well. Beside these limitations, our results are the first to demonstrate a clinical effect of intravenous perioperative administration of HES on HO formation, with a relevant decrease of ectopic bone. Finally, we are aware that this novel approach is merely the groundwork in an area warranting further research to assess the potential of HES in patients. These promising preliminary results, however, may lead to a more effective prevention of HO in the future by a simple addition to current prophylactic measures.

Conclusions

Formation of heterotopic bone is a frequent complication following trauma, burns and orthopedic surgery. Treatment is often challenging, as high recurrence rates following surgical excision have been reported. Current prophylactic measures have proven efficacy, but may not always completely prevent HO from occurring. For this reason, we searched for a novel approach to further improve prevention of HO. We found a substantial decrease in heterotopic bone formation following perioperative short-term administration of hydroxyethyl starch in a standardized murine model. This work represents a preliminary study, as the mechanism of action of HES, as well as its applicability in humans, has yet to be investigated. However, this simple addition to current prophylactic measures could lead to a more effective prevention of HO in the future. Experimental animal investigation guidelines of the European Union (Directive 2010/63/EU) were strictly adhered to.

Availability of data and materials

All data of the present investigation can be obtained from the corresponding author.
Nonlinear adiabatic electron plasma waves. II. Applications

In this article, we use the general theory derived in the companion paper [M. Tacu and D. Bénisti, Phys. Plasmas (2021)] in order to address several long-standing issues regarding nonlinear electron plasma waves (EPW's). First, we discuss the relevance, and practical usefulness, of stationary solutions to the Vlasov-Poisson system, the so-called Bernstein-Greene-Kruskal modes, to model slowly varying waves. Second, we derive an upper bound for the wave breaking limit of an EPW growing in an initially Maxwellian plasma. Moreover, we show a simple dependence of this limit as a function of $k\lambda_D$, $k$ being the wavenumber and $\lambda_D$ the Debye length. Third, we explicitly derive the envelope equation ruling the evolution of a slowly growing plasma wave, up to an amplitude close to the wave breaking limit. Fourth, we estimate the growth of the transverse wavenumbers resulting from wavefront bowing by solving the nonlinear, nonstationary, ray tracing equations for the EPW, together with a simple model for stimulated Raman scattering.

I. INTRODUCTION

Although electron plasma waves (EPW's) have been extensively studied since the seminal work by Tonks and Langmuir [1], a complete nonlinear theory for these waves is still to be derived. Actually, this remains a formidable task even when one restricts to a kinetic description in the classical regime. Indeed, this would require a theoretical resolution of the Vlasov-Maxwell equations, valid whatever the space and time variations of the wave and of the plasma. In this article, we do not aim at such a universality. Instead, we focus on a particularly important class of nonlinear EPW's, the so-called adiabatic ones. These mainly result from the electron motion, provided that this motion may be accurately described by making use of the adiabatic approximation, i.e., by assuming that the dynamical action remains essentially constant (up to some geometrical changes entailed by separatrix crossing). As discussed in Ref. 2, this lets us restrict to waves such that γ/kv_th ≲ 0.1, where γ is the typical wave growth rate, k is the wavenumber and v_th is the electron thermal velocity. Moreover, we also restrict to propagating waves, so that physics situations which could lead to Anderson-like localization [3] are excluded. Under these conditions, we address in this article several long-standing issues regarding nonlinear EPW's.

First of all, there has been a considerable effort to derive stationary solutions to the Vlasov-Poisson system, which are the so-called Bernstein-Greene-Kruskal (BGK) modes [4]. However, since a wave is never exactly stationary and an EPW is never exactly electrostatic [2], the relevance of BGK modes to model actual physics problems is not always clear. In particular, one may wonder whether these modes may correctly approximate slowly growing waves, resulting from an instability, as described in the companion paper [2]. We address this issue in Section II, where we compare the electrostatic field of previously proposed BGK modes with that derived in Ref. 2. This lets us discuss when an accurate description of the electrostatic field may be obtained much more rapidly and more simply than by going through the whole derivation of Ref. 2. In this respect, special attention is paid to the well-known solution provided by Dawson in Ref. 5. Moreover, in Section II, we clearly explain the analogies and differences between our theory and the derivation of BGK modes.
Second, a nearly monochromatic wave cannot grow beyond a maximum amplitude known as the wave breaking limit. Deriving this limit is a long-standing and important issue. Indeed, it would allow one to conclude about the saturation level of an instability, or about the effectiveness of stimulated Raman scattering (SRS) as a means for laser pulse amplification [6]. One way to obtain an upper bound for the wave breaking limit is to find the maximum amplitude allowing a solution to the nonlinear dispersion relation. This is what we do in Section III, using the dispersion relation derived in the companion paper [2]. Moreover, we compare our results with those obtained by Coffey in Ref. 7 for a stationary wave in an initially waterbag distribution function. Furthermore, we discuss the relevance of the upper bound thus derived.

Third, in order to fully describe a nonlinear EPW, one must be able to predict the space and time evolution of its amplitude. Resorting to envelope equations has proven to be a very effective and accurate way to do so for slowly varying waves [8-11]. Such equations have been derived in Refs. 12 and 13 within the geometrical optics limit and by assuming a near adiabatic electron motion. They are valid whatever the harmonic content of the wave, which is, however, not specified. Consequently, no explicit analytical formula is provided, except in Ref. 13 when the electrostatic field is assumed to be sinusoidal (but without discussing the range of validity of the sinusoidal approximation). Using the discussion of Section II regarding the relevance of BGK modes, we provide in Section IV explicit expressions for the nonlinear envelope equation of growing electron plasma waves, which are accurate whatever kλ_D (k being the wavenumber and λ_D the Debye length), and up to amplitudes close to the wave breaking limit.

Fourth, an EPW, strongly driven into the nonlinear regime by SRS from a laser hot spot, exhibits large transverse wavenumbers. These have been evidenced experimentally in Ref. 14 using Thomson scattering, and shown to be much larger than expected from the opening angle of the focal spot. Now, there may be two different reasons for the growth of these transverse modes. They may result from an instability due to trapped particles, as shown numerically in Refs. 14-17. They may also be due to wavefront bowing, observed numerically in Refs. 14-23. Indeed, an SRS-driven EPW grows faster where the laser intensity is larger, near the center of the focal spot. Consequently, the wave amplitude is inhomogeneous in the direction transverse to the laser propagation. Then, so are the wave frequency and wave phase velocity, since these are nonlinear functions of the amplitude [2,24]. As a result, the wavefront bends, usually so as to induce self-focusing [14-23]. This, in turn, entails the growth of transverse modes, since the local wavenumber is perpendicular to the wavefront. Kinetic simulations, either using a particle-in-cell (PIC) or a Vlasov code, always show both the wavefront bowing and the unstable growth of secondary modes. Consequently, one cannot tell which is the dominant effect, as discussed in detail in Ref. 14. This issue is addressed in Section V, where we calculate the transverse wavenumbers which only result from wavefront bowing. To do so, we clearly need to go beyond the paraxial [25,26] or quasioptical [27,28] approximations.
Indeed, we have to solve, very finely, for the time variations of the EPW wavenumber, which depend on the local wave amplitude. In other words, we have to solve the nonlinear, nonstationary, ray-tracing equations for the EPW, together with its envelope equation. Our numerical resolution follows from that introduced in Ref. 29, where the physical space is subdivided into regular cells. In order to derive the nonlinear ray dynamics, we need the local value of the EPW amplitude. This is estimated as an average over the rays located within the same cells. More precisely, using the same technique as that introduced in particle-in-cell (PIC) codes, the EPW amplitude is first estimated on the cell nodes by making use of a shape factor. Then, it is projected back onto the rays by resorting to the same shape factor (a minimal sketch of this deposit-and-gather step is given below). For this reason, we dubbed our numerical scheme "ray-in-cell" (RIC). By comparing the results of our model with those from two-dimensional (2-D) PIC simulations of SRS, we can conclude on the ability to derive the EPW transverse spectrum by relying only on wavefront bowing. This is an important issue because the opening angle of the backscattered light directly follows from that of the EPW. Then, a simple model that quantifies the transverse modes of the EPW is needed for at least two reasons: (i) to correctly predict the impact of SRS on the plasma hydrodynamics; (ii) to properly interpret experiments of laser-plasma interaction as regards the direction of the backscattered light.

This paper is organized as follows. In Section II, we compare the electrostatic field derived from the adiabatic theory of the companion paper, Ref. 2, with those of previously proposed BGK modes. Section III addresses the wave breaking limit for adiabatic EPW's. In Section IV, we provide an explicit expression for the nonlinear envelope equation of a growing electron plasma wave, which is valid whatever kλ_D and up to amplitudes close to the wave breaking limit. Section V introduces a simple model to quantify the transverse modes resulting from wavefront bowing, and compares the predictions of the model with those of 2-D PIC simulations. Section VI summarizes and concludes our work.

There are clear differences between the adiabatic waves considered in this paper and BGK modes. Indeed, the latter modes are stationary solutions to the Vlasov-Poisson system and, most often, they are space-periodic, so that the mode amplitude is time and space independent. By contrast, although its variations must be slow, the amplitude of an adiabatic wave may vary in space and time. Moreover, BGK modes are purely electrostatic, while we showed in Ref. 2 that nonlinear adiabatic waves have a nonzero vector potential. However, when the vector potential is negligible, for a uniform wave, and for each fixed value of the amplitude, a nonlinear adiabatic EPW, as derived in Ref. 2, is a BGK mode. Nevertheless, in spite of the previous strong analogy, our theory is developed in a spirit totally different from that leading to BGK modes. Indeed, usually, nothing is said about the way a BGK mode has, or could have been, generated. Usually, such a mode has no history. By contrast, in Ref. 2, we build the self-consistent wave potential and electron distribution function by accounting for the full wave history. In particular, for a given wave amplitude, our result will be different depending on whether the wave has kept on growing or if its amplitude has not been a monotonic function of time.
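To make the deposit-and-gather step of the RIC scheme mentioned above concrete, here is a minimal one-dimensional sketch with a linear (cloud-in-cell) shape factor. It only illustrates the kind of projection involved; it is not the scheme of Ref. 29 itself, which is multi-dimensional and coupled to the envelope equation.

    import numpy as np

    def ric_deposit_gather(x_rays, a_rays, x0, dx, n_nodes):
        """Deposit ray amplitudes onto cell nodes with a linear shape
        factor, then gather the node-averaged amplitude back onto each ray."""
        s = (x_rays - x0) / dx
        i = np.clip(s.astype(int), 0, n_nodes - 2)   # left node of each ray
        w = s - i                                    # weight given to the right node
        num = np.zeros(n_nodes)
        den = np.zeros(n_nodes)
        np.add.at(num, i, (1.0 - w) * a_rays)
        np.add.at(num, i + 1, w * a_rays)
        np.add.at(den, i, 1.0 - w)
        np.add.at(den, i + 1, w)
        a_nodes = np.where(den > 0.0, num / np.maximum(den, 1e-300), 0.0)
        # Gather: the same shape factor projects node values back onto the rays.
        return (1.0 - w) * a_nodes[i] + w * a_nodes[i + 1]

    x = np.array([0.12, 0.14, 0.50, 0.52, 0.90])   # ray positions
    a = np.array([1.0, 1.2, 0.4, 0.6, 0.1])        # ray amplitudes
    print(ric_deposit_gather(x, a, x0=0.0, dx=0.2, n_nodes=6))

Rays in the same neighbourhood thus see a common, smoothed amplitude, which is what feeds the amplitude-dependent frequency in the nonlinear ray equations.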
Actually, our theory is designed to predict the space and time evolution of the wave, by using envelope equations like those derived in Section IV. However, the general derivation of Ref. 2 is quite tedious, while BGK modes are explicit solutions to the Vlasov-Poisson system, which usually depend on several free parameters. Then, one may wonder whether the theory could be simplified by choosing those parameters so as to get a relevant description of nonlinear adiabatic waves. In particular, we discuss in Paragraph II B the relevance of the very simple solution introduced by Dawson in Ref. 5, using previous results by Akhiezer and Lyubarskii [30]. Dawson's solution is for nonlinear plane waves in a cold plasma, and it depends on a single parameter, the wave amplitude. The corresponding electric field is that of Eq. (4) below, where n_e and −e are the electron density and charge. B. Detailed comparisons between uniformly growing adiabatic waves and Dawson's solution for nonlinear plane waves in a cold plasma 1. Field profile Let us introduce the dimensionless electric field, E, where m is the electron mass, ω_pe = √(n_e e²/ε₀m) is the plasma frequency, and E₀ is the space-averaged value of E(x) over one wavelength. For the adiabatic waves of Ref. 2, (E − E₀) would just be the electrostatic field. Moreover, E₀ = −∂_tA₀, where the vector potential A₀, derived in Ref. 2, remains constant when the wave amplitude does not change. This means that, if the adiabatic EPW reaches a given amplitude at t = t₀ that does not change whenever t > t₀, E₀ = 0 for times larger than t₀. Hence, for an adiabatic wave with constant fixed amplitude, E₀ = 0. Figs. 1 and 2 compare the profiles of the electric field for adiabatic waves (derived by accounting for harmonics 1 to 3 in the scalar potential) with those of Dawson's solution, Eq. (4), and with those of a purely sinusoidal wave, for given maximum values, E_max, of the dimensionless field. When kλ_D = 0.1, the field profile proposed by Dawson agrees very well with that of nonlinear adiabatic waves whenever E_max ≲ 0.64. Indeed, if we denote by δE_D the difference between the electric field derived from Dawson's formula, Eq. (4), and that derived from the adiabatic theory, ⟨δE_D²⟩/⟨E²⟩ is less than 10% whenever E_max ≲ 0.64 (it is close to 9% when E_max = 0.6401 and close to 0.5% when E_max = 0.1959). When E_max ≈ 0.7459, which is close to the largest amplitude allowing solutions to the adiabatic nonlinear dispersion relation, the field profile proposed by Dawson is slightly steeper than that of adiabatic waves. Indeed, if we denote by δx_D (respectively by δx_a) the difference between the x-positions of the minimum and maximum values of Dawson's electric field (respectively of the adiabatic electric field), kδx_D/π ≈ 0.53 while kδx_a/π ≈ 0.64 when E_max ≈ 0.7459. Nevertheless, whatever the amplitude, the electrostatic field for nonlinear adiabatic waves is better approximated by Dawson's solution than by a sine function. When kλ_D = 0.2 and E_max ≲ 0.5169, Dawson's profile for the electrostatic field is very close to that of nonlinear adiabatic waves. Indeed, ⟨δE_D²⟩/⟨E²⟩ is less than 10% whenever E_max ≲ 0.5169 (it is close to 10% when E_max ≈ 0.5169 and close to 2% when E_max ≈ 0.1967). However, when E_max ≈ 0.6218, which is close to the maximum amplitude allowing a solution to the nonlinear adiabatic dispersion relation, Dawson's profile is slightly steeper than the adiabatic one, kδx_D/π ≈ 0.6 while kδx_a/π ≈ 0.8. The sinusoidal profile also provides quite a good approximation of the adiabatic one (the error metric used throughout these comparisons is sketched below).
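All the profile comparisons quoted here rely on the phase-averaged relative error ⟨δE²⟩/⟨E²⟩ between two field profiles sampled over one wavelength. A minimal sketch of this diagnostic follows; the harmonic amplitudes used to build the "adiabatic-like" profile are placeholders, not values taken from the paper.

```python
import numpy as np

def relative_ms_error(E_ref, E_approx):
    """Phase-averaged mean-square error <(E_approx - E_ref)^2> / <E_ref^2>,
    the accuracy metric used to compare field profiles over one wavelength."""
    return np.mean((E_approx - E_ref) ** 2) / np.mean(E_ref ** 2)

# Profiles sampled over one wavelength (phi from 0 to 2*pi).
phi = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
# Illustrative three-harmonic profile standing in for the adiabatic field:
E_adiab = 0.60 * np.sin(phi) + 0.12 * np.sin(2.0 * phi) + 0.03 * np.sin(3.0 * phi)
E_sin = 0.60 * np.sin(phi)   # purely sinusoidal approximation

print(f"<dE^2>/<E^2> = {relative_ms_error(E_adiab, E_sin):.3f}")
```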
Indeed, whenever E_max ≲ 0.6218, ⟨δE_s²⟩/⟨E²⟩ < 20%, where δE_s is the difference between the adiabatic and sinusoidal electric fields. Actually, the sinusoidal profile is slightly more accurate than Dawson's one for the largest values of E_max. In particular, when kλ_D = 0.2, the advantage of resorting to Dawson's profile in order to approximate adiabatic waves, instead of simply using a sine function, is less obvious than when kλ_D = 0.1, although Dawson's profile is more accurate whenever E_max ≲ 0.6. Increasing kλ_D beyond 0.2 lets nonlinear adiabatic waves get closer and closer to sinusoids. Actually, whenever kλ_D > 0.3, they are better approximated by a sine function than by Dawson's profile (not shown here). In conclusion, we find that the electrostatic field of uniformly growing adiabatic waves is well approximated by the solution proposed by Dawson whenever kλ_D ≲ 0.2, although the accuracy decreases close to the wave breaking limit. This result is expected, since Dawson only investigated waves in a cold plasma, i.e., in the limit when ω/kv_th → ∞, where ω is the wave frequency and v_th the electron thermal speed. Now, in a plasma with finite temperature, and in the linear limit, ω/kv_th ≈ 1/kλ_D, so that the cold plasma limit is more relevant for smaller values of kλ_D. However, as the wave amplitude increases, ω decreases, so that the cold plasma limit becomes less accurate. If the wave amplitude does not keep on increasing, some electrons will be detrapped, which would change the distribution function. How this would impact the previous conclusions regarding the relevance of Dawson's solution depends on the variations of V_φ = ω/k − eA₀/m, A₀ being the wave vector potential. If V_φ changes more slowly than the separatrix width (in velocity), electrons are detrapped symmetrically with respect to V_φ. Then, detrapping would not significantly change the values of ⟨cos(jϕ)⟩ and, therefore, the harmonic content of the field. In this case, Dawson's solution accurately models the electrostatic field of nonlinear adiabatic waves even when they are not uniformly growing. However, only the theory of Ref. 2 can address the most general situation, and remains valid whatever the variations of V_φ compared to those of the separatrix width. 2. Nonlinear dispersion relation In this Paragraph, we derive an approximate nonlinear adiabatic dispersion relation using Dawson's solution for the electrostatic field, and compare it against the results found from Ref. 2. More precisely, we still derive V_φ = ω/k − eA₀/m by solving [31] −2⟨cos(ϕ)⟩ = Φ₁ [Eq. (5)], where Φ₁ is the first harmonic of the dimensionless potential, Φ, such that ∂_ϕΦ = −E(ϕ). Here, E(ϕ) is a plain generalization of Eq. (3), in which ϕ now depends on space and time, ∂_xϕ = k and ∂_tϕ = −ω. As for ⟨cos(ϕ)⟩ in Eq. (5), it is evaluated by relating ϕ₀ to ϕ and expanding in Bessel functions, J_n(E) being the Bessel function of order n [32]. Hence, Eq. (5) can be solved without having to self-consistently calculate the wave potential, which considerably simplifies the derivation of V_φ. Below, ω₃ denotes the nonlinear frequency derived from the adiabatic theory with harmonics 1 to 3, ω₁ that obtained with a purely sinusoidal (single-harmonic) potential, and ω_D that obtained with Dawson's potential. The approximate values ω₁ and ω_D are deemed accurate when they differ from ω₃, which is our reference, by much less than |δω|. Only when this condition is fulfilled may the approximate values for the nonlinear frequency be used to derive an accurate nonlinear ray tracing, as that described in Section V. Fig. 3 plots |ω₃ − ω_D|, |ω₃ − ω₁| and |δω| (black solid line), normalized to the plasma frequency ω_pe, as a function of kλ_D for given values of Φ₁. When Φ₁ = 0.2 and kλ_D = 0.2, Fig. 3 (a)
shows that |ω₃ − ω₁| is larger than |δω|, but quickly decreases compared to |δω| as kλ_D increases. Moreover, when Φ₁ = 0.2, |ω₃ − ω₁| < 10⁻²ω_pe whatever kλ_D, so that ω₁ remains very close to ω₃. Therefore, in agreement with the results of Paragraph II B 1, we conclude that a harmonic potential yields accurate estimates for the nonlinear frequency whenever kλ_D ≳ 0.2, although Dawson's potential may yield more accurate results for small amplitudes. Moreover, better results are obtained with a sinusoidal potential than with Dawson's one whenever kλ_D ≳ 0.25 and Φ₁ ≲ 0.3. These conclusions may also be appreciated from Fig. 4, plotting |ω₃ − ω_D|, |ω₃ − ω₁| and |δω| as a function of Φ₁, for fixed values of kλ_D. When kλ_D = 0.1, Fig. 4 (a) shows that |ω₃ − ω_D| < |δω|/5 whenever Φ₁ ≲ 0.5 (except close to the region where δω changes sign), so that ω_D is quite accurate for this range of amplitudes. However, Fig. 4 (a) also shows that ω₁ happens to be more accurate than ω_D when Φ₁ ≳ 0.5. This is quite unexpected because, as may be clearly seen in Fig. 1, the profile of the adiabatic electrostatic field is much closer to Dawson's one than to a sinusoid. The good accuracy of ω₁ is due to the fact that it happens to match ω₃ when Φ₁ ≈ 0.55, which lets it be more accurate than ω_D for large amplitudes. However, neither ω₁ nor ω_D are accurate for the largest amplitudes, close to the wave breaking limit. Moreover, although this may not be seen in Fig. 4, using Dawson's potential allows for solutions to the nonlinear dispersion relation over a narrower range in Φ₁ than when using the adiabatic potential. Indeed, when kλ_D = 0.1, solutions only exist when Φ₁ < 0.66 with Dawson's potential, instead of Φ₁ < 0.71 with the adiabatic one. Hence, Dawson's potential cannot be used for the largest wave amplitudes. This is true whatever kλ_D. For example, one may see in Fig. 3 that |ω₃ − ω_D| is only plotted up to kλ_D = 0.25, unlike |ω₃ − ω₁| which is plotted up to kλ_D = 0.29. This is because, using Dawson's potential, we could not solve the dispersion relation beyond kλ_D = 0.25. By comparing the results obtained with the four values of kλ_D considered in Figs. 4 (a)-(d), one clearly sees that the accuracy of ω₁ increases with kλ_D. Fig. 4 (d) shows that it is excellent when kλ_D = 0.3, |ω₃ − ω₁| < |δω|/10 whatever Φ₁. Fig. 4 (c) shows that it is also very good when kλ_D = 0.2, although |ω₃ − ω₁| > |δω| when Φ₁ ≲ 0.25. However, ω₁ remains very close to ω₃ for such small amplitudes, |ω₃ − ω₁| < 10⁻²ω_pe. Hence, we conclude again that a harmonic potential yields accurate results for the nonlinear frequency whenever kλ_D ≳ 0.2. As for Dawson's potential, recall that the corresponding nonlinear frequency could only be accurately derived by accounting for the fact that the wave frame was not inertial, which made the nonlinear electron distribution a nonlocal function of the phase velocity. III. WAVE BREAKING LIMIT FOR ADIABATIC ELECTRON PLASMA WAVES In this Section, we discuss in detail the reason why we cannot solve the dispersion relation beyond Φ₁^max, and what this implies for slowly varying EPW's. As may be seen in Fig. 5 when kλ_D = 0.4, the values of ω solving the nonlinear adiabatic dispersion relation seem to be such that dω/dΦ₁ → −∞ when Φ₁ → Φ₁^max. Then, clearly, no solution to the adiabatic dispersion relation can be found when Φ₁ > Φ₁^max. Now, the adiabatic dispersion relation is only valid when |dω/dt| is small enough. If γ is the wave growth rate, the latter condition translates into the requirement that (γΦ₁)|dω/dΦ₁| be small enough.
Since |dω/dΦ₁| → +∞ when Φ₁ → Φ₁^max, we conclude that there exists a maximum amplitude beyond which the wave can no longer grow adiabatically. Then, the question remains to know whether there can be any solution to the EPW dispersion relation when Φ₁ > Φ₁^max. If such a solution existed, the dispersion relation would necessarily be nonadiabatic. Consequently, ω would decrease very rapidly with Φ₁. The values found for Φ₁^max are plotted in Fig. 6 as a function of kλ_D, when 0.1 < kλ_D < 1. The values we derive for Φ₁^max are systematically smaller than the wave breaking limit, E = 1, given by Dawson in Ref. 5. Indeed, from Eq. (10), E = 1 corresponds to a value of Φ₁ larger than those we find for Φ₁^max. This is because Dawson only requires that the electric field has to be single-valued, while we impose the more restrictive condition that a solution to the dispersion relation must exist. Moreover, Fig. 6 shows that our values for Φ₁^max are larger than the wave breaking limit derived by Coffey [33] (except, maybe, when kλ_D < 0.02). There are two reasons for such a discrepancy. First, Coffey assumes that the unperturbed distribution function is a waterbag, while we assume that it is a Maxwellian. Second, unlike Coffey, we account for the whole past history of the wave in order to derive Φ₁^max. In particular, the values plotted in Fig. 6 are for a wave that has kept on growing in a homogeneous plasma. The nonlinear dispersion relation would change if the time variations of the wave amplitude were not monotonic, or if the wave propagated in an inhomogeneous plasma (see Ref. 34). Consequently, the values of Φ₁^max are expected to depend on the particular way the wave has reached the amplitude Φ₁^max. Therefore, it is impossible to derive a priori a wave breaking limit valid in any situation. Note that we choose to plot the wave breaking limit as a function of the amplitude of the first harmonic of the potential, because this yields very simple scaling laws. However, the wave breaking limit is usually defined as a function of δn_e/n_e, where δn_e is the density fluctuation induced by the wave. From the Poisson equation and with our normalization, δn_e/n_e = −∑_j j²Φ_j cos(jϕ), which significantly departs from the sinusoidal approximation, −Φ₁cos(ϕ), for the largest wave amplitudes. In particular, the minimum value of δn_e over ϕ, which we denote by δn_e^min, is significantly smaller in magnitude than Φ₁, while its maximum value, δn_e^max, is significantly larger than Φ₁. For example, when kλ_D = 0.14 (a situation we investigate in detail in Section V), we find Φ₁^max ≈ 0.65, which corresponds to δn_e^max/n_e ≈ 1.04 and −δn_e^min/n_e ≈ 0.43. The latter estimate for −δn_e^min/n_e is in good agreement with the minimum density derived in the PIC simulation of Ref. 14 just before the wave starts to break. Indeed, as illustrated in Fig. 9 (d), −δn_e^min/n_e ≈ 0.4 just before wave breaking in the PIC simulation, which would correspond to Φ₁^max ≈ 0.59. Hence, at least for the example studied in Section V, we could check that the values plotted for Φ₁^max in Fig. 6 do yield an upper bound for the wave breaking limit, which is close to the actual limit. In general, an EPW breaks because of the unstable growth of secondary modes due, for example, to the trapped particle instability [35][36][37][38]. This has been clearly shown in Ref. 38 for an SRS-driven EPW. Wave breaking occurs when the secondary modes grow so fast that their amplitude eventually overtakes that of the EPW. Accurately describing such a complex situation is a difficult task, which is part of our research program.
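The conversion between the harmonic amplitudes of the potential and the density-fluctuation extrema used above is straightforward to evaluate numerically. In the sketch below, the Φ_j values are illustrative placeholders; the paper's self-consistent harmonics are not reproduced here.

```python
import numpy as np

def density_fluctuation_extrema(Phi_harmonics, n_phi=4096):
    """Evaluate delta_n_e/n_e = -sum_j j^2 Phi_j cos(j*phi) over one
    wavelength (the Poisson-equation normalization used in the text) and
    return its minimum and maximum values."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    dn = np.zeros_like(phi)
    for j, Phi_j in enumerate(Phi_harmonics, start=1):
        dn -= j ** 2 * Phi_j * np.cos(j * phi)
    return dn.min(), dn.max()

# Placeholder harmonics with Phi_1 = 0.65; signs chosen so that the minimum
# of delta_n_e is reached at phi = 0, as stated in the text.
dn_min, dn_max = density_fluctuation_extrema([0.65, -0.08, 0.02])
print(f"-dn_min/n_e = {-dn_min:.2f}, dn_max/n_e = {dn_max:.2f}")
```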
However, regardless of the reason why the EPW should break, the values plotted for Φ₁^max in Fig. 6 do provide a rigorous upper bound for the amplitude of an adiabatic wave growing in a uniform plasma. To the best of our knowledge, such a rigorous result was not available in previous publications. Moreover, the theory of Ref. 2 is general enough to address any situation, regardless of the time and space evolution of the wave amplitude and plasma density. Therefore, the procedure described in this Section may be applied to derive the wave breaking limit in any physics situation, provided that the EPW varies slowly enough. IV. ENVELOPE EQUATION FOR ADIABATIC ELECTRON PLASMA WAVES One of the most important issues, regarding nonlinear EPW's, is the ability to predict their space and time variations. Envelope equations have proven to be a very effective and accurate way to do so, as shown in Refs. 8-11 for an EPW driven by SRS in an initially uniform Maxwellian plasma. Moreover, envelope equations valid in a nonstationary and non-uniform situation have been derived in Refs. 12 and 13 by resorting to a variational formalism. However, in the latter articles, the envelope equations have been written in a rather formal way, where the role played by the vector potential did not appear clearly, nor did the space-dependence of the scalar potential. In this Section, we provide explicit expressions for the nonlinear envelope equation of a driven EPW, valid whatever kλ_D and up to amplitudes close to the wave breaking limit. A. General results The Lagrangian density for the self-consistent wave-particle interaction, as derived in Refs. 12 and 13, is written in terms of the vector potential, A₀, and of (k²φ_A²)/2, the averaged value of the electrostatic field squared. Namely, using the same notation as in Ref. 2, the electrostatic field reads E_el = ∑_{n≥1} E_n sin(nϕ). Then, (kφ_A)² = ∑_{n≥1} E_n². Moreover, since we only look for an envelope equation at first order in the space and time derivatives of the fields, it is enough to derive the electrostatic potential, φ, at zeroth order. Hence, it may be approximated by φ = ∑_{n≥1} φ_n cos(nϕ), with φ_n ≈ E_n/nk. As for L_u and L_t, they are given by Eqs. (14) and (15), where kp = mv − eA₀, v being the electron velocity. Note that H is m times the Hamiltonian defined in Ref. 2, so as to make it scale as an energy. Moreover, in Eq. (15) for L_t, I is the action for the Hamiltonian H, while in Eq. (14) for L_u, P = kI, and X is canonically conjugated to P for H_u. In Eqs. (14) and (15), γ is the wave growth rate, as seen by the electron, and T_B is the period of a deeply trapped orbit. Then, the envelope equation for a driven wave reads as Eq. (22) [13], where E_d is the drive amplitude (assumed to be sinusoidal) and δϕ_d is the phase difference between the drive and the electrostatic field. Moreover, the symbol |_{A_s} means that the integral boundaries in Eqs. (14) and (15) are not to be differentiated or, more precisely, that the fractions of trapped and untrapped electrons are to be considered as constants. Note that the vector potential explicitly enters the envelope equation through V_φ = ω/k − eA₀/m. For untrapped electrons, η = +1 for orbits above the separatrix and η = −1 below it, and Ω = k∂_P H. Analogous expressions hold for trapped electrons. Hence, the explicit expression of the nonlinear EPW envelope equation follows from the sole derivation of Ω, which may only be performed once the ϕ-variations of φ are known (a numerical illustration is sketched below).
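For a given potential profile, Ω may be obtained by a simple quadrature over one spatial period. The sketch below assumes the dimensionless Hamiltonian h = v²/2 − Φ(ϕ) for untrapped orbits; this normalization is an assumption made for illustration, not the paper's exact one.

```python
import numpy as np

def untrapped_frequency(Phi, h, n=20000):
    """Frequency Omega = 2*pi/T of an untrapped orbit of energy h in a
    2*pi-periodic potential Phi(phi): the period is T = integral over one
    period of d(phi)/v, with v = sqrt(2*(h + Phi(phi))). Requires h above
    the separatrix energy, so that h + Phi > 0 everywhere."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    v = np.sqrt(2.0 * (h + Phi(phi)))
    T = 2.0 * np.pi * np.mean(1.0 / v)   # midpoint rule on a periodic grid
    return 2.0 * np.pi / T

# Example with a sinusoidal potential; the amplitude is a placeholder.
print(untrapped_frequency(lambda phi: 0.3 * np.cos(phi), h=1.0))
```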
Nevertheless, simple approximations for Ω are easily obtained. For untrapped electrons whose orbits are far away from the separatrix, Ω is given by Eq. (36) below. For trapped orbits far away from the separatrix, it is given by Eq. (37), where φ″(0) ≡ d²φ/dϕ² is calculated at the O-point. Moreover, let φ_m be the minimum of φ over one wavelength, assumed to be reached at the X-point (which always happens for the situations considered in Ref. 2). Then, for orbits very close to the separatrix, Ω goes to zero, at a rate involving φ″(π) ≡ d²φ/dϕ² calculated at the X-point. B. Sinusoidal potential As discussed in Section II, a sinusoidal approximation is accurate for a growing wave, whatever the wave amplitude, provided that kλ_D ≳ 0.2. Then, for a sinusoidal potential, Ω is given by the following formulas. For untrapped electrons, Ω is expressed in terms of K₁, the elliptic integral of the first kind [32], of ω_B, the so-called bounce frequency, and of ζ_u, which is related to P through K₂, the elliptic integral of the second kind [32]. When ζ_u < 0.85, Ω differs from the approximate expression, Eq. (36), by less than 10%. For trapped electrons, Ω is expressed in terms of ζ_t, which is related to I. When ζ_t < 0.6, Ω differs from the approximate expression, Eq. (37), by less than 25%. C. Dawson's potential As discussed in Section II, the field profile proposed by Dawson is very close to the adiabatic one whenever kλ_D ≲ 0.2, and up to values close to the wave breaking limit. Moreover, the nonlinear frequency derived from Dawson's potential, ω_D, was shown in Section II to be quite accurate whenever kλ_D ≲ 0.2 and Φ₁ ≲ 0.5. As regards the envelope equation derived using Dawson's potential, it is expected to be accurate up to amplitudes close to the wave breaking limit. Indeed, as discussed in Paragraph IV A, the coefficients of this equation mainly depend on Ω and, from Eqs. (23)-(26), on ω and V_φ. In Section II, we found that replacing ω by ω_D would entail an error much less than the nonlinear frequency shift, δω, only when kλ_D ≲ 0.2 and Φ₁ ≲ 0.5. However, unless the wave amplitude is close to the wave breaking limit, δω ≪ ω, and it is valid to replace ω with ω_D in Eqs. (23)-(26). The same conclusion holds for the value of Ω calculated for passing particles away from the separatrix, whose expression is given by Eq. (36). As for trapped particles away from the separatrix, Eq. (37) shows that Ω is proportional to φ″(0), which is always very well estimated using Dawson's potential, even for the largest amplitudes — for example, when kλ_D = 0.1 and E_max ≈ 0.745, which corresponds to Fig. 1 (a). This may also be appreciated from Fig. 7, plotting the group velocity, v_g, as a function of Φ_A when kλ_D = 0.14. Using Eq. (8), one finds that, with Dawson's potential, Ω is given by the following formulas. From Eq. (35), Ω for untrapped electrons follows, where h = k²H/mω_pe² and where Φ(ϕ₀) is given by Eq. (9). Moreover, the relation between h and P follows from Eq. (27). For trapped electrons, the corresponding expression holds with h + Φ(ϕ_max) = 0 and, from Eq. (32), h is related to I. In the linear limit, the envelope equation involves the Landau damping rate, ν_L, with f′₀ the derivative, with respect to velocity, of the unperturbed velocity distribution function, and χ the adiabatic limit of the linear electron susceptibility; moreover, the equation remains valid when the wave is no longer growing, provided that V_φ varies less rapidly than A_s. E. Approximate expression for the envelope equation In this Section, we specialize to plasmas which are essentially uniform, so that the last term in the envelope equation, Eq. (22), may be dropped. Now, it is easily shown that the equation may be recast in the form of Eq. (55), involving an integral over the wave amplitude. From the results of the companion paper, Ref. 2,
we know that, except when φ_A is close to the wave breaking limit, the harmonic content of the scalar potential, and the wave frequency, do not vary much with φ_A. This implies that χ_a does not depend much on the wave amplitude so that, in the integral of Eq. (55), one may replace 1 + χ_a(φ′_A) with 1 + χ_a(φ_A). Then, the envelope equation takes a simpler, explicit form, where we have denoted E_A = kφ_A. E_A may be viewed as the effective amplitude of the electrostatic field (for a sinusoidal wave, E_A = E₁ is the field amplitude). As for v_g, its variations with Φ_A ≡ (kλ_D)²φ_A are illustrated in Fig. 7 when kλ_D = 0.14, for the self-consistent potential derived as in Ref. 2, for a purely sinusoidal potential, and for Dawson's potential [31]. It is noteworthy that the nonlinear values of v_g may be significantly (up to 30 times) larger than the linear limit. One may also see in Fig. 7 that the values of v_g derived with the self-consistent potential are very close to those obtained with Dawson's potential. This shows the relevance of using the latter simple potential to derive the EPW envelope equation. Note that, strictly speaking, v_g is only useful to derive the ray equations, i.e., the transport of the wave quanta. As further discussed in Section V, using Eq. (61) greatly simplifies the derivation of the space and time variations of the plasma wave, because the same v_g is used in the equation for Λ_a and for the ray tracing. Then, it is clear that Eq. (61) is valid when such effects as the group velocity splitting [41,42] are negligible. Moreover, in a uniform plasma, the space and time variations of Λ_a are mainly due to those of the wave amplitude, so that Eq. (61) may be further simplified by using Λ_a ∝ ∂_ωχ_a E_A² [Eq. (62)]. The approximation, Eq. (62), is explicitly used in the simple model introduced in Section V in order to derive the transverse modes resulting from wavefront bowing. V. TRANSVERSE MODES RESULTING FROM WAVEFRONT BOWING As shown in several papers [14][15][16][17], when an EPW grows and enters the strongly nonlinear regime where kinetic effects are important, its spectrum is enriched in transverse wavenumbers. These may result from a trapped-particle instability or from wavefront bowing, and it is usually impossible to disentangle the role of each effect directly from experimental results and 2-D PIC simulations, as discussed in the detailed analysis of Ref. 14. In this Section, we estimate the transverse wavenumbers which only result from wavefront bowing. This lets us assess the ability to correctly describe the EPW spectrum by only accounting for the latter effect. Moreover, SRS essentially occurs where the EPW is nearly monochromatic. Indeed, once secondary modes have grown unstable, the magnitude of the density fluctuations dramatically drops (see Ref. 14), and one would expect Thomson scattering rather than Raman scattering. Therefore, the opening angle of the SRS backscattered light is expected to directly follow from the EPW wavefront bowing. This further vindicates the introduction of an accurate and effective model to quantify it. A. The ray-in-cell method In order to address the EPW wavefront bowing, we can rely neither on the numerical methods based on the paraxial [25,26] nor on the quasioptical [27,28] approximations. Indeed, these do not accurately estimate the transverse variations of the wavenumbers, which are assumed to be small, while we precisely need to derive these variations in order to properly describe wavefront bowing. Consequently, we introduce in this Section the prototype of a new numerical method, which we dubbed ray-in-cell (RIC).
It combines the solution of nonstationary ray-tracing and envelope equations. The number of quanta for each wave is derived along the rays from the envelope equations. The ray dynamics, from which the wavenumbers follow, is derived from the dispersion relations. For the EPW, the dispersion relation is nonlinear, so that the ray dynamics keeps changing while the wave is growing. This explains why the ray tracing has to be nonstationary and has to be solved together with the envelope equation. To do so, we first estimate the wave amplitude on a fixed mesh, from an averaging of the wave quanta derived along the rays. This lets us derive the gradient of the wave amplitude on the mesh midpoints, which we project back onto the rays to derive their dynamics (see Paragraph V C and Fig. 8). Actually, the RIC method may be generalized, as in Ref. 29, to address multiple wave-wave interactions in various contexts. Indeed, the amplitude of any wave may be estimated on the mesh and then projected onto the rays of any other wave to account for their coupling. Moreover, solving envelope equations like Eq. (22) also allows wave-particle interaction (i.e., nonlinear kinetic effects) to be captured. Hence, we expect the RIC method to let us address laser-plasma interaction over space and time scales relevant to inertial confinement fusion (ICF), which is still far from being attainable with kinetic codes. Our long-term objective is to provide quick methods, which can be implemented in the hydrodynamical codes used in ICF, to correctly model laser propagation inside a fusion plasma. B. A simplified theoretical model Our modeling of the EPW wavefront bowing rests on several simplifying hypotheses. First, we use the geometrical optics limit, so that the transverse wavenumbers result from the ray equations (63) and (64), dx_R/dt = ∂_kΩ_R and dk_R/dt = −∇_xΩ_R, where k_R(t) ≡ k[x_R(t),t], and Ω_R[x, k(x,t),t] ≡ ω(x,t) solves the EPW nonlinear dispersion relation, 1 + χ_a = 0. Hence, ∂_kΩ_R = v_g as defined by Eq. (59). Now, the geometrical optics limit usually remains valid as long as most of the EPW energy is not confined within a volume less than k⁻³. Therefore, it is not, a priori, suited to address the EPW self-focussing resulting from wavefront bowing. However, as discussed in Paragraph V C, the RIC method alleviates most difficulties entailed by self-focussing or ray crossing. This is mainly due to the fact that the wave amplitude is bounded from above, because it is averaged over a grid cell. Moreover, in order to derive Ω_R in Eqs. (63) and (64), we assume that the longitudinal component of k does not change much, and remains much larger than its transverse components. This hypothesis is consistent with the neglect of the k-rotation, and of the variations of k ≡ |k|, to derive Ω_R. Therefore, Ω_R may be directly obtained from the results of Ref. 2, as plotted in Fig. 3. We also restrict to uniform plasmas, so that the space-dependence of Ω_R directly follows from that of the wave amplitude, Φ_A. Clearly, from Eq. (65), the ray equations have to be solved together with the envelope equation for Φ_A. Moreover, the growth of transverse wavenumbers is intrinsically a nonstationary problem, and will be considered as such. Namely, the nonstationarity in Ω_R follows from that of the wave amplitude. When the EPW results from SRS, Eq. (22) has to be solved together with the envelope equations for the laser and scattered lights. Solving these coupled equations is a difficult task, which is part of our current research work, but which is way beyond the scope of this paper. A sketch of one integration step of the ray equations is given below.
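The following is a minimal sketch of one step of Eqs. (63) and (65), in the uniform-plasma case where ∇_xΩ_R reduces to (∂Ω_R/∂Φ_A)∇Φ_A. The values of v_g and ∂Ω_R/∂Φ_A supplied per ray are assumed to be tabulated beforehand from the nonlinear dispersion relation of Ref. 2; they are not computed here.

```python
import numpy as np

def push_rays(x, k, vg, dOmega_dPhiA, grad_PhiA, dt):
    """One kick-drift step of the nonstationary ray equations:
         dk/dt = -grad_x(Omega_R) = -(dOmega_R/dPhi_A) * grad(Phi_A),
         dx/dt =  d(Omega_R)/dk   =  v_g, taken along k/|k| since Omega_R
                                      depends on k only through |k|.
    x, k: (N, 2) ray positions and wavenumbers; vg, dOmega_dPhiA: (N,)
    group speed and amplitude-derivative of the nonlinear frequency on each
    ray; grad_PhiA: (N, 2) amplitude gradient gathered on the rays."""
    k = k - dt * dOmega_dPhiA[:, None] * grad_PhiA           # kick
    k_hat = k / np.linalg.norm(k, axis=1, keepdims=True)     # unit vectors
    x = x + dt * vg[:, None] * k_hat                         # drift
    return x, k
```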
Here, we do not aim at an accurate description of SRS. Instead, we want to solve a much simpler problem, which is the estimate of the transverse extent of the EPW spectrum. It mainly depends on the growth rate of the transverse wavenumbers, compared to the time it takes for the EPW amplitude to reach the wave breaking limit. Indeed, if the EPW grows very quickly and breaks before the k direction could change, wavefront bowing is insignificant. By contrast, if the EPW grows very slowly and the laser duration is large enough, significant transverse components in k have the time to build up. Hence, in order to correctly estimate the transverse extent of the EPW spectrum, we only need the correct order of magnitude for the EPW growth rate. It is well known [11,43] that the growth rate of an essentially undamped SRS-driven plasma wave is of the order of γ₀ given by Eq. (66), where ω_las and ω_s are, respectively, the laser and scattered wave frequencies (ω_s = ω_las − ω), and where E_las is the amplitude of the laser electric field. Consequently, following the lines of Paragraph IV E, we use the simplified envelope equation (67) for the EPW, where Λ_a is defined by Eq. (62). Let us now introduce J such that ∂_tJ + v_g·∇J = J∇·v_g. By Liouville's theorem, J is the Jacobian, J = |dx_R(t)/dx_R(0)|. Then, Eq. (67) reads as Eq. (68). Since we only solve for the EPW amplitude, we cannot account for pump depletion. Then, E_las in Eq. (66) is given by Eq. (69), where the laser intensity, I_las, is assumed to remain undepleted, and where v_g^las is the laser group velocity. C. The ray-in-cell numerical scheme The RIC numerical scheme follows from that introduced in Ref. 29, where the wave quanta are calculated along rays, and the wave amplitudes are estimated at the nodes of parallelepipedic cells. The wave amplitudes at a given cell node follow from a simple averaging over all the rays located inside the cell. Consequently, these amplitudes are necessarily bounded from above, the upper bound being fixed by the cell volume, ΔV. This avoids the divergence in amplitude inherent to the use of the geometrical optics approximation when the rays cross each other, e.g., at caustics or when the wave self-focuses. Actually, in the RIC method, we do not use simple averages as in Ref. 29. Instead, we first estimate the wave amplitude at the cell nodes, using a shape factor. This allows us to derive the gradient of the field amplitude on the mesh midpoints, which we project back onto the rays using the same shape factor. This technique is borrowed from PIC codes, whence the acronym RIC. Moreover, unlike in Ref. 29, what we need on the rays is not the amplitude itself but its gradient. More precisely, from Eq. (68), we may only derive J∂_ωχ_aE_A², while the gradient of Φ_A = (kλ_D)²E_A/k is needed to solve the ray equation (65). As a first step to derive Φ_A, we get rid of the Jacobian in order to estimate Λ_a ∝ ∂_ωχ_aE_A² on the cell nodes via Eq. (70), where δ(x) denotes the Dirac distribution. Now, if the variations of Λ_a are sufficiently smooth and small over one cell, Eq. (70) may be replaced by the shape-factor estimate, Eq. (72), where S⁽ⁿ⁾ is a shape factor of order n [44]. It is such that ∫S⁽ⁿ⁾dx/ΔV = 1, so that S⁽ⁿ⁾ is normalized to unity. The estimate of Λ_a at the cell nodes, using Eq. (72), is illustrated by panel (a) of Fig. 8, showing a schematic of the RIC method. Since we only evaluate N_p over a discrete set of x_R's, we replace the integral in Eq. (72) by a Riemann sum. In our simulations, we choose the initial ray positions, x_R(0), evenly distributed over each cell.
The number of initial rays may vary from one cell to the other, which lets us associate an initial volume, dx_R^i(0), to each ray i. If ray i starts from a cell where we have placed N₀^i initial positions then, clearly, dx_R^i(0) = ΔV/N₀^i. This lets us approximate Eq. (72) by the Riemann sum of Eq. (73). For the sake of simplicity, in our simulations, we chose the same number of initial rays in each cell, so that N₀^i is a constant, which we denote by N₀. Moreover, all the results presented in this Section have been obtained by using a first-order shape factor. From the value of Λ_a at x^[j], we derive that of Φ_A at the same location by solving Eq. (74), which corresponds to panel (b) of Fig. 8. Once Φ_A is known on a regular mesh, its gradient is easily derived on the cell midpoints by making use of finite differences. Then, ∇Φ_A is projected back onto the rays, using the same shape factor, S⁽ⁿ⁾, as for Λ_a. This lets us estimate the right-hand side of Eq. (65) and move the rays forward, as illustrated by panel (c) of Fig. 8. The ray equations (63) and (65) are solved using the same time step, δt, as for Eq. (68) on N_p. Therefore, all quantities are always estimated at the same location along a ray. δt is chosen so that γ₀δt be small enough, γ₀δt ≲ 10⁻². We use a symplectic leap-frog time integrator [45] to solve Eqs. (63) and (65), while N_p is advanced from Eq. (68) through a discrete update, Eq. (75), where we have denoted γ₀(t + δt) ≡ γ₀[I_las(t + δt), E_a(t)]. Moreover, N_p is initialized at the same noise level in all cells, whose value, N_B, is discussed in Paragraph V D. The cell sizes should be chosen so that the variations of N_p within each cell be small enough for the estimate of Eq. (72) to remain accurate. In particular, their transverse size, l_⊥, should be significantly less than the laser waist, w₀. Indeed, the transverse extent of the EPW could be much less than w₀, due to its inhomogeneous amplification and to self-focussing. Moreover, l_⊥ should be at least of the order of the wavelength, λ. Indeed, if l_⊥ ≪ λ, Eq. (72) overestimates Λ_a when most rays are located within a few cells due to self-focussing. Hence, l_⊥ should be chosen so that l_⊥ ≈ λ. If the ray direction changes significantly due to the k-rotation, it is not possible to clearly identify the longitudinal and transverse directions over the whole simulation domain. Then, the cells should be cubes with volume ΔV ∼ λ^d (d being the dimension of the simulation). Since the rays are moving from left to right, they eventually leave the leftmost cells of the simulation box. Then, in these cells, the gradient of the wave amplitude is ill defined. In order to overcome this difficulty, we create a zone, on the left part of the simulation box, where we replace Eq. (68) by Eq. (76), in which γ linearly rises from 0 to γ₀, defined by Eq. (66). Moreover, the leftmost part of this zone is fed with rays which move at the linear group velocity and which carry a number of plasmons set to the noise level. Hence, the leftmost cells of our simulation box are never void of rays. The transverse boundaries of the simulation box also have to be treated with care. Indeed, onto a node located away from these boundaries are projected the plasmon numbers carried by the rays which are below and above it. However, a node located at the upper boundary can only receive the contributions from the rays which are below it, since there is no ray above the upper boundary of the simulation box. A minimal sketch of the full deposit-solve-gather cycle is given below.
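The following 2-D sketch of the deposit-solve-gather cycle uses a first-order (cloud-in-cell) shape factor. The exponential update of N_p (growth of the quanta at the rate 2γ₀) is an assumed stand-in for the discrete update of Eq. (75), the mapping lam_to_phi is a placeholder for Eq. (74) that also absorbs normalization constants, the N_B noise floor anticipates the boundary treatment discussed next, and boundary clipping of the indices is omitted for brevity.

```python
import numpy as np

def cic(pos, origin, h):
    """First-order (CIC) shape factor along one axis: lower node index
    and its weight; the upper node receives 1 - w."""
    s = (pos - origin) / h
    i = np.floor(s).astype(int)
    return i, 1.0 - (s - i)

def deposit(xr, yr, val, shape, origin, h):
    """Project a quantity carried by the rays onto the mesh nodes."""
    ix, wx = cic(xr, origin[0], h[0])
    iy, wy = cic(yr, origin[1], h[1])
    f = np.zeros(shape)
    for ox, fx in ((0, wx), (1, 1.0 - wx)):
        for oy, fy in ((0, wy), (1, 1.0 - wy)):
            np.add.at(f, (ix + ox, iy + oy), val * fx * fy)
    return f

def gather(f, xr, yr, origin, h):
    """Project a node-centred field back onto the rays (same shape factor)."""
    ix, wx = cic(xr, origin[0], h[0])
    iy, wy = cic(yr, origin[1], h[1])
    return (f[ix, iy] * wx * wy + f[ix + 1, iy] * (1.0 - wx) * wy
            + f[ix, iy + 1] * wx * (1.0 - wy)
            + f[ix + 1, iy + 1] * (1.0 - wx) * (1.0 - wy))

def ric_cycle(xr, yr, Np, NB, gamma0, dt, shape, origin, h, lam_to_phi):
    """One RIC cycle: amplify the quanta, deposit them (minus the noise
    floor), convert the node-centred Lambda_a to Phi_A, and gather the
    amplitude gradient back onto the rays."""
    Np = Np * np.exp(2.0 * gamma0 * dt)            # assumed form of the N_p update
    lam = NB + deposit(xr, yr, Np - NB, shape, origin, h)
    phiA = lam_to_phi(lam)                         # placeholder for Eq. (74)
    gx, gy = np.gradient(phiA, h[0], h[1])         # finite differences on the mesh
    return Np, gather(gx, xr, yr, origin, h), gather(gy, xr, yr, origin, h)
```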
As a consequence of this asymmetry, the wave amplitude at such nodes could be underestimated, which would entail spurious gradients and lead to a wrong estimate of the ray trajectories. In order to alleviate this difficulty, each node carries a number of plasmons set to the noise level, N_B, before the projections from the rays to the nodes. Then, instead of projecting the number of plasmons, N_p, carried by each ray, we only project N_p − N_B. Namely, we replace N_p with N_p − N_B in Eq. (73). This leads to a correct estimate of the EPW amplitude on the mesh, provided that the number of plasmons carried by the rays near the transverse boundaries remains close to N_B. Hence, the laser intensity at these boundaries must be so weak that it cannot significantly amplify the plasma wave. Because of the EPW self-focussing entailed by wavefront bowing, if the rays move according to geometrical optics, they converge towards the beam axis (chosen as the x-axis) and can cross it. However, physically, when a ray gets very close to the x-axis, it is reflected back due to diffraction. It cannot cross the axis. Once it starts to be reflected and moves away from the axis, the nonlinear frequency gradient bends its trajectory again and lets it converge back towards the axis. Hence, on average, this ray moves along the x-axis, so that the averaged value of k_⊥ is 0. In order to qualitatively reproduce this feature, we multiply k_{y,z} by tanh(|y,z|/2λ)/tanh(1) for all rays such that |y,z| < 2λ. However, for the simulation parameters detailed in Paragraph V D, we usually do not have to do so. Indeed, the EPW usually breaks before the rays could have a chance to cross the x-axis. Our model stems from the envelope equations derived in Ref. 11, which are only valid for nearly monochromatic waves. Consequently, additional modeling is required to correctly describe the EPW once it has broken. The PIC simulation results reported in Ref. 14 show that the field amplitude dramatically drops after wave breaking. This is most probably due to electron acceleration by chaotic transport, at the expense of the wave energy. In order to account for it, when the EPW amplitude on a ray is too large, we reduce the number of plasmons carried by this ray. Namely, when Φ_A ≥ Φ_wb, where Φ_wb is close to the upper bound for wave breaking derived in Section III, we only project onto the mesh a fraction of the number of plasmons carried by the ray. More precisely, we project a number of plasmons that linearly decreases from N_p to 0 when Φ_A varies from Φ_wb to 1.1 × Φ_wb. Moreover, we remove from the simulation box all the rays such that Φ_A > 1.1 × Φ_wb. Then, clearly, 1.1 × Φ_wb must be less than the maximum value for Φ_A deduced from the results of Section III, which we denote by Φ_A^max. However, one cannot just choose Φ_wb = Φ_A^max/1.1 because, due to self-focussing, the local wave amplitude on a node may be larger than the maximum amplitude on the rays. For the RIC simulation results of Paragraph V D 2, which correspond to kλ_D = 0.14, Φ_A^max follows from the value Φ₁^max ≈ 0.65 derived in Section III. D. Comparisons with PIC simulations 1. Simulation setup In our simulations, we assume that the laser propagates along the x direction, in a two-dimensional (2-D) plane geometry, (x, y). The intensity distribution is Gaussian in space and time, with waist w₀ = 16λ_las/π for an f/8 beam aperture. Moreover, the laser is assumed to be focussed at x_f = 1000c/ω_las. As for the simulated plasma, it is homogeneous, with density n_e/n_c = 0.08 (n_c = ε₀mω_las²/e² being the critical density), and its temperature is 300 eV.
With these parameters, the EPW resulting from SRS is such that kλ_D ≈ 0.14. The simulation box ranges from y = −100c/ω_las to y = 100c/ω_las, and from x = −500c/ω_las to x = 1000c/ω_las (it is 253 μm long and 33.7 μm wide). When x ≤ 0, the EPW amplification is derived from Eq. (76), with γ varying linearly from 0 to γ₀ when x varies from −500c/ω_las to 0. When x ≥ 0, we solve Eq. (68) to derive N_p. In the plasma domain 400 ≤ xω_las/c ≤ 600, which we investigated more particularly, when t ≤ −1 ps, and in the domain 20 ≤ |y|ω_las/c ≤ 50, where bowing is most effective, the averaged value of γ₀/kv_th is close to 0.25. This is above the condition for adiabaticity, γ₀/kv_th ≲ 0.1. Note, though, that we do not account for pump depletion, which should make the adiabatic approximation used to derive Ω_R more accurate. As a matter of fact, and as discussed in Paragraph V D 2, our results compare very well with those from the PIC simulations of Ref. 14. The EPW rays are assumed to be initially aligned with the laser rays. Consequently, the initial values of k_x and k_y are derived from the gradient of the complex phase of the Gaussian beam [46]. Moreover, the initial amplitude on each ray corresponds to the noise level, Φ_A = 5 × 10⁻⁸. It has been chosen so that, at t = −1.54 ps and in the region 400 ≤ xω_las/c ≤ 600, the maximum value of Φ_A on the rays is close to the limit we choose for wave breaking, Φ_wb = 0.5. Because of self-focussing, the maximum value of Φ_A on the cell nodes exceeds that carried by the rays. When t = −1.54 ps, this maximum value is close to 0.58. Note that, from Eq. (13), Φ_A = (∑_j j²Φ_j²)^{1/2}, while, with our normalization and from the Poisson equation, the density fluctuation induced by the EPW, δn_e, is such that δn_e/n_e = −∑_j j²Φ_j cos(jϕ). When kλ_D = 0.14, the minimum value for δn_e [47] is reached when ϕ = 0, which yields Eq. (78). Because δn_e^min/n_e converges more slowly than the potential, we use ten harmonics (instead of three) to derive it from Eq. (78). Then, for the largest amplitude reached by Φ_A in our RIC simulation when t = −1.54 ps, Φ_A ≈ 0.58, we estimate −δn_e^min/n_e ≈ 0.38. This is in very good agreement with the PIC simulation results of Ref. 14. Indeed, as may be inferred from Fig. 9 (d) [47], just before the EPW starts to break, the minimum value reached by the electron density is close to 0.05n_c, so that −δn_e^min/n_e ∼ 35 − 40% (since n_e/n_c = 0.08). Hence, we choose our noise level so as to match the PIC simulation results at t = −1.54 ps as regards the maximum wave amplitude. We use the same time step, δt = 1 fs, to numerically solve Eqs. (63), (65) and (68) from t = −2.2 ps (as in the PIC simulation of Ref. 14). Our mesh is made of rectangular cells, with longitudinal size l_x = 50c/ω_las and transverse size l_y = 5c/ω_las. Hence, there are only 30 cells along the x-direction and 40 along the y-direction. This very low resolution is enough for the RIC method to yield accurate results (no significant change could be found in our results when l_x and l_y were reduced by a factor of 5). This makes the method very effective. When using 64 rays per cell, the results of Paragraph V D 2 are obtained within a CPU time of 2 minutes. This is about 10⁶ times faster than a PIC simulation. 2. Results from the RIC simulation In this Paragraph, we present our RIC simulation results regarding wavefront bowing, which we systematically compare against those from the PIC simulation of Ref. 14.
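Before turning to the results, the setup above and the two ad hoc rules of Paragraph V C (near-axis damping of k_y and plasmon removal at breaking) translate into the following sketch. Only the numbers quoted in the text are used; everything else (names, structure) is an illustrative assumption.

```python
import numpy as np

# Parameters quoted in the text (lengths in units of c/omega_las).
params = dict(
    box_x=(-500.0, 1000.0), box_y=(-100.0, 100.0),  # simulation box
    l_x=50.0, l_y=5.0,                              # cell sizes (30 x 40 cells)
    dt_fs=1.0, rays_per_cell=64,
    n_over_nc=0.08, Te_eV=300.0, k_lambda_D=0.14,
    Phi_noise=5e-8, Phi_wb=0.5,
)

def axis_damping_factor(y, lam):
    """Mimic diffraction near the beam axis: k_y is multiplied by
    tanh(|y|/2*lambda)/tanh(1) for rays with |y| < 2*lambda."""
    f = np.ones_like(y)
    close = np.abs(y) < 2.0 * lam
    f[close] = np.tanh(np.abs(y[close]) / (2.0 * lam)) / np.tanh(1.0)
    return f

def breaking_weight(PhiA, Phi_wb):
    """Fraction of a ray's plasmons still projected onto the mesh: 1 below
    Phi_wb, linearly decreasing to 0 between Phi_wb and 1.1*Phi_wb (rays
    beyond 1.1*Phi_wb are removed from the simulation altogether)."""
    return np.clip((1.1 * Phi_wb - PhiA) / (0.1 * Phi_wb), 0.0, 1.0)
```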
Throughout this Paragraph, whenever we refer to the PIC simulation, we mean the PIC simulation of Ref. 14, without systematically specifying it. Figs. 9 (a)-(c) plot the maps of −δn_e^min/n_e, estimated on the cell nodes from our RIC simulation, and deduced from Φ_A using Eq. (78). On top of these maps, we plot the curves perpendicular to the local wavenumber. These curves mimic the wavefronts. They are plotted with the same color as that corresponding to δn_e^min = 0, so that they do not appear where the wave amplitude is very small. Although −δn_e^min/n_e is not exactly the same in the RIC and PIC figures, wavefront bowing is very similar, and the EPW is amplified over the same transverse region, but not over the same longitudinal one. The EPW is strongly amplified up to x ≈ 750c/ω_las in the RIC simulation (not shown here), and only up to x ≈ 500c/ω_las in the PIC simulation. Hence, as regards the region where the EPW is strongly amplified, the agreement is not perfect, because we do not solve the actual three-wave problem for SRS and neglect pump depletion. However, wavefront bowing is very well reproduced in our RIC simulation, thus meeting our prime objective. The brown region in the center of Figs. 9 (b) and (c) is where we have withdrawn the rays which carry such a large amplitude that we estimate that the EPW is totally broken. As discussed in Paragraph V C, this happens where Φ_A > 1.1Φ_wb, where we have chosen Φ_wb = 0.5. At t = −1.44 ps, and at x = 400c/ω_las, we find in our RIC simulation that the EPW is broken over a region that extends up to |yω_las/c| ≈ 20, as may be seen in Fig. 9 (b). This is in good agreement with the PIC simulation result plotted in Fig. 9 (d). However, in the PIC simulation the EPW is broken only up to x ≈ 500c/ω_las, while in the RIC simulation it is broken up to x ≈ 700c/ω_las (not totally shown here), although over only a narrow region, |yω_las/c| < 10, when xω_las/c > 450. Hence, there is a fair agreement between the RIC and PIC results as regards wave breaking, although the agreement is not perfect because we neglect pump depletion in the RIC simulation. At t = −1.37 ps, the EPW is broken over about the same region in the RIC and PIC simulations, at least within the domain 400 ≤ xω_las/c ≤ 600, as may be seen in Figs. 9 (c) and (f). This lets us conclude that, not only can the RIC method be used after the EPW has broken, but it also gives a fair account of the space region where the EPW is broken. This is quite remarkable considering the simplicity of the model, compared to the complexity of wave breaking. One may also appreciate in Figs. 9 (a)-(c) the very low resolution that was enough, in our RIC simulation, to get accurate results. This is one of the main reasons for the effectiveness of the RIC method. In order to make our comparisons with the PIC simulation more quantitative, we compute the transverse spectrum, ñ_e², defined by Eq. (79), where the sum is over all the rays located in the region 400 < xω_las/c < 600, and whose wavenumbers, k_i, are such that |k_{x,y} − k_{i_x,i_y}| < 10⁻²ω_las/c. At t = −1.54 ps, the k_y-span of the ñ_e² map found from our RIC simulation is the same as that of the Fourier transform of the EPW density, as derived from the PIC simulation. At this time, the EPW is not broken, so that the extent of the density spectrum results from wavefront bowing only. Note that, unlike the ñ_e² map, the Fourier spectrum accounts for the x-variation of the wave amplitude. This entails a width in k_x which is not related to the longitudinal gradient of the EPW frequency, Ω_R. At t = −1.44 ps, the k_y-span in ñ_e² is very similar to that of the PIC Fourier spectrum of the EPW density.
This shows that the latter is mainly due to bowing, although the EPW has already broken in the domain 400 ≤ xω_las/c ≤ 500. However, there is a well-marked maximum in the Fourier spectrum at k_yc/ω_las ≈ 0.25, and a minimum at k_y ≈ 0, absent from the ñ_e² map. This suggests that, at t = −1.44 ps, the density spectrum is affected by the growth of sidebands, especially close to k_y = 0. At t = −1.37 ps, the Fourier spectrum of the density is most significant when |k_yc/ω_las| ≤ 0.6, which corresponds to the span in k_y of the ñ_e² map in Fig. 10 (c). Therefore, even at t = −1.37 ps, when the EPW is broken over a significant part of the space domain, the main features of the density spectrum result from bowing. This PIC spectrum also contains some low signal at large transverse wavenumbers, up to |k_yc/ω_las| ∼ 1. These are not recovered in our RIC simulation. Therefore, we can unambiguously conclude that they result from transverse instabilities. Due to wavefront bowing, the EPW propagates at a nonzero averaged angle, θ, with respect to the averaged direction of propagation of the laser beam (which is the x-direction). The corresponding values inferred from the PIC simulation, θ_PIC, are reported in Fig. 11. Using our RIC simulation results, we can estimate θ from Eq. (80), where, for each ray, θ_i = tan⁻¹(k_y^i/k_x^i). Moreover, in Eq. (80), the sum is limited to those rays located in the upper plane, y > 0, so that θ_RIC < 0. The values for θ_RIC derived from Eq. (80) are plotted in Fig. 11. When t ≲ −1.7 ps, they are nonzero because of the finite opening angle of the laser beam. The increase in θ_RIC, and the relatively low values it assumes when −1.7 ≲ t ≲ −1.6 ps, is the consequence of gain narrowing. Indeed, at these times, the EPW is mostly amplified on the rays located close to the x-axis, which mainly propagate along the x-direction. When −1.6 ≲ t ≲ −1.2 ps, θ_RIC decreases because of the wavefront bowing. As may be seen in Fig. 11, a very good agreement is found between θ_PIC and θ_RIC at t = −1.54 ps, t = −1.44 ps and t = −1.37 ps. This shows, once again, the relevance of the RIC method to derive the EPW wavefront bowing. After t = −1 ps, we find that θ_RIC starts to increase. This is because the EPW has broken in so large a region that only the rays located far away from the x-axis remain in our simulation. There, the laser intensity is so small that the plasma wave is only poorly amplified, and bowing does not really occur. Moreover, when the EPW has broken nearly everywhere, the RIC method becomes doubtful, and the corresponding results are not shown here. The maximum value found for |θ_RIC| is close to 14°. It is rather small, which vindicates the neglect of the k-rotation when computing the nonlinear EPW frequency, Ω_R [2]. However, this does not mean that the effect of the EPW wavefront bowing is negligible. Indeed, from k_s = k_las − k, where k_las and k_s are, respectively, the laser and scattered wavenumbers, one finds that the scattered wave would propagate at an average angle close to 35° with respect to the x-axis when θ ≈ 14°. This is a significant angle, showing that, due to the nonlinear wavefront bowing, there is an effective side-scattering. This has to be accounted for when modeling laser-plasma experiments. Indeed, it allows one to correctly derive where the backscattered light, that may be collected in an experiment, actually comes from. It also allows the effect of SRS on the plasma hydrodynamics to be correctly accounted for.
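The estimate of θ and of the resulting scattering direction may be sketched as follows. The plasmon-number weighting of the ray average is an assumption, since the exact weighting entering Eq. (80) is not reproduced here, and the EPW wavenumber magnitude in the second function is a free input.

```python
import numpy as np

def theta_ric(kx, ky, y, Np):
    """Averaged EPW propagation angle over the rays of the upper half-plane
    (y > 0), with theta_i = arctan(k_y^i / k_x^i) for each ray. The
    N_p-weighted average is an assumed choice of weighting."""
    sel = y > 0.0
    theta_i = np.arctan2(ky[sel], kx[sel])
    return np.sum(Np[sel] * theta_i) / np.sum(Np[sel])

def scattered_angle_deg(k_las, k_epw, theta_epw_deg):
    """Direction of the scattered light from k_s = k_las - k, with k_las
    along x and an EPW wavevector of magnitude k_epw tilted by theta_epw
    (degrees) from the x-axis; returns the angle of k_s w.r.t. the x-axis."""
    t = np.radians(theta_epw_deg)
    k = k_epw * np.array([np.cos(t), np.sin(t)])
    ks = np.array([k_las, 0.0]) - k
    return np.degrees(np.arctan2(ks[1], ks[0]))
```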
VI. CONCLUSION In this paper, we provided a description, as complete as possible, of nonlinear adiabatic electron plasma waves. Using the results derived by Dawson [5], together with the general theory of the companion paper [2], we could find an explicit expression for the electrostatic potential, valid whatever kλ_D and up to amplitudes close to the wave breaking limit. This expression was derived for a growing wave, but should remain valid if the wave has not kept on growing, provided that V_φ has varied more slowly than the width, in velocity, of the separatrix. We proved rigorously that an adiabatic EPW could not keep growing beyond an amplitude, Φ₁^max, which we derived. In practice, the EPW is expected to break before reaching Φ₁^max, due to the unstable growth of secondary modes. Hence, Φ₁^max is only an upper bound for the wave breaking limit which, nevertheless, provides a good estimate of the actual one for the physics situation considered in Section V. Moreover, as discussed in Section V, our estimate for Φ₁^max is useful, and relevant, to address the EPW wavefront bowing. The values we plotted in Fig. 6 for the maximum EPW amplitude are only for a growing wave. However, by using the general theory of the companion paper, we can derive Φ₁^max whatever the space and time variations of the scalar and vector potentials, and of the wavenumber and wave frequency. In order to derive the space and time variations of the wave amplitude, one may resort to envelope equations. From Section II, we know that the scalar potential is well approximated either by a sinusoid or by Dawson's solution, except, maybe, for amplitudes close to the wave breaking limit. Using the latter result, we provided an explicit expression for the nonlinear EPW envelope equation, valid to describe the wave growth up to its breaking, whatever kλ_D. Moreover, we also showed how the general equation could be simplified when the variations of the amplitude of the scalar potential were much faster than those of the vector potential and phase velocity. In a multidimensional geometry, the envelope equation needs to be solved along rays, whose trajectories have to be calculated self-consistently while the wave amplitude is changing. We did perform such a nonlinear calculation by using for the EPW an envelope equation that mimicked, in a simplified way, the SRS drive. To the best of our knowledge, this had never been done before, and this let us introduce a new numerical scheme, dubbed ray-in-cell (RIC). From RIC simulations, we could compute the transverse modes which only resulted from wavefront bowing. We showed that they compared very well with those derived from the PIC simulations of Ref. 14 before the EPW had broken. This allowed us to unambiguously find which transverse wavenumbers, in the PIC Fourier spectra reproduced in Figs. 10 (d)-(f), resulted from bowing and which from the growth of secondary instabilities. Moreover, using our RIC simulations, we could estimate the averaged angle of propagation, θ, of the EPW with respect to the x-axis, and found that it agreed nicely with that inferred from PIC simulations. Although this angle remains modest, θ ≲ 14°, it entails a significant angle for the backscattered wave, which may be as large as 35°. Therefore, nonlinear wavefront bowing should induce substantial SRS side-scattering, which has to be accounted for in the modeling of laser-plasma interaction. This is needed in order to correctly estimate the impact of SRS on the plasma hydrodynamics.
This is also needed to correctly estimate where the scattered light, that may be collected in an experiment, actually comes from. RIC simulations proved to provide accurate estimates for wavefront bowing at a much lower computational cost than PIC simulations (they are about 10⁶ times faster). They will be used in a future publication to perform large-scale simulations of laser-plasma interaction, accounting for SRS in the nonlinear kinetic regime.
Regulation of Schwann cell proliferation and migration by miR-1 targeting brain-derived neurotrophic factor after peripheral nerve injury Peripheral nerve injury is a global problem that causes disability and severe socioeconomic burden. Brain-derived neurotrophic factor (BDNF) benefits peripheral nerve regeneration and has become a promising therapeutic molecule. In the current study, we found that microRNA-1 (miR-1) directly targeted BDNF by binding to its 3′-UTR and caused both mRNA degradation and translation suppression of BDNF. Moreover, miR-1 induced BDNF mRNA degradation primarily through binding to target site 3, rather than target site 1 or 2, of the BDNF 3′-UTR. Following rat sciatic nerve injury, a rough inverse correlation was observed between the temporal expression profiles of miR-1 and BDNF in the injured nerve. The overexpression or silencing of miR-1 in cultured Schwann cells (SCs) inhibited or enhanced BDNF secretion from the cells, respectively, and also suppressed or promoted SC proliferation and migration, respectively. Interestingly, BDNF knockdown could attenuate the enhancing effect of the miR-1 inhibitor on SC proliferation and migration. These findings will contribute to the development of a novel therapeutic strategy for peripheral nerve injury, which overcomes the limitations of direct administration of exogenous BDNF by using miR-1 to regulate endogenous BDNF expression. After peripheral nerve injury, the expressions of various miRNAs are altered in a time-dependent manner, and these differentially expressed miRNAs regulate the biological behaviors of neural cells (neurons and SCs), such as neuronal survival, neurite outgrowth, SC proliferation, SC migration, and axon remyelination by SCs 17 . We have previously identified a number of miRNAs and mRNAs that are differentially expressed after sciatic nerve injury 18,19 . The data from microarray analysis suggested that the expression of BDNF was up-regulated following sciatic nerve injury and that the expression profile of BDNF was opposite to that of miR-1. It is thus plausible that miR-1 negatively regulates BDNF expression and further mediates peripheral nerve regeneration. In the current study, therefore, we aimed to identify whether BDNF was a direct target of miR-1 and to determine how miR-1, together with BDNF, affected peripheral nerve regeneration. We found that there exist 3 binding sites for miR-1 in the 3′-UTR of BDNF. Target site 3, by mediating the mRNA degradation of BDNF, played the most significant role among these 3 target sites. Through direct binding, miR-1 reduced the mRNA expression, the protein expression, and the secretion of BDNF, and meanwhile inhibited SC proliferation and migration. These findings will contribute to understanding the molecular mechanisms regulating peripheral nerve regeneration, and will lead to a new strategy for applying BDNF in peripheral nerve repair. Materials and Methods Animal surgery and tissue preparation. Adult, male Sprague-Dawley (SD) rats were obtained from the Animal Experiment Center of Nantong University in China. The animals underwent sciatic nerve crush as described previously 20 . Briefly, after anaesthetization, the sciatic nerve, at 10 mm above the bifurcation into the tibial and common fibular nerves, was crushed twice. The injured nerve segments of 0.3 cm in length, together with both nerve ends of 0.1 cm in length, were harvested at 0, 1, 4, 7, and 14 days post nerve injury (PNI).
All animal procedures were performed in accordance with the Institutional Animal Care guideline of Nantong University and were ethically approved by the Administration Committee of Experimental Animals in Jiangsu Province, China. SC culture and transfection. Primary SCs were isolated from the sciatic nerves of 1-day-old SD rats and further treated with anti-Thy1.1 antibody (Sigma, St Louis, MO) and rabbit complement (Invitrogen, Carlsbad, CA) to remove fibroblasts as described previously 21 . The final cell preparation consisted of 98% SCs, as determined by immunocytochemistry with the SC marker anti-S100 (DAKO, Carpinteria, CA). A rat SC line (RSC96) was purchased from the American Type Culture Collection. Primary SCs and RSC96 SCs were cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS) in a humidified 5% CO2 incubator at 37 °C. Primary SCs were passaged no more than 3 times prior to use. Plasmid construction and luciferase assay. miRNA target prediction programs (TargetScan and miRanda) were used to predict the binding sites of miR-1 on BDNF. The 3′ untranslated region (3′-UTR) of BDNF was amplified by PCR using rat genomic DNA as a template. The PCR products were subcloned into the region directly downstream of the stop codon of the luciferase gene in the luciferase reporter vector to generate the p-Luc-UTR reporter plasmid. Overlap PCR was used to construct the 3′-UTR mutant reporter plasmids. Primers used to generate the wild-type and mutant BDNF 3′-UTRs were as follows: BDNF 3′-UTR: CCGGAATTCGGACATATCCATGACCAGA, CCGCTCGAGGGATGGAGGCCATAAATGGA; BDNF 3′-UTR mutant 1: CTGCATTACATAGGTCGATAATGTTGTGGTTTG, CAACATTATCGACCTATGTAATGCAGACTTTTA; BDNF 3′-UTR mutant 2: GAACCAAAACATAGGGTTTACATTTTAGACACTA, TAAAATGTAAACCCTATGTTTTGGTTCAAATTT; BDNF 3′-UTR mutant 3: TACTTGAGACATAGGTAAAGGAAGGCTCGGAAG, GCCTTCCTTTACCTATGTCTCAAGTACCATTC. The sequences of the wild-type and mutant 3′-UTRs were confirmed by sequencing. For the luciferase assay, HEK 293T cells were seeded in 24-well plates and co-transfected with a mixture of 120 ng p-Luc-UTR, 20 pmol miRNA mimics, and 20 ng Renilla luciferase vector pRL-CMV (Promega, Madison, WI) using the Lipofectamine 2000 transfection system (Invitrogen). At 36 h after transfection, the firefly and Renilla luciferase activities were measured in the cell lysates using the dual-luciferase reporter assay system (Promega). Quantitative real-time RT-PCR (qRT-PCR). Total RNA was extracted using Trizol (Life Technologies, Carlsbad, CA) according to the manufacturer's instructions. Contaminating DNA was removed using RNeasy spin columns (Qiagen, Valencia, CA). The quality of the isolated RNA samples was evaluated using an Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA) and the quantity of the RNA samples was determined using a NanoDrop ND-1000 spectrophotometer (Infinigen Biotechnology Inc., City of Industry, CA). To determine miR-1 expression, a total of 20 ng of RNA was reverse transcribed using the TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA) and stem-loop RT primers (Ribobio) according to the manufacturer's instructions. To determine BDNF expression, RNA samples were reverse transcribed to cDNA using a PrimeScript reagent Kit (TaKaRa, Dalian, China) according to the manufacturer's instructions. Quantitative real-time RT-PCR was performed using SYBR Green Premix Ex Taq (TaKaRa) with the BDNF primers on an Applied Biosystems StepOne real-time PCR System.
The sequences of the BDNF primers were as follows: CAGGGGCATAGACAAAAG, CTTCCCCTTTTAATGGTC. All reactions were run in triplicate. Relative expression levels of miR-1 and BDNF were calculated using the comparative 2^(−ΔΔCt) method with U6 and GAPDH as the reference genes, respectively. Western blot analysis. Protein lysates were extracted from lesioned sciatic nerve tissues or cell cultures through direct homogenization and lysis in a Laemmli sample buffer (2% SDS, 52.5 mM Tris-HCl, and protein inhibitors). The protein concentration was determined by the Micro BCA Protein Assay Kit (Pierce, Rockford, IL). Protein lysates were mixed with β-mercaptoethanol, glycerin, and bromophenol blue, and incubated at 95 °C for 5 min. Equal amounts of protein were separated on 12% SDS-polyacrylamide gels. Following electrophoresis, proteins were transferred onto polyvinylidene fluoride (PVDF) membranes (Bio-Rad, Hercules, CA). Membranes were blocked with 5% non-fat dry milk in PBS with 0.1% Tween-20 for 2 h, probed with a primary BDNF antibody (Abcam, Cambridge, MA) overnight at 4 °C, incubated in horseradish peroxidase (HRP)-conjugated secondary antibody (Pierce), developed with enhanced chemiluminescence reagent (Cell Signaling, Beverly, MA), and exposed to Kodak X-Omat Blue Film (NEN Life Science, Boston, MA). Quantification of band signal intensity was conducted with Grab-it 2.5 and Gelwork software. Enzyme-linked immunosorbent assay (ELISA). Primary SCs or RSC96 SCs were transfected with miR-1 mimic and control, miR-1 inhibitor and control, or BDNF siRNA and control, respectively, using Lipofectamine RNAiMAX transfection reagent (Invitrogen). After incubation for 24 h, the medium of the transfected SCs was replaced with FBS-free medium for an additional 48 h of incubation. The medium was then collected and filtered through a 0.22 μm filter (Millipore, Bedford, MA) to obtain the supernatant. The protein levels of BDNF in the medium were measured using a ChemiKine BDNF ELISA Kit (Millipore) according to the manufacturer's instructions. Data were measured and summarized from 3 independent experiments, each comprising triplicate wells. Cell proliferation assay. Primary SCs were resuspended in fresh pre-warmed (37 °C) complete medium, counted, and then plated on poly-L-lysine-coated 96-well plates at a density of 3 × 10^5 cells/ml. At 36 h after transfection, 100 μM 5-ethynyl-2′-deoxyuridine (EdU) was applied to the cell culture. After an additional incubation of 24 h, cells were fixed with 4% paraformaldehyde in phosphate buffered saline (PBS) for 30 min. The proliferation rate of SCs was determined using the Cell-Light EdU DNA Cell Proliferation Kit (Ribobio) according to the manufacturer's protocol. The ratio of EdU-positive cells to total cells was calculated using images of randomly selected fields obtained under a DMR fluorescence microscope (Leica Microsystems, Bensheim, Germany). Assays were performed 3 times using triplicate wells. Cell migration assay. The migration ability of SCs was examined using 6.5 mm Transwell chambers with 8 μm pores (Costar, Cambridge, MA). The bottom surface of each membrane was coated with 10 μg/ml fibronectin. 100 μl of primary SCs (3 × 10^5 cells/ml) resuspended in DMEM were transferred to the top chamber of each transwell to allow their migration in a humidified 5% CO2 incubator at 37 °C, with 500 μl of complete medium pipetted into the lower chamber. The upper surface of each membrane was cleaned with a cotton swab at the indicated time point.
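As a concrete illustration of the comparative 2^(−ΔΔCt) quantification described above, the following minimal Python sketch computes a relative expression value; the Ct numbers are invented for illustration and are not data from this study.

```python
import numpy as np

def ddct_fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative Ct method: fold change = 2^-(dCt_sample - dCt_calibrator)."""
    delta_ct = ct_target - ct_ref              # normalize target to reference gene
    delta_ct_cal = ct_target_cal - ct_ref_cal  # same normalization for the calibrator
    return 2.0 ** -(delta_ct - delta_ct_cal)

# Hypothetical example: BDNF Ct normalized to GAPDH at 7 d PNI vs. 0 h PNI
fold = ddct_fold_change(ct_target=22.1, ct_ref=18.0,
                        ct_target_cal=25.3, ct_ref_cal=18.2)
print(f"Relative BDNF expression: {fold:.1f}-fold")  # 8.0-fold with these made-up Cts
```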
Cells adhering to the bottom surface of each membrane were stained with 0.1% crystal violet and then counted under a DMR inverted microscope (Leica Microsystems). Assays were performed 3 times using triplicate wells. Data analysis. All numerical results were reported as means ± SEM. Student's t-test was used for statistical analyses with the aid of SPSS 15.0 (SPSS, Chicago, IL). p < 0.05 was considered statistically significant. Results miR-1 negatively regulated BDNF by directly targeting its 3′-UTR. The data from mRNA and miRNA microarray analysis indicated that following sciatic nerve injury, the mRNA expression levels of BDNF were dramatically up-regulated with a peak value at 7 d PNI while the expression levels of miR-1 were dramatically down-regulated with a valley value at 7 d PNI, both compared to those at 0 h PNI (Fig. 1A). To determine whether BDNF was regulated by miR-1 through direct binding to its 3′-UTR, the wild-type and mutant 3′-UTRs of BDNF, including single target site mutants (mut1, mut2, and mut3), double target site mutants (mut1&2, mut1&3, and mut2&3), and the triple target site mutant (mut1&2&3), were constructed and inserted into the region downstream of the luciferase reporter gene (Fig. 1C,D). miR-1 mimic and p-Luc-UTR constructs were co-transfected into HEK 293T cells to analyze the relative luciferase activity. The relative luciferase activity was significantly decreased when miR-1 mimic was co-transfected with the wild-type, single target site mutant, or double target site mutant constructs, but was not altered when miR-1 mimic was co-transfected with the triple target site mutant (Fig. 1E). Notably, the 3 single-site mutants of BDNF, mut1, mut2, and mut3, exhibited different inhibiting effects. Among cells co-transfected with miR-1 mimic plus mut1, mut2, or mut3, the reduction in relative luciferase activity was the least robust in cells co-transfected with miR-1 mimic plus mut3 (p = 0.0147), while the reduction in relative luciferase activity was the most significant in cells co-transfected with miR-1 mimic plus mut1 (p = 0.0031) (Fig. 1E). Similarly, compared with cells co-transfected with miR-1 mimic plus mut1&2 (p = 0.0091), the reduction in the relative luciferase activity was less dramatic when cells were co-transfected with miR-1 mimic plus mut2&3 or mut1&3 (p = 0.0452 and p = 0.0209, respectively) (Fig. 1E). Taken together, these observations suggested that miR-1 targeted BDNF through direct binding to the 3′-UTR of BDNF. All 3 target sites of BDNF were critical for the formation of the miR-1-BDNF complex, but with unequal significance. Among these 3 target sites, target site 3 (1294-1300 bp) seemed to have the greatest impact on miRNA binding. miR-1 inhibited BDNF expression through both mRNA degradation and translation repression. To identify the effect of miR-1 on the expression of BDNF, miR-1 mimic or inhibitor was transfected into cultured SCs, respectively. (Figure 1 legend, panel E: The relative luciferase activity was analyzed after the p-Luc-UTR vectors including the 3′-UTR of wild-type and mutant BDNF were co-transfected into HEK 293T cells with miR-1 mimic (miR-1) or mimic control (miR Con). Renilla luciferase vector was used as an internal control. **p < 0.01, *p < 0.05.) qRT-PCR analysis showed that the mRNA expression of BDNF was significantly suppressed by over-expression of miR-1, and significantly enhanced by silencing of miR-1 (Fig. 2A).
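The relative luciferase activity above is the firefly signal normalized to the Renilla internal control and compared between conditions with Student's t-test; below is a minimal sketch of that calculation (SciPy standing in for the SPSS analysis named in the Methods), with invented triplicate readings.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate luminescence readings (arbitrary units)
firefly_mimic   = np.array([1200., 1150., 1280.])
renilla_mimic   = np.array([980., 1010., 995.])
firefly_control = np.array([2400., 2350., 2500.])
renilla_control = np.array([1000., 990., 1020.])

# Normalize firefly to Renilla (transfection-efficiency internal control)
rel_mimic   = firefly_mimic / renilla_mimic
rel_control = firefly_control / renilla_control

# Two-sample Student's t-test between conditions
t, p = stats.ttest_ind(rel_mimic, rel_control)
print(f"relative activity {rel_mimic.mean():.2f} vs {rel_control.mean():.2f}, p = {p:.4f}")
```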
Western blot analysis showed that the protein expression of BDNF was also reduced by over-expression of miR-1, and increased by silencing of miR-1 (Fig. 2B,C). Moreover, the miR-1 over-expression-induced decrease in BDNF protein expression was greater than that in BDNF mRNA expression, suggesting that BDNF was negatively regulated by miR-1 possibly through both mRNA degradation and translation repression. The effect of miR-1 on BDNF mRNA expression was further determined. Co-transfection of cultured SCs with miR-1 mimic plus the wild-type BDNF 3′-UTR-containing plasmid significantly decreased the relative luciferase mRNA level, which was calculated as the ratio of firefly luciferase to Renilla luciferase mRNA (Fig. 2D). In contrast, the relative luciferase mRNA level in cells transfected with miR-1 mimic plus the mutant BDNF triple target site construct (mut1&2&3) was not significantly changed (Fig. 2D). A comparable reduction in the relative luciferase mRNA level was observed in cultured SCs co-transfected with miR-1 mimic plus mut1, mut2, or mut1&2, respectively, but no change in the relative luciferase mRNA level was found in cultured SCs co-transfected with miR-1 mimic plus mut3, mut2&3, or mut1&3 (mut3-containing BDNF 3′-UTRs) (Fig. 2D), suggesting that miR-1 induced BDNF mRNA degradation primarily through binding to target site 3 rather than target site 1 or 2 of the BDNF 3′-UTR. Temporal expression changes of BDNF were inversely associated with those of miR-1. To verify the correlation between miR-1 and BDNF expression, the expression profiles of miR-1 and BDNF mRNA following sciatic nerve injury were investigated by qRT-PCR. The expression of miR-1 in the injured nerve was nearly unchanged at 1 d PNI and then drastically decreased at 4, 7, and 14 d PNI with a valley value at 7 d PNI, compared to that at 0 h PNI (Fig. 3A). On the contrary, the mRNA expression of BDNF was significantly increased at 1, 4, 7, and 14 d PNI with a peak value at 7 d PNI, compared to that at 0 h PNI (Fig. 3B). The protein expression profile of BDNF following sciatic nerve injury was also investigated. Results from Western blot analysis showed that the protein expression of BDNF was not significantly increased at 1 d PNI, but was extensively increased at 4 and 7 d compared to that at 0 h PNI, with a peak value at 7 d (Fig. 3C,D). (Figure 2 legend, panel D: The relative luciferase mRNA was analyzed after co-transfection with wild-type or mutant BDNF plus miR-1 mimic (miR-1) or mimic control (miR Con). **p < 0.01, *p < 0.05.) Notably, the protein expression profile of BDNF was not precisely parallel to its mRNA expression profile, suggesting that BDNF might be regulated at the post-transcriptional level. The above analyses provided further evidence that after peripheral nerve injury, the temporal expression profile of miR-1 was roughly inversely correlated with that of BDNF. In other words, miR-1 negatively regulated BDNF in the injured peripheral nerves. miR-1 inhibited BDNF secretion from SCs. Following peripheral nerve injury, SCs synthesize BDNF and release it into the basal laminae to promote nerve regeneration. In the current study, ELISA analysis was performed to investigate the effects of miR-1 on BDNF production by SCs. Transfection of either primary SCs or RSC96 SCs with miR-1 mimic significantly decreased the cellular secretion of BDNF compared to that with the non-targeting negative control.
Inversely, transfection of either primary SCs or RSC96 SCs with miR-1 inhibitor significantly increased the cellular secretion of BDNF compared to that with the non-targeting negative control (Fig. 4A,B). To further determine whether the effects of miR-1 on BDNF secretion were exerted through targeting the 3′-UTR of BDNF, miR-1 mimic and BDNF 3′-UTR plasmid were co-transfected into RSC96 SCs. Transfection with miR-1 mimic alone significantly decreased BDNF secretion, but this reducing effect of miR-1 mimic was attenuated by co-transfection with the BDNF 3′-UTR plasmid (Fig. 4C). miR-1 suppressed SC proliferation and migration. Primary SCs were transfected with miR-1 mimic, miR-1 inhibitor, and non-targeting negative controls, respectively, and then subjected to cell proliferation and migration assays. EdU incorporation results showed that over-expression of miR-1 reduced the proliferation rate of SCs to less than 50% of the control value while silencing of miR-1 increased the proliferation rate of SCs to nearly 1.5-fold the control value, suggesting that miR-1 could suppress SC proliferation (Fig. 5A). Transwell migration assay results showed that SCs transfected with miR-1 mimic or miR-1 inhibitor exhibited a significant decrease or increase in cell migration rate compared to SCs transfected with non-targeting negative controls, respectively, suggesting that miR-1 could also suppress SC migration (Fig. 5B). BDNF knockdown recapitulated miR-1 effects on phenotype modulation of SCs. To further investigate whether the effects of miR-1 on SC proliferation and migration were exerted through down-regulation of BDNF, primary SCs were transfected with BDNF siRNA. The qRT-PCR and ELISA data confirmed that stable knockdown of BDNF was achieved (Fig. 6A). BDNF knockdown led to a significant reduction in cell proliferation and cell migration, which was similar to the influence of miR-1 over-expression (Fig. 6B). After primary SCs were co-transfected with BDNF siRNA and miR-1 inhibitor, the miR-1 inhibitor-induced increase in cell proliferation and migration was significantly abrogated by BDNF knockdown (Fig. 6C,D). Collectively, all the results further demonstrated that BDNF was a functional mediator of miR-1 regulation of the SC phenotype. Discussion Peripheral nerve regeneration is a complex biological process that involves numerous differentially expressed coding and non-coding RNAs. The regulatory role of miRNAs in peripheral nerve regeneration has attracted recent research interest. Many previous studies report that phenotype modulation of SCs can be regulated by miRNAs after peripheral nerve injury 17 . As is well known, SCs are the principal glial cells in the peripheral nervous system and play essential roles during peripheral nerve development and regeneration. Following nerve injury, SCs help remove myelin debris, and undergo dedifferentiation, proliferation, and migration to form bands of Büngner, thus guiding the directed growth of regenerating axons to the denervated targets 22,23 . Meanwhile, SCs synthesize and secrete neurotrophic factors, including nerve growth factor (NGF), BDNF, neurotrophin-3 (NT-3), and neurotrophin-4/5 (NT-4/5), which enhance the survival and growth of neurons 24,25 . These secreted neurotrophic factors, in turn, exert beneficial actions on phenotype modulation of SCs and neurons, thus forming a positive feedback loop for nerve development and regeneration 26,27 .
(Figure 4 legend: Primary SCs and RSC96 SCs were transfected with miR-1 mimic (miR-1), miR-1 inhibitor (Anti-miR-1), mimic control (miR Con), or inhibitor control (Anti-miR Con), respectively. The BDNF secretion from both primary SCs and RSC96 SCs transfected with miR-1 mimic was significantly decreased, while the BDNF secretion from both primary SCs and RSC96 SCs transfected with miR-1 inhibitor was significantly increased, as compared to that from cells transfected with control. Histogram (C) shows that the miR-1-induced reduction of BDNF secretion was rescued by co-transfection with miR-1 mimic plus BDNF 3′-UTR plasmid. **p < 0.01, *p < 0.05.) (Figure 5 legend: miR-1 decreased the proliferation and migration of SCs. Primary SCs were transfected with miR-1 mimic (miR-1), miR-1 inhibitor (Anti-miR-1), mimic control (miR Con), or inhibitor control (Anti-miR Con), respectively. (A) The proliferation rate of SCs transfected with miR-1 mimic was significantly decreased while that of SCs transfected with miR-1 inhibitor was significantly increased compared with control. (B) The migration rate of SCs transfected with miR-1 mimic was significantly decreased while that of SCs transfected with miR-1 inhibitor was significantly increased compared with control. **p < 0.01, *p < 0.05.) Since neurotrophic factors (including BDNF) can promote peripheral nerve regeneration, they are considered to hold great therapeutic potential for the treatment of peripheral nerve injury. The clinical use of exogenous BDNF, however, is limited by many difficulties, such as the delivery problem, the maintenance of effective pharmacological dosages, and the tumorigenic risk at high concentrations 13,14,28 . In order to seek an alternative to the direct application of exogenous BDNF for peripheral nerve repair, the current study was performed to investigate the endogenous regulation of BDNF by miRNAs after peripheral nerve injury. We identified that miR-1 mediated phenotype modulation of SCs by targeting BDNF, providing further evidence for miRNA-mediated post-transcriptional regulation of peripheral nerve regeneration. In the current study, we performed qRT-PCR and Western blot analyses to verify the inverse association between the expressions of miR-1 and BDNF. Then, we demonstrated that miR-1 inhibited both the mRNA and the protein levels of BDNF by directly targeting the 3′-UTR of BDNF, and showed that miR-1 also reduced the abundance of endogenous BDNF synthesized by SCs. (Figure 6 legend: (A) The mRNA expression of BDNF as well as the BDNF secretion in primary SCs transfected with BDNF siRNA (si BDNF) was significantly decreased as compared to that in SCs transfected with siRNA control (si Con). (B) Both the proliferation and the migration rates of SCs transfected with BDNF siRNA were significantly decreased compared to those of SCs transfected with siRNA control. (C) The proliferation rate of SCs was significantly increased by miR-1 inhibitor (Anti-miR-1), but this increase was abrogated by co-transfection with miR-1 inhibitor plus BDNF siRNA (Anti-miR-1 + si BDNF). (D) The migration rate of SCs was remarkably increased by miR-1 inhibitor, but this increase was likewise abrogated by co-transfection with miR-1 inhibitor plus BDNF siRNA. **p < 0.01, *p < 0.05.)
Target prediction algorithms as well as the dual-luciferase reporter assay suggested that BDNF was a binding target of miR-1 and that there were 3 binding sites of miR-1 at the BDNF 3′-UTR. Although all 3 target sites were involved in miR-1 binding, target site 3 might be the most effective among them. As a rule, miRNAs negatively regulate the expression of their target genes by promoting mRNA degradation and/or inhibiting protein translation 29,30 . In the current study, we determined the relative luciferase mRNA level (as the ratio of firefly to Renilla luciferase). This level was reduced when cultured SCs were co-transfected with miR-1 plus a BDNF 3′-UTR containing no mutation at target site 3 (including wild-type, mut1, mut2, and mut1&2), but was not significantly changed when cultured SCs were co-transfected with miR-1 plus a BDNF 3′-UTR containing a mutated target site 3 (mut3, mut2&3, and mut1&3). These data suggested that target site 3 at the BDNF 3′-UTR primarily affected mRNA degradation whereas target sites 1 and 2 affected protein translation. To determine the biological role of miR-1 in phenotype modulation of SCs, cultured SCs were transfected with miR-1 mimic and with miR-1 inhibitor, respectively. We found that over-expression and silencing of miR-1 caused suppressing and promoting effects on SC proliferation and migration, respectively. Moreover, cultured SCs were transfected with miR-1 inhibitor in the presence or absence of BDNF siRNA. We noted that BDNF knockdown significantly attenuated the miR-1 inhibitor-induced changes in SC proliferation and migration, suggesting that BDNF was a functional mediator of miR-1 in regulating the SC phenotype. Our previous study has reported that after sciatic nerve injury, the differentially expressed let-7 miRNAs regulate SC phenotype by directly targeting NGF and affect sciatic nerve regeneration 20 . In the current study, we showed that another neurotrophic factor, BDNF, could also be regulated endogenously by a miRNA molecule. These findings open up a bright prospect for developing a novel therapeutic strategy that bypasses the limitations of direct administration of exogenous neurotrophic factors and promotes peripheral nerve regeneration through miRNAs targeting the expression of endogenous neurotrophic factors. In summary, we identified that miR-1 was down-regulated at 4, 7, and 14 d following sciatic nerve injury, reaching a valley value at 7 d. The reduced expression of miR-1 increased the expression and secretion of BDNF, and promoted SC proliferation and migration. These data contribute to a better understanding of the biological processes during peripheral nerve regeneration, and provide a new approach to peripheral nerve repair.
2018-04-03T05:03:55.407Z
2016-07-06T00:00:00.000
{ "year": 2016, "sha1": "4cdccfb12477a199a92d33adc51d3ec27eaea6b7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep29121.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4cdccfb12477a199a92d33adc51d3ec27eaea6b7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
5910633
pes2o/s2orc
v3-fos-license
Investigating causality between interacting brain areas with multivariate autoregressive models of MEG sensor data Abstract: In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive models (MAR models) fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate, if a very large number of brain voxels is considered as potential activation sources, that PDC as a measure to reconstruct causal interactions is less accurate. In such a case the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and large computation times encountered during causality analysis by existing methods. These methods first project MEG sensor time-series onto a large number of brain locations, after which the MAR model is built on this large number of source-level time-series. Instead, through this work, we demonstrate that by building the MAR model at the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that by this methodology entire-brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp 34:890-913, 2013. © 2012 Wiley Periodicals, Inc.
INTRODUCTION The importance of studying interactions between specialized areas in the human brain has been increasingly recognized in recent years [Schnitzler and Gross, 2005a,b; Schnitzler et al., 2000; Schoffelen et al., 2005, 2008]. Magnetoencephalography (MEG) is particularly suited for connectivity studies as it combines a good spatial resolution with high temporal resolution. The high temporal resolution affords the investigation of transient coupling and is a prerequisite for studying frequency-dependent coupling. A large number of measures for the quantification of neural interactions have been introduced over the years. For these various measures it is customary to distinguish between functional and effective connectivity. Functional connectivity measures assess interactions by means of similarities between time series (e.g., correlation and coherence) or transformations of these time series (e.g., phase synchronization and amplitude correlation). In contrast, effective connectivity methods are used to study the causal effect of one brain area on another brain area. Besides the distinction between functional and effective connectivity, one has to be aware that connectivity analysis can be performed at the sensor level or the source level. In the first case, connectivity measures are evaluated on the time series recorded by MEG/EEG sensors. In the second case, connectivity measures are evaluated on time series that represent the activity of individual brain areas. Unfortunately, the interpretation of sensor connectivity results is difficult because of the complex and often diffuse sensitivity profiles of MEG/EEG sensors [Schoffelen and Gross, 2009]. Significant connectivity between (even distant) sensors cannot be easily assigned to underlying brain areas, may be spurious, and can be affected by power modulations of nearby or distant brain areas [Schoffelen and Gross, 2009]. These negative effects can be reduced (though not abolished) by performing connectivity analysis in source space. Most MEG/EEG source connectivity methods are based on functional connectivity measures such as coherence or phase synchronization [Gross et al., 2002; Hoechstetter et al., 2004; Jerbi et al., 2007; Lachaux et al., 1999; Lin et al., 2004; Pollok et al., 2004, 2005; Timmermann et al., 2003]. Effective connectivity in source space has been studied with dynamic causal modeling (DCM) [David et al., 2006; Kiebel et al., 2009] or Granger causality [Astolfi et al., 2005; Gómez-Herrero et al., 2008]. Here, we present and test a new efficient method for Granger causality analysis in source space. Granger causality is a concept from economics that quantifies the causal effect of one time series on another time series. Specifically, if the past of time series x improves the prediction of the future of time series y, then time series x is said to Granger-cause y. Classically, Granger causality is defined in the time domain, but a frequency-domain extension has been proposed [Geweke, 1982]. Granger causality has also been extended from its original pairwise form into a multivariate formulation in both the time and frequency domains, known as conditional Granger causality [Chen et al., 2006; Geweke, 1984]. This methodology is comparative in the sense that, in a multivariate system, if one investigates whether y is causing x, then a model of x based on every variable including y is compared with a model of x based on every variable excluding y.
In simple terms, if the inclusion of y significantly reduces the variance of the model of x as compared to the variance of the model of x when y is excluded, then y is assumed to cause x. Several other multivariate metrics derived from Granger causality have been suggested, such as partial directed coherence (PDC) [Baccalá and Sameshima, 2001] and the directed transfer function [Kaminski and Blinowska, 1991]. These metrics are estimated in the frequency domain and are thus frequency specific. One of their main differences with conditional Granger causality is that they are not comparative methods but are computed directly from the multivariate model built on all the variables in the system. Source space Granger causality analysis is typically performed in the following way. First, regions of interest (ROIs) are selected. Second, the activation time series are computed for all ROIs. Third, a multivariate autoregressive model is computed for these time series and measures of Granger causality are computed. The most significant drawback of this approach is that a large number of potential activation sources correspond to a large number of projected activation time-series. This is prohibitive for the derivation of numerically robust MAR models without the assumption of sparse connectivity [Haufe et al., 2010; McQuarrie and Tsai, 1998; Valdés-Sosa et al., 2005]. For example, dividing the brain volume into a regular 6 mm grid leads to roughly 10,000 voxels. In addition, Granger causality computation for a different set of ROIs requires time-consuming computations because Steps 2 and 3 in the procedure mentioned earlier need to be repeated. The computational complexity precludes a tomographic mapping of Granger causality. To bypass these limitations, we investigate an alternative approach, which entails the derivation of the MAR model directly on MEG sensor data and its projection into source space. In this method the modeling process is performed in sensor space, which has moderate dimensionality as compared to the high-dimensional source space. This leads to greater model robustness as well as significantly reduced computation times. The feasibility of a similar approach for EEG data has already been shown in [Gómez-Herrero et al., 2008], where the multivariate model was projected onto a small number of locations in source space identified by independent component analysis (ICA) of the residuals of the MAR model and localized by swLORETA [Palmero-Soler et al., 2007]. Causality was inferred using the directed transfer function (DTF) metric. In our work, we demonstrate the feasibility of the methodology when the MAR model is projected onto the entire brain volume without any a priori assumption or estimation of the activity locations. The main advantage of this approach is that all the voxels inside the brain volume can be investigated in terms of causality, something not practical with the traditional approach. This method also offers benefits in terms of data compression, as the elements that need to be projected are the coefficients, which are typically significantly fewer than the data points used to derive them and which would be projected in the traditional case. Another advantage is that the derivation of the MAR model in sensor space is much more robust, because of the moderate number of variables, than the derivation of the MAR model on time-series projected onto a very large number of brain locations.
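As a concrete illustration of the variance-comparison logic behind Granger causality described above, the following minimal bivariate Python sketch fits a restricted and a full least-squares AR model; the data, model order, and coefficients are synthetic and illustrative, not part of the paper's pipeline.

```python
import numpy as np

def ar_residual_var(target, lagged):
    """Least-squares AR fit of target on the lagged regressors; returns residual variance."""
    coef, *_ = np.linalg.lstsq(lagged, target, rcond=None)
    return (target - lagged @ coef).var()

def granger_variance_ratio(x, y, p=2):
    """log(var restricted / var full); > 0 suggests y Granger-causes x."""
    T = len(x)
    X_own  = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])   # past of x only
    X_full = np.column_stack([X_own] + [y[p - k:T - k] for k in range(1, p + 1)])
    target = x[p:]
    return np.log(ar_residual_var(target, X_own) / ar_residual_var(target, X_full))

rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(1, 5000):                      # x driven by the past of y
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.1 * rng.standard_normal()
print(granger_variance_ratio(x, y))           # clearly positive: y -> x
print(granger_variance_ratio(y, x))           # near zero: no x -> y
```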
Additionally, even if different ROIs are recursively selected to examine different network topologies in the brain, the sensor-space MAR model is always the same and the only thing that changes is the set of locations onto which the model is projected. In the traditional approach, one would each time have to project the sensor time-series onto the new set of brain locations and then build the MAR model again. Finally, due to the computational efficiency of this methodology, the application of statistical inference methods on entire-brain causality maps from MEG data is feasible. To infer causality, PDC and the coefficients of the MAR model themselves are used. Although conditional Granger causality would, in theory, be a more robust choice because of its intrinsic normalization, its computational load for a very large number of considered source locations makes its use problematic. In the traditional approach, if 10,000 voxels are considered, 10,001 multivariate models must be computed: one including all 10,000 projected voxel time-series, and 10,000 models, each one with one voxel time-series excluded. In the proposed methodology, where the model is built at the sensor level and only the coefficients are projected, implementing conditional Granger causality would again require building 10,001 models at the sensor level: one on the original sensor data and 10,000 models, each with the effect of one voxel removed through the derived inverse solution. Then the coefficients of these 10,001 models must be projected into source space. This imposes a heavy computational load. Also, due to the fact that in each of the 10,000 models the effect of one voxel is removed through the derived inverse solution, under the condition that the number of sensors is much smaller than the number of voxels, the projected activity will be diffused around the voxels of actual activity. This simply means that even if one voxel's activation effect is excluded from the sensor data, in the context of the computation of Geweke's measures, the causal pairing will be modeled by the effect of the neighboring voxels. Another issue regarding conditional Granger causality in the frequency domain is that it is based on the transfer function of the model, which is the inverse of the z-transform of the MVAR coefficients across model order. For each of the 10,000 models the size of this matrix is 9,999 × 9,999 (10,000 × 10,000 for the entire-brain model). Inversion of such a large matrix, given also the collinearities introduced by the projection through the inverse solution, can be very problematic and can lead to singular matrices. PDC has the implementational advantage that it is computed directly from the coefficients of one MAR model with all the variables included and that it does not require any inversion. Thus, only one model needs to be built at the sensor level, and after the coefficients are projected into source space, PDC can be efficiently computed for a wide range of frequencies. However, due to its semi-arbitrary normalization it can only confidently be used to compare causality between voxel pairs that share the same causal (transmitting) voxel [Baccalá and Sameshima, 2001]. Because of this drawback of PDC, the projected MAR model coefficients are also examined directly, without any normalization. The fact that no normalization is applied means that in this approach causality is not bounded.
Also, when a continuous linear system with linear coefficient matrix A is periodically sampled with sampling frequency f, the resulting discrete linear coefficients are approximated as e^(A·(1/f)). This means that the discrete coefficients change in amplitude according to the sampling frequency. Nevertheless, the aim of examining the MAR model coefficients is to examine whether, within the same dataset, the causal information is correctly represented in the MAR model coefficients when they are derived at the sensor level and then projected onto a very large number of voxels inside the brain. This examination of the coefficients is only performed in the time domain. This approach is used to identify areas inside the entire brain which are involved in causal interactions. These specific brain areas could then be separately examined with theoretically more robust causality metrics such as conditional Granger causality. First, our proposed approach is investigated theoretically. Subsequently, the method is validated by simulations where pseudo-MEG data with added noise, both uncorrelated and spatiotemporally correlated, is produced from simulated neural activity in a small number of predefined locations inside the brain with a specified causality structure. We show that the PDC reconstructed from the source projection of the MAR model coefficients is very similar to the PDC extracted from the simulated source signals directly. The second part of this work is concerned with the investigation of the causality information that can be derived when a very large number of voxels are considered as potential sources. First, the causality information recovered by PDC is investigated. Then the causality information recovered directly from the MAR model coefficients is investigated. The motivation for the latter comes from the fact that PDC, due to the way it is normalized, is very sensitive to the signal-to-noise ratio and may not be suitable for applications with very large numbers of voxels [Baccalá and Sameshima, 2006; Faes et al., 2010; Schelter et al., 2006, 2009]. Here the feasibility of using the model coefficients directly is investigated and it is demonstrated that causality information can be extracted more precisely than with PDC when a very large number of voxels is considered. Within this context, a preliminary evaluation of this methodology is performed with real data from a simple motor planning experiment. METHODS Among the many variants of source localization techniques, linear inverse solutions represent an important class of methods because of their efficient computation and numerical stability. These methods are typically used to perform the linear transformation of sensor time series into source space [Baillet et al., 2001; Gross and Ioannides, 1999; Hämäläinen, 1992]. However, these linear transformations can also be applied to other measures such as the cross-spectral density to perform tomographic mapping of power or coherence [Gross et al., 2001]. The method suggested here follows a similar logic. In a first step, a multivariate autoregressive model is computed for the recorded signals of all MEG sensors. The result is a very compact and efficient representation of the data as an N×N×P matrix (N: number of channels, P: model order). In a second step, the covariance of all channel combinations is computed and the coefficients of a spatial filter are computed for each volume element in the brain.
These coefficients are used in the third step to estimate the MAR model for volume elements in the brain. The new MAR model (that corresponds to brain areas and not MEG sensor signals) can be used to compute power, coherence, or causality measures (like PDC or DTF) for a given frequency. Interestingly, the computation of these measures given the MAR model is very fast. In summary, the method is based on the computation of a multivariate AR model using the sensor signals followed by the efficient transformation of the model from the sensor level to the brain areas. Once the model (represented by the model coefficients) is defined, a large number of measures can be computed that quantify coupling strength and direction of information flow between brain areas. This technique could possibly overcome most of the aforementioned limitations. It is very fast and efficient in terms of memory use and computational load. Because of the fast computation it is ideally suited for randomization techniques that can be used to establish significance levels for the results. Information about directionality is readily available for all volume elements and several partly complementary measures such as PDC and DTF can be easily computed and compared. Multivariate Modeling of MEG Data If s(t) is the column vector of all the activation signals in brain space at time t and x(t) is the column vector of the sensor measurements in sensor space at the same time t, then the following relationship holds: x(t) = K · s(t) (1), where K is the leadfield matrix or forward operator. In the same fashion the inverse projection can be described with the use of an inverse operator. The source signals inside the brain can be described as projections of the sensor signals as: s(t) = U · x(t) (2), where U is the inverse operator. Assuming that one could measure the activation time-series of an arbitrarily large number of potential activation sources, symbolized by s(t), a multivariate model built on them would have the form: s(t) = Σ_{τ=1}^{P} B(τ) · s(t − τ) + e(t) (3), where P is the model order across time lags, τ is the time lag, B(τ) is the model coefficient matrix for lag τ, and e is the model residual column vector. Combining Eqs. (1), (2), and (3) gives the following expression for the multivariate model in sensor space: x(t) = Σ_{τ=1}^{P} K · B(τ) · U · x(t − τ) + K · e(t) (4). In the above derivation it is assumed that UK = I. This is evident if Eq. (1) is used in Eq. (2) to derive s(t) = UK · s(t). This means that if a source activation signal is projected through its leadfield to sensor space and then back to source space through the inverse solution, the recovered signal should be the same as the original. Because of the non-uniqueness of the inverse solution the product UK deviates from the identity matrix. The form of this deviation depends on the inverse method used. For beamformers UK = I is satisfied for each voxel individually, as this is the constraint used for the inverse solution at each brain location. When the leadfields and spatial filters for all voxels are entered into the product UK, then the off-diagonal components deviate from 0. A similar deviation from the identity matrix also occurs with minimum norm solutions. However, as there is no unique solution to the inverse problem by these methods, it is assumed that by projecting the sensor data through the inverse solution, the original activation signals are recovered and not a distorted version of them (as there is no way to eliminate this distortion due to the non-uniqueness). This assumption is described by Eq. (2). By combining again Eqs. (1) and (2) to give s(t) = UK · s(t), it is seen that this assumption translates into the assumption UK = I. Because of the typical spatial proximity of MEG sensors and the structure in the leadfield operator, it is expected that there will be collinearity between different sensor time-series, and a model derived directly on these time-series would provide a poor solution. This can be avoided by applying principal component analysis (PCA) [Jolliffe, 2002] to the time-series and additionally selecting only the principal components corresponding to the largest eigenvalues (typically those that explain 99% of the variance). Then the MAR model is built on these components. In this way collinearity is largely reduced and components representing mainly noise are omitted. The projection from sensor space to the principal component space is performed through the matrix of selected feature vectors [Jolliffe, 2002] for the principal components as: x_PCA(t) = V · x(t) (5), where V is the matrix of feature vectors mapping from the original recorded time-series to the reduced PCA space. The number of significant components, explaining 99% of the data variance, is lower than the number of sensors due to the presence of noise.
(1) and (2) to give sðtÞ ¼ UK Á sðtÞ, it is seen that this assumption is translated in the assumption UK ¼ I. Because of the typical spatial proximity of MEG sensors and to the structure in the leadfield operator, it is expected that there will be colinearity between different sensor time-series and a model derived directly on these time-series would provide a poor solution. This can be avoided by applying principal component analysis (PCA) [Jolliffe, 2002] to the time-series, and additionally selecting only the principal components corresponding to the largest eigenvalues (typically those that explain 99% of the variance). Then the MAR model is built on these components. In this way colinearity is largely reduced and components representing mainly noise are omitted. The projection from sensor space to the principal component space is performed through the matrix of selected feature vectors [Jolliffe, 2002] for the principal components as: where V is the matrix of feature vectors mapping from the original recorded time-series to the reduced PCA-space. The number of significant components, explaining the 99% of data variance, is lower than the number of sensors due to the presence of noise. Assuming that the excluded components represent noise, and assuming that the Moore-Penrose pseudo-inverse of V exists, the following assumptions can be made: where 1 denotes the Moore-Penrose pseudoinverse.Consequently: Then combining the model in Eqs. (4) and (5) gives the MAR model in principal component space: If a MAR model of the form: AðsÞ Á x PCA ðt À sÞ þ gðtÞ (10) is developed directly on the principal components of the MEG sensor measured time-series, it is evident from Eq. (9) that for each time lag s up to the model order, the MAR coefficients B(s) in source space can be estimated through: and the MAR model residual time-series in source space can be estimated through: For completeness, the data and noise covariance in source space can be derived from the ones in principal component space through: where C s ,N s are the data and noise covariance in source space, respectively and C PCA ,N PCA in principal component space, respectively. As the projection of the multivariate coefficients is performed through the use of the spatial filters and the leadfield matrices, it is important to mention how the inverse solution affects the projection. If we denote the model coefficients matrix at the sensor level for lag s as C(s) then from Eq. (11) it can be seen that: and If we consider just the coefficient from source j to source k inside brain then: where u kq is the spatial filter weight from sensor q to source k, k rj is the leadfield weight from source j to sensor r, c qr (s) is the multivariate model coefficient from sensor r to sensor q for lag s, N denotes number of sensors, and r, q denotes sensor index with values 1 to N. In this representation it can be seen that the terms that dominate the sum are the ones in which all three componentsu kq ; c qr ðsÞ; k rj have significant values. These three terms can be depicted in Figure 1 where three activated sources i, j, and k are shown c qr is the MAR coefficient from sensor r to sensor q. According to the dipoles' orientation of the actual activated brain sources that have a causal relationship, causality should be also present between the sets of sensors Visualization of the effect of mixing during the projection of coefficients from sensor space to source space. 
u_kq depends only on the caused source k and k_rj depends only on the causal source j. The double sum in Eq. (17) iterates through all possible sensor pairs. In Figure 2 are shown the histograms of the leadfield and spatial filter weights to and from all sensors for six simulated active sources and six inactive sources inside the brain. The leadfields and the spatial filters have been combined with the estimated dipole orientation from the inverse solution, which was derived through an LCMV beamformer. From Figure 2 it can be seen that the leadfield and spatial filter weights for active sources have distributions with heavier tails than those for the inactive sources. The significantly higher positive and negative values at the tails correspond to the sensors that are located in the vicinity of the local maxima and minima of the magnetic fields produced by the activated sources. In the case of two sources with an actual causal relationship, like sources j and k in Figure 1, if it is assumed that sensor r is located at the local maximum of the magnetic field produced by source j and sensor q is located at the local maximum of the magnetic field produced by source k, then the coefficient c_qr(τ) of the MAR model at the sensor level will be significant, representing the underlying causal relationship. In this case, the product u_kq · c_qr(τ) · k_rj will attain a high value. The further away sensors r and q are from the local maxima (or minima) of the actual activation dipoles, the lower will be the above product. Consequently, the sum in Eq. (17) is dominated by the factors that correspond to the sensors located in the vicinity of the local maxima and minima of the actual activated dipoles, which represent the true causality at the sensor level. From Figure 2 it is also evident that the spatial filter weights have a much wider distribution away from 0 for the active sources, as compared to inactive sources, than in the case of the leadfield weights. This means that, for sensors away from the local maxima and minima, while the leadfield weights decrease sharply, the spatial filter weights decrease more smoothly. This can result in mixing of causality information. In the case that there is another activated source with no causal interaction with the other activated sources, like source i in Figure 1, then mixing of causal information can result if the spatial filter weights for source i have a wide distribution. (Figure 2: Histograms of leadfield and spatial filter weights to and from all the MEG sensors, respectively, for six activated and six non-activated sources. For the activated sources the leadfield and especially the spatial filter weights have a much wider distribution than for the non-activated sources. The values at the tails correspond to sensors in the vicinity of the local maxima and minima of the magnetic field generated by the activated dipole.)
As discussed earlier, the product c_qr(τ) · k_rj will have a high value due to the underlying true causality between sources j and k. If u_iq also has a value different from zero, then it is evident that a portion of the true causal relationship between sources j and k will be erroneously projected also between sources j and i. This mixing affects mostly activated sources for which the spatial filter weights have wide distributions. As non-activated sources have a much narrower distribution of spatial filter weights, causality information is less likely to be represented in non-activated areas. As mixing depends mostly on the distribution of the spatial filter weights, the inverse solution used for deriving them plays a central role in the extent and pattern of mixing. Beamformers (LCMV, DICS) cannot separate highly correlated sources (e.g., auditory activations) and tend to represent two such sources with one source located in between. In this case, causality information would be projected onto erroneous, non-activated locations. Another factor that affects beamformers is the use of a regularization parameter. This regularization parameter is used to make the inverse solution spatially wider so that actual activated sources situated between grid points will not be missed. The use of high regularization values creates spatial filter weights with wide distributions and thus the level of causality information mixing will be higher. In the case of minimum norm solutions, the main disadvantage is that they tend to assign observed activity to cortical areas, because these are closer to the sensors and thus fit the minimum norm constraint better. This has the consequence that even inactive cortical voxels attain spatial filter weights with wider distributions than in the case of beamformers. Consequently, in order to minimize the effect of mixing, in this work an LCMV beamformer is used with no regularization. When the entire brain is considered with no a priori selection of brain locations, a fine grid of 6 mm resolution is used (8,942 voxels). Using the above formulation, in this work we investigate the feasibility of building the MAR model in the principal component space of the sensor data and projecting it into source space, as described earlier, where causality is estimated from the model coefficients. Here we quantify causality by means of PDC. Partial Directed Coherence Partial directed coherence (PDC) is a metric aimed at identifying causal relationships between signals at different frequencies in a multivariate system [Baccalá and Sameshima, 2001]. It belongs to a family of methods that analyze the coefficients of MAR models. Another widely used metric is the DTF [Kaminski et al., 2001], but it has been shown [Astolfi et al., 2005] that PDC is superior to DTF in correctly identifying both direct and indirect causal pathways. The PDC from voxel j to voxel i at frequency f is given by π_ij(f) = |Ā_ij(f)| / sqrt(ā_j(f)^H · ā_j(f)) (18), where Ā_ij(f) = δ_ij − Σ_{τ=1}^{P} a_ij(τ) · e^(−i2πfτ/F_s) (19) are the elements of matrix Ā(f), f is the frequency, F_s is the sampling frequency, τ is the model time lag, δ_ij is the Kronecker delta, and H denotes the Hermitian transpose. As can be seen from Eq. (19), for a given voxel pair i−j and frequency, the element Ā_ij(f) is in effect the z-transform across time lags of the MAR coefficient series a_ij(τ), modeling the effect of voxel j on voxel i. The vectors ā_j contain the elements of the jth column of matrix Ā(f), that is, all elements Ā_ij(f) from voxel j to all voxels i ∈ N. Consequently, it is straightforward to see that the denominator in Eq. (18) is the norm of vector ā_j.
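For reference, Eqs. (18)-(19) reduce to a few lines of linear algebra; the sketch below computes the full PDC matrix at one frequency from MAR coefficients (the array shapes are this sketch's assumptions, not prescribed by the paper).

```python
import numpy as np

def pdc(A, f, fs):
    """Partial directed coherence from MAR coefficients A with shape (P, N, N).
    pdc[i, j] = |Abar_ij(f)| / ||j-th column of Abar(f)||  (Baccala & Sameshima, 2001)."""
    P, N, _ = A.shape
    taus = np.arange(1, P + 1)
    z = np.exp(-2j * np.pi * f * taus / fs)              # z-transform kernel per lag
    Abar = np.eye(N, dtype=complex) - np.tensordot(z, A, axes=(0, 0))
    return np.abs(Abar) / np.linalg.norm(Abar, axis=0)   # column-wise (sender) normalization

# Example usage with a fitted model `A` at fs = 100 Hz, swept over 1-50 Hz:
# spectrum = np.array([pdc(A, f, fs=100.0) for f in range(1, 51)])
```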
Thus, PDC is normalized with respect to the transmitting voxel. This means that PDC values between different voxel pairs are comparable only for the same transmitting voxel, and comparison between pairs with different transmitting voxels is not feasible. It is also evident that if the activity between two voxels is highly correlated, then the norm will be dominated by the PDC of this pair, and the PDC of all other pairs will be down-weighted. It is also natural to infer that if, because of a poor signal-to-noise ratio, the MAR coefficients contain modeled noise, then it is highly probable that this will dominate the PDC and real causal pairs will be down-weighted. Finally, the norm depends on the number of voxels; for a very large number of voxels the normalization factor increases, and consequently the PDC values decrease to low levels and become more sensitive to erroneous or random correlations. Investigation 1: Small Number of Voxels with Known Causality Structure As a first step to evaluate the feasibility of building the MAR model on the principal component space of the sensor data and then projecting it into source space where causality is inferred, the following process has been used. Six simulated signals with predefined causality were generated to represent the activity of six sources inside a typical brain volume segmented into a 6 mm grid (8,942 voxels in the brain volume). The 'Ideal' PDC was computed directly from the simulated signals. Through the forward solution, pseudo-MEG sensor time-series were derived. Noise was added to the sensor data. A MAR model was built on the principal component space of the pseudo-MEG sensor data. Spatial filters and dipole orientations were estimated by a linearly-constrained minimum variance (LCMV) beamformer for the six known locations of dipole activity. The MAR model coefficients were then projected onto the six known locations. PDC was computed from the coefficients of the projected MAR model and compared to the 'Ideal' PDC. The tolerance of the method with respect to white Gaussian sensor noise, model order, and number of samples is investigated. Additionally, the tolerance of the method with respect to spatiotemporally correlated background noise is investigated. Confidence intervals of PDC are examined by a jackknife method. Simulated brain dipole data. MEG sensor data was simulated. Six activation signals inside the brain volume were generated from MAR equations approximating damped oscillators [Baccalá and Sameshima, 2001], where the driving terms w_n(t) are zero-mean uncorrelated white Gaussian noise processes with identical variance. The causality structure between the simulated source signals is shown in Figure 3. The activation signals were designed with a nominal frequency of 8 Hz (alpha band) (Fig. 4) and the time courses were sampled with a sampling frequency of 100 Hz. Time-series were organized in 20 trials, with each trial having a duration of 1 s. Source 6 has no causal interaction with the other sources and has the highest power. This source was chosen in order to investigate whether a strong source unconnected to the rest of the network would significantly affect the estimation of causality. The locations of the six dipoles were defined with respect to the head coordinate system (x-axis pointing to the nasion, y-axis pointing to the left preauricular point, and z-axis pointing up) and can be seen in Figure 5.
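A simulation in this spirit can be sketched as follows; the AR(2) damped-oscillator coefficients and the three-source network below are illustrative stand-ins, since the paper's exact six-source equations and leadfields are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
T, fs = 2000, 100.0

# Illustrative 3-source MAR(2) system of damped oscillators at 8 Hz
# (in the spirit of Baccala & Sameshima [2001]; NOT the paper's exact system)
s = np.zeros((3, T))
w = rng.standard_normal((3, T))                 # uncorrelated white Gaussian drives
r, f0 = 0.95, 8.0                               # pole radius (damping) and frequency (Hz)
a1, a2 = 2 * r * np.cos(2 * np.pi * f0 / fs), -r ** 2   # damped-oscillator AR(2) terms
for t in range(2, T):
    s[0, t] = a1 * s[0, t - 1] + a2 * s[0, t - 2] + w[0, t]
    s[1, t] = a1 * s[1, t - 1] + a2 * s[1, t - 2] + 0.5 * s[0, t - 1] + w[1, t]  # 1 -> 2
    s[2, t] = a1 * s[2, t - 1] + a2 * s[2, t - 2] + 0.5 * s[0, t - 2] + w[2, t]  # 1 -> 3

# Pseudo-MEG sensors: project through stand-in leadfields and add scaled sensor noise
K = rng.standard_normal((30, 3))                # hypothetical leadfield matrix
x_clean = K @ s
noise_scale = 1.0                               # relative to rms of the clean sensor signal
x = x_clean + noise_scale * np.sqrt((x_clean ** 2).mean()) \
    * rng.standard_normal(x_clean.shape)
```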
The orientations of the dipoles were chosen to be random unit vectors, orthogonal to the line connecting the center of mass of the brain volume to each activation point. For reference, the locations and orientations of the six dipoles are given in Table I. The overall magnetic field generated by these brain sources was simulated by multiplying the simulated activation signals with their corresponding leadfields, and white Gaussian noise representing noisy environment processes was added. The simulation of the activation sources and the computation of the corresponding magnetic field were performed in MATLAB with the use of the Fieldtrip toolbox [Oostenveld et al., 2010].

PDC deviation from 'Ideal' with respect to environment noise, N of samples and model order

We investigated the deviation of PDC from the 'ideal' for different values of the environment noise level, the number of samples per trial and the MAR model order. For the environment noise investigation, spatially and temporally uncorrelated white Gaussian noise was added to the simulated MEG data. The amplitude of the noise was defined relative to the rms value of the MEG sensor pseudo-measurements resulting only from the simulated activation signals. The investigated range was 1-20 times this rms value. This type of noise was chosen because it is the simplest type and usually represents sensor noise; it is suitable for investigating the effect of noise that is spatially and temporally uncorrelated. In section ''PDC deviation from 'Ideal' for spatiotemporally correlated noise'' the same investigation is repeated for spatiotemporally correlated noise. For the investigation of the number of samples per trial, 20 trials were used. The investigated range was 500-4,000 samples/trial. For the investigation of the MAR model order, the number of time lags used in the model was varied. The investigated range was 5-100 lags. The objective here was to investigate the average deviation of PDC from the 'Ideal' PDC for the above parameters, around the frequency of the activation signals (8 Hz), separately for causal and noncausal pairs. For this purpose two metrics were used. The first metric is defined as the mean of (PDC of projected MAR model − 'Ideal' PDC) over the frequency range 7-12 Hz, averaged over the causal pairs 2-1, 3-1, 4-1, 5-4, and 4-5. The second metric is the same as the first but for the noncausal pairs.

PDC deviation from 'Ideal' for spatiotemporally correlated noise

In the previous investigation, the noise added at the sensor level was white Gaussian. Although this investigation is valuable for examining the effect of different levels of uncorrelated noise on the recovery of causal information, in reality it mostly represents MEG sensor noise. If one wants to represent realistic brain background noise, emanating from within the brain, it is more realistic to model the noise as spatially and temporally correlated [Bijma et al., 2003; de Munck et al., 2002; Jun et al., 2002; Lütkenhöner, 1994]. We have evaluated the performance of PDC for different rms levels of spatiotemporally correlated noise when the activity locations are known. For creating the spatial noise correlation, an approach similar to Lütkenhöner [1994] and Jun et al. [2002] was followed: 2,184 dipole locations were selected, uniformly distributed within the brain volume, and at each of these locations a dipole with random orientation was actuated. For creating the temporal noise correlation, an approach similar to Bijma et al. [2003] and Nolte et al. [2008] was followed.
Each of the 2,184 noise dipoles was activated by a pink noise signal, because it is well known that background brain noise is not white but has a 1/f characteristic [de Munck et al., 2002]. Each pink noise signal was derived by passing a white Gaussian signal through a third-order low-pass filter designed with a 1/f frequency spectrum characteristic and a cut-off frequency of 15 Hz, providing most of the spectral power in the alpha range [Bijma et al., 2003]. The designed filter has the following autoregressive representation:

y(t) = 2.5 y(t−1) − 2.02 y(t−2) + 0.52 y(t−3) + 0.05 x(t) − 0.1 x(t−1) + 0.05 x(t−2) − 0.005 x(t−3)

where y(t) is the output of the filter (temporally correlated pink noise) and x(t) is the input (white Gaussian noise). As can be seen from this equation, the autocorrelation of the resulting pink noise extends three time lags into the past. Through this process the output noise signal has both a 1/f spectrum characteristic and a temporal autoregression extending three time lags into the past. The 2,184 noise dipole time-series were not temporally cross-correlated, similarly to Nolte et al. [2008]. As the pink noise signal is derived from random white Gaussian noise, the phase of the temporal correlation is also randomized [Bijma et al., 2003]. The rms level of actuation was selected to be the same for all sources. The pseudo-MEG sensor measurements were derived by projecting all 2,184 noise dipole time-series through the leadfield matrices. The instantaneous spatial correlation and the lagged temporal correlation of each sensor are highest with the neighboring sensors and diminish with distance from the sensor. The resulting noise at the sensor level was adjusted in magnitude in order to investigate noise levels of 1-20 times the rms value of the sensor time-series produced by the six actual activation sources, similarly to the evaluation for white Gaussian noise in section ''PDC deviation from 'Ideal' with respect to environment noise, N of samples and model order''. The evaluation was also performed with the same metrics, i.e., the mean deviation of PDC from the 'ideal' PDC for the causal and the noncausal pairs separately.
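The filter above translates directly into standard difference-equation coefficients; a minimal sketch of the noise-dipole generation follows (the burn-in length and the per-series rms normalization are assumptions, not stated in the paper):

```python
import numpy as np
from scipy.signal import lfilter

def pink_noise(n_samples, rng, burn=200):
    """1/f ('pink') noise via the third-order AR filter given above:
    y(t) = 2.5y(t-1) - 2.02y(t-2) + 0.52y(t-3)
           + 0.05x(t) - 0.1x(t-1) + 0.05x(t-2) - 0.005x(t-3)."""
    b = [0.05, -0.1, 0.05, -0.005]       # input (x) coefficients
    a = [1.0, -2.5, 2.02, -0.52]         # output (y) coefficients, a[0] = 1
    x = rng.standard_normal(n_samples + burn)   # white Gaussian input
    return lfilter(b, a, x)[burn:]              # drop the filter transient

rng = np.random.default_rng(0)
noise = np.stack([pink_noise(2000, rng) for _ in range(2184)])  # one series per noise dipole
noise /= noise.std(axis=1, keepdims=True)       # equal rms actuation for all noise dipoles
```

Each row would then be multiplied by the leadfield of its randomly oriented dipole and summed over dipoles to obtain the spatially and temporally correlated sensor noise.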
Statistical inference of PDC

As PDC depends on the spectrum of the estimated MAR model coefficients for each pair, which are random for signals containing no causality, it is instructive for it to be accompanied by statistical inference of significance. The statistical significance of the PDC causality results was assessed through a jackknife method, the trial-based leave-one-out method (LOOM) [Schlögl and Supp, 2006]. One trial is excluded from the sensor data set. The data from all remaining trials are then concatenated and the MAR model is built. The MAR coefficients are then projected through the inverse solution into the six known simulated source locations inside the brain volume, and PDC is computed for the frequency range 1-50 Hz. Then the next trial is excluded from the sensor data set and the procedure is repeated, until each of the trials has been left out once. The LOOM approach provides two main advantages [Schlögl and Supp, 2006]. Firstly, LOOM obtains the least-biased estimates of all resampling methods. Secondly, no a priori assumption regarding the type of distribution is needed. Through the LOOM procedure, a sampling distribution N(μ_π, σ_π²) is obtained for the PDC at each of the frequency bins considered, with mean μ_π and standard deviation σ_π. However, the standard deviation is not derived from N independent trials: only the (N−1)th part of each concatenated data vector was independent (one out of N−1 trials). Consequently, the true standard error is σ_π / √(N−1) [Schlögl and Supp, 2006]. The mean and the standard error are used in a simple t-test for testing whether the PDC at a specific frequency bin is significantly different from zero or not. From this t-test, the 95% confidence limits of the mean can be computed according to:

μ_π ± t(p/2, N−1) · σ_π / √(N−1)

where p is the significance level, N is the number of trials, and t(p/2, N−1) is the upper critical value of the t-distribution with N−1 degrees of freedom at significance level p.

Table I. Location inside brain volume and orientation of the 6 simulated activation sources.

These confidence limits have been calculated for each of the 36 investigated voxel pairs and for each integer frequency in the range 1-50 Hz.
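A minimal sketch of the confidence-limit computation from a stack of LOOM estimates (array layout assumed for illustration):

```python
import numpy as np
from scipy.stats import t as t_dist

def loom_ci(pdc_loom, alpha=0.05):
    """95% confidence limits for PDC from leave-one-out (LOOM) estimates.

    pdc_loom : array (N_trials, n_freqs, N, N), one PDC estimate per left-out trial
    Returns (lower, upper) arrays with the trailing shape of pdc_loom.
    """
    n = pdc_loom.shape[0]
    mu = pdc_loom.mean(axis=0)
    # only 1/(N-1) of each concatenated data set is independent,
    # so the true standard error is sigma / sqrt(N - 1)
    se = pdc_loom.std(axis=0, ddof=1) / np.sqrt(n - 1)
    tcrit = t_dist.ppf(1 - alpha / 2, df=n - 1)
    return mu - tcrit * se, mu + tcrit * se
```

A pair/frequency is then declared significant when its lower confidence limit lies above zero, which is exactly the criterion used to read Figure 9.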
Investigation 1: Results

The 'Ideal' reference PDC was calculated directly from the six simulated activation signals. It is shown in Figure 6a for all possible pair combinations between signals. The pairs that show distinct PDC are in agreement with the expected causal pairs from the configuration of the simulated activation signals. These pairs are: 2-1, 3-1, 4-1, 5-4, and 4-5. Then the PDC was calculated from the MEG sensor data. PCA was applied to the sensor data. Twenty-one of the components explained 99% of the variance, so the remaining principal components were discarded. The MAR model was built on the principal components using the Yule-Walker method [Schlögl and Supp, 2006]. Through Akaike's criterion [McQuarrie and Tsai, 1998] the model order was selected as six. After the MAR model was built on the principal components of the MEG sensor time-series, it was projected onto the six dipole locations with the spatial filters and orientations estimated by an LCMV beamformer for the precisely known locations of the activated dipoles. The spatial filters and the dipole orientations were computed in MATLAB with the use of the Fieldtrip toolbox [Oostenveld et al., 2010]. The derived PDC is shown in Figure 6b. The PDC calculated from the projected MAR model succeeds in approximating the 'ideal' reference PDC from the activation signals. This means that a MAR model built from MEG data in the sensor space and projected into the source space preserves causality information about the underlying generating activation processes. Examination of the levels of PDC shows that it is distinguishably high for the causal pairs when compared to all the remaining noncausal pairs. Maxima occur around 8 Hz, the nominal frequency of the activation signals. Subsequently, we examined the average deviation of the reconstructed PDC from the 'ideal' with respect to environment noise, number of samples, and model order, averaged across the causal and noncausal source pairs. In Figure 7 the two metrics described in section ''PDC deviation from 'Ideal' with respect to environment noise, N of samples and model order'' are shown in a comprehensive way, describing the variation of all three investigated parameters. Each subplot presents the two metrics for a particular combination of N samples/trial, environment noise level, and MAR model order. Up to noise levels four times the strength of the dipoles, the deviation of the PDC for the causal pairs remains low. In the presence of higher noise, PDC deviates significantly from the ideal PDC. When a small number of samples is used, combined with a high model order, PDC for both causal and noncausal pairs is inconsistent relative to the ideal. In such cases, one cannot distinguish between causal and noncausal pairs. The number of coefficients in the MAR model is p · N · N, where p is the model order and N is the number of variables in the model. When a small number of samples per trial is combined with a high model order, the number of data points per trial is only a modest multiple of the number of estimated parameters. As seen in Figure 7, in such cases the performance of PDC degrades severely.

Figure 7. Average deviation of PDC from 'Ideal' for causal and noncausal pairs. Causal pairs are represented in green, noncausal pairs in red. In each subplot PDC is plotted versus N samples/trial; each subplot corresponds to a different combination of environment noise level and MAR model order. For low N samples/trial and high MAR model order (top right), PDC fails to capture causality correctly and noncausal pairs appear to have significant causality. For higher N samples/trial and lower model order (bottom left), PDC recovers information much more consistently and noncausal pairs do not appear to have significant causality. In such cases it can be seen that for noise levels below five times the rms value of the brain signals, the average deviation of both causal and noncausal pairs from the 'Ideal' PDC remains low; for noise levels above five, the average deviation of the causal pairs seems to systematically increase. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Figure 8. Average deviation of PDC from 'Ideal' for causal and noncausal pairs for different levels of spatiotemporally correlated noise added at the sensor level. The same deviation is shown for the same levels of white Gaussian noise for comparison. As the spatiotemporally correlated noise level increases, the noncausal pairs falsely appear to have significant causality. Up to a noise level four times the rms of the actual brain signal, deviations from ideal remain low. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

The next investigation was the average deviation of the reconstructed PDC from the 'ideal' for different levels of spatiotemporally correlated noise, as described in section ''PDC deviation from 'Ideal' for spatiotemporally correlated noise''. Following the previous investigation with white Gaussian noise, a robust combination of number of samples per trial and model order was chosen, specifically 2,000 samples per trial and model order 5. The results are shown in Figure 8, where the same evaluation for the white Gaussian noise case is also shown for comparison. In the case of the spatiotemporal noise, the causal pairs seem to be more robust to higher noise levels, as the average deviation from the ideal is lower compared to the white Gaussian noise. For the noncausal pairs, the average deviation from ideal increases consistently with noise level. This is different from the white Gaussian noise case, where the average deviation seemed to stabilize around a certain level after a noise level of 8. These observations show that, because of the spatial and temporal correlation of the noise, PDC for the causal pairs has a tendency toward a higher rate of correct detection and, for the noncausal pairs, a higher rate of false detections. These results are in agreement with Nolte et al. [2008], where Granger causality appeared to behave in the same way for high noise levels.
Another observation from these results is that up to a noise level four times the rms value of the actual brain signal, the mean deviation of PDC from the 'ideal' remains low and is similar for white Gaussian and spatiotemporally correlated noise. Finally, the confidence intervals for PDC were evaluated by the LOOM method, as described in section ''Statistical inference of PDC''. As can be seen in Figure 9, the 95% PDC confidence limits for the pairs that have an actual causal relationship are significantly different from 0, especially in the lower frequency range, where most of the spectral power of the simulated signal is contained. For the pairs that have no actual causal relationship, the confidence intervals systematically encompass 0. These results show that the PDC computed from the MAR coefficients projected into the six known source locations is robust and consistent with the actual simulated causality configuration. If the locations of the actual activated sources are known, then PDC can provide a consistent representation of the causal interactions within the network of these sources.

Investigation 2: Consideration of All Voxels in Brain Volume as Potential Activation Sources

Investigation 1 showed that if the actual activation locations inside the brain are known, then the methodology of building the MAR model in the principal component space of sensor data and projecting it into the source space correctly reconstructs the causality structure by PDC. However, although this scenario serves as a good demonstration of the theoretical feasibility of the methodology, it is unrealistic, as in real experiments the number and locations of activated brain sources are not known and have to be identified. To evaluate the methodology in such a realistic scenario, an approach similar to Investigation 1 was followed, as described in section ''Investigation 1: Small Number of Voxels with Known Causality Structure''. The noise used in this investigation was spatiotemporally correlated noise, as described in section ''PDC deviation from 'Ideal' for spatiotemporally correlated noise'', scaled to two times the rms value of the pseudo-MEG measurements from the simulated dipoles. The main difference in this investigation is that the MAR model coefficients were projected into all voxels inside the brain volume through the beamformer spatial filters (8,942 locations). Then PDC was computed from the coefficients of the projected MAR model. In Investigation 1, as only six voxels were considered, it was easy to visualize the PDC results for all pairs and frequencies simultaneously. Here, with 8,942 voxels, it is impossible to visualize all combinations for all frequencies simultaneously. As it is known that the 'Ideal' PDC has a peak around 8 Hz, PDC was visualized only for this frequency. The method of visualization chosen was a sliced topological plot of PDC maps. Four different types of PDC maps were constructed (a sketch of how such maps are assembled from the voxel-pair PDC matrix is given after this list): first, the map of PDC to all voxels from each of the known activity sources (receive direction); second, the map of the mean PDC received by each voxel from all voxels (receive direction); third, the map of PDC from all voxels to each of the known activity sources (transmit direction); and fourth, the map of the mean PDC transmitted by each voxel to all voxels.
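A minimal sketch of how the four map types reduce to slices and means of the voxel-pair PDC matrix (the convention P[i, j] = PDC from j to i is assumed for illustration):

```python
import numpy as np

def pdc_maps(P, seeds):
    """Assemble the four PDC map types from a voxel-pair PDC matrix.

    P     : array (N, N), P[i, j] = PDC from voxel j to voxel i at the
            frequency of interest (e.g., 8 Hz)
    seeds : indices of known (or hypothesized) activity sources
    """
    from_seed = {j: P[:, j] for j in seeds}   # type 1: PDC caused by each seed
    mean_received = P.mean(axis=1)            # type 2: mean PDC received by each voxel
    to_seed = {i: P[i, :] for i in seeds}     # type 3: PDC caused to each seed
    mean_transmitted = P.mean(axis=0)         # type 4: mean PDC transmitted by each voxel
    return from_seed, mean_received, to_seed, mean_transmitted
```

Types 1-2 slice and average over transmitters for each receiver, types 3-4 over receivers for each transmitter; the normalization argument below explains why only the first pair is interpretable for PDC.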
The first case was investigated in order to see whether causality information for each of the known simulated sources is preserved when the MAR model is projected into the entire brain volume. Because in this case we still know the locations of actual activity, the second case was investigated as a potential way of identifying causal areas inside the brain without any a priori knowledge about potential source locations. This follows naturally from the fact that PDC is normalized relative to the causal voxel. On the contrary, averaging the PDC in the causal direction is not a feasible method, because of this semiarbitrary normalization. To demonstrate this fact, cases 3 and 4 have been used. All four cases represent maps of caused and causal activity in the entire brain, either from or to a seed or as an average. These maps can provide a very useful initial view of the causality patterns within the entire brain. Such maps of causality have already been used with fMRI data [Roebroeck et al., 2005]. Deriving such maps also from MEG data with a high-resolution scan grid offers the advantage that causality maps for the same phenomenon can be derived and compared between these two modalities for the entire brain. The causality recovered by fMRI is typically on a timescale of seconds, while the causality recovered by MEG is on a timescale of milliseconds. Combining causality on these two different timescales can prove very useful in understanding how low-frequency causal networks modulate high-frequency causal networks and vice versa.

Investigation 2: Results

For a clearer interpretation of Figures 10 and 11, one should first become familiar with the topology of the simulated sources inside the brain, shown in Figure 5, and with the causality configuration of those signals, shown in Figure 3. In Figure 10 the following are presented. Subfigures (a-f) present the PDC maps from the known sources 1-6 to all voxels. Subfigure (g) presents the mean PDC caused to each voxel. Subfigure (h) presents the actual locations of the simulated sources as spheres of 1.5 cm radius, for ease of reference. Subfigure (i) presents the original causal configuration between the six sources. The position of each source coarsely represents its expected location in the topographic maps. Depth information is encoded in the radius of each source, with a bigger radius corresponding to sources closer to the top of the head. This diagram has been included in order to assist the reader in interpreting the PDC maps. From these plots the following can be observed. Source 1 in reality causes sources 2, 3, and 4, and itself through autoregression. This is well represented in Figure 10a, where source 5 also appears to be caused by 1. Sources 2 and 3 in reality do not cause any activity in any voxel. However, in Figure 10b,c PDC appears to be prominent to voxels 1, 2, 3, and 4 in both cases. It seems that the PDC from sources 2 and 3 is a ghost of the PDC from source 1, which causes these two sources. The PDC map values for these two sources remain significantly lower than for all other voxels.

Figure 9. Ninety-five percent confidence limits of PDC derived by the LOOM method. PDC has been computed by projecting the sensor-level MAR model into the six known locations of the simulated brain sources. The causal and noncausal pairs can be confidently distinguished.
In Figure 10d, source 4 appears to be causing source 5 and itself, which is in accordance with the real causality. In Figure 10e, source 5 appears to be causing source 4 and itself, which is in accordance with the real causality. In Figure 10f, source 6 appears to be caused only by itself, which is also in accordance with reality. An important observation is that in Figure 10a-e, for sources 1-5, source 6 does not appear to interfere in the recovered causality maps. To summarize: firstly, PDC seems to indicate the correct areas where real causal connections exist; secondly, PDC does not seem to always correctly reconstruct the individual causal pathways, and ghosts of real connections appear in other voxels of the causal network. Ghost connections do not appear in locations without activity. Averaging all the maps of PDC from each voxel to all other voxels provides a map highlighting all areas that are, on average, caused. This map is presented in Figure 10g. The sources that are actually caused by other sources are 2, 3, 4, and 5; sources 1 and 6 are autocorrelated. All these sources appear on this map, which could serve as an initial indicator of the areas that are involved in a causal functional network. Local maxima or cluster centers could then be selected, at which the MAR model would be projected and PDC (or any other causality metric) would be recalculated only for these points. In Figure 11 the following are presented. Subfigures (a-f) present the PDC maps from all voxels to the known sources 1-6. Subfigure (g) presents the mean PDC caused by each voxel. Subfigure (h) presents the actual locations of the simulated sources as spheres of 1.5 cm radius, for ease of reference. Subfigure (i) presents the original causal configuration between the six sources. The position of each source coarsely represents its expected location in the topographic maps. Following the analysis in the same fashion as before, it can be seen in this case, as expected, that PDC from all voxels to a single voxel cannot serve as a useful causality map, because PDC is normalized with reference to the causal voxel, so each PDC value from every voxel to a single voxel has been differently normalized. This is evident in the plots, where the causality maps fail to resemble the real causality connections. This is a very significant drawback of using PDC when the entire brain volume is considered as potential activity sources. Using the map of mean PDC received by each voxel, one can create a map representing the areas that are in general caused, in which causal areas might also appear due to mixing. However, a similar map of causal areas cannot be constructed based on PDC.

Investigation 3: Using the MAR Model Coefficients Directly for Causality Identification

In Investigation 2 it was seen that, when a very large number of voxels is considered as potential source locations, PDC can provide a map of causal areas in which caused areas might also appear because of ghost causality connections identified by PDC. It was also seen that a map of caused locations cannot be constructed, on account of the way PDC is normalized. These limitations have motivated an investigation of whether causal connections can be recovered directly from the MAR model coefficients, without the use of PDC. The simplest metric that could be used for such a purpose is the norm of the MAR model coefficients across the time lags of the model. This can be represented as:

N_coef(i, j) = ||a_ij|| = sqrt( Σ_{s=1}^{p} a_ij(s)² )

where a_ij(s) is the MAR coefficient modeling the effect of voxel j on voxel i at time lag s, and p is the model order.
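A minimal numerical sketch of this metric and of the two map types it enables (the Euclidean norm across lags is reconstructed from the text's description; the array layout is the same assumed convention as before):

```python
import numpy as np

def ncoef(A):
    """Norm of MAR coefficients across time lags.

    A : array (p, N, N) with A[s-1, i, j] = a_ij(s), effect of voxel j
        on voxel i at lag s.
    Returns array (N, N) with out[i, j] = sqrt(sum_s a_ij(s)**2).
    """
    return np.sqrt((A ** 2).sum(axis=0))

# Because N_coef is not normalized per transmitter or receiver, both
# directions of averaging are meaningful within the same model:
# caused map  = ncoef(A).mean(axis=1)   # mean N_coef received by each voxel
# causal map  = ncoef(A).mean(axis=0)   # mean N_coef transmitted by each voxel
```

This symmetry of treatment between transmit and receive directions is exactly what PDC's column-wise normalization forbids.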
The main advantage of using this metric is that all MAR model coefficients are derived simultaneously for all voxel pairs, and thus relative comparison between different voxel pairs within the same model is feasible. Additionally, the norm of the coefficients is not normalized relative to the causal or the caused voxel, and thus both causal and caused maps can be constructed from this metric. A similar metric has already been used to infer causality in MEG data [Ramirez and Baillet, 2010]. The main disadvantage of this metric is that it is not frequency-specific; if frequency-specific information is needed, the most feasible solution would be to build a model on a narrowband-filtered version of the data. An additional disadvantage is that, because the norm of the coefficients is not normalized, comparison of the relative causality strength between different conditions (MAR models for different data sets) cannot be confidently performed with this metric alone. The evaluation of this methodology followed an approach similar to Investigation 2, as described in section ''Investigation 2: Consideration of All Voxels in Brain Volume as Potential Activation Sources''. The noise used in this investigation was spatiotemporally correlated noise, as described in section ''PDC deviation from 'Ideal' for spatiotemporally correlated noise'', scaled to two times the rms value of the pseudo-MEG measurements from the simulated dipoles. The main difference in this investigation is that the MAR model coefficients were projected into all voxels inside the brain volume through the beamformer spatial filters (8,942 locations). Then N_coef was computed from the coefficients of the projected MAR model for all voxel pairs. Four different types of N_coef maps were constructed: first, the map of N_coef to all voxels from each of the known activity sources (receive direction); second, the map of the mean N_coef received by each voxel from all voxels (receive direction); third, the map of N_coef from all voxels to each of the known activity sources (transmit direction); and fourth, the map of the mean N_coef transmitted by each voxel to all voxels (transmit direction).

Investigation 3: Results

In Figure 12 the following are presented. Subfigures (a-f) present the N_coef maps from the known sources 1-6 to all voxels. Subfigure (g) presents the mean N_coef caused to each voxel. Subfigure (h) presents the actual locations of the simulated sources as spheres of 1.5 cm radius, for ease of reference. Subfigure (i) presents the original causal configuration between the six sources. The position of each source coarsely represents its expected location in the topographic maps. Depth information is encoded in the radius of each source, with a bigger radius corresponding to sources closer to the top of the head. This diagram has been included in order to assist the reader in interpreting the maps. In this case, when the metric represents the caused voxels, observations similar to those for PDC can be made. For sources 4, 5, and 6 causality is correctly reconstructed, while for the other three known sources ghost causal connections appear in the maps in addition to the correct connections. Again, source 6 does not seem to interfere with the causality maps of the other sources. By examining the map of the mean N_coef received by each voxel, one can see that all areas that have auto- or cross-correlated activity are highlighted, similarly to PDC. In Figure 13 the following are presented.
Subfigures (a-f) present the N_coef maps from all voxels to the known sources 1-6. Subfigure (g) presents the mean N_coef caused by each voxel. Subfigure (h) presents the actual locations of the simulated sources as spheres of 1.5 cm radius, for ease of reference. Subfigure (i) presents the original causal configuration between the six sources. The position of each source coarsely represents its expected location in the topographic maps. These maps represent the areas from which activity is caused. Source 1 seems to be caused only by itself, which is in accordance with reality. Sources 2 and 3 correctly appear to be caused by source 1. Source 4 correctly seems to be caused by itself and by source 5, but not by source 1. Source 5 also correctly seems to be caused by source 4 and by itself. Source 6 also correctly appears to cause itself and does not interfere with the causal maps of the other voxels. In summary, the causal activity maps highlighted the areas where caused activity was actually present, and most of the individual causal connections were correctly recovered. When plotting the map of the mean N_coef caused by each voxel, the correct locations are highlighted. The fact that a consistent causal connectivity map is available in addition to the caused connectivity map is a significant advantage of using the MAR coefficient norm metric. Using these two maps, from Figures 12g and 13g, one could infer that the areas around sources 1, 4, and 5 are the main causal hubs and the areas around sources 1, 2, 3, 4, 5, and 6 are caused hubs, which corresponds to reality. This is a very significant advantage compared to the inference about causality one could make by looking only at the PDC map in Figure 10g. Selecting voxels at the centers of these hubs and examining the individual N_coef maps could lead to ghost connectivity being identified as real; probably a more consistent approach would be to project the MAR model only into these selected hub centers and recompute N_coef and PDC.

Local Maxima

Causality information as recovered by the MAR model coefficients is represented in the maps as hubs of activity. The most obvious choice for quantifying how accurately the locations of actual activity are recovered by these hubs is to estimate the local maxima. The local maxima of the mean N_coef caused by and to each voxel were computed using multidirectional derivation with the MinimaMaxima3D toolbox in MATLAB [Pichard, 2007]. The distances between the actual sources of activity and the corresponding hub maxima were computed and are presented in Table II. The accuracy is in most cases better than 1 cm; in one case it exceeds 3 cm.

Real Data

For a preliminary, indicative evaluation with real data, data sets for two subjects from the following experiment were used. A cue indicating left or right was presented to the subject, and after 2 s a 'go' cue indicated to the subject to press a button with the cued hand. The data sets used here are from the interval 0-500 ms from the onset of the left-right visual cue. Within this interval, areas involved in motor planning should be activated. The sampling frequency of the measurements was 500 Hz. For subject 1, 146 trials were selected, and for subject 2, 162 trials were selected, after removal of artifacts. The subjects' brain volumes were segmented in 6 mm grids. By applying PCA, it was found that for subject 1, 24 principal components, and for subject 2, 27 principal components, explained 99% of the variance.
To introduce statistical inference into the estimation of the causality maps, a LOOM approach similar to the one described in section ''Statistical inference of PDC'' was used. One trial was removed from the sensor data. The spatial filters and dipole orientations were estimated through an LCMV beamformer. The MAR model was built on the selected principal components of the sensor data by the Yule-Walker method, and the coefficients were projected into source space. Akaike's criterion indicated a model order of 14 time lags. The projected N_coef for each voxel pair was computed, and the caused and causal maps of its mean were computed. Then the next trial was excluded from the sensor data set and the procedure was repeated, until each of the trials had been left out once. Because the coefficient norm is always greater than zero, a similar LOOM approach was used to derive the level of random causality against which the observed causality was compared. A randomized data set was derived by randomly shuffling the data in each channel across all data points and then reslicing it into trials. The procedure was then the same as above: one trial was excluded and the process repeated until each trial had been excluded once. Welch's t-test for samples with different variances, with a P-value of 0.05, was used in each voxel, for both the caused and the causal maps, in order to test whether the mean N_coef was above the randomized case. The voxels for which the null hypothesis (estimated causality the same as randomized causality) was not rejected were assigned a causality metric of zero. The resulting maps of the mean N_coef caused by and to each voxel were plotted on a 3D mesh of each subject's cortex. The causal maps have a distinct maximum on the visual cortex for both subjects. This is shown in Figure 14a,c, where the posterior view for both subjects is shown. The caused maps had two local maxima, both on the sensorimotor areas. These are shown in Figure 14b,d, where the dorsal view for both subjects is shown. For this relatively simple motor planning task, one should expect to identify activated areas involved mainly in the dorsal visuomotor stream, as the specific action is underlain by a 'goal-directed' rather than a 'matching' representation [Milner and Dijkerman, 2002; Prinz and Hommel, 2002]. The dorsal stream is considered to emanate from the primary visual cortex and terminate on the motor cortex through projections via the premotor areas [Hoshi and Tanji, 2007; Prinz and Hommel, 2002]. Indeed, using the maps of the N_coef mean, the visual cortex was identified as the causal area and the motor cortex as the caused area. The results of this preliminary investigation show that using the MAR coefficients directly provides a meaningful functional network of causality.
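A minimal sketch of the randomized-baseline thresholding described above (the one-sided "above random" reading of the two-sided Welch test is an assumption made for illustration):

```python
import numpy as np
from scipy.stats import ttest_ind

def threshold_maps(obs, rand, alpha=0.05):
    """Zero out voxels whose observed mean N_coef is not above the randomized level.

    obs, rand : arrays (N_trials, N_voxels) of LOOM estimates of a mean
                N_coef map for the observed and the channel-shuffled data.
    """
    # Welch's t-test (unequal variances), one voxel at a time
    tval, pval = ttest_ind(obs, rand, axis=0, equal_var=False)
    mean_map = obs.mean(axis=0)
    # keep only voxels significantly above the randomized baseline
    mean_map[(pval >= alpha) | (tval <= 0)] = 0.0
    return mean_map
```

The shuffling step destroys all temporal structure within each channel while preserving its amplitude distribution, so the surviving voxels reflect lagged structure rather than mere signal power.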
CONCLUSIONS

In this work we investigated the feasibility of building a MAR model on the principal components of the MEG sensor data, projecting the model coefficients through an inverse solution to brain locations, and deriving meaningful causality information from them. Theoretically, the projection is feasible, and the main uncertainty factor is the mixing resulting from the nonuniqueness of the various inverse solutions. The feasibility of this approach was studied through three different investigations with simulated MEG data from known activity locations inside the brain. From these three investigations the following conclusions were drawn.

Investigation 1: Six dipoles with a predefined causality configuration were simulated as a network of damped oscillators with a nominal frequency of 8 Hz. Causal connections existed between sources 1-5. Source 6 had the highest power but no causal connections to any of the other sources; it was used to test whether the presence of a strong unconnected source can affect the recovery of causality between the other sources. When such a small number of known activity locations is considered, projecting the MAR model from the sensor principal component space to these locations and using PDC as the causality metric correctly recovers the causality information between different pairs. In this case, it was shown that this methodology is quite tolerant of sensor noise (white Gaussian) and background brain noise (spatiotemporally correlated) up to levels of four times the rms value of the brain signal, but can suffer when a low sampling rate is combined with a high model order, in other words, when the total number of sampling points is low with respect to the number of coefficients that have to be estimated. It was also shown that significance levels estimated by a jackknife method (LOOM) can be used to distinguish significant PDC values.

Investigation 2: When the method is used with a very large number of voxels considered as potential activity locations, PDC seems to indicate the correct areas where real causal connections exist, but does not seem to correctly reconstruct the individual causal pathways, and ghosts of real connections appear between other voxel pairs. Maps of the mean PDC caused to each voxel seem to provide an acceptable initial indication of the causal areas, in which caused locations might also show up because of ghost connections. Maps of the mean PDC caused by each voxel are not feasible, as PDC is normalized with reference to the causal voxel.

Investigation 3: When the method is used with a very large number of voxels, a new causality metric was investigated: the norm of the MAR model coefficients across time lags, termed here N_coef. The norm metric can be used to construct maps of both causal and caused connections, because the MAR model coefficients between different pairs are comparable within the same model. The causal and caused maps of the norm metric for individual voxels seem to indicate the correct areas where real causal connections exist, but again ghost connections might appear. However, maps of the mean of the norm metric from each voxel to all voxels, and from all voxels to each voxel, showed that ghost connections are weak relative to the real connections, and both the causal and the caused maps seem to correctly resemble reality. In an attempt to quantify the accuracy of the N_coef maps with respect to the known locations of the six simulated dipoles, the local maxima were computed and their distances from the actual locations were calculated. The local maxima accuracy was in most cases on the order of 1 cm, and in one case 3 cm.

From the above three investigations it was concluded that the MAR model coefficient norm metric is more appropriate when a very large number of voxels is used as potential activity sources and when no a priori assumptions are made about activity locations. This is because PDC employs a semiarbitrary normalization based on the causal voxel, which means that pairs with different causal voxels cannot be compared in terms of PDC.
The MAR coefficients contain the linear weights between variables at different time lags: the higher the coefficient values, the stronger the linear interaction. The main issue regarding the coefficients is that they are not normalized and thus not bounded; within the same model, however, the coefficients are scaled according to the strength of the linear interactions. Although unbounded, the MAR coefficients do not have the normalization problem of PDC, and that is why their maps are much more consistent. Maps of N_coef from each voxel to all voxels, and from all voxels to each voxel, can provide a good initial indication of the causal and caused hubs inside the brain; such maps based on PDC appear to be less reliable. On the basis of these maps, the hub or cluster centers can be selected, so that a few voxels represent the functional network topology. The MAR model can then be projected into only these few locations of activity, and PDC can then be used to infer causality, as in such cases of few activated areas the use of PDC was shown to be relatively robust. In such cases, confidence intervals of PDC can also be used to estimate significance levels. From all three investigations it was also observed that source 6 did not appear in the caused and causal maps of any of the other five sources. This means that the causality recovered by PDC and N_coef for each of the five connected sources was not significantly affected by the unconnected source 6, although it had the highest power. To further demonstrate the feasibility of the N_coef maps, real data for two subjects from a simple motor planning MEG experiment were used. Through the N_coef maps, the visual cortex was identified as the causal end of the functional network and the sensorimotor cortex as the caused end. This seems to be in agreement with the functional structure of the dorsal stream, which is activated during such goal-directed visuomotor tasks. On the basis of the above conclusions, the methodology of building the MAR model in the sensor space and projecting it, even into a very large number of voxels inside the brain, in order to estimate causality is a feasible approach and can be used to provide entire-brain maps of causality.
The Enzymatic Decolorization of Textile Dyes by the Immobilized Polyphenol Oxidase from Quince Leaves

Water pollution due to the release of industrial wastewater has already become a serious problem in almost every industry using dyes to color its products. In this work, polyphenol oxidase enzyme from quince (Cydonia Oblonga) leaves immobilized on calcium alginate beads was used for the successful and effective decolorization of textile industrial effluent. Polyphenol oxidase (PPO) enzyme was extracted from quince (Cydonia Oblonga) leaves and immobilized on calcium alginate beads. The kinetic properties of the free and immobilized PPO were determined. Quince leaf PPO enzyme stability was increased after immobilization. The immobilized and free enzymes were employed for the decolorization of textile dyes. The dye solutions were prepared at a concentration of 100 mg/L in distilled water and incubated with free and immobilized quince (Cydonia Oblonga) leaf PPO for one hour. The percent decolorization was calculated by taking the untreated dye solution as control. Immobilized PPO was significantly more effective in decolorizing the dyes than the free enzyme. Our results showed that the immobilized quince leaf PPO enzyme could be efficiently used for the removal of synthetic dyes from industrial effluents.

Introduction

Synthetic dyes are extensively used in many fields of industry, for example, textiles, leather, paper, rubber, plastics, cosmetics, pharmaceuticals and food [1]. The wastewater effluent from these industries, especially textiles, contains large quantities of a variety of dyes, which are inert and may be toxic at the concentrations discharged into receiving waters. Many of these dyes, which have complex aromatic molecular structures, are also toxic and even carcinogenic, and pose a serious threat to living organisms [2]. The toxic effects of industrial dyes have encouraged researchers to continue studies on chemical and enzymatic methods to remove such hazardous materials [3,4]. Compared to physicochemical methods such as precipitation, filtration and absorption, the enzymatic treatment of dyes has a low energy cost and is more ecofriendly, a process still not commonly used in the textile industries [5]. Unfortunately, conventional wastewater treatments are ineffectual at removing dyes and involve high costs, the formation of hazardous by-products and intensive energy requirements; moreover, complete dye removal is unfeasible. This has impelled research into alternative methods such as biotechnological processes. Recently, the enzymatic approach has attracted much interest for the removal of phenolic pollutants from aqueous solutions [6]. Oxidoreductive enzymes such as polyphenol oxidases and peroxidases participate in the degradation/removal of aromatic pollutants from various contaminated sites [7,8]. Polyphenol oxidases can act on a broad range of substrates, such as substituted polyphenols, aromatic amines, benzenethiols and a series of other easily oxidizable compounds. Thus, they can catalyze the decolorization and decontamination of organic pollutants. In view of the potential of these enzymes in treating phenolic compounds, several microbial and plant oxidoreductases have been employed for the treatment of dyes, but none of them has been exploited at large scale, due to the low enzymatic activity of biological materials and the high cost of enzyme purification. In order to improve polyphenol oxidase activity and stability, enzyme immobilization technology has been applied.
This technology is an effective means of making enzymes reusable and improving their stability, and is considered a promising method for the effective decolorization of dye effluents. According to previous reports, various types of supports have been applied to immobilize enzymes, such as activated carbon, celite, controlled porosity glass, chitosan microspheres and alginate [9-13]. In this study, our first objective was to find a cheaper and easily available alternative plant polyphenol oxidase (PPO) enzyme source to the commercially available ones, and to immobilize it. Quince (Cydonia Oblonga) leaves, which are a waste product in Turkey, have been employed in this work as an easily available and inexpensive PPO enzyme source. PPO enzyme was partially purified from quince (Cydonia Oblonga) leaves and immobilized on calcium alginate beads. The biochemical properties were determined for free and immobilized quince (Cydonia Oblonga) leaf PPO. The second objective was to evaluate the performance of the free and immobilized polyphenol oxidases regarding the decolorization of various reactive, acid, direct and basic dyestuffs.

Materials. Quince (Cydonia Oblonga) leaves, used in this study, were obtained from the Sakarya region, Turkey, and stored at -20 °C until used. Polyvinylpolypyrrolidone (PVPP), (NH4)2SO4, sodium alginate, CaCl2 and other chemicals were obtained from Sigma Chemical Co., and the dyes were kindly provided by DyStar, Huntsman and Yorkshire.

Extraction and Purification. 30 g of quince (Cydonia Oblonga) leaves were obtained from the local Sakarya region. The leaf samples were added to 50 mM sodium phosphate buffer (pH 7.0), 0.5% polyvinylpolypyrrolidone (PVPP) and 10 mM ascorbic acid, and the mixture was homogenized with a blender. After the filtrate was centrifuged at 14,000 g for 30 min, the supernatant was collected. The extract was fractionated with (NH4)2SO4: solid (NH4)2SO4 was added to the supernatant to obtain 80% saturation. The mixture was centrifuged at 14,000 g for 30 minutes, and the precipitate was dissolved in a small amount of phosphate buffer and then dialyzed at 4 °C against the same buffer for 24 h, with three changes of the buffer during dialysis. The dialyzed enzyme extract was collected and used for all other processes.

Enzyme Immobilization. Alginate solutions (1, 2, 3%, w/v) were prepared by dissolving sodium alginate in deionized water. Crude quince leaf PPO solution was mixed with 20 mL of alginate solution at an enzyme/alginate ratio of 1:10 (v/v). The mixture was stirred with a magnetic stirrer to ensure complete mixing, and Ca-alginate beads were produced as soon as the emulsion was added into 100 mL of CaCl2 solution (1, 2, 3%, w/v). The beads were allowed to harden for at least an hour under mild agitation. The Ca-alginate beads were then removed from the encapsulation medium via centrifugation and rinsed twice with 0.5% (w/v) CaCl2 containing 1% (v/v).

PPO Activity Assay. The activity of free and immobilized PPO was determined at room temperature using catechol as substrate. The assay mixture consisted of 2.95 mL of 20 mM catechol in 0.05 M potassium phosphate buffer, pH 7.0, and 0.05 mL of enzyme. The increase in absorbance at 420 nm was measured as a function of time for 1 min. One unit of enzyme activity is defined as the amount of enzyme that causes an increase in absorbance of 0.001 per min at 25 °C. PPO activity was assayed in triplicate.
For determining the Michaelis constant (K_m) and maximum velocity (V_max) of the enzyme, PPO activities were measured with catechol at varying concentrations under the optimum conditions of pH and temperature. The K_m and V_max values of PPO for the catechol substrate were calculated from a plot of 1/V against 1/[S] by the method of Lineweaver and Burk [14]. Dye decolorization by PPO was monitored at the specific wavelength of each dye. The decolorization percentage was calculated by taking the untreated dye solution as the control for each buffer (100%).

Calculation of Dye Decolorization Rate. The starting absorbance at the characteristic λ_max of each dye (control) was designated as 100%. The extent of decolorization was defined by the following formula:

Decolorization (%) = [(A_0 − A_t) / A_0] × 100

where A_0 is the absorbance of the untreated dye and A_t is the absorbance after treatment [15].

Effect of pH on Free and Immobilized PPO. For the determination of the effect of pH on the free and immobilized enzymes, acid and phosphate buffers were used within the pH range of 3.5-9.0. The optimum pH for both free and immobilized quince leaf PPO was 7.5 (Figure 1). Free and immobilized quince leaf PPO gave similar optimum pH values; however, the immobilized leaf PPO showed much broader pH stability than the free enzyme. This suggests that immobilized quince leaf PPO was less sensitive to pH changes than the free one.

Effect of Temperature on Free and Immobilized PPO. The effect of temperature on free and immobilized quince leaf PPO activity was investigated in phosphate buffer over a temperature range of 4-70 °C. The results showed that the optimum temperatures of the free and immobilized enzymes were 30 °C and 35 °C, respectively (Figure 2). The thermostability of the immobilized quince leaf PPO enzyme was clearly better than that of the free enzyme. The enhanced thermal stability of enzymes arising from immobilization would be an advantage for industrial application, due to the high temperatures used in industrial processes [16].

Kinetic Properties. The Michaelis-Menten constants of free and immobilized quince leaf PPO were calculated using Lineweaver-Burk double reciprocal plots [14]. The calculated K_m values for free and immobilized quince leaf PPO were 5.86 and 12.57 mM, respectively. An increase in the K_m value for catechol upon immobilization of PPO was observed. Our results are similar to the Michaelis-Menten constants of partially purified free and immobilized potato PPO, where the K_m values were 8.0 mmol L-1 for free potato PPO and 14.7 mmol L-1 for alginate-SiO2/PPO, respectively. In general, K_m values of immobilized enzymes are higher than those of free enzymes, revealing an affinity change for the substrate [17].

Effect of pH on the Decolorization of Textile Dyes. The effects of pH on the decolorization of textile dyes by free and immobilized quince leaf PPO are summarized in Table 1. The effect of pH was studied at pH values between 4.0 and 7.0. The results showed that the decolorization rate was significantly higher at lower pH, with a maximum at pH 4.0 (Table 1). The immobilized PPO also increased the decolorization percentage compared to the free enzyme. There have been several earlier reports regarding the maximum decolorization of dyes at acidic pH values by various plant polyphenol oxidases [17], plant peroxidases [18], microbial polyphenol oxidases [19] and laccases [13-16]. When the pH was increased above 7.0, the extent of decolorization decreased rapidly.
This is an advantage from the industrial application point of view, since some dye effluents are slightly acidic [19]. The results also showed that Telon Yellow ARB was most effectively decolorized by immobilized quince leaf PPO at pH 4.0, with 72.68% decolorization.

Effect of Time on the Decolorization of Textile Dyes. The enzymatic decolorization of textile dyes by immobilized quince leaf PPO was examined by varying the incubation time (Figure 3). Eight different synthetic textile dyes were used for decolorization in this study. The results showed that the decolorization of the dyes increased with time up to 30 min. However, the rate of dye decolorization was quite slow after 1 h, probably due to product inhibition. This observation suggests that the initial first hour was the significant period for dye decolorization. These results are in agreement with earlier published work on the decolorization of textile dyes [20,21]. The most effective decolorization by immobilized quince leaf PPO was observed for the Telon Yellow ARB dye.

Conclusion

Our results showed that PPO enzyme was successfully partially purified from quince leaves and immobilized onto alginate beads. Immobilization of quince leaf PPO increased its stability to pH and temperature, which could make it more useful for the removal of synthetic dyes from industrial effluents.
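The two calculations that drive the reported numbers (the decolorization percentage and the Lineweaver-Burk estimate of K_m and V_max) are simple enough to sketch; the function names and the example absorbances below are illustrative, not the paper's raw data:

```python
import numpy as np

def decolorization_pct(a0, at):
    """Decolorization (%) = (A0 - At) / A0 * 100.
    a0 = absorbance of the untreated dye (control), at = after treatment."""
    return (a0 - at) / a0 * 100.0

def lineweaver_burk(s, v):
    """Estimate Km and Vmax from a least-squares line of 1/v vs 1/[S].
    On the double-reciprocal plot: slope = Km/Vmax, intercept = 1/Vmax."""
    slope, intercept = np.polyfit(1.0 / np.asarray(s), 1.0 / np.asarray(v), 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

# illustrative only: an untreated absorbance of 1.00 falling to 0.27
print(decolorization_pct(1.00, 0.27))   # -> 73.0, close to the 72.68% reported
```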
Incidence of low back pain according to physical activity level in hospital workers

BACKGROUND AND OBJECTIVES: Hospitals integrate several risks posed by physical, chemical, psychosocial and ergonomic factors, which may be noxious for different healthcare professionals. This study aimed at evaluating the level of physical activity, the presence of musculoskeletal risk factors and the incidence of low back pain among nursing professionals of a hospital Materials and Sterilization Center. METHODS: The sample was made up of 56 individuals of both genders, working for the Associação Beneficente de Campo Grande/MS - Hospital Santa Casa. Participants were divided into two groups: G1 (insufficiently active, n=27) and G2 (sufficiently active, n=29). In addition to the level of physical activity, anthropometric data, the incidence of pain and functional incapacity, flexibility and muscle resistance were evaluated. RESULTS: The incidence of low back pain was lower in G2 (13 cases; 44.8%) than in G1 (24 cases; 88.9%). Body mass index, pain intensity and the functional incapacity index were lower in G2. Time of physical activity was lower in G1. Abdominal muscle resistance was higher in G2. CONCLUSION: In nursing professionals, the level of physical activity influences the presence of low back pain, pain intensity and the functional incapacity index.

INTRODUCTION

Low back pain is a major public health problem, reaching epidemic levels among the general population, affecting economically active people and considered the most important reason for medical leave 1. Pain is multifactorial, involving individual, psychosocial, occupational, genetic and biomechanical factors. Among intrinsic risk factors are age, gender, body mass index, muscle imbalances and a sedentary lifestyle 2. Low back pain induced by mechanical-postural conditions is responsible for a large part of the back pain referred by the population 1. Postural stress may change several musculoskeletal system structures, generating imbalances and decreasing muscle strength. Loss of flexibility, regardless of cause, may also induce pain and decrease muscle strength 1,2. Extrinsic factors, such as labor-related functional overload 1, may also contribute to the development and worsening of low back pain. The hospital environment poses several risks caused by physical, chemical, psychosocial and ergonomic factors, which may be noxious to the health of professionals in the area 3. Among professionals working in hospitals, nurses are the professionals most often affected by low back pain, with a high incidence rate and prevalence per year 3. Their work is not limited to direct patient assistance, but rather extends to indirect assistance through the Central Materials and Sterilization Department (CMSD). This is a technical support sector, mostly made up of nurses, aimed at receiving contaminated materials, decontaminating, preparing and sterilizing them, as well as preparing and sterilizing clean clothes coming from the laundry and storing such materials for future distribution 4. Considering the high incidence of low back pain among nurses and the scarcity of CMSD-related studies, this study aimed at evaluating the level of physical activity, the presence of musculoskeletal risk factors and low back pain among nurses of a hospital CMSD. Additionally, the association between these potential risk factors and the incidence of low back pain was investigated.
Table 1 shows demographic and anthropometric variables. There was no significant difference between groups in height, whereas age and body mass index (BMI) were lower in G2. As to low back pain, when the group was fixed, there was a significant difference within G1, with a predominance of individuals with low back pain. In G2 there was no difference between the presence and absence of low back pain. There was also a difference between groups with regard to the incidence of low back pain, the number of positive cases being higher in G1 and the number of negative cases higher in G2. Table 3 shows data on the time of physical activity practiced per week, musculoskeletal risk factors for low back pain, low back pain intensity and the functional incapacity index, according to group. G2 had a longer total physical activity time per week than G1. In addition, pain intensity and the functional incapacity index were higher in G1 than in G2. With regard to musculoskeletal risk factors for low back pain, the number of repetitions performed during the maximum repetition test for the abdominal muscles was higher in G2 than in G1. However, there was no significant difference between groups in the values of the sit-and-reach and Thomas tests for the lower limbs. With regard to the sit-and-reach test, individuals were classified by level of flexibility, and both groups had values compatible only with the classifications "below average" and "poor". In G1, 3 individuals (11.1%) were considered below average and 24 (88.9%) had poor performance. In G2, 4 individuals (13.8%) were considered below average and 25 (86.2%) had poor performance. In the Goodman test, when the group was fixed, the number of individuals with poor performance in the sit-and-reach test was significantly higher than the number of individuals with performance below average in both groups. However, there was no difference in the number of cases of hip flexor shortening (Table 4).

DISCUSSION

The CMSD is a technical support sector, primarily made up of nursing professionals, which works around the clock to supply the demand of different hospital sectors 4. Among CMSD-related ergonomic risks are an accelerated working rhythm, information flow, job organization, upright or static posture for long periods, repetitive upper limb movements and hard work 11. The exposure of people to extrinsic and intrinsic risk factors promotes an acute body response, characterized by fatigue, discomfort and pain for prolonged periods. In addition, there may be adaptation mechanisms or the development of chronic effects, culminating in Work-Related Musculoskeletal Disorders (WRMD), such as low back pain 1-3. Although considered multifactorial, low back pain etiology is frequently associated with a sedentary lifestyle, reflecting the combination of deficient musculoskeletal fitness and lumbar region overload 1. In our study, the incidence of low back pain was higher in the insufficiently active group. Adequate fitness levels may contribute to maintaining body posture during routine functions with lower energy expenditure, without exceeding the tolerable musculoskeletal limit. Physical activity also attenuates major risk factors involved in low back pain syndrome, such as muscle weakness, especially in the abdominal region, and poor joint flexibility of the back and lower limbs 12. Petersen & Marziale 13 observed a lower frequency of low back pain in nurses who practiced sports. Interestingly, in our study not only the incidence of low back pain but also pain intensity was lower in the sufficiently active group.
In the biomechanical context, trunk muscle weakness is a major risk factor for low back pain. The abdominal muscles in particular play a critical role in spine and pelvic girdle stabilization. When there is abdominal weakness, there is hip instability, allowing the psoas muscle to traction the lumbar vertebrae anteriorly, leading to pelvic anteversion and increased lumbar lordosis9,12,14. Chronological age is associated with a decline in physical activity, thus increasing the risk for low back pain1,10. In addition, it is well established that aging is associated with degenerative changes in lumbar spine structures, which may cause pain, decreased flexibility and muscle weakness10. Overweight may be considered an independent low back pain factor, because increased abdominal circumference worsens pain and may be associated with lumbar spine changes. According to Heuch et al.2, low back pain is associated with BMI, and pain intensity increases as the level of obesity progresses. In addition, CMSD workers carry heavy objects every day during work, which may lead to an anterior shift of the center of gravity, generating pelvic anteversion and consequently increased lumbar lordosis. It is possible that part of the differences found between studies is due to the way flexibility was evaluated12. Although easy to apply and highly reproducible, the sit and reach test is an indirect, linear test that expresses results on a distance scale. Linear tests have as a weakness the inability to give a global view of an individual's flexibility, and anthropometric variables may interfere with test results12. As to demographic and anthropometric variables, sufficiently active individuals had a younger age and lower BMI. CONCLUSION Among nurses working in a hospital CMSD, the level of physical activity influences the incidence of low back pain, pain intensity and functional incapacity. In addition, sufficiently active individuals have better abdominal muscle resistance.
Table 1. Demographic and anthropometric variables according to the level of physical activity.
Table 2. Proportion of low back pain cases according to the level of physical activity. G1 = group of insufficiently active individuals; G2 = group of sufficiently active individuals; Goodman test; A, B: for vertical comparisons; a, b: for horizontal comparisons; different letters mean significant difference (p<0.05).
Table 3. Physical activity practiced per week, low back pain intensity, functional incapacity index and musculoskeletal risk factors for low back pain, according to the level of physical activity.
Table 4. Number of cases of hip flexor shortening according to the level of physical activity. G1 = group of insufficiently active individuals; G2 = group of sufficiently active individuals; RLL = right lower limb; LLL = left lower limb; Goodman test; A, B = for vertical comparisons; a = for horizontal comparisons; different letters mean significant difference (p<0.05).
2018-12-10T00:00:37.935Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "260b86933f5c66919412bf8ae681e6ab0f4ebc32", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5935/1806-0013.20170003", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "260b86933f5c66919412bf8ae681e6ab0f4ebc32", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
103930396
pes2o/s2orc
v3-fos-license
Desulfurization by MOFs as Sorbents for Thiophene Sulfides Metal-organic frameworks UMCM-150 [Cu3(BHTC)2] and its heterobimetallic analogue Co1Cu2(BHTC)2, based on an asymmetrical ligand, biphenyl-3,4',5-tricarboxylate (H3BHTC), were studied for the desulfurization of model oils. The adsorption experiments were conducted at room temperature and atmospheric pressure. The model oils were prepared by adding benzothiophene (BT) and dibenzothiophene (DBT) to liquid alkanes; their total sulfur concentration of 250 ppmw was determined by a WK-2D coulometric integrated micro-analyzer. Adsorptive desulfurization experiments were conducted in a consecutive fixed-bed adsorption system. The results indicate that Cu3(BHTC)2 has a higher sulfur capacity than Co1Cu2(BHTC)2. Taking DBT as an example, Cu3(BHTC)2 and Co1Cu2(BHTC)2 have breakthrough adsorption capacities of 10.6 and 5.8 g S/kg of sorbent for model oils. Introduction Sulfur present in fossil fuels should be removed, because the combustion of sulfur is a primary cause of acid rain. Besides, sulfur can also severely poison the catalysts used in automotive emission control, petrochemicals production and fuel cells [1][2][3]. Consequently, sulfur emissions should be strictly regulated. Traditional hydrodesulfurization (HDS) can effectively remove thiols, sulfides and disulfides, but the removal of thiophene derivatives, such as benzothiophene (BT), dibenzothiophene (DBT) and 4,6-dimethyldibenzothiophene (DMDBT), remains a serious challenge [4]. Commonly, HDS processes need high pressures (60-100 atm) and elevated temperatures (>573 K) to achieve deep desulfurization [5]. Hence, adsorptive desulfurization has received great attention owing to several advantages: it is low-cost, feasible at normal temperature and pressure, and can be selective in the removal of thiophene derivatives in the fuel [6]. Recently, many adsorbents, such as molecular sieves, activated carbon and alumina [7][8][9][10], have been studied for adsorptive desulfurization. However, the capacities, adsorption kinetics, and selectivity of these materials for organosulfur compounds have not reached industrial requirements. The development of new adsorbents with high sulfur capacity, selectivity, and regenerability is the key to an efficient adsorptive desulfurization process. Metal-Organic Frameworks (MOFs) are highly ordered, porous materials that have attracted increasing attention worldwide [11][12][13]. They are composed of metal ions and organic ligands, which build up frameworks of diverse topology through bridging linkers. As is well known, MOFs have large pore sizes, fast guest exchange kinetics and remarkable gas adsorption capacities compared with traditional molecular sieves and activated carbon [14][15][16]. However, there are only a few reports of MOFs possessing a significant sulfur adsorption capacity. Cychosz [17] did some pioneering work in this area, studying five different MOF materials and their adsorption characteristics for organosulfur compounds, such as BT, DBT and DMDBT, in model oil. Adsorption capacity is determined by pore size and the inner contact area between the organosulfur compound and the channel of the framework [18]. However, there is little study on the influence of open metal active sites on desulfurization. Accordingly, we chose a heterobimetallic UMCM-150 isostructural analogue, Co1Cu2(BHTC)2, as an adsorbent for the organosulfur compounds present in transportation fuels.
Finally, regenerability was tested under suitable conditions. Desulfurization experiments The materials were packed into a stainless steel column (15 cm L × 2.0 mm ID), and a quantity of zeolite 4A was packed at the entrance of the adsorption column to eliminate the influence of water dissolved in the model oil on desulfurization. The as-synthesized materials and the zeolite were separated by adsorbent cotton. The adsorptive desulfurization experiments were performed at room temperature and atmospheric pressure. Before the experiments, the adsorbents were activated under a N2 atmosphere at 393 K for 2 h. After heating, the sorbents were allowed to cool down to room temperature, also in dry nitrogen. In order to expel the gas adsorbed in the MOFs, n-octane was injected into the fixed bed at a flow rate of 0.5 ml/min using a SZB-1 computer-controlled double-plunger micro-pump. Model oils were prepared by spiking n-octane with BT or DBT to 250 ppmw. All the model oil samples collected during the breakthrough experiments were measured by a GC SP 3400 with a capillary column (L = 30 m, ID = 0.32 mm) outfitted with a flame ionization detector (FID) and calibrated using solutions of known sulfur concentration. The liquid hourly space velocity (LHSV) for the MOFs used in the experiment was 289 h-1, compared with typical conditions for zeolite Y of between 1 h-1 and 10 h-1 [17]. This can be attributed to the much more open pore structures of MOFs, which allow rapid guest diffusion [21]. XRD analysis The XRD patterns in Figure 1 show that the characteristic peaks of Co1Cu2(BHTC)2 are identical to those of Cu3(BHTC)2, in line with the simulated pattern of Co1Cu2(BHTC)2 [19], which implies that the structures of the two adsorbents are alike. Pore structure analysis The details of the porous structure were measured by N2 adsorption-desorption isotherms; the results are shown in Table 1. The partial exchange of the active metal site does not change the pore size and pore volume, which also suggests that Co1Cu2(BHTC)2 has the same structure as UMCM-150. However, the specific area of Co1Cu2(BHTC)2 is higher than that of UMCM-150. Thermal stability analysis As shown in Figure 2, there are three steps in the weight loss process of UMCM-150. The first step (<50 ºC) is attributed to gas adsorbed in the adsorbent. The second step (50~150 ºC) is the weight loss of water and guest molecules embedded in the framework of UMCM-150. The third step (>300 ºC) reveals the collapse of the structure. In order to remove the water and guest molecules embedded in the adsorbents, the adsorbents were activated at 150 ºC for 2 h in helium before the adsorption experiments. Figure 3 shows the results of the Co1Cu2(BHTC)2 adsorption capacities for BT and DBT. The two organosulfur compounds break through at 106 and 346 ml/g, respectively. This sorbent can desulfurize a significant amount of solution before the breakthrough point (defined as 1 ppmw S). These correspond to breakthrough capacities of 0.39% and 0.58% for BT and DBT (capacities calculated by integrating the fitted Boltzmann function). As noted in the Introduction, the adsorption desulfurization capacity of previously reported MOFs is determined by pore size and shape, namely the contact area between the MOF and the organosulfur compound. Based on the breakthrough curves, Co1Cu2(BHTC)2 has a higher capacity for DBT than for BT. Total capacities for BT and DBT were 1.18% and 4.03%, respectively, which further supports this conclusion.
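Since the breakthrough capacities are described as integrals of a fitted Boltzmann function, a minimal Python sketch of that calculation is given below; the data points, the 1 ppmw breakthrough criterion applied to the fit, and the n-octane density of 0.70 g/ml are illustrative assumptions, not values or procedures taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, a1, a2, v0, dv):
    """Boltzmann sigmoid for a breakthrough curve: outlet concentration
    rises from a1 (~0, fully adsorbed) to a2 (~feed level) around v0."""
    return a2 + (a1 - a2) / (1.0 + np.exp((v - v0) / dv))

# Hypothetical breakthrough data: cumulative effluent volume per gram of
# sorbent (ml/g) vs. outlet sulfur concentration (ppmw S). Not the
# study's measurements.
v = np.array([0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500], float)
c = np.array([0, 0, 0.2, 1.5, 12, 60, 150, 220, 245, 249, 250], float)

c_feed = 250.0    # feed concentration, ppmw S
rho_oil = 0.70    # assumed n-octane density, g/ml
popt, _ = curve_fit(boltzmann, v, c, p0=[0.0, c_feed, 250.0, 30.0])

# Breakthrough point: first volume where the fit exceeds 1 ppmw S.
v_grid = np.linspace(0.0, v[-1], 5001)
c_fit = boltzmann(v_grid, *popt)
v_break = v_grid[np.argmax(c_fit > 1.0)]

# Sulfur captured up to the breakthrough point: integrate the removed
# concentration over the treated oil mass. ppmw * (g oil / g sorbent)
# * 1e-6 gives g S per g of sorbent.
mask = v_grid <= v_break
q = np.trapz(c_feed - c_fit[mask], v_grid[mask]) * rho_oil * 1e-6
print(f"breakthrough at {v_break:.0f} ml/g, capacity {q * 1e3:.1f} g S/kg")
```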
To assess the effectiveness of the open metal sites on organosulfur compound adsorption in the fixed-bed experiments, Cu3(BHTC)2 was compared with Co1Cu2(BHTC)2. As seen from Figure 2, the breakthrough points for the two organosulfur compounds are at 105.9 and 346.1 ml/g, respectively, corresponding to adsorption capacities of 0.44% and 1.06%. The BT and DBT breakthrough adsorption capacities are in the order Co1Cu2(BHTC)2 < Cu3(BHTC)2, and the total capacities follow the same order. Adsorption of thiophene sulfides on MOFs The adsorptive removal of sulfur compounds has been explained by high porosity and by interactions such as acid-base interaction and π-complexation [19]. Because UMCM-150 and its heterobimetallic analogue share the same topology and similar porosity, the open metal site is the only difference between them. The coordination of Cu2+ to the organic ligands is dominated by the paddle-wheel motif; in the trinuclear copper cluster coordinated with six ligands, copper is replaced by cobalt. The effect of this change can be explained by acid-base theory. Based on the Hard-Soft Acid Base (HSAB) principle [22], hard acids prefer hard bases and soft acids prefer soft bases. Cu2+ and Co2+ both belong to the d1-d9 transition metal cations; for metal cations in a low valence state, the greater the number of d electrons, the closer the cation is to a soft acid. Sulfur compounds can supply lone pairs of electrons. Compared with Co2+ (d7), Cu2+ (d9) has more d electrons, so it interacts more readily with sulfur compounds. That is why Cu3(BHTC)2 is superior to Co1Cu2(BHTC)2. Conclusions and outlook In conclusion, Co1Cu2(BHTC)2 and Cu3(BHTC)2 are found to offer significant potential for the reduction of sulfur levels in transportation fuels, which can help meet regulatory requirements and act as a complement to HDS. H3BHTC is an appropriate ligand from which MOFs with suitable pore size and shape can be synthesized. In addition, regeneration of the MOF fixed bed has been shown to be feasible using a combination of solvent and heat. The adsorption capacity of Co1Cu2(BHTC)2 is lower than that of Cu3(BHTC)2, which implies that the interaction of the S atom of organosulfur compounds with Co2+ is weaker than that with Cu2+.
2019-04-09T13:08:59.824Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "66ad7c41462705691a0282f044b67715ac9456ff", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/108/4/042035", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b89a2a43bb66cd0e970d541cb812adcedd4aadb3", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
4637132
pes2o/s2orc
v3-fos-license
Conserved syntenic clusters of protein coding genes are missing in birds Background Birds are one of the most highly successful and diverse groups of vertebrates, having evolved a number of distinct characteristics, including feathers and wings, a sturdy lightweight skeleton and unique respiratory and urinary/excretion systems. However, the genetic basis of these traits is poorly understood. Results Using comparative genomics based on extensive searches of 60 avian genomes, we have found that birds lack approximately 274 protein coding genes that are present in the genomes of most vertebrate lineages and are for the most part organized in conserved syntenic clusters in non-avian sauropsids and in humans. These genes are located in regions associated with chromosomal rearrangements, and are largely present in crocodiles, suggesting that their loss occurred subsequent to the split of dinosaurs/birds from crocodilians. Many of these genes are associated with lethality in rodents, human genetic disorders, or biological functions targeting various tissues. Functional enrichment analysis combined with orthogroup analysis and paralog searches revealed enrichments that were shared by non-avian species, present only in birds, or shared between all species. Conclusions Together these results provide a clearer definition of the genetic background of extant birds, extend the findings of previous studies on missing avian genes, and provide clues about molecular events that shaped avian evolution. They also have implications for fields that largely benefit from avian studies, including development, immune system, oncogenesis, and brain function and cognition. With regards to the missing genes, birds can be considered 'natural knockouts' that may become invaluable model organisms for several human diseases. Electronic supplementary material The online version of this article (doi:10.1186/s13059-014-0565-1) contains supplementary material, which is available to authorized users. Background Birds are highly successful and diverse descendants of theropod dinosaurs (Figure 1) that have evolved a number of distinct characteristics such as feathers, wings and the ability to fly, a sturdy lightweight skeleton, a toothless beak, a high metabolic rate and endothermy, and unique respiratory and urinary/excretion systems that distinguish them from other sauropsids (for example, lizards, turtles, crocodiles) [1][2][3]. To date, however, the genetic basis underlying these traits has been largely unknown. With the recent sequencing and annotation of a large number of avian (60) and sauropsid (5) genomes, including zebra finch [4], chicken [5], turkey [6], 45 genomes completed in the context of the avian phylogenomics consortium [7,8], 12 additional avian genomes available at NCBI (listed in Methods), the Western painted [9] and Chinese soft-shelled turtles, the green anole [10], and the American alligator and saltwater crocodile [11,12], it has become possible to identify genomic features that are unique to birds, and thus possibly associated with the evolutionary emergence of characteristic avian traits. Avian genomes have been found to be more compact compared to other amniotes. This difference, which correlates with an overall smaller cell size, was speculated to reflect an adaptation related to the higher rates of oxidative metabolism necessitated by the evolution of flight [13,14].
However, more recent evidence for similar genomic streamlining in non-avian dinosaurs suggests that the evolution of compact genomes may have occurred largely before the emergence of flighted birds [15]. Mechanistically, these reductions in genome size likely occurred as the result of a loss of non-coding DNA sequences, a possibility supported by evidence that avian genomes have less repetitive DNA, fewer pseudogenes, and shorter introns compared to mammals [5,16]. Importantly, however, the evolution of avian genomes also appears to have involved a loss of protein coding genes, as the total number of unique identified avian coding genes (for example, 15,508 in chicken according to Ensembl release e71 [17,18]) is considerably smaller than in other tetrapods (20,806 in humans, 18,596 in anole lizard, 18,429 in frogs). Indeed, paralog analysis demonstrates an overall higher occurrence of gene families with fewer members in birds than in other vertebrates [13]. Finally, birds are also known to have high rates of chromosomal rearrangements compared to other organisms, which could in principle have resulted in significant losses of syntenic groups of protein coding genes [5,19]. We have previously observed that analysis of side-by-side chromosomal alignments of 1-to-1 orthologs from representative vertebrate species can be used to identify protein coding genes that are missing in birds [4]. Specifically, we found that a syntenic gene block on mammalian chromosome X that includes Synapsin 1 (SYN1) is missing in the genomes of both zebra finch and chicken, but present in lizards. To gain a more comprehensive understanding of the extent of possible avian gene losses, we decided to systematically apply this approach to the entire genome of birds. Specifically, we compared the syntenic arrangements of orthologous genes in the genomes of non-avian sauropsids as well as humans with those of birds, coupled with extensive BLAT/BLAST searches of avian genomes and manual verification of orthology for any resulting hits. Our reasoning was that genes present in non-avian sauropsids and humans but missing in a large number of distantly related birds, including those that were used to define the avian phylogenomic tree [8], likely represent gene losses that are characteristic of the avian lineage, rather than genomic features specific to lizard or to only a few bird species. We found that approximately 274 genes that are present in conserved syntenic blocks, or in close proximity to these blocks, at discrete chromosomal locations in non-avian sauropsids and mammals are absent in all birds examined. We also found that these genes are for the most part present in crocodilian genomes, indicating that the losses likely occurred within the dinosaur/avian lineage rather than in a more distant archosaur ancestor. A comprehensive bioinformatics analysis revealed that a substantial number of missing genes are associated with lethality or disease phenotypes that affect major tissues, organs, or systems in mice and/or humans. In several cases paralogous genes and/or biochemical or physiological adaptations that are present in birds may have provided compensation for these gene losses. We discuss the possible functional and evolutionary implications of this loss of protein coding genes.
Results Evidence for a large-scale loss of syntenic protein coding genes in birds Starting with the complete set of gene model predictions from Ensembl (e71), we first conducted a comprehensive comparative genomics analysis to identify orthologous gene sets in humans (Homo sapiens), a lizard (green anole; Anolis carolinensis) representing a non-avian sauropsid, a galliform (chicken; Gallus gallus) representing a basal avian order with a high quality genome assembly, and an oscine passeriform (zebra finch; Taeniopygia guttata; Figure 1). We initially focused on chicken and zebra finch, since these represented the best assembled and curated avian genomes available in Ensembl at the time we began this study. Out of 18,596 protein coding genes in lizard, 12,113 are predicted to have 1-to-1 orthologs in humans. Of these, only 10,554 also have 1-to-1 orthologs in chicken and/or zebra finch, thus revealing a total of 1,559 genes that are potential candidates for missing genes in birds (Table 1A). We next aligned side by side the entire set of 1-to-1 orthologs between humans and lizards based on chromosomal location with the corresponding orthologs from birds to search for cases where conserved genes in humans and lizards were missing in both avian species. In several cases we also examined the corresponding regions in the Painted turtle genome (Chrysemys picta bellii), to help establish synteny in regions that are poorly assembled in the lizard genome. We found that 537 out of the 1,559 putative missing genes in birds cluster into approximately 100 conserved syntenic blocks in lizard and humans. The approximately 1,000 remaining candidate missing genes occur as singletons throughout non-avian genomes, or are associated with segments that have not been included in the main avian assemblies (that is, Chr_Unk; see Methods for details). It is thus not possible to conclusively establish orthology, or whether these other missing genes are true singletons or part of syntenic blocks. In contrast, the missing syntenic gene blocks are relatively large (typically >80,000 bp), thus their absence can be verified with high confidence in a high quality genome like that of the chicken. We decided to focus our efforts on these missing blocks, as it is less likely that they are present in unsequenced or unassembled segments of avian genomes. We next conducted exhaustive verification steps (see Methods) to confirm that the genes in the identified syntenic blocks are indeed missing in birds. This effort (summarized in Table 1B) corrected a large number of misannotated gene predictions and cloned mRNAs in birds, while identifying several previously unknown orthologs and paralogs. First, to our initial list we added 25 genes that were not predicted by Ensembl in lizard. Based on curation of other databases (for example, RefSeqs) and/or cross-species BLAT alignments, we found that these genes are truncated by sequence gaps but are present in the correct synteny in lizard (for example, TSPN16 in Figure 2; 'no model' entries in lizard in Additional file 1: Table S1). We also added 50 genes that have an unannotated lizard Ensembl model but whose correct orthology could be established based on synteny (Additional file 1: Table S1, gene models indicated by a '†').
Next, we found that 28 genes on our missing list have avian entries in Entrez Gene, Ensembl, or RefSeq but these are misannotated, corresponding instead to a related family member (Additional file 1: Table S2); 14 of these represent close but previously uncharacterized paralogs (Additional file 1: Table S3). Next, we removed 89 genes from our missing list that were not predicted in avian genomes by Ensembl, but that we found to be present in birds based on a manual verification of entries in Entrez Gene, RefSeq or NCBI avian mRNAs (Additional file 1: Table S4A). We also removed 75 genes that were not previously described in birds, but that we found to be present based on BLAT or BLAST searches of avian genome assemblies (including turkey, medium ground finch, and budgerigar), or avian EST databases (Additional file 1: Table S4B). In most cases these results provide a first demonstration of the existence of these genes in birds. In contrast, 114 genes in our missing list gave significant hits in cross-species BLAT searches of the chicken or zebra finch genomes using the lizard Ensembl gene models as queries; however, all hits were to related gene family members (Additional file 1: Table S5) or to close paralogs (Additional file 1: Table S3). Lastly, we found that a subset (174) of our candidate avian missing genes are present in one or several avian genomes recently assembled and submitted to NCBI, including those sequenced as part of the Avian Phylogenomics Consortium; the evidence derives from extensive further curation of RefSeqs (Additional file 1: Table S6A) and tBLASTn searches of WGS databases (Additional file 1: Table S6B). Of note, the analysis included a ratite (ostrich), indicating that this gene subset is also largely present in basal paleognaths. For two genes, CATSPERB and CCAR12, the only evidence for their presence among birds comes from a ratite, suggesting that they were likely present in basal paleognaths and possibly lost in modern neognaths.
Table 1. Identification and verification of the genes missing in birds (gene models from Ensembl release e71).
A. Identifying candidate missing genes in birds:
12,113 - Lizard models with 1-to-1 or apparent 1-to-1 orthologs in humans (GRH37)
10,554 - Lizard models with 1-to-1 or apparent 1-to-1 orthologs in chicken (WASHUC2) and/or zebra finch (taeGut3.2.4)
1,559 - Lizard models with no apparent orthologs in birds
B. Confirming gene loss in missing syntenic blocks in birds:
537 - Initial set of candidate avian missing genes that are present in human/lizard syntenic blocks
+25 - Additional candidate missing genes that were not predicted by Ensembl, but were identified in the lizard genome ('no model' entries in Tables S1A and S6)
+50 - Additional candidate missing genes with an incorrectly annotated lizard model ('†' entries in Table S1)
-89 - Genes found in birds based on Entrez Gene, RefSeq, and cloned mRNA databases (Table S4A)
-75 - Genes found in birds based on lizard/human mRNA and protein BLAT/BLAST searches of avian genomes, trace archives, and EST/mRNA databases (Table S4B)
-174 - Genes found in birds based on alligator/lizard/human protein tBLASTn searches of 60 avian whole genome shotgun contigs or evidence based on RefSeqs in these species (Table S6)
274 - Final set of avian missing genes (Table S1)
C. Final breakdown of the curated set of genes missing in birds:
162 - Genes that are part of missing syntenic blocks (Table S1A)
112 - Genes that are in close proximity to missing syntenic blocks (Table S1B)
274 - Total avian missing genes
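As a quick sanity check, the counts in the reconstructed Table 1 are internally consistent, which a few lines of Python confirm:

```python
# Verify the bookkeeping of Table 1 (counts as given in the text above).
assert 12113 - 10554 == 1559                   # A: candidates with no avian ortholog
assert 537 + 25 + 50 - 89 - 75 - 174 == 274    # B: additions minus removals
assert 162 + 112 == 274                        # C: in blocks + near blocks
print("Table 1 totals are internally consistent")
```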
We also note that for several genes in this subset, direct confirmation of orthology was not possible as the hits were to segments that do not allow synteny verification. We took, however, a conservative approach and removed from our avian missing gene list all genes that had a significant avian hit that preferentially cross-aligned back to the correct ortholog in a non-avian species (that is, a reciprocal best alignment criterion). It is important to note that this subset of genes (Additional file 1: Table S6) cannot be found in the chicken genome. The chicken assembly we have analyzed (galGal4; 2011) is currently the best-assembled and most completely sequenced avian genome, with much shorter and fewer gaps, and is thus more complete than the version described in Hillier et al. [5]. Accordingly, this latest assembly contains the orthologs for many conserved genes that could not be found in the previous assembly (for example, [20]), and yielded significant BLAT alignments for approximately 96% of genes from a positive control search set consisting of randomly selected lizard gene models with known orthologs in birds. Lastly, this subset of genes found in other avian species cannot be found in the chicken transcriptome databases. These observations suggest that chicken (or possibly galliformes) may have undergone further syntenic gene losses compared to other birds. As our main goal was to identify genomic losses shared by all birds, these genes are not considered further here and will be the focus of future studies. Out of these efforts, we determined with high confidence that 274 genes are missing in birds. Of these, 162 are clustered in blocks that have identical arrangements in lizard and humans (example in Figure 2; full list in Additional file 1: Table S1A). Altogether, these avian missing blocks amount to 3.92 and 7.42 Mb in humans and lizard, respectively (see Methods for details). The other 112 confirmed avian missing genes are in close proximity to these syntenic blocks (full list in Additional file 1: Table S1B). All these genes currently have no corresponding entries in any avian database. It is important to note that we used permissive search filters followed by extensive manual verification. Furthermore, the successful search of the chicken assembly with a control gene set, comprised of randomly selected genes that are 1-to-1 orthologs in lizard and human and are present in birds, indicates that the use of lizard models and the settings and criteria in our cross-species searches were adequate and sensitive enough to detect the corresponding orthologs, if present, in a well assembled avian genome. We also note that the lizard and human models used in cross-species alignments readily identified crocodilian orthologs (see also the section on crocodilians below), even in cases where models had low conservation (for example, orthologs that failed to cross-align or that have low percent sequence identity when comparing lizard and humans). Nonetheless, to minimize the concern that we might have missed genes due to low sequence conservation, our searches for low-conservation genes in avian WGS databases used queries from multiple species, including from crocodilians. Lastly, we note that 19 genes in our curated missing set have been previously reported as missing in different bird species by independent searches of the genome databases or by a variety of molecular or protein biochemistry methods (Additional file 1: Table S7), lending further support to the validity of our curated list of avian missing genes.
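The reciprocal best alignment criterion used above can be summarized in a few lines; the sketch below uses hypothetical gene names and best-hit tables, abstracting away the actual BLAT/BLAST scoring:

```python
def reciprocal_best_hits(avian_to_nonavian, nonavian_to_avian):
    """Keep only pairs that are each other's best alignment: an avian hit
    is accepted if its best non-avian target aligns back to that hit."""
    return {a: b for a, b in avian_to_nonavian.items()
            if nonavian_to_avian.get(b) == a}

# Hypothetical example: hit_1 cross-aligns back to the correct lizard
# ortholog and passes; hit_2's target prefers a paralog and is rejected.
avian_to_lizard = {"hit_1": "SYN1", "hit_2": "ATP6AP1"}
lizard_to_avian = {"SYN1": "hit_1", "ATP6AP1": "paralog_hit"}
print(reciprocal_best_hits(avian_to_lizard, lizard_to_avian))  # {'hit_1': 'SYN1'}
```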
To further rule out the possibility that we might not be detecting some genes because they are short and fast diverging (that is, low conservation when comparing orthologs across vertebrate groups), we conducted further analyses to compare the relative distributions of the lengths of coding sequences (CDS) for genes in the avian missing gene set versus those derived from the entire set of lizard genes present in birds. We found that the size distributions are similar in shape and do not differ significantly (two-tailed ANOVA with log-normalized values; P = 0.3; Additional file 3: Figure S2A). Moreover, the relative percentages of short genes (that is, genes <500 bp) are nearly identical across the two gene sets (9% vs. 10% for the missing vs. present gene sets). In addition, we found no significant relationship between the sizes of the avian missing genes and the amino acid percent identity of the predicted proteins when comparing the human vs. lizard orthologs (Additional file 3: Figure S2B). Thus, there appears to be no obvious bias in the missing gene set towards either smaller (or larger) genes, or towards small genes that are highly divergent in regards to protein sequence. Syntenic gene losses localize to discrete chromosomal sites The genes we found to be missing in birds are not uniformly distributed across the genomes of non-avian species, but instead are concentrated in a small number of chromosomes. This asymmetry is clearly seen when plotting the number of missing syntenic blocks per chromosome (Figure 3A), and does not simply reflect differences in chromosome size. Instead, the distribution is significantly different from what would be expected if the blocks were uniformly distributed among the chromosomes according to chromosome size (X2 = 205.8, df = 22, P <0.0001 for human; X2 = 28.9, df = 12, P <0.0002 for lizard). In fact, only five of the 23 human chromosomes (chr19, X, 11, 14, 16), and two of the 12 lizard chromosomes (chr2 and LGf), have a greater number of deletion blocks than would be expected by chance, whereas the majority of the other chromosomes have fewer blocks than expected. In particular, human chr19, a very gene-rich chromosome, contains the majority of the missing blocks, despite being one of the shortest human chromosomes. A similar asymmetric distribution was observed by plotting the number of avian missing genes per chromosome (Figure 3B); this distribution differs from what would be expected based on the total numbers of genes present on each chromosome (X2 = 411.9, df = 22, P <0.0001 for human; X2 = 108.1, df = 12, P <0.0001 for lizard). We also see an asymmetry when plotting the number of avian missing genes per chromosome relative to the total number of 1-to-1 lizard/human orthologs on each chromosome (Figure 3C), or by plotting the number of missing genes per chromosome normalized by the total number of missing genes (Figure 3D, black bars). This latter distribution differs significantly from what would be expected from a randomly selected subset of genes derived from the 1:1:1 chicken:human:lizard ortholog set (Figure 3D, gray bars; n = 274 genes; X2 = 1131.8, df = 22, P <0.0001 for human; X2 = 191.0, df = 12, P <0.05 for lizard). Again this analysis demonstrates the large contributions of human chr19 and lizard chr2.
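The chromosome-level tests above compare observed counts of missing blocks (or genes) against expectations proportional to chromosome size (or gene content); the scipy sketch below illustrates that comparison with made-up counts and sizes rather than the study's actual values:

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical: missing-block counts for five chromosomes plus the rest,
# and the corresponding chromosome sizes in Mb.
blocks = np.array([54, 6, 9, 9, 8, 14])
sizes = np.array([59, 155, 135, 107, 90, 2500], dtype=float)

# Null model: blocks fall on each chromosome in proportion to its size.
expected = blocks.sum() * sizes / sizes.sum()
chi2, p = chisquare(blocks, f_exp=expected)
print(f"X2 = {chi2:.1f}, df = {len(blocks) - 1}, P = {p:.3g}")
```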
One can also note in lizard the large relative contribution of chrLGf, a microchromosome that contains only a small number of genes, and that a large number of avian missing blocks and genes localize to contigs that are unplaced in the current assembly (Figure 3A-D, right panels). Interestingly, the avian missing genes are also non-uniformly distributed along the length of each respective chromosome. In fact, most of the deletions cluster within small segments or domains, as visualized by mapping the locations of missing blocks on their respective chromosomes (Figure 4A), or by plotting the number of missing genes according to chromosomal location (examples in Figure 4B-D). When comparing the relative positions of the missing blocks in lizard vs. humans, one notices that these represent chromosomal regions that appear to have undergone extensive rearrangements between these organisms (Figure 4A). To investigate this issue further, we aligned the complete set of 1-to-1 orthologs according to human chromosome location, identified in humans the gene blocks immediately flanking each avian missing block, and examined in birds the relative positions of these flanking blocks (see Methods for details). Considering 17 individual missing blocks for which we could assign flanking blocks in birds to known chromosomal locations (that is, not chr_Unk), we found that the majority (n = 10) were present on different chromosomes in birds (example in Additional file 2: Figure S1A), and the remaining seven were located on the same avian chromosome, but the flanking blocks were out of order, in reverse order, or several megabases apart in comparison to their location in humans (Additional file 2: Figures S1B and S1C) and/or lizards (not shown). These results indicate that most of the avian missing syntenic blocks are located in chromosomal regions that appear to have undergone significant inter- and intra-chromosomal rearrangements when comparing humans and birds. We also found that the average size of the avian missing syntenic blocks is almost twice as large in lizards as compared to humans (142.7 ± 20.2 vs. 75.6 ± 12.4 Kb; mean ± SEM; Wilcoxon matched-pairs signed-ranks test; P <0.001), a difference that can be observed when plotting side by side the size distributions of the syntenic blocks in the two species (Figure 5A). Reflecting this difference, the cumulative size of the missing segments in the lizard genome is also considerably larger than that observed in humans (7.42 vs. 3.92 Mb; Figure 5B). Furthermore, this species difference is largely due to size differences in avian missing syntenic blocks that occur on just a few human chromosomes, including 19 (lizard vs. human average block sizes for human chr19: 150.0 ± 72.3 vs. 84.9 ± 21.6 Kb, P <0.001; Wilcoxon matched-pairs signed-ranks tests). Consistent with this finding, of the approximately 4 Mb of human genomic DNA that corresponds to the missing syntenic blocks in birds, approximately 90% is derived from combined losses on chr19 (54.0%), X (9.2%), 11 (9.1%), 14 (9.1%), and 16 (7.5%). Estimating gene loss in avian ancestors We next searched for the 274 genes confirmed to be missing in birds in two recently available crocodilian genomes, the American alligator (Alligator mississippiensis) and saltwater crocodile (Crocodylus porosus) [8,11]. To establish a baseline, we first reasoned that genes present in birds, lizard, and humans are also highly likely to be present in crocodilians.
To test this prediction, we performed BLAT alignments of our control set of lizard gene models with known orthologs in chicken (that is, the positive control gene set) to the alligator genome. We found that 91% of these genes yielded significant hits to alligator or crocodile. (Figure legend fragment: gray blocks denote human blocks that are on unplaced contigs in lizard; the boxed segments in the expanded views refer to the missing block example presented in Figure 2.) Next, we BLAT-aligned the entire set of 274 lizard gene models corresponding to avian missing genes to both the alligator and crocodile genomes. We found significant hits in alligator and/or crocodile (Additional file 1: Table S8) for 154 (approximately 56%) queries. More recently, as RefSeq annotations have become available for crocodilians, we searched the remaining genes on our avian missing gene list, and manually verified the presence of another 83 genes, at the correct synteny. Thus the vast majority (approximately 86%) of the avian missing genes are present in the current crocodilian assemblies. While we have found no convincing evidence for the remaining 38 genes, we note that these assemblies are still largely incomplete, with significant gaps and low quality regions; thus it is very likely that we are underestimating the presence of avian missing genes in crocodiles. To better understand the uniqueness of these gene losses among vertebrates, we further examined the orthology of the avian missing gene set based on a detailed analysis of Ensembl gene models. While by definition 100% of these genes are present in non-avian amniotes (that is, humans and lizard), we found that approximately 94% to 95% are present in sarcopterygians (that is, coelacanth, Xenopus), and approximately 90% are present in teleosts (zebrafish, fugu). Thus, the majority of missing avian genes are conserved, and were likely present in a fish ancestor. Moreover, we found that only a small subset of the avian missing genes were also lost in any of the non-amniote vertebrate lineages, including approximately 1% in fish, approximately 3% in coelacanth, and approximately 10% in Xenopus, the latter likely being an overestimate due to the relatively poor quality of the genome assembly and predictions. Most importantly, rather than occurring as clusters in syntenic blocks, all of these losses appear to be distributed throughout the respective genomes. Thus, the extensive loss of genes in syntenic blocks appears to have been a unique phenomenon that occurred only in birds, or within an archosaur organism within the dinosaurian/avian lineage ancestral to extant birds. Bioinformatics analysis of avian missing genes We next conducted bioinformatics analyses to assess the potential functional impact of the observed gene losses on the dinosaur/avian lineage. A functional enrichment analysis of the missing gene set using Ingenuity Pathway Analysis (IPA; see Methods for details) revealed a range of biological function categories that are significantly enriched (P <0.05; Figure 6A and Table 2A). This included clusters of genes associated with functional categories such as inflammatory response and gastrointestinal disease, molecular and cellular functions such as free radical scavenging, and physiological system development and function such as tissue morphology, humoral immune response, and immune cell trafficking (Table 2A).
Further analysis of enriched specific functional categories revealed that many of the missing genes participate in major cellular functions and/or are implicated in severe human hereditary diseases and disorders (Additional file 1: Table S9). This included genes associated with cell growth and proliferation, hereditary disorders such as X-linked mental retardation, leukocyte adhesion deficiency types I and III, and X-linked spinocerebellar ataxia type 1, as well as hematological system development and function, immune function, and nervous system development. The missing avian genes are also enriched in a number of canonical pathways that regulate the functions of a wide array of organs and signaling systems. The most significant pathways (n = 22; P <0.005) are presented in Table 2B, and include protein kinase A signaling, G-protein coupled receptor signaling, T cell receptor and anergic T lymphocyte regulation, estrogen-dependent breast cancer and GNRH signaling, as well as pathways related to cardiac hypertrophy and melanocyte development, to name a few. To examine whether these cellular functions and/or pathways are specific to the avian missing genes or more generally associated with any set of genes of comparable overall size and syntenic organization, we performed a parallel IPA on control gene sets (see Methods for details). These control sets represent a reasonable expectation for the range of phenotypic enrichments that might be expected (that is, a null expectation) if a set of losses occurred in randomly deleted syntenic gene clusters located on the same chromosomes as the missing gene set. When we compared broad enrichment categories we found that several were shared with those seen for the missing gene set (listed in Table 2A), including cancer, inflammatory response, and endocrine system disorders. However, careful examination of specific functional categories revealed remarkably few overlaps. In fact, of the top 58 categories found in the missing gene set (as established by a P <0.01 cutoff; Additional file 1: Table S9), only one (X-linked mental retardation) was also present in one of the control sets at the same cutoff, thus suggesting that the majority of these annotations are specific to the missing gene set. Similarly, when we compared the combined set of significantly enriched (P <0.005) canonical pathways associated with either of the control gene sets with the top canonical pathways enriched in the missing gene set (listed in Table 2B), we found no overlapping pathways, further indicating that many of the pathways predicted to be associated with the loss of the avian missing gene set are uniquely associated with it. Studies characterizing the effects of spontaneous, induced, or genetically engineered mutations provide the best kind of inferential evidence for understanding the potential impact of gene losses. We therefore retrieved from the Mouse Genome Informatics (MGI) database [22] the sets of phenotypes that have been observed in rodents in association with manipulations of the avian missing genes. This included cases where just a single knockout was sufficient to cause the phenotype, as well as others where multiple knockouts were required. We then classified the retrieved entries according to affected tissues, organs, and systems, adding genes based on searches of Entrez Gene and the scientific literature (for example, PubMed, Google Scholar).
This analysis revealed that 98 genes are associated with at least one phenotype that affects a major organ and/or system, including the central and peripheral nervous systems, the immune system, bone and cartilage, the reproductive system, lungs and respiration, and regulation of weight and appetite (Figure 6B; Additional file 1: Table S10A); a subset of these phenotypes is only present when genes are knocked out together with other related genes. Interestingly, a small number of avian missing genes (approximately 5%) are related to mouse phenotypes associated with tissues and/or organ functions that are absent in birds, including hair, teeth, placenta, and lactation (Additional file 1: Table S11). Finally, 43 of the missing genes are associated with a lethal phenotype in mice, including partial and complete embryonic or perinatal lethality, or premature death. Of these, 27 have a lethal phenotype when individually knocked out (Additional file 1: Table S12A), and 16 are only lethal when knocked out in combination with one or more additional genes (Additional file 1: Table S12B). Since a large number of the avian missing genes are associated with a severe and/or lethal phenotype in mice, we wondered whether these associations are unique (that is, non-random) with respect to the genes missing in birds, or more generally associated with any comparably sized and organized set of genes. To address this question we applied a permutation analysis, and performed MGI phenotype classification on 1,000 independent control gene sets (see Methods for details). We found that the number of genes associated with a mouse phenotype in the missing gene set (n = 98, excluding 'no abnormal phenotype detected') is significantly smaller than would be expected based on an analysis of the permutation dataset (Additional file 4: Figure S3A; two-sided permutation test; P = 0.021). We also compared the number of genes associated with each phenotype in both gene sets, and found 11 phenotypes that are significantly under- or over-enriched in the missing gene set (two-tailed permutation test with Benjamini-Hochberg false discovery rate (FDR) correction for multiple comparisons; q <0.05). The complete list of mouse phenotypes for which the number of expected and observed genes differed by at least one gene is presented in Additional file 1: Table S10B. Among the most significant phenotypes are those associated with body weight and energy metabolism (that is, MP:0003960, increased lean body mass; MP:0009289, decreased epididymal fat pad weight; MP:0010400, increased liver glycogen level), immune function (that is, MP:0008050, decreased memory T cell number; MP:0008765, decreased mast cell degranulation), and lung function (that is, MP:0010809, abnormal Clara cell morphology; MP:0011649, immotile respiratory cilia). We also found a strong trend (two-tailed permutation test without FDR correction; P <0.05) towards a greater association of genes with phenotypes related to lethality, including premature death and complete embryonic lethality, in the permutation set as compared to the avian missing gene set (Additional file 1: Table S10B). Lastly, a broad range of phenotypes differ numerically, but not statistically, or occur with similar frequency in both groups.
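The permutation logic used here (comparing an observed gene count with its distribution across 1,000 random control gene sets, then correcting across phenotypes with Benjamini-Hochberg) can be sketched as follows; the null counts below are simulated, and the two-sided definition based on distance from the null mean is one common convention rather than necessarily the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(observed, null_counts):
    """Two-sided permutation p-value, defined here as the fraction of
    permutation replicates at least as far from the null mean as the
    observed value."""
    null_counts = np.asarray(null_counts, dtype=float)
    d = abs(observed - null_counts.mean())
    return float(np.mean(np.abs(null_counts - null_counts.mean()) >= d))

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment (step-up): q(i) is the minimum
    of p(j) * n / j over all ranks j >= i of the sorted p-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    q = p[order] * len(p) / (np.arange(len(p)) + 1)
    q = np.minimum.accumulate(q[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(q)
    out[order] = np.minimum(q, 1.0)
    return out

# Hypothetical: for one phenotype, gene counts in 1,000 random control
# sets vs. 98 genes observed in the missing gene set.
null = rng.poisson(120, size=1000)
print(permutation_pvalue(98, null))
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.3]))
```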
The latter correspond to phenotypes that would be generally associated with the loss of similarly sized and organized sets of genes in other regions of the chromosomes of amniotes (Additional file 5: Figure S4). Since the IPA provided suggestive evidence that many gene losses are associated with severe hereditary diseases in humans, we next consulted the Online Mendelian Inheritance in Man (OMIM) database [23] and conducted further keyword searches in Entrez Gene. We found that a total of 32 genes are associated with a specific genetic disorder or syndrome in humans. We then verified each OMIM entry and classified cases where the disease was associated with the loss of a gene or gene function (Additional file 1: Table S13A), or caused by a gain of function mutation (Additional file 1: Table S13B). In most cases the loss of function mutations were associated with autosomal recessive disorders, but there were also cases of X-linked disorders or autosomal dominant haploinsufficiency. Importantly, a subset of the genes linked to human disorders is also associated with a lethal phenotype in mice (Figure 6C). Given the severity of many of these diseases, we wondered whether the observed set might contain fewer OMIM disease terms than would be expected by chance. Such a finding would be consistent with the hypothesis that gene losses associated with highly deleterious phenotypes are less likely to be tolerated, and thus will be less frequent among genes that are actually missing in birds than in control sets. Indeed, when we compared the number of OMIM disease terms associated with the 1,000 permutations of control sets versus the missing gene set, we found that the missing gene set contained significantly fewer OMIM disease terms than would be expected by chance (two-sided permutation test; P <0.001; Additional file 4: Figure S3B). We also note that, as might be expected, although the control gene sets were associated with a wide range of severe disease phenotypes, we did not find any cases where a specific disease term that was associated with a missing gene was also associated with a gene from the control sets. Thus, we conclude that the set of disease traits associated with the identified avian missing genes is both specific and non-random. Since the disease phenotypes (and lethality) associated with the gene disruptions in mammals did not obviously align with known avian traits, we hypothesized that perhaps the genetic background of birds was capable of providing compensation for the avian missing genes. Evidence for compensation, if found, would be of interest since it would indicate that compensatory genetic or functional mechanisms might underlie avian adaptations, and also suggest possible treatments or cures for lethal and morbid conditions in humans. To explore this possibility we conducted a comparative functional enrichment analysis (Blast2GO) [24] in order to compare the impact of the loss of the same set of avian missing genes against the genetic backgrounds of chicken, humans, and lizard. We first identified the set of enriched GO terms associated with the avian missing genes compared to the entire universe of extant protein coding genes for each of the species analyzed.
We reasoned that GO term enrichments in a given organism reflect functions that are over-represented in the missing gene set compared to the genetic background of that organism, and thus are likely not functionally compensated within that genetic background; we note that for lizard and humans the analyses were for hypothetical deletions. We found statistically significant (P <0.05) GO enrichments in all three genomic contexts, but also found that fewer overall GO enrichments (that is, all GO terms associated with biological processes or molecular functions) were associated with the analysis in chicken (n = 235) than with comparable analyses in humans (n = 294) or lizard (n = 338). These differences do not reflect obvious biases in either the proportion of BLASTp-annotated sequences (85.5%, 85.9%, and 77.3% for chicken, lizard, and human, respectively), or the average number of GO terms that could be assigned to each gene by Blast2GO (7.9 vs. 7.6 vs. 10.2 for chicken, lizard, and human). We next compared the resulting GO term enrichments across species, to separate organism-specific enriched terms, representing functions that are likely to be disrupted only in one given lineage, from shared enriched terms, representing functions likely to be disrupted in multiple lineages. This analysis identified three groups of terms (detailed in Additional file 1: Table S14). Group A terms were significantly enriched in non-avian species (Figure 7, yellow in the Venn diagram), representing functions/pathways that might be disrupted only if the gene loss were to have occurred in non-avian organisms. This group is of particular interest since it identifies functional terms where the corresponding gene loss may have been compensated by the genetic background of birds. Group B terms were significantly enriched in birds, where the gene loss would likely not have been compensated by the genetic background of birds. We subdivided these further into terms enriched in birds only (Group B1: Figure 7, dark blue), likely representing functions/pathways that might be affected only in the context of avian genomes, and enriched terms shared between birds and humans and/or lizard (Group B2: Figure 7, green), likely representing pathways that would be affected in multiple or all species, and for which there are no apparent compensations in these species. Group C terms were enriched in chicken and lizard, or in lizard only, but not in humans (Figure 7, gray), representing functions that would in principle be affected in sauropsids but likely not in mammals, possibly due to compensation in the latter. We analyzed Groups A and B further (Additional file 1: Table S15), focusing on genes for which functional interpretations can be inferred from genetic studies in mouse and humans (Additional file 1: Tables S12 and S13), as well as genes that have been previously identified as missing in birds and/or that result in a unique avian trait (Additional file 1: Table S6). Since few data are available concerning phenotypes that might result from the loss or disruption of specific genes in lizard (for example, genotype/phenotype studies), inferring predictions based on GO enrichments in Group C is difficult and not a main objective of our study; thus this group of terms was not pursued further.
The set of terms significantly enriched in humans only or in humans/lizard but not in chicken (Group A; Figure 7, yellow; Additional file 1: Table S14A) was found to be associated with a considerable number of genes that have lethal knockout phenotypes in mice and/or severe human disease phenotypes affecting a range of tissues and organs (skin, muscle, bone, nervous system, lungs, immune system, among others; Additional file 1: Table S15A). In contrast, terms exclusively enriched in chicken (Group B1) were almost never associated with lethal genes or a severe human disease phenotype. Thus, our functional enrichment analysis was robust enough to detect enrichments of phenotypes/genes that may be exclusively deleterious in mammals. The fact that terms in Group A were not enriched in chicken suggests that birds somehow compensated for the loss of these vital genes. Indeed, we found some examples within this group where the missing gene has been linked to a change in the expression or post-translational modification of an unrelated gene (for example, DCN/BGN). In other cases, a close paralog (for example, ATP6AP1L/ATP6AP1; SLC6A8L/SLC6A8; Additional file 1: Table S3) or a related family member may have provided compensation. We found just three terms that were exclusively enriched in chicken (Group B1; Figure 7, dark blue; Additional file 1: Table S14), representing functions that may not have been compensated only in birds, and thus could be related to distinctly avian traits. (Figure 7 legend: Assessing the functional impact of the avian missing genes in the context of the chicken, human, and lizard genomes. Comparative functional enrichment analysis was used to compare the impact of the same set of avian missing genes against the genetic backgrounds of chicken, humans, and lizard; GO term enrichments for pairwise comparisons were identified by Fisher's test (P <0.05), and a Venn diagram was used to identify Groups A, B1, B2, and C as defined above.) Genes associated with GO terms in Group B1 (Additional file 1: Table S15B1) were: NPHS1, a gene whose loss in humans leads to nephrosis; NR1H2, a key regulator of macrophage function; and KIRREL2, a novel immunoglobulin gene that is expressed chiefly in beta cells of the pancreatic islets. Importantly, terms enriched in human and/or lizard were never associated with this set of genes.
If true, then we predicted that analyzing a similarly sized set of genes selected at random from the chicken genome should yield a greater number of avian enrichments (Group B from the Blast2GO analysis). We tested this possibility by conducting a separate functional enrichment analysis on two randomly selected sets of 274 genes with the same relative distribution across human chromosomes as the missing gene set. As predicted, we found a nearly two-fold increase in the number of terms in the control set that were significantly enriched in chicken compared to the missing gene set (that is, Group B genes), thus suggesting that birds may have compensated for the actual gene losses in at least some cases. Of the GO term enrichments that are shared by all three species (Group B2, Figure 7, green; Additional file 1: Table S15B2), a subset (31%) are also associated with genes that are either lethal in mice, or related to human genetic diseases or disorders (Additional file 1: Tables S12 and S13), including CEBPE (congenital granule deficiency), STXBP2 (hemophagocytic lymphohistiocytosis), and ATP2B3 (spinocerebellar ataxia). Since these terms are also enriched in chicken, our analysis suggests that the genetic background of birds may not compensate for the missing gene, raising intriguing questions as to how birds might have adapted to and survived the disruption of these vital functions. In other cases within Group B2 it appears that the gene loss would have been non-lethal or not associated with a highly deleterious phenotype in mammals, suggesting that the disrupted function might also be tolerated in birds (representative examples are THTPA, CYP2F1; see also Discussion). Further analysis of this group could reveal associations with other characteristic avian traits, or genetic compensatory mechanisms that were not captured by our functional enrichment analysis. To further investigate the possible functional impact of the avian missing genes, we next searched for evidence of expressed sequence tag (EST) enrichment in a human gene expression database (Tissue-specific Gene Expression and Regulation (TiGER)) [25]. Not surprisingly, the majority of genes that have known functions (based on inclusion in Additional file 1: Tables S9-13) showed enriched expression in at least one tissue type (Additional file 1: Table S16A). Furthermore, of the 87 genes with no known function, 26 showed enriched expression in at least one tissue type, and several tissues (for example, cervix, eye, spleen, thymus, and small intestine) were found to express several of these genes (Additional file 1: Table S16B). For the remaining 61 genes (Additional file 1: Table S16C), there is currently no information with regards to their tissue-specific expression or functional classification, as these have not yet been studied in detail in any organism. Thus, our analysis likely under-represents the functional impact that the set of missing genes may have had for the avian lineage.

Some missing genes are members of multi-gene families and/or have close paralogs in birds

To identify possible sources of genetic compensation for avian missing genes we next conducted a genome-wide screening for possible avian paralogs, and an orthogroup classification to identify genes that are members of extended multi-gene families; the latter also included a comparative analysis to determine whether orthogroups have undergone expansions in the avian lineage (see Methods for details).
A first paralog search using BLAT alignments of the lizard or human ortholog to the chicken genome revealed that eight missing genes have close paralogs in birds (Figure 8; details in Additional file 1: Table S3A). The majority of these are previously uncharacterized in birds, but we found them to be present in lizard and/or in other non-avian vertebrate lineages. These paralog pairs (or triads) are likely to result from duplications in an ancestral tetrapod (not shown). In nearly all cases the novel paralogous gene in a pair (or triad) is absent in humans, although some are present in at least one non-eutherian mammal (Figure 8A). Several of these cases thus illustrate reciprocal gene losses between birds and mammals. Some of the novel paralogs have been misannotated as the missing avian ortholog, but such errors were corrected by our syntenic analyses (Additional file 1: Table S2). As a representative example, ATP6AP1, which is associated with GO terms enriched in humans and lizard but not birds, is missing in birds and present in the other vertebrate lineages examined. A previously unidentified paralog (ATP6AP1L2) is present in sauropsids (birds and lizard) but missing in mammals, and a different paralog (ATP6AP1L1) is present in all extant tetrapods (Figure 8B). The absence of ATP6AP1 in birds results from an avian syntenic block loss, and is unrelated to the absence of ATP6AP1L2 in mammals, including non-eutherians (Figure 8C). To address the possibility that paralogs might be able to functionally compensate for the loss of a given ortholog in birds, we analyzed each sequence pair or triad using NCBI's Conserved Domains Database [26]. In nearly all cases, we found that structural and/or functional domains are conserved across paralogs (examples in Figure 8D; other cases in Additional file 1: Table S3A). With the recent availability of crocodilian genomes, we identified another six cases where the evidence of a novel paralog of a missing avian gene derives from a gene that is present in alligator and in some bird species, though typically not in chicken (Additional file 1: Table S3B); in these cases a predicted model for the novel paralog is not available, thus an analysis of domain conservation was not carried out. Since we directly and exhaustively searched the chicken and other avian genomes by BLAT alignments, we have likely identified the full complement of possible paralogs present in extant birds due to an ancestral duplication of genes in the missing gene list. For the orthogroup analysis we focused on the 40 genes with deleterious phenotypes that were associated with term enrichments in our Blast2GO analysis (Additional file 1: Table S15). Using the OrthoMCL database [27], we assigned each gene to a distinct orthogroup (Additional file 1: Table S17), and using OrthoMCL phyletic pattern searches we quantified the number of orthologs present in each orthogroup for a select set of organisms (for example, fish, lizard, platypus, chicken, and humans). These searches revealed that in chicken, 20 orthogroups have one or more members that could have provided compensation for the gene loss. In contrast, the other 20 orthogroups currently have no membership (that is, a 0 value, Additional file 1: Table S17), making it unlikely that a related gene family member provided functional compensation. Moreover, we found no evidence that any of the missing gene orthogroups has expanded in chicken compared to lizard or human.
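As a minimal illustration of this kind of phyletic tally, the R sketch below counts orthogroup membership per species from a hypothetical tab-delimited export of the OrthoMCL phyletic pattern searches; the file name and column names are assumptions for the example, not the actual data files used in the study.

# Minimal sketch of the orthogroup membership tally described above.
# Assumes a hypothetical tab-delimited export of OrthoMCL phyletic
# pattern searches, one row per orthogroup, with per-species ortholog
# counts; all file and column names here are illustrative.
counts <- read.delim("orthogroup_counts.tsv", stringsAsFactors = FALSE)
# Expected columns: orthogroup, gene, fish, lizard, platypus, chicken, human

# Orthogroups with at least one remaining chicken member, that is, a
# related family member that could in principle provide compensation
compensated <- subset(counts, chicken >= 1)

# Orthogroups with no chicken membership (0 value), where compensation
# by a family member is unlikely
uncompensated <- subset(counts, chicken == 0)

cat(nrow(compensated), "orthogroups retain chicken members;",
    nrow(uncompensated), "have none.\n")

# Flag candidate avian expansions: chicken count exceeding both the
# lizard and human counts (none were found in the actual analysis)
expanded <- subset(counts, chicken > lizard & chicken > human)
print(expanded$orthogroup)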
We also note that none of the orthogroups previously reported as expanded in birds compared to mammals (see Figure S3 in [4] and Additional file 1: Table S6 and Figure 7 in [5]) are related to the missing genes identified in the present study.

Discussion

We have presented genomic evidence for the avian loss of 274 protein coding genes located within or in close proximity to conserved syntenic blocks with a clustered localization to discrete chromosomal domains in lizards and humans (human chr19, X; lizard chr2). The majority (86%) of these avian missing genes are present in the crocodilian lineage, and 90% to 95% are present in fish, coelacanth, and frog, suggesting that their loss occurred largely subsequent to the split of dinosaurs/birds from their archosaur ancestor. These avian missing genes are associated with the physiology of a broad range of organs and systems in mammals, as well as with lethality in rodents and severe genetic disorders in humans. Some of them provide plausible explanations for known avian traits, while others were likely compensated by elements of the avian genome, including novel paralogs. As discussed below, these findings have important implications for understanding several aspects of avian physiology and the evolution of avian traits and adaptations. They are also potentially important for developing novel animal models for human disease, and could be of relevance to the poultry industry.

Evidence supporting the loss of protein coding genes in birds

We have high confidence that the genes in our final curated set are absent in birds. Our approach was conservative, focusing on genes that are part of, or closely related to, syntenic deletion blocks within discrete chromosomal domains. We also excluded genes for which syntenic verification was not possible. While this approach likely underestimates the full extent of gene losses in birds, it effectively minimizes the chance that genes in our final set might be present, but undetected, in the avian genomes analyzed. Our approach included comprehensive manual searches and synteny verification in the most fully sequenced and annotated avian genomes (chicken, turkey, zebra finch, medium ground finch, and budgerigar), tBLASTn searches of the complete set of available whole genome shotgun contigs in NCBI (60) followed by manual verification of significant hits, and BLAST searches of avian EST/mRNA collections. Moreover, given the large average size of the missing syntenic blocks in lizards and humans, the cumulative size of these missing blocks, the high coverage of the latest chicken genome assembly based on combined Sanger, 454, and BAC sequences and improved assembly algorithms (18X; galGal4.0) [28], and the fact that the various other avian assemblies are largely based on yet a different sequencing technology (Illumina), it is extremely unlikely that the non-detection of the missing sequences in birds is due to lack of sequence coverage or assembly problems. Providing independent validation, previous studies that utilized independent database searches and/or molecular verification techniques (for example, PCR amplification, southern blot analysis, molecular cloning, or purification of protein or corresponding biological activity from avian tissues) have concluded that several genes on our missing gene list are absent in different bird species. We also note that our combined efforts resulted in a considerably more thorough and exhaustive curation of avian genomes.
Lastly, compared to other birds, the chicken genomic and transcriptome sequences lack a further subset of genes, which we suggest may represent losses specific to chicken or to Galliformes; since we were searching for genes whose absence is a general feature of birds, these were excluded from our final set. A possible concern is that our searches of avian genomes might have missed genes that are rapidly evolving and have highly divergent sequences across vertebrate lineages. In addition, recent studies suggest that short genes may be more rapidly evolving, which in some cases can lead to errors in the identification of orthologs in large phylogenies (for example, [29]). Indeed, sequences from some avian missing genes do not cross-align, and their predicted proteins show <50% identities between lizard and humans. However, we believe that these concerns are minimized for several reasons. First, the low conservation genes represent only a fraction of the avian missing genes; a much higher percentage of these genes were found to have surprisingly high conservation across non-avian organisms. Second, our analysis demonstrates that the missing genes are not disproportionately enriched in small genes (that is, <500 bp), when compared to the full complement of genes that are present in birds. Third, we find no evidence within the missing gene set for a correlation (either positive or negative) between gene length and the degree to which the gene has diverged across non-avian organisms. Finally, we note that: (1) even low conservation genes can be found in cross-species BLAT/BLAST alignments when they are present in an avian genome; (2) in their vast majority, low conservation genes from our list could easily be found in cross-species alignments with crocodilian genomes, which are phylogenetically closer to birds than other non-avian sauropsids; (3) we used probes from multiple species, including from crocodilians when available, as queries in our searches for low conservation genes in avian genomes. It is thus highly unlikely that our results can be explained by lack of detection of orthologous sequences in avian genome databases due to low sequence conservation. We compared our findings to the recently completed analysis that used BLAST alignments of human protein coding sequences to a set of 48 avian and five non-avian reptile genomes. That study identified 640 genes as missing or representing likely pseudogenes in modern birds ([7]; Additional file 1: Table S8). Surprisingly, the lists from the two studies have a relatively small overlap (91 genes), constituting approximately 33% of the genes we are reporting as missing in birds (Additional file 1: Table S1; genes discovered by both studies indicated by a '^'). While these studies partially corroborate each other, it is important to highlight the differences, which largely relate to the different approaches used. Here, we specifically screened for missing genes that are in highly conserved syntenic blocks in human and lizard, not just in sauropsids. While our initial search revealed >1,500 candidate missing genes, approximately 1,000 are either singletons or pairs in small unassembled segments (<80 kb) of the lizard genome, thus they were not investigated further due to the concern that they may be present in unsequenced portions of avian genomes. We also focused on the subset of protein coding genes of the human genome (12,000 out of 21,000) that have 1-to-1 orthologs in lizards.
This was necessary because relying on 1-to-many or many-to-many orthologies complicates substantially the task of syntenic verification and often leads to incorrect ortholog identification. We also used highly stringent criteria to confirm the validity of the missing genes, including comprehensive and manual searches of high quality avian genomes, genome trace archives, and EST/mRNA collections. This effort revealed that a large subset of the initial 538 candidate missing genes is actually present in birds (Table 1). Although limited manual curation was conducted in the Zhang et al. study [19], it was not done for all genomes given the large number of species examined. Indeed, we have found evidence that 35 of the genes reported as missing in that study are likely present in some birds. We note, though, that all of the species we interrogated were included in the Zhang et al. study, and that some of these 35 genes may be only partially present or may be pseudogenes. In sum, the present study provides a well-curated analysis of missing avian genes that is largely complementary to the findings of Zhang et al. Together, these studies may come close to identifying the full complement of genes that were lost in an avian lineage ancestor. As further higher quality sauropsid genomes become available, it should become possible to further refine the full extent and evolutionary history of gene losses specific to the avian lineage.

Evidence for syntenic gene loss

The syntenic blocks of missing genes in birds are mostly localized to discrete domains in lizard and human chromosomes. The flanking genes to most of these missing gene blocks in humans are either present on different chromosomes or in very different positions of the same chromosomes in birds. This observation hints that chromosomal rearrangements involving syntenic blocks may have been a main contributor to the loss of protein coding genes that we have discovered in birds, as opposed to the independent deletion of individual genes in an avian ancestor. Interestingly, human chromosome 19, a relatively short but highly gene-dense chromosome where rearrangements and segmental duplications are frequent [30], is the major location for the avian missing gene blocks. This is again consistent with the view that the avian losses were likely derived from extensive rearrangements of chromosomal segments in an ancestral species. In lizard, the majority of avian missing genes and corresponding blocks localize to small contigs that are unplaced in the current assembly. In fact, many of these unplaced contigs correspond to entire avian missing blocks, possibly resulting in an underestimate of the conserved deletion block size. We thus suspect that the actual size of the avian missing blocks, representing 'chunks' of an ancestral genome, may turn out to be even larger once a better lizard assembly becomes available. The set of avian missing genes is highly conserved throughout the vertebrate phylogeny, approximately 95% of them being present in sarcopterygians, and approximately 90% in teleosts. Thus, the majority of the missing avian genes were likely present in a sarcopterygian ancestor, and lost sometime after the split of dinosaurs and birds from their common archosaur ancestor. Moreover, only a small subset of these avian missing genes were lost in a non-amniote vertebrate lineage, where such losses were dispersed throughout the genome, and not in syntenic blocks. To our knowledge there are no reports of comparable syntenic gene losses in other vertebrates.
For example, although the teleosts are known to have undergone whole genome duplication (WGD) [31,32], and subsequently lost a significant number of protein coding genes [33], we have found no reports indicating that these losses were syntenic; instead, they appear to have occurred in a distributed manner throughout the genome. In fact, in a recent comparison between representative species of different teleost lineages (Tetraodon and zebrafish) [34], the losses of various paralogs were shown to be largely reciprocal, occurring in an interspersed and distributed manner on paralogous chromosomes instead of in syntenic blocks in each lineage. Thus, the loss of a substantial number of genes in conserved syntenic blocks that are localized to discrete segments of specific chromosomes (Figures 3 and 4) appears to be a uniquely avian phenomenon among vertebrates.

Refining the origins of the avian gene loss

We found evidence for the presence of a large proportion (86%) of the avian missing genes in crocodilians, an observation further supported by our previous detection of one of these genes (SYN1) in crocodile through PCR amplification [4]. Thus a substantial number of avian missing genes were lost after the split of dinosaurs/birds from crocodilians (Figure 1). This is also consistent with the suggestion that the genomes of sauropod dinosaurs, which were closer to theropods and therefore to extant birds, were also relatively compact, while those of ornithischian dinosaurs, which were closer relatives of crocodilians, were larger [15]. We note, however, that a detailed syntenic analysis of all significant hits to crocodilian genomes will be required in order to more definitively establish the orthology of these crocodilian loci. We also note that, in spite of reasonably good coverage of the crocodilian genomes (>70X), as attested by a large percentage (91%) of BLAT-alignment hits from the positive control gene set, several genes in our avian missing gene set were found in only one or the other of the two crocodilian species examined. While some genes may have been differentially lost across crocodilian species, or may have significantly diverged between lizard and crocodiles, it seems more likely that the current crocodilian genome assemblies are incomplete. Thus, the percentage of avian missing genes we detected in alligator/crocodile is likely an underestimate, and an even larger subset of these genes may actually be present in crocodilians. Alternatively, a small but significant subset of the avian missing genes we discovered may also be absent in crocodilians, and thus have resulted from a loss in an ancestral archosaur. More definitive answers to these possibilities await further completion and annotation of crocodilian genomes. Interestingly, except for two genes, all of the 274 genes missing in modern neognaths (all living birds with the exception of the paleognaths, that is, tinamous and ratites) are also missing in ostrich, a basal ratite. Thus, practically the entire set of avian missing genes was lost prior to the split between neognaths and paleognaths.

Functional implications of avian missing genes

How did birds adapt to and survive the loss of such a large number of protein coding genes, many of which are associated with vital functions and pathways in other tetrapods?
For the subset of genes linked to tissues, organs, or traits that are absent in extant birds (for example, teeth, hair, mammary glands, and placenta), their losses may not have been deleterious, and in some cases may have even co-evolved with the trait loss. For several other avian missing genes we found novel, previously undescribed paralogs. These paralogous pairs or triads are for the most part present in lizard and thus were likely present in ancestral amniotes, but the mammalian vs. avian lineages have retained different members. Most of these paralogs have nearly identical functional domains as the avian missing orthologs, and thus may have provided compensation if expressed in the correct target organ. For example, SLC6A8, which is linked to a creatine deficiency syndrome that causes mental retardation, severe speech delay, and seizures, and SLC7A7, whose loss causes lysinuric protein intolerance, are both missing in birds (OMIM), but have closely related paralogs that could provide compensation. In contrast, we have found that birds lack AVPR2, the kidney antidiuretic hormone receptor, whose loss in humans causes a genetic form of diabetes insipidus [35]. Although this loss could be functionally compensated by a close paralog (AVPR2L), which is missing in other lineages including mammals, birds possess a lower capacity to concentrate urine in response to blood hyper-osmolarity compared to mammals [36]. Thus, while AVPR2L may have provided some compensation for a highly detrimental gene loss, this compensation may be only partial. A much larger number of genes are associated with vital functions involving a range of important organ systems and pathways, and their loss would have been highly deleterious if it had occurred in other organisms. Since little is known about the function of most of these genes, particularly in the context of lizard and avian genomes, we decided to conduct a Blast2GO enrichment analysis. The goal was to gain a better understanding of some of the possible implications of gene loss in birds. According to our comparative Blast2GO gene classification, and considering the impact of gene loss in different genomic contexts, a considerable set of missing genes have GO annotations enriched in humans/lizard but not in birds, pointing to cases where the loss seems to have been compensated in the context of avian genomes, and thus likely well tolerated by birds. Several genes in this group are part of families or orthogroups, some with several members that may have provided compensation for specific losses. As an example, BCAT2 is absent in birds, but BCAT-related activity has been detected in avian tissues like muscle and liver [37], helping prevent deleterious hyperaminoacidemias in birds. This activity likely derives from a compensatory expanded expression of BCAT1, the other gene in this orthogroup, which encodes a cytosolic isoform that in mammals is expressed predominantly in brain and placenta [37]. Also consistent with this possibility, some avian missing genes are only lethal in mice when combined with a knockout of a related family member. Other genes in this group are not members of multi-gene families, but compensatory changes have been reported in the expression or biochemical properties of proteins from related but different families (for example, BGN/DCN).
In other cases, however, a possible avian compensatory mechanism and/or functional impact for the avian gene loss is unknown, including cases of severe disease or lethal phenotypes when the genes are deleted in other organisms, such as ABCD1 and PRX (central and peripheral demyelinating diseases), FGD1 (affecting bone growth), and FTSJ1 and SYP (X-linked mental retardation). Of particular interest are human disease-causing genes that are lethal in mice, which would create considerable difficulties in developing appropriate rodent models for their study. Future in-depth analysis of other genes in Group A will likely reveal further compensatory mechanisms that allowed birds to adapt to and tolerate their losses. This in turn could lead to basic insights into the pathophysiology of human genetic diseases, and potentially to novel avenues for the treatment and/or cure of these disorders. The loss of the set of potentially deleterious missing genes associated with enriched GO terms in birds only was likely compensated in other vertebrates but apparently not in birds. These genes possibly reflect traits that are specific to birds. As an intriguing example, NPHS1, which results in kidney nephrosis and disruption of the glomerular filtration barrier when functionally knocked out in humans, and KIRREL2, which is expressed in kidney and encodes the slit diaphragm protein Neph2/filtrin [38,39], are both missing in birds, possibly leading to a reduced control of glomerular filtration rates compared to mammals. These cases would help explain the lower capacity of birds to concentrate urine under a hyperosmotic challenge, and could relate to the emergence of the birds' ability to regulate water/electrolyte balance by modulating water release from red blood cells [40]. Combined with the avian lack of AVPR2, these losses suggest that the kidney is a major target system of avian missing genes. For other genes whose absence is potentially highly deleterious, the related GO term enrichments are shared by birds, lizards, and humans, suggesting that there are no apparent compensations in any of these genomic contexts. Indeed, most genes in this set belong to very small gene families and/or orthogroups with only one or no additional members. This again raises intriguing questions in terms of possible compensatory adaptations. Some cases are discussed in the next paragraphs. More than 20 missing genes are involved in erythropoiesis, the process of red blood cell production in the bone marrow, which could have important implications for the ability of birds to respond to hypoxic conditions. Interestingly, the products of two avian missing genes (EGLN2 and HIF3A) are known to suppress the cellular response to hypoxia [41-43]. A possible prediction is that hypoxia-responsive genes may be more highly expressed in avian tissues compared to other organisms, or be more rapidly elevated under hypoxic conditions. This in turn could potentially provide functional compensation for the absence of several genes involved in erythropoiesis. Several other missing genes in Group B2 appear to be tightly correlated with specific avian molecular or biochemical traits that are also worth mentioning. For example, the loss of PTGIR provides a likely explanation for the known and puzzling lack of responsiveness of chicken platelets to prostacyclin [44], the most potent anti-aggregation factor in mammals, indicating that other prostaglandins are likely involved in the regulation of hemostatic function in birds.
Avian brain tissue is also known to have higher levels of ThTP (thiamine triphosphate, the triphosphate form of vitamin B1, or thiamine) relative to ThDP (thiamine diphosphate) compared to other tissues and organisms [45]. This fact can be explained by the loss of THTPA, which encodes a mammalian brain-expressed enzyme that converts ThTP to ThDP. The loss of CYP2F1, a lung-expressed cytochrome P450-related gene involved in the bioactivation of pulmonary-selective toxicants [46], explains the avian lack of the lung enzymatic activity involved in generating the pneumotoxicant 3-methylindole [47]. This in turn would explain the avian insensitivity to repellents such as naphthalene, also a substrate for this enzyme [48]. Even though we have discovered a paralog for this missing gene, it is unclear whether it is present in the lung, where its expression would be needed to compensate for the gene loss.

Conclusions

In sum, our findings provide a more accurate understanding of the avian genetic makeup as well as novel insights into the evolutionary origins of gene losses affecting the avian lineage. We also highlight a number of examples wherein birds constitute natural knockouts for genes that in other organisms are known to play fundamental metabolic or physiological roles, or are associated with severe disease phenotypes and genetic disorders. It is also noteworthy that the functions of numerous avian missing genes described here relate to areas of biomedical research to which birds have made substantial contributions as model organisms, including development, immune system function, oncogenesis, and brain and behavior, to name a few. It will be important to assess the impact that avian gene deletions might have for these fields of research. Our studies have also identified a number of gene deletions as well as possible compensatory adaptations that have important implications for understanding basic aspects of avian physiology, and could be of potential relevance for improving commercial poultry strains.

Methods

Identification of syntenic blocks of missing genes in birds

In order to identify gene losses that occurred in the avian lineage, we performed a comparative genomics analysis in humans (Homo sapiens); a lizard (green anole, Anolis carolinensis) representing a non-avian sauropsid; two galliformes (chicken, Gallus gallus; turkey, Meleagris gallopavo) representing a basal avian order; and an oscine passeriform (zebra finch, Taeniopygia guttata). These representative species currently have the most well-assembled and annotated genomes within their respective taxonomic groups. To extend this initial analysis we also examined two additional non-avian sauropsids, the painted turtle (Chrysemys picta bellii) and the American alligator (Alligator mississippiensis), to further identify human/sauropsid orthologs. Our rationale was that genes that are present in non-avian sauropsids and mammals, but absent in these representative species from distantly related avian groups, likely correspond to gene losses that are characteristic of the avian lineage, rather than reflecting genomic features that are specific to lizards or to specific avian species. For consistency we use human gene naming conventions (HGNC) [49] whenever possible throughout this paper.
To identify genes missing in avian genomes we first retrieved from Ensembl BioMart the full list of lizard Ensembl gene models (Broad AnoCar2.0/anoCar2) with their respective chromosomal locations, and identified a subset that had 1-to-1 orthologs (including apparent 1-to-1 orthologs) in humans (GRCh37.p10/hg19). Within this 1-to-1 ortholog set we next searched for genes with 1-to-1 orthologs in chicken (ICGSC Gallus_gallus-4.0/galGal4) and/or zebra finch (WUGSC 3.2.4/taeGut1). Among these were 1-to-1 orthologs in lizard and humans that have no corresponding Ensembl orthologs in either chicken or zebra finch, and thus are possibly missing in birds. We noticed that a subset of the presumed missing genes in birds have clustered chromosomal locations in lizard and humans, suggesting an organization into syntenic blocks. To further investigate this possibility, we sorted all the identified 1-to-1 orthologs in lizard and humans side by side with the subset of identified orthologs in chicken and zebra finch, initially based on chromosomal location in lizard, and confirmed that a large number of missing genes in birds are clustered into syntenic blocks in both lizard and humans. We next manually scanned the entire list and identified and numbered all syntenic blocks of genes that are present in lizard and human but missing in birds, and that also meet either of the following criteria: (1) the block is at least 80,000 bp in size from the start of the first gene to the end of the last gene in the block, based on Ensembl model coordinates in lizard; or (2) the block contains at least three adjacent genes (this procedure is sketched in code below). In some cases we used the assembled painted turtle genome (v3.0.1/chrPic1) to identify/confirm the syntenic gene order within missing blocks that are located in poorly assembled regions of the lizard genome. The identified blocks are represented in dark orange in Additional file 1: Table S1A. We also identified additional blocks of missing genes consisting of singlets or doublets that were at least 80,000 bp in size, or of doublets whose average size was approximately 34,000 bp (shaded in medium and light orange, respectively, in Additional file 1: Table S1A). This allowed us to also include pairs of missing genes that are very large. After numbering the syntenic missing blocks in lizard, we realigned the entire list based on the chromosomal location of orthologs in humans, and again eliminated any genes that did not meet the inclusion criteria above. This was necessary to identify any differences in chromosomal alignments between lizard and humans reflecting chromosomal rearrangements that could affect the organization of the syntenic blocks we detected. Overall, this approach allowed us to identify highly conserved blocks of genes that have nearly identical syntenic organization in lizard/turtle and humans but that are missing in birds. We also noticed several cases where presumed avian orthologs (based on the existence of an Ensembl model in at least one avian species) disrupted an apparently larger missing syntenic group, even though the large majority of these avian models were themselves unplaced in the corresponding assemblies (Additional file 1: Table S18). We took a conservative approach and interpreted these Ensembl models as evidence of the presence of these genes (even if only partial) in birds, although a syntenic confirmation of their identity was not possible. Further investigation of these gene models that are putatively present in avian genomes will be an important future goal.
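The block-detection logic just described amounts to a single scan over the ortholog table sorted by lizard chromosomal position. The following is a minimal R sketch of that scan, assuming a hypothetical input table with columns for gene, chromosome, coordinates, and a flag for absence in birds; it illustrates the two inclusion criteria and is not the workflow actually used in the study.

# Minimal sketch of the syntenic block criteria, assuming a data frame
# of lizard 1-to-1 orthologs sorted by chromosome and start coordinate,
# with a logical column marking genes lacking chicken/zebra finch
# orthologs; all file and column names are illustrative.
orthologs <- read.delim("lizard_human_orthologs.tsv", stringsAsFactors = FALSE)
# Expected columns: gene, chrom, start, end, missing_in_birds (TRUE/FALSE)
orthologs <- orthologs[order(orthologs$chrom, orthologs$start), ]

# Label runs of consecutive genes with the same missing status within
# each chromosome
runs <- rle(paste(orthologs$chrom, orthologs$missing_in_birds))
orthologs$run_id <- rep(seq_along(runs$lengths), runs$lengths)

blocks <- do.call(rbind, lapply(split(orthologs, orthologs$run_id), function(b) {
  if (!all(b$missing_in_birds)) return(NULL)   # keep runs of missing genes only
  span <- max(b$end) - min(b$start)            # first gene start to last gene end
  # Criteria: >= 80,000 bp in span, or at least three adjacent genes
  if (span >= 80000 || nrow(b) >= 3) {
    data.frame(chrom = b$chrom[1], start = min(b$start),
               end = max(b$end), n_genes = nrow(b), span_bp = span)
  } else NULL
}))
print(blocks)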
Curation and annotation efforts

To refine the syntenic analysis, we manually examined the corresponding genomic regions in all four species above (plus turtle and American alligator as needed) in order to verify the correctness of the predicted syntenic blocks, including the position and orientation of orthologous genes. While performing this curation, we found that the syntenic blocks often contained further genes that were initially not included due to the lack of a predicted Ensembl model in lizard. In such cases, we retrieved the predicted nucleotide and/or protein sequences from human, and BLAT-aligned them to the lizard genome using the UCSC web browser to confirm that the gene is present and in the correct syntenic position (Additional file 1: Tables S1A and B, 'no model' cases). In several additional cases we noted that the Ensembl models in lizard were not included in the missing syntenic blocks because they were not annotated as 1-to-1 orthologs of the corresponding Ensembl models in humans. In most such cases we were able to identify the correct orthology by BLAT alignments and synteny analysis using the human orthologs as queries (Additional file 1: Tables S1A and B, Lizard Ensembl Gene ID column, Ensembl models indicated with a '†'). To address possible errors in the orthology annotations in Ensembl, we next examined whether Ensembl had chicken or zebra finch entries for any of our predicted missing genes. Because a gene prediction set from any given database is likely to be incomplete, we also examined whether there were entries that matched the name or gene description of any of our predicted missing genes in other existing chicken and zebra finch databases (Entrez Gene, UniGene, and RefSeqs). We also examined a recent set of chicken gene predictions by Ensembl (release e71), which incorporates more extensive transcriptome data, as well as the gene predictions from all the databases above for three other avian genomes available in NCBI: turkey (Turkey Genome Consortium; Turkey_2.01/melGal1), medium ground finch (Beijing Genomics Institute; GeoFor_1.0/geoFor1), and budgerigar (WUSTL and E. Jarvis; v6.3/melUnd1). We also searched NCBI's avian nucleotide databases for any evidence of cloned mRNAs in birds that might be annotated as a gene on our missing set. For all the searches above, we manually examined all entries that matched a gene on our missing gene list. Specifically, we systematically BLAT-aligned all the reported sequences to the lizard, turtle, and other genomes and/or BLAST searched the entire NCBI nucleotide or protein databases, and verified the percent identity and synteny of significant hits. Any confirmed positive hits were excluded from our list of missing genes; the evidence for their existence in avian genomes is presented in Additional file 1: Table S4. All other hits, typically consisting of hits to related gene family members and paralogs, were considered false positives; the evidence for this curation/annotation effort is presented in Additional file 1: Tables S2 and S3. In some cases positive identity could not be definitively established, as the hits were short or to unplaced contigs, preventing a syntenic analysis. However, we took a conservative approach and removed such cases from our missing gene list, since they provided suggestive evidence of the presence of the gene in birds.
In several cases this approach resulted in some of the final syntenic blocks being shorter than in the initial analysis, and in some genes being moved to the category of missing genes that do not directly belong to a missing syntenic block (Additional file 1: Table S1B).

BLAT/BLAST searches for missing genes in birds

To further confirm that the genes in the identified syntenic blocks are indeed missing in avian genomes, we next conducted a series of BLAT/BLAST searches for genes on our missing list using updated assemblies of the chicken and zebra finch genomes. For all BLAT searches, we used a local BLAT server [50] and in-house scripts with parameters set to be highly permissive of divergent and incomplete sequence alignments, accepting and manually curating any hits that had an alignment score >50 (sketched in code below). We note that this cutoff was first established based on the manual curation of hits of lower scores for more than 100 missing genes; in every case, the low scoring hits were to loci not associated with the missing gene, and typically consisted of just a short segment of a single exon from a related gene family member. After establishing this criterion, we BLAT-aligned the complete set of predicted coding DNA sequences (CDSs) from the lizard Ensembl models of missing genes to the assembled genomes of chicken and zebra finch. This procedure allowed us to identify genes that might be present in avian genomes but that were not identified by Ensembl or by other predictive algorithms displayed on UCSC's or NCBI's genome browsers. We noticed that in some cases a lizard gene model itself is missing, usually because the gene sequence is truncated due to a gap in the lizard genome assembly. In such cases we conducted the BLAT alignment to avian genomes using the CDSs from the human Ensembl genes. We note that we used the most recent version of the chicken genome (galGal4), and an improved version of the zebra finch genome (Mello and Warren, unpublished data) in which additional Illumina sequence data were used to partially fill in the gaps present in the zebra finch genome assembly currently available in NCBI. To address the possibility that some of the genes on our list might be present in unassembled portions of the best-covered avian genomes, we also conducted megaBLAST searches of the individual genome sequencing reads for chicken and zebra finch [51], and of an Illumina SOAP de novo chicken genome assembly (Warren lab). For all BLAT and BLAST searches, we manually verified all significant hits. The vast majority of hits were to well-assembled regions of the genomes, which allowed an accurate assessment of orthology through synteny. We verified that the hits were typically to related gene family members or paralogs, which therefore were considered false positives (Additional file 1: Table S5). With regards to hits to segments that are unplaced in the assembly, in some cases the unplaced segments were large enough to allow direct verification of gene orthology based on synteny. In the other cases we retrieved the target sequences in chicken and BLAT-aligned them to the genomes of several organisms (including lizard, turtle, frog, and human) to confirm gene identity by sequence similarity and synteny. In all cases we also performed BLAT alignments to other avian genomes, since we noticed that other species, in particular the budgerigar and medium ground finch, have better coverage of specific genomic regions than chicken or zebra finch.
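As a concrete illustration of the score cutoff above, the R sketch below filters BLAT output in psl format; the score formula approximates the one used by UCSC web BLAT (matches plus repeat matches, minus mismatches and gap counts), and the file name and column handling are assumptions for the example, not the in-house scripts themselves.

# Sketch of the BLAT hit filtering step, assuming headerless psl output
# from a local BLAT server; the score formula approximates the one used
# by UCSC web BLAT. File name and cutoff handling are illustrative.
psl_cols <- c("matches", "misMatches", "repMatches", "nCount",
              "qNumInsert", "qBaseInsert", "tNumInsert", "tBaseInsert",
              "strand", "qName", "qSize", "qStart", "qEnd",
              "tName", "tSize", "tStart", "tEnd",
              "blockCount", "blockSizes", "qStarts", "tStarts")
hits <- read.delim("missing_genes_vs_galGal4.psl", header = FALSE,
                   col.names = psl_cols, stringsAsFactors = FALSE)

# UCSC-style BLAT score: matches + repMatches - misMatches - gap counts
hits$score <- with(hits, matches + repMatches - misMatches -
                          qNumInsert - tNumInsert)

# Keep only hits above the empirically established cutoff (>50) for
# subsequent manual curation and synteny verification
curate <- hits[hits$score > 50, c("qName", "tName", "tStart", "tEnd", "score")]
curate <- curate[order(-curate$score), ]
head(curate)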
Finally, to address the possibility that some of the missing genes might only be present as cloned mRNAs/ESTs, and not represented in any current avian genome assemblies, we conducted a separate series of nucleotide (BLASTn) and protein (tBLASTn) searches of the available chicken EST (for example, BBSRC, Univ. Delaware Chick EST) and avian core nucleotide and protein databases (for example, NCBI). All BLAST searches used conservative parameters (Block Substitution Matrix 45) for highly divergent sequences. Any confirmed positive hits from the BLAT/BLAST searches were eliminated from our avian missing gene list (Additional file 1: Table S4B). In several cases this conservative approach resulted in the shortening of some further syntenic blocks that were initially larger, and in several further genes being moved to the category of missing genes that do not directly belong to a missing syntenic block (Additional file 1: Table S1B). Importantly, for all BLAT searches of avian databases conducted using the lizard models as queries, we also included a parallel set of 500 randomly selected protein coding genes in lizard that have 1-to-1 orthologs in humans, chicken, and zebra finch as a positive control, to ensure the effectiveness of the search algorithm and the adequacy of using lizard models for cross-BLAT searches in birds.

Expanded curation and alignment searches of avian genomes

A large number (45) of avian genomes beyond those used in our initial analyses have recently been completed in the context of the Avian Phylogenomics Consortium (Additional file 1: Table S1 in [7]; datasets available at [52]), or have been made publicly available by various research groups (n = 12; Puerto Rican parrot, Amazona vittata; golden eagle, Aquila chrysaetos; scarlet macaw, Ara macao; northern bobwhite, Colinus virginianus; hooded crow, Corvus cornix; Japanese quail, Coturnix japonica; saker falcon, Falco cherrug; collared flycatcher, Ficedula albicollis; black grouse, Lyrurus tetrix; Tibetan tit, Pseudopodoces humilis; canary, Serinus canaria; white-throated sparrow, Zonotrichia albicollis); a subset of these have RefSeq annotations. These resources have allowed us to greatly expand our curation and alignment searches of avian genomes as described in the previous sections, to include a broader range of species with much more extensive phylogenetic coverage, including all the main branches of the avian tree of life [8]. To search these genomes for any evidence of the avian missing genes in our curated candidate set, we first examined RefSeq annotations. All entries with the same gene names as our candidate missing genes, or with the same main key terms in their gene descriptions, were examined for orthology, including reciprocal cross-alignments with non-avian probes and synteny verification when possible. We also performed tBLASTn searches of the corresponding WGS databases of all these genomes. To address the possibility that some of the candidate missing genes might have sequences so divergent from their non-avian orthologs that we might have missed them in our previous searches due to low conservation, we took the following additional steps for selecting query sequences for our searches: (1) examined the candidate missing genes for their BLAT scores in cross-species alignments and the percent identities of their Ensembl protein sequences (lizard vs. human comparisons); (2) classified them into high vs.
low conservation sub-sets, based on the verified BLAT alignment scores and percent identities; (3) verified the presence of orthologs in crocodilians (alligator) for the low conservation gene subset; and (4) utilized probes from multiple species for the low conservation candidate missing gene subset, including alligator when available, as queries in the tBLASTn searches of avian WGS databases. As in the preceding sections, all significant hits were manually verified by reciprocal cross-alignment tests and synteny verification when the avian hit presented sufficient flanking sequence. To compare the relative distributions of genes according to size, and to rule out the possibility that the missing gene set was particularly enriched in short genes, we constructed frequency distribution plots of protein coding sequence (CDS) length for the missing gene set and the complete set of lizard genes present in birds (Additional file 3: Figure S2A). Distributions were normalized by log-transformation and compared using a two-tailed ANOVA (α = 0.05). To test whether gene size was correlated with protein coding sequence divergence we retrieved (from Ensembl BioMart) the amino acid percent identities (% AA; lizard vs. human orthologs) for the full set of avian missing genes. Genes lacking a clear 1-to-1 orthology, or that did not have an Ensembl model prediction, as was true for several lizard genes, were excluded from further analysis. We then plotted each gene's CDS length as a function of its % AA identity, but found no significant correlation between these two variables.

Analysis of chromosomal location of avian missing genes

To test whether the distribution of the avian missing gene blocks was significantly different from a uniform random distribution (as was apparent from the frequency distributions presented in Figure 3), we conducted a contingency table analysis using a Chi-squared test for independence. We reasoned that if the deletion events had occurred randomly and uniformly across all chromosomes, then the larger chromosomes should contain the largest proportion of deletions. To address this, for each chromosome we first calculated the number of deletion blocks that would be expected based on a random and uniform assignment of all 52 missing blocks according to chromosome size. We then applied a pair-wise Chi-squared test for independence (α = 0.05; Prism, GraphPad) to determine whether the observed distribution of deletions was significantly different from the expected random distribution (sketched in code below). To test whether the distribution of individual gene losses differed significantly from a random distribution, we compared the distribution of all 274 genes missing in birds, including singlets, to an equivalent distribution constructed by taking the average chromosomal positions of a set of 274 genes, selected randomly 10 times from the entire collection of genes in the avian genome. We applied a pair-wise Chi-squared test for independence (α = 0.05; Prism) to determine whether the observed distribution of gene deletions was significantly different from the average randomly selected distribution. To compare the sizes of the missing avian blocks in lizard vs. human chromosomes, we first calculated the size of each corresponding syntenic block (in Mb) by subtracting the start position of the block from the end position, based on the Ensembl gene model coordinates. In cases where the lizard genome was poorly assembled, or contained many gaps, we based the coordinate calculations on the turtle assembly instead.
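Before the block-size comparisons that follow, here is a minimal R sketch of the chromosome-distribution test just described, framed as a goodness-of-fit comparison of observed block counts against a size-proportional expectation; the chromosome sizes and observed counts are placeholders, not the study's values.

# Sketch of the chromosome-distribution test described above. Observed
# counts of missing blocks per human chromosome are compared to counts
# expected if all blocks were assigned uniformly by chromosome size.
# Sizes and observed counts below are placeholders, not the study's data.
chrom_sizes_mb <- c(chr19 = 59, chrX = 155, chr1 = 249, chr2 = 243)  # example subset
observed <- c(chr19 = 30, chrX = 12, chr1 = 6, chr2 = 4)             # hypothetical

# Expected counts under a random, size-proportional assignment
n_blocks <- sum(observed)
expected <- n_blocks * chrom_sizes_mb / sum(chrom_sizes_mb)
print(round(expected, 1))

# Chi-squared test of observed vs. size-proportional expectation
test <- chisq.test(x = observed, p = chrom_sizes_mb / sum(chrom_sizes_mb))
print(test)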
We then compared the distributions of the blocks for all chromosomes, as well as separately for some individual chromosomes, by performing a pair-wise comparison of individual blocks using a non-parametric Wilcoxon matched-pairs signed-ranks test (α = 0.05; Prism; this comparison is sketched in code at the end of this subsection). To calculate the total size of the avian missing blocks in humans and lizard, we added together the sizes (in Mb) of each of the 52 individual blocks.

Searches for avian missing genes in crocodilians and non-amniotes

In order to refine the evolutionary history of the avian gene loss, we BLAT-aligned our curated avian missing gene set, using lizard and human Ensembl CDSs, to two recently available, high-coverage crocodilian genomes, the American alligator (Alligator mississippiensis) and the saltwater crocodile (Crocodylus porosus) [11,12], using our local BLAT server and sensitized parameters as described above for avian genome searches. Because these crocodilian genomes are currently not fully assembled or annotated, and the scaffolds are not long enough for a full-scale syntenic analysis, we determined gene presence or absence by comparing the total number of hits in alligator and crocodile to those in chicken, using alignment score and percent identity to separate hits to the orthologous lizard and human genes from hits to related gene family members that are also present in chicken. Genes were considered present in crocodilians if they fell within the following criteria: (1) the gene had a significant hit to either alligator or crocodile but not to birds; or (2) the gene had hits to both crocodilians and birds, but at least one of the crocodilian hits was of substantially higher score and percent identity than those to birds, and the latter were shown to be hits to related gene family members or paralogs (Additional file 1: Tables S3 and S5). To further refine the evolutionary history of the avian missing genes we conducted a separate orthology analysis across a set of representative vertebrate species, including lamprey (Petromyzon marinus), two teleosts (Danio rerio, Takifugu rubripes), coelacanth (Latimeria chalumnae), and frog (Xenopus laevis). For each of these species, we used Ensembl's BioMart [53] to retrieve a complete set of orthologs for each avian missing gene. To determine the extent to which the missing genes were present in the various vertebrate lineages, we sorted the entire set of orthologs present in frog, and identified specific cases where no ortholog was predicted. We then confirmed the presence (or absence) of orthologs in each of the other vertebrate lineages. We repeated this analysis for each species in order to identify cases where a gene was: (1) present in coelacanth and frog, but not fish, indicating that the gene likely appeared in the sarcopterygian lineage; or (2) present in coelacanth, frog, and fish, indicating that the gene was present in an ancestral teleost. Next, we searched for cases where avian missing genes were specifically absent in frog, coelacanth, or fish, but present in the other species. All putative losses in fish were confirmed by directly searching for evidence of an ortholog in lamprey. Finally, for each species, and each set of gene losses, we determined the relative human and lizard chromosomal positions, and searched for cases where the losses were syntenic.
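Returning to the block-size comparison above, the following is a minimal R sketch of the paired lizard-vs-human test, using randomly generated placeholder block sizes rather than the study's measurements.

# Sketch of the paired block-size comparison described above: each of
# the 52 missing blocks has a size (Mb) in both lizard and human, and
# the two distributions are compared pairwise. Sizes here are random
# placeholders, not the study's measurements.
set.seed(1)
lizard_mb <- rlnorm(52, meanlog = log(0.3), sdlog = 0.8)
human_mb  <- lizard_mb * rlnorm(52, meanlog = log(1.4), sdlog = 0.3)

# Non-parametric Wilcoxon matched-pairs signed-rank test (alpha = 0.05)
test <- wilcox.test(lizard_mb, human_mb, paired = TRUE)
print(test)

# Total cumulative size of the missing blocks in each genome
cat("Total lizard:", round(sum(lizard_mb), 1), "Mb;",
    "total human:", round(sum(human_mb), 1), "Mb\n")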
Identification and supportive evidence for paralogous gene pairs

In some cases a lizard (or human) mRNA and/or protein coding model used as a query had a particularly high BLAT alignment and identity score (>90%) to one or more loci in zebra finch and/or chicken whose synteny did not match the synteny of the query gene or of a related gene family member in lizard. Such hits presented a reasonable likelihood that the avian locus might represent a previously unidentified paralog. To address this possibility, we used a comparative analysis of synteny to fully annotate the avian locus by searching for a corresponding locus in lizard and other non-avian vertebrates. First, we determined the synteny of the avian region by walking the chromosome (or contig) and documenting the order of genes immediately flanking the locus identified by the BLAT hit. In cases where the BLAT hit in chicken and zebra finch was to a short unplaced segment without clear synteny, or to a disrupted region that contained multiple genomic gaps, we relied on separate BLAT and synteny analyses in budgerigar and/or medium ground finch. We next determined whether the lizard genome might contain one or more closely related paralogs by BLAT-aligning the lizard query back to the lizard genome. In positive cases, we next determined the synteny of the resulting hits in lizard by examining the flanking genes of the high scoring hits. This analysis led to the identification of novel (mostly unannotated) paralogs in lizard. We next compared the synteny of the high scoring hit in avian lineages with the syntenies of the multiple hits in lizard, and found cases where the syntenies in birds matched those of the newly found paralogs in lizard. To further characterize these cases of paralogy, we performed a more comprehensive synteny and phylogenetic analysis based on the presence or absence of the paralogous gene pair across a select set of vertebrate genomes, including non-eutherian mammals (that is, opossum and platypus) and a representative eutherian mammal (that is, human). A summary of the results of this analysis is presented in Figure 8A, with details in Additional file 1: Table S3. A representative example of the synteny analysis is presented in Figure 8C. To reconstruct the evolutionary history of these paralogs we retrieved the corresponding protein coding sequences for each, performing multiple protein sequence alignment using PRANK [54] with stringent substitution scoring and otherwise default parameters via the WebPrank server [55]. These alignments were used to construct maximum likelihood phylogenetic trees using PhyML [56], with the approximate Likelihood-Ratio Test used to compute branch support. An example of this analysis is presented in Figure 8B. Lastly, we analyzed the sequences in each pair of paralogs using NCBI's Conserved Domains Database [26] and performed a side-by-side visual comparison in order to identify conserved domains as well as DNA and protein binding sites that are related to the established function of the gene (examples in Figure 8D, details in Additional file 1: Table S3). In addition to the above searches, we also used orthogroup classification (via OrthoMCL; [27]) to identify whether the avian missing genes would have been members of a multiple-gene orthogroup, and/or whether a possible paralog might be present in birds, but not lizard or humans.
We first used OrthoMCL to assign the missing gene set (439 genes, using lizard/human protein sequences) and 13,101 extant chicken genes (Ensembl; e71) to OrthoMCL groups. We note that every missing gene was successfully assigned to an orthogroup with the exception of FFAR1. We then searched for cases where a missing gene was present in an OrthoMCL group that contained additional members. We were able to confirm that most of the paralogs we found using genome-wide screens (see above) were present in the same orthogroup as the missing gene, providing an independent confirmation of the approach. For cases where one or more members of a missing gene orthogroup were found, we then retrieved the corresponding gene names, since such genes might provide possible functional compensation for the missing genes.

Bioinformatics and functional classification

In an attempt to categorize the identified missing genes, we subjected the entire curated set (Additional file 1: Table S1) to Ingenuity Pathway Analysis (IPA; Qiagen, Inc.). The complete set of 274 genes (as HGNC symbols) was uploaded and contrasted against the Ingenuity Knowledge Base Reference Set (Genes Only) to identify pathway enrichments. Only relationships where confidence = 'Experimentally observed' were included in the final analysis. This analysis revealed broad categories that were enriched in genes related to diseases and disorders, molecular and cellular functions, and/or physiological system development and function (Table 2A). In addition, this analysis revealed specific disease states (Additional file 1: Table S9), as well as canonical pathways (Table 2B), that were significantly enriched within the avian missing gene data. The significance of each biological/disease or canonical pathway was further tested by Fisher's Exact Test (α = 0.05). To further confirm that disease states and pathways discovered by IPA were specifically associated with the missing gene set, and not present in any randomly selected set of genes, we conducted additional IPA analyses on two independently derived control gene sets, each consisting of 274 genes. To construct these sets, we first retrieved the entire set of protein coding genes from Ensembl (e71) that corresponded to the complete collections of 1-to-1 orthologs in human and lizard, and sorted them according to human chromosomal gene order. Custom scripts were written in R ([57]; 'Missing_gene_analysis.R' is available at [58]) to generate control sets that contained blocks of genes (syntenic orthologs) with the same relative size (in Mb), number of genes, and chromosomal distribution as the avian missing blocks presented in Additional file 1: Table S1A. For each block, consisting of N genes (in blocks or as a singleton) on a specific chromosome, we randomly selected a 'seed' gene from that chromosome using the pseudo-random number generator function supplied with the statistical package R (RNG.kind = 'Mersenne-Twister'; [59]); this procedure is sketched in code below. We then confirmed that all N genes were indeed on the correct chromosome, and were in the same syntenic gene order in both humans and lizard. In a separate analysis, we also retrieved the sets of phenotypes associated with spontaneous, induced, or genetically-engineered mutations in genes on our missing gene list, based on the Mouse Genome Informatics (MGI; [22]) database. We then classified the retrieved entries according to the affected tissues or organ systems (Additional file 1: Table S10A).
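A minimal R sketch of this control-set construction is given below; it assumes a sorted table of human/lizard 1-to-1 orthologs with hypothetical column names and placeholder block definitions, and is not the published 'Missing_gene_analysis.R' script.

# Sketch of the control-set construction described above: for each
# missing block of N genes on a given chromosome, pick a random 'seed'
# gene on that chromosome and take the N consecutive genes that follow.
# Not the published Missing_gene_analysis.R; column names illustrative.
RNGkind("Mersenne-Twister")
set.seed(42)

orthologs <- read.delim("human_lizard_1to1_orthologs.tsv",
                        stringsAsFactors = FALSE)
# Expected columns: gene, chrom, start; sorted by human gene order
orthologs <- orthologs[order(orthologs$chrom, orthologs$start), ]

# blocks: one row per missing block, with its chromosome and gene count
blocks <- data.frame(chrom = c("chr19", "chr19", "chrX"),  # placeholders
                     n_genes = c(5, 3, 7))

control_set <- do.call(rbind, lapply(seq_len(nrow(blocks)), function(i) {
  genes_on_chrom <- orthologs[orthologs$chrom == blocks$chrom[i], ]
  n <- blocks$n_genes[i]
  seed_idx <- sample(seq_len(nrow(genes_on_chrom) - n + 1), 1)
  genes_on_chrom[seed_idx:(seed_idx + n - 1), ]   # N consecutive orthologs
}))
nrow(control_set)  # should equal sum(blocks$n_genes)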
We note that we only retrieved phenotypes where the deletion of a single gene is sufficient for the phenotype to be observed. We also used MGI to identify a subset of missing genes that are associated with a lethal phenotype (including partial and complete embryonic or perinatal lethality, or premature death) in rodents. We next examined the retrieved entries individually to identify cases where knockout of the gene of interest is sufficient for the lethal phenotype (Additional file 1: Table S11A) vs. cases where a combined knockout of one or more additional genes is required for lethality (Additional file 1: Table S11B). We also used the MGI database, consultation of OMIM [23], and keyword searches of Entrez Gene summaries (with terms such as syndrome, disease, mutation, deletion, or loss) to identify gene sets that are associated with genetic disorders in humans. Among these, we manually verified individual OMIM entries, searching for evidence of diseases caused by loss of a gene or gene function (Additional file 1: Table S12A; typically autosomal recessive disorders, but also including cases of X-linked disorders or autosomal dominant haploinsufficiency), in contrast to disorders caused by gain-of-function mutations (Additional file 1: Table S12B).

To determine whether the associations with severe and/or lethal phenotypes in mice were unique to missing genes, or more generally associated with any comparably sized gene set, we also performed a complete MGI phenotype classification on 1,000 independent permutations of 274 genes. Using the control set algorithm described above for the IPA, we constructed 1,000 control gene lists and then, for each list, retrieved the sets of MGI phenotypes that were associated with each gene. Note that the phenotype 'No abnormal phenotype detected' was not included in the analysis. A two-sided permutation test (α = 0.05) was used to test for differences between the number of phenotypes associated with the missing gene set and the distribution of the number of phenotypes associated with the 1,000 permuted control sets (Additional file 4: Figure S3). We also compared the number of genes associated with each phenotype in the missing gene set versus the distribution of the number of genes associated with the same phenotype in the permutation gene sets, using a two-sided permutation test with Benjamini-Hochberg false discovery rate (FDR) multiple comparison correction (Additional file 1: Table S10B). To determine whether the associations of missing genes with OMIM disease terms were greater (or less) than would be expected by chance, we analyzed the distribution of OMIM disease terms associated with the same 1,000 control gene sets described above for the MGI mouse phenotype analysis. Two-tailed permutation testing (α = 0.05) was used to test for statistical differences between the number of genes associated with an OMIM disease term in the missing gene set and the corresponding distribution for the 1,000 control sets (Additional file 4: Figure S3).
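As a concrete illustration of the permutation testing described above, the sketch below computes a simplified two-sided empirical P value (deviation from the null mean) and applies a Benjamini-Hochberg correction. It is a hedged stand-in, not the paper's actual script, and all counts are invented:

```python
# Sketch of a two-sided permutation test with Benjamini-Hochberg correction.
import random

def permutation_p(observed, null_counts):
    """Two-sided empirical P: fraction of permuted values at least as
    far from the null mean as the observed count."""
    mean = sum(null_counts) / len(null_counts)
    extreme = sum(1 for c in null_counts if abs(c - mean) >= abs(observed - mean))
    return max(extreme / len(null_counts), 1.0 / len(null_counts))

def benjamini_hochberg(pvals):
    """Return BH-adjusted P values, preserving input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        prev = min(prev, pvals[i] * m / rank)
        adj[i] = prev
    return adj

# Example: 3 phenotypes, each with a null distribution from 1,000 permutations
rng = random.Random(0)
nulls = [[rng.gauss(40, 5) for _ in range(1000)] for _ in range(3)]
observed = [62, 41, 25]
pvals = [permutation_p(o, n) for o, n in zip(observed, nulls)]
print(benjamini_hochberg(pvals))
```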
To compare the impact of the same set of avian missing genes against the genetic backgrounds of chicken, humans, and lizard, we conducted a comparative functional enrichment analysis. For each species (chicken, humans, and lizard), we first retrieved nucleotide sequences corresponding to the full set of Ensembl (e71) predicted transcripts, selecting the largest open reading frame for each gene. We then BLAST-aligned each sequence against NCBI's non-redundant protein database (BLAST expected value = 1.0E-3; matrix = BLOSUM62) to identify the top 20 most similar protein coding sequences. We then used Blast2GO [24] to extract Gene Ontology (GO) terms associated with each NCBI hit (E-Value Hit Filter = 1.0E-6; Annotation cutoff = 55; GO-Weight = 5; HSP-Hit Overlap = 0). Based on these alignments, we then used Blast2GO to assign a set of evaluated GO annotations to each query sequence. Each nucleotide sequence was also subjected to protein domain motif scanning (InterPro scan) in Blast2GO, and the resulting additional GO annotations were merged with the Blast2GO annotations. We observed that the average number of annotations obtained per genome was in the 80,000 to 130,000 range and was comparable across species. Finally, for each species, we performed a pairwise comparison of GO terms associated with the missing gene set vs. those associated with the remaining protein coding genes and, using a two-tailed Fisher's exact test (α = 0.05), identified GO terms enriched in the missing gene set (Additional file 1: Table S14); a sketch of the per-term test is given at the end of this section. We note that for the chicken comparison we used the missing gene set created for the lizard comparison.

To identify GO term enrichments that were unique or shared within the pairwise comparisons performed in chicken, lizard, and humans, we used a Venn diagram (Venny; [60]). We specifically identified significantly enriched terms that were: A, enriched in the non-avian species (lizard/human) but not birds (Figure 7, Group A, yellow panels); B1, enriched only in chicken (Group B1, dark blue); B2, enriched in birds and humans and/or lizard (green); or C, enriched in chicken and lizard (gray). For each of these groups we then retrieved the corresponding sets of genes that were associated with each group's (A to C) statistically enriched set of GO terms. In some instances, we found that a gene associated with one set of GO enrichments in a group (for example, Group A) was associated with a different set of GO enrichments in a different group (for example, Group B1), creating a potential conflict. To resolve these conflicts, we retrieved from each group the corresponding sets of GO terms associated with the gene of interest and evaluated whether the GO terms were descriptively similar (for example, protein kinase activity vs. kinase activity) or referred to very different functions and/or processes. For cases where the GO terms described a similar function, we used a conservative interpretation and placed the gene in the category with the most inclusive species membership. In contrast, if the GO terms referred to very different functions, indicating the possibility that protein coding domains within the same protein might be differentially compensated across lineages, we included the gene in each group. The results of this analysis and classification are presented in Additional file 1: Table S15.

We note that recent papers have pointed to some limitations of comparative analyses of functional GO annotations and enrichments, particularly in the context of identifying orthologous vs. paralogous genes across lineages (for example, [61,62]). However, despite these limitations, functional GO enrichment analysis remains arguably the best approach currently available for comparative analysis with species other than mouse or humans.
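As forward-referenced above, the per-term enrichment comparison reduces to a 2x2 Fisher's exact test for each GO term (term membership vs. missing-gene membership). A minimal version, assuming scipy is available and using invented gene identifiers, might look like:

```python
# Sketch of the per-term two-tailed Fisher's exact test used for the
# GO enrichment comparison (missing genes vs. the remaining genes).
from scipy.stats import fisher_exact

def go_enrichment(term_genes, missing, background):
    """2x2 table: membership in the GO term vs. membership in the
    missing-gene set, tested against the remaining background genes."""
    rest = background - missing
    a = len(term_genes & missing)        # missing genes with the term
    b = len(missing) - a                 # missing genes without the term
    c = len(term_genes & rest)           # background genes with the term
    d = len(rest) - c                    # background genes without the term
    odds, p = fisher_exact([[a, b], [c, d]], alternative="two-sided")
    return odds, p

missing = {f"m{i}" for i in range(274)}
background = missing | {f"b{i}" for i in range(15000)}
term = {f"m{i}" for i in range(30)} | {f"b{i}" for i in range(100)}
print(go_enrichment(term, missing, background))  # enriched term: small P expected
```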
Unlike other functional enrichment analyses that rely heavily on existing gene curation (for example, DAVID, IPA), Blast2GO treats each gene as if it were a 'novel gene', and uses BLAST to annotate each novel gene sequence based on the presence of known protein motifs. The motif annotations are then represented by a universal set of Gene Ontology terms. Although there may be some limitations to this approach, due in large part to the limited availability of non-mammalian databases of annotated protein sequences, it still provides the best available tool for attempting to functionally annotate genes in non-rodent and non-human species. Moreover, because this method compares actual vs. theoretical losses within each organism's background genome, we were able to further minimize biases due to differences in the overall genomic background of the species being compared.

Additional files

Additional file 1:
Table S1. Missing genes in syntenic blocks, or in close proximity to syntenic blocks, ordered according to chromosomal location in lizard.
Table S2. Curation of misannotated Ensembl, Entrez Gene, and RefSeq genes in birds.
Table S3. Evidence for closely related avian paralogs of missing genes in birds.
Table S4. Evidence supporting the presence of genes in avian Entrez Gene, RefSeq, and cloned mRNA databases, as well as lizard/human mRNA and protein BLAT/BLAST searches of avian genomes, trace archives, and EST/mRNA databases.
Table S5. Annotation of false positive lizard Ensembl model BLAT alignments to the chicken genome.
Table S6. Genes not found in chicken, but possibly present in other birds, based on RefSeq annotations and on tBLASTn searches of 60 avian WGS contigs.
Table S7. Genes previously reported as missing in birds. Reference citations are presented in Additional file 5.
Table S8. Assessment of the presence of avian missing genes in crocodilian genomes.
Table S9. Detailed list of functions associated with the functional enrichment categories presented in Table 2A.
Table S10. Major organs and systems affected by the loss of the missing avian genes in mice.
Table S11. Genes whose deletion is associated with traits/phenotypes affecting tissues or organs that are absent in birds.
Table S12. Genes that result in lethality alone in knockout mice, or when combined with other genes.
Table S13. Genes associated with human disease and/or syndromes.
Table S14. Gene Ontology terms that are enriched (Blast2GO; P < 0.05) in the chicken, human, and/or lizard lineages.
Table S15. Possible functional consequences (and compensations) for genes associated with lethal and disease phenotypes in mammals. Reference citations are presented in Additional file 5.
Table S16. Human tissue-specific expression of avian missing genes.
Table S17. Orthogroup analysis of the avian missing genes presented in the Blast2GO analysis in Table S15.
Table S18. Avian genes on unplaced chromosomes representing partial sequences with no synteny verification.

Additional file 2: Figure S1. Avian missing syntenic blocks and chromosomal rearrangements. The avian missing syntenic blocks are closely associated with (A) inter- and (B, C) intra-chromosomal rearrangements that are revealed by local chromosomal alignments of 1-to-1 orthologous genes in chicken and humans. Orthologs are aligned according to human chromosome location.
Syntenically ordered genes that are missing in birds (that is, chicken) are shaded in orange or gray (as in Additional file 1: Table S1); flanking genes that are present in chicken, humans, and lizard (not shown) are shown in white. The position of each gene locus is indicated by chromosome number (for example, chr2, 19) and the start and end base for each corresponding Ensembl gene model. The location of several orthologous blocks that were removed for clarity is indicated by the dotted lines beneath the gene start/end columns. The solid line in C separates two adjacent syntenic blocks that are found on different chromosomal segments in lizard and thus do not constitute a single block. In (A), the locations of the syntenic blocks in chicken that immediately flank the missing gene block are on different chromosomes (that is, chr4 and chrZ). In (B) and (C), the flanking blocks are on the same chromosomes, but are out of order (B), or several megabases apart (C), in comparison to their location in humans.

Additional file 3: Figure S2. Analysis of gene size and protein sequence divergence for the avian missing gene set. Description of data: (A) Frequency distributions of predicted protein sizes for lizard orthologs of the avian missing genes and the entire set of lizard genes that is present in birds. The overall distributions of predicted protein size are similar. The relative percentage of short genes (that is, <500 bp) is also comparable across the two gene sets (9% vs. 10%). (B) The size of each missing gene is plotted against the percent amino acid identity (% AA identity) of orthologous predicted proteins in humans vs. lizard.

Additional file 4: Figure S3. MGI mouse phenotype and OMIM disease term analysis. Description of data: Plots showing distributions of the numbers of genes associated with MGI mouse phenotypes (A) or OMIM disease terms (B) for 1,000 independently derived control gene sets. The average number of phenotypes (A) or disease terms (B) associated with the control gene sets is indicated by the blue lines; the number of phenotypes (A) and disease terms (B) associated with the missing gene set is indicated by the red lines. Two-tailed permutation tests reveal that the number of genes associated with both mouse phenotypes and OMIM disease terms is significantly less than that associated with the control gene sets (P = 0.001 and P = 0.02, respectively). (PDF 582 kb)
2016-05-04T20:20:58.661Z
2014-12-18T00:00:00.000
{ "year": 2014, "sha1": "735f32a3a2b846f46157aa961c346a2832104279", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13059-014-0565-1", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6cacfdc81a67867620f4cd22254fb6448cdfa7e4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
211244921
pes2o/s2orc
v3-fos-license
Comparison of Proangiogenic Effects of Adipose-Derived Stem Cells and Foreskin Fibroblast Exosomes on Artificial Dermis Prefabricated Flaps

Large prefabricated flaps often suffer from necrosis or poor healing due to a lack of new blood vessels and related factors that promote angiogenesis. The innovative use of adipose-derived stem cell exosomes (ADSC-Exo) resolves the problem of vascularization of prefabricated flaps. We analyzed the differential microRNA (miRNA) expression in ADSC-Exo using next-generation sequencing (NGS) technology to explore their potential mechanisms in promoting vascularization. We observed that ADSC-Exo could significantly promote the vascularization of artificial dermis prefabricated flaps compared with human foreskin fibroblast exosomes. NGS indicated that there were some differentially expressed miRNAs in both exosomes. Bioinformatics analysis suggested that significantly upregulated hsa-miR-760 and significantly downregulated hsa-miR-423-3p in ADSC-Exo could regulate the expression of the ITGA5 and HDAC5 genes, respectively, to promote the vascularization of skin flaps. In summary, ADSC-Exo can promote skin-flap vascularization, and thereby resolve the problem of insufficient neovascularization of artificial dermis prefabricated flaps, thus expanding the application of prefabricated skin-flap transplantation.

Introduction

Wounds involving large areas of skin and soft tissue caused by trauma, tumor resection, or chronic diseases are often difficult to heal, resulting in refractory wounds. Conventional skin transplantation may not be successful for such refractory wounds due to the lack of vascular structure and the inability to reconstruct a blood supply, thus necessitating the use of skin flaps for repair. Although flap transplantation is currently widely used in clinical wound repair [1], the thickness of conventional flaps is limited by the location of the donor site. Moreover, the thickness of the flap is particularly critical for wounds in deep areas, joints, and areas with high wear and weight bearing. Prefabricated flaps thus offer a good method for optimizing traditional flaps. Prefabricated flaps involve reconstructing an arbitrary skin flap into an axial flap for later wound repair by transplanting known vascular tissue [2]. This technology can increase the selection of skin flaps, allow the accurate design and manufacture of flap size and thickness, and reduce loss and waste of donor tissue. Moreover, it also improves aesthetic and local functional recovery of the tissue after repair and protects the patient from pain associated with a forced position [3]. However, the main problem with prefabricated flaps is currently the limited range of options. Furthermore, large prefabricated flaps often suffer from necrosis or poor healing due to a lack of new blood vessels and related factors that promote angiogenesis. Adipose-derived stem cells (ADSCs) are stem cells with multidirectional differentiation potential, first isolated by Zuk et al. in 2001 [4]. ADSCs play a definite role in promoting vascularization during tissue repair and reconstruction; however, the mechanism by which they achieve this is unclear.
Most researchers currently believe that ADSCs differentiate mainly into vascular endothelial cells and smooth muscle cells to form a new vascular network [5], or secrete paracrine factors, such as basic fibroblast growth factor, vascular endothelial growth factor, hepatocyte growth factor, platelet-derived growth factor, and other angiogenesis-related cytokines and growth factors, to promote local microvascularization [6,7]. ADSC transplantation has achieved better therapeutic effects than current conventional treatment methods in patients with refractory wounds [8]. However, despite the many advantages of ADSCs, technical problems and the risk of tumor formation currently limit their clinical application [9].

Exosomes are membranous vesicles about 30-150 nm in diameter that are released from cells into the extracellular space [10]. They can carry a variety of biological macromolecules, including proteins, lipids, and nucleic acids, and participate in various physiological processes, such as the immune response, antigen presentation, and protein and RNA transport [11]. Previous studies reported that interleukin-6 in ADSC exosomes (ADSC-Exo) protected flaps from ischemia-reperfusion injury [12]. However, no studies have reported on the ability of ADSC-Exo to promote angiogenesis in prefabricated flaps. We therefore applied ADSC-Exo and human foreskin fibroblast exosomes (HFF-Exo) to artificial dermal prefabricated flaps and compared their proangiogenic effects. We also performed next-generation sequencing (NGS) of both types of exosomes, compared the highly enriched microRNAs (miRNAs), and identified differentially expressed miRNAs by quantitative methods. We analyzed the distribution of the target genes using the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway databases, which indicated that the differentially expressed miRNAs may play an important role in the regulation of gene function.

Materials and Methods

2.1. Isolation and Culture of hADSCs and HFFs. Human subcutaneous adipose tissue and human foreskin tissue samples were obtained from Changhai Hospital affiliated to the Naval Military Medicine University, Shanghai, China. All tissues were sourced after obtaining informed consent from the patients. Primary human ADSCs (hADSCs) and HFFs were generated as described previously [13,14]. hADSC and HFF pellets were resuspended separately in low-glucose and high-glucose Dulbecco's Modified Eagle's Medium (DMEM) (HyClone, UT, USA) with 2.5% exosome-depleted fetal bovine serum (FBS) (Gibco, Grand Island, NY, USA), and cultured in a humidified incubator containing 5% CO2. The medium was changed every 2-3 days after cell attachment.

2.2. Exosome Isolation. Exosomes were isolated from hADSCs and HFFs by differential ultracentrifugation, as described previously [15]. In brief, cell culture medium was collected from 80% to 90% confluent hADSCs or HFFs under sterile conditions. Differential ultracentrifugation was performed at 300 × g and 2,000 × g for 10 min to remove dead cells, followed by 10,000 × g for 30 min to remove cell debris. The supernatant was then centrifuged twice for 70 min at 100,000 × g. All centrifugations were carried out at 4°C. The pellets were finally resuspended in 100 μl of cold phosphate-buffered saline (PBS), stored immediately at -80°C, and used within 1-2 weeks.

2.3. Exosome Identification

2.3.1. Transmission Electron Microscopy (TEM).
The pellets rich in exosomes were diluted in 30 μl of PBS (HyClone, UT, USA) and kept at 4°C until TEM analysis. One drop of exosome sample was placed on a carbon-coated copper grid for 5 min and stained with a drop of 2% phosphotungstic acid for 3 min. Excess liquid was removed with absorbent paper, and the sample was air-dried for 15 min. The preparation was then examined by TEM.

2.3.2. Nanoparticle Tracking Analysis (NTA). NTA was performed using a ZetaView instrument (Particle Metrix, Germany), according to the manufacturer's protocols. The particle size distribution and concentration of all types of nanoparticles with diameters of 10-2,000 nm could be analyzed rapidly and automatically by NTA. NTA detection technology also ensured the accuracy and repeatability of the sample readings.

2.4. In Vivo Studies. A total of 48 male Sprague-Dawley rats were divided randomly into four groups: an ADSC group, an ADSC-Exo group, an HFF group, and an HFF-Exo group. All the experiments were approved according to the guidelines of the Health Sciences Animal Policy and Welfare Committee of Changhai Hospital affiliated with the Navy Military Medical University. Artificial dermis prefabricated flap and leg wound rat models were constructed as reported previously [16]. After the graft flap and the abdominal wall wound were sutured, the base of the flaps was injected at multiple points with 100 μl of PBS containing 1 × 10⁶ ADSCs or HFFs, or 200 μg of ADSC-Exo or HFF-Exo, respectively. The flaps in the four groups were observed at 7, 14, 21, and 28 days postoperatively. The survival status of the skin flaps was assessed by monitoring surface color, blood supply, and the circumference of the incision.

2.5. Immunohistochemistry. Flap tissues were excised at 28 days after surgery and analyzed by histological staining. The excised skin flaps were fixed with 10% formalin and dehydrated in graded ethanol. Paraffin-embedded specimens were then cut into 5 μm sections and stained with hematoxylin and eosin and Masson's trichrome for histological observation and to evaluate collagen maturation. Skin-flap angiogenesis was observed by CD31+ immunohistochemical staining (1:50, Abcam, UK). Tissue-section preparation and immunohistochemical assays were carried out as reported previously [17]. CD31 positivity was indicated by a brown reaction. Photomicrographs were obtained under an optical microscope (Leica, Germany). Four random areas in each section were selected and analyzed using Image-Pro Plus 6.

2.6. Flap Microangiography. Postmortem microangiography was performed 7 days after flap transplantation. Rats were injected with 30% barium sulfate solution into the right jugular vein, at low pressure. The flaps were harvested the next day and examined by radiography to reveal the vascular network. Flap microangiography was carried out as reported previously [16].

2.7. Effects of ADSC-Exo and HFF-Exo on Proliferation of Human Umbilical Vein Endothelial Cells (HUVECs) In Vitro. HUVECs were obtained from the American Type Culture Collection, seeded at 1 × 10⁵ cells per well in 24-well plates, and cultured in high-glucose DMEM with 2.5% exosome-depleted FBS to reach 70%-80% confluency. The cells were then divided into two groups: an ADSC-Exo group and an HFF-Exo group.
HUVECs were cocultured with 100 μg/ml ADSC-Exo or HFF-Exo for 24 h, and the effects of the respective exosomes on cell proliferation were detected using a 5-ethynyl-2′-deoxyuridine (EdU) assay kit (RiboBio, Guangzhou, China) [18], according to the manufacturer's instructions. The cells were finally observed under a fluorescence microscope (Zeiss HLA100, Shanghai, China). Proliferating HUVECs were indicated by green fluorescence and nuclei by blue fluorescence, and the ratio of these was calculated to obtain the proliferation rate.

2.8. Preparation of Libraries and Sequencing. Basic reads were converted into sequence data (raw data/reads) by base calling. Low-quality reads were filtered, and reads with 5′ contaminants and poly(A) stretches were removed. Reads without 3′ adapters and insert tags, and reads with <15 or >41 nucleotides, were filtered, and clean reads were obtained.

2.9. Bioinformatics Analysis. The length distributions of the clean sequences mapped to the reference genome were determined. Noncoding RNAs were annotated as rRNAs, tRNAs, small nuclear RNAs (snRNAs), and small nucleolar RNAs. These RNAs were aligned and then subjected to BLAST [19] searches against the Rfam v.10.1 (http://www.sanger.ac.uk/software/Rfam) [20] and GenBank (http://www.ncbi.nlm.nih.gov/genbank/) databases. Known miRNAs were identified by alignment against the miRBase v.21 database (http://www.mirbase.org/) [21], and the expression patterns of the known miRNAs in different samples were analyzed. Unannotated small RNAs were analyzed by miRDeep2 [22] to predict novel miRNAs. Based on the hairpin structure of a pre-miRNA and the miRBase database, the corresponding miRNA star sequence was also identified. Differentially expressed miRNAs were identified with a threshold P value < 0.05. The P value was calculated using the DEG algorithm in the R package.

2.10. Prediction and Functional Analysis of miRNA Target Genes. Target genes of differentially expressed miRNAs were predicted using miRanda software [23] in animal mode, with the following parameters: S ≥ 150, ΔG ≤ −30 kcal/mol, and strict 5′ seed pairing required. GO enrichment and KEGG pathway enrichment analyses of differentially expressed miRNA target genes were performed using R, based on the hypergeometric distribution.

2.11. Real-Time Polymerase Chain Reaction (PCR). Total RNA was isolated from HUVECs using TRIzol® Reagent (Life Technologies, USA), according to the manufacturer's instructions. Two micrograms of total RNA was reverse transcribed into cDNA using miRNA-specific primers (hsa-miR-423-3p: MIMAT0001340; hsa-miR-760: MIMAT0004957) with a TaqMan miRNA reverse transcription kit (Thermo Fisher Scientific, USA) [24]. Real-time PCR was then performed with an initial denaturation step at 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. The level of the U6 small nuclear RNA gene was used as an internal control, and normalized relative expression levels were calculated using the 2^(−ΔΔCt) method.

2.12. Statistical Analysis. Data were analyzed using SPSS 17.0 and presented as means ± standard deviation. Statistical significance was determined by ANOVA or Student's t-test, and the gene expression data set was also analyzed using Spearman's rank test. A value of P < 0.05 was considered significant.

Results

3.1. Characterization of Exosomes.
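The 2^(−ΔΔCt) normalization used in Section 2.11 is a one-line calculation; the sketch below shows it explicitly, with invented Ct values and U6 as the reference, for orientation only:

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation used for
# the real-time PCR data (U6 as internal control); Ct values are illustrative.
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression of a target miRNA, normalized to a reference
    gene (here U6) and to a control sample."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: hsa-miR-760 in ADSC-Exo-treated vs. HFF-Exo-treated HUVECs
print(ddct_fold_change(24.1, 18.0, 26.5, 18.2))  # >1 means upregulation
```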
Exosomes are extracellular vesicles with a diameter of 30-150 nm, which can be obtained from cell culture media by various methods, including ultracentrifugation, ultrafiltration, immunoaffinity capture-based techniques, and microfluidics-based isolation techniques. The exosomes isolated in the current study were verified by TEM, NTA, and western blot. TEM (Figure 1(a)) confirmed the typical morphology of the exosomes, and NTA (Figure 1(b)) showed that the average diameter of ADSC-Exo was 133.6 ± 1 nm and that of HFF-Exo was 142 ± 5.1 nm. In addition, western blot (Figure 1(c)) showed that the exosomal marker HSP70 was highly expressed. These results were all consistent with the characteristics of exosomes.

3.2. In Vivo Studies. The skin flaps and the epidermal blood supplies in all four groups remained good at the observed time points, with no obvious ulceration or infection. Hair growth was observed on all flaps 21-28 days after the operation (Figure 2(a)). Microscopic observation of flap sections showed no significant differences in flap thickness and collagen between the ADSC and ADSC-Exo groups, but both of these were significantly better than in the HFF and HFF-Exo groups (Figures 2(b), 2(c), and 2(f)). CD31+ immunohistochemical staining (Figures 2(d) and 2(g)) showed that the numbers of blood vessels in the ADSC and ADSC-Exo groups were significantly higher than in the HFF and HFF-Exo groups. Microvascular angiography performed 28 days after surgery revealed similar degrees of vascularization of the artificial dermis prefabricated flaps in the ADSC-Exo and ADSC groups, and both of these were significantly better than in the HFF and HFF-Exo groups (Figure 2(e)).

3.3. NGS of Small RNA Composition in ADSC-Exo and HFF-Exo. Small RNAs include miRNAs, tRNAs, rRNAs, piwi-interacting RNAs, snRNAs, and others. miRNAs are noncoding single-stranded RNA molecules approximately 22 nucleotides in length, which are encoded by endogenous genes and are involved in the regulation of posttranscriptional gene expression in plants and animals. To classify the small RNAs in the sequencing results, we compared clean reads against the Rfam database [20], cDNA sequences, a species repeat library [25], and the miRBase database [21,26]. The length distribution statistics of the known miRNAs in each sample are shown by a line graph (Figure 3(a)). Furthermore, the distribution of small RNAs in each sample based on the above databases is summarized and displayed as a pie chart (Figures 3(b) and 3(c)).

3.3.1. miRNA Expression Analysis. The abundance of a miRNA is directly proportional to its expression level. Small RNA sequencing analysis allows miRNA expression levels to be estimated by mapping reads to the mature sequences and counting reads for known and newly predicted miRNAs. Based on the identified known miRNAs and the newly predicted miRNAs, miRNA expression was calculated as transcripts per million (TPM) [27]. Information on the symmetry and dispersion of the data was displayed by miRNA expression boxplots (Figure 4(a)). When the number of samples was large (≥3), the correlation of miRNA expression levels among samples was an important indicator of the reliability of the experiments and the rationality of sample selection.
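For orientation, the following sketch shows a TPM-style normalization and a between-sample Spearman correlation of the kind summarized in Figures 4(b) and 4(c); the miRNA counts are invented and the code is not from the study:

```python
# Sketch of TPM normalization and inter-sample Spearman correlation.
from scipy.stats import spearmanr

def tpm(counts):
    """Normalize per-miRNA read counts to transcripts per million."""
    total = sum(counts.values())
    return {mir: c * 1e6 / total for mir, c in counts.items()}

sample1 = {"hsa-miR-760": 120, "hsa-miR-423-3p": 900, "hsa-miR-21-5p": 5000}
sample2 = {"hsa-miR-760": 300, "hsa-miR-423-3p": 400, "hsa-miR-21-5p": 5200}

t1, t2 = tpm(sample1), tpm(sample2)
mirs = sorted(t1)
rho, p = spearmanr([t1[m] for m in mirs], [t2[m] for m in mirs])
print(rho)  # close to 1 implies similar expression patterns between samples
```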
Similarities between samples were tested using a correlation coefficient heat map (Figure 4(b)) and sample-to-sample cluster analysis (Figure 4(c)); the closer the sample correlation coefficient was to one (or the tighter the clustering), the higher the similarity of expression patterns between the samples.

3.3.2. Differential miRNA Analysis. Differential expression analysis was used to identify miRNAs that were differentially expressed between different samples. For paired samples with biological replicates, DESeq2 [28] in R was used for differential miRNA screening, and it revealed a total of 43 differentially expressed miRNAs, including nine upregulated and 34 downregulated miRNAs (Table 1); a minimal screening sketch is given at the end of this section. The differential miRNAs screened between different samples are shown in a histogram (Figure 5(a)), and a volcano plot (Figure 5(b)) was constructed to clarify the overall distribution of the differentially expressed miRNAs. A heat map (Figure 5(c)) was used to show the differential expression of miRNAs according to unsupervised hierarchical clustering. The same types of samples could generally be clustered in the same cluster, and miRNAs in the same cluster may have similar biological functions.

miRNAs bind to target mRNAs (in plants, primarily the protein coding region) and regulate gene expression by cutting the target mRNA or inhibiting its translation. In addition, animal miRNAs have been reported to target the 5′ end of the RNA as well as the coding region. We predicted the target genes of the differentially expressed miRNAs (Figure 6(a)). We also used Fisher's exact test to perform cellular component, biological process, and molecular function enrichment analyses of the target genes predicted for the differentially expressed miRNAs in each group, and created acyclic graphs using topGO (Figures 6(b)-6(d)). These acyclic graphs provide a graphical representation of the target gene GO enrichment analysis results, including the GO nodes and their hierarchical relationships; in each graph, the ten most prominently enriched GO terms are drawn as rectangles, colored from yellow to red with increasing enrichment significance, and labeled with their GO ID and GO term. miRNA differential analysis combined with target gene enrichment analysis showed that hsa-miR-760 and hsa-miR-423-3p were associated with the ITGA5 and HDAC5 genes, respectively. ITGA5 is involved in wound repair and vascularization [30], and upregulation of hsa-miR-760 may promote the expression of ITGA5, thereby accelerating wound vascularization and healing. In contrast, HDAC5 has a negative effect on angiogenesis [31], and downregulation of hsa-miR-423-3p may promote wound vascularization by reducing the expression of HDAC5.

3.6. miRNA Pathway Analysis. Pathway analysis helps to identify the cellular pathways involving the differentially expressed miRNAs. We selected the 20 most significant signal pathways (Figure 7(a)), among which the phosphatidylinositol-3-kinase-protein kinase B (PI3K-Akt) (Figure 7(b)) and mitogen-activated protein kinase (MAPK) (Figure 7(c)) pathways showed the greatest difference between ADSC-Exo and HFF-Exo in promoting the survival of prefabricated skin flaps. Several studies [32,33] have reported that the PI3K-Akt and MAPK signaling pathways are closely related to wound vascularization. The differential miRNAs may thus enhance endothelial cell proliferation and motility and promote angiogenesis by activating the PI3K-Akt and MAPK signaling pathways.
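Returning to the screen of Section 3.3.2, the sketch below shows the filtering step applied to DESeq2-style output, using the thresholds stated in this paper (|log2(fold change)| ≥ 1, P < 0.05); the rows here are invented:

```python
# Sketch of the differential-expression screen: DESeq2-style results
# filtered at |log2FC| >= 1 and P < 0.05, as stated in the text.
def screen_differential(results, lfc_cut=1.0, p_cut=0.05):
    """results: list of (miRNA, log2_fold_change, p_value).
    Returns (upregulated, downregulated) miRNA lists."""
    up = [m for m, lfc, p in results if lfc >= lfc_cut and p < p_cut]
    down = [m for m, lfc, p in results if lfc <= -lfc_cut and p < p_cut]
    return up, down

rows = [("hsa-miR-760", 2.3, 0.004),     # upregulated in ADSC-Exo
        ("hsa-miR-423-3p", -1.8, 0.010), # downregulated in ADSC-Exo
        ("hsa-miR-21-5p", 0.2, 0.700)]   # unchanged
print(screen_differential(rows))
```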
3.7. ADSC-Exo Regulate miRNA and Gene Expression Affecting HUVEC Proliferation. We evaluated the effects of ADSC-Exo and HFF-Exo on HUVEC proliferation by EdU assays. The rate of cell proliferation (Figures 8(a) and 8(b)) was significantly higher in the ADSC-Exo group compared with the HFF-Exo group (P < 0.01). We also investigated the effects of ITGA5 and HDAC5 on vascularization by coculturing HUVECs with 100 μg/ml ADSC-Exo or HFF-Exo for 72 h, extracting total RNA and protein, and detecting the expression levels of the above miRNAs and genes. Expression levels of hsa-miR-423-3p and HDAC5 were significantly lower in the ADSC-Exo group compared with the HFF-Exo group, while expression levels of hsa-miR-760 and ITGA5 were significantly higher in the ADSC-Exo group (P < 0.01) (Figures 8(c)-8(e)). These results suggested that ADSC-Exo promote neovascularization of artificial dermal prefabricated flaps by regulating the expression of ITGA5 and HDAC5.

Discussion

Prefabricated flaps were first used successfully in animal models in the 1970s and have since been continuously refined for clinical use. Prefabricated flaps expand the range of the flap donor area and provide a more adequate and stable blood supply. However, prefabricated flaps have limited thickness and are not wear resistant, and are thus not suitable for the repair of certain areas such as joints. Artificial dermis can increase tissue thickness, making the repaired areas wear resistant while limiting contracture. New prefabricated flaps involving a combination of artificial dermis and prefabricated flaps can improve blood perfusion, but their application is limited by the lack of factors to promote neovascularization and angiogenesis. Recent studies showed that miRNAs and proteins contained in exosomes act as mediators of intercellular information transmission [34,35]. In addition, numerous studies have shown that stem cell exosomes can accelerate wound repair by promoting wound vascularization, with no risk of stem cell-induced tumorigenesis [36,37]. However, the proangiogenic effects of ADSC-Exo in prefabricated flaps have not previously been reported. Therefore, we constructed artificial dermal prefabricated flaps and transplanted them into wounds in a rat model, and injected them with ADSCs, HFFs, ADSC-Exo, and HFF-Exo. We observed and compared the vascularization of the prefabricated flaps in each group. There was no significant difference in the thickness or vascularization of the artificial dermal prefabricated flaps between the ADSC and ADSC-Exo groups (P > 0.05), and both of these performed significantly better than the HFF and HFF-Exo groups (P < 0.05). ADSC-Exo thus significantly promoted vascularization and improved survival of the transplanted artificial dermal prefabricated flaps.
Previous studies [16] showed that the thickness and blood perfusion of artificial dermal prefabricated flaps were superior to those of conventional prefabricated flaps, but the application of artificial dermal prefabricated flaps was limited by the lack of neovascularization and proangiogenic factors. Exosomes are membranous vesicles secreted by cells into the extracellular space, where they play a role in intercellular communication. They can be extracted from cell culture media by various methods, including centrifugation, filtration, and ion-exchange chromatography [10]. The size of exosomes can be determined by TEM and NTA, and proteins such as CD63, CD81, TSG101, and HSP70, which are usually enriched in exosomes, can be identified by western blot [15]. Numerous recent studies [38-40] showed that mesenchymal stem cell-derived exosomes (MSC-Exo) promoted the repair of tissue damage, leading to great progress in wound repair. MSC-Exo were also shown to induce the proliferation and migration of vascular endothelial cells, promote angiogenesis, and reduce apoptosis of endothelial cells, indicating important roles in injury repair and vascularization [10,39]. In the current study, we therefore applied ADSC-Exo to artificial dermal prefabricated flaps to overcome the issue of insufficient vascularization. Compared with HFF-Exo, the application of ADSC-Exo to the artificial dermal prefabricated flap resulted in better flap repair and vascularization. However, the specific mechanism by which ADSC-Exo promote the vascularization of artificial dermal prefabricated flaps remains unclear.

The above results suggest that ADSC-Exo may play an important role in the vascularization of prefabricated flaps. We therefore used NGS technology to analyze the spectrum of miRNAs in ADSC-Exo and HFF-Exo, and identified nine upregulated and 34 downregulated miRNAs (inclusion criteria: |log2(fold change)| ≥ 1 and P < 0.05). We then analyzed the GO enrichment and KEGG pathways of the differentially expressed miRNAs' target genes, and identified hsa-miR-760 and hsa-miR-423-3p as being closely related to the target genes ITGA5 and HDAC5, respectively. These two genes also play important roles in regulating angiogenesis through various signaling pathways, such as Smad and MAPK. Fibronectin is involved in the regulation of angiogenesis, and fibronectin in the extracellular matrix can induce neovascularization through the activation of endothelial cells via the ITGA5 gene [41]. In addition, ITGA5 has been shown to be closely related to transforming growth factor (TGF)-β superfamily signaling. ITGA5 can increase the activation of the TGF-β-induced Smad signaling pathway and promote the formation of new blood vessels [42]. Meanwhile, the formation of new blood vessels is regulated by the proangiogenic factor fibroblast growth factor 2 (FGF2) and the guidance factor Slit2. HDAC5 represses angiogenic gene expression in endothelial cells and may impede the formation of blood vessels by reducing the effects of FGF2 and Slit2 [31]. We therefore hypothesized that upregulation of hsa-miR-760 and downregulation of hsa-miR-423-3p in ADSC-Exo may promote the vascularization of prefabricated flaps through ITGA5 and HDAC5. We subsequently cocultured HUVECs with ADSC-Exo or HFF-Exo to determine their effects on cell proliferation, and detected the relative miRNA expression levels of hsa-miR-423-3p and hsa-miR-760 and the gene expression levels of ITGA5 and HDAC5.
These results supported the hypothesis that ADSC-Exo promote the proliferation of vascular endothelial cells by regulating the expression of ITGA5 and HDAC5, thereby increasing the neovascularization of artificial dermal prefabricated flaps. According to the KEGG analysis results, the target genes of the differential miRNAs were mainly enriched in the PI3K-Akt and MAPK pathways, and also in the focal adhesion, Ras signaling, and vascular smooth muscle contraction pathways, which are closely related to cell proliferation and migration. Many studies [32,33] have shown that the activation of these pathways can significantly promote angiogenesis and wound repair. We therefore suggest that ADSC-Exo are affected by the ischemic environment, leading to the enrichment and activation of the differential miRNAs in the above pathways, and thus promoting vascularization of the prefabricated flaps. Many other differentially expressed miRNAs in ADSC-Exo are associated with the vascularization of flaps, and further studies are needed to explore the mechanisms of these exosomal miRNAs in the vascularization of prefabricated flaps. In conclusion, the application of exosomal miRNAs may provide a new strategy to support the application of artificial dermal prefabricated flaps.
2020-02-06T09:09:32.705Z
2020-01-31T00:00:00.000
{ "year": 2020, "sha1": "50ece72cce17c02868f60bbda565826afac83c80", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/5293850", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79705cf1717debae210a1c70353166179e861acd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
12266082
pes2o/s2orc
v3-fos-license
Hybrid Predictive Control Based on High-Order Differential State Observers and Lyapunov Functions for Switched Nonlinear Systems

In this paper, a hybrid predictive controller is proposed for a class of uncertain switched nonlinear systems based on high-order differential state observers and Lyapunov functions. The main idea is to design an output feedback bounded controller and a predictive controller for each subsystem using high-order differential state observers and Lyapunov functions, to derive a suitable switching law to stabilize the closed-loop subsystem, and to provide an explicitly characterized set of initial conditions. For the whole switched system, based on the high-order differentiator, a suitable switching law is designed to ensure the stability of the whole closed-loop system. The simulation results for a chemical process show the validity of the controller proposed in this paper.

Introduction

A switched system is a typical hybrid dynamic system made up of a family of subsystems and a switching law. In recent years, the stabilization of constrained switched systems has become an attractive research subject [1]. Model predictive control (MPC) is a receding horizon control (RHC) method to handle constraints within an optimal control setting [2]. There have been many results demonstrating the performance of constrained MPC [3]. In MPC design, the initial feasibility of the optimization problem is always assumed. Due to uncertainties and constraints of the practical process, this assumption may not be satisfied. Furthermore, the set of initial conditions, starting from which a given MPC formulation is guaranteed to be feasible, has not been explicitly characterized.

In recent years, controller design methods based on Lyapunov functions have been developed, which can give an explicitly characterized set of initial conditions from which the closed-loop system is stable [4]. By embedding the Lyapunov-based design methods into the MPC design, we can obtain the set of initial conditions from which the closed-loop system is stable. In refs. [5,6], two Lyapunov-based predictive controllers were derived for constrained nonlinear systems. In refs. [7,8], two Lyapunov-based predictive controllers were proposed for constrained switched systems and constrained switched systems with uncertainties, respectively. In these papers, the states of the system were assumed to be measurable.

However, in real processes the system's states are often not measurable, and hence state-feedback controllers and switching laws cannot be realized. One method to overcome this difficulty is to construct a state observer that estimates the states used in the controller and the switching law. In ref. [9], an output feedback bounded controller was given for a class of nonlinear systems that were not switched. In ref. [10], a bounded nonlinear controller was given for a class of nonlinear switched systems without uncertainties and disturbances; but it was not guaranteed to be optimal with respect to a performance criterion that incorporates the requested performance in the design. In ref.
[11], a hybrid output feedback predictive controller was proposed for a class of switched nonlinear systems without uncertainties. In papers [9-11], the process states were estimated using a high-gain observer, but many adjustable parameters of the observer need to be chosen by experience. Sometimes a wrong selection of parameters can cause stability problems and an undesired transient performance of the observer. In refs. [12-14], a high-order differential state observer was designed to estimate the states of a nonlinear system; it has few parameters, each with an explicit meaning, and they are chosen according to the performance and stability of the observer.

In this paper, an output feedback hybrid predictive controller is proposed for a class of uncertain switched nonlinear systems based on high-order differential state observers and Lyapunov functions. The main idea is to design a hybrid predictive controller based on Lyapunov functions and high-order differential state observers, which switches between a bounded feedback controller and a predictive controller for each subsystem, and to provide an explicitly characterized set of initial conditions that stabilize the closed-loop subsystem. Here, we use high-order differentiators as state observers; this high-order differential state observer has a simple structure with few parameters. A suitable switching law based on the high-order differentiator is designed to guarantee the stability of the whole closed-loop system. Finally, the simulation results for a chemical process show the validity of the procedure proposed in this paper.

Problem Description

Consider the constrained switched nonlinear system

    x'(t) = f_σ(t)(x(t)) + G_σ(t)(x(t)) u_σ(t) + W_σ(t)(x(t)) θ_σ(t)(t),  ||u_σ(t)|| ≤ u_max,    (1)

where x ∈ R^n denotes the vector of continuous-time state variables, u_σ(t) denotes the vector of manipulated inputs taking values in a nonempty compact subset U := {u : ||u|| ≤ u_max}, where ||·|| is the Euclidean norm and u_max is the magnitude of the constraints. θ_σ(t) denotes the bounded uncertain parameter vector taking values in a nonempty compact subset Θ. σ : [0, ∞) → K := {1, ..., N} is the switching signal, assumed to be a piecewise continuous (from the right) function of time, i.e., σ(t) = k on each interval during which the kth subsystem is active, implying that only a finite number of switches occurs on any finite interval; t_k,in^r and t_k,out^r denote the times at which, for the rth time, the kth subsystem is switched in and out, respectively. It is assumed that all entries of the vector functions f_k, G_k, and W_k are sufficiently smooth and that the state trajectory x(t) is continuous everywhere.

The objective of this paper is to design a nonlinear output feedback predictive controller based on Lyapunov functions and a high-order differential state observer, for the case where state measurements are not available, for each mode of the uncertain switched nonlinear system given by Equation (1). Then, for the whole switched system, based on state estimations, a suitable switching law is designed to ensure the stability of the whole closed-loop system.

High-Order Differential State Observers

In order to construct an output feedback controller to stabilize the controlled system (1), we use high-order differential state observers [12-14] to estimate the unmeasurable states of the system (1).

Firstly, we give some assumptions. Assumption 1: Consider system (1); for every k ∈ K, there exist an integer r_k and a set of invertible coordinates, whose entries are nonlinear scalar functions of x, such that the system (1) takes a normal form in which the inverse dynamics subsystem is input-to-state stable (ISS) [9]. The following assumptions are given to reduce the influence of uncertainties; in particular, there exists a known constant b_k bounding the effect of the uncertain parameter θ_k (Assumption 5). Consider also the nominal counterpart of the normal form, obtained by setting the uncertain parameter to zero. This formula is different from formula (4) since it does not depend on the uncertain parameter θ_k; we also assume that this nominal subsystem is ISS.

The high-order differential state observer (HOD) for each mode can be described by equations (6)-(8). Note that the HOD is independent of the model of the original system (1).

Proposition 1. The HOD does not rely on the model of the estimated system; its parameters are chosen using (8); and it has the following characteristic: 1) the HOD is an asymptotically stable system.
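The HOD equations (6)-(8) themselves do not survive in this text, so the following Python sketch shows a generic linear high-order differentiator of the family cited in refs. [12-14]: a chain of integrators driven by output-error injection, with all gains scaled by a single bandwidth parameter. The gains, bandwidth, and test signal are illustrative assumptions, not the paper's values:

```python
# Generic high-order differentiator sketch (not the paper's exact HOD):
# a chain of integrators with error injection, gains scaled by one
# bandwidth parameter eps. Estimates y and its first two derivatives.
import math

def run_hod(y_meas, dt, eps=0.05, a=(3.0, 3.0, 1.0)):
    """y_meas: sampled output signal. Returns estimates [y, y', y'']
    at each step (Euler-integrated observer states)."""
    z = [0.0, 0.0, 0.0]
    est = []
    for y in y_meas:
        e = y - z[0]
        dz0 = z[1] + (a[0] / eps) * e          # gains a/eps^i place the
        dz1 = z[2] + (a[1] / eps**2) * e       # observer poles at -1/eps
        dz2 = (a[2] / eps**3) * e              # (triple pole for these a's)
        z = [z[0] + dt * dz0, z[1] + dt * dz1, z[2] + dt * dz2]
        est.append(list(z))
    return est

# Example: differentiate y = sin(t); z[1] should converge near cos(t)
dt = 1e-3
ts = [i * dt for i in range(5000)]
est = run_hod([math.sin(t) for t in ts], dt)
print(est[-1][1], math.cos(ts[-1]))  # estimated vs. true derivative
```

The single bandwidth parameter eps is the kind of "few parameters with explicit meaning" the text attributes to the HOD: smaller eps gives faster convergence at the cost of noise amplification.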
State Feedback Bounded Controller Based on Lyapunov Functions

We recall the design of a state feedback bounded controller in order to obtain the set of initial conditions from which the system is stable [9]. Define the tracking error variables e_i as the differences between the transformed states and the reference input and its time derivatives, and collect them in the tracking error vector e, where v is the reference input and v^(i) is its ith time derivative. The Lyapunov function is chosen as V_k = e'P_k e, where the positive-definite matrix P_k is chosen such that V_k is non-increasing along the closed-loop trajectories (the precise conditions are given in ref. [9]), and the corresponding level set of V_k is used to define the stability region. The continuous bounded control law (13)-(14) is then constructed as in ref. [9], where g_k,i denotes the ith column of the input matrix of the kth subsystem.

Remark 1. For convenience, this bounded controller (13)-(14) is denoted by B_k(x).

Remark 2. Here, the Lyapunov functions used in verifying the switching conditions at any given time are based on the state estimates. Note that these Lyapunov functions are in general different from the V_k used in the bounded controllers.

Based on this bounded controller (13)-(14), an estimate of the stability region is computed as a level set of V_k, given by (15), where the level c_k^max > 0 is the largest number for which the level set is contained in the region where the bounded controller satisfies the input constraints.

The robustness property of the bounded controller in (13)-(14) is formalized by the following proposition.

Proposition 2. Consider the system (1) for a fixed value σ(t) = k. Under Assumptions 1-4, compute the bounded control law (13)-(14) using the Lyapunov function V_k. Then, given any positive real number d_k, there exist positive real numbers such that, for initial conditions within the stability region estimate, the output of the closed-loop system converges to the d_k-neighborhood of the reference. (The proof is similar to the proof of Theorem 1 in ref. [9].)

Output Feedback Bounded Controller Based on State Estimations and Lyapunov Functions

In this section, we consider the case when some states of system (1) are not measurable. A bounded controller based on state estimations and Lyapunov functions is designed, and the stability region of initial conditions is characterized. Before designing the output feedback controller, we have to revise Assumption 1 so that the invertible coordinate transformation does not depend on the uncertain parameters. Based on the high-order differential state observer (6)-(8), the following proposition presents the output feedback controller used for each mode and characterizes its stability properties.

Proposition 3. Consider the nonlinear system (1) for a fixed mode σ(t) = k, and design the output feedback controller (17) by combining the bounded controller with the high-order differential state observer (6)-(8).
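The explicit formulas (13)-(14) are likewise garbled in this extraction; a common choice for such a bounded Lyapunov-based law in this literature (and in ref. [9]) is the Lin-Sontag-type formula sketched below for a single-input subsystem. The Lyapunov function, the example system, and u_max are illustrative assumptions:

```python
# Sketch of a Lin-Sontag-type bounded control law of the kind recalled
# from ref. [9]; single-input case, illustrative V(x) = x^2/2.
import math

def bounded_control(LfV, LgV, u_max):
    """Bounded state feedback: within the region where LfV <= u_max*|LgV|
    it both decreases V and respects the input constraint |u| <= u_max."""
    if LgV == 0.0:
        return 0.0
    num = LfV + math.sqrt(LfV**2 + (u_max * abs(LgV))**4)
    den = LgV**2 * (1.0 + math.sqrt(1.0 + (u_max * LgV)**2))
    return -(num / den) * LgV

# Example: scalar system x' = x + u, V = x^2/2 -> LfV = x^2, LgV = x
x, u_max = 2.0, 5.0
u = bounded_control(x * x, x, u_max)
print(u, abs(u) <= u_max)  # control action stays within the constraint
```

The level set on which this law respects the constraints is exactly the kind of explicitly characterized stability region estimate that (15) refers to.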
Remark 4. The kth closed-loop subsystem can be cast as a two-time-scale system given by (19), where e is a vector of the auxiliary error variables.

Proposition 4 establishes the existence of a set and of a controlled decay rate such that, once the state estimation error is smaller than a certain value (note that the decay rate can be controlled by adjusting the observer parameters), the state is within the output feedback stability region. Proposition 5 then states that, given any positive real number d_k, the closed-loop state converges to the corresponding neighborhood of the origin (see Appendix A).

Owing to the existence of parameter uncertainties and constraints, the initial feasibility of the MPC in (32) is not guaranteed. If it is infeasible, the control action is switched to the bounded controller (17). To describe the whole control action, we cast the kth subsystem (1) as a switched system of the form (33), where i : [0, ∞) → {1, 2} is the switching signal, which is assumed to be a piecewise continuous (from the right) function of time. When i(t) = 1, the control input takes the value of the MPC given by (21)-(31), i.e., the MPC is used; and when i(t) = 2, it takes the value of the bounded controller (17).

Theorem 1. If the switching conditions (35)-(37) are satisfied, then the whole closed-loop system is stable (see the proof in Appendix B).

Remark 7. The controller presented in Theorem 1 can be implemented using the following steps: 1) Given the system model (1) with constraints on the inputs, and a control Lyapunov function, design the bounded controller (17) with suitable parameters, and compute the stability regions (15) and (16); here the stability regions are only estimates, and since the states cannot be measured, only the state estimation is used in the controller design; also choose the Lyapunov function V_k^c for the system (19). 2) Design the MPC controller given by (21)-(31) with suitable parameters, and give the size of the ball to which the state is required to converge, d_max. 3) At the time of switching into the kth subsystem, check whether the state estimation belongs to the stability region; considering the constraints in Theorem 1, choose the switch-in and switch-out times satisfying (36) and (37), respectively. 4) When the MPC is infeasible, i.e., when the optimization problem in (32) cannot be solved, implement the bounded controller (17). 5) When the mth subsystem is switched in, the controller is based on the state estimation of the closed-loop system; if the state is in the neighborhood of the origin, then the closed-loop system is stable according to Proposition 4.

Remark 8. As in ref. [10], the time interval between two consecutive switches should be long enough to ensure that the estimation error has decreased to a sufficiently small value such that the closed-loop system is stable. Furthermore, the decision to switch is not based on the true state but on the state estimation at that time; the system remains stable according to Proposition 4.
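The implementation logic of Remark 7 and the i(t) switching of (33) can be summarized in a few lines; the sketch below is a schematic rendering with hypothetical function names (solve_mpc, bounded_control), not the paper's algorithm:

```python
# Sketch of the hybrid control logic: use the MPC action when its
# optimization is feasible, otherwise fall back to the bounded
# Lyapunov-based controller. All names are illustrative stand-ins for
# the paper's controllers (21)-(31) and (17).
def hybrid_control(x_hat, solve_mpc, bounded_control, V, c_max):
    """x_hat: state estimate from the HOD; V, c_max: Lyapunov function and
    level defining the stability region estimate for the active mode."""
    if V(x_hat) > c_max:
        # outside the characterized stability region: no guarantee applies
        raise ValueError("state estimate outside the stability region estimate")
    u, feasible = solve_mpc(x_hat)    # feasible=False if optimization fails
    if feasible:
        return u                      # i(t) = 1: predictive controller
    return bounded_control(x_hat)     # i(t) = 2: bounded fallback controller
```

Note that, consistent with Remark 8, the decision is driven entirely by the state estimate x_hat rather than the unmeasurable state.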
Simulation

Consider a continuously stirred tank reactor where three parallel, irreversible, first-order exothermic reactions of the form A → D, A → U, and A → R take place, where A is the reactant species, D is the desired product species, and U and R denote the by-product species. Under standard modeling assumptions, the mathematical model for the process takes the form given in ref. [8], where C_A denotes the concentration of species A, and the uncertain parameters take values in known bounded sets. For this system, we perform the coordinate transformation of Assumption 1. Two quadratic, positive-definite functions of the form V_k = e'P_k e are then used to synthesize two bounded nonlinear controllers (one for each mode). Note that these positive-definite functions are given for system (39). To estimate the stability regions, the level sets of the corresponding Lyapunov functions are used.

Figure 2. Closed-loop state (the reactor concentration C_A) profile.
Figure 4. The input Q profile.

Conclusion

In this paper, a hybrid predictive control method is proposed for a class of uncertain switched nonlinear systems with input constraints and unavailable state measurements. The main objectives were to design a hybrid controller which switches between a bounded controller and a predictive controller based on Lyapunov functions and a high-order differential state observer, with a suitable switching law to stabilize each closed-loop subsystem, and to provide an explicitly characterized set of initial conditions. For the whole switched system, a suitable switching law based on the state estimation was derived to ensure the stability of the whole closed-loop system. The simulation results for a continuously stirred tank reactor showed the validity of the controller proposed in this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of the People's Republic of China under Grants 61374004, 61004013, 61104007, and 60804033, the National Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20113705120003, the Higher Educational Science and Technology Program Foundation of Shandong Province under Grants J10LG28 and J11LG08, and the Doctoral Starting Research Fund of Qufu Normal University.

Appendix A. Proof of Propositions 4 and 5

The proof uses the results of the preceding propositions. By Proposition 4, the state estimation error decays to a sufficiently small value and the state remains in the output feedback stability region. This completes the proof of Proposition 5.

Appendix B. Proof of Theorem 1

(Similar to the proof of Theorem 1 in ref. [7].) Based on Propositions 3-5, we need only to prove that, with the switching law (35)-(37), the whole closed-loop system is still stable. As long as d_k is small enough, constraint (35) ensures that the initial conditions belong to the stability region when mode k is switched in; then, using the result of Proposition 5, mode k is stable. So we need only to prove stability at the switching times. If mode k is switched out and then switched back in, the feasibility of constraints (28)-(29) ensures that the value of V_k(x) continuously decreases. If this mode is not switched in, there exists at least some j such that mode j is active and the Lyapunov function V_j continues to decrease. Similar to the discussion before, constraint (35) ensures that V_j continues to be less than its value at the previous switch-in time.
2016-02-01T17:59:50.645Z
2013-08-31T00:00:00.000
{ "year": 2013, "sha1": "f3df8f8e2ef3a6523a4c075a1ecddd8352cff9f8", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36703", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "f3df8f8e2ef3a6523a4c075a1ecddd8352cff9f8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
35194338
pes2o/s2orc
v3-fos-license
The Ulster Medical Journal

… is an excellent chapter on pancreatic function tests which we possibly don't do enough of. Northern Ireland clinicians, who have excellent immunology and gastrointestinal hormone backup, might have liked more detail on the antibody tests for coeliac disease and their limitations and on fasting gastrin, which is probably as useful and a lot less bother than a Schilling test for diagnosis of pernicious anaemia. This book is probably not selective enough for use by trainees as a day to day text, but as a small, easy-to-read reference source it has few competitors. Kept on wards and gastroenterology units, it can be used to determine not only how to do a more esoteric test but also whether it is worth doing. With it to hand, one of the basic tenets of informed consent - that the doctor should know a bit more about the procedure than the patient - will surely be facilitated.
WILLIAM DICKEY

Gastrointestinal Emergencies. 2nd Ed. Edited by Mark B Taylor. Williams & Wilkins, Baltimore. Price £120.

This book aims to deal comprehensively with all possible emergencies involving the gastrointestinal tract. Each chapter is written by different contributors, most with multiple authors, who are renowned in their field. All but 3 of the 119 contributors are from North America. At 1064 pages it is clearly not a convenient handbook for ready consultation in the event of an emergency. It is an impressive textbook, which covers the whole range of emergency situations. It covers surgical as well as medical emergencies. Arguably some topics are included which do not immediately spring to mind as emergencies, e.g. space-occupying lesions of the liver, ascites and non-cardiac chest pain. Some topics which usually merit little mention, such as Boerhaave's syndrome (oesophageal perforation associated with vomiting) and typhlitis (bowel infarction of obscure origin in neutropenic patients), are well described. The chapter on foreign bodies of the upper oesophagus is particularly well covered. Each topic is considered in detail, often with helpful practical points, and management is described well beyond the immediate emergency episode. My criticism of this book is not so much the content but its format: it looks and feels too much like a traditional standard textbook. The formulation of management guidelines for acute emergencies, including gastrointestinal ones, is indeed of great current interest. Various bodies including the American Gastroenterology Association and British Society of Gastroenterology are engaged in producing guidelines for acute situations. These groups have applied the techniques of evidence based medicine so that "the strength of evidence" for any action is systematically documented. In this book the authors have reviewed the literature but without the same rigour. This resulted in two different sets of authors expressing different viewpoints in relation to the benefits of emergency ERCP in suspected acute gallstone pancreatitis. While this demonstrates that the issue is controversial, it would have been more helpful to the reader to have a single "evidence based" assessment of the literature. In situations where there was general agreement between groups of authors, such as the management of acute non-variceal upper gastrointestinal haemorrhage, repetition of some points in different chapters was tedious. This topic was divided into 5 chapters, each with a different aspect but inevitably with some overlap. The section required at least better editorial control and might have been simplified and improved if the whole topic had been written by a single set of authors. This textbook, in common with all textbooks, will suffer from becoming outdated very quickly. The same resource on computer, which could be rapidly updated, seems more appropriate.

Complications Colon & Rectal Surgery by Hicks. Williams & Wilkins Europe Ltd. £90.

The concept of a comprehensive guide to the prevention, recognition and treatment of complications of colorectal surgery is an attractive proposition. However, this is a disappointing attempt to fill this niche. As with many multi-author texts, the book lacks consistency of style. Some of the chapters (particularly "urological complications") fail to address the question of causation and prevention at all.
It would have been appropriate to deal with pelvic neural anatomy, where modern understanding of the autonomic nerve pathways has aided surgeons in reducing the incidence of nerve injury. The chapter on "miscellaneous conditions" seems to have little to do with surgical complications at all. The authors ignore much of the European literature and fail to discuss some contentious issues. For example, the chapter on sepsis concentrates inappropriately on intraluminal antibiotic prophylaxis and ignores the widespread use of antibiotic lavage. Many of the authors make dogmatic claims, not substantiated by published evidence. While most surgeons prefer mechanical bowel preparation, in fact several studies suggest that it is unnecessary. This literature is again ignored completely. I found it difficult to cope with the American style ("distalmost", "extirpative operations"). Certainly in the United Kingdom Urologists would not agree that "the most common reason for intra-operative call to the operating room is to place a urethral catheter". I cannot recommend this book to surgeons. It is readable and I quite enjoyed delving into the occasional chapter, particularly that on anal stenosis. However, if you are looking for guidance on the prevention and treatment of complications in colorectal surgery, regrettably first hand experience in a busy colorectal unit remains your best option.

S T IRWIN

The Transplantation and Replacement of Thoracic Organs - The present status of biological and mechanical replacement of the heart and lungs. Edited by D K C Cooper, L W Miller and G A Patterson. Kluwer Academic Publishers, London. ISBN 0 7923 8898 4. £235.

Few books become "the reference" textbook in their first edition. David Cooper has brought together the wealth of experience of the leading authorities in Cardiothoracic transplantation medicine and in this second edition he consolidates this. It comprises four sections, eighty chapters and some eight hundred and twenty four pages. This is a comprehensive and readable review of the "state of the art" in the field of Cardiothoracic transplantation and replacement of thoracic organs. The contributors are predominantly from the United States and the early chapters reflect an American perspective on the medico-legal aspects of brain death and the assessment of potential Donors and Recipients. I would recommend to a general audience the sections on the historical aspects of heart and lung transplantation. These give you a sense of how exciting and hazardous the early days of transplantation were. Physicians with patients who are recipients or potential recipients should find the middle sections on the clinical aspects of transplantation particularly useful in their practice. For the cardiovascular surgeons it is a textbook worth dipping into, especially if preparing for the final part of the FRCS. There are excellent reviews on operative technique and postoperative management, and numerous illustrations of surgical anatomy. Technophiles will find the last section on current and future advances in thoracic organ replacement a delight and hospital administrators will become decidedly queasy at the potential costs. In summary, the book is an excellent review which will probably grace the shelves of some well endowed hospital library and perhaps the occasional office of a physician with an interest in transplantation.

M J D ROBERTS

Angiology in Practice. Edited by A M Salmasi and A Strano. Kluwer Academic Publishers Group. pp 526. £188. ISBN 0 7923 4143 0.
The development of angiology, as a specialty in its own right, has tended, until recently, to be a European phenomenon. This book appears to be a joint venture between the United Kingdom and Italy with the authors, almost exclusively, coming from these two countries. The classification, epidemiology, pathophysiology and clinical manifestations of a wide range of arterial and venous diseases are addressed in 35 chapters. The scope of the book is extensive, which unfortunately leads to occasional excessive detail in relatively rare conditions and inadequate discussion on some of the more common vascular problems. This begs the question as to what group of clinicians would benefit from reading "Angiology in Practice"? I suspect it will be of most use to general physicians and cardiovascular surgeons. It should also be of benefit, although perhaps more as a reference text, to general practitioners, neurologists and interventional radiologists. The main advantage of this text is that it reminds clinicians of the systemic nature of cardiovascular disease. "Angiology in Practice" presents a wide range of conditions, not usually found in a single text, thus making it an asset to most post graduate libraries.

PAUL BLAIR

The first thing to be said about this book is that the title is somewhat misleading: it is neither solely a manual of infection, nor does it deal only with procedures. In fact, it covers a wide range of infection control practice and contains a wealth of useful and helpful information. The author covers, largely from a UK perspective, the organisation of an infection control programme, isolation policies, disinfection, problems with specific pathogens, prevention of infection and precautions for health care workers, etc. Much of what is contained can be adapted for local use and there are a number of helpful tables, including those which contain the incubation periods for a variety of infections, often very useful in daily practice. There are a number of useful line diagrams, usually adapted from other publications, e.g. figure 3.1 which outlines the resistance of micro-organisms to disinfectants. Indeed, the author could have made more use of diagrams and tables as the layout is largely textual in nature. In a book which contains a wealth of information and advice it is not surprising that there are one or two errors or points of disagreement. Figure 5.4, which describes the antigen and antibody response in HIV infection, is incorrect as the responses are juxtaposed, and one might disagree with the recommendation that the operating theatre should be routinely cleaned with disinfectants; a good detergent clean is adequate in most instances. The comprehensive nature of this book, the sensible approach of the author and the wealth of information contained therein ensures that this book will be considered by Microbiologists, Infection Control Nurses and other health care workers such as physiotherapy staff as a resource in their clinical practice and during training.

HILARY HUMPHREYS

Revision of Total Knee Arthroplasty. Engh. £125. Williams & Wilkins Europe Ltd.

Total knee arthroplasty has increased in use throughout the world dramatically in the past decade. In Northern Ireland we have seen primary knee arthroplasty grow from under one hundred in the late 1980s to a current total in excess of seven hundred and it is likely to expand to over 1000 in the next five years.
With this enormous growth in primary knee joint replacement has come an increase in the requirement for revision arthroplasty. The numbers at present are still quite small but with the use of arthroplasty in younger and younger patients the numbers will inevitably rise. Because revision surgery affects a small group of patients, experience in techniques and best methods of treatment must necessarily be limited. It is therefore essential to learn from larger centres throughout the world and to amend our practice accordingly. This book gathers together the experience of a wide group of surgeons mainly from North America and condenses their views into an excellent review of our current knowledge in this field. The book is written in an easily readable format with chapters dealing with all the main aspects of this type of surgery. The book has been divided into five main parts with sections dedicated to planning, technique and outcome and a dedicated
2018-05-08T17:48:06.275Z
1998-05-01T00:00:00.000
{ "year": 1998, "sha1": "5b31da6af60cf79e6e864570ece2a1c7cb849599", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5b31da6af60cf79e6e864570ece2a1c7cb849599", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
216144798
pes2o/s2orc
v3-fos-license
Invariants for Continuous Linear Dynamical Systems

Continuous linear dynamical systems are used extensively in mathematics, computer science, physics, and engineering to model the evolution of a system over time. A central technique for certifying safety properties of such systems is by synthesising inductive invariants. This is the task of finding a set of states that is closed under the dynamics of the system and is disjoint from a given set of error states. In this paper we study the problem of synthesising inductive invariants that are definable in o-minimal expansions of the ordered field of real numbers. In particular, assuming Schanuel's conjecture in transcendental number theory, we establish effective synthesis of o-minimal invariants in the case of semi-algebraic error sets. Without using Schanuel's conjecture, we give a procedure for synthesizing o-minimal invariants that contain all but a bounded initial segment of the orbit and are disjoint from a given semi-algebraic error set. We further prove that effective synthesis of semi-algebraic invariants that contain the whole orbit is at least as hard as a certain open problem in transcendental number theory.

Introduction

A continuous linear dynamical system (CDS) is a system whose evolution is governed by a differential equation of the form ẋ(t) = Ax(t), where A is a matrix with real entries. CDSs are ubiquitous in mathematics, physics, and engineering; they have been extensively studied as they describe the evolution of many types of systems (or abstractions thereof) over time. More recently, CDSs have become central in the study of cyber-physical systems (see, e.g., the textbook [3]). In the study of CDSs, particularly from the perspective of control theory, a fundamental problem is reachability, namely whether the orbit {x(t) : t ≥ 0} intersects a given target set Y ⊆ R^d. For example, when x(t) describes the state of an autonomous car (i.e., its location, velocity, etc.), Y may describe situations where the car is not able to stop in time to respond to a hazard. When Y is a singleton set, reachability is decidable [14, Theorem 2]. However, already when Y is a half-space it is open whether or not reachability is decidable. The latter decision problem is known in the literature as the continuous Skolem problem. Some partial positive results were given in [5] and [9]. The continuous Skolem problem is related to notoriously difficult problems in the theory of Diophantine approximation: specifically a procedure for the continuous Skolem problem would yield one for computing to arbitrary precision the Diophantine-approximation types of all real algebraic numbers [9]. In lieu of an algorithm to decide reachability, one approach is to find a set X that separates the orbit from Y. In order for this scheme to be useful, structural restrictions are placed on X to make it easy to verify that X contains the orbit and that it is disjoint from Y (indeed, if we give up either requirement, we can use as X either the orbit itself, or R^d \ Y, neither of which makes the problem any easier). Natural candidates for such structured sets are inductive invariants. These are sets that are invariant under the dynamics of the system. If X is an inductive invariant, proving that the orbit is contained in X amounts to proving that the starting point x(0) belongs to X, which is typically easy. Further, by restricting the class of sets under consideration (e.g., polyhedra, semi-algebraic sets, etc.), testing whether X intersects Y becomes, likewise, easy.
The papers [1,2] study o-minimal invariants for discrete linear dynamical systems. There it is proved that when the target Y is a semi-algebraic set, the question of whether there exists an o-minimal invariant disjoint from Y is decidable. Furthermore, if there is an o-minimal invariant then there is in fact a semi-algebraic invariant, which can moreover be constructed effectively. The present paper uses similar ideas, although the case of continuous linear dynamical systems differs in several important ways.

Main Contributions. We consider the following problem: given a CDS by means of a matrix A with rational entries, an initial point x_0 = x(0), and a semi-algebraic set Y of error states, decide whether there exists a set that is definable in some o-minimal expansion of the ordered real field and is (1) disjoint from Y, (2) invariant under the dynamics of the system, and (3) contains the initial point x_0. We show that in searching for such invariants it suffices to look among sets definable in the expansion of the reals with the real exponential function and trigonometric functions restricted to bounded domains. Moreover, assuming Schanuel's conjecture (a unifying conjecture in transcendental number theory), we prove that the existence of such an invariant is decidable, and that invariants can effectively be constructed when they exist. Without assuming Schanuel's conjecture we can decide a related problem, namely the question of whether there exists a set that is definable in an o-minimal expansion of the real field and is (1) disjoint from Y, (2) invariant under the dynamics of the system, and (3) meets the orbit of the initial point x_0. Notice that such a set, which could be called an eventual invariant, must contain all but a bounded initial segment of the orbit. We show that when such a set exists, it can be effectively constructed and moreover that it can be chosen to be a semi-algebraic set. Such an invariant can serve as a certificate that the orbit does not enter the error set Y infinitely often. The latter is a very difficult problem to decide, even when the target set is a half-space [8]. As mentioned earlier, for discrete linear dynamical systems the question of whether there exists a semi-algebraic invariant that contains the whole orbit is decidable [1,2]. We provide an explanation of why the analogous result for continuous systems is not easy to prove; this is by way of a reduction from a difficult problem that highlights the complications of continuous systems. The problem asks whether a given exponential polynomial of the form f(t) = a_1 e^{b_1 t} + · · · + a_n e^{b_n t} has zeros in a bounded interval, where the a_i, b_i are real algebraic numbers. Deciding whether f has zeros in a bounded region seems to be difficult because all the zeros have to be transcendental (a consequence of the Hermite-Lindemann Theorem), and they can be tangential, i.e., f never changes its sign, yet it has a zero.

Related Work. Invariant synthesis is a central technique for establishing safety properties of hybrid systems. It has long been known how to compute a strongest algebraic invariant [20] (i.e., a smallest algebraic set that contains the collection of reachable states) for an arbitrary CDS. Here an algebraic invariant is one that is specified by a conjunction of polynomial equalities.
If one moves to the more expressive setting of semi-algebraic invariants, which allow inequalities, then there is typically no longer a strongest (or smallest) invariant, but one can still ask to decide the existence of an invariant that avoids a given target set of configurations. This is the problem that is addressed in the present paper. Partial positive results are known, for example when strong restrictions on the matrix A are imposed, such as when all the eigenvalues are real and rational, or purely imaginary with rational imaginary part [15]. A popular approach in previous work has been to seek invariants that match a given syntactic template, which allows to reduce invariant synthesis to constraint solving [13,23,16]. While this technique can be applied to much richer classes of systems than those considered here (e.g., with discrete control modes and non-linear differential equations), it does not appear to offer a way to decide the existence of arbitrary semi-algebraic invariants. An alternative to the template approach for invariant generation involves obtaining candidate invariants from semi-algebraic abstractions of a system [21]. Another active area of current research lies in developing powerful techniques to check whether a given semi-algebraic set is actually an invariant [12,16]. Other avenues for analysing dynamical systems in the literature include bisimulations [6], forward/backward reach-set computation [4], and methods for directly proving liveness properties [22]. The latter depends on constructing staging sets, which are essentially semi-algebraic invariants. Often, questions about dynamical systems can be reduced to deciding whether a sentence belongs to the elementary theory of an appropriate expansion of the ordered field of real numbers. While the latter is typically undecidable, there are partial positive results, namely quasi-decidability in bounded domains, see [11] and the references therein. This can be used to reason about the dynamics of a system in a bounded time interval, under the assumption that it does not tangentially approach the set that we want to avoid. However, it seems unlikely that such results can be easily applied to the problems considered here.

The rest of the paper is organised as follows. In Section 2, we give the necessary definitions and terminology. In Section 3, we define cones, which are over-approximations of the orbit, and prove that they are in a certain sense canonical. The positive results assuming Schanuel's conjecture are subsequently given in this section. Section 4 is devoted to the effective construction of the semi-algebraic invariants, which allows us to state and prove the unconditional positive results. In Section 5, we give the aforementioned reduction from finding zeros of exponential polynomials.

Preliminaries

A continuous-time linear dynamical system is a pair ⟨A, x_0⟩, where A ∈ Q^{d×d} and x_0 ∈ Q^d. The system evolves in time according to the function x(t), which is the unique solution to the differential equation ẋ(t) = Ax(t) with x(0) = x_0. Explicitly this solution can be written as x(t) = e^{At}x_0. For t_0 ≥ 0, we write O(t_0) = {x(t) : t ≥ t_0} for the orbit from time t_0. An invariant from time t_0 is a set I ⊆ R^d that contains x(t_0) and is stable under applications of e^{At}, i.e., e^{At}I ⊆ I for every t ≥ 0; an invariant with no qualification is an invariant from time 0, i.e., one that contains x_0. Note that an invariant from time t_0 contains O(t_0). Given a set Y ⊆ R^d (referred to henceforth as an error set), we say that the invariant I avoids Y if the two sets are disjoint. We denote by R_0 the structure ⟨R, 0, 1, +, ·, <⟩. This is the ordered field of real numbers with constants 0 and 1.
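As a small numerical illustration of these definitions, the sketch below computes orbit points x(t) = e^{At}x_0 with scipy and samples a check that a candidate set is stable under e^{At}. The 2x2 matrix and the candidate invariant (a unit ball) are assumptions made purely for the example.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # rotation matrix: eigenvalues +/- i
x0 = np.array([1.0, 0.0])

def orbit_point(t):
    # x(t) = e^{At} x_0, the unique solution of x' = Ax with x(0) = x_0
    return expm(A * t) @ x0

def in_candidate_invariant(x):
    return np.linalg.norm(x) <= 1.0    # candidate I: the closed unit ball

# Here e^{At} is an isometry, so the ball is indeed invariant; the sampled
# check below illustrates the definition but is no substitute for a proof.
print(all(in_candidate_invariant(orbit_point(t))
          for t in np.linspace(0.0, 10.0, 200)))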
A sentence in the corresponding first-order language is a quantified Boolean combination of atomic propositions of the form P(x_1, . . . , x_n) > 0, where P is a polynomial with integer coefficients and x_1, . . . , x_n are variables. In addition to R_0, we also consider its following expansions: R_exp, obtained by expanding R_0 with the real exponentiation function x → e^x; R_RE, obtained by expanding R_0 with the restricted elementary functions, namely x → e^x|[0,1], x → sin x|[0,1], and x → cos x|[0,1]; R_RE^exp, obtained by expanding R_exp with the restricted elementary functions. Tarski famously showed that the first-order theory of R_0 admits quantifier elimination; moreover the elimination is effective and therefore the theory is decidable [24, Theorem 37]. It is an open question whether the theory of the reals with exponentiation (R_exp) is decidable; however decidability was established subject to Schanuel's conjecture by Macintyre and Wilkie. For R = R_0, the ordered field of real numbers, R_0-definable sets are known as semi-algebraic sets.

Remark 2.1. There is a natural first-order interpretation of the field of complex numbers C in the field of real numbers R. We shall say that a set A of complex vectors is definable if its image under this interpretation is a definable set of real vectors.

A totally ordered structure ⟨M, <, . . .⟩ is said to be o-minimal if every definable subset of M is a finite union of intervals. Tarski's result on quantifier elimination implies that R_0 is o-minimal. The o-minimality of R_exp is shown in [27], and the o-minimality of R_RE and R_RE^exp is due to [25,26]. A semi-algebraic invariant is one that is definable in R_0. An o-minimal invariant is one that is definable in an o-minimal expansion of R_exp.

Orbit Cones

In this section we define orbit cones, an object that plays a central role in the subsequent results. They can be thought of as over-approximations of the orbit that have certain desirable properties; moreover they are canonical in the sense that any other invariant must contain a cone.

Jordan Normal Form

Let ⟨A, x_0⟩ be a continuous linear dynamical system. The exponential of a square matrix A is defined by its formal power series as e^A = Σ_{n≥0} A^n / n!. Let λ_1, . . . , λ_k be the eigenvalues of A, and recall that when A ∈ Q^{d×d}, all the eigenvalues are algebraic. We can write A in Jordan Normal Form as A = PJP^{-1}, where P ∈ C^{d×d} is an invertible matrix with algebraic entries, and J = diag(B_1, . . . , B_k) is a block-diagonal matrix where each block B_l is a Jordan block that corresponds to the eigenvalue λ_l, with λ_l on the diagonal and 1s on the superdiagonal. From the power series, we can write e^{At} = Pe^{Jt}P^{-1}. Further, e^{Jt} = diag(e^{B_1 t}, . . . , e^{B_k t}). For each 1 ≤ l ≤ k, write B_l = Λ_l + N_l, where Λ_l is the d_l × d_l diagonal matrix diag(λ_l, . . . , λ_l) and N_l is the d_l × d_l matrix diag_2(1, . . . , 1), where diag_j(·) is the j-th diagonal matrix, with other entries zero. The matrices Λ_l and N_l commute, since the former is a diagonal matrix. A fundamental property of matrix exponentiation is that if matrices A, B commute, then e^{A+B} = e^A e^B. Thus, we have

e^{Jt} = e^{diag(Λ_1 t + N_1 t, . . . , Λ_k t + N_k t)} = diag(e^{λ_1 t}, . . . , e^{λ_k t}) e^{diag(N_1 t, . . . , N_k t)},

where by diag(e^{λ_1 t}, . . . , e^{λ_k t}) we mean the d × d diagonal matrix that has the entry e^{λ_1 t} written d_1 times, the entry e^{λ_2 t} written d_2 times, and so on. It will always be clear from the context whether we repeat the entries because of their multiplicity or not.
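The decomposition just described can be reproduced symbolically. The sketch below, with an arbitrarily chosen 2x2 example, uses sympy to compute a Jordan form A = PJP^{-1} and to confirm the identity e^{At} = Pe^{Jt}P^{-1}.

import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[2, 1],
               [0, 2]])        # a single Jordan block for the eigenvalue 2
P, J = A.jordan_form()         # A = P * J * P^{-1}

lhs = (A * t).exp()            # matrix exponential e^{At}
rhs = P * (J * t).exp() * P.inv()
print(sp.simplify(lhs - rhs))  # the zero matrix: the identity holds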
Matrices N_l are nilpotent, so their power series expansions are finite sums, i.e. polynomials in N_l t. More precisely, one can verify that e^{N_l t} = Σ_{j=0}^{d_l − 1} (t^j / j!) N_l^j. Write Q(t) = e^{diag(N_1 t, . . . , N_k t)}; from the equation above, the entries of Q(t) are polynomials in t with rational coefficients. Write the eigenvalues as λ_l = ρ_l + iω_l, so that diag(e^{λ_1 t}, . . . , e^{λ_k t}) = E(t)R(t), where E(t) = diag(e^{ρ_1 t}, . . . , e^{ρ_k t}) and R(t) = diag(e^{iω_1 t}, . . . , e^{iω_k t}). We have in this manner decomposed the orbit into an exponential E(t), a rotation R(t), and a simple polynomial Q(t): matrices that commute with one another. Having the orbit in such a form will facilitate the analysis done in the sequel.

Cones as Canonical Invariants

In a certain sense, the rotation matrix R(t) is the most complicated: because of it, the orbit is not even definable in R_exp. The purpose of cones is to abstract away this matrix by a much simpler subgroup of the complex torus T = {(τ_1, . . . , τ_k) ∈ C^k : |τ_1| = · · · = |τ_k| = 1}. To this end, consider the group of additive relations among the frequencies ω_1, . . . , ω_k:

S = {a ∈ Z^k : a_1ω_1 + · · · + a_kω_k = 0}.

The subgroup of the torus of interest respects the additive relations as follows:

T_ω = {(τ_1, . . . , τ_k) ∈ T : for all a ∈ S, τ_1^{a_1} · · · τ_k^{a_k} = 1}.

Its desirable properties are summarised in the following proposition: T_ω is semi-algebraic and can be effectively constructed, and the set of diagonals of {R(t) : t ≥ 0} is dense in T_ω.

Proof. Being an Abelian subgroup of Z^k, S has a finite basis; moreover this basis can be computed because of effective bounds, [19, Section 3]. To check that (τ_1, . . . , τ_k) belongs to T_ω, it suffices to check that τ_1^{a_1} · · · τ_k^{a_k} = 1 for (a_1, . . . , a_k) in the finite basis. This forms a finite number of equations, therefore T_ω is semi-algebraic. The fact that this is a subset of vectors of complex numbers is not problematic in this case because of the simple first-order interpretation in the theory of reals, see Remark 2.1. The second statement of the proposition is a consequence of Kronecker's theorem on inhomogeneous simultaneous Diophantine approximations, see [7, Page 53, Theorem 4]. The proof of a slightly stronger statement can also be found in [8, Lemma 4]. Examples can be found where the set of diagonals of {R(t) : t ≥ 0} is a strict subset of T_ω.

The orbit cone can now be defined by replacing the rotations with the subgroup of the torus:

C_{t_0} = {PE(t)diag(τ)Q(t)P^{-1}x_0 : t ≥ t_0, τ ∈ T_ω}.

As it turns out, for our purposes this approximation is not too rough. We prove that the cone is an inductive invariant and also a subset of R^d.

Lemma 3.3. For every δ ≥ 0, e^{Aδ}C_{t_0} ⊆ C_{t_0}.

Proof. Fix t ≥ t_0 and τ ∈ T_ω, and consider the point v = PE(t)diag(τ)Q(t)P^{-1}x_0. For δ ≥ 0, then we can write e^{Aδ}v as

e^{Aδ}v = PE(t + δ)R(δ)diag(τ)Q(t + δ)P^{-1}x_0.

The matrix R(δ)diag(τ) is equal to diag(τ′) for some τ′ ∈ T_ω. Otherwise said, the vector (e^{δω_1 i}τ_1, . . . , e^{δω_k i}τ_k) belongs to T_ω. Indeed this is the case because for any a ∈ S we have

(e^{δω_1 i}τ_1)^{a_1} · · · (e^{δω_k i}τ_k)^{a_k} = e^{δ(a_1ω_1 + · · · + a_kω_k)i} τ_1^{a_1} · · · τ_k^{a_k} = 1.

The fact that cones are subsets of R^d comes as a corollary of the following proposition, which is proved in Appendix A.

Proposition 3.4. Let A = PJP^{-1} as above, and let C_i ∈ C^{d_i×d_i} for i = 1, . . . , k, with dimensions compatible with the Jordan blocks of A, and such that for every i_1, i_2, if J_{i_1} is the complex conjugate of J_{i_2}, then C_{i_1} is the complex conjugate of C_{i_2}. Then Pdiag(C_1, . . . , C_k)P^{-1} has real entries.

The matrix E(t)diag(τ)Q(t) can be written as diag(C_1, . . . , C_k), where the C_i matrices satisfy the conditions of Proposition 3.4; hence the following corollary: the cone C_{t_0} is a subset of R^d.
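For rational frequencies, a basis of the relation group S is just an integer basis of the kernel of the row vector (ω_1, . . . , ω_k), and membership in T_ω can then be tested against that basis. The sketch below assumes rational frequencies, chosen arbitrarily for the example; for algebraic frequencies one needs the effective bounds of [19] cited above instead.

import sympy as sp
from functools import reduce

omega = sp.Matrix([[1, 2, 3]])          # example frequencies (rational case)

# Integer basis of S = { a in Z^k : a_1*w_1 + ... + a_k*w_k = 0 }
basis = []
for v in omega.nullspace():             # rational kernel vectors
    scale = reduce(sp.lcm, [sp.denom(x) for x in v])
    basis.append([int(x) for x in v * scale])   # clear denominators
print(basis)                            # e.g. [[-2, 1, 0], [-3, 0, 1]]

def in_T_omega(tau, basis, tol=1e-9):
    # tau belongs to T_omega iff tau_1^{a_1} ... tau_k^{a_k} = 1
    # for every vector a in the (finite) basis of S.
    for a in basis:
        prod = 1
        for tau_j, a_j in zip(tau, a):
            prod *= complex(tau_j) ** a_j
        if abs(prod - 1) > tol:
            return False
    return True

print(in_T_omega([1, 1, 1], basis))     # the identity lies in T_omega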
It is surprising that, already, the cones are a complete characterisation of o-minimal inductive invariants in the following sense.

Theorem 3.6. Let I be an o-minimal invariant that contains the orbit O(u) from some time u ≥ 0. Then there exists t_0 ≥ u such that C_{t_0} ⊆ I.

Proof sketch. Conceptually, the proof follows along the lines of its analogue in [2]. There are a few differences, namely that the entries of the matrix A in [2] are assumed to be algebraic, while this is not true for the entries of e^A. We define rays of the cone, r(τ, t_0) = {PE(t)diag(τ)Q(t)P^{-1}x_0 : t ≥ t_0}, which are subsets of the cone where τ ∈ T_ω is fixed. Then we prove that, for every ray, all but a finite part of it is contained in the invariant. This is done by contradiction: if a ray is not contained in the invariant, a whole dense subset of the cone can be shown not to be contained in the invariant, leading to a contradiction, since the invariant is assumed to contain the orbit. We achieve this using some results on the topology of o-minimal sets. The complete proof is deferred to Appendix B.

Another desirable property of cones is that they are R_exp-definable. Also, one can observe that for every t_0, the set {e^{At}x_0 : 0 ≤ t ≤ t_0} is definable in R_RE^exp (as we only need bounded restrictions of sin and cos to capture e.g. e^{iω_i t} up to time t_0). As an immediate corollary of Theorem 3.6, we have the following theorems: there exists an o-minimal invariant avoiding Y and containing O(u) for some u ≥ 0 if and only if there exists t_0 ≥ 0 such that C_{t_0} avoids Y, and in the positive case the cone C_{t_0} is itself such an invariant. Theorem 3.8 now allows us to provide an algorithm for deciding the existence of an invariant, subject to Schanuel's conjecture. Thus, the problem reduces to deciding the truth value of an R_RE^exp sentence expressing that C_{t_0} ∩ Y = ∅ for some t_0. The theory of R_RE^exp is decidable subject to Schanuel's conjecture, and therefore we can decide the existence of an invariant. Moreover, if an invariant exists, we can compute a representation of it by iterating over increasing values of t_0, until we find a value for which C_{t_0} ∩ Y = ∅.

Semi-algebraic Error Sets and Fat Trajectory Cones

In this section, we restrict attention to semi-algebraic invariants and semi-algebraic error sets, in order to regain unconditional decidability. Substitute s = e^t in the definition of the cone to get:

C_{t_0} = {PE(log s)diag(τ)Q(log s)P^{-1}x_0 : s ≥ e^{t_0}, τ ∈ T_ω}.

Written this way, observe that E(log s) = diag(s^{ρ_1}, . . . , s^{ρ_k}), which is almost semi-algebraic, apart from the fact that the exponents need not be rational.

Unconditional Decidability

We give the final, yet crucial, property of the cones. When the error set is semi-algebraic, it is possible to decide, unconditionally, whether there exists some cone that avoids the error set. Moreover the proof is constructive: it will produce the cone for which this property holds.

Theorem 4.1. For a semi-algebraic error set Y, it is (unconditionally) decidable whether there exists t_0 ≥ 0 such that C_{t_0} ∩ Y = ∅. Moreover, such a t_0 can be computed.

Proof. Define the set U = {Λ : for all τ ∈ T_ω, Pdiag(τ)ΛP^{-1}x_0 ∉ Y}. The set U can be seen to be semi-algebraic and thus is expressed by a quantifier-free formula that is a finite disjunction of formulas of the form ⋀_{l=1}^{m} R_l(x) ∼_l 0, where each R_l is a polynomial and ∼_l ∈ {>, =}. Let Λ(s) = E(log s)Q(log s), and notice that C_{t_0} ∩ Y = ∅ if and only if Λ(s) ∈ U for every s ≥ e^{t_0}. Thus, it is enough to decide whether there exists s_0 ≥ 1 such that for every s ≥ s_0, at least one of the disjuncts ⋀_{l=1}^{m} R_l(Λ(s)) ∼_l 0 is satisfied. Since the R_l(Λ(s)) are polynomials in entries of the form s^{ρ_i} and log(s), there is an effective bound s̄_0 such that for all s ≥ s̄_0, none of the values R_l(Λ(s)) change sign for any 1 ≤ l ≤ m. Hence we only need to decide whether there exists some s_0 ≥ s̄_0 such that for all s ≥ s_0 we have R_l(Λ(s)) ∼_l 0 for every 1 ≤ l ≤ m. Fix some l. After identifying the matrix Λ(s) with a vector in R^D for D = d^2, we see that R_l(Λ(s)) is a sum of terms of the form

a_i s^{n_{i,1}ρ_1 + · · · + n_{i,k}ρ_k} Q_{i,1}(log s) · · · Q_{i,D}(log s),

where the n_{i,j} are aggregations of the exponents for identical entries of diag(s^{ρ_1}, . . . , s^{ρ_k}), and the Q_{i,j}(log s) are polynomials obtained from the entries of Q(log s) under R_l. We can join the polynomials Q_{i,1}, . . . , Q_{i,D} into a single polynomial f_i, which would also absorb a_i. Thus, we rewrite R_l in the form

R_l(Λ(s)) = Σ_i s^{n_{i,1}ρ_1 + · · · + n_{i,k}ρ_k} f_i(log s),     (1)

where each f_i is a polynomial with rational coefficients (as the coefficients in Q(log s) are rational). In order to reason about the sign of this expression as s → ∞, we need to find the leading term of R_l(Λ(s)). This, however, is easy: the exponents n_{i,1}ρ_1 + . . . + n_{i,k}ρ_k are algebraic numbers, and are therefore susceptible to effective comparison. Thus, we can order the terms by magnitude. Then, we can determine the asymptotic sign of each coefficient f_i(log s) by looking at the leading term in f_i. We can thus determine the asymptotic behaviour of each R_l(Λ(s)), to conclude whether ⋀_{l=1}^{m} R_l(Λ(s)) ∼_l 0 eventually holds. Moreover, for rational s, every quantity above can be computed to arbitrary precision; therefore it is possible to compute a threshold s_0 after which, for all s ≥ s_0, ⋀_{l=1}^{m} R_l(Λ(s)) ∼_l 0 holds. This completes the proof.
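The sign-determination step of this proof amounts to sorting the terms s^{e_i} f_i(log s) by exponent and reading off the sign of the leading coefficient. The toy sketch below does this with rational exponents chosen for illustration; in the actual algorithm the exponents are algebraic numbers, compared effectively rather than numerically.

import sympy as sp

r = sp.symbols('r')                       # r stands for log s

# terms of R_l(Lambda(s)) as pairs (exponent e_i, polynomial f_i(r))
terms = [(sp.Rational(3, 2), sp.Poly(2*r - 5, r)),
         (sp.Rational(3, 2), sp.Poly(-r**2 + 1, r)),
         (sp.Integer(1),     sp.Poly(7, r))]

# aggregate the polynomials attached to equal exponents, as in the proof
agg = {}
for e, f in terms:
    agg[e] = agg.get(e, sp.Poly(0, r)) + f

e_max = max(agg)                          # dominant exponent of s
lead = agg[e_max].LC()                    # leading coefficient of its f_i
print('asymptotic sign:', sp.sign(lead))  # -1 here: the -r^2 term dominates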
Theorem 4.2. For a semi-algebraic set Y, it is decidable whether there exists an o-minimal invariant, disjoint from Y, that contains the orbit O(u) after some time u ≥ 0. Moreover, in the positive instances an R_exp-definable invariant can be constructed.

Proof. If there is an invariant I that contains O(u), for some u ≥ 0, then Theorem 3.6 implies that there exists some t_0 ≥ u such that C_{t_0} is contained in I. Consequently, the question that we want to decide is equivalent to the question of whether there exists a t_0 such that C_{t_0} ∩ Y = ∅. The latter is decidable thanks to Theorem 4.1. The effective construction follows from the fact that such a t_0 is computable and that the cone is R_exp-definable.

Effectively Constructing the Semi-algebraic Invariant

We now turn to show that in fact, for semi-algebraic error sets Y, we can approximate C_{t_0} with a semi-algebraic set such that if C_{t_0} avoids Y, so does the approximation. Intuitively, this is done by relaxing the "non semi-algebraic" parts of C_{t_0} in order to obtain a fat cone. This relaxation has two parts: one is to "rationalize" the (possibly irrational) exponents ρ_1, . . . , ρ_k, and the other is to approximate the polylogs in Q(log s) by polynomials.

Relaxing the exponents. We start by approximating the exponents ρ_1, . . . , ρ_k with rational numbers. We remark that naively taking rational approximations is not sound, as the approximation must also adhere to the additive relationships of the exponents. Let S = {a ∈ Z^k : a_1ρ_1 + · · · + a_kρ_k = 0}; thus, S captures the integer additive relationships among the ρ_i. Let ℓ = (ℓ_1, . . . , ℓ_k) and u = (u_1, . . . , u_k) be tuples of rational numbers such that ℓ_i ≤ ρ_i ≤ u_i for every i. Define

Box(ℓ, u) = {c ∈ R^k : ℓ_i ≤ c_i ≤ u_i for every i, and a_1c_1 + · · · + a_kc_k = 0 for every a ∈ S}.

Approximating polylogs. Let ε, δ > 0. We simply replace log s by r such that δ ≤ r ≤ s^ε. Note that it is not necessarily the case that δ ≤ log s ≤ s^ε, so this replacement is a priori not sound. However, for large enough s the inequalities do hold, which will suffice for our purposes.

We can now define the fat cone. Let ε, δ > 0 and ℓ = (ℓ_1, . . . , ℓ_k) and u = (u_1, . . . , u_k) as above; the fat orbit cone F_{s_0,ε,δ,ℓ,u} is the set

{Pdiag(s^{q_1}, . . . , s^{q_k})diag(τ)Q(r)P^{-1}x_0 : s ≥ s_0, τ ∈ T_ω, (q_1, . . . , q_k) ∈ Box(ℓ, u), δ ≤ r ≤ s^ε}.

That is, the fat cone is obtained from C_{t_0} with the following changes: E(log s) = diag(s^{ρ_1}, . . . , s^{ρ_k}) is replaced with diag(s^{q_1}, . . . , s^{q_k}), where the q_i range over approximations of the ρ_i that maintain the additive relationships; Q(log s) is replaced with Q(r) where δ ≤ r ≤ s^ε; and the variable s starts from s_0 (as opposed to e^{t_0}). We first show that the fat cone is semi-algebraic (the proof is in Appendix C), then proceed to prove that if there is a cone that avoids the error set, then there is a fat one that avoids it as well.

Lemma 4.4. Let Y ⊆ R^d be a semi-algebraic error set such that C_{t_0} ∩ Y = ∅ for some t_0 ∈ R. Then there exist δ, ε, s_0, ℓ, u as above such that 1. F_{s_0,ε,δ,ℓ,u} ∩ Y = ∅, and 2. for every t ≥ 0 it holds that e^{At} · F_{s_0,ε,δ,ℓ,u} ⊆ F_{s_0,ε,δ,ℓ,u}. The result is constructive, so when t_0 is given, the constants s_0, ε, δ, ℓ, u can be computed.
It follows that a corollary of this lemma, and Lemma 4.3, is a stronger statement than that of Theorem 4.2, namely one where R_exp is replaced by R_0. We state it here before moving on with the proof of Lemma 4.4.

Theorem 4.5. For a semi-algebraic set Y, it is decidable whether there exists an o-minimal invariant, disjoint from Y, that contains the orbit O(u) after some time u ≥ 0. Moreover, in the positive instances an invariant that is R_0-definable can be constructed.

The proof of Lemma 4.4 is given by the two corresponding steps. The second step, proving the invariance of the fat cone, is Lemma C.1 in Appendix C. We turn our attention to the first step.

Lemma 4.6. Let Y ⊆ R^d be a semi-algebraic error set, and let t_0 ∈ R be such that C_{t_0} ∩ Y = ∅. Then there exist δ, ε, s_0, ℓ, u as above such that F_{s_0,ε,δ,ℓ,u} ∩ Y = ∅.

Proof. We use the same analysis and definitions of U, R_l, ∼_l, Λ(s) as in the proof of Theorem 4.1, and focus on a single polynomial R_l. Recall that we had R_l(Λ(s)) = Σ_i s^{n_{i,1}ρ_1 + · · · + n_{i,k}ρ_k} f_i(log s) as in (1), where each f_i is a polynomial with rational coefficients. Denote ρ = (ρ_1, . . . , ρ_k). We show, first, how to replace the exponents vector ρ by any exponents vector in Box(ℓ, u) for appropriate ℓ, u, and second, how to replace log s by r where δ ≤ r ≤ s^ε for some appropriate δ and ε, while maintaining the inequality or equality prescribed by ∼_l. Denote by N the set of vectors n_i = (n_{i,1}, . . . , n_{i,k}) of exponents in (1). Let µ > 0 be such that for every n, n′ ∈ N, if ρ · (n − n′) ≠ 0 then |ρ · (n − n′)| > µ. That is, µ is a lower bound on the minimal difference between distinct exponents in (1). Observe that we can compute a description of µ, as the exponents are algebraic numbers. Let M = max_{n,n′∈N} ‖n − n′‖ (where ‖·‖ is the Euclidean norm in R^k). We can now choose ℓ and u such that ‖ρ − c‖ ≤ µ/(2M) for every c ∈ Box(ℓ, u).

Claim 4.7. For every c ∈ Box(ℓ, u) and n, n′ ∈ N, if ρ · (n − n′) > 0 then c · (n − n′) > µ/2.

Proof of Claim 4.7. Suppose that ρ · (n − n′) > 0; then by the above we have ρ · (n − n′) > µ, and hence c · (n − n′) ≥ ρ · (n − n′) − ‖ρ − c‖ · ‖n − n′‖ > µ − (µ/(2M)) · M = µ/2.

It follows from Claim 4.7 and from the definition of Box(ℓ, u) that, intuitively, every c ∈ Box(ℓ, u) maintains the order of magnitude of the monomials s^{n_{i,1}ρ_1 + · · · + n_{i,k}ρ_k} in R_l(Λ(s)). More precisely, let Λ′(s) = diag(s^{c_1}, . . . , s^{c_k})Q(log s) for some c ∈ Box(ℓ, u); then the exponent of the ratio of every two monomials in R_l(Λ′(s)) has the same (constant) sign as the corresponding exponent in R_l(Λ(s)). Moreover, the exponents of distinct monomials in R_l(Λ(s)) differ by at least µ/2 in R_l(Λ′(s)).

We now turn our attention to the log s factor. First, let s_0 be large enough that f_i(log s) has constant sign for every s ≥ s_0. We can now let δ be large enough such that for every r ≥ δ, the sign of f_i(r) coincides with the sign of f_i(log s) for every s ≥ s_0. It remains to give an upper bound on r of the form s^ε such that plugging f_i(r) instead of f_i(log s) does not change the ordering of the terms (by their magnitude) in R_l. Let B be the maximum degree of all polynomials f_i in (1), and define ε = µ/(3B) (in fact, any ε < µ/(2B) would suffice). Then we have that, for s ≥ s_0, f_i(r) has the same sign as f_i(log s) for every δ ≤ r ≤ s^ε (by our choice of δ), and plugging s^ε instead of log s does not change the ordering of the terms (by their magnitude) in R_l. Since the exponents of the monomials in R_l(Λ′(s)) differ by at least µ/2, it follows that their order is maintained when replacing log s by δ ≤ r ≤ s^ε. Let Λ″(s) = diag(s^{c_1}, . . . , s^{c_k})Q(r) for some c ∈ Box(ℓ, u) and δ ≤ r ≤ s^ε. Then, by our choice of ε, the dominant term in R_l(Λ″(s)) is the same as that in R_l(Λ(s)). Therefore, for large enough s, the signs of R_l(Λ″(s)) and R_l(Λ(s)) are the same. Note that since C_{t_0} ∩ Y = ∅, then w.l.o.g. R_l(Λ(s)) ∼_l 0 for every l. Thus, by repeating the above argument for each R_l, we can compute s_0 ∈ R, ε > 0, δ ∈ R, and ℓ, u ∈ Q^k such that F_{s_0,ε,δ,ℓ,u} ∩ Y = ∅, and we are done.
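The constants in this argument are directly computable. The sketch below evaluates µ (the least nonzero gap |ρ·(n − n′)|), M (the largest ‖n − n′‖), and the resulting admissible radius µ/(2M) of Box(ℓ, u) around ρ, for made-up sample data; by the Cauchy-Schwarz step in Claim 4.7, any c that close to ρ preserves every strict ordering of exponents.

import itertools, math

rho = [1.0, math.sqrt(2)]                 # example real parts rho_1, rho_2
N = [(2, 0), (1, 1), (0, 3)]              # example exponent vectors n_i

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

diffs = [tuple(p - q for p, q in zip(n, m))
         for n, m in itertools.permutations(N, 2)]
gaps = [abs(dot(rho, d)) for d in diffs if abs(dot(rho, d)) > 1e-12]
mu = min(gaps)                            # least nonzero exponent gap
M = max(math.sqrt(dot(d, d)) for d in diffs)
print(mu, M, mu / (2 * M))                # admissible box radius around rho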
A Reduction from Zeros of an Exponential Polynomial

In Theorem 4.5, we showed unconditional decidability for the question of whether there exists an invariant containing the orbit O(u), for some u ≥ 0. Even though we construct such an invariant, it cannot be used as a certificate proving that the orbit never enters the error set; however it is a certificate that the orbit of the system does not enter Y after time u. In this section we give indications that deciding whether there exists an invariant that takes into account the orbit up to time u is difficult. More precisely, we will reduce a problem about zeros of a certain exponential polynomial to the question of whether there exists a semi-algebraic invariant disjoint from Y containing O(0).

Remark 5.1. In the setting of discrete linear dynamical systems, the existence of a semi-algebraic invariant from time t_0 immediately implies the existence of one from time 0. This is because the system goes through finitely many points from 0 to t_0, which can be added one by one to the semi-algebraic set. In this respect CDSs are more complicated to analyse.

The problem that we reduce from can be stated as follows. We are given as input real algebraic numbers a_1, . . . , a_n, ρ_1, . . . , ρ_n, and t_0 ∈ Q, and asked to decide whether the exponential function

f(t) = a_1e^{ρ_1 t} + · · · + a_ne^{ρ_n t}

has any zeros in the interval [0, t_0]. This is a special case of the so-called Continuous Skolem Problem [5,9]. While there has been progress on characterising the asymptotic distribution of complex zeros of such functions, less is known about the real zeros, and we lack any effective characterisation, see [5,9] and the references therein. The difficulty of knowing whether f has a zero in the specified region is because (a) all the zeros have to be transcendental (a consequence of the Hermite-Lindemann Theorem) and (b) there can be tangential zeros, that is, f has a zero but it never changes its sign. See the discussion in [5, Section 6]. Finding the zeros of such a polynomial is a special case of the bounded continuous Skolem problem. We note that when the ρ_i are all rational the problem is equivalent to a sentence of R_0 (and hence decidable) by replacing t = log s.
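Numerically, the easy half of this problem is detecting non-tangential zeros: a sign change of f on [0, t_0] certifies a zero. The sketch below samples f with mpmath on made-up inputs and reports bracketing intervals; tangential zeros, where f touches 0 without changing sign, are exactly the case such sampling can never certify, which is the source of the hardness discussed above.

import mpmath as mp

a   = [1, -3, 1]                # example algebraic coefficients a_i
rho = [0, 1, 2]                 # example algebraic exponents rho_i
t0  = mp.mpf(2)

def f(t):
    # f(t) = a_1 e^{rho_1 t} + ... + a_n e^{rho_n t}
    return mp.fsum(ai * mp.e**(ri * t) for ai, ri in zip(a, rho))

ts = mp.linspace(0, t0, 400)
brackets = [(ts[i], ts[i + 1]) for i in range(len(ts) - 1)
            if mp.sign(f(ts[i])) * mp.sign(f(ts[i + 1])) < 0]
print(brackets)                 # here: one sign change near t = 0.962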
The rest of this section is devoted to the proof of the following theorem.

Theorem 5.2. Deciding whether there exists a semi-algebraic invariant that contains O(0) and is disjoint from a given semi-algebraic error set is at least as hard as deciding whether f has a zero in [0, t_0].

Proof. We are given real algebraic numbers a_1, . . . , a_n, ρ_1, . . . , ρ_n and t_0 ∈ Q. Without loss of generality we can assume that ρ_1, . . . , ρ_n are all nonnegative, since e^{ρt}f(t) = 0 if and only if f(t) = 0, where ρ is larger than all of ρ_1, . . . , ρ_n. Since every ρ_i is algebraic, there is a minimal polynomial p_i that has ρ_i as a simple root. Let A be the d × d companion matrix of the polynomial p_1(x) · · · p_n(x)x^2. The numbers ρ_i are eigenvalues of A of multiplicity one, and the latter also has zero as an eigenvalue of multiplicity two. In addition to those, the matrix A generally has other (complex) eigenvalues as well. We put A in Jordan normal form, P^{-1}AP = J, where J is made of two diagonal blocks, Ã and B: the block Ã gathers the Jordan blocks of the eigenvalues ρ_1, . . . , ρ_n and of the zero eigenvalue, while B gathers the remaining eigenvalues. Let x̃_0 be the vector that has n + 2 ones and the rest, d − (n + 2), zeros, whose purpose is to ignore the contribution of the eigenvalues in matrix B in the system. To simplify notation, since x̃_0 is ignoring the contribution of the matrix B, the dynamics of the system ⟨J, x̃_0⟩ can be assumed to be the same as those of ⟨Ã, (1, . . . , 1)⟩.

Focus on a single eigenvalue, i.e. on the graph {(e^{ρt}, t) : t ≥ 0}, as the analysis will easily generalise to the CDS in question. This is itself a CDS, so terminology such as orbits etc. makes sense. The challenge is to find a family of tubes around this exponential curve such that (a) all the tubes together with {(y, t) : t ≥ t_0} are invariants and (b) the tubes are arbitrarily close approximations of the curve. We achieve this by the following families of polynomials: under-approximations are given by the family of Taylor polynomials indexed by n ∈ N, P_n(t) = Σ_{j=0}^{n} (ρt)^j / j!; over-approximations are given by a family Q_{n,µ}(t), indexed by n ∈ N and µ > 1. Define

I_{n,µ} = {(y, t) : P_n(t) ≤ y ≤ Q_{n,µ}(t), 0 ≤ t ≤ t_0} ∪ {(y, t) : t > t_0}.

It is clear from Taylor's theorem and the assumption that ρ > 0 that, by taking n → ∞ and µ → 1+, the sets I_{n,µ} are arbitrarily precise approximations of the graph {(e^{ρt}, t) : t ≥ 0}; what remains to show is that they are invariant. The proof is in Appendix D. Since the analysis was done on the CDS ⟨J, x̃_0⟩, whose entries are not rational in general, before proceeding with the proof of Theorem 5.2 we need the following lemma, which says that changing basis does not have an effect on the decision problem at hand.
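The basic properties of these under-approximations are easy to check numerically. The sketch below verifies, for a sample ρ and a few values of n, that the Taylor polynomials P_n stay below e^{ρt} on [0, t_0] and that the largest gap shrinks as n grows (Properties 1 and 3 in Appendix D); the numbers are illustrative only.

import math

rho, t0 = 1.5, 2.0

def P(n, t):
    # nth Taylor polynomial of e^{rho t} about 0
    return sum((rho * t) ** j / math.factorial(j) for j in range(n + 1))

grid = [i * t0 / 100 for i in range(101)]
for n in (2, 5, 10):
    assert all(P(n, t) <= math.exp(rho * t) for t in grid)    # Property 1
    print(n, max(math.exp(rho * t) - P(n, t) for t in grid))  # gap -> 0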
A Proof of Proposition 3.4

Proposition 3.4. Let A = PJP^{-1} as above, and let C_i ∈ C^{d_i×d_i} for i = 1, . . . , k, with dimensions compatible with the Jordan blocks of A, and such that for every i_1, i_2, if J_{i_1} is the complex conjugate of J_{i_2}, then C_{i_1} is the complex conjugate of C_{i_2}. Then Pdiag(C_1, . . . , C_k)P^{-1} has real entries.

Write P = (P_1 · · · P_k) with P_i having dimension d × d_i for i ∈ {1, . . . , k}. The condition A = PJP^{-1} is equivalent to AP = PJ, which in turn is equivalent to AP_i = P_iJ_i for i ∈ {1, . . . , k}. Now if AP_i = P_iJ_i then, taking complex conjugates (A being real), A·conj(P_i) = conj(P_i)·conj(J_i), and hence we may assume without loss of generality that for i_1, i_2 ∈ {1, . . . , k}, if J_{i_1} = conj(J_{i_2}) then P_{i_1} = conj(P_{i_2}). Equivalently, we may assume that conj(P) = PM for M a permutation matrix that interchanges column (i_1, j) of P with column (i_2, j) whenever J_{i_1} = conj(J_{i_2}). Then we have

conj(Pdiag(C_1, . . . , C_k)P^{-1}) = PM diag(conj(C_1), . . . , conj(C_k)) M^{-1}P^{-1} = Pdiag(C_1, . . . , C_k)P^{-1}.

Hence Pdiag(C_1, . . . , C_k)P^{-1} is real.

Before proceeding with the proof, we give some useful definitions and properties of o-minimal theories. Consider an o-minimal theory R. O-minimal theories admit the following properties (see [10] for precise definitions and proofs). 1. For an R-definable set S ⊆ R^d, its topological closure is also R-definable. 2. For an R-definable function f : S → R, the number inf{f(x) : x ∈ S} is R-definable (as a singleton set).

We recall the definition of the orbit cone, C_{t_0} = {PE(t)diag(τ)Q(t)P^{-1}x_0 : t ≥ t_0, τ ∈ T_ω}, and define the orbit rays for τ ∈ T_ω: r(τ, t_0) = {PE(t)diag(τ)Q(t)P^{-1}x_0 : t ≥ t_0}. Fix I to be an o-minimal invariant, with O ⊆ I, definable in R. To prove Theorem 3.6, we begin by making the following claims of increasing strength.

Claim B.1. For every τ ∈ T_ω there exists t_0 ≥ 0 such that r(τ, t_0) ⊆ I or r(τ, t_0) ∩ I = ∅.

Proof of Claim B.1. Fix τ ∈ T_ω. Then the set {t ≥ 0 : PE(t)diag(τ)Q(t)P^{-1}x_0 ∈ I} is R-definable and hence comprises a finite union of intervals. If this set contains an unbounded interval then there exists t_0 such that r(τ, t_0) ⊆ I; otherwise there exists t_0 such that r(τ, t_0) ∩ I = ∅.

Claim B.2. For every τ ∈ T_ω there exists t_0 ≥ 0 such that r(τ, t_0) ⊆ I.

Proof of Claim B.2. We strengthen Claim B.1. Assume by way of contradiction that there exist τ ∈ T_ω and t_0 ∈ R such that r(τ, t_0) ∩ I = ∅. Without loss of generality assume that t_0 > 1, and consider e^{-A} · r(τ, t_0). Recall from the analysis of e^{At} the decomposition e^{At} = PE(t)R(t)Q(t)P^{-1}, and let τ′ ∈ T_ω be such that diag(τ′) = R(−1)diag(τ). In other words, diag(τ) = diag(τ′)R(1), and hence e^A r(τ′, t_0 − 1) = r(τ, t_0) (this is implicitly shown in the proof of Lemma 3.3). Since I is invariant, we have r(τ′, t_0 − 1) ∩ I = ∅, and consequently r(τ′, t_0) itself is disjoint from I. Repeating this argument, we get that for every n ∈ N, the point σ with diag(σ) = R(−n)diag(τ) satisfies r(σ, t_0) ∩ I = ∅. Let U = {σ ∈ T_ω : r(σ, t_0) ∩ I = ∅}; by the density of the diagonals of {R(−n) : n ∈ N} in T_ω, the topological closure of U is T_ω. We now prove that, in fact, U = T_ω. Assuming (again by way of contradiction) that there exists σ ∈ T_ω \ U, then by the definition of U we have r(σ, t_0) ∩ I ≠ ∅. It follows that for every n ∈ N, the point σ′ with diag(σ′) = R(n)diag(σ) also satisfies r(σ′, t_0) ∩ I ≠ ∅. Define V = {R(n)σ : n ∈ N}; then the diagonals of V are dense in T_ω. Further, the set V′ = {σ′ ∈ T_ω : r(σ′, t_0) ∩ I ≠ ∅} satisfies V ⊆ V′ ⊆ T_ω, and the closure of V′ is T_ω. Now the sets U and V′ are both definable in R, and the topological closure of each of them is T_ω. We employ [2, Lemma 10], which states that if X, Y ⊆ T_ω are R-definable sets whose topological closures both equal T_ω, then X ∩ Y ≠ ∅. It follows that V′ ∩ U ≠ ∅, which is clearly a contradiction. Therefore, there is no σ ∈ T_ω \ U, i.e., U = T_ω. From this, however, it follows that C_{t_0} ∩ I = ∅, which is again a contradiction, since C_{t_0} ∩ O ≠ ∅ and O ⊆ I, so we are done.

Claim B.3. The function f : T_ω → R defined by f(τ) = inf{t ∈ R : r(τ, t) ⊆ I} is bounded.

Proof of Claim B.3. By Claim B.2 this function is well-defined. Since r(τ, t) is R-definable, then so is f. Moreover, its graph Γ(f) has finitely many connected components, and the same dimension as T_ω. Thus, there exists an open set K ⊆ T_ω (in the induced topology on T_ω) such that f is continuous on K. Furthermore, K is homeomorphic to (0, 1)^m for some 0 ≤ m ≤ k, and thus we can find sets K″ ⊆ K′ ⊆ K such that K″ is open and K′ is closed. Since f is continuous on K, it attains a maximum on K′. Consider the set {R(n) · K″ : n ∈ N}. By the density of the diagonals of {R(n) : n ∈ N} in T_ω, this is an open cover of T_ω, and hence there is a finite subcover {R(n_1)K″, . . . , R(n_a)K″}. Since K″ ⊆ K′, it follows that {R(n_1)K′, . . . , R(n_a)K′} is a finite closed cover of T_ω. We now show that, for all τ ∈ T_ω, we have f(R(1)τ) ≤ f(τ) + 1. Indeed, consider any τ ∈ T_ω and t > 0 such that r(τ, t) ⊆ I. Applying e^A, we get e^A · r(τ, t) ⊆ e^A I ⊆ I. Similarly to the proof of Lemma 3.3, we have that e^A · r(τ, t) = r(R(1)τ, t + 1), so we can conclude that r(R(1)τ, t + 1) ⊆ I. This means that r(τ, t) ⊆ I implies r(R(1)τ, t + 1) ⊆ I; therefore f(R(1)τ) ≤ f(τ) + 1.
Finally, we conclude from Claim B.3 that there exists t_0 ≥ 0 such that C_{t_0} ⊆ I. This completes the proof of Theorem 3.6.

C Proofs of Section 4

Lemma 4.3. F_{s_0,ε,δ,ℓ,u} is definable in R_0, and we can compute a representation of it.

Proof. The only part that is not immediately semi-algebraic is the diag(s^{q_1}, . . . , s^{q_k}) factor, as the exponents are not fixed. Thus, define y_i = s^{q_i} for i = 1, . . . , k; then we can rewrite the fat cone F_{s_0,ε,δ,ℓ,u} as the set obtained by quantifying over s ≥ s_0 and y_1, . . . , y_k satisfying s^{ℓ_i} ≤ y_i ≤ s^{u_i} and y_1^{a_1} · · · y_k^{a_k} = 1 for every a in the (finite, computable) basis of S, which is clearly semi-algebraic and is equivalent by the above.

Lemma C.1. For every t ≥ 0 it holds that e^{At} · F_{s_1,ε,δ,ℓ,u} ⊆ F_{s_1,ε,δ,ℓ,u}, where s_1 will be determined later.

Proof. Let v = Pdiag(s^{q_1}, . . . , s^{q_k})diag(τ)Q(r)P^{-1}x_0 be a point of F_{s_1,ε,δ,ℓ,u}, and let t ≥ 0. Set t = log x and recall that e^{At} = PE(log x)R(log x)Q(log x)P^{-1}. We will now show that e^{At}v ∈ F_{s_1,ε,δ,ℓ,u}, by drawing some condition on s_1. First, we claim that (e^{iω_1 log x}τ_1, . . . , e^{iω_k log x}τ_k) ∈ T_ω. Indeed, for all j we have |e^{iω_j log x}τ_j| = 1, and for all z such that z_1ω_1 + . . . + z_kω_k = 0 we have

(e^{iω_1 log x}τ_1)^{z_1} · · · (e^{iω_k log x}τ_k)^{z_k} = e^{i(z_1ω_1 + · · · + z_kω_k) log x} τ_1^{z_1} · · · τ_k^{z_k} = 1.

Next, it is also not hard to prove that (x^{ρ_1}s^{q_1}, . . . , x^{ρ_k}s^{q_k}) can be written in the required form. It remains to show that Q(log x) · Q(r) can be written as Q(y) for δ ≤ y ≤ (xs)^ε. Recall that Q(log x) · Q(r) = Q(log x + r), and that δ ≤ r ≤ s^ε and x ≥ 1. It immediately follows that δ < log x + r. Now, observe that log x + r ≤ log x + s^ε. We prove that if s_1 is large enough, then log x + s^ε ≤ (xs)^ε. Let x_0 ≥ 1 be such that for every y ≥ x_0 we have y^ε ≥ max{log y, 2}. Clearly such an x_0 exists. We now split the proof into two cases. If x > x_0, take s_1 to be large enough such that s^ε ≥ 2 for every s ≥ s_1. Then by the condition on x_0 we have that

log x + s^ε ≤ x^ε + s^ε ≤ x^ε s^ε = (xs)^ε,

where the last inequality follows since both summands are at least 2 (indeed, if A, B ≥ 2 and w.l.o.g. A ≤ B, then A + B ≤ 2B ≤ AB). If x ≤ x_0, recall that x ≥ 1, and thus log x ≤ x − 1. So it suffices to find s_1 such that for all s ≥ s_1 we have x − 1 + s^ε ≤ x^ε s^ε. The latter is equivalent to x − 1 ≤ (x^ε − 1)s^ε. Now, if x = 1, the inequality holds for any s, and we are done. Otherwise, let x > 1; then observe that the function (x − 1)/(x^ε − 1) is increasing, and lim_{x→1+} (x − 1)/(x^ε − 1) = 1/ε (e.g., by L'Hôpital's rule). In particular, the function (x − 1)/(x^ε − 1) is bounded from above on the interval (1, x_0]. Set s_1 large enough such that for every s ≥ s_1 and for every x ∈ (1, x_0] we have (x − 1)/(x^ε − 1) ≤ s^ε, and we are done. By taking the maximal s_1 from the conditions above, we conclude the lemma.

To prove this lemma, we gather some properties of the under- and over-approximations. We recall their definitions here.

Proposition D.1. The under-approximations have the following properties:
Property 1: for all n ∈ N and 0 ≤ t ≤ t_0, we have P_n(t) ≤ e^{ρt};
Property 2: for all n ∈ N and 0 < t_1 ≤ t ≤ t_0, we have P_n′(t) ≤ (P_n(t_1)e^{ρ(t−t_1)})′;
Property 3: max_{0≤t≤t_0} (e^{ρt} − P_n(t)) → 0 as n → ∞.

Proof. Property 3 is satisfied by Taylor's theorem. Property 1 holds since ρ > 0 by our assumption, in which case every Taylor polynomial of e^{ρt} is an under-approximation. We turn to establish Property 2, which is equivalent to P_n′(t) ≤ ρP_n(t_1)e^{ρ(t−t_1)}. Note that it clearly holds for n = 0. Observe that P_n′(t) = ρP_{n−1}(t); thus we want to prove that ρP_{n−1}(t) ≤ ρP_n(t_1)e^{ρ(t−t_1)}. Since ρ > 0, we can cancel it from the inequality. Now consider the function g_n(t) = P_n(t_1)e^{ρ(t−t_1)} − P_{n−1}(t); we prove that g_n(t) ≥ 0 for all t_1 ≤ t ≤ t_0. First, we have that g_n(t_1) = P_n(t_1) − P_{n−1}(t_1) = (ρt_1)^n / n! ≥ 0. We now prove that g_n′(t) ≥ 0 for t_1 ≤ t ≤ t_0. We have

g_n′(t) = ρP_n(t_1)e^{ρ(t−t_1)} − P_{n−1}′(t) = ρP_n(t_1)e^{ρ(t−t_1)} − ρP_{n−2}(t) = ρ(P_n(t_1)e^{ρ(t−t_1)} − P_{n−2}(t)).

Thus, g_n′(t) ≥ 0 if and only if P_n(t_1)e^{ρ(t−t_1)} − P_{n−2}(t) ≥ 0. Repeating this argument n − 1 times, we end up with the condition P_n(t_1)e^{ρ(t−t_1)} − P_0(t) ≥ 0, which is equivalent to P_n(t_1)e^{ρ(t−t_1)} ≥ 1, and it holds since P_n(t_1) ≥ 1 and e^{ρ(t−t_1)} ≥ 1.

Intuitively, Property 1 in Proposition D.1 ensures that the curve of P_n(t) always stays below that of e^{ρt}, Property 3 says that the under-approximation can get arbitrarily close to the exponential function, and Property 2 is a condition on the derivative of P_n(t) which ensures that the resulting set is invariant. Formally, we have the following.

Lemma D.2. For every n ∈ N, the set L_n := {(y, t) : y ≥ P_n(t), 0 ≤ t ≤ t_0} ∪ {(y, t) : t > t_0} is a semi-algebraic invariant that contains the orbit from time 0.

Proof. Clearly the set L_n is semi-algebraic (recall that t_0 ∈ Q).
Formally, we have the following:

Lemma D.2. For every n ∈ N, the set L_n := {(y, t) : y ≥ P_n(t), 0 ≤ t ≤ t_0} ∪ {(y, t) : t > t_0} is a semi-algebraic invariant that contains the orbit from time 0.

Proof. Clearly the set L_n is semi-algebraic (recall that t_0 ∈ Q). It thus remains to prove that for every (y_1, t_1) ∈ L_n and for every δ > 0 it holds that (e^{ρδ}y_1, t_1 + δ) ∈ L_n. Denote t = t_1 + δ. If t > t_0, then the claim is trivial. Thus, assume t_1 ≤ t ≤ t_0, and we need to prove that P_n(t) ≤ e^{ρ(t−t_1)}y_1. Since (y_1, t_1) ∈ L_n, we have y_1 ≥ P_n(t_1), and thus for t = t_1 the claim holds; Property 2 in Proposition D.1 ensures that the inequality is maintained for all t_1 ≤ t ≤ t_0 (by taking the derivative of both sides of the inequality). □

Proposition D.1 and Lemma D.2 provide us with an under-approximating invariant. We now turn our attention to the over-approximations.

Proposition D.3. The over-approximations have the following properties:
Property 1: for every µ > 1 there exists n_0 ∈ N such that for all n ≥ n_0 and 0 ≤ t ≤ t_0, we have Q_{n,µ}(t) ≥ e^{ρt}.

For the direct implication, assume that Ĩ is an invariant of (J, x̃_0) with the properties in the statement. Let I = g(Ĩ). We prove that I is an invariant for (PJP^{−1}, x_0). Any point in I can be written as g(x̃) for some x̃ ∈ Ĩ; hence, since Ĩ is invariant, for all δ ≥ 0 we have Pe^{Jδ}P^{−1} · g(x̃) = g(e^{Jδ}x̃) ∈ I. Moreover, by definition x_0 ∈ I since x̃_0 ∈ Ĩ, so I contains the whole orbit. The set I can further be shown to be disjoint from Y, because the map g is injective. The inverse implication follows along the same lines. This does not yet prove the lemma, because x_0 might have irrational entries. We can amend this by translating the whole system by some vector v such that x_0 + v ∈ Q^d, which is feasible because the sets Y + v and I + v are semi-algebraic. □
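The conjugation step in the last proof is straightforward to exercise numerically; the sketch below transports a toy invariant (the complement of the open unit ball, chosen only because it happens to be invariant for the chosen J) through a change of basis g(x) = Px and checks invariance of the image under the conjugated dynamics. All matrices and numbers are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Toy instance: J in rotation-plus-scaling form, P an arbitrary invertible
# change of basis, and A = P J P^{-1} the conjugated dynamics.
J = np.array([[0.2, -1.0], [1.0, 0.2]])
P = np.array([[2.0, 1.0], [0.0, 1.0]])
A = P @ J @ np.linalg.inv(P)

def in_I_tilde(x):                  # toy invariant for (J, x0~): ||e^{Jd}x||
    return np.linalg.norm(x) >= 1.0 # grows, so the ball complement is invariant

def in_I(y):                        # transported invariant I = g(I~) = P I~
    return in_I_tilde(np.linalg.inv(P) @ y)

rng = np.random.default_rng(0)
for _ in range(1000):
    y = P @ (rng.normal(size=2) * 3)
    if in_I(y):
        for delta in (0.1, 0.5, 2.0):
            assert in_I(expm(A * delta) @ y)   # e^{A delta} I stays inside I
```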
Target Identification and Intervention Strategies against Kinetoplastid Protozoan Parasites

The past few decades have been marked by numerous admirable research efforts and promising technological advancements in the field of research on protozoan parasites. The parasites of this genre cause some devastating diseases that pose an alarming threat to mankind. Though several intervention strategies have been developed to get rid of these parasites, they always seem to frustrate the efforts of the scientific community sooner or later. The intervention strategies include identification of novel drug targets, development of target-based therapy, and development of vaccines that provide significant impetus in the field of research pertaining to these parasites. In this context, several reviews have appeared in the past few years elucidating different drug targets in these parasites. For example, Das et al. [1], Balaña-Fouce et al. [2], and others have described the role of topoisomerases as potential drug targets in these kinetoplastid protozoa. Urbina [3] has described the lipid biosynthetic pathway as a possible chemotherapeutic target, whereas McConville [4] has elucidated the potential of parasite surface glycoconjugates as possible drug targets. Other targets include cysteine peptidases [5] and histone deacetylases [6] of the trypanosomatid parasites.

Parasites of the genera Trypanosoma and Leishmania are kinetoplastid protozoan parasites that cause trypanosomiasis and leishmaniasis, respectively. Parasites belonging to the genus Plasmodium mainly cause malaria. These diseases are prevalent in tropical and subtropical countries and cause significant morbidity and mortality. However, these diseases are of the lowest priority because they offer little or no commercial incentives to the pharmaceutical companies. This special issue is a much needed and timely compilation of selected research and review articles in the concerned field. Though the selected papers are not a comprehensive representation of the field, they represent a rich mixture of multifaceted knowledge that we have the pleasure of sharing with the readers.
We would like to thank all the authors for their excellent contributions and also the reviewers for their efforts in assisting us. This special issue contains thirteen papers, of which five are research papers and the rest are review articles. The five research articles mainly focus on the development of new drugs and targets and also shed light on novel therapeutic intervention strategies. In the first paper, S. Sengupta et al. have established cryptolepine-induced cell death in the protozoan parasite L. donovani. Interestingly, the death process is augmented when the autophagic mechanism is inhibited by specific chemical inhibitors, and this finding may form the skeleton for novel therapeutic intervention strategies. In the second paper, S. Teixeira de Macedo-Silva et al. have investigated the effect of the antiarrhythmic drug amiodarone on the promastigote and amastigote forms of L. amazonensis. They have shown that this drug has an antiproliferative effect on L. amazonensis promastigotes and amastigotes and causes depolarization of the mitochondrial membrane potential in both forms, which ultimately leads to cell death of the parasites. So this compound may serve as a potential starting material for antileishmanial drug development. In the third paper, L. Major and T. K. Smith have screened the MayBridge Rule of 3 Fragment Library to identify compounds targeting inositol-3-phosphate synthase (INO1), which has previously been genetically validated as a drug target against Trypanosoma brucei, the causative agent of African sleeping sickness. By this approach, they have identified 38 compounds that significantly altered the Tm of TbINO1. Four compounds showed trypanocidal activity with ED50s in the tens of micromolar range, with two having a selectivity index in excess of 250. Topoisomerases are key enzymes that play a pivotal role in various cellular processes and also serve as important drug targets. In the fourth paper, A. Roy et al. have described a synthetic peptide, WRWYCRCK, with inhibitory effect on the essential enzyme topoisomerase I from the malaria-causing parasite Plasmodium falciparum. Although Plasmodium falciparum does not belong to the order Kinetoplastida, it still has several features in common with the kinetoplastid protozoan parasite T. brucei, for example, antigenic variation. The transition step from noncovalent to covalent DNA binding of P. falciparum topoisomerase I is specifically inhibited by this peptide, while the ligation step of catalysis remains unaffected. Molecular docking analyses further provide a mechanistic explanation for this inhibition. This work provides evidence that synthetic peptides may represent a new class of potential antiprotozoan drugs. In the fifth paper, J. Kaur et al. have performed a bioinformatic analysis of the Leishmania donovani long-chain fatty acid CoA ligase (LCFA) as a novel drug target. The authors have previously found this enzyme to be differentially expressed by Leishmania donovani amastigotes resistant to antimonial treatment. In the present study, the authors have confirmed the presence of the long-chain fatty acyl-CoA ligase gene in the genome of clinical isolates of Leishmania donovani collected from the disease-endemic area in India and propose that this enzyme serves as an important protein and a potential target candidate for the development of selective inhibitors against leishmaniasis. This special issue also features some timely and much needed review articles in the field. In the sixth paper, S. Gupta et al.
have validated the role of a key enzyme, glucose-6-phosphate dehydrogenase (G6PDH), in trypanosomatids as an important drug target and discussed the possibility of drug discovery targeting this enzyme. G6PDH is the first enzyme of the pentose phosphate pathway and is essential for the defense of the parasite against oxidative stress. T. brucei and T. cruzi G6PDHs are inhibited by steroids such as dehydroepiandrosterone and derivatives in an uncompetitive way. The Trypanosoma enzymes are more susceptible to inhibition by these compounds than the human G6PDH. These compounds are presently considered as promising leads for the development of new parasite-selective chemotherapeutic agents. In the seventh paper, A. F. Coley et al. have discussed the possibility of therapeutic development targeting glycolysis in African trypanosomes. The parasite is limited to using glycolysis of host sugar for ATP production while infecting the human host. This dependence on glucose breakdown presents a series of targets for potential therapeutic development, many of which have been explored and validated experimentally and are addressed in this paper in detail. In the eighth paper, S. L. de Castro et al. have given a good overview of experimental chemotherapy for Chagas disease, which is caused by Trypanosoma cruzi and affects approximately eight million individuals in Latin America. The authors have presented a nice biochemical and proteomic overview of potential T. cruzi targets with reference to amidine derivatives and naphthoquinones, which have shown the most promising efficacy against T. cruzi. In the ninth paper, A. K. Haldar et al. have presented the current status and future directions for the use of antimony in the treatment of leishmaniasis. The standard treatment of kala-azar in the recent past has been the use of pentavalent antimonials (Sb(V)), but there has been a progressive rise in treatment failure due to the problem of chemoresistance, which has limited the use of Sb(V) in the treatment program in the Indian subcontinent. However, it has been shown recently that some of the peroxovanadium compounds have Sb(V)-resistance-modifying ability in experimental infection with Sb(V)-resistant Leishmania donovani isolates in a murine model. Thus vanadium compounds may be used in combination with Sb(V) in the treatment of Sb(V)-resistant cases of kala-azar. In the tenth paper, R. Duncan et al. have presented a comprehensive overview of the genes involved in Leishmania pathogenesis with reference to the potential for drug target selection. Proteins that are differentially expressed or required in the amastigote life cycle stage found in the patient are likely to be effective drug targets. Several examples and their potential for chemotherapeutic disruption have been presented in this paper. The programmed cell death pathway now recognized among protozoan parasites is reviewed for some of its components, and evidence that suggests they could be targeted for antiparasitic drug therapy is presented. In the eleventh paper, A. Biswas et al. have discussed the role of cAMP signaling in the survival and infectivity of the protozoan parasite Leishmania donovani. While invading macrophages, L. donovani encounters a striking shift in temperature and pH that acts as the key environmental trigger for differentiation and increases cAMP levels and cAMP-mediated responses.
A differentially expressed soluble cytosolic cAMP phosphodiesterase (LdPDEA) might be related to infection establishment by shifting the trypanothione pool utilization bias toward antioxidant defense. This paper explains the significance of cAMP signaling in parasite survival and infectivity. In the twelfth paper, Md. Shadab and N. Ali have elegantly discussed the evasion of host defense mechanisms by L. donovani. They have presented a detailed account of the subversion and signaling pathways that allow the parasites to escape the host defense mechanisms. In the last paper, A. Ghoshal and C. Mandal have presented a detailed perspective on sialic acids, which serve as important determinants influencing parasite biology. Despite the steady progress in the field of parasite glycobiology, sialobiology has been a less traversed domain of research in leishmaniasis. This paper focuses on the identification, characterization, and differential distribution of sialoglycotopes having the linkage-specific 9-O-acetylated sialic acid in promastigotes of different Leishmania spp. causing different clinical ramifications. There are other areas of relevance not covered in the volume, namely, prophylactic and therapeutic vaccination, targeted drug delivery, and antigenic variation. However, the present issue covers a significant area of the subject and will be of immense interest to the readers.
Revitalizing Personalized Medicine: Respecting Biomolecular Complexities Beyond Gene Expression

Despite recent advancements in "omic" technologies, personalized medicine has not realized its fullest potential due to isolated and incomplete application of gene expression tools. In many instances, pharmacogenomics is being used interchangeably with personalized medicine, when actually it is only one of the many facets of personalized medicine. Herein, we highlight key issues that are hampering the advancement of personalized medicine and highlight emerging predictive tools that can serve as a decision support mechanism for physicians to personalize treatments.

Treatment that is not tailored to the individual can leave patients with long-term medical and socioeconomic complications. For example, relapsed cancer, secondary neoplasms, heart disease, and many other chronic medical conditions are prevalent among long-term survivors of cancer. Personalized treatment, when applied in clinical settings, helps to answer two important questions: (i) For a given individual, what drug or combination of drugs should be given to treat a specific disease condition? (ii) How much, and when, should the drug(s) be administered? Pharmacogenomics, a field that has evolved in the last decade, has been highly recommended for several disease conditions toward predicting the response to a planned treatment protocol on an individual basis and has been put into practice in some cases. Pharmacogenomics has shown great promise in predicting the treatment response for a given patient and has demonstrated the ability to alleviate much of the morbidity that can be associated with treatment, 5,6 making it an excellent tool to address the first of the two questions above. However, because the purview of pharmacogenomics is limited to genotypic variation, it has limited scope to comprehensively answer the second question, which is at least as important to personalized treatment. In addition to genetic variation, several other nongenetic molecular mechanisms interface within the human body. The manifestation of a specific gene sequence into a final disease outcome, with or without drug intervention, proceeds at various levels. First, the genes are transcribed and translated into proteins, which act as enzymes in numerous metabolic reactions. Some proteins act as receptors and transporters to interface with the extracellular environment. For each gene encoding a specific protein, variant alleles may exist. This results in a certain pattern of endogenous metabolic fluxes and metabolic products. If a specific gene is implicated in drug disposition, the gene expression also affects the distribution, metabolism, and elimination of the compound. 7 The resultant phenotypes at the bioatomic or biomolecular level then exert phenotypic changes at the cellular, tissue, and organ level through their influence on the disease and response pathways. Variations or aberrations, not only in gene sequence and expression but in any of the steps mentioned above, will result in an unexpected outcome at an organismal level. Figure 1 enumerates the cascade of events and validated processes contributing to the variations in these steps. The clinical implication of these events is the possibility of observing a subgroup of patients with the same genotype but unique proteomes, metabolomes, and cellular responses, which results in completely different treatment outcomes for each individual patient.
The key factors, besides gene expression, responsible for such consequences include, but are not limited to, epigenetic factors, nonheritable functionally induced (extragenetic) factors, stochasticity in biochemical reactions, interactions in signal transduction and metabolic networks, environment, nutrition/lifestyle, organ failure (renal, hepatic), coadministered drugs, pregnancy, disease type, and microflora dynamics. Exclusive, isolated application of pharmacogenomics has therefore come under scrutiny by some in the literature and has resulted in a few new clinical guidelines that are broadly accepted for the management of patients. [8][9][10] The foregoing discussion substantiates the need for developing procedures to accurately measure and/or predict the phenotypic outcome for a specific pharmacogenomic variant. In addition, methodologies must also be developed to determine the optimal dosing of a drug to realize a specific phenotypic outcome. Traditionally, pharmaceutical companies prescribe the "optimal" dosing information based on clinical trials conducted at a population level of the general public and patients. Thus, it is primarily a statistical consolidation imposed on an individual patient. The adoption of a standard-dose-for-all approach from drug labels and a trial-and-error approach to titrate the patient to the maximum tolerated dose has resulted in severe toxicity in some patients and insufficient treatment in others. The so-called evidence-based medicine and the adoption of treatment standards based on large epidemiological studies or randomized controlled trials have significantly hampered the efforts to truly personalize medical practice. 11 A paradigm shift from evidence-based medicine to mechanism-based medicine will ensure that each patient is treated according to his/her own mechanism and thus should be the impetus for expanding the implementation and utilization of personalized medicine. In this work, we highlight some of the limitations associated with the isolated application of pharmacogenomics to personalize treatment. Specific instances of biomolecular processes responsible for such outcomes are analyzed in detail with respect to each level of phenotype. Finally, emerging tools and methodologies to augment the potential of pharmacogenomics for a comprehensive realization of personalized treatment are discussed. We first provide an introduction to the concept of pharmacogenomics and its prevalence, strengths, and limitations in personalizing treatment (see Pharmacogenomics and Gene Expression). We will then appraise the complexities in the molecular- and cellular-level phenotypic manifestation of gene sequences, highlighting potential processes responsible for this variation. The advantages of measuring/predicting cellular responses and the associated challenges are also discussed (see Complexities in Predicting Molecular Phenotypes). Finally, we highlight some of the emerging technological and quantitative tools to extend the scope of personalized treatment beyond pharmacogenomics (see Future Directions and Emerging Technologies).

Figure 1: Manifestation of DNA sequence to molecular phenotypes and cellular responses. Each step in this process is confounded by several biochemical events that add dispersion and uncertainty to the subsequent steps. As such, it would be highly unlikely for there to exist a one-to-one relationship between a specific gene sequence and the ultimate clinical outcome.
PHARMACOGENOMICS AND GENE EXPRESSION

Gene sequencing and expression profiling are excellent tools for discerning variations in disease susceptibility, disease diagnosis and classification, and prognosis for a given treatment. Within the realm of personalized treatment, pharmacogenetics garners prime attention in utilizing gene sequencing tools for clinical translation. Pharmacogenetics aims to provide insight into the influence of genetic variants on the molecular biology of disease and the response to drug intervention. The concept of genetic polymorphism in the human genome forms the core of pharmacogenomics. Common sources of genetic polymorphisms include single-nucleotide polymorphisms (SNPs), nucleotide repeats, deletions, insertions, and recombination. Pharmacogenomics possesses a great potential to propel the development of new therapeutic agents and/or administer existing drugs to a targeted subgroup of the patient population who display a specific genotypic trait. Gene expression analysis and pharmacogenomics are being considered as companion diagnostic tools (tests recommended when prescribing a specific medication) in several cases. 10 The first conceivable utility of gene expression variation in the disease cycle is the elucidation of disease susceptibility. A case-control study, conducted to demonstrate the association of the NRAMP1 gene with susceptibility to tuberculosis, estimated the odds of developing tuberculosis at 4.07 among subjects heterozygous for two NRAMP1 polymorphisms. 12 Genetic polymorphism has also been exploited in many studies to diagnose and classify the existing categories of many cancers. In a landmark work on the molecular classification of cancer using gene expression, the DNA microarray technique was utilized to distinguish between acute myeloid leukemia and acute lymphoblastic leukemia. 13 A study devoted to characterizing diffuse large B-cell lymphoma, a common form of non-Hodgkin's lymphoma, using microarray gene expression profiles revealed two molecularly distinct forms of diffuse large B-cell lymphoma. 14 These two new subtypes, germinal center B-like diffuse large B-cell lymphoma and activated B-like diffuse large B-cell lymphoma, are representative of different stages of B-cell differentiation and predict overall prognosis. These systematic and unbiased elucidations of disease subtypes, based on global gene expression profiles, not only assist clinicians in choosing appropriate treatment strategies that maximize efficacy but also minimize unwarranted side effects. Gene expression profiling also helps to direct the prediction of the prognosis of the disease for a specific treatment regimen. One of the well-studied and clinically adopted examples of gene expression techniques is the demonstration of the relationship between the HER-2/neu gene and a wide variety of human cancers. Amplification of the HER-2/neu gene or overexpression of the HER-2/neu protein is observed in as many as 34% of breast cancer patients. 15 In these patients, abnormalities in the HER-2/neu gene and protein dictate relative sensitivity to chemotherapeutic drugs and resistance to tamoxifen. HER-2/neu gene amplification also predicts node status, tumor grade, overall survival, and time to relapse in breast cancer patients. One of the classical examples of the application of pharmacogenetics is the incidence of genetic polymorphism in the thiopurine S-methyltransferase (TPMT) gene among humans.
16,17 To date, it serves as a prototypic system for displaying the potential for the utilization of a pharmacogenomics-based approach to individualized drug dosing within clinical settings. TPMT is a cytosolic drug-metabolizing enzyme that plays a key role in the metabolism of purine antimetabolites such as 6-mercaptopurine (6-MP) and azathioprine. 17 Thiopurine is an immunosuppressant that is used to treat childhood acute lymphoblastic leukemia, inflammatory bowel diseases, autoimmune diseases, and immunosuppression following solid organ transplantation. TPMT catalyzes the S-methylation of thiopurines and promotes pathways leading to inactive metabolites of methylated mercaptopurines. Hence, the TPMT activity level is inversely proportional to the amount of the active cytotoxic metabolite, 6-thioguanine nucleotide (6-TGN), produced. Myelosuppression is the dose-limiting toxicity during thiopurine dosing. A total of 21 genetic polymorphisms have been identified in the TPMT gene which correlate with decreased TPMT activity levels and hence thiopurine-induced toxicity. TPMT*1 is the "wild-type" allele; TPMT*3A is the most common variant allele, found in ~5% of Caucasians, whereas TPMT*3C is the most common variant allele found in East Asian populations, with a frequency of ~2%. TPMT*3B is a rare allele. The presence of TPMT*3A and *3B results in extremely low or no TPMT enzyme activity, which leads to elevated levels of 6-TGN. If treated with a standard dose, patients who are homozygous for these alleles will encounter life-threatening myelosuppression and, in some cases, even secondary malignancies. [16][17][18] Thus, it was suggested that patients with low TPMT expression should be treated with substantially lower doses of thiopurines. On the other hand, many clinical studies concluded that efficacy will be compromised in patients with high TPMT activity who are treated with standard dosing schedules, and therefore, treatment with higher doses is recommended. 18 In all the above examples, pharmacogenomics provides some vital information for predicting treatment outcome, but it is limited to population-level variations. In some sense, it is equivalent to segregating the population into a few response groups and disregarding intragroup variations. To be complete and effective, following the first step of "genetic personalization," individuals in each subgroup must be characterized based on the downstream response of that genotype. It is well known that within a specific genotype, there is a distribution of phenotypes across the patient population (Figure 2). For example, in the case of the TPMT gene, although there are only five important genotypes in humans, there are as many enzyme activity levels (the manifestation of the TPMT gene) as there are patients. No two patients with the same TPMT genotype will have an identical enzyme activity level. In addition, only two-thirds of the total variance in TPMT activity is accounted for through genotyping. 19 Recently, for warfarin, an important US Food and Drug Administration-approved drug to bear pharmacogenomic information on its label, pharmacogenomics-guided treatment has been shown to have no significant difference in clinical outcome compared with traditional treatment. 20 Other studies also point to a declining scope for pharmacogenomics in guiding dose regimens, given the cost and effort involved.
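The decision-rule character of genotype-guided dosing can be made concrete with a short sketch; the mapping below from TPMT genotype to a dose-adjustment category uses the allele names from the text, but the categories themselves are illustrative assumptions for exposition, not clinical guidance.

```python
# Hypothetical sketch of genotype-guided dose stratification (illustrative
# only; categories are assumptions, not clinical guidance).  It captures the
# point made in the text: genotyping segregates patients into a few response
# groups while disregarding intragroup variation in enzyme activity.
NORMAL, REDUCED, DRASTIC = "standard dose", "reduced dose", "drastically reduced dose"

# Number of reduced-function alleles, per the genotypes named in the text.
VARIANT_ALLELES = {"*1": 0, "*3A": 1, "*3B": 1, "*3C": 1}

def thiopurine_dose_category(allele1: str, allele2: str) -> str:
    variants = VARIANT_ALLELES[allele1] + VARIANT_ALLELES[allele2]
    if variants == 0:            # homozygous wild-type: high TPMT activity
        return NORMAL
    if variants == 1:            # heterozygous: intermediate activity
        return REDUCED
    return DRASTIC               # two variant alleles: little or no activity

print(thiopurine_dose_category("*1", "*1"))    # standard dose
print(thiopurine_dose_category("*1", "*3A"))   # reduced dose
print(thiopurine_dose_category("*3A", "*3C"))  # drastically reduced dose
```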
21 Complex diseases such as cancer, HIV infection, and many others are invariably treated with complex treatment regimens that often involve multiple drugs. When the drugs are influenced by more than one gene independently, a pharmacogenomics-based approach alone may not be sufficient to predict the drug response. Consider a treatment involving a combination of three drugs, with genetic polymorphism in each of the drug-metabolizing enzymes and three different gene expression patterns (high, intermediate, and low). This will produce 27 (3^3) unique gene expression profiles in a given patient population. If one or more drugs are also substrates for drug transporters, where genetic polymorphism and hence three different gene expression patterns are likewise possible, the number surges to 81 (3^4). When this is then translated into phenotypes and further into cellular responses, it will produce a significant variation in the response. Besides clinically relevant SNPs and their influence over treatment outcomes, there are several other putative genetic polymorphisms that are yet to be characterized, which may play a significant role in determining the drug response. 22 In light of more than 150,000 validated SNPs, proteins, and interactions between them, this works out to be a mind-boggling diversity! However, when phenotypes are measured, which include the drug concentrations and/or cellular response, the characterization of patients may correlate more closely with clinical observations. Additional complexity surfaces when the findings of gene expression profiles are translated to global populations of different ethnic origins, due to the inherent variation in disease susceptibility, risk, incidence, and response. For example, there is significant variation observed in vincristine-induced peripheral neuropathy among Caucasian and African-American patients undergoing treatment for precursor B-cell acute lymphoblastic leukemia. 23 Pharmacogenomic studies of the vincristine-metabolizing enzyme CYP3A5 revealed polymorphic expression between different races, with ~70% of African-Americans expressing CYP3A5 compared with 20% of Caucasians. 23 Dose interruptions and average toxicity grades are significantly lower in African-American patients as a result of elevated metabolism and clearance of vincristine. If the dosing for African-American patients were determined based on studies on Caucasians, these patients would receive significantly lower exposure to vincristine. Another important consideration in expanding the scope of pharmacogenomics is related to its reliance on decision making under static conditions. Pharmacogenomics uses gene sequence and gene expression snapshots with the assumption that deterministic evolution of molecular events leads to predictable phenotypes. It considers each genetic variation as an independent causal factor for the observed response. It fails to take into account variation in transport limitations and the spatial heterogeneity of biochemical reactions. However, human physiology is a complex, dynamical system, and often, therapeutic responses are a manifestation of the interplay between many levels of physiological processes. Furthermore, human physiology is complicated by homeostatic feedback loops, molecular cross talk, and bypass mechanisms that can lead to unexpected therapeutic responses.
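The combinatorial growth described above is easy to make explicit; the snippet below enumerates the hypothetical expression profiles from the three-drug example in the text (the three-level scheme and the placeholder gene names are illustrative assumptions).

```python
from itertools import product

# Enumerate hypothetical expression profiles for the example in the text:
# three drug-metabolizing enzymes, each with high/intermediate/low expression.
levels = ("high", "intermediate", "low")
enzymes = ["enzyme_1", "enzyme_2", "enzyme_3"]           # illustrative names

profiles = list(product(levels, repeat=len(enzymes)))
print(len(profiles))                                     # 27 == 3**3

# Adding one polymorphic drug transporter multiplies the count by 3 again.
profiles_with_transporter = list(product(levels, repeat=len(enzymes) + 1))
print(len(profiles_with_transporter))                    # 81 == 3**4
```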
These events might confound many physiological processes including, but not limited to, drug metabolism and disposition, drug transport, cellular targets and signaling pathways, and cellular response pathways (e.g., apoptosis, cell cycle control). 24 Thus, one must remain circumspect about the isolated assessment of pharmacogenomics (or any other upstream biomarker) as a stand-alone personalizing tool. The scope for controlled clinical trials, the gold standard accepted by the US Food and Drug Administration for validating the efficacy and safety of pharmacogenomic tests, to validate such tests and adopt them into clinical practice is also limited, as the resulting number of groups makes such studies an expensive and time-consuming exercise. To this end, pharmacogenomics has not realized its fullest potential in some of the drug-disease applications that were deemed classical cases for pharmacogenomics-based personalization. 8,9,25,26 It therefore emerges that a more comprehensive approach to "personalization," encompassing and integrating many dimensions and levels of human physiology, is needed to portray a complete picture of ongoing drug-disease dynamics.

Figure 2: For each specific gene variant (represented as a gene score), there is a distribution of molecular phenotype among the patient population due to variations in random gene activation and repression, mRNA degradation, translational noise, alternate splicing, and protein degradation arising at the individual patient level. At the next level, for each value of molecular phenotype, there is a distribution of cellular response in the population due to protein phosphorylation, membrane drug efflux pumps, transportation limitations, and resistance mechanisms in apoptotic pathways. Eventually, two patients having the same gene variant might fall anywhere in the bivariate distribution in phenotype space. mRNA, messenger RNA.

COMPLEXITIES IN PREDICTING MOLECULAR PHENOTYPES

From the foregoing discussion, it is clear that gene sequencing and gene expression profiling play key roles in identifying patient subgroups, but individual patients are yet to be characterized on the phenotypic distribution. There are several levels of phenotypes, and the specific one of interest depends on the objective at hand. In this work, we classify phenotypes into two broad categories: (i) molecular phenotypes, an immediate effect of a specific gene sequence, and (ii) cellular phenotypes, the influence of molecular phenotypes on various cell populations. We define molecular phenotypes as the biomolecular manifestation of a gene sequence, which encompasses proteins, enzymes (proteome), and metabolite concentrations (metabolic phenotype). If a specific drug's distribution and metabolism are found to be modified due to the above factors, the resulting drug concentration at various parts of the body is also considered a molecular phenotype (drug phenotype). Molecular phenotypes interact with various cellular populations as drug transporters, inhibitors, and signaling molecules to produce a cellular phenotype. Variations in phenotype arise due to numerous factors, including stochasticity in gene expression, transcriptional and translational noise, complexities in biochemical and signal transduction networks, nonlinearity in biochemical processes, and quasi-determinism in biological events. 27,28 This leads to a nonbijective relationship (a single gene producing more than one phenotypic trait and a specific phenotype resulting from the expression of several genes) between genotype and phenotype.
Stochasticity in gene expression is one of the important factors that contribute to the phenotypic variations observed in isogenic cell populations. 28 These stochastic events are triggered by transcriptional and translational fluctuations, which, in turn, arise due to several factors such as random activation/repression of promoters, degradation of transcriptional and protein products, transcriptional and translational bursts, feedback loops, etc. 28 Figure 3 demonstrates various molecular events during gene expression. Moreover, these gene regulatory functions fluctuate dynamically, making static gene expression profiles untenable for personalized treatment. Besides quantitative variations in molecular contents, some of these mechanisms may lead to phenotypically distinct subpopulations. The majority of drugs are metabolized by more than one enzyme (e.g., 6-MP is metabolized by TPMT, HGPRT (hypoxanthine-guanine phosphoribosyltransferase), and ITPA (inosine triphosphate pyrophosphatase)) and transported/eliminated by several other proteins, where gene variations and some of the above-mentioned processes are inevitable. The end result is the formation of cell populations with uniquely different proteomes and metabolomes from the same genomes. The resulting phenotype is not comprehensible through simple deductive reasoning. As such, it is not uncommon for two genetically identical persons from similar backgrounds to show significantly different clinical phenotypes in response to drug intervention. For instance, inflammatory bowel disease patients homozygous for wild-type TPMT and treated with 6-MP encountered completely different clinical outcomes, both in terms of efficacy and toxicity. 29 One of the main reasons for treatment failure and variability in cellular response, despite the drug being at a therapeutically effective concentration in the plasma, is the development of multidrug resistance to therapeutic medications mediated by drug transporters. This is especially true in critical diseases such as cancer and HIV infection. P-glycoprotein (P-gp), which is a member of the ATP-binding cassette (ABC) family of proteins, is located on the plasma membrane and serves as a drug efflux pump. 7,30 Important members of this family include ABCB1 and ABCG2 (BCRP). On the other hand, the solute carrier superfamily of proteins, such as the OATP family (e.g., OATP1B1, OATP1B3, and OATP2B1), acts as uptake transporters. 7 Together, they have the ability to take up/efflux structurally and functionally dissimilar cytotoxic agents, thereby modulating the intracellular drug concentration. These genes display genetic polymorphism in humans, which has a profound impact on the pharmacokinetics of and clinical responses to many drugs. Furthermore, stochastic and dynamic regulation of these genes results in a heterogeneous population of three different types of cells: intrinsically resistant, acquired-resistant, and sensitive cells. Intrinsically resistant cells acquire this phenotype due to a spontaneous mutation involving single or multiple random steps. Acquired-resistant phenotypes are initially sensitive to drugs but eventually develop drug resistance, also through random processes. Recently, even sensitive cells have been shown to acquire the resistant phenotype from other resistant populations through the exchange of P-gp via microparticles and tunneling nanotubes. 31

Figure 3: Depiction of a single gene's expression and regulation. Every step in this process is governed by stochastic biochemical events. The gene randomly transits between active and repressed promoter states, and hence mRNA is produced in bursts. A fraction of mRNA is randomly degraded, and the rest is translated into protein. A fraction of protein also undergoes decay stochastically. Reprinted with permission from Macmillan Publishers: Nature Reviews Genetics. 28
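The bursty transcription depicted in Figure 3 is commonly formalized as the two-state "telegraph" model; the Gillespie-style sketch below, with invented rate constants, shows how identical "cells" (same rates, same genotype) end up with different mRNA and protein counts.

```python
import random

# Minimal Gillespie-style sketch (illustrative rates, not fitted to any gene)
# of the two-state telegraph model: a promoter toggles on/off, mRNA is
# transcribed in bursts while it is on, and mRNA/protein decay stochastically.
k_on, k_off = 0.05, 0.2     # promoter activation / repression rates
k_tx, k_tl = 8.0, 2.0       # transcription (while on) and translation rates
d_m, d_p = 1.0, 0.1         # mRNA and protein degradation rates

def simulate(t_end=200.0, seed=0):
    rng = random.Random(seed)
    t, on, m, p = 0.0, 0, 0, 0
    while t < t_end:
        rates = [k_on * (1 - on), k_off * on, k_tx * on,
                 k_tl * m, d_m * m, d_p * p]
        total = sum(rates)
        t += rng.expovariate(total)              # time to the next event
        r, acc, event = rng.uniform(0, total), 0.0, 0
        for i, rate in enumerate(rates):         # pick the event by weight
            acc += rate
            if r <= acc:
                event = i
                break
        if event == 0: on = 1
        elif event == 1: on = 0
        elif event == 2: m += 1
        elif event == 3: p += 1
        elif event == 4: m -= 1
        else: p -= 1
    return m, p

# Genetically identical cells end with different (mRNA, protein) contents:
print([simulate(seed=s) for s in range(5)])
```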
The evolution of these cellular fractions during treatment has the greatest impact on clinical outcomes. Besides these genetic and epigenetic factors, drug resistance may also be conferred by the microenvironment, characterized by poor vasculature, spatial heterogeneity with regions of hypoxia and acidity, and transport limitations. 32 In these cases, despite abundant drug in the plasma, the actual targeted site of action will lack the desired concentration of drug. Given these complex factors, sophisticated models incorporating the probabilistic nature of these nongenetic, random events will greatly augment the predictions based on gene expression. The final phase of the drug-disease cycle involves the drug or its metabolites interfering with the normal functions of one or more cell types and producing either desired outcomes (efficacy) or undesired outcomes (side effects). The pharmacology literature refers to this as pharmacodynamics. The majority of these cell responses are governed by small molecules, which are the cumulative outcome of all of the processes outlined in the previous sections (gene sequence, drug dosing, and other biochemical processes). At the molecular level, activation or repression of a gene and its enzyme activity may not be translated into accumulation or depletion of the corresponding metabolites, due to the multiplicity of the metabolic pathway network and the robustness of metabolite profiles. In addition to these molecular contents, cellular functions are also affected by stochastic gene regulation. Due to the complex and interacting nature of these regulatory networks, gene products from a specific network can influence the production of proteins in some other unknown network. Ultimately, this results in the disruption of normal cellular functions and produces unintended consequences. 33 Developments in single-cell measurements and various lineage-tracing techniques have revealed a wealth of knowledge in attributing the sources of nongenetic cell-to-cell variations. Even genetically identical cells in the same environment have shown varied response and sensitivity to drugs. 34 Feinerman et al. 35 demonstrated how intraclonal differences in signaling protein levels in T cells produced a distribution of responses among individual cells, which ultimately leads to diverse biological functions and interferes with antigen discrimination during T-cell activation. Using 10,000 cells from each of 15 different cell lines, Gascoigne and Taylor 36 analyzed the cellular response to three classes of antimitotic drugs. These cells, besides inter-cell line variability, exhibited significant intra-cell line variation. Indeed, these variations are not genetically predetermined but are driven by variations in signaling network stability, as even sister cells were shown to have faced different fates. Spencer et al. 37 studied nongenetic cell-to-cell variability in response to TRAIL (TNF-related apoptosis-inducing ligand)-induced apoptosis. They have shown the existence of significant differences in timing and death probability, in that some cells died within 45 min of exposure, whereas others needed as much as 8-12 h.
Apart from genetic and epigenetic sources, stochastic fluctuations in biochemical reactions arising from low copy numbers, differences in cell cycle phase, and natural divergence of protein levels were cited as the determinant factors of time to death. Dynamic studies on negative feedback loops between the tumor suppressor p53 and the oncogene Mdm2, with genetically identical cells in a uniform environment subjected to gamma irradiation, revealed interesting features of cell-to-cell variability. 38 Significant cell-to-cell variations were observed in the amplitude of the oscillations, which was attributed to the production rates of the proteins. In addition, due to this variation in protein production, even sister cells lost correlation to each other within 11 h of cell division. Cell-to-cell variability does not necessarily originate from stochastic fluctuations. Recent studies show that despite intrinsic noise in molecular networks, phenotypic cell-to-cell variability can be rendered by deterministic processes, often through uncharacterized molecular regulatory mechanisms. 39 The microenvironment plays a significant role in determining the ultimate outcome of any pathophysiological stimulus. In vivo experiments in mice seeded with invasive or proliferative melanoma cell types have shown that the melanoma cells experienced "transcriptional signature switching," resulting in a heterogeneous distribution of both cell types. 40 In addition, proliferative cell types were predominant in the outer rim of the tumor, confirming the role of the microenvironment in regulating the switch. Studies have also illustrated the presence of small numbers of slow-cycling melanoma cells (JARID1B+), within the main population of aggressive cells, evading chemotherapy and resulting in the selection of JARID1B+ cells. 41 Interestingly, JARID1B expression is dynamically and temporarily regulated, thereby negating the utility of gene expression profiling. This has a profound impact on treatment planning; a snapshot of gene expression reveals a specific cell type, but the emergence of heterogeneity renders the treatment regimen largely ineffective. The microenvironment also promotes genetic instability among cancer cells, specifically through deletions and transversions. 42 Despite the foregoing discussion on cell-to-cell variability, the inherent robustness of metabolic networks drives the physiological state to very few distinct modes, which results in a multimodal distribution in response space. 43 Robust computational approaches are available to incorporate these multiple sources of stochasticity and heterogeneity, which enable prediction of population behavior. 28,44 The relationship between cellular heterogeneity and unpredictable clinical response is less obvious but extremely critical. For example, in cancer cells, protein expression outliers allow some cells to fall outside the drug's range of efficacy, enabling those cells to survive ongoing treatments. 34 Given sufficient time, these outliers will repopulate the full distribution of cells and render the treatment inefficient or eventually completely ineffective. The transcriptional switching in melanoma cells allows invasive cell types to escape proliferation-targeted chemotherapy regimens. 40 When the invasive cell types switch to a proliferative mode, the tumor cells will regrow, thus leading to refractory melanoma. Chemotherapy also allows slow-cycling JARID1B+ melanoma cells in the heterogeneous population to thrive in cytotoxic environments.
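The p53/Mdm2 observation earlier in this section, that oscillation amplitude tracks protein production rates, can be reproduced qualitatively with a generic negative feedback oscillator; the Goodwin-type model below is a stand-in with made-up parameters, not a fitted p53/Mdm2 model.

```python
from scipy.integrate import solve_ivp

# Qualitative sketch (generic Goodwin-type negative feedback; all parameters
# invented): x represses its own production through intermediates y and z,
# producing oscillations whose amplitude depends on the production rate k,
# echoing the variability attributed to protein production rates in the text.
def goodwin(t, s, k):
    x, y, z = s
    dx = k / (1.0 + z**10) - 0.1 * x
    dy = 0.1 * x - 0.1 * y
    dz = 0.1 * y - 0.1 * z
    return [dx, dy, dz]

for k in (0.8, 1.0, 1.2):                      # "cells" with different rates
    sol = solve_ivp(goodwin, (0, 600), [0.1, 0.1, 0.1], args=(k,), max_step=1.0)
    tail = sol.y[0][sol.t > 300]               # discard the initial transient
    print(f"k={k}: oscillation amplitude ~ {tail.max() - tail.min():.3f}")
```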
When the temporary expression of JARID1B is reverted, the melanoma will relapse. As individual patients produce different levels of proteome dispersion and cellular heterogeneity, which in itself is difficult to predict, the prediction of clinical response for these individuals is much more complex. More likely, these cellular phenotypes are further confounded by the interplay of other unknown proteins and/or mechanisms. Hence, it is not trivial to predict the cellular response intuitively or to use simple correlations with gene expression alone. A more reliable quantitative prediction of these complex phenomena calls for a sophisticated modeling framework. The literature contains a wide range of sources producing differing evidence for the correlation between genotypes and their corresponding molecular phenotypes and/or clinical responses. 26,45,46 Continuing with the TPMT case, population genetic studies have shown that the major gene locus which regulates TPMT activity accounts for only two-thirds of the total variance in red blood cell (RBC) enzyme activity. 19 TPMT enzyme activity is affected by 6-MP, chronic diseases, and other coadministered drugs like diuretics, NSAIDs, and antihypertensives. 47 In addition, tissue-specific regulation of enzyme activity has also been reported in the literature. 48 6-MP metabolism is not only affected by variations in TPMT activity but also by other nongenetic factors. Diet plays a significant role in determining the bioavailability of 6-MP. Coadministered drugs like allopurinol and methotrexate also affect the first-pass clearance and metabolism of 6-MP. 49 These factors indirectly affect the amount of 6-TGN produced. Hence, within a specific enzyme activity range, it is possible to observe as many 6-TGN concentrations as there are patients. In a study involving 170 inflammatory bowel disease patients, wide variation in TPMT enzyme activity, 6-TGN concentration, and treatment response was observed. 50 Patients heterozygous for TPMT had enzyme activity of 5.1-13.7 U/ml with a mean 6-TGN concentration of 253.5 pmol/8×10^8 RBCs (SD: 136.5), compared with homozygous wild-type patients with >13.7 U/ml and 151 pmol/8×10^8 RBCs (SD: 84.7). The range of TPMT activity and the high SD of 6-TGN concentration signify the level of dispersion within the same genotype. In addition, no significant correlation existed between the inflammatory bowel disease questionnaire score and 6-TGN concentration (r = −0.09; P = 0.24). Patient groups having similar 6-TGN concentrations resulted in two different clinical outcomes of active disease and clinical remission. In our own modeling study on 6-MP metabolism using a nonparametric Bayesian approach (to be submitted for publication), 6-TGN concentration prediction based only on TPMT genotype resulted in a wide 95% confidence region of 23-743 pmol/8×10^8 RBCs (black region in Figure 5). With the measurement of TPMT enzyme activity, the confidence region is narrower (gray region: 124-386 pmol/8×10^8 RBCs), as the variability in enzyme level is accounted for. However, with the availability of a 6-TGN measurement, the 95% confidence region is much tighter (red region: 234-252 pmol/8×10^8 RBCs). This exemplifies that predicting a downstream response from an upstream marker will most likely yield greater variability, which will eventually hamper the dosing decision.
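The narrowing of predictive uncertainty as measurements move downstream can be illustrated with a toy Monte Carlo cascade; every distribution and noise level below is invented for illustration and bears no relation to the study's actual model or data.

```python
import numpy as np

# Toy Monte Carlo illustration (all numbers invented): the predictive spread
# of a downstream metabolite narrows as progressively downstream quantities
# are measured.  Cascade: genotype -> enzyme activity -> metabolite level.
rng = np.random.default_rng(1)
n = 200_000

activity = rng.lognormal(mean=2.3, sigma=0.4, size=n)          # given genotype
metabolite = 2_000.0 / activity * rng.lognormal(0.0, 0.25, n)  # downstream

def interval(x):
    lo, hi = np.percentile(x, [2.5, 97.5])
    return f"[{lo:.0f}, {hi:.0f}]"

print("genotype only:          ", interval(metabolite))
# Conditioning on a measured enzyme activity (a narrow band around 10):
band = (activity > 9.5) & (activity < 10.5)
print("+ enzyme activity known:", interval(metabolite[band]))
# Measuring the metabolite itself leaves only assay noise (say, 5%):
measured = 240.0
print("+ metabolite measured:  ", interval(measured * rng.lognormal(0, 0.05, n)))
```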
Recently, the drug transporter protein ABCC4 was found to actively transport 6-MP and 6-TGN from hematopoietic cells, thereby protecting the cells in TPMT-deficient patients. 51 These findings demonstrate that although one can measure TPMT activity (molecular phenotype) precisely, there is significant uncertainty in the level of 6-TGN (drug phenotype), the compound that is responsible for the treatment response. In addition, when 6-TGN acts on the cellular population, another level of uncertainty is added, which leads to different clinical outcomes for a similar 6-TGN concentration range. Hence, phenotyping enzyme activity or even 6-TGN concentration may not be sufficient; rather, the cellular response, which is the ultimate response variable of interest, should be regarded as the basis for dose individualization.

FUTURE DIRECTIONS AND EMERGING TECHNOLOGIES

From the discussions in the previous sections, it is clear that DNA sequencing and gene expression profiles provide some vital information on the pathophysiological and clinical response for a given treatment regimen; however, augmenting this information with downstream biomolecular and cellular responses will facilitate unequivocal, quantitative clinical decisions. The application of qualitative upstream information in decision making may lead to uninformed conclusions for a specific individual. What is needed instead is an integrative approach that takes into account different levels of potential variation in the drug-disease cycle to predict the clinical outcome of interest and is adaptive in nature to each individual patient. As such, it should be an ongoing process rather than a "study-and-adopt" approach. In other words, after the initial detailed study and accumulation of information, a basic set of information must be obtained from each new patient in order to adapt the approach before making predictions on clinical outcomes. Given the dynamic nature of physiological responses, this naturally warrants the application of dynamic modeling and in silico approaches at a suitably sophisticated level. Such an approach is not only proactive in predicting the clinical response but also alleviates the need for continuous/frequent monitoring, which would be prohibitive from physiological, logistical, and economic points of view. In recent times, there has been increasing recognition of the utility and practice of quantitative tools in medical applications. 52,53 At the same time, the tremendous increase in life science research over the last several decades has resulted in a system that publishes thousands of relevant articles every year. Clearly, unaided individual clinicians and healthcare practitioners are unable to process all these articles and incorporate into practice those advances that will have the greatest clinical impact. As such, there is a tremendous opportunity to develop a systematic approach to embedding scientific advances in clinical decision support tools. Although these tools have been largely statistical, the time is now opportune to expand this quantitative approach to include mathematical models and systems theoretic tools that embed scientific advances toward maximizing clinical impact. Mathematical models, suitably empowered by systems theoretic methodology, derive their strength from their potential to quantitatively evaluate known or conjectured mechanisms of medical cure.
Although engineering and mathematical personnel can provide skillful use of quantitative tools, the success of such endeavors is contingent on utilizing the judgment of experienced medical personnel. For such a closely integrated effort, collaboration must occur among medical and engineering researchers over an extended period in a clinical setting. A recent report by the National Academy of Engineering and the Institute of Medicine elaborates on how a partnership between healthcare professionals and engineers could change the face of the 21st-century healthcare system. 54 A detailed road map was also laid out to harness the power of systems engineering tools and information technology and to complement knowledge across scientific disciplines to achieve what were termed the "six Institute of Medicine quality aims" of the healthcare system, which included safety, effectiveness, patient centeredness, timeliness, efficiency, and equitability. Although gene expression information by itself is not ideal, systems biologists have developed methodologies to predict phenotypic outcomes for a specific gene expression pattern through the simulation of metabolic networks. These metabolic networks aid in linking pharmacogenomic variants, such as SNPs, to pathophysiological (phenotypic) outcomes. Through in silico models of these metabolic networks, the effects of sequence variations, alterations in specific components, and the resulting biochemical reaction kinetics can be analyzed in the context of the rest of the reactions in the entire network. 55 The application of such an approach has been demonstrated for human RBCs using a large-scale metabolic network, in which in silico models predicted pathophysiological outcomes for two established SNPs associated with two key enzymes. As expected, no clear relationship was observed between the SNPs and their associated kinetic parameters. However, when evaluated in the context of other simultaneously altered enzyme kinetics within the whole network, the model predicted overall cell behavior and eventually the clinical outcome. 55 Augmenting this approach with other omics data will uniquely identify the flux modes and further enhance the predictive power. We have developed a class of dynamic metabolic models, widely known as "Cybernetic Models," which provides a framework to accommodate gene expression information and enzyme regulation and to predict the system-level, dynamic metabolic profile and overall cellular outcome. 56,57 The potential clinical utilities of these metabolic models are evident from their ability to identify functionally interrelated sets of reactions and metabolites that are causally related to diverse pathophysiological conditions, including xenobiotic metabolism and biomarker identification. [58][59][60] The power of such approaches lies in the automatic integration of patient-specific information, both genotypic and phenotypic, which leads to the prediction of entire metabolic flux profiles and overall cellular outcomes. These metabolic and cellular functions can readily be associated with observed efficacy and/or toxicity through statistical and mathematical tools, which will help in clinical decision making. Recent advances in genome-scale computational models are also expected to provide key insights into how complex phenotypes evolve as a function of gene variants and molecular interactions.
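As a concrete, if drastically simplified, illustration of constraint-based metabolic network analysis, the sketch below solves a toy flux-balance problem; the three-reaction network and all bounds are invented for exposition and do not correspond to any model cited in the text.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis (network and numbers invented for illustration):
# maximize the "biomass" flux v3 subject to steady state S @ v = 0 and bounds.
# Reactions: v1: uptake -> A;  v2: A -> B;  v3: B -> biomass (objective).
S = np.array([[1, -1,  0],     # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])    # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, 8), (0, None)]   # e.g., an enzyme variant caps v2 at 8

res = linprog(c=[0, 0, -1],             # linprog minimizes, so negate v3
              A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)         # -> [8, 8, 8]: v2 is the bottleneck

# A "variant" with lower enzyme capacity on v2 shifts the whole flux profile:
bounds[1] = (0, 3)
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("variant fluxes:", res.x)         # -> [3, 3, 3]
```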
Genome-scale metabolic models are large-scale extensions of traditional metabolic networks used to analyze the entire cell in the light of available data and computational methods. Mathematical representation of physicochemical, environmental, and regulatory constraints, together with computational solutions, enables the identification of feasible and infeasible metabolic behavior, leading to reliable predictions even when comprehensive data are not available. 61 Using a computational model that accounted for all annotated gene functions, Karr et al. 62 have provided an understanding of several biological processes that was not feasible earlier through experimental techniques. In addition, the model accurately predicted the molecular pathologies of single-gene disruption phenotypes. The reconstruction of tissue-specific, genome-scale metabolic models, such as the ones describing human liver metabolism and cancer cell metabolism, along with an increased availability of extensive patient-specific "omics" data to refine these models, bodes well for advancing these approaches for personalized treatment. 60,63 Often, it is not feasible to obtain an objective, quantitative cellular response frequently, as in the case of neuropathic pain, cancer progression, etc., or the drug is so toxic to some patients that we cannot afford to titrate the drug dose. In these cases, measuring covariates that are closely connected (on the drug-disease cycle) to the clinical response may provide acceptable surrogate information for clinical decision making. For example, small molecules qualify as the immediate effectors of the clinical response. 64 Recently, a new concept of personalized treatment based on the metabolomic phenotype has been proposed by Nicholson et al. 65,66 and termed pharmacometabonomics. Metabonomics, a special case of metabolomics, studies the systematic variation in metabolic profiles due to external stimuli such as genetic modification, biological stimulus, and xenobiotic intervention. Pharmacometabonomics, at the intersection of pharmacology and metabonomics, was defined as "the prediction of the outcome, efficacy, or toxicity of a drug or xenobiotic intervention in an individual based on a mathematical model of a preintervention metabolite signature." Pharmacometabonomics aims to study the global metabolic fingerprint in predose biofluids and the characteristic change in the metabolic profiles due to drug dosing in the postdose biofluids. These two vital pieces of information can then be correlated to the clinical responses using chemometric tools. The key metabolites identified from this exercise are then mapped onto the relevant metabolic networks through various databases to reveal functional relationships in disease pathways. This approach, if designed carefully, promises to provide an unbiased and hypothesis-free analysis of the metabolic profile, which may help to identify unexpected biomarker combinations. 65 These endogenous metabolites will eventually aid in identifying patient subgroups that may be cured and/or are susceptible to side effects before commencing the treatment. A recent review article provides an extensive discussion of several preclinical and clinical applications of this new emerging area. 67
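Chemometric correlation of pre-dose metabolic fingerprints with response is often performed with latent-variable regression; below is a minimal sketch using partial least squares on synthetic data, where the data, the number of "informative" metabolites, and the two-component choice are arbitrary assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Minimal pharmacometabonomics-style sketch on synthetic data: correlate a
# pre-dose metabolite fingerprint (X) with a post-dose response (y) via PLS.
rng = np.random.default_rng(7)
n_patients, n_metabolites = 120, 50
X = rng.normal(size=(n_patients, n_metabolites))            # pre-dose profiles
w = np.zeros(n_metabolites)
w[:5] = [2, -1, 1.5, 0.5, -2]                               # 5 informative peaks
y = X @ w + rng.normal(scale=0.5, size=n_patients)          # simulated response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
print("held-out R^2:", pls.score(X_te, y_te))
# The loadings point back to the metabolites driving the correlation:
print("top metabolites:", np.argsort(-np.abs(pls.coef_.ravel()))[:5])
```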
Elucidation of the system-wide interplay of molecular and signaling events, and comparison with the response of pathological cells under the influence of a therapeutic drug, provides new quantitative mechanistic insights. 69 When combined with mathematical models, this information yields further insight into underlying mechanisms and into important parameters that cannot be extracted from experimental techniques alone. Extensions of these single-cell models that account for deterministic and stochastic population heterogeneity aid in predicting overall physiological outcomes and the emergence of multimodal cellular populations with distinct phenotypic features. 44,70 Apart from these detailed, global mechanistic approaches, several classes of semimechanistic models have been developed for various in vivo processes over the last few decades. 53,71 These types of models are adequate for describing a macrolevel phenomenon within the exhaustive overall process and for predicting the cellular outcome of a treatment intervention. However, rigorous efforts toward the individualization of these models and their effective clinical translation are lacking. Integrated quantitative approaches that combine genotypic, molecular profiling, and clinical data have shown promise in predicting causal relationships between specific genotypic/molecular signatures and biological and/or clinical outcomes. 72 Unlike traditional statistical approaches, which are often restricted to correlational studies, these approaches develop causative models, driven purely by data and prior knowledge, that establish the dependence structure among various interacting biomolecular entities and can indicate causal influence. Interactomics modeling has also shown promise for evaluating quantitative interactions at the macromolecular level (DNA, RNA, proteins, and other molecules) and for aiding the understanding of how local and global molecular network dynamics affect overall cellular properties and ultimately lead to human diseases. 73 Although some of these approaches have largely been used for drug target discovery, or applied to other, simpler organisms, it is now time to gain insights from them and extend them to personalized treatment based on an individual patient's omics signature. The quantitative approaches discussed here are admittedly in their infancy and require extensive validation in carefully designed, prospective clinical studies. The key conundrum in this regard involves translating the fast-burgeoning scientific findings emerging from various fronts. Admittedly, it is difficult to verify each one of these findings in controlled clinical trials. An interesting option would be to integrate the detailed, mechanistic network models with the PBPK-IVIVE (PBPK: physiologically based pharmacokinetics; IVIVE: in vitro-in vivo extrapolation) approach, in which ADME (absorption, distribution, metabolism, and elimination) is handled by the PBPK-IVIVE framework, whereas the kinetic insights for the PBPK model are governed by genetic and other omics data through metabolic network models, as sketched below. 74,75 Eventually, this approach could serve as a screener to select candidates for controlled clinical trials and to aid in their design. In addition, several other constraints surface when translating such approaches into routine clinical practice; one should consider logistical, computational, and economic factors.
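A minimal sketch of that coupling, with invented parameters: a one-compartment PK model whose clearance is scaled by a genotype-dependent enzyme activity that, in the full approach, a metabolic network model would supply:

```python
# A toy PBPK-flavored sketch: oral absorption into a single central compartment,
# with clearance (CL) scaled by a genotype-dependent enzyme activity. All values
# are illustrative assumptions, not validated physiological parameters.
import numpy as np
from scipy.integrate import solve_ivp

def pk(t, y, CL):
    gut, central = y
    ka, V = 1.0, 50.0                          # absorption rate (1/h), volume (L)
    return [-ka * gut, ka * gut - (CL / V) * central]

def conc_profile(activity):                    # activity: relative enzyme function
    CL = 10.0 * activity                       # clearance (L/h) informed by genotype
    t = np.linspace(0.0, 24.0, 200)
    sol = solve_ivp(pk, (0.0, 24.0), [100.0, 0.0], args=(CL,), t_eval=t)
    return t, sol.y[1] / 50.0                  # plasma concentration (mg/L)

for label, act in [("extensive metabolizer", 1.0), ("poor metabolizer", 0.3)]:
    t, c = conc_profile(act)
    auc = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))   # trapezoidal AUC
    print(f"{label}: Cmax ~ {c.max():.2f} mg/L, AUC ~ {auc:.1f} mg*h/L")
```

The point of the hybrid design is visible even in this toy: the same dose yields a markedly higher exposure for the low-activity genotype, flagging a candidate for dose adjustment before any titration is attempted.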
Efforts must also be devoted to communicating these techniques effectively to clinicians and to ensuring ease of use and interpretation. Social, ethical, and privacy-related risks of such personalized approaches are not insignificant. Genetic minorities must be protected from discrimination by drug developers, insurance companies, and employers. Unfortunately, besides these quantitative approaches, there are few alternatives available for solving the current challenges in treating critical diseases. However, we believe that a concerted and collaborative effort by various stakeholders and experts will pave the way for the effective implementation of holistic personalized medicine in clinical settings.
Enhanced superconductivity by near-neighbor attraction in the doped Hubbard model
A recent experiment has unveiled an anomalously strong electron-electron attraction in the one-dimensional copper-oxide chain Ba$_{2-x}$Sr$_x$CuO$_{3+\delta}$. While the near-neighbor electron attraction $V$ in the one-dimensional extended Hubbard chain has been examined recently, its effect in the Hubbard model beyond the one-dimensional chain remains unclear. We report a density-matrix renormalization group study of the extended Hubbard model on long four-leg cylinders on the square lattice. We find that the near-neighbor electron attraction $V$ can notably enhance the long-distance superconducting (SC) correlations while simultaneously suppressing the charge-density-wave (CDW) correlations. Specifically, for a modestly strong electron attraction, the superconducting correlations become dominant over the CDW correlations, with a Luttinger exponent $K_{sc}\sim 1$ and a strongly divergent superconducting susceptibility. Our results provide a promising way to realize long-range superconductivity in the doped Hubbard model in two dimensions. The relevance of our numerical results to cuprate materials is also discussed.
The origin of unconventional superconductivity is one of the greatest mysteries since the discovery of the high-T$_c$ cuprates [1]. Contrary to conventional BCS superconductors, it is widely believed that the strong electronic Coulomb repulsions in the 3d orbitals play the dominant role in the d-wave pairing mechanism in cuprates. Along this line, spin fluctuations generated by the doped antiferromagnetic state may provide the pairing glue [2][3][4][5]. Based on the minimal model describing the correlation effects, the single-band Hubbard model [6][7][8][9][10], this pairing mechanism has been proposed based on perturbation theory and instabilities on small clusters [11,12]. However, the ultimate verification requires exact proof of long-range d-wave superconducting order in the thermodynamic limit. To address these questions, advanced numerical simulations have been applied to the Hubbard model and its low-energy analog, the t-J model, in the past few years [9,10]. Many unusual phases in cuprates, such as a striped phase [13][14][15][16][17] and a strange metal phase [18][19][20], have been identified recently by unbiased and exact methods on clusters, such as the density matrix renormalization group (DMRG) and determinantal quantum Monte Carlo (DQMC). However, the search for the d-wave superconducting (SC) phase has not been quite as successful [21]. Quasi-long-range SC order has been found in the Hubbard and t-J models on four-leg square-lattice cylinders [22][23][24][25][26][27][28][29][30], where it may be tuned by the band structure, and in the striped Hubbard model [31]. However, when a similar study was extended to the wider six- and eight-leg t-J cylinders on the square lattice, which are closer to two dimensions, superconductivity was found to disappear on the hole-doped side [27][28][29]. Therefore, the original Hubbard and t-J models themselves might not be sufficient to resolve the high-T$_c$ puzzle. Meanwhile, experimental efforts have been devoted to the search for new insights. In a very recent photoemission experiment, a strong near-neighbor attractive electron interaction was identified in 1D cuprate chains, which may be mediated by phonons [32,33]. Such an interaction is likely to be a missing ingredient also in high-T$_c$ cuprates.
Intuitively, the extended Hubbard model (EHM) with on-site repulsion and near-neighbor electron attraction may favor nonlocal Cooper pairs [34,35]. Moreover, a recent DMRG study has identified dominant p-wave SC correlations in the pairing channel of the one-dimensional (1D) EHM [36]. These recent experimental and theoretical discoveries motivate the investigation of d-wave superconductivity in the presence of a near-neighbor attractive electron interaction.
Principal results - Previous DMRG studies [22][23][24] have shown that the ground state of the lightly doped Hubbard model on four-leg square cylinders with next-nearest-neighbor (NNN) electron hopping $t'$ is consistent with that of a Luther-Emery (LE) liquid [37], which is characterized by quasi-long-range SC and charge-density-wave (CDW) correlations but exponentially decaying spin-spin and single-particle correlations. However, while both the SC and CDW correlations are quasi-long-ranged, the latter dominates over the former in all cases, which suggests that CDW order may be realized in the two-dimensional (2D) limit. In this paper, we show that the presence of a finite nearest-neighbor (NN) electron attraction $V$ can notably enhance the SC correlations while suppressing the CDW correlations simultaneously. This demonstrates the mutual competition between the SC and CDW orders in the Hubbard model. More importantly, we find that the SC correlations become dominant over the CDW correlations when the electron attraction is modestly strong. This suggests that the SC order, instead of the CDW order, may be realized in the 2D limit when NN attractions are present. Our results provide a promising pathway to potentially realize long-range superconductivity in the Hubbard model.
Model and Method - We employ the DMRG method [38] to study the ground state properties of the single-band extended Hubbard model on the square lattice, defined by the Hamiltonian
$$H = -\sum_{ij,\sigma} t_{ij}\left(\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma} + \mathrm{h.c.}\right) + U\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow} + V\sum_{\langle ij\rangle}\hat{n}_{i}\hat{n}_{j}. \quad (1)$$
Here, $\hat{c}^{\dagger}_{i\sigma}$ ($\hat{c}_{i\sigma}$) is the electron creation (annihilation) operator with spin $\sigma$ ($\sigma=\uparrow,\downarrow$) on site $i=(x_i,y_i)$; $\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}$ and $\hat{n}_i=\sum_{\sigma}\hat{n}_{i\sigma}$ are the electron number operators. The electron hopping amplitude $t_{ij}$ equals $t$ when $i$ and $j$ are nearest neighbors and $t'$ for next-nearest neighbors. $U$ is the on-site repulsive Coulomb interaction. $V$ is the NN electron interaction, where $V<0$ and $V>0$ represent electron attraction and repulsion, respectively. We take the lattice geometry to be cylindrical with a lattice spacing of unity. The boundary condition of the cylinders is periodic along the $\hat{y}=(0,1)$ direction and open in the $\hat{x}=(1,0)$ direction. Here, we focus on four-leg cylinders of width $L_y=4$ and length up to $L_x=64$, where $L_x$ and $L_y$ are the numbers of lattice sites along the $\hat{x}$ and $\hat{y}$ directions, respectively. The doped-hole concentration is defined as $\delta=N_h/N$, where $N=L_y\times L_x$ is the total number of lattice sites and $N_h$ is the number of doped holes. For the present study, we consider the lightly doped case with hole doping concentration $\delta=12.5\%$. We set $t=1$ as the energy unit and focus on $U=12$ and $t'=-0.25$ as a representative parameter set. In our calculations, we keep up to $m=16000$ states in each DMRG block, with a typical truncation error $\sim 10^{-6}$.
Charge density wave order - To describe the charge density properties of the ground state of the system, we have calculated the charge density profile $n(x,y)=\langle\hat{n}(x,y)\rangle$ and its local rung average $n(x)=\sum_{y=1}^{L_y} n(x,y)/L_y$.
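As a small illustration of the rung-averaging step just described (the DMRG calculation itself requires a dedicated tensor-network library and is not attempted here), the following sketch builds n(x) from a synthetic density array and reads off the ordering wavevector from its Fourier spectrum:

```python
# A minimal sketch: rung-average a synthetic charge density n(x, y) on an
# Lx x Ly cylinder and locate the CDW ordering wavevector Q from the FFT.
# A real n(x, y) would come from a DMRG ground state, not from this toy input.
import numpy as np

Lx, Ly, delta = 64, 4, 0.125
x = np.arange(Lx)
# toy density with "half-filled stripe" modulation, Q = 4*pi*delta = pi/2
nxy = 0.875 + 0.05 * np.cos(4 * np.pi * delta * x)[:, None] * np.ones((1, Ly))

n_x = nxy.mean(axis=1)                          # local rung average n(x)
spec = np.abs(np.fft.rfft(n_x - n_x.mean()))
Q = 2 * np.pi * np.argmax(spec) / Lx
print("Q =", Q, " (expected pi/2 =", np.pi / 2, ")")
```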
For relatively weak electron attraction $V$, e.g., $V=-0.1$ as shown in the inset of Fig. 1, the charge-density distribution $n(x)$ is consistent with the "half-filled" charge stripe [22,26,40,41] of wavelength $\lambda_c=\frac{1}{2\delta}$, i.e., the spacing between two adjacent stripes, with half a doped hole per unit cell. This is consistent with previous DMRG studies of the single-band Hubbard model in the absence of electron attraction, i.e., $V=0$ [22][23][24]. Accordingly, the ordering wavevector $Q=2\pi/\lambda_c$ can be obtained by fitting the charge density oscillation induced by the boundaries of the cylinder [42,43],
$$n(x) \approx \frac{A\cos(Qx+\phi_1)}{\left[L_{\mathrm{eff}}\sin\left(\pi x/L_{\mathrm{eff}}+\phi_2\right)\right]^{K_c/2}} + n_0. \quad (2)$$
Here $A$ is a non-universal amplitude, $\phi_1$ and $\phi_2$ are phase shifts, $K_c$ is the Luttinger exponent, and $n_0$ is the mean density. We find that an effective length of $L_{\mathrm{eff}}\sim L_x-2$ best describes our results. As expected, we find that $\lambda_c\sim 4$ ($Q=4\pi\delta\sim\pi/2$) for the "half-filled" charge stripe. When the electron attraction becomes relatively strong, the CDW wavelength $\lambda_c$ starts deviating notably from the "half-filled" charge stripes on finite cylinders (see Fig. 1 insets). We note that such a deviation of both $\lambda_c$ and $Q$ from their half-filled stripe values becomes smaller with increasing cylinder length, suggesting that this deviation could be a finite-size effect. This is indeed supported by the finite-size scaling of $Q$ shown in Fig. 2(b). It is clear that in the long-cylinder limit, i.e., $L_x\to\infty$ or $1/L_x\to 0$, $Q$ for all different $V$ will converge to the same value $Q=4\pi\delta=\pi/2$, where $\lambda_c=1/2\delta=4$ at $\delta=12.5\%$. As a result, the "half-filled" charge stripes may be restored in the thermodynamic limit. It is also interesting to note that for cuprates such as La$_{2-x}$Ba$_x$CuO$_4$ and La$_{2-x}$Sr$_x$CuO$_4$ near 12.5% hole doping, the CDW wavevector $Q$ determined from scattering measurements is always smaller than that expected for the "half-filled" charge stripe, $4\pi\delta$ [39,[44][45][46]]. Considering the fact that experimentally the charge order in cuprates is short-ranged, with a correlation length of $\xi_{co}\approx 11\lambda_c$ in LBCO [39] and $\xi_{co}\approx 3\lambda_c$ in LSCO [44][45][46], our results shown in Fig. 2(b) on finite-length cylinders (with $L_x$ comparable to $\xi_{co}$ in cuprates) are suggestive of significant electron attraction $V$ in these materials, where the estimated values of the electron attraction (in the context of the single-band Hubbard model) are $V\approx-1.0(2)$ in LBCO and $V\approx-0.40(5)$ in LSCO. Importantly, another prominent observation in our study is that while the "half-filled" charge stripes may be restored in the thermodynamic limit, their amplitude and strength are monotonically suppressed by the electron attraction. This is supported by the fact that the CDW exponent $K_c$ defined in Eq. (2) increases notably with increasing electron attraction $|V|$, as shown in Fig. 3. For instance, we find that $K_c=0.65(1)$ for $V=-0.1$ while $K_c=0.87(1)$ for $V=-0.6$ on the four-leg cylinder of length $L_x=64$, with an apparent insensitivity to the length of the ladders used in this study. It is worth mentioning that when $V\lesssim-1.0$ the CDW order is sufficiently suppressed to be secondary, and the SC correlation becomes dominant, as shown in Fig. 3.
Superconducting correlations - Similar to the CDW correlation, at long distances the pair-field correlation $\Phi_{yy}(r)$ (the equal-time spin-singlet pair-pair correlation between vertical bonds separated by a distance $r$) is characterized by a power law, as shown in Fig. 4(a), with the corresponding Luttinger exponent $K_{sc}$ defined by
$$\Phi_{yy}(r) \sim r^{-K_{sc}}. \quad (4)$$
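A sketch of how a fit of Eq. (2) can extract Q and K_c from a density profile; the profile below is synthesized from Eq. (2) itself, whereas in practice n(x) would come from the DMRG ground state:

```python
# Fit the Friedel-oscillation form of Eq. (2) to a rung-averaged density n(x)
# to extract the ordering wavevector Q and the Luttinger exponent K_c.
# Synthetic data stand in for DMRG output; L_eff = Lx - 2 as in the text.
import numpy as np
from scipy.optimize import curve_fit

Lx = 64
Leff = Lx - 2

def friedel(x, A, Q, phi1, phi2, Kc, n0):
    env = (Leff * np.abs(np.sin(np.pi * x / Leff + phi2))) ** (-Kc / 2.0)
    return A * np.cos(Q * x + phi1) * env + n0

x = np.arange(4, Lx - 4, dtype=float)           # discard sites near the open ends
rng = np.random.default_rng(1)
n_x = friedel(x, 0.8, np.pi / 2, 0.3, 0.05, 0.7, 0.875) + 1e-4 * rng.normal(size=x.size)

p0 = [0.5, np.pi / 2, 0.0, 0.0, 1.0, 0.875]     # initial guess near delta = 12.5%
popt, _ = curve_fit(friedel, x, n_x, p0=p0, maxfev=20000)
print("Q   = %.3f (4*pi*delta = %.3f)" % (popt[1], np.pi / 2))
print("K_c = %.3f" % popt[4])
```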
As mentioned, previous studies [22][23][24] on four-leg square cylinders have shown that without electron attraction, that is, at $V=0$, the CDW correlations dominate over the SC correlations, as $K_c<K_{sc}$. This suggests that the CDW order may be realized in the 2D limit, although the SC correlations are substantial. It is hence highly nontrivial to find a way to enhance the SC correlations while suppressing the CDW order. In the previous section, we have shown that the CDW correlations can be notably suppressed by the NN electron attraction $V$. Accordingly, we would expect that the SC correlations can be enhanced by the NN electron attraction $V$, since the CDW and SC orders are mutually competing [47]. Our numerical results are indeed consistent with this expectation. As shown in Fig. 3, the SC correlations become dominant over the CDW correlations when $V\lesssim-1.0$, where $K_{sc}<K_c$. While a slow decay of the SC correlations with $K_{sc}<2$ implies an SC susceptibility that diverges as $\chi_{sc}\sim T^{-(2-K_{sc})}$ as the temperature $T\to 0$, a much smaller $K_{sc}$, i.e., $K_{sc}\sim 1$, leads to a much more strongly divergent SC susceptibility. This suggests that the SC order, instead of the CDW order, could be realized in the 2D limit. As far as we know, to date, this is the first time that dominant SC correlations have been observed via DMRG in the uniform Hubbard model on the square lattice of width $L_y>2$. It is worth mentioning that while adding a near-neighbor attraction flips the dominant order, the ground state of the system is still consistent with that of an LE liquid phase, where $K_{sc}K_c\sim 1$.
Spin-spin and single-particle correlations - To describe the magnetic properties of the ground state, we calculate the spin-spin correlation function defined as $F(r)=\langle\mathbf{S}_{x_0,y_0}\cdot\mathbf{S}_{x_0+r,y_0}\rangle$. Here $\mathbf{S}_{x,y}$ is the spin operator on site $i=(x,y)$, and $i_0=(x_0,y_0)$ is the reference site with $x_0\sim L_x/4$. Fig. 5(a) shows $F(r)$ for the four-leg cylinder of length $L_x=64$. It is clear that $F(r)$ decays exponentially as $F(r)\sim e^{-r/\xi_s}$ at long distances, with a finite correlation length $\xi_s$. This is consistent with a finite excitation gap in the spin sector. Moreover, we find that $\xi_s$ decreases with increasing $|V|$, e.g., $\xi_s=7.8(1)$ for $V=-0.1$ and $\xi_s=7.0(2)$ for $V=-0.6$. This suppression of short-range antiferromagnetic correlations may thus help to destabilize the CDW order and promote the SC order. Consistent with previous studies [22][23][24], $F(r)$ displays spatial modulations with a wavelength twice that of the charge. We have also calculated the single-particle Green function, defined as $G_\sigma(r)=\langle\hat{c}^{\dagger}_{x_0,y_0,\sigma}\hat{c}_{x_0+r,y_0,\sigma}\rangle$. Examples of $G_\sigma(r)$ are shown in Fig. 5(b). The long-distance behavior of $G_\sigma(r)$ is consistent with an exponential decay, $G_\sigma(r)\sim e^{-r/\xi_G}$. Similar to $\xi_s$, we find that $\xi_G$ also decreases with increasing $|V|$. For instance, the extracted correlation lengths are $\xi_G=3.7(1)$ for $V=-0.1$ and $\xi_G=3.5(1)$ for $V=-0.6$, respectively. These are consistent with the LE phase.
Summary and discussion - In this paper, we have studied the ground state properties of the lightly doped extended Hubbard model on four-leg square cylinders in the presence of near-neighbor electron attraction. Taken together, our results show that the ground state of the system is consistent with an LE liquid [37], where both the CDW and SC correlations decay as power laws and $K_{sc}K_c\sim 1$. However, previous studies [22][23][24] show that the SC correlations are secondary when $V=0$ compared with the CDW correlations, since $K_c<1<K_{sc}$.
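The two long-distance behaviors quoted above can be illustrated with toy correlators: an exponential fit on a semilog scale for F(r) and G_σ(r), and a power-law fit on a log-log scale for Φ_yy(r):

```python
# A sketch of extracting a correlation length xi from an exponential decay and
# a Luttinger exponent K_sc from a power-law decay. Correlators are synthetic.
import numpy as np

r = np.arange(4, 30, dtype=float)
F = 0.05 * np.exp(-r / 7.8)          # toy spin-spin correlation, xi_s = 7.8
Phi = 0.02 * r ** (-1.0)             # toy pair-field correlation, K_sc = 1.0

xi_slope, _ = np.polyfit(r, np.log(np.abs(F)), 1)        # semilog fit
Ksc_slope, _ = np.polyfit(np.log(r), np.log(Phi), 1)     # log-log fit
print("xi_s  = %.2f lattice spacings" % (-1.0 / xi_slope))
print("K_sc  = %.2f" % -Ksc_slope)
```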
Interestingly, we find that the near-neighbor electron attraction $V$ can significantly enhance the SC correlations while simultaneously suppressing the CDW correlations. As a result, when $V\lesssim-1.0$, the SC correlations become dominant while the CDW correlations become secondary, as $K_{sc}<1<K_c$. To the best of our knowledge, to date, this is the first numerical realization of dominant superconductivity in the doped uniform Hubbard model on square-lattice cylinders wider than a two-leg ladder. While in this paper we have focused on the effect of the electron attraction $V$ on the LE liquid phase, it will also be interesting to study its effect on doping a qualitatively distinct phase, such as the insulating "filled" stripe phase [48] of the Hubbard model with $t'=0$, to see whether superconductivity can likewise be obtained. Answering these questions may lead to a better understanding of the mechanism of high-temperature superconductivity. We note that the critical $V_c$, where superconductivity starts to dominate in our simulation, is consistent with the recently identified attractive interaction in the 1D cuprate Ba$_{2-x}$Sr$_x$CuO$_{3+\delta}$ [32]. Considering the chemical similarity, it is reasonable to believe that the effective near-neighbor attraction in the CuO$_2$ plane is comparable, $V\sim-t$. Therefore, our finding suggests the importance of additional interactions beyond the Hubbard model in stabilizing superconductivity over CDW order. Future high-resolution experiments, such as photoemission and x-ray scattering, and their comparison with numerical simulations may quantify this effective interaction $V$ in high-T$_c$ cuprates, in a similar way as for the 1D cuprate chains. Another approach to estimating this effective $V$ in realistic materials is a microscopic analysis of the cuprates' crystal and electronic structure. Site phonons coupled to the electronic density are possible candidates to mediate such an attractive interaction, as has been discussed quantitatively [33]. However, a combined impact of other phonons and other bosonic excitations may contribute to this effective interaction and ultimately result in the strong d-wave superconductivity in cuprates. We would like to thank Steven Kivelson
Enantioselective total synthesis of the unnatural enantiomer of quinine
A practical enantioselective total synthesis of the unnatural (+)-quinine and (−)-9-epi-quinine enantiomers, which are important organocatalysts, is reported.
Introduction
For more than a century, (−)-quinine (1), which is widely known as an antimalarial agent, 1 has attracted intense interest from both academia and industry. By the end of the 20th century, cinchona alkaloids, which include 1, became game-changing materials in the field of catalytic enantioselective synthesis because these alkaloids and their derivatives work as irreplaceable chiral ligands and organocatalysts. 2 There is no doubt that the usefulness of these compounds has been boosted by the ongoing efforts of synthetic chemists. However, the chiral source material is derived from cinchona alkaloids, and the latter are only available in economically viable quantities from natural sources. Although quinine and quinidine are diastereomers with the same stereochemical configuration at C3 and C4, they have fortunately been found to be pseudoenantiomers with respect to chiral ligands and catalysts. 3 However, given that neither the reactivity nor the enantioselectivity of the two compounds is identical, with both depending on the reaction conditions, an efficient and scalable synthesis of the unnatural enantiomer (+)-quinine (2) has been in high demand. The synthesis of cinchona alkaloids has advanced since the first (formal) total synthesis of quinine (1) was achieved by Woodward and Doering. 4 The first asymmetric total synthesis of quinine was reported by Stork in 2001; this approach featured construction of the C3 and C4 contiguous chiral centers and quinuclidine synthesis using reductive amination followed by SN2 cyclization (Fig. 1a). 5 Subsequently, the late-stage construction of the quinuclidine scaffold through N1-C8 bond formation was developed by Jacobsen, and this represents one of the most powerful synthetic strategies for the synthesis of cinchona alkaloids. 6 This strategy has the clear advantage that it can be used to obtain either quinine or quinidine from the same intermediate. Furthermore, two elegant syntheses were recently reported in 2018. 7,8 Maulide's synthesis features incorporation of the vinyl group through C-H activation and an innovative C8-C9 bond disconnection. 7 In the same year, a stereoselective divergent synthesis of quinine and quinidine based on local desymmetrization at the C3 and C5 positions was reported by Chen. 8 Although several known synthetic pathways can be used for the preparation of (+)-quinine (2) on a reasonable scale, we envisioned a synthetic design that could allow a practical synthesis of 2 through direct coupling between a quinoline unit and a quinuclidine precursor, which would expand the diversity of the aromatic portion and facilitate the development of novel organocatalysts (Fig. 1b). In addition, this strategic bond disconnection also allows the generation of (−)-9-epi-quinine (3), which is a known precursor of urea, thiourea, and primary amine organocatalysts. 9 The synthetic plan for our approach is shown in Fig. 1b. We envisioned that the three chiral centers would be constructed by using our previously developed multisubstituted chiral piperidine synthesis based on a secondary amine organocatalyzed formal aza [3 + 3] cycloaddition reaction 10 followed by Strecker cyanation.
This key sequence allows highly functionalized C-4 alkyl piperidine derivative 6 to be obtained, which can be transformed into quinuclidine precursor 7, a key unit of both 2 and 3. Initially, we considered coupling the quinoline fragment with quinuclidine-2-carbaldehyde; however, difficulties associated with controlling the stereochemistry of the aldehyde moiety on the quinuclidine ring were anticipated. Thus, we expected to introduce the quinoline fragment by using the thermodynamically controlled aldehyde attached to the chiral piperidine. To complete the total synthesis, direct coupling of the quinoline unit through nucleophilic addition of the metal complex of a quinoline derivative to the C9 position of 7, followed by quinuclidine formation, was designed. Herein we report a practical and enantioselective total synthesis of (+)-2 and (−)-3 using only 0.5 mol% chiral source over 15 steps in 16% overall yield.
Results and discussion
Our synthesis commenced with the enantioselective construction of the fully substituted piperidine 11 by using our reported asymmetric formal aza [3 + 3] cycloaddition reaction employing diphenylprolinol diphenylmethylsilyl ether catalyst 10 (ref. 11) (Scheme 1). The methodology required less than 1 mol% catalyst, and chiral 4-alkyl piperidine-2-ol derivatives were obtained in excellent yield and enantiomeric excess (up to 97% ee) within an acceptable reaction time (<50 h). 10 The reactivity of the nucleophile was increased by the ease of enolization of the thiocarbonyl group and by the addition of suitable additives (1.0 equiv. of benzoic acid and 3.0 equiv. of MeOH). Thus, treatment of the known 5-hydroxypentenal derivative 8 (1.1 equiv.), which was prepared in two steps, 12 and thiomalonamate 9 (1.0 equiv.), which was prepared in a one-pot operation from commercially available 1,3-dimethoxybenzylamine (see details in the ESI†), in the presence of 0.5 mol% catalyst 10, benzoic acid (1.0 equiv.), and MeOH (3.0 equiv., toluene, 30 °C, 20 h), provided the corresponding chiral 4-alkyl piperidine-2-ol. Cyanation of the hemiaminal moiety of the crude diastereomeric product mixture formed from the formal aza [3 + 3] cycloaddition reaction was carried out directly in a Strecker-type cyanation reaction. Thus, 6.0 equiv. of TMSCN in the presence of 1.2 equiv. of BF3·Et2O was added to the reaction mixture, and the desired cyano δ-thiolactam 11 was obtained in 90% yield over two steps as a mixture of three diastereomers.
Scheme 1 Preparation of the key piperidine compound 16.
Elaboration of 11 toward the tetrasubstituted piperidine intermediate 16 required site-selective reduction among the ester, nitrile, and thiolactam, followed by one-carbon elongation to form the terminal olefin. We initially tried several derivatizations with the nitrile group in place. However, most of the reaction conditions ultimately led to difficulties resulting from removal or reduction of the cyano group; therefore, we explored the derivatization of the cyano group first. After several experiments, we discovered that the electron-deficient cyano group near the thiolactam was easily transformed into a methyl imidate, which resists reduction conditions (see below). Thus, the diastereomer mixture of 11 was treated with DBU in the presence of MeOH to provide methyl imidate 12. At this stage, the two major diastereomers were isolated (79%, dr = α-CO2Me/β-CO2Me = 3:1). The enantiomeric excess of each diastereomer was 94% ee.
The stereochemistry of both diastereomers was determined by analysis of the coupling constants in the ¹H NMR spectra (see details in the ESI†). We thus confirmed that the cyanide anion was predominantly introduced from the axial site in the Strecker reaction. With imidate 12 as a mixture of two diastereomers, the thiolactam-selective reduction was achieved by treatment with nickel boride, generated in situ from NiCl2·6H2O (3.0 equiv.) and NaBH4 (12.0 equiv., THF, MeOH, −20 °C, 5 min, 78%), 13 to provide piperidine derivative 13 as a mixture of two diastereomers. The subsequent ester-selective reduction with diisobutylaluminum hydride (DIBAL-H, 5.0 equiv., CH2Cl2, −95 °C, 2 h) succeeded, and the desired aldehyde was obtained without reduction of the imidate moiety. Subsequently, the methyl imidate moiety was hydrolyzed under mild acidic conditions (AcOH/THF/H2O = 1:30:6, rt). At the same time, the aldehyde was also isomerized to the thermodynamically more stable trans form, and the desired methyl ester 14 was obtained as a single diastereomer. As a result, by using the protocol reported herein, the cyano group was easily transformed into the ester via the imidate without being affected by the reductions. The introduction of the vinyl group by using a strongly basic Wittig reagent was not suitable for 14; the reaction afforded a low yield (<30%) because of enol formation and unexpected degradation via a retro-Mannich reaction. On the other hand, the one-carbon elongation of 14 was achieved upon treatment with the Lewis acidic Tebbe reagent (1.1 equiv., toluene/THF, 0 to 23 °C, 2 h) to provide 15 in high yield (70% over three steps). The C6 ester was then reduced with DIBAL-H (3.0 equiv., CH2Cl2, −95 °C, 2.5 h) to provide tetrasubstituted piperidine derivative 16 in excellent yield (83%). As expected, after purification by silica gel column chromatography, the aldehyde in 16 spontaneously adopted the equatorial position in the six-membered ring system, which possesses the desired configuration for 2 and 3. Chiral piperidine 16 constitutes a potential key intermediate for preparing novel organocatalysts, and it was obtained in gram-scale quantities in eight steps in 32% overall yield from thiomalonamate 9 using only 0.5 mol% chiral source. In our established protocol, the C4 chiral center constructed by the organocatalytic asymmetric reaction was used to control the configuration at the other two stereocenters (C3 and C6) through thermodynamic isomerization reactions. As a result, we could employ any of the diastereomers to prepare 16. Although we initially planned to introduce the quinoline derivative directly to 16 by using a quinoline metal complex (M = Li, MgBr, MgCl·LiCl, Me2Zn, Me3Al, LaCl3), all attempts failed to provide the coupling product in acceptable yield because of the unavoidable generation of side products from the homo-coupling of quinoline. 14 These results indicated that the electrophilicity of 4-bromoquinoline was much higher than that of the piperidine-2-carbaldehyde. In addition, the C4 anion generated on the quinoline ring was stabilized by the p-orbitals of the incorporated nitrogen; thus, the nucleophilicity of the quinoline metal complex was not sufficient to facilitate reaction with aldehyde 16. We then considered the use of 17 as an alternative nucleophile, given that deconjugation at the 1- and 2-positions on the quinoline ring of this compound was expected to both prevent the side reaction and increase the reactivity (Scheme 2).
By using this approach, the desired coupling reaction between the quinuclidine fragment and dihydroquinoline derivative 17, the deconjugated equivalent of quinoline, was achieved (Scheme 2). Thus, the reaction of 17, which was prepared in four steps from p-methoxyaniline, 15 with n-BuLi (1.2 equiv., THF, −90 °C, 30 min) provided the corresponding 4-lithiated dihydroquinoline. The addition of the latter to 16 (THF, −80 °C, 22 h) gave the desired coupling product 18 in good yield (72%, brsm 97%) as a mixture of two diastereomers at the C-9 position (α/β = 1:1). At this stage, with over 2 g of 18 in hand, the stereoisomers could be separated by silica gel column chromatography, and a full optimization of the synthesis of (+)-quinine using the C-9 α-hydroxy isomer was accomplished (see details in the ESI†). Thereafter, the simultaneous synthesis of 2 and 3 using the mixture of C-9 stereoisomers was carried out, as shown in Scheme 2. Thus, acetylation of secondary alcohol 18 was accomplished by treatment with acetic anhydride in the presence of DMAP. Removal of the silyl protecting group with methanolic hydrochloric acid provided 19 in quantitative yield in gram-scale quantities, and subsequent mesylation of 19 with methanesulfonyl chloride (1.5 equiv.) in the presence of Et3N (5.0 equiv.) afforded the precursor for the quinuclidine ring formation reaction. The tertiary amine was then rapidly obtained by quinuclidine formation in a thermal, neutral intramolecular SN2 reaction in toluene at 120 °C, accompanied by spontaneous removal of the DMB group from the resulting ammonium salt in the presence of anisole. In this spontaneous removal of the benzyl moiety, the dimethoxy group on the benzene ring was essential; monomethoxy derivatives such as the PMB group did not have sufficient electron density to enable spontaneous removal. In addition, the avoidance of redox processes such as CAN oxidation or hydrogenation to remove the benzyl moiety was crucial to complete the total synthesis. Finally, removal of the acetyl and phenyl sulfone groups of 20 using potassium t-butoxide (3.0 equiv., t-BuOH, 60 °C, 1.5 h) provided 2 and 3, respectively, in excellent yield (77% over three steps, 2/3 = 1.1:1; 310 mg of 2 and 272 mg of 3 were obtained). The dihydroquinolines were aromatized via spontaneous aerobic oxidation or elimination of sulfinic acid under the reaction conditions.
Conclusions
In conclusion, the synthesis of the title compound was achieved by using an enantioselective formal aza [3 + 3] cycloaddition/Strecker reaction sequence, followed by sequential chemoselective reduction/transformation of the ester, thiolactam, and cyano groups via the transformation of the cyano group into imidate moieties. In addition, an organolithium-mediated coupling reaction between the dihydroquinoline derivative and the piperidine-2-carbaldehyde, followed by construction of the quinuclidine ring with spontaneous removal of the DMB group under neutral conditions, was established. The key enantioselective organocatalysis step provides a multiply substituted thiolactam as an equivalent of a highly functionalized piperidine derivative, and this allowed a straightforward synthesis without any oxidation except for the last step (autooxidation from dihydroquinoline to quinoline).
The practical enantioselective total synthesis of (+)-2 and (−)-3 required only 0.5 mol% of the chiral source (diphenylprolinol silyl ether catalyst 10) and was achieved in 15 steps, including the Mitsunobu conversion, both in 16% overall yield from thiomalonamate 9. Our synthesis not only provides the unnatural enantiomer of quinine, which is required in many areas of current chemical endeavour, but also enables several aromatic groups to be introduced onto key intermediate 16. Thus, a wide range of new catalysts that were previously difficult to derive from naturally occurring cinchona alkaloids are now available by using our method. Indeed, further development of new classes of cinchona alkaloid-mimic catalysts is under way.
Conflicts of interest
There are no conflicts to declare.
Nociceptor Neurons Magnify Host Responses to Aggravate Periodontitis
Periodontitis is a highly prevalent chronic inflammatory disease that progressively destroys the structures supporting teeth, leading to tooth loss. Periodontal tissue is innervated by abundant pain-sensing primary afferents expressing neuropeptides and transient receptor potential vanilloid 1 (TRPV1). However, the roles of nociceptive nerves in periodontitis and bone destruction are controversial. The placement of ligature around the maxillary second molar or the oral inoculation of pathogenic bacteria induced alveolar bone destruction in mice. Chemical ablation of nociceptive neurons in the trigeminal ganglia achieved by intraganglionic injection of resiniferatoxin decreased bone loss in mouse models of experimental periodontitis. Consistently, ablation of nociceptive neurons decreased the number of osteoclasts in alveolar bone under periodontitis. The roles of nociceptors were also determined by the functional inhibition of TRPV1-expressing trigeminal afferents using an inhibitory designer receptor exclusively activated by designer drugs (DREADD). Noninvasive chemogenetic functional silencing of TRPV1-expressing trigeminal afferents not only decreased the induction but also reduced the progression of bone loss in periodontitis. The infiltration of leukocytes and neutrophils into the periodontium increased at the site of ligature, which was accompanied by an increased amount of proinflammatory cytokines, such as receptor activator of nuclear factor κB ligand, tumor necrosis factor, and interleukin 1β. The extents of increase in immune cell infiltration and cytokines were significantly lower in mice with nociceptor ablation. In contrast, the ablation of nociceptors did not alter the periodontal microbiome under the conditions of control and periodontitis. Altogether, these results indicate that TRPV1-expressing afferents increase bone destruction in periodontitis by promoting hyperactive host responses in the periodontium. We suggest that specific targeting of neuroimmune and neuroskeletal regulation can offer promising therapeutic targets for periodontitis, supplementing conventional treatments.
Introduction
Periodontitis is a chronic inflammatory disease affecting approximately 40% of the adult population in the United States (Eke et al. 2012). Periodontitis causes destruction of the periodontium, including alveolar bone and periodontal ligament, eventually resulting in tooth loss if left untreated. Current approaches to delaying the progression of the disease or regenerating lost periodontium are unsatisfactory, and novel treatment approaches are critically needed. The destruction of periodontal tissues during the progression of periodontitis is primarily due to the dysregulation of local host inflammatory or immune responses, in conjunction with microbial dysbiosis (de Vries et al. 2017; Lamont et al. 2018). Consequently, numerous factors regulating periodontal host responses can affect the progression of periodontitis. The periodontium contains numerous sensory nerves that transduce noxious stimuli. These nociceptive afferents often contain neuropeptides and express high levels of the capsaicin receptor transient receptor potential vanilloid 1 (TRPV1), a Ca²⁺-permeable nonselective cationic channel (Chung, Jung, and Oh 2011). TRPV1 + peptidergic afferents modulate the function of barrier tissue in the skin, lung, and gut and also regulate host responses under bacterial infection and tissue injury (Baral et al.
2019). Nociceptive afferent-mediated modulation of host responses results in either protective or destructive outcomes in different contexts (Razavi et al. 2006; Talbot et al. 2015; Pinho-Ribeiro et al. 2018). However, their roles in modulating host responses in periodontitis are not well understood. Previous studies evaluating the neural control of bone remodeling and periodontal bone loss have produced contradictory results (Hill et al. 1991; Adam et al. 2000; Offley et al. 2005; Breivik et al. 2011; Takahashi et al. 2016), creating a need to clarify the role of TRPV1 + afferents in periodontitis. In this study, we tested the hypothesis that TRPV1 + nociceptive afferents aggravate periodontitis by promoting hyperactive host immune responses. Experimental Animals All animal procedures were performed according to the NIH Guide for the Care and Use of Laboratory Animals (Publication 85-23, Revised 1996), a protocol approved by the University of Maryland Institutional Animal Care and Use Committee, and the ARRIVE guidelines. C57Bl/6, TRPV1Cre;Rosa26mTmG, and Rosa26LSL-hM4Di mice were purchased from Jackson Laboratory. We used both male and female mice older than 8 wk. Mouse Models of Periodontitis Ligature-induced periodontitis was initiated by placing a 5-0 silk ligature around the maxillary left second molar (Abe and Hajishengallis 2013). The sutures were tied gently to prevent damage to the periodontal tissues. The ligatures remained in place in all mice throughout the indicated experimental period. In a separate experiment, bacteria-induced periodontitis was induced as described previously by oral inoculation of Porphyromonas gingivalis (10⁹ colony-forming units) and Fusobacterium nucleatum (10⁹ colony-forming units) (Xiao et al. 2015). Osmotic Pump Implantation Micro-osmotic pumps (Alzet; 100 µL) were filled with clozapine-N-oxide (CNO), a chemical actuator of DREADD, and inserted under the back skin. This allows slow release of CNO for 14 d. Micro-Focus Computed Tomography After transcardial perfusion using 3.7% paraformaldehyde, the maxillae were hemisected, and micro-focus computed tomography (µCT) scanning was performed using a Siemens Inveon Micro-PET/SPECT/CT (Siemens) with 9 µm spatial resolution. The linear distance from the cementoenamel junction (CEJ) to the alveolar bone crest (ABC) on the buccal side was measured (Abe and Hajishengallis 2013; Xiao et al. 2015). Histological Assays Immunohistochemical assays of the trigeminal ganglia (TG) and decalcified maxillae were performed as previously described (Chung, Lee, et al. 2011; Chung et al. 2012; Wang et al. 2017). Conventional immunohistochemical procedures were performed using rabbit anti-TRPV1 or chicken anti-GFP. For tartrate-resistant acid phosphatase (TRAP) staining, the maxillae were decalcified in 0.5 M EDTA, embedded in paraffin, and sectioned at 5-µm thickness. TRAP staining was performed using a commercial kit (Wako Pure Chemical). Flow Cytometry We prepared single-cell suspensions from gingival tissues according to a previously published method (Dutzan et al. 2016). The mice were transcardially perfused using >20 mL of phosphate-buffered saline (PBS) to flush out the immune cells from the vasculature. Gingival tissues around the 3 molars on both the buccal and palatal sides were dissected out, digested in type IV collagenase and DNase, and mashed through a cell strainer. Flow cytometry was performed using a Cytek Aurora flow cytometer (Cytek Biosciences). For detecting neutrophils, we used CD45-PE, CD11b-BV421, and Ly6G-PE-Cy7.
7-Aminoactinomycin D (7-AAD) was used to exclude dead cells from the analysis. The results were analyzed using FCS Express 7 software. Real-Time PCR Maxillae were dissected out, and total RNA was extracted using Trizol and purified using a Direct-zol RNA MicroPrep kit (Zymo Research). Real-time polymerase chain reaction (PCR) was performed and analyzed as described previously (Wang and Chung 2020) using the primer pairs described in the Appendix methods. Relative quantification of messenger RNA (mRNA) was achieved using the 2^−ΔΔCt method (a worked example is sketched below). Luminex Multiplex Cytokine Assay and Enzyme-Linked Immunosorbent Assay Maxillae were dissected out and ground in a Tris buffer containing a protease/phosphatase inhibitor cocktail, and the supernatant was used for the Luminex assays. Circulating norepinephrine was measured from blood collected retro-orbitally using a norepinephrine enzyme-linked immunosorbent assay (ELISA) kit. 16S Ribosomal RNA Sequencing Bacterial 16S ribosomal RNA (rRNA) genes were PCR amplified with dual-barcoded primers targeting the V4 region. The amplicons were sequenced with an Illumina MiSeq, taxonomically classified, and clustered into operational taxonomic units (OTUs) using the Mothur software package. The α diversity was estimated using the Shannon index on raw OTU abundance tables. To estimate the β diversity across samples, we computed the Bray-Curtis indices, and the variation in community structure was evaluated using permutational multivariate analyses of variance (ANOVAs). The results were uploaded to NCBI SRA (PRJNA750467). Statistical Analysis All data are presented as mean ± SEM. The data were analyzed using Student's t tests and 1-way or 2-way ANOVA followed by post hoc assays. TRPV1 + Afferents Drive Periodontitis-Induced Bone Loss To determine the roles of TRPV1 + afferents in periodontitis, resiniferatoxin (RTX) was stereotaxically microinjected into the bilateral TG of adult mice (Fig. 1A). One week after the RTX injection, the effects of nerve ablation were assessed using the ligature-induced periodontitis model. A ligature was tied around the maxillary left second molar, with the contralateral unligatured tooth serving as a control. The mice were euthanized 12 to 14 d after the placement of the ligature. The extent of bone loss was evaluated using µCT (Fig. 1B). Significant crestal bone loss around the second molars was noted in both the vehicle (Veh)- and RTX-treated groups (Fig. 1B, C). However, the extent of bone loss was approximately 45% lower in RTX-injected mice (P < 0.001) when compared with Veh-injected controls. The bone levels around unligatured control teeth were comparable in the RTX and Veh groups (Fig. 1B, C). The bone volume/total volume ratio in the furcation of unligatured maxillary first molars (Veh, 0.40 ± 0.11; RTX, 0.51 ± 0.11; P = 0.11; n = 7/group) also did not differ. We also determined the effects of ablation of TRPV1 + afferents on bone loss in an independent periodontitis model, induced by inoculation of oral bacteria (Fig. 1D-F). A week after intra-TG injections of RTX or Veh, we inoculated P. gingivalis and F. nucleatum. Orally inoculated bacteria induced bone loss, which was significantly reduced when TRPV1 + afferents were ablated. Because the extent of bone loss in the bacterial inoculation model was modest, we used the ligature model for the remaining study.
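As a worked example of the 2^−ΔΔCt quantification mentioned in the real-time PCR methods above (the Ct values are invented for illustration):

```python
# The 2^(-delta-delta-Ct) method: normalize the target gene to a reference gene
# within each condition (delta Ct), difference the conditions (delta-delta Ct),
# then exponentiate to get fold change. Ct values below are hypothetical.
ct = {"control":  {"target": 27.0, "ref": 18.0},
      "ligature": {"target": 24.5, "ref": 18.2}}

d_ct_control = ct["control"]["target"] - ct["control"]["ref"]      # 9.0
d_ct_ligature = ct["ligature"]["target"] - ct["ligature"]["ref"]   # 6.3
dd_ct = d_ct_ligature - d_ct_control                               # -2.7
fold_change = 2.0 ** (-dd_ct)
print(f"ddCt = {dd_ct:.2f} -> fold change = {fold_change:.2f}")    # ~6.5x up
```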
Fig. 1. (A) […] and an unligatured contralateral tooth served as a control. After 2 wk, the mice were euthanized for micro-focus computed tomography (µCT) and histology study. Scale bar: 1 mm. (B) Three-dimensional (3D) reconstruction of µCT-scanned images. ABC, alveolar bone crest; CEJ, cementoenamel junction; the red arrows indicate points where distances between the ABC and the CEJ were evaluated. Four measurements were averaged in each sample. Scale bar: 1 mm. (C) Distances from the ABC to the CEJ assessed by µCT. ***P < 0.001 in Bonferroni post hoc tests following 2-way analysis of variance. n = 6 to 9 per group. (D) Timeline of the experiment for bacteria-induced periodontitis. In C57BL/6 mice, antibiotics were administered through the drinking water for 8 d, followed by 2 d of normal drinking water. Oral inoculation of Porphyromonas gingivalis (2 × 10⁹ colony-forming units [CFU], 200 µL) and Fusobacterium nucleatum (2 × 10⁹ CFU, 200 µL) or vehicle (2% methylcellulose) only was performed over 12 d (6 times, every 2 d). Mice were euthanized 2 wk after the last inoculation. (E) Examples of 3D-reconstructed µCT images. Nine measurements were averaged in each sample. Scale bar: 1 mm. (F) RTX-induced ablation of TRPV1 + afferents reduced bacterial inoculation-induced resorption of alveolar bone. Averaged distances from the ABC to the CEJ assessed by µCT. *P < 0.05 in Bonferroni post hoc tests following 2-way analysis of variance. n = 4 in the control and 5 in the bacteria group.

Next, we determined the effects of nociceptor ablation on the number of osteoclasts using TRAP staining (Fig. 2A, B). Mice with ligature-induced periodontitis had more osteoclasts than unligatured controls. Significantly fewer osteoclasts were seen around the second molars in the RTX/ligature group than in the Veh/ligature group, while the numbers of osteoclasts around unligatured control teeth were not affected. We validated the effects of intra-TG-injected RTX functionally and histologically. One week after intra-TG injection of RTX, eye-wiping behaviors in response to capsaicin application were significantly reduced (Fig. 2C). Postmortem histology indicated that the number of TRPV1 + neurons in the TG was substantially reduced (Fig. 2D), as we have previously found (Wang et al. 2019). Three weeks after RTX injection, the levels of norepinephrine in circulating blood were the same in the RTX- and Veh-injected groups (Fig. 2E). Chemogenetic Inhibition of TRPV1 + Afferents Reduces Bone Loss in Periodontitis Crossing TRPV1Cre mice with floxed reporter lines results in GFP labeling of TRPV1-lineage neurons, which densely project to the interproximal gingiva (Fig. 3A). To further investigate the role of TRPV1 + afferents in bone loss, we used a chemogenetic silencing approach, in which we functionally manipulated TRPV1 + afferents in a noninvasive way (Wang et al. 2017). Initially, we used TRPV1-hM4Di mice (TRPV1Cre;R26LSL-hM4Di; Fig. 3B). To activate hM4Di, the inhibitory DREADD receptor, an osmotic pump was implanted beneath the back skin to allow slow, long-term release of CNO. Bone loss was decreased by approximately 50% compared to the TRPV1Cre control (Cre) by chemogenetic silencing of TRPV1 + afferents (hM4Di in Fig. 3C, D). This approach, however, has the caveat that only half of TRPV1-lineage neurons express TRPV1 (Cavanaugh et al. 2011). This problem can be overcome by injecting Cre-dependent AAV into adult Cre mice. We therefore conditionally expressed hM4Di or GFP (control) in TRPV1 + afferents by microinjecting Cre-dependent AAV into the TG of adult Trpv1Cre mice (Fig. 3E).
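The bone-loss readout lends itself to a short sketch: average the CEJ-ABC linear measurements per mouse and compare groups. The numbers below are invented, and the simple two-group t test stands in for the 2-way ANOVA with Bonferroni post hoc tests actually used in the study:

```python
# A simplified sketch of the CEJ-ABC bone-loss quantification; distances (mm)
# per mouse are hypothetical, and the statistics here are deliberately minimal.
import numpy as np
from scipy import stats

# four linear CEJ-ABC measurements (mm) per mouse, ligature side
veh = np.array([[0.42, 0.45, 0.40, 0.44], [0.47, 0.43, 0.46, 0.45],
                [0.41, 0.44, 0.42, 0.43]])
rtx = np.array([[0.28, 0.30, 0.27, 0.29], [0.25, 0.27, 0.26, 0.28],
                [0.30, 0.29, 0.31, 0.28]])

veh_mean, rtx_mean = veh.mean(axis=1), rtx.mean(axis=1)   # one value per mouse
t, p = stats.ttest_ind(veh_mean, rtx_mean)
print(f"Veh {veh_mean.mean():.3f} mm vs RTX {rtx_mean.mean():.3f} mm, p = {p:.2g}")
```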
Using this approach, we tested whether the silencing of TRPV1 + afferents could delay the progression of periodontitis when chemogenetic inhibition was performed after the onset of periodontal bone loss. For this purpose, the chemogenetic approach was superior to chemical ablation, because RTX-induced ablation is always preceded by a robust activation of the nociceptive nerves, which confounds the interpretation of the results: the outcomes may be due to either the activation or the ablation of TRPV1 + afferents. The chemogenetic approach functionally silences the TRPV1 + afferents without preceding activation. In our model, substantial bone loss occurred within 1 wk after ligature (group a; Fig. 3E, F). When CNO was administered for 2 wk after ligature (groups b and c), the GFP/CNO group (group b) showed further progression of bone loss, whereas the hM4Di/CNO group (group c) showed a reduced rate of progression, even in the continued presence of the ligature. The 3 groups were significantly different (P < 0.005 in 1-way ANOVA). Continuous administration of CNO did not produce bone loss on the control side without a ligature (data not shown). Postmortem histology showed that all GFP + neurons expressed TRPV1 (Fig. 3G), indicating that this approach effectively targeted the TRPV1 + TG neurons. Ablation of TRPV1 + Afferents Decreases Local Host Responses in Periodontitis Periodontal bone loss is mediated by dysregulated local immune responses in the periodontium (Garlet 2010; Graves et al. 2011). We therefore investigated whether TRPV1 + afferents contribute to the immune cell infiltration in the gingiva. Since the inhibitory effects of chemical ablation are more robust than those of chemogenetic silencing, we tested the effects of TRPV1 + afferent ablation on the profiles of gingival immune cells (Fig. 4A). One week after intra-TG injection of RTX or vehicle, the ligature was placed. Single-cell suspensions were prepared from gingival tissues around the maxillary molars, and flow cytometry was performed. In mice with Veh injection into the TG, the fraction of CD45 + leukocytes was approximately 30% of live single cells at 2 wk after ligature placement, which is more than 5-fold greater than in the unligatured control (Fig. 4A). The changes in the Ly6G + CD11b + population showed a similar trend, and the neutrophil percentage in the ligature group was significantly increased (Fig. 4B). In mice injected with RTX into the TG, the ligature-induced increases in the numbers of CD45 + cells and neutrophils were significantly lower than those in vehicle-injected mice with ligature (Fig. 4A, B). We also analyzed the changes in proinflammatory cytokines, including tumor necrosis factor (TNF), interleukin (IL)-1β, and receptor activator of nuclear factor κB ligand (RANKL), which are associated with immune cells and periodontal bone destruction (Graves et al. 2011; de Vries et al. 2017). The expression levels of all of these cytokines were increased in the ligature group compared to the control, and the extent of the increase was significantly lower in mice injected with RTX in the TG (Fig. 4C). We also investigated the changes in the mRNA levels of 2 additional genes implicated in periodontitis, Csf1 and Mmp9, and found that the application of the ligature induced significant upregulation of these genes (Fig. 4D). In mice injected with RTX into the TG, the ligature-induced increase was significantly lower than that in vehicle-injected mice.
For all cytokines, the ablation of TRPV1 + afferents did not alter cytokine expression under control conditions. Ablation of TRPV1 + Afferents Does Not Alter the Microbiome in Periodontitis We then investigated the effects of ablation of TRPV1 + afferents on the composition of the periodontal microbiota, using 16S rRNA sequencing of bacterial samples recovered from the ligatures (Fig. 5A). The ligatures were recovered after 3 h (3h) from the right side (healthy control) or after 2 wk (2w) from the left side (periodontitis group) in mice in which RTX (R) or Veh (V) was injected into the TG, and the recovered bacteria were used for 16S rRNA sequencing. The 4 groups (i.e., V/3h, V/2w, R/3h, and R/2w) showed a variety of microbial compositions at the phylum level (Fig. 5B). The 4 groups showed no difference in α diversity (Fig. 5C), and β diversity was significantly different among groups (P < 0.01 using permutational multivariate analysis of variance). In post hoc tests, the 3-h groups and the 2-wk groups showed significant differences (P < 0.05 in V/3h vs. V/2w; P < 0.05 in V/3h vs. R/2w; P < 0.05 in R/3h vs. V/2w; P < 0.05 in R/3h vs. R/2w; Fig. 5D). However, the vehicle- and RTX-injected groups did not show significant differences (P > 0.8 in V/3h vs. R/3h; P > 0.8 in V/2w vs. R/2w). Analysis of the prevalent bacterial taxa showed genus- and family-level differences in bacterial compositions among groups (Fig. 5E).

Fig. 3. (B) A ligature was placed in Trpv1Cre mice (Cre) or TRPV1Cre;R26LSL-hM4Di mice (hM4Di). To activate hM4Di, an inhibitory DREADD receptor, an osmotic pump (OP) was implanted under the back skin to continuously release clozapine-N-oxide (CNO; 0.25 µL/h for 2 wk). After 2 wk, the mice were euthanized for the micro-focus computed tomography (µCT) assay. Examples of µCT images are shown (C). Scale bar: 0.5 mm. Bone loss evaluated by µCT (D). *P < 0.05 in unpaired Student's t tests. n = 3 or 4 per group. (E) Experimental protocol. In Trpv1Cre mice, adeno-associated virus (AAV) encoding Cre-dependent hM4Di fused with a fluorescent reporter (mCherry) under a neuronal promoter (human synapsin) (AAV5-hSyn-DIO-HA-hM4D(Gi)-mCherry; abbreviated AAV-DIO-hM4Di) or GFP (AAV5-DIO-GFP) was microinjected into the left trigeminal ganglia (TG) (0.5 µL). After 3 wk, the ligature was placed on Mx Lt M2. X marks indicate the time points of euthanasia. In group a, AAV-DIO-GFP was injected without CNO administration. In protocols b and c, AAV-DIO-hM4Di or AAV-DIO-GFP was injected, and the OP was implanted 1 wk after ligature, when CNO administration began. CNO was administered for the following 2 wk, and then the mice were euthanized. (F) Bone loss evaluated by µCT. **P < 0.005 in Sidak post hoc tests following 1-way analysis of variance. n = 7 to 8 per group. (G) Labeling of GFP and TRPV1 in AAV-DIO-GFP-injected TG. Scale bar: 50 µm.

Discussion In this study, we found that the ablation of TRPV1 + afferents confined to the TG decreased bone destruction in 2 independent models of experimental periodontitis. The reduction was as efficacious as other pharmacological treatments, such as resolvin (Lee et al. 2016).
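The α- and β-diversity measures reported above reduce to short computations on an OTU abundance table; a toy example (rows are samples, columns are OTUs) is sketched below, standing in for the Mothur/R workflow actually used:

```python
# Shannon index per sample (alpha diversity) and Bray-Curtis dissimilarity
# between samples (beta diversity) on an invented OTU count table.
import numpy as np
from scipy.spatial.distance import braycurtis

otu = np.array([[120, 30,  0, 5],
                [100, 40,  2, 8],
                [ 10, 80, 60, 1]], dtype=float)

def shannon(counts):
    p = counts / counts.sum()
    p = p[p > 0]                      # drop absent OTUs before taking logs
    return -(p * np.log(p)).sum()

print("Shannon per sample:", [round(shannon(s), 3) for s in otu])
print("Bray-Curtis (sample 0 vs 2):", round(braycurtis(otu[0], otu[2]), 3))
```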
The protective effects of nociceptor ablation on bone loss were accompanied by decreased osteoclast numbers in alveolar bone, reduced infiltration of leukocytes and neutrophils into the periodontium, and reduced levels of cytokines, such as RANKL, TNF, IL-1β, matrix metallopeptidase 9 (MMP9), and colony-stimulating factor 1 (CSF1), which are associated with innate immunity and osteoclastic differentiation. Not only chemical ablation but also chemogenetic functional silencing of nociceptive neurons displayed protective effects. These results unequivocally support the view that TRPV1 + afferent signaling is critical for promoting dysregulated host responses and bone loss in periodontitis in vivo. In contrast to our results, previous studies have found that the systemic injection of RTX or capsaicin accelerates alveolar bone destruction in periodontitis and reduces bone density (Offley et al. 2005; Takahashi et al. 2016). However, these results should be interpreted with care, because such treatment produces a deficiency of TRPV1 + neurons throughout the entire body, including the dorsal root ganglia and vagal ganglia, and probably involves strong compensatory processes in the nervous system. Many studies have shown that systemic capsaicin injection in neonates increases sympathetic activity in various organs (Luthman et al. 1989; Ralevic et al. 1995; Sann et al. 1995; Wang et al. 2001).

Fig. 4. […] ****P < 0.0001 in Sidak post hoc tests following 1-way analysis of variance. n = 6 in all groups. (C, D) Luminex assay for measuring cytokines (C) or real-time polymerase chain reaction assay for evaluating gene expression (D) in periodontium from the control (Con) or ligature (Lig) side after intra-TG injection of Veh or RTX. The mice were euthanized 2 wk after placing the ligature. **P < 0.005 and ***P < 0.0005 in Sidak post hoc tests following 1-way analysis of variance. n = 10 (C) or 9 per group (D).

Since sympathetic nerves are known to enhance bone resorption (Takeda et al. 2002; Khosla et al. 2018), it is difficult to attribute the bone changes following systemic treatment with vanilloids entirely to the selective manipulation of sensory neurons. For example, systemic genetic ablation of TrkA + neurons, which mainly overlap with TRPV1 + neurons, across all sensory neurons increases serum norepinephrine and bone resorption, which can be reversed by propranolol, suggesting the involvement of enhanced sympathetic activity (Chen et al. 2019). In our study, we used localized ablation of trigeminal afferents, limited to the ipsilateral ophthalmic/maxillary region of the TG (Wang et al. 2019), to minimize undesired exposure and compensatory effects. This manipulation did not produce changes in the level of circulating norepinephrine, which excludes the possibility that altered sympathetic activity mediated the neural regulation of bone loss. Bone destruction by periodontal pathogens is mediated by innate and adaptive immune responses, and hyperactive host responses contribute to tissue destruction (Garlet 2010; Graves et al. 2011). Bacterial components trigger innate immune responses followed by a chain of host reactions, and neutrophils play important roles in the induction of bone loss through multiple pathways, including regulating other leukocytes and secreting cytokines and tissue-destructing enzymes (Lee et al. 1995; Kantarci et al. 2003; Garlet 2010; Graves et al. 2011; Hajishengallis et al. 2016).
Ablation of TRPV1+ afferents profoundly decreased the infiltration of leukocytes and neutrophils along with proinflammatory cytokines, which is consistent with the decreased bone loss in ligature-induced periodontitis. Therefore, TRPV1+ nerves contribute to exacerbating innate immunity in the periodontium under periodontitis. Ablation of TRPV1+ nerves apparently does not interfere with physiological bone regulation and local immunity without periodontitis. Furthermore, ablation of TRPV1+ nerves did not alter the microbiome in the gingival sulcus in healthy conditions or in periodontitis. These results suggest that the inhibitory effects of nerve ablation on ligature-induced bone loss do not appear to be caused by alteration or normalization of the dysbiotic microbiome, but are more likely due to the decrease in dysregulated host responses. Therefore, selective interference with nociceptive nerve-induced aggravation of innate immunity could be a novel therapy to alleviate detrimental hyperactive host responses without interfering with homeostasis.

[Figure 5 caption: (A) Time course of the experiment to evaluate the periodontal microbiome. One week after intra-trigeminal ganglia (TG) injection of vehicle (V) or resiniferatoxin (R), ligatures were placed and then recovered either after 3 h (control) or 2 wk (periodontitis). Microbial genomic DNA was isolated from the ligatures, and bacterial 16S ribosomal RNA (rRNA) sequencing was performed. (B) Taxonomic composition of each sample at the phylum level. (C) Richness and evenness in the samples were evaluated using the Shannon diversity index. n = 5 (V/3h), 6 (R/3h), 9 (V/2w), and 9 (R/2w). P = 0.58 in one-way analysis of variance. (D) Principal coordinates analysis (PCoA) plot generated from Bray-Curtis dissimilarity data. P = 0.007 in permutational multivariate analysis of variance. Post hoc pairwise comparisons: P = 0.85 for Veh/3h versus RTX/3h, P = 0.02 for Veh/3h versus Veh/2w, P = 0.02 for Veh/3h versus RTX/2w, P = 0.03 for RTX/3h versus Veh/2w, P = 0.02 for RTX/3h versus RTX/2w, and P = 0.85 for Veh/2w versus RTX/2w. (E) Four examples of differentially abundant taxa among group variables. Differential abundance testing (DESeq2, R package) identified 15 operational taxonomic units that were differentially abundant among treatments, 4 of which are shown. 2w, 2 wk after ligature placement; 3h, 3 h after ligature placement; FDR, false discovery rate; RTX, resiniferatoxin injection into TG; Veh, vehicle injection into TG. n = 5 (Veh/3h), 6 (RTX/3h), 9 (Veh/2w), and 9 (RTX/2w).]

In this study, we focused on determining the effects of manipulating nociceptive nerves on bone loss, host responses, and microbial changes under periodontitis. Determining the mechanisms whereby TRPV1+ nerves aggravate innate immunity and bone loss is beyond the scope of this study, and we will further investigate the mechanisms of neuroimmune and neuroskeletal regulation in periodontitis. Neural controls of other innate immune cells, such as monocytes and macrophages, or of adaptive immunity need to be determined. Investigation of the upstream and downstream signaling associated with neuroimmune regulation is also critical. As previously suggested, neuropeptides released from TRPV1+ nerves regulate the infiltration and function of immune cells (Talbot et al. 2015; Baral et al. 2019), and the major neuropeptides contributing to bone loss in periodontitis need to be clarified.
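The diversity metrics quoted for Figure 5 (Shannon index for α diversity, Bray-Curtis dissimilarity feeding the PCoA and PERMANOVA) are standard and easy to reproduce. Below is a minimal sketch of both computations on a hypothetical OTU count table; it is illustrative only and does not reimplement the DESeq2 or PERMANOVA testing used in the study.

```python
# Sketch of the two diversity summaries used for the 16S data: Shannon index
# (alpha diversity) and Bray-Curtis dissimilarity (beta diversity).
# The counts below are a hypothetical OTU table, not study data.
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity: 1 - 2*sum(min(x,y)) / (sum(x) + sum(y))."""
    return 1.0 - 2.0 * np.minimum(x, y).sum() / (x.sum() + y.sum())

# Hypothetical per-sample taxon counts (rows = samples, columns = OTUs)
otu = np.array([
    [120, 30, 5, 0],    # e.g., a 3-h (healthy) sample
    [110, 25, 8, 2],
    [10, 80, 60, 40],   # e.g., a 2-wk (periodontitis) sample
    [15, 70, 55, 50],
], dtype=float)

print("Shannon:", [round(shannon(s), 3) for s in otu])

# Pairwise Bray-Curtis matrix: the input to PCoA/PERMANOVA
n = len(otu)
dm = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dm[i, j] = bray_curtis(otu[i], otu[j])
print("Bray-Curtis matrix:\n", dm.round(3))
```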
It is also important to understand the mechanisms underlying the "painless" contribution of nociceptive nerves to periodontitis. In our model, we presume that chronic periodontitis is accompanied by persistent activity of nociceptive afferents. However, patients do not experience overt pain from the periodontium. Some unique pathobiology of the periodontium and periodontitis may contribute. Unlike lipopolysaccharides (LPS) from other sources, LPS from P. gingivalis does not produce hyperalgesia and is even antinociceptive (Nagashima et al. 2017; Khan et al. 2019). Since Ca2+-dependent release of neuropeptides from sensory afferents can occur without action potential firing (Németh et al. 2003), nociceptive nerve terminals can still regulate inflammation and host responses in the gingiva without producing pain. Further mechanisms underlying nonpainful chronic periodontal inflammation need to be explored.

In conclusion, we suggest that TRPV1+ afferents initiate neuroimmune signaling modulating bone loss in marginal periodontitis. Therefore, manipulating this neuroimmune axis (e.g., by the localized inhibition of TRPV1+ afferents or by the modulation of downstream signaling leading to neurogenic inflammation in affected gums) could provide novel therapeutic approaches for treating periodontitis, supplementing conventional therapies.

Author Contributions

S. Wang, contributed to conception, design, data acquisition, analysis, and interpretation, drafted and critically revised the manuscript; X. Nie, X. Fan, contributed to design, data acquisition, analysis, and interpretation, critically revised the manuscript; Y. Siddiqui, contributed to conception and data interpretation, critically revised the manuscript; X. Wang, contributed to conception, design, data acquisition, analysis, and interpretation, critically revised the manuscript; V. Arora, contributed to design, data acquisition, and analysis, critically revised the manuscript; V. Thumbigere-Math, contributed to conception, design, and data interpretation, critically revised the manuscript; M.K. Chung, contributed to conception and design, and drafted the manuscript. All authors gave final approval and agree to be accountable for all aspects of the work.
Generating a Generation of Proteasome Inhibitors: From Microbial Fermentation to Total Synthesis of Salinosporamide A (Marizomib) and Other Salinosporamides

The salinosporamides are potent proteasome inhibitors among which the parent marine-derived natural product salinosporamide A (marizomib; NPI-0052; 1) is currently in clinical trials for the treatment of various cancers. Methods to generate this class of compounds include fermentation and natural products chemistry, precursor-directed biosynthesis, mutasynthesis, semi-synthesis, and total synthesis. The end products range from biochemical tools for probing mechanism of action to clinical trials materials; in turn, the considerable efforts to produce the target molecules have expanded the technologies used to generate them. Here, the full complement of methods is reviewed, reflecting remarkable contributions from scientists of various disciplines over a period of 7 years since the first publication of the structure of 1.

1. Introduction

The ubiquitin and proteasome dependent proteolytic system (UPS) is the major pathway for regulated protein degradation in eukaryotic cells [1,2]. Central to the UPS is the 26S proteasome, a 2.5 MDa multi-catalytic enzyme complex that houses a 700 kDa proteolytic 20S core particle in which protein substrate hydrolysis is executed. Substrates for this non-lysosomal protein degradation pathway include misfolded and defective proteins, as well as others that are selectively polyubiquitin-tagged and targeted for degradation by the UPS [1-3]. Proteasome structure, function, and the impact of proteasome inhibitors as biochemical tools and therapeutic agents have been extensively reviewed [1-6]. In addition to providing a mechanism for cellular protein quality control, the UPS facilitates essential processes ranging from antigen processing to signal transduction, cell cycle control, cell differentiation and apoptosis [1-4]. These critical functions, together with the ubiquitous nature of the proteolytic 20S core particle, suggest a wealth of potential applications for proteasome inhibitors ranging from crop protection [7] and antiparasitics [8] to new therapies for inflammation [9] and autoimmune diseases [4], with demonstrated utility in the treatment of cancer [4,5,10-14].

The proteasome's impact on diverse and essential cellular processes stems directly from its core function, i.e., the proteolysis of a wide variety of target proteins. In turn, inhibiting proteasome activity has important downstream consequences that can be used to advantage in tumor cells, for example, the stabilization of proapoptotic proteins (e.g., p53, Bax, IκB) and the reduction of some antiapoptotic proteins (e.g., Bcl-2, NF-κB), collectively inducing a proapoptotic state [4,5]. These and other findings provided strong rationale for targeting the proteasome for the treatment of cancer, an approach which received initial validation through Food and Drug Administration (FDA) approval of bortezomib [((R)-3-methyl-1-((S)-3-phenyl-2-(pyrazine-2-carboxamido)propanamido)butyl)boronic acid; PS-341; Velcade®] for the treatment of relapsed and relapsed/refractory multiple myeloma (MM) in 2003 [10,11]. Since that time, structurally unique proteasome inhibitors with the potential to treat patients that had failed or were not candidates for treatment with bortezomib have entered clinical trials [5].
One such agent is the marine-derived natural product salinosporamide A (marizomib; NPI-0052; 1) (Figure 1) [15]. Accounts of its discovery and development have recently been reported [16,17] along with extensive preclinical indicators of strong clinical potential [5,12,13,16-22].

[Figure 1 caption, fragment: ... (4), salinosporamide C (6) and its hypothetical precursor (7), and salinosporamide I (12).]

Identification and optimization of new inhibitors have benefited from knowledge of proteasome structure and biology; conversely, new proteasome inhibitors have contributed to the understanding of proteasome structure and function (for reviews, see [3,4,6]). The 26S proteasome comprises one or two 19S regulatory caps and a cylindrical 20S core particle housing three pairs of proteolytic subunits, β5, β2 and β1. These three subunit types have been ascribed chymotrypsin-like (CT-L), trypsin-like (T-L) and PGPH or caspase-like (C-L) activities based on their substrate preferences, and work in concert to degrade polyubiquitin-tagged proteins into small peptides. Substrate binding entails recognition of amino acid side chains (P1-Pn) by sequential binding pockets (S1-Sn) proximal to the enzyme active site, in analogy with other proteases. The S1 "specificity pocket" immediately adjacent to the active site largely confers the CT-L, T-L, and C-L sites with their preferential (albeit non-exclusive) binding to hydrophobic, positively-, and negatively-charged residues, respectively. Once bound, hydrolysis of the substrate peptide bond adjacent to S1 is catalyzed by the N-terminal threonine residue (Thr1), classifying the 20S proteasome among the N-terminal hydrolase family of enzymes. Thr1NH2 putatively acts as the general base, catalyzing nucleophilic addition of Thr1Oγ to the scissile substrate peptide bond to initiate bond cleavage. Based on this mechanism, it is perhaps not surprising that bortezomib and many other known proteasome inhibitors comprise peptides that are derivatized with reactive functional groups at the C-terminus, enabling formation of covalent adducts with Thr1 [3,4,6].

Despite the rational basis for peptidyl inhibitors, the structurally unique and terrestrially-derived microbial natural product lactacystin (2), comprising a γ-lactam substituted with a thioester and an isopropylcarbinol [23,24] (Figure 1), was found to specifically target the proteasome [25]. Lactacystin undergoes in situ transformation to the corresponding β-lactone known as "clasto-lactacystin β-lactone" or "omuralide" (3), which represents the active species that acylates Thr1Oγ in the proteasome active site [25-28]. The evolution of 2 and 3 as biochemical tools that played pivotal roles in identifying the proteasome catalytic residues and enhancing general understanding of proteasome biology marked the birth of the β-lactone-γ-lactam family of proteasome inhibitors. Moreover, the structures of 2 and 3 offered attractive synthetic targets that inspired elegant and inventive strategies (for reviews, see [29-32]). Although 3 has not been developed as a therapeutic agent, its affinity and specificity for the proteasome demonstrated that peptidyl inhibitors can be challenged by densely functionalized lower molecular weight ligands of the β-lactone-γ-lactam family. In fact, the close structural analog PS-519 (4) (Figure 1) was evaluated in Phase I clinical trials based on preclinical data demonstrating neuroprotective efficacy in a model of cerebral ischaemia [33].
Then, in a timely 2003 publication, Fenical and coworkers reported that the marine actinomycete Salinispora tropica produced the potent and structurally novel proteasome inhibitor salinosporamide A (marizomib; NPI-0052; 1; Figure 1) [15]. The fused bicyclic ring system of 1 revealed its structural relationship to 3 and suggested that the two molecules may share a common molecular target. This hypothesis was confirmed by assaying the two compounds for inhibition of purified 20S proteasome CT-L activity, which also established the enhanced potency of 1 (IC50 = 1.3 nM) versus 3 (IC50 = 49 nM) [15]. Moreover, 3 inhibited only CT-L activity while 1 inhibited all three proteolytic activities (CT-L, T-L, and C-L) [13,34]. In vitro cytotoxicity assays for 1 revealed IC50 values in the nM range against a panel of cancer cell lines [13,15,34], including MM, where proteasome inhibitors have shown clinical benefit [10,11]. Again, 1 (MM cell line RPMI 8226, IC50 = 8 nM) exhibited enhanced potency over 3 (RPMI 8226, IC50 = 3300 nM) [34]. The enhanced activity of 1 is rooted in its unique structure. While related to 3 by virtue of the shared β-lactone-γ-lactam core structure, 1 is distinguished by chloroethyl, methyl, and cyclohex-2-enylcarbinol substituents at the C-2, C-3 and C-4 positions, respectively, which give rise to specific and mechanistically important interactions within the proteasome active site that include recognition of the cyclohexenyl group by the S1 specificity pocket and acylation of the catalytic Thr1Oγ by the β-lactone followed by chloride displacement, rendering the ligand irreversibly bound (Figure 2) [35].

Recognizing the potential for the unique properties of 1 to translate into therapeutic benefit, the compound was licensed from the University of California, San Diego (UCSD) to Nereus Pharmaceuticals, San Diego, CA [16,17]. Intensive preclinical development included evaluation of marizomib in various solid tumor and hematological cancer models [12,13,16-22]. A human MM xenograft model in immunodeficient mice demonstrated efficacy after twice weekly IV (0.15 mg/kg) or oral (0.25 mg/kg) administration. Specifically, 1 inhibited MM tumor growth in vivo and prolonged survival, without recurrence of tumor in 57% of mice. With respect to proteasome inhibition, treatment with 1 resulted in sustained inhibition of the CT-L, T-L and C-L activities in packed whole blood, a profile that was distinct from bortezomib. Moreover, 1 induced apoptosis in MM cells that were resistant to conventional and bortezomib therapies, without affecting normal lymphocyte viability, and did not affect the viability of MM patient-derived bone marrow stromal cells [13]. Interestingly, the two structurally distinct proteasome inhibitors, marizomib (1) and bortezomib, triggered differential apoptotic signaling pathways, suggesting a rationale for evaluating them in combination; indeed, combinations of low doses of the two agents triggered synergistic anti-MM activity [12,13,18]. These findings established the basis for a clinical development program, and an Investigational New Drug (IND) application was filed with the FDA in 2005 [16,17]. Strong preclinical indicators were also observed in leukemia cells [19-21], including synergistic cytotoxicity with histone deacetylase inhibitors (HDACi) [20,21], which provided rationale for ongoing clinical trials combining 1 with the HDACi vorinostat [36].
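The IC50 values quoted throughout this review (e.g., 1.3 nM for 1 versus 49 nM for 3 against CT-L activity) come from dose-response experiments. As a hedged illustration of how such a value is extracted, the sketch below fits a simple Hill (logistic) model to synthetic activity data; the concentrations and noise level are invented for the example, not taken from the cited assays.

```python
# Sketch: extracting an IC50 from a dose-response curve by fitting a Hill
# model. Synthetic data are generated around an assumed IC50 of 2.5 nM.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, h):
    """Fractional enzyme activity remaining at inhibitor concentration conc."""
    return 1.0 / (1.0 + (conc / ic50) ** h)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])  # nM, hypothetical dose series
rng = np.random.default_rng(0)
activity = hill(conc, 2.5, 1.0) + rng.normal(0, 0.02, conc.size)  # noisy data

(ic50_fit, h_fit), _ = curve_fit(hill, conc, activity, p0=[1.0, 1.0])
print(f"fitted IC50 = {ic50_fit:.2f} nM, Hill slope = {h_fit:.2f}")
```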
In addition to promising results in hematological cancer models, oral administration of 1 improved the tumoricidal response to multidrug treatment in a colon cancer xenograft model [22]. These and other studies suggested that 1 may be efficacious against hematological and solid tumors either as a single agent and/or in combination with biologics, chemotherapeutics and targeted therapeutic agents [17]. At the time of writing, marizomib is being evaluated in several concurrent phase 1 clinical trials in patients with multiple myeloma, lymphomas, leukemias and solid tumors, including those that have failed bortezomib treatment, as well as patients with diagnoses where other proteasome inhibitors have not demonstrated efficacy [5,12,16,17,36-38].

[Figure 2 caption, fragment: The β-lactone of the inhibitor acylates Thr1Oγ, followed by displacement of chloride to form a 5-membered cyclic ether ring [35].]

The structure of 1, together with its enhanced potency and therapeutic potential, sparked intense interest from the synthetic organic chemistry community (for an earlier review, see [32]). While strategies towards its de novo total synthesis reaped some benefits from the teachings of omuralide (vide supra), the more reactive functional groups of 1, along with the additional stereocenter at C-6, added a new level of complexity that raised the bar of the synthetic challenge. This was answered with several enantioselective total synthetic routes [39-43] and complemented by a growing number of racemic strategies [44,45] and formal syntheses [46-50] (see Section 6. Total Synthesis of 1). Nevertheless, S. tropica remains the most efficient producer of 1. In a key demonstration of the industrial potential of marine microbiology, clinical supplies of 1 are being manufactured through a robust saline fermentation process [16,17] (see Section 2.2. Fermentation Optimization of 1 to Clinical Trials Materials). In parallel, S. tropica was further exploited in several important ways: (i) structurally related natural products were identified [51-53] (see Section 2.1. Natural Products of S. tropica); (ii) modified media and precursor-directed biosynthesis gave rise to new chemical entities, altered the ratios of secondary metabolites, and offered insights into the biosynthetic pathways of 1 and analogs [52,54-58] (see Section 2.3. Products of Precursor-Directed Biosynthesis); (iii) access to large quantities of 1 through fermentation enhanced its utility as a precursor for semi-synthesis [34,52,59-61] (see Section 4. Products of Semi-Synthesis); (iv) its genome was sequenced [62]; and (v) knockout mutants were generated, opening the door to bioengineered products [63-66] (see Section 2.4. Products of Mutasynthesis).

Here, we capture the full complement of methods for generating a generation of proteasome inhibitors in the salinosporamide family that have been developed by microbiologists and organic chemists working collaboratively or independently. The collective body of work reflects enormous progress over a period of 7 years since the first publication of 1 by Fenical and coworkers [15]. As a guide to the reader, we refer to Figure 1 and adopt the following nomenclature throughout this review.
Based on crystallographic analysis of β-lactone-γ-lactam inhibitors in complex with the 20S proteasome [28,35,67] and the orientation of key substituents relative to those of peptidyl inhibitors, the C-4 substituent is referred to as the P1 residue, while the C-2 substituent is denoted P2 (despite its non-amino acid origins [68]). Other substituents and functional groups will be referred to by atom number according to Figure 1. All P1 and P2 analogs are captured in Tables 1 and 2, along with their published methods of production and their IC50 values for inhibition of purified 20S proteasome CT-L activity. Compounds that fall outside of these structural boundaries are captured in Figures 1 and 3 and Schemes 1-3. Finally, the synthetic routes for the total and formal synthesis of 1 are presented in Schemes 4-15. While the main focus of this review article is on methods of production, structure-activity relationship trends are briefly presented (see Section 5. Structure-Activity Relationships).

2. Natural and Unnatural Products of S. tropica

In this section, we focus on salinosporamides generated from S. tropica, including those isolated from wild type and genetically modified strains.

2.1. Natural Products of S. tropica

The genus Salinispora represents a group of taxonomically diverse actinomycetes that is widely distributed in ocean sediments [69,70]. The discovery of this marine taxon was part of a larger effort by Fenical and coworkers to explore the ocean as a source of new marine microbes that produce novel chemical entities with therapeutic potential. Strains representing three Salinispora species (tropica, arenicola, and pacifica) were isolated from samples collected in tropical and subtropical regions, and fermentation extracts produced from these strains gave rise to a high hit rate in anticancer and antibiotic screens. A detailed investigation of S. tropica ensued, which led to the discovery of 1 [15,16]. S. tropica was first isolated from a heat-treated marine sediment sample collected in the Bahamas. The potent biological activity of crude extracts obtained from shake-flask culture and solid phase extraction led to the bioassay-guided fractionation and isolation of the major secondary metabolite salinosporamide A (1) by Feling et al. [15]. Structure elucidation revealed its dense functionality (Figure 1), including the fused bicyclic β-lactone-γ-lactam core reminiscent of omuralide (3) and 5 contiguous stereocenters (2R,3S,4R,5S,6S) that were unequivocally established by X-ray crystallography. Publication of initial findings on the source organism, structure and proteasome inhibitory profile of 1 in 2003 [15], when the proteasome was receiving considerable attention through positive clinical trials results with bortezomib [10,11], triggered a rigorous preclinical evaluation of 1 that formed the basis for ongoing clinical trials (vide supra). Encouraged by the phylogenetic novelty of Salinispora and the exciting new chemistry exemplified by 1, research on S. tropica continued at UCSD. A thorough evaluation of crude extracts resulted in the identification of the deschloro analog salinosporamide B (5) (Table 1) [51].
Although less potent than 1 in terms of proteasome inhibition and cytotoxicity, 5 provided important mechanistic and biosynthetic insights: biochemical and structural biology studies of 5 in direct comparison with 1 highlighted the importance of the chlorine leaving group of the parent natural product for inducing irreversible binding to the proteasome [35,61], while precursor-directed biosynthesis demonstrated distinct origins for the C1/C2/C12/C13 carbons of 1 versus 5 [55]. Salinosporamide C (6), a tricyclic cyclohexanone derivative of 1, was also isolated from the fermentation broth; while considered a natural product, rearrangement pathways from 1 involving the proposed β-lactone precursor 7 were envisioned (Figure 1) [51]. Moreover, 7 was subsequently isolated as a byproduct of chemical transformations of 1 under oxidative conditions (Macherla, Manam and Potts, unpublished observation; see Section 4. Products of Semi-Synthesis). In addition to salinosporamides A-C, S. tropica crude extracts contained several products of β-lactone ring hydrolysis or decarboxylation. The ability to generate these compounds from 1 under conditions similar to those used during fermentation and extraction led to their assignment as degradants as opposed to natural products [51]. Nevertheless, these findings offered important insights into the reactivity of 1 (for structures and discussion, see Section 3. Products of Chemical Degradation).

[Table 1 fragment, row entries: ... semi-synthesis [61]; total synthesis [61]; IC50 3.0 ± 0.5 [61]; semi-synthesis [61]. Footnotes: (a) Purified rabbit 20S proteasomes, unless otherwise indicated. Where n ≥ 3, the mean IC50 value ± standard deviation is presented; where n < 3, results of individual experiment(s) are shown. (b) Purified yeast 20S proteasomes. (c) Purified human 20S proteasomes.]

The advancement of 1 into preclinical development at Nereus demanded a constant supply of pure compound, which further necessitated fermentation scale-up and process development (see Section 2.2. Fermentation Optimization of 1 to Clinical Trials Materials). Purification of 1 from larger scale crude extracts (8 g derived from 40 L of fermentation broth) facilitated the isolation and structural characterization of several less abundant congeners [52]. Most of these natural products represented modifications to P2 (Table 1), including salinosporamide D (8; P2 = methyl), the previously described salinosporamide B (5; P2 = ethyl), and salinosporamide E (9; P2 = propyl), the latter of which had first been identified by semi-synthesis [34]. Stereoisomers representing epimers at the C-2 position were also identified, including salinosporamides F (10) and G (11), the C-2 epimers of 1 and 8, respectively [52]. Sampling and HPLC analysis of fermentation culture extracts over time indicated that the ratio of 1 to 10 was fairly constant throughout the fermentation cycle, suggesting that C-2 is not post-biosynthetically racemized. The corresponding diastereomer of salinosporamide B was not detected in the large scale crude extract but was identified in extracts obtained from modified fermentation conditions using NaBr-based media (see Section 2.3. Products of Precursor-Directed Biosynthesis). In addition to these P2 congeners, the large scale crude extract contained salinosporamide I (12), in which the methyl group at the C-3 ring junction is replaced with an ethyl group (Figure 1), and the P1 analog salinosporamide J (13) (Table 2), reflecting dehydroxylation at C-5 [52].
[Table 2 fragment, row entries: ... natural product [15]; total synthesis [39-45]; formal synthesis [46-50]. 13, salinosporamide J, IC50 = 52 ± 2 [52]; natural product [52].]

Up to 2007, only natural products bearing a cyclohexenyl substituent at C-5 had been identified; specifically, congeners with an omuralide-like isopropyl group had not been reported. Nevertheless, the structural similarity between 1 and omuralide (3) inspired the total synthesis of 'antiprotealide' (14), a molecular hybrid in which the cyclohexenyl substituent of 1 is replaced with an isopropyl group, as contributed by Corey and coworkers in 2005 (Table 2) [71,72]. Then, in 2008, 14 was reported as a product of bioengineering of S. tropica [64]. Meanwhile, 1 had advanced from preclinical to clinical development, and large scale production of clinical trials materials was undertaken at up to 1000 L scale (vide infra). Purification of 1 from 72 g of crude extract obtained from a 350 L fermentation broth generated side fractions that were enriched in salinosporamide B (5) and a new congener, which spectroscopic analysis revealed to be identical to antiprotealide [53]. While access to large scale fermentation extracts facilitated its identification, analysis of crude extracts from shake flask cultures of three wild type S. tropica strains confirmed the production of 14 in quite reasonable titers of ~1 to 3 mg/L. These findings firmly established antiprotealide as a natural product of S. tropica [53]. Antiprotealide represents one of a limited number of cases in which a natural product was identified subsequent to its synthesis. Notably, the synthesis of the antiviral agent 9-(β-D-arabinofuranosyl)adenine (Ara-A) [73] preceded the production of the same compound by fermentation of Streptomyces antibioticus [74].

The demonstration that wild type S. tropica strains produce antiprotealide, together with its close structural relationship to omuralide, raised the question of whether lactacystin-like analogs might also be natural products of S. tropica. Despite our thorough examination of S. tropica extracts, thioester analogs of the salinosporamides were not identified, nor have they been reported by other laboratories. In contrast, omuralide (3) is found in nature as its thioester precursor lactacystin (2) [23,24], while the structurally related cinnabaramides (P1 = cyclohexenylcarbinol; P2 = substituted or unsubstituted n-hexyl) are also found as either β-lactones or thioesters [7,75]. However, these terrestrially-derived natural products do not bear halogen leaving groups at the P2 position. Semi-synthetic analogs of the marine-derived natural product 1 confirmed that the thioester form is prone to premature triggering of chloride displacement, rendering the molecule significantly less active and offering no apparent advantage to the producing organism [52] (see Section 4. Products of Semi-Synthesis).

2.2. Fermentation Optimization of 1 to Clinical Trials Materials

In order to execute a successful preclinical development program, a reliable source of high purity drug substance (i.e., "active pharmaceutical ingredient" (API)) is required. The original fermentation conditions and the production strain (S. tropica CNB476) transferred from Fenical's research group at Scripps Institution of Oceanography, UCSD, afforded the production of a few mg per liter of 1 in shake flask cultures.
The original seed and production media contained numerous animal-derived media components and natural seawater that cannot be used to manufacture the API under current Good Manufacturing Practice (cGMP). Extensive fermentation development to replace the non-compliant media components and improve production was carried out at Nereus Pharmaceuticals. We successfully replaced seawater with a commercially available synthetic sea salt, Instant Ocean, for the production of 1. We also replaced all animal-derived nutrients with plant-derived nutrients to meet the FDA requirements. The yield improvement processes are summarized in Table 3 and discussed below.

It has been well documented that the addition of resins to the fermentations of reactive and/or highly potent secondary metabolites leads to increases in production of these metabolites [76-79]. The key to the initial success of yield improvement of 1 was the addition of solid resins to the production culture (Table 3; step 1). The inherent instability of the β-lactone ring of 1 in aqueous solution [80], such as in the submerged saline fermentation, was overcome by addition of solid resin to the fermentation in order to bind and capture 1. The addition of resin to the production culture led to an 18-fold increase in yield in a preliminary study (Table 3). Further investigation of the resin stabilization effect on 1 using production strain NPS21184 (see below) established the conditions for the large-scale resin addition process [81].

A wild type strain often contains a heterogeneous population of cells that have different productivity. A simple experiment involves spreading the wild type strain on agar plates to obtain single colonies, comparing the productivity of the single colonies, and selecting the colony with the best productivity and/or characteristics for further studies. The second key yield improvement for 1 was the isolation of S. tropica strain NPS21184, a single colony isolate directly derived from strain CNB476 without mutation or genetic manipulation (Table 3; step 4). Besides supporting higher production of 1, strain NPS21184 produces three-fold less of the deschloro analog 5 than the parent strain CNB476. This is beneficial given that this interfering analog must be removed during API purification.

Table 3. Fermentation yield improvement of 1 in shake flask and laboratory fermentor.
Step  Improvement parameter                                        Shake flask (mg/L)  Fermentor (mg/L)
1     Effect of resins                                             70                  25
2     Length and timing of seed, production and resin addition     120                 120
3     Media formulation                                            220                 220
4     Single colony isolation                                      330                 330
5     Statistical design media optimization                        450                 360

When developing an industrial fermentation process, designing the fermentation medium is of critical importance. The fermentation medium affects the product yield and volumetric productivity, and also needs to comply with cGMP guidelines set by the FDA. Media formulation studies (Table 3; steps 3 and 5) were successfully carried out to replace natural seawater and animal-derived media components with media components that are acceptable for cGMP manufacturing. Furthermore, additional yield improvement was achieved via media formulation studies. A greater than 100-fold increase in the production of 1 in shake flask culture was obtained after the above yield improvement processes, with a production titer of 450 mg/L. The production of 1 by marine actinomycete strain NPS21184 was carried out via a saline fermentation process.
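The stepwise titers in Table 3 imply the fold improvements discussed in the text (an ~18-fold jump from resin addition and a >100-fold overall gain). A small script makes the bookkeeping explicit; the ~4 mg/L baseline is an assumption inferred from "a few mg per liter" and the quoted 18-fold resin effect, not a figure stated in the source.

```python
# Fold-improvement arithmetic from the Table 3 shake-flask titers.
# baseline is an assumed ~4 mg/L original wild-type titer.
steps = [
    ("Resin addition", 70),
    ("Seed/production/resin timing", 120),
    ("Media formulation", 220),
    ("Single colony isolation", 330),
    ("Statistical media optimization", 450),
]
baseline = 4.0  # mg/L, assumption ("a few mg per liter")
prev = baseline
for name, titer in steps:
    print(f"{name:32s} {titer:4d} mg/L  "
          f"({titer / prev:4.1f}x step, {titer / baseline:5.1f}x cumulative)")
    prev = titer
```

Run as written, the first step comes out at ~17.5-fold, consistent with the ~18-fold resin effect quoted above, and the final step at ~112-fold cumulative, consistent with the ">100-fold" claim.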
Saline fermentation poses a major challenge in scale-up since the published literature suggested that the 316-type stainless steel fermentors commonly found in manufacturing facilities are not resistant to the corrosive effect of saline media [82]. Using a 316 stainless steel B. Braun Biostat-C fermentor (42 L total volume), we developed a process to overcome the corrosive effect of saline fermentation media in this type of fermentor. The foaming, aeration and agitation issues associated with the scale-up production of 1 in fermentors were also addressed using the B. Braun Biostat-C fermentor. We successfully transferred the yield improvement conditions developed in shake flasks to a laboratory fermentor as shown in Table 3. A titer of 360 mg/L for 1 was achieved in the 42 L laboratory fermentor, which is lower than the maximum titer of 450 mg/L detected in shake flask. The discrepancy in titers is due to the foaming problem that occurred in the fermentor (but not in shake flask) when rich media containing high concentrations of starch and soy type products were used.

Marizomib (1) API is currently manufactured under cGMP through a robust saline fermentation process by S. tropica strain NPS21184 at two different contract manufacturing organizations. The final fermentation process development effort standardized parameters such as temperature exposure, operating parameters, cleaning and passivation to overcome the corrosive effect of saline fermentation, and was performed in 500-1500 L industrial stainless steel fermentors. This, together with careful design of the timing and method for introducing the resin to the production fermentor, resulted in production titers of 250-300 mg/L in 500-1500 L industrial fermentors. During the peak production cycle, the resin-bound drug is collected, filtered, extracted with ethyl acetate and concentrated for downstream processing (DSP) in an environment with appropriate containment for a substance of high biological potency. To maintain optimal stability of 1, all DSP steps are executed in non-aqueous solvent systems. The crude extracts from the resin undergo purification involving a highly effective silica gel flash chromatography step, which removes all unrelated substances as well as most congeners of 1, such as the deschloro analog 5. In fact, the purity of 1 increases from ~55% to ~95% (UV area by HPLC) after this single flash chromatography step. The resulting highly purified API obtained after flash chromatography may contain up to ~3% of the diastereomeric impurity 10. Using an evaporative crystallization process that exploits subtle solubility differences between 1 and 10, this impurity is reduced to <1%, and 1 is isolated as a white crystalline solid. The final pharmaceutical grade cGMP marizomib API is obtained in >98% purity with an overall ~50% recovery from the crude extract. Based on the potency of 1, the production titer at fermentor scale and the recovery yield are adequate for both clinical development and commercial production. To the best of our knowledge, this represents the first manufacture of clinical trial materials by saline fermentation.

2.3. Products of Precursor-Directed Biosynthesis

Precursor-directed biosynthesis is the addition to the fermentation medium of an analog of a biosynthetic building block of the secondary metabolite, which the organism then incorporates through its enzymatic machinery to yield a modified metabolite.
The production of new secondary metabolites using directed biosynthesis is an attractive, efficient and simple-to-use method that has wide application in the field of industrial secondary metabolite production [83-87]. We have successfully employed this technique to increase the production of minor salinosporamides and to generate novel salinosporamides in S. tropica fermentations (Table 4). One of our key successes in applying this technique to generate novel salinosporamides was in developing the proper media to support the production of the novel analogs. In the NaCl-based medium, 1 is the major product of the fermentation with a titer of 277 mg/L (Table 4, condition 1). Several minor P2 analogs, such as 8 (P2 = methyl; 0.15 mg/L), 5 (P2 = ethyl; 4.4 mg/L) and 9 (P2 = propyl; 0.11 mg/L), are coproduced in the S. tropica NPS21184 fermentation (Table 4, condition 1). Replacing the NaCl-based medium with a NaBr-based medium produced a novel brominated analog (15) as the second major salinosporamide (19 mg/L) in the fermentation (Table 4, condition 2). The major salinosporamide produced was the deschloro analog 5 (80 mg/L), while 1 was only a minor component (1.2 mg/L) in the fermentation (Table 4, condition 2) [54]. The increased production of 5 was accompanied by the presence of its C-2 epimer 16 [52]. We developed a Na2SO4-based medium, with no discrete chloride ion added, to suppress the production of 1 (Table 4, condition 3). The production of 1 was significantly reduced to 53 mg/L while the production of 5, 8 and 9 was increased by 64 to 127% (Table 4, condition 3). By feeding 1.5% NaBr to this Na2SO4-based medium, the production of bromosalinosporamide (15) was significantly enhanced such that it was the major salinosporamide produced in the fermentation (73.3 mg/L) (Table 4, condition 4). The production of 1 was further reduced to 18.7 mg/L in the NaBr-fed medium (Table 4, condition 4). The result from this feeding study confirmed that bromide ion enhanced the production of 5 by 3-fold, with a production titer of 22.3 mg/L (Table 4).

Incorporation of fluorine could not be achieved via the approach used to generate bromosalinosporamide, as NaF (1-2%) inhibits the growth of the organism [54]. Moreover, fluoride is not a substrate for the chlorinase enzyme that catalyzes the synthesis of the 5′-chloro-5′-deoxyadenosine (5′-ClDA) precursor to 1 [68]. Replacing the salL chlorinase gene with a fluorinase gene offers one route to fluorosalinosporamide (17), and 17 has also been generated semi-synthetically in low yield [61] (see Section 4. Products of Semi-Synthesis). The production of 17 by mutasynthesis (feeding 5′-fluoro-5′-deoxyadenosine (5′-FDA) to a salL-knockout mutant of S. tropica; see Section 2.4. Products of Mutasynthesis) was reported by Eustăquio and Moore [63]; however, both the volumetric productivity (1.5 mg/L) and the conversion yield (5%) were low. Another application of the Na2SO4-based medium is the production of 17 by feeding 0.025% 5′-FDA to the medium. We obtained a volumetric productivity of 17 at 55.8 mg/L with a conversion yield of 22% by precursor-directed biosynthesis in the Na2SO4-based medium (Table 4, condition 5) (Lam, Tsueng, Potts and Macherla, unpublished observation), significantly superior to the mutasynthesis method. Furthermore, 17 is a minor salinosporamide in fermentations produced by the mutasynthesis approach. In contrast, 17 is the major salinosporamide in fermentations of the Na2SO4-based medium produced by the precursor-directed biosynthesis approach.
Antiprotealide (14) is a molecular hybrid comprising the core structure of 1 with the omuralide (3)-derived isopropyl group in place of the cyclohexene ring. 14 was first characterized by Corey and coworkers as a synthetic analog [71,72] and then as an unnatural salinosporamide produced by a genetically engineered strain of S. tropica [64]. While McGlinchey et al. reported that the parent type strain S. tropica CNB440 did not produce 14, ~0.5 mg/L was detected in the S. tropica salX− mutant in which the pathway for the biosynthesis of the cyclohexenyl moiety (L-3-cyclohex-2′-enylalanine) of 1 had been inactivated. Feeding 0.38 mM L-leucine to the S. tropica salX− fermentation increased the production of 14 by 2-fold to ~1 mg/L and established that L-leucine is the biosynthetic precursor of 14 [64] (see Section 2.4. Products of Mutasynthesis). We observed the production of 14 in three wild type strains of S. tropica, including the type strain CNB440 with a production titer of 1.1 mg/L, thereby establishing for the first time that antiprotealide (14) is indeed a natural product [53] (see Section 2.1. Natural Products of S. tropica). The best production of 14 was observed in S. tropica NPS21184, a single colony isolate derived directly from wild type strain CNB476 without any mutation or genetic modification, with a titer of 3.0 mg/L (Table 4, condition 1). We later demonstrated that feeding 1% L-leucine to the S. tropica NPS21184 fermentation increased the production of 14 by 3.7-fold to 11 mg/L (Table 4, condition 11; Tsueng and Lam, unpublished observation). We also examined the effect of feeding 1% L-isoleucine to the S. tropica NPS21184 fermentation, which increased the production of 8 from 0.15 mg/L (Table 4, condition 1) to 4.63 mg/L while completely inhibiting the production of antiprotealide (Table 4, condition 12). The 31-fold increase in production of 8 in the L-isoleucine-fed culture might be due to the fact that L-isoleucine is the precursor of propionate [88], which, in turn, is the precursor of the contiguous three-carbon unit C-1/C-2/C-12 of 8. The above postulation was confirmed by feeding 1% propionate to the S. tropica NPS21184 fermentation, which led to a similar production of 8 as in the L-isoleucine-fed culture (Table 4).

Feeding 1% L-valine to the S. tropica NPS21184 fermentation increased the production of 5 from 4.4 mg/L (Table 4, condition 1) to 16.7 mg/L while the production of antiprotealide was completely inhibited (Table 4, condition 10). The 3.8-fold increase in production of 5 in the L-valine-fed culture might be due to the fact that L-valine is the precursor of butyrate [88], which, in turn, is the precursor of the contiguous four-carbon unit C-1/C-2/C-12/C-13 of 5 [55]. The above postulation was confirmed by feeding 1% butyrate to the S. tropica NPS21184 fermentation, which led to a similar 4.3-fold increase, producing 5 at 19 mg/L, as in the L-valine-fed culture (Table 4, condition 8). The production of 5 in the control culture and the butyrate-fed culture was significantly less than reported in our previous publication [55] because the NaCl-based medium used in this study contains cobalt chloride. We have demonstrated that cobalt and vitamin B12 inhibit the production of 5 [58]. Even though the absolute amounts of 5 produced in these two studies are different, the effect of butyrate in increasing the production of 5 is the same (4.3-fold versus 4.2-fold).
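The feeding experiments above are easiest to compare as fold changes relative to the unfed NaCl-based control (Table 4, condition 1). The snippet below tabulates the titers quoted in this section and in the following paragraph; only numbers stated in the text are used.

```python
# Fold-change bookkeeping for the precursor-feeding experiments.
# Tuples: (analog, control titer mg/L, fed titer mg/L, fed precursor).
feedings = [
    ("9 (salinosporamide E)", 0.11, 121.0, "1% valerate"),
    ("5 (salinosporamide B)", 4.40, 19.0, "1% butyrate"),
    ("8 (salinosporamide D)", 0.15, 4.63, "1% L-isoleucine"),
    ("14 (antiprotealide)", 3.00, 11.0, "1% L-leucine"),
]
for analog, ctrl, fed, prec in feedings:
    print(f"{analog:24s} {prec:16s} {ctrl:6.2f} -> {fed:6.2f} mg/L "
          f"({fed / ctrl:,.1f}-fold)")
```

The computed ratios reproduce the fold increases quoted in the text (1,100-fold for 9, 4.3-fold for 5, ~31-fold for 8, and 3.7-fold for 14).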
In the earlier study, we also confirmed the incorporation of [U-13C4]butyrate into 5; feeding sodium [U-13C4]butyrate to S. tropica cultures enhanced the production of 5 by over 300% while inhibiting production of 1 by over 25%. NMR analysis confirmed the incorporation of butyrate as a contiguous 4-carbon unit (C-1/C-2/C-12/C-13) into 5 but not 1, providing the first direct evidence that the biosynthesis of 5 is distinct from that of 1 and that 5 is not a precursor of 1 [55]. The precursor for the chloroethyl group of 1 was subsequently identified as 5′-ClDA [68].

The production of 9 by S. tropica NPS21184 is extremely low at 0.11 mg/L, compared to the production of 1 at 277 mg/L in shake flask culture (Table 4, condition 1). Feeding 1% valerate to the S. tropica NPS21184 fermentation in the NaCl-based medium led to a 1,100-fold increase in production of 9 to 121 mg/L and a concomitant decrease in the production of 1 by 53% to 131 mg/L (Table 4, condition 9). We demonstrated the incorporation of deuterium-labeled valerate into 9 at the contiguous five-carbon unit C-1/C-2/C-12/C-13/C-16 and thereby established that valerate is the precursor of 9. Even though the production of 9 was increased by 1,100-fold and was similar to the production of 1 in the valerate-fed culture grown in NaCl-based medium, the isolation of 9 was a major challenge due to the closely overlapping chromatographic elution profiles of 9 and 1. We overcame this purification challenge by feeding 1% valerate to the S. tropica NPS21184 fermentation in the Na2SO4-based medium. While there was only a 20% increase in the production of 9 in the Na2SO4-based medium, the production of 1 decreased to 45 mg/L due to the reduction of chloride ion in the Na2SO4-based medium. With a significant increase in the ratio of 9 to 1, the purification of 9 from 1 can now be achieved (Tsueng, McArthur, Potts and Lam, unpublished observations). The above account demonstrates that precursor-directed biosynthesis, together with the use of proper media, represents a powerful technique for increasing the production of minor salinosporamides and generating novel salinosporamides.

2.4. Products of Mutasynthesis

Mutasynthesis, or mutational biosynthesis, is a term originally defined by Nagaoka and Demain [89] and by Rinehart and Stroshane [90] for the concept that an exogenous moiety is needed for the synthesis of a secondary metabolite by a mutant of the producing organism. The mutant which requires this special nutrient to produce a product peculiar to that organism has been termed an "idiotroph" [89]. Application of mutasynthesis in generating novel analogs of different classes of medically important secondary metabolites has been well documented [91-95]. Bioinformatic analysis of the salinosporamide biosynthetic gene cluster (sal) from the genome sequence of S. tropica CNB440 [62] revealed a subset of genes that were subsequently exploited for the bioengineering of new analogs by Moore and coworkers. To eliminate production of the nonproteinogenic amino acid L-3-cyclohex-2′-enylalanine, the precursor to the P1 substituent of 1, the prephenate dehydratase homologue gene salX was targeted for genetic disruption via PCR-based mutagenesis.
Fermentation of the S. tropica salX− disruption mutant, complemented by feeding select substrate amino acid precursors (proteinogenic, nonproteinogenic, and synthetic), successfully generated several target P1 analogs, including alicyclic analogs 18-21 bearing cyclohexyl, cyclopentenyl, cyclopentyl, and cyclobutyl groups, respectively; the branched aliphatic antiprotealide (14); straight chain aliphatic analogs 22 and 23; and phenyl analog 24 [64,65]. Using a similar approach, Eustăquio and Moore targeted SalL [63]. SalL chlorinates S-adenosyl-L-methionine to produce 5′-ClDA, the C1/C2/C12/C13-Cl precursor of 1, and does not accept fluorine as a substrate [68]. Thus, fluorosalinosporamide (17) was generated by feeding 5′-FDA to a salL-knockout mutant of S. tropica that had lost the capacity to produce 1 [63]. Recently, Eustăquio et al. demonstrated that replacing the salL chlorinase gene in S. tropica with a Streptomyces cattleya flA fluorinase gene resulted in an S. tropica salL−/flA+ mutant strain that can accept fluoride as a substrate for the production of 17 at a concentration of 4 mg/L [66]. Fluorosalinosporamide has also been generated in low yield semi-synthetically [61] (vide infra), but now most successfully by directly feeding 5′-FDA to the wild-type S. tropica strain in a Na2SO4-based medium, as reported herein (see Section 2.3. Products of Precursor-Directed Biosynthesis).

3. Products of Chemical Degradation

While highlighting chemical degradation in an account devoted to methods of preparing intact target molecules may seem unusual, knowledge of the mechanisms by which 1 is degraded led to the incorporation of appropriate precautionary measures and processes to circumvent or attenuate degradation during API manufacturing (vide supra), formulation development, and processing of blood samples for pharmacokinetic analysis. Moreover, the products of β-lactone ring opening effectively anticipated the chemical mechanism of inhibition of the 20S proteasome by 1 [34,35]. Chemical degradants of 1 are largely formed via β-lactone ring hydrolysis or decarboxylation, with oxidation of the cyclohexene ring occurring as a minor pathway (Scheme 1). Methanolysis of the β-lactone to the corresponding methyl ester 25 was noted in the original account of the discovery of 1 [15] and formally characterized by Williams et al. [51]. Unveiling of the C-3 tertiary alcohol upon cleavage of the β-lactone ring was followed by intramolecular nucleophilic displacement of chloride to give 26. We subsequently reported the analogous carboxylic acids NPI-2054 (27) and NPI-2055 (28) as products of aqueous hydrolysis, which was highly accelerated in base but occurred very slowly in acid. These structures led us to propose that chloride elimination may occur subsequent to Thr1Oγ acylation at the proteasome active site (Figure 2) [34], which was later confirmed by crystallography of 1 in complex with the yeast 20S proteasome [35]. Detailed kinetic studies by Denora et al. [80] demonstrated that β-lactone ring hydrolysis occurs via standard ester hydrolysis (as opposed to a carbonium ion mechanism) and is moderately buffer-catalyzed, pH-independent in the range of 1-5, and base-dependent above pH 6.5. A kinetic deuterium isotope effect showed that the rate-determining step involves only a single proton transfer, suggesting that the neighboring C-5 OH (as opposed to a second water molecule) facilitates attack of water at the β-lactone ring. The subsequent nucleophilic displacement of chloride is also moderately buffer-catalyzed.
The data suggested that 27 exists in the carboxylate form above pH 4; at lower pH values, the (protonated) carboxylic acid is expected to inhibit further degradation: the rate of conversion from 27 to 28 was slowest in the pH range 1-3; a plateau or pH-independent region was observed at pH 4.5-6.5; and at pH > 7, the degradation rate increased with increasing pH [80]. While reactions in aqueous buffer cannot directly model the drug-enzyme complex, these findings are consistent with base (Thr1NH2) catalyzed nucleophilic displacement of the halide in the proteasome active site [35,67]. In fact, Thr1N is sufficiently basic to catalyze the unusual reaction of fluoride displacement from an sp3 carbon in the case of fluorosalinosporamide (17), which recently led us to propose a proteasome Thr1NH3+ pKa > 10 [67].

The second dominant mechanistic pathway for the degradation of 1 involves decarboxylation (Scheme 1). Three products of this pathway (29, 30 and 31) were isolated from S. tropica crude extracts; direct conversion of 1 to these same products under pH conditions identical to those used during fermentation allowed them to be assigned as degradants as opposed to natural products [51]. It was proposed that the two diastereomers are generated with retention of configuration at C-5 (29 and 30) followed by dehydration to give 31. During the course of our semi-synthetic studies, we frequently observed the formation of these same products at elevated pH, particularly in the presence of tertiary amines. The diastereomeric pair 29 and 30 could be purposefully generated in the presence of triethylamine in dichloromethane at 40 °C; concentration at elevated temperatures gave 31 as a byproduct. Low levels of these same degradants were also detected during cGMP stability studies of the parenteral Phase 1 cosolvent formulation of 1 [0.24 mg/mL in 98% propylene glycol, 2% ethanol], particularly under accelerated (elevated temperature) conditions (Manam, Macherla and Potts, unpublished observations). Overall, we observed that the C-4 diastereomers 29 and 30 formed first, followed by 31, supporting the earlier suggestion that C-4/C-5 dehydration occurs as the final step in the degradation pathway. Degradant 32 was also detected during cosolvent formulation stability testing, and presumably results from further oxidative cleavage of the C-4/C-5 double bond of 31. The relatively high UV extinction coefficient of these conjugated compounds leads to an overestimate of their abundance in samples when not corrected for relative UV response. The unusual degradant 33 has also been observed under some conditions, which reflects decarboxylation with chloride displacement, giving rise to a spiro-cyclopropyl group (Macherla, Mitchell, McArthur and Potts, unpublished observations).

Long-term and accelerated stability studies of 1 indicate that the API is highly stable when stored as a solid. The only observed degradation pathway involved exceedingly slow oxidation of the cyclohexene ring to the corresponding cyclohexenone 34 (Macherla, Manam and Potts, unpublished observation) (Scheme 1), the structure of which was confirmed by semi-synthesis (vide infra) and is reminiscent of the natural product salinosporamide C (6) (Figure 1) [51]. The above summary provides a window into the precautions required when generating and handling 1, and serves as a preface to the following discussion on the opportunities and challenges of using 1 as a starting material for semi-synthesis.
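At fixed pH and buffer composition, the degradation chemistry above (β-lactone hydrolysis of 1 to 27, then chloride displacement to give 28) behaves as two consecutive pseudo-first-order steps. The sketch below evaluates the standard analytic solution for that scheme with invented rate constants, purely to illustrate the expected rise and fall of the intermediate 27; the real rates are pH- and buffer-dependent as described in the cited kinetic study [80].

```python
# Consecutive pseudo-first-order degradation: 1 --k1--> 27 --k2--> 28.
# Rate constants are hypothetical, chosen only for illustration.
import numpy as np

k1, k2 = 0.10, 0.03          # 1/h, assumed pseudo-first-order constants
t = np.linspace(0, 72, 200)  # hours

A = np.exp(-k1 * t)                                        # remaining 1
B = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))   # intermediate 27
C = 1.0 - A - B                                            # end product 28

for h in (0, 24, 48, 72):
    i = int(np.argmin(np.abs(t - h)))
    print(f"t = {h:2d} h: 1 = {A[i]:.2f}, 27 = {B[i]:.2f}, 28 = {C[i]:.2f}")
```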
4. Products of Semi-Synthesis

Process development for API manufacturing made gram quantities of 1 available to support preclinical studies and formulation development, and afforded an opportunity for the semi-synthesis of analogs. This windfall of material was tempered by challenges associated with the potential reactivity and instability of the functional groups of 1, including the double bond of the cyclohexene ring, the β-lactone ring, and the chloroethyl group, as discussed above. Indeed, these same functional groups required careful consideration in the development of successful strategies for the total synthesis of 1 (vide infra). Nevertheless, we set out to modify the various structural elements using a classical semi-synthesis approach.

With respect to the cyclohexenyl carbinol (P1 residue; Table 2), the C-5 secondary hydroxyl group proved difficult to derivatize due to steric hindrance, as originally noted by Fenical and coworkers [15]. Oxidation to the ketone 35 [Dess-Martin periodinane; CH2Cl2] and subsequent reduction [NaBH4; monoglyme + 1% water; −78 °C] gave 36, the C-5 epimer of 1, in 90% diastereomeric excess (de), along with the parent compound 1 as a byproduct [34,60]. We subsequently explored many commercially available reagents and reaction conditions in an attempt to control the de in favor of 1, but without a favorable outcome [60]. This led us to evaluate the potential for ketoreductase enzymes to execute the stereoselective reduction. After screening a library of ~100 ketoreductases, two enzymes (KRED-EXP-B1Y and KRED-EXP-C1A; BioCatalytics, Inc., Pasadena, CA) were identified that cleanly converted ketosalinosporamide 35 to 1 with complete stereoselectivity [60]. Foreknowledge of the utility of this reaction was strategic in the development of an endgame for our total synthesis of 1, which was successfully completed upon executing the reduction as the final transformation in the sequence [41] (see Section 6.1.3. Nereus (Ling) Enantioselective Synthesis).

Efforts were also extended towards modification of the cyclohexene ring, the published scope of which encompassed reduction [10% Pd/C, H2; acetone] to give 18 [34], which was subsequently produced by mutasynthesis [64,65], and epoxidation [mCPBA; CH2Cl2] of both faces of the cyclohexene ring (37 and 38) with subsequent halohydrin formation [HCl; acetonitrile] to give 39 [34]. Oxidation using t-BuOOH and CoAc2 produced a mixture of cyclohexenones, 40 and 34; during reversed phase HPLC purification of the latter, intramolecular Michael addition of the lactam nitrogen to the cyclohexenone gave the tetracyclic product 7 (Macherla, Manam and Potts, unpublished observations). We note that 7 is identical in structure to the hypothetical precursor to salinosporamide C (6) (Scheme 2) proposed by Williams et al. [51]. Oxidative cleavage of the cyclohexene ring double bond to create an acyclic substituent for further derivatization was also explored. In light of the role of P1 in recognizing the S1 specificity pocket of the proteasome substrate binding site, further exploration of P1 analogs is warranted. Efforts to generate P1 diversity using mutasynthesis are particularly encouraging [64,65] (see Section 2.4. Products of Mutasynthesis). In contrast, the only P1 analog generated to date by total synthesis is antiprotealide (14) [71,72], although this likely reflects the propensity of the synthetic organic chemistry community to target the parent natural product.
Clearly, several key synthetic intermediates are excellent candidates for introducing novel P1 architecture (vide infra). With a variety of complementary methods now available, the authors anticipate a more extensive evaluation of P1, including the design and 'synthesis' (by any means) of subunit-specific inhibitors.

Semi-synthetic modifications to P2 (Scheme 3) were largely achieved through derivatization of two substrates, hydroxysalinosporamide (41) and iodosalinosporamide (42). The parent (chlorinated) compound 1 was found to undergo a slow and low-yielding transformation to iodosalinosporamide (42) [NaI; acetone; 11% in 6 days at RT]; bromosalinosporamide (15) was a superior starting material for this transformation [NaI; acetone; 84% in 2 days] but was not available in the same abundance as 1 in our laboratory [34,61]. The utility of 42 as a substrate for further analoging is clearly based on the greater propensity of iodide toward displacement compared to chloride. Treatment of 42 with Gilman's reagent in dry THF at −78 °C gave the corresponding propyl analog 9 [34], which we subsequently identified in S. tropica crude extracts as the natural product salinosporamide E [52]. Azido [NaN3; DMSO], propionate [Na(C2H5CO2); DMSO] and thiocyano [NaSCN, diethylamine; acetone] derivatives (43-45) were also prepared from 42 [34,59]. Despite the susceptibility of the β-lactone ring to base-catalyzed hydrolysis, brief treatment of 42 with NaOH [5N; acetone] afforded a complex mixture from which the first sample of hydroxysalinosporamide (41) was isolated, albeit in low yield [34]. Efforts to generate fluorosalinosporamide (17) to complete the halogen series using AgF in THF surprisingly gave the hydroxyl analog 41 as the major product (15%), with the target fluorinated 17 as a very minor byproduct [61]. AgF reagent has known utility in introducing fluorine in place of iodine or bromine [96,97], but in our hands gave rise to a hydroxy group. As 41 proved to be another important substrate for derivatization (vide infra), we sought to optimize its production by further exploring the utility of AgF reagents. This led to a 1-step method to convert the parent chlorinated natural product 1 directly to 41 using AgF supported on CaF2 in 35% yield (Macherla, Manam and Potts, unpublished observation). Hydroxyl analog 41 was subsequently used to generate analogs bearing non-halogen leaving groups, including mesyl, tosyl, and dansyl derivatives (46-48) [61], the latter of which may provide utility for fluorescence monitoring. Various carboxylate esters were also prepared. The P2 analog series has been evaluated for the ability to induce prolonged duration proteasome inhibition in vitro, and firmly established the role of the leaving group at C-13 in inducing irreversible binding to the proteasome.

With no evidence for naturally occurring thioesters in place of the β-lactone ring in S. tropica crude extracts (see Section 2.1. Natural Products of S. tropica), we endeavored to prepare them semi-synthetically. Treatment of 1 with methyl 3-mercaptopropionate or N-acetyl-L-cysteine methyl ester gave the corresponding thioesters 49 and 50, which underwent slow and partial intramolecular nucleophilic displacement of chlorine to give cyclic ethers 51 and 52 (Figure 3). Salinosporamide B (5) was similarly derivatized (53).
While proteasome inhibition assays suggested that the thioester may directly react with the proteasome, those species that retained the potential to reform the β-lactone ring, i.e., 49, 50 and 53, were more potent inhibitors of proteasome activity than cyclic ethers 51 and 52 [52] (see Section 5. Structure-Activity Relationships). As premature chloride elimination disables the molecule's full inhibitory potential, it is no wonder that thioester analogs of the salinosporamides have not been found in nature. Nevertheless, if the cellular metabolism pathways identified for omuralide and lactacystin [27,98] are relevant to the salinosporamides, then thioesters may indeed be generated upon in vivo administration of 1. Preliminary studies suggest that this may be the case.

Structure-Activity Relationships

Structure-activity relationship (SAR) trends are evaluated below with respect to inhibition of CT-L activity against purified 20S proteasomes. IC50 values for P1 and P2 analogs are captured in Tables 2 and 1, respectively.

β-Lactone Derivatives

The β-lactone ring of 1 and other salinosporamides directly acylates the proteasome active site residue Thr1Oγ (Figure 2), as demonstrated by crystal structures of various analogs in complex with the yeast 20S proteasome [35,67]. It is therefore not surprising that modification of the β-lactone ring has a major impact on proteasome inhibition. Indeed, degradation product 28 (Scheme 1) shows no CT-L inhibitory activity at the highest concentrations tested (IC50 > 20 µM) [34]; this is consistent with the loss of both the β-lactone that acylates the proteasome and the chloroethyl trigger that induces sustained proteasome inhibition (vide infra). It was not possible to directly evaluate β-lactone hydrolysis product 27 due to rapid conversion to 28 upon attempted purification. While thioesters of the salinosporamides are not found in nature, they have been generated by semi-synthesis (Figure 3) [52]. The thioester derivative of salinosporamide B is only half as potent as its β-lactone precursor (5: IC50 = 26 nM; 53: IC50 = 50 nM). Interestingly, when salinosporamide A (1) was similarly derivatized, the corresponding thioester could be isolated in both the C-3OH (seco) (e.g., 49, 50) and cyclic ether (e.g., 51, 52) forms. The thioester derivative in the cyclic ether form retains modest proteasome inhibitory activity (51: IC50 = 230 nM), indicating that the less reactive thioester can still bind and inhibit CT-L activity, but with ~100-fold less potency than 1 (IC50 = 2.5 nM). In the seco form, the C-3OH can either displace chloride or undergo the competing reaction of in situ β-lactone reformation. Since the seco form is ~25-fold more active (49: IC50 = 9.3 nM) than the corresponding cyclic ether 51, but only ~4-fold less potent than the β-lactone 1, the dominant pathway appears to be reformation of the β-lactone ring to give the highly activated species [52]. This follows the precedent of lactacystin, which similarly gives omuralide in cells [26,27,98].

P1 Analogs

Natural products chemistry and semi-synthesis provided an opportunity to evaluate the role of C-5OH. Salinosporamide J (13; C-5H2) is a 20-fold less potent inhibitor of CT-L activity (IC50 = 52 nM) than the parent 1, but significantly more active than ketosalinosporamide (35) and C-5-episalinosporamide (36) (IC50 = 8.2 µM and >20 µM, respectively). Thus, reduction of the C-5 hydroxyl group to a methylene group is preferred to epimerization or oxidation [34,52].
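The fold-potency statements in the β-lactone SAR discussion above follow directly from the quoted IC50 values; the short script below simply reproduces that arithmetic (all values taken from the text).

```python
# IC50 values (nM) for CT-L inhibition, as quoted in the text
ic50 = {"1": 2.5, "5": 26, "49": 9.3, "51": 230, "53": 50}

def fold(less_potent, more_potent):
    """Fold-difference in potency between two compounds."""
    return ic50[less_potent] / ic50[more_potent]

print(f"51 vs 1:  ~{fold('51', '1'):.0f}-fold less potent")  # ~92, quoted as ~100-fold
print(f"51 vs 49: ~{fold('51', '49'):.0f}-fold")             # ~25-fold
print(f"49 vs 1:  ~{fold('49', '1'):.1f}-fold")              # ~3.7, quoted as ~4-fold
print(f"53 vs 5:  ~{fold('53', '5'):.1f}-fold")              # ~1.9, i.e., half as potent
```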
The crystal structure of 1 in complex with the yeast 20S proteasome revealed hydrogen-bonding interactions between the ligand C-5OH and the proteasome Thr21NH, and further suggested that 35 and 36 may introduce steric interactions that are not well tolerated [35]. In the case of 13, the hydrogen bonding potential is lost, but problematic steric interactions would not be expected, in agreement with the trends in the assay data [IC50: 1 < 13 (C-5H2), 35 (keto-C-5) < 36 (epi-C-5OH)] [34,35,52]. Cyclohexene ring modification or replacement has been achieved by mutasynthesis [64,65], semi-synthesis [34] and total synthesis [71,72]. The P1 analogs generated to date largely represent hydrophobic hydrocarbons, for which the potency rank order is cycloalkenyl > cycloalkyl > branched aliphatic > linear aliphatic > aromatic, with respect to inhibition of CT-L activity. The reduced potency of the phenyl analog is in agreement with SAR studies of omuralide [29]. Epoxidation of the cyclohexene ring is only well tolerated on one face (37: IC50 = 6.3 nM versus 38: IC50 = 91 nM) [34]. Ring contraction from a 6- to a 5-membered ring gave promising results; the cyclopentenyl analog is approximately equipotent with 1 with respect to inhibition of CT-L activity and more cytotoxic against human colon carcinoma HCT-116 cells [65]. Given the role of P1 in recognizing the S1 specificity pocket of the proteasome, which largely imparts the CT-L, T-L and C-L sites with their substrate cleavage preferences, the authors stress the importance of evaluating P1 analogs against all three proteolytic sites, which may reveal unique inhibition profiles across subunits.

P2 Analogs

Fermentation extracts of S. tropica contained low levels of compounds 10, 16, and 11, the C-2 epimers of 1, 5, and 8 [34,52]. These 2(S) diastereomers were >50-fold less potent than their 2(R) congeners with respect to inhibition of CT-L activity, indicating that the 2(R) stereochemistry of the major secondary metabolites is well optimized. In the case of 1, this stereoconfiguration is particularly important: the syn relationship of the chloroethyl and C-3OH substituents of the γ-lactam ring supports intramolecular nucleophilic displacement of chloride to give a cis-fused bicyclic lactam upon binding to the proteasome (Figure 2), which results in irreversible proteasome inhibition in vitro [35,61]. P2 analogs were highly instrumental in establishing the mechanistic role of the chloride leaving group. Crystallographic studies and proteasome inhibition/recovery experiments using purified 20S proteasomes confirmed that displacement of chloride (or alternative leaving groups) at the proteasome active site results in irreversible inhibition of proteasome activity [35,61,67]. The irreversible inhibitors, which bear a leaving group at the C-13 position of P2 [e.g., Cl (1), Br (15), I (42), and various sulfonate esters (46-48)], are generally more potent proteasome inhibitors when compared to their slowly reversible congeners, which do not bear a leaving group at this position [61]. Interestingly, fluorosalinosporamide (17) behaves intermediately, which is attributed to the poor leaving group potential of fluoride [61]. Slow fluoride elimination in the proteasome active site was nicely captured in freeze-frame (short and long soak) crystal structures of 17 in complex with yeast 20S proteasomes [67]. These findings are in good agreement with its behavior as a partially reversible proteasome inhibitor [61] and its intermediate behavior between 1 and 5 [61,63].
However, P2 analogs that do not bear a leaving group are still very potent inhibitors of purified 20S proteasomes, with IC50 values in the low nM range (Table 1). Thus, while the kinetic distinction between slowly reversible and irreversible inhibition of purified proteasomes is evident, the greatest impact of irreversible binding is on cellular events downstream of proteasome inhibition, whereby sustained proteasome inhibition leads to potent cytotoxicity in tumor cells. Indeed, P2 analogs bearing a leaving group exhibit much more potent cytotoxicity in hematological and solid tumor cell lines [34,51,59]. A comprehensive discussion of this and other lessons learned from β-lactone proteasome inhibitors is currently in review (M. Groll and B. Potts, 2010).

Total Synthesis of 1

In this section, the total synthesis of 1 is reviewed. At the time of writing, 5 enantioselective total syntheses [39-43], 2 racemic syntheses [44,45], and 5 formal syntheses [46-50] have been published, reflecting a wide variety of strategies that often converge to common advanced intermediates. The strategies are captured in Schemes 4-15, in which the atom numbering for all intermediates (including well-known starting materials or their derivatives) correlates with the atom numbers of the final synthetic target 1 (Figure 1). The reader is directed to the source articles for supporting references.

Corey Enantioselective Synthesis

The first total synthesis of 1 was reported by Corey and coworkers [39], marking a key milestone and setting the standard for all who followed. Key features of this innovative route included: (i) an intramolecular Baylis-Hillman aldol reaction to construct the γ-lactam with the desired stereochemistry at the C-3 tertiary alcohol; and (ii) simultaneous construction of the C-5/C-6 stereocenters by allylation of a late-stage intermediate aldehyde with 2-cyclohexenyl zinc chloride. The overall synthesis is captured in Scheme 4. (S)-Threonine methyl ester served as a natural choice for the starting material, comprising the C-15/C-4/C-3/C-14 contiguous carbons of 1. N-Acylation with 4-methoxybenzoyl chloride and subsequent p-TsOH catalyzed cyclization gave the corresponding oxazoline 1-2. Stereoselective alkylation with ClCH2OBn afforded 1-3 with the desired chirality at the quaternary C-4 stereocenter while effectively introducing C-5. Reductive oxazoline ring opening with NaBH3CN-HOAc gave PMB derivative 1-4. After TMS protection, selective N-acylation with acrylyl chloride and acidic workup installed the contiguous 3-carbon unit C-1/C-2/C-12; subsequent Dess-Martin periodinane oxidation afforded keto amide ester 1-5 in preparation for γ-lactam formation in the next step. This was initially achieved via a quinuclidine base-catalyzed intramolecular Baylis-Hillman aldol reaction that occurred over 7 days to give γ-lactam 1-7 with the desired C-3 stereochemistry in high selectivity (9:1). Both the efficiency and stereoselectivity of this important step were subsequently improved with an alternative cyclization strategy that was reported independently (vide infra) [72]. The corresponding silyl ether underwent tri-n-butyltin hydride mediated radical cyclization to the cis-fused γ-lactam 1-8. The benzyl ether was cleaved and the resulting alcohol oxidized to obtain key intermediate aldehyde 1-9, which was reacted with 2-cyclohexenyl zinc chloride to complete the construction of the P1 cyclohexenyl carbinol residue.
Notably, this diastereoselective allylation introduced the contiguous C-5/C-6 stereocenters simultaneously and with high stereoselectivity (20:1). Testament to the remarkable utility of this step is best offered by the synthetic routes that subsequently adopted it (vide infra). Tamao-Fleming oxidation of 1-10 followed by deprotection of the lactam nitrogen gave triol 1-11. Finally, the methyl ester was hydrolyzed to set the stage for clean and efficient β-lactone formation and chlorination in one pot to give (-)-1 for the first time by total synthesis.

Scheme 4. Corey and coworkers' synthesis of (-)-1 from L-threonine methyl ester [39].

This "simple stereocontrolled synthesis" of 1 by Corey and coworkers [39] was subsequently improved in overall efficiency by replacing the Baylis-Hillman reaction (7 days) with a diastereoselective cyclization sequence; treatment of 1-5 with Kulinkovich reagent followed by iodination and HI elimination was completed over 5 hours (overall sequence) with remarkable selectivity (dr > 99:1). The resulting, highly functionalized γ-lactam 1-7 is a versatile intermediate, serving as a common precursor to 1 and hybrid analogs antiprotealide (14), β-methyl omuralide, and other potential analogs [72]. This intermediate was later targeted in the formal synthesis of 1 by Langlois and coworkers (vide infra) [46]. Moreover, a precursor related to 1-5 but comprising the isopropyl carbinol P1 moiety was also advanced to 14 [71], further demonstrating the utility of this transformation.

Danishefsky Enantioselective Synthesis

The enantioselective synthesis of (-)-1 from a bicyclic derivative of L-glutamic acid was reported by Endo and Danishefsky in 2005 [40]. This novel synthesis features a cationic hemiacetal-mediated phenylselenenylation of an exocyclic methylene to stereoselectively install the quaternary center at C-3; this step, together with the subsequent radical deselenylation to provide the C-3 methyl substituent, was later adopted by Hatakeyama and coworkers in their total synthesis of 1 [42].

Scheme 5. Danishefsky and Endo synthesis of (-)-1 from a pyroglutamate derivative of L-glutamic acid [40].

The total synthesis, captured in Scheme 5, exploited the strong facial bias of pyroglutamate derivative 2-1, which controlled the face selectivity of the attack at C-3 and of the subsequent alkylation at C-2. The vinyl group of the C-3 substituent of intermediate 2-2 was advanced to a carbonate ester acylating agent for subsequent intramolecular and stereoselective delivery to C-4. This required that the lactam functionality be masked in the form of the imidate ester 2-4 to enable exclusive anion formation at C-4. The resulting lactone 2-5 carried an advanced stereochemical imprint for further evolution to 1. Nucleophilic ring opening of the lactone was achieved regioselectively with a phenylselenium ion, and the resulting carboxylic acid was benzylated to give 2-6, thereby differentiating C-5 and C-15. The C-3 and C-2 substituents were then converted to exocyclic methylene and acetaldehyde moieties, respectively (2-7). This set the stage for the hemiacetal-mediated phenylselenenylation of the exocyclic methylene, which gave 2-8, thereby establishing the C-3 quaternary center with complete stereocontrol. Subsequent radical deselenylation of 2-8 provided the C-3 methyl substituent. With the C-2/C-3/C-4 contiguous stereocenters in place, the benzyl ester was converted to the corresponding C-5 aldehyde 2-9.
Introduction of the cyclohexenyl group was achieved by adopting the elegant method established by Corey and coworkers (vide supra) [39]; indeed, allylation using cyclohexenyl zinc chloride occurred with the desired stereochemical outcome at C-5 and C-6 to give 2-10. The authors noted that allylation of the corresponding imidate aldehyde substrate (derived from 2-4) gave poor diastereoselectivity with the same reagent, highlighting the importance of the PMB protecting group for diastereoselection [40]. Finally, the corresponding triol 2-11 was unveiled for β-lactone formation and replacement of the primary alcohol with chloride, à la Corey and coworkers [39].

Nereus (Ling) Enantioselective Synthesis

In our own laboratory, a novel enantioselective strategy was envisioned by Taotao Ling that involved an intramolecular aldol cyclization to generate key intermediate 3-4 using the Self-Regeneration of Stereocenters (SRS) principle developed by Seebach et al. [99], as captured in Scheme 6 [41]. The advantage of this approach was the efficient, scalable, and simultaneous generation of the three contiguous stereocenters C-2/C-3/C-4, in contrast with earlier syntheses that employed their stepwise introduction [39,40].

Scheme 6. Nereus synthesis of (-)-1 from D-serine [41].

Enantiomerically pure oxazolidine-γ-lactam 3-4 was prepared from β-keto amide 3-3, where the C-4 chirality (derived from D-serine) was maintained during the intramolecular aldol cyclization following a strategy previously described by Andrews et al. [100], and the C-2 and C-3 stereocenters were simultaneously constructed in a substrate-directed fashion. The resulting, highly functionalized intermediate 3-4 served as a key precursor for the enantioselective total synthesis of (-)-1. Thus, the D-serine derived oxazolidine served as both a chiral directing group during the intramolecular aldol cyclization and as a protecting group during subsequent steps of the synthesis, and would ultimately be unveiled to allow oxidation of C-15 in anticipation of β-lactone formation in the last stages of the synthesis. Compound 3-4 was advanced to aldehyde 3-6 in preparation for allylation. Cyclohexene ring installation using Corey's method (with cyclohexenyl zinc chloride) [39] indeed gave an anti addition product, but with both the C-5 and C-6 stereocenters in the undesired configuration. This clearly distinguished our oxazolidine-protected substrate 3-6 from the PMB-protected γ-lactam used in other routes [39,40,44]. We therefore turned to Brown's allylboration chemistry (i.e., coupling of 3-6 with B-2-cyclohexen-1-yl-9-BBN [101]), which was expected to give a syn addition product. Fortunately, the product 3-7 had the desired stereochemistry at C-6; thus, the required stereochemistry at C-5 would need to be generated later, which was known to be feasible based on our prior development of selective semi-synthetic transformations on the natural product [60] (see Section 4. Products of Semi-Synthesis). With the overall carbon skeleton in place, the oxazolidine-protected alcohol (C-15) was revealed (3-9) and oxidized in preparation for β-lactone formation, followed by halogenation of the C-2 side chain to give 3-11 (equivalent to compound 36), the C-5 epimer of 1. The final C-5 stereocenter was established by Dess-Martin periodinane oxidation to the corresponding ketone 3-12 (equivalent to compound 35), which was stereoselectively reduced by a ketoreductase enzyme [41,60] to afford (-)-1.
In summary, the key features of the enantioselective route developed in our laboratory included: intramolecular aldol cyclization to simultaneously generate the three contiguous stereocenters of intermediate 3-4, of which 100 g of material was produced via this scalable process; cyclohexene ring addition using B-2-cyclohexen-1-yl-9-BBN; and inversion of the C-5 stereocenter by oxidation followed by enantioselective enzymatic reduction.

Hatakeyama Enantioselective Synthesis

Hatakeyama and coworkers' total synthesis of (-)-1 (Scheme 7) [42] represents a successful application of the construction of highly functionalized pyrrolidinones using an indium-catalyzed Conia-ene reaction. Conia-ene reactions [102] generally require harsh conditions, under which racemization and isomerization of the exocyclic olefin from the β,γ- to the α,β-position are of considerable concern, while metal-catalyzed reactions may be carried out under milder conditions. If the target pyrrolidinone 4-5 could be obtained by this strategy, advancement to (-)-1 was envisioned as follows. The C-3 quaternary center would be constructed stereoselectively by intramolecular delivery of oxygen from the C-2 substituent to the exo olefin, as established by Endo and Danishefsky [40], while the C-4 center could be created by selective reduction of one of the geminal esters of the resulting bicyclic intermediate. This would set the stage for cyclohexenyl zinc chloride addition, per Corey and coworkers [39]. The synthesis was executed as outlined above. Specifically, to prepare amide 4-4, the substrate for the key In(OTf)3-catalyzed cyclization, chiral propargyl alcohol 4-1 was converted to the mesylate, which was then reacted with (t-butyldimethylsilyloxy)acetaldehyde via the allenylzinc species to give 4-2 as a 9:1 epimeric mixture. Removal of the PMB group, selective acetylation, and desilylation afforded 4-3, which was treated with CrO3 and HIO4 in aqueous acetone to obtain the corresponding carboxylic acid. Subsequent condensation with dimethyl 2-(4-methoxybenzylamino)malonate via the acid chloride afforded the key precursor 4-4, in anticipation of the Conia-ene cyclization. Interestingly, during purification on silica gel, amide 4-4 partially underwent cyclization to give an inseparable mixture of 4-4 and 4-5 (72:28) [further subjecting this mixture to silica gel chromatography conditions gave 4-5 quantitatively and in 90% ee, suggesting a silica gel promoted Conia-ene reaction (rather than cyclization through the corresponding achiral allenylamide) [42,103]]. Treatment of the mixture of 4-4 and 4-5 with a catalytic amount of In(OTf)3 in toluene at reflux (the original conditions developed for the Conia-ene cyclization!) indeed resulted in complete conversion of 4-4 into 4-5. Importantly, no significant loss of enantiomeric purity was observed. Having demonstrated this key transformation, the acetoxy group of 4-5 (a base-labile intermediate) was hydrolyzed under mild lipase-catalyzed reaction conditions to give the corresponding alcohol, which was then oxidized to aldehyde 4-6. This set the stage for the assembly of the C-3 quaternary center, which was achieved according to the precedent established by Endo and Danishefsky [40] to obtain 4-7 (vide supra). Radical deselenylation of 4-7 was followed by selective NaBH4 reduction, which nicely discriminated between the geminal esters, after which Dess-Martin periodinane oxidation afforded aldehyde 4-8.
Reaction of 4-8 with cyclohex-2-enylzinc chloride according to Corey and coworkers [39] yielded 4-9 as a single stereoisomer. Removal of the PMB group followed by reductive ring opening of the cyclic acetal afforded known triol 4-10 (i.e., identical to 1-11, Scheme 4). Finally, dealkylative cleavage of the methyl ester was promoted by (Me2AlTeMe)2, adopted from Mulholland et al. [44] (vide infra), followed by β-lactonization and chlorination to obtain (-)-1.

Omura Enantioselective Synthesis

It is most fitting that Omura and coworkers, who first discovered lactacystin (2) [23,24], have developed a total synthesis of (-)-1 [43]. Their novel strategy (Scheme 8) features the early construction of the cyclohexene ring, with introduction of the C-5/C-6 stereocenters via a chelation-controlled aldol reaction. This represents a distinct approach from the many routes that adopted the Corey strategy [39] to install the cyclohexene ring. The Omura synthesis also features an intramolecular aldol reaction to construct the lactam C-2/C-3 bond and an intermolecular Reformatsky-type reaction followed by 1,4-reduction to generate the P2 substituent.

Scheme 8. Omura and coworkers' synthesis of (-)-1 through novel cyclohexene construction [43].

The initial phase of the total synthesis of (-)-1 comprised generation of aldehyde 5-4 in preparation for cyclohexanone addition, and subsequent cyclohexene formation. Towards this end, optically active acetate 5-2 was prepared by Wittig olefination of aldehyde 5-1, followed by hydrolysis to the corresponding diol, enzymatic desymmetrization, and TBDPS protection of the remaining primary alcohol. Then, the corresponding MEM ether underwent intramolecular cyclic carbamation and N-PMB protection to obtain 5-3, which was subjected to osmium-catalyzed dihydroxylation followed by oxidative cleavage of the corresponding diol to give aldehyde 5-4. This set the stage for the addition of cyclohexanone via a chelation-controlled aldol reaction, quenched with BzCl, that effectively installed both of the desired C-5 and C-6 stereogenic centers of intermediate 5-5. The next step was the conversion of the cyclohexanone to a cyclohexene, a challenging problem that was solved by stereoselectively generating the anti 1,3-diol and derivatization to the corresponding cyclic sulfate 5-6, the elimination of which occurred in high yield to give the desired cyclohexene 5-7. The next phase of the synthesis involved construction of the γ-lactam ring and the quaternary C-3 stereocenter. Intramolecular transcarbamation of 5-7 with NaH followed by Swern oxidation gave aldehyde 5-8, which was advanced to the corresponding ketone to provide the required methyl group of 1. The nitrogen was deprotected to give 5-9, enabling construction of the γ-lactam and C-3 stereogenic center via N-acylation followed by an intramolecular aldol reaction using LHMDS and chloroacetyl chloride in one pot, which generated the desired γ-lactam 5-10 as a single isomer. In the final stages of the synthesis, the C-2 side chain (P2) was installed using a SmI2-mediated intermolecular Reformatsky-type reaction of 5-10 with benzyloxyacetaldehyde. The resulting β-hydroxy-γ-lactam 5-11 was converted to the α,β-unsaturated lactam by mesylation-elimination, followed by alkaline hydrolysis and stereoselective 1,4-reduction with LiEt3BH to obtain 5-12.
A series of selective protection and deprotection steps afforded 5-13 in preparation for oxidation of the primary alcohol to enable β-lactonization and, finally, chlorination to generate the desired β-lactone 5-14, which was deprotected to afford (-)-1.

Pattenden Racemic Synthesis

The concise racemic route developed by Pattenden and coworkers (Scheme 9) was first communicated in 2006 [44], and a full account was reported in 2008 [104]. The straightforward 14-step total synthesis commenced with intramolecular aldol cyclization of protected β-keto amide 6-2 to generate γ-lactam (±)-6-3. This approach nicely parallels the strategy pursued in our own laboratory [41], albeit without enantioselective control. Nevertheless, the cyclization gave the required relative stereochemistry at C-2 and C-3, which was controlled with careful attention to the temperature of this deprotection-aldol cyclization reaction. After TMS and PMB protection of the tertiary alcohol and lactam nitrogen, respectively, dimethyl ester 6-4 underwent regioselective Super-Hydride reduction to give C-5 aldehyde 6-5; specifically, the methoxycarbonyl group trans to the sterically hindered C-3 OTMS group of intermediate 6-4 underwent selective reduction, successfully exploiting the facial bias of the substrate. The remainder of the synthesis, including stereoselective allylation with cyclohexenyl zinc bromide, was concluded in analogy to the strategy of Corey and coworkers [39] to give (±)-1. Of note, dimethylaluminium methyltelluride (60% yield) was used in place of 3 M LiOH (<10% yield) to hydrolyze the methyl ester of 6-7, prior to lactonization and chlorination.

Scheme 9. Pattenden and coworkers' synthesis of (±)-1 [44,104].

Romo Racemic Synthesis

The synthesis of (±)-1 by Romo and coworkers [45] was a natural extension of their keen interest in constructing carbocycle-fused β-lactones (e.g., [105]). Their strategy comprised the coupling of an α-amino acid with a heteroketene dimer, the product of which underwent nucleophile-promoted bis-cyclization to simultaneously construct the highly functionalized β-lactone-γ-lactam core (±)-7-6a. The cyclohexenyl group was successfully introduced at the penultimate stage of the synthesis, demonstrating the stability of both the β-lactone and chloroethyl functionalities to the conditions of this key reaction, thereby suggesting strong potential for the generation of a variety of P1 analogs from late-stage aldehyde intermediate 7-7. The details of this concise synthetic route are captured in Scheme 10. N-PMB serine allyl ester 7-2 was coupled with heteroketene dimer 7-3; this key intermediate was reportedly readily generated in gram quantities. The resulting β-keto amide underwent Pd-mediated ester deprotection to give 7-4 in anticipation of the key bis-cyclization reaction, which was executed with modified Mukaiyama reagent 7-5 to activate the carboxylic acid in the form of a pyridone ester (not shown). The desired β-lactone-γ-lactam 7-6a was obtained in 2-3:1 dr. Deprotection of the benzyl ether followed by modified Moffatt oxidation to the corresponding aldehyde 7-7 set the stage for treatment with cyclohexenyl zinc chloride to complete the P1 moiety and simultaneously establish the C-5/C-6 stereocenters, according to the precedent of Corey and coworkers [39]. Final unveiling of the lactam nitrogen gave (±)-1. This approach was also extended to the total synthesis of the related (±)-cinnabaramide A [45].

Langlois Formal Synthesis

The formal synthesis of 1 by Langlois and coworkers is outlined in Scheme 11.
Chiral intermediate (S)-8-2 had been previously generated by Langlois and Nguyen from bicyclic nitrile 8-1 in their synthesis of deoxydysibetaine [107] and was prepared accordingly. Then, selective O-benzylation of (S)-8-2 was achieved with 2-benzyloxy-1-methylpyridinium triflate as a mild and nearly neutral benzylating agent in the presence of MgO, giving (S)-8-3 in 75% yield. According to their prior work [106], the subsequent steps in the synthesis would not induce racemization; thus, the remainder of the formal synthesis was demonstrated using racemic 8-3. Towards this end, the N-PMB derivative of 8-3 was generated (not shown); however, subsequent introduction of the conjugated double bond was low yielding compared to that achieved previously with the corresponding N-Boc derivative. Consequently, introduction of the PMB group was reserved for a later stage of the synthesis, and 8-3 was instead Boc-protected for advancement to 8-6 by means established previously [106]. Specifically, introduction of the conjugated double bond was achieved via phenylselenylation using LDA as a base followed by selenoxide elimination; the resulting intermediate 8-4 could be used to prepare the β-methyl unsaturated lactam 8-6 via two possible pathways: (i) treatment with diazomethane to give pyrazolines 8-5a and 8-5b, followed by thermolysis (overall 35%); or (ii) stereoselective addition of the C-3 methyl group using methylcuprate to give 8-5c, with subsequent introduction of the double bond via phenylselenylation and selenoxide elimination (overall 63%). Thereafter, the Boc group of pyrrolinone 8-6 was removed and the nitrogen was PMB-protected to give 8-7. This set the stage for the selective 1,3-dipolar cycloaddition of N-methylnitrone to simultaneously introduce the C-3 oxygen and the precursor to the exo-methylene group. Specifically, formation of cycloadducts 8-8a and 8-8b was achieved by heating 8-7 with N-methylnitrone in toluene, which gave 8-8a as the major product. The isoxazolidine ring was hydrogenolyzed in the presence of Pd(OH)2, affording 8-9. Finally, the target α-methylene-γ-lactam 8-10 was obtained by forming the trimethylammonium salt with iodomethane in methanol, which was further treated with a biphasic mixture of aqueous Na2CO3 and CH2Cl2 (4 days at room temperature) to induce elimination to the exo-methylene group in high yield [46]. This completed the formal synthesis, as intermediate 8-10 was previously advanced to 1 by Corey and coworkers [39].

Scheme 11. Langlois and coworkers' stereoselective formal synthesis of 1. Synthesis of 8-2 from 8-1 was performed according to [107]. 8-10 was synthesized from 8-2 as described in [47,106].

Lam Formal Synthesis

Lam and co-workers achieved a formal synthesis of 1 using a sequential nickel-catalyzed reductive aldol cyclization-lactonization reaction as the key step [47]. α,β-Unsaturated amide 9-3 was targeted as a highly functionalized substrate for this reaction, which would give rise to an advanced γ-lactam comprising the precursor to the P2 substituent. A related cyclization of a less densely functionalized substrate had been achieved previously [108].
Thus, application to 9-3 presented significant challenges that would rigorously test this methodology: the substrate was more sterically congested, comprised several Lewis basic groups that could potentially bind the catalyst and reductant and divert the course of the intended reaction, and contained only a single stereocenter to control the absolute configurations of the two new centers generated upon cyclization. The formal synthesis is captured in Scheme 12 and commenced with Swern oxidation of known amino alcohol 9-1 [39] to give aminoketone 9-2, which was then suitably acylated to afford the target α,β-unsaturated amide 9-3 in high yield. This set the stage for the key reductive aldol cyclization reaction, which was attempted using a variety of conditions. Commercially available nickel-phosphine complexes (Ph3P)2NiBr2 and (Me3P)2NiCl2 were identified as effective precatalysts when used in conjunction with an Et2Zn reductant, and although the target 9-4c was not formed, a fused γ-lactam-lactone 9-5a (presumably generated via 9-4a) with the desired stereochemistry was isolated in 35% and 42% yields, respectively. This provided the unexpected benefit of protecting the C-3 tertiary alcohol during subsequent steps. To complete the synthesis, 9-5a underwent Pd-catalyzed debenzylation to afford alcohol 9-6, which was oxidized to the corresponding aldehyde 9-7 via Dess-Martin periodinane oxidation [47]. Due to its relative instability, the aldehyde was immediately reacted with 2-cyclohexenylzinc chloride as described by Corey and coworkers [39] to give homoallylic alcohol 9-8. The formal synthesis was completed by reductive ring opening of the lactone of 9-8 using NaBH4 to give triol 9-9, for which the conversion to 1 had been previously described by the Corey [39] and Pattenden groups [44].

Bode Formal Synthesis

Struble and Bode have explored N-heterocyclic carbene (NHC) catalyzed intramolecular lactonization to prepare densely functionalized bicyclic γ-lactam-β-lactone adducts from enals. This methodology was applied to their formal synthesis of 1 (Scheme 13) based on its potential for a concise and high yielding entry to the core bicyclic γ-lactam-β-lactone scaffold 10-6a [48].

A further formal synthesis of (±)-1 (Scheme 14) [49] commenced with ene-type alkylation of racemic oxazolone 11-1 with t-butyl enol ether, followed by sodium borohydride reduction, to give 11-2 in 88% yield as a 3:1 mixture of diastereomers. The major diastereomer was advanced to the target; however, the authors observed that both diastereomers could be used to generate (±)-1, since the second stereocenter (C-3) is ultimately destroyed during oxidation of 11-6 or 11-9 to the corresponding ketone 11-7. The amide functionality of 11-2 was selectively reduced in two stages: dehydrative cyclization under basic conditions with MsCl, followed by reduction of the resulting oxazoline 11-3 with NaCNBH3 in acetic acid, afforded N-PMB-protected amino alcohol 11-4. The primary alcohol was subsequently protected to give benzyl ether 11-5. Acylation with acrylyl chloride was smoothly executed according to Corey and coworkers [39] to obtain amide 11-8. However, deprotection of the t-butyl group with TFA or phosphoric acid to obtain alcohol 11-9 as a single diastereomer was low yielding (25% or 27%), suggesting that earlier-stage removal may be preferable. Indeed, conversion of 11-5 to 11-6 was more rapid (6 h) and high yielding (98%). Target ketone 11-7 could be obtained from either 11-6 or 11-9 according to Corey's synthesis [39], thereby completing the formal synthesis of (±)-1.
Chida Formal Synthesis

Chida and coworkers previously demonstrated that Overman rearrangement on sugar-derived scaffolds, followed by further exploitation of residual carbohydrate functional groups, is a successful methodology for the synthesis of natural products comprising α-substituted α-amino acids, including lactacystin [110]. Accordingly, Momose et al. reported the formal synthesis of (-)-1 from D-glucose, featuring Overman rearrangement of allylic trichloroacetimidate 12-14 to construct the quaternary, nitrogen-bearing C-4 stereocenter (12-15). During this step, the chiral information was relayed from the C-3 stereocenter, which had been previously generated with complete stereoselectivity under substrate control by reaction of D-glucose-derived cyclic ketone 12-8 with Me3Al (Scheme 15) [50]. The synthesis commenced with diacetone-D-glucose, which was advanced to primary alcohol 12-1 in 4 steps according to Fleet et al. [111], followed by protection with BnBr to obtain 12-2. The exocyclic acetonide was selectively cleaved to the corresponding diol 12-3, which was suitably protected. Hydrolysis of the remaining acetonide and cleavage of the resulting glycol afforded pyranose derivative 12-6. The corresponding PMB β-glycoside was generated and the O-formyl group removed to give pyranoside 12-7, which was oxidized with DMSO-Ac2O to obtain ketone 12-8 in preparation for generation of the C-3 tertiary alcohol. This important step was achieved stereoselectively upon reaction with Me3Al in toluene, which afforded 12-9 as the sole product in high yield. The high stereoselectivity occurred under substrate control, specifically, in the presence of the bulky alkyl side chain and the OTs group flanking the ketone carbonyl. Tosyl group deprotection was achieved using Mg in MeOH to give 12-10, followed by oxidation of the secondary alcohol to a ketone in anticipation of generating the key allylic trichloroacetimidate 12-14. Towards this end, 12-11 underwent a Horner-Wadsworth-Emmons reaction with subsequent TMS protection of the tertiary alcohol to afford E-alkene 12-12 as a single isomer. After reduction of the ester with DIBAL-H, the primary alcohol 12-13 was converted into trichloroacetimidate 12-14 via CCl3CN and DBU. This set the stage for the centerpiece of the synthesis, the Overman rearrangement, which was executed in t-butylbenzene at 150 °C in the presence of Na2CO3 in a sealed tube for 2 days, and gave rise to the desired isomer 12-15 and its C-4 epimer (not shown) in 69% and 16% isolated yields, respectively. The observed stereoselectivity was rationalized by considering both steric and electronic factors of two possible chair-like transition state intermediates, with the desired isomer obtained from the transition-state model that did not give rise to repulsive interactions between the nitrogen and the neighboring TMS group. Having successfully demonstrated the key transformation, 12-15 was advanced to Corey's intermediate 12-24 [39]. First, the acetal needed to be transformed to a hemiaminal that could be oxidized to the desired γ-lactam. Towards this end, the trichloroacetyl group in 12-15 was replaced with Cbz to give 12-16. Then, hemiaminal formation was achieved in two ways: (i) removal of the PMB group with DDQ and subsequent treatment with TBAF revealed hemiacetal 12-17, which was spontaneously converted into the 5-membered hemiaminal 12-18 (67%); and (ii) treatment with aqueous TFA in methylene chloride, which gave 12-18 in one step (96%).
Jones oxidation generated γ-lactam 12-19; interestingly, the primary alcohol was oxidized only to the aldehyde, which was attributed to severe steric hindrance preventing the formation of the bulky chromate ester. Further oxidation was required to create the precursor for downstream β-lactone formation and was achieved using NaClO2, with subsequent esterification to methyl ester 12-20 in 44% overall yield from 12-16. Protection of the tertiary alcohol and oxidative cleavage of the vinyl group with OsO4 and NaIO4 gave aldehyde 12-22 in preparation for reaction with cyclohex-2-enylzinc chloride as described by Corey's group [39]; this gave 12-23 as the sole product in 90% yield. Unveiling of the corresponding triol was executed with BCl3 to afford Corey's intermediate 12-24, thereby completing the formal synthesis of 1.

Scheme 15. Chida and coworkers' formal synthesis of (-)-1 from D-glucose [50].

Closing Remarks

The novel structure and biological activity of 1 have inspired scientists from a variety of disciplines to target the salinosporamides for optimal production and analoging using traditional fermentation, industrial microbiology, classic natural products chemistry and semi-synthesis, total synthesis, and bioengineering. The resulting compounds have provided important insights into SAR, with the majority of structural modifications centered around P1 and P2. The P2 analogs have been excellent subjects for crystallographic studies in complex with the 20S proteasome, which, together with SAR, firmly established the role of the leaving group in the mechanism of irreversible binding and prolonged duration proteasome inhibition in vitro and in vivo. Mutasynthesis offers a powerful technique to generate new P1 analogs, and has been complemented by total and semi-synthetic approaches. Clearly, the potential to generate proteasome subunit-specific inhibitors exists, but apparently remains unfulfilled at the time of writing. With respect to the total synthesis of the parent natural product 1, its dense functionality has attracted the attention of some of the most prestigious laboratories in the world.
2014-10-01T00:00:00.000Z
2010-03-25T00:00:00.000
{ "year": 2010, "sha1": "65e884bb59e30640191b274cb2ea3e590e96334c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/8/4/835/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65e884bb59e30640191b274cb2ea3e590e96334c", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
59356005
pes2o/s2orc
v3-fos-license
Shift-symmetries and gauge coupling functions in orientifolds and F-theory

We investigate the field dependence of the gauge coupling functions of four-dimensional Type IIB orientifold and F-theory compactifications with space-time filling seven-branes. In particular, we analyze the constraints imposed by holomorphicity and covariance under shift-symmetries of the bulk and brane axions. This requires introducing quantum corrections that necessarily contain Riemann theta functions on the complex torus spanned by the D7-brane Wilson line moduli. Our findings hint towards a new underlying geometric structure for gauge coupling functions in string compactifications. We generalize this discussion to a genuine F-theory compactification on an elliptically fibered Calabi-Yau fourfold. We perform the first general dimensional reduction of eleven-dimensional supergravity and dualization to the F-theory frame. The resulting effective action is compared with the circle reduction of a four-dimensional N=1 supergravity theory. The F-theory geometry elegantly unifies bulk and brane degrees of freedom and allows us to infer non-trivial results about holomorphicity and shift-symmetries. For instance, we gain new insight into kinetic mixing of bulk and brane gauge fields.

Introduction

In four-dimensional effective actions with minimal N = 1 supersymmetry, the dynamics of the vector fields crucially depends on the gauge coupling functions determining their kinetic terms. Supersymmetry requires this function to be holomorphic in the complex scalars that arise as the bosonic parts of chiral multiplets [1]. This holomorphicity allows one to infer certain non-renormalization theorems for this coupling function. In particular, one can show that it only receives perturbative corrections at one-loop order, while non-perturbative corrections can be generally present. In effective theories arising from string theory, the gauge coupling function can depend on scalars admitting classical shift-symmetries. While this is key in the implementation of anomaly cancellation via the Green-Schwarz mechanism [2,3], these symmetries can also constrain the functional form of the coupling independently of any gauging. In this work we exploit the interplay between holomorphicity and symmetries in the study of gauge coupling functions of brane and R-R gauge fields. Deriving the gauge coupling function in a full-fledged string model can be challenging. In intersecting D-brane models this function has been investigated since their first construction [4,5]. Of particular interest in this work will be intersecting Type IIB D-brane models with space-time filling D7-branes and O7-planes and their generalizations to F-theory models with seven-branes of general type. We furthermore focus on compactifications yielding a four-dimensional effective theory with N = 1 supersymmetry. At weak string coupling, i.e. when D7-branes and O7-planes are considered, the gauge coupling function can be studied by dimensionally reducing the D7-brane effective action, as done in [6,7]. Interestingly, it was already pointed out in [6] (and for the mirror-dual configurations in [8]) that the gauge coupling functions determined by direct classical reduction are not holomorphic in the complex coordinates determined for the rest of the effective action. First, this was observed for the D7-brane gauge coupling function in the presence of D7-brane Wilson line moduli.
A solution to this problem was, however, suggested in [6], by arguing that the missing terms arise at one-string-loop order, using the orbifold results of [9,10]. Second, including the mixing with R-R bulk U(1)'s, a further seeming conflict with holomorphicity in the independently derived complex coordinates is encountered. Given these gaps in our understanding of these basic couplings, one might wonder if there is a more systematic approach to determine and analyze them. In this paper we suggest that by carefully studying the shift-symmetries of the axions in the theory, one can significantly constrain the gauge coupling function of both closed and open string gauge fields. This is done for the Type IIB weak string coupling setting in detail in section 2, while the generalization to F-theory can be found in section 3 and section 4. We should note, however, that the F-theory analysis is not simply a generalization, but is also useful in uncovering new interesting facts about the Type IIB case. In general, the gauge coupling functions $\hat f$ of D-branes depend on R-R form axions of the underlying supergravity theory. Since these forms admit shift-symmetries, the latter can be used to constrain the functional dependence of $\hat f$ on the R-R form axions. Using holomorphicity, one is then led to constraints on the dependence of $\hat f$ on the complex coordinates. Clearly, exploiting symmetry properties to determine the gauge coupling function in string compactifications is not new and has, for example, already been discussed intensively in heterotic models (for early works on this subject, see e.g. [11,12] and references therein). However, one fact that has not been exploited systematically is that higher-degree R-R forms can transform non-trivially under the shift-transformations of lower-degree R-R forms or under D-brane gauge transformations. This is a direct consequence of having Chern-Simons terms in higher dimensions, which, as we will discuss in detail, translates into having non-Abelian shift-symmetries among the axions in the lower-dimensional effective field theory. Our strategy to constrain the corrections to the gauge coupling function is to combine our knowledge of the appropriate N = 1 complex coordinates with the expected symmetry properties of the gauge coupling function. More precisely, we first note that the gauge coupling function $\hat f_{\rm D7}$ is proportional to the Kähler coordinates $T_\alpha$ in the absence of R-R and NS-NS two-form scalars $G^a$ and D7-brane Wilson line scalars $a_p$. Including these fields, one finds corrections to $T_\alpha$ depending on $G^a$, $a_p$ as well as their complex conjugates $\bar G^a$, $\bar a_p$. We argue that once these moduli are included, the gauge coupling function cannot simply be given by $T_\alpha$, since that would break the discrete shift-symmetries. However, just by using holomorphicity and such discrete symmetries, we can derive that the correction to $\hat f_{\rm D7}$ is a holomorphic section of a certain line bundle over the complex torus spanned by the axions. Finally, this fixes the form of the corrections, which consist of logarithms of Riemann theta functions depending on the Wilson lines. The improved understanding of D7-brane gauge coupling functions finds an elegant description when moving to F-theory models studied via M-theory. In the F-theory description, the seven-brane dynamics is encoded by the geometry of an elliptically fibered Calabi-Yau fourfold $Y_4$.
In particular, the complex structure moduli, seven-brane positions, and the axio-dilaton reside in a joint moduli space: the moduli space of complex structure deformations of $Y_4$. We also have that the two-form scalars $G^a$, Wilson lines $a_p$ and R-R gauge fields are unified as arising from elements of the third cohomology of $Y_4$. In fact, they parameterize the complex torus $H^{2,1}(Y_4)/H^3(Y_4, \mathbb{Z})$. The gauge coupling function can then be determined via the duality to M-theory on the same fourfold by the following procedure: (1) compactify a general four-dimensional N = 1 theory on a circle, (2) integrate out all massive modes in the three-dimensional Coulomb branch, (3) compare the result with an M-theory compactification on a smooth Calabi-Yau fourfold. Using this procedure, the leading seven-brane gauge coupling function was found in [13], and some first results on corrections to this result have been obtained using this duality in [14]. As for the Type IIB case, we expect that in general the gauge coupling function depends on the scalars $G^a$ and the Wilson lines. As of now, however, the contribution from two-form scalars and Wilson lines has not been obtained via an M-theory reduction. Thus, in this work we will perform an M-theory reduction on a generic elliptically fibered Calabi-Yau fourfold keeping track of all fields, including the two-form scalars, the Wilson lines and the R-R gauge fields, thereby generalising the results in [13]. We also explain in great detail the relevance of having an elliptically fibered space and the dualization procedure to bring the effective action to the correct F-theory duality frame to compare with a four-dimensional theory. Exploiting the shift-symmetries in the M-theory reduction and the F-theory frame, we present a detailed discussion of the F-theory gauge coupling function. We extend the analysis of [15] and propose quantum corrections to ensure holomorphicity and shift-symmetry invariance.

This work is organized as follows. In section 2 we discuss the N = 1 effective action of a Type IIB orientifold compactification with a space-time filling D7-brane. We introduce the complex coordinates and Kähler potential capturing the dynamics of a rigid D7-brane with Wilson line moduli. We then study the symmetries of the moduli space and their action on the gauge coupling function, which allows us to derive certain constraints for $\hat f$. In section 3, we perform the dimensional reduction of M-theory on a generic smooth Calabi-Yau fourfold and dualize to the correct F-theory duality frame. We carefully derive the shift-symmetries of the effective theory and the effect of the dualization on them. In section 4 we determine the gauge coupling function by matching the M-theory reduction with a circle reduction of a four-dimensional theory. Finally, we discuss the constraints that holomorphicity and gauge invariance impose on it. We leave a detailed discussion of the dualization of the three-dimensional action to appendix A and of the circle reduction of a four-dimensional theory to appendix B.

The D7-brane gauge coupling function and kinetic mixing

In this section we consider the four-dimensional effective action that arises from Calabi-Yau orientifold compactifications of Type IIB with D7-branes and O7-planes.
In particular, we aim to determine the characteristic functions of the standard N = 1 supergravity theory with bosonic action [1]

$$ S^{(4)} = \int \Big( \tfrac{1}{2} \hat R \, \hat\ast 1 - \hat K_{A\bar B}\, d\hat M^A \wedge \hat\ast\, d\bar{\hat M}^B - \tfrac{1}{2}\, \mathrm{Re}\hat f_{IJ}\, \hat F^I \wedge \hat\ast\, \hat F^J - \tfrac{1}{2}\, \mathrm{Im}\hat f_{IJ}\, \hat F^I \wedge \hat F^J \Big) , \qquad (2.1) $$

where $\hat K_{A\bar B}$ are the second derivatives of a real Kähler potential $\hat K(\hat M, \bar{\hat M})$ and $\hat f_{IJ}(\hat M)$ is the holomorphic gauge coupling function. We will denote four-dimensional quantities with a hat. The functions $\hat K$, $\hat f_{IJ}$ as well as the complex coordinates $\hat M^A$ are determined by reducing Type IIB supergravity coupled to the D7-brane and O7-plane world-volume actions, following and extending [6,7,16]. We will also discuss the shift-symmetries and certain quantum corrections of the effective theory.

Complex coordinates and the Kähler potential in Type IIB orientifolds

The general form of the effective action for the bulk fields in such compactifications was determined in [16] by reducing Type IIB supergravity on a Calabi-Yau manifold $Y_3$, while also including the action of a holomorphic involution $\sigma : Y_3 \to Y_3$. The action of $\sigma^*$ on the cohomology groups splits them into eigenspaces $H^{p,q}_\pm(Y_3)$. The basis used to span these cohomologies is listed in Table 1, together with the four-dimensional fields associated to these basis elements in the expansions (2.2) and the corresponding index ranges. [Table 1: cohomology group, basis elements, fields, index range. $(\alpha_\kappa, \beta^\kappa)$ and $(\alpha_{\hat k}, \beta^{\hat k})$ are symplectic bases. Our index conventions include $k = 1, \ldots, h^{2,1}_-$, while the hat on $\hat k$ indicates the labeling of one further element.] This leads to the expansion of the Kähler form $J$ of $Y_3$ and of the NS-NS and R-R form fields,

$$ J = v^\alpha \omega_\alpha , \qquad B_2 = b^a \omega_a , \qquad C_2 = c^a \omega_a , \qquad C_4 = \rho_\alpha\, \tilde\omega^\alpha + C^\alpha_2 \wedge \omega_\alpha + A^\kappa \wedge \alpha_\kappa - \tilde A_\kappa \wedge \beta^\kappa , \qquad (2.2) $$

where $c^a$, $b^a$, and $\rho_\alpha$ are scalars, $C^\alpha_2$ are two-forms, and $(A^\kappa, \tilde A_\kappa)$ are vectors in the four-dimensional effective theory. It is crucial to stress that $C_4$ has a self-dual field strength, $\hat\ast F_5 = F_5$. This yields a duality between the two-forms $C^\alpha_2$ and the scalars $\rho_\alpha$, and identifies $\tilde A_\kappa$ as the magnetic dual of $A^\kappa$. Therefore, we can eliminate the two-forms $C^\alpha_2$ in favor of $\rho_\alpha$ and the vectors $\tilde A_\kappa$ in favor of $A^\kappa$. It is, however, interesting to point out that the structures we discuss later on can also be analyzed in the dual frames, as we will see in section 3. In addition to the zero modes of the forms (2.2), also the axio-dilaton $\tau = C_0 + i e^{-\phi}$ reduces to a four-dimensional field. Finally, the deformations of the Calabi-Yau metric compatible with $\sigma$ are the Kähler structure deformations $v^\alpha$ and the complex structure deformations $z^k$ parameterizing forms in $H^{2,1}_-(Y_3, \mathbb{C})$. Note that $\tau$ and $z^k$ are complex fields.

Before turning to the D7-branes, let us note that a general N = 1 compactification can include background fluxes $H_3$ and $F_3$ [17,18]. These transform negatively under $\sigma^*$ and therefore admit an expansion in the basis introduced in Table 1. It is well known that these fluxes induce a non-trivial superpotential in this Type IIB setting [19]. In the following we will not discuss background fluxes in much detail. While they can be included in the bulk sector without much effort, we will require, however, that they do not alter the couplings of the D7-brane.

The coupling to a single space-time filling D7-brane was studied in detail in [6,7] by dimensionally reducing the D7-brane Born-Infeld and Chern-Simons actions. In order to review the results we will make some simplifying assumptions. In particular, we will analyze the dynamics of a single D7-brane, while being aware that a tadpole-canceling configuration requires the inclusion of other D7-branes.² This will allow us to focus on the structures relevant to this work.
Some interesting generalizations will appear in the study of the F-theory vacua in section 3. In particular, the F-theory analysis contains the proper inclusion of the seven-brane deformation (or position) moduli.

Let us consider a D7-brane wrapped on a divisor S in Y_3 and denote its orientifold image by σ(S). It is useful to introduce S^+ = S ∪ σ(S) and S^- = S ∪ −σ(S), where the minus sign stands for orientation reversal. This allows us to split the cohomologies of these cycles into eigenspaces as well. (footnote 1)

Footnote 1: Notice that we use a convention for the field C_4 different from that of [6]; the two fields differ by terms involving B_2 and C_2. In order to compare with the results obtained from the F-theory reduction it is more convenient to use our convention, which makes C_4 invariant under Sl(2, Z).

Footnote 2: A more thorough discussion of the global constraints on such settings can be found, for example, in [20]. We refer the reader to these works especially for the discussion of the D5-brane tadpole constraint and the appropriate quantization conditions.

The eight-dimensional gauge field A and the embedding deformation ζ of the D7-brane image pair can then be expanded as in (2.4) [6,7], with Wilson line scalars a_p multiplying (0,1)-forms γ^p and deformation scalars ζ^K, where P_- is a function equal to +1 on S and −1 on σ(S). The fact that these fields have to be expanded in H^{1,0}_-(S^+) and H^{2,0}_-(S^+), respectively, follows from the action of the orientifold on the open string states. It is important to stress that the notion of γ_p being (0,1) implies that these forms depend on the complex structure moduli z^k of the ambient Calabi-Yau space Y_3. To make this dependence explicit, we expand the γ_p as in (2.6) in a real basis (α_p, β^p) of H^1(S). Here f_pq is a holomorphic function of the complex structure moduli z^k. For an appropriate basis its real part Re f_pq is invertible, and we denote the inverse by Re f^{pq}. This ansatz can be justified in the F-theory reduction, as argued in [13,15,21], and was recently used in Type IIB orientifolds in [22]. While not a priori obvious, a parametrization of the form (2.6) will allow us to bring the effective action into standard N = 1 form. This is seen most clearly in the F-theory treatment, to which we come back in section 3. Clearly, one can also expand A directly in the real basis (α_p, β^p), such that

$$ A = A_{\mathrm{D7}}\, P_- + \tilde c^p\, \alpha_p + c_p\, \beta^p \,. \qquad (2.7) $$

The basis (α_p, β^p) is independent of the complex structure deformations, and therefore all complex structure dependence of a_p is again captured by the function f_pq. We summarize our notation for the open string sector in table 2.

Table 2: Cohomology group basis elements for the open string sector, the associated four-dimensional fields, and their index ranges.

We are now in the position to state our simplifying assumptions. First, we will assume that (footnote 3)

$$ [S] = [\sigma(S)] \,, \qquad (2.8) $$

i.e. that S and its orientifold image are in the same homology class. This implies that the U(1) gauge field of the D7-brane is not rendered massive by a geometric Stückelberg mechanism [6,23,24]. Second, we will assume the vanishing of the intersections (2.9) of the pulled-back bulk three-form basis with the one-forms on the brane, where i: S^+ → Y_3 denotes the embedding map of S^+ into Y_3. This condition ensures that there is no superpotential obstructing the complex structure and Wilson line deformations. (footnote 4) The considered D7-branes can admit an arbitrary number h^{2,0}_-(S^+) of deformations ζ^K and h^{1,0}_-(S^+) of Wilson line moduli a_p. To keep the presentation simple, we will freeze the fields ζ^K as well as all matter fields arising at the intersections among D7-branes. This allows us to focus the following discussion on the couplings of the Wilson line moduli a_p.
In the F-theory reduction presented in section 3, a general dependence on the seven-brane deformations will be included, and charged matter states are also (implicitly) accounted for. Let us note that the condition (2.9) is only imposed for the orientifold-odd forms (α_k̂, β^k̂) of Y_3. The even forms (α_κ, β^κ) can intersect the negative one-forms on S^- non-trivially. Thus, we introduce the intersection numbers (2.10). As we discuss in subsection 2.5, these couplings control the kinetic mixing of the D7-brane U(1) gauge field A_D7 with the R-R gauge fields A^κ of the bulk theory.

We are now in the position to display the four-dimensional N = 1 complex coordinates. First, we have the complex fields

Set 1:  τ, z^k, ζ^K ,  (2.11)

which are already complex in our reduction ansatz; their complex structure does not depend on other fields in the reduction. Note that the D7-brane deformations ζ^K are part of Set 1 but have been frozen to keep the presentation simpler. Second, there are the complex fields

Set 2:  G^a = c^a − τ b^a ,  a_p = i c_p + f_pq(z) c̃^q ,  (2.12)

which admit a complex structure that changes with the values of the fields of Set 1 given in (2.11). This is obvious from the definition of G^a and is readily inferred for the a_p by noting that they are coefficients of the complex structure dependent (0,1)-forms in (2.4). Finally, there is a third set of fields, the coordinates T_α defined in (2.13), which depend non-trivially on the fields of both Set 1 and Set 2. The T_α are often termed the complexified Kähler structure moduli. The couplings entering (2.13) are given by Y_3 intersection numbers as well as a complex structure dependent function defined in (2.16).

For completeness, let us note that the Kähler potential takes the seemingly simple form

$$ \hat K = -\log\big[-i(\tau - \bar\tau)\big] - \log\Big[-i\!\int_{Y_3}\!\Omega\wedge\bar\Omega\Big] - 2\log\mathcal{V} \,. \qquad (2.17) $$

This Kähler potential depends on the complex coordinates (2.11)-(2.13), i.e. in (2.1) we identify M̂^A = (τ, z^k, ζ^K, G^a, a_p, T_α). All the dependence of K̂ on the fields of Set 2, i.e. on G^a and a_p, arises only through the definition of T_α. In fact, the volume V = (1/6) K_αβγ v^α v^β v^γ in (2.17) depends on T_α upon solving (2.13) for v^α, which then introduces a dependence on G^a, a_p mixed with τ, z^k.

To conclude this subsection, we separately discuss a special case of the above compactification in which several of the couplings simplify. More precisely, we briefly summarize the above results for h^{1,0}_-(S^+) = 1 and h^{1,1}_-(Y_3) = 0, i.e. the case in which the rigid D7-brane admits only a single complex Wilson line modulus a. In this case the dynamics of a is encoded by the correction to T_α given in (2.19), where we have used that M_αp^q of (2.16) reduces to a vector, denoted M_α, and that M_α^{pq} vanishes by antisymmetry for a single modulus. The kinetic terms of a depend non-trivially on the complex structure moduli z^k through the holomorphic function f.

2.2 Continuous and discrete shift-symmetries

Having introduced the complex coordinates (2.11), (2.12), and (2.13), we are now in the position to discuss their symmetries. To do so, we first recall that G^a and T_α contain zero modes of R-R and NS-NS forms and therefore inherit discrete symmetries from large gauge transformations of C_2, B_2, and C_4. These are shifts by integral closed two-forms, namely

$$ \delta C_2 = \lambda^a\,\omega_a \,, \qquad \delta B_2 = \tilde\lambda^a\,\omega_a \,, \qquad (2.20) $$

where λ^a and λ̃^a are appropriately quantized constants. (footnote 5) Turning to C_4, an obvious large gauge transformation is δC_4 = λ_α ω̃^α for constant λ_α. However, the field strength F_5 = dC_4 + (1/2) B_2 ∧ dC_2 − (1/2) C_2 ∧ dB_2 actually contains terms depending on C_2 and B_2.
Therefore, the shifts (2.20) induce a shift of C_4 as given in (2.21). A second set of symmetries arises from internal gauge transformations on the D7-brane world-volume. For constants λ_p, λ̃^p these are parameterized by

$$ \delta A = \tilde\lambda^p\,\alpha_p + \lambda_p\,\beta^p \,. \qquad (2.22) $$

Also in this case one finds that the four-form C_4 has to shift. While we will not give the transformation of C_4 directly, let us point out that it can be inferred by noting that the NS-NS two-form B_2 naturally combines with F = dA on the D7-brane world-volume as

$$ i^* B_2 - 2\pi\alpha'\, F \,, \qquad (2.23) $$

where we have temporarily restored the α' dependence. This implies that one can capture the gauge degrees of freedom of an Abelian D-brane with B_2, and the fact that the field C_4 shifts under (2.22) is already contained in (2.21). A more detailed discussion of how this is done in practice can be found in [26]. The transformations can be inferred straightforwardly when investigating the N = 1 coordinates, as we will see next. Furthermore, since the combination (2.23) must be invariant, a shift of the B-field (2.20) has to be accompanied by a corresponding shift (2.24) of the world-volume flux on the brane.

To examine the shifts of the N = 1 chiral coordinates, we first focus on the fields of Set 2 defined in (2.12). Performing the transformations (2.20) and (2.22) we find

$$ \delta G^a = \lambda^a - \tau\,\tilde\lambda^a \,, \qquad \delta a_p = i\,\lambda_p + f_{pq}\,\tilde\lambda^q \,, \qquad (2.25) $$

where we have used that a_p arises in the expansions (2.4) and (2.7). Both shifts are holomorphic in the moduli of Set 1 given in (2.11), and they are shown to unify when using the F-theory description in terms of a Calabi-Yau fourfold (see section 3). The fields of Set 3 have the most involved transformation properties, displayed in (2.26); these can be inferred by investigating the isometries of the Kähler manifold spanned by all complex fields with Kähler potential (2.17). Notice that (2.26) is valid for finite values of the transformation parameters and that the shift is holomorphic. It is also important to stress that (2.26) implies that the shift δρ_α depends not only on λ_α but also on λ^a, λ̃^a, λ_p, λ̃^p. As mentioned above, this is a consequence of the transformation rule of C_4 given in (2.21), together with the shift induced by (2.22). This, in turn, implies that the isometry group generated by these transformations is actually non-Abelian. To see this, we introduce the Killing vectors t_a, t̃^a, t_p, t̃^p, and t^α for the symmetries parameterized by λ^a, λ̃^a, λ_p, λ̃^p, and λ_α. These are found to respect non-trivial commutators of the schematic form [15]

$$ [\,t_a\,,\ \tilde t^{\,b}\,] \sim \delta_a{}^b\ t^\alpha \,, \qquad [\,t_p\,,\ \tilde t^{\,q}\,] \sim \delta_p{}^q\ t^\alpha \,, \qquad (2.27) $$

with all remaining commutators vanishing. This algebra is a generalization of the well-known Heisenberg algebra. It is an interesting challenge to gauge this algebra while preserving supersymmetry [15,27].

As mentioned earlier, in the absence of gaugings of the isometries (2.25) and (2.26), one expects the continuous global shift-symmetries to be broken to discrete symmetries at the quantum level. Since the discrete version of the symmetries comes from large gauge transformations of the higher-dimensional p-form fields, these shifts identify field configurations, such that the fields of Set 2 parameterize complex tori: one finds the identifications (2.28), obtained from (2.25) with integral parameters, with G^a and a_p parameterizing $T^{2h^{1,1}_-}_{\mathrm{closed}} \times T^{2h^{1,0}_-}_{\mathrm{open}}$ (2.29). (footnote 6) The complex structure on $T^{2h^{1,1}_-}_{\mathrm{closed}}$ is simply given by τ, while the complex structure on $T^{2h^{1,0}_-}_{\mathrm{open}}$ is encoded in the holomorphic function f_pq. Finally, ρ_α is also periodic, ρ_α ≅ ρ_α + 1, but one has to additionally impose identifications under (2.28), using the δρ_α obtained from (2.26). These identifications render the field space spanned by c^a, b^a, c_p, c̃^p, and ρ_α compact.
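As a concrete, minimal illustration of such a Heisenberg-type algebra (a toy matrix model, not derived from the compactification; the generators and normalizations below are chosen purely for illustration), one can realize one shift pair and the central element by nilpotent matrices and verify the commutators numerically:

    import numpy as np

    def comm(A, B):
        # Matrix commutator [A, B]
        return A @ B - B @ A

    # Nilpotent 3x3 generators: one shift pair (t, t~) and the central element t_c
    t  = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    tt = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
    tc = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)

    assert np.allclose(comm(t, tt), tc)   # [t, t~] = t_c : the non-Abelian relation
    assert np.allclose(comm(t, tc), 0)    # the central element commutes
    assert np.allclose(comm(tt, tc), 0)
    print("Heisenberg-type commutators verified")

Exponentiating such generators gives finite shifts whose composition closes only up to the central element, which is the group-theoretic counterpart of the fact that δρ_α in (2.26) depends on the remaining shift parameters.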
2.3 The N = 1 gauge coupling function

We now turn to the N = 1 gauge coupling function of the Type IIB orientifold setting and study its symmetries. To keep the discussion simple, we first focus on the case in which the kinetic mixing is absent, i.e. the case in which the couplings (2.10) vanish, cf. (2.30). We comment on the more general situation in subsection 2.5.

A first way to obtain the gauge coupling function is by direct dimensional reduction. For the R-R gauge fields A^κ one then finds [16] a coupling (2.31) given by a holomorphic function of the complex structure moduli z^k. Let us next include the D7-brane. In the absence of the Set 2 moduli G^a and a_p, a reduction of the Dirac-Born-Infeld and Chern-Simons actions yields the expression (2.32). Here δ^α_D7 encodes the restriction to the world-volume S^+ and is obtained by expanding the Poincaré-dual two-form [S^+] of S^+ in the basis ω_α, i.e. [S^+] = δ^α_D7 ω_α. The real part of f̂_D7 is determined by the calibration conditions for supersymmetric cycles and is thus given by the volume of S^+ measured in the ten-dimensional Einstein-frame metric. In the string frame one has Re f̂_D7 ∝ g_s^{-1}. Clearly, in the absence of the fields of Set 2 the gauge coupling is f̂_D7 = δ^α_D7 T_α and thus holomorphic in the N = 1 coordinates. Its imaginary part shifts with λ_α under (2.21); these are the standard constant shifts of the theta-angle.

The inclusion of the G^a moduli is also straightforward, since the corrections in G^a arise at the same order in g_s as the volume part. Indeed, dimensionally reducing the D7-brane action one finds that, for vanishing world-volume flux, the gauge coupling function is given by (2.33) [6], which is holomorphic in the T_α coordinates (2.13) in the absence of Wilson line moduli. We note that, naively, this gauge coupling function transforms non-trivially under the symmetries (2.26), since in addition to the constant shifts with λ_α one also finds shifts with λ^a that are holomorphic in G^a and τ. However, (2.33) is only valid when the gauge flux on the D7-brane vanishes, F = 0, which, as noted above, is not a gauge-invariant condition, since F shifts according to (2.24). Thus, the gauge-invariant version of (2.33) is actually (2.34), where we define the world-volume fluxes f^a as in (2.35). Since these transform according to (2.24), the gauge coupling function is both holomorphic and invariant under the whole set of shift-symmetries (modulo a constant imaginary shift), as it should be.

Finally, when including the Wilson line moduli of the D7-brane, we immediately face a problem. At first one might think that the gauge coupling function is again given by (2.34), where T_α now contains a term quadratic in the Wilson lines, cf. (2.13). However, the dimensional reduction of the D7-brane action does not produce such a term, and one finds again (2.32). As argued in [6], a contribution quadratic in the Wilson lines is generated at one loop in g_s, and it is therefore natural that it is not captured by the Dirac-Born-Infeld action, which is valid only at tree level in open string amplitudes. (footnote 8) Such corrections were computed in [9,10] in toroidal models, where indeed a quadratic term arises at one loop. It is therefore natural to split f̂_D7 as

$$ \hat f_{\mathrm{D7}} = \hat f^{\mathrm{red}}_{\mathrm{D7}} + \hat f^{\mathrm{1\text{-}loop}}_{\mathrm{D7}} \,, \qquad (2.36) $$

where f̂^red_D7 is obtained by direct dimensional reduction of the D7-brane action. Comparing (2.36) with (2.34), one is led to the ansatz

$$ \hat f_{\mathrm{D7}} = \delta^\alpha_{\mathrm{D7}}\, T_\alpha + \Theta(z^k, a_p) \,, \qquad (2.37) $$

where Θ is a holomorphic function.
Note that our analysis of the shift-symmetries implies that the quadratic term in (2.37) cannot be the full result: under shifts of the Wilson line moduli the field T_α shifts by a non-constant term, which by itself would render the gauge coupling function non-invariant. We therefore introduced the non-vanishing holomorphic function Θ of the moduli a_p and z^k. In the next subsection we discuss the properties of this completion in more detail.

2.4 One-loop corrections and theta-functions

Let us have a closer look at the inclusion of the Wilson line moduli in the discussion of the D7-brane gauge coupling function f̂_D7. As stressed above, the quadratic term in the a_p arises at order g_s^0, i.e. it is visible only at the open string one-loop level. In toroidal models [9,10] it was furthermore shown that the fully corrected gauge coupling function contains a Riemann theta function depending on the D-brane moduli. In those models the theta functions arise from the underlying toroidal compactification space. While we are not dealing with such a simple geometry, we stressed in (2.29) that the Wilson lines of this more general orientifold compactification also parameterize a higher-dimensional complex torus. In the following we use this fact, together with the transformation property (2.26), to infer the general form of f̂_D7 as a function of the a_p. More precisely, we suggest that Ψ = exp(f̂^{1-loop}_D7) is a holomorphic section of a line bundle over this torus. Our construction is inspired by the discussion of the M5-brane action first given in [32]; it has been extended and applied in settings relevant to our orientifold case, for example, in refs. [33,34]. A similar strategy has also been suggested in the construction of the non-perturbative N = 1 superpotential [35-38].

A simple case with one Wilson line modulus

Before discussing the general case, let us exemplify our reasoning for a single Wilson line a, i.e. for the situation discussed around (2.19). The complex field a parameterizes a complex two-torus T^2_open with complex structure given by the function f. As above we can write a = i c + f c̃ with c ≅ c + 1, c̃ ≅ c̃ + 1. We then introduce a connection A on this torus, given in (2.38); A is a connection on a holomorphic line bundle L. Holomorphic sections of L are defined as sections satisfying

$$ \bar\partial_A\, \Psi = 0 \,, \qquad (2.39) $$

where the differential is taken with respect to ā. Note that Ψ is defined on a torus and thus has to respect appropriate boundary conditions. Compatibility of (2.39) with the torus shifts a ≅ a + n i + m f, with n, m ∈ Z, implies that Ψ has to transform as in (2.40), where we kept f constant, thereby ignoring the dependence on the complex structure. One can now simply solve the differential equation (2.39); the solutions Ψ_j, given in (2.41), are built from the Jacobi theta function. Notice that these theta functions can be viewed as holomorphic sections of the bundle defined by (2.38) in holomorphic gauge, i.e. with A^{0,1} = 0 but A^{1,0} ≠ 0, reached by the complex gauge transformation (2.42). (footnote 10) One thus recovers the standard transformation behavior of the theta functions under the torus shifts.

In order to relate the Ψ_j given in (2.41) to the gauge coupling function, we next take the logarithm of an arbitrary solution Ψ; the result is displayed in (2.43). This equation is already quite illuminating. The first piece is precisely the correction to the T_α coordinate proportional to the modulus a, as in (2.19). The second term, log Θ, is holomorphic in a and transforms precisely in the right way to render δ^α_D7 T_α + log Θ invariant under shifts in a (a numerical check of this quasi-periodicity is given below).
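To make the quasi-periodicity explicit, recall the standard Jacobi theta function ϑ(v|τ) = Σ_{n∈Z} exp(iπτn² + 2πinv), which satisfies ϑ(v+1|τ) = ϑ(v|τ) and ϑ(v+τ|τ) = exp(−iπτ − 2πiv) ϑ(v|τ). The following minimal sketch verifies both properties numerically with a truncated sum; the sample values of (v, τ), standing in for the Wilson line data (a, f), are hypothetical and chosen only for illustration:

    import cmath

    def theta(v, tau, cutoff=30):
        # Truncated sum over n of exp(i*pi*tau*n^2 + 2*pi*i*n*v); needs Im(tau) > 0
        return sum(cmath.exp(1j * cmath.pi * (tau * n * n + 2 * n * v))
                   for n in range(-cutoff, cutoff + 1))

    v, tau = 0.31 + 0.12j, 0.4 + 1.7j   # hypothetical sample point with Im(tau) > 0
    t0 = theta(v, tau)

    # Periodicity under v -> v + 1 (schematically, the shift a -> a + i):
    assert abs(theta(v + 1, tau) - t0) < 1e-10 * abs(t0)

    # Quasi-periodicity under v -> v + tau (schematically, a -> a + f):
    prefactor = cmath.exp(-1j * cmath.pi * (tau + 2 * v))
    assert abs(theta(v + tau, tau) - prefactor * t0) < 1e-10 * abs(prefactor * t0)
    print("Jacobi theta (quasi-)periodicity verified")

The exponential prefactor in the second relation is exactly of the non-constant form produced by the shift of δ^α_D7 T_α, which is why the two contributions can cancel in the combination δ^α_D7 T_α + log Θ.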
Therefore, identifying Θ in (2.37) with a linear combination of these solutions, with M = δ^α_D7 M_α and appropriate coefficients C_j as in (2.44), yields a suitable completion of the gauge coupling function of a D7-brane. As promised, we have identified Ψ = exp(f̂^{1-loop}_D7) as a holomorphic section of a line bundle on a two-torus, when viewing the one-loop part of the T_α coordinates as a function of a, ā. Note that we have focused only on the a-dependence of f̂_D7 in the above discussion. We know, however, that supersymmetry implies that f̂_D7 also has to be holomorphic in the complex structure moduli z^k. Indeed, our construction appropriately yields such a holomorphic dependence through the theta functions ϑ in (2.44), due to the holomorphic function f(z^k). In general, however, the coefficients C_j can also depend holomorphically on the moduli z^k. This dependence is not constrained by our considerations of shift-symmetries. It could be constrained by including further symmetries, such as monodromy symmetries in the complex structure moduli space, but considerations of this type are beyond the scope of this work.

The general case with several Wilson line moduli

Let us now repeat the same arguments for the more general situation with several Wilson line moduli a_p. The first step consists of constructing the line bundle L on $T^{2h^{1,0}_-}_{\mathrm{open}}$ by defining an appropriate connection. We do this by analyzing the general transformations (2.26) of T_α under the torus shifts; we can then follow the same strategy as above to constrain the expected one-loop correction. We would like to find a holomorphic function Θ(z^k, a_p) that, under the shifts (2.25) of the a_p, transforms so as to compensate δT_α given in (2.26). The existence of such a Θ implies that Ψ = exp(f̂^{1-loop}_D7) is a holomorphic section of a line bundle L satisfying (2.39) for some connection A. It is easier to determine the connection A in holomorphic gauge, in which it takes the form (2.48). Indeed, one checks that (freezing the complex structure) the connection transforms consistently under the torus shifts. Notice that the field strength does not depend on M_α^{pq}, which in particular means that the number of solutions of (2.39) is independent of M_α^{pq}. (footnote 11)

Footnote 11: Note that the required constraint (2.51) can always be satisfied for a single D7-brane by choosing a basis (α_p, β^p) in (2.6) that is symplectic with respect to the inner product ⟨α, β⟩ = ∫_{S^+} δ^α_D7 ω_α ∧ α ∧ β. (footnote 12)

Footnote 12: Note that this inner product can be degenerate on the full set (α_p, β^p).

We can thus infer the form of the solution: Θ is a sum of Riemann theta functions, cf. (2.52), each given by a sum over an h^{1,0}_- -dimensional integer lattice Γ. As in the simpler case considered before, the coefficients in this sum can be complex structure dependent and are not constrained by the torus shift-symmetries. In fact, in this subsection we have worked at fixed complex structure of the Calabi-Yau threefold. For a proper treatment of the dependence on the complex structure moduli, one should consider a line bundle over the total space of the torus fibered over the complex structure moduli space.

2.5 Comments on kinetic mixing and gaugings

Up to now we have assumed that the kinetic mixing between the open and closed string U(1)'s vanishes, cf. (2.30). In this subsection we comment briefly on how the presence of mixing changes the situation (see [41,42] for a discussion of kinetic mixing in D-brane models from a different perspective). As shown in [6], the mixing is controlled by the couplings defined in (2.10). In our notation, the result one obtains from reducing the D7-brane action is the coupling f̂_κD7 given in (2.53). Since both f_κλ and f_pq depend holomorphically on the complex structure, we find that f̂_κD7 has a complicated dependence on the complex structure moduli, which does not seem holomorphic.
However, the M-theory computation performed in the next section provides an identity that proves this quantity to be holomorphic after all. Indeed, one can show the relation (2.54) among these functions, so that the mixing becomes (2.55), which is manifestly holomorphic. Notice that from the Type IIB perspective this is a highly non-trivial identity between (2,1)-forms on the internal space and (0,1)-forms on the world-volume of the brane. In the F-theory description, however, both lift to three-forms on the Calabi-Yau fourfold, where the identity (2.54) becomes obvious (see the discussion around (3.18)).

Now we can analyze how the kinetic mixing behaves under the shift-symmetries of the axions a_p. Clearly, f̂_κD7 is not invariant, which might suggest that this result cannot be correct, or at least not complete. However, the presence of mixing has an interesting consequence for the symmetries, which implies that the gauge coupling function must transform non-trivially under shifts of the Wilson lines. Again, since this is most easily seen from the M/F-theory description of the next section, we simply quote the result here: under a shift (2.25), the gauge bosons undergo the constant change of basis (2.56). (footnote 13)

Let us close this section with some remarks about the interplay between the transformation (2.56) and the gauging of the isometries (2.27) of the scalar manifold, from a purely field-theoretical perspective. As we stressed earlier, the isometries of the scalar manifold are non-Abelian, while the gauge symmetry of the vectors is Abelian. This suggests that one cannot gauge such isometries without introducing extra vectors or extra structure. However, this is not the case, precisely because the vectors transform as in (2.56). Indeed, suppose that we gauge the isometries as in (2.57), where the index A runs over κ and the D7-brane gauge boson, and Θ is the embedding tensor. This means that, under a gauge transformation, we have to perform a shift of the corresponding axions, namely (2.58). Thus, the parameters λ̃^p and λ_p are generically no longer constant, and the transformation (2.56) is no longer simply a constant change of basis. Instead, using (2.58) it becomes (2.59), which is readily recognized as the gauge transformation of a non-Abelian gauge group. We thus see that the transformation (2.56) makes it possible to gauge certain non-Abelian isometries starting from an Abelian gauge group. Finally, since the resulting gauge group is non-compact and non-semisimple, the gauge coupling function cannot be constant [27], which fits nicely with what we find from the reduction. (footnote 14)

Footnote 14: See [15,43,44] for more details on the gauging of such isometries.

3 M-theory on Calabi-Yau fourfolds and the F-theory frame

In this section we perform the dimensional reduction of M-theory on a smooth Calabi-Yau fourfold Y_4 without fluxes. Restricting then to the case in which Y_4 is elliptically fibered, we perform the necessary dualization to compare the resulting three-dimensional theory with the circle reduction of an arbitrary four-dimensional N = 1 supergravity theory. Let us note that this approach has already been applied successfully in previous works, see e.g. [13,15,23,45]. It is crucial to stress, however, that the reduction and comparison we present here is the most general analysis carried out so far. (footnote 15) In particular, we cover the cases that capture kinetic mixing between R-R bulk and 7-brane gauge fields.
3.1 Dimensional reduction of M-theory on a smooth fourfold

We begin our analysis by performing the dimensional reduction of eleven-dimensional supergravity on Y_4. Such reductions have been performed in [46-48], and we deviate from these works only by considering a more explicit ansatz for the (2,1)-forms on Y_4. The starting point is the bosonic part of eleven-dimensional supergravity,

$$ S^{(11)} = \int \tfrac{1}{2}\,\hat R\,\ast 1 - \tfrac{1}{4}\,\hat G \wedge \ast\hat G - \tfrac{1}{12}\,\hat C \wedge \hat G \wedge \hat G \,, \qquad (3.1) $$

where R̂ is the eleven-dimensional Ricci scalar and Ĝ = dĈ is the four-form field strength of the three-form Ĉ. We consider backgrounds of the form (3.2), a direct product of three-dimensional Minkowski space with Y_4, where g_mn is a Calabi-Yau metric on the fourfold. This choice of background ensures that the resulting effective theory is a three-dimensional N = 2 supergravity.

The effective theory of interest includes all massless fluctuations around the background solution (3.2). The massless modes arising from fluctuations of the metric can be encoded in the Kähler form J, expanded as

$$ J = v^\Sigma\,\omega_\Sigma \,, \qquad (3.3) $$

where the ω_Σ form a basis of harmonic two-forms. The fields v^Σ are three-dimensional real scalars parameterizing the Kähler structure deformations of Y_4. In addition there are h^{3,1}(Y_4) complex fields z^K, K = 1, ..., h^{3,1}(Y_4), encoding the complex structure deformations of Y_4. The massless modes coming from fluctuations of the M-theory three-form Ĉ are given by the expansion (3.4), where we introduced Ψ^A, a basis of harmonic (1,2)-forms, together with three-dimensional vector fields A^Σ and three-dimensional complex scalars N_A. Following [13], we choose a parametrization (3.5) of the (1,2)-forms in terms of a basis (α_A, β^B) of integral harmonic real three-forms, with coefficients built from a matrix f_AB that is holomorphic in the complex structure. We also define Re f^{AB}, the inverse of Re f_AB. The four-form field strength Ĝ then follows as in (3.6). Note that we could also have chosen to expand Ĉ directly in the real basis (α_A, β^B). This would introduce real scalars (c_A, c̃^A), related to the complex scalars via

$$ N_A = i\,c_A + f_{AB}\,\tilde c^B \,, \qquad (3.7) $$

but we will work directly with the N_A. The basis forms and the corresponding fields are summarized in table 3.

Table 3: Cohomology group basis elements on Y_4, the associated three-dimensional fields, and their index ranges.

Substituting the ansatz (3.3) and (3.4) into the action (3.1) and performing a Weyl rescaling that brings the effective action into the Einstein frame, we find the three-dimensional effective theory (3.8). Let us introduce the different objects appearing in this expression. We use the rescaled Kähler moduli (3.9), L^Σ = v^Σ/V, and the intersection numbers of two-forms (3.10). The kinetic term of the complex structure moduli z^K is governed by a Kähler metric (3.11) expressed through a basis χ_L of harmonic (3,1)-forms, with L = 1, ..., h^{1,3}(Y_4). The kinetic terms of the vector multiplets (L^Σ, A^Σ) take the form (3.12), where we introduced the definitions (3.13) and used the relation (3.14). Finally, we introduced the couplings (3.15), where we used that ∗Ψ_A = −i J ∧ Ψ_A. These can be written as in (3.16) using the intersection numbers (3.17), which are independent of the Kähler and complex structure moduli. Notice that there are two important properties (3.18) of the Ψ_A that we used repeatedly throughout the derivation. The first relation implies that the couplings Q_Σ contracted with two factors of Re f^{AB} are mapped onto their complex conjugates; it is the origin of the identity (2.54). The second identity allows us to remove the intersection numbers involving α_A ∧ α_B, so that the result depends only on M^{ΣA}_B and M^Σ_{AB} defined in (3.17).
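Since the complex coordinates N_A are used throughout, it may help to see the linear algebra of the parametrization (3.7) at work. The following minimal sketch (with a randomly generated, purely illustrative matrix f_AB; the only assumption, as in the text, is that Re f_AB is invertible) converts the real scalars to the complex ones and inverts the map via c̃ = (Re f)^{-1} Re N:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3                                   # toy number of moduli N_A

    # Hypothetical f_AB; in the text it is holomorphic in z^K, here a fixed
    # sample with invertible (indeed positive definite) real part.
    B = rng.standard_normal((n, n))
    f = B @ B.T + 1j * rng.standard_normal((n, n))

    c, ct = rng.standard_normal(n), rng.standard_normal(n)   # real scalars (c_A, c~^A)
    N = 1j * c + f @ ct                                      # N_A = i c_A + f_AB c~^B

    # Inversion: Re N = (Re f) c~  and  Im N = c + (Im f) c~
    ct_rec = np.linalg.solve(f.real, N.real)
    c_rec = N.imag - f.imag @ ct_rec

    assert np.allclose(ct_rec, ct) and np.allclose(c_rec, c)
    print("(c, c~) <-> N_A round trip consistent")

The same inversion underlies the appearance of Re f^{AB} in the kinetic couplings: the real scalars are recovered from the N_A only through the inverse of Re f_AB.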
3.2 The three-dimensional N = 2 action and its symmetries

Before manipulating the three-dimensional effective theory (3.8) further, it is important to stress that it can be written in N = 2 form with three-dimensional Yang-Mills terms [48]. This implies that all couplings are determined by a real function K, which we call the kinetic potential. Explicitly, the bosonic part of the N = 2 action takes the form (3.19), where φ^Â denotes the complex scalar multiplets z^K and N_A, and (L^Σ, A^Σ) are the vector multiplets. Comparing (3.8) with (3.19), one infers that the kinetic potential is given by (3.20). It is worth pointing out that (3.20) is valid without any further assumptions about the real three-forms (α_A, β^B) appearing in (3.5). (footnote 16)

Let us briefly discuss the symmetries of the effective action. First of all, it has an Abelian gauge symmetry

$$ \delta A^\Sigma = d\Lambda^\Sigma \,, \qquad (3.21) $$

where Λ^Σ is an arbitrary function. Furthermore, as anticipated earlier, it has a global Abelian symmetry acting on the scalars N_A as

$$ \delta N_A = i\,\lambda_A + f_{AB}\,\tilde\lambda^B \,, \qquad (3.22) $$

with λ_A and λ̃^A real constants. These symmetries descend from large gauge transformations of the Ĉ-field, namely δĈ = λ̃^A α_A + λ_A β^A with λ̃^A, λ_A ∈ Z. As usual, the classical supergravity analysis is invariant under a continuous version of the symmetry, while quantum effects break it to the discrete group. Using this discrete version, one identifies the scalars N_A as parameterizing a complex torus with complex structure encoded by the function f_AB. Since f_AB and N_A vary with z^K, this torus is non-trivially fibered over the complex structure moduli space. This is reminiscent of the complex tori discussed in (2.29), since one of the z^K of the Calabi-Yau fourfold translates into τ in the orientifold limit.

However, the three-dimensional action (3.19) with (3.20) is not yet in the correct duality frame to make the connection with the four-dimensional F-theory setting manifest. We turn to the dualization and the match with a four-dimensional theory in the next subsection. Before doing so, let us point out another interesting feature of the above formulation. It is not difficult to check that the kinetic potential (3.20) is not invariant under (3.22), but rather transforms as in (3.23). However, one checks that this transformation only yields a boundary term in the action and can therefore be neglected. The reason is that, in general, the kinetic potential in (3.19) is unique only up to

$$ K \;\to\; K + g(\phi) + \bar g(\bar\phi) + \big(h_\Sigma(\phi) + \bar h_\Sigma(\bar\phi)\big)\,L^\Sigma \,, \qquad (3.25) $$

where g(φ) and h_Σ(φ) are holomorphic functions of the φ^Â. Indeed, using that f_AB is holomorphic in z^K, this is precisely what happens in (3.24). While we are in three dimensions, we have thus found a natural set of holomorphic functions in our setting. As we will see later, these play a key role in the uplift to four dimensions and indeed reappear in the holomorphic gauge coupling function.

3.3 Dualization of fields to the F-theory frame

The previous reduction is valid for any smooth Calabi-Yau fourfold. In order to have an F-theory background, we have to restrict to the cases in which Y_4 is elliptically fibered, which imposes certain conditions on the geometric data. In turn, these translate into restrictions on the three-dimensional effective action that ensure that it arises from the compactification of a four-dimensional theory on a circle. This is expected from the M-theory to F-theory duality, which is the main tool to infer information about F-theory effective actions.
However, after performing the Y_4 reduction as in subsection 3.1, the resulting three-dimensional theory is generally not in the correct duality frame to lift it to a four-dimensional theory, so a dualization is usually required. Before going into the details, let us illustrate this with an example. Consider a massless chiral multiplet Φ̂ and a massless vector multiplet of a four-dimensional N = 1 supersymmetric theory that cannot be dualized into each other. When we dimensionally reduce on a circle, the chiral multiplet gives an N = 2 chiral multiplet Φ in three dimensions, while the vector yields an N = 2 vector multiplet consisting of a three-dimensional vector field A together with a real scalar a. Since the vector field A is massless, it can be dualized into a real scalar ã which, together with a, forms a chiral multiplet Φ_A. Conversely, we can also dualize the chiral multiplet Φ into a vector multiplet if it appears in the three-dimensional action with a real continuous shift-symmetry. In general, after performing such a dualization, we can no longer lift the theory back to four dimensions. Thus, if we start with an N = 2 three-dimensional theory (with massless scalars and vectors) and wish to lift it to four dimensions, we first have to make sure we are in the correct duality frame.

In our case, the structure of the elliptic fibration, together with the expectations from Type IIB compactifications, suffices to find the correct frame. Following [13,45], we split the three- and two-forms, with the Ψ_κ corresponding to three-forms on the base of the fibration while the Ψ_A have components along the fiber. Similarly, the two-forms ω_α, which are dual to vertical divisors, come from the base, whereas the ω_ι do not. In particular, the latter can be further split as ω_ι = (ω_0, ω_i), where ω_0 is dual to the base and the ω_i comprise the exceptional divisors and the extra sections. We can give a rough characterization of these forms by counting how many 'legs' their components have along the elliptic fiber: ω_α and Ψ_κ have no legs along the fiber; Ψ_A and ω_i generically have components with one and zero legs; and ω_0 generically has components with two, one, and zero legs along the fiber. A coupling given by a Y_4-integral over a wedge product of these forms is non-vanishing only if the product contains components with two legs along the elliptic fiber. One thus immediately finds a set of vanishing conditions for these couplings. The intersections M^0_κλ and M^{0κ}_λ are in general non-vanishing; however, we can always choose a special three-form basis (α_κ, β^κ) in which they are brought to a standard form.

The split of the forms induces a split of the different fields as follows. On the one hand, the complex fields N_κ lift to four-dimensional vectors A^κ (the R-R vectors) and thus have to be dualized. On the other hand, the scalars N_A correspond to both the G^a moduli and the 7-brane Wilson lines, so they remain scalars. Regarding the three-dimensional vector multiplets, the (A^α, L^α) lift to the four-dimensional complex scalars T_α, so the A^α should be dualized into scalars. Finally, the vectors (A^ι, L^ι) comprise the 7-brane vectors as well as the Kaluza-Klein vector coming from the reduction of the metric, so they are not dualized. We are now ready to perform the dualization that brings the action (3.19) into the appropriate frame for the lift to four dimensions.
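Before turning to the supergravity version, the mechanics of this dualization can be previewed in a toy model: trading a real vector-multiplet scalar L for a chiral coordinate T amounts to a Legendre transform of the kinetic potential. The sketch below carries this out symbolically, with the hypothetical one-modulus potential K = c log L and the illustrative convention Re T = ∂K/∂L; the actual maps used in the reduction are those of (3.33):

    import sympy as sp

    L, T, c = sp.symbols('L T c', positive=True)

    K = c * sp.log(L)                        # hypothetical kinetic potential K(L)

    ReT = sp.diff(K, L)                      # illustrative convention: Re T = dK/dL
    L_of_T = sp.solve(sp.Eq(T, ReT), L)[0]   # invert the map: L = c / T

    K_dual = sp.simplify((K - L * ReT).subs(L, L_of_T))
    print(K_dual)                            # -> c*log(c/T) - c

    # The inverse Legendre transform recovers L (up to the standard sign)
    assert sp.simplify(sp.diff(K_dual, T) + L_of_T) == 0

In the full reduction the transform is performed at fixed N_A and z^K, and the imaginary part of T_α arises from dualizing the vector A^α; the toy above tracks only the real parts.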
As usual, this can be done in a manifestly supersymmetric way by performing a Legendre transform of the kinetic potential K (see appendix A for a detailed discussion). In order to dualize the scalars N_κ into vectors, we need to ensure that the kinetic potential does not depend on Im N_κ. At first sight this is not the case for K given in (3.20). However, we may remove this dependence by performing a transformation of the form (3.25), which yields (3.31). We denote the dual kinetic potential by K(N_A, T_α, z^K | L^ι, n^κ); it is given by (3.32), where the new variables are defined in (3.33). The dualized action can then be derived by inserting (3.32) and (3.33) into the general action (3.19). Notice that Re N_κ and L^α in (3.32) should be understood as functions of L^ι, N_A, Re T_α, and n^κ. This requires inverting the maps (3.33), which can be done explicitly for Re N_κ. We find the identity (3.34), where Re d^{λκ} is defined as the inverse of L^ι Re d_{ι λκ}. For the complex scalars T_α we only find an implicit expression, given in (3.35). This implicit form of the coordinates and of the kinetic potential is familiar from the orientifold setting, cf. (2.13) and (2.17). It should be stressed, however, that the M-theory result is more involved, since it contains the scalars L^ι, n^κ, such that K is not a Kähler potential. Determining the dual Lagrangian is technically involved but straightforward: one computes the derivatives of K(N_A, T_α, z^K | L^ι, n^κ) and expresses them in terms of derivatives of the original kinetic potential K(N_A, N_κ, z^K | L^ι, L^α). The details of this computation are summarized in appendix A.

3.4 Symmetries of the dual Lagrangian

Before we continue analyzing the three-dimensional Lagrangian, let us first discuss the symmetries of the dual Lagrangian. For the original Lagrangian we found a set of Abelian symmetries, (3.21) and (3.22), so one might think that the symmetries of the dual Lagrangian are also Abelian. However, this is not the case [15], which can be traced back to the existence of a Chern-Simons term in the eleven-dimensional supergravity action. In the democratic formulation one finds that, due to the Chern-Simons term, the large gauge transformations of the three-form and of the dual six-form potential are not independent, but rather given by (3.36), with ω_3 and ω_6 integral closed forms. Upon dimensional reduction of the democratic action, one can check that the symmetries may be Abelian or non-Abelian, depending on how one eliminates the redundant degrees of freedom. A detailed field-theory analysis of this fact in arbitrary dimension can be found in [44].

Explicitly, we can investigate the symmetries of the dual Lagrangian by translating (3.21) and (3.22) from the original frame into the new one. In addition, one directly checks the new symmetries of the vectors A^κ by using (3.19) with (3.32) and finds perfect agreement with the symmetries of n^κ, as expected from supersymmetry. The set of gauge and global symmetries is then found to be (3.37), where λ_α, λ_A, λ̃^A are arbitrary real constants and Λ^κ, Λ^ι are arbitrary real functions. Notice that the right-hand sides of δN_A and δT_α are holomorphic, and that the transformation is valid for finite values of λ_α, λ_A, and λ̃^A. The symmetry group is now non-Abelian; in particular, it is a generalization of the Heisenberg group. Notice also that, unlike for the original Lagrangian, the symmetries of the scalars and vectors are mixed.
This can be seen from the transformation rule for A^κ, which depends on λ̃^A and λ_A and induces a constant change of basis in the space of U(1)'s (see also [42]). This necessarily implies that the gauge coupling function must depend on the scalars and transform appropriately under the symmetries in order to render the whole Lagrangian invariant. Furthermore, if we were to gauge the global (non-Abelian) symmetry by promoting λ̃^A and λ_A to arbitrary functions, we would find that the transformation of the vectors is no longer constant and precisely matches that of a non-Abelian vector field [15]. More explicitly, in order for the three-dimensional kinetic terms to be invariant under (3.37), the gauge kinetic terms have to transform in a compensating way for finite λ̃^A = (λ̃^A, λ̃^κ) and λ_A. Here the indices I, J, ... run over all three-dimensional vectors, namely A^κ and A^ι. As we will see in the next section, the couplings M^{iA}_κ and M^j_{Aκ} are related to the kinetic mixing of 7-brane and bulk gauge fields, while M^{0I}_κ and M^0_{Aκ} have no immediate four-dimensional meaning. We would like to stress at this point that, in three dimensions, the coefficient matrix K_IJ of the vector kinetic terms is invariant if and only if the kinetic mixing is zero and M^{0A}_κ and M^0_{Aκ} vanish (an elementary linear-algebra illustration is given below). This carries over to a property of the four-dimensional gauge coupling function, as we show in the following.

4 Determining F-theory gauge coupling functions

Having determined the three-dimensional action in the correct duality frame, we can compare it with the circle reduction of an arbitrary four-dimensional action. As shown in appendix B, the circle reduction of a four-dimensional N = 1 supergravity action (2.1) yields a three-dimensional N = 2 supergravity of the form (3.19), with the kinetic potential given in (4.1). Here we set R = r^{-2}, with r the radius of the circle, and introduced the scalars ξ^I that come from reducing the four-dimensional vector fields. The index I runs over the four-dimensional vector fields and splits as {κ, i}. As before, we denote four-dimensional quantities by a hat.

4.1 Transformation rules of the gauge coupling functions

Before we proceed to compare the result of the (dualized) M-theory reduction with a generic four-dimensional theory on a circle, let us discuss the transformation properties of the four-dimensional gauge coupling functions. In the last section we saw that, in general, the kinetic terms K_IJ of the three-dimensional vectors transform under the shift-symmetries of the scalars. Clearly, the four-dimensional gauge coupling function shares a similar property. Indeed, consider the ansatz for a four-dimensional vector on a circle,

$$ \hat A^I = A^I + \xi^I\,(dy + A^0) \,, \qquad (4.2) $$

where dy is the non-trivial one-form on the circle. We also introduced the Kaluza-Klein vector A^0 coming from the reduction of the metric on a circle, cf. (4.3). Using (3.37) together with (4.2), we find the transformation (4.4) of the four-dimensional vector on R^{1,2} × S^1, where we used that L^0 = R and L^i/R → 0. Since dy is the non-trivial one-form on S^1, we recognize the last term in (4.4a) as a large gauge transformation. These transformations along the circle are often key in investigating the properties of the F-theory effective action, as recently demonstrated in [49,50] for 7-brane gauge fields. Here we find a non-trivial completion of these transformations that includes the R-R bulk gauge fields. In the decompactification limit, large gauge transformations along the circle become meaningless, since there are no non-trivial one-forms in R^{1,3}.
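Before stating the resulting four-dimensional transformation, note that elementary linear algebra already shows why a constant change of basis among the U(1)'s forces a compensating transformation of the gauge couplings: invariance of a kinetic term of the schematic form F^T K F under F → S F requires K → (S^{-1})^T K S^{-1}. A minimal numerical sketch (with a random positive-definite stand-in for K_IJ and a hypothetical unipotent mixing matrix S, both chosen only for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3                                    # toy number of U(1) gauge fields

    B = rng.standard_normal((n, n))
    K = B @ B.T + n * np.eye(n)              # positive stand-in for the kinetic matrix

    S = np.eye(n)
    S[0, 2] = 0.7                            # hypothetical unipotent mixing of U(1)'s

    F = rng.standard_normal(n)               # stand-in for the field strengths
    S_inv = np.linalg.inv(S)
    K_new = S_inv.T @ K @ S_inv              # compensating transformation of K

    assert np.isclose((S @ F) @ K_new @ (S @ F), F @ K @ F)
    print("kinetic term invariant under the combined transformation")

In particular, K is unchanged precisely when S acts trivially on the gauge bosons, mirroring the statement above that K_IJ is invariant if and only if the mixing couplings vanish.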
Thus, we find the transformation (4.5) of the vectors in R^{1,3}, where we included the possibility of a constant shift C_IJ of Im f̂_IJ. Splitting the indices, this corresponds to (4.6). Finally, notice that when M^{iA}_κ = M^i_{Aκ} = 0, the gauge coupling function must be invariant, up to possibly constant shifts of its imaginary part. In the following we will see that this corresponds to the case in which the kinetic mixing between the four-dimensional vectors labeled by κ and i vanishes.

4.2 Gauge coupling functions from dimensional reduction

In the following we compare the action derived from the kinetic potential (3.32) with the one derived from (4.1), paying special attention to the gauge coupling function. For this we need the derivatives of the dual kinetic potential K(N_A, T_α, z^K | L^ι, n^κ), which are given in appendix A.

On the weak string-coupling limit

In addition to presenting the F-theory result, we also study the restriction to the weak string-coupling limit discussed in section 2. To this end it is useful to spell out the matching of the moduli. First, the complex structure moduli z^K of Y_4 correspond, as in (4.9), to the complex structure moduli of the double cover Y_3 of B_3, the axio-dilaton τ, and the D7-brane deformations ζ^K, which are the fields of Set 1 given in (2.11). (footnote 17) Second, the F-theory moduli N_A naturally split as in (4.10) into the D7-brane Wilson line moduli a_p and the R-R and NS-NS two-form moduli G^a constituting Set 2 given in (2.12). Third, recalling the result (2.31) and the definitions (2.7), (2.12), one identifies the corresponding holomorphic functions of the two reductions (footnote 18), where we stress that F_κλ and f_pq are determined only as functions of the complex structure moduli of Y_3. The F-theory result is significantly more general, since it encodes the full dependence on all complex structure moduli z^K of Y_4. Applying the split (4.9), it can be used to derive corrections to the orientifold result.

Footnote 17: Note that we have not included ζ^K in the orientifold analysis. In F-theory a general z^K-dependence automatically includes these moduli.

Footnote 18: The identification of f_κλ with (2.31) will become apparent in the next paragraphs.

Gauge coupling function for R-R vectors

Let us start with the derivation of the four-dimensional gauge coupling function for the R-R vectors, namely f̂_κλ. From the results in appendix B we immediately see that the real part of the gauge coupling function is encoded in K_κλ, the kinetic term of the three-dimensional vectors A^κ. According to eq. (A.13), it is given by (4.12), where we assumed that

$$ f_{\kappa A} = 0 \,, \qquad L^0 = R \,. \qquad (4.13) $$

These assumptions appear to be essential: they greatly simplify the results and, in particular, they turn (4.12) into the real part of a holomorphic function, which matches the expectations from the Type IIB perspective. We therefore assume that (4.13) holds for the rest of the paper. It would be interesting to show that the vanishing condition f_κA = 0 can be proven for elliptic fibrations.

The computation of the imaginary part of the four-dimensional gauge coupling function is somewhat more involved. By carefully tracking the circle reduction, we see that it is encoded in the couplings F^κ ∧ Im(K_κÂ dφ^Â) (4.14) in (3.19), where Â runs over all the chiral fields in three dimensions. According to the results in appendix A, we find the explicit expression (4.15). In particular, the imaginary part of f̂_κλ is encoded in the coefficient that multiplies n^λ/R in (4.15).
Thus, from (4.12) and (4.15), we conclude that the four-dimensional gauge coupling function for the R-R gauge bosons is given by (4.16), which is holomorphic in the complex structure moduli of the Calabi-Yau fourfold and therefore holomorphic with respect to the four-dimensional chiral fields. The result (4.16) is in accord with the expectations from Type IIB orientifolds, cf. (2.31). It is important to note, however, that the F-theory result (4.16) is significantly more general, since the function f_κλ can depend on all complex structure moduli of Y_4.

Kinetic mixing between R-R and 7-brane vectors

Now we move on to the kinetic mixing f̂_κi between the open and closed string gauge bosons. From the circle reduction we see that Re f̂_κi is encoded in K_κi, the three-dimensional kinetic mixing between A^κ and A^i. We find that the M-theory reduction yields (4.17), where Q^{iκ}_A is the holomorphic function defined in (3.16). Notice that (4.17) is again the real part of a holomorphic function of the complex moduli. This also shows that the mixing is proportional to the couplings M^{iκ}_A and M^i_{Aκ}, which are related to the ones appearing in (4.5) by the identity (3.18). This proves the statement of the last section that the transformation of the vectors is trivial if and only if the mixing vanishes.

Just like in the previous case, we can compute the imaginary part of the mixing, Im f̂_κi, by analyzing (4.15); it is given by the term proportional to L^i/R. Thus, we find the coupling (4.18), which is holomorphic in both the complex structure moduli z^K and the moduli N_A. The identification (4.18) agrees with the result given in section 2.5 when asserting that M^i_{λA} is non-vanishing only along the directions of the Wilson line moduli a_p. Let us stress again, however, that in order to match it with the results obtained in [6] from the dimensional reduction of the D7-brane action, we had to rely heavily on the identities (3.18), which were not known in the Type IIB context (see the discussion around eq. (2.55)).

Let us briefly mention that we can also compute the mixing between the Kaluza-Klein vector and the R-R vectors. Of course, this has no meaning in four dimensions. However, it is reassuring that it is exactly what one expects from a theory arising from a circle reduction, given (4.12) and (4.17).

Gauge coupling function for 7-brane vectors

Finally, let us discuss the gauge coupling function f̂_ij for the 7-brane gauge fields which, as we saw in section 2.3, is the most involved coupling. In particular, we do not expect to obtain a holomorphic gauge coupling function f̂_ij directly from dimensional reduction. In the following we simply give the result obtained from dimensional reduction; in the next subsection we then discuss how holomorphicity and the discrete shift-symmetries of the axions can be used to constrain the exact result. Following the same strategy as before, we see that Re f̂_ij is given by K_ij, the three-dimensional kinetic terms of the 7-brane gauge bosons. There is, however, a further complication that has to be addressed when discussing this coupling. As shown in appendix A, in terms of the original kinetic potential it reads (4.20), with K given by (3.31). Thus, we immediately see that K_ij depends on all possible intersection numbers (3.10), but we do not expect all of them to contribute to the gauge coupling function in four dimensions.
In particular, the couplings K_ijkl and K_ijkα induce a dependence of K_ij on the scalars L^i, which have no four-dimensional scalar analog. This suggests that, just as in [51-53], the classical M-theory reduction contains terms that correspond to one-loop effects of the circle reduction of the four-dimensional theory. Notice, however, that unlike in [51-53] we perform a dimensional reduction without fluxes, so in our case the four-dimensional theory is non-chiral. Thus, the smooth Calabi-Yau fourfold encodes information about non-chiral states. We leave a more detailed study of these corrections and their interpretation for future work.

In order to match the classical circle reduction, we compute the coupling (4.20) assuming that the only non-vanishing intersection numbers are those in (4.21), where K_αβγ are the intersection numbers of the two-forms on the base of the elliptic fibration. We also expressed the intersection numbers K_αβij in terms of those of the base. The precise interpretation of the divisors labeled by the indices i, j depends on the model under consideration. The first possibility is that i, j label exceptional divisors over a single non-Abelian 7-brane wrapping a divisor S in the base B_3. In this case one can expand the Poincaré-dual two-form as [S] = δ^α_7 ω_α|_{B_3} and split C^α_ij = δ^α_7 C_ij, where C_ij is the Cartan matrix of the non-Abelian gauge algebra. (footnote 19) A second possibility is that the indices i, j label multiple U(1) gauge factors stemming from several 7-branes on different divisors of B_3. In this case it is convenient to keep C^α_ij in its general form, since this allows us to include kinetic mixing among the 7-brane U(1)'s. In either case, we compute to linear order in C^α_ij the result (4.22), where we also defined the quantities entering it.

In this expression, the first term of (4.22) is proportional to the volumes of the divisors in B_3 specified by C^α_ij. From the Type IIB perspective this corresponds to the fact that the gauge coupling scales with the volumes of the cycles wrapped by the 7-branes. The second term is proportional to the couplings M^{iA}_κ and M^i_{Aκ} and, in particular, vanishes when there is no mixing between A^κ and A^i. Notice that, as expected from the Type IIB discussion, (4.22) is not the real part of a holomorphic function of the chiral fields, even in the absence of mixing. Indeed, from (3.33) the coordinate Re T_α contains a term proportional to the square of N_A that is missing in (4.22). This is precisely the same problem we encountered in the Type IIB setting of section 2.3, where the contribution proportional to the square of the Wilson lines does not arise from dimensional reduction. Finally, let us mention that the second term of (4.22) is holomorphic in N_A if and only if

$$ Q^{i\kappa}{}_A\; Q^{jB}{}_\kappa = 0 \,. \qquad (4.25) $$

However, this is not sufficient to guarantee holomorphicity in the complex structure moduli z^K of Y_4. In the following we discuss in detail the corrections that are needed in four dimensions to obtain a holomorphic gauge coupling function. We focus on the case without kinetic mixing between 7-brane and R-R gauge fields and leave the general case to future work.
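The corrections discussed in the next subsection are built from Riemann theta functions on the torus spanned by the N_A. As a reference for their defining quasi-periodicity, the following sketch evaluates Θ(z, Ω) = Σ_{n∈Z^g} exp(iπ n·Ω·n + 2πi n·z) by a truncated lattice sum and verifies Θ(z + Ω m) = exp(−iπ m·Ω·m − 2πi m·z) Θ(z); the genus-two period matrix Ω is a hypothetical sample (only Im Ω positive definite is assumed):

    import itertools
    import numpy as np

    def riemann_theta(z, Omega, cutoff=6):
        # Truncated sum over n in Z^g of exp(i*pi*n.Omega.n + 2*pi*i*n.z);
        # converges for Im(Omega) positive definite.
        g = len(z)
        total = 0j
        for n in itertools.product(range(-cutoff, cutoff + 1), repeat=g):
            n = np.array(n)
            total += np.exp(1j * np.pi * (n @ Omega @ n + 2 * n @ z))
        return total

    Omega = np.array([[1.0 + 2.0j, 0.3 + 0.1j],    # hypothetical genus-2 period
                      [0.3 + 0.1j, 0.5 + 1.5j]])   # matrix with Im(Omega) > 0
    z = np.array([0.20 + 0.05j, -0.10 + 0.10j])
    m = np.array([1, 0])                           # lattice shift z -> z + Omega m

    lhs = riemann_theta(z + Omega @ m, Omega)
    rhs = np.exp(-1j * np.pi * (m @ Omega @ m + 2 * m @ z)) * riemann_theta(z, Omega)
    assert abs(lhs - rhs) < 1e-8 * abs(rhs)
    print("Riemann theta quasi-periodicity verified")

Under integral shifts z → z + m one instead finds exact periodicity, so Θ is a section of a line bundle on the torus rather than a function, exactly as in the construction of subsection 2.4.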
4.3 Shift symmetries, quantum corrections, and theta functions

In the previous subsection we have shown that a direct dimensional reduction of eleven-dimensional supergravity on a smooth Calabi-Yau fourfold yields vector kinetic terms and a complex moduli space that appear to be incompatible with a holomorphic four-dimensional gauge coupling function. In the absence of kinetic mixing, the missing terms in the completion to a holomorphic result are of the form Re(d_α^{BA} N_B) Re N_A. From our detailed discussion of the orientifold setting in section 2, however, we should be alerted that this apparent conflict was already encountered for the D7-brane Wilson line moduli. In fact, we recalled in subsections 2.3 and 2.4 that the corrections to the gauge coupling function quadratic in the Wilson line moduli are generated only at one string loop and are therefore not found by a dimensional reduction of the tree-level D7-brane effective action. We observe that in F-theory effective actions derived via eleven-dimensional supergravity a similar feature occurs for all moduli N_A, i.e. both for the Wilson line moduli and for the R-R and NS-NS two-form moduli in the split (4.10). This implies that, to ensure holomorphicity of the gauge coupling function in T_α, one needs to include in the M-theory reduction a quantum correction of the form (4.26). In the M-theory setting it is much harder to identify the origin of such a correction. One expects that it arises from certain M2-brane states, by following the F-theory to M-theory duality, but it remains an open question how to make this precise. As we will see in the following, we can nevertheless infer non-trivial constraints on f^quant_ij by using symmetries and the expected holomorphicity properties of the effective theory. For simplicity we discuss only the case without kinetic mixing in the rest of this work.

In order to proceed, we begin by collecting a few observations supporting the claim that important corrections are missing in the reduction of the supergravity action. On the one hand, it is clear from the outset that the three-dimensional reduction result is invariant under all the shift-symmetries (3.37), even for continuous parameters λ_A, λ_α, λ̃^A. Since these symmetries are inherited from the eleven-dimensional action and remain unbroken throughout the classical reduction, there is simply no way they could be broken. On the other hand, we have argued in subsection 2.4 that in the presence of the fields N_A the continuous symmetries λ_A, λ̃^A acting non-trivially on the holomorphic gauge coupling function in four dimensions are always broken. The discrete symmetries are, however, manifest when including 7-brane fluxes or quantum corrections resulting in a theta function on the complex torus spanned by the N_A. We expect this to be equally the case for a full-fledged F-theory compactification, such that corrections must indeed be missing in the above dimensional reduction.

Note that in the M-theory background (3.2) we did not include any background fluxes dĈ on Y_4. This implies that the F-theory setting will not contain background fluxes either and, in particular, that we did not consider 7-branes with world-volume fluxes. Consequently, the manifestation of the discrete symmetries for the G^a moduli obtained in (2.23) by completing i*B_2 − 2πα'F requires an extension of our M-theory analysis.
In fact, it was argued in [54] that such orientifold fluxes are precisely the ones that correspond to so-called hypercharge fluxes in F-theory GUTs [55,56]. They induce neither a D-term nor an F-term potential for the considered moduli, but they can nevertheless, for example, break a non-Abelian gauge group. In our context they are crucial to make the discrete symmetries manifest. It is of enormous importance to understand the manifestation of these fluxes in the M-theory reduction in greater detail.

The second possibility encountered in subsection 2.4 was a manifestation of the discrete shift-symmetries obtained by completing the first term in (4.26) with a theta function. In fact, note that the N_A span a complex torus $T^{2n}_F$ of real dimension 2n = 2(h^{2,1}(Y_4) − h^{2,1}(B_3)). Its complex structure is determined by the holomorphic function f_AB, and by using our assumption f_λA = 0, as given in (4.13), together with the restriction to a setting without kinetic mixing, this torus splits off trivially from the full torus $H^{2,1}(Y_4)/H^3(Y_4,\mathbb{Z})$. One can then introduce a connection on $T^{2n}_F$ such that its field strength F_ij = dA_ij is a (1,1)-form. Note that this expression still refers to the three-dimensional Coulomb branch, as indicated by the indices i, j. While the lift with a non-Abelian gauge group is more involved, one realizes that for a single U(1) gauge group factor one finds the generalization of (2.48). In the following we restrict to this Abelian case and drop the indices i, j. Arguing as in subsection 2.4, one can use this connection in the non-holomorphic gauge and look for holomorphic sections, cf. (4.28). Here, as in subsection 2.4, Θ is in general a sum of Riemann theta functions with z^K-dependent coefficients. Unfortunately, we do not know an M-theory argument by which Θ could be fully determined. In addition to the ambiguities in the complex structure dependent coefficients, one also faces the fact that fluxes should be properly included in (4.28). One might speculate that some of the constants (ν^a, μ_a) determining the shifts in the theta functions (2.52) admit an interpretation as fluxes. However, we also expect that the non-holomorphic prefactor, and hence the line bundle and connection, become modified. It would be very interesting to investigate the proper inclusion of fluxes in future work.

5 Conclusions

In this paper we have studied the gauge coupling functions arising in N = 1 Type IIB orientifolds with D7-branes and in F-theory. First, we analyzed the result one obtains from the dimensional reduction of Type IIB supergravity coupled to the D7-brane action, without kinetic mixing between the open and closed string gauge fields, following [6]. We have seen that this does not yield a gauge coupling function that is holomorphic in the chiral coordinates, and that it therefore has to be modified. As already mentioned in [6], one expects corrections from open-string one-loop effects to generate precisely the missing terms needed to establish a holomorphic result. However, an explicit computation of such corrections is very challenging and has only been performed in a related setting on toroidal orbifolds [9]. We have shown that by carefully analyzing the shift-symmetries of the closed and open string axions in the effective field theory, one can severely constrain the structure of such corrections even in generic Calabi-Yau vacua. In Type IIB orientifolds we have discussed two mechanisms that ensure that the gauge coupling function f̂_D7 transforms appropriately under the discrete shift-symmetries.
On the one hand, we reviewed the inclusion of D7-brane world-volume flux to make the symmetries of the R-R and NS-NS two-form moduli $G^a$ manifest. On the other hand, we have stressed that the gauge coupling function in general also depends on the complex Wilson line moduli $a^p$, which also admit discrete shift-symmetries. In fact, they span a complex torus with complex structure determined by a function $f_{pq}$, which is itself holomorphic in the complex structure moduli. Then, by simply imposing holomorphicity and invariance under such symmetries, we obtain that the required one-loop corrections are encoded by a holomorphic section $\Psi = \exp(f^{\text{1-loop}}_{D7})$ of a certain line bundle defined on the torus spanned by the Wilson lines. Constructing the connection on this line bundle, such sections are then found to be comprised of a term quadratic in the Wilson lines, required for holomorphicity of the complete $f_{D7}$, and a sum of Riemann theta functions with, in general, complex structure dependent coefficients. This form of the gauge coupling function is in agreement with the results in [9], even though in our setting the torus is in general not related to the compactification space. It is important to stress that we did not unravel the precise physical interpretation of having to deal with holomorphic sections $\Psi$ of the constructed line bundle. We were led to this construction by holomorphicity and symmetries of the gauge coupling function, but we were not able to completely fix the choice of $\Psi$ appearing in the gauge coupling function. Our construction, however, is reminiscent of the consideration first given in [32]. In this work the partition function of an M5-brane is constructed and a similar ambiguity of choosing the correct section had to be addressed. One might hope that the extensions of [32] to Type IIB supergravity with D-branes [33,34] might shed new light on the significance of the choice of $\Psi$ in our setting. It is also intriguing to point out that the complex structure dependence of $\Psi$ might be fully constrained when identifying it as a wave-function of a quantum system along the lines of [57]. It would be interesting to check whether these ideas can be made more explicit for our setting. Extending our analysis of the Type IIB orientifold setting we have also included the effects of kinetic mixing between D7-brane gauge fields and R-R gauge fields. In particular, we derived that when the mixing is non-zero, the gauge coupling function should not be invariant under the shift-symmetries, since these induce a constant change of basis in the space of gauge bosons, mixing open and closed string U(1)'s. Our systematic approach allowed us to clarify certain puzzles that appeared in [15]. In particular, we argued that it is indeed possible to gauge specific non-Abelian isometries by Abelian vectors even though the gauge coupling function is independent of such gaugings. The underlying structure is omnipresent in string theory models and stems from the fact that higher-degree R-R form potentials admit non-trivial symmetry transformations under lower-degree forms from the brane or bulk theory. It would be interesting to see whether the ideas to exploit the stringy symmetries for the axions and gauge fields can be generalized further. In the second main part of the work, we have studied the gauge coupling function for genuine F-theory backgrounds via dimensional reduction of M-theory on a Calabi-Yau fourfold.
One of the main advantages of this approach is that many of the moduli that appear to be completely different from the Type IIB perspective turn out to have a common origin in the Calabi-Yau fourfold. In addition to being applicable away from weak string coupling, the F-theory settings also allow us (1) to fully include the dependence on the 7-brane position moduli, (2) to derive interesting and useful relations between different moduli that are obscure in the IIB picture, and (3) to provide geometric arguments for the properties of the various couplings in the bulk and 7-brane sector. In order to investigate the gauge coupling function we have crucially extended the results in [13]. We performed the M-theory reduction in full generality and explained in detail the role of the elliptic fibration when performing the dualization to the F-theory frame. In doing so, we have paid special attention to the shift-symmetries of the axions coming from the M-theory three-form expanded into three-forms of the Calabi-Yau fourfold. We have shown explicitly that a direct reduction of eleven-dimensional supergravity at first only yields shift-symmetries that are Abelian. Due to the dualization of three-dimensional fields into the F-theory frame they become non-Abelian, as already discussed in [15,44]. As we have seen, this is a direct consequence of having a non-trivial Chern-Simons coupling in the eleven-dimensional supergravity action. Furthermore, it provides the M-theory origin of the more involved shift-symmetries in Type IIB compactifications. We then determined the four-dimensional gauge coupling functions of the F-theory setting, by comparing the three-dimensional M-theory effective action with the circle reduction of a four-dimensional theory. As in the Type IIB orientifolds, the resulting gauge coupling function is at first not holomorphic. In fact, the reduction of eleven-dimensional supergravity does not capture any of the quadratic corrections in the $T_\alpha$ coordinates determined from the scalar kinetic terms. This is compatible with the fact that the dimensional reduction does not break the continuous shift-symmetries and indicates that important quantum corrections are missed. However, by mimicking the arguments we made for the Wilson line moduli in Type IIB orientifolds, we derived that an appropriate correction to the F-theory gauge coupling function is again captured by holomorphic sections of a certain line bundle. Such sections include a quadratic correction required for holomorphicity in the $T_\alpha$ coordinates, but also generally allow for the logarithm of a sum of Riemann theta functions with complex structure dependent coefficients. This line bundle and these theta functions are now defined on a complex torus spanned by the axions coming from the M-theory three-form that are not dualized into vector multiplets in the F-theory frame. This torus is thus a subspace of the complex torus $H^{2,1}(Y_4)/H^3(Y_4, \mathbb{Z})$, which also captures the degrees of freedom of the R-R bulk vector fields. A detailed study of this geometric object and its variation over the complex structure moduli space is therefore of key phenomenological interest. In this work we have already conjectured certain constraints on the geometric data of elliptically fibered Calabi-Yau fourfolds.
In particular, by demanding supersymmetry of the four-dimensional effective action, we have proposed that the function $f_{AB}$, which is holomorphic in the complex structure moduli of $Y_4$ and defined in (3.5), should satisfy some non-trivial relations. Our analysis has been done for a generic compactification space, without referring to a specific example. Thus, it would be interesting to analyze in detail different examples to check whether such relations are indeed satisfied. Another interesting approach to derive the couplings relevant for the gauge coupling function in F-theory was presented in a series of papers [58-60]. It was shown in these papers that the coefficient functions of couplings of the type $F^4$, where $F$ is an eight-dimensional gauge field, satisfy certain Picard-Fuchs-type differential equations. It would be interesting to explore the relation of these findings to the results of this paper.

Here $\tilde K$ denotes the Legendre transform of $K$. Therefore, we may dualize the action used in the main text, (A.5), by performing a Legendre transform of the kinetic potential. Since we want to dualize some of the scalars into vectors, and vice versa, we split the fields as follows,
$$\phi = (\phi^a, N^\kappa)\,, \qquad L^\Sigma = (L^\iota, L^\alpha)\,, \qquad A^\Sigma = (A^\iota, A^\alpha)\,, \tag{A.6}$$
and dualize the fields with Greek indices. We also assume that the kinetic potential does not depend on $\mathrm{Im}\,N^\kappa$. The appropriate Legendre transform is given by
$$\tilde K(\varphi^a, T_\alpha \,|\, l^\iota, n^\kappa) = K(\phi^a, N^\kappa \,|\, L^\iota, L^\alpha) - L^\alpha\,\mathrm{Re}\,T_\alpha - \mathrm{Re}\,N^\kappa\, n^\kappa\,, \tag{A.7}$$
where the new variables are defined as
$$\mathrm{Re}\,T_\alpha = \frac{\partial K}{\partial L^\alpha}\,, \qquad n^\kappa = \frac{\partial K}{\partial\,\mathrm{Re}\,N^\kappa}\,. \tag{A.8}$$
The dual action takes exactly the same form as (A.5), but with the field content changed,
$$\phi = (\varphi^a, T_\alpha)\,, \qquad L^\Sigma = (l^\iota, n^\kappa)\,, \qquad A^\Sigma = (A^\iota, A^\kappa)\,, \tag{A.9}$$
and $K$ replaced by its Legendre transform given by (A.7). Although the fields $\phi^a$ and $L^\iota$ were not dualized, we nevertheless changed their names to $\varphi^a$ and $l^\iota$ for clarity. It is possible to express all the derivatives of $\tilde K$ in terms of those of $K$, if one knows the derivatives of the old variables with respect to the new ones. The dualization gives us the opposite, i.e. the derivatives of the new variables with respect to the old, which we collect in a matrix, where we assumed that $K_{\kappa\alpha} = 0$ (this is true for the Kähler potential (3.31), since $d_{\alpha\kappa A} = 0$; it is straightforward to drop this assumption). The derivatives of the old variables in terms of the new ones are given by the inverse of this matrix. Using this we find the derivatives of the new kinetic potential in terms of derivatives of the original one. For the case analyzed in the main text, namely for $K$ given in (3.31), we find the corresponding derivatives of $\tilde K$, where we defined the shorthand combinations in (A.23) and worked at leading order in $C^\alpha_{ij}$. It is also useful to consider certain further combinations of these derivatives.

B Circle reduction of four-dimensional N = 1 supergravity

In this appendix, we perform the circle reduction of a four-dimensional N = 1 ungauged supergravity action. With the decomposition of the metric along the circle, one finds the reduction of the Einstein-Hilbert term, where in addition we performed a Weyl rescaling $g^{\mathrm{new}}_{\mu\nu} = r^2\, g^{\mathrm{old}}_{\mu\nu}$ to bring the action to the Einstein frame, and introduced the new variable $R \equiv r^{-2}$. Furthermore, the reduction ansatz for the vectors introduces three-dimensional scalars $\zeta^I$. In the reduction of the terms containing the vectors we introduced $\xi^I \equiv R\, \zeta^I$, which are the proper three-dimensional scalar fields (they form a vector multiplet together with the reduced vector $A^I$; similarly $R$ and $A^0$ form a vector multiplet).
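For orientation, the following is a minimal sketch of the standard Kaluza-Klein circle ansatz consistent with the definitions above; the signs and overall normalizations are conventions assumed here rather than taken from the displayed equations (B.1)-(B.6):
$$ds^2_{(4)} \;=\; g_{\mu\nu}\, dx^\mu dx^\nu \;+\; r^2\,(dy + A^0)^2\,, \qquad \hat A^I \;=\; A^I \;+\; \zeta^I\,(dy + A^0)\,,$$
so that after the Weyl rescaling $g^{\mathrm{new}}_{\mu\nu} = r^2 g^{\mathrm{old}}_{\mu\nu}$ the three-dimensional bosonic fields organize into the vector multiplets $(R, A^0)$ and $(\xi^I, A^I)$ with $R = r^{-2}$ and $\xi^I = R\,\zeta^I$, exactly as stated in the text.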
Putting all this together we obtain the three-dimensional action (B.7). One can check that this action can be put into the standard N = 2 supergravity form, where the indices $(0, I)$ have been gathered into a single index $\hat I$, with $\xi^{\hat I} = (R, \xi^I)$ and $A^{\hat I} = (A^0, A^I)$.
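As a compact reminder of the transformation property that underlies the theta-function constructions invoked above, we quote one standard convention for the genus-$g$ Riemann theta function (the normalization used in the main text may differ):
$$\theta(z\,|\,\tau) \;=\; \sum_{k \in \mathbb{Z}^g} e^{\pi i\, k^T \tau k \,+\, 2\pi i\, k^T z}\,, \qquad \theta(z + m + \tau n\,|\,\tau) \;=\; e^{-\pi i\, n^T \tau n \,-\, 2\pi i\, n^T z}\,\theta(z\,|\,\tau)\,, \qquad m, n \in \mathbb{Z}^g\,.$$
A lattice shift of $z$ thus multiplies $\theta$ by the exponential of a term at most linear in $z$, which a prefactor quadratic in $z$ can absorb; this is the mechanism by which the discrete, but not the continuous, shift-symmetries can remain manifest on a holomorphic gauge coupling function built from such sections.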
Mechanisms and role of microRNA deregulation in cancer onset and progression

MicroRNAs are key regulators of various fundamental biological processes and, although representing only a small portion of the genome, they regulate a much larger population of target genes. Mature microRNAs (miRNAs) are single-stranded RNA molecules of 20–23 nucleotide (nt) length that control gene expression in many cellular processes. These molecules typically reduce the stability of mRNAs, including those of genes that mediate processes in tumorigenesis, such as inflammation, cell cycle regulation, stress response, differentiation, apoptosis and invasion. MicroRNA targeting is mostly achieved through specific base-pairing interactions between the 5′ end (‘seed’ region) of the miRNA and sites within coding and untranslated regions (UTRs) of mRNAs; target sites in the 3′ UTR diminish mRNA stability. Since miRNAs frequently target hundreds of mRNAs, miRNA regulatory pathways are complex. Calin and Croce were the first to demonstrate a connection between microRNAs and increased risk of developing cancer, and the role of microRNAs in carcinogenesis has meanwhile been definitively established. It needs to be considered that the complex mechanism of gene regulation by microRNAs is profoundly influenced by variation in the gene sequence (polymorphisms) of the target sites. Thus, individual variability could cause patients to present differential risks regarding several diseases. Aiming to provide a critical overview of miRNA dysregulation in cancer, this article reviews the growing number of studies that have shown the importance of these small molecules and how these microRNAs can affect or be affected by genetic and epigenetic mechanisms.

MicroRNAs: Characterization and Biogenesis

MicroRNAs (miRNAs) are a group of small RNAs, with around 19 to 25 nucleotides, resulting from cleavage of larger non-coding RNAs. They act as post-transcriptional regulators of gene expression, both in plants and in animals (Bartel, 2004). In 1993, the first miRNA, named lin-4 (lineage-deficient-4), was discovered in Caenorhabditis elegans, and was found to be associated with regulation of larval development. The second miRNA, let-7, was discovered in 2000, also in Caenorhabditis elegans. The fundamental discovery of the regulatory processes governed by miRNAs was a major stimulus for the scientific community and led to a large amount of important research on miRNAs (Ambros, 2003). Over the last seven years, more than 4500 miRNAs have been identified in the genomes of nematodes, flies, plants, viruses and humans. It has been estimated that more than 1000 miRNAs exist in the human genome. Estimates suggest that miRNA regulatory processes may modulate the expression of 1 to 4% of human genes, thus making miRNAs one of the largest classes of genomic regulators (Calin et al., 2004). In mammals, miRNAs have been associated with diverse molecular pathways including regulation of proliferation, apoptosis, differentiation, cell cycle regulation, hematopoiesis, and many more cellular processes. Recent studies have emphasized the importance of understanding the mechanism of mRNA regulation by studying changes in miRNA expression in a variety of human pathological conditions, including cancers (Vanderboom et al., 2008). Given their importance in development, it was to be expected that miRNAs would also have a significant role in tumorigenesis.
Since their discovery, close to 3000 publications, including over 700 reviews, have documented associations between miRNAs and cancer (Garofalo and Croce, 2010; Medina and Slack, 2008). The miRNAs of different organisms have very similar overall patterns of three-dimensional structure, even though they frequently differ significantly in their precise cellular functions (Miyoshi et al., 2010). The biogenesis of miRNAs begins with their transcription by RNA polymerase II, generating a long primary transcript (pri-miRNA) with a 5' cap and a poly(A) tail. This transcript has a secondary hairpin-shaped structure and, while still in the nucleus, it is cleaved by the RNase III enzyme Drosha and its cofactor DGCR8 (DiGeorge syndrome critical region gene 8), thus generating a precursor molecule of approximately 70 nucleotides. The pre-miRNA is then quickly transported to the cytoplasm via exportin-5 (Exp5), a nuclear export protein that makes use of Ran-GTP as a cofactor. Once in the cytoplasm, the pre-miRNA is processed by another RNase III enzyme, Dicer, thereby generating a miRNA duplex of approximately 22 nucleotides in length. This product binds to the RISC complex (RNA-induced silencing complex) and directs sequence-specific cleavage of target mRNAs. Alternatively, the miRNA may repress translation by remaining bound to the mRNA, thereby impeding its translation (Figure 1). Such translational repression has been shown to play an important role in regulating growth and differentiation (Ambros, 2003; Medina and Slack, 2008), and abnormal expression of components of the miRNA regulatory complex has been correlated with different human tumors (discussed below). Post-transcriptional regulation by miRNAs at the 3' untranslated region depends on the extent of sequence homology with the target mRNA ("templates"), and this may either inhibit translation of the template or facilitate degradation of the mRNA target transcript. Imperfect matching with mRNA leads to variable inhibition of translation of the target, and this mechanism is the main mode of action of miRNAs in mammals. The observation that miRNAs are typically short sequences that can act without the need for complete matching means that a single miRNA can regulate many target mRNAs. The converse also holds, so that individual miRNAs may cooperate and collectively control one single mRNA target (Calin et al., 2004). The molecular biology of miRNAs and how they de facto act in organisms is still only beginning to be understood. There is a growing number of studies that reveal the importance of these small RNAs in diverse biological processes. Moreover, through the overall regulation of cellular gene expression and associations with different functional pathways it has become clear that miRNAs may be involved in various human diseases.

Regulation of Gene Expression Through MicroRNAs and Implications for Cancer

The current challenge is to identify target transcripts and pathways that are regulated by miRNAs. Studies have shown that a single miRISC complex may bind to more than 200 target genes, which may have a variety of functions, such as transcription factors, receptors and transporters. Thus, miRNAs can control the expression of almost one third of the human mRNA population, and deletions or modifications in this expression may contribute towards a variety of diseases, as well as disrupt pathways of fundamental importance in neoplasia (Miyoshi et al., 2010).
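As a concrete illustration of the seed-pairing rule described above, the short sketch below scans a 3' UTR for canonical 7mer seed matches (the reverse complement of miRNA nucleotides 2-8). The sequences and names are chosen for illustration, and the site-type definitions are simplified relative to full target-prediction tools.

```python
# Minimal sketch of canonical miRNA seed-match scanning (simplified).
# Real target prediction also weighs site context, conservation and
# pairing outside the seed; sequences here are illustrative.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna: str) -> str:
    """Return the 7mer site: reverse complement of miRNA nt 2-8 (5'->3')."""
    seed = mirna[1:8]                      # nucleotides 2-8 of the mature miRNA
    return "".join(COMPLEMENT[n] for n in reversed(seed))

def find_sites(utr: str, mirna: str) -> list[int]:
    """Return 0-based positions of seed matches in a 3' UTR (RNA alphabet)."""
    site = seed_site(mirna)
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

if __name__ == "__main__":
    mir21_like = "UAGCUUAUCAGACUGAUGUUGA"   # miR-21-like sequence, for illustration
    utr = "AAGCUAAUAAGCUAUCGAUAGCUAAGCUA"   # hypothetical 3' UTR fragment
    print(find_sites(utr, mir21_like))      # -> [6]
```

Because such a 7-nt site can occur in many different transcripts, the same scan run over a transcriptome immediately shows how one miRNA can plausibly regulate hundreds of targets.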
Cancer is a complex genetic disease involving changes in gene structure and expression. For almost three decades, carcinogenesis has been primarily attributed to abnormalities in oncogenes and tumor-suppressing genes. It is now recognized that miRNAs also have a primary role in cancer onset and progression. Oncomir is the term used to describe a miRNA involved in cancer. Such miRNAs were initially linked to tumorigenesis due to their proximity to chromosomal breakpoints (Calin et al., 2004b) and their dysregulated expression levels in many malignancies (Calin et al., 2004a). Abnormal gene expression by miRNAs has been correlated with several types of tumors, and such genes may function as oncogenes or tumor-suppressing genes (Figure 2). In humans, 50% of the miRNA genes are located at genomic sites associated with cancer-specific chromosomal rearrangements. A prime example is provided by the genes miR-15 and miR-16, which are located on chromosome 13q14, a region that is deleted in more than half of the cases of chronic lymphocytic leukemia and B-cell leukemia (Calin et al., 2002). It has also been reported that miR-15a and miR-16-1 negatively regulate the expression of BCL2, an anti-apoptotic oncogene that is generally overexpressed in a variety of tumors, including leukemias and lymphomas (Calin et al., 2008). These findings suggest that miR-15a and miR-16 may act as tumor-suppressing genes in human cancer. The miRNAs that code for the let-7 family were the first group of oncomirs identified. These regulate the expression of oncogenes, specifically the RAS genes. Mutations of the RAS oncogene are present in around 25%-30% of all human tumors, and overexpression of the RAS oncogene is very common in lung cancer cases. Ras proteins are membrane proteins that regulate cell growth and differentiation through MAP kinase signaling. In vitro experiments on a pulmonary adenoma cell line showed that let-7 was able to inhibit cell proliferation through Ras, suggesting that let-7 may function as a tumor suppressor in this context (Johnson et al., 2005). Thus, the let-7 miRNA that regulates the expression of the Ras protein is also able to indirectly alter the cell proliferation rate through its downstream MAP signaling cascade. The strongest evidence for an association between miRNAs and cancer was demonstrated by a sequence of three concurrent studies published in Nature in June 2005 (He et al., 2005; Lu et al., 2005; O'Donnell et al., 2005). The MYC oncogene, which codes for a transcription factor and functions as a cell growth regulator to induce proliferation and apoptosis, frequently appears mutated or amplified in human tumors. These authors reported that the miRNA group miR-17-92 (composed of seven miRNAs: miR-17-5p, miR-17-3p, miR-18, miR-19a, miR-20, miR-19b-1 and miR-92-1) was deregulated, leading to increased expression of MYC and culminating in the development of B-cell neoplasia. Recent studies have demonstrated that the expression levels of miR-143 and miR-145 are significantly lower in colorectal tumors, thus suggesting that these miRNAs act as potential tumor suppressors (Arndt et al., 2009). Several classes of deregulated miRNAs have also been shown to be differentially expressed in breast cancer, compared with healthy breast tissue (Volinia et al., 2006). Moreover, the expression signatures of informative miRNA subsets have enabled better molecular classification than mRNA expression profiles in several types of human cancer (Lu et al., 2005).
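In its simplest form, building such an expression signature reduces to ranking miRNAs by the difference between tumor and normal samples together with a non-parametric test. The sketch below illustrates that ranking on hypothetical (log2-scale) expression values; it assumes scipy is available, and real studies add normalization and multiple-testing correction (e.g., FDR).

```python
# Minimal sketch of a miRNA differential-expression ranking between
# tumor and normal samples (hypothetical data).
from statistics import median
from scipy.stats import mannwhitneyu

expression = {
    # miRNA: ([tumor samples], [normal samples]) -- hypothetical log2 values
    "miR-21":  ([9.1, 8.7, 9.5, 8.9], [6.2, 6.8, 6.5, 6.1]),
    "miR-145": ([4.0, 4.3, 3.8, 4.1], [7.2, 7.0, 7.5, 6.9]),
    "miR-16":  ([6.5, 6.4, 6.6, 6.3], [6.5, 6.6, 6.4, 6.7]),
}

for mirna, (tumor, normal) in expression.items():
    stat, p = mannwhitneyu(tumor, normal, alternative="two-sided")
    delta = median(tumor) - median(normal)   # difference of medians (log2 scale)
    print(f"{mirna}: median diff = {delta:+.2f}, p = {p:.3f}")
```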
Among the miRNAs differentially expressed in breast cancer, miR-10b, miR-125, miR-145, miR-21 and miR-155 have consistently presented the highest degree of deregulation. Downregulation of miR-10b, miR-125b and miR-145 and upregulation of miR-21 and miR-155 suggest that these miRNAs may play an important role as tumor suppressors or oncogenes (Blenkiron et al., 2007). In particular, the miR-145 miRNA is progressively downregulated when passing from healthy breast tissue to breast cancer with high cell proliferation rates. Similarly, but in the opposite direction, the expression of miR-21 is progressively upregulated when comparing normal breast tissue with breast cancer at advanced stages. Thus, the deregulation of these miRNAs may affect molecular events that are critical for tumor progression (Yang et al., 2008). Some specific miRNAs have also been associated with tumor invasion and metastasis in breast cancer. For example, the level of miR-10b expression in primary breast carcinomas has been correlated with clinical progression of the disease (Ma et al., 2007). Recently it was also observed that the expression of miR-7, miR-128a, miR-210 and miR-516-3p was associated with aggressiveness in estrogen receptor-positive, lymph node-negative cases (Foekens et al., 2008). In recent years, studies on miRNAs, especially on a large scale using microarrays, have provided a more comprehensive picture of the role of abnormal miRNA expression in neoplasia. Upon using molecular profiling methods such as bead-based flow cytometry, real-time PCR or mi-RAGE (SAGE analysis), it became possible to determine tissue-specific "signatures" for miRNAs. In line with this, novel molecular classifications of tumors based on their miRNA expression have provided a wealth of new resources for predictive and prognostic biomarkers for clinical applications in cancer.

Association Between SNPs at MicroRNA Binding Sites and the Risk of Cancer

When a sequence polymorphism is present within a miRNA transcript it is called a miR-SNP. These miR-SNPs are single-base polymorphisms (SNPs) in the miRNA sequences and are considered to be an important new class of functional polymorphisms in the human genome. Since the mode of action of miRNA is highly sequence-dependent, changing a single base in a miRNA sequence that alters binding specificities may affect multiple genes, thus impacting one or several biological pathways. The function of miRNAs may be altered through variations in their own sequence (miR-SNPs) or in their target sequences (called "miR-TS-SNPs") (Figure 3). Since a single miRNA can have multiple mRNA target sites, sequence polymorphisms in general have deeper and more extensive effects from a biological point of view than would sequence alterations to mRNA (Sun et al., 2009). Duan and Pak (2007) identified a SNP within an essential region of miR-125a that significantly changed its sequence and abolished recognition of its target site. Functional experiments in vivo confirmed that the presence of this polymorphism blocked the maturation of miR-125a. Another example of this process is a SNP in the precursor of miR-K5, which is encoded by the human herpesvirus associated with Kaposi's sarcoma. This polymorphism correlated with abnormalities in the miRNA cleavage processing by the Drosha enzyme.
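The miR-TS-SNP mechanism can be made concrete with a small sketch: a single base change inside a target site destroys the seed match, so the variant allele escapes repression. The alleles and sequences below are hypothetical, chosen only to illustrate the principle.

```python
# Sketch: how a single 3' UTR SNP can abolish a miRNA seed match.
# Sequences and the simulated C>G-type change are hypothetical.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna: str) -> str:
    """Reverse complement of miRNA nucleotides 2-8."""
    return "".join(COMPLEMENT[n] for n in reversed(mirna[1:8]))

def has_site(utr: str, mirna: str) -> bool:
    return seed_site(mirna) in utr

mirna = "UAGCUUAUCAGACUGAUGUUGA"             # illustrative mature miRNA
utr_ref = "CCGAUAAGCUAGGCAUCGGAUCC"          # reference allele (one seed site)
utr_snp = utr_ref.replace("AUAAGCUA", "AUAAGGUA", 1)  # SNP inside the site

print("reference allele bound:", has_site(utr_ref, mirna))  # True
print("variant allele bound:  ", has_site(utr_snp, mirna))  # False
```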
Similarly, Yang et al (2008) analyzed 41 potentially functional polymorphisms in miRNAs, pre-microRNAs and pri-microRNAs that predispose to bladder cancer and found a SNP in the GEMIN3 gene and a common haplotype in GEMIN4 that showed significant associations with higher risk of bladder cancer. Moreover, it could be demonstrated that combinations of certain genotypes were strongly associated with predisposition towards bladder cancer, and that the presence of these specific miRNA genotypes could be used as a tool for predicting the risk of developing such tumors. These examples illustrate that variations in the different biogenesis routes for miRNAs affect their own function and consequently the expression of the target messenger RNA. One of the first epidemiological studies showing a relationship between polymorphisms in the binding region of miRNAs and cancer was published by Landi et al (2008), wherein a relationship between the allele variants of CD86 (polymorphism rs17281995, C > G) and the microRNAs miR-337, miR-582, miR-200a, miR-184 and miR-212 was found to lead to a higher risk of colorectal cancer. It was also shown that there is a relationship between the presence of polymorphism rs1051690 in the insulin receptor (INSR) and a higher risk of colorectal cancer due to modification of the binding affinity of miR-618 and miR-612 (Landi et al., 2008). Subsequently, a SNP was identified at the target site of let-7 in the 3' UTR region of the KRAS gene (LCS6-KRAS) (Trang et al., 2008). The presence of this polymorphism was associated with increased risk of lung cancer among smokers. This allele variant was detected in 20% of the patients with lung carcinoma (NSCLC, non-small cell lung carcinoma). In asymptomatic individuals, LCS6-KRAS was found in 6% of the sample analyzed (n = 2433). In a case-control study on lung cancer, the presence of this allele was associated with an increased risk of lung cancer (relative risk of 2.3) among individuals with a history of smoking (mean of 820 cigarettes/year). Functional studies demonstrated that the presence of this polymorphism diminished the binding affinity of let-7 to its target site in KRAS and consequently increased the expression of KRAS. Nonetheless, Christensen et al (2009) reported that in head and neck cancer this same polymorphism (LCS6-KRAS) was not associated with any general increase in cancer risk, but was significantly associated with reduced survival. Amongst the polymorphisms affecting miRNA target sites, Tchatchou et al (2009) analyzed a group of 11 SNPs and found a strong correlation between the variant rs2747648 (C/T) in the estrogen receptor 1 (ESR1) gene and an increased risk of breast cancer. This risk was shown to be higher for premenopausal women with a positive family history of cancer (Tchatchou et al., 2009). The mode of action was inferred to be due to a lower binding affinity of miR-453 in the presence of the T allele, and concomitant reduced repression of the ESR1 gene. Since loss of miR-453 binding leads to overexpression of both the ESR1 mRNA and its receptor protein, breast cancer risk is expected to be increased. Taken together, the regulation and role of miRNAs and the large number of target sites in functionally important genes and associated pathways provide new insights into cancer risk.
It is clear that either a single miR-SNP or a specific combination of miRNA gene variants may act on their target mRNA sites to constitute a new, previously unappreciated mechanism for predisposition to cancer.

Association Between Epigenetic Alterations and MicroRNAs in Cancer

Various epigenetic alterations may take place during tumor development. Recent studies have indicated that miRNA expression can be regulated by different epigenetic mechanisms, including changes in DNA methylation in promoter regions and histone modification (Scott et al., 2006; Esteller, 2008) (Figure 4). This deregulation seems to involve hypermethylated CpG islands that map close to specific miRNAs. When expression of such a miRNA is affected, any methylation of this area will simultaneously alter the expression of any target mRNA and proteins modulated by the epigenetically modified miRNA (Scott et al., 2006). Recent studies involving epigenetic factors and changes to miRNA expression have mostly been restricted to assays on tumor cells. One of the first studies to be published involved miR-127, which acts to repress the tumor-specific expression of the proto-oncogene BCL-6. After treating tumor cells with chromatin-modifying agents, miR-127 showed activity in different types of human cancer cell lines, suggesting that its inactivation in these cells could involve chromatin-induced epigenetic alterations (Saito et al., 2006). Suppression of hsa-miR-9-1, hsa-miR-129-2 and hsa-miR-137 in colorectal cancer is, at least partly, mediated by epigenetic mechanisms such as DNA hypermethylation and histone deacetylation, as demonstrated in a recent study by Bandres et al (2009). This study also emphasized that frequent hypermethylation of these miRNA loci in colorectal cancer was correlated with clinicopathological abnormalities. Expression of hsa-miR-9-1 was associated with positive lymph node biopsies in patients with advanced stages of colorectal cancer. DNA methylation was considered to be the most likely mechanism for diminishing or inhibiting the expression of these specific miRNAs. Considering that these miRNAs are not usually expressed in normal mucosal tissue, this epigenetic alteration would diminish the tracking of disease evolution (Bandres et al., 2009). In breast cancer cell cultures, the homozygous variant of the miRNA hsa-miR-196a-2 (rs11614913, CT) was shown to be significantly associated with diminished risk of breast cancer, and hypermethylation of a CpG island located 700 base pairs upstream of the precursor region of miR-196a-2 led to a reduction in the risk of breast cancer (Hoffman et al., 2009). Based on a series of molecular analyses, these authors suggested that miR-196a-2 might have oncogenic potential in breast cell tumorigenesis, and that functional genetic variations in this miRNA could serve as biomarkers for susceptibility to breast cancer. Lodygin et al. (2008) reported that miR-34a expression was consistently silenced in different types of cancer by aberrant CpG methylation in the promoter region. It was shown that 79.1% of primary prostate carcinomas had CpG methylation and concomitant loss of miR-34a expression. Similar observations with differing proportions were made in carcinoma cells in breast (25%), lung (29.1%), colon (13%), kidney (21.4%) and pancreas tissue (15.7%), and in melanomas (43.2%) and primary melanomas (62.5%).
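The CpG islands discussed above are conventionally defined by the Gardiner-Garden and Frommer criteria (length of at least 200 bp, GC content above 50%, and an observed/expected CpG ratio above 0.6). A minimal sketch of that computation on a hypothetical sequence window:

```python
# Minimal sketch of the Gardiner-Garden/Frommer CpG-island criteria
# applied to a single sequence window; the input is hypothetical.

def cpg_island_stats(seq: str) -> tuple[float, float]:
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    gc_percent = 100.0 * (g + c) / n
    obs_cpg = seq.count("CG")
    exp_cpg = (c * g) / n if n else 0.0     # expected CpG under independence
    ratio = obs_cpg / exp_cpg if exp_cpg else 0.0
    return gc_percent, ratio

def is_cpg_island(seq: str) -> bool:
    gc_percent, ratio = cpg_island_stats(seq)
    return len(seq) >= 200 and gc_percent > 50.0 and ratio > 0.6

if __name__ == "__main__":
    window = "CG" * 120                      # hypothetical, CpG-rich 240 bp window
    print(cpg_island_stats(window))          # (100.0, 2.0)
    print(is_cpg_island(window))             # True
```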
Preservation, Expression and Localization of MicroRNAs in Paraffinized Tissue Samples

Several methods for evaluating miRNA expression profiles have been implemented, including RT-PCR, microarrays and serial analysis of gene expression (mi-RAGE). Independent of the approach, success in applying these techniques is essentially limited by the availability of fresh or frozen clinical tissue samples, which are considered to be the most reliable sources of intact RNA (Zhang et al., 2008). Nevertheless, miRNAs turned out to be less affected by fixation in formalin and embedding in paraffin than mRNAs because of their slower degradation, smaller size and lack of a poly(A) tail. Good correlations were noted for miRNA profiles of RNA extracted from frozen samples and from those embedded in paraffin (Zhang et al., 2008). Samples preserved in paraffin are also useful in evaluating the action of a miRNA on a target gene and determining whether it may result in a change in the expression of the corresponding protein. Changes in the expression of proteins regulating the biogenesis of miRNA can also be evaluated at the cellular level. For such analyses, immunohistochemical techniques are useful as these make it possible to detect the location and/or site of subcellular action of the target protein. As an example, the RNA-binding proteins LIN28 and LIN28B prevent let-7 miRNA precursors from being processed into mature miRNA (Newman et al., 2008). Using immunohistochemistry and tissue microarrays, LIN28 and LIN28B were found to be overexpressed in colon, breast, lung and cervical cancers. Increased expression was associated with physiological repression of let-7 levels and tumor progression, implying a tumor-suppressing role for this miRNA (Viswanathan et al., 2009). In addition, the role of miRNA in tumor tissue samples can be evaluated by means of in situ hybridization (ISH) tests to measure expression levels of specific miRNAs in target cells. Recently, a highly sensitive technique was described for detecting single miRNA molecules in individual cells (Lu and Tsourkas, 2009). The method, known as LNA-ELF-FISH, employs locked nucleic acid oligonucleotides with fluorescence signal amplification, thus allowing miRNAs to be spatially located and quantified inside cells. In a study on gliomas, and especially glioblastoma multiforme, the Dicer-regulated miRNAs miR-222 and miR-339 were identified using ISH, while the endonuclease Dicer and the intercellular adhesion molecule ICAM-1 were evaluated by means of immunohistochemistry. These miRNAs were shown to be expressed by the tumors and negatively regulated ICAM-1, given that the expression of these molecules presented inverse associations in the tissue samples (Ueda et al., 2009). In expression microarrays, there was no difference in Dicer expression between normal prostatic tissue and organ-confined prostate cancer. However, immunohistochemical analysis demonstrated that Dicer immunoreactivity was detected in the basal cells of normal tissue, in proliferative neoplastic cells, and in invasive cancer. The redistribution of Dicer among the cell types seemed to be biologically significant with cancer progression and metastasis, and the level of this endonuclease continued to increase in the abnormal cells (Chiosea et al., 2009). miR-21 is one of the most studied miRNAs in cancer and is highly expressed in breast cancer.
In a cohort analysis using ISH, a progressive increase in the percentage of patient tumors positive for miR-21 was observed, from normal breast tissue (13%) through flat epithelial atypia (47%) and ductal carcinoma in situ (75%) to invasive ductal carcinoma (88%). In addition, the expression of miR-21 target genes such as PTEN, PDCD4 and TM1 was evaluated in the same tumor samples from the cohort at the cellular level, and the cell transformation suppressor TM1 was confirmed as a target of miR-21 in breast tumors, presenting reduced tissue immunoreactivity with progressive lesions, i.e. an inverse relationship with the marker pattern of miR-21 (Qi et al., 2009). Despite rapid advances in understanding the biogenesis and mechanisms of action of miRNAs, many questions regarding their function and influence on central signaling pathways and cell cycle control remain. The complex stochastic nature of gene expression in mammalian cells has wide-ranging impact on phenotypic diversity. It is therefore likely that evaluating mean miRNA expression levels in mixtures of cell populations may result in loss of crucial information for linking miRNA expression to cellular functions. Thus, the physiological role of miRNA in single cells should be more informative in distinguishing the impact of miRNA on signaling networks and cellular pathways relevant to disease (Lu and Tsourkas, 2009).

Gene Therapy and MicroRNAs

Recently, a new technology of artificial miRNA target sites ("templates") for increasing or inhibiting endogenous miRNA regulation was described (Brown et al., 2009). This strategy has been used to detect site-specific target genes in cells, in relation to stem cell therapies and studies on transgenic animals. Through this highly specific new approach a promising strategy has emerged, combining gene therapy with miRNA templates (viruses carrying the target sequence), in an attempt to fully or partially inhibit the expression of these miRNAs. As described earlier, miRNAs can exert their effects either by acting as tumor suppressors or by favoring cancer development (oncomirs). When hyperexpression of miRNAs contributes towards oncogenesis, the rational strategy is to reduce their expression. In this regard, inhibition of specific endogenous miRNAs has been achieved through administration of antisense synthetic oligonucleotides, which are complementary to endogenous mature miRNAs. Oligonucleotide-modified anti-miRNAs (OMAs), also known as 'antagomirs', currently constitute the majority of miRNA inhibition tools (Soifer et al., 2007). Basically, three different types of OMAs are used for inhibiting miRNAs: oligonucleotides with modifications to the 2'-OH group of the ribose residues, involving replacement with 2'-O-methyl (2'-OMe), 2'-O-methoxyethyl (2'-MOE) or locked nucleic acid (LNA). These modifications have been incorporated through knowledge gained from RNA interference (RNAi) techniques, and were essential for offering resistance to enzymatic degradation, thereby improving OMA stability when exposed to the large quantities of nucleases present in blood and the cell environment. Another important structural change incorporated into these oligonucleotides, with a view to improving their pharmacokinetic properties (such as plasma half-life) and increasing the uptake of the molecule by cells, was the introduction of a cholesterol molecule at the 3' terminal region of the nucleic acid (Bijsterbosch et al., 2000).
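Since an antagomir is, at the sequence level, simply the full antisense copy of the mature miRNA, its base sequence can be derived mechanically; the 2'-OMe/LNA and 3' cholesterol modifications discussed above are synthesis chemistry, not sequence. A minimal sketch, using an illustrative mature miRNA sequence:

```python
# Sketch: deriving the base sequence of an antagomir (antisense
# oligonucleotide fully complementary to a mature miRNA).
# Backbone chemistry (2'-OMe, LNA, 3' cholesterol) is not encoded here.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antagomir_sequence(mature_mirna: str) -> str:
    """Reverse complement of the mature miRNA, written 5'->3'."""
    return "".join(COMPLEMENT[n] for n in reversed(mature_mirna.upper()))

mir122_like = "UGGAGUGUGACAAUGGUGUUUG"      # illustrative miR-122-like sequence
print("5'-" + antagomir_sequence(mir122_like) + "-3'")
# -> 5'-CAAACACCAUUGUCACACUCCA-3'
```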
Applications of oligonucleotides that specifically inhibit oncomirs, such as miR-21, have been demonstrated in glioblastoma and breast cancer cell cultures, promoting increased caspase activation and mediating apoptosis in these cells (Chan et al., 2005; Si et al., 2007). Furthermore, suppression of miR-21 gave rise to significant reductions in invasion and lung metastasis in MDA-MB-231 breast cancer cells. The efficacy and significance of several OMAs have been examined and validated in pioneering studies using a model in which miR-122, which is highly expressed in the liver, was silenced after administration of specifically developed OMAs (Krutzfeldt et al., 2005). These authors were the first to demonstrate non-toxic, long-duration silencing generated through intravenous injection of 'antagomirs' (2'-OMe) complementary to miR-122 in mice. Notwithstanding, one of the major obstacles to applying 'antagomirs' in the clinic is achieving effective delivery of the RNAi agent to the target tissue (Soifer et al., 2007). Strategies for overcoming these problems have been developed, for example complexation or covalent attachment of lipids and/or proteins to the delivered small RNA molecules (Dykxhoorn et al., 2006). Other alternatives, such as the use of cationic liposomes and cholesterol, conjugation with RNA-packaging phages, and RNA aptamers that bind to receptors, have been developed to deliver small RNAs to target cells (Soutschek et al., 2004). Research on local delivery of biomolecules should substantially improve the therapeutic opportunities for using miRNAs as targets. Recently, a new class of miRNA inhibitors called "miRNA sponges" was developed, which may be transiently expressed in mammalian cell cultures (Ebert et al., 2007). These "miRNA sponges" are transcripts under the control of strong (RNA polymerase II) promoters that contain multiple, tandemly arranged binding sites for the miRNAs of interest, and they are capable of inhibiting miRNAs as strongly as OMAs (Ebert et al., 2007). For miRNAs that present reduced expression in cancer, restoration of miRNA levels in the diseased tissue should provide a therapeutic benefit through replacement of the target gene regulation. Introduction of double-stranded miRNAs, which are equivalent to the products of endogenous Dicer and analogous in structure to an siRNA (small interfering RNA), may provide transient restoration of underexpressed miRNAs. However, the in vivo application of double-stranded mimetic miRNAs, resembling siRNAs, still needs to be evaluated. To achieve greater persistence of miRNA replacement, a transgenic approach is needed, in which expression of specific miRNAs is induced from a plasmid or viral vector containing an RNA polymerase II or III promoter controlling the expression of a short hairpin RNA, which is subsequently processed into the mature miRNA. At present, only a few studies on the use of miRNAs for in vivo cancer therapy have been published. Gene therapy based on RNAi has been widely used over recent years. Systemic delivery of siRNA/shRNA (short hairpin RNA) with an anti-cancer focus has employed liposomes, polymers and nanoparticles. Similar strategies and technologies used for siRNA delivery into cells may also be used with miRNAs.

Conclusion

miRNAs have emerged over recent years as new regulatory components of the complex mechanisms of gene expression, with implications for many diseases, including cancer.
Emerging evidence increasingly demonstrates that miRNAs may also affect or be affected by genetic and epigenetic mechanisms. In addition, miRNAs and miR-SNPs are powerful tools for studying disease prognosis and, in the near future, hold tremendous therapeutic promise for clinical medicine and for improvements in cancer control and cure rates.
Correction: Peripheral Organs of Dengue Fatal Cases Present Strong Pro-Inflammatory Response with Participation of IFN-Gamma-, TNF-Alpha- and RANTES-Producing Cells

[This corrects the article DOI: 10.1371/journal.pone.0168973.]

Introduction

Dengue is considered the most important mosquito-borne viral disease due to its clinical relevance and rapid spread, nowadays putting at risk about half of the world's population [1]. The etiologic agent, dengue virus (DENV), is distributed as four distinct serotypes (DENV1 to DENV4), and infections can result in a mild flu-like acute illness known as dengue fever (DF) [2]. From an epidemiological view, it is estimated that 390 million dengue infections occur each year, of which nearly 25% are symptomatic [3]. While most patients naturally recover from the non-severe clinical DF course, a small proportion evolves to severe disease, mostly characterized by plasma leakage and hemorrhagic manifestations (namely dengue shock syndrome, DSS, and dengue hemorrhagic fever, DHF) [2,4]. Despite the relevant mortality rates derived from dengue complications (around 20,000 deaths each year) [5], the elucidation of the pathogenic process by which infected patients evolve to the severe forms is still an ongoing challenge. Apart from the relationship between social determinants of health and dengue fatal cases, biological factors such as distinct virulence levels among virus strains and host immunity have been considered key elements in driving patients to severe stages [6,7]. Disease complications triggered by DENV infections enhanced by previously formed opsonizing antibodies were related to altered T cell activation and cytokine production in secondary infections [8-10]. Yet, concerning a host primary response environment, other unknown factors could also play a role in triggering severe dengue. Classical DF symptoms, such as fever and headaches, usually match with high viremia levels, but interestingly the severe forms of dengue (DSS/DHF), when manifested, occur after virus clearance. This observation has raised concerns about the association of severe dengue with immunopathological mechanisms [11,12]. In this context, the investigation of post-mortem severe dengue cases may represent a valuable tool for a better understanding of the immune scenario during a terminal stage. Additionally, a search for evidence regarding cell migration and cytokine production in peripheral tissues may also provide new insights into possible underpinning immune mechanisms linked to the development of severe forms. In a previous report of our laboratory, peripheral organs such as the livers, lungs and kidneys of four fatal dengue cases caused by DENV-3 were histopathologically and ultrastructurally screened [13]. Aside from virus detection in unusual sites such as hepatocytes and type II pneumocytes, all studied organs presented lesions that corresponded to severe dengue cases. In this work, the same post-mortem samples were the object of study for investigation of the cellular immune response and its products. Immunohistochemical analysis revealed a systemic involvement of infection, with mononuclear cells targeted to all of the analyzed tissues. Assessment of the local cytokine response showed increased levels of IFN-γ- and TNF-α-expressing cells in livers, lungs and kidneys, which evidenced a consistent pro-inflammatory induction in these tissues.
Co-expression of DENV RNA and IFN-γ or TNF-α by Kupffer cells confirmed the specific DENV induction of cytokine production, as found by in situ hybridization and IHC. Furthermore, altered vascular permeability in all analyzed organs was also suggested by the presence of increased levels of local RANTES-producing cells. Ultimately, this work brought additional evidence that an uneven cellular immune response to DENV can contribute to disease severity. Given the limited number of reports investigating post-mortem samples from severe dengue cases, this work contributes importantly to narrowing the gaps in dengue immunopathogenesis.

Ethical procedures

All procedures performed during this work were approved by the Ethics Committee of the Oswaldo Cruz Foundation/FIOCRUZ, under the number CAEE: 47525115.3.0000.5248, for studies with dengue fatal cases and controls. For the use of tissue samples, informed consent was provided verbally by family members to the responsible physician, Dr. Carlos Alberto Basíllio de Oliveira, at the time of the necropsy. This consent procedure was approved by the ethics committee.

Human fatal cases

The human tissues analyzed in this study (livers, lungs and kidneys) were obtained from four dengue fatal cases that occurred during a Brazilian outbreak of DENV-3 in 2002 in Rio de Janeiro. All patients died with a clinical diagnosis of severe dengue, with infections confirmed by positive serum IgM antibodies. The four negative controls, of both sexes and ranging from 40 to 60 years old, were non-dengue cases and did not present any other infectious disease. More information about these cases can be found in a previous report of our laboratory [13]. Briefly:

Case 1. A 63-year-old male patient who developed a sudden onset of headache, myalgia, anorexia and abdominal pain. A few days later the patient presented diarrhea, thrombocytopenia (platelets 79,000/mm3) and hemoconcentration (hematocrit 59%). The case eventually evolved to shock with severe pulmonary congestion, followed by death with a clinical diagnosis of dengue hemorrhagic fever.

Case 2. A 21-year-old female patient who experienced fever, myalgia and headache with progression to metrorrhagia, nausea, vomiting and diarrhea. The patient also presented severe leukopenia and thrombocytopenia (platelets 10,000/mm3). During hospitalization, the case progressed to respiratory failure, followed by multiple organ failure and refractory shock.

Case 3. A 41-year-old female presenting fever, weakness, abdominal pain, leukocytosis, a hematocrit of 48% and fluid in the abdominal cavity. The patient was diagnosed with dengue hemorrhagic fever and died from acute pulmonary edema.

Case 4. A 61-year-old female who manifested classical dengue symptoms (fever, myalgia, vomiting and diarrhea). The patient evolved to a severe clinical picture and died from acute pulmonary edema with sudden cardiac arrest.

Histopathological analysis

Tissue samples from the human necropsies were fixed in formalin (10%), embedded in paraffin, cut into 4 μm sections, deparaffinized in xylene and rehydrated with alcohol, as described previously [14]. Sections were stained with hematoxylin and eosin (H.E.) for histological examination and visualized under a Nikon ECLIPSE E600 microscope.

Immunohistochemical procedure

For immunohistochemical studies, the paraffin-embedded tissues were cut (sections of 4 μm), deparaffinized in xylene and rehydrated with alcohol.
Antigen retrieval was performed by heating the tissue in the presence of citrate buffer [15]. Tissues were blocked for endogenous peroxidase with 3% hydrogen peroxide in methanol and rinsed in Tris-HCl (pH 7.4). To reduce non-specific binding, sections were incubated for 30 min at room temperature. Samples were then incubated overnight at 4˚C with anti-human antibodies that recognize CD4.

Quantification of positive cells by immunohistochemistry

Slides were evaluated using a Nikon ECLIPSE E600 microscope with a coupled CoolSNAP-Pro cf color camera. For each specific antibody, 50 images (fields) were randomly acquired at 400x magnification using the software Image Pro version 4.5. After collecting the frames, positive cells were quantified in each of the 50 fields in every organ and the median number of positive cells was determined. All analyses were performed blind, without prior knowledge of the studied groups. After quantification, the frames exhibited in figures were selected to be more informative according to specific areas in the analyzed tissues. S1 Table contains the raw data.

In Situ Hybridization

The assessment of DENV in liver sections was performed by in situ hybridization using a digoxigenin-tagged probe (5'-TGACCATCATGGACCTCCA-3') which anneals within the negative strand of the DENV RNA genome, as previously described [13]. Briefly, paraffin-embedded sections of tissues were deparaffinized and digested with pepsin (1.3 mg/mL) for 4 min at room temperature. Tissues were incubated with the probe cocktail at 60˚C for 5 min and then kept overnight at 37˚C, for denaturation and hybridization, respectively. Next, samples were washed with 0.2x SSC and 2% bovine serum albumin at 55˚C for 5 min. The probe-target complexes were revealed by the activity of alkaline phosphatase conjugated to anti-digoxigenin.

Co-staining of DENV RNA and pro-inflammatory cytokines

Co-staining of virus and IFN-γ or TNF-α was performed, as previously described [13], using in situ hybridization and immunohistochemistry. Dengue case 2 was considered for this analysis. Briefly, the DENV probe was first tagged with 5' digoxigenin and locked nucleic acid (LNA) modified (Exiqon). The resulting complexes were visualized using an anti-digoxigenin-alkaline phosphatase conjugate with nitro-blue tetrazolium and 5-bromo-4-chloro-3'-indolyl phosphate as the chromogen. Detection of IFN-γ or TNF-α was then performed by immunohistochemistry (anti-IFN-γ antibody, ABCAM ab133566, rabbit; anti-TNF-α, ABCAM ab6671, rabbit) using a Leica Bond Max automated platform (Leica Biosystems) and DAB as the chromogen. No counterstain was done. Data were analyzed by the computer-based Nuance system (Caliper Life Sciences, Hopkinton, MA, USA), which separates the different chromogenic signals, converts them to fluorescent-based signals and combines them to determine co-expression.

Statistical analyses

Data were analyzed with GraphPad Prism software v5.1 (La Jolla, USA) using non-parametric statistical tests. Significant differences between the analyzed groups (controls and DENV patients) were determined using the Mann-Whitney test with a p < 0.05.

Results

The liver as a target of immune-mediated mechanisms in dengue fatal cases

The liver is considered an important target for DENV infection and is the most common organ to be involved in the disease. Hepatic alterations are key characteristics found in DENV cases.
As observed in biopsies and autopsies of previously reported fatal cases [16], hepatocytes and Kupffer cells are described as important targets during DENV infection [17,18]. For this reason, liver samples of the four DENV-3 fatal cases were first considered for our evaluations. Histopathological studies of all samples showed diffuse mononuclear cell infiltrates, mainly around the portal space (Fig 1 panel a). Detection of CD68+ cells revealed the presence of hyperplasic Kupffer cells and/or circulating macrophages (Fig 1 panel b), although quantification analysis did not show a statistically significant difference when comparing dengue to control samples (Fig 1 panel c). Yet, we detected numerous CD4+ (Fig 1 panel d) and CD8+ (Fig 1 panel f) cells.

In order to characterize the ongoing inflammatory process in the hepatic tissue, we also investigated the cytokine production by the mononuclear cell types found in the liver. In this case, cells expressing TNF-α, IFN-γ, IL-10, TGF-β and RANTES were considered for quantification. We observed a great number of cells producing these cytokines in the midzonal area and, to a lesser extent, in other hepatic areas. Production of TNF-α was detected in Kupffer cells and monocytes, mainly in the sinusoidal capillaries (Fig 2 panel a), while IFN-γ was found mostly in lymphocytes, Kupffer cells and monocytes (Fig 2 panels b and c). In sinusoidal capillaries, we also detected groups of cells with an anti-inflammatory profile, such as IL-10-expressing monocytes and lymphocytes (Fig 2 panel d) and TGF-β-expressing macrophages and Kupffer cells (Fig 2 panel h). The chemokine RANTES/CCL5 was detected mainly in endothelium and Kupffer cells (Fig 2 panels i and j). Quantification of cells producing the inflammatory cytokines TNF-α, IFN-γ and RANTES/CCL5 revealed a significant increase (4-, 4.5- and 3-fold, respectively) in the dengue group compared to controls (Fig 2 panels e, f and l), while the number of cells expressing the anti-inflammatory cytokines IL-10 and TGF-β did not change significantly in either group (Fig 2 panels g and k).

Histopathological analysis and cytokine profile present in the lungs of dengue fatal cases

Previous studies in our laboratory revealed that lung tissues from fatal dengue cases showed severe damage, represented by diffuse areas of hemorrhage and edema [13]. Here, we aimed to investigate a possible contribution of an exacerbated pro-inflammatory response that could be related to this local tissue impairment. After analysis of lung sections of dengue cases we observed a diffuse mononuclear infiltrate in alveolar septa and edema areas (Fig 3 panel a), thus indicating that this tissue could also be targeted by immune mechanisms triggered by infection. The immunohistochemical assay revealed the presence of macrophages (CD68+ cells) (Fig 3 panel b) in alveolar septa, and CD4+ (Fig 3 panel d) and CD8+ T cells (Fig 3 panel f) mainly in the blood vessels. The quantification analysis revealed a 4-fold increase of lymphocytes in the dengue group when compared to non-dengue samples (Fig 3 panels e and g), whereas the number of CD68+ cells was not statistically different from the controls (Fig 3 panel c). We next aimed to identify the local cytokine profile in the lungs to characterize the ongoing inflammatory process in fatal cases of severe dengue. The evaluation of cytokine-expressing cells in the tissues of the dengue group exhibited pro- and anti-inflammatory profiles occurring simultaneously, which revealed an atypical elicited immune response.
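The per-field quantification and group comparison behind these statements (50 random fields per organ, median of positive cells per field, Mann-Whitney at p < 0.05, as in the Methods) can be illustrated with a minimal sketch; the counts below are simulated stand-ins, not the study's data, and scipy is assumed to be available.

```python
# Minimal sketch of the per-field quantification and group comparison
# described in the Methods. Counts are simulated, not study data.
import random
from statistics import median
from scipy.stats import mannwhitneyu

random.seed(0)
dengue_fields  = [random.randint(8, 20) for _ in range(50)]   # positive cells/field
control_fields = [random.randint(1, 6)  for _ in range(50)]

stat, p = mannwhitneyu(dengue_fields, control_fields, alternative="two-sided")
fold = median(dengue_fields) / median(control_fields)
print(f"median dengue = {median(dengue_fields)}, median control = {median(control_fields)}")
print(f"fold change = {fold:.1f}, Mann-Whitney p = {p:.2e}"
      f" -> {'significant' if p < 0.05 else 'not significant'}")
```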
TNF-α, which is an important cytokine related to dengue pathogenesis, was detected mainly in alveolar macrophages (Fig 4 panel a). Increased numbers of TNF-α- (Fig 4 panel d), IFN-γ- (Fig 4 panels b and e), IL-10- (Fig 4 panels c and f) and TGF-β- (Fig 4 panels g and i) expressing cells, such as macrophages and lymphocytes, were characterized in the tissues of dengue cases, when compared to controls. Vascular permeability impairment in dengue cases was also suggested by the presence of several RANTES-expressing endothelial cells and alveolar macrophages in the perivascular space (Fig 4 panel h). The quantification of these subpopulations showed an increase in dengue cases when compared to controls (Fig 4 panel j).

Kidneys in severe dengue are targeted by pro-inflammatory cells

Kidney involvement in dengue virus infection is increasingly being recognized, since the prevalence of proteinuria and hematuria has been reported to be as high as 70-80% [19]. Mechanisms that may drive kidney complications in dengue are not clear but possibly result from indirect pathways via host immunity. Due to these knowledge gaps, renal sections extracted from post-mortem dengue cases were also considered for investigation. Tissue analysis of severe cases revealed a diffuse mononuclear infiltrate, more pronounced in cortical and medullar regions (Fig 5 panels a and b, respectively). CD68+ cells were detected mainly among mesangial cells (of monocyte or smooth muscle origin, responsible for filtration, structural support, and phagocytosis), located in the glomerulus (Fig 5 panel c). The quantification of CD68+ cells revealed an 8-fold increment of this population in dengue cases when compared to non-dengue patients (Fig 5 panel d). The number of CD4+ T lymphocytes, located primarily within the renal glomerulus, was also found to be increased in dengue group renal sections, when compared to controls (Fig 5 panels e and f). CD8+ T cells were detected mainly in the medullar zone (Fig 5 panel g) and their quantification showed no statistical difference between dengue and control groups (Fig 5 panel h). The evaluation of the cytokine profile in renal samples revealed TNF-α being produced mainly by monocytes/macrophages present in the medullar region (Fig 6 panel a). IFN-γ-producing cells, such as macrophages, were found in blood vessels also located in the medullar region (Fig 6 panel b). The anti-inflammatory cytokines IL-10 and TGF-β were observed mostly in lymphocytes within the renal glomerulus (Fig 6 panel c) and in macrophages (Fig 6 panel g), while RANTES production was detected mainly in macrophages only (Fig 6 panel h). Quantification of cells producing these cytokines revealed a general increase in dengue cases when compared to controls. The number of cells producing TNF-α and IFN-γ in dengue cases was 3- and 5-fold higher, respectively, in comparison with non-dengue cases (Fig 6 panels d and e). Concerning the number of cells expressing TGF-β (Fig 6 panel i), IL-10 (Fig 6 panel f) and RANTES (Fig 6 panel j), in dengue cases we noted increments of about 2.5-, 3- and 13-fold, respectively.

DENV-specific induction of pro-inflammatory response

In order to confirm the participation of DENV-infected cells in inducing the local pro-inflammatory response, DENV RNA and cytokine production were co-tested in host samples. For this evaluation, the liver was chosen due to its importance as a target organ in dengue pathogenesis.
As expected, light microscopy of a dengue fatal case exhibited mononuclear cell infiltrates with the presence of Kupffer cells (KC) in the sinusoid capillaries (Fig 7 panels a and b), while the non-dengue control presented regular hepatic structures with resident KCs (Fig 7 panels i and j). As detected by in situ hybridization and IHC, the dengue case showed many areas of co-expression of DENV RNA and IFN-γ or TNF-α in the hepatic tissue (Fig 7 panels g and h). In this case, Kupffer cells were the main targets of co-staining. The control case presented no staining for either DENV or the studied cytokines. These data confirmed, on a qualitative basis, the specific DENV induction of the pro-inflammatory cytokine response and also addressed virus spread versus IFN-γ or TNF-α expression in the liver sample of a dengue fatal case.

Discussion

The elucidation of the mechanisms that underlie the development of severe dengue is still seen as a major challenge in the field [20]. Progress in dengue research concerning such aspects has been hampered by a set of factors that include the peculiar nature of DENV infection together with the absence of an immunocompetent animal model capable of mimicking severe dengue symptoms [21]. Under this scenario, the investigation of post-mortem samples extracted from severe dengue human cases can bring valuable information to better understand the pathogenic basis of the disease. Immune mechanisms are thought to drive the mild flu-like illness (DF) to the severe hemorrhagic stages of dengue, since such manifestations occur after virus clearance from the circulation [22]. In this work, four fatal human cases that experienced the severe hemorrhagic symptoms of dengue had three of their peripheral sites (liver, lung and kidney) investigated. Organs were elected according to previously known relevance or knowledge gaps related to dengue [16-19,23]. Research was focused on cellular immunity, in which mononuclear cell migration and cytokine production were considered in the evaluation of the host response in the fatal cases of severe dengue.

In a previous report from our laboratory, we found that the same post-mortem samples presented tissue damage consistent with severe dengue clinical cases [13]. The inspection of these tissues under an immunological approach was our major concern in this work, since evidence of an exacerbated host immunity would somehow be linked to organ impairments. From a first panoramic view, all evaluated sites from fatal cases (liver, lungs and kidneys) were targeted by mononuclear cell migration, such as macrophages and T cells. Apart from this, the analyzed peripheral organs exhibited higher levels of pro-inflammatory cells, as found by their production of TNF-α, IFN-γ and RANTES. Those observations strongly support the proposed theories claiming that exaggerated [24,25] or misdirected [8,26-28] T-cell responses would eventually lead the host to severe clinical stages. Numerous reports describe the release of different cytokines and soluble receptors during dengue infection [25,29,30], which has also been associated with an unfavorable disease outcome [31]. In this work, the increased levels of RANTES-producing endothelial cells may have contributed to the occurrence of cell infiltrates, since this chemokine signals for cell movement from the bloodstream into tissues [32,33].
Therefore, this would imply a close connection between RANTES and the altered vascular permeability events related to severe dengue. In this case, higher RANTES production and secretion would favor plasma leakage and lymphocyte infiltration into the liver, lungs and kidneys, hence potentially mediating the inflammatory response found in these peripheral organs. Among all chemokines, RANTES is particularly associated with viral infections [34]. RANTES, also known as CCL5, is an early expressed chemokine induced by pattern recognition receptors, but it can also be induced by TNF-α and IFN-γ at late stages of infection [35], which is the case in a terminal dengue situation.

(Fig 7 legend: Co-expression of DENV and pro-inflammatory cytokines in the liver. Liver samples of dengue fatal case 2 were processed for in situ hybridization and IHC procedures. DENV was detected by a probe that anneals to a conserved sequence within the viral RNA negative strand. IFN-γ and TNF-α were assessed by immunohistochemistry assay. Probe-target complexes were revealed by alkaline phosphatase activity and cytokines were identified by standard DAB reactions. The chromogenic signals were converted to fluorescent signals.)

The discussion about chemokine signaling and its effects on infectious diseases can sometimes be controversial in the literature. Together with other chemotactic effectors (CXCL9/10/11) and immune cell response-modulating cytokines (IL-6, IL-7 and BAFF), RANTES has been associated with immune enhancement and increased vascular permeability following dengue virus infection [36,37]. Conversely, in vitro experiments revealed that hantavirus can infect human lung microvascular endothelial cells (HMVEC-Ls) and stimulate secretion of RANTES by these cells without increasing vascular permeability [38]. Together, these observations still raise skepticism and call for a more careful discussion concerning a direct link between RANTES and vascular permeability enhancement in dengue.

The higher numbers of TNF-α-producing cells characterized in the post-mortem samples were in line with reports in the literature. TNF-α is considered a major pro-inflammatory mediator in dengue infections, since its activity has been linked to the immunopathogenesis of the disease [28,39]. Apart from the existing drawbacks regarding animal models for reproducing dengue, reports showed that the inhibition of TNF-α by administering specific antibodies was associated with reduced severity [39,40]. Although the TNF-α inhibition assay was key to inferring its importance in severe dengue, in practical terms, targeting TNF with antibody or receptor antagonists for treating human diseases is controversial. In other diseases with an immunological basis, not all patients were helped despite the clinical effectiveness of anti-TNF approaches. This fact perhaps reflects the existence of distinct underlying mechanisms that drive the symptoms apart from the TNF network [41].

Along with TNF-α, IFN-γ was also found to be increased in terms of production by infiltrated mononuclear cells in peripheral tissues of the studied fatal dengue cases. Hence, the present work showed IFN-γ as a pro-inflammatory element that may contribute to tissue distress, also representing in situ evidence of disease severity. In a recent report, metabolomics was adopted to screen dengue-induced metabolites in 116 dengue patients (60 presenting DF and 56 with severe dengue).
It was found that circulating IFN-γ combined with serotonin levels provided accurate early prognosis of severe dengue, thus revealing its importance and an additional clinical usage of this pro-inflammatory cytokine to assess severity [42].

The mononuclear cell migration that targeted the studied tissues was also proposed to be correlated with local impairments. In liver samples, CD4+ T cells, Kupffer cells and monocytes were characterized near hepatocellular necrosis and steatosis in the presence of the pro-inflammatory cytokines IFN-γ and TNF-α. Additional alterations of hepatocytes, such as nuclear vacuolar degeneration and the presence of swollen mitochondria, suggested, based on preceding studies, an ongoing mechanism of apoptotic cell death possibly mediated by the cytokine environment [13,43-47]. In our previous report, the lung scenario of the studied dengue fatal cases was marked by a peculiar histopathological finding: the presence of septum thickening with an increase in cellularity characterized a hyaline membrane formation, possibly due to dengue shock syndrome [13]. As the activity of the pro-inflammatory cytokines IFN-γ and TNF-α has previously been associated with lung injury [48,49], we envisioned a possible correlation between local cytokine production and tissue alterations. Under this idea, the hyaline membrane structure could be formed as a result of an exacerbated cytokine release by hyperplasic alveolar macrophages combined with tissue alterations such as edema and hemorrhage, which is also found in other, non-related diseases [50,51].

Parenchymal and circulatory damage found in kidney samples would likely be immune-mediated due to the presence of mononuclear infiltrates in the cortical and medullar regions. A recent report correlated kidney injuries of severe dengue cases with the local recruitment of T cells [52]. In line with our findings, the authors also found that infiltrated CD8+ T lymphocytes were outnumbered by CD4+ T cells, suggesting an important role of this subpopulation in tissue damage. Hence, considering all analyzed tissues under the above circumstances, it would be reasonable to suggest a key role of cellular immunity in determining local tissue alterations/dysfunctions. It is important to note that, when referring to CD8-expressing lymphocytes, a number of different cell populations other than classical T cells may also be taken into account. Due to their close morphology, NK cells and mucosal-associated invariant T (MAIT) cells are examples of CD8-expressing subsets that may also play roles in severe dengue. While the participation of NK cells in DENV infection has been extensively reported [53-57], the role of MAIT cells in this disease is still a novel and eye-catching matter of debate [58].

At this point, a relevant question emerges concerning the induction of the evident immune reaction found in the analyzed tissues. Considering the environmental circumstances and the debilitating situation of a patient under severe dengue, it would be possible that immunity induced against other opportunistic pathogens was also occurring. Such a hypothesis cannot be excluded; however, the DENV-specific participation in the elicited immunity was confirmed in this work. In situ hybridization together with IHC experiments showed, qualitatively, that DENV produces a direct effect on pro-inflammatory cytokine production.
As noted from the history of symptoms, the laboratory workup [13] and our present evaluation, we consider that the major element leading to the observed effects on immunity is the infection by DENV.

Under the immune-mediated theory for the pathogenesis of dengue, anti-inflammatory or regulatory cytokines would, at first glance, contribute to a better prognosis. An interesting fact that occurred during tissue analysis (mainly of liver and kidney samples) was the detection of anti-inflammatory cells producing IL-10 or TGF-β, even in the presence of the above-discussed pro-inflammatory environment. One hypothesis to explain this atypical finding would be the triggering of a host immunity attempt to circumvent the local inflammatory process. Under this idea, as the studied tissues were previously characterized with local impairments [13], this would indicate the existence of a strong ongoing inflammatory effect capable of overwhelming regulatory responses. In line with this assumption, a report in the literature suggested that the overexpression of pro-inflammatory cytokines can exert an inhibitory influence on regulatory cells [59]. Another recent report revealed that, in fact, a disturbance in the balance between inflammatory (IL-6 and IL-8) and anti-inflammatory (IL-10) cytokines would characterize possible mechanisms related to the occurrence of hemorrhagic manifestations in dengue [60]. Additionally, IL-10 has also been interpreted as a marker of disease progression in severe dengue cases [61]. Regardless of this corroborating evidence from the literature, we still find it difficult and risky to draw conclusions on these aspects from the studied post-mortem samples. Investigations considering the time dependency of the immune events along with the host clinical evolution would still be necessary for a more consistent description of this pro- versus anti-inflammatory balance.

In conclusion, the study of post-mortem samples from peripheral organs of severe dengue cases provided valuable information about the local environment under an immunological approach. A strong ongoing pro-inflammatory response was suggested to be occurring in liver and, mainly, in lung and kidney samples. The presence of mononuclear cell infiltrates, higher counts of pro-inflammatory cells (as found by the production of TNF-α, IFN-γ and RANTES) and the apparent dominance of inflammation over anti-inflammatory elements (such as IL-10 and TGF-β) were the major evidence for such a characterization. Apart from many other known hypotheses for the determination of severe dengue cases [20], this work provided additional evidence supporting the cellular immune-mediated theories, hence contributing to a better understanding of dengue pathogenesis.
The first detection of the 232 GHz vibrationally excited H2O maser in Orion KL with ALMA

We investigated the ALMA science verification data of Orion KL and found a spectral signature of the vibrationally excited H2O maser line at 232.68670 GHz (nu2=1, 5(5,0)-6(4,3)). This line has been detected in circumstellar envelopes of late-type stars so far but not in young stellar objects including Orion KL. Thus, this is the first detection of the 232 GHz vibrationally excited H2O maser in star-forming regions. The distribution of the 232 GHz maser is concentrated at the position of the radio Source I, which is remarkably different from other molecular lines. The spectrum shows a double-peak structure at the peak velocities of -2.1 and 13.3 km s^-1. It appears to be consistent with the 22 GHz H2O masers and 43 GHz SiO masers observed around Source I. Thus, the 232 GHz H2O maser around Source I would be excited by the internal heating by an embedded protostar, being associated with either the root of the outflows/jets or the circumstellar disk around Source I, as traced by the 22 GHz H2O masers or 43 GHz SiO masers, respectively.

Introduction

Water is one of the most abundant interstellar molecules after H2 and, hence, it is important for interstellar chemistry and the physics of molecular clouds (e.g. van Dishoeck et al. 2011). However, due to the large atmospheric opacity, ground-based observations of the H2O lines in the radio and infrared bands are almost impossible except for the isotopic species (e.g. HDO and H2^18O) and strong maser lines. In particular, the 6(1,6)-5(2,3) transition at 22 GHz (lower state energy E_l = 642 K) is known to show extremely strong maser emission in circumstellar envelopes (CSEs) around late-type stars, young stellar objects (YSOs) in star-forming regions (SFRs), and active galactic nuclei. The 22 GHz maser has been used as a unique probe of dense gas and its dynamics with very long baseline interferometers (VLBI) thanks to its extremely high brightness and compact structure (e.g. Chapman & Baan 2007). Other H2O maser lines are also detected at millimeter/submillimeter wavelengths (Humphreys 2007). Lower excitation lines at 183 GHz (E_l = 196 K) and 325 GHz (E_l = 454 K) are detected both in CSEs and around YSOs, while some of the higher excitation lines, including vibrationally excited lines, are detected only in CSEs. Multi-transition studies of H2O maser lines could be powerful tools to investigate shocked regions in CSEs and YSOs at the highest spatial resolution achieved with VLBI and millimeter/submillimeter interferometers when combined with theoretical models (Neufeld & Melnick 1990, 1991).

In this Letter, we report the detection of the vibrationally excited H2O maser line at 232.68670 GHz (E_l = 3451 K) in the massive SFR Orion KL at a distance of 420 pc (Hirota et al. 2007; Kim et al. 2008) with the Atacama Large Millimeter/Submillimeter Array (ALMA). This line has been detected in late-type stars so far but not in YSOs including Orion KL (Menten & Melnick 1989). Thus, this is the first detection of the 232 GHz H2O maser line in YSOs.

Observation

We employed the public data obtained with the ALMA science verification (SV) on 2012 January 20. They are part of a spectral line survey toward the Orion KL region at band 6 (215-245 GHz).
The tracking center position of Orion KL was set to R.A. = 05h35m14.35s and decl. = -05°22'35".0 (J2000). The data consist of several spectral settings, and the net on-source time for each setting was about 20 minutes. The baseline lengths ranged from 17 to 265 kλ (from 22 to 345 m) for an array consisting of 16 × 12 m antennas. The primary beam size of each 12 m antenna is about 30" at band 6. The spectral resolution of the ALMA correlator was 488 kHz, corresponding to a velocity resolution of 0.60-0.65 km s^-1 over the observed frequency range. Dual polarization data were observed simultaneously. We made synthesis images with the calibrated data for selected observing frequency ranges by using the Common Astronomy Software Applications (CASA) package. The natural weighted beam size was 1".7 × 1".4 with a position angle of 171°. The resultant typical rms (root-mean-square) noise level is 0.01-0.03 Jy beam^-1 for each channel map. For comparison, we also made synthesized images of selected lines of methyl formate (HCOOCH3), as discussed later. The mapped lines are summarized in Table 1.

In addition to the SV data, we analyzed ALMA cycle 0 data for the continuum emission at band 6 in the Orion KL region. The observation was done in the extended configuration on 2012 April 08 with 17 × 12 m antennas. The observed frequency ranges were 240-244 GHz and 256-260 GHz. The ALMA correlator was set for low-resolution wideband continuum observations, and the spectral resolution was 15.625 MHz. The line emission was subtracted from the visibility data, and the effective bandwidth was almost half of the observed frequency range. The synthesis imaging was done with the CASA software package. The uniform-weighted synthesized beam size was 0".74 × 0".56 at a position angle of 101°. The on-source integration time was 30 s, and the resultant rms noise level of the continuum image was 7 mJy beam^-1. Further details will be published in a forthcoming paper (T. Hirota et al., in preparation).

Results

First, we inspected the observed spectra of the ALMA SV data around the frequency range close to the 232 GHz H2O line. As a result, we found a significant spectral feature corresponding to a line-of-sight velocity with respect to the local standard of rest (LSR) of about 11 km s^-1. We checked the molecular line database Splatalogue, which is a compilation of the JPL, CDMS, and Lovas/NIST catalogs (Pickett et al. 1998; Müller et al. 2005; Lovas 2004), to check for possible contamination by other spectral lines. We found that the torsionally excited HCOOCH3 line at 232.68393 GHz (v_t = 1, 19(10,10)-18(10,9) E) could be another candidate for this line if the source LSR velocity is about 8 km s^-1. Thus, one should be cautious in identifying the detected lines. It is well known that Orion KL shows an enormous number of molecular lines, described as a line forest (Beuther et al. 2005). The ALMA data also show a number of spectral lines, and a significant fraction of them are unassigned to known molecular lines. According to previous line surveys and interferometer observations, these molecular lines suggest complex velocity/spatial components in Orion KL, such as the Hot Core, the Compact Ridge, SMA1, and Source I (Blake et al. 1986; Wright et al. 1996; Beuther et al. 2005; Favre et al. 2011). All of them show different radial velocities, velocity widths and spatial distributions. In addition, they are known to show significant chemical differentiation.
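As a quick consistency check on the quoted spectral resolution, the snippet below (a minimal sketch; the constant and formula are standard radio-astronomy arithmetic, not taken from the paper) converts the 488 kHz channel width into a velocity width at the 232.68670 GHz line frequency using dv = c Δν/ν:

```python
# Channel width to velocity resolution: dv = c * dnu / nu (radio convention).
C_KMS = 299792.458  # speed of light [km/s]

def velocity_resolution(channel_width_hz: float, freq_hz: float) -> float:
    """Velocity width [km/s] of one spectral channel at a given sky frequency."""
    return C_KMS * channel_width_hz / freq_hz

dv = velocity_resolution(488e3, 232.68670e9)
print(f"{dv:.2f} km/s")  # ~0.63 km/s, within the quoted 0.60-0.65 km/s range
```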
For example, oxygen-bearing organic molecules such as HCOOCH3 are known to be distributed mainly in the Compact Ridge at an LSR velocity of 8 km s^-1 with a linewidth of 2 km s^-1, while nitrogen-bearing species tend to peak at the Hot Core at an LSR velocity of 5 km s^-1 and with wider velocity widths (Blake et al. 1986; Wright et al. 1996; Beuther et al. 2005; Favre et al. 2011). On the other hand, the SiO masers are distributed within 100 AU of the radio Source I (Menten & Reid 1995; Reid et al. 2007; Kim et al. 2008). According to observations with the Very Large Array (VLA), the 22 GHz H2O maser features are distributed in more extended regions, but they are concentrated around Source I, where they are called the "shell masers", and in the Compact Ridge (Gaume et al. 1998). If the detected feature really originates from the H2O maser at 232.68670 GHz, it can be distinguished from other thermal lines based on the distribution and velocity structure of the emitting regions.

We then made synthesis images of the spectral feature of the 232.68393 GHz HCOOCH3 line and/or the 232.68670 GHz H2O line (hereafter the blended HCOOCH3/H2O feature) by using the calibrated SV data. The results are shown in Figure 1. For comparison, we show a reference image of another HCOOCH3 line at 232.73862 GHz (v_t = 1, 19(8,11)-18(8,10) E) having a similar frequency, lower state energy, and expected intensity (hereafter the pure HCOOCH3 feature). As can be seen in Figure 1, the overall distributions and peak intensities are quite similar. For example, both the blended HCOOCH3/H2O and the pure HCOOCH3 maps show four dominant compact condensations coincident with the Hot Core, the Compact Ridge, the Northwest peak, and IRc 7. They are consistent with previous HCOOCH3 observations (Favre et al. 2011). One of the brightest peaks is coincident with the Compact Ridge, with integrated intensities of 3.5 Jy beam^-1 km s^-1 and 4.9 Jy beam^-1 km s^-1 for the blended HCOOCH3/H2O feature and the pure HCOOCH3 line, respectively. However, one can note a striking difference: only the blended HCOOCH3/H2O feature shows a significant peak at the position of Source I. This source is also associated with the strong SiO masers (Reid et al. 2007; Kim et al. 2008) and 22 GHz H2O masers (Gaume et al. 1998). By subtracting the pure HCOOCH3 map from the blended HCOOCH3/H2O map, the residual emission component is clearly concentrated at the Source I position, as shown in Figure 1(c). It is thought to be the contribution from the H2O line. A negative component in the Compact Ridge could be due to an intensity variation of the HCOOCH3 maps between the two transitions. We also imaged five more HCOOCH3 lines which are not affected by contamination from other molecular lines (Table 1), and we found that none of them shows a significant peak at the Source I position. Therefore, the emission features in the blended HCOOCH3/H2O map can be separated into HCOOCH3 and H2O lines; HCOOCH3 is extended over the Hot Core, the Compact Ridge, the Northwest peak, and IRc 7, while H2O is only distributed around Source I.

To investigate the spatial and velocity structure of the blended HCOOCH3/H2O feature in more detail, we made velocity channel maps and spectra at selected positions, as shown in Figures 2 and 3, respectively. The dominant emission components of the blended HCOOCH3/H2O feature are found in the velocity channels from 6.9 to 17.0 km s^-1, as shown in Figure 2.
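The double-peaked profile toward Source I characterized just below is quantified with a two-component Gaussian fit. As a hedged sketch of that kind of decomposition, the snippet fits synthetic data built from the fitted peak fluxes and velocities quoted in the text (the line widths and noise level are assumed; this is not the actual ALMA spectrum):

```python
# Sketch of a two-component Gaussian decomposition of a double-peaked line.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    """Sum of two Gaussian components in velocity space."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

v = np.arange(-15.0, 25.0, 0.65)                  # km/s, ~channel spacing
rng = np.random.default_rng(0)
truth = (0.28, -2.1, 3.0, 0.43, 13.3, 3.0)        # Jy, km/s (widths assumed)
spec = two_gauss(v, *truth) + rng.normal(0, 0.02, v.size)

p0 = (0.3, -2.0, 2.0, 0.4, 13.0, 2.0)             # initial guesses
popt, pcov = curve_fit(two_gauss, v, spec, p0=p0)
errs = np.sqrt(np.diag(pcov))
for name, val, err in zip(("a1", "v1", "s1", "a2", "v2", "s2"), popt, errs):
    print(f"{name} = {val:7.2f} +/- {err:.2f}")
```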
They are most likely attributed to the HCOOCH3 line blended with the H2O line, while higher velocity components are affected by contamination from the ethyl cyanide (13CH3CH2CN) line at 232.67737 GHz. Other weak emission components can be seen in the velocity channels from 17.0 to 27.1 km s^-1 and from -8.2 to -3.2 km s^-1. They are identified as the methyl acetylene (13CH3CCH) and acetone ((CH3)2CO) lines at 232.67074 GHz and 232.69487 GHz, respectively. However, the channel maps show a notable condensation around Source I with a velocity range wider than that of the HCOOCH3 and other molecular lines. These velocity structures can be seen more clearly in the spectra in Figure 3. The HCOOCH3 lines toward the Hot Core and the Compact Ridge show narrower linewidths of about 2 km s^-1 at peak velocities of 8 km s^-1. In contrast, the spectrum of the blended HCOOCH3/H2O feature shows a double-peaked structure over the velocity range from -10 to 20 km s^-1. The peak flux densities are derived by a two-component Gaussian fit to be 0.28±0.02 Jy and 0.43±0.02 Jy at velocities of -2.1±0.6 km s^-1 and 13.3±0.4 km s^-1, respectively. On the other hand, no spectral feature is detected for pure HCOOCH3 toward Source I.

Interestingly, the spectral profile of the blended HCOOCH3/H2O feature toward Source I appears to be analogous to those of the 22 GHz shell masers (Gaume et al. 1998) and the 43 GHz SiO (v=1) masers (Kim et al. 2008), as shown in Figure 4. One can see a common structure showing double peaks at almost the same velocities and velocity ranges. Since the higher velocity features of the SiO (v=1) maser appear slightly redshifted with respect to the H2O maser lines, the blended HCOOCH3/H2O feature would have a closer relation to the 22 GHz H2O masers. It is unlikely that other molecular lines with high velocity components contribute to this peak, because no such molecular species is known except for the SiO thermal (Beuther et al. 2005; Zapata et al. 2012) and maser (Reid et al. 2007; Kim et al. 2008) lines, as well as the H2O maser lines (Gaume et al. 1998). Therefore, we can safely conclude that at least the emission feature associated with Source I is the vibrationally excited H2O maser line at 232.68670 GHz.

Discussion

As discussed above, we can identify the 232.68670 GHz feature detected in the ALMA SV data for Orion KL as the vibrationally excited H2O maser. This is the first detection of this maser line in YSOs. The peak flux of the blended feature observed with the 2"×2" aperture is 0.43 Jy (Figures 3 and 4). It corresponds to a brightness temperature of 2.4 K. If the emitting region is as compact as this aperture size, a single-dish observation with a beam size of 30" would yield a brightness temperature of 0.01 K. Thus, it was not detectable in previous observations with single-dish telescopes (Menten & Melnick 1989), although a possible spectral feature can be seen in the line survey data of Sutton et al. (1985), probably attributed to the HCOOCH3 line. The vibrationally excited H2O masers have been detected in only several oxygen-rich late-type stars (Menten & Melnick 1989), which could be attributed to their higher excitation level (3451 K) compared with that of the 22 GHz maser (642 K). The 232 GHz H2O maser around Source I would be excited by the internal heating from an embedded YSO, as expected from the maser pumping mechanism for late-type stars.
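The aperture-to-beam dilution argument in this paragraph is straightforward Rayleigh-Jeans arithmetic. A minimal sketch, assuming a Gaussian solid-angle convention for both the 2″×2″ aperture and the 30″ single-dish beam (the paper does not state its exact convention), reproduces the quoted values:

```python
# Rayleigh-Jeans brightness temperature: T_b = S * c^2 / (2 k nu^2 Omega).
import math

C = 2.99792458e8       # speed of light [m/s]
K_B = 1.380649e-23     # Boltzmann constant [J/K]
ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def brightness_temp(flux_jy, freq_hz, bmaj_as, bmin_as):
    """T_b [K] of a flux density [Jy] filling a Gaussian beam/aperture."""
    omega = (math.pi / (4.0 * math.log(2.0))) \
            * (bmaj_as * ARCSEC) * (bmin_as * ARCSEC)   # solid angle [sr]
    return flux_jy * 1e-26 * C**2 / (2.0 * K_B * freq_hz**2 * omega)

nu = 232.68670e9
print(brightness_temp(0.43, nu, 2.0, 2.0))    # ~2.4 K over the 2"x2" aperture
print(brightness_temp(0.43, nu, 30.0, 30.0))  # ~0.01 K diluted in a 30" beam
```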
Source I is also known as a powering source of SiO masers, which is quite rare for YSOs (Zapata et al. 2009). This observational evidence may imply similar characteristics between Source I and late-type stars. Further studies with the millimeter/submillimeter masers in Source I, along with other YSOs and CSEs of late-type stars, will be crucial in understanding the pumping mechanism of the H2O maser lines, the physical and dynamical state of these maser sources, and, accordingly, the mass-loss/accretion processes occurring in YSOs and CSEs.

The 232.68670 GHz H2O maser emission is concentrated around Source I. However, the distribution of the 232 GHz H2O maser features could not be resolved with the ALMA SV data with a beam size of 1".7 × 1".4. Judging from the double-peaked spectra of the 22 GHz and 232 GHz H2O masers shown in Figure 4, the 232 GHz maser features would have a structure similar to that of the 22 GHz masers rather than the SiO masers. Higher resolution imaging will reveal their spatial structure and provide information about a possible powering source of the 232 GHz H2O maser: whether they are really associated with the root of the outflows/jets, as traced by the 22 GHz H2O masers (Gaume et al. 1998), or with the circumstellar disk, as traced by the 43 GHz SiO masers (Reid et al. 2007; Kim et al. 2008). In the present study, we could not perfectly separate the contributions of the HCOOCH3 and the 232 GHz H2O maser lines, mainly due to the insufficient spatial resolution. Therefore, it is still unclear whether the 232 GHz masers are distributed anywhere other than Source I, such as the Compact Ridge, where strong H2O maser lines are sometimes detected (Hirota et al. 2011; Gaume et al. 1998). A search for the 232.68670 GHz H2O maser lines with higher spatial resolution would be important to determine their distribution by filtering out the contribution from the thermal and extended HCOOCH3 emission.
Case Study on Water Quality Improvement in Xihu Lake through Diversion and Water Distribution

Eutrophication in lakes and reservoirs is a serious environmental problem that has damaged ecosystem health worldwide. Water diversion is one of the most popular methods for improving the water quality in shallow lakes, as it dilutes pollutants in and diverts them out of the lake. However, simple diversion without rational water distribution cannot significantly enhance water exchange in the entire lake because dead water zones always exist. This paper illustrates a case study on water quality improvement in Xihu Lake by diversion and water distribution. Based on theoretical calculation, the diversion water discharge was determined and rationally distributed into four different locations. According to the field observations after the implementation of the diversion and water distribution project, the average velocity over the dead water zones increased approximately 50 times over that prior to the project. The average water exchange period was reduced from 68 days to 22.5 days. The average turbidity was 8.8% and 12.4% lower than before after two and four months of diversion, respectively. The maximum turbidity was reduced from the original 27.5 NTU (Nephelometric Turbidity Units) to 20.1 NTU after two months of diversion, then to 16.1 NTU after four months of diversion. This shows that diversion with rational water distribution eliminates most of the dead water zones and achieves a favorable flow field, thus reducing the turbidity and increasing water transparency, which is conducive to the improvement of water quality.

Introduction

Eutrophication in lakes and reservoirs is a serious environmental problem that damages ecosystems worldwide [1-3]. In shallow unstratified lakes, it is more difficult to decrease algal biomass and increase transparency than in deep stratified lakes [4,5]. Consequently, the control and prevention of shallow lake eutrophication have attracted the attention of scientists, the public, local authorities, and governments. In general, methods to improve water quality in shallow lakes are mainly divided into three categories: biological [6-10], chemical [11-13], and physical [14-19]. Among these methods, water diversion has been proposed as an important physical method for lake restoration [20,21]. Water diversion diverts clean, low-nutrient water to a eutrophic lake in order to improve the water quality [22]. The theory behind this mechanism is that adding large amounts of low-nutrient water not only dilutes the pollution in a lake, but also accelerates water exchange and eliminates dead water zones in the water body. The advantages of water diversion are that it is low cost, easy to conduct, and can show a quick response in nutrient reduction when a suitable quantity of dilution water is available [23].
In many countries, water diversions have been successfully implemented to improve water quality in lakes. Examples include Moses Lake in Washington State, USA, into which a large volume of low-nutrient water from the Columbia River was introduced during the spring and summer of 1977 [24,25]; another water project in the United States that involved diverting the Mississippi River into Lake Pontchartrain [26,27]; and a diversion project for Lake Veluwe in Holland [28]. In Canada, water has been diverted from the Red Deer River to Alix Lake through eight kilometers of pipelines, channels, and small ponds [29]. Although comparative pre-diversion data are limited, it appears that the diversion has had a positive influence on the recreational water quality of Alix Lake since 1997, and the annual diversion volumes rose from 6.8 million m^3 in 2000 to 15.4 million m^3 in 2001. Enhanced flushing from the diversion has generally reduced phosphorus and chlorophyll concentrations in the lake. Since the 1990s in China, the pollution in large shallow lakes such as Lakes Taihu, Dianchi, Xuanwu, Xihu, and Jinshan has also been diluted through diversion projects, and positive results regarding water quality were achieved [20,30-33]. For example, the aim of the Yangtze River water diversion project was to enhance water exchange in Taihu Lake, which is the third largest freshwater lake in China [34]. Water transfer from the Yangtze River was initiated in 2002 to dilute polluted water in the lake and to accelerate the flushing of pollutants and algae out of the lake. The main route of the original water transfer brought fresh water from the Yangtze River into Taihu Lake via the Wangyu River and took water out of the lake through the Taipu River. To date, four different routes have been implemented. The diversion from the Yangtze to rescue Taihu Lake has improved the water quality of Taihu Lake and its affiliated networks while also increasing the carrying capacity of water resources in the Taihu Lake Basin [35,36].

However, some diversions improve water quality only marginally; they cannot significantly enhance water exchange in the entire lake, so heavily polluted areas still exist [37]. For example, Zhai et al.
[38] assessed ecosystem health based on four ecological indicators: the exergy, structural exergy, phytoplankton buffer capacity, and trophic state index. Exergy expresses the biomass of the lake system and the information that the biomass is carrying. Structural exergy is defined as exergy divided by the total biomass. It expresses the dominance of the higher organisms and measures the ability of the ecosystem to utilize the available resources. The phytoplankton buffer capacity is the ability of the water to resist changes in pollutant concentration. The trophic state index expresses the quantities of nitrogen, phosphorus, and other biologically useful nutrients in the water body. An ecosystem with high exergy, high structural exergy, high buffer capacities and a low trophic state index can be considered to be in good health. The results showed that the original Yangtze River diversion had a positive effect on water quality only in parts of the lake, such as Gonghu Bay and the northwest, southwest and central zones, but had no significant effect on Meiliang Bay, based on regression analysis of long-term data. The original Yangtze River diversion may have alleviated the eutrophication issue in parts of the lake, but it has not substantially enhanced water exchange in Meiliang and Zhushan Bays [39]. The improvement in water quality from these diversion projects did not afford sufficient benefits. The defect of current diversion projects is that they generally have only one outlet, so the diverted clean water always forms a main current that flows faster through the lake. In such situations, the flow field of the lake may not be ideally reconstructed. Namely, some dead water zones still exist where the water runs quite slowly or even stagnates, hence the diverted clean water cannot fully flush out the turbid water [37]. The diverted water needs to be properly distributed to various key locations to holistically enhance water exchange in the system [21]. Thus, it is necessary to investigate the proper mode of water diversion and distribution.

This case study concerns the water diversion and distribution project that has been successfully implemented in Beili Lake, which is part of Xihu Lake. In this case, the clean water was diverted from Xili Lake through one input and was systematically distributed to four outputs to fully reconstruct the flow field of Beili Lake.

Study Area

Xihu Lake is located in the city of Hangzhou in Zhejiang Province and has an area of 6.5 km^2 and a perimeter of about 15 km. The lake consists of the main lake, Beili Lake, Xili Lake, Xiaonan Lake, and Yue Lake, as shown in Figure 1. The bed of Xihu Lake is relatively flat, with sediment that mainly contains highly organic limnic deposits and silty clay loam. The area of the Xihu Lake basin is about 21.22 km^2, and the annual runoff is 14 million m^3. The water capacity of the entire lake is about 16.25 million m^3 when the water level is maintained at the Yellow Sea elevation of 7.18 ± 0.05 m, and the water storage capacity is nearly 10 million m^3. The natural exchange frequency of the water is 2 times/year [40].
In September 1986, the Hangzhou municipal government completed sewage interception and diversion works for Xihu Lake. Since then, the diversion from the Qiantang River has supplied a water discharge of 3 × 10^5 m^3/day to Xihu Lake. Later, two pretreatment sedimentation tanks employing the flocculation precipitation method were built to purify the diverted water; their daily processing capacities were 3 × 10^5 m^3/day and 1 × 10^5 m^3/day. With these pretreatment sedimentation tanks, the raw water was purified, so that the quality of the water entering the lake was greatly improved [20]. The diversion of 4 × 10^5 m^3 of water per day from the Qiantang River altered the original water exchange rate of Xihu Lake from once a year to once a month. As shown in Figure 1, five inlets and nine outlets are located along the shoreline; their discharges are listed in Table 1. However, places such as the southeast corner of the lake, Beili Lake, and the southwest region of Yue Lake, where the concentration of total phosphorus (TP) has not been reduced, are the dead corners of the diversion works.

Beili Lake is located in the northern part of Xihu Lake. It has a total water surface area of 0.27 km^2 and an average water depth of 2.2 m. Under normal circumstances, the lake has a total storage capacity of about 4.9 × 10^5 m^3. Beili Lake connects to Xihu Lake through three tunnels and bridges. A small outlet pipeline discharges the lake water into the sewage system. The flow in Beili Lake is very slow, the water exchange cycle is long, and the water quality is rather poor (Bad V class according to Chinese water quality standards [41], as shown in Table 2).
Requirements for Diversion and Water Distribution

To improve the water quality of eutrophic lakes, the concentrations of phosphorus and nitrogen need to be reduced and controlled. Biologically, nitrate can be absorbed by aquatic plants, which are artificially planted in shallow rivers, canals, and lakes, as shown in Figure 2. In addition, the pollutants can also be degraded by microorganisms. It should be noted that nitrate is very soluble, which is generally detrimental to plants [42]. Hydraulically, these constituents can be washed away from their places of production and be diluted by a large amount of water. The advection-diffusion equation for a pollutant can be expressed by

$$\frac{\partial S}{\partial t} + \underbrace{U\frac{\partial S}{\partial x} + V\frac{\partial S}{\partial y} + W\frac{\partial S}{\partial z}}_{\text{Advection term}} = \underbrace{\nu\left(\frac{\partial^2 S}{\partial x^2} + \frac{\partial^2 S}{\partial y^2} + \frac{\partial^2 S}{\partial z^2}\right)}_{\text{Diffusion term}} - kS + q \tag{1}$$

where S is the concentration of the pollutant, i.e., inorganic forms of either phosphorus or nitrogen; U, V, and W are the flow velocities in the x, y, and z directions, respectively; ν is the diffusion coefficient of the pollutant; k is the biodegradation rate of the pollutant, which represents the capability of bacteria, fungi, or other biological means to disintegrate pollutants; q is the source, which may be linked with atmospheric deposition and release from bottom sediments; and x, y, and z are the relative coordinates of the pollutant source in Cartesian coordinates.
However, many dead water zones, where the flow velocity is null, generally exist in shallow lakes such as Xihu Lake. Within the dead water zones, the flow velocities U, V, and W in the x, y, and z directions are zero. In other words, the advection term inside the dead water zones is zero. Then, Equation (1) becomes:

$$\frac{\partial S}{\partial t} = \nu\left(\frac{\partial^2 S}{\partial x^2} + \frac{\partial^2 S}{\partial y^2} + \frac{\partial^2 S}{\partial z^2}\right) - kS + q \tag{2}$$

Comparing Equations (1) and (2), the following can be seen. (1) In a running flow field, the flow provides advection hydrodynamics, which takes pollutants away from their production places, akin to the Chinese idiom that running water never gets stale. The higher the flow velocities are, the faster the pollutants are taken away. (2) In the dead water zone, without the advection provided by the flow, the concentration of the pollutant mainly changes via diffusion and absorption by aquatic plants. Because of the small magnitude of the diffusion coefficient, the amount of pollutant passing through the boundary of the dead water zone would be very small. In other words, the exchange of pollutant between the outer and inner layers of the dead water zone can be regarded as null. (3) In the dead water zone, the concentration of the pollutant would gradually increase with time, i.e., the water quality would become increasingly worse over time unless a sufficient quantity of aquatic plants is implanted therein. Hence, to reduce the concentration of pollutants and improve the water quality in lakes, the dead water zones should be eliminated. A minimal one-dimensional numerical illustration of this contrast is sketched below.
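To make the contrast between Equations (1) and (2) concrete, the following sketch (illustrative parameter values only; none of them come from the paper) integrates the same source and decay terms with and without advection. The stagnant run drifts toward the q/k balance, while the flowing run stays much cleaner:

```python
# 1-D explicit integration of dS/dt = -U dS/dx + nu d2S/dx2 - k S + q.
import numpy as np

nx, dx, dt, steps = 200, 1.0, 0.5, 20000
nu, k, q = 0.01, 1e-4, 1e-4          # diffusion, decay, source (assumed)

def run(U):
    S = np.zeros(nx)
    for _ in range(steps):
        adv = -U * (S - np.roll(S, 1)) / dx          # upwind advection (U >= 0)
        dif = nu * (np.roll(S, -1) - 2 * S + np.roll(S, 1)) / dx**2
        S = S + dt * (adv + dif - k * S + q)
        S[0] = 0.0                                   # clean inflow boundary
    return S

flowing = run(U=0.05)    # running water: pollutant is carried downstream
stagnant = run(U=0.0)    # dead zone: balance only via diffusion and decay
print(f"mean S, flowing:  {flowing.mean():.3f}")
print(f"mean S, stagnant: {stagnant.mean():.3f}")   # much higher accumulation
```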
Hydrological Calculations of Diversion Discharge

The flow velocity in Beili Lake is very low, such that the lake is almost a stagnant water body. Under such flow conditions, the turbidity in Beili Lake cannot decrease if there is no diversion. The suspension and transport of bottom mud in Beili Lake are mainly due to wind and current. When the maximum orbital velocity (U_bmax) of water particles from wave motion is higher than the threshold velocity (U_c) of the bottom sediments, the bottom sediments will be suspended. However, the orbital velocity (U_b) can only suspend sediments vertically. The velocities of the Stokes drift (U_t) and the wind-driven current (U_w) cause sediment transport. Thus, the required amount of diverted water can be calculated as follows.

When no sediment is coming in, the movement of bottom sediment in the lake is in equilibrium, meaning that the amounts of suspended and settled sediments in a unit time are equal. The sediment transport rate per unit width (q_s) is then given by Equation (3) [43], where γ_s is the specific weight of the sediments, γ is the specific weight of water, V_m is the resultant of the Stokes drift and wind-driven current velocities, and ω is the sediment settling velocity. The orbital velocity of particles due to wave motion, averaged over half a period, can be calculated from Equations (4) and (5) [44], where H is the wave height, T is the wave period, L is the wave length, and h is the water depth. The Stokes drift velocity (the wave velocity of mass transfer) averaged over a wave period is given by Equation (6), where c is the wave speed. According to the technological specification of harbor engineering [45], the velocity of the wind-driven current follows Equation (7), where V_w is the wind speed, and the resultant velocity V_m then follows as Equation (8). When the median grain size (d_50) of the sediments is less than 0.03 mm, the fine sediments are flocculated, with the settling velocity of flocculating sediments [46] calculated using

$$\omega = 0.097\, d_{50}^{0.18} \tag{9}$$

According to the method proposed by Teng et al. [47], the wave elements of wind waves can be calculated from Equations (10) and (11), where F is the fetch length.

The area of Beili Lake is 0.27 km^2; the average water depth is 2.2 m; and, generally, the total water storage is 4.9 × 10^5 m^3. Based on the results of sampling and grain analysis, the median grain size of the bottom mud in Beili Lake is 0.003 mm. The annual average wind velocity is 1.3-2.4 m/s, and the constant wind velocity was chosen as 2.25 m/s, which is the maximum value of the monthly averaged velocity. Therefore, based on Equations (3)-(11), it can be calculated that about 186.36 kg of sediment could be suspended from the bottom of Beili Lake every day. The amounts of diverted water and drained water should be the same. Meanwhile, the drained water should meet the water quality requirement for turbidity (5 NTU) and remove the suspended sediments. Thus, the amount of clean water diverted to Beili Lake in one day needs to be at least that given by Equation (12), where Q is the volume of water; m is the mass of suspended sediment; T is the turbidity of the water; and s is the sediment concentration, with s = 2.064 × 10^-3 × (T − 0.223) [48].
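The closing arithmetic of this section can be reproduced directly. In the sketch below, the 186.36 kg/day suspension rate and the 5 NTU target are taken from the text, and the turbidity-to-concentration relation is the empirical fit of [48]; the unit of s (kg/m^3) is assumed so that the result comes out in m^3/day:

```python
# Minimum daily diversion needed to flush the resuspended sediment at the
# target turbidity: Q >= m / s(T).
def sediment_concentration(turbidity_ntu: float) -> float:
    """Sediment concentration [kg/m^3, assumed unit] from turbidity [NTU]."""
    return 2.064e-3 * (turbidity_ntu - 0.223)

m_suspended = 186.36          # kg of bottom mud suspended per day (from text)
target_turbidity = 5.0        # NTU, water quality requirement

s = sediment_concentration(target_turbidity)
q_min = m_suspended / s       # m^3/day of clean water needed
print(f"s = {s:.4e} kg/m^3, minimum diversion = {q_min:.0f} m^3/day")
# ~1.9e4 m^3/day, consistent with the adopted design value of 2e4 m^3/day
```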
Data Collection and Measurement

The historical data (2006-2010), including the water level, flow conditions, and concentrations of total nitrogen (TN), total phosphorus (TP) and chlorophyll a, were provided by the Hangzhou Municipal Xihu Lake Administration Office. Two field observations of the flow field, water depth, turbidity, and the concentrations of TP and chlorophyll a in Beili Lake were conducted after the implementation of the project. As shown in Figure 3, the measurements were carried out at 47 different locations in Beili Lake. At each location, the flow velocity and chlorophyll a concentration at two different depths (0.1 m and 1 m below the water surface) were measured. An ADV (Acoustic Doppler Velocimetry) Flow Tracker and a five-meter measuring rod were used for flow velocity and water depth measurements, along with a PCH-800 Chlorophyll Analyzer for chlorophyll a measurements. The PCH-800 Chlorophyll Analyzer exploits the fact that chlorophyll a has characteristic absorption and emission peaks in the spectrum: monochromatic light of a specific wavelength is emitted into the water, chlorophyll a absorbs the energy of this light and releases monochromatic light at its emission-peak wavelength, and the intensity of the light emitted by chlorophyll a is proportional to the chlorophyll a content of the water. Water samples 0.1 m and 1 m below the water surface were collected via syringe and preserved in numbered glass sample bottles. The turbidity and TP were measured in the laboratory using a WGZ-200 Ratio Turbidimeter and an LH-TP2M Portable TP Analyzer, respectively. The core method of the LH-TP2M Portable TP Analyzer is the spectrophotometric molybdenum blue method. It involves the formation of molybdophosphoric acid from orthophosphate and an excess of molybdate in acidic solution, followed by reduction to give molybdenum blue. Using the photoelectric colorimetric detection method, the absorbance of the molybdenum blue thus produced is measured spectrophotometrically at the wavelength that gives maximum absorbance. The intensity of the blue color is proportional to the amount of phosphate in the water. To check the actual amount and water quality of the diverted water, the discharge and turbidity at the water inlet were also measured. At the front edge of the water inlet, the area of the cross section was measured and three measuring verticals were determined for velocity measurement. Six levels on each vertical line were used for velocity and turbidity measurements.

Meteorological observation data for Xihu Lake were provided by the China Meteorological Data Sharing Service System (http://cdc.cma.gov.cn/home.do) and included daily atmospheric pressure, temperature, cloud cover, wind speed, and wind direction.
Demonstrative Project in Beili Lake

Considering the differences in water temperature and wind-current conditions among the different seasons and the limitations of the theoretical calculation, the required amount of diverted clean water was determined to be 2 × 10^4 m^3/day. The diversion water was taken from Xili Lake, as its water quality is almost as good as the purified water from the Qiantang River and its water quantity is abundant. The average water exchange period would then be 24.5 days, as the total storage of Beili Lake is 4.9 × 10^5 m^3. Then, through optimization among alternative schemes, including two different water sources, eight different pumping station layouts, seven different pipeline network layouts, and four different water distribution layouts, the final diversion and distribution scheme was determined and confirmed by the related departments of the Hangzhou municipal government. The water diversion and distribution project for improving the water quality of Beili Lake started on 8 February 2012. Owing to construction constraints, most of the construction had to be carried out at night, and it was finally completed after three months. A centrifugal pump was installed to pump water from Xili Lake to Beili Lake through pipelines buried in the lakebed. The locations of the water inlets and outlets and the layout of the Beili Lake water distribution network are shown in Figure 4.
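As a quick arithmetic check on the design figures above (a minimal sketch; the nine percent over-delivery figure used in the cross-check comes from the field observations reported later in this paper), the designed exchange period and the through-flow implied by the observed 22.5-day cycle follow directly from storage divided by discharge:

```python
# Design check: exchange period = lake storage / diversion discharge.
storage_m3 = 4.9e5            # total storage of Beili Lake [m^3]
design_q = 2e4                # adopted diversion discharge [m^3/day]

design_period = storage_m3 / design_q
print(f"designed exchange period: {design_period:.1f} days")      # 24.5 days

# The observed post-project exchange cycle of 22.5 days implies an effective
# through-flow consistent with the measured discharge being ~9% above design.
observed_period = 22.5
effective_q = storage_m3 / observed_period
print(f"implied through-flow: {effective_q:.0f} m^3/day")          # ~21,800
print(f"design value + 9%:    {design_q * 1.09:.0f} m^3/day")      # ~21,800
```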
The total construction included one inlet with a debris screen (Figure 5), one pump operation control station, one submersible pump station, one water gate, 1630 m of underwater buried pipeline, and four water distribution outlets (Figure 4). The locations and discharges of the four water distribution outlets were decided as shown in Figure 4, with Location #1 at (30°15′6.89″N, 120°8′20…). On 15 May 2012, the demonstration project began its pilot run. It has been running well so far. Every day, 20,000 tons of water from Xili Lake has been transported to the different water distribution locations in Beili Lake through the submersible pump and pipeline.
Field Observations after the Implementation of the Project

To assess the effect of the water diversion and distribution project on the water quality of Beili Lake after the project implementation, two field observations were carried out, one on 22-23 July 2012 and one on 10-11 September 2012. The results are listed in Table 3. At the water inlet, the total diversion discharge was nine percent more than the design value. This might be because the actual pipeline length was six percent shorter than the designed one. Because of the heavy rainfall before the second observation, the turbidity at the water inlet for the second observation was higher than that for the first observation. The weather conditions before and during the field observations not only affected the observation activities, but also influenced the turbidity values in the lake. Strong winds and heavy storms prior to the observations increased the turbidity in the lake for a short period. Thus, both field observations were carried out when winds were calm or light. As shown in Table 4, a gentle breeze lasted for three consecutive sunny days before the first observation. However, rainfall occurred and the wind was relatively strong before the second observation.

Flow Field Improvement and Velocity Increment

Before implementation of the project, Beili Lake was almost a pond of stagnant water, as shown in Figure 6. Its cross-sectional width is about 350 m, with an average water depth of 2.25 m and an average velocity of 0.0001 m/s. About 28% of the lake area was regarded as stagnant, and it was estimated that a full water exchange cycle took 68 days. After implementation of the project, as shown in Figure 7, the measured average flow velocity increased to 0.005 m/s, approximately 50 times the value before implementation. The results of the two field observations showed that the flow velocity over the entire lake significantly increased and most of the dead zones had been removed. The water exchange cycle after the diversion and distribution project was 22.5 days, two days shorter than the designed water exchange cycle. Thus, the flow field was favorably reconstructed to facilitate water quality improvement.

Transparency Improvement

Before the implementation of the project, the average turbidity of the top one-meter water layer was 14.6 NTU and the maximum turbidity at the water surface was 27.5 NTU. Two months after the implementation of the water diversion and distribution project, the measured average turbidity in the top one-meter layer was 13.3 NTU, 8.8% lower than before the project. Four months after the implementation, the measured average turbidity of the top one-meter water layer of Beili Lake was 12.8 NTU, 12.4% lower than before the project. The maximum turbidity at the water surface was reduced from the original value of 27.5 NTU to 20.1 NTU (27%) after two months of diversion, and further to 16.1 NTU (41%) after four months of diversion. It should be noted that rainfall occurred before the second observation, so the turbidity might have been temporarily increased by the strong wind. Therefore, the actual turbidity was likely reduced by more than 12.4% because of the diversion. Owing to the implementation of the project, the water turbidity has gradually decreased, and the transparency has continuously and visibly improved.
Pollutants Reduction

Figure 8 shows the annual average (2006-2010) TN, TP, and chlorophyll a concentrations in different regions of Xihu Lake before the implementation of the project. As shown in Figure 8a, the TN concentration in all those regions changed only slightly from 2006 to 2010. In Beili Lake, the TN concentration was always lower than in Yue Lake and Xili Lake. Although the TP and chlorophyll a concentrations in Beili Lake decreased during the water transfers, especially from 2006 to 2008, this decline nearly stopped after 2009. In 2010, the average TP concentration in Beili Lake was about 0.047 mg/L, still about 50% and 104% higher than in Yue Lake and Xili Lake, respectively. Moreover, associated with the poor water mobility in Beili Lake, the chlorophyll a concentration (0.33 mg/L) in Beili Lake was about three times and five times higher than in Yue Lake and Xili Lake, respectively. Thus, the TP and chlorophyll a concentrations in Beili Lake had the potential to be reduced by rationally redistributing the water in the system.

After the implementation of the project, the TP and chlorophyll a concentrations in Beili Lake dramatically decreased. With the dilution by the clean water diverted from Xili Lake, the TP concentration in Beili Lake was reduced by 32% after two months of diversion and 55% after four months of diversion. Meanwhile, the chlorophyll a concentration in Beili Lake decreased by 55% after two months of diversion and 61% after four months of diversion. In addition, as shown in Figure 9, the monthly continuous monitoring data provided by the Hangzhou Municipal Xihu Lake Administration Office clearly show that the concentrations of TP and chlorophyll a in Beili Lake had a downward trend after the diversion. In the first month after the implementation of the project, the concentrations of TP and chlorophyll a rapidly decreased: on average, the TP concentration decreased by about 0.00038 mg/L per day and the chlorophyll a concentration by about 0.00044 mg/L per day. In the second month, the rates of decline of the TP and chlorophyll a concentrations decreased to 0.00011 mg/L per day and 0.00029 mg/L per day, respectively. The rate of decline gradually decreased with time as the concentrations of TP and chlorophyll a in Beili Lake approached those in Xili Lake, the water source of the diversion.
After four months of diversion, the TP and chlorophyll a concentrations tended to stabilize, reaching 0.03 mg/L and 0.013 mg/L, respectively. The water quality has greatly improved, from Class V to nearly Class III.

Conclusions

In this study, the water diversion and distribution project in Beili Lake, a part of Xihu Lake, was introduced, and the flow field and turbidity of Beili Lake before and after the implementation of the water diversion and distribution project were investigated.

The concentration of pollutants is highly related to the diffusion and advection of flow. Thus, to remove local pollutants, dead water zones in the lakes have to be eliminated. The Beili Lake water quality improvement project demonstrates that water diversion and proper distribution from Xili Lake, which has better water quality, can effectively replace the turbid water and increase the water transparency. For this small-scale diversion, the project was composed of an inlet with a debris screen, a submersible pump, pipelines, and four water distribution outlets with specified flow directions. With the implementation of this project, the flow velocity in Beili Lake significantly increased, as the average velocity over the dead water zones rose to approximately 50 times that prior to the project. The water exchange rate also increased, with the average water exchange period reduced from 68 days to 22.5 days. The diversion and distribution reconstructed an ideal flow field, which is conducive to improving the water quality of Beili Lake. The water transparency has increased and the water turbidity has decreased visibly and continues to decline. Moreover, the TP and chlorophyll a concentrations decreased markedly after four months of diversion, and the water quality in Beili Lake has greatly improved. The results offer useful information for understanding the efficiency of water diversion and distribution in improving the water quality of shallow lakes, and thus can give guidance to practical engineering for such systems.
Figure 2. Aquatic plants artificially planted in a shallow lake in China.
Figure 3. Locations of the field observations.
Figure 4. Layout of diversion and water distribution for Beili Lake.
Figure 5. Inlet of the pump station.
Figure 6. Flow field of Beili Lake prior to the water diversion and distribution project (20 September 2010, provided by the Hangzhou Municipal Xihu Lake Administration Office).
Figure 7. Flow field of Beili Lake after the water diversion and distribution project (10 September 2012).
Figure 9. Downward trend of TP and chlorophyll a in Beili Lake after the diversion.
Table 1. Discharge at the five inlets and nine outlets.
Table 2. Classification standard values for water quality items (Class I-Class V), e.g., total phosphorus (mg/L).
Table 3. Data for the two field observations.
Table 4. Weather conditions before and during the field observations.
A parameterized approximation algorithm for the Multiple Allocation $k$-Hub Center

In the Multiple Allocation $k$-Hub Center (MA$k$HC), we are given a connected edge-weighted graph $G$, sets of clients $\mathcal{C}$ and hub locations $\mathcal{H}$, where ${V(G) = \mathcal{C} \cup \mathcal{H}}$, a set of demands $\mathcal{D} \subseteq \mathcal{C}^2$ and a positive integer $k$. A solution is a set of hubs $H \subseteq \mathcal{H}$ of size $k$ such that every demand $(a,b)$ is satisfied by a path starting in $a$, going through some vertex of $H$, and ending in $b$. The objective is to minimize the largest length of a path. We show that finding a $(3-\epsilon)$-approximation is NP-hard already for planar graphs. For arbitrary graphs, the approximation lower bound holds even if we parameterize by $k$ and the value $r$ of an optimal solution. An exact FPT algorithm is also unlikely when the parameter combines $k$ and various graph widths, including pathwidth. To confront these hardness barriers, we give a $(2+\epsilon)$-approximation algorithm parameterized by treewidth, and, as a byproduct, for unweighted planar graphs, we give a $(2+\epsilon)$-approximation algorithm parameterized by $k$ and $r$. Compared to classical location problems, computing the length of a path depends on non-local decisions. This renders standard dynamic programming algorithms impractical; thus, our algorithm approximates this length using only local information. We hope these ideas find application in other problems with similar cost structure.

Introduction

In classical location theory, the goal is to select a set of centers or facilities to serve a set of clients [25,10,26,12]. Usually, each client is simply connected to the closest selected facility, so that the transportation or connection cost is minimized. In several scenarios, however, the demands correspond to connecting a set of pairs of clients. Rather than connecting each pair directly, one might select a set of hubs that act as consolidation points to take advantage of economies of scale [30,8,23,31]. In this case, each origin-destination demand is served by a path starting at the origin, going through one or more selected hubs and ending at the destination. Using consolidation points reduces the cost of maintaining the network, as a large number of goods is often transported through a few hubs, and a small fleet of vehicles is sufficient to serve the network [9]. Many hub location problems have emerged through the years, varying in the solution domain, whether discrete or continuous; in the number of hub stops serving each demand; in the number of selected hubs, and so on [1,16]. Central to this classification is the nature of the objective function: for median problems, the objective is to minimize the total length of the paths serving the demands, while, for center problems, the objective is to find a solution whose maximum length is minimum. In this paper, we consider the Multiple Allocation k-Hub Center (MAkHC), which is a center problem in the one-stop model [29,42], where clients may be assigned to multiple hubs for distinct demands, and whose objective is to select k hubs to minimize the worst connection cost of a demand. Formally, an instance of MAkHC is comprised of a connected edge-weighted graph G, sets of clients C and hub locations H, where V(G) = C ∪ H, a set of demand pairs D ⊆ C², and a positive integer k.
The objective is to find a set of hubs H ⊆ H of size k that minimizes

max_{(a,b) ∈ D} min_{h ∈ H} d(a, h) + d(h, b),

where d(u, v) denotes the length of a shortest path between vertices u and v. In the decision version of MAkHC, we are also given a non-negative number r, and the goal is to determine whether there exists a solution of value at most r.
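To make the objective concrete, here is a minimal sketch of how one could evaluate a candidate hub set against this definition. The adjacency-list representation and all names are our assumptions, not part of the paper:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in an undirected weighted
    graph given as {u: [(v, w), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def makhc_value(graph, hubs, demands):
    """The MAkHC objective: max over demands (a, b) of
    min over h in hubs of d(a, h) + d(h, b)."""
    inf = float("inf")
    # One Dijkstra per hub suffices, since d(a, h) = d(h, a).
    dist_from = {h: dijkstra(graph, h) for h in hubs}
    return max(
        min(dist_from[h].get(a, inf) + dist_from[h].get(b, inf) for h in hubs)
        for a, b in demands
    )
```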
This problem is closely related to the well-known k-Center [26,24], where, given an edge-weighted graph G, one wants to select a set of k vertices, called centers, so that the maximum distance from each vertex to the closest center is minimized. In the corresponding decision version, one also receives a number r, and asks whether there is a solution of value at most r. By creating a demand (u, u) for each vertex u of G, one reduces k-Center to MAkHC, so MAkHC can be seen as a generalization of k-Center. In fact, MAkHC even generalizes k-Supplier [27], a variant of k-Center whose vertices are partitioned into clients and locations, where only clients need to be served and centers must be selected from the set of locations.

For NP-hard problems, one might look for an α-approximation, that is, a polynomial-time algorithm that finds a solution whose value is within a factor α of the optimal. For k-Center, a simple greedy algorithm already gives a 2-approximation, which is the best one can hope for, since finding an approximation with a smaller factor is NP-hard [24]. Analogously, there is a best-possible 3-approximation for k-Supplier [27]. These results have been extended to MAkHC as well, which also admits a 3-approximation [39]. Later, we prove this approximation factor is tight, unless P = NP.

An alternative is to consider the problem from the perspective of parameterized algorithms, which insist on finding an exact solution, but allow running times with a non-polynomial factor that depends only on a certain parameter of the input. More precisely, a decision problem with parameter w is fixed-parameter tractable (FPT) if it can be decided in time f(w) · n^{O(1)}, where n is the size of the input and f is a function that depends only on w. Feldmann and Marx [19] showed that k-Center is W[1]-hard for planar graphs of constant doubling dimension when the parameter is a combination of k, the highway dimension and the pathwidth of the graph. Blum [5] showed that the hardness holds even if we additionally parameterize by the skeleton dimension of the graph. Under the assumption that FPT ≠ W[1], this implies that k-Center does not admit an FPT algorithm for any of these parameters, even if restricted to planar graphs of constant doubling dimension.

Recently, there has been interest in combining techniques from parameterized and approximation algorithms [36,18]. An algorithm is called a parameterized α-approximation if it finds a solution within factor α of the optimal value and runs in FPT time. The goal is to give an algorithm with improved approximation factor that runs in super-polynomial time, where the non-polynomial factors of the running time depend on the parameter only. Thus, one may possibly design an algorithm that runs in FPT time for a W[1]-hard problem that, although it finds only an approximate solution, has an approximation factor that breaks the known NP-hardness lower bounds. For k-Center, Demaine et al. [14] give an FPT algorithm parameterized by k and r for planar and map graphs. All these characteristics seem necessary for an exact FPT algorithm, as even finding a (2 − ǫ)-approximation with ǫ > 0 for the general case is W[2]-hard for parameter k [17]. If we remove the solution value r and parameterize only by k, the problem remains W[1]-hard if we restrict the instances to planar graphs [19], or if we add structural graph parameters, such as the vertex-cover number or the feedback-vertex-set number (and thus, also treewidth or pathwidth) [32]. To circumvent the previous barriers, Katsikarelis et al. [32] provide an efficient parameterized approximation scheme (EPAS) for k-Center with different parameters w, i.e., for every ǫ > 0, one can compute a (1 + ǫ)-approximation in time f(ǫ, w) · n^{O(1)}, where w is either the cliquewidth or the treewidth of the graph. More recently, Feldmann and Marx [19] have also given an EPAS for k-Center when it is parameterized by k and the doubling dimension, which can be a more appropriate parameter for transportation networks than r.

Our results and techniques

We initiate the study of MAkHC from the perspective of parameterized algorithms. We start by showing that, for any ǫ > 0, there is no parameterized (3 − ǫ)-approximation for MAkHC when the parameter is k, the value r is bounded by a constant and the graph is unweighted, unless FPT = W[2]. For planar graphs, finding a good constant-factor approximation remains hard in the polynomial sense, as we show that it is NP-hard to find a (3 − ǫ)-approximation for MAkHC in this case, even if the maximum degree is 3. To challenge the approximation lower bound, one might envisage an FPT algorithm by considering an additional structural parameter, such as the vertex-cover and feedback-vertex-set numbers or treewidth. However, this is unlikely to lead to an exact FPT algorithm, as we note that the hardness results for k-Center [32,19,5] extend to MAkHC. Namely, we show that, unless FPT = W[1], MAkHC does not admit an FPT algorithm when parameterized by a combination of k, the highway and skeleton dimensions and the pathwidth of the graph, even if restricted to planar graphs of constant doubling dimension; or when parameterized by k and the vertex-cover number.

Instead, we aim at finding an approximation with factor strictly smaller than 3 that runs in FPT time. In this paper, we present a (2 + ǫ)-approximation for MAkHC parameterized by the treewidth of the graph, for ǫ > 0. The running time of the algorithm is O*((tw/ǫ)^{O(tw)}), where polynomial factors in the size of the input are omitted. Moreover, we give a parameterized (2 + ǫ)-approximation for MAkHC when the input graph is planar and unweighted, parameterized by k and r.

Our main result is a non-trivial dynamic programming algorithm over a tree decomposition, in the spirit of the algorithm by Demaine et al. [14]. We assume that we are given a tree decomposition of the graph and consider both k and r as part of the input. Thus, for each node t of this decomposition, we can guess the distance from each vertex in the bag of t to its closest hub in some (global) optimal solution H*. The subproblem is computing the minimum number of hubs to satisfy each demand in the subgraph G_t corresponding to t. Compared to k-Center and k-Supplier, however, MAkHC has two additional sources of difficulty. First, the cost to satisfy a demand cannot be computed locally, as it is the sum of two shortest paths, each from a client in the origin-destination pair to some hub in H* that satisfies that pair.
Second, the set of demand pairs D is given as part of the input, whereas every client must be served in k-Center or in k-Supplier. If we knew the subset of demands D*_t that are satisfied by some hub in H* ∩ V(G_t), then one could solve every subproblem in a bottom-up fashion, so that every demand would have been satisfied in the subproblem corresponding to the root of the decomposition. Guessing D*_t leads to an FPT algorithm parameterized by tw, r and |D|, which is unsatisfactory, as the number of demands might be large in practice. Rather, for each node t of the tree decomposition, we compute deterministically two sets of demands D_t, S_t ⊆ D that enclose D*_t, that is, that satisfy D_t ⊆ D*_t ⊆ D_t ∪ S_t. By filling the dynamic programming table using D_t instead of D*_t, we can obtain an algorithm that runs in FPT time on parameters tw and r, and that finds a 2-approximation. The key insight for the analysis is that the minimum number of hubs in G_t that are necessary to satisfy each demand in D_t by a path of length at most r is a lower bound on |H* ∩ V(G_t)|. At the same time, the definition of the set of demands S_t ensures that each such demand can be satisfied by a path of length at most 2r using a hub that is close to a vertex in the bag of t. This is the main technical contribution of the paper, and we believe that these ideas might find usage in algorithms for similar problems whose solution costs have non-local components.

Using only these ideas, however, is not enough to get rid of r as a parameter, as we need to enumerate the distance from each vertex in a bag to its closest hub. A common method to shrink a dynamic programming table with large integers is storing only an approximation of each number, causing the solution value to be computed approximately. This eliminates the parameter r from the running time, but adds a term ǫ to the approximation factor. This technique is now standard [34] and has been applied multiple times to graph width problems [14,20,32,4]. Specifically, we employ the framework of approximate addition trees [34]. For some δ > 0, we approximate each value in {1, . . . , r} of an entry in the dynamic programming table by an integer power of (1 + δ), and show that each such value is computed by an addition tree and corresponds to an approximate addition tree. By results in [34], we can readily set δ appropriately so that the number of distinct entries is polynomially bounded and each value is approximated within factor (1 + ǫ).

Related work

The first modern studies on hub location problems date several decades back, when models and applications were surveyed [37,38]. Since then, most papers focused on integer linear programming and heuristic methods [1,16]. Approximation algorithms were studied for the single allocation median variant, whose task is to allocate each client to exactly one of the given hubs, minimizing the total transportation cost [28,2,22]. Later, constant-factor approximation algorithms were given for the problem of, simultaneously, selecting hubs and allocating clients [3]. The analogue of MAkHC with median objective was considered by Bordini and Vignatti [7], who presented a (4α)-approximation algorithm that opens (2α/(2α−1))·k hubs, for α > 1. There is a single allocation center variant that asks for a two-level hub network, where every client is connected to a single hub and the path satisfying a demand must cross a given network center [41,35]. Chen et al.
[11] give a 5/3-approximation algorithm and showed that finding a (1.5 − ǫ)-approximation, for ǫ > 0, is NP-hard. This problem was shown to admit an EPAS parameterized by the treewidth [4] and, to our knowledge, is the first hub location problem studied in the parameterized setting.

Organization

The remainder of the paper is organized as follows. Section 2 introduces basic concepts and describes the framework of approximate addition trees. Section 3 shows the hardness results for MAkHC in both classical and parameterized complexity. Section 4 presents the approximation algorithm parameterized by treewidth, which is analyzed in Section 5. Section 6 presents the final remarks. The case of planar graphs is considered in Appendix A.

Preliminaries

An α-approximation algorithm for a minimization problem is an algorithm that, for every instance I of size n, has running time n^{O(1)} and outputs a solution of value at most α · OPT(I), where OPT(I) is the optimal value of I. A parameterized algorithm for a parameterized problem is an algorithm that, for every instance (I, k), has running time f(k) · n^{O(1)}, where f is a computable function that depends only on the parameter k, and decides (I, k) correctly. A parameterized problem that admits a parameterized algorithm is called fixed-parameter tractable, and the set of all such problems is denoted by FPT. Finally, a parameterized α-approximation algorithm for a (parameterized) minimization problem is an algorithm that, for every instance I and corresponding parameter k, has running time f(k) · n^{O(1)} and outputs a solution of value at most α · OPT(I). For a complete exposition, we refer the reader to [40,13,36].

We adopt standard graph-theoretic notation. Given a graph G, we denote the sets of vertices and edges by V(G) and E(G), respectively. For S ⊆ V(G), the subgraph of G induced by S is denoted by G[S] and is composed of the vertices of S and every edge of the graph that has both endpoints in S. A tree decomposition of a graph G is a pair (T, X), where T is a tree and X is a function that associates a node t of T to a set X_t ⊆ V(G), called a bag, such that: (i) every vertex of G appears in some bag; (ii) for every edge of G, some bag contains both of its endpoints; and (iii) for every vertex v of G, the nodes whose bags contain v induce a connected subtree of T. The width of a tree decomposition is max_{t ∈ V(T)} |X_t| − 1, and the treewidth of G is the minimum width of any tree decomposition of the graph. Also, for a node t ∈ V(T), let T_t be the subset of nodes that contains t and all its descendants, and define G_t as the induced subgraph of G that has ∪_{t′ ∈ T_t} X_{t′} as its set of vertices.

Dynamic programming algorithms over tree decompositions often assume that the decomposition has a restricted structure. In a nice tree decomposition of G, T is a binary tree and each node t has one of the following types: (i) leaf node, which has no child and X_t = ∅; (ii) introduce node, which has a child t′ with X_t = X_{t′} ∪ {u}, for u ∉ X_{t′}; (iii) forget node, which has a child t′ with X_t = X_{t′} \ {u}, for u ∈ X_{t′}; (iv) join node, which has children t′ and t′′ with X_t = X_{t′} = X_{t′′}. Given a tree decomposition (T, X) of width tw, there is a polynomial-time algorithm that finds a nice tree decomposition of the same width and O(tw · |V(G)|) nodes [33]. Moreover, we may assume without loss of generality that our algorithm receives as input a nice tree decomposition of G whose tree has height O(tw · log |V(G)|), using the same arguments as discussed in [6,4].
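A minimal sketch of a data structure for nice tree decompositions, with a validator for the four node types just listed; the representation is our assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class NiceNode:
    """A node of a nice tree decomposition: kind is one of
    'leaf', 'introduce', 'forget', 'join'; vertex is the
    introduced/forgotten vertex when applicable."""
    kind: str
    bag: Set[int]
    children: List["NiceNode"] = field(default_factory=list)
    vertex: Optional[int] = None

def check_nice(t: NiceNode) -> bool:
    """Verify the local structure of node types (i)-(iv)."""
    if t.kind == "leaf":
        return not t.bag and not t.children
    if t.kind == "introduce" and len(t.children) == 1:
        c = t.children[0]
        return (t.vertex not in c.bag
                and t.bag == c.bag | {t.vertex}
                and check_nice(c))
    if t.kind == "forget" and len(t.children) == 1:
        c = t.children[0]
        return (t.vertex in c.bag
                and t.bag == c.bag - {t.vertex}
                and check_nice(c))
    if t.kind == "join" and len(t.children) == 2:
        c1, c2 = t.children
        return t.bag == c1.bag == c2.bag and check_nice(c1) and check_nice(c2)
    return False
```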
Approximate addition trees

An addition tree is an abstract model that represents the computation of a number by successively adding two other previously computed numbers.

Definition 1. An addition tree is a full binary tree such that each leaf u is associated to a non-negative integer input y_u, and each internal node u with children u′ and u′′ is associated to a computed number y_u := y_{u′} + y_{u′′}.

One can replace the sum with some operator ⊕, which computes each such sum only approximately, up to an integer power of (1 + δ), for some parameter δ > 0. The result will be an approximate addition tree. While the error of the approximate value can pile up as more operations are performed, Lampis [34] showed that, for some ǫ > 0, as long as δ is not too large, the relative error can be bounded by 1 + ǫ. Figure 1 illustrates an addition tree and the corresponding approximate addition tree.

Definition 2. An approximate addition tree with parameter δ > 0 is a full binary tree, where each leaf u is associated to a non-negative integer input z_u, and each internal node u with children u′ and u′′ is associated to a computed value z_u := z_{u′} ⊕ z_{u′′}.

For simplicity, here we defined only a deterministic version of the approximate addition tree, since we can assume that the height of the tree decomposition is bounded by O(tw · log |V(G)|). For this case, Lampis showed that δ can be chosen as a function of ǫ and the height of the tree so that every computed value z_u approximates the corresponding exact value y_u within factor (1 + ǫ), while the number of distinct values remains polynomially bounded.
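A minimal sketch of the ⊕ operator and the evaluation of an approximate addition tree, under the assumption that ⊕ rounds the sum up (which matches the inequality x + y ≤ x ⊕ y used later in the analysis); all names are ours:

```python
import math

def approx_add(x, y, delta):
    """The operator ⊕: add x and y, then round the sum up to the
    nearest integer power of (1 + delta); zero is kept exact.
    (Floating-point corner cases near exact powers are ignored
    in this sketch.)"""
    s = x + y
    if s <= 0:
        return 0
    j = math.ceil(math.log(s) / math.log(1 + delta))
    return (1 + delta) ** j

def eval_approx_tree(node, delta):
    """Evaluate an approximate addition tree given as nested pairs:
    a leaf is a non-negative int, an internal node is (left, right)."""
    if isinstance(node, int):
        return node
    left, right = node
    return approx_add(eval_approx_tree(left, delta),
                      eval_approx_tree(right, delta), delta)

# Example: the exact sum of ((3, 5), 7) is 15; the approximate value
# is slightly larger, within factor (1 + eps) for small enough delta.
print(eval_approx_tree(((3, 5), 7), delta=0.01))
```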
Preprocessing

For an instance of MAkHC and a demand (a, b) ∈ D, define G_ab as the subgraph of G induced by the vertices v such that d(a, v) + d(v, b) ≤ r. Notice that if a solution H has a hub h ∈ V(G_ab), then the length of a path serving (a, b) that crosses h is at most r. In this case, we say that demand (a, b) is satisfied by h with cost r. Thus, in an optimal solution H* of MAkHC, for every (a, b) ∈ D, the set H* ∩ V(G_ab) must be non-empty. If a vertex v belongs to no subgraph G_ab, then v does not belong to any (a, b)-path of length at most r, and can be safely removed from G. From now on, assume that we have preprocessed G in polynomial time, so that every v ∈ V(G) belongs to V(G_ab) for some (a, b) ∈ D. Moreover, we assume that each edge has an integer weight and that the optimal value, OPT, is bounded by O((1/ǫ)|V(G)|), for a given constant ǫ > 0. If not, then we solve another instance for which this holds and that has optimal value OPT′ ≤ (1 + ǫ)OPT, using standard rounding techniques [40]. It suffices to find a constant-factor approximation of value A ≤ 3OPT [39], and to define a new, scaled integer distance function based on A.

Hardness

Next, we observe that approximating MAkHC is hard, both in the classical and parameterized senses. First, we show that approximating the problem by a factor better than 3 is NP-hard, even if the input graph is planar and unweighted. This result strengthens the previously known lower bound and matches the approximation factor of the greedy algorithm [39].

Theorem 2. If, for some ǫ > 0, there is a (3 − ǫ)-approximation for MAkHC when G is an unweighted planar graph, then P = NP.

Proof. We present a reduction from Vertex Cover (VC), whose task is to find a subset of k vertices that contains at least one endpoint of every edge of the graph. More specifically, we consider a particular version of the problem.

Claim. Vertex Cover is NP-hard even if the input graph is planar, triangle-free and has maximum degree 3.

Proof. We self-reduce the problem from the case in which the input graph is planar and with maximum degree 3, which is known to be NP-hard [21]. Given an instance (G, k) of vertex cover, create another instance (G′, k′), where G′ is obtained by subdividing each edge of G in three parts, and k′ = k + |E(G)|. Let u_e and v_e be the new vertices added for the subdivision of an edge e = (u, v) ∈ E(G) that are incident with u and v, respectively. Assume S is a vertex cover for G with size k, and build a vertex cover S′ for G′ as follows. Initialize S′ with a copy of S and, for each edge e = (u, v) of G, add v_e to S′ if u ∈ S, and add u_e otherwise. Note that S′ is a vertex cover of G′ of size k′. For the other direction, assume S′ is a vertex cover of G′ with size k′, and define S = S′ \ {u_e, v_e : e ∈ E(G)}. If, for some edge (u_e, v_e) of G′, both u_e and v_e are in S′, then S′ \ {u_e} ∪ {u} is a vertex cover of G′. Thus, assume that for every such edge (u_e, v_e), either u_e or v_e is in S′. It follows that S is a vertex cover of G of size k. ⊓⊔

Given an instance (G, k) of VC, build an instance (G, C, H, D, k) of MAkHC, where C = H = V(G) and D = E(G). Observe that there exists a vertex cover S of size k in G if, and only if, the solution S for MAkHC has value 1. Suppose that the optimal value is greater than 1; then it would have to be at least 3, since the graph has no triangles. Then, for ǫ > 0, a (3 − ǫ)-approximation for MAkHC can decide whether the optimal value is 1, thus deciding whether there is a vertex cover of size k in G. ⊓⊔

From this reduction, one may observe that the previous theorem holds even for the case where the maximum degree is 3 and the optimal value is bounded by 3. To find a better approximation guarantee, one might resort to a parameterized approximation algorithm. The natural candidates for parameters of MAkHC are the number of hubs k and the value r of an optimal solution. The next theorem states that this choice of parameters does not help, as it is W[2]-hard to find a parameterized approximation with factor better than 3, when the parameter is k, the value r is bounded by a constant and G is unweighted.

Theorem 3. For ǫ > 0, there is no parameterized (3 − ǫ)-approximation for MAkHC with parameter k, unless FPT = W[2]. This holds even for the particular case of MAkHC with instances I such that OPT(I) ≤ 6.

Proof. The theorem will follow by a reduction from Hitting Set (HS), which is known to be W[2]-hard [15]. We show that a (3 − ǫ)-approximation for MAkHC can decide the instance of HS, implying that FPT = W[2]. Remember that in HS, we are given a set U, a family of sets F ⊆ 2^U and an integer k, and the objective is to decide whether there exists a set H ⊆ U of size k that intersects every set of F. Given an instance I = (U, F, k) of HS, we build an instance I′ = (G, C, H, D, k) of MAkHC: for each element e ∈ U, create a vertex h_e in G and add it to H; for each set S ∈ F, create vertices u_S and v_S in G, add them to C, create a demand (u_S, v_S) in D and connect u_S and v_S to the vertices {h_e : e ∈ S}.

Consider a hitting set H of size k, and let H′ = {h_e : e ∈ H} be a set of hubs of size k. This set of hubs satisfies every demand in D with cost 2, since for every S ∈ F, there is e ∈ S ∩ H and thus h_e ∈ H′. In the other direction, consider a set of hubs H′ of size k that satisfies every demand in D with cost 2, and let H = {e : h_e ∈ H′} be a set of elements of size k. For each set S ∈ F, there exists a corresponding demand (u_S, v_S) in D that is satisfied by a hub h_e ∈ H′ with cost 2. Since the length of this path is 2, h_e must be a neighbor of u_S and v_S in G, so e ∈ S ∩ H. It follows that H is a hitting set for I.

We have shown that I is a yes-instance if, and only if, the optimal value of I′ is 2. Now, if the optimal value of I′ is greater than 2, then it would have to be at least 6. Indeed, if a demand (u_S, v_S) is satisfied by a hub h_e ∈ H′ with cost greater than 2, then h_e is not a neighbor of u_S. But G is bipartite and u_S and h_e are in different parts, so d(u_S, h_e) ≥ 3. Analogously, we have d(v_S, h_e) ≥ 3, and thus d(u_S, h_e) + d(v_S, h_e) ≥ 6. We conclude that a (3 − ǫ)-approximation can decide whether the optimal value of I′ is 2, thus deciding whether I is a yes-instance. ⊓⊔
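The construction in this proof is mechanical; a minimal sketch of it follows (vertex labels and the adjacency-set representation are ours):

```python
def hitting_set_to_makhc(universe, family, k):
    """Build the MAkHC instance from a Hitting Set instance (U, F, k),
    following the reduction above.  Returns the unweighted graph as an
    adjacency-set dict over string-labelled vertices, plus the sets of
    clients and hub locations, the demand list, and k."""
    graph = {}
    def add_edge(u, v):
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)

    hubs = {f"h_{e}" for e in universe}
    clients, demands = set(), []
    for idx, S in enumerate(family):
        u, v = f"u_{idx}", f"v_{idx}"
        clients |= {u, v}
        demands.append((u, v))
        for e in S:           # connect both clients to every h_e, e in S
            add_edge(u, f"h_{e}")
            add_edge(v, f"h_{e}")
    return graph, clients, hubs, demands, k
```

A hitting set of size k exists exactly when the resulting instance has optimal value 2, as argued in the proof.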
Due to the previous hardness results, a parameterized algorithm for MAkHC must consider different parameters, or assume a particular case of the problem. In this paper, we focus on the treewidth of the graph, which is one of the most studied structural parameters [13], and on the particular case of planar graphs. This setting is unlikely to lead to an (exact) FPT algorithm, though, as the problem is W[1]-hard, even if we combine these conditions. The next theorem follows directly from a result of Blum [5], since MAkHC is a generalization of k-Center.

Theorem 4. MAkHC is W[1]-hard when parameterized by a combination of k, the highway and skeleton dimensions and the pathwidth of the graph, even if restricted to planar graphs of constant doubling dimension.

Recall that the treewidth is a lower bound on the pathwidth, thus the previous theorem implies that the problem is also W[1]-hard for planar graphs when parameterized by a combination of k and tw. To circumvent these hardness results, in Section 4, we give a (2 + ǫ)-approximation algorithm for MAkHC for arbitrary graphs that is parameterized by tw, breaking the approximation barrier of 3. In Appendix A, we complement this result with a (2 + ǫ)-approximation for unweighted planar graphs parameterized by k and r.

The algorithm

In this section, we give a (2 + ǫ)-approximation parameterized only by the treewidth. In what follows, we assume that we receive a preprocessed instance of MAkHC and a nice tree decomposition of the input graph G with width tw and height bounded by O(tw · log |V(G)|). Also, we assume that G contains all edges connecting pairs u, v ∈ X_t for each node t (with weight d(u, v), so distances are preserved). Moreover, we are given an integer r bounded by O((1/ǫ)|V(G)|). Our goal is to design a dynamic programming algorithm that computes the minimum number of hubs that satisfy each demand with a path of length at most r. The overall idea is similar to that of the algorithm for k-Center by Demaine et al. [14], except that we consider a tree decomposition, instead of a branch decomposition, and that the computed solution will satisfy demands only approximately.

Consider some fixed global optimal solution H* and a node t of the tree decomposition. Let us discuss possible candidates for a subproblem definition. The subgraph G_t corresponding to t in the decomposition contains a subset of H* that satisfies a subset D*_t of the demands. The shortest path serving each demand with a hub of H* ∩ V(G_t) is either completely contained in G_t, or it must cross some vertex of the bag X_t. Thus, as in [14], we guess the distance i from each vertex u in X_t to the closest hub in H*, and assign "color" ↓i to u to mean that the corresponding shortest path is in G_t, and color ↑i to mean otherwise.

Since the number of demands may be large, we cannot include D*_t as part of the subproblem definition. For k-Center, if the shortest path serving a vertex in G_t crosses a vertex u ∈ X_t, then the length of this path can be bounded locally using the color of u, and the subproblem definition may require serving all vertices. For MAkHC, however, there might be demands (a, b) such that a is in G_t, while b is not, thus the coloring of X_t is not sufficient to bound the length of a path serving (a, b). Instead of guessing D*_t, for each coloring c of X_t, we require that only a subset D_t(c) must be satisfied in the subproblem, and that each such demand be satisfied by a path of length at most 2r.
Later, we show that the other demands in D*_t are already satisfied by the hubs corresponding to the coloring of X_t. More specifically, we would like to compute A_t(c) as the minimum number of hubs in G_t that satisfy each demand in D_t(c) with a path of length at most 2r and that respect the distances given by c. Since we preprocessed the graph in Section 2, there must be a hub in H* at distance at most r from each vertex of X_t. Thus, the number of distinct colorings to consider for each t is bounded by r^{O(tw)}. To get an algorithm parameterized only by tw, we need one more ingredient: in the following, the value of each color is stored approximately, as an integer power of (1 + δ), for some δ > 0. Later, using the framework of approximate addition trees, for any constant ǫ > 0, we can set δ such that the number of subproblems is bounded by O*((tw/ǫ)^{O(tw)}), and demands are satisfied by a path of length at most (1 + ǫ)2r.

The set of approximate colors is Σ = {↓i, ↑i : i = 0 or i is an integer power of (1 + δ)}. A coloring of X_t is represented by a function c : X_t → Σ. For each coloring c, we compute a set of demands that are "satisfied" by c:

S_t(c) = {(a, b) ∈ D : there is u ∈ X_t with c(u) ∈ {↓i, ↑i} and d(a, u) + 2i + d(u, b) ≤ (1 + ǫ)2r}.

The intuition is that a demand (a, b) ∈ S_t(c) can be satisfied by a hub close to such a vertex u by a path of length at most (1 + ǫ)2r. Also, we compute a set of demands that must be served by a hub in G_t by the global optimal solution: D_t(c) is the set of demands (a, b) ∉ S_t(c) such that either a, b ∈ V(G_t), or exactly one of a and b is in V(G_t) and there exists h ∈ V(G_ab) ∩ V(G_t) with d(h, V(G_ab) ∩ X_t) > r/2. We will show in Lemmas 4 and 5 that D_t(c) ⊆ D*_t ⊆ D_t(c) ∪ S_t(c), thus we only need to take care of demands in D_t(c) in the subproblem.

Formally, for each node t of the tree decomposition and coloring c of X_t, our algorithm computes a number A_t(c) and a set of hubs H ⊆ H ∩ V(G_t) of size A_t(c) that satisfies the conditions below.

(C1) For every u ∈ X_t, if c(u) = ↓i, then there exists h ∈ H and a shortest path P from u to h of length at most i such that V(P) ⊆ V(G_t);
(C2) For every (a, b) ∈ D_t(c), min_{h ∈ H} d(a, h) + d(h, b) ≤ (1 + ǫ)2r.

If the algorithm does not find one such set, then it assigns A_t(c) = ∞. We describe next how to compute A_t(c) for each node type.

For a leaf node t, we have V(G_t) = ∅, then H = ∅ satisfies the conditions, and we set A_t(c_∅) = 0, where c_∅ denotes the empty coloring.

For an introduce node t with child t′, let u be the introduced vertex, such that X_t = X_{t′} ∪ {u}. Let I_t(c) be the set of colorings c′ of X_{t′} such that c′ is the restriction of c to X_{t′} and, if c(u) = ↓i for some i > 0, there is v ∈ X_{t′} with c′(v) = ↓j such that i = d(u, v) ⊕ j. Note that this set is either a singleton or is empty. If I_t(c) is empty, discard c. Define A_t(c) = A_{t′}(c′) + 1 if c(u) = ↓0, and A_t(c) = A_{t′}(c′) otherwise, where c′ is the element of I_t(c). We output as solution the set H = H′ ∪ {u} if c(u) = ↓0, and H = H′ otherwise, where H′ corresponds to the solution of the subproblem in t′.

For a forget node t with child t′, let u be the forgotten vertex, such that X_t = X_{t′} \ {u}. Let F_t(c) be the set of colorings c′ of X_{t′} whose restriction to X_t is c, and define A_t(c) = min_{c′ ∈ F_t(c)} A_{t′}(c′). We output as solution the set H = H′, where H′ corresponds to the solution of the selected subproblem in t′.

For a join node t with children t′ and t′′, we have X_t = X_{t′} = X_{t′′}. Let J_t(c) be the set of pairs of colorings (c′, c′′) of X_t such that, for every u ∈ X_t, when c(u) = ↓i, then c′(u) = ↓i or c′′(u) = ↓i, and when c(u) = ↑i, then c′(u) = c′′(u) = ↑i. Define A_t(c) = min_{(c′,c′′) ∈ J_t(c)} A_{t′}(c′) + A_{t′′}(c′′) − h(c), where h(c) is the number of vertices u in X_t such that c(u) = ↓0, as such hubs are counted in both children. We output a solution H = H′ ∪ H′′, where H′ and H′′ are the solutions corresponding to t′ and t′′, respectively.
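Because parts of the definitions above had to be reconstructed from the proofs below, the following sketch reflects our reading of them. Colorings are encoded as dicts mapping a bag vertex to a pair (kind, value); d and oplus are assumed callables (e.g., the ⊕ sketch given earlier):

```python
UP, DOWN = "up", "down"   # stand-ins for the colors ↑i and ↓i

def introduce_consistent(c, c_child, u, d, oplus):
    """Consistency test in the spirit of I_t(c) for an introduce node
    with new vertex u: c restricted to the child's bag must equal
    c_child, and a color ↓i with i > 0 must be certified by some v
    with c_child(v) = ↓j and i = d(u, v) ⊕ j."""
    if any(c[v] != cv for v, cv in c_child.items()):
        return False
    kind, i = c[u]
    if kind == DOWN and i > 0:
        return any(kj == DOWN and i == oplus(d(u, v), j)
                   for v, (kj, j) in c_child.items())
    return True

def in_S_t(demand, coloring, d, r, eps):
    """Membership test for S_t(c) as reconstructed in the text: some
    bag vertex u with color value i yields a walk a-u-hub-u-b of
    length d(a, u) + 2*i + d(u, b) <= (1 + eps) * 2 * r."""
    a, b = demand
    return any(d(a, u) + 2 * i + d(u, b) <= (1 + eps) * 2 * r
               for u, (_, i) in coloring.items())
```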
In the next lemma, we show that the algorithm indeed produces a solution of bounded size that satisfies both conditions.

Lemma 1. For every node t and coloring c of X_t with A_t(c) < ∞, the algorithm outputs a set of hubs H ⊆ H ∩ V(G_t) of size A_t(c) that satisfies conditions (C1) and (C2).

Proof. We prove the lemma by induction on the height of node t, thus assume the lemma holds for nodes below t. For leaves, the algorithm outputs an empty set, satisfying both conditions.

For an introduce node t with child t′ and u ∈ X_t \ X_{t′}, let H′ be the solution corresponding to t′ with coloring c′. Since c′ is the restriction of c to X_{t′}, condition (C1) is satisfied for every v ∈ X_{t′}, by induction. If c(u) = ↓0, then it is satisfied for u, since, in this case, u ∈ H. Else, if c(u) = ↓i for i > 0, then it is also satisfied, since in this case there is v ∈ X_{t′} with c′(v) = ↓j and i = d(u, v) ⊕ j; by induction, some h ∈ H is reached from v by a shortest path of length at most j contained in G_{t′}, so the path from u through v to h has length at most d(u, v) + j ≤ i and is contained in G_t.

For a forget node t with child t′ and u ∈ X_{t′} \ X_t, we have that D_t(c) ⊆ D_{t′}(c′) ∪ S_{t′}(c′) for the coloring c′ ∈ F_t(c) selected by the algorithm. Let (a, b) ∈ D_t(c). If (a, b) ∈ D_{t′}(c′), then this demand is satisfied by H with cost at most (1 + ǫ)2r, by induction. Else, (a, b) ∈ S_{t′}(c′), but (a, b) ∉ S_t(c). Thus, for the forgotten vertex u, we have c′(u) ∈ {↑i, ↓i} and d(a, u) + 2i + d(u, b) ≤ (1 + ǫ)2r. We consider two cases:

- If c′(u) = ↓i, then, since H satisfies (C1), there is h ∈ H such that the distance from u to h is at most i. Thus condition (C2) is satisfied, because min_{h′ ∈ H} d(a, h′) + d(h′, b) ≤ d(a, u) + d(u, h) + d(h, u) + d(u, b) ≤ d(a, u) + 2i + d(u, b) ≤ (1 + ǫ)2r.

- If c′(u) = ↑i, then the path certifying the color of u leaves G_{t′} and crosses some vertex v ∈ X_t with c(v) ∈ {↓j, ↑j}, and d(a, v) + 2j + d(v, b) ≤ d(a, u) + d(u, v) + 2j + d(v, u) + d(u, b) ≤ d(a, u) + 2i + d(u, b) ≤ (1 + ǫ)2r, where we used d(u, v) + j ≤ d(u, v) ⊕ j = i in the second inequality. But this means that (a, b) ∈ S_t(c), which is a contradiction.

For a join node t with children t′ and t′′, let H′ and H′′ be solutions for the subproblems at t′ and t′′ corresponding to the selected pair of colorings c′ and c′′. We claim that H = H′ ∪ H′′ satisfies both conditions. For (C1), note that if c(u) = ↓i, then c′(u) = ↓i or c′′(u) = ↓i, and, by induction, there is a shortest path of length at most i from u to a hub of H′ or H′′ contained in G_{t′} or G_{t′′}, and thus in G_t. For (C2), each (a, b) ∈ D_t(c) belongs to D_{t′}(c′) ∪ S_{t′}(c′) and to D_{t′′}(c′′) ∪ S_{t′′}(c′′), and thus this demand is satisfied with cost at most (1 + ǫ)2r by a vertex in H′ or H′′. ⊓⊔

Let t_0 be the root of the tree decomposition and c_∅ be the empty coloring. Since the bag corresponding to the root node is empty, we have S_{t_0}(c_∅) = ∅ and thus D_{t_0}(c_∅) = D. Therefore, if A_{t_0}(c_∅) ≤ k, Lemma 1 implies that the set of hubs H computed by the algorithm is a feasible solution that satisfies each demand with cost at most (1 + ǫ)2r. In the next section, we bound the size of H by the size of the global optimal solution H*.

Analysis

For each node t of the tree decomposition, we want to show that the number of hubs computed by the algorithm for some coloring c of X_t is not larger than the number of hubs of H* contained in G_t, that is, we would like to show that A_t(c) ≤ |H* ∩ V(G_t)| for some c. If the distances from each vertex u ∈ X_t to its closest hub in H* were stored exactly, then the partial solution corresponding to H* would induce one such coloring c*_t, and we could show the inequality for this particular coloring. More precisely, for each u ∈ V(G), let h*(u) be a hub of H* such that d(u, h*(u)) is minimum and let P*(u) be a corresponding shortest path. Assume that each P*(u) is obtained from a shortest path tree to h*(u) and that it has the minimum number of edges among the shortest paths. The signature of H* corresponding to a partial solution in G_t is a function c*_t on X_t such that c*_t(u) = ↓d(u, h*(u)) if V(P*(u)) ⊆ V(G_t), and c*_t(u) = ↑d(u, h*(u)) otherwise.

Since distances are stored approximately, as integer powers of (1 + δ), the function c*_t might not be a valid coloring. Instead, we show that the algorithm considers a coloring c̃_t with roughly the same values as c*_t and that its values are computed by approximate addition trees. We say that an addition tree and an approximate addition tree are corresponding if they are isomorphic and have the same input values. Also, recall that a coloring c of X_t is discarded by the algorithm if the set I_t(c), F_t(c) or J_t(c) corresponding to t is empty.

Lemma 2. Let ℓ_{t_0} be the height of the tree decomposition.
There exists a coloring c̃_t that is not discarded by the algorithm and such that, for every u ∈ X_t, the values c*_t(u) and c̃_t(u) are computed, respectively, by an addition tree and a corresponding approximate addition tree of height at most 2ℓ_{t_0}.

Proof. A partial addition tree is a pair (T, p), where T is an addition tree and p is a leaf of T. The vertex p represents a subtree that computes a pending value x_p, and may be replaced by some other (partial) addition tree that computes this value. For some node t, let ℓ_t be the height of t and define U_t as the set of vertices u ∈ X_t such that c*_t(u) = ↑i for some i. We say that a vertex v ∈ V(G_t) \ U_t is t-complete according to the following cases: if V(P*(v)) ⊆ V(G_t) and v ∈ X_t, then d(v, h*(v)) is computed by an addition tree of height at most ℓ_t; if V(P*(v)) ⊆ V(G_t) and v ∉ X_t, then d(v, h*(v)) is computed by an addition tree of height at most 2ℓ_t; and if V(P*(v)) is not contained in V(G_t), then d(v, h*(v)) is computed by a partial addition tree (T, p) of height at most ℓ_t such that x_p = d(w, h*(w)) for some w ∈ U_t.

We will show by induction on the height of t that every v ∈ V(G_t) \ U_t is t-complete. The claim holds trivially for leaves, thus suppose that t is not a leaf.

Assume t is an introduce node with child t′, and let u be the introduced vertex. Every v ≠ u is t-complete by the induction hypothesis, so consider v = u with u ∉ U_t. If d(u, h*(u)) = 0, the claim is trivial; otherwise, since every neighbor of u in G_t belongs to X_{t′}, the path P*(u) crosses some vertex w ∈ X_{t′} with d(u, h*(u)) = d(u, w) + d(w, h*(w)). Since d(w, h*(w)) can be computed by an addition tree of height at most ℓ_{t′}, this implies that d(v, h*(v)) can be computed by an addition tree of height at most ℓ_{t′} + 1 ≤ ℓ_t.

Now, assume t is a forget node with child t′, and let u be the forgotten vertex. If V(P*(v)) ⊆ V(G_{t′}), then v is t-complete by the induction hypothesis. Otherwise, by the induction hypothesis, d(v, h*(v)) is computed by a partial addition tree (T, p) of height at most ℓ_{t′} such that x_p = d(w′, h*(w′)) for some w′ ∈ U_{t′}. If w′ ∈ U_t, then v is t-complete. So, assume w′ ∈ U_{t′} \ U_t, which implies that w′ is the forgotten vertex u and c*_{t′}(u) = ↑d(u, h*(u)). Thus, P*(u) crosses some vertex w ∈ U_t such that d(u, h*(u)) = d(u, w) + d(w, h*(w)). It follows that d(u, h*(u)) can be computed by a partial addition tree (T_u, p_u) of height 1 such that x_{p_u} = d(w, h*(w)). Therefore, we can replace the vertex p by the subtree T_u, and the height of T becomes at most ℓ_{t′} + 1 ≤ ℓ_t.

Finally, assume t is a join node with children t′ and t′′, and recall that V(G_t) = V(G_{t′}) ∪ V(G_{t′′}). Consider v ∈ V(G_{t′}) \ U_t, as the case v ∈ V(G_{t′′}) \ U_t is analogous. If V(P*(v)) ⊆ V(G_{t′}), then v is t-complete by the induction hypothesis, because X_t induces a clique and P*(v) is a shortest path with a minimum number of edges. Otherwise, by the induction hypothesis for t′, d(v, h*(v)) is computed by a partial addition tree (T′, p) of height at most ℓ_{t′} such that x_p = d(w, h*(w)) for some w ∈ U_{t′}. If w ∈ U_t, then v is t-complete. Otherwise, V(P*(w)) ⊆ V(G_t) but P*(w) is not contained in G_{t′}; again because X_t induces a clique and P*(w) has a minimum number of edges, V(P*(w)) ⊆ V(G_{t′′}). It follows that c*_{t′′}(w) = ↓d(w, h*(w)). By the induction hypothesis for t′′, d(w, h*(w)) is computed by an addition tree T′′ of height at most ℓ_{t′′}. Therefore, we can replace the vertex p by the subtree T′′, and the height of T′ becomes at most ℓ_{t′} + ℓ_{t′′} ≤ 2ℓ_t. This completes the induction.

For the root node t_0, we have X_{t_0} = ∅, thus for every v ∈ V(G), the distance d(v, h*(v)) is computed by an addition tree T_v of height at most 2ℓ_{t_0}. Let T̃_v be the approximate addition tree corresponding to T_v, and define d̃(v) as the output of T̃_v. For every node t and u ∈ X_t, if c*_t(u) = ↓d(u, h*(u)), define c̃_t(u) = ↓d̃(u); else, define c̃_t(u) = ↑d̃(u). By repeating the arguments above, and replacing the addition operator by ⊕, one can show that, for every t, the coloring c̃_t is not discarded by the algorithm. ⊓⊔
Recall that H* is a fixed global optimal solution that satisfies each demand with cost r. Our goal is to bound A_t(c̃_t) ≤ |H* ∩ V(G_t)| for every node t, thus we would like to determine the subset of demands D*_t that are necessarily satisfied by hubs of H* ∩ V(G_t) in the subproblem definition. This is made precise in the following: define D*_t as the set of demands (a, b) ∈ D such that min_{h ∈ H* \ V(G_t)} d(a, h) + d(h, b) > r, that is, the demands that are satisfied with cost r only by hubs of H* in G_t.

Since the algorithm cannot determine D*_t, we show that, for each node t, it outputs a solution H for the subproblem corresponding to A_t(c̃_t) that satisfies every demand in D_t(c̃_t). In Lemma 4, we show that every demand in D_t(c̃_t) is also in D*_t, as, otherwise, there could be no solution of size bounded by |H* ∩ V(G_t)|. Conversely, we show in Lemma 5 that a demand in D*_t that is not in D_t(c̃_t) must be in S_t(c̃_t), thus all demands are satisfied.

Lemma 4. D_t(c̃_t) ⊆ D*_t.

Proof. Let (a, b) ∈ D_t(c̃_t) and consider an arbitrary hub h* ∈ H* that satisfies (a, b) with cost r. We will show that h* ∈ V(G_t), and thus (a, b) ∈ D*_t. For the sake of contradiction, assume that h* ∈ V(G) \ V(G_t).

First we claim that d(h*, V(G_ab) ∩ X_t) > r/2. If not, then let u ∈ V(G_ab) ∩ X_t be a vertex with c̃_t(u) ∈ {↑i, ↓i} such that d(u, h*) ≤ r/2. Because the closest hub to u has distance at least i/(1 + ǫ), we have i ≤ (1 + ǫ)d(u, h*) ≤ (1 + ǫ)r/2, but since u ∈ V(G_ab), this implies that (a, b) ∈ S_t(c̃_t), and thus (a, b) ∉ D_t(c̃_t). Then, it follows that indeed d(h*, V(G_ab) ∩ X_t) > r/2.

Now we show that it cannot be the case that a, b ∈ V(G_t). Suppose that a, b ∈ V(G_t). Consider the shortest path from a to h*, and let u be the last vertex of this path that is in V(G_t). Since X_t separates V(G_t) \ X_t from V(G) \ V(G_t), it follows that u ∈ X_t. From the previous claim, d(h*, u) > r/2, and thus d(h*, a) > r/2. Analogously, d(h*, b) > r/2, but then d(a, h*) + d(h*, b) > r, which contradicts the fact that h* satisfies (a, b) with cost r. This contradiction comes from supposing that a, b ∈ V(G_t). Thus, either a or b is not in V(G_t).

Assume without loss of generality that a ∈ V(G_t) and b ∉ V(G_t). From the definition of D_t(c̃_t), we know that there exists h ∈ V(G_ab) ∩ V(G_t) such that d(h, V(G_ab) ∩ X_t) > r/2. Let P be a path from a to b crossing h* with length at most r. Similarly, since h ∈ V(G_ab), there exists a path Q from a to b crossing h with length at most r. Let u be the last vertex of P with u ∈ X_t, and let v be the last vertex of Q with v ∈ X_t (see Figure 2). Concatenating P and Q leads to a closed walk of length at most 2r. This walk crosses u, h*, v and h, and thus

2r ≥ d(u, h*) + d(h*, v) + d(v, h) + d(h, u) > 4 · (r/2) = 2r,   (1)

where we used the fact that each term in (1) is greater than r/2. This is a contradiction, so h* ∈ V(G_t) and then (a, b) ∈ D*_t. ⊓⊔

Fig. 2. Closed walk formed by P and Q.

Lemma 5. D*_t ⊆ D_t(c̃_t) ∪ S_t(c̃_t).

Proof. Let (a, b) ∈ D*_t. Assume (a, b) ∉ S_t(c̃_t), as otherwise we are done. If a, b ∈ V(G_t), we have (a, b) ∈ D_t(c̃_t). Thus, suppose without loss of generality that a ∈ V(G_t) and b ∉ V(G_t), and consider any vertex u ∈ V(G_ab) ∩ X_t, with c̃_t(u) ∈ {↑i, ↓i}. Since u ∈ V(G_ab) and (a, b) ∉ S_t(c̃_t), we have i > (1 + ǫ)r/2. But the distance from u to the closest hub in H* is at least i/(1 + ǫ), thus it is greater than r/2; in particular, d(u, h) > r/2 for every hub h ∈ H*. Now, since (a, b) ∈ D*_t, there is a hub h* ∈ H* ∩ V(G_t) that satisfies (a, b) with cost r, and such a hub belongs to V(G_ab). Therefore, d(h*, V(G_ab) ∩ X_t) > r/2, which implies (a, b) ∈ D_t(c̃_t). ⊓⊔

Before bounding the number of hubs opened by the algorithm, we prove some auxiliary results relating the sets D*_t, D_t(c̃_t) and S_t(c̃_t) across the nodes of the decomposition. For an introduce node t with child t′ and introduced vertex u, consider a demand (a, b) ∈ D*_t \ D*_{t′}: by definition, we know that min_{h ∈ H* \ V(G_t)} d(a, h) + d(h, b) > r, but min_{h ∈ H* \ V(G_{t′})} d(a, h) + d(h, b) ≤ r. Since V(G_t) \ V(G_{t′}) = {u}, this can only happen if u ∈ H*, so c̃_t(u) = ↓0, and then (a, b) ∈ S_t(c̃_t).
The bound A_t(c_t) ≤ |H* ∩ V(G_t)| (Lemma 9) is proved by induction over the nodes of the tree decomposition. Let t be a forget node with child t′ and u ∈ X_{t′} \ X_t. From Lemmas 2 and 7, we know that c̃_{t′} ∈ F_t(c_t) and that every demand in D_t(c_t) is handled by the subproblem at t′; since V(G_t) = V(G_{t′}), the bound follows from the induction hypothesis. Let t be a join node with children t′ and t″. From Lemmas 2 and 8, we know that the pair of colorings (c̃_{t′}, c̃_{t″}) is considered by the algorithm when computing A_t(c_t), and that every demand in D_t(c_t) is covered by one of the two subproblems. Let H′ and H″ be the output solutions corresponding to t′ and t″, respectively. We have that H′ ∪ H″ witnesses the required bound on A_t(c_t).

Now we can state the main result.

Theorem 5. For every ǫ > 0, there is a parameterized (2 + ǫ)-approximation algorithm for MAkHC when the parameter is the treewidth of the input graph.

Proof. Consider a preprocessed instance (G, C, H, D, k) of MAkHC, in which the optimal value OPT is an integer bounded by O((1/ǫ)|V(G)|). We run the dynamic programming algorithm for each r = 1, 2, . . . , and output the first solution with no more than k hubs. Next, we show that the dynamic programming algorithm either correctly decides that there is no solution of cost r that opens k hubs, or finds a solution of cost (1 + ǫ)2r that opens k hubs. Thus, when the main algorithm stops, r ≤ OPT, and the output is a (2 + ǫ′)-approximation, for a suitable ǫ′.

Assume H* is a solution that satisfies each demand with cost r with minimum size. Recall t_0 is the root of the tree decomposition and c_∅ is the coloring of an empty bag. If A_{t_0}(c_∅) ≤ k, then Lemma 1 states that the dynamic programming algorithm outputs a set of hubs H of size at most k that satisfies each demand in D_{t_0}(c_∅) = D with cost (1 + ǫ)2r. Otherwise, k < A_{t_0}(c_∅), and Lemma 9 implies k < A_{t_0}(c_∅) ≤ |H* ∩ V(G_{t_0})| = |H*|. Thus, by the minimality of H*, there is no solution of cost r that opens k hubs.

Final remarks

Our parameterized (2 + ǫ)-approximation algorithm circumvents hardness barriers coming from both classical and parameterized complexity theories. Improving on the 3-approximation is NP-hard and, as we note, W[2]-hard even if we take r as a constant and parameterize by k. Thus, since we drop k as parameter and take r as part of the input, parameterizing by treewidth is a necessary condition for the algorithm to break the 3-approximation lower bound. Approximating is also necessary, as the problem on planar graphs is W[1]-hard for pathwidth and several other parameters. These results are analogous to k-Center, which has a 2-approximation lower bound and does not admit an FPT algorithm. Unlike k-Center, however, we leave open whether MAkHC admits an EPAS when parameterized by treewidth. The challenge seems to be the non-locality of the paths serving the demands, thus established techniques are not sufficient to tackle this issue. In this paper, we show how to compute a special subset of demands that must be served locally for each subproblem. We hope this technique may be of further interest. A possible direction of research is to consider the single allocation variant in the two-stop model, which is a well-studied generalization of MAkHC [16,3].

A The planar case

In this section, we give a (2 + ǫ)-approximation algorithm parameterized by k and r, when the input is restricted to unweighted planar graphs. This algorithm can be seen as another way to challenge the approximation lower bound presented in Section 3. Indeed, by Theorem 3, finding a (3 − ǫ)-approximation parameterized by k and r is W[2]-hard for unweighted graphs, even when r is a constant. Thus, we restrict the input to planar graphs, but get a better approximation factor. The algorithm is built upon the bidimensionality framework and follows the arguments for k-Center by Demaine et al. [14]. In the following, let (G, C, H, D, k, r) be a positive instance of MAkHC such that G is an unweighted planar graph.

Lemma. If G contains a ρ × ρ grid F as a minor, then k ≥ ((ρ − 2r)/(2r + 1))².

Proof. We begin with a series of definitions.
Let V_ext be the set of vertices of F whose degrees are smaller than 4. We assume the vertices of V_ext belong to the external face of some embedding of F and call the other faces internal. Let V_int be the set of vertices of F that have distance at least r from every vertex in V_ext. Note that V_int induces a subgraph F[V_int] that is a subgrid of F with |V_int| = (ρ − 2r)². Let J be the graph obtained from F by adding, for each internal face, the edges between its non-adjacent vertices, write δ(u, v) for the length of a shortest path from u to v in J, and define B_ℓ(u) = {v : δ(u, v) ≤ ℓ}. Let R be a subgraph of J and let d_R(u, v) be the length of a shortest path from u to v in R. Observe that, for every u, v ∈ V(R), we have δ(u, v) ≤ d_R(u, v). Define N^ℓ_R(u) = {v : d_R(u, v) ≤ ℓ}.

Now, consider a sequence of edge contractions and removals which transforms G into a minor isomorphic to F using a maximal number of edge contractions. Let H be the result of applying only the contractions of that sequence to G, and consider an embedding of H in the plane that corresponds to an embedding of F. Partition the edges of H in three sets: the edges that occur in F, the set E_1 that connect non-adjacent vertices of an internal face of F, and the set E_2 with all other edges. Note that edges in E_2 are only incident with vertices in V_ext. Call R the graph we obtain by adding edges E_1 to F, and note that R is a subgraph of J. Then, for a vertex u of R and an integer ℓ, we have that N^ℓ_R(u) ⊆ B_ℓ(u). Observe that the set of edges of H is E(R) ∪ E_2. For a vertex u ∈ V_int, we claim that N^r_H(u) ⊆ B_r(u). This holds because paths of length at most r starting at a vertex of V_int do not use edges of E_2 and, as a consequence, N^r_H(u) = N^r_R(u).

Let S be a solution for the instance of MAkHC. Observe that the distance between every client and a hub of S is at most r, since every vertex is in some set V(G_ab). Also, note that, for vertices u and v of G associated with vertices u′ and v′ of H, d_H(u′, v′) ≤ d_G(u, v), as H is obtained from G using only edge contractions. Take Y ⊆ V_int as a set of vertices that are pairwise at δ-distance greater than 2r, obtained by picking every (2r + 1)-th row and column of the subgrid F[V_int]. The size of this set is

|Y| ≥ ((ρ − 2r)/(2r + 1))².

For distinct y, y′ ∈ Y, we have B_r(y) ∩ B_r(y′) = ∅. Also, there must exist a hub in S that is associated with some vertex in N^r_H(y) ⊆ B_r(y). Therefore, each y ∈ Y is associated to one unique hub in S, and finally,

k ≥ |Y| ≥ ((ρ − 2r)/(2r + 1))². ⊓⊔

Using the previous bound and Theorem 5, we get the main result of this section.

Theorem 6. For every ǫ > 0, there is a parameterized (2 + ǫ)-approximation algorithm for MAkHC when the parameters are k and r, and the input graph is unweighted and planar.

Notice that a version of the dynamic programming algorithm presented in Section 4 that stores distances exactly is a 2-approximation parameterized by tw and r. Thus, for MAkHC with unweighted planar graphs, there is actually a 2-approximation algorithm parameterized by k and r.
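The packing bound at the heart of the lemma is simple arithmetic, and its contrapositive is what drives the bidimensional algorithm. A small sketch (the helper name is ours, not the paper's):

```python
def grid_lower_bound(rho, r):
    # Packing bound from the lemma: if G has a rho x rho grid minor and
    # some solution serves every demand within radius r with k hubs, then
    # k >= ((rho - 2r) / (2r + 1))^2.  Integer floor keeps it conservative.
    side = (rho - 2 * r) // (2 * r + 1)
    return max(side, 0) ** 2

# Contrapositive use: if this bound already exceeds the budget k, the
# instance is a "no"; otherwise the grid minor, and hence the treewidth,
# is bounded in terms of k and r, and the DP of Theorem 5 applies.
assert grid_lower_bound(rho=100, r=2) == (96 // 5) ** 2
```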
Large-N analysis of (2+1)-dimensional Thirring model

We analyze $(2+1)$-dimensional vector-vector type four-Fermi interaction (Thirring) model in the framework of the $1/N$ expansion. By solving the Dyson-Schwinger equation in the large-$N$ limit, we show that in the two-component formalism the fermions acquire parity-violating mass dynamically in the range of the dimensionless coupling $\alpha$, $0 \leq \alpha \leq \alpha_c \equiv {1\over16} {\rm exp} (- {N \pi^2 \over 16})$. The symmetry breaking pattern is, however, in a way to conserve the overall parity of the theory such that the Chern-Simons term is not induced at any orders in $1/N$. $\alpha_c$ turns out to be a non-perturbative UV-fixed point in $1/N$. The $\beta$ function is calculated to be $\beta (\alpha) = -2 (\alpha - \alpha_c)$ near the fixed point, and the UV-fixed point and the $\beta$ function are shown exact in the $1/N$ expansion.

Recently, there has been a resurgence of interest in the four-Fermi interaction, partly due to the extraordinary heaviness of the top quark compared to other quarks and leptons [1]. One of the key ideas in this approach is that the four-Fermi interaction, introduced in the standard electro-weak theory as a low-energy effective interaction, becomes a relevant operator as the ultraviolet cutoff, Λ, goes to ∞, due to a strong interaction among fermions. When the four-Fermi coupling is larger than a critical value, the four-Fermi interaction induces the condensation of the top quark, as shown in the original Nambu-Jona-Lasinio model [2]. Thus the top quark gets a large mass, and the electro-weak symmetry breaks dynamically. As described below, similar dynamical behavior occurs in the (2+1)D Thirring model.

The (2+1)-dimensional Thirring model is given in the Euclidean version by the vector-vector four-Fermi Lagrangian of Eq. (1), where ψ_i are two-component spinors and i, j are summed over from 1 to N. The γ matrices are defined in terms of the Pauli matrices σ. Since the four-Fermi coupling g has a mass-inverse dimension, the model is not renormalizable in the ordinary (weak) coupling expansion. But it has been shown renormalizable in (2+1) dimensions in the large flavor (N) limit [3]. It is therefore sensible to analyze the 3D Thirring model in the large-N expansion.

There are at least two ways of viewing the 3D Thirring model in treating the dimensional coupling constant g. One is taking g as a genuine dimensional parameter that sets the natural scale of the theory; for example, the dynamically generated fermion mass, if any, will be proportional to this scale, m_dyn ∼ 1/g. The other is to take the dimensional parameter, 1/g, as the UV cutoff of the theory; g ≡ 1/(αΛ), where Λ is the UV cutoff and α is a dimensionless coupling. Therefore, in this case, the only dimensional parameter in the model is the ultraviolet cutoff. In the continuum limit, the four-Fermi operator (together with the ultraviolet cutoff) becomes a relevant operator in the large-N approximation. In this approach, if a dynamical mass is generated, it will be independent of the ultraviolet cutoff, Λ; it will be the scale introduced in trade of the dimensionless parameter, α, by the so-called dimensional transmutation, which happens in any renormalizable theory.

The first viewpoint is taken by several authors. For instance, it has been shown in [4] that the 3D Thirring model is UV-finite at all orders of 1/N, since the scale 1/g is negligible in the deep UV region. Gomes et al.
[5] found, in this viewpoint, that the model behaves similarly to QED_{2+1} [6], which also has a dimensional parameter, e, the electric charge; in both models, the fermion mass is generated when 1/N > 1/N_c. But, as we shall see later, it is in the second viewpoint that the 3D Thirring model is similar to the 3D Gross-Neveu model [7]. Namely, the 3D Thirring model has a two-phase structure, parity-broken and parity-unbroken, and the fermion acquires a dynamical mass for strong coupling, g > g_c (or 0 ≤ α ≤ α_c). Though the model is still UV-finite perturbatively in the 1/N expansion, there exists a non-perturbative (in 1/N) renormalization for α. The coupling is running, β(α) = −2(α − α_c), for the same reason as in the Gross-Neveu model. The UV-fixed point α_c is found to be (1/16) exp(−Nπ²/16) in the 1/N expansion. The UV-fixed point and the β function do not change at all, even if one includes the higher order corrections, due to the Ward-Takahashi identity and the UV structure of the theory. This is in contrast with the result in [4] presenting a vanishing β function. Therefore, we see that the two viewpoints are in many ways different from each other.

Now we start with the effective theory with a UV cutoff. Introducing an auxiliary field A_µ to facilitate the 1/N expansion, we can rewrite Eq. (1) as Eq. (3), where αΛ = 1/g. As was mentioned in [5], the theory is consistent for positive α. As we shall see later, for negative α, the theory is unstable, showing tachyons in the four-point fermion Green's function. Eq. (3) is not gauge invariant under the usual gauge transformation on ψ and A_µ. However, as was claimed in ref. [5], Eq. (3) with a gauge fixing term has a restricted gauge symmetry. In this paper we choose to work in the Landau gauge.

The Thirring model with N two-component complex spinors has U(N) global symmetry and also parity. Under U(N) and under parity the fermion fields transform in the standard way, and one can see that the fermion mass term is parity-odd. When the number of fermion flavors is even, the model has another obvious discrete Z_2 symmetry, which interchanges half of the fermions with the other half: Z_2 mixes the fermion fields as ψ_i ↔ ψ_{i+N/2} for i = 1, …, N/2. We define a new parity P_4 which combines the parity for the two-component spinor with Z_2, P_4 ≡ P Z_2 [8]. As described below, in the (2+1)D Thirring model, it is P (not P_4) that is spontaneously broken. The fermion mass is dynamically generated in such a way that P_4 is conserved. When P_4 is not broken, the Chern-Simons term is not induced.

Now we will examine the pattern of the spontaneous breaking of parity. An order parameter for the spontaneous breaking of parity is the vacuum condensate of the fermion bilinear, ⟨ψ̄ψ(x)⟩, which will be determined once one finds the (asymptotic) behavior of the fermion propagator [9]. In the 1/N expansion one has the coupled Dyson-Schwinger gap equations, Eqs. (7) and (8), where D_µν is the photon propagator, Σ is the fermion self-energy, Z is the fermion wave-function renormalization constant, and Γ_µ is the vertex function. In the Landau gauge, the photon propagator is expressed through the functions Π_1 and Π_2. The resummation technique of the 1/N expansion results in the nontrivial photon propagator as given above. Π_e [Π_o] in Eq. (9) [Eq. (10)] is the even [odd] part of the vacuum polarization, which will be determined once we solve the coupled Dyson-Schwinger equations, Eqs. (7) and (8). Since Z(p) = 1 + O(1/N) and Γ_µ = γ_µ + O(1/N), we may take, at the leading order in 1/N, Z(p) = 1 and Γ_µ = γ_µ consistently in Eq. (7).
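Before carrying out the leading-order analysis, it is worth putting numbers to the quoted results. A small sketch using only the two closed-form statements above, α_c = (1/16) exp(−Nπ²/16) and β(α) = −2(α − α_c); the function names are ours:

```python
import math

def alpha_c(N):
    # UV-fixed point quoted above: alpha_c = (1/16) exp(-N pi^2 / 16).
    return math.exp(-N * math.pi ** 2 / 16) / 16

def alpha_of_t(alpha0, N, t):
    # Integrating beta(alpha) = d(alpha)/dt = -2 (alpha - alpha_c), with
    # t = ln(Lambda / mu), gives an exponential approach to the fixed point.
    ac = alpha_c(N)
    return ac + (alpha0 - ac) * math.exp(-2.0 * t)

for N in (2, 4, 8):
    print(N, alpha_c(N))  # the fixed point is exponentially small in N
```

For N = 8, for instance, α_c is roughly 4.5 × 10⁻⁴, so the parity-broken phase occupies only a sliver of coupling space for large N.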
Then, taking the trace over the gamma matrices, we obtain the scalar gap equation. The magnitude of the dynamically generated mass must be small compared to the cutoff, Λ, of the theory, in the 1/N approximation [6]. We may therefore assume Σ(p) ≪ p ≪ Λ. The vacuum polarization tensor then takes a simple form, with M_i ≃ Σ_i(0), the mass of the i-th fermion. In general, it is hard to find M_i by solving the gap equations directly. But, following the same argument of Coleman and Witten [10], one can easily show that the magnitude of M_i is independent of i in the large-N limit [11]. Therefore, it is reasonable to assume that M_i = M for i = 1, …, N, as is done in [6]. For momenta p such that M ≪ p ≪ Λ, the polarization is characterized by θ = 1 − 2L/N, a parameter characterizing the parity (P_4) violation of the theory.

Now we will show that θ = 0 admits a consistent solution of the gap equation. Taking the fermion self-energy at zero momentum from Eq. (7) and letting M_i ≃ Σ_i(0), we find the mass-gap condition, Eq. (16). Following the Cornwall, Jackiw, and Tomboulis formalism [15], we calculate the effective potential of the operator expectation value ⟨ψ̄ψ(x)⟩. At the extrema it is found to take the same expression as in QED_{2+1} in the 1/N expansion [6], Eq. (17). It can be easily seen from Eq. (17) that any nontrivial solution has a lower energy than the perturbative vacuum solution, Σ(p) = 0. Therefore, once such a parity-breaking solution is found, it is always energetically favored over the symmetric one. Our solution to the gap equation thus has lower vacuum energy than the trivial solution.

Rewriting Eq. (16) when θ = 0 gives Eq. (18). This equation indicates that there is a two-phase structure. When α > α_c, the parity symmetry is manifest, where the critical value α_c is defined by Eq. (19). If α ≤ α_c, a non-trivial parity-violating fermion mass is generated in a way that preserves the total parity symmetry of the theory. As we mentioned in the introductory part of this paper, the mass M is not a value which can be determined as in QED_{2+1} [6], but a parameter, as in Gross-Neveu type models [3]. M is a physical quantity (in fact, it is the pole mass of the fermions), and therefore it should be independent of Λ. As in the case of the Gross-Neveu model, the argument is similar to that of Coleman and Witten [10]. Therefore, the gap equation Eq. (18) becomes an equation involving Γ(k, 0; k) = (1/4) g_µν Tr[γ^µ Γ^ν(k, 0; k)]. Keeping terms up to O(1/N), we find the corrections by explicit calculation. Similarly, the even part of the vacuum polarization is obtained up to the terms in 1/N, where const. is a pure number. The Feynman diagrams relevant to the corrections are shown in Figure 1.

The above results, Eqs. (23)-(25), show that the next-to-leading corrections are either finite or suppressed by 1/Λ; the 1/N corrections are UV-finite. By dimensional counting, one can easily show that the 3D Thirring model is in fact UV-finite at all orders in the 1/N expansion [17]. This is the same result as in [4], but the reason is quite different. In our case, because of αΛ in the photon propagator, the loop integrations are UV-finite. The dangerous terms in deriving the UV-fixed point α_c would be the terms which are not suppressed as Λ → ∞. But such terms do not occur at any order in 1/N because of the UV finiteness of the theory. Since the higher order corrections to the vacuum polarization are suppressed by 1/Λ, the equation defining α_c now becomes one in terms of F(α_c) = 16/(3Nπ²) · ln(16α_c + 1)
Proof-Stitch: Proof Combination for Divide and Conquer SAT Solvers

Abstract-With the increasing availability of parallel computing power, there is a growing focus on parallelizing algorithms for important automated reasoning problems such as Boolean satisfiability (SAT). Divide-and-Conquer (D&C) is a popular parallel SAT solving paradigm that partitions SAT instances into independent sub-problems which are then solved in parallel. For unsatisfiable instances, state-of-the-art D&C solvers generate DRAT refutations for each sub-problem. However, they do not generate a single refutation for the original instance. To close this gap, we present Proof-Stitch, a procedure for combining refutations of different sub-problems into a single refutation for the original instance. We prove the correctness of the procedure and propose optimizations to reduce the size and checking time of the combined refutations by invoking existing trimming tools in the proof-combination process. We also provide an extensible implementation of the proposed technique. Experiments on instances from last year's SAT competition show that the optimized refutations are checkable up to seven times faster than unoptimized refutations.

Index Terms-Parallel SAT, Divide and Conquer, Refutation Checking

I. INTRODUCTION

Boolean satisfiability (SAT) solvers have improved dramatically in recent years. They are now regularly used in a wide variety of application areas including hardware verification [1], computational biology [2] and decision planning [3]. With the emergence of cloud computing and improvements in multi-processing hardware, the availability of parallel computing power has also increased dramatically. This has naturally led to an increased focus on parallelizing important algorithms, and SAT is no exception.

There are two traditional approaches to parallel SAT solving: the Divide-and-Conquer (D&C) approach [4]-[6] and the portfolio approach [7]. In the D&C approach, the original SAT instance is partitioned into independent sub-problems to be solved in parallel, while in the portfolio approach multiple SAT solvers are independently run on the original instance. Although the portfolio approach in combination with clause sharing performs well for small portfolio sizes, the D&C approach scales better in environments with large parallel computing power such as the cloud.
Several implementations of D&C solvers exist [4]-[6], [8]. Every implementation uses: a divider to split up the original instance into sub-problems, and a base SAT solver to solve the sub-problems.

If a SAT problem is unsatisfiable, a proof of unsatisfiability (or refutation) can be produced and independently checked to validate the result. Since 2013, the annual SAT competition has required SAT solvers to generate refutations. The most commonly supported refutation format today is the DRAT format [10]. Existing D&C SAT solvers produce refutations for each sub-problem independently. However, even if the refutation for each sub-problem passes the proof-checker, this is not a formal guarantee that the original instance also admits a refutation, as there could have been an error in the partitioning strategy. For example, a buggy solver may incompletely partition a SAT instance (over variables x_1 and x_2, say) into sub-problems with cubes x_1 and ¬x_2. Both of these sub-problems can be unsatisfiable even though the instance is satisfiable. Transient errors in the underlying distributed system may also cause sub-problem refutations to be truncated or missing.

To address these challenges, we introduce Proof-Stitch, which implements a strategy for combining DRAT refutations for sub-problems into a single refutation for the original instance, a process we call refutation stitching. Our contributions are:

• We describe an algorithm for combining DRAT refutations of partitions of problems into a single refutation for the original problem and provide an open-source implementation on GitHub [11].
• We describe an optimization technique leveraging existing trimming tools (e.g., drat-trim [12]) to improve the quality of the combined refutations.
• We evaluate our implementation on benchmarks from last year's SAT competition [13]. Our results show that trimmed refutations are checkable up to seven times faster than untrimmed refutations.

The rest of this paper is organized as follows. Section II discusses background and related work. Section III presents the Proof-Stitch algorithm and theoretically justifies our method of combining refutations. We also describe an optimization technique that reduces the checking time and the size of the combined refutations. Section IV details our tool implementation. Results are presented in Section V, and Section VI concludes.

II. BACKGROUND AND RELATED WORK

A. Propositional refutations

We assume familiarity with the basic concepts of CDCL SAT algorithms (see, e.g., [14]). We also assume that a base SAT solver can produce a DRAT refutation, which we define below (following [15]). Throughout the paper we model clauses as sets of literals and formulas as multisets of clauses. By · ∪ · we denote the standard union operation on sets, and the multiplicity-summing union on multisets.

Let F = {C_1, . . . , C_n} be a formula. F unit propagates to a conflict if repeatedly assigning the literals of unit clauses (and simplifying) eventually produces the empty clause. A clause C has asymmetric tautology (AT) with respect to F if F together with the unit clauses {¬l}, for each l ∈ C, unit propagates to a conflict. We say that C has resolution asymmetric tautology (RAT) with respect to literal l ∈ C and formula F if, for every clause C′ ∈ F with ¬l ∈ C′, the resolvent C ∪ (C′ \ {¬l}) has AT with respect to F.

Let o_i denote an operation: clause addition (⊕) or clause deletion. Consider a sequence of operation-clause pairs π = (o_1, C_1), . . . , (o_m, C_m), and let φ_i be the formula obtained from φ by applying the first i operations. The sequence π is a DRAT refutation of φ if, whenever o_{i+1} = ⊕, the clause C_{i+1} has RAT with respect to φ_i, and the last element in π is (⊕, ∅).

B. Divide-and-Conquer SAT solving

One parallel SAT solving paradigm is Divide-and-Conquer: a SAT instance is divided into simpler SAT instances (sub-problems), which are then solved in parallel. Typically, the sub-problems represent partitions of the search space, such that the disjunction of all the sub-problems is equisatisfiable with the original problem.
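The definitions in Section II-A translate directly into a naive checker. The sketch below uses our own encoding (clauses as Python frozensets of signed integers; this is not the paper's tooling) and implements the AT and RAT tests by plain unit propagation:

```python
def propagates_to_conflict(clauses):
    # Repeatedly apply the unit-clause rule; True iff the empty clause
    # (a conflict) is derived.
    assigned = set()
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if any(l in assigned for l in c):   # clause already satisfied
                continue
            rest = [l for l in c if -l not in assigned]
            if not rest:                        # all literals falsified
                return True
            if len(rest) == 1 and rest[0] not in assigned:
                assigned.add(rest[0])           # unit clause: propagate
                changed = True
    return False

def has_AT(formula, clause):
    # C has AT w.r.t. F iff F plus the negated literals of C propagates
    # to a conflict.
    return propagates_to_conflict(list(formula) + [frozenset({-l}) for l in clause])

def has_RAT(formula, clause, lit):
    # C has RAT on lit iff every resolvent with a clause containing -lit
    # has AT (a clause with AT is trivially RAT).
    if has_AT(formula, clause):
        return True
    return all(has_AT(formula, clause | (c - {-lit}))
               for c in formula if -lit in c)

# Tiny check: {x1} has AT w.r.t. {{x1}}, since adding {-x1} conflicts.
assert has_AT([frozenset({1})], frozenset({1}))
```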
The sub-problems are derived from the original instance by assigning Boolean values to literals. The set of literals that are assigned (decided) for a particular sub-problem is called the cube of the sub-problem, and the number of literals in the cube is the depth of the sub-problem. There are many D&C-based solvers [4]-[6], including Psato [16], Painless [17], and AMPHAROS [18]. One prominent D&C approach, Cube-and-Conquer [19], uses a lookahead solver to divide instances and a CDCL solver to solve sub-problems. This approach has been successful for large mathematical problems [20] and is implemented by tools such as Paracooba [21] and gg-sat [8].

D&C SAT solvers generate separate DRAT refutations for each sub-problem. There has been little work on combining these refutations into a single refutation for the original instance. One work [22] considers proof composition, but its parallel composition rule does not apply to DRAT refutations. Another work [23] gives an alternate proof calculus for parallel solvers.

III. METHODOLOGY

In this section, we present an algorithm to combine sub-problem refutations into a refutation for the original Boolean instance. Then we show the algorithm's correctness. Finally, we present a technique to optimize the combined refutations.

A. Algorithm

The first step in the Proof-Stitch algorithm is to construct a decision tree representing the steps taken by the D&C solver. The root of the tree represents the original instance, and the leaves represent the sub-problems. Figure 1 shows the decision tree for an example instance.

Algorithm 1: Stitching algorithm (inputs: instance φ, decision variable x, and refutations π and π′ of the two sub-problems; output: a refutation of φ).

Next, Proof-Stitch performs a sequence of stitching operations to produce a single refutation for the original SAT instance. A stitching operation (Algorithm 1) reads in a SAT instance φ, a decision variable x, and two refutations π and π′ corresponding to the sub-problems φ ∪ {{x}} and φ ∪ {{¬x}}, respectively. It produces a single refutation corresponding to the instance φ. The refutation for instance φ contains the clauses from refutation π appended with the literal ¬x and the clauses from refutation π′ appended with the literal x. More generally, the clauses from a refutation are appended with the negation of the decision literal used to generate the sub-problem. Figure 2 illustrates the stitching operation.

As an example of the proof combination process, consider Figure 3. First the refutations π_00 and π_01 are combined. Then π_10 and π_11 are combined, and finally, π_0 and π_1 are combined to produce the refutation π corresponding to the original instance. In Proof-Stitch, the stitching operations are ordered according to the following rule: a stitching operation to combine a pair of refutations π and π′ can only occur after all refutations with greater depth have been combined. Informally, this means that refutations are combined in decreasing order of their depth, as shown in Figure 3. Stitching operations at the same depth are independent and can occur in parallel.

Second, we show that if C_{j+1} has RAT with respect to literal l and formula ψ_j, then C*_{i+1} = {¬x} ∪ C_{j+1} has RAT with respect to literal l and formula φ_i. Let C* be a clause in φ_i that contains ¬l. If C*_{i+1} ∪ (C* \ {¬l}) has AT with respect to φ_i, we are done. Since C* is a clause in φ_i, there is some C in ψ_j to which it corresponds; let ¬x, l_1, . . . , l_k be the literals of this clause. As before, since the corresponding resolvent has AT with respect to ψ_j, the AT check carries over to φ_i. In the case that C*_{i+1} = C_{j+1} ∪ {x} (i.e., C*_{i+1} is derived from π′), the argument is similar.
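The stitching operation itself is mechanical. A minimal sketch, under our own encoding of a refutation as a list of (op, clause) pairs with clauses as frozensets of signed integers (real DRAT files are text, so this abstracts the parsing away):

```python
def stitch(pi_pos, pi_neg, x):
    """Combine refutations of phi ∪ {{x}} (pi_pos) and phi ∪ {{-x}}
    (pi_neg) into a refutation of phi, mirroring Algorithm 1."""
    out = []
    # Every clause of the x-branch gets -x appended; the branch's final
    # (add, empty-clause) step thereby becomes an addition of {-x}.
    out += [(op, c | {-x}) for op, c in pi_pos]
    # Symmetrically for the -x branch, yielding {x} at its end.
    out += [(op, c | {x}) for op, c in pi_neg]
    # With both {x} and {-x} present, the empty clause has AT.
    out.append(("add", frozenset()))
    return out
```

Appending the negated decision literal turns each branch's empty clause into the unit {¬x} or {x}, so the final empty clause is justified by unit propagation alone, matching the correctness argument that continues below.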
The key insight is that an initial propagation on ¬x in any AT check removes all the clauses added by π. Since π deletes no clauses from the original formula, this leaves an intermediate propagation result that shows C_{j+1} is RAT. The final step in π* is (⊕, ∅). It has AT because φ_{n+m} contains both {x} and {¬x}. Since π*'s added clauses all have the AT or RAT properties, and the final step adds an empty clause, π* is a valid DRAT refutation of φ.

In Proof-Stitch, the final refutation is built through stitching operations on DRAT refutations of the sub-problems. Since each stitching operation produces a valid DRAT refutation of the corresponding parent instance, recursive application of Lemma 1 proves that the final refutation is a valid DRAT refutation of the original instance.

C. Optimization

Empirically, we have observed that refutations created through stitching operations contain a large number of clauses that are not needed during validation ("redundant" clauses). Identifying and removing these clauses reduces the time required to check the refutation and the storage space required to save the refutation. One approach to remove such redundant clauses is by identifying the "unsatisfiable core" as described in [24]. This approach optimizes the refutation by only retaining clauses that are essential for validation by a proof-checker. Our implementation optimizes refutations by using drat-trim to extract the unsatisfiable core after every stitching operation.

However, aggressively invoking the optimization technique (e.g., after every stitching operation) could incur significant runtime overhead in the refutation generation process. This calls for a heuristic to decide when to apply the optimization technique. Empirically we observe that refutations with larger clauses (more literals) require longer to check. We hypothesize that this occurs because larger clauses are less likely to contribute to unit propagation while simultaneously consuming more memory in the cache of the refutation checker. Therefore, optimizing refutations with large clauses should yield the greatest benefit. To implement this, we introduce a threshold parameter CL_avg. After each stitching step, the refutation is optimized only if the average clause length in the refutation is greater than CL_avg.

IV. IMPLEMENTATION

In this section, we describe our implementation of the Proof-Stitch algorithm. Proof-Stitch is implemented in Python and uses drat-trim [12] to optimize refutations. Our tool comprises just under 300 lines of Python code and is available on GitHub [11]. The tool inputs are the original SAT instance in CNF form, the refutations and cubes for each sub-problem, and the threshold value CL_avg. Our implementation requires that the cube of each sub-problem be encoded in the name of the corresponding refutation file. For example, the refutation file corresponding to refutation π_00 in Figure 1 is named 1_2.proof. The output is a single file containing a refutation of the original instance. As noted in Section III, stitching operations at the same depth of the decision tree are independent and their combined refutations can be optimized in parallel. Our tool supports this. Setting the parameter CL_avg = 0 enables optimization after every stitching operation, and CL_avg = −1 turns off optimization (only stitching is performed). We denote refutations combined with CL_avg = 0 as "fully optimized" and refutations combined with CL_avg = −1 as "unoptimized".
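The CL_avg heuristic amounts to a few lines around a drat-trim invocation. A sketch of the idea, assuming drat-trim's -l flag for emitting the reduced proof (check your drat-trim build for the exact options; the helper names and proof encoding follow the stitching sketch above and are ours, not the tool's):

```python
import subprocess

def avg_clause_length(proof):
    # proof: list of (op, clause) pairs as in the stitching sketch.
    added = [c for op, c in proof if op == "add" and c]
    return sum(len(c) for c in added) / max(len(added), 1)

def maybe_trim(cnf_path, proof_path, cl_avg, proof):
    # CL_avg = -1: never trim; CL_avg = 0: trim after every stitch;
    # otherwise trim only when clauses are long on average.
    if cl_avg >= 0 and avg_clause_length(proof) > cl_avg:
        subprocess.run(
            ["drat-trim", cnf_path, proof_path, "-l", proof_path + ".trimmed"],
            check=True,
        )
        return proof_path + ".trimmed"
    return proof_path
```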
V. EXPERIMENTS

To evaluate Proof-Stitch, we run it on six benchmarks from the parallel track of last year's SAT competition [13]. The chosen benchmarks can be solved by Paracooba [21] within 1 minute of run-time. We also attempted running the tool on harder instances from the parallel track. While unoptimized proofs can be produced quickly (within a few minutes) on those instances, proof-checking and optimization are both computationally prohibitive due to the limitations of the underlying proof-checker (e.g., drat-trim fails to validate the combined refutations on harder instances even with a 24 hour time limit). For large refutations, the proof-checker faces memory and run-time bottlenecks on almost all the intermediate optimization steps. Therefore, we do not consider harder instances in our evaluation, but note that the proposed techniques in principle apply to larger instances once the scalability of the underlying proof-checker improves.

In our experiments, we compare the checking time and size of unoptimized refutations against fully optimized refutations to show the benefit of optimization. We also report the tool run-time to demonstrate that Proof-Stitch does not introduce unacceptable overheads. Finally, we analyze the average checking time and tool run-time for CL_avg = 10, a value empirically determined to perform well. We perform our evaluation on an Intel Xeon E5-2640 v3 machine with 128 GBytes of DRAM and 16 cores.

Table 1 shows the time required for drat-trim to check the final refutations for the benchmarks (T_c), the tool execution time to combine refutations (T_g), and the size of the combined refutations (S_g). The time required to check refutations reduces by 2.7-7x for all the benchmarks when full optimization is performed. Full optimization also results in smaller refutation file sizes, but increases the tool run-time.

Figure 4 compares the average run-time to combine refutations (denoted "merging" time) and the average run-time to check refutations for unoptimized, CL_avg = 10, and fully optimized refutations. Interestingly, running our tool with CL_avg = 10 decreases the total validation time (merging + checking) compared to the unoptimized case. This points to the benefit of optimizing refutations in parallel: the overhead associated with optimizing refutations can be amortized by the savings in refutation checking time. Another important observation is that setting CL_avg = 10 reduces the time required to combine refutations compared to the unoptimized case. We believe the reason is as follows: optimizing refutations decreases their size. When CL_avg = 10, we optimize all intermediate refutations with average clause length greater than 10. Since the intermediate refutations are now smaller, the next stitching operation on each refutation takes less time. The time spent optimizing refutations is offset by the savings in stitching time.

VI. CONCLUSION

We have presented Proof-Stitch, a technique that complements Divide-and-Conquer SAT solvers by combining sub-problem refutations into a single refutation for the original instance. Proof-Stitch also uses existing proof-trimming tools to optimize the combined refutation.

Future Work: Proof-Stitch's run-time overhead can be reduced by performing more stitching operations in parallel. Currently, only stitching operations at the same tree depth are parallelized, while in principle, any two independent stitching operations could be parallelized.
Another potential future direction would be to incorporate parallelism in the refutation checker itself, likely requiring extension of the DRAT format to incorporate structural information of the search tree. Finally, it would be interesting to evaluate alternative measures for guiding the optimization process, such as Literal Block Distance [25], and to look into additional ways to reduce refutation sizes.
Predicting Psychotic Relapse in Schizophrenia With Mobile Sensor Data: Routine Cluster Analysis Background Behavioral representations obtained from mobile sensing data can be helpful for the prediction of an oncoming psychotic relapse in patients with schizophrenia and the delivery of timely interventions to mitigate such relapse. Objective In this study, we aim to develop clustering models to obtain behavioral representations from continuous multimodal mobile sensing data for relapse prediction tasks. The identified clusters can represent different routine behavioral trends related to daily living of patients and atypical behavioral trends associated with impending relapse. Methods We used the mobile sensing data obtained from the CrossCheck project for our analysis. Continuous data from six different mobile sensing-based modalities (ambient light, sound, conversation, acceleration, etc) obtained from 63 patients with schizophrenia, each monitored for up to a year, were used for the clustering models and relapse prediction evaluation. Two clustering models, Gaussian mixture model (GMM) and partition around medoids (PAM), were used to obtain behavioral representations from the mobile sensing data. These models have different notions of similarity between behaviors as represented by the mobile sensing data, and thus, provide different behavioral characterizations. The features obtained from the clustering models were used to train and evaluate a personalized relapse prediction model using balanced random forest. The personalization was performed by identifying optimal features for a given patient based on a personalization subset consisting of other patients of similar age. Results The clusters identified using the GMM and PAM models were found to represent different behavioral patterns (such as clusters representing sedentary days, active days but with low communication, etc). Although GMM-based models better characterized routine behaviors by discovering dense clusters with low cluster spread, some other identified clusters had a larger cluster spread, likely indicating heterogeneous behavioral characterizations. On the other hand, PAM model-based clusters had lower variability of cluster spread, indicating more homogeneous behavioral characterization in the obtained clusters. Significant changes near the relapse periods were observed in the obtained behavioral representation features from the clustering models. The clustering model-based features, together with other features characterizing the mobile sensing data, resulted in an F2 score of 0.23 for the relapse prediction task in a leave-one-patient-out evaluation setting. The obtained F2 score was significantly higher than that of a random classification baseline with an average F2 score of 0.042. Conclusions Mobile sensing can capture behavioral trends using different sensing modalities. Clustering of the daily mobile sensing data may help discover routine and atypical behavioral trends. In this study, we used GMM-based and PAM-based cluster models to obtain behavioral trends in patients with schizophrenia. The features derived from the cluster models were found to be predictive for detecting an oncoming psychotic relapse. Such relapse prediction models can be helpful in enabling timely interventions. 
Introduction Background Schizophrenia is the most common psychotic disorder, affecting up to 20 million people worldwide [1] and accounting for more than 13.4 million years of life lived with a disability [2].It can be caused by a combination of genetic, environmental, and psychosocial factors.Patients with schizophrenia experience ranges of positive symptoms (hallucinations, delusions, etc.), negative symptoms (anhedonia, social withdrawal, etc.), and cognitive dysfunctions (lack of attention, working memory, executive function, etc.) [3,4].The disorder is highly disabling and often has consequences such as impairment of education, employment, and social contact [4].Adults with schizophrenia also have an increased risk of premature mortality than the general population [5].Proper treatment and management of schizophrenia are therefore important to limit the negative life impact of the disorder. Schizophrenia is usually treated with a combination of antipsychotic medications and psychosocial treatments.However, patients under treatment can still experience psychotic/symptomatic relapse, an acute exacerbation of schizophrenia symptoms [6].A prior study found that the cumulative first and second relapse rate was 81.9% and 78% respectively within 5 years of recovery from the first episode of schizophrenia and schizoaffective disorder [7].The risk of relapse is found to be significantly higher after treatment reduction or discontinuation [6].Relapse poses severe health risks for the individual and can jeopardize their treatment progression and daily functioning.Each relapse episode is associated with a risk of self-harm and harm to others [8]. To keep track of a patient's health status and recovery, routine clinic visits for continual assessment are recommended.Clinical interview and questionnaire tools are used during the visit for assessment of current health symptoms and timely intervention to prevent relapses [9].However, relapses may happen between the visits during which a patient's health status is not assessed.In addition, patients may have limited insight during a psychotic relapse and struggle to report it to the treatment team or a significant other.Therefore, improving treatment adherence and preventing relapses have become a focus of schizophrenia management.Towards the effort of relapse prevention, there has been significant interest in mobile sensing-based behavioral monitoring models for automatic relapse risk prediction. 
Prior Work Smartphone apps and wearable devices have been employed in several previous works to collect passive sensing data and track daily behaviors, which could then be used to model the relationship between behaviors and mental well-being.For example, in the Studentlife study, an Android sensing app collected passive sensing data from 48 college students and the inferred behavioral features from the collected data were found to be correlated with academic performance and self-reported mental health conditions [10].In a study on depression severity, the mobile sensing-based features such as daily behavioral rhythms, variance of subject's location, and phone usage were found to be related to depressive symptom severity [11].The use of mobile sensing to collect long-term monitoring data has also been demonstrated to be feasible and acceptable for patients with schizophrenia disorders [12][13][14][15].Surveys have found that people with schizophrenia commonly access digital devices for communication and support related to the disorder, which again shows the applicability of using mobile sensing as a platform to monitor schizophrenia symptoms [16]. Mobile sensing data has been used to model behaviors and predict psychotic relapses of patients suffering from schizophrenia.If an oncoming relapse could be detected with high accuracy, then timely medical interventions could be provided to mitigate the associated risks.Researchers have found anomalies in daily behavior assessed from mobile sensing before relapses and developed relapse prediction models with promising accuracy [17][18][19].In a pilot study, the Beiwe app collected mobile sensing data from 15 patients suffering from schizophrenia for 3 months during which 5 patients experienced relapses [17].The researchers found that the rate of anomalies in mobility and social behavior increased significantly closer to relapses.In the CrossCheck project, a mobile sensing app was developed to collect self-reporting EMA (Ecological Momentary Assessment) and continuous passive sensing data from 75 outpatients with schizophrenia [20].Based on this dataset, the authors in [18] compared different machine learning models for relapse prediction, with several feature extraction windows, and identified the best classifier and prediction settings for detecting an oncoming relapse.The best performance was obtained using an SVM (with RBF kernel) model and a feature extraction window of 30 days, leading to an F1 score of 0.27 on the relapse prediction task.Similarly, the authors in [21] used an anomaly detection framework based on an encoder-decoder reconstruction loss to predict psychotic relapse in schizophrenia. Concerning current mental health status, the extent to which an individual adheres to work, sleep, social, or mobility routine, i.e. 
a regular behavioral pattern, largely impacts their mental well-being and symptom severity of mental disorders [11,22,23].Behavioral stability features that measure the adherence to routines have been proposed as relapse predictors in some of the previous studies.Features computed in our previous work measured behavioral stability by calculating the temporal evolution of daily templates of features derived from the mobile sensing data (daily templates are time-series obtained with representative feature values at regular time-intervals in a given day, e.g.time-series of hourly feature values) [19].The authors in [24] also showed the effectiveness of using behavioral rhythm-based features to predict different symptom severity.Stability features such as deviation of daily templates were found to be significant predictors of schizophrenia symptoms such as being depressed.The authors in [25] also proposed a stability metric for behaviors with a fine temporal resolution by calculating the distance between two cumulative sum functions describing behaviors in a certain minute of the day.The computed Stability Index had similar predictive power as the state-of-the-art behavioral features (mean and standard deviation of each behavior) in [26], while being complementary.In all of these previous works utilizing behavioral stability to model relapse prediction, the stability measured was limited to the behaviors observed within a short feature extraction window (e.g.few weeks only).An individual's routine behaviors were not fully represented due to the short time window considerations.A summary of behavioral patterns could rather be obtained when larger time windows are considered. In this work, instead of measuring behavioral patterns using the variance of day-to-day behaviors, we identify the overall cluster of behaviors for an individual using multimodal mobile phone data and unsupervised machine learning, and derive features based on the distance of behaviors observed in a day compared to the individual's most representative routines.The identified behavioral clusters for an individual could for example be representing their weekday routine, a weekend routine, and a low-phone-usage routine (no sensor reading), etc.The clusters identified provide a representation of the long-term behavioral trends across the subjects which are not directly captured by short-term behavioral rhythm features as used in previous works.Further, clusters obtained from the mobile sensing data represent quantized behaviors, and features derived from these clusters are robust to the insignificant variations in behavior compared to the short-term behavioral rhythm change features.Typical behavioral routines for an individual can be found via the clustering analysis of their daily behaviors.Previously, clustering has been applied for identifying mobility patterns using GPS sensing data and evaluating anomalies accordingly [21,26].However, to the best of our knowledge, clustering analysis hasn't been done for characterizing the overall behavioral patterns of patients with schizophrenia, using multi-modal mobile sensing data, towards relapse prediction tasks. 
Goal of This Study In this work, we aim to (1) develop a method to characterize patients' daily behaviors using multimodal smartphone sensor data, (2) understand the relationship between behavioral patterns and psychotic relapse events in schizophrenia, and (3) evaluate the predictive power of identified behavioral pattern-based features for relapse prediction.We propose multivariate time-series clustering of daily templates obtained from mobile sensing data to obtain behavioral patterns.The features derived from clustering are then used in the relapse prediction task.The paper is organized as follows.In the Methods section, we describe the method used to cluster multi-dimensional daily templates from mobile sensing data, model selection approach for clustering, as well as feature extraction and relapse prediction modeling.In the Results section, we present the results obtained from the clustering models, association of the obtained clustering-based behavioral features with relapses, and evaluation of the developed relapse prediction model.The obtained results are discussed, and future directions are outlined in the Discussions section. Data Preparation The data used in this study was obtained from the CrossCheck project (Clinical Trial Registration Number: NCT01952041 [27]), which was conducted at the Zucker Hillside Hospital in New York City [20,24,26,28,29].The study was approved by the ethical review committee at Dartmouth College and the institutional review board at North Shore-Long Island Jewish Health System [20].Informed consents were obtained from the participants.The inclusion criteria for the participant has been described in [20].The CrossCheck app collected mobile sensing data from 75 outpatients with schizophrenia, with a data collection period of over 12 months per patient.Sixty-three patients completed the data collection (27 male and 36 female, average age of 37.2 +/-13.7 years, minimum age 18 years, and maximum age 65 years), and a total of 27 relapse events occurred in 20 patients during the monitoring period.Some patients had multiple incidences of relapses but as the monitoring period was long, each of the incidences was treated as a unique event if separated by a month.A relapse incident was defined to have occurred under one or more of the following seven different criteria: psychiatric hospitalization, increased frequency or intensity of services, increased medications or dosages or over 25% changes in BPRS scores, suicidal ideation, homicidal ideation, self-injury, and violent behavior resulting in harm to self or others [18].Six mobile sensing modalities including physical activities, sociability, and ambient environmental readings were obtained using the app.Different features were extracted from these mobile sensing modalities as presented in [24].From among these features, a total of 21 passive sensing features were selected for our proposed clustering-based behavioral characterization: acceleration, distance traveled, sleep duration, ambient sound, ambient light, conversation duration, phone unlock duration, and different types of call log, sms log, and app usage.All the features were transformed to an hourly resolution, by averaging the observations within one hour.For features that were obtained with lower resolution (e.g.every few hours), for example, distance traveled from morning to noon, the feature values were split to each hour spanned by the time represented by these feature values.With hourly resolution for each of the 21 features considered, and 
these hourly feature values considered as separate feature space, the resulting dataset had a dimension of 504 (21 x 24).A total of 18436 days of observation are present from the data collected for all the patients.Per-patient feature normalization (min-max normalization between 0 to 1) was done to adjust for differences between patients.From the normalized dataset, principal components analysis (PCA) on the full dataset (with data from all the patients) was done for dimensionality reduction.The first 200 principal components were retained which explained 96.9% of the total variance. Clustering Models We evaluated two different clustering methods: Gaussian Mixture Model (GMM) and Partitioning Around Medoids (PAM), to cluster the features from the mobile sensing data and obtain behavioral representations.The two clustering models differ in how the similarity between different points are assessed, representing different ways in which behaviors across days can be compared to each other, and therefore produce different cluster outputs. Gaussian Mixture Model Model Introduction The GMM is a probabilistic model that assumes data is generated from a finite set of Gaussian distributions.A Gaussian mixture probability density is the weighted sum of k component Gaussian densities [30].The GMM model can address correlation between attributes by selecting the optimal covariance matrix for each cluster and has been employed in previous behavioral clustering problems [31].Moreover, it can derive the probability of each sample in its assigned gaussian distribution.In this study, we used the GMM implementation from the scikit-learn package in Python to obtain a clustering model for the mobile sensing data [32].The parameters of the GMM model were obtained using the expectation-maximization (EM) algorithm [33].We selected the number of clusters and the covariance matrix type based on Akaike information criterion (AIC) and Bayesian information criterion (BIC) scores of all the candidate models (See more details in the supplementary document). Model Output Three output variables for each of the data points (observations), offering GMM model-based clustering features for the data points, are generated based on the developed GMM model: cluster label, assigned cluster likelihood score, and weighted average likelihood score. Cluster label is represented by integers from 1 to k (k: number of clusters selected in the GMM model).Cluster likelihood scores derived from the model measure how "irregular" each day (represented by a data point) is by calculating its deviation from the Gaussian mixtures.If we consider the center of each of the Gaussian as a typical routine, then the farther out a point is in this Gaussian space, then higher the chances that the point represents an anomalous day/behavior are. 
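A compact sketch of the preprocessing and model-selection steps described above, together with the likelihood scores whose exact tail computation is detailed in the next paragraphs; the placeholder data and loop ranges are ours (the real pipeline loads the CrossCheck feature matrix and uses both AIC and BIC):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.random((1000, 504))   # days x (21 features * 24 hours), per-patient min-max normalized

Z = PCA(n_components=200).fit_transform(X)

# Choose the number of clusters by BIC (covariance type selected analogously).
fits = [GaussianMixture(n_components=k, covariance_type="full",
                        random_state=0).fit(Z) for k in range(2, 7)]
gmm = min(fits, key=lambda m: m.bic(Z))

# Per-day scores: chi-squared tail of the squared Mahalanobis distance,
# i.e., the probability of observing a point farther from the component mean.
labels = gmm.predict(Z)
lik = np.empty((len(Z), gmm.n_components))
for g in range(gmm.n_components):
    diff = Z - gmm.means_[g]
    m2 = np.einsum("ij,jk,ik->i", diff, gmm.precisions_[g], diff)
    lik[:, g] = chi2.sf(m2, df=Z.shape[1])
assigned_score = lik[np.arange(len(Z)), labels]   # closeness to assigned routine
weighted_score = lik @ gmm.weights_               # closeness to all routines
```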
The likelihood of a data point in a multivariate Gaussian distribution can be computed by calculating the probability of observing a point farther than this given point.In other words, the cumulative distribution function is evaluated at the given data point, which can be obtained using Mahalanobis distance metric.Note that the squared Mahalanobis distance from a point to the center of a Gaussian distribution has been proven to follow a chi-squared distribution with degrees of freedom, where is the number of variables [34].Therefore, the likelihood of a point in the Gaussian distribution is equivalent to the cumulative probability of observing a value larger than the given Mahalanobis distance in a chi-squared distribution with degrees of freedom. The assigned cluster likelihood score of the data point was obtained as the probability of each point to its assigned cluster.The weighted average likelihood score was computed as the weighted (with the cluster's corresponding weights) sum of the probability of a given point belonging to each of the Gaussian classes.Intuitively, the assigned cluster likelihood score measures how close a day is to its closest routine.The weighted average likelihood score measures how close a day is to all routines.Since the weighted average likelihood score accounts for cluster weights, a point that is closer to a more populous cluster will be considered less anomalous.A 2-D illustration of the likelihood scores is provided in Supplementary Figure 1. Partition Around Medoids with Dynamic Time Warping Model Introduction GMM models measure similarity between observations (data points) with point-wise alignment of different features in the observation.However, the dissimilarity between two observations could be overestimated due to an outlier (e.g. because of faulty sensor measurements) or when there is a small time-shift and/or speed difference between observations.For example, two daily templates with a similar pattern but a shift by one hour would be expected to represent similar behavioral representations but these templates would likely be considered dissimilar from a GMM model.To allow flexible similarity assessments, we used Dynamic Time Warping (DTW) to find the optimal alignment of indices of the two time-series that minimizes the distance between the time-series [35].The DTW distance can be paired with a distance-based clustering method, such as a partition around medoids (PAM) clustering model [36].The PAM model searches for k representative objects (medoids) from the data and creates clusters so that the total dissimilarity of points within clusters is minimized.We compared the number of clusters k based on the sum of the squared DTW distance of every data point to its cluster medoid and the elbow method (See more details in Supplementary document). 
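The DTW alignment and the medoid-based scores formalized in the next paragraph can be sketched as follows; this is a plain dynamic-programming DTW, and the medoids are assumed to come from a PAM fit over the pairwise DTW matrix, which we treat as given:

```python
import numpy as np

def dtw(a, b):
    # Classic DTW between two daily templates of shape (T, d).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def pam_scores(days, medoids, cluster_sizes):
    # Assigned-cluster and weighted-average distance scores per day.
    dists = np.array([[dtw(x, med) for med in medoids] for x in days])
    labels = dists.argmin(axis=1)
    assigned = dists[np.arange(len(days)), labels]
    weights = np.asarray(cluster_sizes) / np.sum(cluster_sizes)
    weighted = dists @ weights
    return labels, assigned, weighted
```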
Model Output

From the fitted PAM model, similar to the procedure after the GMM model fit, we generated three output features characterizing each data point: cluster label, assigned cluster distance score, and weighted average distance score. As in the GMM model-based likelihood score computation, the cluster distance scores evaluate how dissimilar each object is from a representative data point, or from all representative data points. The assigned cluster distance score is the DTW distance of each data point (representing a daily template) to its cluster medoid. A lower value means that a day conforms better to its closest routine. The weighted average distance score is obtained by summing the DTW distance to all medoids scaled by the corresponding cluster sizes. A lower value means that a day conforms better to all possible routines. The DTW distance from the previous day's daily template was also calculated as a potential relapse predictor.

Analyzing Cluster Results

After obtaining output variables from the cluster models, we evaluated whether there were significant changes in any of these cluster output variables closer to relapse events. To quantify this change, we first defined different key periods to focus on before a relapse. Similar to a previous work, we defined NRx as x days near relapse (before the relapse event) and pre-NRx as all days before relapses that are not in NRx (healthy period) [21]. We evaluated cluster outputs for NR7, NR14, NR20, and NR30 periods to test different window sizes. Cliff's delta was computed to estimate the size of the change in the likelihood scores (GMM model output) and distance scores (PAM model output) between the NRx and pre-NRx periods for each patient separately [37]. Cliff's delta was chosen because of the non-normality and variance heterogeneity of our data, for which Cliff's delta is a suitable metric. For two groups of observations x_1, …, x_m (NRx) and y_1, …, y_n (pre-NRx), it is calculated as

delta = ( #{(i, j): x_i > y_j} − #{(i, j): x_i < y_j} ) / (m · n).

Relapse prediction approach

We framed relapse prediction as a binary classification problem, similar to the earlier works [19,39]. Based on the mobile sensing features derived from a feature extraction window (current and immediately past observations from a patient), we predicted if the patient is likely to experience a relapse in an oncoming period (prediction window). Similar to the previous works [19,39], we used a 4-week period as the feature extraction window and a 1-week period as the prediction window (Figure 1). Thus, the mobile sensing observations from the past 4-week period are used in the relapse prediction model to predict if there is going to be a relapse in the next week.

Figure 1: Sequential relapse prediction approach used in this work. Features are extracted from a period of 4 weeks in order to predict if there is likely to be a relapse in the coming week.

Features

Mobile sensing data are represented with features to characterize behavioral patterns in the relapse prediction model. For our work, we evaluated the contribution of the clustering features derived from the GMM and PAM models for the psychotic relapse prediction task. We briefly describe the baseline features (based on the earlier work [19]) and the clustering-based features that are added for the relapse prediction model; a short sketch of the windowing setup follows before the feature lists.
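As a reference for the setup just described, here is a minimal sketch of how (window, label) pairs could be assembled, assuming one feature matrix per patient with one row per day and a known list of relapse days; the names and shapes are ours, not CrossCheck's:

```python
import numpy as np

def make_windows(daily, relapse_days, win=28, horizon=7):
    """daily: (n_days, n_features) array for one patient.
    Returns X (flattened 4-week feature windows) and y (1 if a relapse
    falls in the following 1-week prediction window)."""
    X, y = [], []
    for t in range(win, daily.shape[0] - horizon + 1):
        X.append(daily[t - win:t].ravel())
        y.append(int(any(t <= d < t + horizon for d in relapse_days)))
    return np.array(X), np.array(y)
```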
Baseline features: These consist of all the features used in [19], along with distance-based and duration-based mobility features as well as screen usage-based features. The CrossCheck dataset contains information about when the screen of the subject's smartphone is active. A single screen-usage modality was derived that represents the time spent using the phone (i.e., the phone screen was active). From this modality, the mean and standard deviation of the daily averages in a given feature extraction window were computed as features for the relapse prediction model. Similarly, for mobility-based features, we computed four different mobility modalities: distance traveled from home (home location obtained from clustering of the GPS locations), total movement, average time stayed in a location, and total time spent at home. Then, for each mobility-based modality, we computed the mean and standard deviation of the daily averages as features characterizing a feature extraction window.

Clustering-based features: We extended the baseline feature set with our proposed clustering-based features for the relapse prediction task. These features are listed in Table 1.

Table 1. Features used in relapse prediction models. Baseline features are derived from a previous work [19]. We evaluated whether the clustering-based features could improve relapse prediction by complementing the daily behavioral rhythm change-based features represented in the baseline features. (Columns: Feature Set, Modalities, Features.)

Classifier

For our relapse prediction pipeline, we used a Balanced Random Forest (BRF) classifier with a low overall model complexity (11 decision trees). BRF is suitable for learning from an imbalanced dataset, as is the case in our relapse prediction task, and provides meaningful prediction probabilities in different decision fusion schemes (e.g., in situations where only a limited number of sensor modalities are available for a patient). The number of decision trees was chosen heuristically so as to limit the model size (a lower number of trees) while still maintaining enough trees for the diversity and generalizability of the ensemble model. We used the BRF implemented in the imbalanced-learn library in Python [40], allowing the default unrestricted depth of trees and sqrt(number of features) considered for the best split in the trees. Similar to the approach used in [19], features are quantized into discrete bins before being provided as input to the classifier. The number of bins is set as a hyperparameter, and for the set number of equal-width bins, the counts of feature values in each of the bins are retained as the processed feature values. The approach of feature quantization was found to be helpful in relapse prediction, probably by blunting small, insignificant changes while retaining larger feature variations that represent significant behavioral deviations. We used leave-one-patient-out cross-validation for the evaluation of the model. The number of bins is a hyperparameter for the classification model and was set with cross-validation within the training set (nested cross-validation). The numbers of bins considered in hyperparameter tuning were [2, 3, 4, 5, 10, 15], and the tuning procedure is further described in the supplementary file.
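The following minimal sketch shows the classifier setup described above: equal-width binning of raw daily values into per-bin counts, followed by a Balanced Random Forest with 11 trees from imbalanced-learn. The synthetic data, the bin count of 5, and the variable names are illustrative assumptions; in the actual pipeline the number of bins is tuned by nested cross-validation.

```python
import numpy as np
from imblearn.ensemble import BalancedRandomForestClassifier

def quantize(windows, n_bins):
    """Equal-width binning: replace each window's raw values by per-bin counts."""
    return np.array([np.histogram(w, bins=n_bins)[0] for w in windows])

# Synthetic data: 200 four-week windows of daily values for one modality,
# with rare (~5%) positive "relapse in the next week" labels.
rng = np.random.default_rng(2)
X_raw = rng.normal(size=(200, 28))
y = (rng.random(200) < 0.05).astype(int)

X = quantize(X_raw, n_bins=5)  # n_bins would be set by nested cross-validation
clf = BalancedRandomForestClassifier(n_estimators=11, random_state=0)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))  # class probabilities for the first 3 windows
```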
Relapse Labels

For our relapse prediction pipeline, as the relapse dates are not a hard label and earlier indications of an oncoming relapse are also valuable, we regard the entire month preceding the date of the indicated relapse as a relapse period for classification. Thus, any prediction of relapse within a 4-week period before the relapse is considered a useful output from the prediction model, as has also been the practice in previous work on relapse prediction [21]. A relapse prediction generated up to a month before the relapse would be observable and potentially actionable for interventions, as behavioral changes associated with relapse can manifest up to a month preceding a relapse [18].

Personalization

Human behavior and the behavioral manifestations of relapse could be person-dependent. The authors in [19] proposed a method for personalizing a relapse prediction model based on feature selection adapted to a particular test patient. The adaptation happens using a personalization subset, as shown in Figure 2. For a test subject, within the leave-one-patient-out cross-validation approach, the data from the subjects closest in age to the given test subject compose the personalization subset. We included age-based personalization as a first step toward personalized relapse prediction, since behavioral tendencies could depend on age, among other factors. Age has been reported to be a significant factor in univariate regression modeling of relapse behaviors in patients with schizophrenia [41], and age dependence of psychosocial functions, substance use behaviors, psychotic symptoms, hospitalization risks, etc., has been reported in the context of psychotic relapse in patients with schizophrenia [42]. We evaluated the gains from age-based personalization compared to a non-personalized model to establish empirically whether age-based personalization could be helpful in behavioral modeling and relapse prediction. As relapse incidents are rare, all the relapse incidents in the training dataset are included as part of the personalization subset. For training a classifier toward the test subject, the optimal features are selected using the personalization subset. We employed this approach for training our relapse prediction model and used the correlation between features and the target label as the feature selection criterion. The number of features to be selected is set as a hyperparameter in our classifier, and this dictates the threshold on the correlation value used for feature selection. For example, if the number of features to be selected is 5, then the threshold on the correlation coefficient (absolute value) is selected such that the top-5 features with the highest correlation with the labels are retained. The number of features was selected from [3, 5, 10, 15], and the size of the personalization subset was selected from [50, 75, 100, 125, 150, 200, 300] in the hyperparameter tuning (further described in the supplementary file).

Figure 2: Personalization approach for the relapse prediction model, as proposed in [19]. A personalization subset, consisting of data from subjects who are closest in age to the test subject, is used to identify the best feature set, which is then used to train a (personalized) relapse prediction model.
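Here is a minimal sketch of the two personalization steps described above: selecting a personalization subset of the training samples closest in age to the test subject (with all rare relapse samples always retained, per the text), and keeping the top-k features by absolute correlation with the label. Function and variable names are illustrative, not the study's code.

```python
import numpy as np

def personalization_indices(train_ages, test_age, subset_size, train_labels):
    """Indices of training samples closest in age to the test subject.

    All (rare) relapse samples are always kept in the subset.
    """
    order = np.argsort(np.abs(np.asarray(train_ages) - test_age))
    closest = set(order[:subset_size])
    relapses = set(np.flatnonzero(train_labels))  # keep every relapse sample
    return np.array(sorted(closest | relapses))

def top_k_features(X_sub, y_sub, k):
    """Columns with the highest |correlation| to the label in the subset."""
    corr = np.array([np.corrcoef(X_sub[:, j], y_sub)[0, 1]
                     for j in range(X_sub.shape[1])])
    corr = np.nan_to_num(np.abs(corr))  # constant columns yield NaN
    return np.argsort(corr)[::-1][:k]
```

Selecting the top-k features by absolute correlation is equivalent to the thresholding description in the text: the implied threshold is the k-th largest absolute correlation value.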
Evaluation Metric

We evaluated relapse prediction performance to assess the contributions from the clustering-based features. Any improvement in relapse prediction performance when clustering-based features are added to the baseline features would establish the value of clustering-based features for representing behavioral trends and detecting anomalies relevant to relapse prediction.

Similar to [19], we used the F2 score for model evaluation to slightly prioritize recall over precision (detecting a relapse is prioritized slightly over avoiding a false positive). The F2 score is given as F2 = 5 × precision × recall / (4 × precision + recall), i.e., the F-beta score with beta = 2.

Clustering Results

We trained GMM and PAM models to obtain cluster centers and identify different behavioral routine representations. The model selection procedure is explained in the supplementary file, and model comparison metrics for GMM and PAM are plotted in Supplementary Figures 2 and 3. For the GMM model, after evaluating AIC and BIC scores, model selection was narrowed to 8 to 14 clusters with a full covariance matrix. Among the models with equally good AIC and BIC scores, the models with 9 and 13 clusters achieved the best model stability and the least overlap between Gaussian classes. The final selection was 9 clusters because a lower number of clusters allows for higher interpretability. The number of clusters for the PAM model was also selected to be 9, based on the distance dissimilarity metric and the elbow method. See Supplementary Figures 4 and 5 for the output from the GMM and PAM models, including cluster size, average likelihood scores (for GMM), and distance scores (for PAM) (Figure S4) and kernel density plots that illustrate the distributions of the likelihood scores and distance scores (Figure S5).

To evaluate how well the days in each cluster conform to one routine, the one represented by the cluster center, we measured the spread of each cluster using the trace of the covariance matrix of all cluster samples. Results are illustrated in Figure 3. Clusters with a smaller covariance trace have lower within-cluster variability. The GMM cluster model resulted in a more extreme distribution of cluster spread (a higher range of covariance trace) because it allows the clusters to overlap (despite our model selection approach to limit overlaps), while the PAM model creates partitions in the data. By averaging all daily templates (data points) in every cluster, it is possible to observe the cluster profiles. For example, Figure 4 illustrates the average daily templates of two example signal modalities: acceleration and volume. The GMM model performs better in stratifying daily templates based on the overall level of activity in these signal modalities. The PAM model has higher variance in each cluster because it allows for a more lenient dissimilarity measurement.

Although the daily templates in each cluster have different levels of signal activity, they generally follow the same pattern as a normal circadian rhythm, e.g., the volume signal activity peaking during the day and being at its minimum during the night. Table 2 summarizes the average profile for each cluster, ordered from the most common to the least common.
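A quick check of the F2 definition above, using scikit-learn's fbeta_score with beta = 2 on made-up labels:

```python
from sklearn.metrics import fbeta_score

y_true = [0, 0, 1, 1, 0, 1, 0, 0]
y_pred = [0, 1, 1, 0, 0, 1, 0, 0]
# precision = 2/3, recall = 2/3  ->  F2 = 5PR / (4P + R) = 2/3
print(fbeta_score(y_true, y_pred, beta=2))  # 0.666...
```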
Association with Relapses

Out of the 27 relapse events in total, clustering features were missing before three events owing to missing signal modalities. For 11 of the remaining 24 relapses, anomalies in the clustering features were observed qualitatively in the time series of these features before and after the relapse. Most of these anomalies represent a transition to a cluster with inactive sensor readings, for example, GMM cluster 3 and PAM cluster 1 (Figure 5). We hypothesized that the patients whose days near the relapse period were assigned to the cluster of inactive sensor recordings most likely had their phones turned off a few days before the relapse. This transition to an inactive cluster is associated with an increase in the likelihood scores (GMM model-based feature) and a decrease in the distance scores (PAM model-based feature), because these clusters are more compact and points do not deviate much from the cluster centers.

Figure 5: Time series plots of the cluster assignment obtained from the GMM and PAM models (left pane), and the weighted average likelihood score and distance score of a sample patient (right pane). Changes in the cluster features are seen near the relapse instance (shown with the vertical red line).

Our cluster analysis between the NRx and pre-NRx periods showed that, on average, likelihood scores increase and distance scores decrease closer to relapses (Figure 6). This trend is robust with respect to different window sizes, and the largest change is observed with the NR20 window size. Asterisks indicate that the absolute Cliff's delta value between the two periods is above 0.147 (i.e., the effect is non-negligible; see Methods, Analyzing Cluster Results). Note that the plots are made with all patients' data collectively; an individual patient's plots would show a larger difference between the near-relapse window and the healthy period. Average Cliff's delta values across all relapse events are presented in Table S1 in Multimedia Appendix 1.

Relapse prediction

We evaluated the relapse prediction pipeline discussed in the Methods - Relapse Prediction section, with and without the clustering-based features. The highest F2 score of 0.23 was obtained when the baseline features were complemented with the clustering-based features, significantly higher than both the random classification baseline (F2 score of 0.042) and the F2 score of 0.18 obtained using the baseline features only.

Significant features

With the best relapse prediction obtained using all features, we identified the most important features within this feature set based on how often a feature was selected in the leave-one-patient-out cross-validation. The selection count for a feature was incremented by 1 if it was selected for use in a particular cross-validation loop for a test patient. Note that the number of features selected in each cross-validation loop differs, since the number of features is a hyperparameter selected with nested cross-validation. We then normalized the total selection count of each feature at the end of the cross-validation by the number of cross-validation loops. The results are given in Table 4.
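The selection-frequency computation described above reduces to a counting exercise; a minimal sketch follows. The feature names and per-loop selections are invented for illustration.

```python
from collections import Counter

# Invented record of which features were selected in each LOPO-CV loop.
selected_per_loop = [
    ["gmm_label_mean", "screen_use_std", "pam_dist_mean"],
    ["gmm_label_mean", "distance_home_mean"],
    ["gmm_label_mean", "screen_use_std", "label_transitions", "pam_dist_mean"],
]

counts = Counter(f for loop in selected_per_loop for f in loop)
frequency = {f: c / len(selected_per_loop) for f, c in counts.items()}
for name, freq in sorted(frequency.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {freq:.2f}")  # normalized selection frequency
```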
Table 4. The top-10 significant features in the relapse prediction pipeline based on the entire feature set (baseline and clustering-based features). The frequency with which a feature is selected across the cross-validation loops is used to assess the most significant features for relapse prediction. Note that different numbers of features are selected in each cross-validation loop, since the number of features is a hyperparameter tuned with a nested cross-validation loop.

Discussion

Principal Results

In this work, we used clustering models to obtain behavioral representations from mobile sensing data that could be useful for relapse prediction. The two clustering models explored in this study, GMM and PAM, grouped observations using different notions of distance/similarity between data points and therefore captured different behavioral representations (Table 2, Figure 4). These representations can be useful in downstream applications such as relapse prediction.

The GMM model defines distance based on one-to-one matching between the hourly observations of mobile sensing data in the daily template. The clusters identified from the GMM model have a widely varied distribution of cluster spread (Figure 3). With some compact clusters (represented by low cluster covariance) identified by the GMM model, the remaining data points that do not belong to any of these compact clusters form large-spread clusters with no typical cluster profile. These large-spread clusters also overlap the compact clusters: a point belonging to a compact cluster also shows a high likelihood of belonging to the large-spread cluster. As we wanted the clusters to capture distinct behavioral trends, we evaluated Bhattacharyya distances to identify the clustering model with the least overlap between identified clusters. The PAM model with DTW distance allows a more lenient match of the daily templates of behaviors represented by the mobile sensing-based features. Such lenient matching fits the context of this study, since DTW can discount spikes, speed differences, or time shifts when evaluating the dissimilarity between two daily templates of behaviors. However, the clusters obtained from the PAM model contain more dissimilarity, which makes it more difficult to summarize the cluster profiles for qualitative model interpretation.
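For reference, the Bhattacharyya distance between two multivariate Gaussians, used above to quantify cluster overlap, has a standard closed form. This sketch assumes means and full covariance matrices as produced by a fitted GMM; it is an illustration, not the study's code.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two multivariate Gaussians."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    mean_term = diff @ np.linalg.solve(cov, diff) / 8.0
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    cov_term = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return mean_term + cov_term

# Example: two 2-D Gaussians; a larger value means less overlap.
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
print(bhattacharyya_distance(mu_a, cov_a, mu_b, cov_b))
```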
Overall, GMM-based modeling is able to identify highly dense/populous clusters with very specific behaviors associated with them, along with some dispersed clusters that do not have a typical cluster profile. For example, clusters 3 and 9 identified by the GMM model (Table 2) represent two types of typical routines. Cluster 3 from the GMM model has almost all sensor readings close to zero other than sleep, likely representing an inactive/sedentary day, and cluster 9 has the days with the phone screen always turned on, likely representing a day with heavy mobile phone usage. The PAM model also has a cluster with mostly inactive days and constantly long screen time (cluster 5). However, this cluster has higher cluster variance. When the average cluster profile of this cluster is observed (Figure 4), some days that do not strictly follow these patterns of an inactive day and long screen time are also assigned to the cluster. In terms of behavioral features, this implies that the clusters obtained from a PAM model are likely to group together behaviors that do not always show homogeneity under qualitative observation. This is because of the flexibility of the PAM model in allowing non-parallel alignment between behavior time series. Nonetheless, it might be beneficial to consider PAM-based modeling for the previously mentioned reasons: the ability to discount spikes, speed differences, or time shifts when evaluating the dissimilarity between two daily templates of behaviors. Similarity (or dissimilarity) between behaviors might not always be fully represented by hourly alignment and comparison of mobile sensing data across days.

The behavior of a particular day, represented by the mobile sensing data template for that day, was characterized in a clustering model with different clustering-based features, such as the Gaussian likelihood and the DTW distance to the cluster centers. Days with assigned cluster likelihood scores close to 1 and assigned cluster distance scores close to 0 tend to belong to a dense cluster with a small spread. For example, cluster 3 from the GMM model and cluster 5 from the PAM model have, respectively, the highest likelihood of and the lowest distance to their assigned clusters (Figure S4). They also have low within-cluster variability as measured by the trace of the sample covariance (Figure 3). On the other hand, days characterized by low likelihood scores and high distance scores tend to be more dispersed and do not conform well to a specific routine. For example, clusters 5 and 7 in the GMM model and cluster 9 in the PAM model have such properties. Overall, GMM clustering and PAM clustering tend to produce clusters with different behavioral representations, and this is reflected in the clustering-based features, such as the likelihood scores and cluster distances, that are assigned to characterize each day.
In terms of relapse prediction, clustering-based features can capture long-term behavioral trends across subjects. This representation can complement existing approaches to behavioral representation for psychotic relapse prediction in schizophrenia, e.g., the daily behavioral rhythm change features proposed in [19]. We compared the clustering-based features before and near the relapse periods and saw significant differences in some of the features. This was also seen qualitatively in time-series plots of the clustering-based features, indicating that an upcoming relapse for a patient is associated with changes in these features (Figure 5). Clustering-based features were helpful in relapse prediction models (Table 3). When the clustering-based features were used together with the daily behavioral rhythm change features, a significant gain in relapse prediction performance was obtained (the F2 score improved from 0.18 to 0.23). These F2 scores and the associated improvements are significant, considering that a random classification baseline gives an F2 score of 0.042 on average. A Wilcoxon signed-rank test on the performance across multiple classifier initializations, with and without the clustering features, showed a significantly higher classification score when the clustering features were included (p = 0.002).

Clustering-based features were among the top features when the significance of features for the relapse prediction task was evaluated (Table 4). Features such as the mean cluster label and the number of label transitions were among the top (most frequently selected) features. Thus, both the information about which behavioral clusters the observations from the current monitoring period belong to (likely flagging behavioral clusters that are not normal behaviors) and how often transitions between different behavioral clusters happen (representing a patient showing frequent behavioral variations) are likely predictive of an oncoming relapse. Clustering-based features alone also proved valuable for relapse prediction: the GMM-based and PAM-based clustering features used on their own in the relapse prediction pipeline each led to an F2 score of 0.16 (Table 3). Therefore, clustering-based features are a useful approach for obtaining behavioral representations and can be employed in clinical applications such as relapse prediction.

Comparison to previous work, limitations, and future research

To the best of our knowledge, this is the first work to use clustering analysis to group the behavioral patterns of individuals with schizophrenia. Compared to previous works that used hourly data to train relapse prediction models, our work, based on clustering features that represent different behavioral patterns, has better model interpretability. Clustering analysis allows clinicians to understand the different types of patient routines, as well as their frequencies. In the context of schizophrenia, cluster transitions observed before relapses could suggest which types of behavior are potential relapse-related behavioral signatures. Intervention strategies to prevent relapses can then be designed accordingly.

Researchers have studied how missing data relate to relapses and anomalies in mental health conditions. In the dataset used for evaluation in this work, some passive sensing daily templates have consecutive hours with missing data from almost all signal modalities. While Adler et al.
used mean imputation in their anomaly detection study [21], here we filled missing values with zeros. Filling missing data with mean values might ignore the potential relationship between missing data and anomalies. In reality, it is quite possible that outpatients turn off their phones when they experience relapse symptoms. Indeed, we observed more days from the inactive sensor reading cluster closer to relapses. The increased prevalence of inactive days also caused the likelihood scores to increase and the distance scores to decrease before relapses. Initially, we hypothesized that adhering to any routine, or any cluster center, might reduce the risk of relapse, but it turned out that some routines, such as one dominated by missing sensing data, are actually associated with a higher risk of relapse.

Although the clustering features successfully improved the relapse prediction results, the only observable relapse signature was an increase in the likelihood score or a decrease in the distance score, together with a transition to an inactive cluster. For the relapse events that were not indicated by sensor inactivity, we did not find any non-trivial changes in any specific feature prior to the relapse. Similarly, the relapse prediction performance, with a best F2 score of 0.23, is relatively low. However, investigations of mobile sensing-based relapse prediction in mental health disorders are relatively new, and further improvements in this field can be expected as more datasets become available and machine learning models are adapted to the specific task of relapse prediction. In [43], relapse prediction in bipolar disorder was developed using clinical assessment features collected during patient visits, and a high F score (F1) of up to 0.80 was reported. The relapse rate was quite high (relapse in >60% of the included patients) in the dataset used by those authors, and the relapse prediction was done at the patient level (instead of a weekly prediction model in free-living conditions, as in our case), which might have led to the higher performance.

In this work, we obtained patient-independent clusters, i.e., generalized behavioral clusters, by pooling data from all the patients. We thus assumed that certain types of routines are shared across all outpatients with schizophrenia. Future studies can focus on establishing personalized cluster models. As suggested in [25], every patient's relapse signatures, and the extent to which they adhere to their daily routine, are different. The same study found that individual-level models could achieve better performance in predicting symptom severity. Our model also found that participants have different routines, as their frequency of staying in different clusters varies widely. Moreover, although most patients had higher likelihood scores and lower distance scores closer to relapses, some patients demonstrated the opposite trend. Generalized behavioral models might not fully represent, or discount the effect of, different confounding variables, such as job type, gender, and current health,
that could impact behavioral trends. Though we used model personalization in relapse prediction, only age was considered as a covariate of behavioral trends. Personalized cluster models that account for different aspects of interpersonal differences would further help mitigate possible biases in the behavioral representations due to confounding variables. Personalized relapse prediction models will also be required to test the effectiveness of individual-level clusters. However, sufficient data are needed from each new patient to fit patient-specific cluster models, so clinical deployment for new patients would be delayed. Cluster adaptation from generalized cluster models to personalized cluster models, as more patient-specific data become available, needs to be investigated in future work.

Conclusion

In this work, we proposed a methodology for fitting clustering models to the 24-hour daily behavior of schizophrenia outpatients and showed that information extracted from the cluster models improved relapse prediction. New features were generated from the cluster models by measuring every observation's deviation from the cluster centers representing typical behavioral patterns. Two different clustering models were investigated. The GMM model allows cluster overlap and produces a more extreme cluster dispersion. The PAM model with DTW distance creates partitional clusters that generalize better to new data but fails to identify dense clusters. The clustering-based features, in addition to the baseline features, helped to improve relapse prediction model performance. In future work, we will further investigate personalized clusters and relapse prediction models.

Figure 3: Trace of the sample covariance matrix for each cluster obtained with the GMM and PAM clustering approaches. A lower covariance matrix trace indicates more homogeneous clusters, i.e., clusters with lower within-cluster variability.

Figure 4: Average daily templates of two signal modalities, acceleration (top) and volume (bottom), in the clusters obtained from the GMM and PAM models. Different clusters capture different behavioral patterns.

Figure 6: Boxplots of the clustering features (likelihood scores from the GMM model on top and distance scores from the PAM model on the bottom) in different NRx (x days near relapse) and pre-NRx (all days before relapses not in NRx) periods. Asterisks indicate that Cliff's delta between the two groups is above 0.147.

Table 2. All cluster profiles obtained from the GMM and PAM models, in descending order of cluster size. Different clusters are associated with peculiar behaviors specific to that cluster, as can be observed from the typical profile of each signal modality in the cluster.

Table 3. Relapse prediction performance with different feature sets. The baseline features introduced in the previous work by [19] are complemented with clustering-based features for evaluation. The performance of the GMM-based and PAM-based feature sets is also evaluated separately.
Recruiting Black Men Who Have Sex With Men (MSM) Couples via Dating Apps: Pilot Study on Challenges and Successes

Background: HIV disproportionately impacts Black men who have sex with men (MSM), and targeting the primary relationship (ie, couples) using mobile technology for health holds promise for HIV prevention. Web-based recruitment of MSM is commonly employed in HIV prevention and intervention research. However, little is known about recruiting Black MSM couples on the internet in the United States. Objective: This study describes the process of recruiting Black MSM couples over social networking and dating apps frequented by MSM. We describe the activities for recruiting, screening, and enrolling participants as part of a randomized trial employing a multipronged recruitment approach. Methods: Black MSM in couples were recruited via three apps (ie, Jack'd, Adam4Adam, and Growlr) between May 2020 and March 2021, during the COVID-19 pandemic in the United States. Black MSM couples were eligible if one or both partners were Black, MSM, and living with HIV; if both partners were 18 years or older; and if they had been together for at least 2 months in what they both considered a primary relationship (ie, one to which both partners reported feeling more committed than to any other partner or relationship). Results: A total of 10 Black MSM couples (n=20) were enrolled via social networking apps. App recruitment activities were a combination of passive (eg, in-app advertisements) and active (eg, direct messaging of users) engagement. Recruitment approaches varied by social networking app owing to differences in app features. A full-time recruiter experienced challenges such as bugs (ie, technical errors in a computer program or system), navigating technical requirements specific to each app, and web-based harassment. Conclusions: Despite challenges, it was possible to recruit Black MSM couples virtually into research as part of a multipronged recruitment strategy. We identify tips for using web-based dating and other social networking apps as part of a recruitment strategy in future research with Black MSM couples.

Introduction

HIV remains a global health issue requiring continued efforts in prevention and intervention in low-, middle-, and high-income countries [1,2]. Within the United States, HIV disproportionately affects men who have sex with men (MSM). This health disparity is even greater among Black or African American (hereafter "Black") men [3-6]. Half of all Black MSM are estimated to acquire HIV in their lifetimes, compared to one in 11 for White MSM [7]. The primary romantic relationship is an intervention target given the high rates of seroconversion among MSM in these relationships [8-11]. Among Black MSM, nationwide estimates indicate that one-third to half of those with HIV are in a primary relationship [12-14]. Therefore, research on couples remains crucial in HIV/AIDS prevention and intervention. Location-based dating and social networking apps have become an option for participant recruitment in HIV/AIDS and sexual health research [15-17]. The advantages of app-based recruitment in HIV research were recently highlighted by the COVID-19 pandemic, during which in-person recruitment was prohibited owing to physical distancing and other public health measures [18]. Recruiting couples on the internet requires special consideration for relationship verification and data validation [19-21].
Emerging studies have used apps and other social media (eg, Facebook and Instagram) to recruit MSM couples [22,23]. However, knowledge gaps exist for using dating and social networking apps to engage racial or ethnic minority MSM couples. Recruiting Black MSM couples into research studies presents important considerations and is challenging for myriad reasons. Distrust of research and medical institutions and cultural stigma concerning race, sexual orientation, and HIV status are barriers to research participation for Black MSM [24,25]. MSM in couples may have a diverse range of agreements regarding sex with others outside of their relationship. Sexual agreements are the mutual understanding between primary partners regarding what sexual behaviors are allowed [26]. These agreements are prevalent among 58% to 99% of same-sex male couples [27], with 11% to 64% of these agreements including sex with outside partners [27]. Given that some MSM have agreements regarding outside partners, dating and social networking apps provide a way to reach partnered MSM who may use apps to socialize or find potential sexual partners. Few studies have presented details on using dating and social networking apps to recruit Black MSM couples, highlighting a potential knowledge gap on methods for engaging couples in research. Thus, the goal of this study is to describe the process of using dating and social networking apps to recruit same-sex couples to inform future trial designs. This study is part of a multipronged recruitment approach of a pilot randomized controlled trial (RCT) with Black MSM couples with HIV.

Study Overview

The dating app recruitment process described herein was part of a multipronged recruitment approach of a pilot RCT to test the feasibility and acceptability of a mobile app intervention for improving HIV care and treatment among Black MSM couples living with HIV. We targeted recruitment efforts on dating and social networking apps frequented by Black MSM: Jack'd, Adam4Adam (A4A), and Growlr [19,28-31]. Qualitative data particular to each app are described below to highlight the unique successes and challenges experienced using this underutilized recruitment approach. To maximize engagement with Black MSM, we hired a Black, cisgender, same-gender-loving-identified man as the study recruiter, who performed all recruitment activities and documented the recruitment process. Recruitment occurred between May 2020 and March 2021. Black MSM couples were eligible if one or both partners were Black, MSM, and living with HIV; if both partners were 18 years or older; and if they had been together for at least 2 months in what they both considered a primary relationship (ie, one to which both partners reported feeling more committed than to any other partner or relationship). Owing to differences in user engagement requirements, we used different engagement approaches for each app. On Jack'd, recruitment was conducted through their in-app advertisements. Interested candidates who clicked on the advertisement were directed to a Qualtrics prescreener questionnaire that obtained basic qualifying information (eg, current place of residence, race, relationship and HIV status, and length of time on antiretroviral medications for HIV). Study staff then contacted eligible candidates via SMS text message using the telephone number provided. Recruitment on A4A was carried out by sending private messages to potential participants using the in-app messaging feature.
The recruiter identified potential participants using the app's search filters, which allowed users to filter other users' profiles on the basis of set criteria such as race, HIV status, and relationship status. The study recruiter identified users whose race or ethnicity was set to Black, African American, or mixed. We included "mixed" race because many Black MSM may identify as mixed race given the diversity among Black communities. We also found that some users identify their race as mixed to avoid being filtered out by users who filter out Black-identified users within the app. The study recruiter identified users whose HIV status was set to HIV-positive, undetectable, or unanswered. Users who left their HIV status unanswered were considered for the study, as this would encompass anyone who had never been tested or chose not to disclose their serostatus on the internet. Finally, the study recruiter identified users whose relationship status was set to dating, partnered, open relationship, polyamorous, or married. Once potential participants were identified, the study recruiter privately messaged each of them individually. A4A has a message delivery report in its platform, which allowed recruiters to know whether a message had been read. Users who read but did not respond within 48 hours of the first message were sent a follow-up message asking whether they were still considering participation in the study or were no longer interested. Messages that remained unread required no follow-up, as those users were likely inactive. Users who communicated interest were then asked to complete a phone screener with a study staff member to determine eligibility. Recruitment on Growlr was carried out by sending private messages to potential participants using the in-app messaging feature, and in-app advertisements contained the weblink to a Qualtrics prescreener. A paid Growlr service, the "SHOUT!" feature, allowed the recruiter to send the study information to multiple people in a specified vicinity. A total of 10 couples (N=20) recruited via apps were enrolled in the trial, including 7 same-race Black couples and 3 interracial couples.

Ethical Considerations

This study received ethics approval from the institutional review board of the University of California, San Francisco (IRB #15-18042). All participants provided informed consent to participate in the study.

Results

Overview

Individual- and couple-level characteristics of the couples recruited via apps are reported in Tables 1 and 2, respectively. The following outlines the findings from the process of recruiting participants via each app.

Overall Findings

In-app advertisements on Jack'd were used for recruitment on the platform. Eligible candidates who completed the Qualtrics prescreener questionnaire through the study advertisement were contacted by recruiters via SMS text message at the telephone number they had provided. If the candidate did not respond to the SMS text message within 24 hours, a recruiter would follow up with a telephone call and leave a voicemail message if there was no answer. Potential candidates had 1 week to respond before another attempt at contact was made. This pattern of correspondence continued until either the candidate indicated that they were no longer interested or the telephone number was no longer in service. Interested and eligible candidates who completed the Qualtrics prescreener then completed a telephone screener.
Participants were scheduled for an interview once they provided informed consent to participate. A total of 35,912 unique impressions, or the number of times the study advertisement was displayed to a user for the first time, occurred on Jack'd in 4 major cities (Atlanta, Georgia; Los Angeles, California; Houston, Texas; and Washington, District of Columbia). Of these views, 924 users clicked on our advertisement at least once. Consequently, the click-through rate, or the number of unique clicks divided by the number of unique impressions, ranged from 0.85% (Atlanta, Georgia) to 1.16% (Houston, Texas).

Character Limits for Advertisement Placement

Though recruitment on Jack'd was carried out through in-app advertisements, the imposed character limits made it difficult to fully describe the target population and goals of the study. One solution was to include part of the study description in the image selected for our profile, at an extra cost (Figures 1 and 2). Jack'd removed our advertisements and stated that adding more text to our recruitment advertisements would incur an extra cost. Our team elected to pay the additional fee to include a longer description in our in-app advertisements so that interested applicants had more information prior to completing the prescreening measure.

Technical Bugs

The recruiter experienced functionality issues with the web-based browser and mobile app versions of A4A. The web-based version was designed to look like the app, but there were technical bugs in several functions. For example, the recruiter made edits to the profile in the web-based version; however, these edits were not always reflected in the app version. Moreover, blocks of text from the recruiter profile would often be removed without notification or explanation, leaving out key details of the study, and regular monitoring was required to ensure that information published to the app profile had not been deleted. Unfortunately, when information was deleted from the profile, no error messages or warnings were communicated to the recruiter. As such, there may have been times when potential candidates missed vital information about the study. Potential candidates were contacted on the basis of their eligibility potential, which was determined using the app search filters (eg, candidate-identified race, relationship status, and HIV status). Additionally, recruiters scanned the details of candidates' profiles for information that might qualify or disqualify them for the study. Owing to an unexpected account suspension, we were unable to break down the numbers between California and Florida (n=69).

Existence of Bots

Successful engagement with potential participants on A4A could be improved simply by the recruiter distinguishing himself from automated "bot" profiles, which send spam and are often ignored by app users. The recruiter found positive changes in user responses when he developed rapport with other users. For example, one user had a profile photo with a dog, prompting the recruiter to comment, "Cute dog, it reminds me of my childhood pet," followed by a self-introduction. In another successful recruitment interaction, the recruiter started a conversation inquiring about the reference to a song in a user's profile name. The shared knowledge between the user and recruiter about the song led to the user's interest in further discussion. After the recruiter shared his role in the study, the user chose to enroll.
Inability to Track Profiles and Messages

Tracking contacts on A4A was not straightforward and required additional steps. A4A offers a feature to "favorite" users, allowing a profile to be bookmarked in an in-app list. This list enabled the recruiter to stay connected to contacts even if they changed their username. However, owing to technical bugs, the feature did not allow more than one person to be added. Thus, the recruiter used the web-based version to save the URLs of users' profiles for tracking purposes. Additionally, the chat function only allows a limited number of messages to be sent before older messages are lost. To save relevant information, the recruiter tracked and recorded usernames, dates of interaction, follow-up dates, user profile URLs, and other notes in Microsoft Excel.

Removal of Flyer Image From Recruiter Profile

During the recruitment process on A4A, the recruiter received an automated message indicating that the study's flyer image, which had been uploaded to the recruiter's profile, was removed because it violated the app's standards. The recruiter then changed the study profile picture to a photo of himself. Thereafter, when potential participants expressed interest in the study through private messaging, the recruiter would send the flyer image directly to them.

Harassment

The recruiter experienced racially and politically charged verbal abuse during the height of the Black Lives Matter protests in 2020. Racial epithets (eg, "mountain caucus monkey") were used by an app user without provocation. Romantic and sexual harassment were also common.

Growlr

Similar to A4A and Jack'd, Growlr recruitment procedures involved both active and passive approaches. The study recruiter identified potential participants through the app's search filters and messaged eligible users privately; in-app advertisements with the study information also contained a weblink to the prescreener. The paid Growlr feature "SHOUT!" was used to send the study information to multiple users in a specified region. We paid for "SHOUT!" broadcasts in 4 separate cities (Charlotte and Raleigh, North Carolina; Nashville, Tennessee; and Cleveland, Ohio). Users within a 25- to 30-mile radius were able to see these advertisements, resulting in 2955 total views. Similar to A4A, the recruiter experienced verbal abuse and harassment on Growlr. Growlr removed the study flyer from the recruiter's profile, with a message indicating that it violated company guidelines against in-platform solicitation. A new flyer that excluded any mention of participant payment was posted; it was nevertheless removed again, and Growlr's customer service did not respond to our inquiries about the second removal. Owing to the technical bugs and the low success rate (no eligible couples were found), the recruiter discontinued efforts on the platform after 2 weeks.

Discussion

Principal Findings

HIV incidence among Black MSM in the United States continues to be disproportionately high [3,4], with one-third to half of Black HIV-positive MSM estimated to be in a primary relationship [13,14,32]. Nonetheless, societal stigma, distrust of research and medical institutions, and other systemic barriers negatively impact HIV prevention and treatment for this underserved community [24,25]. As such, novel approaches to recruiting Black MSM couples are needed. There are relatively few dyadic HIV research studies with Black MSM couples, partly owing to the resources such studies require (eg, time and staffing).
Little information exists detailing successful strategies for web-based recruitment of Black MSM couples into HIV research. While dating and social networking apps have commonly been used to recruit single MSM for research studies [19], no research has used dating apps to explicitly recruit MSM couples. This study demonstrated the feasibility of dating and social networking apps for recruiting Black MSM couples as part of a pilot RCT of a couples-focused app for improving HIV care engagement. Recruiting MSM couples through dating and social networking apps is a sensible recruitment strategy given the prevalence of sexual agreements among MSM couples [27]. Consistent with previous research, this sample of couples contained predominantly same-race Black partnerships [32]. The search and filter functions in the apps, such as filtering users on the basis of their reported relationship status, helped to identify potential participants per the eligibility criteria. A4A and Growlr offered the functionality to filter user-identified race or ethnicity categories, which reduced the time needed to search for eligible users. Paid advertising campaigns through Jack'd and Growlr were an opportunity to recruit passively, instead of actively searching through users and initiating conversations to determine eligibility and interest. Although privately messaging potential participants on A4A was a successful recruitment strategy, it was not without challenges. Our Black, same-gender-loving-identified recruiter reported multiple episodes of harassment of various types (eg, sexual, racial, and political). Additionally, app-specific guidelines for study advertisements varied (eg, character limits and other rules). Regular check-ins between the principal investigator and recruiters, and careful attention to the guidelines for each app, are necessary.

Limitations

Our study recruited for a one-time interview, and we do not know how these findings generalize to other, longer-term research requirements. Further, the sample is likely biased toward nonmonogamous couples owing to the generally sexual purposes for which MSM use the apps. Finally, given the evolving nature of the software, some of the app features reported here may or may not reflect what is currently available.

Comparison With Prior Work

Apps designed for MSM have become increasingly popular, and users on those platforms may visit them frequently (eg, daily) [33]. Research has recruited single MSM [19,34-36] and Black MSM [37-39] via apps. Given the high HIV transmission rates between MSM primary partners [8-11], recent studies have also recruited MSM couples through a combination of web-based engagement (eg, Facebook and gay websites) and apps [10,40-44], but not exclusively via apps. No research documents the utility of app-based recruitment for Black MSM couples [45,46]. Given the disproportionate rates of HIV [6,7,47] and the importance of coordinating HIV prevention, care, and treatment [45,48,49] within this community, there is urgency in finding novel approaches to recruiting Black MSM couples for HIV prevention studies.

Conclusions

Dyadic HIV research with Black MSM couples is important, but knowledge gaps remain. Challenges to research with this population include participant recruitment, which can be resource intensive, underscoring the need for recruitment strategies that have been demonstrated to be feasible and acceptable.
We discuss our strategies for engaging Black MSM couples via social networking apps and the associated technical challenges, including the harassment directed at our recruiter. We have identified a way forward for using social networking apps to engage Black sexual-minority couples to inform future research.
Inflation in the Mixed Higgs-$R^2$ Model

We analyze a two-field inflationary model consisting of the Ricci scalar squared ($R^2$) term and the standard Higgs field non-minimally coupled to gravity in addition to the Einstein $R$ term. Detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by the $R^2$ inflation and the Higgs inflation, implying that there is a close relation between this model and the $R^2$ inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in the parameter space involved is to modify the scalaron mass, so that the original mass parameter in the $R^2$ inflation can deviate from its standard value when the non-minimal coupling between the Ricci scalar and the Higgs field is large enough. The 40-day period may be related to the helical structure of the magnetic field at the base of the jet, or to the orbital motion close to the central primary black hole.

Introduction

A number of single-field models have been proposed [1-6] since the 1980s, some of which are in good agreement with observations of the cosmic microwave background (CMB) [7], such as the $R+R^2$ inflationary model (the $R^2$ one for brevity) [1], often called the Starobinsky model, and the original Higgs inflationary model [8-10], in which the scalar field is strongly non-minimally coupled to the Ricci scalar. The $R^2$ term added to the Einstein-Hilbert action yields an effective dynamical scalar field, the scalaron, realizing a quasi-de Sitter stage in the early universe, while the Higgs boson of the standard model, with the help of a non-minimal coupling to gravity, $\xi\chi^2 R$, plays an essential role as the inflaton driving inflation in the Higgs inflationary model. Both models produce the same spectral index of primordial scalar (adiabatic density) perturbations, which is supported by recent CMB observations. Meanwhile, the tensor-to-scalar ratio given by these two models has an amplitude that, though small, is still hopefully detectable in the future. Given the excellent performance of the $R^2$ inflation and the Higgs inflationary model, it is natural and more realistic to consider the extension of such single-field models to multi-field inflation by combining them, which we do in this paper.

Multi-field inflation is a class of cosmological inflationary models with a de Sitter stage produced by more than one effective scalar field, among which two-field models constitute a special case. In multi-field inflationary models, only one linear combination of the scalar fields is responsible for the inflationary stage, and consequently quantum fluctuations produced in this direction serve as adiabatic perturbations, which finally grow to become the seeds of the inhomogeneities seen in the CMB temperature anisotropy and polarization and produce the large-scale structure and compact objects in the universe. The other independent combinations are, on the other hand, responsible for the production of isocurvature perturbations [12] and some other possible features [13]. Isocurvature modes represent the unique feature of multi-field models distinguishing them from single-field ones. They can survive to the present only under special conditions [14].
Also, in the presence of non-minimal coupling, recent research [15] points out that the preheating process after inflation becomes much more violent than in the case without it. In this paper, we investigate Higgs-$R^2$ inflation, namely the combination of the Higgs inflation and the $R^2$ inflation, in a certain part of the parameter space. For realistic values of the Higgs self-coupling we find the presence of a mass hierarchy and the appearance of effectively single-field slow-roll inflation in the original Jordan frame. We write down the effective single-field action to quadratic level for this model and use it to calculate the power spectrum of curvature perturbations. We find that this two-field model can be treated as an effective $R^2$ inflation with a modified scalaron mass. In Sec. 2, we introduce the basic details of the model. We calculate the power spectrum in Sec. 3 and discuss the effective $R^2$ inflation in Sec. 4. Our conclusions and outlook are presented in Sec. 5.

Lagrangian and Equations of Motion

The action considered here is given in the original Jordan frame, where the space-time metric is denoted as $\hat{g}_{\mu\nu}$, by
$$S = \int d^4x \sqrt{-\hat{g}}\left[\frac{F(\chi,\hat{R})}{2} - \frac{1}{2}\hat{g}^{\mu\nu}\partial_\mu\chi\,\partial_\nu\chi - \frac{\lambda}{4}\chi^4\right], \qquad (2.1)$$
where $M_p \equiv (8\pi G)^{-1/2}$ and $\chi$ is a singlet scalar field, a simplified model of the Standard Model Higgs boson. We neglect its interactions with gauge fields. Here $F(\chi,\hat{R})$ is defined by
$$F(\chi,\hat{R}) \equiv \left(M_p^2 + \xi\chi^2\right)\hat{R} + \frac{M_p^2}{6M^2}\hat{R}^2. \qquad (2.2)$$
This action was recently considered in [16,17]. $\chi$ has a non-minimal coupling term with the Ricci scalar. We take the sign of the non-minimal coupling constant $\xi$ such that the conformal coupling corresponds to $\xi = -1/6$. Defining the scalaron field as [18,19]
$$\psi \equiv \sqrt{\frac{3}{2}}\, M_p \ln\!\left(\frac{1}{M_p^2}\frac{\partial F}{\partial \hat{R}}\right)$$
and performing the conformal transformation $g_{\mu\nu} = e^{\sqrt{2/3}\,\psi/M_p}\,\hat{g}_{\mu\nu}$, we can transform the original action (2.1) into the one in the Einstein frame and express the new action in terms of the new scalar fields as
$$S = \int d^4x \sqrt{-g}\left[\frac{M_p^2}{2}R - \frac{1}{2}e^{-\sqrt{2/3}\,\psi/M_p}\,g^{\mu\nu}\partial_\mu\chi\,\partial_\nu\chi - \frac{1}{2}g^{\mu\nu}\partial_\mu\psi\,\partial_\nu\psi - U(\chi,\psi)\right],$$
where the potential is expressed as
$$U(\chi,\psi) = e^{-2\sqrt{2/3}\,\psi/M_p}\left[\frac{\lambda}{4}\chi^4 + \frac{3}{4}M^2M_p^2\left(e^{\sqrt{2/3}\,\psi/M_p} - 1 - \frac{\xi\chi^2}{M_p^2}\right)^2\right]. \qquad (2.7)$$
In addition to the metric and the Higgs field $\chi$, $\psi$ is the third dynamical field, which originates from the $R^2$ term. Note that $\psi$ shows up only in the exponent with an $\mathcal{O}(1)$ numerical factor, so that large values of $\psi$ significantly suppress the terms of higher order in $\exp(-\sqrt{2/3}\,\psi/M_p)$. The kinetic terms of the two scalar fields in the Einstein frame are coupled. This means that the field space spanned by these two fields is not flat. Following [20,21], we introduce an induced metric of the field space and rewrite this system in the more compact form
$$S = \int d^4x \sqrt{-g}\left[\frac{M_p^2}{2}R - \frac{1}{2}h_{ab}\,g^{\mu\nu}\partial_\mu\phi^a\partial_\nu\phi^b - U(\phi)\right], \quad \text{where} \quad h_{ab} = \mathrm{diag}\!\left(e^{-\sqrt{2/3}\,\psi/M_p},\, 1\right), \qquad (2.9)$$
with $(\phi^1, \phi^2) = (\chi, \psi)$. Here the Latin indices $a, b = 1, 2$ represent components in field space, and the Greek indices $\mu, \nu = 0, 1, 2, 3$ denote space-time components.

We take the spatially flat Robertson-Walker metric $ds^2 = -dt^2 + a^2(t)\delta_{ij}dx^i dx^j$ as the background ($i, j = 1, 2, 3$) and split all fields into homogeneous background parts and small space-time-dependent perturbations, incorporating scalar metric perturbations in the spatially flat gauge. The background fields then obey
$$\frac{D\dot{\phi}_0^a}{dt} + 3H\dot{\phi}_0^a + h^{ab}U_{,b} = 0, \qquad (2.12)$$
where $DX^a = dX^a + \Gamma^a_{bc}X^b d\phi_0^c$ is analogous to the directional derivative in curved space-time and $\Gamma^a_{bc}$ is the Christoffel symbol of the curved field space. It is easy to show that $D/dt = \dot{\phi}_0^a\nabla_a$. Note that the equations of motion for the perturbations, Eqs. (2.13), have already been transformed into those for spatial Fourier modes. The equations of motion of the scalar fields can be regarded as modified geodesic equations in the curved field space. The first term in (2.12) is just the ordinary geodesic equation, while the second and the third terms represent modifications from the cosmic expansion and the scalar field potential, respectively.
Correspondingly, the field perturbation equations (2.13) can be regarded as geodesic deviation. Their equations of motion are also modified by cosmic expansion and the potential. With all the effects taken into account, the trajectory of the fields traces neither the geodesics in the curved field space nor the bottom of the valley of the potential as postulated in [17]. Note that since generally the trajectory take turns during inflation, we also expect that there will be effects due to the turning. Slow-Roll Inflation and Curvature Perturbations In this paper, we mainly focus on the parameter regime where ξ > 0 and fix the self coupling at a typical value λ = 0.01 from phenomenology. We will also briefly discuss the situation when ξ < 0 at the end. Features of the Potential Here we give two examples for different combinations of ξ and M in Figure 1. The potential (2.7) is invariant under χ → −χ. One can calculate the effective mass of the Higgs field, m 2 χ , by taking derivatives of the potential. The dominant contribution in small χ regime comes from a term proportional to ξ, −3ξM 2 exp − 2/3ψ/M p . For positive ξ, Higgs field obtains a negative m 2 χ around the origin where its amplitude will grow exponentially. In large χ regime, m 2 χ is dominated by a term proportional to χ 2 , 3λ(1 + 3ξ 2 M 2 /λM 2 p ) exp −2 2/3ψ/M p χ 2 , whose coefficient is always positive. These properties imply the existence of a local minimum on the potential for a given ψ which corresponds to the valleys in Figure 1. Thus, independent of the initial position of χ, with a large ξ, the Higgs field will quickly fall into one of the valleys and evolve around the local minimum. If ξ takes a small value, i.e. m 2 χ is small, χ direction will become flatter. In this case, if the initial conditions start from a large χ value, it is possible for Higgs field to slowly roll down the potential wall which is similar to the situation discussed in [23]. As for ψ direction, it is always flat in large ψ regime so that it has similar behavior to the scalaron in the R 2 inflation. As we shall see later, there is a turning in the trajectory which can affect the sound speed of the curvature perturbations during inflationary phase. The angular velocity at this turning is not large in the parameter regime we consider here, though. After the end of inflation, the fields will oscillate around the global minimum of the potential at (χ, ψ) = (0, 0) where reheating is expected to happen. According to the recent work [15], the particle production during preheating will be violent due to the appearance of non-minimal coupling between Higgs and gravity. Slow-Roll Inflation As mentioned in previous sections, the evolution trajectory of two scalar fields are affected by the curved nature of the field space, the potential shape and the expansion of the universe. Thus, it would be more convenient to discuss the features of this trajectory by defining unit vectors T a and N a [24] as T a ≡φ a 0 which are tangent and normal to the trajectory, respectively. Here we denoteφ 2 0 ≡ h abφ a 0φ b 0 andθ is the angular velocity describing the turning in the trajectory which, according to the normalization condition, is given byθ so that N a is explicitly given by We define the slow-roll parameters analogous to the single-field case as Note that η a is no longer a scalar but a vector which means that one needs two different ηs to describe the evolution of these two different directions. 
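As an aside, returning to the shape of the potential: the valley structure described above can be illustrated with a minimal numerical sketch, using the reconstructed potential (2.7). The parameter values below are illustrative choices, not fits:

```python
import numpy as np

Mp = 1.0                              # work in Planck units
M, xi, lam = 1.3e-5, 2000.0, 0.01     # illustrative parameter choices
a = np.sqrt(2.0 / 3.0) / Mp

def U(chi, psi):
    """Einstein-frame potential (2.7); its small- and large-chi limits
    reproduce the Higgs mass terms m_chi^2 quoted in the text."""
    e = np.exp(-a * psi)
    return e**2 * (lam / 4 * chi**4
                   + 0.75 * M**2 * Mp**2 * (1.0 / e - 1.0 - xi * chi**2 / Mp**2)**2)

psi0 = 5.0 * Mp
chi = np.linspace(0.0, 0.2 * Mp, 2001)
i = np.argmin(U(chi, psi0))
print(f"valley at chi ~ {chi[i]:.4f} Mp for psi = {psi0} Mp")
```

For large $\xi$ the valley is deep and narrow, so the Higgs field is quickly attracted to it, in accordance with the discussion above.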
Using the unit vectors, one can easily obtain an η for each direction as where U N ≡ N a U ,a . Slow-roll inflation requires that 1 and η || 1. Note that the slow-roll requirement does not impose any constraint on η ⊥ which means that it can be large. Then the angular velocityθ can also be expressed in terms of the slow-roll parameter aṡ θ = Hη ⊥ . (3.11) Since η ⊥ can be large, we may expectθ to be large as well. However, this does not spoil the validity of the effective field theory used below [24] as long as the adiabatic condition, |θ/θ 2 | M eff , is satisfied. We now consider perturbations in this formalism. In flat gauge, the comoving curvature perturbation and the isocurvature perturbation are defined as [24] Expanding the perturbed action to second order, we find from which it is clear that the curvature perturbations evolve along the light (massless) direction while the isocurvature modes have an effective mass M 2 The explicit form of U N N and M 2 eff is given by respectively. Thus, light modes and massive modes are separated. Integrating out the high energy degrees of freedom as in [24], the massive modes, F, are completely determined by the massless modes, R, so that one gets its effective action to quadratic order, The appearance of the turning gives corrections to the sound speed, c −2 s (k) = 1+4θ 2 /(k 2 /a 2 +M 2 eff ) which is exact unity in single-field models. Therefore, the effective action obtained for curvature perturbations is that of a single-field theory with a modified sound speed. Whenθ 2 is close to U N N , c −2 s 1, the effect of the turn in the trajectory becomes significant, so that the sound speed is largely modified. However, in the region of the parameter space we consider, the modification is not significant. In this action, only the adiabatic mode appears but it does not mean that the heavy mode has no influence on the evolution of the adiabatic mode. Both light and heavy modes have high energy and low energy contributions. Integrating out the high energy part to get the low energy effective theory does not mean decoupling between light and heavy modes. As long as a turning exists, adiabatic and isocurvature modes couple with each other and the isocurvature mode is forced to oscillate coherently with the light field at low frequency [24]. In the slow-roll regime,θ is automatically small and slowly changing in time, so that approximately we can quantize the quadratic action considering the sound speed as a constant close to unity. As a result, the power spectrum is just which gives the scalar index and the scalar-to-tensor ratio These results are just like those in single-field models with modification from the non-trivial sound speed. However, as mentioned already, large modification is not expected because in the slow-roll regime as well as the region of the parameter space we consider, the sound speed does not deviate from unity too much. Predictions for observations As we can see above, the dynamics as well as power spectrum are determined by three parameters, Higgs self-coupling λ, non-minimal coupling ξ and scalaron mass M . Fixing λ = 0.01 and the amplitude of curvature perturbation at the pivot scale to be 2 × 10 −9 , we choose several groups of ξ and M to calculate n s and r. All the results are completely degenerate (Shown in Figure 2) with those of the R 2 inflation. 
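The degeneracy is easy to understand quantitatively. In the slow-roll regime the expressions above reduce to the standard single-field forms (our reconstruction, with $c_s$ the modified sound speed quoted in the text):

$$ P_R(k) = \frac{H^2}{8\pi^2\,\epsilon\, c_s\, M_p^2}\bigg|_{c_s k = aH}, \qquad r = 16\,\epsilon\, c_s , $$

and since $c_s \simeq 1$ here, the plateau-type potential gives the familiar large-$N$ behaviour $n_s \simeq 1 - 2/N$ and $r \simeq 12/N^2$, i.e. $n_s \simeq 0.965$ and $r \simeq 0.004$ for $N = 55$, for any of the $(\xi, M)$ pairs chosen.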
This should not be surprising because this two-field model is built from the R 2 inflation and the Higgs inflation both of which give predictions staying right at the center of the famous n s − r plot from Planck's observational data in 2015 [7]. Also, due to the presence of mass hierarchy and considering slow-roll inflation, the effective theory of this model reduces to a single-field model with slightly modified sound speed as one can see above. Therefore, we should not expect this model to give predictions which largely deviate from those of the Higgs inflation or the R 2 inflation in this level. The degeneracy phenomenon implies that with a fixed λ = 0.01, the amplitude of curvature perturbations, P R (k) 2 × 10 −9 , gives a strong constraint on ξ and M . Varying ξ from 0.1 to 4000, one obtains the relation depicted in Figure 3. For small enough ξ, M remains almost constant at around 10 −5 M p which coincides with the case of the standard R 2 inflation. This situation holds until ξ reaches 1000 where M starts to grow rapidly to compensate the change of ξ. In this logarithmic plot (Figure 3), the relationship between M and ξ can be approximately separated into two branches. One is ξ 1000 where we can just take the scalaron mass M to be the same as in the standard R 2 inflation. The other is ξ 1000 where the value of the scalaron mass in the R 2 inflation is no longer valid since the non-minimal coupling is so large that it modifies the model significantly. In order to maintain the amplitude of curvature perturbations, M must take a much larger value. So far we have qualitative understanding of this relation. We present more precise explanation in the following. Explanation To explain this, we firstly have a look at the potential (2.7). In slow-roll regime, the Friedmann equation approximately gives in large ψ regime with small ξ while in large ψ regime with large ξ. In the case of (4.1), the second term in the parenthesis is negligible compared with unity so that Hubble parameter is just a constant completely determined by M . However, in the case of (4.2), the second term in the parenthesis is not negligible compared with unity which means that the second factor in (4.2) could possibly be much smaller than unity for large enough ξ. In order to preserve the amplitude of curvature perturbations which is determined by Hubble parameter and its derivatives as mentioned above, we need a larger value of M to "protect" the Hubble parameter from being too small. Intuitively, the scalaron mass M mainly controls the height of the hill in the middle of the potential and while the non-minimal coupling ξ mainly controls the depth and position of valleys on both sides of the hill. For a given amplitude of curvature perturbations, we require the inflaton to slowly roll down a trajectory whose height is around a certain value during inflation. If M is not too large that means that the height of the central hill is small, the inflaton is allowed to roll along the central region of the potential, e.g. the left panel in Figure 1. On the contrary, if M is too large, one then needs a larger value of ξ which would generate a deep valley on each side of the hill so that the inflaton can leave the too high hill top to roll along a trajectory which is of proper height to generate small enough curvature perturbations. One may see this relation more clearly without any conformal transformation by considering the relation between this two-field model and the R 2 inflation directly in the Jordan frame. 
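Before doing so, the two regimes (4.1)-(4.2) can be made explicit. A plausible reconstruction of the slow-roll Friedmann equation along the valley is

$$ 3H^2M_p^2 \simeq \frac{3}{4}M^2M_p^2\left(1 - e^{-\sqrt{2/3}\,\psi/M_p}\right)^2\left(1 + \frac{3\xi^2M^2}{\lambda M_p^2}\right)^{-1}. $$

For small $\xi$ the second term in the parenthesis is negligible and $H$ is set by $M$ alone; for large $\xi$ the last factor suppresses $H$, so $M$ must grow to keep the amplitude of curvature perturbations fixed. Anticipating the effective-mass relation $1/\tilde M^2 = 1/M^2 + 3\xi^2/(\lambda M_p^2)$ implied by (4.6) below, the two branches of Figure 3 follow from a minimal numerical sketch:

```python
import numpy as np

Mp, lam = 1.0, 0.01
Mtil = 1.3e-5 * Mp    # effective scalaron mass fixed by P_R ~ 2e-9

xi = np.logspace(-1, np.log10(4.4e3), 12)
# invert 1/Mtil^2 = 1/M^2 + 3 xi^2/(lam Mp^2) for the bare mass M(xi)
M = 1.0 / np.sqrt(1.0 / Mtil**2 - 3.0 * xi**2 / (lam * Mp**2))

for x, m in zip(xi, M):
    print(f"xi = {x:10.3g}   M/Mp = {m:.4e}")
print("xi_c =", np.sqrt(lam / 3.0) * Mp / Mtil)   # ~ 4.4e3: M diverges here
```

For $\xi$ well below $\xi_c$ one finds $M \simeq \tilde M$, constant; as $\xi \to \xi_c$ the bare mass $M$ grows rapidly, reproducing the plot.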
From the action in this frame (2.1), we find that the non-minimal coupling can be partially regarded as an extra contribution to the kinetic term for Higgs field χ proportional to ξ since there are second derivatives in Ricci scalar so that we can realize this by using integration by parts. Then, for large ξ and ψ regime, the original kinetic term in (2.1) is negligible. Dropping the kinetic term for χ, the original action becomes As a result, the equation of motion for χ just becomes a constraint on χ [22] and R as χ 2 = ξR/λ which simplifies the action as is the effective mass squared of the scalaron, which should takeM = 1.3 × 10 −5 M p [30,31] to reproduce the observed amplitude of curvature perturbation. From (4.6) we can understand two characteristic regimes of Figure 3. When the second term in the denominator is much smaller than unity, namely, ξ λ 3 Mp M ≡ ξ c ∼ = 4.3 × 10 3 , the scalaron mass should simply take the original value of R 2 model. As ξ increases, approaching the above critical value, M 2 also starts to increase according to reaching infinity at ξ = ξ c . This is in perfect agreement with Figure 3. However, note that in the simplified case we do not have modification of sound speed. The effective scalaron mass can be regarded as the modification of potential that is different from the change of sound speed although they may both produce similar features on the power spectrum. As is well-known, only with either one of the two ingredients, R 2 or Higgs, it is enough to achieve successful inflation model with proper parameter value which is favored by the observation we have so far. Since the model we consider is the combination of these two single-field models, it goes back to either of them in some limits of the parameters. One can easily see from the Lagrangian that if we take λ → 0 and ξ → 0 in the model or just simply set χ = 0, what we get is just the R 2 inflation with only one parameter M , the mass of the scalaron. The other limit is the Higgs inflation where we take M → ∞. Note that we cannot just take ψ = 0 which is analogous to what we did above. The reason is that the new degree of freedom in ψ comes from the quadratic term ofR in (2.4). Only whenR 2 term vanishes, this "new" scalar field is completely determined by Higgs field, i.e. it is no longer a new degree of freedom. Thus, we go back to the Higgs inflation in this case with two parameters ξ and λ. Conclusion and Outlook In this paper, we have analyzed a two-field inflation model consisting of the R 2 term and the Higgs field in detail. This model can easily go back to the two single-field models, the R 2 inflation [1] and the Higgs inflation [8][9][10]. We have considered the parameter space where λ = 0.01 and ξ > 0. In the presence of mass hierarchy and considering slow-roll regime, one can integrate out the high energy part and obtain an effective single-field model with a slightly modified sound speed where we can easily calculate the power spectrum of curvature perturbations. The modification of sound speed comes from the presence of turning in the inflation trajectory, but in our case it turned out to be negligibly small. For the amplitude of curvature perturbations to coincide with observation, we find that the predictions of this model are just the same as in the R 2 inflation or the Higgs inflation. 
Fixing the amplitude $P_R(k)$, we find a relation between the scalaron mass $M$ and the non-minimal coupling $\xi$, which leads us to the relation between this two-field model and $R^2$ inflation directly in the Jordan frame. In the parameter space considered, we can effectively regard this model as $R^2$ inflation with an effective scalaron mass, which naturally explains the existence of a special relation between the two free parameters. For typical values of the self-coupling of the standard Higgs field at high energy, this model gives essentially the same predictions as $R^2$ inflation and the original Higgs inflation as far as the power spectrum is concerned. These two models, however, have quite different reheating mechanisms [1,[25][26][27][28], with a much higher reheating temperature for the latter model and possibly violent behavior due to the non-minimal coupling [15]. Since our model smoothly connects the two limits, the number of e-folds, $N_*$, of the pivot scale of CMB observations is also expected to shift from the value corresponding to the pure Higgs model to that of the $R^2$ model, which leads to an observational consequence [29]. This shift, however, is degenerate with the expansion history, or the amount of entropy production after reheating, which may be measured by direct observation of high-frequency tensor perturbations [32]. For values of the self-coupling smaller than that of the standard Higgs field, together with smaller $\xi$, we may realize a situation where both fields are in the slow-roll regime and acquire non-negligible quantum fluctuations, so that the isocurvature mode may also play an important role. Furthermore, for $\xi < 0$, we expect our model to behave similarly to [23], where the two-field model can generate large fluctuations on small scales. Though we considered the model with only one scalar field, our results can be straightforwardly generalized to an arbitrary number of mutually interacting scalar fields sufficiently strongly coupled to the Ricci scalar.
Horizon symmetries and hairy black holes in AdS We investigate whether supertranslation symmetry may appear in a scenario that involves black holes in AdS space. The framework we consider is massive 3D gravity, which admits a rich black hole phase space, including stationary AdS black holes with softly decaying hair. We consider a set of asymptotic conditions that permits such decaying near the boundary, and which, in addition to the local conformal symmetry, is preserved by an extra local current. The corresponding algebra of diffeomorphisms consists of two copies of Virasoro algebra in semi-direct sum with an infinite-dimensional Abelian ideal. We then reorient the analysis to the near horizon region, where infinite-dimensional symmetries also appear. The supertranslation symmetry at the horizon yields an infinite set of non-trivial charges, which we explicitly compute. The zero-mode of these charges correctly reproduces the black hole entropy. In contrast to Einstein gravity, in the higher-derivative theory subleading terms in the near horizon expansion contribute to the near horizon charges. Such terms happen to capture the higher-curvature corrections to the Bekenstein area law. I. INTRODUCTION In the recent years, the Bondi-Metzner-Sachs (BMS) symmetry [1][2][3], which generates the asymptotic isometries of Minkowski spacetime at null-infinity, has been revisited [4][5][6] and its relevance to field theory has been reconsidered from a modern perspective. This infinitedimensional symmetry has been found to be relevant in the study of scattering amplitudes of both gravitational and gauge theories in asymptotically Minkowski spacetimes [7], and its connection to the Weinberg soft theorems and to the memory effects led to a new way of studying processes in flat space [8][9][10]; see [7] and references therein and thereof. More recently, infinite-dimensional symmetries like BMS have also appeared in other geometrical setups, such as in Minkowski spacetime at spacelike infinity [11,12] and in the vicinity of black hole event horizons [13][14][15][16][17][18][19]. In this paper, we investigate whether BMS-like symmetry may also appear in a scenario that involves black holes in AdS space. More precisely, the question we ask is whether supertranslation symmetry, a proper infinitedimensional Abelian subalgebra of BMS, emerges in either the near boundary region or the near horizon region of AdS black holes, two regions in which the symmetry algebras are expected to get enhanced. To answer this question, we will consider massive 3-dimensional gravity [20], which has the advantage of admitting a rich black hole phase space, including AdS black holes with a softly decaying hair [21,22]. In order to accommodate such solutions within the space of geometries to be considered, it is necessary to relax the asymptotic conditions near the boundary of AdS 3 , demanding a fall-off that is weaker than the usual Brown-Henneaux boundary conditions [23]. This induces an extra current in the near boundary region, which mixes with the boundary local conformal symmetry in a non-trivial way: We derive the corresponding algebra of asymptotic diffeomorphisms and we show that it actually consists of two copies of Virasoro algebra in semi-direct sum with an infinite-dimensional Abelian ideal. In other words, the asymptotic isometry algebra at the boundary does contain supertranslations. However, we show that, unlike the Virasoro transformations, the supertranslations at the boundary act trivially, i.e. 
they are pure gauge: By computing the Noether charges associated to the asymptotic diffeomorphisms, we find the supertranslation charges identically vanish. Then, we refocus our attention on the near horizon region, a second region where infinite-dimensional symmetries are also expected to emerge [13]. Based on the analysis of [14,16,24,25], adapting it to the higher-curvature model, we show that at the horizon supertranslation symmetry does yield an infinite set of non-vanishing charges, which can be computed using the Barnich-Brandt formalism [26]. By evaluating these charges on a stationary hairy black hole solution, we find that the zero-mode reproduces the black hole entropy, as it happens in general relativity (GR). However, a remarkable difference with respect to GR exists: Due to the presence of higher-derivative terms in the massive gravity action, the black hole entropy does not obey the Bekenstein-Hawking area law, but it takes a more involved form that depends on the radii of both internal and the external horizons. Therefore, a natural question arises as to how such dependence on the internal event horizon can be obtained from the near external horizon computation. We show that it actually comes from subleading contributions: It turns out that next-to-leading components in the nearhorizon expansion, which in the case of GR give no contribution, in the higher-derivative theory do contribute to the charges yielding the correct entropy formula. The paper is organized as follows: In section II, we introduce the massive 3D gravity theory in AdS. In section III, we specify the point of the parameter space at which we will work, and the special features that the theory exhibits there. In section IV, we discuss the main properties of the hairy black holes and compare them with the hairless BTZ geometry. The asymptotic symmetries at the AdS boundary will be studied in section V, where we prove that, while an infinite-dimensional commuting algebra appears and mixes with the Virasoro symmetry, the conserved charges associated to the former identically vanish. In Section VI, we consider the near horizon symmetries, where supertranslation isometries also appear, in this case yielding an infinite set of conserved charges. By evaluating these charges explicitly, we show that the zero-mode of the horizon supertranslation corresponds to the Wald entropy. In Section VII, we extend the near horizon analysis to the case of rotating black holes, for which the supertranslation charges is also worked out. We show that, in contrast to GR, in the massive gravity theory new (subleading) terms in the near-horizon expansion happen to contribute to the charges. In section VIII, we extend the analysis by adding the gravitational Chern-Simons term, which contribute to the Noether charges in a non-trivial manner. Section IX contains our conclusions. II. MASSIVE 3D GRAVITY Let us start with the action of New Massive Gravity (NMG) theory [20] which leads to the field equations where satisfying K µν g µν = K, so that the problematic mode ∇ 2 R decouples from the trace of the field equations. In the limit m 2 → ∞, this theory reduces to GR. The specific linear combination of squared curvature terms in (1) makes this higher-derivative theory to exhibit especial features: It propagates two spin-2 helicity states, and at the linearized level it results equivalent to the unitary Pauli-Fierz action for a massive spin-2 field of mass m. 
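For reference, the action (1) described above takes the standard NMG form (our conventions; the relative sign of the $K$-term is fixed by the requirement, used below, that $m^2 \to \infty$ reduce the theory to GR):

$$ I_{\rm NMG} = \frac{1}{16\pi G}\int d^3x\,\sqrt{-g}\,\Big[R - 2\lambda - \frac{1}{m^2}\Big(R_{\mu\nu}R^{\mu\nu} - \frac{3}{8}R^2\Big)\Big]. $$

On maximally symmetric backgrounds, with $R_{\mu\nu} = 2\Lambda g_{\mu\nu}$, the field equations then reduce to the quadratic condition $\Lambda^2 - 4m^2\Lambda + 4m^2\lambda = 0$, whose two roots are the vacua (6) discussed below; one can check that this also reproduces the relation $\lambda = -1/\ell^2 - 1/(4m^2\ell^4)$ used in Appendix B.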
This implies that action (1) describes a ghost-free, covariant massive gravity theory that, in contrast to Topologically Massive Gravity (TMG) [27] turns out to be parity-even. The relative coefficient of the higher-curvature terms of (1) coincides with the precise combination of quadratic counterterms that appear in the context of holographic renormalization for D = 3; see [28] and reference therein. Related to this, there is an alternative way of seeing (1) to appear perturbatively: Consider the 3-dimensional Einstein-Hilbert action coupled to matter; namely where L matt denotes the Lagrangian of matter. Then, we can deform the action I 0 by adding to it the irrelevant operator where T µν is the stress tensor and T is its trace. Operator (5) can be regarded as the 3-dimensional analog of the TT -deformation of [29,30] coupled to gravity. The coupling constant t in (5) has mass dimension −3. If one solves the field equations coming from the deformed action I 0 + δI to first order in t and, after that, one evaluates the action on-shell, one obtains the NMG action (1) with m 2 = 4πG/t. The presence of a cosmological constant λ in (4) would result in a t-dependent renormalization of it and of the Newton constant G. Another interesting feature of massive theory (1) is that it has a profuse black hole phase space, including solutions with different asymptotics [21,22,31,32]. In particular, it allows for black holes in AdS with a softly decaying gravitational hair. Here, we will focus on such solutions. Being a quadratic gravity theory, NMG admits more than one maximally symmetric solution. That is, there exist generally two values of the effective cosmological constant for the solutions; namely assuming m 2 ≥ λ. That is, the theory has two natural vacua, which can be either flat space and/or (Anti-)de Sitter space, depending on the parameters m 2 and λ. The effective cosmological constant (6) set the curvature radius of the solution = √ −Λ ± , being 2 > 0 for AdS. Notice that, while Λ − tends to the GR value λ in the limit m 2 → ∞, Λ + diverges. The latter can thus be thought of as a non-perturbative solution. III. SPECIAL POINT While at a generic point of the parameter space the theory admits two vacua (6) with different curvature radii, there exists a special point in the parameter space at which these two vacua coincide. This happens when m 2 = λ. At this point, one gets When (7) is satisfied, the theory exhibits special properties, the most interesting ones being the existence of: 1. A unique maximally symmetric solution. Extra local asymptotic Killing vectors in AdS In this paper, we will be concerned with the theory at the point (7) and with its special properties. IV. HAIRY BLACK HOLES IN ADS In addition to the BTZ black holes [35], which are indeed solutions of NMG provided either Λ + or Λ − is negative, at the special point (7) NMG admits a 1-parameter hairy generalization of BTZ. In the static case, the metric of such hairy black hole takes the form where t ∈ R, φ ∈ [0, 2π] with period 2π, and r ∈ R >0 , and where µ and b are two integration constants. One can verify that (8) For certain range of the parameters µ and b the solution above describes a black hole, with horizons located at whose inverse transformation is In terms of r + and r − , solution (8) takes the form and represents a black hole provided r + > 0. (Without lost of generality one can consider The solution looks similar to BTZ black hole [35,36], although it describes a remarkably different geometry. 
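To make the comparison with BTZ concrete, here is a quick symbolic check, assuming the static metric function $f(r) = r^2/\ell^2 + b\,r - \mu$ (an assumption, but the only quadratic-plus-linear profile consistent with the Ricci scalar $R = -6/\ell^2 - 2b/r$ quoted below):

```python
import sympy as sp

r, b, mu = sp.symbols('r b mu', real=True)
ell = sp.symbols('ell', positive=True)

f = r**2 / ell**2 + b * r - mu   # assumed lapse of the static hairy solution (8)

# For ds^2 = -f dt^2 + dr^2/f + r^2 dphi^2 in three dimensions,
# the Ricci scalar of this ansatz is R = -f'' - 2 f'/r.
R = sp.simplify(-sp.diff(f, r, 2) - 2 * sp.diff(f, r) / r)
print(R)                         # -> -6/ell**2 - 2*b/r, as quoted in the text

# Horizons r_+ and r_- are the roots of f(r) = 0
print(sp.solve(sp.Eq(f, 0), r))
```

Setting $b = 0$ recovers the constant BTZ value $R = -6/\ell^2$, which is the cleanest way to see why the hair parameter changes the local geometry.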
Let us study the most salient properties of this solution: First, let us summarize some properties that make the b = 0 black hole different from BTZ. For example: 1. It has non-constant curvature, so it is not locally equivalent to AdS 3 . In fact, Ricci scalar R = −6/ 2 − 2b/r diverges at r = 0 (see Figure 1). This means that, as higherdimensional AdS black holes, solution (8) exhibits a curvature singularity at the origin. Notice also that, provided b < 0, the curvature R changes its sign at r = −b 2 /3. 2. It may have two horizons for certain range of parameters, namely for r + ≥ r − ≥ 0, despite being a static, uncharged solution. This results in a change of the causal structure and singularity signature, relative to the static BTZ (r − = 0). 3. It does not obey Brown-Henneaux asymptotic boundary conditions [23] but more relaxed ones. This will be important for the discussion in the next section. 4. It has an additional parameter, b. This parameter is physical, in the sense that it cannot be absorbed by coordinate redefinition; notice that the curvature invariant depends on it. Despite all these differences, spacetime (8) does share some properties with BTZ. For example: 5. It is regular outside and on the horizon. 6. It is conformally flat [37]. That is, the Cotton tensor vanishes, C µν = 0, which means that solution (8) is also a solution when theory (1) is coupled to TMG. 7. It has isometry group R × SO(2) generated by the Killing vectors ∂ t and ∂ φ . 8. It is asymptotically, locally AdS 3 in the sense that the Riemann tensor tends to that of AdS 3 at large r [21]. This implies that lim r→∞ (R µν + 2 −2 g µν ) = 0. 9. Its asymptotic is compatible with a microscopic derivation of its thermodynamics [34] using the Cardy formula in the dual CFT 2à la [38]. 10. It represents a black hole for certain range of parameters, namely for r + ≥ 0, [21,22]. The static BTZ black hole corresponds to r + = −r − . It means it contains AdS 3 as a particular continuously connected case, i.e. for b = µ + 1 = 0. 11. Its metric admits to be written in a quite manageable, simple expression provided there is no rotation. It admits a stationary, rotating generalization (see (50)-(51) below) whose form can also be written down analytically, although it acquires a cumbersome form [21,34], cf. [39]. We will discuss the stationary solutions below. Regarding the latter point, the mass of black hole solution (8) can be computed with the Barnich-Brandt method [26], which yields which, remarkably, depends on both µ and b. Notice that the solution is massless in the extremal case r + = r − . A rapid way to confirm this is the right value of the mass is as follows: one can perform in (8) a change of coordinates by definingr = r + b 2 /2. Then, the metric takes the form where M is given by (12). In these coordinates, the metric takes a form similar to BTZ, up to subleading contributions O(r) in the g φφ component, which now reads g φφ =r 2 −b 2r +b 2 4 /4. The new O(r) and O(r 0 ) terms in g φφ being subleading, one can ignore them to see the asymptotics, and then simply read the mass from the components g tt ; this obviously yields M . Component g φφ of the metric (13) vanishes atr for this special circle to be inside the horizon one should askr + = √ M ≥r −− , which in turn implies µ ≥ 0. Then, taking into account relation (10) and that r + ≥ r − , one concludes that for b ≥ 0 the conditionr + ≥r −− ultimately implies r − = 0. This also implies that in that case the curvature singularity at r = 0 is timelike. 
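Before turning to the $b < 0$ case, note that the missing expression (12) is plausibly $M = (r_+ - r_-)^2/(16 G \ell^2)$, equivalently $M = (b^2\ell^2 + 4\mu)/(16G)$: it depends on both $\mu$ and $b$, it vanishes in the extremal case $r_+ = r_-$, and it obeys the Smarr relation and first law quoted in the next passage. A minimal symbolic check, with $T = f'(r_+)/(4\pi)$ and $S$ taken proportional to the difference of horizon "areas" (the overall normalization of $S$ is our assumption; in the BTZ limit $r_- = -r_+$ it reproduces $S = \pi r_+/G$, the NMG value found in Appendix B):

```python
import sympy as sp

rp, rm = sp.symbols('r_p r_m', positive=True)
ell, G = sp.symbols('ell G', positive=True)

T = (rp - rm) / (4 * sp.pi * ell**2)   # T = f'(r_+)/(4 pi), f = (r-r_+)(r-r_-)/ell^2
S = sp.pi * (rp - rm) / (2 * G)        # assumed: S ~ (A_+ - A_-)/(4G), A = 2 pi r
M = (rp - rm)**2 / (16 * G * ell**2)   # conjectured mass (12)

print(sp.simplify(M - T * S / 2))      # -> 0: Smarr relation M = T S / 2
for x in (rp, rm):                     # first law dM = T dS, component-wise
    print(sp.simplify(sp.diff(M, x) - T * sp.diff(S, x)))   # -> 0, 0
```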
Another interesting possibility is b < 0, where g φφ does not vanish for any positiver. This solution may still represent a black hole, which would contain both internal and external horizons (i.e. r + ≥r > 0). Black hole solution (8) exhibits non-trivial thermodynamical properties. Its Hawking temperature is given by while its entropy can be shown to be Notice that the latter formula does not follow the area law, but the entropy is given by the difference between the areas of the external and the internal horizons. This behavior is due to the presence of the higher-curvature terms present in the action. It can also be thought of as a backreaction effect of the hair parameter b on the geometry. It can easily be checked that the variables M , T , and S obey a Smarr-like formula M = 1 2 T S, which follows from the fact that, for this black hole, S ∝ T . These variables also obey the first law of black hole mechanics dM = T dS. Notice that in the extremal case r + = r − the solution has all thermodynamical quantities equal to zero: M = T = S = 0. V. ASYMPTOTIC SYMMETRIES AT THE BOUNDARY Let us now consider the large r behavior of the geometry (8). To do that, let us first study a weakened version of asymptotically AdS 3 boundary conditions: Consider perturbations of the AdS 3 metric of the form where i, j = t, φ or, using coordinates x ± = t/ ± φ, i, j = +, −. The functions h µν and f µν above are arbitrary functions of all variables but r. Notice that these boundary conditions are weaker than the usual Brown-Henneaux asymptotic conditions [23]. They are even weaker than the boundary conditions proposed by Grumiller and Johansson in [40], which are the one that holds in the so-called Log-gravity [41]. As a matter of fact, the second line in (16) also differs from the perturbation given in Eq. (30) of Ref. [21]. Still as we will see below, the weakened falling-off (16) is compatible with the main features of AdS/CFT. Let us begin by studying the local conformal symmetry at the boundary: Consider the asymptotic Killing field η = η µ ∂ µ which actually preserves the set of metric with δg µν obeying (16) andḡ µν being the line element of AdS 3 , which in coordinates r, Indeed one can check that and so it closes in (16). Killing field (17) generates a Virasoro algebra (see below). Since (16) are weaker than the standard AdS 3 boundary conditions, a natural question arises as to whether this set of geometries is preserved by additional asymptotic Killing vectors. It was noticed in [21], that the vector field also preserves the phase-space (16). More precisely, together with The latter variation relates the subleading fluctuation δg +− with the arbitrary function Y (x + , x − ) that appears in ζ. This means that, under the action of Y , the following relation between fluctuations holds: now written in terms of the variables t, φ. This is consistent with the charge algebra. VI. HORIZON SYMMETRIES We have shown above that, despite the extra Killing vector (21), no supertranslation symmetries act on the boundary gravitons. We will now focus on the black hole horizon, where supertranslation symmetries are also expected to appear [13]. Let us consider the near horizon boundary conditions studied in [14,16]; namely where v ∈ R, ρ ≥ 0, and φ ∈ [0, 2π] with period 2π. Functions f , k, h, and R are of the where O(ρ 2 ) refers to functions of v and φ that vanish equally or faster than ρ 2 , and where the orders that do not appear in (33) are supposed to be O(ρ 2 ). 
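A plausible reconstruction of the expansion, following the near-horizon boundary conditions of [14, 16], is

$$ ds^2 = f\,dv^2 + 2k\,dv\,d\rho + 2h\,dv\,d\varphi + \mathcal{R}^2 d\varphi^2 + \ldots , $$

$$ f = -2\kappa\rho + \mathcal{O}(\rho^2),\quad k = 1 + \mathcal{O}(\rho^2),\quad h = \theta(\varphi)\,\rho + \mathcal{O}(\rho^2),\quad \mathcal{R}^2 = \gamma(\varphi)^2 + \lambda(\varphi)\,\rho + \mathcal{O}(\rho^2), $$

where, in our reading, $\tau(\varphi)$ parametrizes the subleading $\mathcal{O}(\rho^2)$ coefficient of $f$; this is the term that, as shown below, contributes to the NMG charges even though it drops out in GR.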
In the expressions above, τ (φ), θ(φ), γ(φ), and λ(φ) are arbitrary functions of the coordinate φ; κ corresponds to the surface gravity at the horizon and is fixed. As shown in [14], near boundary conditions (34) are preserved by a set of asymptotically Killing vectors χ = χ µ ∂ µ that generate an infinite-dimensional algebra, consisting of one copy of the Virasoro algebra in semidirect sum with supertranslations. More precisely, where the ellipsis stand for O(ρ 3 ) terms. These asymptotic Killing vectors satisfy the Lie product [χ(P 1 , L 1 ), χ(P 2 , L 2 )] = χ(P ,L) which generates a copy of Virasoro algebra in semidirect sum with supertranslation, generated by L and P respectively. Under the action of the vector field (35), the metric functions transform as Now, let us compute the Noether charges associated to the infinite-dimensional isometries derived above: In the covariant formalism [26], the functional variation of the conserved charge associated to a given asymptotic Killing vector χ is given by the expression where g is a solution, δg a perturbation around it, and k µν is a surface 1-form potential. The latter is the sum of the GR contribution k µν GR and the contributions k µν K coming from the quadratic terms of NMG; namely The explicit expression of the 1-form potential can be found in Appendix A. Evaluating (39) for the supertranslation symmetry generator χ(P ) yields a set of Noether charges; namely where γ, τ , θ and λ in general depend on φ, and where D is given by which is a total derivative for constant P , and is an exact variation if γ is fixed. In other words, the charge is not generically integrable due to the presence of D. This is in contrast to what happens in GR, where the supertranslation charge is integrable provided the generators do not depend on v. Superrotation charges are found to be (43) whereD stands for a non-integrable piece that vanishes when γ, θ, λ, τ and σ are constant. Notice that the subleading contribution σ enters in the superrotation charge. It is possible to verify that (43) exactly reproduces the charge of the rotating BTZ black hole; see appendix B. It is worth mentioning that explicit expressions of solutions of NMG field equations carrying both supertranslation and superrotation charges can be written down. They are the solutions found in [14] (see equations (15)-(16) therein), which persist as exact solutions when the terms K µν are added to the Einstein equations, provided the radius is taken to be that given in (7). In particular, the charge Q[∂ v ], associated to the zero-mode of supertranslation vector, in the case where γ, τ , θ and λ are independent of φ, is given by Now, let us evaluate this charge for the hairy black hole geometry we are interested in: First, we have to take the near horizon limit in the geometry (8), i.e. looking at the hairy black hole close to its external horizon. To do so, it is convenient to define the new variables In these coordinates, the near horizon (near ρ 0) region of the black hole takes the form 2 where the ellipsis stand for subleading terms of the ρ expansion. Metric components (47) actually obey the near horizon boundary conditions (33) where the relevant metric functions are given by: Evaluating it for the above solution, we get where T is the Hawking temperature (14) and S is the entropy (15) of the black hole (8). We emphasize that entropy (15) S. 
This is a crucial difference with respect to the near horizon computation in GR, where subleading terms λ, τ and σ do not enter in the charges, cf. [14,16,24]; see also Appendix A. VII. ADDING ROTATION: STATIONARY HAIRY BLACK HOLES A rotating generalization of the hairy black hole (8) is given by [21] where N (r), N φ (r) and F (r) are functions of the radial coordinate r, given by , and Here η = 1 − a 2 / 2 , and a is the rotation parameter. For certain range of parameters µ, a and b, where µ > −b 2 2 /4 and |a| ≤ are satisfied, this solution also represents a black hole. When a = 0, the metric reduces reduces to the static hairy black hole (8), while for b = 0 it reduces to the stationary BTZ black hole (B6)-(B7). As we see, the expression for the metric of the rotating hairy black hole is notably more involved than the one of the static case a = 0. It can nevertheless be seen that it is consistent with the asymptotic symmetry analysis presented in sections 5 and 6 as follows. The Ricci scalar reveals the presence of a curvature singularity since where Provided r + > r − > r s , there will be an event and a Cauchy horizon located at r + and r − , respectively, given by We focus on that case. The change of coordinates leads to the metric Finally, introducing the Gaussian coordinate ρ as suffices to recast the near horizon geometry (r → r + , ρ → 0) in the form where κ = η b 2 2 + 4µ 2 (1 + η) , which is found to be That is, it reproduces the product of the Hawking temperature T and the black hole entropy S. Indeed, the entropy of the rotating black hole has been computed in [21], where was shown to be which reduces to (15) when a = 0 (i.e. η = 1). In [34], expression (61) was observed to agree with the result of the Cardy formula in the dual CFT 2 with the correct value of the central charge, c = 3 /G. VIII. ADDING THE CHERN-SIMONS GRAVITATIONAL TERM As mentioned, hairy black holes (8) are conformally flat and so they are solutions to NMG coupled to TMG [27], which is defined by adding to the gravity action (1) the gravitational Chern-Simons term where q is an arbitrary coupling constant 3 of mass dimension 1. The contribution of (62) to the field equations is the addition of the Cotton tensor, which identically vanishes for a geometry that is conformally flat. However, (62) yields a non-trivial contribution to the charge, changing both the mass and the entropy of the hairy black holes. We can obtain the contribution to the entropy coming from the gravitational Chern-Simons term by evaluating on the bifurcation surface [43]. The binormal is defined in terms of the horizon generator The angular velocity of the hairy black hole is Finally the contribution to the entropy is found to be On the other hand, we find that the contribution of the gravitational Chern-Simons term to the charge Q[∂ v ] in the near horizon geometry is given by Notice that the TMG contribution ∆S vanishes for static black holes (η = 1). Notice also that (66) comprehends, in particular, the result of conformal gravity, which corresponds to the limit q → 0 of the formulae above. IX. CONCLUSIONS We considered stationary black holes in AdS with softly decaying hair. These geometries appear, for example, as solutions of massive 3-dimensional gravity [21,22] and of 3dimensional conformal gravity [37]. When AdS boundary conditions that are weak enough to accommodate such solutions are considered, the asymptotic isometry group contains, in addition to local conformal symmetry, an infinite-dimensional Abelian ideal. 
This is a local supertranslation symmetry that acts non-trivially at the level of the asymptotic isometry but yields vanishing Noether charges and, therefore, turn out to be pure gauge. This is related to the fact that the ADM mass of the hairy black holes in AdS, in addition to the standard mass parameter (µ), also depends on the gravitational hair (b): The supertranslation transformation at infinity acts as a angle-dependent shift in the radial direction, changing both µ and b in a way such that the mass remains unchanged. Then, we reoriented our analysis to the black hole horizon: We studied the supertranslation symmetry that the hairy black hole geometry exhibits in its near horizon region. There, an infinite set of non-trivial supertranslation charges appear. We computed these charges explicitly and we showed that, as it happens in Einstein gravity, the zero-mode of the supertranslation charge in the near horizon limit reproduces the entropy of the black hole. This is the case even when the entropy of the hairy black hole depends not only on the radius of the external event horizon but also on the radius of the internal Killing horizon. In other words, the back-reaction of the gravitational hair in the near horizon geometry produces that the entropy (15) The expression of the variation of the charge associated to the Killing vector ξ is where δg µν = h µν a perturbation around a solution g µν , and where k µν is the so-called surface plus the higher-curvature contribution k µν We can discuss first the piece corresponding to GR. Consider a generic phase space of the form (33), where κ, θ, γ, λ are allowed to vary. For pure GR the functional variation of the charge reads where the subscript GR stands for making explicit this is the GR contribution. Assuming as in [14] that δκ = 0, this charge can easily be integrated integrate and found to give which, as expected, gives Q[∂ v ] GR = κ/(2π) × Area/(4G), with Area = γ(φ) dφ. Next, we can add to (A5) the contribution coming from the quadratic curvature terms (R 2 ). That is, we can define the full NMG charge variation δQ[ The piece δQ[∂ v ] K comes from the higher-order terms (A3); see [44]. Notably, the full charge δQ[∂ v ] involves δκ, δθ, δγ but also δλ and δτ . Assuming none of these functions depend on φ or v, for the case 2 m 2 = −1/2 we can write it as Thus, assuming δκ = 0, we get Let us consider here the BTZ metric [35] ds 2 = −(N 2 + r 2 N 2 φ ) dt 2 + dr 2 N 2 + 2r 2 N φ dtdφ + r 2 dφ 2 , where t ∈ R, φ ∈ [0, 2π] with period 2π, r ∈ R >0 , and where the lapse and shift functions are M and J are integration constants related to the conserved charges of the solution. When |J| ≤ M , the BTZ solution describes a black hole, which possesses an event horizon at r + and, when J = 0, an inner Cauchy horizon at r − . In terms of r ± , constants M and J read As it is well known, BTZ solution is locally equivalent to AdS 3 [36] as well as asymptotically AdS 3 in the standard sense [23]. The metric then takes the near the horizon form (33)-(34) with κ = (r 2 + − r 2 − ) 2 r + , θ = 2r − , γ = r + , λ = 2r + , τ = − (r 2 + − r 2 − ) 2 r 2 Evaluating the charge (A7) on these functions, yields with T = κ/(2π) being the Hawking temperature and S = πr + /G being the Wald entropy, which in this case is proportional to the Bekenstein entropy of GR. Notice that (B8), as well as (A7), are expressions valid only for the especial case 2 m 2 = −1/2. 
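As a cross-check of (B8), the standard BTZ thermodynamics can be verified symbolically. The normalizations below are the textbook GR ones (an assumption); at the special NMG point $\ell^2 m^2 = -1/2$ the Wald entropy quoted above, $S = \pi r_+/G$, is twice $S_{\rm GR}$, consistent with the doubled central charge $c = 3\ell/G$:

```python
import sympy as sp

rp, rm = sp.symbols('r_p r_m', positive=True)
ell, G = sp.symbols('ell G', positive=True)

# Standard BTZ charges and potentials (GR conventions)
M = (rp**2 + rm**2) / (8 * G * ell**2)
J = rp * rm / (4 * G * ell)
T = (rp**2 - rm**2) / (2 * sp.pi * ell**2 * rp)   # = kappa/(2 pi)
S_GR = sp.pi * rp / (2 * G)                       # Bekenstein-Hawking entropy
Omega = rm / (ell * rp)                           # horizon angular velocity

# First law dM = T dS + Omega dJ, checked component-wise in (r_+, r_-)
for x in (rp, rm):
    print(sp.simplify(sp.diff(M, x) - T * sp.diff(S_GR, x) - Omega * sp.diff(J, x)))
# -> 0, 0
```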
The reason why we evaluated the expressions of the charges at such a particular point of the parameter space is that it permits a comparison of the result (49) for the hairy black hole with the result (B8) for the BTZ black hole. In fact, we see that the case $r_+ = -r_- > 0$ in (49) agrees with the case $r_+ > r_- = 0$ in (B8). However, unlike the hairy black hole (8), the BTZ black hole solves the NMG field equations at a generic point of the parameter space $(\lambda, m^2)$. In general, the AdS$_3$ radius $\ell$ is given by $\lambda = -1/\ell^2 - 1/(4m^2\ell^4)$, and the computation of the BTZ entropy yields (B9). This result is consistent with the holographic computation using the Cardy formula of the dual CFT$_2$, as for NMG the central charge of the latter theory is given by (B10).
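The missing expression (B10) is presumably the NMG central charge; in the conventions of action (1), a reconstruction consistent with everything above is

$$ c = \frac{3\ell}{2G}\left(1 - \frac{1}{2m^2\ell^2}\right), $$

which reduces to the Brown-Henneaux value $3\ell/2G$ as $m^2 \to \infty$ and gives $c = 3\ell/G$ at the special point $\ell^2 m^2 = -1/2$, the value quoted in the main text. As a further consistency check, combining the special point $m^2 = \lambda$ with $\lambda = -1/\ell^2 - 1/(4m^2\ell^4)$ indeed forces $(m^2\ell^2 + 1/2)^2 = 0$, i.e. $\ell^2 m^2 = -1/2$.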
Green-Schwarz Anomaly Cancellation, World Sheet Instantons and Wormholes

We consider the breaking of the global conservation of gauge field charges which are commonly thought to survive the spontaneous breakdown of gauge symmetry brought about by Kalb-Ramond fields. Depending on the dilaton field and also on the size of the compactifying space, the global charge breaking may take place due to world sheet instantons. In going to 3+1 dimensions one could have a serious problem in producing the hierarchies between the quark and the charged lepton masses using the mass-protecting charges with the Green-Schwarz anomaly cancellation. Various unnatural features of this type of model are discussed.

Introduction

The survival of a global gauge symmetry - even after the gauge bosons have acquired a Higgs field induced mass - is quite mysterious, because the vacuum is not invariant under the gauge symmetry with constant gauge function ($\Lambda$ = const.) due to the Kalb-Ramond field (which plays a major role in this phenomenon). Despite this, it seemed as though there remains a phase transformation symmetry for the fields carrying the family-dependent U(1)$_X$ charge when gauge symmetry is spontaneously broken in this remarkable way. We emphasise in this article that precisely this phase transformation symmetry is not a true symmetry if one takes into account the world sheet instantons. We shall argue below that if the compactifying space is of the order of the fundamental scale, then the local and global gauge symmetry gets totally broken. However, it is unrealistic to compactify the space so close to the fundamental scale. The crucial point is that the effect of the world sheet instantons will then be exponentially suppressed. In the case of very strong breaking (of the order of the fundamental scale) this would mean that we could not apply the Green-Schwarz anomaly cancellation mechanism in 3+1 dimensions, i.e., for the application to the large hierarchical Yukawa coupling constant structures. On the other hand, if we let the compactification scale lie much below the fundamental scale, there is another odd feature: there is a very light (Abelian) gauge particle from the fundamental scale point of view, and correspondingly the condition $F_{\mu\nu}\tilde F^{\mu\nu} = 0$ for not having anomalies is fulfilled identically. This would lead to a rather strange electrodynamics. If the validity of the usual div E $= j^0$ is not maintained, the dynamics could open the possibility of space-time foam causing the breakdown of global charge conservation due to wormholes. This article is organised as follows: in the next section we review the Green-Schwarz anomaly mechanism, and in section 3 the world sheet instantons, where we also discuss the string coupling constant. Section 4 contains various discussions, including the suspected effects of wormholes. Finally, section 5 contains our conclusions.

Review of Green-Schwarz anomaly mechanism

Let us review the Green-Schwarz anomaly cancellation mechanism, focusing on the Kalb-Ramond field in 9+1 dimensions and then on the application to 3+1 dimensions. For the purpose of making phenomenological fits of the quark and (charged) lepton masses and mixing angles it is very useful to have some approximately conserved charges [30] (in addition to the gauge charges of the Standard Model), so that most of the masses get suppressed due to the differences in quantum numbers of the right- and left-handed Weyl components. It is very attractive, and needed due to the effects of wormholes etc.
[31], to let such mass suppressing charges be gauge charges. There are many gauge charges in superstring theory so such a picture is not unnatural in this theory. Working in 3 + 1 dimensions one would at some level expect to obtain a 3 + 1 dimensional field theory with gauge fields which could be described as renormalisable. That in turn would imply that the triangle anomalies resulting from the various chiral fermions in the effective 3 + 1 dimensional model should cancel, i.e., no violation of gauge symmetry would be caused. Otherwise this effective model would not be renormalisable. Now, however, it became very popular to use the inspiration from the superstring theory to suggest models in which this "usual" gauge anomaly cancellation does not take place. From the four dimensional point of view this avoidance, which is usually needed for renormalisation, gauge-and mixed anomaly cancellation conditions seems quite extraordinary: A certain coefficient field b(x µ ) in an expansion for the Kalb-Ramond field B M N (M, N = 0, 1, . . . , 9) in the 9 + 1 dimensional theory, couples as an axion field. That is to say it couples via the Lagrangian density term of the form where µ = 0, 1, 2, 3. In the superstring theories (type II and Heterotic strings) there is a Kalb-Ramond anti-symmetric tensor field with two indices on the potential B = B M N dx M ∧ dx N and three on its field [1] Here the three forms ω 0 3Y and ω 0 3L are given by [32,33]. In these theories there is a very sophisticated way of cancelling the gauge, gravitational and various mixed anomalies, first by having the right number of chiral fermions but in addition some terms, which are gauge non-invariant when alone, in the action for zero mass particles are used, to cancel the remaining part of the anomalies. Here c is numerical constant [34,32]. In order to get chiral fermions -as is phenomenologically required to obtain the Standard Model -it is needed to break the parity symmetries that only makes reflections in the compactifying dimensions (for instance by having non-zero magnetic field in the extra dimensions). One may typically make use of Calabi-Yau spaces as the 6-dimensional compactifying space. For pedagogical reasons, just to illustrate the idea we may in the present article think of a compactifying space being the cross product of three spheres, each of topology S 2 and each with a magnetic field on them corresponding to a "magnetic monopole in the centre of the S 2 sphere". Let us imagine that the equations of motions have led to that the vacuum has S 2 rotation invariant fields on the different S 2 's. Then we may symbolically use these rotational invariant 1 field strength F 67 etc. The first term in integrand of S 1 (Eq. 3) which by itself gauge breaking, will contain a contribution of the form Imagine expending the variation over the S 2 sphere (i.e., x 4 and x 5 dependence) on "spherical harmonics" or "eigenfunctions". Suppose we arranged one of spherical harmonic or eigenfunction to be dominant in the weakly exited state. We describe the effective four dimensional theory by means of the expansion coefficient b(x µ ) to this term: Really we could define such b(x µ ) by integrating the two form B over a homotopically non-trivial 2-cycle. This would then require that we imposed other terms in the expansion of B 45 to be restricted to zero. 
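For completeness, the three-forms $\omega^0_{3Y}$ and $\omega^0_{3L}$ left implicit above are the standard Chern-Simons forms (with $A$ the Yang-Mills potential and $\omega$ the spin connection; the $\alpha'/4$ normalization below is the usual heterotic one and is our assumption):

$$ \omega_{3Y} = {\rm tr}\Big(A\wedge dA + \frac{2}{3}\,A\wedge A\wedge A\Big),\qquad \omega_{3L} = {\rm tr}\Big(\omega\wedge d\omega + \frac{2}{3}\,\omega\wedge\omega\wedge\omega\Big), $$

so that $H = dB + \frac{\alpha'}{4}(\omega_{3L} - \omega_{3Y})$ obeys $dH = \frac{\alpha'}{4}\left({\rm tr}\,R\wedge R - {\rm tr}\,F\wedge F\right)$, which is what allows the gauge variation of $B$ to cancel the anomalous variation of the chiral fermion measure.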
Taking the magnetic fields in the compactifying dimensions as constants we end up with an effective term in the four dimensional Lagrangian density which up to the over all constant is of the form (1). From the kinetic term for the Kalb-Ramond field, where κ is the gravitational coupling constant, ϕ the dilaton field, g the Yang-Mills gauge coupling constant in the Lagrangian density, we obtain a kinetic term for the coefficient field b(x µ ). Due to the ω 0 3Y term in Eq. (2) it comes together with an Abelian part of the Yang-Mills potential in an expression of the form This is a gauge invariant combination for the b-field gauge transform while where Λ is the gauge function for an invariant U(1) subgroup of the left over symmetry group, not spontaneously broken. For simplicity we imagine that the presence of the extra dimension fields F 67 F 89 represents a break down to a subgroup which still contains at least one invariant Abelian subgroup called U(1) X . Then we may concentrate on the gauge field associated with this subgroup U(1) X and denote the gauge function for it as Λ. We shall discuss a rather extraordinary behaviour of the theoryfrom the four dimensional point of view -with the axion field b(x µ ) (see also Sec. 4.1). Suppose that b(x µ ) does not quantum fluctuate so widely that it totally looses an expectation value. This means that there is a spontaneous break down of the gauge symmetry for U(1) X . Since even the constant Λ gauge transformation is spontaneously broken due to the additive transformation property of b, the spontaneous breaking situation is just like that of the Higgs case. However, that means, one would expect that particles -such as fermions carrying U(1) Xcharge quanta -would be able to make transitions into (sets of) particles with a different number of such charges (together). At this point, however, one has often found the belief that the global symmetry and the Noether conservation of the charged particles is not violated. In the perturbative approximation this belief is well-granded. We will discuss this question in the following in non-perturbative approximation. World sheet instantons and the Fayet-Iliopoulos D-term Although at first it seems as if there is no way to cause the global U(1) Xcharges on particles to be created or annihilated, it was shown in [35,36] that such a violation of the charge was indeed occurring due to world sheet instantons. These "world sheet instantons" refer to the tunnelling of a string so as to have a "time track" during tunnelling which encloses in our simple scenario the S 2 involved with the B 45 . In the real general case we should have the tunnelling go around a 2-cycle homotopical to the 2-cycle(s) used for extracting b from B. The important point for the present discussion is as follows: When such a world sheet instanton exists, there are zero modes for the fermions (as well as bosons) which have U(1) X -charge. These zero modes cause the U(1) X -charge to change. In this anomalous way -similar to the QCD-instanton -the global charge gets also violated after all. According to [35,36] this is much more natural than not having the global U(1) X -charges broken. In reality the effect of this zero-mode effect is described by an effective Lagrangian term Here the Q 1 , Q 2 , . . . , Q n are various U(1) X -charged fields and the product Q = Q 1 · Q 2 · · · Q n could, for instance, be Q = ψψ where ψ is a field for which U(1) X has the role of a chiral mass protecting charge. 
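Two schematic formulas make the preceding discussion concrete (both are reconstructions in our notation). First, the gauge-invariant combination built from the axion $b$ is of the Stückelberg type, giving the U(1)$_X$ photon a mass $M$:

$$ \mathcal{L} \supset -\frac{1}{2}\big(\partial_\mu b + M A_\mu\big)\big(\partial^\mu b + M A^\mu\big),\qquad \delta A_\mu = \partial_\mu\Lambda,\quad \delta b = -M\Lambda , $$

so that even a constant $\Lambda$ shifts $b$ and is spontaneously broken, while in perturbation theory the phase rotations of the charged matter fields appear unbroken. Second, the instanton-induced operator described above takes the schematic form

$$ \Delta\mathcal{L} \sim e^{-S_{\rm inst}}\, e^{-ib}\, Q_1 Q_2\cdots Q_n + {\rm h.c.}, $$

where the shift of $b$ under a gauge transformation is compensated by the phases of the charged fields $Q_i$, so the operator is gauge invariant but violates the global U(1)$_X$ charge by $n$ units.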
The whole term is to be multiplied by the amplitude of the world sheet instanton. The factor e −ib comes from the exponentiated string action Eq. (30), which may cause the damping and the factors Q = Q 1 · Q 2 · · · Q n are needed to make the whole term gauge invariant. In supersymmetric theories, where these models are usually considered, such an axion field must occur together with a corresponding field in a complex combination. For instance the anomalous Fayet-Iliopoulos D-term was studied [4] in the context of the heterotic string theory: They consider a dilaton chiral supermultiplet, Φ, adding to the Kähler potential a part, K d Φ + Φ , which transforms under the U(1) X gauge group as The Kähler potential is with appropriate vector kinetic term The gauge coupling constant thus depends on the dilaton, (Ref The theory also has an axion coupling proportional to b F µν F µν . With the shift transformation of the axion field under U(1) X gauge group (see Eq. (13)), this term serves to remove the anomaly proportional to F µν F µν . 6 The D-term potential is using Eq. (14) We should emphasise that in the weak coupling limit, where dilaton field goes to zero (φ 2 → 0) the Fayet-Iliopoulos D-term vanishes, i.e., there is a supersymmetric minimum. If we contrary to the just given argumentation assumed that after all nonzero Fayet-Iliopoulos D-term were stabilised -i.e., there were at least the metastable minimum for non-zero φ -then for the weakly-coupled heterotic string according to Ref. [5] with g s = φ 2 , there is a Fayet-Iliopoulos D-term given by where M P l is the Planck mass and Trq is the sum of U(1) X -charges. Thus the potential energy becomes so that V D is obviously negligible at very small dilaton field even if one wishes to have a model in which Trq is non-zero, a typical value of Trq ∼ 10 2 to 10 3 (see e.g. [9]). According to [12] the D-term (16) may induce some (non-zero) expectation value of a scalar field called θ, and thus cause a spontaneous breakdown of the global (part) of the U(1) X symmetry. Supposing no other scaler field break this group, then the non-zero vacuum expectation value of θ may be made a connection with the Fayet-Iliopoulos D-term in following way: where the U(1) X -charge of the Higgs field θ, X θ , is taken to be −1. In this way one could obtain an expansion parameter for the fermion mass matrices from ξ GS : In the case of ξ GS being not too large (see Eq. (16)), in other words, g s is not too small and the sum of U(1) X -charges (Trq) is of order 10 2 to 10 3 , we could use such a mechanism to produce a good order of magnitude for the breaking of the global charge conservation for U(1) X . The U(1) X symmetry would be broken one or two orders of magnitude below the fundamental scale (Planck or string scale, depending on model). This situation could be using the fundamental scale as the Planck scale, i.e., an expansion parameter for the fermion mass matrices which is identified with the Cabibbo angle. If we have identified the string coupling constant, g s , with a dynamical fieldthe dilaton field φ 2 -the ground state will be found by adjusting this coupling g s to be zero, which obviously means the disappearance of the global charge breaking effect. In this situation, which is the expected one, we have thus at first no complain from world sheet instantons about the possibility hoped for in literature of having the global charge totally conserved although derived from a Higgs gauge symmetry. 
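For orientation, the expressions referred to in (16)-(19) take the standard weakly coupled heterotic form (a reconstruction we assume here):

$$ \xi_{\rm GS} = \frac{g_s^2\,{\rm Tr}\,q}{192\pi^2}\,M_{\rm Pl}^2,\qquad \langle\theta\rangle^2 = \xi_{\rm GS},\qquad \epsilon \equiv \frac{\langle\theta\rangle}{M_{\rm Pl}} = \frac{g_s}{\pi}\sqrt{\frac{{\rm Tr}\,q}{192}} . $$

Plugging in numbers shows how naturally a Cabibbo-sized expansion parameter emerges for $g_s$ of order one:

```python
import numpy as np

def epsilon(g_s, tr_q):
    """Expansion parameter eps = sqrt(xi_GS)/M_Pl for the heterotic FI term."""
    return g_s * np.sqrt(tr_q / 192.0) / np.pi

for g_s in (1.0, 0.5):
    for tr_q in (100, 300, 1000):
        print(f"g_s = {g_s:3.1f}, Tr q = {tr_q:4d}  ->  eps = {epsilon(g_s, tr_q):.3f}")
# g_s = 1, Tr q = 100 gives eps ~ 0.23, close to the Cabibbo angle ~ 0.22
```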
However, if the string coupling constant g_s really goes to zero, then the models in which this happens have zero string coupling constant and become totally free, since in string theory all interactions are ultimately derived from the string interaction g_s. Unless one can somehow live with a tiny breaking of supersymmetry at a very high energy scale, allowing a small but non-zero g_s, the models avoiding a Fayet-Iliopoulos D-term would become totally free as string theories! The wish that the adjustment lets the Fayet-Iliopoulos D-term vanish by adjusting φ² to be zero is further supported by the often favoured phenomenological call for supersymmetry not to be broken except at low energy scales, typically close to 1 TeV, because this can be of help for the hierarchy problem. From the point of view of the high energy scale, where we a priori have the Higgsing of the U(1)_X group, a phenomenologically useful supersymmetry breaking scale would be tremendously small and essentially count as zero. This argument further disfavours models which would have troubles due to the Fayet-Iliopoulos D-term. But if the problem is solved by adjusting to a supersymmetry which is only extremely weakly broken, by having the string coupling almost zero, then this type of model is even more problematic, because it lacks the interactions altogether.

As a summary of the above discussion, let us compare the two logical possibilities concerning the size of the string coupling constant g_s - of order unity, g_s ≈ O(1), or sufficiently small (g_s ≈ 0) - both being considered in the presence of a non-trivial Green-Schwarz anomaly cancellation, so that the Fayet-Iliopoulos D-term becomes non-zero unless g_s = 0:

(1) Consistency: The possibility g_s ≈ O(1) is strictly speaking inconsistent, because the Fayet-Iliopoulos D-term (Eq. (16)) drives g_s, which is effectively dynamical (related to φ), to zero, so that only g_s ≈ 0 is consistent.

(2) The expansion parameter ε suitable for the small hierarchy? A reasonably sized g_s ∼ 1 could give a good expansion parameter ε; however, if g_s ≈ 0, then of course ε becomes exceedingly small and not useful for fitting the small hierarchy. There are two possibilities with g_s in the relatively large range which we should mention: (a) g_s is large: in this case the world sheet instantons and the Fayet-Iliopoulos D-term may be too large, so that the breaking which they cause (i.e., the expansion parameter ε) would also be too large. In this case the U(1)_X-charge cannot be used for the small hierarchy problem of the fermion masses. (b) g_s is not too large: in this case g_s could give a good expansion parameter ε (Eq. (19) and see Sec. 4.3). Then one could hope for the desired breaking of the global charge without invoking further breaking mechanisms. In the case when g_s is really small the global charge is very well conserved, but one can imagine it broken by other means, so that this would be no problem.

(3) Higgsing: The Higgsing of the local part of the gauge group uses the Kalb-Ramond field, and that works independently of the size of g_s (unless of course the whole theory becomes free, in which case one can conclude that nothing happens at all).

The extraordinary properties of the four dimensional effective theory

The four dimensional model is derived from a theory which, although non-renormalisable - as all theories in ten dimensions are - is at least gauge invariant.
It is therefore surprising that it does not satisfy the usual conditions on the numbers of fermion species and their charges needed for the anomaly cancellation. This is the reason why there is the Wess-Zumino term in Eq. (2). But how does that remove the need for anomaly cancellations? One might wonder what the situation of the anomaly cancellation would be if the mass scale m of the U(1)_X-photon (Eq. (9)) were very low compared to the scale of energy at which we consider the situation. From the point of view of such a high energy scale compared to m, it would seem that the kinetic term for the axion field, b, were very close to having zero coefficient, i.e., that b were an auxiliary field. The effect of integrating out b functionally would be to produce a functional δ-function imposing the constraint F_{μν}F̃^{μν} = 0. With such a constraint imposed on the gauge field it would of course be no wonder if one gets no anomalies. In fact it would mean that the anomaly had been constrained to be zero. Such a constraint will lead to interactions between photons which are, of course, in the next approximation in m², understandable as due to exchange of the b-particles, i.e., γγ → γγ. However, notice that diagrams like Fig. 1 have the b-field propagator, which contributes a factor m^{−2}. This propagator contribution is tremendous compared to p^{−2}, since p is in the range (orders of magnitude) above the U(1)_X-photon mass scale m; i.e., γγ-scatterings have extremely strong interactions. Therefore, they may be able to provide the constraint forces which uphold the constraint Eq. (20) (or Eq. (28)).

Equations for the four dimensional effective theory

We have an interacting U(1)_X-photon theory with a Lagrangian which in addition has the term (9), when we do not consider m so small that we can ignore the term in Eq. (9). The equations of motion become, in addition to Eq. (20) derived by varying b, the modified Maxwell equations. Including charged matter, and noticing that the no-matter terms in the Euler-Lagrange equation can be written in a more familiar form, we obtain the equations of motion, where J^ν is the "matter current", and we may define the short-hand notation F^{Red}_{μν} for the corresponding reduced field strength. For future discussions it is convenient to express the equation of motion (Eq. (23)) with the electric field, E, and the magnetic field, B; hereby we have applied div E^{Red} = 0. A mathematical point worth noticing in connection with the somewhat unusual electrodynamics which we discuss here is that the condition (20) can be shown by trivial algebra to imply that F_{μν}F̃^{ρν} = 0 even for all combinations of the indices μ and ρ (since F_{μν}F̃^{ρν} = ¼ δ_μ^ρ F_{αβ}F̃^{αβ}).

Order of magnitude possibilities

Above, we have reviewed an anomalous horizontal U(1)_X model using the Fayet-Iliopoulos D-term to cause a spontaneous breakdown of the U(1)_X-charge. This effect would arise if the dilaton field φ were non-zero, which is, however, not achievable due to supersymmetry being driven to be an exact symmetry. However, if we have a non-zero dilaton field, we also have world sheet instanton effects breaking the global charge conservation, using Green-Schwarz anomaly cancellation. Thus, this suggests using this instanton effect instead of a Fayet-Iliopoulos D-term. Could this effect be adjusted to give a phenomenologically reasonable global charge breaking of the order of the Cabibbo angle ε?
Barring a totally mysterious cancellation of various contributions from different world sheet instantons, the order of magnitude of the U(1)_X breaking by world sheet instantons is estimated from the exponential of the field which, together with b, makes up a complex field under supersymmetry. The possibility of surprising cancellations would a priori have to be ignored [9], were it not for the findings that this indeed can easily occur [37]. Let us divide the discussion into the following possibilities: (1) All the quantities, including g_s = φ², are very strictly of order unity, and the breaking of the charge conservation is also of order unity, in spite of the fact that it is exponentially suppressed - as an instanton tunnelling effect. In this philosophy the charge conservation is strongly broken: let us then imagine that the mass or the effective Yukawa coupling for a quark or a charged lepton is obtained via a chain diagram (Fig. 2), in which a series of fundamental scale vector-coupled fermion propagators are linked by Higgs fields or world-sheet-instanton-caused transition symbols. If the strength of the charge-violating world sheet instantons is of just the same order of magnitude as the (typical) fundamental scale fermion masses, then there will be no suppression, and the U(1)_X-charge considered will be of no help in explaining the suppression of some effective Yukawa couplings (at experimental scales) compared to others. However, taking "everything", especially the compactifying space dimensions, to be so very close to unity in "fundamental" units that even exponents are accurately of order unity is presumably not likely to be true. We discuss in the following the variation of the scales of breaking (a) and (b) above: The mass-square factor in Eq. (9) goes back to the term (8), insofar as the b field is a coefficient on a term in B which in turn has its derivatives go into H in Eq. (8). Recall that the function multiplying b to give the B_{45} contribution must be normalised so that a shift in b by 2π corresponds to shifting ∫_{S²} B by a single monopole flux through the cycle S², i.e., by Λ = 2π. If all the couplings are taken to be of order unity, one finds, scaling the dimensions of the 2-cycle with the typical compactifying length R (the area of the 2-cycle being proportional to R²), that m² ∝ R^{−2}. Thus we see that the mass scale of the U(1)_X-photon goes as R^{−1}, where R is really the length scale of the two-cycle. The tunnelling suppression amplitude of the world sheet instanton [38,37] is of the form e^{−A/(2πα′)} Pfaff′(D_F)/√(det′ D_B), where α′ is the Regge slope, Pfaff is the Pfaffian, and D_F and D_B are the kinetic operators for the fermionic and bosonic fluctuations, respectively. The " ′ " on the Pfaffian and the determinant denotes that the zero modes are to be omitted. Moreover, A is the area, which of course again scales as A ∝ R², with R being the length scale relevant for the two-cycle. Introducing a fundamental mass scale, M_F, we thus have for the scale M_V at which the U(1)_X-charge conservation is violated: M_V ∼ M_F e^{−A/(2πα′)}. This crude estimate assumed g_s to be of order one. Now we may go into crude phenomenology, still taking g_s of order unity.
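The scaling relations of this subsection can be collected compactly. We only track powers of R in fundamental units (α′ ∼ M_F^{−2}); the order-unity constant c is our assumption:

\[
m \;\sim\; R^{-1},
\qquad
A \;\propto\; R^{2},
\qquad
M_V \;\sim\; M_F\, e^{-A/(2\pi\alpha')} \;\sim\; M_F\, e^{-c\,R^2 M_F^2}.
\]

The hierarchy between the Higgsing scale m and the violation scale M_V is thus controlled entirely by the single ratio R M_F: a compactification scale only modestly larger than the fundamental length already produces an exponentially suppressed violation scale.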
In theories with compactified extra dimensions it is quite natural to take this as the reason for the fine structure constants being weak compared to the "self-dual" strength (defined as the value of the fine structure constant which makes it equal to the corresponding monopole coupling constant, associated by the Dirac relation). The self-dual strength, α_{U(1)_X}, is approximately 1/2, since α_e α_g = 1/4 for a formal monopole coupling α_g. From this point of view we can claim that a typical, say GUT, coupling of order α ≈ 1/25 is weaker by a factor ≈ 12 than the Abelian self-dual value, though this is an exaggeration and should be corrected at least by the factor 3/5. Roughly taking anything of this order, we would now expect that R measured in "fundamental units" would be of the order R ≈ 12^{1/6} ≃ 1.5 (see Fig. 3). This would mean m ∼ M_F/1.5 and a suppression factor of a very useful size indeed: this is namely a typical order of magnitude for an ε with which one fits the fermion mass spectra [8,9]. In this way it looks very promising to obtain, quite naturally, a good scale of violation strength for phenomenological fitting. We can say that, strictly speaking, a scale R of the compactifying dimensions just 1.5 times bigger than the fundamental length scale M_F^{−1} is so close to being of order unity that there is hardly any call for a special explanation of this "deviation" from the fundamental size. That we can notice some small numbers both in the fine structure constants and in the suppressed violation of the U(1)_X-charge is due, respectively, to the 6-dimensional compactifying space and to the exponentiation arising from the world sheet instanton effect. The energy scale gap in which we have the funny electrodynamics with the constraint F_{μν}F̃^{μν} = 0 is, in the just-sketched scenario (with g_s ∼ 1), reduced to a scale factor in the 1.5 region. That is such a small range that one would hardly be able to claim that anything strange will happen.

Wormhole discussion I

Independent of whether world sheet instantons do or do not break the global charge, there is another mechanism threatening the global charge conservation: the effect of wormholes. We shall in fact show below, in Subsec. 4.5, that our estimation of the effects of gravitational wormholes present in the vacuum (or similar space-time foam effects) will give rise to a significant violation of the global charge - in the four dimensional theory - with the non-trivial Green-Schwarz anomaly cancellation, although the charge has resulted from a gauge charge and thus at first might have been suspected of not being violated by space-time foam effects. Over most of the energy range (logarithmically counted, see Fig. 3) between the fundamental scale and the U(1)_X violation scale M_V, we have a globally conserved charge for U(1)_X, while the gauge particle "already" has its mass at m ∼ R^{−1}. Now, however (see Subsec. 4.5), according to arguments in Ref. [31], charges which are not gauge protected may disappear into wormholes or baby universes. Therefore, one may speculate whether such a charge conservation, unprotected by gauge fields, will not get spoiled by the Wheeler space-time foam. Actually, such effects of breaking the global charge are expected at energy scales smaller than m. But we will see in Subsec. 4.5 that wormholes at even higher energy scales than m will provide such breaking when monopoles are used.
The wormholes of low energy (smaller than m), i.e., of large size (length), are suppressed exponentially, with an exponent related to m/M_F. In fact we would (naively) estimate that for a space-time foam ingredient such as a baby universe of size m^{−1} (in length) we would have an action of order M_F²/m², and a suppression factor of exp(−M_F²/m²) ≈ exp(−R²M_F²). Under the assumption g_s ≈ 1 this happens to be just the same order of magnitude - in the exponent of the suppression factor - as the one present in the world sheet instanton suppression factor. For this coincidence to occur it was quite crucial that the estimate just used for the baby universe action was of the form ∫_C ℛ√g d⁴x (C being the baby universe tube), as obtained by use of the Einstein-Hilbert action, and not ∫_C √g d⁴x, which would have been the case if the cosmological term in the action were significant here. That is to say, we used that the cosmological constant is zero; but at these scales one is closer to the (running) cosmological constant, which is relevant for short distances and may have another value than the long-distance one, which is practically zero. In any case, even if the cosmological constant were relevant, it would suppress the baby universes at the scale m even more. The space-time foam non-conservation is expected to be at most what the world sheet instanton effect would have been if g_s were of order unity, contrary to the true expectation. Now, however, since we truly do not expect the world sheet instantons to provide breaking, the wormhole effects could easily take over as the dominant effect. Rather we should say that we do expect an appreciable wormhole breaking (see the argument below). It could be that the world sheet breaking effect could actually dominate even in the case of g_s ≈ 1, but the result would in this case not be so different with respect to the order of magnitude of the (exponent of the) effect, and thus the conclusion would be the same. However, in the case that there is no significant world sheet instanton effect, the wormholes can very well work and completely and dominantly break the charge conservation.

Argument for wormhole effect violation of the global part of the U(1)_X-charge - Wormhole discussion II

In this subsection we will argue that the U(1)_X-charge is violated as a global charge due to the wormhole effects. In the foregoing subsection we saw that one could give immediate arguments both in favour of and against the global U(1)_X-charge being violated by the wormhole or space-time foam effects. In this subsection we want to deliver an argument which shows that the global U(1)_X-charge is indeed broken due to the wormholes. We should, however, stress that this breaking is exponentially suppressed, but only with the suppression corresponding to the scale of the Higgsing m, which is essentially the compactification scale. In the realistic models this is a rather high energy scale, and the breaking from wormholes is thus expected to be quite large. Compared to what one gets from world sheet instantons - which after all do not work in the case of g_s of order one - this could be much bigger. It is easiest to organise the non-conservation of the Coulomb field for the U(1)_X-charge by use of virtual wormhole entrances with magnetic fluxes radiating out. These form effective virtual magnetic monopoles in the vacuum.
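To see why such monopolic entrances are the efficient agents of charge violation, it helps to write down the field equations which a b F F̃ coupling produces; the following display is our reconstruction in the style of standard axion electrodynamics (the coupling normalisation β and the signs are assumptions), anticipating the relation quoted as Eq. (27) below:

\[
\nabla\!\cdot\!\mathbf{E} \;=\; \rho \;-\; \beta\,\nabla b\cdot\mathbf{B},
\qquad
\nabla\times\mathbf{B} \;-\; \partial_0\mathbf{E}
\;=\; \mathbf{J} \;+\; \beta\,\bigl(\dot b\,\mathbf{B} + \nabla b\times\mathbf{E}\bigr).
\]

Taking the divergence of the second equation, the term β ḃ ∇·B survives, so that ∂₀ ∇·E picks up a contribution proportional to (∂₀b) ∇·B, i.e., exactly on the sites where ∇·B ≠ 0 - on the monopoles.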
The reason that it is profitable to have monopoles violating the charge conservation for the U(1)_X-charge is that we can indeed derive formulas for the variation/development of the U(1)_X electric charge, relating it to the variation of b on the sites of the monopoles (see Eq. (27)). Since div E is the charge density, this formula tells us that, for instance, when a monopole is present, the variation rate of the charge, ∂₀ div E, contains a term (∂₀b) div B on the site of a monopole whenever b varies, i.e., whenever ∂₀b ≠ 0. The monopoles to be used here do not have to be genuine monopoles. They could be entrances to wormholes with magnetic flux going through [39]. We would not even have to use such genuinely existing monopolic wormhole entrances. It is rather sufficient to consider entrances which are virtually present in the vacuum. We shall imagine that there are many such wormholes virtually present with magnetic flux, and that the entrances give rise to interactions with the various fields in the theory. We assume that these interactions can be described by effective terms in the Lagrangian density. At first they are only at the places where the entrances to the wormholes are. We shall, however, integrate over all the possible positions or movements of the wormholes - as part of the functional integration of the Feynman path integral. This has the implication that one can naturally achieve that such a model of wormholes becomes effectively translationally invariant. In fact one has to integrate over the positions - of the wormhole or baby universe entrances - with a translationally invariant measure. That can actually be supported by a Heisenberg-inequality type argument, using that at least baby universes cannot transport energy and momentum, because the information about these quantities is safely stored in the gravitational field at long distances from a (supposedly) little baby universe. Even for wormholes it is reasonable to integrate over all positions with a translationally invariant measure. Since it is clearly possible that any sort of particle could be scattered into a wormhole, the effective Lagrangian density contribution from the entrance to a wormhole can contain terms annihilating or creating any combination of particles. Thus any combination/product of fields is a priori possible and will come with some coefficient in the effective wormhole and baby universe Lagrangian. Now, however, there can be processes that cannot really take place due to Coulomb fields left behind. By this we mean that if we propose terms violating gauge symmetries for charges with light gauge particles associated, there remains information outside. In fact there will be a Coulomb field left, carrying the information about the charge of the particles which have gone into the wormhole. Even if the particle goes deeply into the wormhole, and maybe even out somewhere else far away, there will remain electric flux lines exiting from the entrance of the wormhole; and even if no appropriate - maybe different - particle is pulled out of the entrance, the wormhole itself will behave as a charged particle. In this way we can only have effective Lagrangian density terms conserving the gauged charges corresponding to gauge particles with Compton wavelengths which are long compared to the wormhole sizes. Really what matters is whether the gauge field around the wormhole can keep the information about what went into it.
In the case of the conserved U(1)_X-charge, the div E that should have ensured the stability of the Coulomb field does not correspond to a conserved current as in usual electrodynamics. It is rather the current corresponding to F^{Red}_{μν} (Eq. (26)) that is conserved. In fact we have just seen that if the axion field b varies on the sites of the virtual monopoles, we can/will have that div E, and thus the charge, varies. In this way it should now be allowed to have Lagrangian density terms, due to the monopolic wormholes, violating the U(1)_X. Once such terms are allowed they are expected to be there, and we will generate masses for particles which are only mass-protected by the U(1)_X group. Once we have the symmetry strongly broken at the Planck scale, which is now expected, there will no longer be a sign of the conservation, and thus also no problem with the anomalies. The symmetry seems to be dynamically broken - not only spontaneously - because the effective Lagrangians representing the wormhole and other space-time foam effects really have to be interpreted as dynamical breaking. We must also expect that it is rather impossible to keep the U(1)_X-photon mass m light compared to M_V under such conditions. We conclude that, seriously taking wormholes into account in this way, it results that the Green-Schwarz anomaly cancellation scheme does not work in the 3+1 dimensional limit. What happens is that the very strong constraint-ensuring forces, due to very large b propagators, lead to the possibility of Coulomb fields around a wormhole entrance being modified with time. This modification possibility in turn allows the effective Lagrangian density terms corresponding to the absorption of the U(1)_X-charges into wormholes.

Conclusions

Anomaly cancellation by the Green-Schwarz mechanism in the case of a certain 3+1 dimensional limit of a higher dimensional string theory is questioned: We consider the gauge symmetry (restricted to the U(1)_X subgroup) that allegedly results from breaking a larger string theory gauge group using a field b, derived from the Kalb-Ramond field B_{MN}, that takes on a non-vanishing vacuum expectation value and thereby higgses the gauge field A_μ. This is manifested phenomenologically as an approximately conserved current without the usual triangle-summation anomaly requirement for avoiding gauge and mixed anomalies. This is referred to as Green-Schwarz anomaly cancellation because the special way of having anomaly cancellations for string theory states is inherited by the 3+1 dimensional limit. If we have supersymmetry and such a charge with Green-Schwarz anomaly cancellation (effectively in the 3+1 dimensions), then according to the calculations of Ref. [5] we have a Fayet-Iliopoulos D-term (16) which drives the dilaton field - essentially in correspondence with the effective string coupling constant g_s = φ² - to zero. That means that the theory becomes free and thus rather unrealistic. This is a severe trouble in itself for the models with a non-trivial anomaly cancellation mechanism. However, contrary to this argument, if we should assume that in some mysterious way it were possible to get a string theory after all with supersymmetry surviving and having a significant D-term and string coupling in spite of such Green-Schwarz cancellation, then the world sheet instantons would prevent the breaking strength of the surviving global charge conservation from going to zero.
It is a major point of the present article that this is actually not achievable: the world sheet instantons do violate the U(1)_X-charge conservation. In this way we could avoid the mystery of having a gauge symmetry broken by a spontaneous symmetry breaking which nevertheless does not break the current conservation. Provided that the world sheet instanton effects do not mysteriously cancel (which is, though, less safe to assume than naively expected), this mystery would disappear. The authors working with the Green-Schwarz anomaly cancelling charges (usually) have in mind supersymmetric models. If the world sheet instanton effects appeared even in supersymmetric theories, the Green-Schwarz anomaly cancellation would become less suspicious of being pretended to behave strangely from the general point of view. However, due to the Fayet-Iliopoulos D-term expected when we have such charges, the string coupling constant g_s may be driven to zero, and the whole effect of world sheet instantons would disappear. This, though, may not be realistic, because it brings the whole string theory to be free. If this kind of effect could work, causing a breaking of the global charge conservation and thus avoiding some of the strangeness, one may still seek (using the world sheet instantons) to declare a U(1)_X-charge needing the Green-Schwarz anomaly cancellation to have become less suspicious of behaving strangely from the general point of view. However, one may still declare suspicious the way in which the anomaly troubles are avoided in the energy range above the mass scale m of the gauge particle: the gauge fields are constrained never to take the configurations leading to anomalies! If one gets a breaking of the charge U(1)_X due to the wormholes (or world sheet instantons), one could imagine, as an interesting possibility, using this breaking instead of some spontaneous breaking induced from e.g. the Fayet-Iliopoulos D-term (as [9] proposes) to provide the soft breaking which is needed to use the charge U(1)_X as a mass-protecting charge implementing the large ratios of quark and lepton masses. We further expressed our worry and suspicion as to whether such an electrodynamics constrained by F_{μν}F̃^{μν} = 0 can really be considered realistic on general physics grounds, or whether it represents an unrealisable speculation concerning the relative orders of magnitude. However, we thought there are reasons to believe that the wormholes would give violation of the U(1)_X even above its Higgsing scale. That could mean that wormholes break the U(1)_X symmetry completely, undermining the use of F_{μν}F̃^{μν} = 0. The point is that the very strong coupling of the b field causes the Coulomb fields around the wormhole endings to be unstable.
Creating the Urban Farmer's Almanac with Citizen Science Data

Agriculture has long been a part of the urban landscape, from gardens to small scale farms. In recent decades, interest in producing food in cities has grown dramatically, with an estimated 30% of the global urban population engaged in some form of food production. Identifying and managing the insect biodiversity found on city farms is a complex task often requiring years of study and specialization, especially in urban landscapes, which have a complicated tapestry of fragmentation, diversity, pollution, and introduced species. Supporting urban growers with relevant data informs insect management decision-making for both growers and their neighbors, yet this information can be difficult to come by. In this study, we introduced several web-based citizen science programs that can connect growers with useful data products and people to help with the who, what, where, and when of urban insects. Combining the power of citizen science volunteers with the efforts of urban farmers can result in a clearer picture of the diversity and ecosystem services in play, limited insecticide use, and enhanced non-chemical alternatives. Connecting urban farming practices with citizen science programs also demonstrates the ecosystem value of urban agriculture and engages more citizens with the topics of food production, security, and justice in their communities.

Introduction

Urbanization is a major driver of land use change worldwide [1]. By 2030, more than 60% of the global population will live in urban areas [2] and transform many suburban and rural agricultural systems into urban environments [3]. Urban agriculture is defined as the production of crop and livestock goods within cities and towns [4], generally integrated into the local urban economic and ecological system [5]. It has emerged as a tool to address complex social issues, such as environmental justice, food security, and income inequity [6]. Urban agriculture provides resources and shelter for urban animals beyond humans, enhances biodiversity, and improves ecosystem services, making cities more resilient and resistant to environmental change [7]. Cities are primarily viewed in terms of their political value (i.e., where the voters are) rather than for their ecological value (i.e., where food, shelter, water, and mates are) [8]. Urban environments are predominantly viewed by both scientists and the general public as biodiversity deserts responsible for high rates of extinction [9,10] and reduced abundance, especially of native species [11,12]. Yet, in many urban areas, the patchwork of formal and informal green spaces provides viable and important habitat for a diverse selection of plants and animals [13,14]. Managed green spaces, such as farms and gardens, provide space and resources critical for the long-term preservation of urban wildlife, such as insects and birds, and enhance local ecosystem services, improving water, air, and soil quality [15,16]. Urban farms and gardens also present important opportunities to connect urban dwellers with nature and grow their appreciation for their non-human animal and plant neighbors [17]. Insects play many roles in agriculture systems; they are categorized primarily as pests, beneficials, and pollinators. Pests are often defined as insects that harm yields and/or the quality of crops. Beneficials include predators, parasitoids, and scavengers, which indirectly benefit crops by consuming pests or reducing waste.
Pollinators fertilize crops by moving pollen from one flower to another incidentally, as these insects collect pollen and nectar for their own consumption. A single insect species may fill multiple categories, and designations may change throughout various life stages. For example, a hawk moth may be a valuable pollinator as an adult, but a voracious pest consuming a vast amount of crop leaves as a larva [18]. Identifying and monitoring the insect community in agroecosystems is a critical component of success in farming, especially urban farming, where farms are hotspots for insect biodiversity [8]. Agroecosystems, including urban farms, benefit from insect diversity and phenology data to inform management decisions. By developing shared knowledge of the presence of pests, urban growers can make informed decisions and assess the efficacy of the treatments they apply. Correct insect identification is critical for determining which control actions, if any, should be taken to minimize damage from insect pests. Phenological information indicating when a pest is anticipated to be found in an important life stage can further enhance management planning [19]. This phenological perspective also supports management decisions such as applying a pesticide when it is least likely to impact a pollinator [20,21] or planting flowering plant species that bloom during a gap in blooming, when pollinators have reduced resources [20,22]. Data and tools offered by citizen science programs can support urban agroecosystems in all of these ways.

Urban Insect Management Presents Unique Challenges and Opportunities

Urban farming presents both challenges and opportunities related to the environmental setting and the demands of meeting multiple social and educational goals. The challenges include difficulty accessing land, small and fragmented plots close to residences and businesses, soil contamination, and insecure land tenure [23,24]. Insect management is challenging in urban settings from a social standpoint, given the varying values and expertise levels of neighbors or community gardeners. In addition, controlling urban insect pests requires limited pesticide use to ensure the health of humans and animals. Excessive application of pesticides can degrade water, air, and soil quality, create pesticide-resistant insect populations, and be economically costly to the grower [7], and many broad-spectrum management interventions for pests may have undesired non-target effects on pollinators and beneficials [25]. For example, organophosphate application increases slug pest abundance and crop loss because of the decline in predaceous beetles [25]. Finally, given the reduced availability of native plants, undergrowth, and connectivity among sites in urban landscapes, beneficial insect populations may be harder to attract and sustain in urban areas, relative to rural agroecosystems [26]. The opportunities include community building, awareness of food and agriculture, and access to healthy foods [27,28]. Similarly, urban agriculture's popularity stems from its many benefits for the individual, community, and city as a whole [6], such as improved physical activity and mental health [29], nutrition [30], community engagement [31], and job training [32]. Urban agriculture has expanded by >30% in the past 30 years, especially in under-served communities [33].
Urban agriculture can be productive, providing an estimated 15%-20% of the global food supply [34,35], and cities can provide good infrastructure, access to labor, and low transport costs for local food distribution [34]. Although public and scientific interest in urban agriculture has grown dramatically in the past two decades, there are still significant challenges for integrating urban farming into the complex agriculture support system in the United States and beyond [13]. The United States Cooperative Extension System was developed when most of the population lived and farmed in rural environments [36], and agricultural research has been and continues to be done primarily on rural farms [7]. Recent efforts by Cooperative Extension have incorporated urban engagement and farming with success [37]; however, many of these efforts have focused on nutrition, food literacy, and youth leadership training (e.g., 4-H), all of which are important community-driven issues for urban stakeholders, but not urban farming pest management best practices [38]. Other countries with different systems of agriculture support (e.g., Canada [39], Tanzania [40]) also struggle with the pace of urban farm implementation without a concomitant investment in urban agriculture insect management research and best practices. However complicated, urban farms provide important green space and food security, to the benefit of both humans [27] and insects [8]. Insect management resources and knowledge of best practices are fewer and less developed for urban growers than those for rural growers [13]. A critical tool for reducing the impact of destructive insects in agricultural systems is integrated pest management (IPM). IPM is a decision framework for the selection and use of pest control tactics, coordinated into an overall management strategy based on cost/benefit analyses that take into account the interests of and impacts on producers, society, and the environment [41]. The implementation and adaptation of IPM in the socio-ecological context of urban farming could provide a powerful framework for leveraging the strengths and mitigating the challenges. This approach reduces the frequency and intensity of pest infestation by eliminating disruptive pest control methods and enhancing ecosystem services that contribute to ecological resilience. In an IPM framework, agriculture systems are managed as living systems [42]. Essential to this framework is documenting where and when insect pests, beneficials, and pollinators are present on the farm and in the surrounding area [42]; such data are increasingly available through a number of citizen science resources.

Urban Insect Management Can Be Facilitated by Citizen Science

Citizen science relies on the participation of non-professionals in the practices of science, from study design to data collection [43]. Most citizen science data are collected in urban areas (e.g., [44,45]), and both urban farms and citizen science are conduits for community building and civic participation. Additionally, citizen science aligns with many social media outlets to promote a farm to a variety of stakeholders and potential customers. A number of powerful tools and platforms exist to build the connections and collect the data required for successful agroecosystem management in the urban socio-ecological context. While there are many citizen science practices, we focused on web-based programs that focus on biodiversity and serve to identify species, store data, and synthesize patterns of diversity.
In this section, we highlighted three web-based tools providing measures of insect diversity and phenology: eButterfly [46], iNaturalist [47], and Nature's Notebook [48] (Figure 1). eButterfly is designed for butterfly enthusiasts who photograph and checklist butterflies for recreation, and it covers North American species. iNaturalist is designed for biodiversity enthusiasts, those who photograph and observe all organisms, including insects, across the globe. Nature's Notebook, the phenology observing system operated by the USA National Phenology Network, is designed for backyard enthusiasts who wish to track the seasonality of organisms, such as when they are emerging, leafing out, or flowering in the United States. Nature's Notebook has a tailored list of plants and animals, particularly amenable for observing phenological changes. Using these applications and their various features can be facilitated by online and face-to-face trainings. Urban growers can use these platforms to (1) support insect identification, (2) see what insect species are in the surrounding area, (3) connect with local insect enthusiasts and experts, (4) store insect data from the farm in one location, (5) contribute to the shared knowledge about urban insects, (6) predict when insects will be present and abundant, relative to plant development, to guide management decisions, (7) predict when insect pests will be most vulnerable to treatment, and (8) demonstrate the value of urban farms as insect habitat (Figure 2). The exact approach will likely vary by farm location, crops, and mission; however, we feel the greatest potential strength of these programs is to help growers to increase the presence of insect pollinators and beneficials, through habitat on the farm and in the community, while simultaneously decreasing insect pests.

Figure 2. Value schematic of citizen science web-platforms for urban growers. Collaborative citizen science programs add value to urban farm mission and management by offering connection to experts, customers, and community members. Here, we outlined the main ways that urban farms can use citizen science platforms, across two axes: immediate and long-term usage (horizontal) and individual and communal actions among users (vertical).

Citizen Science Provides and Organizes Identifications of Insects in the Farm, Neighborhood, and City

Correctly identifying insect species can be an overwhelming process for the uninitiated, as insects are the most diverse group of animals on the planet, with over 800,000 described species. Traditionally, resources to support identifying insects in the United States included Cooperative Extension and entomological collections. Cooperative Extension provides insect-related expertise mainly to rural growers, though it has recently expanded into urban farming (e.g., [7]). Regional entomology collections located at universities, museums, and agriculture stations also offer identification opportunities [49]. Neither of these resources can provide instant identification feedback, due to the multiple other commitments on the institution staff's time, such as research, instruction, and/or outreach [49]. Furthermore, these resources may not hold expertise relevant to urban landscapes [36]. The human-computer networks offered by web-based citizen science projects enhance opportunities for urban growers to identify insect species.
The iNaturalist web-platform and smartphone application [47] and the related Seek smartphone application [50] are the most versatile digital tools available for this purpose. Both of these applications employ artificial intelligence algorithms to identify a plant or animal from a photo. A grower can upload a photograph to their iNaturalist account and suggestions for the species are provided. In the case of the Seek application, a grower can simply point their smartphone camera at the plant or animal and receive a suggested identification. The algorithm originally developed for classifying organisms on iNaturalist is highly accurate, offering the correct identification among the top 5 suggestions between 87.5% and 88.2% of the time [51]. Complementing machine-learning algorithm identifications, iNaturalist relies on the large citizen science community, including trained experts, to provide identification recommendations linking to other observations and photographs. Half of all records of unidentified species that are uploaded and crowdsourced are identified in less than two days.
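Because iNaturalist exposes its observation data through a public API (api.inaturalist.org), growers with modest scripting skills can also pull records near their farm programmatically. The following Python sketch is illustrative only: the coordinates and date are placeholders, and the taxon id used for the class Insecta (47158) is our assumption and should be verified on the iNaturalist site before relying on it.

import requests

# Pull recent research-grade insect observations within a few km of a
# hypothetical farm from iNaturalist's public v1 API.
resp = requests.get(
    "https://api.inaturalist.org/v1/observations",
    params={
        "taxon_id": 47158,            # Insecta (assumed id; verify on iNaturalist)
        "lat": 42.35, "lng": -83.05,  # hypothetical farm location
        "radius": 5,                  # search radius in km
        "quality_grade": "research",  # community-verified records only
        "d1": "2019-06-01",           # start of the date window of interest
        "per_page": 50,
    },
    timeout=30,
)
resp.raise_for_status()
for obs in resp.json()["results"]:
    # Each record carries the observation date and the identified taxon.
    taxon = obs.get("taxon") or {}
    print(obs.get("observed_on"), taxon.get("name"))

A listing like this, run weekly, gives a farm a running log of which pests, beneficials, and pollinators neighbors are reporting nearby.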
eButterfly identifies butterflies through a slightly different approach of human-computer interaction [46]. In the eButterfly app, a series of filters based on current species distribution maps are coupled with regional experts to flag whether a species listed in a checklist is expected at a specific location. Regional experts work with citizen scientists to identify unexpected species from photographs and descriptions. While species identification is not a primary feature of the Nature's Notebook smartphone application, materials to support species identification are offered on the Nature's Notebook website. In addition to serving as data collection and storage systems, these citizen science programs also provide several easy-to-use dashboards for managing and visualizing data. Such visual representations provide an accessible means of gauging insect and plant phenology at a local scale. iNaturalist offers the ability to store and filter all insect data by location and date [47]. These data can be displayed in a variety of ways, providing information important to management, such as emergence time, diversity, and abundance. Data from a single farm can be aggregated with other local observations to form a more complete picture of the surrounding area. As in iNaturalist, data housed in eButterfly can be filtered by location and date [46]. eButterfly data are presence-absence data of butterfly species (pollinators and pests), while iNaturalist data are presence-only data of all insects (beneficials, pollinators, and pests), providing different kinds of data for different kinds of information and decision making [46]. For both iNaturalist and eButterfly, a grower can have their own account to record observation data and photos across years at their farm and in the community. Nature's Notebook displays data on a focal insect or plant species and is particularly good for documenting an organism's life cycle stage status over the course of a season.

Citizen Science Provides Information on When Insects Will Be Active and Abundant

Phenology, or the seasonality of organisms, has long been viewed as a tool to understand the best time to plant and harvest crops, as well as to anticipate when to manage for insect pests and facilitate pollinator and beneficial insect health. In many systems, environmental conditions, such as the accumulation of heat units (i.e., growing degree days), can be utilized to predict when species of interest will undergo phenological transitions, such as the hatching of caterpillars or the emergence of adult leaf beetles (e.g., [52]). Resources such as the USA-NPN Pheno Forecasts offer daily maps and forecasts up to six days in advance [53], which can be used by urban growers to anticipate the activity of insects (Figure 3). iNaturalist and eButterfly provide seasonality estimates based on community-contributed observations and allow users to indicate the life stage of organisms they observe, further guiding proactive management practices. Additionally, participants in the Nature's Notebook program can provide their own observations of insect activity at their location by participating in the Pest Patrol campaign [54] or Nectar Connector campaign [55], allowing for the verification and improvement of these predictive models. Ultimately, more data on a greater diversity of insect taxa will lead to more accurate models that can account for variations in climate and geography and lead to improved decision-making for urban growers.
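The growing-degree-day logic behind such forecasts is simple enough to sketch. The following minimal Python example uses the common averaging method; the base temperature of 10 °C and the 35-degree-day event threshold are illustrative assumptions only - real values are species-specific and should be taken from the pest's published biology or from USA-NPN materials.

# Accumulate growing degree days (GDD) from daily temperature records.
# GDD for one day = max(0, (Tmax + Tmin) / 2 - Tbase). A phenological event
# (e.g., egg hatch) is forecast when the running total crosses a published
# species-specific threshold.
def accumulated_gdd(daily_min_max, base_temp_c=10.0):
    total = 0.0
    for t_min, t_max in daily_min_max:
        total += max(0.0, (t_max + t_min) / 2.0 - base_temp_c)
    return total

# Example: a week of 10-20 C days adds (20+10)/2 - 10 = 5 GDD per day.
week = [(10.0, 20.0)] * 7
print(accumulated_gdd(week))                  # 35.0 degree-days
print(accumulated_gdd(week) >= 35.0)          # event threshold reached (illustrative)

Because urban heat islands accumulate degree days faster than surrounding rural areas, a grower running this calculation on local temperatures may see pest events arrive earlier than regional forecasts suggest.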
By harnessing the power of citizen science programs, growers can be empowered within their communities to better establish the spatial and temporal patterns of insects across urban and suburban landscapes, at scales not previously possible for scientific researchers. By participating in citizen science programs, such as those described above, individuals join a collective effort to track both beneficial and harmful insects, which in turn helps inform best practices for enhancing growing environments. Early work indicates that insect pest species tend to be more abundant in urban areas due to warmer conditions [56], whereas some beneficial insects, such as wild bees, decline in urban areas [57]. Likewise, species composition tends to shift along urban-to-rural gradients (e.g., [58,59]). In addition, urban environments tend to have earlier and longer growing seasons, impacting the timing and magnitude of the interactions between plants, pollinators, and herbivores [60]. Such patterns offer glimpses of the predicted impacts of climate change on communities and ecosystems [61], with more thoughtful urban landscape design as an approach to buffer against some of these changes [62,63].

Digital Collaboration Creates Useful Information for Everyone

The three citizen science programs described here amplify the collaborative nature of urban farming using online resources. Urban growers often engage with a large community to become sustainable economic enterprises and to fulfill other missions, such as food literacy education [30] and job training [32]. eButterfly, iNaturalist, and Nature's Notebook allow urban growers to expand their community to a larger audience, connect with experts in insect identification and management, evaluate how local changes fit into a larger regional context, and help sustainably manage urban green spaces as not only viable businesses but also centers for biodiversity and green space. These citizen science programs are available in a variety of languages (e.g., English, French, Spanish), depending on the web-platform and smartphone application. In general, iNaturalist has the most languages, with over ten. While most citizen science programs were originally designed to operate independently, there is increasing integration among the various systems. Part of this integration is due to advances in application programming interfaces (APIs) that afford easier sharing of data. For example, iNaturalist sends "research grade" observations to the Global Biodiversity Information Facility [64] on a weekly basis. Third-party citizen science programs are using multiple platforms for monitoring diversity: the Appalachian Mountain Club employs both the iNaturalist and Nature's Notebook platforms for data collection. Coordinated efforts, including scavenger hunts and "BioBlitzes", improve biodiversity knowledge of urban greenspaces, and urban farms could be easily interwoven into this network through face-to-face and social media connections. High-quality citizen science programs rely on feedback from participants; this includes feedback on the web-based platforms discussed here. Urban growers should not hesitate to contact citizen science directors to suggest new features for the web-platforms and smartphone applications. Indeed, insect diversity dynamics unique to urban agroecosystems may necessitate qualitative or quantitative changes to the way data are collected.
Sometimes, these requests are very easy for web designers to incorporate into their updates or new versions, while more complex changes may require more time or be infeasible. However, these conversations improve the products and foster a sense of community between scientists, programmers, and participants.
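As a concrete illustration of the interoperability mentioned above, once iNaturalist's research-grade records reach the Global Biodiversity Information Facility they become available through GBIF's public occurrence API (api.gbif.org). The sketch below is illustrative only: the coordinate ranges are placeholders for a hypothetical farm neighborhood, and the GBIF backbone key used for Insecta (216) is our assumption, to be verified on gbif.org.

import requests

# Query GBIF's public occurrence search for insect records in a small
# latitude/longitude band around a hypothetical urban farm.
resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={
        "taxonKey": 216,                      # Insecta (assumed backbone key)
        "decimalLatitude": "42.30,42.40",     # placeholder latitude band
        "decimalLongitude": "-83.10,-83.00",  # placeholder longitude band
        "limit": 50,
    },
    timeout=30,
)
resp.raise_for_status()
for rec in resp.json()["results"]:
    # Each occurrence carries a species name and an observation date, among
    # many other fields (coordinates, dataset of origin, license).
    print(rec.get("species"), rec.get("eventDate"))

Because GBIF aggregates across platforms, a query like this can combine iNaturalist records with museum specimens and other monitoring programs in one view of the neighborhood's insect fauna.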
Conclusions and Recommendations

Recent trends show an increase of agricultural efforts within urban areas, in both developed and developing nations [13]. Urban farming has significant potential to enhance local communities in a variety of ways beyond food production [32]. Integrating urban agroecosystems into the local greenspace matrix can support beneficial insect species, providing opportunities for people to appreciate and connect with insects. Managing urban farms to increase insect pollinators and beneficials, while controlling the potential damage caused by insect pests, will benefit from engagement with local biodiversity enthusiasts. Capitalizing on citizen science efforts will greatly improve safe and effective insect management practices on urban farms. As part of an urban integrated pest management approach, we recommend that growers incorporate citizen science web-platforms such as eButterfly, iNaturalist, and Nature's Notebook into their farming approach. These tools provide growers with a digital toolkit for promoting pollinators and beneficial insects, decreasing pest species, connecting with entomology experts, and marking their farms as urban biodiversity hotspots.